Toward a Windows Native Client (WNC)
Meghan McClelland
LAD2013

Overview
- At LUG 2013, strong interest in a WNC was expressed.
- Xyratex acquired the client IP from Oracle.
- The Lustre code has changed since the last updates, but the material is still a useful starting point.
- A funding proposal for an evaluation is being submitted for consideration by OpenSFS.
- Items in RED text in this presentation are requests for feedback.
Lustre is a registered trademark of Xyratex Technology Ltd. Windows is a registered trademark of Microsoft Corporation in the United States and other countries.

What is done today
- CIFS or pCIFS gateway
- NFS gateway
- Windows run in a Linux hypervisor
Maybe OK for staging small data. Problems:
- limited number of gateways (gateway bottleneck)
- coherency between gateways
- data flow through the gateways
- additional layers of software

Current State
There is an existing prototype! Implemented features and functions:
- Mounting and unmounting
- Basic metadata operations: ls, create, rename, open, delete, attributes
- File I/O: direct (cached and mmap are close, but need work)
- LNET
- Byte-range locks and flock
- Directory change notification (client only), somewhat like fcntl(F_NOTIFY) but with more detail
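
The byte-range locks above have to bridge Windows LockFileEx semantics and the POSIX advisory locks a Lustre client exposes. As a minimal sketch of the POSIX side (illustrative only, not the prototype's code), Python's `fcntl` wrapper shows the semantics a WNC must preserve: an overlapping range held by another owner is denied, a disjoint range is granted.

```python
import fcntl
import os
import tempfile

def try_range_lock(path, start, length):
    """Try a non-blocking exclusive byte-range lock; return the fd if granted, else None."""
    fd = os.open(path, os.O_RDWR)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, length, start)
        return fd  # caller must keep the fd open to hold the lock
    except OSError:
        os.close(fd)
        return None

def demo_range_locks(path):
    """Parent locks bytes [0, 10); a forked child sees the conflict but can lock [10, 20)."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, 20)
    fcntl.lockf(fd, fcntl.LOCK_EX, 10, 0)  # exclusive lock on bytes 0..9
    pid = os.fork()
    if pid == 0:
        # The child is a different lock owner, so the conflict is visible to it.
        overlap_denied = try_range_lock(path, 0, 10) is None
        disjoint_granted = try_range_lock(path, 10, 10) is not None
        os._exit(0 if (overlap_denied and disjoint_granted) else 1)
    _, status = os.waitpid(pid, 0)
    os.close(fd)
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
```

POSIX locks are per-process-owner, which is why the demonstration needs a fork: a second lock from the same owner would simply merge rather than conflict.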

Known Issues
- mmap
- lock implementation issues
- emulated page layer
- compiler issues
- testing: need common Windows software

TODO?
- Branch code sync
- read-ahead
- CIFS re-sharing
- xattr and security support (ACLs?)
- Links / symlinks support
- Params-tree port
- lctl and lfs porting
- documentation
- Codepage support (UTF-8, GB2312, ...)
- testing: partial acc-small port (at least sanity)? Or common Windows software?
- GUI tools
- mmap conflict detection
- cached I/O
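
The codepage item boils down to translating between the UTF-16 filenames Windows applications supply and whatever byte encoding (UTF-8, GB2312, ...) is used on the server side. A hypothetical sketch of such a translation layer (the function names are illustrative, not from the prototype):

```python
def win_to_server(name_utf16le: bytes, server_codepage: str = "utf-8") -> bytes:
    """Convert a UTF-16LE Windows filename into the server-side byte encoding."""
    return name_utf16le.decode("utf-16-le").encode(server_codepage)

def server_to_win(name: bytes, server_codepage: str = "utf-8") -> bytes:
    """Convert a server-side filename back to UTF-16LE for Windows callers."""
    return name.decode(server_codepage).encode("utf-16-le")
```

The real work in a native client is doing this at the VFS boundary for every name-bearing operation, and deciding how to surface names that do not round-trip through the configured codepage.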

Risks
- Caching
- mmap hooking
- Lustre filesystem constantly changing
- Code landing
- Licensing

Solution Alternatives
- Run Windows in a hypervisor on Linux
- Export via SMB
- Export via NFS
- pCIFS directly to the OSS

Feedback! Welcome and encouraged, especially for the RED items. If this is a project you'd like to see funded, let your EOFS and OpenSFS representatives know!

E10 Update

E10 Core Components

Implementation in Progress (prototype builds)
- Schemas and distributed containers: POSIX schema being worked on
- Use of the Haskell programming language
- Storage interface and implementation: in-memory for debugging/testing, for now
- Container manager implementation: namespaces for distributed containers
Contact: Jonathan Jouty / ParSci

High Availability
- Highly available architecture for infrastructure nodes
- Paxos-type algorithms being worked out
- Achieving consensus among nodes
- Implementation in progress (Haskell)
Contact: Mathieu Boespflug / ParSci

File system interfaces for Exascale
- Clovis interface being discussed
- Object store interface
- Key-value based metadata
- Concept of transactions
- A full interface: complete storage interface for E10
- Concept of resource management and layouts
Contact: Nikita Danilov / Xyratex
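
The consensus step can be illustrated with a single-decree Paxos round. The E10 implementation is in Haskell; the sketch below is an in-memory Python illustration of the same exchange (prepare/promise, then accept), not the project's code:

```python
class Acceptor:
    """One acceptor's state: highest ballot promised and last (ballot, value) accepted."""
    def __init__(self):
        self.promised = -1
        self.accepted = None  # (ballot, value) or None

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False


def propose(acceptors, ballot, value):
    """Run one Paxos round; return the chosen value, or None if the round failed."""
    majority = len(acceptors) // 2 + 1
    grants = [acc for ok, acc in (a.prepare(ballot) for a in acceptors) if ok]
    if len(grants) < majority:
        return None
    prior = [acc for acc in grants if acc is not None]
    if prior:
        value = max(prior)[1]  # adopt the value of the highest ballot already accepted
    acks = sum(a.accept(ballot, value) for a in acceptors)
    return value if acks >= majority else None
```

The key safety property for highly available infrastructure nodes is visible in the `prior` branch: once a value is chosen by a majority, every later proposal is forced to re-propose that same value.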

Observability: Always-On Infrastructure
- Telemetry data at Exascale needs specialised analytics solutions (100s of TBs/day of telemetry data for reasonably large clusters)
- Specialised anomaly detection and root cause analysis methods
- What-if predictive capability at Exascale:
  - What if we provide a flash tier?
  - What if we introduce PCM (Phase Change Memory) into the mix?
  - What happens as we scale this architecture out?
- Learning engine: learns from the above components and helps the infrastructure adapt
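
To give a flavor of what anomaly detection over a telemetry stream means at its very simplest, here is a rolling z-score detector: flag a sample that sits far outside the recent window's distribution. This is purely illustrative, not an E10 component; real Exascale telemetry needs far more sophisticated methods.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flag samples more than `k` standard deviations from a rolling mean."""
    def __init__(self, window=100, k=3.0):
        self.window = deque(maxlen=window)
        self.k = k

    def observe(self, x):
        # Only judge once we have a minimal baseline; always record the sample.
        if len(self.window) >= 10:
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            anomalous = var > 0 and abs(x - mean) > self.k * math.sqrt(var)
        else:
            anomalous = False
        self.window.append(x)
        return anomalous
```

At 100s of TBs/day the interesting engineering is not the statistic itself but doing this incrementally, near the data, without shipping raw telemetry off the cluster.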

Current Activity for E10 through the SIOX project (led by the University of Hamburg)
- The project involves data collection and analysis of activity patterns and performance metrics, targeted towards Exascale I/O
- SIOX aims to provide fine-grained system performance data
- SIOX aims to locate and diagnose problems
- SIOX learns optimizations to be fed back into the Exascale I/O system
Current status:
- High-level architecture available for the different subcomponents of SIOX
- Early prototypes under development utilizing basic SIOX libraries
Contact: Julian Kunkel at the University of Hamburg
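
Instrumentation of the kind SIOX performs starts by wrapping each I/O call to record what happened and how long it took. A toy sketch of that idea (not the SIOX API; names here are invented for illustration):

```python
import functools
import time

def traced(log):
    """Decorator: append (call_name, duration_s, result_bytes) to `log` per call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            t0 = time.perf_counter()
            result = fn(*args, **kwargs)
            size = len(result) if isinstance(result, (bytes, bytearray)) else 0
            log.append((fn.__name__, time.perf_counter() - t0, size))
            return result
        return inner
    return wrap
```

The collected (operation, latency, size) tuples are exactly the raw material that activity-pattern analysis and learned optimizations consume.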

Current Activity for E10 through the Exascale I/O Simulation Framework (led by Xyratex)
- The project involves development of an Exascale I/O simulation engine
- Operation log editors for editing Exascale I/O workloads
- Simulation drivers that provide models of I/O hardware components at Exascale
Current status:
- High-level architecture available for the Exascale I/O Simulation Engine
- Utilizes queuing frameworks
Contact: Sai Narasimhamurthy at Xyratex

Thank You Meghan_McClelland@xyratex.com