AFS + Object Storage


AFS + Object Storage
Hartmut Reuter, RZG, Germany (reuter@rzg.mpg.de)
Rainer Többicke, CERN, Switzerland
Andrei Maslennikov, Ludovico Giammarino, Roberto Belloni, CASPUR, Italy

The Bad News: our good friend MR-AFS has passed away. We were the last site using it; after 14 years, the last server was shut down on March 8.

Now the Good News: OpenAFS + Object Storage has replaced MR-AFS at RZG. It is in full production and offers features similar to those of MR-AFS:
- It brings HSM (Hierarchical Storage Management) to OpenAFS.
- It makes AFS scale better for high throughput.
- It offers better performance for high-speed clients and large files.
- It has some extra goodies to analyze volumes and files and to make administration of the cell easier.
- It behaves like classical OpenAFS for volumes without the osd flag.

Overview
- What is Object Storage?
- OpenAFS + Object Storage: an R&D project sponsored by CERN and ENEA
- The implementation
- Results from the March 2007 test campaign at CERN
- The current version: OpenAFS-1.4.7-osd
- New commands and subcommands related to object storage
- Backup strategies for OpenAFS + Object Storage
- Hierarchical Storage Management (HSM) for AFS
- Other goodies
- The future

What is object storage? Object storage systems are distributed filesystems which store data in object storage devices (OSDs) and keep metadata on metadata servers. Access from a client to a file in object storage consists of the following steps:
- directory lookup of the file
- an RPC to the metadata server, which checks permissions and returns a special encrypted handle for each object the file consists of
- RPCs to the OSDs to read or write objects, using the handles obtained from the metadata server
The OSD can decrypt the handle by use of a secret shared between the OSD and the metadata server. The advantage of this technique is that the OSD doesn't need any knowledge about users and access rights. Existing object storage systems claim to follow the SCSI T10 standard; SCSI, because there was hope to integrate the OSD logic into the disk firmware.
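The handle mechanism just described can be sketched in a few lines. This is a hypothetical illustration, not the actual AFS/OSD code: it uses an HMAC signature instead of real encryption, and the secret and all names are invented.

```python
import hashlib
import hmac

# Assumed shared secret between metadata server and OSD (invented for the sketch).
SHARED_SECRET = b"osd-metadata-server-secret"

def metadata_server_issue_handle(object_id: str, allowed_op: str) -> bytes:
    """Step 2: after checking permissions, sign a handle for one object."""
    payload = f"{object_id}:{allowed_op}".encode()
    return payload + b"|" + hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()

def osd_verify_handle(handle: bytes, object_id: str, op: str) -> bool:
    """Step 3: the OSD re-computes the MAC with the shared secret.

    No user database is needed on the OSD: a valid handle is the proof
    that the metadata server already checked the access rights."""
    payload, _, mac = handle.partition(b"|")
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    return payload == f"{object_id}:{op}".encode() and hmac.compare_digest(mac, expected)

handle = metadata_server_issue_handle("1108560417.96.161157.0", "read")
assert osd_verify_handle(handle, "1108560417.96.161157.0", "read")
assert not osd_verify_handle(handle, "1108560417.96.161157.0", "write")
```

A handle issued for "read" cannot be replayed for "write", which is the property the slide relies on.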

Object Storage Systems. The most popular object storage systems are Lustre and Panasas. Compared to parallel filesystems such as GPFS, the advantages of object storage are:
- clients cannot corrupt the filesystem
- permission checks are done on the metadata server, not on the client
Disadvantages of today's object storage systems:
- limited number of platforms (Lustre: only Linux; Panasas: ?)
- no world-wide access to data in object storage (only a cluster filesystem)

The T10 standard. T10, the Technical Committee on SCSI Storage Interfaces, defines standards for SCSI commands and devices. Two subgroups are working on object storage:
- Object-Based Storage Device Commands (OSD)
- Object-Based Storage Devices - 2 (OSD-2)
Some drafts of the OSD standard have been published, which we read and analyzed. As far as possible we tried to follow these standards:
- For each object: a 64-bit object-id (we use vnode, uniquifier, and tag, as in the NAMEI inode) and a 64-bit partition-id (we use the volume-id of the RW-volume).
- A data structure "cdb" is used in SCSI commands to OSDs; we use a sub-structure of it to transport the encrypted object-handle.
It turned out that the T10 standard is not rich enough to support full AFS semantics:
- link counts for objects, needed for volume replication, are missing
- the security model is too weak for insecure networks

Why AFS and Object Storage can easily go together. The AFS infrastructure already has many components necessary for good object storage:
- central user authentication and registration
- the AFS fileserver could act as OSD metadata server, allowing for better scalability than in other object storage systems
- OSDs for AFS could use the rx protocol and NAMEI partitions, both components available for all platforms
Use of OSDs can
- remove 'hot spots' in very active volumes by distributing data over many OSDs
- increase peak performance by striping of files (HPC environments)
- allow for RW-replication by use of mirrored objects
- offer HSM functionality in AFS
Implementing object storage in the AFS environment allows object storage to be used on any platform (not just Linux) and world-wide.

The R&D Project OpenAFS + Object Storage. In 2004 Rainer Többicke from CERN implemented a first version of AFS with object storage as a proof of concept. It was tested at the CASPUR StorageLab in Rome. However, Rainer couldn't find the time to develop his idea any further. In 2005 Andrei Maslennikov from CASPUR managed to bring the interested institutions and persons together to form a real project:
- Design decisions were made at a meeting at CERN.
- Funds were raised from CERN and ENEA to hire two system programmers to work at CASPUR.
- Most of the development was actually done at RZG, because that is where the most experience with the AFS source was.
In 2007 the official project ended with a week of tests at CERN. Since then RZG has continued to implement everything that was needed to replace MR-AFS.

Implementation: The Components
- rxosd: the object storage device (OSD) used for AFS; that's where the data are stored. Works on NAMEI partitions. Its RPC interface is used by the client, fileserver, volserver, archival rxosd, and the osd command.
- osddb: a ubik database (like the vldb) describing OSDs and the policies to be used in different AFS volumes for allocation of files in object storage.
- AFS fileserver, acting as metadata server: metadata are stored in a new volume special file; there are also extensions to the volserver and salvager.
- AFS client: has been restructured to allow for multiple protocols; got the ability to store and fetch data directly from OSDs.
- Legacy interface: to support old clients and to allow moving volumes which use object storage back to classic fileservers.

How does it work? In struct AFSFetchStatus the unused SyncCounter is replaced by Protocol. Protocol is set to RX_OSD when the file is in object storage. This information is copied into vcache->protocol.
- When such a file is going to be fetched from the server, the client does a RXAFS_GetOSDlocation RPC to the fileserver. With the information returned, the client can do a RXOSD_read RPC to the OSD.
- Before a new file is stored for the first time, the client does a RXAFS_Policy RPC. If the volume has the osd flag set, the fileserver tries to find an appropriate OSD. Sophisticated policies are not yet implemented; therefore the fileserver decides based only on the file size. The fileserver deletes the inode and allocates an object on an OSD instead. When RXAFS_Policy has returned protocol == RX_OSD, the client calls RXAFS_GetOSDlocation and then uses RXOSD_write to store the data.

Example: reading a file in OSD. [Diagram: client, fileserver, osddb, and several OSDs, with numbered arrows 1-3.] The fileserver gets a list of OSDs from the osddb (1). The client knows from status information that the file he wants to read is in object storage. Therefore he does an RPC (2) to the fileserver to get its location and the permission to read it. The fileserver returns an encrypted handle to the client. With this handle the client gets the data from the OSD (3).

Example: writing a file into OSD. [Diagram: client, fileserver, osddb, and several OSDs, with numbered arrows 1-6.] The fileserver gets a list of OSDs from the osddb (1). The client asks the fileserver (2) where to store the file he has in his cache. The fileserver decides, following his policies, to store it in object storage and chooses an OSD. He allocates an object on the OSD (3). The client asks for permission and location of the object (4), getting an encrypted handle which he uses to store the data in the OSD (5). Finally afs_storemini informs the fileserver of the actual length of the file (6).
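The two RPC sequences above can be summarized in a small simulation. The RXAFS_* and RXOSD_* names are taken from the slides; the stub rpc() only records the call order, and the last call of the write flow is labeled after the client routine afs_storemini because the slide does not name that RPC.

```python
# Record of (target, rpc_name) pairs, in call order.
trace = []

def rpc(target, name):
    """Stub RPC: record the call and return dummy location data."""
    trace.append((target, name))
    return {"osd": "osd1", "handle": b"encrypted-handle"}

def read_file_in_osd():
    loc = rpc("fileserver", "RXAFS_GetOSDlocation")  # (2) location + handle
    rpc(loc["osd"], "RXOSD_read")                    # (3) data from the OSD

def write_file_into_osd():
    rpc("fileserver", "RXAFS_Policy")                # (2) fileserver picks an OSD
    loc = rpc("fileserver", "RXAFS_GetOSDlocation")  # (4) permission + handle
    rpc(loc["osd"], "RXOSD_write")                   # (5) store the data
    rpc("fileserver", "afs_storemini")               # (6) report the final length

read_file_in_osd()
write_file_into_osd()
assert [name for _, name in trace] == [
    "RXAFS_GetOSDlocation", "RXOSD_read",
    "RXAFS_Policy", "RXAFS_GetOSDlocation", "RXOSD_write", "afs_storemini",
]
```

Note that in both flows the data bytes never pass through the fileserver; it only issues handles and records metadata.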

Describing a file: Objects. Data are stored in objects inside OSDs. Simplest case: one file == one object.

Describing a file: Objects. The data of a file could be stored in multiple objects, allowing for
- data striping (up to 8 stripes, each in a separate OSD)
- data mirroring (up to 8 copies, each in a separate OSD)
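Striping of this kind is just arithmetic on file offsets. A sketch under the assumption of plain round-robin placement (the slides do not spell out the exact layout used by OpenAFS + OSD):

```python
def stripe_location(offset, stripes, stripesize):
    """Map a file offset to (stripe index, offset inside that stripe's object).

    Assumes round-robin striping: byte ranges of `stripesize` bytes are dealt
    out to the stripe objects in turn, like dealing cards."""
    block, within = divmod(offset, stripesize)
    return block % stripes, (block // stripes) * stripesize + within

# With 4 stripes of 1 KB each:
assert stripe_location(0, 4, 1024) == (0, 0)       # first block -> stripe 0
assert stripe_location(1024, 4, 1024) == (1, 0)    # second block -> stripe 1
assert stripe_location(4096, 4, 1024) == (0, 1024) # wraps back to stripe 0
assert stripe_location(5000, 4, 1024) == (0, 1928) # 904 bytes into block 4
```

Mirroring needs no such mapping: every copy holds the whole byte range, so a reader may pick any of the up-to-8 copies.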

Describing a file: Objects + Segments. To a file existing on some OSDs, more data could later be appended. The appended data may be stored on different OSDs (in case there is not enough free space on the old ones). This leads to the concept of segments: the appended part is stored in objects belonging to a new segment.

Describing a file: Objects + Segments + File Copies. The whole file could get a copy on an archival OSD (tape, HSM). This leads to the concept of file copies.

Describing a file: The complete osd metadata. The vnode of an AFS file points to quite a complex structure of osd metadata:
- Objects are contained in segments.
- Segments are contained in file copies.
- Additional metadata, such as md5 checksums, may be included.
Even in the simplest case of a file stored as a single object, the whole hierarchy (file copy, segment, object) exists in the metadata. The osd metadata of all files belonging to a volume are stored together in a single volume special file. This osdmetadata file has slots of constant length, which the vnodes point to. In case of complicated metadata, multiple slots can be chained. The osd metadata are stored in network byte order to allow easy transfer during volume moves or replication to other machines.

Describing a file: osd metadata in memory. In memory the file is described by a tree of C structures for the different components:
- osd_p_filelist: list of osd_p_file entries
- osd_p_file: segmlist, metalist, archvers, archtime, spare
- osd_p_segm: objlist, length, offset, stripes, stripesize, copies
- osd_p_meta: additional metadata (e.g. md5 checksum)
- osd_p_obj: obj_id, part_id, osd_id, stripe
These structures are serialized in network byte order by means of rxgen-created xdr routines into slots of the volume special file osdmetadata.
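Network-byte-order serialization of this kind can be sketched with Python's struct module. The field widths below (64-bit obj_id and part_id, 32-bit osd_id and stripe) follow the earlier T10 slide for the ids but are otherwise assumptions; the real slot layout is defined by the rxgen-generated XDR routines, not reproduced here.

```python
import struct

def pack_osd_p_obj(obj_id, part_id, osd_id, stripe):
    """Serialize one osd_p_obj record big-endian ('>' = network byte order)."""
    return struct.pack(">QQII", obj_id, part_id, osd_id, stripe)

def unpack_osd_p_obj(buf):
    """Deserialize a record packed by pack_osd_p_obj."""
    return struct.unpack(">QQII", buf)

rec = pack_osd_p_obj(0x1234, 1108560417, 8, 0)
assert unpack_osd_p_obj(rec) == (0x1234, 1108560417, 8, 0)
assert len(rec) == 24  # fixed length, so slots of constant size work
```

Because the byte order is fixed, a slot written on one server can be read unchanged on another machine during a volume move, which is exactly the point made on the slide.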

Example for OSD metadata. The new fs subcommand osd shows the OSD metadata of a file. The 1st entry describes the actual disk file (its length is taken from the vnode); the 2nd and 3rd entries describe archival copies with md5 checksums. The on-line file has been restored from the second archive on April 1.

> fs osd mylargefile.tar
mylargefile.tar has 436 bytes of osd metadata, v=3
On-line, 1 segm, flags=0x0
  segment: lng=0, offs=0, stripes=1, strsize=0, cop=1, 1 objects
    object: obj=1108560417.96.161157.0, osd=8, stripe=0
Archive, dv=67, 2008-03-19 12:17:23, 1 segm, flags=0x2
  segment: lng=2307727360, offs=0, stripes=1, strsize=0, cop=1, 1 objects
    object: obj=1108560417.96.161157.0, osd=5, stripe=0
    metadata: md5=6e71eff09adc46d74d1ba21e14751624 as from 2008-03-19 13:43:56
Archive, dv=67, 2008-03-19 13:43:56, 1 fetches, last: 2008-04-01, 1 segm, flags=0x2
  segment: lng=2307727360, offs=0, stripes=1, strsize=0, cop=1, 1 objects
    object: obj=1108560417.96.161157.0, osd=13, stripe=0
    metadata: md5=6e71eff09adc46d74d1ba21e14751624 as from 2008-03-24 02:15:27
>

The osddb database. The osddb database contains entries for all OSDs with id, name, priorities, size ranges, etc. The filesystem parameters are updated every 5 minutes by the OSDs. OSD 1 is a dummy used by the default policy to determine the maximum file size in the fileserver partition.

[Output of 'osd l': a table listing, for each OSD, its id, name (location), total space, usage %, flags (arch for archival OSDs 4, 5, and 13), write/read priorities, owner, server, lun, and size range. The cell's OSDs are 1 local_disk, 4 raid6, 5 tape, 8 afs16-a, 9 mpp-fs9-a, 10 afs4-a, 11 w7as(hgw), 12 afs1-a, 13 gpfs, 14 afs6-a, 17 mpp-fs8-a, 18 mpp-fs3-a, 19 mpp-fs10-a, 20 sfsrv45-a, 21 sfsrv45-b, 22 mpp-fs2-a, 23 mpp-fs11-a, 24 mpp-fs12-a, 25 mpp-fs13-a, 26 afs0-a, 27 afs11-z, 32 afs8-z.]
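The size ranges in that table drive the default placement policy described earlier: the fileserver keeps small files on its own partition (the dummy OSD 1 sets that threshold) and picks an OSD whose range covers the file size. A minimal sketch with invented thresholds (the real policy also weighs priorities, location, and free space):

```python
# Hypothetical size ranges per OSD, loosely modeled on the 'osd l' output;
# the byte values here are invented for illustration.
OSD_RANGES = {
    "local_disk": (0, 1 * 2**20),            # 0 KB - 1 MB: stays on the fileserver
    "raid6":      (1 * 2**20, 8 * 2**20),    # 1 MB - 8 MB
    "gpfs":       (8 * 2**20, 100 * 2**30),  # 8 MB - 100 GB
}

def eligible_osds(size_bytes):
    """All OSDs whose configured size range covers this file size."""
    return [name for name, (lo, hi) in OSD_RANGES.items() if lo <= size_bytes < hi]

assert eligible_osds(4 * 2**10) == ["local_disk"]  # 4 KB file stays local
assert eligible_osds(2 * 2**20) == ["raid6"]       # 2 MB file goes to an OSD
assert eligible_osds(16 * 2**20) == ["gpfs"]       # 16 MB file
```

Among the eligible OSDs the real fileserver would then choose by priority and fill level; with only one candidate per range, this sketch skips that step.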

Why use OpenAFS + Object Storage?
- OpenAFS scales very well if thousands of clients access files in different volumes on different servers (students' home directories distributed over many servers, accessed from slow clients).
- OpenAFS doesn't really scale when thousands of clients access files in the same RW-volume (huge software repositories in batch environments, e.g. at CERN). Distributing files over many OSDs can help.
- It doesn't scale when fast clients read large files. Striping files over multiple OSDs can help.
- It doesn't scale when multiple clients read the same large files. Multiple copies of the files on different OSDs can help.
- If you have large, infrequently used files, HSM functionality would be nice (long-term preservation of photos, audio and video data, or raw data from experiments). Archival OSDs make data migration into an underlying HSM system possible.

Simple Example: Huge volumes. Classical OpenAFS vs. OpenAFS + Object Storage: the same amount of disk space, but split between the fileserver partition (lower piece) and object storage (upper piece).
- Load on the servers is balanced even if only the red volume is currently in use.
- Huge volumes can still be moved or replicated.

CERN Tests: Read/Write, 50 clients, variable number of OSDs. The diagram shows that the total throughput to a single AFS volume scales with the number of OSDs used; the traffic is distributed equally over the OSDs. The order of reads and writes guaranteed that the files to be read could not be in the client's cache. Total network traffic on the OSDs was measured in steady state. Values for 6 and 7 OSDs are missing (NFS problem when writing the log!). Each new OSD contributes ~46 MB/s as long as the metadata server (fileserver) is not the bottleneck. [Bar chart: total throughput (read+write) in MB/s, 0-400, for 1 to 8 OSDs, stacked by osd_1 through osd_8.]

CERN Tests: 50 clients reading the same 1.3 GB file.
- Normal AFS file (left column): the normal file read can use the full bandwidth because disk I/O doesn't play a role.
- Mirrored file on 7 OSDs (right column): of course, writing the file was slower than writing a simple file. The diagram shows that not all copies are accessed equally (osd_2 much less). Total throughput on the clients was 518 MB/s for the mirrored file!
[Bar chart: total read throughput in MB/s, 0-600, comparing 'Normal Fileserver' and 'Mirrored OSDs', stacked by osd_1 through osd_7.]

CERN Tests: ~120 clients reading randomly in a large RW-volume. This test simulates the access pattern to the ATLAS LCG software volume at CERN:
- the 50 GB RW-volume has > 700,000 small files and 115,000 directories
- nearly all accesses are stat or read, with 10 times more stat than read
The contents of the volume were copied twice into the test cell:
- 1st copy: normal AFS volume (left column)
- 2nd copy: all files distributed over 8 OSDs (right column)
Scripts ran in parallel with random access to files and directories, with 10 times more stat than read.
[Bar chart: total read throughput in MB/s, 0-45, comparing 'Normal volume' and 'All files in OSD', stacked by osd_1 through osd_8.]

New command: osd. This administrator command is used to manage OSDs and check their content. Some subcommands:
- list: list OSDs in the database
- add: make a new OSD entry in the database
- set: change an existing OSD entry in the database
- volumes: list all volumes which have data in an OSD
- objects: list all objects in an OSD belonging to a volume
- examine: details of an object (size, time, link count...)
- increment: increase the link count of an object
- decrement: decrease the link count of an object
- wipecandidates: list objects which have not been used for a long time
- md5sum: make the md5 checksum of an object and display it
- threads: show currently active RPCs on an OSD

OSDs in our cell.

[Output of 'osd list': a table showing, for each of the cell's OSDs (ids 1 through 33, names local_disk, raid6, tape, afs16-a, mpp-fs9-a, afs4-a, w7as, afs1-a, gpfs, afs6-a, mpp-fs8-a, mpp-fs3-a, mpp-fs10-a, sfsrv45-a, sfsrv45-b, mpp-fs2-a, mpp-fs11-a, mpp-fs12-a, mpp-fs13-a, afs0-a, afs11-z, afs8-z, test), its total space, usage %, flags (arch), write/read priorities, owner, server, lun, and size range; 4.4 TB in total.]

Wipeable OSDs in our cell. The last column shows how long files remain on-line. In some OSDs no files have been wiped since I introduced the new field in the database!

[Output of 'osd list -wipeable': a table showing, for each wipeable OSD (ids 8-32), its size, state, owner, usage %, high-water limit, minimum wipe size, and the date of the newest wiped file; 4.4 TB in total.]

Additions for vos. Small additions:
- vos dump -metadataonly: dumps only directories and osd metadata; to be used with dumptool to identify objects
- vos dump -osd: includes data on OSDs in the dump (not useful in an HSM environment!)
- vos setfields -osdflag: sets the osd flag (otherwise OSDs will not be used)
New subcommands:
- archcand: lists the oldest OSD files without an archive copy; used in scripts for automatic archiving
- listobjects: lists objects on a specified OSD for the whole server or a single volume
- salvage: checks (and repairs) link counts and sizes
- splitvolume: splits a large volume at a subdirectory
- traverse: creates a file size histogram and more

vos traverse: File Size Histogram of cell ipp-garching.mpg.de (245 TB)

File Size Range          Files       %   run-%        Data        %   run-%
---------------------------------------------------------------------------
    0 B -   4 KB      57495847   50.95   50.95   71.626 GB    0.03    0.03
   4 KB -   8 KB       9010902    7.98   58.93   49.160 GB    0.02    0.05
   8 KB -  16 KB       8171655    7.24   66.17   87.148 GB    0.03    0.08
  16 KB -  32 KB       8618207    7.64   73.81  180.994 GB    0.07    0.15
  32 KB -  64 KB       7166017    6.35   80.16  329.492 GB    0.13    0.29
  64 KB - 128 KB       4921966    4.36   84.52  433.479 GB    0.17    0.46
 128 KB - 256 KB       3593541    3.18   87.71  636.145 GB    0.25    0.71
 256 KB - 512 KB       3814530    3.38   91.09    1.306 TB    0.53    1.24
 512 KB -   1 MB       2738299    2.43   93.51    1.776 TB    0.72    1.97
   1 MB -   2 MB       1757244    1.56   95.07    2.436 TB    0.99    2.96
   2 MB -   4 MB       1693764    1.50   96.57    4.404 TB    1.80    4.76
   4 MB -   8 MB       1480593    1.31   97.88    7.820 TB    3.19    7.95
   8 MB -  16 MB        840360    0.74   98.63    9.068 TB    3.70   11.64
  16 MB -  32 MB        538000    0.48   99.10   11.118 TB    4.53   16.18
  32 MB -  64 MB        400603    0.35   99.46   16.570 TB    6.76   22.93
  64 MB - 128 MB        255473    0.23   99.68   21.421 TB    8.73   31.67
 128 MB - 256 MB        134016    0.12   99.80   23.066 TB    9.40   41.07
 256 MB - 512 MB         92352    0.08   99.89   34.134 TB   13.92   54.99
 512 MB -   1 GB        115540    0.10   99.99   78.996 TB   32.21   87.20
   1 GB -   2 GB         11109    0.01  100.00   16.089 TB    6.56   93.76
   2 GB -   4 GB          2003    0.00  100.00    5.171 TB    2.11   95.86
   4 GB -   8 GB           558    0.00  100.00    3.400 TB    1.39   97.25
   8 GB -  16 GB           288    0.00  100.00    2.723 TB    1.11   98.36
  16 GB -  32 GB            53    0.00  100.00    1.063 TB    0.43   98.80
  32 GB -  64 GB            39    0.00  100.00    1.698 TB    0.69   99.49
  64 GB - 128 GB            11    0.00  100.00  866.071 GB    0.34   99.83
 128 GB - 256 GB             3    0.00  100.00  418.694 GB    0.17  100.00
---------------------------------------------------------------------------
Totals:              112852973 Files            245.276 TB

99.5% of all files are < 64 MB (56.2 TB); 93.5% of all files are < 1 MB (4.8 TB).

vos traverse: Storage Usage

Storage usage:
---------------------------------------------------------------
       Osd  1 local_disk  109782404 files      49.140 TB
arch.  Osd  4 raid6         1091200 objects     3.366 TB
arch.  Osd  5 tape          3070437 objects   196.127 TB
       Osd  8 afs16-a         22271 objects     3.379 TB
       Osd  9 mpp-fs9-a      131524 objects     2.776 TB
       Osd 10 afs4-a          24995 objects     3.363 TB
       Osd 11 w7as            96184 objects     1.611 TB
       Osd 12 afs1-a         254905 objects     1.283 TB
arch.  Osd 13 gpfs           380514 objects    58.713 TB
       Osd 14 afs6-a           8179 objects   979.375 GB
       Osd 17 mpp-fs8-a       15090 objects   620.179 GB
       Osd 18 mpp-fs3-a       13766 objects     1.138 TB
       Osd 19 mpp-fs10-a      24619 objects     1.476 TB
       Osd 20 sfsrv45-a       17901 objects    65.087 GB
       Osd 21 sfsrv45-b        1426 objects   571.135 GB
       Osd 22 mpp-fs2-a        4600 objects     1.566 TB
       Osd 23 mpp-fs11-a      24743 objects     1.246 TB
       Osd 24 mpp-fs12-a      25273 objects     1.991 TB
       Osd 25 mpp-fs13-a      26469 objects     1.807 TB
       Osd 26 afs0-a           8582 objects     1.460 TB
       Osd 27 afs11-z          5382 objects     1.490 TB
       Osd 32 afs8-z           2761 objects   733.612 GB
       Osd 33 test                1 objects     4.656 GB
---------------------------------------------------------------
Total                     115033226 objects   334.852 TB

vos traverse: Data without a copy

Data without a copy (if !replicated):
---------------------------------------------------------------
       Osd  1 local_disk  109782404 files      49.140 TB
arch.  Osd  4 raid6              12 objects    81.383 MB
arch.  Osd  5 tape          1495613 objects   133.329 TB
       Osd  8 afs16-a            14 objects     4.908 GB
       Osd  9 mpp-fs9-a          72 objects   334.187 MB
       Osd 10 afs4-a              8 objects     1.144 GB
arch.  Osd 13 gpfs                7 objects    95.192 MB
       Osd 17 mpp-fs8-a           9 objects    38.744 MB
       Osd 19 mpp-fs10-a         14 objects    66.043 MB
       Osd 20 sfsrv45-a          10 objects    62.979 MB
       Osd 23 mpp-fs11-a         14 objects    66.107 MB
       Osd 24 mpp-fs12-a         14 objects    65.891 MB
       Osd 25 mpp-fs13-a         15 objects   407.047 MB
---------------------------------------------------------------
Total                     111278206 objects   182.476 TB

vos traverse doesn't know whether volumes are replicated (but they are!). Data without a copy on non-archival OSDs should be new (< 1 hour). After a disaster with TSM-HSM we are copying objects from Osd 5 to Osd 13. These objects must be read from tape, so it may take a year.

Additions for fs. Some new subcommands (mostly for administrators):
- ls: like ls -l, but differentiates between files, objects, and wiped objects
- vnode: shows the contents of a file's vnode
- fidvnode: as above, but by fid; also shows the relative AFS path inside the volume
- [fid]osd: shows the OSD metadata of an OSD file
- [fid]archive: generates a copy on an archival OSD; the md5 checksum is saved in the osd metadata
- [fid]replaceosd: moves an object from one OSD to another (or from local disk to OSD, or from an OSD to local disk, or removes a copy on an OSD)
- [fid]wipe: erases the on-line copy if the file has a good archive copy
- translate: translates a NAMEI path to a fid and shows the relative AFS path inside the volume
- threads: shows active RPCs on the fileserver
- protocol: turns OSD usage on/off in the client
- listlocked: shows locked vnodes on the fileserver
- createstripedfile: preallocates a striped and/or mirrored OSD file

Backup Strategies
Classical backup:
- Volumes using object storage can be backed up by vos dump. The dumps do not contain the objects' data, only their metadata.
- Backup by volume replication to another server, for later use with vos convertROtoRW. The RO-volumes are just mirrors of the RW-volumes and don't contain the objects' data.
Both techniques do not protect the data in the OSDs. Therefore files in object storage need to be backed up separately:
- Have archival OSDs (either on disk or in HSM systems with tape robots).
- Run an archive script as an instance on the database servers which calls
  - vos archcand ... to extract a list of candidates from the fileservers
  - fs fidarchive ... to create the archival copy of the files on an archival OSD
- The metadata of the archival copies are stored in the volume along with md5 checksums.

HSM functionality for AFS. Now that we have archival copies, why not use them to obtain HSM functionality? We run a wiper script as an instance on the database servers which
- checks disk usage with osd list -wipeable, and when a high-water mark is reached
- gets candidates with osd wipecandidates ..., and
- wipes (frees) the on-line version of the file in the RW-volume with fs fidwipe ..., and at the end also in the RO-volumes by vos release.
About 10,000 of our 25,000 volumes use object storage. These volumes hold 195 TB == 80% of the 245 TB total data; of these 195 TB, only 25 TB == less than 14% are on-line.
- Wiped files come back to disk automatically when they are accessed.
- Wiped files can be prefetched to disk by fs prefetch ... to prepare for batch jobs.
The queuing of fetch requests uses an intelligent scheduler which allows users to submit many requests at a time, but always lets only the first entries of all users compete. So a single request of one user can bypass a long queue of another user.
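The fairness property of that fetch scheduler (one request per user competes at a time) can be sketched with per-user FIFOs served round-robin. This is an illustration of the described behavior, not the actual scheduler code:

```python
from collections import OrderedDict, deque

class FetchScheduler:
    """Per-user FIFO queues; the dispatcher rotates among users, so the
    first entry of every user competes regardless of queue lengths."""

    def __init__(self):
        self.queues = OrderedDict()  # user -> deque of pending FIDs

    def submit(self, user, fid):
        self.queues.setdefault(user, deque()).append(fid)

    def next_request(self):
        if not self.queues:
            return None
        user, q = next(iter(self.queues.items()))  # user who waited longest
        fid = q.popleft()
        del self.queues[user]
        if q:                     # re-append: the user goes to the back,
            self.queues[user] = q # giving other users' first entries a turn
        return (user, fid)

s = FetchScheduler()
for i in range(3):
    s.submit("alice", f"a{i}")   # alice floods the queue
s.submit("bob", "b0")            # bob submits one request afterwards
order = [s.next_request() for _ in range(4)]
assert order == [("alice", "a0"), ("bob", "b0"), ("alice", "a1"), ("alice", "a2")]
```

Bob's single request is served second instead of last, which is exactly the bypass behavior described above.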

vos archcand. This command goes through all RW-volumes on the server and looks for files in object storage which need an archival copy. It returns, sorted by age, the FIDs of files older than one hour without enough copies.

Syntax: vos archcand -server <machine name> [-minsize <minimal file size>] [-maxsize <maximal file size>] [-copies <number of archival copies required, default 1>] [-candidates <number of candidates, default 256>] [-osd <id of archival osd>] [-cell <cell name>] [-localauth] [-encrypt] [-help]

With the -osd argument only files suitable for this OSD are selected. This makes it possible to run a separate script for each archival OSD, resulting in best throughput.

fs fidarchive
This command creates an archival copy of the file belonging to the FID.
Syntax:
fs fidarchive -fid <volume.vnode.uniquifier> [-osd <osd number>] [-cell <cellname>] [-help]
Without the -osd argument the best choice is taken from the osddb database. During the archiving process the md5 checksum is calculated and inserted into the OSD metadata of the file.
38
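The md5 checksum is computed while the data stream to the archival OSD, not in a second pass over the file. A minimal sketch of that pattern in Python (hypothetical helper; the real work happens inside the fileserver/rxosd):

```python
import hashlib

def archive_stream(src, dst, chunk_size=64 * 1024):
    """Copy src (a readable file object) to dst while computing md5.
    Returns (bytes_copied, hex_md5); the checksum would be stored in
    the OSD metadata of the file, as the slide describes."""
    md5 = hashlib.md5()
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        md5.update(chunk)     # hash each chunk as it passes through
        dst.write(chunk)
        total += len(chunk)
    return total, md5.hexdigest()
```

Streaming the hash this way means archiving a file costs one read of its data, regardless of file size.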

osd wipecandidates
This command returns a list of objects to be wiped from the OSD.
Syntax:
osd wipecandidates -server <name or IP-address> [-lun <0 for /vicepa, 1 for /vicepb ...>]
    [-max <number of candidates, default 100>] [-criteria <0: age, 1: size, 2: age*size>]
    [-cell <cell name>] [-help]
Returns a list of objects sorted by atime and size of the object.
39
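The three -criteria values are simply three sort keys over each object's atime and size. A small illustration (Python; the record fields are invented): criterion 2 combines both, so large, long-unused objects are wiped first.

```python
import time

def wipe_candidates(objects, criteria=2, max_candidates=100, now=None):
    """objects: dicts with 'atime' (epoch seconds) and 'size' (bytes).
    criteria: 0 = oldest first, 1 = largest first, 2 = age * size first."""
    now = now if now is not None else time.time()
    keys = {
        0: lambda o: now - o["atime"],                  # age
        1: lambda o: o["size"],                         # size
        2: lambda o: (now - o["atime"]) * o["size"],    # age * size
    }
    return sorted(objects, key=keys[criteria], reverse=True)[:max_candidates]
```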

fs fidwipe
This command tries to remove the on-line copy of the file belonging to the FID.
Syntax:
fs fidwipe -fid <volume.vnode.uniquifier> [-cell <cellname>] [-help]
Before the on-line copy is removed, the existence and correct length of an up-to-date archival copy of the file are checked. In the case of TSM-HSM the object there must already have a copy on tape (state 'm' or 'p'), otherwise the wipe request aborts.
Wiped files
- appear in fs ls with the character 'w' at the beginning of the line
- automatically come back on-line when they are opened
- can be brought on-line asynchronously by the fs prefetch command.
40

Other Goodies: vos split
Splits a volume at a subdirectory without moving any data:
- Only the volume special files (volinfo, small and large vnode files, osd metadata) are copied, pointing to the same real inodes/objects.
- Real objects get hard links in the new volume's NAMEI tree.
- In the new volume all references to the part not covered by it are removed, and in the old volume all references to files under the subdirectory.
- The subdirectory is replaced by a mount point to the new volume.
Very useful, especially for volumes containing files in HSM!
Syntax:
vos splitvolume -id <volume name or ID> -newname <name of the new volume>
    -dirvnode <vnode number of directory where the volume should be split> [-cell <cell name>]
The vnode number of the subdirectory can be obtained by fs getfid.
41
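The reason vos split moves no data is that the new volume's tree just hard-links the same underlying inodes. A toy demonstration of that trick with plain directories (Python; this only mimics the idea and is not the vos split implementation, which works on NAMEI special files and vnodes):

```python
import os

def split_tree(old_root, subdir, new_root):
    """Hard-link every file under old_root/subdir into new_root.
    The data blocks are never copied: each os.link just adds a second
    directory entry for the same inode."""
    src = os.path.join(old_root, subdir)
    for dirpath, dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target = new_root if rel == "." else os.path.join(new_root, rel)
        os.makedirs(target, exist_ok=True)
        for name in filenames:
            os.link(os.path.join(dirpath, name), os.path.join(target, name))
    # The real vos split then deletes the subtree's vnodes from the old
    # volume and replaces the subdirectory with a mount point.
```

Because only directory entries are created, the cost is proportional to the number of names, not the number of bytes - consistent with the 73 GB volume later in the talk splitting in minutes.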

CERN Tests: ~120 clients reading randomly in a large RW-volume
This test simulates the access pattern to the ATLAS LCG software volume at CERN:
- The 50 GB RW-volume has > 700,000 small files and 115,000 directories.
- Nearly all accesses are stat or read, with 10 times more stat than read.
The contents of the volume were copied twice into the test cell:
- 1st copy: normal AFS volume (left column)
- 2nd copy: all files distributed over 8 OSDs (right column)
Scripts run in parallel with random access to files and directories, 10 times more stat than read.
[Chart: total read throughput in MB/s (0-45), normal volume vs. all files in OSD; the OSD bar is stacked by osd_1 ... osd_8]
42

vos split
As a T2 center for the LHC at CERN our cell also contained a giant ATLAS LCG volume:
~: vos ex mpp.lcg.atl-grid.sl10
mpp.lcg.atl-grid.sl10  1108561605 RW  mpp-fs13.t2.rzg.mpg.de /vicepz
----snip----
77175850 K On-line  1639780 files
73 GB with 1.6 million vnodes, > 240 thousand directories, > 1.3 million files.
I split it last week into 12 pieces, each one < 8 GB with < 200 thousand files.
Then I distributed all files > 1 MB into object storage: 8459 files with 44 GB into 6 OSDs (plus copies in the archival OSDs).
Splitting took between 2 and 16 minutes depending on the new volume's size.
43

vos split
Example (volume to split: mpp.lcg.atl-grid.sl10, new volume's name: mpp.lgc.atl.r_12-0_1, directory vnode: 13):
~/: vos split mpp.lcg.atl-grid.sl10 mpp.lgc.atl.r_12-0_1 13
1st step: extract vnode essence from large vnode file
    242300 large vnodes found
2nd step: look for name of vnode 13 in directory 1108563747.7.1432513
    name of 13 is rel_12-0_1
3rd step: find all directory vnodes belonging to the subtree under 13 "rel_12-0_1"
    10266 large vnodes will go into the new volume
4th step: extract vnode essence from small vnode file
    1397474 small vnodes found
5th step: find all small vnodes belonging to the subtree under 13 "rel_12-0_1"
    60464 small vnodes will go into the new volume
6th step: create hard links in the AFSIDat tree between files of the old and new volume
7th step: create hard links in the AFSIDat tree between directories of the old and new volume and make dir 13 the new volume's root directory.
8th step: write new volume's metadata to disk
9th step: create mountpoint "rel_12-0_1" for new volume in old volume's directory 7.
10th step: delete large vnodes belonging to subtree in the old volume.
11th step: delete small vnodes belonging to subtree in the old volume.
Finished!
~/:
44

Other Goodies: vos salvage
Checks the consistency of an AFS volume:
- Compares file sizes in the vnode with those on local disk and in OSDs.
- Compares link counts on local disk and in OSDs with the number of RO- and BK-volumes.
Examples:
> vos salvage 536878151
536878151: 22 inodes and 9284 objects checked, 0 errors detected
> vos salvage 536938786
Object 536938786.67956.53754.0: linkcount wrong on 5 (4 instead of 3)
Object 536938786.122620.97761.0: linkcount wrong on 5 (4 instead of 3)
Object 536938786.129314.102237.1 has wrong length on 5 (11819691 instead of 11870419)
58130 inodes and 57800 objects checked, 3 errors detected
ATTENTION: with some options you can let it repair the inconsistencies:
- Increment or decrement the wrong link counts.
- Correct the file length in the vnode.
45
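The checks can be pictured as comparing two tables keyed by FID, one built from the vnodes and one from the objects (Python sketch with an invented record layout; the real command also walks the NAMEI tree on disk and knows about RO-/BK-volumes):

```python
def salvage_check(vnodes, objects):
    """vnodes, objects: dicts mapping fid -> {'length': ..., 'linkcount': ...}.
    Returns a list of human-readable error strings, empty if consistent."""
    errors = []
    for fid, v in vnodes.items():
        o = objects.get(fid)
        if o is None:
            errors.append(f"Object {fid} missing")
            continue
        if o["length"] != v["length"]:
            errors.append(f"Object {fid} has wrong length "
                          f"({o['length']} instead of {v['length']})")
        if o["linkcount"] != v["linkcount"]:
            errors.append(f"Object {fid}: linkcount wrong "
                          f"({o['linkcount']} instead of {v['linkcount']})")
    return errors
```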

More Goodies: afsio
This command can be used to write and read files bypassing the cache manager.
Subcommands:
- read: reads an AFS file (normal or OSD) and writes the result to stdout
- write: reads from stdin and writes an AFS file (normal or OSD)
- append: as write, but, as the name says, appends
- fidread: as read, but with a FID instead of a path
- fidwrite, fidappend: analogous
Note: you still need an AFS client to locate the file and the fileserver.
afsio is heavily used at the AUG experiment at IPP Garching to copy large files from the data acquisition systems into AFS.
46

What's new since the AFS Workshop in Newark
- Integration of Matt Benjamin's cache bypass code for read
- Multiple parallel streams for writes to OSDs on WAN connections
- Support of DCACHE as archival OSD
- Some bug fixes
47

Matt Benjamin's Bypass Cache now integrated
At the Workshop in Newark Matt Benjamin presented a version of the OpenAFS client for Linux which can bypass the cache when reading large files.
- I integrated his code into my OpenAFS 1.4.7 tree and added what was necessary to bypass the cache for files in object storage.
- Configure with --enable-cache-bypass to get this feature.
- Use fs bypassthreshold <size> to activate it, example: fs bypassthreshold 32m sets the threshold to 32 MB.
First test results are mixed:
- On my laptop read was slower with cache bypass than with caching.
- On some MP machines read was faster when bypassing the cache.
Bypassing does asynchronous read-ahead, so why it can be slower is not yet understood.
48

Multiple parallel streams for store operations in WAN
With high round-trip times throughput is limited by the window size.
Example: rtt = 22 ms (as we see between Garching and Greifswald):
    max. data rate = window size * packet size / rtt = 32 * 1440 / 0.022 = 2.1 MB/s
Measured data rates are even lower: I saw 1.2 MB/s.
- From the beginning rxosd contained testing code to fake striping.
- With fs protocol rxosd -streams <n> root may change the number of streams which are used for rtt > 10 ms. n may be 1, 2, 4, or 8.
- This is possible only for files which are not already striped or mirrored.
- With 8 streams I measured 6.2 MB/s writing a 100 MB file to Greifswald.
For read I haven't seen an improvement because each chunk requires a new RPC. Therefore parallel streams are used only for store operations. It's better to use the bypass cache (I saw 1.65 MB/s instead of 1.2 MB/s).
49
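The window-size bound quoted above, and the theoretical gain from n parallel streams, follow from a one-line formula (Python; 32 packets of 1440 bytes per window are the numbers from the slide, and 1 MB is taken as 10^6 bytes):

```python
def max_rate_mb_s(window_packets, packet_bytes, rtt_s, streams=1):
    """Upper bound on throughput when each stream can have at most one
    window in flight per round trip: streams * window * packet / rtt."""
    return streams * window_packets * packet_bytes / rtt_s / 1e6

single = max_rate_mb_s(32, 1440, 0.022)      # ~2.1 MB/s, as on the slide
eight = max_rate_mb_s(32, 1440, 0.022, 8)    # 8x ceiling; 6.2 MB/s was measured
```

The measured 6.2 MB/s with 8 streams sits well below the 8x ceiling, which is expected since the formula ignores per-RPC overhead and packet loss.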

DCACHE Support
DESY (Hamburg and Zeuthen) are using DCACHE along with OSM as HSM system.
DCACHE is not a complete filesystem:
- It can be mounted via NFS2, but read/write performance is very bad.
- It can be accessed by library calls from libdcap.a using a URL instead of a filesystem path.
To support DCACHE as archival OSD some changes had to be made in src/vol/ihandle.h, src/vol/namei_ops.c, and src/rxosd/rxosd.c:
- /vicep<x> contains only the link tables while the files are stored directly in DCACHE.
- Configure accepts new operands: --enable-dcache, --with-dcache-path <path to dcap.h and libdcap.a>, --enable-dcache-pnfs
An rxosd in a test cell at DESY Hamburg works successfully with DCACHE. However, large file support is not yet satisfying, and DCACHE still needs to be mounted via NFS for opendir ...
50

When should you use OpenAFS + Object Storage?
Without an HSM system in the background:
- You can get better throughput for data in volumes which cannot be replicated.
- Object storage distributes the data equally over many OSDs.
- Striping distributes the load even for single large files.
- Mirroring is useful for data that are read much more often than written.
With an HSM system:
- You can get HSM functionality for AFS.
- Infrequently used data go to secondary storage.
- Archival copies of files in object storage serve as a backup solution, along with RO-volumes for small files and directories.
51

What you should remember about OpenAFS + Object Storage
1. If the osd flag in the volume is not set, it behaves exactly like classical OpenAFS.
2. If it is set, data may be stored outside the volume's partition in OSDs.
   - Load can be distributed between many OSDs and fileservers.
   - Huge volumes can still easily be moved or replicated.
3. OpenAFS + Object Storage can make use of HSM systems.
   - All data in object storage get copies on the HSM system.
   - Actively used data remain on disk; inactive data are kept only on tape.
   - This allows you to offer infinite disk space to your users.
4. The current OpenAFS + Object Storage source is based on OpenAFS 1.4.7 and can also be compiled as classical OpenAFS with some additional goodies.
5. OpenAFS + Object Storage needs to be merged into the OpenAFS CVS as soon as possible to avoid diverging developments.
52

Future Development
Felix Frank at DESY Zeuthen in Germany is working on the policies:
- The policies will control
  - which files go into object storage (based on name and size),
  - how they should be striped or mirrored,
  - and into which OSDs they should go.
- The policies will be stored in the osddb, and the directory vnodes will contain the number of the policy to apply for files in that directory.
Other work will continue to be done at RZG.
53

Testers are invited
The next step must be to bring the code into the OpenAFS CVS tree. This was already promised a year ago! (I understand: it is a lot of work.) Once it's there, it will certainly take a while until it appears in the official stable releases. (For large file support it took 3 years!)
The current code can be found on the Subversion server of DESY:
https://svnsrv.desy.de/viewvc/openafs-osd
or in AFS:
/afs/ipp-garching.mpg.de/.cs/openafs/openafs-1.4.7-osd-bpc
To build it with object storage you need to configure it at least with
configure --enable-object-storage --enable-namei-fileserver
Without --enable-object-storage you should get a classical OpenAFS with some of the goodies mentioned before.
54

DISCLAIMER
This code may still have lots of bugs!
If configured with --enable-object-storage you have to move volumes to such a server; don't start it on partitions with volumes created with classical OpenAFS (changes in vnode and volume structures).
55

Last Question
Is there someone here who wants to replace me when I retire next year?
- Interesting work (cell administration and AFS development)
- RZG is one of the leading supercomputer centers in Europe
- Garching near Munich is situated in the nice Bavarian landscape
  - remember the famous Biergärten and Oktoberfest
  - and the Alps and lakes nearby
56