COSC 6397 Big Data Analytics
Distributed File Systems (II)
Edgar Gabriel
Spring 2017

HDFS Basics
- An open-source implementation of the Google File System
- Assumes that the node failure rate is high
- Assumes a small number of large files
- Write-once-read-many access pattern
- Reads are performed in a large, streaming fashion
- Designed for high throughput rather than low latency
- Moving computation is cheaper than moving data
HDFS Components
- Namenode
  - Manages the file system's namespace, metadata, and file blocks
  - Runs on one to several machines
- Datanode
  - Stores and retrieves data blocks
  - Reports to the Namenode
  - Runs on many machines
- Secondary Namenode
  - Not used for high availability; not a backup for the Namenode
  - Performs housekeeping work for the Namenode, reducing its workload
  - Requires hardware similar to the Namenode machine
HDFS Blocks
- Files are split into blocks
  - Managed by the Namenode, stored by the Datanodes
  - Transparent to the user
- Blocks are traditionally either 64MB or 128MB
  - Default is 64MB
  - The motivation is to minimize the cost of seeks relative to the transfer rate
- The Namenode determines replica placement
- Default replication factor is 3 (both the replication factor and the block size can be chosen per file; see the sketch after these slides)
  - 1st replica on the local rack
  - 2nd replica on the local rack, but on a different machine
  - 3rd replica on a different rack

Namenode
- Arbitrator and repository for all HDFS metadata
- Executes file system namespace operations
  - open, close, rename files and directories
- Determines the mapping of blocks to Datanodes
- Data does not flow through the Namenode

Metadata in Memory
- The entire metadata is kept in main memory
- Types of metadata
  - List of files
  - List of blocks for each file
  - List of DataNodes for each block
  - File attributes, e.g. creation time, replication factor
- A transaction log
  - Records file creations, file deletions, etc.
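Both settings are exposed through the Java API: FileSystem.create has an overload taking the replication factor and block size for the new file. A minimal sketch, assuming a reachable HDFS deployment; the path and the payload are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateWithReplication {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataOutputStream out = fs.create(
                    new Path("/data/example.bin"), // illustrative path
                    true,                          // overwrite if it exists
                    4096,                          // write buffer size in bytes
                    (short) 3,                     // replication factor
                    64L * 1024 * 1024);            // block size: 64MB
            out.writeBytes("hello");
            out.close();
        }
    }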
DataNode
- A block server
  - Stores data in the local file system (e.g. ext4, xfs)
  - Stores metadata of a block (e.g. CRC)
  - Serves data and metadata to clients
- Block report
  - Periodically sends a report of all existing blocks to the NameNode
- Facilitates pipelining of data
  - Forwards data to other specified DataNodes
- Write pipeline
  - The client retrieves a list of DataNodes on which to place replicas of a block
  - The client writes the block to the first DataNode
  - The first DataNode forwards the data to the next node in the pipeline
  - When all replicas are written, the client moves on to the next block of the file
Rebalancer
- Goal: the percentage of disk space used should be similar across DataNodes
  - Usually run when new DataNodes are added
  - The cluster remains online while the rebalancer is active
  - The rebalancer is throttled to avoid network congestion
- Command line tool (hdfs balancer in Hadoop 2.x)

HDFS Limitations
- Bad at handling large numbers of small files
- Write limitations
  - Single writer per file
  - Writes only at the end of a file; no support for writing at arbitrary offsets (see the append sketch below)
- Low-latency reads
  - High throughput rather than low latency for small chunks of data
  - (In-memory data stores address this issue)
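A minimal sketch of the append-only write model, assuming an HDFS version and configuration where append is enabled; the path and the record are illustrative. Data can only be added at the current end of the file, never at an arbitrary offset:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // append() positions the stream at the end of the existing
            // file; there is no API to write at an arbitrary offset
            FSDataOutputStream out = fs.append(new Path("/data/log.txt"));
            out.writeBytes("one more record\n");
            out.close();
        }
    }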
Read/Write Operations (DataNode)
- Serves read/write requests from clients
- Performs block creation, deletion, and replication upon instruction from the Namenode
- Has no knowledge of HDFS files
  - Stores HDFS data in files on the local file system
  - Determines the optimal file count per directory
  - Creates subdirectories automatically
Comparison HDFS to PVFS2

                          PVFS2                                HDFS
    Metadata server       Distributed                          Federation of metadata servers in v2.2.0
    Data server           Stateless                            Probably stateful (because of the single-writer restriction)
    Default stripe size   64KB                                 64MB
    POSIX support         No; kernel interfaces implement      No; similar interfaces available through FUSE
                          similar semantics
Comparison HDFS to PVFS2 (cont.)

                                           PVFS2                   HDFS
    Reliability / high availability        No (experimental)       Replication
    Concurrent writes to the same file     Yes                     No
    Locking                                No                      No
    Other features                         Strided operations      Atomic append

File System Java API
- org.apache.hadoop.fs.FileSystem
  - Abstract class that serves as a generic file system representation
  - Note: it's a class and not an interface
  - The concrete implementation is selected by the URI scheme and the configuration (see the sketch below)
- Hadoop ships with multiple concrete implementations:
  - org.apache.hadoop.fs.LocalFileSystem
    - Good old native file system using local disk(s)
  - org.apache.hadoop.hdfs.DistributedFileSystem
    - Hadoop Distributed File System (HDFS)
    - Will mostly focus on this implementation
  - org.apache.hadoop.hdfs.HftpFileSystem
    - Access HDFS in read-only mode over HTTP
  - org.apache.hadoop.fs.ftp.FTPFileSystem
    - File system on an FTP server
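A minimal sketch of how the scheme of a URI selects the concrete FileSystem implementation; the namenode host and port are illustrative:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ShowFileSystems {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // hdfs:// scheme -> DistributedFileSystem (host/port illustrative)
            FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
            // file:// scheme -> LocalFileSystem
            FileSystem local = FileSystem.get(URI.create("file:///"), conf);
            System.out.println(hdfs.getClass().getName());
            System.out.println(local.getClass().getName());
        }
    }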
Example: implementation of ls

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SimpleLocalLs {
        public static void main(String[] args) throws Exception {
            // Hadoop's Path object represents a file or a directory (URI)
            Path path = new Path("/");
            if (args.length == 1) {
                path = new Path(args[0]);
            }
            Configuration conf = new Configuration();
            // A DistributedFileSystem instance will be created (utilizes the
            // fs.default.name property from the configuration file)
            FileSystem fs = FileSystem.get(conf);
            FileStatus[] files = fs.listStatus(path);
            for (FileStatus file : files) {
                System.out.println(file.getPath().getName());
            }
        }
    }

Reading data from HDFS

    InputStream input = null;
    try {
        input = fs.open(fileToRead);
    } finally {
        IOUtils.closeStream(input);
    }

- fs.open returns org.apache.hadoop.fs.FSDataInputStream
  - Other FileSystem implementations will return their own custom implementation of InputStream
- Opens the stream with a default buffer of 4 KB
- If you want to provide your own buffer size, use fs.open(Path f, int bufferSize) (see the sketch below)
- Use Hadoop's IOUtils for simplicity
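A minimal sketch of the buffer-size overload; the path and the 8 KB buffer are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenWithBufferSize {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // 8192-byte read buffer instead of the 4 KB default
            FSDataInputStream input = fs.open(new Path("/data/readMe.txt"), 8192);
            input.close();
        }
    }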
Reading data from HDFS
- IOUtils.copyBytes(inputStream, outputStream, buffer);
  - Copies bytes from an InputStream to an OutputStream
  - Hadoop's IOUtils makes the task simple
  - The buffer parameter specifies the number of bytes to buffer at a time

Reading data from HDFS

    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class ReadFile {
        public static void main(String[] args) throws IOException {
            Path fileToRead = new Path("/data/readMe.txt");
            FileSystem fs = FileSystem.get(new Configuration());
            InputStream input = null;
            try {
                input = fs.open(fileToRead);
                IOUtils.copyBytes(input, System.out, 4096);
            } finally {
                IOUtils.closeStream(input);
            }
        }
    }
Reading data - seek
- FileSystem.open returns an FSDataInputStream
  - Extension of java.io.DataInputStream
  - Supports random access and reading via two interfaces:
    - PositionedReadable: read chunks of the stream at a given offset (see the sketch after the example below)
    - Seekable: seek to a particular position in the stream
- FSDataInputStream implements the Seekable interface
  - void seek(long pos) throws IOException
    - Seeks to a particular position in the file
    - The next read will begin at that position
    - If you attempt to seek past the file boundary, an IOException is thrown
  - Expensive operation: strive for streaming rather than seeking

Reading data - seek

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class SeekReadFile {
        public static void main(String[] args) throws IOException {
            Path fileToRead = new Path("/training/data/readMe.txt");
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataInputStream input = null;
            try {
                input = fs.open(fileToRead);
                System.out.print("start position=" + input.getPos() + ": ");
                IOUtils.copyBytes(input, System.out, 4096, false);
                input.seek(11);
                System.out.print("start position=" + input.getPos() + ": ");
                IOUtils.copyBytes(input, System.out, 4096, false);
            } finally {
                IOUtils.closeStream(input);
            }
        }
    }
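PositionedReadable reads at an explicit offset without moving the stream's current position. A minimal sketch; the path, offset, and buffer size are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PositionedRead {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataInputStream input = fs.open(new Path("/data/readMe.txt"));
            byte[] buffer = new byte[64];
            // Read up to 64 bytes starting at file offset 128; unlike
            // seek() + read(), this leaves the stream position unchanged
            int bytesRead = input.read(128L, buffer, 0, buffer.length);
            System.out.println("read " + bytesRead + " bytes at offset 128");
            input.close();
        }
    }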
Writing Data in HDFS
1. Create a FileSystem instance
2. Open an OutputStream
   a) an FSDataOutputStream in this case
   b) Open a stream directly to a Path from the FileSystem
   c) Creates all needed directories on the provided path
3. Copy data using IOUtils
(A Java sketch of these three steps follows after the C API examples.)

HDFS C API

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include "hdfs.h"

    int main(int argc, char **argv) {
        const char *filename = "/data/example.txt";  /* illustrative path */
        const char *message  = "Hello, HDFS!\n";     /* illustrative payload */

        /* Namenode host name and port are illustrative */
        hdfsFS fs = hdfsConnect("namenode_hostname", 8020);
        if (!fs) {
            fprintf(stderr, "Cannot connect to HDFS.\n");
            exit(-1);
        }
        int exists = hdfsExists(fs, filename);
        if (exists == 0) { /* 0 means the path exists */
            fprintf(stdout, "File %s exists!\n", filename);
        } else {
            /* Create and open the file for writing */
            hdfsFile outFile = hdfsOpenFile(fs, filename,
                                            O_WRONLY | O_CREAT, 0, 0, 0);
            if (!outFile) {
                fprintf(stderr, "Open failed: %s\n", filename);
                exit(-2);
            }
            hdfsWrite(fs, outFile, (void*)message, strlen(message));
            hdfsCloseFile(fs, outFile);
        }
HDFS C API (cont.)

        /* Open the file for reading */
        hdfsFile inFile = hdfsOpenFile(fs, filename, O_RDONLY, 0, 0, 0);
        if (!inFile) {
            fprintf(stderr, "Failed to open %s for reading!\n", filename);
            exit(-2);
        }
        tSize size = 1024;                /* illustrative read size */
        char *data = malloc(size + 1);
        /* Read from the file */
        tSize readSize = hdfsRead(fs, inFile, (void*)data, size);
        if (readSize >= 0) {
            data[readSize] = '\0';        /* null-terminate before printing */
            fprintf(stdout, "%s\n", data);
        }
        free(data);
        hdfsCloseFile(fs, inFile);
        hdfsDisconnect(fs);
        return 0;
    }
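Back in the Java API, a minimal sketch of the three write steps listed above (create a FileSystem instance, open an FSDataOutputStream, copy data with IOUtils); the local source file and the HDFS destination path are illustrative:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class WriteFile {
        public static void main(String[] args) throws Exception {
            // 1. Create a FileSystem instance
            FileSystem fs = FileSystem.get(new Configuration());
            // 2. Open an FSDataOutputStream; create() also creates any
            //    missing directories on the provided path
            FSDataOutputStream out = fs.create(new Path("/data/writeMe.txt"));
            // 3. Copy data using IOUtils (true closes both streams when done)
            InputStream in = new FileInputStream("localFile.txt");
            IOUtils.copyBytes(in, out, 4096, true);
        }
    }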