CS370 Operating Systems


CS370 Operating Systems, Colorado State University. Yashwant K. Malaiya, Spring 2018. Lecture 24: Mass Storage, HDFS/Hadoop. Slides based on the text by Silberschatz, Galvin, and Gagne, and various sources.

FAQ

Moving-head Disk Mechanism

SSD vs HDD

Storage Array
- Can just attach disks, or arrays of disks, to an I/O port
- A storage array has controller(s) and provides features to the attached host(s):
  - Ports to connect hosts to the array
  - Memory and controlling software
  - A few to thousands of disks
  - RAID, hot spares, hot swap
- Shared storage -> more efficiency

Storage Area Network
- Common in large storage environments
- Multiple hosts attached to multiple storage arrays - flexible
- Storage traffic is carried over a dedicated network rather than the LAN

Storage Area Network (Cont.)
- A SAN is one or more storage arrays connected over a low-latency Fibre Channel fabric
- Hosts also attach to the switches
- Storage is made available from specific arrays to specific servers
- Easy to add or remove storage, or to add a new host and allocate it storage

Network-Attached Storage
- Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus): remotely attaching to file systems
- NFS and CIFS are common protocols
- Implemented via remote procedure calls (RPCs) between host and storage, typically over TCP or UDP on an IP network
- The iSCSI protocol uses an IP network to carry the SCSI protocol: remotely attaching to devices (blocks)

Disk Scheduling
- The operating system is responsible for using the hardware efficiently; for disk drives, this means fast access time and high disk bandwidth
- Minimize seek time
- Seek time ≈ seek distance (between cylinders)
- Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer

Disk Scheduling (Cont.)
- There are many sources of disk I/O requests: the OS, system processes, and user processes
- An I/O request includes the mode (input or output), disk address, memory address, and number of sectors to transfer
- The OS maintains a queue of requests, per disk or device
- An idle disk can work on an I/O request immediately; a busy disk means the work must queue
- Optimization algorithms only make sense when a queue exists

Disk Scheduling (Cont.)
- Note that drive controllers have small buffers and can manage a queue of I/O requests (of varying depth)
- Several algorithms exist to schedule the servicing of disk I/O requests
- The analysis is true for one or many platters
- We illustrate the scheduling algorithms with a request queue (cylinders 0-199): 98, 183, 37, 122, 14, 124, 65, 67, with the head pointer at cylinder 53

FCFS (First Come, First Served)
- Requests are serviced in arrival order
- For the example queue, the illustration shows a total head movement of 640 cylinders
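
A minimal Python sketch (not from the slides) of the FCFS calculation, using the request queue and head position given above; it reproduces the 640-cylinder total.

```python
# FCFS disk scheduling: service requests strictly in arrival order and
# sum the seek distance (in cylinders) between consecutive positions.
def fcfs_head_movement(head, requests):
    total = 0
    for r in requests:
        total += abs(r - head)   # distance to the next request
        head = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_head_movement(53, queue))   # -> 640 cylinders
```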

SSTF (Shortest Seek Time First)
- SSTF selects the request with the minimum seek time from the current head position
- SSTF scheduling is a form of SJF scheduling and may cause starvation of some requests
- For the example queue, the total head movement is 236 cylinders
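
The same kind of sketch for SSTF (again not from the slides): repeatedly pick the pending request closest to the current head position; it reproduces the 236-cylinder total.

```python
# SSTF disk scheduling: always service the pending request with the
# smallest seek distance from the current head position.
def sstf_head_movement(head, requests):
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

print(sstf_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # -> 236
```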

SCAN
- The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it reaches the other end, where the head movement is reversed and servicing continues
- Sometimes called the elevator algorithm
- Note that if requests are uniformly dense, then when the head reverses direction the largest density of pending requests is at the other end of the disk, and those requests have waited the longest

SCAN (Cont.)
- For the example (head at 53, sweeping down to cylinder 0 and then up to 183): total head movement is 53 + 183 = 236 cylinders
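
A minimal Python sketch of this SCAN example (not from the slides), assuming, as the figure does, that the head first sweeps down to cylinder 0 and then reverses; it reproduces the 236-cylinder total.

```python
# SCAN (elevator): in this example the head sweeps down to cylinder 0,
# then reverses and sweeps up to the largest pending request.
def scan_head_movement(head, requests, low=0):
    total = head - low                          # sweep down to the low end
    upper = [r for r in requests if r > head]
    if upper:
        total += max(upper) - low               # reverse and sweep back up
    return total

print(scan_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # -> 236
```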

C-SCAN (Circular SCAN)
- Provides a more uniform wait time than SCAN
- The head moves from one end of the disk to the other, servicing requests as it goes
- When it reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip
- Treats the cylinders as a circular list that wraps around from the last cylinder to the first one
- Total number of cylinders moved for the example? (See the next slide.)

C-SCAN (Cont.)
- For the example: total head movement is (199 - 53) + 199 + 37 = 382 cylinders (sweep up to the end, return to cylinder 0, then sweep up to 37)
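
A matching Python sketch for C-SCAN (not from the slides): sweep up to cylinder 199, return to cylinder 0 without servicing anything, then continue upward; it reproduces the 382-cylinder total.

```python
# C-SCAN: sweep up to the high end, jump back to the low end (the return
# trip services nothing but still moves the head), then sweep up again.
def cscan_head_movement(head, requests, low=0, high=199):
    total = high - head                         # sweep up to the high end
    lower = [r for r in requests if r < head]
    if lower:
        total += high - low                     # return trip across the disk
        total += max(lower) - low               # sweep up to the last request
    return total

print(cscan_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # -> 382
```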

C-LOOK
- LOOK is a version of SCAN; C-LOOK is a version of C-SCAN
- The arm only goes as far as the final request in each direction, then reverses direction immediately without first going all the way to the end of the disk
- Total number of cylinders moved for the example? (See the sketch after the next slide.)

C-LOOK (Cont.)
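
A minimal Python sketch answering the question above (not from the slides): under C-LOOK the head sweeps up only to the largest pending request, then jumps back to the smallest pending request and continues upward, giving 322 cylinders for this example.

```python
# C-LOOK: sweep up only as far as the largest pending request, then jump
# back to the smallest pending request and sweep up through the low group.
def clook_head_movement(head, requests):
    upper = [r for r in requests if r >= head]
    lower = [r for r in requests if r < head]
    total = (max(upper) - head) if upper else 0      # sweep up to the last request
    if lower:
        start = max(upper) if upper else head
        total += start - min(lower)                  # jump back to the smallest request
        total += max(lower) - min(lower)             # sweep up through the low requests
    return total

print(clook_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # -> 322
```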

Selecting a Disk-Scheduling Algorithm
- SSTF is common and has a natural appeal
- SCAN and C-SCAN perform better for systems that place a heavy load on the disk, with less starvation
- Performance depends on the number and types of requests
- Requests for disk service can be influenced by the file-allocation method and metadata layout
- Either SSTF or LOOK is a reasonable choice for the default algorithm
- What about rotational latency? It is difficult for the OS to calculate

Disk Management
- Low-level formatting, or physical formatting: dividing a disk into sectors that the disk controller can read and write
  - Each sector can hold header information (sector number), plus data, plus an error-correcting code (ECC)
  - Usually 512 bytes of data, but the size can be selectable
- To use a disk to hold files, the operating system still needs to record its own data structures on the disk
  - Partition the disk into one or more groups of cylinders, each treated as a logical disk
  - Logical formatting, or making a file system
- To increase efficiency, most file systems group blocks into clusters
  - Disk I/O is done in blocks; file I/O is done in clusters

Disk Management (Cont.)
- Raw disk access for apps that want to do their own block management and keep the OS out of the way (databases, for example)
- Boot block initializes the system
  - The tiny bootstrap code is stored in ROM
  - The bootstrap loader program, stored in the boot blocks of the boot partition, loads the OS
  - A boot disk has a boot partition
- Methods such as sector sparing are used to handle bad blocks

Booting from a Disk in Windows
- MBR: Master Boot Record
- The kernel is loaded from the boot partition

CS370 Operating Systems, Colorado State University. Yashwant K. Malaiya. Reliability & RAID. Various sources.

Reliability
- Storage is inherently unreliable. How can it be made more reliable?
- Redundancy
  - Complete mirrors of the data: 2, 3, or more copies. Use a good copy when there is a failure.
  - Additional bits: use parity to reconstruct corrupted data
- Rollback and retry: go back to a previously saved known-good state and recompute

RAID Structure
- RAID (redundant array of inexpensive disks): multiple disk drives provide higher reliability, higher performance / storage capacity, or a combination
- Redundancy increases the mean time to failure
- Mean time to repair: the exposure time during which another failure could cause data loss
- Mean time to data loss is based on the above factors
- If mirrored disks fail independently, consider disks with a 100,000-hour mean time to failure and a 10-hour mean time to repair
  - Mean time to data loss is 100,000^2 / (2 * 10) = 500 * 10^6 hours, or about 57,000 years!
  - This is the inverse of the probability that both disks will fail within the same 10-hour repair window
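
A quick Python check of the mirrored-pair arithmetic above, using the slide's numbers (MTTF = 100,000 hours per disk, MTTR = 10 hours):

```python
# Mean time to data loss for a mirrored pair, assuming independent failures:
#   MTTDL = MTTF^2 / (2 * MTTR)
mttf_hours = 100_000          # mean time to failure of one disk
mttr_hours = 10               # repair window during which a second failure is fatal

mttdl_hours = mttf_hours ** 2 / (2 * mttr_hours)
print(mttdl_hours)                  # 5e8 hours (500 * 10^6)
print(mttdl_hours / (24 * 365))     # ~57,000 years
```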

RAID Techniques
- Striping uses multiple disks in parallel by splitting data: higher performance, no redundancy (e.g., RAID 0)
- Mirroring keeps a duplicate of each disk: higher reliability (e.g., RAID 1)
- Block parity: one disk holds the parity block for the other disks; a failed disk can be rebuilt using parity. The parity load is wear-leveled if parity is interleaved across the disks (RAID 5; double parity in RAID 6)
- Ideas that did not work: bit- or byte-level striping (RAID 2, 3), bit-level coding theory (RAID 2), a dedicated parity disk (RAID 4)
- Nested combinations:
  - RAID 01: mirror of RAID 0
  - RAID 10: multiple RAID 1 sets, with striping
  - RAID 50: multiple RAID 5 sets, with striping
  - others

RAID
- Replicate data for availability
- RAID 0: no replication
- RAID 1: mirror data across two or more disks
  - The Google File System replicated its data on three disks, spread across multiple racks
- RAID 5: split data across disks, with redundancy to recover from a single disk failure
- RAID 6: RAID 5, with extra redundancy to recover from two disk failures

RAID 1: Mirroring
- Replicate writes to both disks
- Reads can go to either disk

Parity
- Parity block = block1 xor block2 xor block3

  10001101  block1
  01101100  block2
  11000110  block3
  --------
  00100111  parity block (ensures an even number of 1s in each bit position)

- Any missing block can be reconstructed from the others
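
A minimal Python sketch of the same idea (not from the slides): compute the parity block with bytewise XOR, then reconstruct a "lost" block from the survivors plus the parity.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

block1 = bytes([0b10001101])
block2 = bytes([0b01101100])
block3 = bytes([0b11000110])

parity = xor_blocks([block1, block2, block3])
print(format(parity[0], '08b'))      # 00100111, matching the slide

# Reconstruct a "lost" block (say block2) from the survivors plus parity.
rebuilt = xor_blocks([block1, block3, parity])
print(rebuilt == block2)             # True
```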

RAID 5: Rotating Parity

Read Errors and RAID Recovery
- Example: RAID 5 with 10 one-TB disks, and 1 fails
  - Read the remaining disks to reconstruct the missing data
  - Probability of an error while reading the 9 surviving TB disks (at an unrecoverable read-error rate of 10^-15 per bit) = 10^-15 * (9 disks * 8 bits/byte * 10^12 bytes/disk) = 7.2%
  - Thus the recovery probability = 92.8%
- Even better, RAID 6 keeps two redundant blocks per stripe, so reconstruction can work even in the presence of one more bad disk
- Scrubbing: read disk sectors in the background to find and fix latent errors
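
A quick Python check of the slide's arithmetic (a sketch; it assumes, as the calculation above does, an unrecoverable read-error rate of 10^-15 per bit):

```python
# Probability of hitting at least one unrecoverable read error while
# rebuilding a failed disk in a 10-disk RAID 5 (9 surviving 1 TB disks).
bit_error_rate = 1e-15             # errors per bit read (assumed rate)
bits_to_read = 9 * 8 * 1e12        # 9 disks * 8 bits/byte * 10^12 bytes/disk

p_error = bit_error_rate * bits_to_read
print(p_error)       # 0.072  -> 7.2% chance the rebuild hits a read error
print(1 - p_error)   # 0.928  -> 92.8% recovery probability
```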

RAID Levels
- Not common: RAID 2, 3, 4
- Most common: RAID 5

CS370 Operating Systems, Colorado State University. Yashwant K. Malaiya. Big Data: Hadoop, HDFS, and MapReduce. Various sources.

Hadoop: Distributed Framework for Big Data
- A recent development; not in the book
- Distributed file system
- Distributed execution

Hadoop: Core Components
- Hadoop (originally): MapReduce + HDFS
- MapReduce: a programming framework for processing parallelizable problems across huge datasets using a large number of commodity machines
- HDFS: a distributed file system designed to efficiently allocate data across multiple commodity machines and to provide self-healing functions when some of them go down
- Commodity machines: lower performance per machine, lower cost, and perhaps lower reliability compared with specialized high-performance machines
- For comparison, RAID (redundant array of inexpensive disks): using multiple disks in a single enclosure to achieve higher reliability and/or higher performance

Challenges in Distributed Big Data
Common challenges in distributed systems:
- Node failure: individual computer nodes may overheat, crash, experience hard-drive failures, or run out of memory or disk space
- Network issues: congestion and delays (large data volumes), communication failures
- Bad data: data may be corrupted, or maliciously or improperly transmitted
- Other issues: multiple versions of client software may use slightly different protocols from one another; security

HDFS Architecture
- HDFS block size: 64-128 MB (compare ext4: 4 KB); HDFS files are big
- A single HDFS cluster can span many nodes, possibly geographically distributed: datacenters - racks - blades
  - Node: a system with CPU and memory
- Metadata (corresponding to superblocks and inodes)
  - NameNode: holds the metadata, including where blocks are physically located
- Data (file blocks)
  - DataNodes: hold the blocks of files (files are distributed)
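
A rough illustration of why the large block size matters (a sketch with assumed numbers, not from the slides): the number of blocks, and therefore the amount of per-file metadata the NameNode must track, shrinks dramatically with 128 MB blocks.

```python
import math

# Blocks needed for a 1 GiB file under an ext4-style 4 KiB block size
# versus an HDFS-style 128 MiB block size (illustrative sizes only).
file_size = 1 * 1024**3                       # 1 GiB
for name, block in [("ext4, 4 KiB blocks", 4 * 1024),
                    ("HDFS, 128 MiB blocks", 128 * 1024**2)]:
    print(name, "->", math.ceil(file_size / block), "blocks")
# ext4, 4 KiB blocks -> 262144 blocks
# HDFS, 128 MiB blocks -> 8 blocks
```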

HDFS Architecture (figure: http://a4academics.com/images/hadoop/hadoop-architecture-read-write.jpg)

HDFS Write Operation (figure credit: CERN)

HDFS Fault Tolerance
- Disks use error-detecting codes to detect corruption; individual nodes or racks may fail
- DataNodes (on slave nodes): data is replicated, 3 times by default, with one copy kept far away
- DataNodes send a periodic heartbeat ("I'm OK") to the NameNode, perhaps once every 10 minutes
- The NameNode creates another copy of the affected blocks if no heartbeat arrives

HDFS Fault Tolerance (Cont.)
NameNode (on the master node) protection:
- Transaction log for file deletes/adds, etc. (only metadata is recorded)
- Creation of more replica blocks when necessary after a DataNode failure
- Standby NameNode: namespace backup
  - In the event of a failover, the standby ensures that it has read all of the edits from the JournalNodes and then promotes itself to the active state (implementation and delay are version dependent)
- NameNode metadata is kept in RAM as well as checkpointed on disk. On disk the state is stored in two files:
  - fsimage: snapshot of the file system metadata
  - editlog: changes since the last snapshot

HDFS Command-Line Interface
- hadoop fs -help
- hadoop fs -ls : lists a directory
- hadoop fs -mkdir : makes a directory in HDFS
- hadoop fs -rm : deletes a file in HDFS
- hadoop fs -copyFromLocal : copies data to HDFS from the local filesystem
- hadoop fs -copyToLocal : copies data from HDFS to the local filesystem
- Java code can read or write HDFS files directly (via a URI)
- https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/filesystemshell.html

Distributing Tasks
MapReduce engine:
- The JobTracker splits the job into smaller tasks ("Map") and sends them to the TaskTracker process on each node
- Each TaskTracker reports back to the JobTracker on job progress, sends partial results ("Reduce"), or requests new jobs
- Tasks are run on local data, thus avoiding movement of bulk data
- Originally developed for search-engine implementation
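
To show the programming model that this engine distributes, here is a minimal single-machine Python sketch of the classic word-count map and reduce steps (not from the slides and not using Hadoop itself; in a real cluster the framework shuffles the mapper output between nodes).

```python
from collections import defaultdict

# Map: emit (word, 1) for every word in one chunk of input.
def map_words(chunk):
    for word in chunk.split():
        yield word.lower(), 1

# Reduce: sum all the counts emitted for one key.
def reduce_counts(word, counts):
    return word, sum(counts)

chunks = ["the disk spins", "the head seeks the cylinder"]

# Shuffle: group mapper output by key (done by the framework in Hadoop).
grouped = defaultdict(list)
for chunk in chunks:
    for word, count in map_words(chunk):
        grouped[word].append(count)

print(dict(reduce_counts(w, c) for w, c in grouped.items()))
# {'the': 3, 'disk': 1, 'spins': 1, 'head': 1, 'seeks': 1, 'cylinder': 1}
```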

Hadoop Ecosystem Evolution
- Hadoop YARN: a framework for job scheduling and cluster resource management; it can run on top of Windows Azure or Amazon S3
- Apache Spark is more general, faster, and easier to program than MapReduce
  - "Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing," Berkeley, 2012