
Copyright 2007 Ramez Elmasri and Shamkant B. Navathe

Chapter 17 - Outline
Disk Storage Devices
Files of Records
Operations on Files
Unordered Files
Ordered Files
Hashed Files
Dynamic and Extendible Hashing Techniques
RAID Technology
Slide 13-2

Disk Storage Devices Electromechanical (magnetic) disks are the preferred secondary storage devices for high storage capacity at low cost. Data is stored as magnetized areas on the disk surfaces. A disk pack contains several magnetic disks mounted on a rotating spindle. Each disk surface is divided into concentric circular tracks; track capacities typically range from 4 to 50 KB or more. Slide 13-3

Disk Storage Devices Open casing of 2.5-inch traditional hard disk drive (left) and solid-state drive (center). Picture from: http://en.wikipedia.org/wiki/solid-state_drive Slide 13-4

Disk Storage Devices (contd.) Because a track usually holds a large amount of information, it is divided into smaller sectors. The division of a track into sectors is hard-coded on the disk surface and cannot be changed. In one type of sector organization, a sector is the portion of a track that subtends a fixed angle at the center. A track is also divided into blocks. The block size B is fixed for each system; typical block sizes range from B = 512 bytes to B = 4096 bytes. Whole blocks are transferred between disk and main memory for processing. Slide 13-5

Disk Storage Devices (contd.) Slide 13-6

Disk Storage Devices (contd.) A read-write head moves to the track that contains the block to be transferred. Disk rotation then moves the block under the read-write head for reading or writing. A physical disk block (hardware) address consists of: a cylinder number (the imaginary collection of tracks of the same radius across all recorded surfaces), the track or surface number within the cylinder, and the block number within the track. Reading or writing a disk block is time consuming because of the seek time s and the rotational delay (latency) rd. Double buffering can be used to speed up the transfer of contiguous disk blocks. Slide 13-7
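A rough worked example with assumed, typical magnitudes (not figures from the slides): if the average seek time is s = 8 ms, the average rotational delay is rd = 4 ms (half a revolution at about 7,200 rpm), and the block transfer time is about 0.1 ms, then reading one randomly located block costs roughly 8 + 4 + 0.1, about 12 ms, while reading the next contiguous block on the same track costs only the ~0.1 ms transfer time. This is why transferring whole blocks and double buffering contiguous blocks pay off.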

Disk Storage Devices (contd.) Slide 13-8

Typical Disk Parameters (Courtesy of Seagate Technology) Slide 13-9

Records Records may be of fixed or variable length. Records contain fields, each of which has a value of a particular type (e.g., amount, date, time, age). Fields themselves may be fixed length or variable length. Variable-length fields can be mixed into one record; separator characters or length fields are then needed so that the record can be parsed. Slide 13-10

Blocking Blocking refers to storing a number of records in one block on the disk. The blocking factor (bfr) is the number of records per block. There may be empty space in a block if an integral number of records does not fill the block exactly. Spanned records: records that exceed the size of one block and hence span a number of blocks. Slide 13-11
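A small worked example with assumed sizes: with block size B = 512 bytes and fixed record size R = 100 bytes, the blocking factor is bfr = floor(B / R) = 5 records per block, leaving 512 - 5 * 100 = 12 bytes unused per block; with unspanned blocking, a file of r = 1,000 records then occupies b = ceil(r / bfr) = 200 blocks.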

Files of Records A file is a sequence of records, where each record is a collection of data values (or data items). A file descriptor (or file header) includes information that describes the file, such as the field names and their data types, and the addresses of the file blocks on disk. Records are stored on disk blocks. The blocking factor bfr for a file is the (average) number of file records stored in a disk block. A file can have fixed-length records or variable-length records. Slide 13-12

Files of Records Slide 13-13

Files of Records (contd.) File records can be unspanned or spanned Unspanned: no record can span two blocks Spanned: a record can be stored in more than one block The physical disk blocks that are allocated to hold the records of a file can be contiguous, linked, or indexed. In a file of fixed-length records, all records have the same format. Usually, unspanned blocking is used with such files. Files of variable-length records require additional information to be stored in each record, such as separator characters and field types. Usually spanned blocking is used with such files. Slide 13-14

Files of Records (contd.) Slide 13-15

Operations on Files Typical file operations include:
OPEN: Readies the file for access and associates a pointer that refers to a current file record at each point in time.
FIND: Searches for the first file record that satisfies a certain condition and makes it the current file record.
FINDNEXT: Searches for the next file record (from the current record) that satisfies a certain condition and makes it the current file record.
READ: Reads the current file record into a program variable.
INSERT: Inserts a new record into the file and makes it the current file record.
DELETE: Removes the current file record from the file, usually by marking the record to indicate that it is no longer valid.
MODIFY: Changes the values of some fields of the current file record.
CLOSE: Terminates access to the file.
REORGANIZE: Reorganizes the file records; for example, records marked deleted are physically removed, or a new organization of the file records is created.
READ_ORDERED: Reads the file blocks in order of a specific field of the file.
Slide 13-16
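As an illustration, the record-at-a-time operations above can be sketched as a hypothetical Java interface; the Record type parameter and the Predicate-based condition are assumptions made for the sketch, not an API from the text.

    import java.io.IOException;
    import java.util.function.Predicate;

    // Hypothetical sketch of the record-at-a-time operations described above.
    interface RecordFile<Record> {
        void open() throws IOException;                                    // OPEN: ready the file, set the current-record pointer
        boolean find(Predicate<Record> condition) throws IOException;     // FIND: first record satisfying the condition
        boolean findNext(Predicate<Record> condition) throws IOException; // FINDNEXT: next matching record after the current one
        Record read() throws IOException;                                  // READ: current record into a program variable
        void insert(Record r) throws IOException;                          // INSERT: add a record and make it current
        void delete() throws IOException;                                  // DELETE: mark the current record as no longer valid
        void modify(Record r) throws IOException;                          // MODIFY: change fields of the current record
        void close() throws IOException;                                   // CLOSE: terminate access
        void reorganize() throws IOException;                              // REORGANIZE: physically remove deleted records
    }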

Unordered Files Also called a heap or a pile file. New records are inserted at the end of the file. A linear search through the file records is necessary to search for a record. This requires reading and searching half the file blocks on the average, and is hence quite expensive. Record insertion is quite efficient. Reading the records in order of a particular field requires sorting the file records. Slide 13-17

Ordered Files Also called a sequential file. File records are kept sorted by the values of an ordering field. Insertion is expensive: records must be inserted in the correct order. It is common to keep a separate unordered overflow (or transaction) file for new records to improve insertion efficiency; this is periodically merged with the main ordered file. A binary search can be used to search for a record on its ordering field value. This requires reading and searching about log2 b of the file blocks on average (where b is the number of file blocks), an improvement over linear search. Reading the records in order of the ordering field is quite efficient. Slide 13-18
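A minimal Java sketch of binary search over an ordered file of fixed-length records; it reuses the 20-byte record layout from the appendix, and the 5-character leading key field and the one-record-per-read simplification are assumptions, since the slides describe the search in terms of whole blocks.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class OrderedFileSearch {
        static final int REC_SIZE = 20;   // assumed fixed record size, as in the appendix
        static final int KEY_SIZE = 5;    // assumed: ordering key stored in the first 5 characters

        // Returns the record whose ordering key equals 'key', or null if it is absent.
        // Assumes the file is sorted on that key (an ordered/sequential file).
        static String binarySearch(RandomAccessFile f, int key) throws IOException {
            long lo = 0, hi = f.length() / REC_SIZE - 1;
            byte[] buf = new byte[REC_SIZE];
            while (lo <= hi) {                       // about log2(b) probes
                long mid = (lo + hi) / 2;
                f.seek(mid * REC_SIZE);
                f.readFully(buf);
                String record = new String(buf);
                int midKey = Integer.parseInt(record.substring(0, KEY_SIZE).trim());
                if (midKey == key) return record;
                if (midKey < key) lo = mid + 1;
                else hi = mid - 1;
            }
            return null;
        }
    }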

Ordered Files (contd.) Try inserting Adama, Williams Slide 13-19

Average Access Times The following table shows the average access time to access a specific record for a given type of file. (The table itself is a figure in the original slides; roughly, a heap file needs about b/2 block reads for a lookup, an ordered file about log2 b with binary search, and a hashed file typically one or two block reads, where b is the number of file blocks.) Slide 13-20

Hashed Collections Small collections that fit in main memory can be stored using an internal hashing organization. A simple hash function that maps a key to a location is: h(K) = K mod M, where M is the number of table slots. Slide 13-21
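A minimal Java sketch of such an internal hashing organization; the table size M = 13 and the array-of-slots representation are assumptions for illustration, and collision handling is deferred to the collision-resolution sketch further below.

    public class InternalHash {
        static final int M = 13;                     // assumed number of slots
        private final Integer[] slots = new Integer[M];

        static int h(int key) { return key % M; }    // the hash function from the slide: h(K) = K mod M

        // Place the key in its home slot; a full slot is a collision (handled further below).
        boolean insert(int key) {
            int i = h(key);
            if (slots[i] == null) { slots[i] = key; return true; }
            return false;                            // collision
        }

        boolean contains(int key) {
            Integer v = slots[h(key)];
            return v != null && v == key;
        }
    }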

Hashed Files Hashing for disk files is called external hashing. The file blocks are divided into M equal-sized buckets, numbered bucket 0, bucket 1, ..., bucket M-1. Typically, a bucket corresponds to one disk block (or a fixed number of contiguous blocks). One of the file fields is designated to be the hash key of the file. The record with hash key value K is stored in bucket i, where i = h(K) and h is the hashing function. Search is very efficient on the hash key. Collisions occur when a new record hashes to a bucket that is already full. An overflow file is kept for storing such records; overflow records that hash to the same bucket can be linked together. Slide 13-22

Hashed Files (contd.) There are numerous methods for collision resolution, including the following:
Open addressing: Proceeding from the occupied position specified by the hash address, the program checks the subsequent positions in order until an unused (empty) position is found.
Chaining: Various overflow locations are kept, usually by extending the array with a number of overflow positions. In addition, a pointer field is added to each record location. A collision is resolved by placing the new record in an unused overflow location and setting the pointer of the occupied hash address location to the address of that overflow location.
Multiple hashing: The program applies a second hash function if the first results in a collision. If another collision results, the program uses open addressing or applies a third hash function and then uses open addressing if necessary.
Slide 13-23
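A minimal Java sketch of the first of these methods, open addressing with a linear probe sequence; the table size and the in-memory array are assumptions for illustration, and chaining and multiple hashing follow the same skeleton with a different probe or overflow step.

    public class OpenAddressingHash {
        static final int M = 13;                          // assumed table size
        private final Integer[] slots = new Integer[M];

        static int h(int key) { return key % M; }

        // Open addressing: starting from the occupied home position,
        // check subsequent positions in order until an unused one is found.
        boolean insert(int key) {
            int home = h(key);
            for (int probe = 0; probe < M; probe++) {
                int pos = (home + probe) % M;
                if (slots[pos] == null) { slots[pos] = key; return true; }
            }
            return false;                                 // table is full
        }

        boolean contains(int key) {
            int home = h(key);
            for (int probe = 0; probe < M; probe++) {
                int pos = (home + probe) % M;
                if (slots[pos] == null) return false;     // reached an empty slot: key is absent
                if (slots[pos] == key) return true;
            }
            return false;
        }
    }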

Hashed Files (contd.) Slide 13-24

Hashed Files (contd.) To reduce overflow records, a hash file is typically kept 70-80% full. The hash function h should distribute the records uniformly among the buckets Otherwise, search time will be increased because many overflow records will exist. Main disadvantages of static external hashing: Fixed number of buckets M is a problem if the number of records in the file grows or shrinks. Ordered access on the hash key is quite inefficient (requires sorting the records). Slide 13-25

Hashed Files - Overflow handling Slide 13-26

Dynamic And Extendible Hashed Files Dynamic and extendible hashing techniques adapt hashing to allow the number of file records to grow and shrink dynamically. These techniques include dynamic hashing, extendible hashing, and linear hashing. Both dynamic and extendible hashing use the binary representation of the hash value h(K) to access a directory. In dynamic hashing the directory is a binary tree. In extendible hashing the directory is an array of size 2^d, where d is called the global depth. Slide 13-27
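A minimal Java sketch of the extendible-hashing directory lookup; representing the directory as an int array of bucket numbers, hashing to a fixed number of bits, and indexing by the d high-order bits of h(K) (as in the chapter's figures) are assumptions made for the sketch.

    public class ExtendibleDirectory {
        int globalDepth;          // d: the directory has 2^d entries
        int[] directory;          // entry -> bucket number (a disk block address in a real system)
        final int hashBits;       // assumed total number of hash bits, e.g. 4 in the example below

        ExtendibleDirectory(int d, int hashBits) {
            this.globalDepth = d;
            this.hashBits = hashBits;
            this.directory = new int[1 << d];
        }

        static int h(int key) { return key; }       // trivial h(K) for the sketch

        // Directory index = the d high-order bits of the hash value.
        int directoryIndex(int key) {
            int hv = h(key) & ((1 << hashBits) - 1);
            return hv >>> (hashBits - globalDepth);
        }

        int bucketFor(int key) { return directory[directoryIndex(key)]; }

        // When a split needs one more bit than the global depth allows, the directory doubles:
        // each old entry i is copied into the two new entries 2i and 2i+1.
        void doubleDirectory() {
            int[] bigger = new int[directory.length * 2];
            for (int i = 0; i < bigger.length; i++) bigger[i] = directory[i / 2];
            directory = bigger;
            globalDepth++;
        }
    }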

Dynamic And Extendible Hashing (contd.) The directories can be stored on disk, and they expand or shrink dynamically. Directory entries point to the disk blocks that contain the stored records. An insertion into a disk block that is full causes the block to split into two blocks, and its records are redistributed between them. The directory is updated appropriately. Dynamic and extendible hashing do not require an overflow area. Linear hashing does require an overflow area but does not use a directory; blocks are split in linear order as the file expands, one bucket at a time. Slide 13-28

Extendible Hashing Slide 13-29

Dynamic Hashing Slide 13-30

Extendible & Dynamic Hashing Example Organize the following file using the previous schemes. Each record consists of a key and data; the binary representation of the key (using 4 bits) is provided for your convenience.
Key  Binary  Sample data
01   0001    data1
07   0111    data2
03   0011    data3
08   1000    data4
12   1100    data5
14   1110    data6
11   1011    data7
02   0010    data8
10   1010    data9
13   1101    data10
04   0100    data11
09   1001    data12
Slide 13-31

Dynamic Hashing: step-by-step insertion of the keys listed above, with the binary-tree directory built up in the original figures (not reproduced in this transcription). Slides 13-32 to 13-34.

Extendible Hashing: step-by-step insertion of the same keys, with the directory array and local/global depths shown in the original figures (not reproduced in this transcription). Slides 13-35 to 13-37.

Linear Hashing How does it work? Consider a hash table consisting of N buckets with addresses 0, 1, ..., N-1. Linear hashing increases the address space gradually by splitting the buckets in a predetermined order: first bucket 0, then bucket 1, and so on, up to and including bucket N-1. The current split position is kept in the split variable s. Splitting a bucket moves approximately half of the records from bucket s to a new bucket appended at the end of the table. Suppose records are placed using the hash function h1(K) = K mod N. The bucket for a key K is then determined as follows: if h1(K) < s, use h2(K); otherwise use h1(K). Slide 13-38

Linear Hashing How does it work? Dealing with overflow. Whenever an overflow occurs, follow these steps:
1. The new data item is attached to the overflowing bucket using chaining.
2. A new bucket is added to the end of the bucket area (at position N, then N+1, ...).
3. The current splitting bucket (bucket s) is rehashed using the function h2(K) = K mod 2N.
4. The split variable s is incremented to the next position.
5. If s becomes N, it is reset to zero and h1(K) is replaced by h2(K); at this point the bucket space has doubled.
Slide 13-39
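A minimal Java sketch of the address computation and split step described above; keeping buckets as in-memory lists, using a fixed bucket capacity of 2, and folding overflow records into the same list (rather than a separate chained overflow area) are simplifying assumptions for the sketch.

    import java.util.ArrayList;
    import java.util.List;

    public class LinearHashing {
        static final int BUCKET_SIZE = 2;            // assumed bucket capacity, as in the example
        int n;                                       // number of buckets N at the start of the current round
        int s = 0;                                   // split pointer
        List<List<Integer>> buckets = new ArrayList<>();

        LinearHashing(int n) {
            this.n = n;
            for (int i = 0; i < n; i++) buckets.add(new ArrayList<>());
        }

        int h1(int k) { return k % n; }
        int h2(int k) { return k % (2 * n); }

        // Bucket selection rule from the slides: if h1(K) < s, that bucket has already split, so use h2(K).
        int addressOf(int k) {
            int a = h1(k);
            return (a < s) ? h2(k) : a;
        }

        void insert(int k) {
            List<Integer> b = buckets.get(addressOf(k));
            b.add(k);                                // overflow records simply stay in the same list here
            if (b.size() > BUCKET_SIZE) split();     // an overflow triggers a split of bucket s (not necessarily b)
        }

        // Split bucket s: append a new bucket, rehash bucket s with h2, advance s.
        void split() {
            buckets.add(new ArrayList<>());
            List<Integer> old = new ArrayList<>(buckets.get(s));
            buckets.get(s).clear();
            for (int k : old) buckets.get(h2(k)).add(k);
            s++;
            if (s == n) {                            // all original buckets split: the address space has doubled
                s = 0;
                n = 2 * n;                           // the old h2 becomes the new h1
            }
        }
    }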

Linear Hashing Example Consider the key set given below. The hashing parameter is N = 3 (three initial buckets); therefore use the functions h1(K) = K mod 3 and h2(K) = K mod 6. The bucket size is 2. Keys = { 1, 7, 3, 8, 12, 4, 11, 2, 10, 13, 14, 9 } Slide 13-40

Linear Hashing Example (continuation): step-by-step insertion of the keys { 1, 7, 3, 8, 12, 4, 11, 2, 10, 13, 14, 9 }, with the overflow chains and bucket splits shown in the original figures (not reproduced in this transcription). Slides 13-41 to 13-44.

Linear Hashing Example (continuation) Keys = { 1, 7, 3, 8, 12, 4, 11, 2, 10, 13, 14, 9 }. After the address space has doubled, use h1(K) = K mod 6 to locate the proper bucket. Slide 13-45

Linear Hashing Example (continuation) Keys = { 1, 7, 3, 8, 12, 4, 11, 2, 10, 13, 14, 9 }. Note: the final search function for locating the bucket of key K is computed as: h1(K) = K mod 6; if (h1(K) < s) { h1(K) = K mod 12; } Slide 13-46

Extendible, Dynamic & Linear Hashing Your turn: try to organize the following file using the previous schemes. Each record consists of a key and data; the binary representation of the key is provided for your convenience.
20  00010100  data1
40  00101000  data2
10  00001010  data3
30  00011110  data4
15  00001111  data5
35  00100011  data6
07  00000111  data7
26  00011010  data8
18  00010010  data9
22  00010110  data10
05  00000101  data11
42  00101010  data12
13  00001101  data13
46  00101110  data14
27  00011011  data15
08  00001000  data16
32  00100000  data17
38  00100110  data18
24  00011000  data19
45  00101101  data20
25  00011001  data21
Slide 13-47

Parallelizing Disk Access using RAID Technology. Secondary storage technology must take steps to keep up in performance and reliability with processor technology. A major advance in secondary storage technology is represented by the development of RAID, which originally stood for Redundant Arrays of Inexpensive Disks. The main goal of RAID is to even out the widely different rates of performance improvement of disks against those in memory and microprocessors. Slide 13-48

RAID Technology (contd.) A natural solution is a large array of small independent disks acting as a single higher-performance logical disk. A concept called data striping is used, which utilizes parallelism to improve disk performance. Data striping distributes data transparently over multiple disks to make them appear as a single large, fast disk. Slide 13-49
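A minimal Java sketch of block-level striping; the round-robin mapping of logical blocks to disks and the array size of four disks are assumptions used for illustration.

    public class Striping {
        // Block-level (coarse-grained) striping over nDisks disks:
        // logical block i goes to disk (i mod nDisks), at stripe row (i / nDisks).
        static int diskOf(long logicalBlock, int nDisks) { return (int) (logicalBlock % nDisks); }
        static long blockOnDisk(long logicalBlock, int nDisks) { return logicalBlock / nDisks; }

        public static void main(String[] args) {
            int nDisks = 4;                    // assumed array size
            for (long b = 0; b < 8; b++)
                System.out.printf("logical block %d -> disk %d, offset %d%n",
                                  b, diskOf(b, nDisks), blockOnDisk(b, nDisks));
        }
    }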

RAID Technology (contd.) For a tutorial on RAID technology see http://www.acnc.com/04_01_00.html Slide 13-50

RAID Technology (contd.) Different RAID organizations were defined based on different combinations of two factors: the granularity of data interleaving (striping) and the pattern used to compute redundant information.
RAID level 0 has no redundant data and hence the best write performance, at the risk of data loss.
RAID level 1 uses mirrored disks.
RAID level 2 uses memory-style redundancy based on Hamming codes, which contain parity bits for distinct overlapping subsets of components; level 2 includes both error detection and correction.
RAID level 3 uses a single parity disk, relying on the disk controller to figure out which disk has failed.
RAID levels 4 and 5 use block-level data striping, with level 5 distributing data and parity information across all disks.
RAID level 6 applies the so-called P + Q redundancy scheme using Reed-Solomon codes to protect against up to two disk failures with just two redundant disks.
Slide 13-51
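A minimal Java sketch of the single-parity idea behind RAID levels 3 to 5; the in-memory byte arrays stand in for disk blocks, and RAID 6's P + Q scheme with Reed-Solomon codes is more involved and not shown.

    public class RaidParity {
        // Parity block = XOR of the corresponding data blocks on the other disks.
        static byte[] parity(byte[][] dataBlocks) {
            byte[] p = new byte[dataBlocks[0].length];
            for (byte[] block : dataBlocks)
                for (int i = 0; i < p.length; i++) p[i] ^= block[i];
            return p;
        }

        // If one data block is lost, it is the XOR of the parity block and the surviving blocks.
        static byte[] recover(byte[] parityBlock, byte[][] survivingBlocks) {
            byte[] lost = parityBlock.clone();
            for (byte[] block : survivingBlocks)
                for (int i = 0; i < lost.length; i++) lost[i] ^= block[i];
            return lost;
        }
    }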

Use of RAID Technology (contd.) Different RAID organizations are used in different situations:
RAID level 1 (mirrored disks) is the easiest to rebuild after a disk failure from the other disks; it is used for critical applications such as logs.
RAID level 2 uses memory-style redundancy based on Hamming codes, which contain parity bits for distinct overlapping subsets of components; level 2 includes both error detection and correction.
RAID level 3 (a single parity disk, relying on the disk controller to figure out which disk has failed) and level 5 (block-level data striping) are preferred for large-volume storage, with level 3 giving higher transfer rates.
The most popular uses of RAID technology currently are level 0 (with striping), level 1 (with mirroring), and level 5 with an extra drive for parity.
Design decisions for RAID include the RAID level, the number of disks, the choice of parity schemes, and the grouping of disks for block-level striping.
Slide 13-52

Use of RAID Technology (contd.) Slide 13-53

Trends in Disk Technology Slide 13-54

Storage Area Networks The demand for storage has risen considerably in recent times. Organizations need to move from a static, fixed, data-center-oriented operation to a more flexible and dynamic infrastructure for information processing; thus they are moving to the concept of Storage Area Networks (SANs). In a SAN, online storage peripherals are configured as nodes on a high-speed network and can be attached to and detached from servers in a very flexible manner. This allows storage systems to be placed at longer distances from the servers and provides different performance and connectivity options. Slide 13-55

Storage Area Networks (contd.) Advantages of SANs are: flexible many-to-many connectivity among servers and storage devices using Fibre Channel hubs and switches; up to 10 km of separation between a server and a storage system using appropriate fiber optic cables; and better isolation capabilities, allowing non-disruptive addition of new peripherals and servers. SANs face the problem of combining storage options from multiple vendors and dealing with evolving standards of storage management software and hardware. Slide 13-56

Cloud Storage Cloud storage offers an Internet-based data model in which an external hosting company is responsible for keeping the data. Cloud data is typically accessed through a web-based API. Advantages: virtually unlimited capacity, low operating cost, reliable service, transparent technology, worry-free business model. Concerns: security, vulnerability, intrusion, hacking, espionage. Slide 13-57

Cloud Storage Free service for personal use Slide 13-58

Cloud Storage http://aws.amazon.com/ec2/ Paid service for enterprise computing Slide 13-59

Cloud Storage Reference http://windows.microsoft.com/en-us/skydrive/compare Slide 13-60

Summary
Disk Storage Devices
Files of Records
Operations on Files
Unordered Files
Ordered Files
Hashed Files
Dynamic and Extendible Hashing Techniques
RAID Technology
Slide 13-61

Additional Reading
P. Larson, "Dynamic Hash Tables," Communications of the ACM, Vol. 31, No. 4 (April 1988). ISSN 0001-0782.
W. Litwin, "Linear Hashing: A New Tool for File and Table Addressing," Proc. 6th Conference on Very Large Data Bases (VLDB), 1980, pp. 212-223.
Slide 13-62

Appendix. Java Random Files Random access files permit non-sequential, or random, access to a file's contents. To access a file randomly, you open the file, seek a particular location, and read from or write to that file. Some code fragments:

    File file = new File("c:\\temp\\myrandomdata.data");
    RandomAccessFile randomFile = new RandomAccessFile(file, "rw");
    int position = i * REC_SIZE;
    randomFile.seek(position);
    randomFile.writeBytes(stringData);
    randomFile.read(byteBuffer);

Reference: http://docs.oracle.com/javase/tutorial/essential/io/rafs.html Slide 13-63

Appendix. Example Java Random Files (1 of 3)

    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class Drive {
        // **********************************************************************
        // Goal: Create a random access file, populate it, retrieve data.
        // Records are simple strings of length REC_SIZE.
        // **********************************************************************
        final static int REC_SIZE = 20;
        static int[] keySet = { 1, 7, 3, 8, 12, 4, 11, 2, 10, 13, 14, 9 };
        final static int INITIAL_FILE_SIZE = keySet.length;

        public static void main(String[] args) {
            StringBuffer strBuffer = new StringBuffer(REC_SIZE);
            byte[] byteBuffer = new byte[REC_SIZE];
            try {
                // define a handle to reach the random file
                File file = new File("c:\\temp\\myrandomdata.data");
                RandomAccessFile randomFile = new RandomAccessFile(file, "rw");

                // preallocate dummy entries in the file
                for (int i = 0; i < INITIAL_FILE_SIZE + 10; i++) {
                    strBuffer = new StringBuffer();
                    strBuffer.setLength(REC_SIZE);
                    randomFile.writeBytes(strBuffer.toString().replace("\0", "-"));
                }
Slide 13-64

Appendix. Example Java Random Files (2 of 3)

                // Allocate (re-write) a number of test entries in the file;
                // each entry consists of Key (5 chars) + data (rest of the record)
                for (int i = 0; i < INITIAL_FILE_SIZE; i++) {
                    // get the corresponding starting byte position
                    int position = i * REC_SIZE;
                    randomFile.seek(position);

                    // prepare the text data to be stored in each record
                    String key = String.format("%05d", keySet[i]);
                    String data = String.format("%15s", "[Data-row-" + i);
                    strBuffer = new StringBuffer(key + data);
                    strBuffer.setCharAt(REC_SIZE - 1, ']');
                    strBuffer.setLength(REC_SIZE);

                    // write record on disk
                    randomFile.writeBytes(strBuffer.toString().replace("\0", " "));
                    System.out.printf("\nAllocating>>%2d:%2d:%s", i, strBuffer.length(), strBuffer);
                    // create hashing directory here if desired:
                    // provide record key & record location
                }
                System.out.println("\nDone Allocating\n");
                randomFile.close();
Slide 13-65

Appendix. Example Java Random Files (3 of 3)

                // ///////////////////////////////////////////////////
                // Re-open the random access file and read some of its data
                // ///////////////////////////////////////////////////
                randomFile = new RandomAccessFile(file, "rw");
                for (int i = 0; i < INITIAL_FILE_SIZE + 10; i += 3) {
                    int position = i * REC_SIZE;
                    randomFile.seek(position);
                    randomFile.read(byteBuffer);
                    String strRecord = new String(byteBuffer);
                    System.out.printf("\nReading %2d:%2d:%s", i, strRecord.length(), strRecord);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
Slide 13-66

Appendix. Example Java Random Files: Console Output

    Allocating>> 0:20:00001    [Data-row-]
    Allocating>> 1:20:00007    [Data-row-]
    Allocating>> 2:20:00003    [Data-row-]
    Allocating>> 3:20:00008    [Data-row-]
    Allocating>> 4:20:00012    [Data-row-]
    Allocating>> 5:20:00004    [Data-row-]
    Allocating>> 6:20:00011    [Data-row-]
    Allocating>> 7:20:00002    [Data-row-]
    Allocating>> 8:20:00010    [Data-row-]
    Allocating>> 9:20:00013    [Data-row-]
    Allocating>>10:20:00014   [Data-row-1]
    Allocating>>11:20:00009   [Data-row-1]
    Done Allocating

    Reading  0:20:00001    [Data-row-]
    Reading  3:20:00008    [Data-row-]
    Reading  6:20:00011    [Data-row-]
    Reading  9:20:00013    [Data-row-]
    Reading 12:20:--------------------
    Reading 15:20:--------------------
    Reading 18:20:--------------------
    Reading 21:20:--------------------
Slide 13-67