Eindhoven University of Technology MASTER. Evolution of oblivious RAM schemes. Teeuwen, P.J.P. Award date: 2015


Disclaimer
This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. Users may download and print one copy of any publication from the public portal for the purpose of private study or research. You may not further distribute the material or use it for any profit-making activity or commercial gain.

Take down policy
If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Download date: 20 Jul. 2018

Eindhoven University of Technology
Department of Mathematics and Computer Science

Evolution of Oblivious RAM Schemes

Paul Teeuwen

Master's Thesis in Computer Science & Engineering
Information Security Technology

Supervisor: dr.ir. L.A.M. (Berry) Schoenmakers

June 2015

Abstract

Oblivious Random-Access Memories (ORAMs) are cryptographic schemes that can be used to completely hide the data access pattern of IO operations. This can be useful, for example, when sensitive data is stored in the cloud. This thesis studies and compares several ORAM schemes from the literature. The core ideas and algorithms of these schemes are discussed extensively. Together, the discussed schemes give a complete chronological picture of the evolution from the first ORAM scheme up to the current state of the art. This shows how the different schemes achieve obliviousness and how their concepts influenced each other. The ORAM schemes are analyzed asymptotically with respect to several important performance metrics. The early ORAM schemes have an unacceptable performance overhead or are so complicated that they are hard to comprehend and implement. More recent ORAM schemes are much more practical: they are simpler to understand and achieve an acceptable performance overhead for certain applications. The approaches that state-of-the-art ORAM schemes use to achieve better performance are quite diverse. Some even use advanced techniques such as additive homomorphic encryption. Such schemes tend not to be practical, but their core ideas are interesting, and future research might use these techniques to find even better ORAM schemes.

Acknowledgments

I would like to greatly thank my supervisor, Berry Schoenmakers, for introducing me to the wonderful topic of Oblivious RAMs. I am grateful for his excellent supervision, his helpful feedback, and his many useful suggestions. Without his help, I would not have been able to finish this thesis. Additionally, I would like to thank the two other members of the graduation committee, Boris Škorić and Benne de Weger. Finally, I would like to thank all my professors at Eindhoven University of Technology for everything they taught me about computer science and mathematics.

Contents

1 Introduction
2 Oblivious RAM
    Definition
    Address spaces
    Encryption by the client
    Obliviousness
    Cryptographic protocols
    Algorithms and pseudocode
    Blocks
    Performance and complexity
    Usage settings
    Lower bounds
3 Building blocks
    Oblivious scan
    Trivial ORAM
    Oblivious sort
4 Square root ORAM
    Concept
        Shelter
        Obliviousness
    Construction
    Analysis
5 Hierarchical ORAM
    Concept
        Levels and epochs
        Hash tables
        Structure
        Oblivious merge and reshuffle
    Construction
    Analysis
    Improvements
        Cuckoo hashing
        Flaws and further improvements
6 Partition ORAM
    Concept
        Partitioning
        Client storage
        Eviction
        Partitions
        Hierarchical partition scheme
    Construction
        Recursive construction
    Analysis
    Improvements
7 Tree ORAM
    Concept
        Tree structure
        Leaf assignment
        Bucket ORAM
        Eviction
    Construction
        Recursive construction
    Analysis
    Improvements
        Improved tree ORAM
        Further improvements
8 Path ORAM
    Concept
        Buckets
        Stash
        Paths
    Construction
        Recursive construction
        Security parameter
    Analysis
    Secure processor
9 Ring ORAM
    Concept
        Tree structure
        Buckets
        Eviction and reshuffling
    Construction
    Improvements
        Tree top caching
        XOR technique
        Augmented leaf map
        De-amortized eviction
        Recursive construction
    Analysis
10 Onion ORAM
    Concept
        Additive homomorphic encryption
        Homomorphic select and encryption layers
        Ciphertext expansion
        Chunks and metadata
        Tree structure
        Eviction
        De-amortization
    Construction
        Recursive construction
    Analysis
    Other homomorphic ORAMs
11 Other ORAM literature
12 Comparison
13 Conclusion

Chapter 1

Introduction

An oblivious RAM (ORAM) is a cryptographic scheme that can be used to store data at an untrusted third party. The general idea is to use the storage of the third party without revealing anything about the data being stored. The interaction between the entity executing the ORAM scheme and the untrusted third party can be seen as a client-server interaction. The untrusted third party acts as a server storing the data. The client sends read and write operation requests to the server, and the server executes the requested operations. Hence, the server knows exactly the content of the data being stored. Since the server is not trusted with the content of the data, the client cannot simply store its data on the server in the clear. A straightforward solution to this problem is to encrypt the data before it is sent to the server. However, this is not sufficient to meet the confidentiality requirement. Observe that, even though the server cannot understand the content of the data because of the encryption, the server knows the exact data access sequence. This by itself can leak a lot of information, as demonstrated in [IKK12]. ORAMs prevent a server that analyzes the data access pattern from learning anything about the data.

In the literature, ORAMs were originally studied as a way to prevent piracy and reverse engineering of software. The idea was to create a secure processor with a tiny memory that is just large enough to execute the ORAM scheme on the computer's untrusted main memory. In this situation the secure processor is the client and the main memory is the server. This would effectively make it impossible to reverse engineer the software run on the computer, as long as the attacker is unable to look into the trusted secure processor and its tiny memory. A more recent application is the already mentioned scenario of untrusted third-party cloud storage providers.
For instance, a company can use an ORAM scheme to store and access its confidential information on Dropbox or Google Drive. In this thesis several ORAM schemes are studied and compared. The schemes discussed include the first ORAM scheme, the current state of the art, and everything in between. The core ideas, concepts, and algorithmic aspects of the ORAM schemes are discussed extensively. Furthermore, how exactly the different ORAM schemes relate to each other is discussed as well. This is done in two ways: first, by asymptotically analyzing the different ORAM schemes with respect to a number of important performance metrics; second, by discussing how the core ideas of one ORAM scheme influenced the design of other ORAM schemes. This gives a complete chronological picture of the evolution of the ORAM schemes and the corresponding research. It shows how the ORAM schemes evolved from completely impractical, complicated, badly performing theoretical schemes to comprehensible schemes that can be implemented and have an acceptable performance overhead for certain applications. Finally, some state-of-the-art ORAM schemes such as the onion ORAM are discussed. Although the onion ORAM is not a practical ORAM scheme, the concepts it uses are interesting for further research.

Chapter 2

Oblivious RAM

The goal of an ORAM is to store and retrieve data to and from the server in such a way that the server cannot infer anything about the actual data access pattern. In other words, the ORAM hides the real access pattern from the untrusted server. The ORAM translates virtual data access patterns into physical data access patterns that are computationally indistinguishable from one another for anyone but the client. An ORAM usually achieves this by storing data at the server in the form of a particular data structure and by adding additional data access requests. These additional requests are superfluous from the point of view of the client, but they serve a purpose: the server is unable to distinguish the superfluous requests from the ones that are actually relevant to the client. This makes sure that the server cannot learn anything about the data access pattern of the client.

2.1 Definition

More formally, an ORAM is a cryptographic scheme that consists of a number of cryptographic protocols. These protocols allow the user of the ORAM to read and write its data to the server in such a way that the user does not have to worry about obliviousness. A cryptographic protocol is a distributed algorithm executed by multiple parties. The protocol precisely describes how the parties interact and which algorithms they execute. In the case of an ORAM scheme, the cryptographic protocols usually have just two parties: the client and the server. The client uses the server for its data storage facilities; in fact, the server's only task is to store and retrieve the data requested by the client. The server and client interact through a network protocol that allows the client to request either a read or a write operation on a fixed-size data block. The server responds by reading the block and returning its content, or by writing (updating) the block, respectively.

The server does not perform any further processing on the data it stores and retrieves. Note that the network between the client and the server might be untrusted. In such a scenario it makes sense to deploy the network protocol on top of a secure authenticated

channel, provided by, for instance, the ubiquitous TLS protocol. Although it is important to take such precautions, this topic is outside the scope of the ORAM discussion, and therefore outside the scope of this thesis. The cryptographic protocols of the ORAM scheme use this network protocol for the required interaction between the client and the server. The network protocol and the actions performed by the server in essence do not contribute to the obliviousness that is achieved by the cryptographic protocols. The algorithmic aspects of the cryptographic protocols that achieve the required obliviousness are entirely executed on the client. Therefore, the most interesting aspects of the cryptographic protocols are the algorithms executed on the client. In fact, the interaction aspects of the cryptographic protocols can be abstracted away in a straightforward way: the client considers the server's storage facilities as an external memory. Every read or write operation performed on this external memory results in an interaction with the server through the mentioned network protocol. This leaves just the algorithmic aspects of the cryptographic protocol. This is why, in this thesis, the terms ORAM scheme and ORAM algorithm are used interchangeably. In some situations the term ORAM algorithm is preferred over ORAM scheme, to emphasize the algorithmic aspects of the ORAM scheme. Furthermore, this makes it possible to describe the cryptographic protocols of the ORAM scheme in the same way as an algorithm would be described, for instance through pseudocode. The only thing that has to be specified in addition is which data is stored on the server and which data is stored on the client. Note that more advanced ORAM schemes can deviate a little from this model.
For instance, more advanced ORAM algorithms try to combine as many read and write operations as possible into a single request to the server. This results in a single round trip for multiple read and write operations, which reduces the number of round trips and therefore improves performance. Furthermore, there are a couple of ORAM schemes that use homomorphic encryption on the server to combine multiple blocks into a single block. This is done to reduce the amount of data communicated between the client and the server. Nonetheless, this does not essentially change the basic idea that the server can be considered as an external memory used by ORAM algorithms executed on the client, as argued above.

2.2 Address spaces

When discussing ORAM schemes, two distinct address spaces are considered. Both are address spaces for fixed-size data blocks. It is important not to confuse these address spaces and their different kinds of blocks. The two address spaces and their corresponding data blocks are:

Physical address space. This is the address space that the server exposes to the client. It is the address space of the data blocks of the server. The client requests operations on this address space through the simple network protocol. The data blocks of this address space are called the physical blocks. The reason for this is

that the untrusted server physically stores those data blocks. Hence, the server can observe the content and the access pattern of the blocks of this address space. The ORAM scheme must make sure that, given this fact, the obliviousness requirement is met. It is not uncommon that, for efficiency purposes, the client arranges the physical blocks in a certain data structure.

Virtual address space. This is the address space that the ORAM scheme exposes to its user. It is the address space that the user of the ORAM uses to indicate which data block it wishes to read or write. The blocks of this address space are called virtual blocks. The reason for this is that the untrusted server cannot observe the virtual blocks or the virtual access pattern. This is necessary since the user of the ORAM is not expected to have an oblivious data access pattern. In contrast to the physical address space, the blocks of the virtual address space are not stored directly. The ORAM translates IO (input/output) operations requested on the virtual address space into a sequence of IO operations on the physical address space. This has to be done in such a way that the obliviousness requirement is met.

In summary, the core task of an ORAM algorithm is to translate the non-oblivious virtual access pattern of the client into an oblivious physical data access pattern at the untrusted server.

2.3 Encryption by the client

Having an oblivious data access pattern on the server is not helpful if the data stored on the server is not encrypted. If the data is not encrypted, it is easy for the server to observe how exactly data is being moved around. Therefore, for all ORAM schemes, the data of the physical blocks stored on the server is encrypted on the client before the blocks are written to the server, unless specified otherwise.
Furthermore, almost all ORAM schemes require that everything stored in the physical address space is encrypted using a probabilistic (semantically secure) encryption scheme. This is because certain building blocks for ORAM schemes require that re-encrypting a block stored on the server results in a different ciphertext. The idea behind this is that, if a block is re-encrypted after every read operation, it is indistinguishable for the server whether the block was read and modified or just read. Since storing the data in encrypted form is so ubiquitous for ORAM schemes, the encryption and decryption steps are usually implied rather than mentioned explicitly when data is obtained from or modified in the physical address space. So by default, everything is encrypted by the client using a semantically secure cipher, unless mentioned otherwise.
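To illustrate the role of probabilistic encryption, the following sketch implements a toy cipher in which every encryption draws a fresh random nonce, so re-encrypting the same block yields a different ciphertext. This is a minimal illustration under assumed names (encrypt, decrypt, _keystream); it is not secure for real use, where an authenticated scheme such as AES-GCM would be chosen instead.

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: hash key || nonce || counter until enough bytes exist.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce per call makes the scheme probabilistic:
    # re-encrypting the same block yields a different ciphertext.
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = _keystream(key, nonce, len(body))
    return bytes(c ^ s for c, s in zip(body, stream))

key = os.urandom(32)
block = b"contents of a physical block"
c1 = encrypt(key, block)   # first encryption
c2 = encrypt(key, block)   # re-encryption of the same block
assert c1 != c2            # the two ciphertexts look unrelated to the server
assert decrypt(key, c1) == decrypt(key, c2) == block
```

Because the server only ever sees fresh-looking ciphertexts, it cannot tell a plain read-and-rewrite apart from a read-modify-rewrite, which is exactly the property the ORAM building blocks rely on.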

2.4 Obliviousness

Oblivious computations were first studied in [PF79], in the context of an oblivious Turing machine. Up to this point in this thesis, no precise definition of obliviousness has been given. A formal definition of the obliviousness requirement for an ORAM scheme is given here. A sequence of IO operations is defined to be of the form

((op_1, arg_1), (op_2, arg_2), ..., (op_n, arg_n)),

where op_i is either read or write, and arg_i is either an address or a pair of an address and a value, respectively. If x is a sequence of virtual IO operations, let A(x) denote the corresponding sequence of physical IO operations produced by the ORAM scheme. An ORAM scheme is oblivious if and only if for all virtual sequences of IO operations y and z with |y| = |z| = n, the corresponding sequences of physical IO operations A(y) and A(z) are computationally indistinguishable for anyone but the client. Computationally indistinguishable means that there is no probabilistic polynomial-time algorithm that can distinguish between A(y) and A(z). The intuition behind this is that the physical access pattern does not reveal anything about the virtual access pattern.

Note that this definition does not completely cover our intuition about not leaking anything about the virtual access pattern. For instance, the following things are not taken into consideration by this definition:

- Virtual access patterns with different lengths.
- The timing of the IO operations in the virtual access pattern.

These two items are potential side channels that can still leak information about the virtual access patterns. Generally speaking, in the literature these two issues are not considered problems within the scope of ORAM research. Therefore, ORAM schemes do not have to take them into consideration. The problem of leaking information through timing, and a countermeasure, are discussed in [FRY+14].
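To make the definition concrete, the following sketch (with hypothetical names) implements a toy linear-scan ORAM in the spirit of the trivial ORAM discussed later: every virtual access reads and rewrites all N physical blocks in a fixed order, so the physical trace A(x) depends only on the length of x. Encryption is omitted so the sketch can focus purely on the access pattern.

```python
class TrivialORAM:
    """Toy 'oblivious scan' ORAM: every virtual access reads and rewrites
    all N physical blocks, so the physical access pattern depends only on
    the number of operations, never on their addresses or values."""

    def __init__(self, n_blocks: int):
        self.server = [None] * n_blocks   # the server's physical blocks
        self.trace = []                   # physical ops observable by the server

    def access(self, op: str, addr: int, value=None):
        result = None
        for phys in range(len(self.server)):   # full scan: same for every request
            self.trace.append(("read", phys))
            block = self.server[phys]
            if phys == addr:
                result = block
                if op == "write":
                    block = value
            self.trace.append(("write", phys))  # always rewrite (re-encrypt) the block
            self.server[phys] = block
        return result

def physical_trace(virtual_ops):
    # Run a virtual IO sequence and return the physical trace A(x).
    oram = TrivialORAM(n_blocks=8)
    for op, addr, value in virtual_ops:
        oram.access(op, addr, value)
    return oram.trace

# Two very different virtual sequences of equal length...
t1 = physical_trace([("write", 3, "a"), ("read", 3, None)])
t2 = physical_trace([("write", 7, "b"), ("read", 0, None)])
assert t1 == t2   # ...produce identical physical access patterns
```

The O(N) scan per access is of course exactly the overhead that the schemes in later chapters work to reduce.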
2.5 Cryptographic protocols

An ORAM scheme consists of a number of cryptographic protocols. These protocols are used to implement the user's virtual IO operations on the physical blocks at the server in an oblivious way. The following cryptographic protocols are defined for an ORAM scheme:

Setup protocol. It is not uncommon for an ORAM scheme that a couple of things need to be initialized. For instance, an ORAM scheme could arrange the physical blocks on the server as a specific data structure, for efficiency reasons. Such a data structure might need to be initialized. Another example is the generation of cryptographic keys.

Read virtual block protocol. This is the protocol that is invoked when the user of the ORAM performs a read operation on a virtual block. The ORAM scheme

translates the virtual read into a sequence of physical IO operations on the physical address space.

Write virtual block protocol. This is the protocol that is invoked when the user of the ORAM performs a write operation on a virtual block. The ORAM scheme translates the virtual write into a sequence of physical IO operations on the physical address space.

Note that the obliviousness requirement of the ORAM scheme dictates that a server must not be able to distinguish virtual read operations from virtual write operations.

2.6 Algorithms and pseudocode

Since the server should not be able to distinguish virtual read and write operations, these cryptographic protocols are usually implemented in a similar way. In most ORAM schemes both cryptographic protocols are described as a single algorithm in the form of pseudocode. Hence, a description of an ORAM scheme usually consists of pseudocode for the following algorithms:

The combined read and write algorithm. This is denoted as the Access function in pseudocode. It takes three arguments: the operation (read or write); the virtual address of the corresponding block; and the value to be written, which is only relevant in case of a write operation.

The setup algorithm. This is denoted as the Setup function in pseudocode. The purpose of this function is to initialize everything that is needed for the particular ORAM scheme. In many cases the setup pseudocode is either trivial or straightforward and therefore not interesting. Hence, in many ORAM schemes the description of the setup algorithm (cryptographic protocol) is omitted.

2.7 Blocks

The number of blocks of the virtual address space is denoted by N. The size in bits of a single block is denoted by B. The total virtual size of the ORAM equals N·B bits. In principle, the virtual and physical block sizes are the same: both are exactly B bits.
In practice, a constant fraction of the bits of a physical block may be used for purposes other than storing the data of the user of the ORAM. This additional data about the block itself is called the metadata of the block. What kind of metadata is stored within a block, and how large this metadata is, differs per ORAM scheme. For instance, initialization vectors and MAC tags for the encryption scheme can be part of this metadata. Another example is the encrypted tags used by oblivious sorting algorithms (described later). For most ORAM schemes, physical blocks that correspond

with a virtual block will also have the corresponding virtual address stored, in encrypted form, as metadata of the block. Most ORAM literature does not distinguish between the physical and the effective virtual block size; it uses only one constant: B. This is most likely because, in practice, the size of the metadata is only a small constant fraction of the physical block size. This constant fraction disappears in the asymptotic analysis of the storage complexity. Since most literature uses only a single constant, this thesis also does not differentiate between the physical and the effective virtual block size. Nonetheless, it is important to note that physical blocks can store metadata.

2.8 Performance and complexity

One of the most important aspects of an ORAM scheme is how it performs. To analyze the performance of an ORAM scheme, a couple of performance metrics need to be considered. Asymptotic analysis of complexities is used for this, although the asymptotic complexity does not always give the complete picture. Sometimes the size of the constants hidden by the asymptotic analysis plays an important role in the performance of the ORAM schemes. The following performance metrics are considered when analyzing ORAM schemes:

Computational complexity. The analysis of the number of computations performed by an algorithm is probably one of the most used metrics in computer science. In the case of ORAM schemes, a distinction is made between client computational complexity and server computational complexity. The client computational complexity is often not that relevant for ORAM schemes, because usually the communication between the client and the server has a much greater impact on the performance than the computations done by the client. Therefore, other metrics such as communication and round complexity are usually considered more important for ORAM schemes.
In most ORAM schemes the server only fetches the blocks requested by the client, so the server computational complexity is usually not that interesting either. This might be different for some more advanced ORAM schemes that let the server perform more complicated computations.

Communication complexity. A virtual IO operation usually results in communication between the client and the server. Most ORAM schemes simply cannot fetch or update just a single block on the server, since this would not be oblivious. The total number of blocks communicated between client and server for virtual IO operations is one of the most important performance metrics of an ORAM scheme. Sometimes a distinction is made between online communication complexity and offline communication complexity. Online communication is the communication that needs to be done before the virtual IO operation can be considered completed from the point of view of the user of the ORAM. That means the

communication needed for returning the value of the block requested by the read operation, or for updating the block requested by the write operation. Offline communication is the further communication needed to complete the virtual IO operation, but for which the user of the ORAM does not have to wait anymore. For example: updating or cleaning up the data structures used by the ORAM scheme.

Round complexity. The total amount of communicated data does not give a complete picture of the performance impact caused by the communication. Another important aspect is the number of communication rounds needed for a virtual IO operation. Every round trip results in latency. This is especially important in the scenario of client and server communicating over networks with significant latency, such as the internet. Therefore, round complexity is also one of the more important performance metrics for ORAM schemes. For round complexity it is also possible to make a distinction between online round complexity and offline round complexity, analogous to the distinction made between online and offline communication complexity.

IO complexity. When ORAM schemes are used for scenarios such as cloud storage, the amount of data stored in the ORAM can be quite large. The physical size of the ORAM can be too large to fit into the internal memory of the server. In such a scenario, when the client requests blocks from the server for a virtual IO operation, the server might not have cached the requested blocks in its internal memory. The server then has to obtain the blocks from its external memory. Since this is usually much slower than accessing the internal memory, it can have a serious impact on the performance of the ORAM scheme.
Therefore, the IO complexity, i.e., the amount of data the client causes the server to access for a virtual IO operation, is another relevant performance metric for ORAM schemes. In most ORAM schemes all the data that is read or written by the server is also communicated between the client and the server, and vice versa. This would imply that the IO complexity is equal to the communication complexity in most ORAM schemes, which is why quite a lot of the literature mentions only one of the two. Nonetheless, they are conceptually different things, and both have an impact on the performance of an ORAM scheme. In more advanced ORAM schemes, the equivalence of these complexities might not hold. For instance, certain advanced schemes use homomorphic encryption to combine multiple blocks into a single block that is sent back to the client.

Storage complexity. The amount of storage used is also an important factor in comparing different ORAM schemes. A distinction is made between server storage complexity and client storage complexity. Most ORAM schemes need to store and use additional data blocks on the server to

achieve the obliviousness requirement. In more formal terms: the physical address space is larger than the virtual address space for most ORAM schemes. In this thesis, the less formal "size of the ORAM" is used synonymously with the size of the physical address space. Sometimes the storage overhead factor is discussed instead of the total size of the ORAM. This is useful to emphasize the extra factor of storage space needed on the server, instead of merely discussing the total size of the ORAM scheme. For instance, an ORAM scheme with size (server storage complexity) O(N log N) has a storage overhead factor of O(log N). The server storage overhead is an important property of an ORAM scheme, especially when a lot of data is stored in the ORAM.

The main purpose of an ORAM scheme is to let the client use the storage capacity of the server in an oblivious way. Therefore, the amount of data stored on the client, the client storage complexity, is usually significantly smaller than the server storage complexity. There is always something that the ORAM scheme needs to store on the client, such as the cryptographic keys used for encrypting the blocks on the server. Some schemes store a lot more on the client than just that. In fact, there are important differences between the client storage complexities of different ORAM schemes. Furthermore, different situations impose different requirements on the client storage complexity of ORAM schemes. In some scenarios an O(√N) client storage complexity is acceptable, while in other scenarios it is not. Therefore, client storage complexity is also an important metric for comparing ORAM schemes.

All the above-mentioned complexities play a role in the performance of the ORAM schemes. Some complexities play a bigger role than others.
For instance, the client computational complexity usually has negligible impact on the performance, since other complexities dominate the performance of the ORAM. Furthermore, it is important to distinguish between amortized complexity and worst-case complexity. Because of this, and because of all the different complexities mentioned, the analysis of ORAM schemes can be quite involved.

2.9 Usage settings

Roughly speaking, there are two different usage settings for ORAMs. The original motivation for studying ORAMs was to implement a secure processor. The idea was to encrypt the main memory and use an ORAM algorithm to make the access pattern of the encrypted memory oblivious. The use case for this was to make it harder to reverse engineer the software executed by the processor. This can help to protect intellectual property (prevent piracy). More recently, the cloud computing paradigm emerged. ORAM schemes can be applied to cloud storage to obtain an oblivious version of cloud storage, called oblivious storage. The main idea is that oblivious cloud storage is a better way to protect privacy in the cloud than merely encrypting the data in the cloud.

Both situations are essentially the same in the sense that the requirements are the same: the ORAM scheme is used to make the access pattern of the memory or cloud storage oblivious. Even though the requirements are the same, the way the ORAM scheme is used is completely different. In the secure processor scenario, the client is the processor and the server is the memory, connected by a hardware bus. In the cloud storage scenario, the client is a computer and the server is a computer as well, connected through a network such as the internet. The following differences are important to observe:

- The cloud storage model supports a reasonable amount of client-side storage. The secure processor model allows only a limited amount of client-side storage. A processor can hold something like a cryptographic key, but it cannot store anything significantly larger.

- The cloud storage model supports server-side computations. In the secure processor scenario, the server is the memory, so it cannot perform server-side computations. Most ORAM schemes do not use server-side computations. Even so, it is important to note that ORAM schemes that use server-side computations cannot work in the secure processor model.

- Since the secure processor model uses a hardware bus for communication, the impact of the round complexity is probably much smaller than in the cloud storage model. This is because latencies for round trips on networks such as the internet are much larger than on a hardware bus.

ORAM schemes frequently make trade-offs between different complexities. ORAM schemes that optimize for good round complexity probably perform relatively better in the oblivious storage scenario than in the secure processor scenario. Furthermore, certain ORAM schemes make trade-offs such as using more client-side storage to reduce the amount of communication.
Such a trade off can be made in the cloud storage usage setting, but is simply impossible in the secure processor usage setting. Similarly, ORAM schemes that use server side computations to reduce the amount of communication cannot be used in the secure processor usage setting either. In summary, certain optimizations cannot be done in the secure processor usage setting, and optimizing certain complexities can have different results in the two usage settings. There is one more important usage setting to discuss. Sometimes, ORAMs are used within other ORAMs. The primary ORAM makes use of another ORAM scheme as a building block. Depending on the usage setting of the primary ORAM, and how exactly the primary ORAM uses the other ORAM, different requirements can hold for such a scenario. Therefore, it is not really possible to list common requirements or properties for such a scenario.

2.10 Lower bounds

Goldreich and Ostrovsky prove in [Gol87, GO96] that the communication and IO complexity of ORAM schemes must have an Ω(log N) lower bound in order to be oblivious. It is important to note that these early ORAM papers were discussing ORAMs in the secure processor usage setting. This is hardly surprising, since in 1987, and also in 1996, the cloud computing paradigm did not exist yet. Their proof assumes a client storage complexity of O(1), and furthermore it assumes that the server only acts as an external memory for the client. This also implies that they assume a limited amount of server computational complexity. Therefore, this lower bound does not apply to the many ORAM schemes that do store more data on the client, or let the server perform more sophisticated computations. In [BM10], Beame et al. prove an even larger lower bound of Ω(log N log log N) for the communication and the IO complexity of ORAM schemes. Of course, this again only applies to ORAM schemes in the limited sense of the secure processor usage setting. Furthermore, as acknowledged in a later version of this paper [BM11], this larger lower bound applies to an even more restricted definition of oblivious RAMs. In fact, in [WCS14], Wang et al. demonstrate that the Ω(log N) lower bound of Goldreich and Ostrovsky is tight in their definition of the ORAM. From a more practical point of view these lower bounds are not that interesting, because later in this thesis some ORAM schemes will be discussed that manage to break both lower bounds.

Chapter 3

Building blocks

3.1 Oblivious scan

An important building block for ORAM algorithms is the oblivious scan algorithm. Generally speaking, while searching for a certain element in an array, scanning is oblivious as long as the whole array is scanned. By definition the access pattern of the scan is fixed, and depends neither on the content of the array, nor on what exactly is being searched for. If the data stored on the server is encrypted with a probabilistic encryption scheme, changing the content of an array can still be oblivious. This is done by reading, decrypting, possibly changing the value, re-encrypting and writing back every item in the array, see Algorithm 1. The e and d notations are used to emphasize encryption and decryption respectively.

Algorithm 1 Oblivious Scan
1: A is stored encrypted on the server
2: ProcessAndUpdate is a function that represents the operations
3: performed on the content of the array while scanning
4: function Scan(A, ProcessAndUpdate)
5:   for i ← 1 to |A| do
6:     tmp ← d(A[i])
7:     ProcessAndUpdate(tmp, i)
8:     A[i] ← e(tmp)

This is oblivious even if only a couple of entries of the array are changed, while the rest of the entries stay the same. This is true because the probabilistic re-encryption makes sure the server cannot observe which entries in the array are being changed. Note that an oblivious scan always reads, decrypts, possibly modifies, encrypts and writes every item of the complete array. Since the oblivious scan is an important building block, it is frequently used by other ORAM schemes. In pseudocode, the scan keyword denotes an oblivious scan.
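As a concrete illustration, the following sketch implements the oblivious scan over a server-held array. It uses a toy probabilistic cipher built from SHA-256 with a fresh random nonce per encryption; this is an assumption for illustration only (a real deployment would use, e.g., AES in a randomized mode), and all function names are hypothetical.

```python
import hashlib
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy probabilistic encryption: fresh nonce, SHA-256 keystream XOR.
    # Blocks of at most 32 bytes, for simplicity.
    assert len(plaintext) <= 32
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(key + nonce).digest()[:len(plaintext)]
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = hashlib.sha256(key + nonce).digest()[:len(body)]
    return bytes(c ^ s for c, s in zip(body, stream))

def oblivious_scan(server_array, key, process_and_update):
    # Read, decrypt, possibly modify, re-encrypt and write back EVERY
    # item, so the access pattern is fixed regardless of what changes.
    for i in range(len(server_array)):
        tmp = decrypt(key, server_array[i])
        tmp = process_and_update(tmp, i)   # returns the (new) value
        server_array[i] = encrypt(key, tmp)
```

Because every slot is rewritten under fresh randomness, the server sees the same trace whether one entry changed or none did.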

3.2 Trivial ORAM

The oblivious scan alone can be used to obtain a trivial ORAM algorithm. The idea is rather simple: store the whole ORAM on the server, encrypted using a probabilistic encryption scheme. For every virtual IO operation just scan all the blocks; when at the block with the right virtual address, either remember the corresponding value in case of a read, or in case of a write update the value before writing (and encrypting) it back, see Algorithm 2.

Algorithm 2 Trivial ORAM
1: function Access(op, addr, val)
2:   scan the array of blocks
3:     if block has virtual address addr then
4:       if op = read then
5:         data ← block
6:       else if op = write then
7:         block ← val
8:   if op = read then
9:     return data

The trivial ORAM algorithm is a straightforward application of the oblivious scan. It is a simple solution to the ORAM problem. Unfortunately it also has an enormous overhead: for every virtual IO operation, the whole ORAM is scanned. This means that the trivial ORAM has a communication and IO complexity of O(N). Needless to say, this is quite inefficient. The algorithm described here also has a round complexity of O(N). The round complexity can be reduced to O(1) by reading the whole ORAM into the client's memory, then updating it, and finally writing it back to the server. This requires two round trips, one for loading the array into the client memory, and one for writing it back. The problem is that this requires a client storage complexity of O(N). A trade off can be made by reading exactly k blocks into the client memory at once, updating these k blocks, and writing them back. This needs to be done N/k times to cover the complete ORAM. This works for arbitrary k and gives a client storage complexity of O(k) and a round complexity of O(N/k). For instance, if k = O(√N) then this gives a client storage complexity of O(√N) and a round complexity of O(√N) as well.
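A minimal sketch of the trivial ORAM access procedure, including the k-block trade-off, could look as follows. The blocks are kept in plaintext here for brevity; in the real scheme every chunk read and written back would be decrypted and probabilistically re-encrypted as in Algorithm 1. All names are illustrative.

```python
class TrivialORAM:
    def __init__(self, n, k):
        self.blocks = [0] * n   # server-side array of N blocks
        self.k = k              # blocks fetched per round trip

    def access(self, op, addr, val=None):
        data = None
        # ceil(N/k) round trips; every block is read and written back,
        # so the access pattern is identical for every operation.
        for start in range(0, len(self.blocks), self.k):
            chunk = self.blocks[start:start + self.k]   # one round trip
            for i, block in enumerate(chunk):
                if start + i == addr:
                    if op == 'read':
                        data = block
                    else:
                        chunk[i] = val
            self.blocks[start:start + self.k] = chunk   # write back
        return data
```

With k = N this is the two-round-trip variant; with k = 1 it is the O(N)-round variant described first.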
3.3 Oblivious sort

With just the oblivious scan building block, it is already possible to get the trivial ORAM. However, there exist more sophisticated and efficient ORAM algorithms that also require an oblivious sorting algorithm as a building block. Sorting an encrypted array using a normal sorting algorithm such as quicksort is not oblivious. Even if a probabilistic encryption scheme is used on the input, it is still not oblivious.

For instance, consider the partition step of the quicksort algorithm. The partition step picks a pivot item from the array to sort. It then compares all other elements to this pivot, and uses this to partition the array into two smaller arrays. In one of the two arrays, it puts the items that are smaller than the pivot, and similarly in the other array it puts the items that are larger than (or equal to) the pivot. Quicksort then recursively proceeds to sort the two arrays and concatenates the results together with the pivot in between. An observer can see which item is being compared to the pivot, and also in which array the item is stored. The observer can therefore conclude whether the item is smaller than, or at least as big as, the pivot. This is even possible when the arrays are re-encrypted using a probabilistic encryption scheme. The reason why the observer can infer this is that the access pattern of the partition step depends on its input. In summary, the access pattern of quicksort depends on its input and therefore quicksort is not oblivious. Ordinary sorting algorithms such as quicksort are therefore not that useful as building blocks for ORAM algorithms. Sorting algorithms whose access pattern does not depend on their input are called oblivious sorting algorithms. As early as 1968, Batcher studied efficient hardware circuits for sorting, see [Bat68]. These hardware circuits use networks of comparison elements to sort. A comparison element has two inputs and two outputs. One output gives the maximum of the two inputs, while the other output gives the minimum. When the inputs and outputs of such a comparison element are encrypted, an observer cannot figure out which of the inputs is the minimum and which is the maximum. By definition these sorting networks are oblivious, since the sequence of comparisons performed is fixed.
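To make the contrast concrete, the following sketch records which positions a Lomuto-style partition step touches: two inputs of the same length produce different traces, which is exactly the leakage described above, whereas a sorting network touches a fixed sequence of positions. The function name and trace format are illustrative.

```python
def partition_trace(a):
    """Return the sequence of (operation, positions) a Lomuto partition
    performs; this sequence is the observable access pattern."""
    a = list(a)
    trace = []
    pivot, i = a[-1], 0
    for j in range(len(a) - 1):
        trace.append(('read', j))
        if a[j] < pivot:              # data-dependent branch -> leakage
            trace.append(('swap', i, j))
            a[i], a[j] = a[j], a[i]
            i += 1
    trace.append(('swap', i, len(a) - 1))
    return trace
```

Two equal-length inputs such as [1, 2, 3, 4] and [4, 3, 2, 1] yield different traces, so an observer learns something about the data from the pattern alone.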
The sequence of comparisons does not depend on the input of the sorting algorithm, since it is a fixed network. Oblivious sorting algorithms such as the sorting networks of Batcher can be used as building blocks for ORAM algorithms. Batcher's sorting network uses O(n log² n) comparison elements. A sorting network with O(n log n) comparison elements was invented in 1983 by Ajtai et al., see [AKS83]. This sorting network is asymptotically optimal, but the asymptotic notation actually hides a rather large constant, which makes this sorting network impractical. A practical asymptotically optimal oblivious sorting algorithm was only discovered recently, in 2010, by Goodrich et al., see [Goo10]. It is based on a randomized version of the shellsort sorting algorithm. In [Goo14] Goodrich describes a deterministic oblivious sorting algorithm that uses O(n log n) comparisons. This construction is also based on shellsort. It is important to consider that it is not uncommon for ORAM schemes to have to sort a large amount of data obliviously. In fact, it is not uncommon that the amount of data that needs to be obliviously sorted is larger than the size of the internal memory. This means that IO efficiency is also an important property that oblivious sorting algorithms should have. Sometimes this is called cache oblivious in the literature; this terminology is not used here to avoid confusion with the obliviousness requirement on the access pattern. External memory algorithms perform better when they perform IO operations in a spatially and/or temporally local way. The reason why this is true is that usually a complete data block of size B is obtained from the external memory

and cached in the internal memory. This implies that spatially or temporally local IO operations can avoid IO operations on the external memory, and instead perform the IO operation on the cached data in the internal memory. This IO efficiency can be denoted by dividing by B. For instance, the asymptotically optimal complexity of an oblivious external memory sorting algorithm is O((N/B) log_{M/B}(N/B)), where N denotes the size of the external memory, and M denotes the size of the internal memory. In [GM11] Goodrich and Mitzenmacher describe an oblivious sorting algorithm with IO complexity O((N/B) log²_{M/B}(N/B)). This is the first sorting algorithm that is both oblivious and IO efficient, but unfortunately, it is not asymptotically optimal. An oblivious and IO efficient sorting algorithm with the asymptotically optimal O((N/B) log_{M/B}(N/B)) complexity was discovered by Goodrich in [Goo11]. Both these sorting algorithms are randomized. The round complexity for an oblivious sort used in an ORAM scheme depends on the algorithm used. According to [HICT14], the AKS sorting network can obliviously sort O(n) blocks using O(log n) round complexity. Batcher's sorting network can do this with O(log² n) round complexity, while the randomized shellsort has a round complexity of O(n) rounds. So, when oblivious sorting is needed by an ORAM scheme, an extra trade off can be made between round complexity and communication / IO complexity. Just like the oblivious scan, the oblivious sort is an important building block for ORAM schemes. In pseudocode, the sort keyword denotes an oblivious sort.
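Batcher's odd-even mergesort can be written as a data-independent sequence of compare-exchange operations: the positions compared depend only on n, never on the values, which is exactly what makes it oblivious. A compact iterative sketch, assuming the input length is a power of two:

```python
def batcher_oddeven_mergesort(x):
    """Sort x in place with O(n log^2 n) compare-exchanges whose
    positions depend only on len(x) -- an oblivious sorting network.
    Assumes len(x) is a power of two."""
    n = len(x)
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        # Compare-exchange: the positions are fixed,
                        # only the (encrypted) values would move.
                        if x[i + j] > x[i + j + k]:
                            x[i + j], x[i + j + k] = x[i + j + k], x[i + j]
            k //= 2
        p *= 2
    return x
```

Enumerating the (i + j, i + j + k) pairs before seeing any data yields the full network, which is why the comparisons can be performed on encrypted blocks without leaking their order.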

Chapter 4

Square root ORAM

The trivial ORAM has a huge communication and IO complexity of O(N) for every IO operation. The primary goal of the square root ORAM [Gol87, GO96] is to reduce this to something more acceptable.

4.1 Concept

Shelter

The square root ORAM, just like the trivial ORAM, has an array of N physical blocks that correspond with all the virtual blocks. Additionally, the square root ORAM has a temporary buffer called the shelter. Instead of scanning through the complete ORAM for every IO operation, the idea is to scan through the shelter each time. If the block needed for a particular operation is not in the shelter, it is obtained from the large array directly (without scanning the whole array). At the end of a virtual IO operation, the (updated) block is put into the shelter. If there already is an old version of that block in the shelter, it will be overwritten. The shelter has to be periodically emptied into the large array, in an oblivious way. This has to be done to make sure that the shelter can be kept small.

Obliviousness

When directly applying this idea of using the shelter as a buffer, a couple of issues with respect to obliviousness arise:

1. If multiple IO operations on the same virtual block occur, at first there is no problem with respect to obliviousness. This is true because, after the first operation, the block will be put in the shelter, and after that the shelter is always completely scanned. However, when the shelter is emptied into the large array, an observer might be able to link virtual blocks: the next time the same virtual block is needed, it will be fetched directly from the large array again, at exactly the same place it was obtained from the first time.

Figure 4.1: An illustration of the square root ORAM

2. If the needed virtual block is in the shelter, no extra lookup in the large array is necessary. This means that an observer can tell when the requested virtual block is in the shelter by the absence of a lookup in the large array.

3. The square root ORAM can only improve upon the performance of the trivial ORAM if the shelter is significantly smaller than the trivial ORAM.

As a result of this, the square root ORAM works with epochs. At the end of an epoch, the shelter is emptied into the large array. At the beginning of an epoch the content of the large array is shuffled. To be more precise, this means that a random permutation of the large array is computed. After that, the large array is obliviously sorted according to this permutation. This resolves the first issue, because in the next epoch the virtual blocks will have different positions in the large array (with very high probability). This makes sure that an observer cannot learn anything from comparing the access patterns of the large array across multiple epochs. To resolve the second issue, extra dummy elements are added to the large array. The idea is that, when the needed virtual block is found in the shelter, an extra lookup of a dummy element in the large array is done to keep the access pattern oblivious. In fact, if the shelter has size s then s different dummy elements are added to the large array. Furthermore, an epoch consists of s steps. There need to be this many different dummy elements to make sure that for every step in the epoch a different dummy element can be obtained from the large array. Otherwise, an observer could detect that dummy elements are being obtained from the large array, by observing that the same element of the large array is accessed multiple times.
To address the third issue: at the end of an epoch (s steps) the shelter is potentially full, so the content of the shelter needs to be emptied into the large array. This is done by obliviously sorting the content of the shelter into the large array (older versions of blocks are removed through a scan). The size of the shelter is defined to be √N, which means an epoch consists of √N virtual IO operations. Giving the shelter this size turns out to give a better amortized asymptotic complexity compared to the trivial ORAM.

4.2 Construction

The square root ORAM stores one large array A on the server. The large array consists of N real blocks, √N dummy blocks, and another √N blocks for the shelter. See Figure 4.1 for an illustration of this. Hence, the array has a total size of N + 2√N blocks. The interval A[1, N + √N] consists of N real blocks followed by √N dummy blocks. The interval A[N + √N + 1, N + 2√N] consists of shelter blocks. For the construction of the pseudo-random permutations, [IS04] suggests that they can be constructed as described in [LR88]. The original description of the square root ORAM [Gol87] uses a different, less efficient approach. Note that everything is encrypted with a probabilistic encryption scheme as discussed earlier. Algorithm 3 describes how a virtual IO operation works on the square root ORAM. The following state is maintained on the server in between virtual IO operations:

A: array containing N virtual blocks, √N dummy blocks and √N shelter blocks, in that order
t: virtual IO operation counter of the current epoch, initially t = 1; t ∈ {1, ..., √N}
π: permutation of [1, N + √N] for the current epoch

Observe that the algorithm roughly consists of the following steps:

1. At the beginning of an epoch, pick a random permutation for the large array and shuffle the large array according to this permutation.
2. Scan the shelter and look for the virtual block.
3. If the block was not found, look up the virtual block from the large array; otherwise look up a unique dummy value from the large array.
4. In case of a write operation, update the value. Add or update the virtual block in the shelter.
5. At the end of an epoch, obliviously shuffle the shelter into the large array and remove old versions of blocks.
6. In case of a read operation, return the requested block.

4.3 Analysis

Scanning the shelter has O(√N) communication and IO complexity.
At the beginning and at the end of an epoch, an array of size N + √N and of size N + 2√N, respectively, is obliviously sorted. Since N + √N ≤ N + 2√N ≤ 2N = O(N), this can be done with communication

Algorithm 3 The square root ORAM
1: function Access(op, addr, val)
2:   if t = 1 then    ▷ Begin of epoch, apply random permutation
3:     π ← a random permutation on [1, N + √N]
4:     scan A[1, N + √N]
5:       tag block with virtual address i with tag π(i)
6:     sort A[1, N + √N] on the tagged value
7:   scan A[N + √N + 1, N + 2√N]    ▷ Scan the shelter
8:     if block has virtual address addr then
9:       store value in data
10:      InShelter ← true
11:  if InShelter then
12:    read A[π(N + t)]    ▷ Read dummy value
13:  else
14:    data ← A[π(addr)]
15:  if op = write then
16:    data ← val
17:  scan A[N + √N + 1, N + 2√N]    ▷ Scan the shelter
18:    (over)write in next available slot with address addr and value data
19:  if t = √N then    ▷ End of epoch, merge shelter in, remove old blocks
20:    scan A[N + √N + 1, N + 2√N]
21:      tag all blocks in shelter as new
22:    sort A[1, N + 2√N] on the virtual address
23:    scan A[1, N + 2√N]
24:      on a sequence of new and old blocks with the same address, make the old block a dummy
25:    t ← 1
26:  else
27:    t ← t + 1
28:  if op = read then
29:    return data

and IO complexity of O(N log N). Note that these oblivious sorts only happen once per epoch, that is, once per √N virtual IO operations. Therefore the amortized communication and IO complexity for the oblivious sorts is only O(√N log N). This means that the square root ORAM has a total amortized communication and IO complexity of O(√N log N), while having a worst case communication and IO complexity of O(N log N). The trivial ORAM has a worst case communication and IO complexity of O(N); this means that the square root ORAM is asymptotically better in the amortized case, but worse in the worst case scenario. The worst case scenario only happens at the end and the beginning of an epoch. The square root ORAM has a server storage complexity of N + 2√N = O(N). In absolute terms this is slightly worse than the trivial ORAM, which stores exactly N blocks and thus has the optimal server storage complexity of O(N). The round complexity depends on which oblivious sorting algorithm is used. The worst case round complexity can be O(log N) when the AKS sorting network is used, or O(N) if the randomized shellsort algorithm is used, or something else if another oblivious sorting algorithm is used.
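The concept can be made concrete with a small client-side simulation. The sketch below keeps blocks in plaintext and uses a plain random shuffle in place of the oblivious sort, so it only illustrates the access logic (shelter scan, dummy lookup, end-of-epoch merge and reshuffle), not the cryptography; all names are illustrative.

```python
import random

class SquareRootORAM:
    """Toy square root ORAM: plaintext blocks, random.shuffle standing
    in for the oblivious sort, client-side position bookkeeping."""

    def __init__(self, n):
        self.n = n
        self.s = max(1, int(round(n ** 0.5)))   # shelter size sqrt(N)
        # Server array: N real blocks followed by sqrt(N) dummy blocks.
        self.store = [[addr, 0] for addr in range(n)]
        self.store += [['dummy%d' % i, None] for i in range(self.s)]
        self.shelter = []                       # (addr, value) pairs
        self.t = 0                              # step within the epoch
        self._shuffle()

    def _shuffle(self):
        random.shuffle(self.store)              # models the oblivious sort
        self.pos = {blk[0]: i for i, blk in enumerate(self.store)}

    def access(self, op, addr, val=None):
        self.t += 1
        data, in_shelter = None, False
        for a, v in self.shelter:               # always scan the WHOLE shelter
            if a == addr:
                data, in_shelter = v, True
        if in_shelter:
            _ = self.store[self.pos['dummy%d' % (self.t - 1)]]  # dummy lookup
        else:
            data = self.store[self.pos[addr]][1]                # real lookup
        if op == 'write':
            data = val
        # (Over)write the block into the shelter.
        self.shelter = [(a, v) for a, v in self.shelter if a != addr] + [(addr, data)]
        if self.t == self.s:                    # end of epoch
            for a, v in self.shelter:           # merge shelter back
                self.store[self.pos[a]][1] = v
            self.shelter, self.t = [], 0
            self._shuffle()                     # reshuffle for the next epoch
        return data
```

Every access scans the whole shelter and touches exactly one position of the large array (real or dummy), matching the access pattern Algorithm 3 prescribes.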

Chapter 5

Hierarchical ORAM

The square root ORAM manages to obtain an amortized communication and IO complexity of O(√N log N). At the same time, Goldreich proved an Ω(log N) lower bound on the communication and IO complexity. The goal of the hierarchical ORAM [Ost90, Ost92, GO96] is to reduce the gap between these complexities. The hierarchical ORAM manages to obtain an amortized polylogarithmic communication and IO complexity.

5.1 Concept

Levels and epochs

The hierarchical ORAM builds upon the ideas of the square root ORAM. Instead of having one shelter, the core idea is to have a hierarchy of shelters. These shelters are called levels. The hierarchical ORAM has log N + 1 levels. Each level has a different size. Level i, 1 ≤ i ≤ log N + 1, should be able to store 2^i virtual blocks. Each level therefore has its own epoch of 2^i steps. After 2^i steps, at the end of its epoch, level i gets obliviously merged and reshuffled into level i + 1. Level i + 1 only obtains extra blocks when level i is merged into level i + 1. Because level i + 1 is twice the size of level i, and because level i's epoch ends twice as often as level i + 1's epoch, it is always possible to merge level i into level i + 1 (at the end of level i's epoch). See Figure 5.1 for an illustration of the levels.

Hash tables

A major difference with the shelter of the square root ORAM is that the levels (except the first) are not arrays. The levels are hash tables. The general idea is that each level has a randomly selected hash function h_i from a family of hash functions. At the end of the epoch of the level, a new hash function is randomly selected for the next epoch. The hash functions are used to hash the virtual addresses to slots in the hash table. So level i has a hash function that maps to values in [0, 2^i). The purpose of the hash function is to make it easier to figure out if a certain virtual block is located in a level.
When looking for a virtual block, instead of scanning the whole level, the hash of the virtual

address is computed to find the slot in which the virtual block should be, if it is located in that particular level. To deal with hash collisions in an oblivious way, the hash table must not leak when collisions occur or how many virtual blocks are placed in the same slot. Therefore each slot in the hash table consists of a bucket of size s = Θ(log N). This makes sure that the probability of a bucket overflow is negligible. When looking for a virtual block in a bucket, or adding a virtual block to a bucket, the whole bucket is always scanned to make sure everything remains oblivious.

Figure 5.1: An illustration of the hierarchical ORAM

Structure

The core idea is that the hierarchical ORAM has log N + 1 levels. Level i is a hash table that has 2^i slots. Each slot in a hash table is a bucket of size s = Θ(log N) blocks. Hence level i has a physical size of 2^i · s blocks. For every virtual IO operation, a hash table lookup is done in all the levels to find, in an oblivious way, (the newest version of) the virtual block needed. A virtual IO operation works like this: for every level, lower levels first, the hash function is computed over the virtual address to find the needed slot in the hash table. The corresponding bucket is scanned to look for the needed virtual block. Recall that each physical block stores metadata such as the corresponding virtual address, so while scanning a bucket the client knows to which virtual block a physical block corresponds. If the needed virtual block is found at a certain level, the loop through the levels still continues, to keep it oblivious. Nonetheless, there is one important difference: after the block has been found, a random dummy lookup is done in the remaining levels. When in each level a bucket has been scanned, if the virtual block has not been found, it is considered to be found with a zero value. The virtual block gets re-inserted in the first level.
In case of a write operation, the value of the block is updated before re-insertion.
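The per-level lookup described above can be sketched as follows: a keyed hash (modeling the per-epoch random hash function) maps the virtual address to a bucket, and the whole bucket is scanned regardless of where, or whether, the block is found. The helper names are illustrative.

```python
import hashlib

def bucket_index(i, epoch_key, addr):
    """Map a virtual address to one of the 2**i buckets of level i.
    A fresh epoch_key models picking a new random hash function."""
    digest = hashlib.sha256(epoch_key + addr.to_bytes(8, 'big')).digest()
    return int.from_bytes(digest, 'big') % (2 ** i)

def level_lookup(level, i, epoch_key, addr):
    """Scan the WHOLE bucket: the observable access pattern reveals
    only the bucket index, never the slot (or presence) of the block."""
    found = None
    for block in level[bucket_index(i, epoch_key, addr)]:
        if block is not None and block[0] == addr:
            found = block[1]   # keep scanning, no early exit
    return found
```

Re-keying each epoch means the same address hashes to an unrelated bucket after every merge, which is what prevents linking lookups across epochs.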

Because of the re-insertion, it is possible that there are multiple versions of the same virtual block on different levels, since the old version does not get removed. This is the reason why, after a virtual block has been found at a level, a random dummy lookup is done on the other levels. If this were not the case, it would be possible to look up the same virtual block on multiple levels. This would not be oblivious, because an observer has already seen the lookup of the old version in a previous virtual IO operation at exactly the same location. This is analogous to the dummy lookups of the square root ORAM when the block was found in the shelter.

Oblivious merge and reshuffle

After the block has been inserted in the first level, for each level it is checked whether the epoch of the level has ended. If the epoch of level i has ended, level i gets obliviously merged with level i + 1. It is not permitted to have multiple versions of the same virtual block on the same level. Therefore, the oblivious merge must also remove the old version of a virtual block, in the case that there is an old version in level i + 1 and a newer version in level i. Furthermore, when an oblivious merge occurs, a new random hash function is computed for level i + 1. All the virtual blocks get obliviously shuffled such that the ordering of the virtual blocks given by the new hash function is obtained. This at the same time makes sure that the virtual blocks after the merge can be found at the expected slots of the hash table, and that at the beginning of the new epoch the content is reshuffled. Note that when level i's epoch ends, by definition the epoch of all levels below i also ends. This is true because the sizes and epochs of the higher levels are by definition multiples of those of the lower levels. So there is always a consecutive sequence of levels whose epochs are at an end.
As discussed earlier, the lower levels must be merged into the higher levels in order.

5.2 Construction

The following state is maintained on the server in between virtual IO operations:

levels: in total there are log N + 1 levels; level_i, 1 ≤ i ≤ log N + 1, denotes the i-th level; level_i has 2^i buckets of Θ(log N) blocks
t: virtual IO operation counter, automatically increments
h_i: randomly selected hash function for level i for the current epoch of level i, changes every epoch

This description of the hierarchical ORAM corresponds to the pseudocode of Algorithm 4. The algorithm roughly consists of the following steps:

1. Scan through the whole first level, and look for the wanted virtual block.

2. Loop through all the other levels in order; compute the hash function and look for the wanted block by scanning the corresponding bucket. If the block was already found in a lower level, a random bucket is scanned to keep it oblivious.
3. In case of a write operation, update the found value.
4. Scan through the whole first level again, and write back the (updated) block; if an old version of the block is already in the first level, overwrite it.
5. For all levels whose epoch has ended, obliviously merge and shuffle them with the next level. This must be done in the right order.
6. In case of a read operation, return the found block. If no block was found, assume a default zero value.

Algorithm 4 The hierarchical ORAM
1: function Access(op, addr, val)
2:   scan through the entire first level (not just a bucket)
3:     if block has virtual address addr then
4:       data ← block
5:       found ← true
6:   for i ← 2 to log N + 1 do
7:     if not found then
8:       bucket ← h_i(addr)
9:     else
10:      bucket ← h_i("dummy" ∥ t)
11:    scan bucket of level i
12:      if not found then
13:        if block has virtual address addr then
14:          data ← block
15:          found ← true
16:  if op = write then
17:    data ← val
18:  scan through the entire first level (not just a bucket)
19:    write block with address addr and value data in the next available location
20:    if a block with address addr is already there, overwrite instead
21:  for i ← 2 to log N do
22:    if level i is at the end of its epoch then
23:      ObliviousMergeReshuffle(i)
24:    else    ▷ level i's epoch not at an end, so all levels above are not at an end
25:      break
26:  if op = read then
27:    return data
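The epoch schedule behaves like a binary counter: level i's epoch is 2^i operations long, so level i is due for merging exactly when the operation counter is a multiple of 2^i, and the levels due at any moment always form a consecutive run starting at level 2. A small sketch (function name illustrative):

```python
def levels_to_merge(t, num_levels):
    """Levels whose epoch ends after virtual IO operation t (1-indexed).
    Level i has an epoch of 2**i operations, so it is due when t is a
    multiple of 2**i; since 2**i dividing t implies 2**j divides t for
    all j < i, the due levels form a consecutive run."""
    due = []
    for i in range(2, num_levels + 1):
        if t % (2 ** i) == 0:
            due.append(i)
        else:
            break   # higher levels cannot be due either
    return due
```

This is why the loop in Algorithm 4 can stop at the first level whose epoch has not ended.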

The most complicated part of the algorithm is the merge and reshuffle of level i into level i + 1 in an oblivious way. To achieve the obliviousness requirement, the merging and reshuffling is implemented by performing a sequence of oblivious scans and sorts. See Algorithm 5 for the pseudocode of the oblivious merge and reshuffle. It is interesting to note that smart combinations of oblivious scanning and sorting are used to obtain the required results. The algorithm roughly consists of the following steps:

1. Put the content of level i and level i + 1 into a temporary array C.
2. Remove old versions, by a sequence of: scan, sort, and scan.
3. Add dummy blocks.
4. Compute a new random hash function for level i + 1, and use it to assign the blocks to the buckets. Check for overflow of buckets; on overflow, repeat until no more overflows occur.
5. Use oblivious scans and sorts in such a way as to obtain the desired content of level i + 1 as a prefix of C. In more detail, a sequence of: scan, sort, scan and sort is used to achieve this. Clean up and move into level i + 1.

The dummy blocks need to be added for two reasons. First, to make sure that always exactly 2^(i+1) blocks are being inserted into the hash table of level i + 1. This avoids leaking how many actual non empty blocks are inserted into the hash table. Second, it makes sure that there are at least 2^i distinct dummy blocks, that is, at least one dummy block per virtual IO operation in the epoch. Just before insertion into level i + 1, during the cleanup, the dummy blocks are turned into empty blocks. This is done because, from the point of view of an observer, a dummy block is indistinguishable from an empty block. So, conceptually, there are still dummy blocks in the hash table. This is analogous to the dummy blocks of the square root ORAM.
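Step 2, removing old versions via sort-then-scan, can be sketched in isolation: tag blocks as new or old, sort by (address, new-before-old), and a single linear scan then empties any old copy that directly follows a new copy of the same address. A plain sort stands in for the oblivious sort here; the names are illustrative.

```python
def remove_old_versions(new_blocks, old_blocks):
    """Tag, sort by (addr, new-before-old), then one scan that empties
    an old block whenever it directly follows a new block with the
    same address. Returns (addr, value) pairs; None marks emptied."""
    tagged = [(addr, 0, val) for addr, val in new_blocks] \
           + [(addr, 1, val) for addr, val in old_blocks]
    tagged.sort()                        # stands in for the oblivious sort
    result, prev_addr = [], None
    for addr, age, val in tagged:
        if addr == prev_addr and age == 1:
            result.append((addr, None))  # superseded old version
        else:
            result.append((addr, val))
        prev_addr = addr
    return result
```

Because the scan makes one pass touching every element exactly once, its access pattern is fixed even though which blocks get emptied depends on the data.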

Algorithm 5 Oblivious merge and reshuffle level i into level i + 1
1: function ObliviousMergeReshuffle(i)
2:   create encrypted array C of size (2^i + 2^(i+1)) · s blocks on the server
3:   scan level i and level i + 1
4:     if block not empty then
5:       tag blocks from level i as new, from level i + 1 as old
6:       copy block into C
7:   sort C lexicographically on
8:     1. non empty before empty
9:     2. smaller virtual address first
10:    3. blocks marked new before old
11:  scan C
12:    ▷ Remove old versions of virtual blocks
13:    1. on consecutive new, old with the same addr, empty the old block
14:    remove all old and new tags on blocks
15:    ▷ Add extra dummy blocks
16:    2. add dummy items s.t. there are a total of 2^(i+2) non empty items
17:    dummy items have virtual address 0 and a random content
18:  repeat
19:    compute a new random hash function that maps to [0, 2^(i+1))
20:    scan C    ▷ Assign all non empty blocks to a bucket
21:      if block not empty then
22:        if block is dummy then
23:          tag with the hash of the random content
24:        else
25:          tag with the hash of the virtual address
26:    sort C lexicographically on
27:      1. non empty before empty
28:      2. smaller tags first
29:    scan C    ▷ Check if the hash function overflows a bucket
30:      check that no more than s blocks have the same tag
31:  until no bucket has overflown
32:  scan C    ▷ Assign at least s (empty) blocks to every bucket
33:    tag 2^(i+1) · s empty blocks such that each tag is given to s blocks
34:  sort C lexicographically on
35:    1. tagged before non tagged
36:    2. lower tags first
37:    3. non empty before empty
38:  ▷ Make sure exactly s blocks are assigned to every bucket
39:  scan C, count how often the same tag consecutively occurs
40:    if the count for the current tag is more than s then
41:      remove the tag
42:  sort C lexicographically on    ▷ remove the non tagged gaps
43:    1. tagged before non tagged
44:    2. lower tags first
45:  scan C    ▷ Clean up and copy the final content into level i + 1
46:    1. remove all tags
47:    2. remove (make empty) dummy blocks (virtual address 0)
48:    3.
copy the prefix of size 2 i+1 s to level i+1 49: remove C from server
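The "remove old versions" step of Algorithm 5 (a lexicographic sort followed by a linear scan) can be modelled in plaintext Python as below. This is only an illustration of the logic; a real implementation would run an oblivious sorting network over encrypted blocks, and all names here are hypothetical.

```python
def remove_old_versions(level_i, level_i1):
    """Merge two levels, keeping only the newest version of each address."""
    # Tag blocks: 0 = "new" (from level i), 1 = "old" (from level i+1).
    C = [(addr, val, 0) for (addr, val) in level_i]
    C += [(addr, val, 1) for (addr, val) in level_i1]

    # Oblivious sort in the real scheme; here a plain lexicographic sort:
    # smaller virtual address first, "new" before "old".
    C.sort(key=lambda blk: (blk[0], blk[2]))

    # Single scan: a block repeating an address already seen is the older
    # version and is dropped (the real scheme empties it instead).
    out = []
    for addr, val, tag in C:
        if out and out[-1][0] == addr:
            continue
        out.append((addr, val))
    return out

merged = remove_old_versions([(5, "v2")], [(5, "v1"), (9, "x")])
```

Because "new" sorts before "old" for equal addresses, the scan always keeps the newer version, exactly as the consecutive-pair rule in the pseudocode intends.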

5.3 Analysis

For every virtual IO operation, on O(log N) levels, a bucket of size Θ(log N) is scanned. This gives an O(log^2 N) communication and IO complexity for scanning the buckets. The most expensive part of a virtual IO operation is the oblivious merging and shuffling of the levels at the end of an epoch. The most expensive step is the oblivious sort used on the levels. Level i and level i+1 together have size 2^i·log N + 2^{i+1}·log N = O(2^i log N). This can be sorted using O((2^i log N)·log(2^i log N)) comparisons, which equals O(2^i·i·log N + 2^i·log N·log log N). In the worst case scenario, all levels are at the end of an epoch; this gives the following number of comparisons:

∑_{i=1}^{log N + 1} O(2^i·i·log N + 2^i·log N·log log N) = O(N·log N·log N + N·log N·log log N) = O(N log^2 N)

In the amortized case, it is important to notice that the epoch of level i ends once for every 2^i virtual IO operations. This gives the following number of amortized comparisons:

∑_{i=1}^{log N + 1} (1/2^i)·O(2^i·i·log N + 2^i·log N·log log N) = ∑_{i=1}^{log N + 1} O(i·log N + log N·log log N) = O(log^3 N + log^2 N·log log N) = O(log^3 N)

So, in the worst case scenario the oblivious merging and shuffling of the levels has a communication and IO complexity of O(N log^2 N). The corresponding amortized complexity is O(log^3 N). This is also the total (amortized) communication and IO complexity of the hierarchical ORAM, since scanning the buckets takes only O(log^2 N). The server storage complexity of the hierarchical ORAM is:

∑_{i=1}^{log N + 1} O(2^i·log N) = O(N log N)

In conclusion, the hierarchical ORAM has an extra factor of O(log N) storage requirement, and has a worse communication and IO complexity in the worst case, compared to the square root ORAM. Nonetheless, it has a polylogarithmic amortized IO and communication complexity.
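As a quick sanity check of the two sums above, they can be evaluated numerically with all hidden constants taken as 1; this is an illustration of growth rates, not a proof, and the function names are ad hoc.

```python
import math

# Evaluate the worst-case and amortized comparison-count sums from the
# analysis above, with every hidden constant set to 1 (illustration only).

def worst_case(N):
    L = int(math.log2(N))
    lg = math.log2(N)
    lglg = math.log2(lg)
    return sum(2**i * i * lg + 2**i * lg * lglg for i in range(1, L + 2))

def amortized(N):
    L = int(math.log2(N))
    lg = math.log2(N)
    lglg = math.log2(lg)
    # Level i's epoch ends once per 2^i operations, hence the 1/2^i weight,
    # which cancels the 2^i factor in each summand.
    return sum(i * lg + lg * lglg for i in range(1, L + 2))

for N in (2**10, 2**20, 2**30):
    lg = math.log2(N)
    print(N, worst_case(N) / (N * lg**2), amortized(N) / lg**3)
```

The ratios against N·log^2 N and log^3 N stay of constant order as N grows, consistent with the claimed bounds.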
The round complexity of the hierarchical ORAM depends on the round complexity of the oblivious sorting algorithm, which makes it hard to say anything about it in general. It is safe to say that at least an Ω(log N) round complexity is

needed for accessing the levels. This is without even considering the cost of the oblivious sorts. It is not possible to do the physical IO operations on the levels with just a single communication round, because whether a dummy lookup is necessary or not depends on the previous level.

5.4 Improvements

Cuckoo hashing

The hierarchical ORAM uses hash table data structures for the levels. When using hash tables, collisions can occur. The hierarchical ORAM must make sure not to leak how many blocks are put into a bucket, because otherwise it would not be oblivious. Therefore, the hash table uses buckets that can store Θ(log N) blocks. This makes sure that the chance that a bucket overflows is negligible. The buckets are always completely scanned when inserting or looking for a block within them. This makes sure that hash table collisions can be dealt with in an oblivious way.

Recall that level i has a hash table of 2^i buckets. Level i's epoch ends after 2^i virtual IO operations. This means that level i contains at most 2^i virtual blocks. Therefore, each bucket will hold an expected number of at most one virtual block, while it is able to hold Θ(log N) blocks. This is a huge overhead that impacts both the size and the performance of the hierarchical ORAM. The ORAM scheme from [PR10] improves upon this by using cuckoo hashing [PR01, PR04, GHMT11].

Cuckoo hashing uses two hash functions, h_0 and h_1, for a single hash table. Furthermore, when k items need to be supported by a cuckoo hash table, it needs to have a size of roughly 2k slots. To be precise, it requires 2(k + ε) slots, which is slightly more than 2k slots. Each slot can store just one block. A cuckoo hash table does not use buckets as the standard hierarchical ORAM does. When an item x is added to the hash table, it is inserted into slot h_0(x). If there already is another item y at h_0(x), y is kicked out of that slot.
y was just removed from either h_0(y) or h_1(y), so either h_0(x) = h_0(y) or h_0(x) = h_1(y). In the first case, y is now inserted into h_1(y), and possibly some other block z is kicked out of h_1(y). The second case is similar, but instead of h_1, h_0 is used. Now z is inserted at its second location, etcetera. Hence, each item a can be found at either h_0(a) or h_1(a); this is the main invariant of cuckoo hashing. Therefore, a lookup can be done in constant time: just two positions have to be checked. Insertion can theoretically take longer, since it can lead to a chain reaction of elements kicking each other out of the slots. In the worst case scenario, there can be a cycle of collisions that can never be resolved by kicking out and relocating items. The chance that such a situation occurs is small, because there are 2k slots for just k items. Nonetheless, to avoid costly insertions and infinite loops, if the chain reaction takes too long, two new hash functions h_0 and h_1 are chosen at random. Then, the whole hash table is rehashed using the new hash functions. This does not require allocating new tables; it is just a matter of looping through the table and relocating the items that violate the main invariant. In practice, this happens rarely, therefore the amortized

insertion cost of cuckoo hashing is also constant.

When the normal hierarchical ORAM does a dummy scan of a bucket, this is oblivious since a bucket contains Θ(log N) blocks. An observer cannot tell which virtual block the ORAM is trying to look up, or whether it is a dummy lookup. In the case of the cuckoo hash tables, each slot contains only one block. Therefore, to make sure that each lookup in the hash table is unique, the cuckoo version of the hierarchical ORAM needs unique dummy elements for each virtual IO operation in an epoch. This is a bit similar to the N dummy elements of the square root ORAM. Therefore, the cuckoo hashing variant of the hierarchical ORAM needs 2^i distinct dummy elements for level i. Together with the 2^i virtual blocks, this gives a total of 2·2^i blocks that need to be stored in the cuckoo hash table. Since, as mentioned earlier, the cuckoo hash table needs twice the amount of slots, this gives a total of 4·2^i physical blocks for level i of the cuckoo variant of the hierarchical ORAM.

Actually, the normal variant also does something similar with dummies. Before the hashing and checking for overflow, it creates additional dummy elements to obtain a total of 2^{i+1} blocks for level i. This makes sure not to leak how many non-empty blocks there are, and makes sure that for every virtual IO operation in the epoch there is a distinct dummy element. Just before inserting the content into the hash table, the dummy blocks are made empty during the cleanup. This is done because, from the point of view of an observer, a dummy block and an empty block in a bucket are indistinguishable.

See Algorithm 6 for the pseudocode of the cuckoo variant of the hierarchical ORAM. The pseudocode of the cuckoo variant looks almost the same as the pseudocode of the normal version. The main difference is that, instead of scanning a bucket per level, two blocks from the hash table are examined using the two hash functions.
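A minimal (non-oblivious) cuckoo hash table sketch may make the mechanics above concrete. The class name, `max_kicks` bound, and xor-based hash construction are all illustrative choices, not taken from [PR10].

```python
import random

class CuckooTable:
    """Two hash functions, one item per slot, rehash on a too-long kick chain."""

    def __init__(self, capacity, max_kicks=32):
        self.size = 2 * capacity          # roughly 2k slots for k items
        self.max_kicks = max_kicks
        self.slots = [None] * self.size
        self._new_hashes()

    def _new_hashes(self):
        # Two fresh random hash functions (toy construction for the sketch).
        a0, a1 = random.getrandbits(64), random.getrandbits(64)
        self.h0 = lambda x: (hash(x) ^ a0) % self.size
        self.h1 = lambda x: (hash(x) ^ a1) % self.size

    def lookup(self, key):
        # Constant time: exactly two positions are checked.
        for pos in (self.h0(key), self.h1(key)):
            item = self.slots[pos]
            if item is not None and item[0] == key:
                return item[1]
        return None

    def insert(self, key, value):
        item, pos = (key, value), self.h0(key)
        for _ in range(self.max_kicks):
            item, self.slots[pos] = self.slots[pos], item   # place, kick out
            if item is None:
                return
            # The kicked-out item moves to its other possible slot.
            p0, p1 = self.h0(item[0]), self.h1(item[0])
            pos = p1 if pos == p0 else p0
        self._rehash(item)    # chain too long: choose new hash functions

    def _rehash(self, pending):
        old = [s for s in self.slots if s is not None] + [pending]
        self.slots = [None] * self.size
        self._new_hashes()
        for k, v in old:
            self.insert(k, v)

table = CuckooTable(capacity=8)
for key, val in [("a", 1), ("b", 2), ("c", 3)]:
    table.insert(key, val)
```

With far fewer items than slots, a rehash is rare, which is exactly why the amortized insertion cost stays constant.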
The oblivious merge and reshuffle algorithm, Algorithm 7, actually differs quite a bit more. Both algorithms start by obtaining the content from the two levels, then removing old versions of blocks that have a newer version, and then adding dummy blocks. The cuckoo hashing variant then does something new: it computes a random permutation of the temporary array, and uses an oblivious shuffle to apply the ordering of this permutation. Next, the cuckoo variant computes the two new hash functions for the next epoch; this is analogous to the normal variant with only one hash function. Where the normal variant proceeds by checking that no buckets overflow, the cuckoo variant instead just inserts the blocks directly into level i+1, using the cuckoo hash insert algorithm with the two new hash functions. The normal variant has to use a complex sequence of scan and sort operations to build the actual structure of the hash table before it can be inserted into level i+1. The differences can be explained by the fact that the structure of the cuckoo hash table is simpler. Instead of having a bucket, there is only one slot. So the cuckoo hashing variant does not have to build the structure of the actual buckets, containing just the right amount of non-empty and empty blocks in the right order. The cuckoo hashing variant just uses the cuckoo hashing insert algorithm to insert the blocks into the level. That is also why the random permutation needs to be computed and applied on the temporary array before the insertion is done. Otherwise, an observer would know where the virtual blocks would be stored in the levels. To see why this is the case, observe

that before the permutation is computed and applied, the blocks are stored sorted on the virtual address.

The size of level i of the cuckoo hashing variant is O(2^i), instead of the O(2^i log N) of the normal variant. This results in a total server side storage complexity of:

∑_{i=1}^{log N + 1} O(2^i) = O(N)

Furthermore, instead of scanning a bucket per level, now only two lookups need to be done, so the communication and IO complexity for the lookup is now only O(log N), instead of O(log^2 N) for the normal version. For the oblivious merge and reshuffling part, a total of O(2^i·log 2^i) = O(2^i·i) comparisons need to be done for the oblivious sorts. This gives a worst case communication and IO complexity of:

∑_{i=1}^{log N + 1} O(2^i·i) = O(N log N)

And a total amortized communication and IO complexity of:

∑_{i=1}^{log N + 1} (1/2^i)·O(2^i·i) = ∑_{i=1}^{log N + 1} O(i) = O(log^2 N)

So, the server storage complexity, worst case communication and IO complexity, and amortized communication and IO complexity are all a factor log N better compared with the normal version of the hierarchical ORAM.

Flaws and further improvements

There are a couple of variants of the hierarchical ORAM in the literature. In [WS08] a variant is discussed that partially sorts on the client. In continuation of that, in [WSC08] a variant is discussed that uses Bloom filters [Blo70] in the levels to improve performance. This work is continued by Williams and Sion in the papers [WSS11, WS12, WS13]. Furthermore, [GM11, GMOT12] discuss variants that use parallel MapReduce cuckoo hashing. In [GMOT11a] these variants are further improved by de-amortizing these schemes. This means that the expensive work that happens once per epoch is evenly divided over the whole epoch. This results in a worst case complexity that is equal to the amortized complexity. Finally, in [KLO12] a security flaw is fixed in these kinds of schemes.
The security flaw also applies to the described cuckoo hashing variant from [PR10]. In this paper, the following observation is made: after a reshuffle, the server knows that the data at that level can be inserted in a cuckoo hash table with the current hash functions. This might seem trivial, but it results in the following knowledge for the server: there are no three elements a, b and c such that:

h_0(a) = h_0(b) = h_0(c) and h_1(a) = h_1(b) = h_1(c)

Algorithm 6 Cuckoo variant of the hierarchical ORAM

function Access(op, addr, val)
    scan through the entire first level:
        if block has virtual address addr then
            data ← block
            found ← true
    for i ← 2 to log N + 1 do
        let h_{i,0} and h_{i,1} denote the hash functions of level i (at the current epoch)
        if not found then
            index_0 ← h_{i,0}(addr)
            index_1 ← h_{i,1}(addr)
            if block at index_0 or index_1 of level i has virtual address addr then
                data ← block
                found ← true
        else
            let t denote the global incremental virtual IO operation counter
            read h_{i,0}("dummy" ∥ t) from level i
            read h_{i,1}("dummy" ∥ t) from level i
    if op = write then
        data ← val
    scan through the entire first level (not just a bucket):
        write block with address addr and value data in the next available location
        if a block with address addr is already there, overwrite it instead
    for i ← 2 to log N do
        if level i is at the end of its epoch then
            ObliviousMergeReshuffle(i)
        else                    ▷ level i's epoch not at end ⇒ all levels above not at end
            break
    if op = read then
        return data
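The per-level lookup pattern of Algorithm 6 — always exactly two physical reads, switching to a counter-derived dummy address once the block has been found — can be sketched as follows. The function and variable names are illustrative, and the plaintext list stands in for an encrypted level.

```python
def level_lookup(level, h0, h1, addr, found, t):
    """Probe exactly two slots of `level`, real or dummy."""
    if not found:
        i0, i1 = h0(addr), h1(addr)
    else:
        # A fresh dummy address per virtual IO operation (counter t), so the
        # probed positions are never repeated for dummy lookups in an epoch.
        i0, i1 = h0(("dummy", t)), h1(("dummy", t))
    b0, b1 = level[i0], level[i1]   # the only two physical reads on this level
    if not found:
        for blk in (b0, b1):
            if blk is not None and blk[0] == addr:
                return blk[1], True
    return None, found

size = 8
h0 = lambda x: hash(x) % size
h1 = lambda x: (hash(x) // size) % size
level = [None] * size
level[h0(42)] = (42, "data")
```

Whether the requested block was already found or not, the server observes the same shape of traffic: two slot reads per level.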

Algorithm 7 Cuckoo variant of the oblivious merge and reshuffle

function ObliviousMergeReshuffle(i)
    create encrypted array C of size 2^{i+1} blocks on the server
    sort level i and level i+1 on:       ▷ find real blocks and copy to C
        real blocks before dummy and empty blocks
    scan the first 2^i blocks of level i and the first 2^i blocks of level i+1:
        if block not empty or dummy then
            tag block from level i as "new", block from level i+1 as "old"
            copy block into C
    sort C lexicographically on:         ▷ remove old versions of blocks at level i+1 that have a newer version at level i
        1. smaller virtual address first
        2. blocks marked "new" before "old"
    scan C:                              ▷ a random block is a block with a special virtual address and random data
        1. on consecutive ("new", "old") with the same address: change the "old" block into a random block
           (remove all "old" and "new" tags on blocks)
        2. change empty and dummy blocks into random blocks
                                         ▷ make sure that a total of 2^{i+1} real and random blocks will be inserted
    increase size of C to 2^{i+2}        ▷ add 2^{i+1} dummy blocks, one for every virtual IO operation in the next epoch
    add 2^{i+1} dummies with addresses {"dummy" ∥ (t + j) : j ∈ {1, …, 2^{i+1}}} to C
    compute a pseudorandom permutation π of C
                                         ▷ shuffle C so an observer cannot learn the virtual addresses of blocks
    scan C:
        tag block at offset j with π(j)
    sort C on:
        the tags of π
    repeat                               ▷ cuckoo hashing
        compute two random hash functions h_{i+1,0} and h_{i+1,1} that map to [0, 2^{i+1})
        scan C:
            1. remove the tags of π
            2. try to cuckoo hash the 2·2^i blocks into level i+1 with h_{i+1,0} and h_{i+1,1}
            3. when an insert takes too long: fail and repeat
    until cuckoo hashing of C with h_{i+1,0} and h_{i+1,1} succeeds
    remove C from server

When a lookup is done in the ORAM scheme for elements x, y, z that are not in the table, there is a chance that:

h_0(x) = h_0(y) = h_0(z) and h_1(x) = h_1(y) = h_1(z)

When the server observes this, it immediately knows that it is not possible that all three of x, y and z are in that hash table. The authors of that paper claim that similar flaws exist in many other variants of hierarchical ORAM schemes. Luckily, the authors also give a repaired scheme for the de-amortized parallel MapReduce cuckoo hashing variant. This scheme has a worst case communication and IO complexity of O(log^2 N / log log N).

In [BMP11] a hybrid variant of the square root and the cuckoo hashing hierarchical ORAM is discussed. It is essentially a square root ORAM where the shelter is implemented as the cuckoo hashing hierarchical ORAM. Furthermore, a mapping from the virtual blocks to the levels and indexes of the shelter is stored at the client (this strongly resembles the partitions of the partition ORAM, discussed in the next chapter). It is the first ORAM scheme to consider the round complexity as an important metric for oblivious cloud storage. It achieves an O(1) online round and online communication complexity. It also allows the reshuffling of the levels to happen in parallel with the virtual IO operations. Having said that, it still has an O(N log N) worst case communication and IO complexity. A new shuffle algorithm for this scheme is introduced in [GMOT11b].

Chapter 6

Partition ORAM

The hierarchical ORAM improves upon the square root ORAM by achieving an amortized polylogarithmic communication and IO complexity for virtual IO operations. However, neither the square root ORAM nor the hierarchical ORAM performs well when the worst case complexity is taken into consideration. In the worst case scenario, both schemes have to do an oblivious sort on a significant fraction of the whole physical address space for a single virtual IO operation. The partition ORAM [SSS11] alleviates this problem.

6.1 Concept

Partitioning

The main idea of the partition ORAM is to partition one large ORAM into multiple smaller ORAMs. An ORAM with a virtual address space of N blocks is divided into P = √N smaller ORAMs, called partitions. Every virtual block is assigned at random to one of the partitions. This means that every partition will have a virtual size of roughly N/P = √N blocks. When a virtual IO operation occurs, the partition ORAM finds the partition that corresponds to the requested virtual block. The partition ORAM then forwards the virtual IO operation to the ORAM algorithm of that particular partition. This implies that the server can observe which partition is being used for a virtual IO operation. Since every partition is an ORAM, the server cannot observe what exactly happens inside the partition. Nonetheless, keeping a virtual block in the same partition when multiple virtual IO operations occur on that same virtual block is not oblivious. Therefore, after every IO operation on a virtual block, that block is removed from the corresponding partition and assigned to be inserted into a random partition. By definition this results in a pseudo-random access pattern on the partitions. There is only one problem with this approach: if the virtual block were immediately inserted into the assigned partition, the server could still track the virtual block, and therefore this would not be oblivious.

Client storage

To make sure that the obliviousness requirement is met, the client has a buffer for virtual blocks. This buffer is called the stash. The stash has a slot for each partition, in which virtual blocks can be held. When a virtual block is removed from a partition, it is always stored in the stash slot of the partition that it just got assigned to. Conceptually, the stash is a bit like the shelter of the square root ORAM. It is a temporary storage used to achieve obliviousness. The main difference is that the stash is stored on the client instead of on the server. This is necessary to achieve obliviousness.

Up until now, we have not discussed how the partition ORAM keeps track of where each virtual block is assigned to. The ORAM simply stores, for every virtual block, to which partition it has been assigned, in a list called the partition map. This partition map is stored on the client as well. This is also done to avoid problems with obliviousness.

The partition map tracks, for every virtual block, to which partition it is assigned. This does not necessarily mean that the virtual block is also located in that partition: the virtual block could also be in the corresponding stash slot. So the virtual block is either in the stash slot or in the partition. When a virtual IO operation is going to be executed, first the stash slot is checked. If the virtual block is not found in the stash slot, the partition is queried for the virtual block. If the virtual block was found in the stash slot, the partition is queried for a dummy block. This must be done to avoid leaking whether the block was found in the stash or not. See Figure 6.1 for an illustration of the partition ORAM.

Eviction

Eventually the stash slots have to be emptied into the partitions. This has to be done in such a way that the scheme remains oblivious.
In fact, exactly when a certain stash slot is evicted into its corresponding partition must not depend on how many virtual blocks are in that particular stash slot, since that would not be oblivious. Generally speaking, the eviction process must not depend at all on the content of the partition map or the content of the stash. After every virtual IO operation, the partition ORAM selects v stash slots to evict a block from. There are two different strategies to select which stash slots are going to have a block evicted:

Random: Just evict a block from a random stash slot.

Deterministic: Evict the stash slots in a deterministic order. For instance, start with the first slot, the next time pick the second slot, and so on. When the last slot has been picked, pick the first one again the next time a slot must be picked.

Note that both strategies are oblivious, since neither depends on the content or state of the ORAM. It is important to note that exactly one block must be evicted from each of the stash slots that are selected. The situation can occur that an empty stash slot is selected. In such a situation, a dummy block must be written into the corresponding partition. Once again, this is necessary to keep the ORAM scheme oblivious.
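The two selection strategies, and the rule that a selected-but-empty slot still produces a (dummy) write, can be sketched as follows; all function names are illustrative.

```python
import random

def select_slots_random(P, v):
    """Random strategy: v independently random slot indices."""
    return [random.randrange(P) for _ in range(v)]

def select_slots_deterministic(P, v, counter):
    """Deterministic strategy: round-robin starting at `counter`."""
    return [(counter + j) % P for j in range(v)]

def evict(stash, slots, write_block, write_dummy):
    # Neither branch depends on *when* the slot filled up; an empty slot
    # simply produces a dummy write, so the server sees one write per slot.
    for p in slots:
        if stash[p]:
            addr, val = stash[p].popitem()
            write_block(p, addr, val)   # evict one real block
        else:
            write_dummy(p)              # slot empty: evict a dummy block
```

Both selectors ignore the stash contents entirely, which is exactly what makes the eviction schedule oblivious.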

Figure 6.1: An illustration of the partition ORAM
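Putting the concept, the client-side state, and the reassignment rule of this chapter together, a toy plaintext model of a single access might look like the sketch below. The stub partition stands in for a full per-partition ORAM, eviction is left out, and every class and field name is illustrative.

```python
import random

class StubPartition:
    """Stands in for a full per-partition ORAM (contents kept in plaintext)."""
    def __init__(self):
        self.store = {}
    def read_block(self, addr):
        return self.store.pop(addr, None)   # reading removes the block
    def read_dummy(self):
        pass                                # dummy query, identical on the wire

class PartitionORAM:
    def __init__(self, num_blocks, num_partitions):
        self.partitions = [StubPartition() for _ in range(num_partitions)]
        # Client-side state: the partition map and one stash slot per partition.
        self.partition_map = [random.randrange(num_partitions)
                              for _ in range(num_blocks)]
        self.stash = [dict() for _ in range(num_partitions)]

    def access(self, op, addr, val=None):
        p = self.partition_map[addr]
        block = self.stash[p].pop(addr, None)    # check the stash slot first
        if block is not None:
            self.partitions[p].read_dummy()      # hide that it was in the stash
        else:
            block = self.partitions[p].read_block(addr)
        if op == "write":
            block = val
        # Reassign to a fresh random partition; the block waits in that
        # partition's stash slot until the eviction process writes it out.
        new_p = random.randrange(len(self.partitions))
        self.partition_map[addr] = new_p
        self.stash[new_p][addr] = block
        return block

oram = PartitionORAM(num_blocks=16, num_partitions=4)
oram.access("write", 3, "secret")
```

Every access touches exactly one partition, and the partition touched is freshly random per access, which is the pseudo-random partition-level access pattern described above.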


More information

Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute

Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute Privacy-Preserving Computation with Trusted Computing via Scramble-then-Compute Hung Dang, Anh Dinh, Ee-Chien Chang, Beng Chin Ooi School of Computing National University of Singapore The Problem Context:

More information

ISA 562: Information Security, Theory and Practice. Lecture 1

ISA 562: Information Security, Theory and Practice. Lecture 1 ISA 562: Information Security, Theory and Practice Lecture 1 1 Encryption schemes 1.1 The semantics of an encryption scheme. A symmetric key encryption scheme allows two parties that share a secret key

More information

Selection (deterministic & randomized): finding the median in linear time

Selection (deterministic & randomized): finding the median in linear time Lecture 4 Selection (deterministic & randomized): finding the median in linear time 4.1 Overview Given an unsorted array, how quickly can one find the median element? Can one do it more quickly than bysorting?

More information

Lecture 10, Zero Knowledge Proofs, Secure Computation

Lecture 10, Zero Knowledge Proofs, Secure Computation CS 4501-6501 Topics in Cryptography 30 Mar 2018 Lecture 10, Zero Knowledge Proofs, Secure Computation Lecturer: Mahmoody Scribe: Bella Vice-Van Heyde, Derrick Blakely, Bobby Andris 1 Introduction Last

More information

IS 709/809: Computational Methods in IS Research. Algorithm Analysis (Sorting)

IS 709/809: Computational Methods in IS Research. Algorithm Analysis (Sorting) IS 709/809: Computational Methods in IS Research Algorithm Analysis (Sorting) Nirmalya Roy Department of Information Systems University of Maryland Baltimore County www.umbc.edu Sorting Problem Given an

More information

(Refer Slide Time: 01.26)

(Refer Slide Time: 01.26) Data Structures and Algorithms Dr. Naveen Garg Department of Computer Science and Engineering Indian Institute of Technology, Delhi Lecture # 22 Why Sorting? Today we are going to be looking at sorting.

More information

Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order.

Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order. Sorting The sorting problem is defined as follows: Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order. Remember that total order

More information

Computer Science 210 Data Structures Siena College Fall Topic Notes: Complexity and Asymptotic Analysis

Computer Science 210 Data Structures Siena College Fall Topic Notes: Complexity and Asymptotic Analysis Computer Science 210 Data Structures Siena College Fall 2017 Topic Notes: Complexity and Asymptotic Analysis Consider the abstract data type, the Vector or ArrayList. This structure affords us the opportunity

More information

Cryptography and Network Security. Prof. D. Mukhopadhyay. Department of Computer Science and Engineering. Indian Institute of Technology, Kharagpur

Cryptography and Network Security. Prof. D. Mukhopadhyay. Department of Computer Science and Engineering. Indian Institute of Technology, Kharagpur Cryptography and Network Security Prof. D. Mukhopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Module No. # 01 Lecture No. # 38 A Tutorial on Network Protocols

More information

1 A Tale of Two Lovers

1 A Tale of Two Lovers CS 120/ E-177: Introduction to Cryptography Salil Vadhan and Alon Rosen Dec. 12, 2006 Lecture Notes 19 (expanded): Secure Two-Party Computation Recommended Reading. Goldreich Volume II 7.2.2, 7.3.2, 7.3.3.

More information

Burst ORAM: Minimizing ORAM Response Times for Bursty Access Patterns

Burst ORAM: Minimizing ORAM Response Times for Bursty Access Patterns Burst ORAM: Minimizing ORAM Response Times for Bursty Access Patterns Jonathan Dautrich, University of California, Riverside; Emil Stefanov, University of California, Berkeley; Elaine Shi, University of

More information

CS125 : Introduction to Computer Science. Lecture Notes #38 and #39 Quicksort. c 2005, 2003, 2002, 2000 Jason Zych

CS125 : Introduction to Computer Science. Lecture Notes #38 and #39 Quicksort. c 2005, 2003, 2002, 2000 Jason Zych CS125 : Introduction to Computer Science Lecture Notes #38 and #39 Quicksort c 2005, 2003, 2002, 2000 Jason Zych 1 Lectures 38 and 39 : Quicksort Quicksort is the best sorting algorithm known which is

More information

Proofs for Key Establishment Protocols

Proofs for Key Establishment Protocols Information Security Institute Queensland University of Technology December 2007 Outline Key Establishment 1 Key Establishment 2 3 4 Purpose of key establishment Two or more networked parties wish to establish

More information

Lecture 19. Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer

Lecture 19. Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer CS-621 Theory Gems November 21, 2012 Lecture 19 Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer 1 Introduction We continue our exploration of streaming algorithms. First,

More information

Dr. Amotz Bar-Noy s Compendium of Algorithms Problems. Problems, Hints, and Solutions

Dr. Amotz Bar-Noy s Compendium of Algorithms Problems. Problems, Hints, and Solutions Dr. Amotz Bar-Noy s Compendium of Algorithms Problems Problems, Hints, and Solutions Chapter 1 Searching and Sorting Problems 1 1.1 Array with One Missing 1.1.1 Problem Let A = A[1],..., A[n] be an array

More information

9. The Disorganized Handyman

9. The Disorganized Handyman 9. The Disorganized Handyman A bad handyman always blames his tools. Famous Proverb. What if my hammer is made of paper? Can I blame it then? Author Unknown. Programming constructs and algorithmic paradigms

More information

CSE373: Data Structure & Algorithms Lecture 18: Comparison Sorting. Dan Grossman Fall 2013

CSE373: Data Structure & Algorithms Lecture 18: Comparison Sorting. Dan Grossman Fall 2013 CSE373: Data Structure & Algorithms Lecture 18: Comparison Sorting Dan Grossman Fall 2013 Introduction to Sorting Stacks, queues, priority queues, and dictionaries all focused on providing one element

More information

Secure Remote Storage Using Oblivious RAM

Secure Remote Storage Using Oblivious RAM Secure Remote Storage Using Oblivious RAM Giovanni Malloy Mentors: Georgios Kellaris, Kobbi Nissim August 11, 2016 Abstract Oblivious RAM (ORAM) is a protocol that allows a user to access the data she

More information

CS2 Algorithms and Data Structures Note 1

CS2 Algorithms and Data Structures Note 1 CS2 Algorithms and Data Structures Note 1 Analysing Algorithms This thread of the course is concerned with the design and analysis of good algorithms and data structures. Intuitively speaking, an algorithm

More information

Chapter 3. Set Theory. 3.1 What is a Set?

Chapter 3. Set Theory. 3.1 What is a Set? Chapter 3 Set Theory 3.1 What is a Set? A set is a well-defined collection of objects called elements or members of the set. Here, well-defined means accurately and unambiguously stated or described. Any

More information

Lecture Notes on Quicksort

Lecture Notes on Quicksort Lecture Notes on Quicksort 15-122: Principles of Imperative Computation Frank Pfenning Lecture 8 September 20, 2012 1 Introduction In this lecture we first sketch two related algorithms for sorting that

More information

Introduction to Cryptography and Security Mechanisms. Abdul Hameed

Introduction to Cryptography and Security Mechanisms. Abdul Hameed Introduction to Cryptography and Security Mechanisms Abdul Hameed http://informationtechnology.pk Before we start 3 Quiz 1 From a security perspective, rather than an efficiency perspective, which of the

More information

Practical Oblivious RAM and its Applications

Practical Oblivious RAM and its Applications NORTHEASTERN UNIVERSITY Practical Oblivious RAM and its Applications by Travis Mayberry A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy in the Department of Computer Science

More information

Lecturers: Mark D. Ryan and David Galindo. Cryptography Slide: 24

Lecturers: Mark D. Ryan and David Galindo. Cryptography Slide: 24 Assume encryption and decryption use the same key. Will discuss how to distribute key to all parties later Symmetric ciphers unusable for authentication of sender Lecturers: Mark D. Ryan and David Galindo.

More information

CONIKS: Bringing Key Transparency to End Users

CONIKS: Bringing Key Transparency to End Users CONIKS: Bringing Key Transparency to End Users Morris Yau 1 Introduction Public keys must be distributed securely even in the presence of attackers. This is known as the Public Key Infrastructure problem

More information

(a) Which of these two conditions (high or low) is considered more serious? Justify your answer.

(a) Which of these two conditions (high or low) is considered more serious? Justify your answer. CS140 Winter 2006 Final Exam Solutions (1) In class we talked about the link count in the inode of the Unix file system being incorrect after a crash. The reference count can either be either too high

More information

Distributed Sorting. Chapter Array & Mesh

Distributed Sorting. Chapter Array & Mesh Chapter 9 Distributed Sorting Indeed, I believe that virtually every important aspect of programming arises somewhere in the context of sorting [and searching]! Donald E. Knuth, The Art of Computer Programming

More information

The divide and conquer strategy has three basic parts. For a given problem of size n,

The divide and conquer strategy has three basic parts. For a given problem of size n, 1 Divide & Conquer One strategy for designing efficient algorithms is the divide and conquer approach, which is also called, more simply, a recursive approach. The analysis of recursive algorithms often

More information

Online Graph Exploration

Online Graph Exploration Distributed Computing Online Graph Exploration Semester thesis Simon Hungerbühler simonhu@ethz.ch Distributed Computing Group Computer Engineering and Networks Laboratory ETH Zürich Supervisors: Sebastian

More information

Notes for Lecture 14

Notes for Lecture 14 COS 533: Advanced Cryptography Lecture 14 (November 6, 2017) Lecturer: Mark Zhandry Princeton University Scribe: Fermi Ma Notes for Lecture 14 1 Applications of Pairings 1.1 Recap Consider a bilinear e

More information

6.895 Final Project: Serial and Parallel execution of Funnel Sort

6.895 Final Project: Serial and Parallel execution of Funnel Sort 6.895 Final Project: Serial and Parallel execution of Funnel Sort Paul Youn December 17, 2003 Abstract The speed of a sorting algorithm is often measured based on the sheer number of calculations required

More information

Notes for Lecture 24

Notes for Lecture 24 U.C. Berkeley CS276: Cryptography Handout N24 Luca Trevisan April 21, 2009 Notes for Lecture 24 Scribed by Milosh Drezgich, posted May 11, 2009 Summary Today we introduce the notion of zero knowledge proof

More information

«Computer Science» Requirements for applicants by Innopolis University

«Computer Science» Requirements for applicants by Innopolis University «Computer Science» Requirements for applicants by Innopolis University Contents Architecture and Organization... 2 Digital Logic and Digital Systems... 2 Machine Level Representation of Data... 2 Assembly

More information

Efficient Private Information Retrieval

Efficient Private Information Retrieval Efficient Private Information Retrieval K O N S T A N T I N O S F. N I K O L O P O U L O S T H E G R A D U A T E C E N T E R, C I T Y U N I V E R S I T Y O F N E W Y O R K K N I K O L O P O U L O S @ G

More information

Efficient Oblivious Data Structures for Database Services on the Cloud

Efficient Oblivious Data Structures for Database Services on the Cloud Efficient Oblivious Data Structures for Database Services on the Cloud Thang Hoang Ceyhun D. Ozkaptan Gabriel Hackebeil Attila A. Yavuz Abstract Database-as-a-service (DBaaS) allows the client to store

More information

(Refer Slide Time: 1:27)

(Refer Slide Time: 1:27) Data Structures and Algorithms Dr. Naveen Garg Department of Computer Science and Engineering Indian Institute of Technology, Delhi Lecture 1 Introduction to Data Structures and Algorithms Welcome to data

More information

CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting. Aaron Bauer Winter 2014

CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting. Aaron Bauer Winter 2014 CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting Aaron Bauer Winter 2014 The main problem, stated carefully For now, assume we have n comparable elements in an array and we want

More information

Random Oracles - OAEP

Random Oracles - OAEP Random Oracles - OAEP Anatoliy Gliberman, Dmitry Zontov, Patrick Nordahl September 23, 2004 Reading Overview There are two papers presented this week. The first paper, Random Oracles are Practical: A Paradigm

More information

Chapter 7 Sorting. Terminology. Selection Sort

Chapter 7 Sorting. Terminology. Selection Sort Chapter 7 Sorting Terminology Internal done totally in main memory. External uses auxiliary storage (disk). Stable retains original order if keys are the same. Oblivious performs the same amount of work

More information

CSE 373: Data Structures and Algorithms

CSE 373: Data Structures and Algorithms CSE 373: Data Structures and Algorithms Lecture 19: Comparison Sorting Algorithms Instructor: Lilian de Greef Quarter: Summer 2017 Today Intro to sorting Comparison sorting Insertion Sort Selection Sort

More information

OptORAMa: Optimal Oblivious RAM

OptORAMa: Optimal Oblivious RAM OptORAMa: Optimal Oblivious RAM Gilad Asharov 1, Ilan Komargodski 1, Wei-Kai Lin 2, Kartik Nayak 3, and Elaine Shi 2 1 Cornell Tech 2 Cornell University 3 VMware Research and Duke University September

More information

Algorithms and Data Structures: Lower Bounds for Sorting. ADS: lect 7 slide 1

Algorithms and Data Structures: Lower Bounds for Sorting. ADS: lect 7 slide 1 Algorithms and Data Structures: Lower Bounds for Sorting ADS: lect 7 slide 1 ADS: lect 7 slide 2 Comparison Based Sorting Algorithms Definition 1 A sorting algorithm is comparison based if comparisons

More information

6 Pseudorandom Functions

6 Pseudorandom Functions 6 Pseudorandom Functions A pseudorandom generator allows us to take a small amount of uniformly sampled bits, and amplify them into a larger amount of uniform-looking bits A PRG must run in polynomial

More information

ObliviSync: Practical Oblivious File Backup and Synchronization

ObliviSync: Practical Oblivious File Backup and Synchronization ObliviSync: Practical Oblivious File Backup and Synchronization Adam J. Aviv, Seung Geol Choi, Travis Mayberry, Daniel S. Roche United States Naval Academy {aviv,choi,mayberry,roche}@usna.edu arxiv:1605.09779v2

More information

CS 137 Part 8. Merge Sort, Quick Sort, Binary Search. November 20th, 2017

CS 137 Part 8. Merge Sort, Quick Sort, Binary Search. November 20th, 2017 CS 137 Part 8 Merge Sort, Quick Sort, Binary Search November 20th, 2017 This Week We re going to see two more complicated sorting algorithms that will be our first introduction to O(n log n) sorting algorithms.

More information

Lecture Notes on Quicksort

Lecture Notes on Quicksort Lecture Notes on Quicksort 15-122: Principles of Imperative Computation Frank Pfenning Lecture 8 February 5, 2015 1 Introduction In this lecture we consider two related algorithms for sorting that achieve

More information

Algorithms and Data Structures

Algorithms and Data Structures Algorithms and Data Structures Spring 2019 Alexis Maciel Department of Computer Science Clarkson University Copyright c 2019 Alexis Maciel ii Contents 1 Analysis of Algorithms 1 1.1 Introduction.................................

More information

Introduction to Algorithms

Introduction to Algorithms Lecture 1 Introduction to Algorithms 1.1 Overview The purpose of this lecture is to give a brief overview of the topic of Algorithms and the kind of thinking it involves: why we focus on the subjects that

More information

Exploring Timing Side-channel Attacks on Path-ORAMs

Exploring Timing Side-channel Attacks on Path-ORAMs Exploring Timing Side-channel Attacks on Path-ORAMs Chongxi Bao, and Ankur Srivastava Dept. of ECE, University of Maryland, College Park Email: {borisbcx, ankurs}@umd.edu Abstract In recent research, it

More information

Exact Algorithms Lecture 7: FPT Hardness and the ETH

Exact Algorithms Lecture 7: FPT Hardness and the ETH Exact Algorithms Lecture 7: FPT Hardness and the ETH February 12, 2016 Lecturer: Michael Lampis 1 Reminder: FPT algorithms Definition 1. A parameterized problem is a function from (χ, k) {0, 1} N to {0,

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/27/17

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/27/17 01.433/33 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/2/1.1 Introduction In this lecture we ll talk about a useful abstraction, priority queues, which are

More information

Introduction. hashing performs basic operations, such as insertion, better than other ADTs we ve seen so far

Introduction. hashing performs basic operations, such as insertion, better than other ADTs we ve seen so far Chapter 5 Hashing 2 Introduction hashing performs basic operations, such as insertion, deletion, and finds in average time better than other ADTs we ve seen so far 3 Hashing a hash table is merely an hashing

More information

TSKT-ORAM: A Two-Server k-ary Tree Oblivious RAM without Homomorphic Encryption

TSKT-ORAM: A Two-Server k-ary Tree Oblivious RAM without Homomorphic Encryption future internet Article TSKT-ORAM: A Two-Server k-ary Tree Oblivious RAM without Homomorphic Encryption Jinsheng Zhang 1, Qiumao Ma 1, Wensheng Zhang 1, * and Daji Qiao 2 1 Department of Computer Science,

More information

Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation

Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation Yehuda Lindell Department of Computer Science and Applied Math, Weizmann Institute of Science, Rehovot, Israel. lindell@wisdom.weizmann.ac.il

More information

Introduction to Cryptography and Security Mechanisms: Unit 5. Public-Key Encryption

Introduction to Cryptography and Security Mechanisms: Unit 5. Public-Key Encryption Introduction to Cryptography and Security Mechanisms: Unit 5 Public-Key Encryption Learning Outcomes Explain the basic principles behind public-key cryptography Recognise the fundamental problems that

More information

TWORAM: Efficient Oblivious RAM in Two Rounds with Applications to Searchable Encryption

TWORAM: Efficient Oblivious RAM in Two Rounds with Applications to Searchable Encryption TWORAM: Efficient Oblivious RAM in Two Rounds with Applications to Searchable Encryption Sanjam Garg 1, Payman Mohassel 2, and Charalampos Papamanthou 3 1 University of California, Berkeley 2 Yahoo! Labs

More information

MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015

MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015 CS161, Lecture 2 MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015 1 Introduction Today, we will introduce a fundamental algorithm design paradigm, Divide-And-Conquer,

More information

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

Scribe: Sam Keller (2015), Seth Hildick-Smith (2016), G. Valiant (2017) Date: January 25, 2017

Scribe: Sam Keller (2015), Seth Hildick-Smith (2016), G. Valiant (2017) Date: January 25, 2017 CS 6, Lecture 5 Quicksort Scribe: Sam Keller (05), Seth Hildick-Smith (06), G. Valiant (07) Date: January 5, 07 Introduction Today we ll study another sorting algorithm. Quicksort was invented in 959 by

More information

La Science du Secret sans Secrets

La Science du Secret sans Secrets La Science du Secret sans Secrets celebrating Jacques Stern s 60 s birthday Moti Yung Columbia University and Google Research Inspired by a Book by Jacques Popularizing Cryptography Doing research, teaching,

More information

On the Security of the 128-Bit Block Cipher DEAL

On the Security of the 128-Bit Block Cipher DEAL On the Security of the 128-Bit Block Cipher DAL Stefan Lucks Theoretische Informatik University of Mannheim, 68131 Mannheim A5, Germany lucks@th.informatik.uni-mannheim.de Abstract. DAL is a DS-based block

More information

(Refer Slide Time: 0:19)

(Refer Slide Time: 0:19) Theory of Computation. Professor somenath Biswas. Department of Computer Science & Engineering. Indian Institute of Technology, Kanpur. Lecture-15. Decision Problems for Regular Languages. (Refer Slide

More information

Computer Security. 08r. Pre-exam 2 Last-minute Review Cryptography. Paul Krzyzanowski. Rutgers University. Spring 2018

Computer Security. 08r. Pre-exam 2 Last-minute Review Cryptography. Paul Krzyzanowski. Rutgers University. Spring 2018 Computer Security 08r. Pre-exam 2 Last-minute Review Cryptography Paul Krzyzanowski Rutgers University Spring 2018 March 26, 2018 CS 419 2018 Paul Krzyzanowski 1 Cryptographic Systems March 26, 2018 CS

More information

Lecture #2. 1 Overview. 2 Worst-Case Analysis vs. Average Case Analysis. 3 Divide-and-Conquer Design Paradigm. 4 Quicksort. 4.

Lecture #2. 1 Overview. 2 Worst-Case Analysis vs. Average Case Analysis. 3 Divide-and-Conquer Design Paradigm. 4 Quicksort. 4. COMPSCI 330: Design and Analysis of Algorithms 8/28/2014 Lecturer: Debmalya Panigrahi Lecture #2 Scribe: Yilun Zhou 1 Overview This lecture presents two sorting algorithms, quicksort and mergesort, that

More information