Privacy for the Personal Data Vault


Privacy for the Personal Data Vault

Tamás Balogh

Thesis to obtain the Master of Science Degree in Information Systems and Computer Engineering

Supervisors: Prof. Ricardo Jorge Fernandes Chaves
             Master Researcher Christian Schaefer

Examination Committee
Chairperson: Prof. Luís Eduardo Teixeira Rodrigues
Supervisor: Prof. Ricardo Jorge Fernandes Chaves
Member of the Committee: Prof. Nuno Miguel Carvalho dos Santos

July 2014


Acknowledgments

First of all, I would like to thank Ericsson for providing me with the opportunity to work on this interesting research project. Special thanks go to Christian Schaefer for his great support during the thesis work. I would like to thank my thesis supervisor, Prof. Ricardo Chaves, for his help and valuable feedback during the course of this work. My gratitude also goes to the coordinators of the European Masters in Distributed Computing program, Prof. Johan Montelius, Prof. Luís Rodrigues, and Prof. Luís Veiga, who guided me throughout my master's studies. Last but not least, I would like to thank my family and friends for supporting me all along.


Abstract

Privacy is an important consideration in how online businesses are conducted today. Personal user data is becoming a valuable resource that service providers collect and process voraciously. The user-centric design that forms the basis of the Personal Data Vault (PDV) concept tries to mitigate this problem by hosting data under strict user supervision. Once the user's data leaves this supervision, however, the privacy models currently offered for the PDV are no longer sufficient. The goal of this thesis is to investigate different privacy enhancing techniques that can be employed in scenarios where PDVs are used. We propose three privacy enhancing models, all built around the Sticky Policy paradigm (a policy attached to data, describing usage restrictions). Two of these models are inspired by previous research, while the third is our novel approach, which turns a simple Distributed Hash Table (DHT) into a privacy enforcing platform. We evaluate the proposed models from several perspectives, such as feasibility, trust model, and weaknesses.

Keywords: Personal Data Vault, privacy, Sticky Policy, trust, assurance


Resumo

A privacidade é um aspecto importante a ter em consideração na forma como as trocas comerciais são realizadas hoje em dia. Os dados pessoais estão a tornar-se um recurso valioso que os fornecedores de serviços recolhem e processam copiosamente. O design centrado no utilizador, que está na base do conceito do Personal Data Vault (PDV), tenta mitigar este problema, acolhendo estes dados pessoais sob estrita supervisão do utilizador. No entanto, assim que os dados deixam de estar sob esta supervisão, o modelo de privacidade actualmente disponibilizado pelo PDV deixa de ser suficiente. O objectivo desta dissertação é investigar diferentes técnicas de reforço desta privacidade, que poderão ser aplicadas nas situações onde os PDVs são usados. Seguidamente são propostos três modelos de privacidade reforçada, todos baseados no paradigma do uso de Sticky Policy (políticas associadas aos dados, descrevendo as restrições à sua utilização). Enquanto dois destes modelos são inspirados no estado da arte existente, o terceiro constitui uma nova abordagem que transforma um simples Distributed Hash Table (DHT) numa plataforma de privacidade reforçada. Foram realizadas várias avaliações aos modelos propostos, tendo em mente diferentes aspectos, tais como: viabilidade, confiança e debilidades.

Palavras-Chave: Personal Data Vault, privacidade, Sticky Policy, confiança, garantia


Contents

1 Introduction
    Motivation
    Problem Statement
    System requirements
    Contributions
    Thesis Scope
    Dissertation Outline
2 Background
    The Personal Data Vault
        PDV as an Abstraction
        PDVs in the Healthcare System
    Personal privacy concerns
    Summary
3 Related Work
    XACML
    Usage Control
        UCON in practice
    TAS³
    PrimeLife
    Other Privacy Enforcement Techniques
        DRM approach
        Trusted platform
        Cryptographic techniques
    Summary
4 System Design
    PrimeLife Policy Language (PPL) Integration
    Verifiable Privacy
        Description
        Prerequisites
        Architecture
            Privacy Manager Architecture
                A Verifier
                B Monitor
        Interaction Models
            A Data Flow
            B Forwarding Chain
    Trusted Privacy
        Description
        Prerequisites
        Architecture
            Privacy Manager Architecture
                A Trust Negotiator
                B Monitor
        Interaction Models
            A Data Flow
            B Forwarding Chain
    Mediated Privacy
        Description
        Prerequisites
        Architecture
            DHT Peer Layer
                A The Remote Retrieval Operation
                B Membership
                C Keyspace Assignment
                D Business Ring Size
                E Business Ring Description
            Privacy Manager Layer
                A Sticky Policy Enforcement
                B Trust Management
            Logging Layer
        Interaction Models
            A Data Flow
            B Multiple Data Subject (DS) Interaction Model
            C Multiple Data Controller (DC) Interaction Model
            D Log Flow
            E Indirect data
    Prototype Implementation Details
    Summary
5 Evaluation and Discussion
    Comparison on Requirements
        Establishing Trust
        Transparent User Data Handling
        Data Across Multiple Control Domains
        Maintaining Control
            A Direct Data
            B Indirect Data
            C Sticky Policy
    Comparison on Feasibility
    Comparison on Trust Models
    Comparison on Vulnerabilities and Weaknesses
        Weaknesses of the Sticky Policy
        Malicious Data Controller (DC)
        Platform Vulnerabilities
    Discussion
    Summary
6 Conclusion
    Summary
    Future work

List of Figures

2.1  Personal Data Vault Abstraction
2.2  Personal Data Vault in the Healthcare System
3.1  Overview of XACML Dataflow
3.2  Collaboration Scenario
4.1  Verifiable Privacy: Abstract Architecture of a single Policy Enforcement Point (PEP) node
4.2  Verifiable Privacy: Interaction diagram between a PDV and a Service Provider (SP)
4.3  Verifiable Privacy: Example of Forwarding Chain on Personal Health Record
4.4  Trusted Privacy: Abstract Architecture of a single PEP node
4.5  Trusted Privacy: Interaction Model of the Data Flow
4.6  Mediated Privacy: Architecture of a DHT node
4.7  Mediated Privacy: Business Ring formed around a healthcare scenario
4.8  Mediated Privacy: Privacy as a Service (PaaS) design for the Hospital Service Business Ring node
4.9  Mediated Privacy: DC - DS interaction model
4.10 Mediated Privacy: Key Dissemination
4.11 Mediated Privacy: Logging
4.12 Mediated Privacy: Indirect Data

List of Tables

5.1 Requirements Comparison Table
5.2 Detailed Comparison on Maintaining Control

List of Acronyms

BFS     Breadth First Search
DC      Data Controller
DFS     Depth First Search
DHPol   Data Handling Policy
DHPref  Data Handling Preference
DHT     Distributed Hash Table
DRM     Digital Rights Management
DS      Data Subject
MP      Mediated Privacy
NoSQL   Not Only SQL
PaaS    Privacy as a Service
PD      Protected Data
PDP     Policy Decision Point
PDV     Personal Data Vault
PEP     Policy Enforcement Point
PHR     Personal Health Record
PM      Privacy Manager
PPL     PrimeLife Policy Language
RDBS    Relational Database System
RDF     Resource Description Framework
SQL     Structured Query Language
TCG     Trusted Computing Group
TP      Trusted Privacy
TPM     Trusted Platform Module
TTP     Trusted Third Party
UCON    Usage Control
UI      User Interface
VP      Verifiable Privacy
XACML   eXtensible Access Control Markup Language

1 Introduction

Contents
1.1 Motivation
1.2 Problem Statement
1.3 System requirements
1.4 Contributions
1.5 Thesis Scope
1.6 Dissertation Outline

The majority of interactions on today's internet are driven by personal user data. These pieces of information come in different shapes and forms, some being more valuable than others. For example, banking details might be considered more valuable than a person's favourite playlist. What all of these data pieces have in common is that they all belong to some specific user. This property, however, is not reflected in how data is hosted and organized over the web, since personal user data is hosted by multiple service providers. Data belonging to a single user is fragmented and kept independently under different control domains, based on context. For example, data related to somebody's social life might be stored with a social network provider, while the same person's favourite playlist might be hosted by his music service provider.

Different initiatives exist to unify this scattered data. The Personal Data Vault (PDV) can be considered one such proposed solution. The PDV is a user-centric vision of how personal digital data should be hosted. Rather than having bits of information scattered around multiple sites, the PDV tries to capture them under a single control domain. Every user is associated with his own PDV, where he hosts his personal data. PDVs are not only secure storage systems, but also offer ways to make access control decisions on hosted data. External entities, such as different service providers, can request user data at the user's PDV in order to provide some functionality beneficial to the owner of the PDV. By unifying the source of personal user data, we expect to achieve more flexibility and better control over how data is being disclosed. By employing an access control solution, users can have assurance that only authorized entities will get access to their data. Access control alone does not, however, provide any privacy guarantees with regard to how personal data is protected after it leaves the control domain of the PDV.

PrimeLife [5] was a European project that researched technical solutions for privacy guarantees. Its privacy enhancing model introduces a novel privacy policy language, which empowers both users and service providers to specify their intentions with regard to data handling. The privacy policy language, however, lacks the technical enforcement model needed to support its correct functioning. This enforcement model is required to provide trust and assurance to end users. A trust relationship needs to be established between remote entities prior to personal data exchange, while assurance needs to be provided as proof that user intentions have been respected.

We propose a novel privacy policy enforcement model with an integrated trust and assurance framework. Our solution utilizes the completely decentralized construct of a Distributed Hash Table (DHT) to sustain a mediated space between PDVs and service providers. This mediated space serves as a platform for privacy enhanced data sharing. Pointers to the shared data objects, which live in the mediated space, are kept by both the owner and the requester. This way, data owners can stay in control of their shared data. A distributed logging mechanism supports our enforcement model in delivering first-hand assurance to end users.

1.1 Motivation

Personal user data is becoming a highly demanded and valuable resource, not just for the users themselves, but also for service providers. Data analytics are carried out at different sites in order to bring businesses forward. Sometimes these operations on personal user data are even carried out without the user's awareness. Users are mostly unaware of how the explicit data that they provide, like name, address, phone number, etc., is handled by service providers such as social networks or e-commerce systems. Moreover, users also lack control over the information that they are willing to share. The lack of control manifests in two ways: users are unable to specify the scope in which their data shall be used, and sometimes they are also unable to retrieve and remove personal information hosted on a service provider's network. The lack of awareness and control leaves the user defenceless against privacy violations.

The system in place today, used to avoid the privacy violations described above, is built around a trust framework. The Privacy Policies offered by service providers are considered to be the pillars of this trust framework. These Privacy Policies are often presented to the end user in the form of static texts, describing how personal user information is going to be treated by the data collector. Nowadays we are used to seeing more diverse privacy options that can be set by the end user, like the sharing settings of a post on a social networking website.

The main problem with this approach to data privacy is that it is highly unbalanced. It offers the guarantees of a one-sided privacy system, since the data collector is the sole entity that decides how personal data is handled, without the involvement of the user. This leaves clients with a take-it-or-leave-it offer, which they are often willing to take. The result of this compromise is that user data ends up under the full control of the data collector. Another problem with these Privacy Policies is that they are often lengthy and ambiguously stated, such that they become hard to decipher for the average user. Moreover, they only offer a static policy setting that might not fit every user's requirements. Their more dynamic counterpart, the user-settable privacy options, offer a bit more flexibility, but the implementation of these settings is again fully up to the data collector. This in turn means that the data collector can revoke or modify these privacy options without the consent of its users. The lack of a system that promotes the user-centric vision with regard to privacy concerns motivates us to look for possible alternatives to improve how we handle personal data privacy today.

1.2 Problem Statement

The problem that this thesis focuses on is that of providing privacy guarantees for a system where PDVs are widely used.

Although the PDV concept allows fine-grained access control over the user's personal data, it still fails to address the issue of how remotely stored data should be protected. It is important to notice that once the user chooses to disclose some personal data, he is left vulnerable to privacy violations. User privacy can become compromised through unawareness and lack of control.

1.3 System requirements

In order to provide a higher degree of awareness and control to the end user, the underlying technology needs to provide a higher level of trust and assurance. The user-centric design of the PDV system, although it offers a comprehensive picture of how data should be organized, leaves many specifications open regarding the privacy requirements. The following details the major requirements set by this thesis. This list of requirements forms the foundation of the trust framework that in turn focuses on achieving a user-centric model. They are as follows:

1. Establishing trust between actors, such as service providers and data owners. Trustworthiness refers to the degree of assurance with which an actor can be trusted to carry out the actions that he is entrusted with. The user needs some mechanism to determine whether a service provider is going to treat his data according to a pre-agreed set of rules. Pre-agreed rules, or data handling rules, should be formulated in agreement with both parties, and they should adhere to the correct handling of personal data.

2. Transparent user data handling should be a priority for every service provider. Users need to get assurance that their preferences on how to handle their data are carried out by the actors. Assurances are a form of trustworthy reports that describe the business process that has been carried out over the user's data. Continuous assurance will turn into a higher degree of trust that users can develop over time.

3. Data protection across multiple control domains is needed in order to facilitate the safe interoperability of multiple service providers. Delegation of the right to forward user data is a common use case; therefore there should be a clear model that describes how delegations take place, and how the data protection rules apply to the third party who receives the data.

4. Maintaining control over distributed data promotes user centrality. In the user-centric model the owner of the personal data is considered to be the user, even when he chooses to share it with other parties. He must have a way to keep exercising his rights over his personal data, through operations such as modification, revocation of rights, deletion, etc.

1.4 Contributions

The goal of this thesis work is to research the existing privacy enhancing techniques that could be employed in a PDV oriented system.

The first contribution of this work is to investigate whether the privacy policy language proposed by the PrimeLife [5] project fits the highly distributed PDV system. The second contribution is to categorize several different privacy enforcing models for the considered problem. These models are used to guarantee the correct functioning of the privacy policies established in the first contribution, by covering some of the existing privacy enforcing techniques proposed by related research. While formulating these alternatives, we propose a novel privacy enforcement model, which relies on the concept of a mediated space where shared objects live. The third contribution is an evaluation of the privacy enhancing models herein formulated. This evaluation takes into account different tradeoff criteria, namely the initially proposed requirements, feasibility, trust source, and vulnerabilities. By doing this, we evaluate the strengths and weaknesses of our proposed models. The final contribution is the development of a prototype implementation based on our novel enforcement model, to show that the proposed concept can be carried out with currently existing technology.

1.5 Thesis Scope

The design and evaluation of different privacy enforcement models used together with PDVs bears complexities beyond the scope of this thesis project. First of all, we refrain from discussing the detailed design and architecture of a PDV. Furthermore, we also do not consider every security aspect related to the PDV concept. Instead, we use PDVs as abstract building blocks, clearly defined in Section 2.1. The privacy enforcing models proposed in this thesis are also not subject to a complete security evaluation, as we are more concerned with the privacy aspects. Assumptions on the existence of secure channels and storage systems are made throughout this thesis. Moreover, we also assume a well defined identity framework which guarantees the identity provisioning and verification of every actor in the system.

Providing privacy guarantees is also a vast research field on its own. This thesis is focused on enforcement techniques for privacy policy languages, such as the one outlined in the PrimeLife project. In order to define a clear goal for the thesis, the scope of the work regarding the design of the enforcement models is narrowed down to the set of requirements outlined in Section 1.3. The requirements target aspects such as trust establishment, data handling transparency, data across multiple control domains, and maintaining control. These requirements also serve as a basis for evaluation. We refrain from making quantitative performance measurements in our evaluations, since the thesis is carried out on a conceptual level.

1.6 Dissertation Outline

The upcoming chapters are organized as follows. Chapter 2 describes the background concepts used in this thesis, covering the research involving PDVs and a short study of privacy concerns. Chapter 3 presents relevant projects involving research in privacy enforcement techniques. Chapter 4 presents the three privacy enforcement models herein proposed, highlighting the novelty of the proposed solution, called Mediated Privacy (MP). Chapter 5 contains the evaluation of the models proposed in Chapter 4, based on our requirement set and other metrics, such as feasibility and trust source. Chapter 6 concludes the thesis with a summary of the conducted work and suggestions for future work.

2 Background

Contents
2.1 The Personal Data Vault
2.2 Personal privacy concerns
2.3 Summary

In Chapter 2 the relevant background material used to carry out this thesis is presented. The first section details the concept of a Personal Data Vault (PDV). The second section describes privacy concerns, namely awareness, control, and trustworthiness.

2.1 The Personal Data Vault

The interactions that people have over the Internet contain a significant percentage of personal user data. Users are asked to provide personal information in exchange for access to some advertised online service. For example, a person might use a social media site to stay connected with friends and share information about herself, such as name, address, likes, and dislikes. This person might also be part of other social community sites, such as a virtual book club or a career portal, where she has to share similar personal information again. Following this model, the data that belongs to a single user ends up at multiple hosting sites.

This model, although it suits the needs and desires of the service providers, leaves the users in a difficult position when they want to interact with remotely hosted data. It is becoming increasingly difficult for users to collect their data from multiple control sites to provide interoperability. One of the downsides is the phenomenon called lock-in. It is getting increasingly difficult for users to migrate between the services that they are using, because the data that they previously shared with a service provider is locked in under its control domain. Another concern is data fragmentation, which lets data exist in inconsistent states. A user can have his address hosted by different services, but under different formatting, which in turn may lead to confusion when interoperability needs to be provided. The root of all of these concerns is that the user lacks an appropriate fine-grained control mechanism over his own data. In order to provide a solution for easy interoperability and fine-grained control, the Personal Data Vault proposes a user-centric design that tries to unify personal data under a single control domain.

"Built with security and legal protections that are better than most banks, your vault lets you store and organize all the information that powers your life. Whether using a computer, tablet or smartphone, your data is always with you." [19]

The Personal Data Vault also appears under various other names, like Personal Data Store or Personal Data Locker. The attempts to formalize the concept of a PDV are complementary in the sense that they all focus on providing better control over personal data for the end user. However, a clear formalization of the term is still missing, since projects are built with different aims in mind. Some of them conceptualize a raw storage service whose only purpose is to host data securely, while others focus on providing software solutions to manage already existing storage spaces or even link different user accounts.

There have also been efforts to categorize the different approaches that research projects take in order to formalize what a PDV actually is [29]. These fall into three main categories:

1. Deployment of these unified user data stores can be facilitated by a centralized cloud-based service, which in turn grants the user full control over the hosted data. On the other hand, this requires a high level of trust in the hosting entity. Alternatively, deployment can also be split between multiple trusted hosting providers, or even kept on the end user's local machines.

2. Federation is also an important consideration that focuses on interoperability between multiple different storage providers and individuals. It tries to outline different interaction models that facilitate the collaboration between different deployments.

3. Client-Side solutions target individuals who use their own devices as data hosts together with a social peer-to-peer network. Without the need for a centralized entity to govern data movement, this category focuses on more ad-hoc solutions.

There is also a substantial difference in how these projects envision the data model and internal storage system used for hosting personal user data. While some are leaning towards using a Relational Database System (RDBS), others are looking into solutions such as Not Only SQL (NoSQL) and semantic Resource Description Framework (RDF) stores. Since security is a central concern of all of these solutions, they mostly come with an additional data access layer on top of the storage system. This access layer facilitates the interoperability between different entities in a secure manner. Fine-grained control can be achieved through the use of access control mechanisms that rely on predefined policies. These policies can either be confirmed by the end user, or constructed on the fly.

Another key aspect of these projects is the interoperability of different entities [11]. PDVs should integrate seamlessly with other entities and facilitate the secure sharing of data across different control domains. The security of these operations can be guaranteed by providing encrypted channels between entities. These interactions can be of multiple types depending on the acting sides. Person-to-person connections link individuals: independent entities that serve as representative hosts for a person. Person-to-community solutions try to form groups of persons depending on some social context. Person-to-business connections describe how individuals interact with different service providers. To achieve these features, interoperability needs to be provided that overcomes the differences in the underlying data models with the aid of standardized APIs and protocols.

2.1.1 PDV as an Abstraction

For the purpose of this thesis work, the PDV is treated as an abstraction of a data layer together with a manager layer.

We consider these to be entities made up of one or multiple machines with high availability. Moreover, we consider them resilient in the face of failures and secure in the face of vulnerabilities and exploits that may be used directly by a potential attacker. Herein, we disregard these security aspects and focus on the privacy concerns that appear in the interoperability scenarios.

Figure 2.1: Personal Data Vault Abstraction

Figure 2.1 depicts the high level abstraction of a single PDV entity. The data layer on the bottom of the abstraction represents the collection of hosting machines that facilitate secure storage of personal information. These machines can either be under the direct control of the data owner, or they can be multiple interconnected machines residing at external entities that are fully trusted. Again, the purpose of this project is not to investigate safe data storage for the PDV, but rather to focus on what happens to data once it leaves the PDV. The manager layer above the data layer acts as a guard for the personal data. It guarantees that only authenticated and authorized requesters are able to get access to data. The rules describing the access control policies in place are under the full control of the PDV owner. Secondly, it also offers an external interface that facilitates interoperability with other PDVs and with external service provider entities.

2.1.2 PDVs in the Healthcare System

Several research projects involving privacy enhancement [14][16][18] focus on the healthcare system as their main use case. The benefits of a safe and reliable information system interconnecting healthcare centers clearly outweigh the benefits in other domains, because of its potential for saving human lives. The information systems of healthcare centers operate on Personal Health Records. A Personal Health Record (PHR) is a collection of relevant medical records belonging to a single patient, containing information such as chronic diseases, check-ups, allergies, etc. PHRs are usually hosted by the healthcare center in which a patient was examined. This design requires PHRs to be shared among different health centers in cases of patient migration or emergency situations. This can become cumbersome, since it requires interoperability between multiple independent services.
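Putting Sections 2.1.1 and 2.1.2 together, the manager layer can be pictured as a thin policy gate in front of the data layer that, for example, a hospital service would query for a PHR. The following minimal Python sketch illustrates this abstraction only; the class names, the identity-based policy check, and the requester identifiers are illustrative assumptions, not part of any concrete PDV implementation.

```python
# Minimal sketch of the PDV abstraction from Section 2.1.1 (illustrative only).
# A data layer stores records; a manager layer guards access with owner-defined rules.

from dataclasses import dataclass, field

@dataclass
class DataLayer:
    """Secure storage abstraction: maps record names to payloads."""
    records: dict = field(default_factory=dict)

    def fetch(self, name):
        return self.records.get(name)

@dataclass
class ManagerLayer:
    """Guards the data layer: authenticates requesters and applies owner policies."""
    data: DataLayer
    # Owner-defined access rules: record name -> set of authorized requester identities.
    policies: dict = field(default_factory=dict)

    def handle_request(self, requester_id, authenticated, record_name):
        if not authenticated:
            return None, "denied: requester not authenticated"
        allowed = self.policies.get(record_name, set())
        if requester_id not in allowed:
            return None, "denied: requester not authorized by the owner's policy"
        return self.data.fetch(record_name), "granted"

# Example: Bob's PDV hosting his PHR, accessible only to his home hospital service.
pdv_bob = ManagerLayer(
    data=DataLayer(records={"phr": {"allergies": ["antibiotics"]}}),
    policies={"phr": {"home-hospital-service"}},
)
print(pdv_bob.handle_request("home-hospital-service", True, "phr"))
print(pdv_bob.handle_request("unknown-party", True, "phr"))
```

Note that such a gate only governs the moment of disclosure; as argued throughout this thesis, it says nothing about how the data is treated after it leaves the PDV.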

Figure 2.2: Personal Data Vault in the Healthcare System

The user-centric design focusing on data unification fits the presented healthcare scenario operating on PHRs. Instead of healthcare centers hosting PHRs, they could be kept directly in a PDV, under the direct control of the owner of the PHR. Figure 2.2 illustrates how a PDV can become beneficial in an emergency scenario. Imagine that Bob, owner of PDV-Bob, uses the Home Hospital Service for his regular check-ups and treatments. During check-ups and treatments the Home Hospital Service extends Bob's PHR with relevant information, such as his allergy to antibiotics. His PHR is regularly updated in his PDV. Imagine Bob going on vacation in a foreign country and suffering an accident in which he loses consciousness. As Bob is taken into the foreign hospital, the doctors determine that he needs antibiotics in order to prevent infections. Instead of a rushed procedure, the doctor could first discover the patient's identity from his ID card, then consult his PHR, from PDV-Bob, through the Foreign Hospital Service. Assuming that the hospital's staff are authorized to access Bob's PHR, the foreign doctor can discover his allergy and administer an alternative treatment, potentially saving Bob's life. His treatment in the Foreign Hospital can be appended to his PHR and followed up by the Home Hospital once Bob returns from his vacation.

2.2 Personal privacy concerns

The maintenance of personal privacy is becoming an increasingly important concern in how businesses are conducted over the internet today. The safeguarding of personal privacy rights relies on a tangled framework that incorporates legal regulations and business policies. Business policies are required to be built on top of the regulations that are in place at the location where the said business is conducted. For example, the Data Protection Directive formulated by the European Union [12] is one such legal regulation that provides a set of guidelines on how personal user privacy has to be protected in the virtual space.

In the literature [12][26][5] we can highlight two important terms in use: the Data Subject (DS) and the Data Controller (DC). The Data Subject is an individual who is the subject of personal data. This may commonly be associated with the average user or client who is sharing some personal data.

The Personal Data Vault (PDV), being an entity under the control of its owner, can also be considered a DS. The Data Controller is an entity, or a collection of entities, in charge of deciding how the personal data collected from the DS is used and processed. Most of these regulations target the interaction between the DS and the DC, to assure that personal data is only collected and processed with the consent of the DS.

The Data Protection Directive has been around since 1995; however, due to the changes in IT technology and best practices since then, the directive is becoming obsolete. It fails to take into account concerns surrounding technologies such as cloud based services or social networks. A new directive has been proposed [13] in order to face these challenges, since business policies are becoming increasingly divergent from the initially established regulations. This new regulation tries to clarify and improve privacy rules. However, the implementation of new reforms is always time consuming, and with quickly changing technology there is no guarantee that these new regulations will not become obsolete once again. There is also great difficulty in formalizing how these regulations protect personal data across different political zones where other regulations are in place. Business policies associated with service providers are, in most cases, global, since their services are available regardless of physical location. Privacy regulations, on the other hand, are locally applicable laws that change across borders. The difficulty lies in integrating different local regulations together, since sometimes they are incompatible.

The privacy concerns formulated by this and other data protection directives can be categorized under three important aspects [17], namely: awareness, control, and trustworthiness.

Awareness: The first concern related to privacy is awareness. DSs have to be aware of how the data that they share is going to be handled by the DC. Handling of data should be in accordance with the purpose of usage and the policies agreed upon by the DS. Policies describing user data handling are usually provided by DCs and include information such as processing policies and the modification and forwarding of personal data. These policies alone, however, only offer a limited amount of awareness to DSs of how their explicitly shared data is processed. More alarmingly, implicit data collected about user behaviour on the internet, like search keywords, visited pages, and clickstreams, is also collected and processed without the user's consent. Service providers, such as social networking websites and e-commerce systems, are notorious for collecting their users' personal information and, through different analytical and profiling techniques, using it for purposes such as targeted advertisement. Moreover, personal records may also be disclosed to third parties, such as governments, without the user being aware. Unawareness of how these pieces of personal information are used surrounds many interactions over the web.

In some cases, users can end up unknowingly giving consent to information sharing because of deceitful user interfaces or simple carelessness. When seeking comfort in the privacy policies provided by DCs, users can also be left confused by the complexity and abstractness of these statements. Misuse of personal data can lead to problems such as decontextualization: explicitly shared personal information can get processed and reposted under a context or purpose different from the one for which it was initially intended. This may lead to confusion and loss of personal privacy.

Control: Control is the second aspect of privacy concerns. The policies governing personal data handling should be created in accordance with the user's preferences. Many service providers offer a set of privacy options which give the user the liberty to formulate different privacy profiles. These options, however, lack the fine-grained control which users need to have over their shared data. Policies should be flexible enough to let users formulate how their data can be processed or even disclosed to third parties. There is also a need to be able to modify or even revoke previously given consent. Users should be able to retrieve their personal data at will. There is also another category of personal data, called indirect data, which completely lacks means of control by the DS. Indirect data can be considered data that is not explicitly shared by a DS, but is still connected to his identity. For example, pictures that other people share of you on social networking sites can be considered indirect data. Frequently, systems offer little to no control over data objects which are not shared explicitly by a user, but are still tied to his identity. This in turn can lead to disclosure of personal data without the consent of the original data owner. Another concern surrounds the way in which service providers physically host personal data. In order to offer features such as high availability and fault tolerance, systems often keep replicas and backup copies of data objects, sometimes across different control domains. This leads to difficulties when a user decides to discontinue the use of a service and requests the service provider to delete all previously shared data. In many cases these service providers retain backup copies for an indefinite amount of time, even after the request for deletion has been completed.

Trustworthiness: The mechanisms that provide awareness and control are complemented with trust. Trust is given to DCs by DSs if they follow regulations and respect privacy policies. The existing privacy regulations should serve as the baseline of trust. However, as shown before, these can often lead to confusion whenever contradictory regulations are encountered.

Data Controllers are also trusted to have a secure system, resilient to vulnerabilities and outside attackers, such that personal data cannot be directly stolen. Failure to implement secure software solutions may lead to disastrous personal privacy violations in the face of data theft. Unfortunately, the technical means currently in use provide little to no assurance of how well these systems are privacy compliant. Providing a highly trusted service should be a priority of every service provider, since a lack of trust discourages new clients from using the advertised services, which in turn is bad for business. Trust also applies to all entities that get access to a user's personal data. For example, in the case of a social networking site the service provider is trusted to offer a secure and privacy respecting service, but friends who have direct access to a person's information are also trusted not to use it without their consent.

2.3 Summary

The background of this thesis work involves privacy concerns in Personal Data Vaults (PDV). A PDV is an entity associated with a person or a business, providing safe storage and secure access to personal data. For the purpose of this thesis, we use PDVs as abstract building blocks which serve as the main sources of personal user data. The applicability of PDVs is demonstrated through an example from the healthcare domain, using Personal Health Records. Privacy concerns, such as awareness, control and trustworthiness, surround online interactions. The Data Subject (DS) and Data Controller (DC) are two terms commonly used to denote the user, whose data is being collected, and the service provider, who collects the data. PDVs are generally seen as DSs, while the DC role is mostly assumed by external service providers. Existing local regulations on privacy protection and business privacy policies are not enough to prevent privacy violations, as shown by the examples of unawareness, lack of control, and untrusted services.

3 Related Work

Contents
3.1 XACML
3.2 Usage Control
3.3 TAS³
3.4 PrimeLife
3.5 Other Privacy Enforcement Techniques
3.6 Summary

Chapter 3 focuses on existing related work in the domain of privacy enforcement. The chapter begins with a short introduction to the XACML policy framework in Section 3.1. Afterwards it presents some of the relevant research projects involving privacy enforcement, highlighting the PrimeLife project in Section 3.4.

3.1 XACML

The eXtensible Access Control Markup Language (XACML) is an XML based policy language standardized by OASIS [4]. The language itself provides a set of well defined policy building blocks that facilitates the definition of complex access control scenarios. It supports multiple policies on a single resource, which are combined in order to provide an access decision. The language is attribute based, meaning that actors and resources can be flexibly described by a set of attributes. Version 3.0 of XACML also supports obligations for extended access control. Obligations are specific actions that have to be taken on a predefined trigger, usually after the access decision has been carried out. Its highly extensible design has made it popular among existing policy language frameworks.

Figure 3.1: Overview of XACML Dataflow (source: /elementLinks/07fig09.jpg)

Apart from the language itself, XACML also offers a high level architecture that describes how the policy language can be used to build an access control engine. The dataflow of this high level architecture can be seen in Figure 3.1. Incoming access requests are routed through a Policy Enforcement Point (PEP), depicted in point (2) of Figure 3.1, which offers a well defined communication interface with the rest of the architecture (3). The Context Handler dispatches the request to a Policy Decision Point (PDP) (4), which is responsible for returning an access decision. The PDP combines the relevant policies stored in the Policy Administration Point (1) with the required attributes (5).

The required attributes (specific information on either a Subject, a Resource, or the Environment) are collected by the Policy Information Point (6)(7)(8)(9) and supplied to the PDP (10). After the PDP has combined the relevant policies, it returns an access decision to the requester via the Context Handler (11)(12). Additional restrictions that might apply in the form of obligations have to be carried out by the PEP with the help of an Obligation Service (13).

3.2 Usage Control

There have been many approaches over the years to the safeguarding of valuable digital objects. Traditional access control solutions offer a way to grant access to protected digital objects only to authorized entities. These solutions, however, often require a set of predefined entities in a closed system, such as a company. Trust management offers ways to apply access control to unknown entities over larger domains. Digital Rights Management (DRM) solutions are client-side systems that offer protection for disseminated digital objects. Each of these mechanisms focuses on a different digital object protection solution depending on context and requirements. The Usage Control (UCON) [27][25] research tries to formalize a more extensive solution that offers digital object protection by embedding traditional access control, trust management and DRM together with two novel approaches for data protection. UCON tries to capture the whole lifecycle of a data object, even after it goes beyond authorization. By focusing on the whole lifecycle, UCON provides the privacy features that previous digital object protection systems lack.

The two proposed concepts that allow UCON to provide a more extensive control mechanism than its predecessors are the mutability of attributes and the continuity of the access decision. UCON is envisioned to follow attribute based access control, which requires data requesters to possess a set of attributes that makes them eligible for authorization. Attributes are used to formulate the rights that a given subject has on a given object. Up to this point, this can be realized through the use of a traditional access control system. The mutability of attributes refers to the dynamic nature of the attributes, which can be subject to change. Based on these dynamic changes, the authorization rules also have to adapt and be re-evaluated to provide a potentially new access decision. Continuity of the access decision means that UCON tries to enforce certain security policies not only during authorization, but also while the object is being used, and after usage, thus covering its whole lifecycle. It carries this out through policies that can appear in the form of:

Authorizations: a set of required attributes that have to be provided and verified during the pre-authorization phase. This can include certain identity checks of the requesting party.

Conditions: attributes that describe environmental aspects that can affect the access decision. For example, an object may only be accessible during a given timeframe of the day. Such conditions have to be evaluated in the pre-authorization phase and during the ongoing usage of the object.

Obligations: predefined rules that safeguard a protected object after the authorization phase has granted access to it. Obligations can be activated in any phase during or after the access decision, and provide privacy enhancing features.

3.2.1 UCON in practice

Although many approaches to implement UCON have been proposed [27][25][7], it is generally considered to be a hard problem, given the complex and demanding set of requirements. In general, UCON tries to realize a data protection framework that relies on the use of certain enforcement points. These enforcement points can either be present on the server side, providing a more traditional central approach, or on the client side, which resembles a DRM system that controls the secure dissemination of digital objects. Hybrid approaches have also been proposed [27] that try to formalize a symmetric system where both the client and the server side become enforcement points. Another proposed solution [7] is to harness the power of the quickly growing cloud industry. It proposes the implementation of the UCON framework by shifting the enforcement point into the cloud. A Software as a Service solution could provide safeguarding of user data through the policies and mechanisms described by the UCON research.

Another subset of projects focuses on the security aspects of the enforcement points. In order to guarantee that these nodes are in fact safeguarding digital objects by enforcing policies, different technical measures can be taken. In order to provide assurance, [24] proposes monitoring on different levels of abstraction. In practice, it focuses on how specialized monitors, such as an OS monitor, can be used to trigger and carry out events described in obligations. Assurance can be complemented by providing trust in enforcement points. They propose an implementation that follows the design suggested by the Trusted Computing Group (TCG), which describes how Trusted Platform Module (TPM) enhanced hardware can be used to guarantee that a remote system is tamper proof.

The features described by the UCON research served as one of the bases for the requirements set for the models presented in this thesis work. The continuity of the access decision captures the idea of maintaining control over shared personal data objects that are no longer under the direct control of the user. Moreover, some of the enforcement techniques associated with UCON are also present in some of our proposed data protection models. We diverge, however, from the vast focus of UCON to a narrower scope involving privacy, which means that we are more concerned about what happens to shared data after disclosure, rather than looking at the whole lifecycle of a digital object.
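To make the three UCON policy types above more concrete, the following Python sketch models a single enforcement point that checks authorizations, evaluates a condition, and fires an obligation after access. It is a minimal illustration under assumed names (the policy fields, the time-window condition, and the notify-owner obligation are invented for the example) and does not reproduce the architecture of any of the cited UCON implementations.

```python
# Illustrative UCON-style enforcement point: pre-authorization, conditions,
# and post-access obligations (hypothetical policy fields, not a cited implementation).

from datetime import datetime

policy = {
    "required_attributes": {"role": "doctor"},   # Authorizations
    "allowed_hours": range(8, 20),                # Condition: accessible 08:00-19:59 only
    "obligations": ["notify_owner"],              # Obligation fired after access
}

def pre_authorize(requester_attrs, policy):
    """Authorizations: verify the requester presents the required attributes."""
    return all(requester_attrs.get(k) == v
               for k, v in policy["required_attributes"].items())

def condition_holds(policy, now=None):
    """Conditions: environmental checks, evaluated before and during usage."""
    now = now or datetime.now()
    return now.hour in policy["allowed_hours"]

def fulfil_obligations(policy, owner):
    """Obligations: actions triggered after the access decision."""
    for obligation in policy["obligations"]:
        if obligation == "notify_owner":
            print(f"[obligation] notified {owner} that their object was accessed")

def use_object(requester_attrs, policy, owner="data-owner"):
    if not pre_authorize(requester_attrs, policy):
        return "deny: missing required attributes"
    if not condition_holds(policy):
        return "deny: condition not satisfied"
    # ... object is used here; a real UCON engine would keep re-evaluating
    # condition_holds() and revoke the ongoing usage if it stops holding ...
    fulfil_obligations(policy, owner)
    return "permit"

print(use_object({"role": "doctor"}, policy))
print(use_object({"role": "visitor"}, policy))
```

The continuous re-evaluation hinted at in the comment is exactly what distinguishes UCON from a one-shot access control decision, and it is the property our models narrow down to the post-disclosure, privacy-specific case.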

3.3 Trusted Architecture for Securely Shared Services (TAS³)

Trusted Architecture for Securely Shared Services [6] was a European research project from the Seventh Framework Programme (FP7), concluded in 2011, which addressed some of the security and privacy concerns regarding personal data distribution across data collectors. Its main focus was to specify and design a security and trust framework that is generic enough to encompass multiple business domains and provides user-centric data management in a completely heterogeneous setting. In order to promote user-centrality, it examines the possibility of a PDV-like design where data is kept under the direct control of the end users, rather than scattered around data collectors. The interaction model required to support data sharing in such a model is facilitated by Vendor Relationship Management (VRM) [21]. VRM describes a reverse Customer Relationship Management (CRM) model where service providers are the ones who subscribe to the users' personal information stores to get access to data.

It also addresses the difference between by-me and about-me data. By-me data counts as a direct form of personal data that is submitted or shared by the data owner explicitly. For example, a personal CV containing the professional background information of a person is by-me data. On the other hand, if this person attaches a transcript of grades from an institute, that can be considered about-me data, since its issuer and verifier is the institute rather than the individual. Control over about-me data can be considered much more cumbersome than that of by-me data, since about-me data is often hosted and controlled by entities other than the subject of the data. A proposed solution is to keep updated links pointing to about-me data, such that the data subject can place a relevant data handling policy next to it.

Other subprojects within TAS³ examine how changes to the policy framework guarding personal data can promote user-centrality. Today's unilateral policy system does not meet the requirements of data privacy, since it empowers the data collector to treat personal data at will. Traditionally, users are concerned about privacy, while service providers are concerned about access control over their resources. Instead of treating these two concepts separately, TAS³ tries to encapsulate them under a single bilateral policy framework that lets users formulate privacy policies and service providers keep their access control policies. In order to combine these two policy types, a policy negotiation framework is proposed in [23]. This framework is responsible for the creation of data protection policies, constraining access to the shared data. These policies are then signed and distributed in a non-refutable manner in order to assure that a potential privacy violation can be discovered. Every entity is then responsible for evaluating and respecting these contractual agreements in the processing and usage of every shared object.

A large part of the research focus is directed towards designing a federated infrastructure [14][9] which is generic enough to accommodate many different use cases across heterogeneous systems. The need for high interoperability between independent organizations is partly achieved by providing a privacy enhancing solution that does not rely on a specific policy language. Constraining access to personal data in highly distributed architectures requires a complex decision making process that sometimes relies on multiple independent Policy Enforcement Points (PEP), which are designed in an application dependent manner. The incompatibility between the policy frameworks used by different entities raises conflicts when a suitable protection policy for a shared object has to be formulated. To provide interoperability across organizations, a conflict resolution framework is needed. Policies and security concepts can have different implementations at different sites; the assumption that all organisations use the same terminology when it comes to data protection does not hold. In situations where two independent parties need to share data in a secure manner, a policy negotiation phase has to take place. In order to provide an automated solution, TAS³ proposes an ontology based policy matching framework [10] which lets every actor express his security concerns in his own vocabulary and provides a generic way to map between vocabularies. Another approach [14] tries to solve the conflict resolution by introducing a central component called the MasterPDP, which governs and combines the independent access decisions coming from the stateless Policy Decision Points (PDP). A version that offers better scalability is proposed in [9]. Instead of having a central decision point, it introduces multiple application independent Policy Enforcement Points (PEP) that serve as wrappers over every application dependent PEP and mediate the access decisions between the PEP and the PDP. These application independent PEPs communicate on an independent communication channel and serve the resolved policies to their application dependent PEP.

The requirements set by our proposed models, defined in Section 1.3, can be seen as a subset of the requirements formulated by TAS³. We specifically offer an evaluation of our proposed models that takes into account the differences between by-me and about-me data. Although offering a generic solution greatly increases interoperability, our solutions are not built with federation as the main focus.

3.4 PrimeLife

The PrimeLife project [5] was a research project conducted in Europe under the Seventh Framework Programme (FP7), concerned with the privacy and identity management of individuals. It addresses newly appearing privacy challenges in large collaborative scenarios where users leave a life-long trail of data behind them as a result of every interaction with services. Its extensive research domain investigates privacy enhancing techniques in areas such as policy languages, infrastructure, service federation and cryptography.

The Privacy and Identity Management for Europe (PRIME) project [8], conducted in FP6 and the predecessor of the PrimeLife project, also offers valuable insight into privacy and identity management. It uses pseudonymous identities to achieve different levels of unlinkability between users and their personal data trails, in order to avoid profiling and preserve privacy. Moreover, it strives to give control back to the end user by designing an architecture that enforces pre-agreed data protection policies on shared objects. The functioning of such a design is highly dependent on the trust level given by end users to service providers. PRIME investigates the different layers of trust. A system that lets individuals share data with a pre-agreed data handling policy needs to be enforced by strong technical measures that provide trust and assurance. Major technical solutions to achieve trust are rooted in the verification of trusted platforms, in order to guarantee that remote services are privacy compliant.

The PrimeLife project follows the work outlined in PRIME. One of its major contributions is the investigation and design of a suitable policy framework that encompasses the privacy features which promote user-centrality and control of private data. The proposed solution is centred around the development of the PrimeLife Policy Language (PPL) [33], which is a proposed extension of the existing XACML [4] standard.

Figure 3.2: Collaboration Scenario (source: on design and implementationpublic.pdf)

The core idea of how PrimeLife intends to use PPL to facilitate privacy options can be described using the simple collaboration diagram in Figure 3.2. The scenario describes the interaction between the Data Subject (DS), who is considered the average user or data owner whose privacy needs protection; the Data Controller (DC), which denotes the wide range of service providers that the user can be interacting with; and the Third Party, which is another entity involved in the business process, like an associate of the service provider. The interaction is initiated by the Data Subject, who requests some sort of resource from the DC. The DC responds with its own request, describing what kind of information it expects from the user in exchange for the resource, and how it is willing to treat that information.

The description provided by the DC of how it will treat private personal data is called the Data Handling Policy (DHPol). The DS examines the list of requested information together with the DHPol, and combines it with his own Data Handling Preference (DHPref). The DHPref is the user's way to describe how he prefers his disclosed personal information to be treated. A combination of the DHPol and DHPref results in a Sticky Policy that is sent together with the requested personal data, in exchange for the resource. The Sticky Policy contains all the relevant data protection rules which have to be respected by the DC. The direct collaboration between DS and DC ends here. However, the DC may decide to forward the personal data collected from the DS to a Third Party. In this case, the DC has to consult the Sticky Policy first, in order to examine whether it is allowed to forward the information collected from the DS or not, and act accordingly.

In order to support such a scenario, an expressive language is needed. PPL is a highly descriptive and easily extendible language that can support the collaboration scenario described above. PPL builds on the existing Sticky Policy paradigm, which serves as the basis for many privacy and data security related research projects [5][9][28][22]. Sticky Policies are data access rules and obligations, formalized for machine interpretation, that are tied to the data object which they protect. The intuition behind them is that data moves across multiple control domains together with its associated Sticky Policy, which in turn describes how the data can be treated. This requires the data object to be closely coupled with its Sticky Policy. In order to assure that these policies do not get stripped off and ignored, certain Policy Enforcement Points (PEP) are required to enforce their usage.

One of the contributions that PPL brings to the existing Sticky Policy paradigm is the two-sided data handling policy/preference that lets the DS and DC formulate a sticky policy suitable for both of their needs. As PPL is designed to be interpreted by machines, it also comes with an automated matching engine that resolves conflicts between DHPol and DHPref. It is a symmetric language that requires both parties of the interaction to formulate their policies in this language. The language is expressive enough that complex policies can be formulated to accommodate different use case scenarios. Provisional actions and required credentials can be specified in order to require some authentication before authorization. Data can be kept under the protection of a purpose of usage, which constrains the actions that DCs can take with the collected data. It also allows users to express whether their data can be forwarded to third parties or not, and under what conditions. More complex use cases can be modelled through the use of obligations. Obligations are a set of actions that have to be taken when triggered by a specific event. For example, an obligation could specify that an acknowledgement be sent back to the data owner every time his shared personal data gets forwarded to a third party.

Novel methods for human-computer interaction are required in order to ease the task of formulating complex data protection policies for the end user, since DHPrefs rely entirely on the assumption that the end user is able to comprehend and formulate his own policy. Moreover, situations where the policy matching engine is unable to combine a DHPol with a DHPref require explicit consent and interaction from the end user before the process can continue. In order to keep the demand for human interaction low, an expressive User Interface (UI) needs to be provided.

This thesis considers the PrimeLife Policy Language (PPL) as its main tool by which privacy guarantees are provided. However, instead of focusing on the language components of the PPL, it targets the enforcement model that can be used together with it.

3.5 Other Privacy Enforcement Techniques

DRM approach

Digital Rights Management (DRM) systems are used to offer a protection mechanism for digital content distributed over the web. They offer technical means, such as cryptography and access control, to safeguard access to protected content. To achieve this, specialized software needs to be deployed on the machines of clients requesting access to these protected data objects. Once the digital content is distributed to the client side, the DRM system prevents unauthorized usage of it.

User privacy protection, just like distributed content protection, is concerned with the safeguarding of personal user data; here the valuable resource is the personal data itself. The parallel between the requirements of privacy protection and distributed content protection is easy to observe, since in both cases the protected resource is digital data. DRM-like solutions have been proposed to overcome the challenges of privacy protection [20]. The client-side DRM transforms into a Privacy Rights Management (PRM) system deployed at the Data Controller (DC). This new component is then responsible for safeguarding private user data once it has been disclosed, by enforcing the data protection policies applicable to the disclosed data.

It is worth mentioning that DRM systems are not bulletproof, in the sense that they fail to offer any kind of protection once digital data has been disclosed in plain sight. DRM offers only a limited amount of protection, which can sometimes be overcome by technical means, and a PRM system would suffer from the same limitations. Moreover, the operator of such a PRM system has to be trusted by the users who are willing to disclose personal information. Another consideration is that current DRM systems usually assume a client-server scenario, whereas with entities such as PDVs and interconnected service providers we are facing a much more distributed, peer-to-peer-like structure, where roles such as DS and DC can be applied interchangeably to a single entity depending on the context.

Trusted platform

Trust is one of the central requirements when it comes to sharing protected data between unknown entities. The Trusted Computing Group (TCG) defines trust as the expectation that a device will behave in a particular manner for a specific purpose [3]. The TCG offers a range of technical solutions to accommodate the rising need for secure systems. Security is a concern on both the software and the hardware level, and the TCG proposes an enhanced hardware extension that serves as the basis of a trusted system. The Trusted Platform Module (TPM) is a hardware component, closely integrated with the motherboard, that offers security features such as RSA key generation, cryptographic operations, and integrity checking. By possessing an embedded asymmetric keypair, the TPM is considered the root of trust for the platforms using it. Being a hardware component, it is also considered tamper resistant.

Several solutions have been proposed [22][30] for achieving privacy protection through the use of trusted hardware, and the TPM in particular, by means of software attestation techniques. Using the functionality of the TPM, the integrity of a running application can be attested dynamically. Checking the current state of an application against an expected value brings assurance of the validity of the application, proving that it has not been tampered with. Privacy protection solutions use remote software attestation to prove that a given software component is in a valid state on the remote machine. It can, for example, provide proof that a known privacy policy enforcing software component is in place on a remote server, which assures the end user that his protected data is in capable hands.

Cryptographic techniques

Cryptographic techniques are mainly used to ensure secrecy with regard to the safe storage and transport of sensitive information, and there are initiatives researching their use in the privacy protection domain as well. One of the proposed cryptographic models for privacy protection is called Type-based Proxy Re-Encryption (PRE) [36][18]. It assumes a semi-trusted Policy Enforcement Point (PEP) of an honest-but-curious nature, meaning that it is trusted to carry out user intentions, but is also curious about the shared data for its own purposes. The PEP is trusted to hold the data encrypted with the data owner's public key, together with its sticky policy. When a request for the data arrives, an authorization is first carried out against the Sticky Policy. On permit, the PEP re-encrypts the data such that only the recipient can see it. In this setting the PEP becomes the proxy that performs the re-encryption. The authors claim that if the receiving party and the PEP are not conspiring, it is safe to assume that the PEP is not able to decipher the protected data. The scheme employs asymmetric keys and assumes that key dissemination and identity verification are handled by a trusted third party.

The authors take this solution further with type-based PRE, which assumes that there are multiple proxies from which the user can choose depending on the secrecy and security that he or she needs. The advantage is that if one of the proxies gets compromised, there is only a partial loss of data. Following their vision, the PEP proxy can be the same as a semi-trusted service provider who is responsible for distributing personal data. A web-based health-record system, for example, is responsible for the safe storage and management of personal health records. In this simplified scenario, a doctor and a pharmaceutical company may both request a personal health record for different purposes. Let us assume that the owner of the health record specified in his policy that the data can be forwarded to his doctor, but not to any pharmaceutical company. Since the health-record system is only semi-trusted, the user stores his data encrypted with his public key, and only provides a re-encryption key tied to the identity of the trusted doctor. In this scenario the PEP of the health-record system will only be able to re-encrypt the ciphertext for the eligible doctor. Even if it tries to examine or forward the personal health record to the pharmaceutical company, all they will see is the ciphertext. Because of the encryption with the data owner's public key, his privacy is protected, since neither the health-record system nor the pharmaceutical company is able to decipher it. The solution outlined above, however, is only suitable for a subset of existing use cases. It does not take into account, for example, service providers who process user data in an automated manner; this becomes impossible under the PRE model, since the data is encrypted.

Other research projects [15] investigate the potential of self-destructing data. In order to avoid the persistence of user data in data copies, the self-destructing data model offers a method to render data unavailable after some period of time for everybody, even for the owner of the data. Their motivation is to avoid unauthorized disclosure of information even if it means losing the information completely. Some private data, such as private e-mails, do not need any persistence after they have been received and viewed. They employ a cryptographic method called threshold-based secret sharing, where a symmetric encryption key is split into multiple pieces but can be reconstructed from a threshold number of key pieces. By their design, personal data gets encrypted with a randomly generated key that is split into multiple pieces and scattered at pseudorandom locations in a Distributed Hash Table (DHT). The ciphertext, together with hints about the key pieces, is then transmitted to the recipient via some service. In order for the receiver to be able to decipher the data, he has to recompute the shared key from its pieces; retrieving a subset of the scattered key pieces from the DHT is enough to recompute the encryption key. Once the key is recomputed, he can access the received object. Their security model relies on the high churn rate [34] of the DHT, which makes key shares impossible to retain after a given time, either because responsible nodes leave the system or because the data expires and is deleted. The churn rate refers to the rate at which nodes enter and leave the DHT system.
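The splitting and scattering step can be illustrated with a short Python sketch. It assumes a Shamir-style (k, n) threshold scheme over a prime field and uses a plain dictionary to stand in for the DHT; the constants and names below are chosen for the example and are not taken from the cited system.

import os
import secrets

PRIME = 2**521 - 1  # a Mersenne prime comfortably larger than a 256-bit key


def split_secret(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]


def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total += yi * num * pow(den, -1, PRIME)
    return total % PRIME


# Encrypt-then-scatter flow (the symmetric encryption itself is elided):
# a random data key is split and its shares are spread over pseudorandom
# DHT locations; here a dict stands in for the DHT.
data_key = int.from_bytes(os.urandom(32), "big")
shares = split_secret(data_key, n=10, k=7)
dht = {("share", idx): share for idx, share in enumerate(shares)}

# Any 7 surviving shares suffice; once churn erases more than 3 of them,
# the data key, and with it the ciphertext, becomes unrecoverable.
surviving = list(dht.values())[:7]
assert recover_secret(surviving) == data_key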

3.6 Summary

This chapter focuses on the description of existing privacy enhancing techniques. The eXtensible Access Control Markup Language (XACML) is an accepted standard that comes with a descriptive resource protection language and a high-level architecture. Given its flexibility, it is employed as the basis for much privacy-related work. Usage Control (UCON) represents a vast research area focused on the protection of user data throughout its whole lifecycle: before authorization is granted, during authorization, and after authorization. Two main concepts it introduces are the mutability of attributes and the continuity of access decisions. TAS³ is another initiative focused on multiple aspects of privacy protection, mainly interoperability and federation; the requirements formulated in Section 1.3 are a subset of the high-level requirements defined by TAS³. The PrimeLife project offers a privacy protection model based around the PrimeLife Policy Language (PPL) and the Sticky Policy paradigm. Given the highly descriptive nature of the PPL, the presented research focuses on how it can be used together with Personal Data Vaults, and what kind of enforcement models can be built to support it. Digital Rights Management (DRM) systems exhibit similarities with privacy protection, although they do not cover every aspect of it. The Trusted Computing Group (TCG) conducted relevant research in developing a trusted computing platform, which in turn can be used for privacy protection. Other initiatives employ cryptographic methods to protect the correct dissemination of user data, granting access only to authorized parties.

4 System Design

Contents
4.1 PrimeLife Policy Language (PPL) Integration
4.2 Verifiable Privacy
4.3 Trusted Privacy
4.4 Mediated Privacy
4.5 Summary

Chapter 4 describes the policy enforcement models proposed by this thesis. The chapter begins with an evaluation of the PrimeLife Policy Language (PPL) in Section 4.1 with regard to its integration into the PDV design. Descriptions of the three privacy enforcement models follow, highlighting the novel solution proposed by this thesis in Section 4.4.

4.1 PrimeLife Policy Language (PPL) Integration

In order to meet the requirements in Section 1.3, we base our approaches on the existence of a well-defined policy framework. This policy framework has to provide an extensible and descriptive policy language that can easily be adapted to specialized use cases. The XACML policy framework is a suitable choice, since its abstract architecture design and flexible policy language make it applicable in a variety of use cases. Unfortunately, however, XACML was designed to provide a descriptive access control mechanism, and only comes with a weak privacy profile. The PrimeLife Policy Language (PPL), on the other hand, outlines a privacy-oriented XACML extension, which allows for a better approach. We will evaluate how the language features of the PPL fit our requirements.

Trust between two parties who are about to exchange personal information has to be established prior to any access control decision. PPL provides two language features that have to be fulfilled by the data requester: CredentialRequirements and ProvisionalActions. CredentialRequirements contains a set of credentials that have to be provided by the requester to attest a required attribute. These credentials are usually tied to a verifiable identity; by verifying each other's credentials, both parties can assume a basic trust level. ProvisionalActions can refer to any action that has to be carried out prior to any access decision. This can mean signing a statement or spending a credential (if the requested resource can only be accessed a limited number of times).

Transparency of user data handling refers to the DS's knowledge of how his personal data will be treated by the DC. The PPL facilitates the use of the Sticky Policy paradigm through the Data Handling Policy (DHPol) and the Data Handling Preference (DHPref). The DHPol is the DC's proposal on how private user data will be used. The final policy that provides transparency, however, is the sticky policy itself. Sticky Policies are created by resolving the DHPol and DHPref that refer to the same object, and are composed of Authorizations and Obligations. Authorizations describe a specific purpose for which a data object can be used, while Obligations can be used to express more fine-grained control. Authorizations also contain authorizations on downstream usage, together with a purpose. Downstream usage refers to the disclosure of personal information from the DC to Third Parties. This language feature allows describing how personal data can be forwarded and used across multiple control domains, and the purpose attached to the downstream usage gives the user even greater flexibility in describing under what circumstances data can be forwarded.
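To make the resolution of DHPol and DHPref more concrete, the following sketch shows one way a matching step could derive a sticky policy. The class names, fields and the simple intersection rule are illustrative simplifications and are not taken from the PPL specification.

from dataclasses import dataclass


@dataclass
class DataHandlingTerms:
    purposes: set[str]                 # purposes the data may be used for
    downstream_purposes: set[str]      # purposes allowed for third parties
    retention_days: int                # how long the data may be kept


@dataclass
class StickyPolicy:
    purposes: set[str]
    downstream_purposes: set[str]
    retention_days: int
    mismatch: bool = False             # True -> explicit user consent needed


def match(dhpol: DataHandlingTerms, dhpref: DataHandlingTerms) -> StickyPolicy:
    """Combine a DHPol and a DHPref into a sticky policy.

    The result is at most as permissive as either side; if the DC asks for
    purposes the DS did not allow, the mismatch flag signals that the user
    has to be consulted before the exchange continues.
    """
    purposes = dhpol.purposes & dhpref.purposes
    downstream = dhpol.downstream_purposes & dhpref.downstream_purposes
    retention = min(dhpol.retention_days, dhpref.retention_days)
    mismatch = not dhpol.purposes <= dhpref.purposes
    return StickyPolicy(purposes, downstream, retention, mismatch)


# Example: the DC also wants the data for marketing, which the DS refused.
dhpol = DataHandlingTerms({"treatment", "marketing"}, {"treatment"}, 365)
dhpref = DataHandlingTerms({"treatment", "research"}, set(), 90)
print(match(dhpol, dhpref))   # purposes={'treatment'}, no downstream, 90 days, mismatch=True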

In data forwarding scenarios the forwarded data copy has to carry a Sticky Policy at least as strict as that of the original copy, in order to avoid degradation of the protection level.

The main language feature that offers control to the end user is the Sticky Policy itself. Control over the usage of a specific piece of shared private information can be achieved by modifying the attached Sticky Policy. This method allows the user both to modify and to revoke accesses. Obligations, being part of Sticky Policies, allow the user to set constraints on data after it has already been shared. One such Obligation, for example, could require the DC to delete the collected data after a specified amount of time.

The architecture of the system outlined by the PrimeLife project requires specialized software, or multiple interconnected software components, responsible for carrying out the features described by the language. Moreover, it is supposed to do this in a highly automated manner: working with predefined access control policies, matching DHPol with DHPref, and enforcing Sticky Policies. This always-on software component can be associated with the traditional access control systems of service providers, which portray the DC. However, this specialized software also has to be present at the DS side, which is often the end user. The PDV is a suitable data organization scheme which can integrate any kind of specialized software. This PrimeLife architecture also shows a strong resemblance to the initial XACML architecture presented in Section 3.1, relying on components such as the Policy Enforcement Point (PEP) and the Policy Decision Point (PDP) to carry out access decisions. The PEP component, however, becomes a crucial building block that is responsible for evaluating and enforcing Sticky Policies. We will refer to this specialized software, typically residing on a PEP and enforcing privacy policies, as the Privacy Manager (PM).

As the Sticky Policy is considered the main element of user data protection, we also introduce the abstraction of Protected Data (PD). The PD encapsulates the user data object and its Sticky Policy in a single unbreakable logical unit. Throughout the formalization of the policy enforcement models we will use the PD terminology when talking about a shared data object guarded by a Sticky Policy. The following sections detail the design and description of the enforcement models that are applied to provide privacy guarantees through Sticky Policy enforcement using the PM.
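As an illustration of the PD abstraction, a minimal sketch follows. The fields, the simplified policy structure and the retention check are assumptions made for the example rather than a prescribed format.

from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Obligation:
    action: str                        # e.g. "delete", "notify_owner"
    trigger: str                       # e.g. "retention_expired", "forwarded"


@dataclass
class StickyPolicy:
    purposes: set[str]
    downstream_purposes: set[str]
    retention: timedelta
    obligations: list[Obligation] = field(default_factory=list)


@dataclass
class ProtectedData:
    payload: bytes                     # the (possibly encrypted) user data
    policy: StickyPolicy
    created: datetime = field(default_factory=datetime.utcnow)

    def allows(self, purpose: str, downstream: bool = False) -> bool:
        allowed = self.policy.downstream_purposes if downstream else self.policy.purposes
        return purpose in allowed and datetime.utcnow() < self.created + self.policy.retention


# Example: a health record usable for treatment only, never forwarded,
# and to be deleted after 90 days.
pd = ProtectedData(
    payload=b"...personal health record...",
    policy=StickyPolicy(
        purposes={"treatment"},
        downstream_purposes=set(),
        retention=timedelta(days=90),
        obligations=[Obligation("delete", "retention_expired")],
    ),
)
assert pd.allows("treatment") and not pd.allows("treatment", downstream=True)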

4.2 Verifiable Privacy

This section presents the Verifiable Privacy (VP) policy enforcement model, together with aspects of its architecture design and its interaction model.

Description

This model relies on remote software verification and monitoring solutions, hence its name: Verifiable Privacy. This section describes a solution involving enhanced hardware security. As the software systems running on today's machines become more complex and layered, keeping track of security aspects becomes increasingly difficult. Software bugs and vulnerabilities are an unavoidable side effect of every system in production. In order to mitigate the problem of insecure software solutions, today's hardware components are built with strong security aspects in mind. The Trusted Computing Group (TCG) is a pioneer in the field of secure hardware. They offer a range of integrated components that can help carry out certain security measures. One of their main focus areas is the Trusted Platform Module (TPM), an embedded hardware component that provides a root of trust in the system. By providing strong cryptographic functionality together with key generation, integrity checks, storage and reporting, the TPM provides a form of attestation of the security measures of the software running on top of it. In more detail, the TPM provides signed attestations of PCRs (Platform Configuration Registers), which contain information regarding the integrity, configuration and state of a software component. These signed attestations can be verified by external parties. The TPM does not provide a complete security solution on its own; rather, it serves as a basis of trust between entities.

The Verifiable Privacy relies on a solution that harnesses this security-enhanced hardware technology as an enforcement and trust mechanism. On top of this hardware, a DRM-like software solution is responsible for attesting and verifying the privacy settings of sensitive data. This DRM-like solution, referred to as the Privacy Manager (PM), intercepts all accesses to private data from running applications and performs local access control decisions. The correct functioning of the TPM and the PM components is supported by another mechanism that keeps the running applications in a secure sandbox, isolated from unauthorized actions. In the next sections we elaborate on how these components fit together and what their responsibilities are.

Prerequisites

Since the Verifiable Privacy employs security-enhanced hardware, it is a prerequisite that every actor and machine involved in the transactions be equipped with a TPM. We assume that these machines are secured against physical tampering, rendering the TPMs tamper-proof. The TPM is also responsible for key generation and management for multiple purposes. It generates asymmetric keys for both the Privacy Manager (PM) and any application running on top of the platform. The TPM is also used to verify that the public keys of these software components are indeed bound to that specific machine. It has an internal safe storage of known keys, which can be used to re-encrypt data depending on the requester.

Apart from the keys that are meant to be used by software, TPMs come equipped with a root key whose private part is embedded in the hardware. Encrypting data with the public counterpart of this root key brings assurance that every data access by any software will have to consult the TPM first in order to release the private information.

Moreover, the PM should also be present on each PDV and service provider, in order to provide an interface for exchanging privacy-related information between the actors. This component can be placed at different layers, as we will see later on, but its main purpose remains to carry out privacy-related actions such as remote attestation, trust establishment, and policy enforcement. We also assume the existence of a certain Trusted Third Party (TTP), which plays an important role in the correct functioning of the monitoring and assurance system described below.

Architecture

This solution approaches the sticky policy enforcement problem by assuming that every machine involved in handling protected user data is essentially a Policy Enforcement Point (PEP). As such, it focuses on the design of a common architecture for PEPs that facilitates the interoperability of the system across multiple nodes, regardless of their control domain. When designing the architecture of a single PEP, we are faced with multiple choices. The base architecture, however, as depicted in Figure 4.1, stays the same. As one of our prerequisites, we have the TPM-equipped hardware at the bottom layer.

Figure 4.1: Verifiable Privacy: Abstract Architecture of a single PEP node

On top of the hardware layer we have an abstraction called the Common Platform. When deciding what the common platform should be, we have to take into consideration the level of isolation that such a system is required to provide. Applications have to reside in their own isolated space, such that interactions that happen outside of this isolated space can be monitored.

This restriction becomes especially important when private data objects are transmitted to third parties. The communication between applications, and the transmission of data between two separate isolated spaces, should happen with the consent and permission of the PM, which in turn enforces Sticky Policies. In practice the Common Platform can be one of two things:

1. A trusted operating system could take the place of the Common Platform. The isolation space, in this case, would be provided by the process Virtual Machines (VM) of the shared operating system. Monitoring would be done on the hosting operating system, since inter-process communication and external communication all go through the operating system.

2. Another solution would be to replace the Common Platform with a hypervisor, and let standalone services run in their own system virtual machines, thus offering isolation at the operating system level. Virtualization technology is maturing quickly, sometimes achieving nearly native operating system speeds, and is a commonly employed solution in cloud environments, which in turn host several client-oriented services on the web. System VMs are much more heavyweight than their process VM counterparts, so some planning is needed when instantiating new services in order not to waste resources.

The strength of this model is also its drawback. Having applications run in their isolated spaces with the PM attached to them ensures that they are subjected to continuous monitoring and verification: the Verifier component makes sure that only eligible applications get access to personal data, while the Monitor keeps track of ongoing system events to avoid misuse. Through monitoring and verification the system delivers proof of trust and assurance to its users. This, however, comes at the price of a strict architecture design.

Privacy Manager Architecture

The Privacy Manager (PM) is the specialized software component responsible for the localized enforcement of privacy policies. Whenever a Protected Data (PD) object is requested, either by an internal application or by an external entity, the PM is trusted to evaluate and enforce the Sticky Policy of the respective PD. Moreover, it is also responsible for delivering trust and assurance of its correct functioning through the Verifier and the Monitor components.

A Verifier

Verification is a pro-active measure taken prior to any data disclosure, and it is at the heart of this model. Trust that the user's intentions are going to be carried out is partially rooted in the verification system. As complex systems are built in multiple layers, it is important to provide verification from the lowest level (the hardware) to the highest one (the applications providing a certain service).
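Before detailing these layers, the underlying measure-and-attest idea can be sketched in a much-simplified form. The sketch below models PCR-style hash chaining and quoting with plain hashing and an HMAC standing in for the TPM's attestation key; it does not use a real TPM interface, and all names, measurements and expected values are hypothetical.

import hashlib
import hmac

ATTESTATION_KEY = b"stand-in for the TPM attestation identity key"


def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR extend: new_pcr = H(old_pcr || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()


def quote(pcr: bytes, nonce: bytes) -> bytes:
    """Sign the PCR value together with the verifier's fresh nonce."""
    return hmac.new(ATTESTATION_KEY, pcr + nonce, hashlib.sha256).digest()


def verify(pcr: bytes, nonce: bytes, signature: bytes, expected_pcr: bytes) -> bool:
    """Verifier side: check the signature and the expected software state."""
    good_sig = hmac.compare_digest(quote(pcr, nonce), signature)
    return good_sig and hmac.compare_digest(pcr, expected_pcr)


# Boot-time measurements of the privacy manager binary and its configuration.
pcr = b"\x00" * 32
for component in (b"privacy-manager-v1.2", b"pm-config"):
    pcr = extend(pcr, component)

# The remote verifier knows the expected PCR value for an untampered PM
# and supplies a fresh nonce to prevent replay of old quotes.
expected = b"\x00" * 32
for component in (b"privacy-manager-v1.2", b"pm-config"):
    expected = extend(expected, component)

nonce = b"fresh-verifier-nonce"
assert verify(pcr, nonce, quote(pcr, nonce), expected)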

Hardware verification is at the bottom layer and is done by the technology developed by the TCG. The TPM assists the software verifier in attesting that a specific software component is indeed running on top of the host platform. The states of different applications are kept hashed in the TPM registers, and they are signed and transmitted to any requesting party on demand. This way, the requester can be assured that the machine it is communicating with has a software component running in the reported state. The Verifier component of the PM is responsible for carrying out the TPM-assisted software verification.

We distinguish two independent software components that need verification: the application providing some service, and the PM itself. In order to build a trust framework, the verification of these components is carried out by different means. The PM component has to be verified to be in a valid state, since it is the core policy enforcing mechanism of the model. To provide assurance of a correctly functioning PM, remote software verification is needed, where the verifier entity is independent of the verified subject. The preferred solution would be to make the communicating parties verify each other's PMs. This would require an open PM specification and design, such that all of its valid states are known prior to any interaction and are verifiable by anybody. An alternative would be to outsource the responsibility of verification to a Trusted Third Party (TTP), which could be the developer of the PM or any other authority. An additional TTP, however, will affect the scalability and complexity of the whole system. Further discussion on the identity of the verifier is out of scope for this thesis. The verification of the application component, on the other hand, can be carried out locally at every node by the PM. The intuition is that since the PM is remotely verified, it is trusted to carry out local verifications truthfully. The Verifier component is entrusted with a local, TPM-attested software verification of every application that requests access to some protected resource.

B Monitor

Verification on its own gives only partial assurance about the behaviour of the communication partner. Certificates confirming the state of a remote software component could be vague or not descriptive enough. The Monitor component complements the Verifier by providing a reactive monitoring service, in order to keep track of ongoing actions in the system and notify the PEP whenever a potentially illegal operation is encountered. Monitoring goes hand in hand with log keeping. Logs are a powerful mechanism for reviewing past events and serve as evidence for or against a violation. The TPM assists the monitoring system by providing authenticity for the logs, as long as the monitoring system is trusted to do proper bookkeeping.

The main reason for executing applications in their own isolated space is that they can be monitored from the outside. Both process and system VMs offer ways to monitor and intercept system calls and translate them into native behaviour. This way the monitoring service can attach itself to crucial system calls and monitor their execution. A store operation, for example, could be evaluated before execution, in order to verify whether the sticky policy allows the data to be stored. Interactions often require data to be transmitted between different applications. These communicating parties can be either internal or external, and both need a method of monitoring. Two applications are internal if they both reside on the same machine, and external otherwise.

Interaction Models

In the following sections we examine the interaction of two separate entities, highlighting the important parts of the protocol used for exchanging Protected Data (PD) objects. Afterwards, the case where multiple Data Controllers request the same PD is examined.

A Data Flow

The Privacy Manager is responsible for managing private user data that has been shared with a remote system. Just like a DRM system, the PM treats the user data as the protected resource and applies access control to it. Moreover, it goes beyond a standard DRM system by providing fine-grained access control that evaluates data accesses on a per-application basis: every application is evaluated independently before being granted access to data.

Figure 4.2: Verifiable Privacy: Interaction diagram between a PDV and a Service Provider (SP)

A high-level interaction diagram can be seen in Figure 4.2, which follows a simple scenario with the PDV playing the role of the DS and the SP being the DC. The App, which is the service running at the SP, requests some user data from the PDV. The first two steps are part of the communication protocol between the two actors, by which they establish trust and share protected information.

The third step describes the data access by the requester App on the SP side, while the fourth step depicts a potential forwarding of data to an external entity. Note that an internal access from a second application, App2, would also happen through the PM. An external forwarding, on the other hand, will initiate another round of the communication protocol involving steps 1 and 2, with the SP playing the role of the resource owner. In order to accommodate a system where the roles of DS and DC can be assigned to PDVs and service providers interchangeably depending on the context, the communication protocol should be the same regardless of the real identity of the actors. As a result, an interaction diagram between two PDVs or two service providers would follow the same principles.

The first step of the communication protocol establishes the trust relationship between the two parties through mutual verification of each other's systems. Usually the data requester (the service provider in our case) initiates the protocol by sending a signed certificate proving the validity of the Privacy Manager (PM) component running on its machine. This certificate is usually signed by the TPM_SP, proving that the PM_SP has not been tampered with and is in a valid state. It also contains the public key of the PM_SP, such that the user can encrypt sensitive data with it. A similar certificate from the PDV is sent back to the SP as proof that the PM_PDV is also genuine and is attested by the TPM_PDV.

The second step in the communication protocol is the exchange of private information between the parties. In our case, the PDV shares some information with the SP. The shared data is encrypted with a secret key that is also transferred, protected under the receiver's public key. Moreover, the data is bundled together with its Sticky Policy, forming a Protected Data (PD) object. The Protected Data is kept in the secure storage of the PM_SP. After the communication protocol has concluded, the copy residing on the SP side can be requested by applications running on the same machine. Since the data is kept encrypted with a secret key, in order for an application to gain access to it, it first needs to be re-encrypted with Pub_App by the PM_SP. The PM_SP only does this after the state of the App_SP has been verified to be valid and in accordance with the Sticky Policy guarding the data. Once access is granted, the App receives a copy of the data protected by its public key.

The final step in the interaction diagram is the forwarding of shared data to a third party. This step is part of the interaction diagram because data forwarding happens very frequently in every data processing system. Whenever data is forwarded, another round of the communication protocol described in steps 1 and 2 is initiated by the SP and the Third Party. In this interaction the SP assumes the role of the data owner (DS) and the Third Party becomes the requester (DC) who initiates the protocol. Trust is established between the parties just as before, but before the data transfer can take place the SP has to verify that the forwarding is in accordance with the Sticky Policy. As long as the PM_SP has been verified to be in a correct state, the PM is trusted to carry out the user's preferences.
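The second step of this protocol can be sketched as a standard hybrid encryption exchange. The example below uses the third-party Python cryptography package (assumed to be available); the in-memory keypair stands in for keys that would normally be generated and attested by the TPM_SP, and the dictionary wrapper is a simplification of the PD object.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.fernet import Fernet

# Stand-in for the PM_SP keypair normally generated and attested by TPM_SP.
pm_sp_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pm_sp_public = pm_sp_private.public_key()

# PDV side: symmetric encryption of the record, then key wrapping under the
# receiving PM's public key taken from its verification certificate.
record = b"personal health record ..."
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(record)
wrapped_key = pm_sp_public.encrypt(
    data_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
protected_data = {"ciphertext": ciphertext, "wrapped_key": wrapped_key,
                  "sticky_policy": {"purposes": ["treatment"]}}

# PM_SP side: after verifying the requesting App against the sticky policy,
# unwrap the data key and release (or re-encrypt) the record.
recovered_key = pm_sp_private.decrypt(
    protected_data["wrapped_key"],
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert Fernet(recovered_key).decrypt(protected_data["ciphertext"]) == record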

Every forwarding action on private user data should be logged by every party, such that proof can be provided to the original data owner that his intentions were enforced during data processing. Logs should be aggregated by the original data requester and provided to the original data owner on a periodic basis.

B Forwarding Chain

The Data Flow, described in Section A, only specifies the interaction of two entities. The Forwarding Chain, on the other hand, describes how data is shared across multiple parties. The Forwarding Chain is a tree-like structure of nodes that share a copy of a Protected Data (PD) object. The root of the Forwarding Chain is the source of the private data, such as a PDV. Figure 4.3 illustrates how a Forwarding Chain might be built up around a single user object, in this case a Personal Health Record (PHR). Consider a follow-up of the healthcare scenario presented earlier: the owner of the PDV shares his PHR with a Hospital Service under the protection of a Sticky Policy. The Hospital Service is in close collaboration with two other entities, a Pharmacy and a Research Center, so it shares the collected Protected Data with them, assuming the Sticky Policy allows it. These two entities can in turn share the Protected Data themselves, as in the case of the Research Center publishing information on a News Service, thus creating a chain of forwarded data. It is worth noting that every link between two entities in this diagram represents an interaction based on Section A.

Figure 4.3: Verifiable Privacy: Example of Forwarding Chain on Personal Health Record

Whenever Protected Data is forwarded to a Third Party, we can distinguish three different scenarios:

1. In the simplest case the Protected Data is shared as a whole, without modification. In this case the data copy can be considered a duplicate.

2. DCs might decide to share only a fragment of the original data, in order to promote data minimization.

The Hospital Service, for example, might decide to share only a subset of the information residing in a PHR with its Pharmacy partner. The data fragment, however, still has to be protected with the same Sticky Policy, to ensure that it will not be misused.

3. In some cases the DC might want to disclose protected data under a stronger level of protection. In our example, the Hospital Service shares the PHR with the Research Center under a stricter Sticky Policy, thus limiting the scope of usage of the data.

When Protected Data is forwarded, Sticky Policies can always be made stricter, but never weaker. This rule ensures that the original policies set by the data owner will always be respected. In order to maintain the Forwarding Chain structure, every node is responsible for keeping routing tables with pointers to the previously disclosed data. The maintenance of up-to-date pointers is a crucial requirement for the logging and control system described below.

The Monitor component, being part of every PM, is responsible for keeping logs on every node. Logs provide traces of the processing of every Protected Data object, which can be viewed as assurance of data protection. Given the distributed nature of the Forwarding Chain, every node holds a fragment of the logs relevant to a single data piece. In our previous example, each entity keeps logs about the data processing done on the shared PHR. In order to turn logs into assurance, they have to be aggregated and verified. Verification could also be carried out locally at every node, thus skipping the aggregation step; the local solution, however, offers relatively less assurance than its counterpart. We delegate the responsibility of log aggregation and verification to a Trusted Third Party (TTP), which can play the role of an audit company. The TTP has to collect these logs, either through direct collaboration or by some other means, and perform a verification on them. A final digest is then periodically sent to the data owner as the final form of assurance. Policy violations that show up in the logs are also included in the digest.

In order to maintain control over already shared data objects, the Forwarding Chain also assists in manipulating and revoking accesses on Protected Data (PD). When a data owner wishes to update the Sticky Policy attached to some object, he can do so using a push method that propagates his updates starting from the initial data requester. It is important that the pointers are kept fresh, so that the chain is not broken. Every party that holds a copy of the shared data has to update its policy locally in case of an update, and forward the update operation to all of its children in the chain. Every node is also responsible for collecting acknowledgements of the success of the operation and notifying the user about the process.
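A compact sketch of this push-based policy update is given below. The tree structure, version counter and acknowledgement map are illustrative assumptions; a real chain would also have to deal with unreachable nodes and stale pointers.

from dataclasses import dataclass, field


@dataclass
class ChainNode:
    name: str
    policy_version: int = 1
    children: list["ChainNode"] = field(default_factory=list)

    def push_policy_update(self, new_version: int) -> dict[str, bool]:
        """Apply the sticky-policy update locally, then propagate it.

        Returns a map of node name -> acknowledgement, aggregated upwards so
        the data owner can see whether the whole chain was reached.
        """
        self.policy_version = new_version
        acks = {self.name: True}
        for child in self.children:
            acks.update(child.push_policy_update(new_version))
        return acks


# The healthcare example: PDV -> Hospital Service -> {Pharmacy,
# Research Center -> News Service}. The PDV owner tightens the policy.
news = ChainNode("News Service")
research = ChainNode("Research Center", children=[news])
pharmacy = ChainNode("Pharmacy")
hospital = ChainNode("Hospital Service", children=[pharmacy, research])

acknowledgements = hospital.push_policy_update(new_version=2)
print(acknowledgements)
# {'Hospital Service': True, 'Pharmacy': True, 'Research Center': True,
#  'News Service': True}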

4.3 Trusted Privacy

This section describes the architecture and functioning of a model which closely resembles the design outlined by the PrimeLife project [5] and its predecessor, the PRIME project [8]. Both are vast projects with years of research behind them. The scope of this description, however, is to present the main underlying model and design that these projects follow in order to provide policy enforcement.

Description

Much like the Verifiable Privacy described in Section 4.2, the Trusted Privacy relies on the use of specialized software: the Privacy Manager (PM). The architecture supporting the PM component is relaxed by employing a middleware-oriented design. Apart from the basic Sticky Policy enforcement that is guaranteed by the PM, the model comes with a different view on the employed trust framework. As its name suggests, the Trusted Privacy model relies on the correct functioning of the trust framework, which is the composition of two independent sources of trust.

Prerequisites

The Trusted Privacy (TP) model assumes an active PM system on every participating actor, both PDVs and service providers. In order to assure full functionality, these components should be fully compatible with one another and tamper-free. For PDVs, the PM acts as a client-side protection system designed to govern every interaction on the user's personal data. Queries on user data are passed through the PM layer, which ensures that only requesters found eligible can access the protected resource. Incorporating such a component into the PDVs is a straightforward task. On the other hand, the PM system must also be present at the service provider. This resembles a server-side system which acts as DRM software, protecting shared user data. The server-side PM component holds the responsibility to act and protect on behalf of the clients, by respecting Sticky Policies. As the PM is a central component of the model, it has to be fully trusted; the mechanisms by which this trust can be achieved are described below. The existence of TTPs, which play an important role in achieving the desired trust level, is also assumed.

Architecture

The PRIME project defines two different PM components: one for the client and one for the server. In the original PRIME project the client- and server-side components are different systems with different responsibilities; in the PrimeLife project, however, the two blend into a single component. Our scenario needs to accommodate PDVs together with service providers, and multiple interactions between them. This requirement leads to a need for uniformity. DS and DC are clear abstractions of PDVs and service providers. These roles are not fixed, but rather dynamically assumed based on the context. For example, if PDV1 requests some data from PDV2, it is clear that PDV1 is the Data Controller and PDV2 is the Data Subject.

However, if PDV1 then decides to forward the collected data to a service provider, PDV1 becomes the Data Subject and the service provider the Data Controller. It is easy to see that PDVs and service providers can assume both the Data Subject and the Data Controller role. The need for uniformity discouraged us from using distinct PM components; thus the PM residing on the service provider has to have the same functionality as the one on the PDVs.

Figure 4.4: Trusted Privacy: Abstract Architecture of a single PEP node

Conceptually, the PM sits on top of the persistence layer (or Database), as shown in Figure 4.4. This way it takes the role of a middleware that governs access to the database system underneath. The example of Figure 4.4 depicts a PEP node with the installed PM middleware. The Database system on top of the OS is entrusted with the safekeeping of stored data, and only lets itself be queried from the layer sitting right above it, and not from any of the higher layers. This prevents the situation where Apps try to bypass the PM in order to get unrestricted access to some Protected Data (PD). The PM is a middleware that mediates access to PD from the upper Application layer. Apart from safe storage and safe access to stored objects, the PM middleware also plays the role of a monitoring filter. Since interactions between Apps and remote or local systems are usually mediated through the OS, the convenient placement of the PM allows it to check ongoing interactions for policy violations.

Privacy Manager Architecture

The Privacy Manager (PM) middleware closely resembles the PM of the Verifiable Privacy (VP) model in its functionality; however, the mechanisms by which trust and assurance are provided are different. The description of the Trust Negotiator and the Monitor components follows.

A Trust Negotiator

Although using the PM means that every privacy rule is enforced by the middleware itself, our model still lacks components that provide trust in the infrastructure. The Trusted Privacy addresses the trust framework by outsourcing trust to TTPs, with the use of privacy seals and reputation systems. With the introduction of a new component into the PM, called the Trust Policy Negotiator, users can evaluate the trustworthiness of an entity they are about to interact with. It gathers trust information and compiles it in a meaningful way.

If the user is actively taking part in the interaction, this trust information should be presented to him in an intuitive way through the user interface. On the other hand, if the user is receiving a query, the PDV should be able to evaluate the trust level provided by this component automatically, and carry out a decision based on it. The sources and mechanisms by which trust is evaluated are:

Privacy and Trust Seals offer assurance that the remote party will not violate the privacy policies previously agreed upon. These seals are usually certified by TTPs. They provide proof that the system run by the remote party lives up to certain security and privacy standards, or uses a certain software solution; for example, a seal can provide assurance that a service provider uses the PM in its backend system. We can distinguish two types of trust seals:

1. Static Trust Seals are simple documents signed by the TTP, which attest the correct state and functioning of a system at a given moment in time. Since these static trust seals come with a certain validity window, they need to be re-evaluated and re-issued in order to provide up-to-date proof. Since today's infrastructure is highly dynamic, these certificates might not be up to date all the time, as new threats and vulnerabilities surface more frequently than certificates are re-issued.

2. Dynamic Trust Seals are generated in real time by the machine serving the user's request. Dynamic Seals are only trustworthy if the process by which they are generated is also trustworthy. Usually these documents are generated with the assistance of tamper-proof hardware which attests their validity. Dynamic Trust Seals closely resemble the verification certificate that is provided by the Verifiable Privacy in Section A.

Through the security claims provided by Trust Seals, a trust score can be evaluated for every remote party. It is worth mentioning that a Dynamic Trust Seal, if attested correctly, always provides a higher trust score than its static counterpart. The flexibility of the Trusted Privacy model lets each individual PEP node decide what form of trust certification it is willing to provide.

Reputation Systems are considered the secondary source of trust in this model. The model assumes the existence of multiple independent reputation systems, such as customer feedback services or blacklist providers. Blacklist providers keep track of constant policy violators and notify every actor who tries to initiate an interaction with them. A User Feedback system harnesses the power of the crowd by collecting individual opinions or experiences of previous interactions. External reputation providers also have to be trusted to base their rating on a well-defined and relevant scale. In the case of feedback from the crowd, on the other hand, the trust is divided between anonymous users who may or may not provide correct information. The scores collected from the available Reputation Systems are combined into a reputation score. The reputation score should have its own scale, independent of the scales used by the actual sources of the score.

After the interaction with the relevant TTPs, the Trust Negotiator combines the trust score and the reputation score into a final score. Based on this final value, different levels of trust can be quantified, aiding automated decision making. The intuition behind outsourcing trust to multiple sources is that many independent trust scores from independent authorities can complement or cancel each other out, leaving the end user with a trustworthy estimate. This, of course, only works under the assumption that the TTPs are truly independent and are not conspiring to provide a pre-agreed score.
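A toy illustration of this combination step follows. The 0-to-1 scales, the weights, the seal bonus and the threshold are invented for the example; the model itself does not prescribe a concrete formula.

from dataclasses import dataclass


@dataclass
class TrustEvidence:
    static_seal: bool                 # signed seal with a validity window
    dynamic_seal: bool                # freshly generated, hardware-attested seal
    reputation_scores: list[float]    # each already normalized to [0, 1]
    blacklisted: bool = False


def final_score(evidence: TrustEvidence,
                seal_weight: float = 0.6,
                reputation_weight: float = 0.4) -> float:
    if evidence.blacklisted:
        return 0.0
    # A dynamic seal counts for more than a static one, as argued above.
    seal_score = 1.0 if evidence.dynamic_seal else 0.5 if evidence.static_seal else 0.0
    reputation = (sum(evidence.reputation_scores) / len(evidence.reputation_scores)
                  if evidence.reputation_scores else 0.0)
    return seal_weight * seal_score + reputation_weight * reputation


evidence = TrustEvidence(static_seal=True, dynamic_seal=False,
                         reputation_scores=[0.9, 0.7, 0.8])
TRUST_THRESHOLD = 0.6
print(final_score(evidence), final_score(evidence) >= TRUST_THRESHOLD)
# 0.62 True -> the interaction may continue with the data exchange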

B Monitor

The Monitor component integrated in the PM is built to achieve the same functionality as described in Section B. Instead of isolated spaces, this model uses the middleware approach to intercept and react to unauthorized operations issued by the application layer.

Interaction Models

The following section presents the interaction model covering the data flow between two remote entities, with aspects regarding trust establishment and execution paths.

A Data Flow

The first interaction between the two parties focuses on establishing trust through the use of the Trust Negotiator. The Trust Negotiator gathers all relevant trust and reputation scores and computes the final score for the remote party. If the final score satisfies the predefined trust threshold, the interaction continues with the exchange of the desired protected data.

Figure 4.5: Trusted Privacy: Interaction Model of the Data Flow

The PD can take multiple paths once it has been shared with an external entity. Figure 4.5 depicts how a PD is handled by the PEP of a service provider. The PD is passed through the PM to the Application Layer, which carries out the service provider logic.

Two usual use cases are storing and forwarding the processed data. Both of these operations have to pass through the PM middleware, which evaluates whether the data is allowed to be stored or forwarded, respectively. The evaluation is carried out based on the Sticky Policies attached to the data objects. Similarly, PDs returned as results of a database query are also subject to evaluation: the PM only lets data through to applications which are authorized to operate on the requested data.

B Forwarding Chain

Since monitoring is carried out individually at every PEP node, we are again faced with the problem of transforming logs into assurance. Just like the logging system described in Section B, the Trusted Privacy also relies on the use of the Forwarding Chain when it comes to the modification of Sticky Policies by end users and to log verification. We introduce a slight deviation, however, in the way that logs are aggregated from the Forwarding Chain and verified. We eliminate the requirement of a TTP playing the role of an audit company, and substitute it with a different scheme. The aggregation of logs is the responsibility of the original data requester, who is in direct contact with the PDV. In the example presented in Section 4.3, the Hospital Service has to aggregate the PHR logs using a pull method. Every node in the chain is responsible for forwarding the pull request to its children and then returning the gathered logs to its parent. Verification is carried out by the PDV, which is the owner of the shared data on which the logs were produced. By providing the logs to the end users directly, we intend to achieve a higher level of assurance than that of a simple digest from an external entity. PDVs are left with the responsibility to verify the aggregated logs and alert the users first-hand of suspicious behaviour. This offers a much finer granularity of log verification, since PDVs can extract any requested information from the raw logs.

4.4 Mediated Privacy

In the upcoming sections the novel policy enforcement model proposed by this thesis is presented, tailored to fit the defined requirements.

Description

The Mediated Privacy sticky policy enforcement model makes use of a mediated space between DSs and DCs, on which shared data lives. The requirements based on the user-centric model motivated us to design this mediated space, in order to improve awareness of and control over the disclosed personal information. The mediated space does not belong to a single controlling entity; instead it focuses on providing a platform where DSs and DCs can interact on equal terms.

The idea of a mediated space can easily be captured by the concept of a Distributed Hash Table (DHT) [34]. DHTs are decentralized overlay networks where each node is seen as equal. Nodes forming this overlay are responsible for maintaining a predefined keyspace, meaning that every node is responsible for a subset of the keyspace, called its keyspace slice. New data is entered into the DHT under a key, called the LookupKey, which is hashed in order to compute its place in the keyspace. Its place in the keyspace determines the node which will physically host the data. In this model we employ the concept of the DHT as our mediated space. Users are aware of all existing copies of their personal data throughout the system simply by maintaining a set of LookupKeys in the DHT. Awareness about who accesses the data is also improved by tracking search queries targeted at a LookupKey. By holding the LookupKey for each personal data item, users are in charge of modifying and deleting them at any given time, greatly improving control.

Prerequisites

One of the base prerequisites for our model is the existence of a DHT overlay network. DHTs are widely employed distributed data stores in today's data-dominated world, since they scale well and offer a quick lookup of O(log(N)). On the other hand, only a few systems consider them as a building block for data privacy [15]. Our design requires both DS and DC entities to be active peers of the DHT. A follow-up assumption of the Mediated Privacy model states that data introduced into the DHT should only be queried and distributed through the DHT itself, avoiding the trading of personal data through outside copies. Distribution of the private data should only happen with the user's consent. DCs who wish to distribute user data are required to do so by sharing the LookupKey under which the specific data can be found. Such requirements rely on the actors of the system obeying this rule.

Architecture

In the upcoming sections we present how the DHT overlay network is formed around the DC and DS peers. Peers of the DHT, regardless of whether they are part of a PDV or a service provider, all operate on three layers. Figure 4.6 depicts the high-level architecture of a single DHT node, based on these three layers. The bottom layer, which serves as the base for the other two, incorporates all the conventional DHT functionality. This includes the maintenance of the overlay topology and the serving of basic operations, such as insert and retrieve. The Privacy Manager layer, on top of the DHT layer, is responsible for safeguarding protected data objects and for trust establishment. The Logging layer sits on top of the stack and is responsible for keeping track of every DHT event regarding operations on private data. The following sections present the functionality of every layer in detail.
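The LookupKey mechanism described above can be illustrated with a short sketch that hashes keys and node identifiers onto a Chord-like ring and resolves the responsible peer. The ring layout and names are hypothetical, and a real deployment would rely on an existing DHT implementation.

import hashlib
from bisect import bisect_left

KEYSPACE_BITS = 32   # toy keyspace; Chord typically uses 160 bits


def position(value: str) -> int:
    """Hash a LookupKey (or node identifier) onto the ring."""
    digest = hashlib.sha1(value.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** KEYSPACE_BITS)


class BusinessRing:
    def __init__(self, node_names):
        # Each node owns the keyspace slice between its predecessor and itself.
        self.nodes = sorted((position(name), name) for name in node_names)

    def responsible_node(self, lookup_key: str) -> str:
        """The successor of the hashed key is the node hosting the data."""
        key_pos = position(lookup_key)
        idx = bisect_left([pos for pos, _ in self.nodes], key_pos)
        return self.nodes[idx % len(self.nodes)][1]


ring = BusinessRing(["Hospital Service", "Pharmacy", "Research Center", "PDV-42"])
lookup_key = "phr/patient-0001"
print(position(lookup_key), "->", ring.responsible_node(lookup_key))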

Figure 4.6: Mediated Privacy: Architecture of a DHT node

Business Ring

The mediated space, represented by the DHT, is used to store disseminated user data. Because of this, both PDVs and service providers are part of this network, and since sharing user data is a frequent operation, we expect the deployment of a large shared data structure. The first important question to address is the set of rules by which a DHT is formed. The first solution that comes to mind is to have all the actors participate in a single DHT. The largest currently active DHT runs on millions of nodes, and in practice can scale further [37], with the performance of operations like search, insert, or delete bounded by O(log(N)). Even though the DHT is a highly scalable structure, using a single one has some drawbacks. The drawback that we would like to point out is the requirement for uniformity. The behaviour of the DHT is uniform across all nodes, since it is a completely decentralized system. This uniformity does not fit our requirements, since laws and regulations regarding digital data handling and privacy are not uniform across different regions of the world. Moreover, different regulations can be in place at the business model level as well. Although a single DHT would be a simpler solution, it would introduce the problem of handling complicated legal and trust schemas.

Instead of a single DHT, we introduce the concept of a Business Ring. We propose a solution where Business Rings are spawned as needed around a group of services that have a closely integrated business model. Service providers belonging to the same Business Ring are assumed to have an existing business agreement which ties them together. In principle, these Business Rings can be formed around different branches of the existing industries. Competing service providers can either agree on belonging to the same Business Ring, or start their own. A mature Business Ring with a clear business model, however, is more likely to be targeted by users than a less mature one. For example, the Business Ring used in the healthcare scenario using PHRs presented in Section 4.3 could be formed according to Figure 4.7, where the black nodes represent PDVs and the white nodes represent the service providers. The ring-like representation of the DHT in Figure 4.7 resembles a Chord network [35].

Figure 4.7: Mediated Privacy: Business Ring formed around a healthcare scenario

Every node of the Chord Business Ring is responsible for the slice of the keyspace lying between itself and its predecessor in the ring. The keyspace slices of the service providers are indicated by the arrow markings. Note that the DHT solution used in an actual implementation can follow any kind of topology; we refrain from evaluating existing DHT solutions and instead describe a system in which any generic DHT solution can be used. For simplicity and better understanding, however, we will keep talking about a Chord-like structure. The business model that ties the service providers together in the Business Ring of Figure 4.7 could be the public health services provided to users. Although these service providers offer independent services, they belong to the same logical ring, since they operate on the same set of PHRs. Together they form a clear business model, which is used as a basic characteristic of the Business Ring. Business Rings can vary from business to business, depending on how many service providers are part of them, how big the network is, or what kind of general data policies apply to participants. Since both PDVs and service providers have to become peers of the DHT, we will investigate how this requirement fits into their design.

PDV peer

By design, PDVs are abstractions of always-on entities that provide safe user data storage together with safe data management. The responsibilities of a Business Ring node could easily be incorporated as an additional component inside the PDV. Since PDVs are always on for high availability, the downside of high churn rates can also be alleviated. Churn stands for the rate at which nodes enter and leave a DHT system: high churn rates force the system to focus more on self-maintenance, while a low churn rate guarantees a more stable system.

Service provider peer

When it comes to our requirement to incorporate service providers as Business Ring peers, we are faced with a more complex scenario. Given that the backend systems of service providers differ significantly from one another, it is hard to envision a generic solution. There is, however, a common design practice that can achieve the Business Ring design described above: providing Privacy as a Service (PaaS). The responsibilities of a single service provider's DHT peer can be advertised as a service, which in turn can have any flexible design.

Figure 4.8: Mediated Privacy: PaaS design for the Hospital Service Business Ring node

Figure 4.8 depicts the backend system architecture of the Hospital Service with the PaaS as one of its frontend services. Being part of the Business Ring, the Hospital Service is required to maintain control over its assigned keyspace slice, depicted by the arrow. In its backend system, this responsibility could be load balanced across multiple machines of its internal system through the PaaS. This design is flexible enough to be easily implemented in any backend system, while still maintaining the functionalities of a Business Ring peer.

DHT Peer Layer

The bottom layer that every peer operates on is the DHT layer, which is responsible for executing all the classical DHT-related functionality (insert, retrieve, remove). Special considerations have to be taken, however, for every remote retrieval operation, in order to avoid untraced data copies; the local retrieval operation maintains its normal behaviour. Apart from the classical functions, there are a couple of other aspects that need to be addressed, namely membership, keyspace assignment, ring size, and description.
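As an illustration of this layering, the sketch below outlines the kind of interface a DHT peer layer could expose to the layers above it. The class and method names are our own assumptions; the only design point taken from the text is that local retrieval keeps its classical behaviour while remote retrieval is treated specially.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional


class DHTPeerLayer(ABC):
    """Bottom layer of a Business Ring node (illustrative interface only)."""

    @abstractmethod
    def insert(self, lookup_key: str, value: Any) -> None:
        """Store a value under a key of the ring's keyspace."""

    @abstractmethod
    def retrieve_local(self, lookup_key: str) -> Optional[Any]:
        """Classical retrieval of data hosted on this node; unchanged."""

    @abstractmethod
    def retrieve_remote(self, lookup_key: str, copy_lookup_key: str) -> None:
        """Modified remote retrieval: instead of handing the object back
        directly, the result is re-inserted under copy_lookup_key so that
        no untraced data copy leaves the ring (see the next subsection)."""

    @abstractmethod
    def remove(self, lookup_key: str) -> None:
        """Delete a value, e.g. when an obligation requires it."""
```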

A. The Remote Retrieval Operation

The classical remote retrieval operation of the DHT retrieves a data object stored under a LookupKey in two phases. In the first phase an internal search operation is executed, which finds the host of the particular data object, depending on which node is responsible for the keyspace slice containing the LookupKey. After the right host is found, the second phase establishes a direct point-to-point connection between the requester and the host, over which the requested object is transmitted. This process, by its nature, creates an untraced data copy of the requested object, the PD in our case.

In order to maintain references to all existing copies of a PD object, the retrieve operation is modified to act like a retrieve followed by an insert. Our modified retrieve operation does not return the new data copy directly; instead, it inserts it back into the Business Ring under the keyspace slice of the requester. The first phase of the retrieval stays unmodified, but the second one is replaced by a DHT insert operation. The key for the insertion, called CopyLookupKey, needs to be included in the request by the requester. The functionality of the retrieval operation stays the same, since in both cases the requester ends up with his own data copy on his local machine. The difference, however, is that our modified retrieval keeps track of the data copy via its new CopyLookupKey, while the normal operation is not concerned with tracking data copies. The CopyLookupKey, pointing to the new data copy, can then be appended to the metadata of the original PD. This guarantees that the DS will be able to retrieve every CopyLookupKey pointing to the different data copies.
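A minimal sketch of this retrieve-followed-by-insert behaviour on top of a generic key-value overlay is shown below. The in-memory DHT stand-in, the key formats, and the metadata layout of the PD are our own assumptions made for illustration.

```python
from typing import Any, Dict


class InMemoryDHT:
    """Stand-in for a generic DHT overlay (illustrative only)."""

    def __init__(self) -> None:
        self.store: Dict[str, Any] = {}

    def get(self, key: str) -> Any:          # phase one: locate and fetch
        return self.store[key]

    def put(self, key: str, value: Any) -> None:
        self.store[key] = value


def modified_remote_retrieve(dht: InMemoryDHT, lookup_key: str,
                             copy_lookup_key: str) -> None:
    """Retrieve-followed-by-insert: the requester never receives the PD over
    a direct point-to-point channel; a copy is re-inserted under a key from
    the requester's own keyspace slice instead."""
    pd = dht.get(lookup_key)                     # classical first phase
    copy = {"data": pd["data"], "policy": pd["policy"]}
    dht.put(copy_lookup_key, copy)               # replaces the direct transfer
    # Track the new copy so the DS can later enumerate all copies of the PD.
    pd.setdefault("metadata", {}).setdefault("copies", []).append(copy_lookup_key)


# Example: a service provider requests a PD shared by a Data Subject.
ring = InMemoryDHT()
ring.put("pd:alice:blood-test", {"data": "...", "policy": "sticky-policy",
                                 "metadata": {}})
modified_remote_retrieve(ring, "pd:alice:blood-test", "copy:hospital:0001")
print(ring.get("pd:alice:blood-test")["metadata"]["copies"])
# -> ['copy:hospital:0001']
```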

B. Membership

A Business Ring has to be bootstrapped in the beginning, in order for other peers to join the network. The most convenient way would be to let service providers bootstrap the DHT overlay and advertise their services together with a reference to the Business Ring. After the initial setup, we have to devise strategies for how PDVs should join the network. As explained later, having a certain number of users in the DHT is desirable in order to enable data access tracking. Moreover, a large user base can also act as a social incentive for establishing trust in a given service. We distinguish several strategies:

1. PDVs who are involved with the services provided in a particular ring should be members of that Business Ring. This strict strategy states that only PDVs who share their data in the ring are allowed to be part of it. Joining a Business Ring as a result of a successful data exchange should be an automated process. Leaving a network can be triggered either by an expiry date on the shared data or by manual intervention of the PDV owner, in case he decides not to keep track of shared data any longer. His previously shared data will persist in the ring unless deleted explicitly by the data owner, the data collector, or a predefined obligation.

2. The previous strategy assumes that peers will have enough incentive to join and use the Business Ring. However, unpopular businesses could become buried, since nobody considers them safe enough to use without an initial user base. To accommodate this case, we could have a set of randomly chosen nodes from the existing PDVs join these rings. Their only duty would be to route messages and keep small shares of data, without taking part in any other interaction. A system could function with random nodes, since data owners need not be part of the desired network for the system to function; operations on the DHT can also be performed from outside the system, by executing them via a randomly chosen node that is part of the system.

Since the second strategy would introduce some indirection and complexity, we argue that the stricter first strategy suits our model better. One of our initial assumptions was that every entity's identity is verifiable. Following from this assumption, a Business Ring can be constructed strictly from PDVs who are legitimate data sharers. The impact of anonymous nodes on a Business Ring is out of the scope of this thesis.

C. Keyspace Assignment

An important consideration in the design of the system was to let the service providers decide the keys under which user data has to be inserted. Since every service provider hosts his own keyspace slice locally, he is in charge of a set of keys. Whenever a DS wants to share an object with a service provider, he does so by inserting it under one of the keys chosen from that service provider's keyspace. We also considered a random placement strategy where the PDVs choose a random key under which their data is inserted; once the service provider receives the chosen random key, he would have to issue a search on it. This scheme would introduce a performance penalty, since a single interaction would require two DHT lookups. To avoid this overhead we decided to put the service provider in charge of the keys under which the user objects are going to be kept. We argue that this scheme does not empower the service provider with more trust, since he is bound to receive the same data anyway. Moreover, after the data has been inserted, the service provider can retrieve it from his local machines, without the need for an extra DHT lookup.

To accommodate our design decision, we also have to look at how the keyspace is divided among the nodes in the Business Ring. Traditional DHT solutions strive to achieve a uniform distribution of keys among nodes in order to load balance the system. This, however, is not suitable for our needs, since the service providers are the real hosts of user data, while PDVs have a different role in the system. We propose an unbalanced key distribution scheme which favours the service provider nodes. Our key distribution is represented by the arrows in Figure 4.7. The mechanism that determines how large the keyspace associated to a service provider can be is closely related to the trust framework of our system, and will be discussed later on.
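The sketch below illustrates this key-selection scheme under our own assumptions about key formats and slice bookkeeping: the service provider proposes an unused key from its own slice, and the DS inserts the PD directly under it, so a single DHT operation suffices and the provider can later read the object locally.

```python
import itertools
from typing import Any, Dict


class ServiceProviderPeer:
    """Business Ring peer that owns a slice of the keyspace (illustrative)."""

    def __init__(self, name: str, slice_size: int) -> None:
        self.name = name
        self.slice_size = slice_size          # how many PD objects it may host
        self._counter = itertools.count()

    def propose_lookup_key(self) -> str:
        """Pick an unused key from the provider's own keyspace slice, to be
        sent to the Data Subject together with the data request."""
        index = next(self._counter)
        if index >= self.slice_size:
            raise RuntimeError("keyspace slice exhausted")
        return f"{self.name}/pd-{index:06d}"


def share_pd(dht: Dict[str, Any], provider: ServiceProviderPeer,
             pd_object: Dict[str, Any]) -> str:
    """Data Subject side: insert the PD under the provider-chosen key.
    Because the key already lies in the provider's slice, the provider can
    later read it locally without an extra DHT lookup."""
    key = provider.propose_lookup_key()
    dht[key] = pd_object
    return key


hospital = ServiceProviderPeer("hospital-service", slice_size=1000)
ring: Dict[str, Any] = {}
lookup_key = share_pd(ring, hospital, {"data": "PHR entry", "policy": "sticky"})
print(lookup_key)  # e.g. hospital-service/pd-000000
```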

D. Business Ring Size

As mentioned in Section B, the size of the Business Ring plays an important role in the operation tracking and logging system. The logging system presented later relies on the existence of DHT routing nodes, which route operations such as insert and retrieve. A predetermined minimum DHT size (counting the PDV nodes) is desirable to maintain, in order to make sure that every operation is routed by at least one random router node. This minimum value could be computed depending on the size of the routing tables used by the particular DHT implementation. A possible solution to achieve this minimum is to combine the membership strategies from Section B: use the participants of the business model as a base, and compensate with random nodes until the minimum desired size is met. This solution, on the other hand, would require a centralized coordinator entity that governs the memberships. An alternative strategy, less reliant on a centralized entity, is to start new Business Rings as part of an already existing mature Business Ring with a stable user base. The mature Business Ring can serve as a nursery for the newly created one; after the new Business Ring gathers enough momentum to build a stable user base, it can be separated from the nursery.

E. Business Ring Description

Every Business Ring should offer a description of the network. Nodes that join the network should have a way to see which service providers are involved in that particular ring. Service provider nodes could self-advertise their own description regarding client restrictions and generally applying policies. The Business Ring description should also contain the keyspace sizes assigned to each service provider within that ring, for trust-establishment reasons. The size of the ring also has to be public information, based on which different trust decisions can be carried out. Details related to the keyspace sizes and the DHT size do not have to be precise at any given time; an estimate of these values is sufficient for the workings of the system. Such an estimate can be computed by a gossip algorithm [31] running on piggybacked routing messages, making sure that each node has an estimate of both the service providers' keyspace sizes and the DHT size. However, the design of a system that provides accurate estimates is out of the scope of this thesis work. Additionally, one might imagine that different business models have different policies regarding customer requirements. For example, a user could only join the ring of a bank if he is a customer there. All such extra restrictions from the service provider side can also be taken into consideration.
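A Business Ring description could be as simple as the record sketched below; the field names, their encoding, and the example values are our own assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class BusinessRingDescription:
    """Publicly readable summary of a Business Ring (illustrative layout)."""
    name: str
    business_model: str
    # Estimated number of peers (PDVs plus providers); a gossip-maintained
    # estimate is sufficient, exact values are not required.
    estimated_ring_size: int
    # Keyspace slice size per service provider, published for trust reasons.
    provider_keyspace_sizes: Dict[str, int] = field(default_factory=dict)
    # Client restrictions and generally applying policies, self-advertised
    # by the member service providers.
    general_policies: List[str] = field(default_factory=list)


healthcare_ring = BusinessRingDescription(
    name="public-healthcare",
    business_model="public health services operating on PHRs",
    estimated_ring_size=5000,
    provider_keyspace_sizes={"hospital-service": 4000, "pharmacy-service": 1000},
    general_policies=["PHR data only", "users must hold a national health ID"],
)
print(sorted(healthcare_ring.provider_keyspace_sizes))
```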

Privacy Manager Layer

The Privacy Manager Layer stands for the PM component, which is responsible for safeguarding PD objects by enforcing Sticky Policies. Its main responsibility is to filter the incoming and outgoing operations happening on the DHT layer. This layer acts as a guard of the user data objects hosted at every node.

A. Sticky Policy Enforcement

The main method of data safeguarding is Sticky Policy enforcement. Business Rings are required to operate on PD objects, which guarantees the existence of a Sticky Policy next to any shared data. There are two main use cases covered by Sticky Policies: local data usage and forwarding of data. When a DC wants to process the collected PD, he issues a local retrieval operation to the Business Ring through one of his own local nodes. Before the local nodes return the desired data, the PM evaluates the Sticky Policy against the requester's attributes and grants or denies access. Forwarding of collected user data follows the same rules, but uses a remote retrieval operation. As stated in Section 4.4.2, entities are only allowed to externally forward LookupKeys, and not the actual PD, since data sharing has to happen through the ring. Third parties interested in collecting some shared data have to be part of the same Business Ring as the DS and DC; only then can a third party issue a remote retrieval request for a PD object. The PM layer of the hosting entity is responsible for evaluating Sticky Policies before the actual data transfer can happen.

The PM layer is also in charge of the obligation engine, which makes sure all obligations are triggered and carried out. An obligation requiring the deletion of a PD object can easily be implemented by issuing a delete operation on the DHT layer. It is worth mentioning that the deletion is verifiable by the DS himself, since he also holds a reference to the LookupKey of the PD. By periodically interrogating the ring for known LookupKeys, the DS can always know which of his previously shared objects are still there, and which ones have been deleted.
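The sketch below shows how the PM layer could gate a retrieval and map a deletion obligation onto the DHT layer. The representation of a Sticky Policy as attribute constraints and the requester attributes are our own simplifying assumptions, not the policy language used in the rest of this work.

```python
from typing import Any, Dict, Optional


def policy_permits(sticky_policy: Dict[str, Any],
                   requester_attrs: Dict[str, Any]) -> bool:
    """Toy policy evaluation: every attribute constraint listed in the
    Sticky Policy must be satisfied by the requester's attributes."""
    for attr, allowed in sticky_policy.get("allow", {}).items():
        if requester_attrs.get(attr) not in allowed:
            return False
    return True


class PrivacyManagerLayer:
    """Filters operations between the Logging layer above and the DHT below."""

    def __init__(self, dht_store: Dict[str, Dict[str, Any]]) -> None:
        self.dht_store = dht_store  # stand-in for the DHT layer

    def guarded_retrieve(self, lookup_key: str,
                         requester_attrs: Dict[str, Any]) -> Optional[Any]:
        pd = self.dht_store.get(lookup_key)
        if pd is None:
            return None
        if not policy_permits(pd["policy"], requester_attrs):
            return None                      # access denied, nothing leaves
        return pd["data"]

    def enforce_deletion_obligation(self, lookup_key: str) -> None:
        """Obligation engine hook: a delete obligation maps to a delete on
        the DHT layer, which the DS can later verify by interrogation."""
        self.dht_store.pop(lookup_key, None)


store = {"hospital-service/pd-000001": {
    "data": "PHR entry",
    "policy": {"allow": {"role": ["physician"], "purpose": ["treatment"]}},
}}
pm = PrivacyManagerLayer(store)
print(pm.guarded_retrieve("hospital-service/pd-000001",
                          {"role": "physician", "purpose": "treatment"}))
print(pm.guarded_retrieve("hospital-service/pd-000001",
                          {"role": "marketing", "purpose": "advertising"}))
```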

B. Trust Management

Since trust is a required component of every framework, the PM layer also offers a trust negotiation mechanism for peers of the Business Ring. The unbalanced keyspace assignment described in Section C reduces the communication overhead between a DS and a DC, but it can also be used as a measure of trustworthiness. Taking the size of the keyspace slice of every service provider, we obtain a quantification by which a trust comparison can be carried out. The size of the assigned keyspace slice allows a service provider to host only a limited set of shared user data, depending on the size of his slice.

A keyspace slice is made up of a set of lookup keys, which can be used to host a set of PD. The intuition behind the keyspace slice as a trust measure is that trusted service providers are allowed to host a bigger set of PD than less trusted ones. In this way, every service provider can be assigned a trust level based on the size of its keyspace slice. The establishment of these trust levels is the responsibility of the entity in charge of assigning keyspace slices, since it decides how big or small a slice can be. Letting service providers claim their own slices would lead to a greedy scenario, in which case the trust measurement loses its value. A better alternative is to involve the whole Business Ring in deciding the keyspace slice sizes. A minimum baseline keyspace slice size can be assigned to every node, leaving them all at the same bottom level of trust; this minimum value can vary from use case to use case, and its establishment is independent of this work. After the initial assignment, a consensus algorithm can be run across the peers in order to grant more space to, or take space away from, different entities. Since a majority of nodes is required to achieve consensus, we can assume that if the majority of the peers are trustworthy, then the keyspace assignment is also trustworthy. A trustworthy keyspace assignment leads to a quantitative trust measure that can be used to categorize each service provider into its own trust level, and to define automated decision making depending on it.

A secondary trust source can be derived from the description offered by the Business Ring itself. With the assumed node identities in place, a list of participating service providers can be derived from the provided Business Ring description. By looking at the individual service providers in the list, a DS can set his custom trust level. For instance, a DS may decide not to use the services of a Business Ring which has a government agency as its member. On the other hand, he might be more comfortable sharing data in a ring that has a well known, trusted service provider as its member.
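Assuming the slice sizes are published in the Business Ring description, a trust comparison could be as simple as the sketch below; the thresholds and level names are our own illustrative choices and are not fixed by the design.

```python
from typing import Dict

# Hypothetical trust levels derived from the fraction of the total provider
# keyspace a single service provider is allowed to host.
TRUST_LEVELS = [(0.50, "high"), (0.20, "medium"), (0.0, "baseline")]


def trust_level(provider: str, slice_sizes: Dict[str, int]) -> str:
    """Categorise a provider by the relative size of its keyspace slice, as
    agreed upon by the consensus of the Business Ring peers."""
    total = sum(slice_sizes.values())
    share = slice_sizes.get(provider, 0) / total if total else 0.0
    for threshold, label in TRUST_LEVELS:
        if share >= threshold:
            return label
    return "baseline"


slices = {"hospital-service": 4000, "pharmacy-service": 1000,
          "insurance-service": 500}
for name in slices:
    print(name, trust_level(name, slices))
# e.g. hospital-service -> high, the smaller slices -> baseline
```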

Logging Layer

The Logging layer is the top layer, which offers a wrapper around every operation on a PD. Being at the top, it is responsible for saving traces of every operation. Logging is an essential mechanism for verifying the validity of claimed actions, as well as for maintaining the user's assurance that his intentions were carried out. Our logging mechanism focuses on saving data request traces throughout the Business Ring. Logging happens in an asynchronous manner, such that the performance of the service itself is not affected; we aim for a relaxed logging mechanism where some loss is inevitable, but not fatal.

The request tracking system leverages the already existing lookup functionality of the DHT. As specified before, data is not meant to leave the DHT without authorization, and it is meant to be kept under a LookupKey which is known by the DS. That being said, access to data inside the ring is only possible via the search mechanism of the DHT. In order to perform any operation on data, the node responsible for it first has to be found. Since DHTs aim for high scalability, nodes cannot store references to every other node in the system; search solutions where nodes only keep routing tables of restricted size are commonly used. Because of this design, every operation first has to go through several hops in order to reach the actual data host. Every such routing node has valuable information regarding the identity of the requester, as well as the key of the requested resource. Every such <Requester, ResourceKey> pair provides useful information for identifying who has been requesting access to a certain PD, where the ResourceKey represents the LookupKey of the requested PD. In order to have a functioning logging mechanism, we need to make sure that there are in fact routing nodes in the system, and not just a single node serving all requests. We have to ensure that the size of the Business Ring is large enough, as addressed in Section D.

Since every node is responsible for keeping logs based on its own routed messages, the log information referring to a certain key ends up scattered across multiple nodes. Composing a comprehensive log out of individual pieces of log events scattered throughout the nodes is the next challenge. Once logging information is aggregated, we need a way to reveal it to the relevant data owner, whose data is kept under the referenced key. The first intuition is to keep the log object inside the same ring, in order to provide easy aggregation and quick access to it. The first problem is that this might cause a cascade of logging messages that could render the system unavailable. We could separate logging operations from all other operations, such that logging on logging messages is disabled. This solves the problem, but introduces a security threat: normal messages masked as logging messages could be sent to avoid tracing. We need an additional verification step, during which every router node checks the validity of a log message against some predefined standard, in order to avoid masked messages. For example, log messages can be composed of predefined fields, where each field can only take a value from an existing value pool; the verification step then checks whether each value of the log message has been chosen from the predefined value pool.

Another problem is under which key to place the <Requester, ResourceKey> log event chunks, such that they all get aggregated. A deterministic solution is needed, since the data owner has to figure out where the aggregation location is. Using the ResourceKey itself to keep logs would take up space from the service provider's keyspace; more importantly, the service provider would be in charge of hosting the aggregated logs, which is not desired. Instead, a hash can be computed on the ResourceKey to derive another key, the LogKey, deterministically; the data owner can find the aggregated logs by computing the same hash on the LookupKey he is about to trace. Using a deterministic hash function places the aggregates at a random node, depending on the overlay existing at that particular time.
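The sketch below shows the deterministic derivation of a LogKey from a ResourceKey and how a routing node could record a <Requester, ResourceKey> trace while forwarding a request; the hash function and the event fields are our own assumptions for illustration.

```python
import hashlib
import time
from typing import Dict, List, Tuple


def log_key_for(resource_key: str) -> str:
    """Deterministically derive the key under which log events for a given
    ResourceKey are aggregated; the DS repeats the same computation on the
    LookupKeys it wants to trace."""
    return "log:" + hashlib.sha256(resource_key.encode("utf-8")).hexdigest()


class RoutingNode:
    """Any DHT node that happens to route a request keeps a trace of it."""

    def __init__(self) -> None:
        # LogKey -> list of (requester, resource_key, timestamp) events
        self.log_store: Dict[str, List[Tuple[str, str, float]]] = {}

    def route(self, requester: str, resource_key: str) -> None:
        event = (requester, resource_key, time.time())
        self.log_store.setdefault(log_key_for(resource_key), []).append(event)
        # ... forward the request towards the responsible node ...


node = RoutingNode()
node.route("insurance-service", "hospital-service/pd-000001")
print(log_key_for("hospital-service/pd-000001") in node.log_store)  # True
```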

The <Requester, ResourceKey> log objects should be considered immutable objects that can only be read, not modified or deleted externally by a request. Log objects should be designed as short-lived objects with an expiry date, such that every node can clean up its logs periodically. This assures that the system will not get clogged by logs. Retrieval of aggregated logs happens using the pull method: every PDV is responsible for periodically querying the ring for the LogKeys under which its log information is kept. This way, long-term aggregates can be composed at the PDV site, to assure the persistence of logs.

Interaction Models

The following sections present the interaction models that arise with the employment of a Business Ring. We examine separately how interactions with multiple Data Subjects and multiple Data Controllers are handled.

A. Data Flow

The first interaction model presented focuses on the data flow between a single DS and DC. Figure 4.9 depicts the high-level interaction diagram between the two.

Figure 4.9: Mediated Privacy: DC - DS interaction model

In the first step, the DC makes his request to the DS together with the Data Handling Policy (DHPol) and a LookupKey, defining his intentions on data handling and the key under which the requested data is expected. The LookupKey is a valid key in the Business Ring, residing in the DC node's keyspace slice. After receiving the request, the DS interrogates the Business Ring for relevant details about the DC. This could include information on his keyspace size and other trust measures, which contribute to the reasoning in step 3. Depending on the trust level and the predefined data policies of the DS, the reasoning can have two outcomes: in step 3.a, access is granted and a PD object is created; in step 3.b, it is denied. After granting access the DS issues
