Building Secure Systems: Problems and Principles
Dennis Kafura
Barriers to Secure Systems
- Secure systems depend on more than the discovery of more advanced technologies
- Security also depends on the widespread and correct deployment of the technology, and its appropriate use by people and organizations
State of Cryptographic Solutions (1)
- Cryptographic technology to build secure systems is known:
  - Cryptographic algorithms
  - Digital signatures
  - Hash functions/digests
  - Protocols
- System weaknesses are at the interface with the system's human users
- Usability issues inhibit effective employment by ordinary users, and perhaps by security personnel:
  - Key management
  - Access control
- Related disciplines not well developed (e.g., engineering software for security)
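A minimal sketch of two of these known primitives, using Python's standard library (hashlib for digests, hmac for a keyed tag as a stand-in for a signature; true digital signatures use asymmetric keys, which the standard library does not provide):

```python
import hashlib
import hmac

# Hash function / digest: a fixed-size fingerprint of arbitrary data.
message = b"transfer $100 to account 42"
digest = hashlib.sha256(message).hexdigest()

# Any change to the message changes the digest completely.
tampered = hashlib.sha256(b"transfer $900 to account 42").hexdigest()
assert digest != tampered

# Keyed MAC as a signature stand-in: only holders of the key can
# produce a valid tag. (Real digital signatures use public-key
# cryptography, so verification needs no shared secret.)
key = b"shared-secret-key"
tag = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

The message, key, and account values are illustrative only.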
State of Cryptographic Solutions (2)
- Initial impetus derived from secretive government security services
- Commercial sector fears loss of confidence and customers
- Feedback to the security community is weak compared to other design communities (e.g., transportation failures are thoroughly scrutinized, and the underlying failures documented and disseminated, so that the community learns over time and its designs/products improve)
Competing Philosophies
- Railway model: trusted kernel, formal verification, reductionist in spirit, system in control
- Airline model: rich sources of feedback, incremental improvement, holistic in spirit, human in control
Design Principles (1)
- Security is fundamental to the conception of the system and cannot be bolted on as an afterthought
- No system is perfectly secure; must engineer a balance between achievable security and acceptable security (100% risk acceptance, not 100% security)
- Defense in depth
Design Principles (2)
- Proprietary measures are less secure than public ones ("security by obscurity" does not work)
- Need detection, confinement, and recovery strategies in addition to prevention:
  - Every system will eventually fail against a determined attacker
  - Detection may be available but not used (e.g., error response codes indicating a potential intrusion were ignored)
  - Minimize the results of failure (e.g., recovering the key for one file/user should not compromise other files/users)
  - Create audit logs/trails
- Testing will not reveal security flaws (functionality != quality)
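The audit-trail bullet above can be made tamper-evident with a hash chain; a toy sketch (the events and chaining scheme are illustrative, not a production log format):

```python
import hashlib

# Toy hash-chained audit trail: each entry's hash covers the previous
# entry's hash, so editing or deleting an earlier record breaks the chain.
def append_entry(log, event):
    prev = log[-1][1] if log else "0" * 64
    h = hashlib.sha256((prev + event).encode()).hexdigest()
    log.append((event, h))

def verify(log):
    prev = "0" * 64
    for event, h in log:
        if hashlib.sha256((prev + event).encode()).hexdigest() != h:
            return False
        prev = h
    return True

log = []
append_entry(log, "login alice")
append_entry(log, "read payroll")
assert verify(log)

log[0] = ("login mallory", log[0][1])   # attacker rewrites history
assert not verify(log)                  # tampering is detected
```

This only detects tampering after the fact; it does not prevent it, which is exactly the detection-plus-prevention point the slide makes.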
Design Principles (3)
- Economy of mechanism
- Fail-safe defaults
- Complete mediation
- Open design
- Separation of privilege
- Least privilege
- Least common mechanism
- Psychological acceptability
- Two others:
  - Work factor (cost of attack vs. resources of attacker)
  - Compromise recording (evidence of tampering gathered to render disclosed information harmless)
from: Saltzer/Schroeder (1975)
Ideal Development Process
- Specify all security failure modes
- Identify a strategy for each mode (to prevent failure or make it acceptable)
- For each strategy, document its implementation
- Determine the consequences of failure
- Assessment by independent experts
- Test whether the personnel using the system can operate it correctly
Organizational Issues
- Organizations lack the expertise to correctly design/deploy existing technologies; the issues are complex, subtle, and highly technical
- Well-intentioned "optimizations" break the security
- Exploits continue to be effective due to lack of community learning
- No natural home in the organization for a security team (unlike testing/IV&V in software engineering)
Threat Model
- A threat model identifies what is to be protected, from whom (against what), and for how long
- Includes an assessment of people and their motivations:
  - Most bank fraud is committed by insiders (a bank clerk issued extra credit cards; technical staff recorded ATM entries on a concealed hand-held device)
  - Users are concerned more about simplicity/convenience (e.g., choosing weak passwords)
- Often insufficient feedback to develop a realistic threat model: we know how systems fail in theory, not how they fail in practice
Bypassing Cryptographic Defenses (1)
- Plaintext may be retained (e.g., for reliability and/or recovery)
- Tamper-resistant hardware may be vulnerable:
  - Timing attacks
  - Side-channel attacks (power consumption, radiation, etc.)
- Trust model: development and deployment assumptions might be different (e.g., outsourcing of functions)
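A minimal sketch of the timing-attack idea, assuming a secret tag checked on each request: a comparison that stops at the first differing byte leaks, through response time, how many leading bytes a guess got right. Python's hmac.compare_digest exists for exactly this reason.

```python
import hmac

# Naive '==' on secrets can short-circuit at the first mismatch, so
# response time can leak how much of the guess is correct -- a classic
# timing side channel. (The tag value here is purely illustrative.)
def naive_check(tag, guess):
    return tag == guess

# hmac.compare_digest takes time independent of where the inputs
# differ, closing that channel.
def constant_time_check(tag, guess):
    return hmac.compare_digest(tag, guess)

tag = b"\x13\x37\xca\xfe"
assert constant_time_check(tag, b"\x13\x37\xca\xfe")
assert not constant_time_check(tag, b"\x13\x37\xca\xff")
```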
Bypassing Cryptographic Defenses (2)
- "Key hygiene"/key management:
  - Keys exposed by the implementation (virtual memory, user interface)
  - Key recovery mechanisms can be attacked
  - Strong keys protected only by weak passwords/passphrases
  - Keys known to technical staff for testing/debugging
- Even when special hardware ("security modules") is used, maintenance staff may have access to keys
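The weak-passphrase bullet is usually mitigated by deriving the working key through a salted, iterated key-derivation function, so each passphrase guess costs the attacker many hash computations. A sketch with the standard library's PBKDF2 (the passphrase, salt size, and iteration count are illustrative choices, not recommendations):

```python
import hashlib
import os

# "Strong keys protected only by weak passwords": stretch the
# passphrase with a salted, iterated KDF instead of hashing it once.
passphrase = b"correct horse"        # what the user actually remembers
salt = os.urandom(16)                # stored alongside the ciphertext
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000)

assert len(key) == 32                # 256-bit working key
# Deterministic: same passphrase + same salt -> same key,
# so the key never needs to be stored in the clear.
assert key == hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000)
```

The salt prevents precomputed-table attacks; the iteration count sets the work factor mentioned in the Saltzer/Schroeder principles.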
Implementation Issues
- Valid cryptographic algorithms/protocols may be undermined by flawed implementations
- Problems with keys:
  - Key values too short (e.g., banks using RSA with 100-400 bit keys when at least 500 bits are needed)
  - Inadequate/inappropriate use of random number generators:
    - Badly designed, producing weak keys (e.g., using the time-of-day clock, giving only 20 bits rather than 54 bits of randomness)
    - Paired with a weak cryptographic algorithm
    - Reuse issues (not all RNGs perform equally well with all cryptographic algorithms)
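The time-of-day-clock failure can be sketched in Python: a general-purpose PRNG seeded from a low-resolution clock has only as much entropy as the seed, however long the key it emits, whereas a CSPRNG draws from the OS entropy pool. The seed value below is an arbitrary illustrative timestamp.

```python
import random
import secrets

# Seeding a general-purpose PRNG from the clock gives only as many
# bits of entropy as the clock has resolution -- far below the keyspace.
weak_rng = random.Random(1699999999)      # e.g., a seconds-resolution clock
weak_key = weak_rng.getrandbits(128)      # looks like a 128-bit key...

# ...but an attacker who can guess the seed window recreates it exactly.
assert random.Random(1699999999).getrandbits(128) == weak_key

# A CSPRNG keyed from OS entropy has no guessable seed.
strong_key = secrets.token_bytes(16)      # 128-bit key
assert len(strong_key) == 16
```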
Protection Mechanisms
- List-oriented (access control list):
  - Guard holds a list of identifiers of authorized users
  - Each user carries a unique, unforgeable identifier
- Ticket-oriented (capabilities):
  - Guard holds the description of a single identifier
  - Each user has a collection of unforgeable identifiers, or tickets
from: Saltzer/Schroeder (1975)
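A toy sketch of the two guard models (object names, users, and token format are illustrative; real capabilities are made unforgeable by hardware or the kernel, not by random tokens):

```python
import secrets

# List-oriented: the guard keeps a per-object list of authorized users.
acl = {"payroll.txt": {"alice", "bob"}}

def acl_guard(obj, user):
    return user in acl.get(obj, set())

# Ticket-oriented: the user presents an unforgeable ticket; the guard
# only checks that the ticket names this object.
tickets = {}

def issue_ticket(obj):
    t = secrets.token_hex(16)        # stand-in for an unforgeable ticket
    tickets[t] = obj
    return t

def capability_guard(obj, ticket):
    return tickets.get(ticket) == obj

assert acl_guard("payroll.txt", "alice")
assert not acl_guard("payroll.txt", "mallory")

t = issue_ticket("payroll.txt")
assert capability_guard("payroll.txt", t)

# Revocation asymmetry: for the ACL the guard just deletes an entry;
# for capabilities, every outstanding ticket must be invalidated.
acl["payroll.txt"].discard("bob")
assert not acl_guard("payroll.txt", "bob")
```

The last lines preview the comparison slide: revocation is a one-line change on the ACL side, but requires tracking down issued tickets on the capability side.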
Capability System (1)
[figure] from: Saltzer/Schroeder (1975), Figure 5
Capability System (2)
[figure] from: Saltzer/Schroeder (1975), Figure 7
Access Control Lists (1)
[figure] from: Saltzer/Schroeder (1975), Figure 9
Access Control Lists (2)
[figure] from: Saltzer/Schroeder (1975), Figure 10
Comparison and Projection
- Capability lists:
  - Easy sharing, flexible management of privileges
  - Difficult revocation
- Access control lists:
  - Easy revocation
  - More cumbersome management/sharing
- Contemporary projections:
  - Capability lists: credentials/privileges
  - Access control lists: policy-based systems