CS261 Scribe Notes: Secure Computation 1


Scribe: Cameron Rasmussen
October 24, 2018

1 Introduction

It is often the case that code running locally on our system is not completely trusted; a prime example is web applications. In the web context, simply visiting a new website causes our system to begin running new, untrusted code. Preventing the code from running is not a solution from a usability standpoint, because we visit new websites every day. Furthermore, websites commonly use very large toolchains that incorporate many imported libraries and connected services, so whitelisting the domains allowed to run code on our system becomes an exercise in frustration, and is still imperfect without exhaustive auditing of the code being run. The common solution to this problem is sandboxing.

2 Sandboxing

The key idea of sandboxing is to isolate resources and/or the environment from a process. A sandbox creates an isolation boundary around a process that prevents the process from affecting or accessing anything outside its boundary.

[Figure: an isolation boundary separating the untrusted application from the trusted system]

In the web context, we sandbox the rendering engine (the JavaScript code) from the higher-privilege browser. The use cases of sandboxing extend to running any application that may be buggy or malicious, to prevent irreparable damage from being done to the system.

3 Different types of isolation

Virtual Machine (VM): use a hypervisor to virtualize the machine and configure network and file access for anything that runs in the virtual machine [1]
+ Simple and easy to use
+ Widely applicable to most scenarios
+ Can be used without changing the application
- Heavyweight solution (large memory footprint, costs extra CPU cycles, etc.)

User-level sandboxes

Software Fault Isolation (SFI): check that the application is safe to run and isolate possible sources of buggy/malicious behavior (e.g. Native Client [3])

Safe languages (e.g. Java, Rust; JS in the browser): the language handles certain things safely (e.g. memory safety) or can enforce a policy on what the program may do
+ Less overhead than a VM
+ Easily extendable under different threat models
- Harder to get right (vs. a VM); SFI could be considered a blacklist approach
- Domain specific: an application is only safe if it is written in a safe language or has already been vetted by SFI

Physical isolation
+ Guaranteed to isolate untrusted applications from a trusted system
- Not very practical in most use cases

Hybrid: system call interposition: the OS interposes on system calls made by a sandboxed application and checks whether a syscall is allowed given the context and arguments
+ Mitigates the untrusted application's ability to affect the system
+ Easily extendable monitor
- Implementation has subtle requirements to be safe

4 Hybrid System Call Interposition Types

4.1 Filtering

The steps are as follows (Figure 1: filtering approach, figure from [2]):

1. The sandboxed application makes a syscall (e.g. read/open/write).
2. The OS (tracing interface) intercepts the call and puts the sandboxed application to sleep.
3. The monitor checks whether the syscall is allowed.
4. If yes, the OS wakes up the sandboxed application, which then makes the syscall. If no, the OS immediately returns an error code to the sandboxed application.

The trouble with this scheme is that there is a race condition: a Time Of Check To Time Of Use (TOCTTOU) vulnerability. The syscall may be allowed, but the arguments supplied to the monitor can change between the time they were checked and the time they are subsequently used in the syscall.
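The TOCTTOU window can be made concrete with a toy Python model (all names here are hypothetical, not a real tracing API). The syscall argument lives in the sandboxed process's memory, so the monitor and the kernel each read it separately, and the attacker can flip it in between:

```python
ALLOWED_PATHS = {"/tmp/scratch.txt"}

class SandboxMemory:
    """Simulates a sandboxed app that rewrites its syscall argument
    after the first time anyone reads it (the TOCTTOU attack)."""
    def __init__(self):
        self.path = "/tmp/scratch.txt"   # benign value shown to the monitor
    def read(self):
        value = self.path
        self.path = "/etc/passwd"        # attacker flips it after the check
        return value

def filtering_open(mem):
    # Monitor and "kernel" each read the argument from sandbox memory.
    if mem.read() not in ALLOWED_PATHS:          # time of check
        return "EPERM"
    return "opened " + mem.read()                # time of use: re-reads!

print(filtering_open(SandboxMemory()))
```

The check passes against the benign value, but the syscall runs against `/etc/passwd` — exactly the race the filtering design cannot rule out.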

4.2 Delegating Architecture (Ostia)

The delegating architecture (Figure 2: delegating architecture, figure from [2]) differs as follows:

- The whole syscall/request is virtualized as a remote procedure call (RPC).
- A user-level agent executes both the permission check and the syscall on behalf of the sandboxed application, then returns the results.

This fixes the race condition: the agent copies the arguments out of the RPC call and uses that same copy for both the permission check and the syscall itself. This local source of truth means that changes to the original arguments after the permission check cannot affect the arguments used for the syscall. The design also gains efficiency, because the monitor no longer has to intercept every syscall the application makes, and the agent does not need to talk to the system between the permission check and the syscall.

5 Native Client (NaCl)

The motivation for NaCl is that applications on different systems need a common way to be delivered and to communicate, but the current solution, browsers running JavaScript, is not as quick or robust as native code; it can be dramatically slower in many cases. The proposed solution is to run browser applications as native code, speeding up execution without compromising the security/isolation that the browser provides for JavaScript and without requiring the installation of extra dependencies. This opens up many possible application domains:

+ Existing and legacy applications (no new dependencies)
+ Heavy computation in enterprise applications
+ Multimedia applications
+ Games
+ Any application that requires hardware acceleration

Native execution lets an application take full advantage of the available resources without needing to know a priori what resources a system will have, whereas generic browser applications cannot access a client's full computational potential without possible security repercussions. The challenge is to run applications natively without this compromise in security. Complicating matters further, a natively running program must handle implementation details across many different kinds of systems.
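The copy-then-check discipline can be sketched in a few lines of Python (a toy model with hypothetical names; the buffer simulates a sandboxed app that rewrites its argument after it is first read, as in a TOCTTOU attempt):

```python
ALLOWED_PATHS = {"/tmp/scratch.txt"}

class SandboxMemory:
    """Simulates a sandboxed app that rewrites its syscall argument
    after the first time anyone reads it (a TOCTTOU attempt)."""
    def __init__(self):
        self.path = "/tmp/scratch.txt"
    def read(self):
        value = self.path
        self.path = "/etc/passwd"
        return value

def agent_open(mem):
    # The agent copies the argument out of the RPC exactly once;
    # the permission check and the "syscall" share that single copy.
    path = mem.read()
    if path not in ALLOWED_PATHS:    # time of check
        return "EPERM"
    return "opened " + path          # time of use: same local copy

print(agent_open(SandboxMemory()))
```

Because the check and the use see the same local copy, the later mutation of the sandbox's memory is irrelevant: the call opens `/tmp/scratch.txt`, never `/etc/passwd`.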

5.1 NaCl - At a Glance

To handle the different system implementations, a Native Client runtime exists for each operating system on which these applications will run. (With the rise of WebAssembly, NaCl has since been deprecated everywhere but ChromeOS.) Developers write their code as usual and then use the NaCl toolchain to generate an executable (.nexe) that can be interpreted by the local Native Client. The Native Client acts as a container that isolates the applications running inside it while providing mediated, trusted access to the system's resources.

[Figure: the browser UI (HTML and JavaScript) communicates with nexe modules (e.g. imglib.nexe) in the NaCl container via inter-module communication]

5.2 NaCl - A Closer Look

We do not implicitly trust the nexe binary given to us; rather, we trust NaCl and its ability to verify that a nexe is safe to run. Part of this trust model relies on the strict constraints enforced when generating a nexe, so that it can be verified quickly before being run in the local client; if these constraints are not observed, the nexe is not allowed to run. Google claims that NaCl can handle:

- Untrusted modules from any website
- Memory allocation and additionally spawned threads
- Race condition attacks

These guarantees are met with a nested sandbox.

Outer sandbox:
- Handles the same threat model as the OS
- Not strictly necessary; it is a redundant level of security

Inner sandbox:
- Static analysis to discover security holes in the resulting x86 code
- Disallows self-modifying code and overlapping instructions
- Provides reliable disassembly
- Necessary when a local client verifies new nexes

5.2.1 Software Fault Isolation

The static analysis must identify any potential security threats. Some of these threats are non-negotiable, in that access to them means we cannot guarantee the application is safe, while others depend on exactly how they are handled. Examples of non-negotiable behaviors are system calls and access to interrupts (no direct access to the OS), and any instructions that can change x86 state (e.g. lds, far calls, etc.). Other implementation-specific sensitive code and protocols are handled by a trusted module, also running in the NaCl container, which can be queried through inter-module communication.

Before identifying all occurrences of the disallowed instructions, NaCl must first identify all the instructions inside a native code module. The problem is that x86 has variable-length instructions, so the same bytes can be interpreted differently when read from a slight offset:

    25 CD 80 00 00    and $0x000080CD, %eax    (decoded from offset 0)
       CD 80          int $0x80                (decoded from offset 1)

As you can see, a legal instruction can become one of our disallowed instructions (here, an interrupt) by jumping into it at a different offset, so this case must be handled. This is where reliable disassembly comes into play: NaCl imposes alignment and structural rules so that any native code module can be disassembled with all reachable instructions identified. This is done in two steps. First, all indirect jumps must target a fixed alignment (every byte offset that is a multiple of 32). The new jump semantics are implemented with a NaCl pseudoinstruction, nacljmp, which replaces every indirect jump and applies a bitmask to the target before jumping. Second, every instruction at such an aligned address must be a valid instruction. With these two constraints, all possible overlapping instructions are handled and reliable disassembly becomes possible.
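The effect of the bitmask that nacljmp applies can be sketched in Python (an illustration of the masking arithmetic, not NaCl's actual implementation):

```python
BUNDLE_SIZE = 32   # NaCl's alignment unit for indirect-jump targets

def nacl_mask(target):
    """Snap an indirect-jump target back to a 32-byte boundary, the
    effect of the bitmask nacljmp applies before the actual jump."""
    return target & ~(BUNDLE_SIZE - 1)

# Any address inside a bundle is forced back to the bundle start, so an
# indirect jump can never land mid-instruction (e.g. at the 'CD 80'
# hiding inside the 5-byte 'and' above).
assert nacl_mask(0x10000) == 0x10000
assert nacl_mask(0x10001) == 0x10000
assert nacl_mask(0x1003F) == 0x10020
```

Combined with the rule that every 32-byte boundary holds a valid instruction start, the mask guarantees that every reachable indirect-jump target was seen by the disassembler.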
5.2.2 Constraints for NaCl Binaries

Once a NaCl binary has been loaded into memory, it is marked as not writable, which is enforced by OS-level protection mechanisms during execution. This prevents any modification of the binary after it has passed the NaCl sanity checks, and thus prevents bypassing the security requirements. The remaining constraints are:

- The binary is statically linked at a start address of zero, with the first byte of text at 64K, so that the alignment requirements hold and memory/code bounds checking is simplified.
- All indirect control transfers use the nacljmp pseudoinstruction, for the reasons outlined above.
- Similarly, the binary contains no instructions or pseudoinstructions overlapping a 32-byte boundary.
- All valid instruction addresses are reachable by a fall-through disassembly starting at the base address, so the binary can be reliably disassembled and analyzed.
- All direct control transfers target valid instructions identified during disassembly.
- Finally, the binary is padded up to the nearest page with at least one hlt instruction, as a last stopgap against runaway control flow.
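The fall-through validation pass implied by these constraints can be sketched over a made-up instruction stream (this is a toy model, not NaCl's validator or real x86: each instruction is a tuple of length in bytes, kind, and optional direct-jump target):

```python
BUNDLE = 32
DISALLOWED = {"int", "syscall", "lds"}   # stand-ins for the banned opcodes

def validate(instrs):
    """Fall-through pass: record every instruction start address, reject
    disallowed opcodes and instructions straddling a 32-byte boundary,
    then check that direct jumps target recorded starts."""
    starts, addr = set(), 0
    for length, kind, target in instrs:
        if kind in DISALLOWED:
            return False, "disallowed instruction at %#x" % addr
        if addr // BUNDLE != (addr + length - 1) // BUNDLE:
            return False, "instruction straddles bundle at %#x" % addr
        starts.add(addr)
        addr += length
    for length, kind, target in instrs:
        if kind == "jmp" and target not in starts:
            return False, "direct jump into the middle of an instruction"
    return True, "ok"

ok, _ = validate([(5, "and", None), (2, "jmp", 0)])     # jump to a start
assert ok
ok, _ = validate([(5, "and", None), (2, "jmp", 2)])     # jump mid-'and'
assert not ok
```

The second case is precisely the overlapping-instruction attack from the disassembly example: a jump into byte 2 of the 5-byte instruction is rejected because address 2 was never recorded as an instruction start.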

References

[1] Tal Garfinkel, Ben Pfaff, Jim Chow, Mendel Rosenblum, and Dan Boneh. Terra: A virtual machine-based platform for trusted computing. SIGOPS Oper. Syst. Rev., 37(5):193-206, October 2003.

[2] Tal Garfinkel, Ben Pfaff, and Mendel Rosenblum. Ostia: A delegating architecture for secure system call interposition. In NDSS, 2004.

[3] Bennet Yee, David Sehr, Gregory Dardyk, J. Bradley Chen, Robert Muth, Tavis Ormandy, Shiki Okasaka, Neha Narula, and Nicholas Fullagar. Native Client: A sandbox for portable, untrusted x86 native code. In Proceedings of the 30th IEEE Symposium on Security and Privacy (SP '09), pages 79-93. IEEE Computer Society, 2009.