Probabilistic Performance Analysis of Moving Target and Deception Reconnaissance Defenses


Probabilistic Performance Analysis of Moving Target and Deception Reconnaissance Defenses
Michael Crouse, Bryan Prosser, and Errin W. Fulp
Wake Forest University, Department of Computer Science
October 12, 2015

Simplified Attack Process

[Diagram: attack loop: start → reconnaissance → risk OK? → attack → success? → repeat]

- Most cyber attacks start with a reconnaissance phase
- Gather intelligence about potential targets
- Can require substantial time and effort for the attacker
- Consider defenses that occur at the reconnaissance phase
- Attempt to render the attacker's intelligence invalid
- Potentially asymmetric, given the attacker's expense?

Crouse, Prosser, & Fulp. MTD Models. MTD@CCS 2015.

Reconnaissance Defenses

[Diagram: taxonomy of reconnaissance defenses: movement (ASLR, network shuffle, evolve configs) and deception (network masking, honeypot)]

- There are two reconnaissance defense categories: movement and deception

Movement Defenses

[Diagram: network addresses remapped across times t_n, t_{n+1}, and t_{n+2}]

- Movement is used to change the attack surface over time
- After movement, reconnaissance information is invalid
- Movement can be applied at different system levels
- Network address shuffling is one example of a moving target defense
  - Periodically remap network addresses
  - Reconnaissance information is nullified after a shuffle
  - Must properly manage legitimate traffic

Deception Defenses

[Diagram: deception defenses: masking, inventing, decoying, and mimicking]

- Masking, inventing, decoying, and mimicking are used as a defense
- False or misleading information is provided to the attacker
- Honeypots are one example of a deception defense
  - A computer system designed to be a trap for attackers
  - Mimics a real system to entice the adversary into attacking
  - Slows the attacker and can collect important attacker information

Reconnaissance Defense Performance

- Recall, reconnaissance defenses are potentially asymmetric
- Can potentially cost the attacker more than the defender
- Empirical results suggest these defenses are successful
- What type of defense performance can we expect in other situations?
  - What is the probability of attacker success?
  - How should the defenses be deployed?
  - Which networks are well suited for this defense?
- Probabilistic models can perhaps provide greater insight into the performance of reconnaissance defenses

Network Model Assumptions

- There are n addresses available (the address space)
- There are v vulnerable computers, where v \leq n
- There are h honeypots within the network
- The attacker is aware of the address space (n addresses)
- The attacker will attempt k connections
- The attacker wins if m vulnerable computers are located
- The attacker has probability d of being deceived by a honeypot
- The attacker loses if honeypot(s) are contacted (perhaps withstanding some losses)

Urn Models

- Consider using urn models to estimate the attacker's success
- There are v green marbles representing vulnerable computers
- There are h red marbles representing honeypots
- The remaining marbles are blue and represent empty or secure computers
- Drawing marbles from the urn represents scanning the network
- The result of the draws (marble types) represents the attacker's outcome

Undefended Model

- Consider the case where no defense is deployed
- The attacker probes k unique addresses; there is no need to repeat an address
- The hypergeometric distribution models the attacker's outcome after k probes:

  \Pr(X_k = x) = \binom{v}{x} \binom{n-v}{k-x} / \binom{n}{k}

- The population consists of v vulnerable and n - v non-vulnerable computers
- Appropriate since draws are done without replacement, which represents the knowledge gained after each draw
- The attacker's probability of success is

  \Pr(X_k \geq 1) = 1 - \Pr(X_k = 0) = 1 - \binom{n-v}{k} / \binom{n}{k}
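The undefended model is straightforward to evaluate numerically; a minimal sketch (function name and example parameters are illustrative, not from the slides):

```python
from math import comb

def undefended_success(n, v, k):
    """Pr(X_k >= 1): attacker finds at least one of v vulnerable hosts
    after probing k unique addresses in an n-address space
    (hypergeometric, draws without replacement)."""
    return 1 - comb(n - v, k) / comb(n, k)

# Class-C example used later in the slides: n = 256, 25% vulnerable (v = 64).
# Even scanning roughly 10% of the space (k = 26) gives a high success probability.
print(undefended_success(256, 64, 26))
```

Note that Python's `math.comb` returns 0 when the lower argument exceeds the upper one, which matches the convention that impossible draws contribute no probability.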

Honeypot Defense Model

- Consider the case where honeypots (h \geq 1) are deployed
- The attacker probes k unique addresses but hopes to avoid honeypots
- Let l represent the number of losses (honeypots contacted)
- The multivariate hypergeometric distribution models the attacker's results:

  \Pr(X_k = x, \; l \text{ losses}) = \binom{v}{x} \binom{h}{l} \binom{n-(v+h)}{k-(x+l)} / \binom{n}{k}

- The population has h honeypots, v vulnerable, and n - (v+h) non-vulnerable computers
- Draws are again done without replacement, which represents the knowledge gained after each draw

Attacker Success with Honeypot Defense

- Given the outcome probability, the probability of success is

  \Pr(X_k \geq 1 \text{ and } l \leq L) = \sum_{l=0}^{L} \sum_{x=1}^{k} \binom{v}{x} \binom{h}{l} \binom{n-(v+h)}{k-(x+l)} / \binom{n}{k}

- At least one vulnerable computer is found and no more than L losses (honeypots) are incurred
- These equations can be used to estimate attacker performance
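The double sum above can be evaluated directly; a sketch (names and example parameters are illustrative), summing the multivariate hypergeometric terms:

```python
from math import comb

def honeypot_success(n, v, h, k, L):
    """Pr(X_k >= 1 and l <= L): at least one vulnerable host found and
    at most L honeypots contacted in k unique probes.  Binomial terms
    with impossible arguments evaluate to 0, so they drop out."""
    total = comb(n, k)
    return sum(
        comb(v, x) * comb(h, l) * comb(n - (v + h), k - (x + l)) / total
        for l in range(L + 1)
        for x in range(1, k - l + 1)
    )

# Class-C network, 25% vulnerable, ~5% honeypots (13), no allowable losses
print(honeypot_success(256, 64, 13, 26, 0))
```

As a sanity check, with h = 0 and L = 0 the expression collapses to the undefended hypergeometric success probability.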

Defense Analysis

- Foothold scenario
  - The attacker needs only one vulnerable computer in the infrastructure
  - The foothold can then be leveraged to attack other computers
- Minimum-to-win scenario
  - The attacker wants to compromise a minimum number of computers
  - For example, information is distributed or the attacker is recruiting bots for a botnet

Foothold Attack Performance with Honeypots

[Figure: attacker success probability vs. percentage of address space scanned, for no defense and 1%, 5%, and 10% honeypots]

- Consider a class-C network with 25% vulnerable computers
- Honeypots reduce attacker success after a certain scan percentage
- Larger reductions are possible with slightly more honeypots

Min-to-Win Attack Performance with Honeypots

[Figure: attacker success probability vs. percentage of address space scanned, for no defense and 1%, 5%, and 10% honeypots]

- Consider a class-C network with 25% vulnerable computers
- The attacker must find at least 16 of the 64 vulnerable computers
- Honeypots still significantly reduce attacker success
- Overall, min-to-win is a more difficult goal for the attacker

Effect of the Min-to-Win Objective

[Figure: attacker success probability vs. percentage of address space scanned, for minimums of 5%, 10%, and 25% of vulnerable computers]

- Class-C network with 25% vulnerable computers and 5% honeypots
- Achieving the min-to-win can be increasingly difficult for the attacker
- Hiding information across multiple computers is a good defense

Comparing Reconnaissance Performance

- Interested in comparing shuffle, honeypot, and combined defenses
- Specifically, the effect of the attacker's wait time
- Assume a delay between reconnaissance and attack (wait time)

[Diagram: timeline showing the wait time falling between shuffle_n and shuffle_{n+1}]

- If a shuffle occurs during the wait, then the attacker likely fails
- This aspect (wait time) is not captured by the shuffle models
- Instead, measure the effect of the different defenses empirically

Defense Simulations

- A discrete event simulator was used to measure performance
- Traffic traces (2008 SIGCOMM) were replayed for connections
- IP addresses were remapped to a class-C network
- The defended network used shuffling, honeypots, or a combination
- 10% of the network were vulnerable computers
- For the honeypot defense, 4% of the addresses were honeypots
- Random connections from the trace file were marked as reconnaissance
- The attacker's wait time was Poisson distributed with a 10-minute average
- If honeypots are used, the attacker loses if a honeypot is contacted
- If shuffling is used, the attacker loses if a vulnerable computer is found (reconnaissance) but is not contacted after the wait
- The attacker wins if a vulnerable machine is contacted at the attack
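The trace-driven simulator itself is not shown; the shuffle-versus-wait interaction it studies can be sketched with a much-simplified Monte Carlo model (all names, the random scan in place of trace replay, and the exponential wait standing in for the slide's Poisson-distributed wait are assumptions):

```python
import random

def foothold_success_rate(n, v, k, shuffle_interval, mean_wait, trials=20000):
    """Monte Carlo estimate of foothold success under shuffling: the
    attacker probes k random addresses; if a vulnerable host is found,
    it attacks after a random wait, and fails when a shuffle fires
    before the attack lands."""
    wins = 0
    for _ in range(trials):
        # Treat the first v of the n addresses as the vulnerable ones.
        if not any(random.randrange(n) < v for _ in range(k)):
            continue  # reconnaissance found nothing
        wait = random.expovariate(1.0 / mean_wait)
        # The next shuffle occurs at a uniformly random phase of its interval.
        if wait < random.uniform(0, shuffle_interval):
            wins += 1  # attack lands before the shuffle invalidates recon
    return wins / trials
```

With a shuffle interval much longer than the wait, the estimate approaches the no-defense foothold probability, mirroring the shape of the slide's ratio plot.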

Foothold Simulation Results

[Figure: attacker success probability vs. ratio of shuffle time to attack wait time, for no defense, shuffle, honeypot, and combined]

- Consider a 10% scan with different shuffle-to-wait ratios
- If the ratio is 2, then the shuffle interval is 20 minutes (the wait averages 10 minutes)
- Shuffling is better than honeypots if the shuffle interval is less than the wait
- The combined defense (shuffle and honeypot) always performs best

Min-to-Win Simulation Results

[Figure: attacker success probability vs. percentage of vulnerable computers necessary for the attacker, for no defense, shuffle, honeypot, and combined]

- Consider a 10% scan and vary the percentage required to win
- Shuffle-to-wait ratio of 2 (20-minute shuffle interval)
- Honeypots are better than shuffling if the minimum to win is low
- The combined defense (shuffle and honeypot) always performs best

Drop Probability Simulation Results

[Figure: connection drop probability vs. ratio of shuffle time to attack time]

- Shuffling can disconnect illegitimate and legitimate traffic
- Shorter intervals provide a better defense, but a higher drop probability
- The combined defense can provide good performance with longer shuffle intervals

Conclusions and Future Work

- Reconnaissance defenses attempt to invalidate attacker intelligence
- Movement (shuffling) and deception (honeypots)
- Probabilistic models were developed to estimate defense performance
- Can provide defense analysis for various situations
- The models indicate only a few honeypots are necessary for a good defense
- For a class-C network, 5% (13) honeypots were very effective
- Simulation results also indicate a combined defense is best
- Would like to develop a combined defense model
- Adding cost metrics would help determine the best defense strategy

Turn back, you've gone too far

Shuffle Defense Model

- Consider network address shuffling with the following assumptions
- The attacker scans then attacks with no wait
- Perfect shuffling is used; therefore, a shuffle occurs after every attack
- There is no attacker knowledge gain after each attempt
- The binomial distribution models the attacker's foothold outcome:

  \Pr(X_k = x) = \binom{k}{x} p^x (1-p)^{k-x}, \quad \text{where } p = v/n

- The population has v vulnerable and n - v non-vulnerable computers
- Draws are done with replacement, which represents no knowledge gained after each draw

Attacker Success with Perfect Shuffle Defense

- Given the outcome probability, the probability of success is

  \Pr(X_k > 0) = 1 - \Pr(X_k = 0) = 1 - (1-p)^k

- The previous equation is for foothold only
- There is no penalty for multiple contacts of the same vulnerable computer
- For min-to-win, the population will change
- If a green marble is drawn, it is replaced with a blue marble
- This captures the notion that there is no benefit to contacting the same vulnerable computer more than once
- The probability of the attacker's outcome follows a contagion equation:

  \Pr(X_k = x) = \binom{v}{x} \frac{1}{n^k} \sum_{j=0}^{x} (-1)^{x-j} \binom{x}{j} (n-v+j)^k
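Both shuffle-model equations are easy to evaluate; a sketch (function names are illustrative):

```python
from math import comb

def shuffle_foothold(n, v, k):
    """Foothold success under perfect shuffling: k independent probes,
    each hitting a vulnerable host with probability p = v/n (binomial)."""
    return 1 - (1 - v / n) ** k

def shuffle_exact(n, v, k, x):
    """Contagion equation: Pr(exactly x distinct vulnerable hosts found
    in k probes), where a found host is 'replaced with a blue marble'
    so repeat contacts yield no benefit (inclusion-exclusion count)."""
    return comb(v, x) * sum(
        (-1) ** (x - j) * comb(x, j) * (n - v + j) ** k for j in range(x + 1)
    ) / n ** k
```

Setting x = 0 in the contagion equation recovers ((n-v)/n)^k, so the two functions agree on the foothold case, and the exact-hit probabilities sum to one over the feasible values of x.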

Shuffling Performance Analysis

- Similar probabilistic models exist for network address shuffling
- Perfect shuffling is modeled: remapping occurs after every scan
- Scan and attack occur simultaneously
- Shuffling does not perform well given these assumptions

[Figure: attacker success probability vs. percentage of address space scanned, for no defense, perfect shuffle, and 5% honeypots; shows the defense effect after an attack]