Stuck in Traffic (SiT) Attacks

Stuck in Traffic (SiT) Attacks
Mina Guirguis, Texas State University
Joint work with George Atia

Traffic

Intelligent Transportation Systems
V2X communication enables drivers to make better decisions:
  Avoiding congestion
  Balancing traffic across multiple routes
  Cooperating with other drivers
Already happening implicitly for a subset of drivers: smartphone apps
Vision: make this more explicit through smart traffic signs and software agents on the vehicles

Challenges
Reliance on wireless communication: attackers can interfere with or jam the signals, preventing communication
Complexity: harder to understand and debug; not all drivers will follow the signs, which are only suggestive!
Studies consider communication failures as random noise, but attacks are not random: they are well orchestrated

Contributions
Research questions:
  Can ITS be exploited by attackers to cause congestion?
  Can attackers do this in a smart way to avoid detection?
Contributions:
  Develop a general framework to identify stealthy attacks that minimize cost and maximize damage
  Expose SiT attacks that decide which signal to interfere with and when
  The attack policies identified outperform other attack policies (e.g., DoS, random, and myopic)

Talk outline
  Motivation
  An MDP framework
  Results
  Conclusions

ITS: balancing traffic
ITS goal: balance traffic across road segments
  Segment: part of the infrastructure controlled by a Road Side Unit (RSU)
  Decision point: a point at which drivers make informed decisions
How:
  Vehicles on each segment report to their RSU, which maintains an estimate of the load
  The decision point influences the choices made by incoming traffic so as to balance the load

The model
Discrete-time system of n segments; time is indexed by k
  Number of vehicles on segment i at time k
  Admission ratio
  Traffic optimization function
  Arrival rate
  Service rate
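The equations defining these quantities were shown as images on the slide and are not preserved in the transcription. Below is a minimal sketch of notation consistent with the listed quantities; every symbol (x_i(k), a_i(k), lambda(k), mu_i) and the balance equation are assumptions, not taken from the slide:

```latex
% Assumed notation for the listed quantities (symbols are not taken from the slide)
\begin{align*}
x_i(k)             &: \text{number of vehicles on segment } i \text{ at time } k\\
a_i(k) \in [0,1]   &: \text{admission ratio of incoming traffic routed to segment } i\\
\lambda(k),\ \mu_i &: \text{arrival rate of incoming vehicles, service rate of segment } i\\
x_i(k+1)           &= \max\bigl(x_i(k) + a_i(k)\,\lambda(k) - \mu_i,\ 0\bigr)
  \quad \text{(one plausible per-segment balance equation)}
\end{align*}
```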

SiT attacks
Goal: unbalance traffic, causing congestion
How:
  The attacker jams some of the signals from vehicles to the RSUs (attack action u)
  RSUs get incorrect estimates
  The decision point does not reflect true conditions
  Incoming vehicles make wrong decisions
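A toy sketch of the mechanism described above, under the assumption that jamming simply removes a fraction of vehicle reports; the numbers and variable names are illustrative, not from the talk:

```python
# Sketch of how a SiT attack skews the decision point's view (numbers and names are assumptions).
true_load = {1: 30, 2: 50}             # actual vehicles on each segment; segment 2 is heavier
report_fraction = {1: 1.0, 2: 0.5}     # attack action u jams 50% of reports from segment 2

# RSUs estimate load only from the reports that get through.
estimated_load = {i: true_load[i] * report_fraction[i] for i in true_load}

# The decision point steers incoming traffic toward the segment that *looks* lighter.
chosen = min(estimated_load, key=estimated_load.get)
print(estimated_load)   # {1: 30.0, 2: 25.0}
print(chosen)           # 2 -- incoming vehicles are sent to the already-congested segment
```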

Markov Decision Process
The state at time k:
  Number of cars on each segment
  Decision info displayed to drivers
State transitions:
  Randomness from the arrival probability distribution
  Attack actions (no attack, attack 1 segment, attack 2 segments, etc.)
Rewards:
  Damage: unbalance in traffic
  Cost: price incurred when a segment is attacked
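A minimal sketch of how these ingredients could be encoded for a two-segment system; the specific damage and cost functions, the attack-cost weight, and all names are illustrative assumptions, not the ones used in the talk:

```python
import itertools

# Hypothetical encoding of the SiT MDP ingredients for 2 segments (names and functions assumed).
N_MAX = 100                           # vehicles per segment
ACTIONS = [(), (0,), (1,), (0, 1)]    # no attack, attack one segment, attack both

def damage(state):
    """Damage = unbalance in traffic across the two segments."""
    x1, x2 = state
    return abs(x1 - x2)

def cost(action, price_per_segment=1.0):
    """Cost = price incurred for every segment that is jammed."""
    return price_per_segment * len(action)

def reward(state, action):
    """Immediate reward trades off damage inflicted against attack cost."""
    return damage(state) - cost(action)

# Even the vehicle-count part of the state space is large.
states = list(itertools.product(range(N_MAX + 1), repeat=2))
print(len(states))   # (N_MAX + 1) ** 2 states from the vehicle counts alone
```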

Illustration
[Four-state transition diagram with edges labeled P12, g(1,u,2); P23, g(2,u,3); P14, g(1,u,4); P43, g(4,u,3); P34, g(3,u,4)]
  Pij: probability of transition from state i to state j
  g(i,u,j): reward under action u
  Policy: selecting an action u for every state

Bellman's equation
To obtain an optimal policy, the attacker solves:

Bellman's equation
To obtain an optimal policy, the attacker solves:
The immediate reward reflects the tradeoff between the damage inflicted and the cost of the attack

Bellman's equation
To obtain an optimal policy, the attacker solves:
To find the optimal policy:
  Value iteration
  Policy iteration
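The equation itself appears only as an image on the slides. Written in the P_ij / g(i,u,j) notation of the earlier illustration, the standard form it most likely follows is sketched below; the discounted formulation and the discount factor alpha are assumptions:

```latex
% Standard Bellman optimality equation in the P_ij / g(i,u,j) notation used above
% (discounted form; the discount factor \alpha is an assumption, not taken from the slide)
J^*(i) \;=\; \max_{u \in U(i)} \; \sum_{j} P_{ij}(u)\,\bigl[\, g(i,u,j) \;+\; \alpha\, J^*(j) \,\bigr]
```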

Policy iteration
Exact policy iteration
There is a curse!
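A minimal sketch of exact policy iteration over an explicitly enumerated state space, assuming tabulated transition probabilities P and rewards g; this is the textbook procedure, not the talk's specific implementation:

```python
import numpy as np

def policy_iteration(P, g, alpha=0.95):
    """Exact policy iteration (sketch).
    P[u][i, j] : transition probability from state i to j under action u
    g[u][i, j] : reward collected on that transition
    Returns the optimal policy (one action index per state) and its value."""
    n_actions = len(P)
    n_states = P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - alpha * P_pi) J = r_pi exactly.
        P_pi = np.array([P[policy[i]][i] for i in range(n_states)])
        r_pi = np.array([P[policy[i]][i] @ g[policy[i]][i] for i in range(n_states)])
        J = np.linalg.solve(np.eye(n_states) - alpha * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        Q = np.array([[P[u][i] @ (g[u][i] + alpha * J) for u in range(n_actions)]
                      for i in range(n_states)])
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, J
        policy = new_policy
```

The exact evaluation step solves one linear system per iteration, which is precisely what becomes infeasible on the next slide.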

The curse of dimensionality!
Finding optimal policies is difficult!
Example: a 2-segment setup with 100 vehicles has a state space of size 100^8
Cannot solve such a large system of linear equations!
Need to look for approximations!

Approximate policy iteration
Replace the optimal values with a parametric cost-to-go function
Obtain the approximate cost-to-go from simulations
Characterize every state by s features
r is a weight vector, with one entry per feature
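The parametric form was shown as an equation on the slide; a common linear-in-features choice consistent with this description is sketched below, where the feature map phi is an assumption:

```latex
% Linear-in-features cost-to-go approximation (the feature map \phi is an assumption)
\tilde{J}(x; r) \;=\; \sum_{m=1}^{s} r_m\, \phi_m(x) \;=\; \phi(x)^{\top} r
```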

API algorithm
[Algorithm shown as a figure on the slide]
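Since the algorithm itself survives only as a figure, here is a compact sketch of a generic approximate policy iteration loop matching the previous slide's description (simulation-based fitting of a linear cost-to-go); the function names, the least-squares fitting step, and the iteration count are all assumptions:

```python
import numpy as np

def approximate_policy_iteration(simulate_returns, features, states_sample,
                                 greedy_policy, n_iterations=10):
    """Generic approximate policy iteration (sketch).
    simulate_returns(policy, x) : Monte Carlo estimate of the cost-to-go from state x
    features(x)                 : feature vector phi(x) of length s
    greedy_policy(r)            : policy that is greedy w.r.t. J~(x) = phi(x) @ r"""
    s = len(features(states_sample[0]))
    r = np.zeros(s)
    policy = greedy_policy(r)
    for _ in range(n_iterations):
        # 1. Approximate policy evaluation: fit r by least squares on simulated returns.
        Phi = np.array([features(x) for x in states_sample])
        J_hat = np.array([simulate_returns(policy, x) for x in states_sample])
        r, *_ = np.linalg.lstsq(Phi, J_hat, rcond=None)
        # 2. Policy improvement: act greedily with respect to the fitted cost-to-go.
        policy = greedy_policy(r)
    return policy, r
```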

Talk outline
  Motivation
  An MDP framework
  Results
  Conclusions

Experimental setup
Two segments
Arrival distribution:
  3 vehicles with probability 0.3
  8 vehicles with probability 0.6
  30 vehicles with probability 0.1
Service rate fixed at 5 vehicles per time step
A SiT attack affects 50% of vehicles
Damage and cost functions as defined on the slide
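A small sketch encoding this setup, useful for reproducing the arrival process; the variable names and the random seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Experimental setup from the slide (values as stated; names are assumptions).
arrival_values = np.array([3, 8, 30])        # vehicles arriving per time step
arrival_probs  = np.array([0.3, 0.6, 0.1])   # probabilities (sum to 1.0)
service_rate   = 5                           # vehicles served per time step
attack_effect  = 0.5                         # a SiT attack affects 50% of vehicles

def sample_arrivals(n_steps):
    """Draw arrivals for n_steps time steps from the slide's distribution."""
    return rng.choice(arrival_values, size=n_steps, p=arrival_probs)

print(sample_arrivals(10))
```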

System 1: two identical segments

System 2: different segments

            No Attack   Attack Seg1   Attack Seg2
System 1       35%          45%           20%
System 2        4%          20%           76%

Conclusions
Developed a framework to identify stealthy attacks that cause congestion: SiT attacks
Demonstrated their potency in comparison to other attack policies (DoS, random, myopic)
SiT attacks adapt to system parameters while balancing current and future rewards
As the degree of uncertainty increases, the obtained policies perform better
It is important to investigate the safety of ITS as they are developed

Stuck in Traffic (SiT) Attacks Thank you! This work is supported by NSF