Adaptive Stream Mining: A Novel Dynamic Computing Paradigm for Knowledge Extraction
AFOSR DDDAS Program PI Meeting Presentation
PIs: Shuvra S. Bhattacharyya, University of Maryland; Mihaela van der Schaar, UCLA
Email: ssb@umd.edu, mihaela@ee.ucla.edu
October 1, 2013, Arlington, VA
Presentation version: September 30, 2013
UMD-UCLA Project on Adaptive Stream Mining Systems in the AFOSR Dynamic Data Driven Applications Systems (DDDAS) Program
- Shuvra Bhattacharyya: expertise in real-time signal processing systems, model-based HW and SW design tools, and dataflow methodologies.
- Kishan Sudusinghe, PhD student at DSPCAD/UMD: research interests in embedded systems and applications of machine learning and computer vision.
- Inkeun Cho, PhD student at DSPCAD/UMD: research interests in embedded systems design and wireless communication systems.
- Mihaela van der Schaar: expertise in multimedia processing, networks and distributed systems, online learning, and distributed stream mining systems.
- Jie Xu, PhD student at UCLA, EE Department: research interests in stream mining, online learning, and incentive design in networks.
- Luca Canzian, post-doc at UCLA, EE Department: research interests in stream mining and game theory applied to wireless networks.
Outline
- Motivation: Adaptive stream mining (ASM) and connection to dataflow (DF) graphs
- Relevance of DDDAS paradigm to ASM
- ASMDF design methodology
- Recent developments and results
- Summary
Project Objective: Systematic Design, Implementation, and Optimization of Adaptive Stream Mining Systems
ASMDF: Motivation and Challenges
- Time sensitive: mission-critical tasks require data at run-time and need to make adaptive classifications dynamically
- High-volume data in different contexts is distributed across networks with heterogeneous content types and platforms
- Collaborative training and detection schemes are useful for increased accuracy
- Designing complex application graphs using traditional methods leads to many inefficiencies
- Flexibility, extensibility, and adaptivity are important for sustained performance
- Computational efficiency: operations must be performed in real time on limited and often distributed resources
Project Objectives
- Managing dynamic and non-stationary behavior
  - Model-based integration of topologies of classifiers into network- and node-level design
  - Integration of DDDAS principles throughout the design methodology
  - Analysis of run-time statistics for accurate, goal-driven execution
  - Online learning algorithms for data-driven adaptivity
- Exploiting heterogeneity
  - Dataflow modeling at the network and node levels to orthogonalize application and platform capabilities, and to perform effective design space exploration
- Optimizing computational efficiency
  - Distribution of classification, signal processing, and control tasks for energy- and performance-efficient operation
  - Design and integration of advanced dataflow scheduling techniques for efficient mapping and workload distribution
Contributions
- Design methodology, tools, and component libraries for the emerging domain of adaptive stream mining (ASM) applications
- Theory and design methods for systematic integration of dynamic, data-driven operation into stream mining systems
- Advanced machine learning concepts and methods for development of adaptive software systems on resource-constrained heterogeneous platforms
- Structured methods for network- and node-level design and analysis based on dataflow (DF) models of computation
- Novel stream mining approaches to optimize real-time knowledge extraction
- Natural fit to the DDDAS paradigm
ASM in Absence of DDDAS
- Serial configuration (Stream Mining Subsystem 1 and Stream Mining Subsystem 2 on Node 1 and Node 2): adaptability is achieved within subsystems → lacks run-time adaptability at the system level based on monitored performance and statistics; high latency due to serial processing
- Replicated systems for different constraints: data streams are applied to differently-configured parallel systems for adaptive behavior; each system is specifically tailored for different metrics or scenarios of interest → must sacrifice scalability or accuracy (or both)
DDDAS Impact on ASM
- Run-time ASM systems and simulated ASM systems: the run-time system provides feedback, and simulation provides predictions; both are driven deliberately based on metrics of interest
- Dataflow models: local data drives higher-level DF models; models can be dynamically configured
- Sensors (local/HW and global/networked): systems share statistics with models, and models help predict future states
- Classification models: both local and global data configure classification
- Scheduling models: scheduling affects both local and global monitoring
- Monitoring models: dynamically adapt to different constraints and scenarios
- Metrics of interest / constraints: processing power, battery level, resource utilization, bandwidth, latency, accuracy, etc.
- Algorithms: selected based on feedback from monitoring; different constraints enable variable sets of algorithms
Surveillance Applications
Related Work
- Ducasse, van der Schaar, et al. (ASMDF team's prior work)
  - Adheres to resource constraints on platforms
  - Static instead of dynamic, and based on queries with centralized and decentralized approaches
- Olston et al.
  - Centralized approach with query-based implementation
  - Missing dynamic component
- Chandrasekaran et al.
  - TelegraphCQ: dynamic but query-based approach
  - No analysis of application for accuracy trade-offs
- Shen et al. (ASMDF team's prior work)
  - Dynamic dataflow model to encapsulate different operating configurations with different dataflow properties
  - Lacks support for the hierarchical design and integration of dynamically reconfigurable subsystems, which is critical in ASM
DDDAS Paradigm Applied to ASM
- Design space: classifier topologies, dataflow graph schedules, platform configurations, network attributes
- Algorithms: machine learning algorithms, scheduling algorithms, signal processing algorithms
- DDDAS models: dataflow models for design; classifier models for computation and classification; scheduling models for mapping and distribution; simulation models for behavior prediction and analysis
- Applications: multimedia processing, surveillance, cyber-security, intelligent traffic control, seismic monitoring, online financial analysis
DDDAS Design Framework for ASM
- Hierarchical modeling: networks for distributed stream mining; network nodes; classifier topologies; signal processing subsystems; individual classifiers
- DDDAS run-time components
- DDDAS instrumentation components
- Novel design framework for distributed adaptive stream mining: Lightweight Dataflow for Dynamic Data-Driven application systems Environment (LiD4E)
- Hierarchical, dynamic dataflow model of computation (HCFDF)
Adaptive Stream Mining Systems as Topologies of Classifiers R. Ducasse and M. van der Schaar. Finding it now: Construction and configuration of networked classifiers in real-time stream mining systems. In S. S. Bhattacharyya, E. F. Deprettere, R. Leupers, and J. Takala, editors, Handbook of Signal Processing Systems. Springer, second edition, 2013.
Dataflow-based Design for DSP
- A variety of development environments are based on dataflow models of computation; applications are designed in terms of signal processing block diagrams.
- By using these design tools, an application designer can:
  - Develop complete functional specifications of model-based components
  - Verify functional correctness through model-based simulation and verification
  - Implement the designs on targeted platforms through supported platform-specific flows
(Examples shown from the Agilent ADS tool, National Instruments LabVIEW, and UC Berkeley Ptolemy)
DSP-oriented Dataflow Models of Computation
- An application is modeled as a directed graph
  - Nodes (actors) represent functions
  - Edges represent communication channels between functions
  - Nodes produce and consume data from edges
  - Edges buffer data (logically) in a FIFO (first-in, first-out) fashion
- Data-driven execution model
  - An actor can execute whenever it has sufficient data on its input edges
  - The order in which actors execute is not part of the specification; it is typically determined by the compiler, the hardware, or both
- Iterative execution: the body of the loop is iterated a large or infinite number of times
Dataflow Graphs
- Vertices (actors) represent computational modules
- Edges represent FIFO buffers
- Edges may have delays, implemented as initial tokens
- Tokens are produced and consumed on edges
- Different models have different rules for production (SDF → fixed, CSDF → periodic, BDF → dynamic)
(Figure: chain X → Y → Z with edges e1 and e2, production rates p1,i and p2,i, consumption rates c1,i and c2,i, and 5 initial tokens on one edge)
Dataflow Production and Consumption Rates
(Figure: the same X → Y → Z graph, annotated with production rates p1,i and p2,i and consumption rates c1,i and c2,i on edges e1 and e2)
Dataflow Graph Scheduling
- Assigning actors to processors, and ordering actor subsets that share common processors
- Here, a "processor" means a hardware resource for actor execution on which assigned actors are time-multiplexed
- Scheduling objectives include:
  - Exploiting parallelism
  - Buffer management
  - Minimizing power/energy consumption
  - Managing thermal constraints
Schedule and Schedule Modeling Example
- Self-timed schedule:
  - Proc 1: (1, 2, 3, 4, 6)
  - Proc 2: (5, 7, 8)
  - Proc 3: (9)
- In the IPC graph derived from the self-timed schedule, every edge (x, y) induces the precedence constraint: start(y, k) ≥ start(x, k − delay(x, y)) + t(x)
(Figure: the self-timed schedule and its IPC graph, including interprocessor send/receive actors such as 2r1, 4s1, 4s2, 4s3, 5s1, 7r1, 8r1, and 9r1)
Background: Core Functional Dataflow (CFDF)
- Divide actors into sets of modes [Plishker 2008]
- Each mode has fixed consumption and production behavior, but actors may dynamically switch between modes
- Enabling conditions and computation are associated with each mode, including the next mode to enable and then invoke
- Example: a standard switch actor with a control input and a data input, routing each data token to either the true output or the false output

Production and consumption behavior of switch modes:

Mode    | Consumes: Control | Consumes: Data | Produces: True | Produces: False
Control | 1                 | 0              | 0              | 0
True    | 0                 | 1              | 1              | 0
False   | 0                 | 1              | 0              | 1

Mode transition diagram: Control mode → True mode or False mode (chosen by the control token) → back to Control mode
Background: Design Flow - Lightweight Dataflow (LWDF) [Shen 2010]
- Dataflow graph application (e.g., a chain X → Y → Z), built from an actor library and a communication library, each validated through unit testing
- Graph transformation and analysis
- Scheduling and buffer mapping
- Graph-level function/implementation validation
- Target platforms: programmable DSP, GPU, FPGA, SoC
Lightweight Dataflow Programming Approach
- A dataflow programming approach for model-based design and implementation of DSP systems [Shen 2010] [Shen 2011]
- "Lightweight" → minimally intrusive on existing design processes, requiring minimal dependence on specialized tools or libraries
- Features:
  - Improves the productivity of the design process and the quality of derived implementations
  - Retargetability across different platforms
  - Allows designers to integrate and experiment with dataflow modeling approaches relatively quickly and flexibly within existing design methodologies and processes
Dataflow Modeling: Hierarchical Core Functional Dataflow (HCFDF)
- Functionality
  - Enable and invoke separation
  - Hierarchical subgraphs
  - Unique modes within each subgraph
  - Each mode has a unique set of invoke options (firings)
  - Predictable production and consumption rates at subsystem interfaces
- Operation
  - Interface enable subset based on input
  - Each subgraph executes until termination conditions are met
HCFDF Example
(Figure: example HCFDF graph, illustrating subgraph termination conditions)
Developed: LiD4E Framework, v0.1
- A novel design tool constructed using LiDE as a foundation
  - Lightweight: minimal intrusion on the development process; emphasis on interface-level modifications
- Incorporates the first version of HCFDF (hierarchical core functional dataflow), enabling hierarchical construction of core functional dataflow graphs
- Flexible and structured support for the DDDAS paradigm in ASM systems
  - New types of decision and control actors (dataflow graph functional components)
  - Adaptation management library
  - Adaptive Classification Module (ACM): computational core that dynamically varies classification modules based on data and decisions
  - DDDAS instrumentation actors: collect and feed back statistics to help steer subsequent reconfiguration operations
Implemented System: LiD4E Framework, v0.1
Validation: Face Detection Case Study
- Face detection application
  - Mix of male and female portraits from the MIT CBCL database
  - Uses HCFDF to model the multi-model classifier subsystem
  - Uses the LiD4E framework for implementation and validation
- Support vector machines (SVMs) for classification
  - Three SVM classifiers designed for: higher accuracy, low run-time, and low false positive rate
  - Each classifier subsystem can be initiated on data-driven demand based on specific thresholds
- Operation
  - Read classifier parameters from a file and perform classification for comparison
  - Constraints and operational requirements arrive into the system as a separate input stream
Validation Process: Experiments
- System specifications: 3 GHz Intel Xeon with 3 GB RAM, running the Ubuntu 10.04 OS
- Each actor programmed in C, with C-based associated libraries for dataflow
- Classifiers trained in a MATLAB implementation; functional accuracy validated via MATLAB simulation
- Terms defined:
  - Single run: one 19x19 image for each classifier subsystem
  - Stream run: a sequence of three images for each classifier
  - Performance Loss Ratio (PLR): ratio of run-times between the classifier employed (numerator) and the fastest available classifier (denominator); PLR helps measure the trade-off between adaptive and non-adaptive implementations
Results
(Figure: adaptability trade-off measurement; run-times in seconds)
Classifier Specifications

Feature                   | Highest Accuracy | Fastest      | Lowest False Positive Rate
Accuracy                  | 99.708%          | 99.563%      | 78.317%
Time                      | 0.00193 sec      | 0.00095 sec  | 0.10854 sec
Number of false positives | 4/3436           | 9/3436       | 0/3436
Number of support vectors | 690              | 364          | 3520

- Each column corresponds to a different classifier employed
- The left-most column lists features that can be configured in support vector machines (SVMs) to obtain different classifier configurations
- Sample size is 3436 facial images from the MIT CBCL database
Results (Continued)
- Overhead of the framework falls within 0.75% to 1.1%, consistently, depending on the classifier
- Average of 26% improvement in execution time for the stream run
  - Improvement due to amortization of start-up overhead
  - Improvement increases with the length of the input stream
- Trade-offs involving scheduling strategy and classifier employed
Model for Distributed Stream Mining
Preliminary Results: Percentages of Mis-classifications

Scheme          | Network Intrusion | Electricity Pricing | Forest Cover Type
AM [17]         | 3.07              | 41.8                | 29.5
Ada [1]         | 5.25              | 41.1                | 57.5
OnAda [2]       | 2.25              | 41.9                | 39.3
Wang [4]        | 1.73              | 40.5                | 32.7
DDD [9]         | 0.72              | 39.7                | 24.6
WM [10]         | 0.29              | 22.9                | 14.1
Blum [12]       | 1.64              | 37.3                | 22.6
TrackExp [13]   | 0.52              | 23.1                | 14.8
Our scheme [16] | 0.19              | 14.3                | 4.1
Distinguishing Features of our ASMDF Design Methodology and LiD4E
- Real-time stream mining capability: streams of data versus streams of queries
- Guaranteed determinism, enabling high-level dataflow-based analysis and optimization
- Flexible assembly of different classifiers into arbitrary topologies
- Dynamic reconfiguration of classifiers
- Low-overhead, hierarchical dynamic dataflow framework
Summary
- Need for structured design methodologies and automated tools for dynamic, data-driven, adaptive stream mining (ASM) systems
- Models for classification, distribution, and scheduling need to be applied with agility and efficiency in a data-driven manner
- The lightweight dataflow based framework (LiD4E) and underlying HCFDF model provide new features for model-based ASM design and implementation
- Preliminary demonstration through a face detection application
- Ongoing research:
  - Application to object detection, recognition, and tracking (e.g., under stringent performance and energy constraints)
  - Collaboration with AFRL, e.g., to leverage data sets and computer vision subsystems
  - Integrated, data-driven ASM topology adaptation, distribution, and scheduling (platform mapping) for optimized real-time operation
References