SIMULINK based Moving Object Detection and Blob Counting Algorithm for Traffic Surveillance


Mayur Salve, Dinesh Repale, Sanket Shingate, Divya Shah (Asst. Professor)

ABSTRACT
The main objective of this paper is to propose a SIMULINK model to detect moving vehicles. Background subtraction is the technique used in this algorithm, and the retrieved information can be used for automatic traffic surveillance. Initially, a recorded video is given directly to the blocks. The main logic is then implemented using various Embedded MATLAB blocks realizing the individual algorithms. The algorithm combines three main techniques: Background Subtraction, Edge Detection and Shadow Detection. The Background Subtraction block is sub-divided into selective and non-selective parts to improve sensitivity and produce an accurate background. Edge detection helps locate the exact boundaries of moving vehicles. This is followed by the shadow detection block, which removes falsely detected pixels generated by the shadow of a vehicle. By analyzing the outputs of these blocks, the final mask is generated. The mask, along with the input frame, is processed to give the final video output with the detected objects. Furthermore, using a Blob Analysis block, parameters such as the number of blobs per frame (vehicles) and the area of the blobs can be used directly for traffic surveillance. Finally, a blob counting block counts and displays the total number of cars.

Keywords: SIMULINK, MATLAB, Moving Object Detection, Blob Counting Algorithm, Traffic Surveillance.

1. INTRODUCTION
Complex traffic surveillance systems are often used for controlling traffic and detecting unusual situations such as traffic congestion or accidents. This paper describes an approach that detects moving and recently stopped vehicles using background subtraction [1][2]. Only the SIMULINK utility of the MATLAB software is used to realize the image processing operations. The result of the algorithm is a binary mask image of blobs representing the detected objects. The background is updated slowly with selective and non-selective algorithms; the use of two updating blocks improves the sensitivity of the algorithm.

1.1. About MATLAB
MATLAB is software used for numerical computation. All the work related to image processing is first done in MATLAB. Image processing is any form of signal processing for which the input is a digital image, such as a photograph or a video frame; the output may be either an image or a set of parameters related to the image. MATLAB performs the pixel-by-pixel calculation on the input image as specified by the algorithm. As image sizes grow larger, such software becomes less practical in the video processing realm.

1.2. About SIMULINK
SIMULINK [8] is a utility of MATLAB [8] used to represent a process in the form of blocks. A block diagram is a representation of the process consisting of its inputs, the system and its outputs. Rather than writing code, as is done in MATLAB, blocks are simply placed and connected to build the complete system. In short, it provides a graphical user interface (GUI) for building block diagrams, performing simulations and analyzing results. In this paper, SIMULINK is used to perform the image processing operations and obtain the results. In addition, VHDL code for a SIMULINK model can be generated using the HDL Coder tool; this code can be used to program an FPGA, ASIC or other equivalent ICs.
1.3. Algorithm
Each pixel is modelled independently using a Gaussian distribution [3], and the pixels of a moving object are detected using the inequality

    |I_t(x, y) − μ_t(x, y)| > k · σ_t(x, y)                                  (1)

where μ_t and σ_t are the mean and standard deviation matrices of the Gaussian distribution of the image pixel intensities, and the constant k typically has a value between 2 and 3. The background image is updated as follows:

    μ_t(x, y) = α · I_{t−1}(x, y) + (1 − α) · μ_{t−1}(x, y)                  (2)

    σ_t²(x, y) = α · [I_{t−1}(x, y) − μ_{t−1}(x, y)]² + (1 − α) · σ_{t−1}²(x, y)   (3)

where I_{t−1} is the intensity of the previous image frame, μ_{t−1} is the mean computed for the previous frame, and α is the learning ratio.
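The per-pixel model above can be written compactly in MATLAB. The following is a minimal sketch of equations (1)-(3), of the kind that could sit in an Embedded MATLAB Function block; the function name and any particular choice of k and α are illustrative assumptions, not values taken from the model.

    function [mask, mu, sigma] = gaussian_bg(I, mu, sigma, k, alpha)
    % GAUSSIAN_BG  One update step of the per-pixel Gaussian background model.
    % I, mu and sigma are double matrices of equal size; k and alpha are scalars.

        % Equation (1): pixels deviating from the background mean by more than
        % k standard deviations are marked as moving-object pixels.
        mask = abs(I - mu) > k * sigma;

        % Equations (2) and (3): running update of the mean and variance with
        % learning ratio alpha, using the statistics of the previous frame.
        sigma2 = alpha * (I - mu).^2 + (1 - alpha) * sigma.^2;
        mu     = alpha * I + (1 - alpha) * mu;
        sigma  = sqrt(sigma2);
    end

Called once per frame, with mu initialized to the first frame and sigma to a small positive constant, this reproduces the slow background adaptation described above.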

2. MAIN BLOCK

Figure 1: Main Block Diagram

3. NON-SELECTIVE BLOCK
The task of detecting moving and recently stopped vehicles is performed by the non-selective block. The non-selective background update [2][4] is given by

    μ_{N,t}(x, y) =
        μ_{N,t−1}(x, y) + δ_{N1}    if I_t(x, y) > μ_{N,t−1}(x, y)
        μ_{N,t−1}(x, y) − δ_{N1}    if I_t(x, y) < μ_{N,t−1}(x, y)           (4)
        μ_{N,t−1}(x, y)             otherwise

where I_t(x, y) is the brightness of the pixel at coordinates (x, y) of the input monochrome image at time t; μ_{N,t}(x, y) is the brightness of the pixel at (x, y) of the background image, updated non-selectively; and δ_{N1} = 2⁻⁵ = 0.03125 is a small constant evaluated experimentally. The brightness of the input image is assumed to lie in the range I_t(x, y) ∈ ⟨0, 255⟩.

The corresponding update of the standard deviation used in (1) for the non-selective model [2] is

    σ_{N,t}(x, y) =
        σ_{N,t−1}(x, y) + δ_{N2}    if |I_t(x, y) − μ_{N,t−1}(x, y)| > σ_{N,t−1}(x, y)
        σ_{N,t−1}(x, y) − δ_{N2}    if |I_t(x, y) − μ_{N,t−1}(x, y)| < σ_{N,t−1}(x, y)    (5)
        σ_{N,t−1}(x, y)             otherwise

where δ_{N2} is experimentally evaluated to be 0.00390625 (i.e. 2⁻⁸). The values of μ_{N,t} and σ_{N,t} are used to calculate the pixel values of the mask m_N from equation (1). A MATLAB sketch of this update is given after Section 5.

Figure 2: Showing I_t(x, y) and m_N(x, y) respectively.

4. SELECTIVE BLOCK
This block is similar to the non-selective block, but it depends on the final mask output [4], as shown by (6) and (7):

    μ_{S,t}(x, y) =
        μ_{S,t−1}(x, y) + δ_{S1}    if I_{t−1}(x, y) > μ_{S,t−1}(x, y) and m_{VTS,t−1}(x, y) = 0
        μ_{S,t−1}(x, y) − δ_{S1}    if I_{t−1}(x, y) < μ_{S,t−1}(x, y) and m_{VTS,t−1}(x, y) = 0    (6)
        μ_{S,t−1}(x, y)             otherwise

    σ_{S,t}(x, y) =
        σ_{S,t−1}(x, y) + δ_{S2}    if |I_{t−1}(x, y) − μ_{S,t−1}(x, y)| > σ_{S,t−1}(x, y) and m_{VTS,t−1}(x, y) = 0
        σ_{S,t−1}(x, y) − δ_{S2}    if |I_{t−1}(x, y) − μ_{S,t−1}(x, y)| < σ_{S,t−1}(x, y) and m_{VTS,t−1}(x, y) = 0    (7)
        σ_{S,t−1}(x, y)             otherwise

where m_{VTS,t}(x, y) = m_{V,t}(x, y) ∨ m_{ET,t}(x, y) ∨ m_{ES,t}(x, y); μ_{S,t}(x, y) is the brightness of the pixel at (x, y) of the background image updated selectively; and m_V(x, y) is the element of the detected vehicle mask image, of value 0 or 1, where 1 denotes a detected moving object. The values of the constants were established experimentally: δ_{S1} = 0.25 and δ_{S2} = 0.03125 for I_t(x, y) ∈ ⟨0, 255⟩. From (1), the image frame mask m_S is obtained as shown in Figure 3.

Figure 3: Showing m_S(x, y)

5. BINARY MASK COMBINATION BLOCK
The outputs of the selective and non-selective blocks are combined into a single binary mask. A simple AND operation is not enough to detect all the pixels, so a special combination of AND and OR operations [5] is used to improve the detection, as shown in (8):

    m_B(x, y) =
        m_S(x, y) ∨ m_N(x, y)    if (m_S(x−1, y) ∧ m_N(x−1, y)) ∨ (m_S(x−1, y−1) ∧ m_N(x−1, y−1))
                                    ∨ (m_S(x, y−1) ∧ m_N(x, y−1)) ∨ (m_S(x+1, y−1) ∧ m_N(x+1, y−1))    (8)
        m_S(x, y) ∧ m_N(x, y)    otherwise

Figure 4: Showing m_B(x, y)
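As a companion to equations (4) and (5) of Section 3, the sketch below shows one way the non-selective update could be expressed in MATLAB. It assumes I, muN and sigmaN are double matrices; the function and variable names are illustrative rather than taken from the SIMULINK model.

    function [muN, sigmaN] = nonselective_update(I, muN, sigmaN)
    % NONSELECTIVE_UPDATE  One step of the non-selective background update.
        deltaN1 = 2^-5;   % mean step, evaluated experimentally (Section 3)
        deltaN2 = 2^-8;   % standard-deviation step

        % Deviation from the previous mean, as required by equation (5).
        dev = abs(I - muN);

        % Equation (4): move the background mean one small step towards the
        % brightness of the current pixel.
        up   = I > muN;
        down = I < muN;
        muN(up)   = muN(up)   + deltaN1;
        muN(down) = muN(down) - deltaN1;

        % Equation (5): move the standard deviation one small step towards
        % the current absolute deviation from the mean.
        inc = dev > sigmaN;
        dec = dev < sigmaN;
        sigmaN(inc) = sigmaN(inc) + deltaN2;
        sigmaN(dec) = sigmaN(dec) - deltaN2;
    end

The mask m_N then follows from equation (1) applied with muN and sigmaN; the selective update of Section 4 has the same structure, gated additionally by m_VTS.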

6. TEMPORAL AND SPATIAL EDGE DETECTION
In a dark scene, true positive (TP) pixels [2] may not be detected, and a major portion of a car may be missed. To improve the segmentation quality, the number of TP pixels is increased using two types of edge detection: temporal and spatial. In the temporal edge detection block, edges are detected from the difference between the current and previous frames:

    ΔI_T = I_t − I_{t−1}                                                     (9)

In the spatial edge detection block, the difference between the current image and the background is used:

    ΔI_S = I_t − μ_{N,t}                                                     (10)

The temporal and spatial edge image masks m_ET and m_ES are then given by

    m_ET(x, y) = 1   if |ΔI_T(x, y) − ΔI_T(x−1, y)| > θ_ET or |ΔI_T(x, y) − ΔI_T(x, y−1)| > θ_ET    (11)

    m_ES(x, y) = 1   if |ΔI_S(x, y) − ΔI_S(x−1, y)| > θ_ES or |ΔI_S(x, y) − ΔI_S(x, y−1)| > θ_ES    (12)

where θ_ET and θ_ES are constant threshold values.

Figure 5: Showing m_ET and m_ES respectively.

7. SHADOW DETECTION
Shadows may be detected and misinterpreted by the system as objects themselves. To remove such discrepancies, a shadow detection technique is used. By comparing the decrease in brightness [5][6][7], shadows are detected by

    m_SH(x, y) = 1   if α ≤ I_t(x, y) / μ_{N,t}(x, y) ≤ β                    (13)

where α and β are constant coefficients, α = 0.55 and β = 0.95, both evaluated experimentally.

Figure 6: Showing m_SH.

8. FINAL PROCESSING
The masks obtained by the previous algorithms are combined into a single mask M_Temp1:

    m_HS = dil(ero(¬(m_ET ∨ m_ES) ∧ m_SH))                                   (14)

    M_Temp1 = ero(dil((m_B ∧ ¬m_HS) ∨ m_ET ∨ m_ES))                          (15)

where dil() and ero() denote 2×2 morphological dilation and erosion operations [2], respectively. The mask thus obtained usually contains false negative and false positive pixels, so the inbuilt Opening block is added to improve the shape of the blobs and obtain the final mask m_V (a MATLAB sketch of this combination is given after Section 9).

Figure 7: Showing m_V.

9. BLOB PROCESSING
This block processes the information retrieved by the algorithm proposed above, which can be used to calculate elements such as the area, eccentricity, centroid, orientation and number of detected blobs. It is also used to draw a rectangle around each detected blob, shown in the final image. These elements are of prime importance for carrying out traffic surveillance.
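To make the combination in Sections 7 and 8 concrete, the following is a minimal MATLAB sketch of the shadow test (13) and the final mask assembly (14)-(15), following the reconstruction of those equations given above. Here mB, mET and mES are logical masks and I and muN are double matrices; the function name, the division guard and the use of Image Processing Toolbox routines are illustrative assumptions, not the blocks of the SIMULINK model.

    function mV = final_mask(mB, mET, mES, I, muN, alpha, beta)
    % FINAL_MASK  Shadow suppression and final mask assembly (Sections 7-8).
        se = strel('square', 2);              % 2x2 structuring element

        % Equation (13): pixels darker than the background by a bounded ratio
        % (alpha = 0.55, beta = 0.95 in Section 7) are marked as shadow.
        ratio = I ./ max(muN, eps);           % guard against division by zero
        mSH   = (ratio >= alpha) & (ratio <= beta);

        % Equation (14): keep shadow pixels that do not lie on an edge and
        % clean the result with an erosion followed by a dilation.
        mHS = imdilate(imerode(~(mET | mES) & mSH, se), se);

        % Equation (15): combine the background-subtraction mask with the edge
        % masks, suppress shadow pixels, then dilate and erode.
        mTemp1 = imerode(imdilate((mB & ~mHS) | mET | mES, se), se);

        % Inbuilt Opening block: improve blob shape and drop isolated pixels.
        mV = imopen(mTemp1, se);
    end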

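The per-frame measurements of Section 9 can also be obtained with standard toolbox calls. The sketch below is not the Blob Analysis block itself but an illustrative equivalent; the function name and the minimum-area threshold are assumptions.

    function [numBlobs, stats, frameOut] = blob_process(mV, frame, minArea)
    % BLOB_PROCESS  Per-frame blob measurements and bounding-box overlay.

        % Label connected foreground regions of the final mask and measure
        % the properties used for surveillance (Section 9).
        cc    = bwconncomp(mV);
        stats = regionprops(cc, 'Area', 'Centroid', 'Orientation', ...
                                'Eccentricity', 'BoundingBox');

        % Discard small blobs below the area threshold (noise).
        stats    = stats([stats.Area] >= minArea);
        numBlobs = numel(stats);

        % Draw a rectangle around each detected blob on the input frame.
        frameOut = frame;
        for n = 1:numBlobs
            bb = stats(n).BoundingBox;        % [x y width height]
            frameOut = insertShape(frameOut, 'Rectangle', bb, 'Color', 'yellow');
        end
    end

Counting the total number of cars over the whole video, as in Section 10, additionally needs state that persists between frames, for example a persistent counter in an Embedded MATLAB function block incremented when a blob crosses a counting line; the exact counting rule is not detailed in Section 10, so any such criterion is only an assumption.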
Figure 8: Inside the Blob Processing Block.

Figure 9: Image showing the detected objects.

10. BLOB COUNTING BLOCK
The Blob Processing block finds the number of vehicles visible per frame, but this number alone is of limited use for traffic surveillance. An additional Embedded MATLAB function block is therefore added to count the total number of cars that have passed. The result is displayed using the inbuilt Display block, as shown in Figure 1.

11. CONCLUSION
The system realizes real-time motion segmentation and morphological operations. An important advantage of using SIMULINK is that the complete design of the system can be generated, visualized and analyzed as required. Also, VHDL code for the SIMULINK model can be generated using the HDL Coder tool provided with SIMULINK; this code can be used to program an FPGA, ASIC or other equivalent ICs. As future scope, the output of the proposed algorithm can be used for automatic speed detection of vehicles and for estimating the direction of motion. A number plate detection algorithm can be combined with the discussed algorithm to generate a log of the necessary information, such as the number plate, time of passing and speed of the vehicle.

12. ACKNOWLEDGEMENTS
We are thankful to Dr. M. D. Patil, HOD (Electronics), for his support and encouragement. We also want to thank the college staff and our family members for their constant support.

13. REFERENCES
[1] Sen-Ching S. Cheung and Chandrika Kamath, "Robust Background Subtraction with Foreground Validation for Urban Traffic Video", in Proc. SPIE Video Communication and Image Processing, 2004, pp. 881-892.
[2] Marek Wójcikowski, Robert Zaglewski, "FPGA-Based Real-Time Implementation of Detection Algorithm for Automatic Traffic Surveillance Sensor Network".
[3] A. François, G. Medioni, "Adaptive Color Background Modeling for Real-Time Segmentation of Video Streams", in Proc. Int. Conf. Imaging Science, Systems and Technology, Las Vegas, NV, 1999, pp. 227-232.
[4] F. Porikli, Y. Ivanov, T. Haga, "Robust Abandoned Object Detection Using Dual Foregrounds", EURASIP J. Adv. Signal Process., Jan. 2008.
[5] D. Duque, H. Santos, P. Cortez, "Moving Object Detection Unaffected by Cast Shadows, Highlights and Ghosts", in Proc. IEEE Int. Conf. Image Processing, 2005, pp. 413-416.
[6] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, S. Sirotti, "Improving Shadow Suppression in Moving Object Detection with HSV Color Information", in Proc. IEEE Intell. Transp. Syst. Conf., Oakland, CA, 2001, pp. 334-339.
[7] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, "Detecting Moving Objects, Ghosts and Shadows in Video Streams", IEEE Trans. Pattern Anal. Machine Intell., vol. 25, no. 10, pp. 1337-1342, Oct. 2003.
[8] http://www.mathworks.in