Real Time Tracking System using 3D Vision


Arunava Nag, Sanket Deshmukh

December 04, 2015

Abstract

In this report a skeleton-tracking approach using the Xbox 360 Kinect camera is described as a solution for real-time tracking of tools and their operation in a manufacturing system. The report explains how the work done by an industrial worker can be mistake-proofed by tracking the worker's skeleton joints in real time. It also proposes a valid approach in which the change in the body's positional co-ordinates is used as verification of the work performed by the worker.

1 Introduction

Nowadays in the manufacturing and automation industries, various industrial vision solutions are widely used for purposes such as laser cutting, robot automation, quality analysis, and tracking of industrial operations. However, industrial vision solutions are very expensive, and most of them are grayscale and 2D in nature. Hence an economical and easy-to-use vision solution is needed to replace these expensive yet inferior solutions for industrial operations. The manufacturing facility at the John Deere Turf Division wants to mistake-proof the operation of ensuring turf-vehicle safety, which includes setting the respective industrial-standard torques for the bolts on the turf vehicle. Two kinds of tools are presently used for this purpose: a sensor-embedded wrench, and a pneumatic-triggered tool. These tools, though very expensive, can only provide torque and bolt-specific data, and lack any data regarding the location at which the tool is being used. Hence, an inexpensive vision solution is required to track the tools and mistake-proof the torque setting of all the bolts on a turf vehicle.
We proposed the Kinect Xbox 360 as our vision solution to determine and track the positional co-ordinates of the worker's skeleton in real time, which would assist the manufacturing execution system in mistake-proofing the performed tasks.

2 Kinect Based Vision Solution

2.1 Hardware Details: Xbox 360 Kinect

The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot, designed to be positioned lengthwise above or below the video display. The device features an RGB camera, a depth sensor and a multi-array microphone, as seen in figure 1; the specifications can be found in table 1 [2], [4].

RGB Camera: Stores three-channel data at 1280x960 resolution. This makes capturing a color image possible.

Infrared (IR) emitter & IR depth sensor: The emitter emits infrared light beams and the depth sensor reads the IR beams reflected back to the sensor. The reflected beams are converted into depth information measuring the distance between an object and the sensor. This makes capturing a depth image possible.

Multi-Array Microphone: Contains four microphones for capturing sound. Because there are four microphones, it is possible to record audio as well as find the location of the sound source and the direction of the audio wave.

Accelerometer: A 3-axis accelerometer configured for a 2G range, where G is the acceleration due to gravity. The accelerometer can be used to determine the current orientation of the Kinect.

Figure 1: Schematic for Kinect V1 (Source: Microsoft Library).
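As a hedged illustration of the accelerometer point above (not code from the project), a tilt estimate can be derived from the gravity vector a 3-axis accelerometer reports at rest; the axis convention and the function name are assumptions:

```cpp
#include <cmath>

// Hypothetical sketch: estimate pitch and roll (radians) from the gravity
// vector a 3-axis accelerometer reports at rest. Assumes y points up, so a
// level sensor reads (0, -1, 0) in units of g; names are illustrative only.
void orientationFromAccel(float ax, float ay, float az,
                          float& pitch, float& roll) {
    pitch = std::atan2(az, -ay);  // tilt forward/backward about the x axis
    roll  = std::atan2(ax, -ay);  // tilt sideways about the z axis
}
```

For a level sensor reading (0, -1, 0) both angles come out zero; tilting the bar 45 degrees forward moves part of gravity into z and yields a pitch of about pi/4.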

Table 1: Specifications for Kinect

Kinect Features                      | Specifications
Viewing angle                        | 43° vertical by 57° horizontal field of view
Vertical tilt range                  | ±27°
Frame rate (depth and color stream)  | 30 frames per second (FPS)
Audio format                         | 16-kHz, 24-bit mono pulse code modulation (PCM)
Audio input characteristics          | A four-microphone array with 24-bit analog-to-digital converter (ADC) and Kinect-resident signal processing including acoustic echo cancellation and noise suppression
Accelerometer characteristics        | A 2G/4G/8G accelerometer configured for the 2G range, with a 1° accuracy upper limit

2.2 Software Details: Kinect SDK Version 1.8

We used the software development kit (SDK) released by Microsoft for Windows to develop the application [1].

Raw sensor streams: Access to raw data streams from the depth sensor, color camera sensor, and four-element microphone array enables developers to build upon the low-level streams generated by the Kinect sensor.

Skeletal tracking: The capability to track the skeleton image of one or two people moving within the Kinect field of view makes it easy to create gesture-driven applications.
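As a hedged sketch of what the depth stream in Table 1 makes possible (not project code), the 57°x43° field of view is enough to back-project a depth pixel into a 3D point under a pinhole-camera assumption. The function name and the 640x480 depth resolution are assumptions; the SDK itself provides NuiTransformDepthImageToSkeleton for this conversion.

```cpp
#include <cmath>

struct Point3 { float x, y, z; };

// Hypothetical pinhole back-projection: convert a depth-map pixel (u, v)
// with depth in metres to a 3D point, using the 57°x43° FOV from Table 1.
Point3 depthToPoint(int u, int v, float depthM,
                    int width = 640, int height = 480) {
    const float PI = 3.14159265f;
    // Focal lengths in pixels, derived from the horizontal/vertical FOV.
    float fx = (width  / 2.0f) / std::tan(57.0f * PI / 180.0f / 2.0f);
    float fy = (height / 2.0f) / std::tan(43.0f * PI / 180.0f / 2.0f);
    Point3 p;
    p.z = depthM;                               // distance along the optical axis
    p.x = (u - width / 2.0f)  * depthM / fx;    // right of centre is +x
    p.y = (height / 2.0f - v) * depthM / fy;    // image rows grow downwards
    return p;
}
```

The centre pixel maps to a point straight ahead on the optical axis; pixels towards the image edges fan out in proportion to their depth.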

Advanced audio capabilities: Audio processing capabilities include sophisticated acoustic noise suppression and echo cancellation, beam formation to identify the current sound source, and integration with the Windows speech recognition API.

Sample code and documentation: The SDK includes more than 100 pages of technical documentation. In addition to built-in help files, the documentation includes detailed walkthroughs for most samples provided with the SDK.

Easy installation: The SDK installs quickly, requires no complex configuration, and the complete installer size is less than 100 MB. Developers can get up and running in just a few minutes with a standard standalone Kinect sensor unit (widely available at retail outlets).

Design: Designed for non-commercial purposes; a commercial version is expected later.

Windows 7 platform support: Supports C++, C#, or Visual Basic in Microsoft Visual Studio.

2.3 Skeleton Tracking Analysis

In this section we discuss the challenges we faced and how our approach can be a potential solution to them. We performed tracking of the skeleton in real time using the Kinect sensors. We chose to track the worker because of the following challenges: the platform on which the tracking needs to be done is small, and the worker periodically blocks the field of view of the camera. Also, the bolts are too small to be seen and tracked from the given distance of 8-10 feet. Tracking the position of the tool, or its shift in co-ordinates from one bolt to another, is not possible without several cameras and sufficient lighting of the environment, increasing the complexity of the operation region. Hence, we chose instead to track the worker's skeleton joint co-ordinates in real time, which can be viewed from the given distance at any orientation. At most two cameras would be needed to implement this solution, and the Kinect is itself a cheap camera compared to any industrial vision system.
As we are now only concerned with the worker and his skeleton and hand-joint movements, this solves the problem of the small bolts and of the worker blocking the field of view (FOV) as well. The Kinect, being an RGB camera augmented with a depth sensor and an IR sensor, is capable of robustly providing 3D data of the environment within the FOV under normal lighting conditions. The algorithm for the developed application flows as shown in figure 2: first the Kinect is initialized and the depth data is fetched from the sensor, followed by the RGB data. Then the skeleton data is extracted, where we

took one frame per second using the Kinect camera, tracked all the skeletons visible within the FOV, and returned only the closest skeleton. We then extracted all the joint data and saved it, to be returned to the caller. The returned values are then used to draw the output using graphics libraries such as OpenGL and GLEW.

Figure 2: Skeletal Tracking Flow Diagram

The application was developed using the Microsoft Visual Studio 12 IDE. The code snippet below performs the skeletal data extraction, written in C++.

    void getSkeletalData() {
        NUI_SKELETON_FRAME skeletonFrame = {0};
        if (sensor->NuiSkeletonGetNextFrame(0, &skeletonFrame) >= 0) {
            sensor->NuiTransformSmooth(&skeletonFrame, NULL);
            // Loop over all sensed skeletons
            for (int z = 0; z < NUI_SKELETON_COUNT; ++z) {
                const NUI_SKELETON_DATA& skeleton = skeletonFrame.SkeletonData[z];
                // Check the state of the skeleton
                if (skeleton.eTrackingState == NUI_SKELETON_TRACKED) {
                    // Copy the joint positions into our array
                    for (int i = 0; i < NUI_SKELETON_POSITION_COUNT; ++i) {
                        skeletonPosition[i] = skeleton.SkeletonPositions[i];
                        if (skeleton.eSkeletonPositionTrackingState[i] == NUI_SKELETON_POSITION_NOT_TRACKED) {
                            skeletonPosition[i].w = 0;
                        }
                    }
                    return; // Only take the data for one skeleton
                }
            }
        }
    }

2.4 Results

The tracked skeleton was obtained and viewed using the OpenGL API and its libraries, along with other graphics-processing libraries. The application was written in C++ to keep it simple and open to any OS platform. It made use of the Kinect SDK and the NuiApi, the Natural User Interface API from Microsoft Kinect. In figure 3 we can see a snapshot from the developed application's output window, which shows the 3D point cloud of the FOV and tracks the human skeleton and joints, marking the hands with red lines, as the application is developed only to track the hands.
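The "return only the closest skeleton" step described above can be sketched in isolation as follows; the struct is a simplified stand-in for the SDK's NUI_SKELETON_DATA, and its field names are assumptions:

```cpp
#include <vector>
#include <cfloat>

// Simplified stand-in for NUI_SKELETON_DATA: tracking state plus the depth
// (z, in metres) of the skeleton's centre joint.
struct TrackedSkeleton { bool tracked; float centerZ; };

// Return the index of the nearest tracked skeleton, or -1 if none is tracked.
int closestSkeleton(const std::vector<TrackedSkeleton>& skeletons) {
    int best = -1;
    float bestZ = FLT_MAX;
    for (int i = 0; i < (int)skeletons.size(); ++i) {
        if (skeletons[i].tracked && skeletons[i].centerZ < bestZ) {
            bestZ = skeletons[i].centerZ;
            best = i;
        }
    }
    return best;
}
```

Untracked skeletons are skipped entirely, so a nearby but untracked body cannot displace the worker being followed.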

Figure 3: Skeleton Tracking Output Window (red marks the hand joints)

The body orientation was taken into account and the skeleton could be tracked at any orientation; hence the positioning of the camera is no longer an issue. Also, the surrounding environment's depth information has been included, which provides the flexibility to track any object or tool in the future. The project is open source and is freely available (link in section 4) for further development as per needs. Some required extensions to this application are explained in section 2.4.1.

The following code section shows how OpenGL has been used to draw/visualize the Kinect data in the output window.

    void drawKinectData() {
        getKinectData();
        //rotateCamera();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Draw the point cloud from the vertex and color buffers
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glVertexPointer(3, GL_FLOAT, 0, NULL);
        glBindBuffer(GL_ARRAY_BUFFER, cboId);
        glColorPointer(3, GL_FLOAT, 0, NULL);
        glPointSize(1.f);
        glDrawArrays(GL_POINTS, 0, width*height);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);

        // Draw some arms
        const Vector4& lh = skeletonPosition[NUI_SKELETON_POSITION_HAND_LEFT];
        const Vector4& le = skeletonPosition[NUI_SKELETON_POSITION_ELBOW_LEFT];
        const Vector4& ls = skeletonPosition[NUI_SKELETON_POSITION_SHOULDER_LEFT];
        const Vector4& rh = skeletonPosition[NUI_SKELETON_POSITION_HAND_RIGHT];
        const Vector4& re = skeletonPosition[NUI_SKELETON_POSITION_ELBOW_RIGHT];
        const Vector4& rs = skeletonPosition[NUI_SKELETON_POSITION_SHOULDER_RIGHT];
        glBegin(GL_LINES);
        glColor3f(1.f, 0.f, 0.f);
        if (lh.w > 0 && le.w > 0 && ls.w > 0) {
            glVertex3f(lh.x, lh.y, lh.z);
            glVertex3f(le.x, le.y, le.z);
            glVertex3f(le.x, le.y, le.z);
            glVertex3f(ls.x, ls.y, ls.z);
        }
        if (rh.w > 0 && re.w > 0 && rs.w > 0) {
            glVertex3f(rh.x, rh.y, rh.z);
            glVertex3f(re.x, re.y, re.z);
            glVertex3f(re.x, re.y, re.z);
            glVertex3f(rs.x, rs.y, rs.z);
        }
        glEnd();
    }

2.4.1 Required Extensions to the Application

Due to time constraints, limited hardware availability, and little access to a real industrial environment, the software could not be fully developed to track and mistake-proof the bolt-torque fixing operation; however, the missing pieces can be implemented easily and are explained in this section. The present application can perform the skeleton tracking, but a required extension is to track the hand, register the change/shift in the hand-joint co-ordinates, and, given a threshold on the shift, perform the mistake-proofing. A certain range of movement from one spot to another can be tracked and used for verification purposes, taking into account the practical distance between the bolts on the turf vehicle. This part of the application needs a co-ordinate transformation, where the Kinect sensor co-ordinates need to be transformed into image co-ordinates (as is done for drawing the skeleton in the output window). The algorithm can be developed as shown in the flow diagram in figure 4. The positional data can be found using the Kinect API methods as shown in the code snippet below. This snippet is an idea of how it can be done; it has not been implemented or tested.

    void performTracking() {
        getSkeletalData();
        NUI_SKELETON_FRAME skeletonFrame = {0};

        // Left-hand position in the current frame
        float hand_left_x = skeletonFrame.SkeletonData[1].SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT].x;
        float hand_left_y = skeletonFrame.SkeletonData[1].SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT].y;
        float hand_left_z = skeletonFrame.SkeletonData[1].SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT].z;
        float resultant_hand_pos = sqrt(pow(hand_left_x, 2) + pow(hand_left_y, 2) + pow(hand_left_z, 2));

        NuiSkeletonGetNextFrame(0, &skeletonFrame);

        // Left-hand position in the next frame
        float hand_left_new_x = skeletonFrame.SkeletonData[1].SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT].x;
        float hand_left_new_y = skeletonFrame.SkeletonData[1].SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT].y;
        float hand_left_new_z = skeletonFrame.SkeletonData[1].SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT].z;
        float resultant_hand_newpos = sqrt(pow(hand_left_new_x, 2) + pow(hand_left_new_y, 2) + pow(hand_left_new_z, 2));

        // Difference between the resultant positions
        float diffValue = resultant_hand_pos - resultant_hand_newpos;
        if (diffValue > 6.0) {
            std::cout << "Mistake Proofed" << std::endl;
        } else {
            // condition
        }
    }
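One caveat with the snippet above: it compares the magnitudes of two position vectors, so a sideways move that keeps the hand at the same distance from the sensor origin could go undetected. A hedged alternative sketch compares the Euclidean displacement of the hand joint between frames against a threshold; the struct, the names, and the 0.15 m threshold are assumptions, not values from the project:

```cpp
#include <cmath>

struct JointPos { float x, y, z; };

// Hypothetical check: has the hand joint moved farther than thresholdM
// (metres) between two sampled frames?
bool handMoved(const JointPos& prev, const JointPos& curr,
               float thresholdM = 0.15f) {
    float dx = curr.x - prev.x;
    float dy = curr.y - prev.y;
    float dz = curr.z - prev.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) > thresholdM;
}
```

A displacement larger than the threshold between two sampled frames would count as a verified move of the tool from one bolt to the next.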

Figure 4: Hand Tracking Approach

The refreshing of the skeleton frame can be done as shown in the snippet below, which stores the joint information of each frame and compares it with the joint information of the next frame.

    void skeletonFrameComparison() {
        NUI_SKELETON_FRAME skeletonFrame = {0};
        if (skeleton.eTrackingState == NUI_SKELETON_TRACKED) {
            // Copy the joint positions of the current frame into our array
            for (int i = 0; i < NUI_SKELETON_POSITION_COUNT; ++i) {
                skeletonPosition[i] = skeleton.SkeletonPositions[i];
                // In case a joint is not tracked, mark it and continue tracking
                if (skeleton.eSkeletonPositionTrackingState[i] == NUI_SKELETON_POSITION_NOT_TRACKED) {
                    skeletonPosition[i].w = 0;
                }
            }
            // Fetch the next frame and compare each joint with its previous position
            NuiSkeletonGetNextFrame(0, &skeletonFrame);
            for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j) {
                difference[j] = skeletonPosition[j] - skeleton.SkeletonPositions[j];
                if (difference[j] >= 6) {
                    // condition
                } else {
                    // condition
                }
            }
        }
    }

The above snippet has not been tested and requires implementation. The joint information needs to be taken into account and then fed to the OpenGL class to work with the co-ordinate shifts.

3 Future Work

It will be interesting to implement all the required extensions mentioned in section 2.4.1 and have working software with those extended methods. The software also needs to be tested in real factory environments, to find out whether any other challenges should be taken into account in mistake-proofing it. Also, the Industrial Robot Operating System (ROS-Industrial) can be used to develop the software for the Kinect on the Linux platform, which provides much more flexibility and many packages for development, and gives the software open avenues for cross-platform porting.

4 Conclusion

The project was a good learning experience for us. We thank our advisor/professor and also John Deere for providing us the opportunity to work on this project. We have performed the skeleton tracking and provided tracking results along with environment information. How the application can be fully developed has been explained as well. A video link and the code to the application have been provided in the Appendix.

References

[1] Microsoft. Kinect for Windows SDK v1.8.
[2] Microsoft. Kinect for Windows sensor components and specifications: microsoft.com/en-us/library/jj aspx.
[3] Microsoft. Skeleton tracking.
[4] Wikipedia. Kinect.
[5] Edward Zhang. Kinect skeletal tracking: edwardz/tutorials/kinect/kinect4. html#.

Appendix

1. Github Project:
2. Application Video:


More information

VISION IMPACT+ OCR HIGHLIGHTS APPLICATIONS. Lot and batch number reading. Dedicated OCR user interface. Expiration date verification

VISION IMPACT+ OCR HIGHLIGHTS APPLICATIONS. Lot and batch number reading. Dedicated OCR user interface. Expiration date verification IMPACT+ OCR IMPACT+ OCR is the new Datalogic innovative solution for robust and effective Optical Character Recognition (e.g. expiration date, lot number) for the Food & Beverage industry. The new Datalogic

More information

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview

Human Body Recognition and Tracking: How the Kinect Works. Kinect RGB-D Camera. What the Kinect Does. How Kinect Works: Overview Human Body Recognition and Tracking: How the Kinect Works Kinect RGB-D Camera Microsoft Kinect (Nov. 2010) Color video camera + laser-projected IR dot pattern + IR camera $120 (April 2012) Kinect 1.5 due

More information

Getting Started with Microsoft Kinect for FRC

Getting Started with Microsoft Kinect for FRC v2.3 January 3 rd, 2012 Page 1 of 14 Getting Started with Microsoft Kinect for FRC Before proceeding, make sure you do not have any existing Kinect drivers on your computer. If you have previously installed

More information

Hitachi-LG Data Storage, Inc.

Hitachi-LG Data Storage, Inc. Product Leaflet 3D LiDAR [TOF] Hitachi-LG Data Storage, Inc. PRODUCTS 3D LiDAR (TOF) Sensor from Hitachi-LG Data Storage, Inc. Next Generation Technology 3D LiDAR (TOF) Motion Sensor Series Object distance

More information

Face RFID Fingerprint Recognition Reader Sensor Fanless Touch 24/7 VESA

Face RFID Fingerprint Recognition Reader Sensor Fanless Touch 24/7 VESA 2017 by (Germany). All information subject to change without notice. Pictures for illustration purposes only. Shuttle Biometric System BR06 Shuttle has proven its experience in the field of biometric systems

More information

Development of 3D Image Manipulation Software Utilizing the Microsoft Kinect

Development of 3D Image Manipulation Software Utilizing the Microsoft Kinect Development of 3D Image Manipulation Software Utilizing the Microsoft Kinect A report submitted to the School of Engineering and Energy, Murdoch University in partial fulfilment of the requirements for

More information

Android Spybot. ECE Capstone Project

Android Spybot. ECE Capstone Project Android Spybot ECE Capstone Project Erik Bruckner - bajisci@eden.rutgers.edu Jason Kelch - jkelch@eden.rutgers.edu Sam Chang - schang2@eden.rutgers.edu 5/6/2014 1 Table of Contents Introduction...3 Objective...3

More information

A Novel Approach for Constructing Emulator for Microsoft Kinect XBOX 360 Sensor in the.net Platform

A Novel Approach for Constructing Emulator for Microsoft Kinect XBOX 360 Sensor in the.net Platform 2013 4th International Conference on Intelligent Systems, Modelling and Simulation A Novel Approach for Constructing Emulator for Microsoft Kinect XBOX 360 Sensor in the.net Platform Mohammad Raihanul

More information

Gesture Recognition Using 3D MEMS Accelerometer

Gesture Recognition Using 3D MEMS Accelerometer Gesture Recognition Using 3D MEMS Accelerometer Akhila Denny 1, Annmary Cherian 2, Athira V Nair 3, Anitta Thomas 4 Graduate Students, Department of Electronics and Communication Engineering, Viswajyothi

More information

Ubiquitous and Context Aware Computing: Overview and Systems

Ubiquitous and Context Aware Computing: Overview and Systems Ubiquitous and Context Aware Computing: Overview and Systems Simon Bichler 1 / 30 Outline Definition and Motivation for Ubiquitous computing Context aware computing Sample Systems Discussion 2 / 30 Ubiquitous

More information

With voice navigation, fully compatible with Amazon AVS/Alexa voice services

With voice navigation, fully compatible with Amazon AVS/Alexa voice services GiuliaEuroA53+ With voice navigation, fully compatible with Amazon AVS/Alexa voice services Zykronix USA: 188 Inverness Drive West, Suite 250, Englewood, CO 80112 USA I P: 303.799.4944I www.zykronix.com

More information

Manta G Megapixel GigE Vision compliant camera. Benefits and features: Options:

Manta G Megapixel GigE Vision compliant camera. Benefits and features: Options: Manta G-201 14.7 fps at full resolution Power over Ethernet option Angled-head and board level variants Video-iris lens control 2 Megapixel GigE Vision compliant camera Manta G-201 is a low cost machine

More information

Mouse Simulation Using Two Coloured Tapes

Mouse Simulation Using Two Coloured Tapes Mouse Simulation Using Two Coloured Tapes Kamran Niyazi 1, Vikram Kumar 2, Swapnil Mahe 3 and Swapnil Vyawahare 4 Department of Computer Engineering, AISSMS COE, University of Pune, India kamran.niyazi@gmail.com

More information

begins halting unexpectedly, doing one or more of the following may improve performance;

begins halting unexpectedly, doing one or more of the following may improve performance; CLEARPATH ROBOTICS F r o m T h e D e s k o f T h e R o b o t s m i t h s Thank you for your Husky A200 order! As part of the integration, we have prepared this quick reference sheet for you and your team

More information

Skeleton based Human Action Recognition using Kinect

Skeleton based Human Action Recognition using Kinect Skeleton based Human Action Recognition using Kinect Ayushi Gahlot Purvi Agarwal Akshya Agarwal Vijai Singh IMS Engineering college, Amit Kumar Gautam ABSTRACT This paper covers the aspects of action recognition

More information

Manta G Megapixel GigE Vision camera with Sony ICX655 CCD sensor. Benefits and features: Options:

Manta G Megapixel GigE Vision camera with Sony ICX655 CCD sensor. Benefits and features: Options: Manta G-504 9.2 fps at full resolution Power over Ethernet option Angled-head and board level variants Video-iris lens control 5 Megapixel GigE Vision camera with Sony ICX655 CCD sensor Manta G-504 is

More information

PART IV: RS & the Kinect

PART IV: RS & the Kinect Computer Vision on Rolling Shutter Cameras PART IV: RS & the Kinect Per-Erik Forssén, Erik Ringaby, Johan Hedborg Computer Vision Laboratory Dept. of Electrical Engineering Linköping University Tutorial

More information

Prosilica GT. 1.2 Megapixel machine vision camera for extreme environments. Benefits and features:

Prosilica GT. 1.2 Megapixel machine vision camera for extreme environments. Benefits and features: Prosilica GT 1290 Versatile temperature range for extreme environments IEEE 1588 PTP Power over Ethernet P-Iris and DC-Iris lens control 1.2 Megapixel machine vision camera for extreme environments Prosilica

More information

The NAO Robot, a case of study Robotics Franchi Alessio Mauro

The NAO Robot, a case of study Robotics Franchi Alessio Mauro The NAO Robot, a case of study Robotics 2013-2014 Franchi Alessio Mauro alessiomauro.franchi@polimi.it Who am I? Franchi Alessio Mauro Master Degree in Computer Science Engineer at Politecnico of Milan

More information

VGA machine vision camera with GigE Vision interface

VGA machine vision camera with GigE Vision interface Manta G-032 Versatile VGA camera Power over Ethernet option Angled-head and board level variants Video-iris lens control VGA machine vision camera with GigE Vision interface Manta G-032 is a value packed

More information

Acoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios

Acoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios Sensor Fusion for Car Tracking Acoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios, Daniel Goehring 1 Motivation Direction to Object-Detection: What is possible with costefficient microphone

More information

GEOMETRY ALGORITHM ON SKELETON IMAGE BASED SEMAPHORE GESTURE RECOGNITION

GEOMETRY ALGORITHM ON SKELETON IMAGE BASED SEMAPHORE GESTURE RECOGNITION GEOMETRY ALGORITHM ON SKELETON IMAGE BASED SEMAPHORE GESTURE RECOGNITION 1 AERI RACHMAD, 2 MUHAMMAD FUAD 1,2 Faculty of Engineering, University of Trunojoyo Madura, Indonesia E-mail: 1 aery_r@yahoo.com

More information

PRODUCT INFORMATION. Visionary-T 3D SNAPSHOT WIDE RANGE OF APPLICATIONS FOR INDOOR USE. 3D vision

PRODUCT INFORMATION. Visionary-T 3D SNAPSHOT WIDE RANGE OF APPLICATIONS FOR INDOOR USE. 3D vision PRODUCT INFORMATION Visionary-T 3D SNAPSHOT WIDE RANGE OF APPLICATIONS FOR INDOOR USE 3D vision FULL FLEXIBILITY FOR INDOOR USE The streaming cameras in the Visionary-T 3D vision product family deliver

More information

MultiAR Project Michael Pekel, Ofir Elmakias [GIP] [234329]

MultiAR Project Michael Pekel, Ofir Elmakias [GIP] [234329] MultiAR Project Michael Pekel, Ofir Elmakias [GIP] [234329] Supervisors Dr. Matan Sela Mr. Yaron Honen Assistants Alexander Porotskiy Summary MultiAR is a multiplayer quest (Outdoor Real Time Multiplayer

More information

3D Laser Range Finder Topological sensor gathering spatial data from the surrounding environment

3D Laser Range Finder Topological sensor gathering spatial data from the surrounding environment Initial Project and Group Identification Document September 19, 2013 3D Laser Range Finder Topological sensor gathering spatial data from the surrounding environment Christian Conrose Jonathan Ulrich Andrew

More information

An Efficient Magic Mirror Using Kinect

An Efficient Magic Mirror Using Kinect An Efficient Magic Mirror Using Kinect Md. Moniruzzaman Monir monir0805@gmail.com Nahyan Ebn Hashem nahyan.nick@gmail.com Md. Nafis Hasan Siddique ns.shamit@gmail.com Afsana Pervin Tanni meu2ari@gmail.com

More information

Announcement. Homework 1 has been posted in dropbox and course website. Due: 1:15 pm, Monday, September 12

Announcement. Homework 1 has been posted in dropbox and course website. Due: 1:15 pm, Monday, September 12 Announcement Homework 1 has been posted in dropbox and course website Due: 1:15 pm, Monday, September 12 Today s Agenda Primitives Programming with OpenGL OpenGL Primitives Polylines GL_POINTS GL_LINES

More information

Final Project Report: Mobile Pick and Place

Final Project Report: Mobile Pick and Place Final Project Report: Mobile Pick and Place Xiaoyang Liu (xiaoyan1) Juncheng Zhang (junchen1) Karthik Ramachandran (kramacha) Sumit Saxena (sumits1) Yihao Qian (yihaoq) Adviser: Dr Matthew Travers Carnegie

More information

GigE Vision camera featuring the Sony IMX265 CMOS sensor

GigE Vision camera featuring the Sony IMX265 CMOS sensor Manta G-319 Sony IMX265 CMOS sensor Power over Ethernet option Angled-head and board level variants Video-iris lens control GigE Vision camera featuring the Sony IMX265 CMOS sensor Manta G-319 is a machine

More information

2.8 Megapixel GigE camera with Sony ICX674 CCD sensor

2.8 Megapixel GigE camera with Sony ICX674 CCD sensor Manta G-283 Versatile 2.8 Megapixel camera 30.4 fps at full resolution Power over Ethernet option Video-iris lens control 2.8 Megapixel GigE camera with Sony ICX674 CCD sensor Manta G-283 is a machine

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Keyword Recognition Performance with Alango Voice Enhancement Package (VEP) DSP software solution for multi-microphone voice-controlled devices

Keyword Recognition Performance with Alango Voice Enhancement Package (VEP) DSP software solution for multi-microphone voice-controlled devices Keyword Recognition Performance with Alango Voice Enhancement Package (VEP) DSP software solution for multi-microphone voice-controlled devices V1.19, 2018-12-25 Alango Technologies 1 Executive Summary

More information

How to Build Optimized ML Applications with Arm Software

How to Build Optimized ML Applications with Arm Software How to Build Optimized ML Applications with Arm Software Arm Technical Symposia 2018 Arm K.K. Senior FAE Ryuji Tanaka Overview Today we will talk about applied machine learning (ML) on Arm. My aim for

More information

A Validation Study of a Kinect Based Body Imaging (KBI) Device System Based on ISO 20685:2010

A Validation Study of a Kinect Based Body Imaging (KBI) Device System Based on ISO 20685:2010 A Validation Study of a Kinect Based Body Imaging (KBI) Device System Based on ISO 20685:2010 Sara BRAGANÇA* 1, Miguel CARVALHO 1, Bugao XU 2, Pedro AREZES 1, Susan ASHDOWN 3 1 University of Minho, Portugal;

More information

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES D. Beloborodov a, L. Mestetskiy a a Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University,

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Effects Of Shadow On Canny Edge Detection through a camera

Effects Of Shadow On Canny Edge Detection through a camera 1523 Effects Of Shadow On Canny Edge Detection through a camera Srajit Mehrotra Shadow causes errors in computer vision as it is difficult to detect objects that are under the influence of shadows. Shadow

More information

Sensor Modalities. Sensor modality: Different modalities:

Sensor Modalities. Sensor modality: Different modalities: Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature

More information

Beam Remote Presence System

Beam Remote Presence System What is Beam? Beam is the remote presence system that takes video conferencing to the next level. Travel instantly and interact with people around the world in real-time whenever, wherever. Leveraging

More information

Embedded Audio & Robotic Ear

Embedded Audio & Robotic Ear Embedded Audio & Robotic Ear Marc HERVIEU IoT Marketing Manager Marc.Hervieu@st.com Voice Communication: key driver of innovation since 1800 s 2 IoT Evolution of Voice Automation: the IoT Voice Assistant

More information

ArchGenTool: A System-Independent Collaborative Tool for Robotic Architecture Design

ArchGenTool: A System-Independent Collaborative Tool for Robotic Architecture Design ArchGenTool: A System-Independent Collaborative Tool for Robotic Architecture Design Emanuele Ruffaldi (SSSA) I. Kostavelis, D. Giakoumis, D. Tzovaras (CERTH) Overview Problem Statement Existing Solutions

More information

Intelligent Machines Design Laboratory EEL 5666C

Intelligent Machines Design Laboratory EEL 5666C Atocha Too Donald MacArthur Center of Intelligent Machines and Robotics & Machine Intelligence Laboratory Intelligent Machines Design Laboratory EEL 5666C TABLE OF CONTENTS Abstract 3 Executive Summary

More information

Implementation of 3D Object Reconstruction Using Multiple Kinect Cameras

Implementation of 3D Object Reconstruction Using Multiple Kinect Cameras Implementation of 3D Object Reconstruction Using Multiple Kinect Cameras Dong-Won Shin and Yo-Sung Ho; Gwangju Institute of Science of Technology (GIST); Gwangju, Republic of Korea Abstract Three-dimensional

More information

Wireless Authentication System for Barcode Scanning Using Infrared Communication Technique

Wireless Authentication System for Barcode Scanning Using Infrared Communication Technique Wireless Authentication System for Barcode Scanning Using Infrared Communication Technique *M.S. Raheel, M. R. Asfi, M. Farooq-i-Azam, H. R. Shaukat, J. Shafqat Department of Electrical Engineering COMSATS

More information

EEL 5666C FALL Robot Name: DogBot. Author: Valerie Serluco. Date: December 08, Instructor(s): Dr. Arroyo. Dr. Schwartz. TA(s): Andrew Gray

EEL 5666C FALL Robot Name: DogBot. Author: Valerie Serluco. Date: December 08, Instructor(s): Dr. Arroyo. Dr. Schwartz. TA(s): Andrew Gray EEL 5666C FALL 2015 Robot Name: DogBot Author: Valerie Serluco Date: December 08, 2015 Instructor(s): Dr. Arroyo Dr. Schwartz TA(s): Andrew Gray Jacob Easterling INTRODUCTION ABSTRACT One of the fun things

More information

SMART OBJECT DETECTOR FOR VISUALLY IMPAIRED

SMART OBJECT DETECTOR FOR VISUALLY IMPAIRED SMART OBJECT DETECTOR FOR VISUALLY IMPAIRED Govardhan.S.D 1, Kumar.G 2, Mariyappan.S 3, Naveen Kumar.G 4, Nawin Asir.J 5 Assistant Professor 1, Student 2,3,4,5 Department of Electronics and Communication

More information

Using Mobile LiDAR To Efficiently Collect Roadway Asset and Condition Data. Pierre-Paul Grondin, B.Sc. Surveying

Using Mobile LiDAR To Efficiently Collect Roadway Asset and Condition Data. Pierre-Paul Grondin, B.Sc. Surveying Using Mobile LiDAR To Efficiently Collect Roadway Asset and Condition Data Pierre-Paul Grondin, B.Sc. Surveying LIDAR (Light Detection and Ranging) The prevalent method to determine distance to an object

More information

FULL HD IP PTZ CAMERa

FULL HD IP PTZ CAMERa smart 1.3/2mp wi-fi mini ptz ir bullet camera FULL HD IP PTZ CAMERa 1080P/120M IR RANGE/onvif Main Feature 1080p/960p Resolution - Leveraging 2MP/1.3MP CMOS image sensor delivers 1080p/960p video, D-WDR,

More information

BABLEFISH. Create a custom VR Engine

BABLEFISH. Create a custom VR Engine BABLEFISH Create a custom VR Engine Evolution of Digital Games VR Camera Controller Experiential Experiential DAQRI DAQRI : AR v VR v MR Startupbeat.com https://startupbeat.com/2017/10/dublin-dominating-vr-industry/

More information