Gaze tracking in multi-display environment
HSI 2013, Sopot, Poland, June 06-08, 2013

Tomasz Kocejko and Jerzy Wtorek
Department of Biomedical Engineering, Gdansk University of Technology, Narutowicza 11/12, Gdańsk, Poland

Abstract. This paper presents the basic ideas of eye and gaze tracking in a multiple-display environment. An algorithm for display detection and identification is described, as well as the rules for gaze interaction in a multi-display environment. The core of the method is the use of special LED markers together with eye- and scene-tracking glasses. The scene-tracking camera registers the positions of the markers, which are then represented as a cloud of points. By analyzing the mutual positions of the detected points, the algorithm estimates the position of each screen/display. A display number is assigned based on a special information marker. The described project shows the possibility of hands-free interaction in an MDE. It also shows how visual attention can be registered when information is dispersed over several screens.

Keywords: multi display environment, MDE, gaze communication, eye tracking, gaze interaction with multiple displays, visual attention registration.

I. INTRODUCTION

In general, eye tracking systems can be categorized as table-mounted or head-mounted. They differ from each other in design, technology and implemented algorithms. Head-mounted systems were designed to enable tracking visual attention in a so-called real environment, and they are equipped with one [1], two (for simultaneous tracking of the eye and the scene) [2]-[6], or even more than two cameras [7][8]. Multiple-camera head-mounted systems enable performing tasks or experiments outside the lab. Similarly to head-mounted systems, table-mounted ones can be equipped with one, two or more cameras. Regardless of the number of cameras, there are systems containing pan/tilt units which allow the camera to be rotated [9] and/or the lens focus to be adjusted automatically [10].
Eye trackers are widely used in many scientific disciplines, including medicine (e.g. neurology), psychology, cognitive science, and many others [16]-[18]. Apart from the above-mentioned uses, eye trackers are employed in systems developed for collecting data from a region of interest. They can also be used for communication between a user and a computer [13]-[15]. This is achieved in many ways, e.g. by displaying a keyboard on the computer's monitor and letting the user select a desired letter by gazing at it. The rapid development of information technologies drives the improvement of diagnostic techniques. However, this improvement is associated with an increased amount of information and the necessity of its transfer and presentation to a user. For example, the amount and variety of information generated and transferred through Hospital Information Systems (HIS) means that workstations are equipped with multiple displays or even consist of multiple independent modules. This follows from the fact that it is easier to absorb knowledge when complex information is presented on several monitors/screens [11]. Cloud computing technology improves information transfer between different systems and devices. As a result, an increasing number of HIS providers offer their products for both desktops and tablets. Dedicated frameworks are being developed to enable the user to better manage information and distribute it among PDAs, laptops, tablets and smartphones [12]. Thus, an objective evaluation of working effectiveness in such an environment demands special tools.

(This work has been partially supported by the European Regional Development Fund, project UDA-POIG "Home assistance for elders and disabled - DOMESTIC", Innovative Economy, National Cohesion Strategy.)
However, there are very few technologies that allow eye/gaze tracking in a constantly changing multi-display environment (MDE). Despite all the existing frameworks, the development of a functional interface that enables controlling multiple screens, devices or systems may be beneficial for understanding the rules of interaction in an MDE, as well as for learning more about the process of obtaining information and/or communicating.

II. METHOD

In general, the idea of tracking visual attention in a multiple-display environment relies on marking the displays of interest with IR LED markers. The concept of gaze tracking presented in the "Eye Mouse for disabled" paper [13] was used as the core of this project. Although the hardware for gaze and scene tracking remained the same, the software architecture and algorithms were redesigned. The main improvement was the detection of display candidates from a point cloud and their numbering according to IR identifying markers (Fig. 2b).

A. General Concept

The project aimed at creating a user interface (UI) that allows acquiring the distribution of visual attention across different devices, including mobile ones, or, in general, in a multi-display environment. The main task was to develop rules enabling data exchange between the eye tracking interface and the devices forming the multiple-display environment. To achieve this, a client-server architecture was chosen. Fig. 1 presents the general concept of the project.
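To make the registration step concrete, below is a minimal sketch of the kind of server-side display registry such a client-server architecture implies. Class and field names are hypothetical; the paper does not publish its data model or protocol.

```python
class DisplayRegistry:
    """Stores the ID and configuration of every display in the MDE.

    In the paper's concept, every device is assigned an ID, registered
    in a database, and can then report its configuration (screen size
    and resolution) to the server application when called.
    """

    def __init__(self):
        self._devices = {}

    def register(self, display_id, width_px, height_px):
        # Each display of interest must be assigned a unique ID and
        # stored before gaze tracking starts.
        if display_id in self._devices:
            raise ValueError(f"display {display_id} already registered")
        self._devices[display_id] = {"resolution": (width_px, height_px)}

    def configuration(self, display_id):
        # Integrated devices provide their configuration when called.
        return self._devices[display_id]


registry = DisplayRegistry()
registry.register(1, 1366, 768)   # e.g. a laptop as the primary device
registry.register(3, 1280, 800)   # e.g. an additional tablet
```

In a single-device multiple-screen setup the registration step is skipped, as the paper notes, since there is only one device to exchange data with.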
Fig. 1. The general concept of gaze interaction in a multi-display environment

To enable tracking of visual attention in a multiple-device environment, every device had to be assigned an ID number and registered in the database software. The registration rule did not apply to the multiple-screen, single-device gaze tracking situation; in such a case there was only one device to exchange data with. Regardless of the number of devices, all displays of interest had to be assigned an ID number and stored in the database. Numbering and registering the devices allowed for their integration. From then on, integrated devices were able to provide information about their configuration (screen size and resolution) to the server application when called. The eyemouse interface [13], equipped with an eye and a scene tracking camera, was used for data registration. The scene camera constantly observed the environment, registering the position of each display and correlating this information with the data provided by the eye tracking camera. From the results of the screen tracking and pupil center detection algorithms, the approximate fixation point was calculated. When the software classified the gaze as being within a certain device/screen, the point of regard was calculated for that display of interest. The selected device broadcast its basic parameters and the proper information was exchanged with it. In the presented concept, the server application handled the data exchange and organization between tracked devices. Proper pupil detection was possible thanks to algorithms developed in earlier projects [5][6][13]. However, device detection and differentiation relied on a new algorithm detecting displays from the cloud of points created by the IR LED markers.

B. IR markers

Each display's corners are marked with IR LEDs. An exemplary marker configuration is presented in Fig. 2a and the identifying marker in Fig. 2b.
To distinguish particular devices/displays from each other, special identifying markers were designed and mounted between the top corners of a display. This additional marker conveyed the device/screen ID number in the configured MDE (multi-display environment). Two different ways of numbering the displays of interest were designed:
- traditional - the number of lit LEDs corresponds to the display number,
- binary - the display number is represented as a binary code (the lit LEDs represent a number in binary).

Fig. 2. a) Exemplary configuration of LED markers b) the model of the identifying LED marker

The identifying markers were designed to be powered from a battery and contained switches enabling presentation of the ID number by switching the LEDs on and off. The markers placed at the display corners were designed to be powered either from a battery or from USB.

C. Display detection and identification

Display detection was based on fitting quadrangle shapes to the given cloud of points. The scene camera was equipped with IR filters, mainly to enable marker registration. The LED markers attached to the displays created a cloud of bright points on the captured images. The position of each captured marker was estimated with a contour detection algorithm. Every marker was represented by its contour properties (size, position of the center, etc.). The marker positions were converted into a cloud of points and stored in a matrix. The next step was to apply a mesh connecting the points with each other. Every two points joined with a line created a section. Sections sharing a common point were paired according to the ratio of their lengths and the angle between them. Pairs with opposite vertices that shared two points were merged into quadrangles. The number of created quadrangles was then reduced so that every four points belonged to only one quadrangle. The detected quadrangles were identified and numbered according to the information obtained from the identifying markers.
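The geometric filtering above can be illustrated with a simplified brute-force sketch. Instead of the paper's section-pairing and merging procedure, it tests every four-point subset of the cloud and keeps those whose interior angles all fall within a plausible range; the angle thresholds are illustrative assumptions, not the paper's values.

```python
import math
from itertools import combinations


def _angle_deg(a, b, c):
    # Interior angle at vertex b formed by points a-b-c, in degrees.
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def _order_around_centroid(pts):
    # Sort the four points by polar angle around their centroid so that
    # they form a simple (non-self-intersecting) quadrangle.
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))


def detect_quadrangles(points, min_angle=60.0, max_angle=120.0):
    # Keep every 4-point subset of the cloud whose interior angles all
    # lie within [min_angle, max_angle] degrees.
    quads = []
    for combo in combinations(points, 4):
        quad = _order_around_centroid(list(combo))
        angles = [_angle_deg(quad[i - 1], quad[i], quad[(i + 1) % 4])
                  for i in range(4)]
        if all(min_angle <= a <= max_angle for a in angles):
            quads.append(quad)
    return quads
```

For example, four corners of a display plus a stray reflection yield exactly one accepted quadrangle, since any subset including the outlier produces an angle outside the allowed range. The brute-force search grows combinatorially with the number of markers, which mirrors the slowdown the paper reports for large point clouds.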
It was assumed that these markers were always placed at the center of the upper edge of each display. Depending on the configuration, the algorithm checked either the binary code created by the lit LEDs or simply each LED's status (on/off). The block diagram of the algorithm is presented in Fig. 3. This display detection and identification algorithm allowed estimating the actual position of each marked screen or device.
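The two numbering schemes can be sketched as follows; the bit order of the binary scheme (leftmost LED as the most significant bit) is an assumption, since the paper does not specify it.

```python
def decode_display_id(led_states, mode="binary"):
    """Decode a display ID from the identifying marker's LED states.

    led_states: sequence of booleans (True = LED lit), left to right.
    mode: "traditional" - the ID equals the number of lit LEDs;
          "binary"      - the lit LEDs encode the ID in binary,
                          leftmost LED taken as the most significant
                          bit (an assumed convention).
    """
    if mode == "traditional":
        return sum(bool(s) for s in led_states)
    value = 0
    for state in led_states:
        value = (value << 1) | int(bool(state))
    return value
```

With three LEDs the traditional scheme can label at most three displays, while the binary scheme can label seven, which is why the binary variant scales better for larger MDE configurations.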
Fig. 3. Block diagram of the multiple display tracking algorithm

D. Gaze estimation

The concept of gaze estimation used for the eyemouse interface was partially reused for gaze tracking in the multiple-display environment. The basic idea of estimating the gaze position is presented in Fig. 4. Besides pupil center detection and establishing the primary screen (the ID number of the primary display was set before the calibration procedure), the calibration procedure was the first important step in gaze estimation. Since the calibration had to be done with respect to a constant set of points, only one display (the primary one) was used during this procedure. The calibration led to the computation of the transform matrix Ω1. During calibration the user looked at points of constant position in the scene while the software registered his/her pupil position. The calibration points were stored in matrix M2 while matrix M1 contained the corresponding pupil positions. For calculating the transform matrix Ω1 from M1 and M2, the perspective transform was used. From then on, the two most important parameters could be calculated: the virtual point of regard and the true point of regard.

The virtual point of regard (VPoR) refers to the pupil center position correlated with the images and information captured by the scene camera. It can thus be represented in the same matrix space as a captured image of the scene. The accuracy of the VPoR estimation depends on the precision of the calibration and thereby on the transformation matrix:

VPoR = Ω1^T P (1)

where:
Ω1 - the transformation matrix computed from the pupil positions stored in matrix M1 and the related calibration points stored in M2,
P - a vector containing the absolute pupil center position registered by the eye camera,
VPoR - the virtual point of regard, a vector containing the fixation position represented in the same space as the images captured by the scene camera.

Knowing the VPoR, it was possible to validate whether it fell within the area of a particular labeled display. Along with the VPoR, the detected display positions were also stored in the matrix containing the information about the image captured by the scene camera. To establish the "true" point of regard, TPoR (the actual position of fixation within a particular display), the perspective transform was used once more to calculate another transform matrix, Ω2. In this case, however, the transform matrix was calculated dynamically. Unlike the first transform matrix Ω1, which was calculated only once, during the calibration procedure and for static data sets, Ω2 was calculated every time the software detected displays in the image captured by the scene camera. Using the perspective transformation, the Ω2 matrix was computed from matrix Mi (representing the current position of the display of interest) and the corresponding matrix MRi containing the marker positions represented in the particular display's pixel space (i.e. with regard to its resolution). With the transform matrix Ω2 and the VPoR, the fixation on the actual object of interest was calculated according to the formula:

TPoR = Ω2^T VPoR (2)

where:
Ω2 - the transformation matrix computed from the virtual positions of the IR LED markers dynamically captured by the scene camera (stored in matrix Mi) and their positions represented in pixel space (MRi),
VPoR - the virtual point of regard, as in (1),
TPoR - the fixation on the real object of interest, represented in the particular display's pixel space.

Fig. 4. Block diagram of gaze interaction in a multi-display environment

The position of the TPoR registered for a particular device was transferred to the server and joined with the other information passed by this device, such as its number, a screenshot and/or a performed action (e.g. a mouse click).

III. RESULTS

To check how the display detection and identification algorithm works, a set of exemplary images of a potential scene was generated from rendered 3D images. The set of images contained up to three displays set up in different positions in space. An exemplary rendered image and the result of the display detection algorithm are presented in Fig. 5.

Fig. 5. a) Rendered image of an exemplary MDE b) result of display detection

All detected displays were numbered according to the information obtained from the identifying markers. To check every aspect of the algorithm (including upper edge detection and vertex orientation detection), the important features were marked on the image (the upper edge in green and the bottom-right corner of the screen in yellow).

Rendering a virtual representation of a potential multi-display environment allowed assuming a very wide viewing angle of the scene tracking camera and checking different orientations of displays in a large scene. The results for different configurations of three screens in the scene are presented in Fig. 6.

Fig. 6. Different configurations of identified displays

The detection result for the case of two displays overlying each other is presented in Fig. 7.

Fig. 7. Display detection algorithm applied to overlying screens

The detection of rotated (and differently sized) displays is shown in Fig. 8.

Fig. 8. Result of rotated display detection

However, the algorithm worked correctly only when the LED markers were attached properly. This means that each angle of a possible quadrangle must be within
a certain range (a fixed angle range, in degrees, was set in this experiment). Fig. 9 presents the results for a wrong marker placement.

Fig. 9. Wrong LED configuration

Moreover, the algorithm was tested in a real multi-display environment. Two devices were set up: a laptop (as the primary device, number 1) and a tablet (an additional device, number 3). A possible situation of tracking visual attention across two different devices is presented in Fig. 10 (Fig. 10a - the image captured by the scene camera, Fig. 10b - the result of the display detection algorithm).

Fig. 10. An example of a) an MDE configuration and b) the result of display detection and identification

The presented example shows a situation where the user moves his visual attention between the configured devices. Another example (Fig. 11) presents the results of the implemented algorithm for a user focusing his attention on only one of the devices.

Fig. 11. An example of a) an MDE configuration and b) the result of display detection and identification

IV. DISCUSSION

We elaborated a method of display labeling based on IR LED markers, which create a cloud of bright points when captured by the IR-sensitive scene camera. In our approach we assumed that displays have a quadrangle shape. We then applied distance and angular dependencies to estimate their positions within the detected cloud of points. On this basis we developed an algorithm for detecting and identifying labeled displays. We then implemented this algorithm in previously created eye tracking software and established a method for tracking visual attention in a multi-display environment. We conducted a validation study which indicates that the algorithm performs well on images captured by the scene camera of the head-mounted eye tracking interface. The algorithm works correctly even for a large cloud of points. The experiment was conducted in natural light conditions (in a regular office), including the illumination of the tracked displays (Fig. 11a).
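The two-stage mapping of equations (1) and (2), from pupil position to VPoR and from VPoR to the current display's pixel space, amounts to applying two 3x3 perspective-transform (homography) matrices in sequence. The sketch below uses made-up matrices purely for illustration; in practice both Ω1 and Ω2 would be estimated from four point correspondences (e.g. with OpenCV's getPerspectiveTransform), and Ω2 would be recomputed every frame from the detected marker positions.

```python
def apply_homography(H, point):
    """Apply a 3x3 perspective-transform matrix to a 2D point
    (homogeneous coordinates with perspective divide)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)


# OMEGA1: calibration-time mapping, pupil position -> VPoR in the
# scene-image space (eq. 1).  OMEGA2: per-frame mapping, VPoR -> pixel
# space of the display currently detected in the scene image (eq. 2).
# The entries below are invented for illustration only; the matrices
# are written directly in the form in which they are applied.
OMEGA1 = [[2.0, 0.0, 10.0], [0.0, 2.0, 20.0], [0.0, 0.0, 1.0]]
OMEGA2 = [[0.5, 0.0, -5.0], [0.0, 0.5, -10.0], [0.0, 0.0, 1.0]]

pupil = (100.0, 80.0)
vpor = apply_homography(OMEGA1, pupil)   # eq. (1)
tpor = apply_homography(OMEGA2, vpor)    # eq. (2)
```

Because Ω1 is fixed after calibration while Ω2 follows the detected display, the user can shift attention between displays without recalibration, which matches the advantage claimed in the discussion.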
The problem of disturbing reflections from behind the operator was partially limited by the IR-pass filters implemented on both tracking cameras. Rules for differentiating between the IR markers and other light sources have also been implemented in the algorithm. Misdetection episodes still happened, although rarely. Nevertheless, this problem has been taken very seriously, and attempts to solve it both by hardware modifications and by software improvements have been made. As for the rendered scenes, all displays were detected and identified correctly even when the displays' positions were changed (Fig. 6). The presented concept of gaze tracking in a multi-display environment may improve the tracking of visual attention across multiple devices. A slight disadvantage of the proposed solution is the necessity of wearing the eye tracking glasses and of pre-configuring the devices belonging to the monitored multi-display environment (MDE). A great advantage, however, is that only one eye tracking interface is needed. After calibration the user can freely move his attention from one device to another, as well as interact with different devices, without recalibration. Although the algorithm can detect and identify quite a number of devices, its efficiency decreases with every additional set of markers. When the cloud of points grows, the number of sections increases, which affects the speed of data processing. It was noticed that the algorithm may even collapse for a very large number of markers. Although the viewing angle of the scene tracking camera enables continuous tracking of up to three different displays that do not overlie each other (for example a laptop, a tablet and a smartphone), this part still must be improved. An important fact is that the algorithm correctly identifies the
current display of interest (Fig. 11a). Moreover, the algorithm works well even for rotated displays (but only up to a limited rotation angle). However, a certain assumption was made regarding the angle values of the detected quadrangles. These values depend on correct IR marker placement as well as on the perspective projection of the captured image onto the camera plane. Therefore we are planning to investigate the range of head rotation during work within an MDE. Depending on the results, the problem will either be solved mathematically or simply ignored (if head rotation does not influence the results of the display tracking algorithm). Still, more studies are needed to accurately verify the possible angle of display rotation. It happened that the empty space between marked displays was qualified as a quadrangle; however, the algorithm still identified the displays correctly thanks to the specially designed markers. The described concept of gaze tracking in a multi-display environment can also be applied to hands-free communication with multiple devices clustered into one group of interest. We have not measured the accuracy and spatial resolution of the interface in the MDE, but these parameters were measured for the eye tracking device we designed (eyemouse). The tests were conducted for a 14" display, and the results were an accuracy of 0.7 and a spatial resolution of 0.4. More tests will be conducted in the MDE. Gaze communication would require installing additional applications on these devices to enable receiving and translating the commands given by means of the eye tracking software.

REFERENCES

[1] Fejtova, M.; Fejt, J.; Novak, P. & Stepankova, O. (2006), System I4Control: Contactless control of PC, in 'Intelligent Engineering Systems, INES '06. Proceedings.
International Conference on', pp.
[2] cts/iview-x-hed.html
[3] ware/tobii-glasses-eye-tracker/technical-specifications/
[4] nts/smi_etg_flyer.pdf
[5] Kocejko, T.; Bujnowski, A. & Wtorek, J., Complex human computer interface for LAS patient, HSI'09, Proc. 2nd International Conference on Human System Interaction, Catania, Italy, May, pp.
[6] Kocejko, T.; Bujnowski, A. & Wtorek, J., Dual camera based eye gaze tracking system, 4th European Conference of the International Federation for Medical and Biological Engineering, IFMBE Proceedings, Antwerp, Belgium, November, Berlin: Springer, pp.
[7] Nishant Kumar, Stefan Kohlbecher, E. S. (2009), 'A Novel Approach To Video-Based Pupil Tracking', Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics.
[8] Vockeroth, J.; Dera, T.; Boening, G.; Bartl, K.; Bardins, S. & Schneider, E. (2007), The combination of a mobile gaze-driven and a head-mounted camera in a hybrid perspective setup, in 'Systems, Man and Cybernetics, ISIC. IEEE International Conference on', pp.
[9] Liu, R.; Zhou, X.; Wang, N. & Zhang, M. (2009), Adaptive Regulation of CCD Camera in Eye Gaze Tracking System, in 'Image and Signal Processing, CISP '09. 2nd International Congress on', pp.
[10] Poitschke, T.; Bay, E.; Laquai, F.; Rigoll, G.; Bardins, S.; Bartl, K.; Vockeroth, J. & Schneider, E. (2009), Using liquid lenses to extend the operating range of a remote gaze tracking system, in 'Systems, Man and Cybernetics, SMC. IEEE International Conference on', pp.
[11] Poder, T.; Godbout, S. & Bellemare, C. (2011), 'Dual vs. single computer monitor in a Canadian hospital Archiving Department: a study of efficiency and satisfaction', HIM J 40(3).
[12] Biehl, J.; Baker, W.; Bailey, B.; Tan, D.; Inkpen, K. & Czerwinski, M., IMPROMPTU: A New Interaction Framework for Supporting Collaboration in Multiple Display Environments and Its Field Evaluation for Co-located Software Development, CHI 2008, April 5-10, 2008, Florence, Italy.
[13] Kocejko, T.; Bujnowski, A. & Wtorek, J. (2008), Eye mouse for disabled, in 'Human System Interactions, 2008 Conference on', pp.
[14] Magee, J.; Scott, M.; Waber, B. & Betke, M. (2004), EyeKeys: A Real-Time Vision Interface Based on Gaze Detection from a Low-Grade Video Camera, in 'Computer Vision and Pattern Recognition Workshop, CVPRW '04. Conference on', pp.
[15] Li, D.; Winfield, D. & Parkhurst, D. (2005), Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches, in 'Computer Vision and Pattern Recognition - Workshops, CVPR Workshops. IEEE Computer Society Conference on', pp. 79.
[16] Pfeiffer, U.J.; Schilbach, L.; Jording, M.; Timmermans, B.; Bente, G. & Vogeley, K., Eyes on the mind: investigating the influence of gaze dynamics on the perception of others in real-time social interaction, Front Psychol. 2012;3:537. Epub 2012 Dec 3.
[17] Balslev, T.; Jarodzka, H.; Holmqvist, K.; de Grave, W.; Muijtjens, A.M.; Eika, B.; van Merriënboer, J. & Scherpbier, A.J., Visual expertise in paediatric neurology, Eur J Paediatr Neurol, Mar;16(2). Epub 2011 Sep 8.
[18] Tylén, K.; Allen, M.; Hunter, B.K. & Roepstorff, A., Interaction vs. observation: distinctive modes of social cognition in human brain and behavior? A combined fMRI and eye-tracking study, Front Hum Neurosci. 2012;6:331. Epub 2012 Dec 19.
More informationMingle Face Detection using Adaptive Thresholding and Hybrid Median Filter
Mingle Face Detection using Adaptive Thresholding and Hybrid Median Filter Amandeep Kaur Department of Computer Science and Engg Guru Nanak Dev University Amritsar, India-143005 ABSTRACT Face detection
More informationGAZE TRACKING APPLIED TO IMAGE INDEXING
GAZE TRACKING APPLIED TO IMAGE INDEXING Jean Martinet, Adel Lablack, Nacim Ihaddadene, Chabane Djeraba University of Lille, France Definition: Detecting and tracking the gaze of people looking at images
More informationUsing temporal seeding to constrain the disparity search range in stereo matching
Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department
More informationReal time eye detection using edge detection and euclidean distance
Vol. 6(20), Apr. 206, PP. 2849-2855 Real time eye detection using edge detection and euclidean distance Alireza Rahmani Azar and Farhad Khalilzadeh (BİDEB) 2 Department of Computer Engineering, Faculty
More informationApplication of Radon Transform for Scaling and Rotation estimation of a digital image
International Journal of Engineering Research and Development eissn : 2278-067X, pissn : 2278-800X, www.ijerd.com Volume 2, Issue 3 (July 2012), PP. 35-39 Application of Radon Transform for Scaling and
More informationFrom Structure-from-Motion Point Clouds to Fast Location Recognition
From Structure-from-Motion Point Clouds to Fast Location Recognition Arnold Irschara1;2, Christopher Zach2, Jan-Michael Frahm2, Horst Bischof1 1Graz University of Technology firschara, bischofg@icg.tugraz.at
More informationOBSTACLE DETECTION USING STRUCTURED BACKGROUND
OBSTACLE DETECTION USING STRUCTURED BACKGROUND Ghaida Al Zeer, Adnan Abou Nabout and Bernd Tibken Chair of Automatic Control, Faculty of Electrical, Information and Media Engineering University of Wuppertal,
More informationHelping people with ICT device control by eye gaze
Loughborough University Institutional Repository Helping people with ICT device control by eye gaze This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation:
More informationAn Overview of Matchmoving using Structure from Motion Methods
An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu
More informationFast K-nearest neighbors searching algorithms for point clouds data of 3D scanning system 1
Acta Technica 62 No. 3B/2017, 141 148 c 2017 Institute of Thermomechanics CAS, v.v.i. Fast K-nearest neighbors searching algorithms for point clouds data of 3D scanning system 1 Zhang Fan 2, 3, Tan Yuegang
More informationRealtime Object Recognition Using Decision Tree Learning
Realtime Object Recognition Using Decision Tree Learning Dirk Wilking 1 and Thomas Röfer 2 1 Chair for Computer Science XI, Embedded Software Group, RWTH Aachen wilking@informatik.rwth-aachen.de 2 Center
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationAdaptive Image Sampling Based on the Human Visual System
Adaptive Image Sampling Based on the Human Visual System Frédérique Robert *, Eric Dinet, Bernard Laget * CPE Lyon - Laboratoire LISA, Villeurbanne Cedex, France Institut d Ingénierie de la Vision, Saint-Etienne
More informationAdaptive Skin Color Classifier for Face Outline Models
Adaptive Skin Color Classifier for Face Outline Models M. Wimmer, B. Radig, M. Beetz Informatik IX, Technische Universität München, Germany Boltzmannstr. 3, 87548 Garching, Germany [wimmerm, radig, beetz]@informatik.tu-muenchen.de
More informationA Qualitative Analysis of 3D Display Technology
A Qualitative Analysis of 3D Display Technology Nicholas Blackhawk, Shane Nelson, and Mary Scaramuzza Computer Science St. Olaf College 1500 St. Olaf Ave Northfield, MN 55057 scaramum@stolaf.edu Abstract
More informationThis is the accepted version of a paper presented at Proceeding of The Swedish Symposium on Image Analysis (SSBA2013), Gothenburg, Sweden.
http://www.diva-portal.org Postprint This is the accepted version of a paper presented at Proceeding of The Swedish Symposium on Image Analysis (SSBA2013), Gothenburg, Sweden. Citation for the original
More informationContextual priming for artificial visual perception
Contextual priming for artificial visual perception Hervé Guillaume 1, Nathalie Denquive 1, Philippe Tarroux 1,2 1 LIMSI-CNRS BP 133 F-91403 Orsay cedex France 2 ENS 45 rue d Ulm F-75230 Paris cedex 05
More informationIndependent Component Analysis (ICA) in Real and Complex Fourier Space: An Application to Videos and Natural Scenes
Independent Component Analysis (ICA) in Real and Complex Fourier Space: An Application to Videos and Natural Scenes By Nimit Kumar* and Shantanu Sharma** {nimitk@iitk.ac.in, shsharma@iitk.ac.in} A Project
More informationDESIGN AND IMPLEMENTATION OF THE REMOTE CONTROL OF THE MANIPULATOR
ACTA UNIVERSITATIS AGRICULTURAE ET SILVICULTURAE MENDELIANAE BRUNENSIS Volume 62 158 Number 6, 2014 http://dx.doi.org/10.11118/actaun201462061521 DESIGN AND IMPLEMENTATION OF THE REMOTE CONTROL OF THE
More informationThe 12th Conference on Selected Problems of Electrical Engineering and Electronics WZEE 2015
The 12th Conference on Selected Problems of Electrical Engineering and Electronics WZEE 2015 Proceedings September 17-19, 2015 Kielce, Poland 1 Measurements of Shape Memory Alloy Actuator for a Self-Switching
More informationA Study on Object Tracking Signal Generation of Pan, Tilt, and Zoom Data
Vol.8, No.3 (214), pp.133-142 http://dx.doi.org/1.14257/ijseia.214.8.3.13 A Study on Object Tracking Signal Generation of Pan, Tilt, and Zoom Data Jin-Tae Kim Department of Aerospace Software Engineering,
More informationA High Speed Face Measurement System
A High Speed Face Measurement System Kazuhide HASEGAWA, Kazuyuki HATTORI and Yukio SATO Department of Electrical and Computer Engineering, Nagoya Institute of Technology Gokiso, Showa, Nagoya, Japan, 466-8555
More informationApplication of partial differential equations in image processing. Xiaoke Cui 1, a *
3rd International Conference on Education, Management and Computing Technology (ICEMCT 2016) Application of partial differential equations in image processing Xiaoke Cui 1, a * 1 Pingdingshan Industrial
More informationFully Automatic Endoscope Calibration for Intraoperative Use
Fully Automatic Endoscope Calibration for Intraoperative Use Christian Wengert, Mireille Reeff, Philippe C. Cattin, Gábor Székely Computer Vision Laboratory, ETH Zurich, 8092 Zurich, Switzerland {wengert,
More informationA Novel Field-source Reverse Transform for Image Structure Representation and Analysis
A Novel Field-source Reverse Transform for Image Structure Representation and Analysis X. D. ZHUANG 1,2 and N. E. MASTORAKIS 1,3 1. WSEAS Headquarters, Agiou Ioannou Theologou 17-23, 15773, Zografou, Athens,
More informationGaze Tracking. Introduction :
Introduction : Gaze Tracking In 1879 in Paris, Louis Émile Javal observed that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops
More informationTranslation Symmetry Detection: A Repetitive Pattern Analysis Approach
2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops Translation Symmetry Detection: A Repetitive Pattern Analysis Approach Yunliang Cai and George Baciu GAMA Lab, Department of Computing
More informationAdvance Shadow Edge Detection and Removal (ASEDR)
International Journal of Computational Intelligence Research ISSN 0973-1873 Volume 13, Number 2 (2017), pp. 253-259 Research India Publications http://www.ripublication.com Advance Shadow Edge Detection
More informationCS 4758: Automated Semantic Mapping of Environment
CS 4758: Automated Semantic Mapping of Environment Dongsu Lee, ECE, M.Eng., dl624@cornell.edu Aperahama Parangi, CS, 2013, alp75@cornell.edu Abstract The purpose of this project is to program an Erratic
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationDirection-Length Code (DLC) To Represent Binary Objects
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 18, Issue 2, Ver. I (Mar-Apr. 2016), PP 29-35 www.iosrjournals.org Direction-Length Code (DLC) To Represent Binary
More informationPolarized Downwelling Radiance Distribution Camera System
Polarized Downwelling Radiance Distribution Camera System Kenneth J. Voss Physics Department, University of Miami Coral Gables, Fl. 33124 phone: (305) 284-2323 ext 2 fax: (305) 284-4222 email: voss@physics.miami.edu
More informationFAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES
FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES Jie Shao a, Wuming Zhang a, Yaqiao Zhu b, Aojie Shen a a State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing
More informationDevelopment of a Fall Detection System with Microsoft Kinect
Development of a Fall Detection System with Microsoft Kinect Christopher Kawatsu, Jiaxing Li, and C.J. Chung Department of Mathematics and Computer Science, Lawrence Technological University, 21000 West
More informationEASY PROJECTOR AND MONOCHROME CAMERA CALIBRATION METHOD USING PLANE BOARD WITH MULTIPLE ENCODED MARKERS
EASY PROJECTOR AND MONOCHROME CAMERA CALIBRATION METHOD USING PLANE BOARD WITH MULTIPLE ENCODED MARKERS Tatsuya Hanayama 1 Shota Kiyota 1 Ryo Furukawa 3 Hiroshi Kawasaki 1 1 Faculty of Engineering, Kagoshima
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationCOLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON. Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij
COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij Intelligent Systems Lab Amsterdam, University of Amsterdam ABSTRACT Performance
More informationA Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation
, pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,
More informationA Novel Video Enhancement Based on Color Consistency and Piecewise Tone Mapping
A Novel Video Enhancement Based on Color Consistency and Piecewise Tone Mapping Keerthi Rajan *1, A. Bhanu Chandar *2 M.Tech Student Department of ECE, K.B.R. Engineering College, Pagidipalli, Nalgonda,
More informationVirtual Interaction System Based on Optical Capture
Sensors & Transducers 203 by IFSA http://www.sensorsportal.com Virtual Interaction System Based on Optical Capture Peng CHEN, 2 Xiaoyang ZHOU, 3 Jianguang LI, Peijun WANG School of Mechanical Engineering,
More informationGeneration of a Disparity Map Using Piecewise Linear Transformation
Proceedings of the 5th WSEAS Int. Conf. on COMPUTATIONAL INTELLIGENCE, MAN-MACHINE SYSTEMS AND CYBERNETICS, Venice, Italy, November 2-22, 26 4 Generation of a Disparity Map Using Piecewise Linear Transformation
More information3D from Images - Assisted Modeling, Photogrammetry. Marco Callieri ISTI-CNR, Pisa, Italy
3D from Images - Assisted Modeling, Photogrammetry Marco Callieri ISTI-CNR, Pisa, Italy 3D from Photos Our not-so-secret dream: obtain a reliable and precise 3D from simple photos Why? Easier, cheaper
More informationHeadMouse. Robotic Research Team. University of Lleida
HeadMouse Robotic Research Team University of Lleida User Manual and Frequently Asked Questions What is HeadMouse? HeadMouse is a free program designed to replace the computer mouse. The user can control
More informationImage Formation. CS418 Computer Graphics Eric Shaffer.
Image Formation CS418 Computer Graphics Eric Shaffer http://graphics.cs.illinois.edu/cs418/fa14 Some stuff about the class Grades probably on usual scale: 97 to 93: A 93 to 90: A- 90 to 87: B+ 87 to 83:
More informationHeadMouse. Robotic Research Team. University of Lleida
HeadMouse Robotic Research Team University of Lleida User Manual and Frequently Asked Questions What is HeadMouse? HeadMouse is a free program designed to replace the computer mouse. The user can control
More informationScene Text Detection Using Machine Learning Classifiers
601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department
More informationMy Google Glass Sees Your Password!
2014 My Smartwatch Sees Your Password! My Google Glass Sees Your Password! My iphone Sees Your Password! Qinggang Yue University of Massachusetts Lowell In Collaboration with Zhen Ling, Southeast University,
More informationOcclusion Detection of Real Objects using Contour Based Stereo Matching
Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,
More informationTowards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training
Towards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training Patrick Heinemann, Frank Sehnke, Felix Streichert, and Andreas Zell Wilhelm-Schickard-Institute, Department of Computer
More informationCOLOR AND SHAPE BASED IMAGE RETRIEVAL
International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR) ISSN 2249-6831 Vol.2, Issue 4, Dec 2012 39-44 TJPRC Pvt. Ltd. COLOR AND SHAPE BASED IMAGE RETRIEVAL
More informationFACET SHIFT ALGORITHM BASED ON MINIMAL DISTANCE IN SIMPLIFICATION OF BUILDINGS WITH PARALLEL STRUCTURE
FACET SHIFT ALGORITHM BASED ON MINIMAL DISTANCE IN SIMPLIFICATION OF BUILDINGS WITH PARALLEL STRUCTURE GE Lei, WU Fang, QIAN Haizhong, ZHAI Renjian Institute of Surveying and Mapping Information Engineering
More informationGreen Computing and Sustainability
Green Computing and Sustainability Damien Lecarpentier (CSC) einfranet Workshop, Brussels, 15th April 2010 CSC Tieteen tietotekniikan keskus Oy CSC IT Center for Science Ltd. 1 Green Computing: a hot topic
More informationDEVELOPMENT OF THE EFFECTIVE SET OF FEATURES CONSTRUCTION TECHNOLOGY FOR TEXTURE IMAGE CLASSES DISCRIMINATION
DEVELOPMENT OF THE EFFECTIVE SET OF FEATURES CONSTRUCTION TECHNOLOGY FOR TEXTURE IMAGE CLASSES DISCRIMINATION E. Biryukova 1, R. Paringer 1,2, A.V. Kupriyanov 1,2 1 Samara National Research University,
More informationA Mental Cutting Test on Female Students Using a Stereographic System
Journal for Geometry and Graphics Volume 3 (1999), No. 1, 111 119 A Mental Cutting Test on Female Students Using a Stereographic System Emiko Tsutsumi, Kanakao Shiina, Ayako Suzaki, Kyoko Yamanouchi, Takaaki
More informationAircraft Tracking Based on KLT Feature Tracker and Image Modeling
Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Khawar Ali, Shoab A. Khan, and Usman Akram Computer Engineering Department, College of Electrical & Mechanical Engineering, National University
More informationSubpixel Corner Detection Using Spatial Moment 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute
More informationROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW
ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,
More informationAlgorithm of correction of error caused by perspective distortions of measuring mark images
Mechanics & Industry 7, 73 (206) c AFM, EDP Sciences 206 DOI: 0.05/meca/206077 www.mechanics-industry.org Mechanics & Industry Algorithm of correction of error caused by perspective distortions of measuring
More informationInternational Journal of Computer Science Trends and Technology (IJCST) Volume 3 Issue 1, Jan-Feb 2015
RESEARCH ARTICLE Comparison between Square Pixel Structure and Hexagonal Pixel Structure in Digital Image Processing Illa Singh 1, Ashish Oberoi 2 M.Tech 1, Final Year Student, Associate Professor2 Department
More informationLocating ego-centers in depth for hippocampal place cells
204 5th Joint Symposium on Neural Computation Proceedings UCSD (1998) Locating ego-centers in depth for hippocampal place cells Kechen Zhang,' Terrence J. Sejeowski112 & Bruce L. ~cnau~hton~ 'Howard Hughes
More informationSearching Image Databases Containing Trademarks
Searching Image Databases Containing Trademarks Sujeewa Alwis and Jim Austin Department of Computer Science University of York York, YO10 5DD, UK email: sujeewa@cs.york.ac.uk and austin@cs.york.ac.uk October
More informationTarget Marker: A Visual Marker for Long Distances and Detection in Realtime on Mobile Devices
Proceedings of the World Congress on Electrical Engineering and Computer Systems and Science (EECSS 015) Barcelona, Spain, July 13-14, 015 Paper No. 339 Target Marker: A Visual Marker for Long Distances
More informationResearch on online inspection system of emulsion explosive packaging defect based on machine vision
Research on online inspection system of emulsion explosive packaging defect based on machine vision Yuesheng Wang *, and Zhipeng Liu School of Hangzhou Dianzi University, Hangzhou, China. Abstract. Roll
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationTexture Segmentation by Windowed Projection
Texture Segmentation by Windowed Projection 1, 2 Fan-Chen Tseng, 2 Ching-Chi Hsu, 2 Chiou-Shann Fuh 1 Department of Electronic Engineering National I-Lan Institute of Technology e-mail : fctseng@ccmail.ilantech.edu.tw
More information