Light source separation from image sequences of oscillating lights


2014 IEEE 28th Convention of Electrical and Electronics Engineers in Israel

Amir Kolaman, Rami Hagege and Hugo Guterman
Electrical and Computer Engineering Department, Ben Gurion University of the Negev, Beer Sheva, Israel

Abstract — Light has a significant influence on the color of objects in an image. Scene light is sometimes composed of a mixture of several light sources, and this mixture makes it hard to achieve color constancy across the image. Having precise control over the intensity of the light sources at the capturing stage enables an easy light source separation, by turning on a single light source for each captured frame. In other cases, prior knowledge of the cyclic behavior of the light intensity over time can be used instead, by describing each light source as a set of basis functions. This enables the reconstruction of the light sources by means of inner products of the video sequence with the basis functions. This analysis method assumes that the signal is linear, but the assumption fails when specularities or high illuminance values cause clipping, or when low-light conditions cause noise to become apparent in the image. By using a cyclic High Dynamic Range (HDR) sampling method, the scene becomes linear and good reconstruction results are obtained. Two experiments demonstrate the approach: the first shows the decomposition of two oscillating light sources, and the second the decomposition of an oscillating light and sunlight.

I. INTRODUCTION

Color is one of the main components used to help us describe our everyday lives. Color helps us describe people, scenes, objects and even feelings. In computer vision, color helps the computer better recognize objects in the scene [1], produce 3D data [2] and more. Light has a strong influence on the color of an object in the image [3].
The same object captured by the same camera under different types of illumination may vary in its measured color values [4]. Color constancy algorithms try to transform the input image into a new image in which all the colors in the scene are independent of the light source illuminating them [5]. This transformation is also referred to as White Balance (WB). Many natural scenes have mixed lighting conditions originating from several light sources, as seen in the example in figure 1(a). Artificial lights, originating from indoor lighting on the ceiling or walls, have different color temperatures (tints), varying from reddish (tungsten) to blueish (fluorescent) to green (LED). This work proposes a method for separating light sources at the capturing stage, which enables improvement of the white balance of a color image, as seen in figure 1(b).

Fig. 1. (a) Natural image with two light sources affecting the image with different color tints. Notice how the table changes in color (zoom-in) from grey on the left to blue on the right. (b) Comparison between standard white balance and white balance after light source separation using the proposed method.

II. RELATED WORK

Color constancy is an extensively studied field of research, and many algorithms have been proposed [5]; most of them focus on static images. This section briefly reviews the methods most relevant to our work, namely those based on sampling video data over time and extracting light source data from it.

Single-illuminant estimation from video: To the best of our knowledge, there are only two works relating color constancy to video sequences with a single light source. In [6], color statistics of a video scene are presented to demonstrate the variance of color over time. In [7], frames of similar video sequences are averaged to estimate the scene chromaticity.

Multi-illuminant estimation from video: The only work found to be closely related to this work is that of Prinet et al. [8]. Prinet uses the information in the video sequence to recognize two illumination sources and estimate their chroma values. Prinet assumes that the scene has specularities, that the intensity of the light sources stays constant over time, and that it is evenly distributed over the entire image. She then uses information around the edges to estimate the chroma of the two lights in the scene.

III. LIGHT SOURCE SEPARATION

First, a video sequence of Z frames is captured. The image sequence of the scene has a mixture of N light sources. The objective is to separate this sequence into N images, each illuminated by a single light source.

Fig. 2. Sample graphs of on/off light amplitude (marked in blue) represent the theoretical intensity of a white patch with ideal exposure value. Real sample values of image pixels are marked in colors in the image and graph. (a) Proposed sampling method, in which light intensity changes sinusoidally and exposure values change several times to accurately sample the radiance values of the objects in the scene.
(b) General algorithm for improving white balance results on a mixed-light scene using light source separation, compared to a standard white balance algorithm.

The main contributions of this work are: 1) Introducing a new method for sampling oscillating light sources from video using a High Dynamic Range (HDR) technique, as seen in figure 2(a). 2) Using this sampling method, it is possible to improve WB performance, as seen in figure 2(b).

The rest of the paper is organized as follows: after a short review of state-of-the-art color constancy algorithms (Section II), we describe a new method for separating two light sources from a video sequence, assuming that at least one of them oscillates in time (Section III). Experiments and analysis are given in Section IV, and conclusions in Section V.

A. Naive light source separation

Having precise control over the intensity of the light sources in the scene, synchronized with the capturing device, enables light source separation (by turning on a single light source in each video frame). This gives N images, where each image has a single light source. The following procedure is performed for each controlled light: 1) turn on a single light source; 2) capture a frame.

The method assumes that for N light sources, each light can be turned off separately and precisely synchronized with the capturing device. This assumption is not always true; for example, scenes with a mixture of artificial light and natural sunlight cannot be fully controlled. In order to solve this problem, the following procedure must be followed: 1) Capture the initial frame with all the lights turned off (except the background light). 2) Capture Z = N − 1 frames, where each frame has sunlight and one other controlled light turned on. 3) Subtract the first frame from each of the Z = N − 1 frames to get N − 1 images with separated light sources.
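The three-step procedure above can be sketched as follows (a minimal sketch; the function name, array shapes and the NumPy dependency are illustrative assumptions, not part of the paper):

```python
import numpy as np

def naive_separation(background, mixed_frames):
    """Naive light source separation by frame subtraction.

    background   -- frame captured with all controlled lights off
                    (only the uncontrolled light, e.g. sunlight)
    mixed_frames -- Z = N - 1 frames, each captured with the background
                    light plus exactly one controlled light turned on
    Returns N - 1 images, each lit by a single controlled light source.
    """
    background = background.astype(np.float64)
    # Subtracting the background frame leaves only the controlled
    # light's contribution; clip small negatives caused by noise.
    return [np.clip(f.astype(np.float64) - background, 0.0, None)
            for f in mixed_frames]
```

This relies on the same linearity of the sensor that the rest of the paper assumes: the mixed frame is the sum of the individual light contributions.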
This naive light source separation assumes that out of N light sources, at least N − 1 can be precisely controlled and synchronized with the capturing device. In most systems this kind of high-precision control is not possible, and light sources are not synchronized to the camera. In these cases, the fact that

indoor lighting, connected to the power supply, usually changes its intensity over time (flickering) [9] can be used to our advantage. Assuming that out of the N light sources in the scene, N − 1 change their intensity over time in a linear and cyclic manner helps us perform the separation. In the next subsection this assumption is used to develop the main idea of the light source separation algorithm.

B. Separation of sinusoidally varying lights

A set of N basis functions B_n can be used to reconstruct any linear signal over time. An example basis function is

    B_n = sin(2π f_n t)                             (1)

Each light source is modulated by the basis:

    L_n = a_n B_n                                   (2)

where B_n is described in equation (1) and a_n is the amplitude coefficient of the basis signal. The composite signal of the light sources is described by:

    S = Σ_{n=1}^{N} L_n                             (3)

If the basis signals are orthogonal and normalized to 1, then

    ‖B_n‖ = 1                                       (4)
    ⟨B_n, B_s⟩ = 0 for n ≠ s                        (5)

Extracting the coefficient of each light source is done by

    â_n = ⟨S, B_n⟩                                  (6)

Thus, reconstructing the nth light source is achieved by

    L̂_n = â_n B_n                                   (7)

The intensity of a real light source Lr_n varies in the range 0 ≤ Lr_n ≤ 2a_n. Then,

    Lr_n = a_n + a_n B_n = a_n(1 + B_n)             (8)
    L̂r_n = â_n(1 + B_n)                             (9)

where Lr_n and L̂r_n represent the real and estimated light source values, respectively. In the case of N light sources of which only N − 1 have a cyclically varying intensity, the constant light source can be reconstructed by

    L̂c = S − Σ_{n=1}^{N−1} Lr_n                     (10)

where Lc represents a light source with a constant intensity over time.

Fig. 3. (a) A time-sample graph of 5 pixel values, where the clipping effect can be seen at point 1. (b) Corresponding image pixels, where the clipped pixel is marked by a red circle.

C. Sampling with HDR

The reconstruction method explained in the previous subsection assumed that the measured light intensities are linear.
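Under this linearity assumption, the coefficient extraction and reconstruction of equations (1)-(7) can be sketched for a single pixel as follows (a minimal sketch with synthetic signals; the frame count, frequencies and amplitudes are illustrative assumptions):

```python
import numpy as np

Z = 30                          # number of captured frames
t = np.arange(Z)
freqs = [3.0 / Z, 5.0 / Z]      # cycles per frame of the two oscillating lights

# Sinusoidal basis signals, eq. (1), normalized so that <B_n, B_n> = 1;
# full periods over the Z-frame window make the rows orthogonal, eq. (4)-(5)
B = np.stack([np.sin(2 * np.pi * f * t) for f in freqs])
B /= np.linalg.norm(B, axis=1, keepdims=True)

a = np.array([3.0, 5.0])            # true amplitude coefficients a_n
S = (a[:, None] * B).sum(axis=0)    # composite signal S, eq. (3)

a_hat = B @ S                       # coefficient extraction a_hat = <S, B_n>, eq. (6)
L_hat = a_hat[:, None] * B          # reconstructed light signals, eq. (7)
```

With orthonormal basis rows, `a_hat` recovers the true coefficients exactly; in a real capture the inner products are taken against each pixel's measured time series instead of the synthetic `S`.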
Object specularities and large intensity differences between light sources may introduce non-linearities, such as sensor noise in dark areas and clipping of values in very bright areas, as seen in figure 3. Removing these non-linearities is important in order to use the reconstruction method explained in the previous section.

HDR sampling of still images is a well-known method for enhancing the dynamic range of a camera sensor and has been extensively researched in the past decade [10]. Using HDR for video sequences [11] with Wide Dynamic Range (WDR) sensors [12] is still being investigated. To capture an HDR image using a standard sensor, one has to take several Low Dynamic Range (LDR) images of a static scene. Each LDR frame has different exposure/gain values of the sensor. The LDR images are then merged using normalization and outlier suppression, as explained in [10]. Generating an HDR image of changing light intensities, with no control over the rate of change of the light, with standard LDR sensors poses a sampling problem (figure 4(a)). This can be solved by capturing the LDR frames in a cyclic manner, as seen in figure 4(b). Real sampling values can be seen in figure 2(a).

D. Example application for improving WB and color constancy

By using the proposed light source separation it is possible to improve the results of most color balancing and WB algorithms. In this subsection a simple WB algorithm is

improved using light source separation on a scene with two light sources. A brief description of the procedure follows: 1) Separate the light sources from the mixed-light scene. 2) Perform WB on each extracted image, which has a single light source. 3) Linearly add all the output images from the previous stage to get the final WB image.

Fig. 4. (a) Example of the error that occurs when sampling changing intensities over time with an LDR sensor using 4 Exposure Values (EV). (b) Cyclic sampling, which produces accurate HDR images when light intensity changes over time.

IV. EXPERIMENTS

A. Prerequisites and equipment

As a proof of concept for this capturing method, a simple experiment was performed using a low-end camera sensor (1). This subsection may be skipped if a high-end camera with a frame rate above 260 FPS is used. Prerequisites:

1) Precise synchronization of camera and light sources was achieved using Matlab software, which controlled the light intensity of an LED projector and the camera properties through a USB connection. The Matlab software waited for the camera sensor to finish capturing the frame before performing the intensity change for the next time step.

2) Precise control of light intensity was achieved by setting ten different light intensities using the Matlab software, marked as I_n (I_1 = 25, I_2 = 50, ..., I_10 = 250), and measuring them with a light meter (2), marked as L_n (L_1, L_2, ..., L_10). A function connecting both values, L_n = a·I_n² + b, was estimated, and the values of a and b were found empirically.

(1) We used the Point Grey Chameleon color camera, 18 FPS.
(2) Lutron LX-101.

Fig. 5. (a) LED projectors used in the first experiment.
(b) First experiment diagram, where two oscillating lights Lr_1 and Lr_2 were added to S and decomposed using the proposed method to get L̂r_1 and L̂r_2.

3) Precise data capture from the sensor was achieved by using the raw data coming from the camera sensor. This helps avoid gamma correction and tone mapping in the sensor, which usually introduce non-linearities into the sensor measurements.

B. Experimental setup

The experimental setup can be seen in figures 5(a), 6(a) and 6(b). Objects were placed in front of a Lambertian white board, which can be seen on the left side of figure 6(b) and the right side of figure 6(a). Two experiments were performed:

1) Decomposing two oscillating light sources: In this experiment the white board was illuminated by two oscillating light sources inside a dark room (3). The two light sources were set to have extreme chromatic differences in order to emphasize the efficiency of the proposed separation method, as seen in figure 5. Thirty images were taken while the light sources were modulated by two sinusoidal waveforms.

2) Decomposing an oscillating light from sunlight: In this experiment several objects were placed in front of a Lambertian white board inside a room illuminated by a side window and a single oscillating light source, as seen in figure 6. The single light source was set to have spatial intensity differences across the image, and a slight chromatic shift to blue. Sunlight, coming from the side window, was captured at clear midday, and its intensity stayed constant throughout the entire video sequence. Thirty images were taken while the light source was modulated by one sinusoidal waveform.

V. ANALYSIS AND CONCLUSIONS

The experimental results were compared to reference images. Reference images are images that were captured with only one light source turned on.

(3) In this experiment sunlight was blocked using curtains.
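The white-balance pipeline of Section III-D (separate the lights, balance each single-light image, then add linearly) can be sketched as follows. Max-RGB is assumed here as the per-light WB step, matching the simple estimator used in the analysis below, though any other method could be substituted; the function names are illustrative:

```python
import numpy as np

def max_rgb_wb(img):
    """Max-RGB white balance: scale each channel so its maximum
    matches the brightest channel's maximum (white-patch assumption)."""
    img = img.astype(np.float64)
    peaks = img.reshape(-1, 3).max(axis=0)   # per-channel maxima
    return img * (peaks.max() / peaks)       # assumes non-zero peaks

def wb_with_separation(single_light_images):
    """Balance each separated single-light image, then combine linearly."""
    return sum(max_rgb_wb(im) for im in single_light_images)
```

Because each separated image is lit by a single source, max-RGB removes that source's tint before the linear recombination, instead of estimating one global illuminant for the mixed scene.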

Fig. 6. (a) A view from the window side of the experimental setup; the camera and light source can be seen on the left. (b) A view from the door side of the experimental setup; the camera and light source can be seen on the right.

Fig. 7. (a) Visual comparison of light 1 against the ground truth. (b) Visual comparison of the constant sunlight against the ground truth.

Results and analysis of separating two oscillating light sources: A visual comparison for the first experiment can be seen in figure 5(b). Reference images are on the upper part of the diagram, and resulting images are on the lower part of the diagram. The global colors of the filtered and reference lights are the same. A difference can be seen mainly in Lr_2, where a purple spot appears in the center of the filtered light. A careful look at the reference light shows the same spot, but with a less saturated color. This means that the filtered light has the same pattern as the reference light but is more saturated. This phenomenon should be further investigated.

Results and analysis of separating an oscillating light from sunlight: A visual comparison for the second experiment can be seen in figures 7(a) and 7(b). Small differences are detected in figure 7(b), where the grey background has a small tint difference (bluish in the reconstructed light and reddish in the reference image). The result of performing WB on the separated light sources and combining them into a single image using simple addition can be visually seen in figures 1(b) and 2(b). A simple max-RGB WB method was used, but any other state-of-the-art method may be used instead.

A novel method for separation of oscillating light sources was proposed and demonstrated. The use of high frame rate sensors will be investigated in future works.

REFERENCES

[1] K. E. van de Sande, T. Gevers, and C. G. Snoek, "Evaluating color descriptors for object and scene recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9.
[2] J.-J. Yu, H.-D. Kim, H.-W. Jang, and S.-W. Nam, "A hybrid color matching between stereo image sequences," in 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON). IEEE, 2011.
[3] M. Ebner, Color Constancy. John Wiley & Sons, 2007, vol. 6.
[4] R. R. Hagege, "Scene appearance model based on spatial prediction," Machine Vision and Applications, pp. 1-16.
[5] A. Gijsenij, T. Gevers, and J. van de Weijer, "Computational color constancy: Survey and experiments," IEEE Transactions on Image Processing, vol. 20, no. 9.
[6] J.-P. Renno, D. Makris, T. Ellis, and G. A. Jones, "Application and evaluation of colour constancy in visual surveillance," in 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance. IEEE, 2005.
[7] N. Wang, B. Funt, C. Lang, and D. Xu, "Video-based illumination estimation," in Computational Color Imaging. Springer, 2011.
[8] V. Prinet, D. Lischinski, and M. Werman, "Illuminant chromaticity from image sequences," December.
[9] D. Poplin, "An automatic flicker detection method for embedded camera systems," IEEE Transactions on Consumer Electronics, vol. 52, no. 2.
[10] E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann.
[11] G. Eilertsen, R. Wanat, R. K. Mantiuk, and J. Unger, "Evaluation of tone mapping operators for HDR-video," in Computer Graphics Forum, vol. 32, no. 7. Wiley Online Library, 2013.
[12] A. Spivak, A. Belenky, A. Fish, and O. Yadid-Pecht, "Wide-dynamic-range CMOS image sensors - comparative performance analysis," IEEE Transactions on Electron Devices, vol. 56, no. 11, 2009.


More information

Estimating the surface normal of artwork using a DLP projector

Estimating the surface normal of artwork using a DLP projector Estimating the surface normal of artwork using a DLP projector KOICHI TAKASE 1 AND ROY S. BERNS 2 1 TOPPAN Printing co., ltd. 2 Munsell Color Science Laboratory, Rochester Institute of Technology Summary:

More information

CONTENTS. Before You Start. Initial Operation. Prepare For Shooting. What's in the Box Camera Parts Display Icons

CONTENTS. Before You Start. Initial Operation. Prepare For Shooting. What's in the Box Camera Parts Display Icons CONTENTS Before You Start What's in the Box Camera Parts Display Icons Initial Operation Install microsd Card & Batteries Power On/ Off Power Saving Mode Set Date & Time Prepare For Shooting Change Capture

More information

3D Shape and Indirect Appearance By Structured Light Transport

3D Shape and Indirect Appearance By Structured Light Transport 3D Shape and Indirect Appearance By Structured Light Transport CVPR 2014 - Best paper honorable mention Matthew O Toole, John Mather, Kiriakos N. Kutulakos Department of Computer Science University of

More information

Agenda. Camera Selection Parameters Focal Length Field of View Iris Aperture Automatic Shutter Illumination Resolution S/N Ratio Image Sensor Lens

Agenda. Camera Selection Parameters Focal Length Field of View Iris Aperture Automatic Shutter Illumination Resolution S/N Ratio Image Sensor Lens HEARTY WELCOME Agenda Camera Selection Parameters Focal Length Field of View Iris Aperture Automatic Shutter Illumination Resolution S/N Ratio Image Sensor Lens Camera Features Backlight Compensation Wide

More information

Performance study on point target detection using super-resolution reconstruction

Performance study on point target detection using super-resolution reconstruction Performance study on point target detection using super-resolution reconstruction Judith Dijk a,adamw.m.vaneekeren ab, Klamer Schutte a Dirk-Jan J. de Lange a, Lucas J. van Vliet b a Electro Optics Group

More information

Analysis and extensions of the Frankle-McCann

Analysis and extensions of the Frankle-McCann Analysis and extensions of the Frankle-McCann Retinex algorithm Jounal of Electronic Image, vol.13(1), pp. 85-92, January. 2004 School of Electrical Engineering and Computer Science Kyungpook National

More information

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Topics to be Covered in the Rest of the Semester CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Charles Stewart Department of Computer Science Rensselaer Polytechnic

More information

IMAGE DE-NOISING IN WAVELET DOMAIN

IMAGE DE-NOISING IN WAVELET DOMAIN IMAGE DE-NOISING IN WAVELET DOMAIN Aaditya Verma a, Shrey Agarwal a a Department of Civil Engineering, Indian Institute of Technology, Kanpur, India - (aaditya, ashrey)@iitk.ac.in KEY WORDS: Wavelets,

More information

Highlight detection with application to sweet pepper localization

Highlight detection with application to sweet pepper localization Ref: C0168 Highlight detection with application to sweet pepper localization Rotem Mairon and Ohad Ben-Shahar, the interdisciplinary Computational Vision Laboratory (icvl), Computer Science Dept., Ben-Gurion

More information

Active Stereo Vision. COMP 4900D Winter 2012 Gerhard Roth

Active Stereo Vision. COMP 4900D Winter 2012 Gerhard Roth Active Stereo Vision COMP 4900D Winter 2012 Gerhard Roth Why active sensors? Project our own texture using light (usually laser) This simplifies correspondence problem (much easier) Pluses Can handle different

More information

Data-driven Depth Inference from a Single Still Image

Data-driven Depth Inference from a Single Still Image Data-driven Depth Inference from a Single Still Image Kyunghee Kim Computer Science Department Stanford University kyunghee.kim@stanford.edu Abstract Given an indoor image, how to recover its depth information

More information

Robust color segmentation algorithms in illumination variation conditions

Robust color segmentation algorithms in illumination variation conditions 286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,

More information

DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song

DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN Gengjian Xue, Jun Sun, Li Song Institute of Image Communication and Information Processing, Shanghai Jiao

More information

Fractional Discrimination for Texture Image Segmentation

Fractional Discrimination for Texture Image Segmentation Fractional Discrimination for Texture Image Segmentation Author You, Jia, Sattar, Abdul Published 1997 Conference Title IEEE 1997 International Conference on Image Processing, Proceedings of the DOI https://doi.org/10.1109/icip.1997.647743

More information

Motivation. Intensity Levels

Motivation. Intensity Levels Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding

More information

A Novel Video Enhancement Based on Color Consistency and Piecewise Tone Mapping

A Novel Video Enhancement Based on Color Consistency and Piecewise Tone Mapping A Novel Video Enhancement Based on Color Consistency and Piecewise Tone Mapping Keerthi Rajan *1, A. Bhanu Chandar *2 M.Tech Student Department of ECE, K.B.R. Engineering College, Pagidipalli, Nalgonda,

More information

A Background Subtraction Based Video Object Detecting and Tracking Method

A Background Subtraction Based Video Object Detecting and Tracking Method A Background Subtraction Based Video Object Detecting and Tracking Method horng@kmit.edu.tw Abstract A new method for detecting and tracking mo tion objects in video image sequences based on the background

More information

Image registration for agricultural sensing tasks

Image registration for agricultural sensing tasks Ref: C0xxx Image registration for agricultural sensing tasks Berenstein Ron, Ben-Gurion University of the Negev, Beer Sheva, Israel, berensti@bgu.ac.il Ben-Shahar Ohad, Ben-Gurion University of the Negev,

More information

HAZE REMOVAL WITHOUT TRANSMISSION MAP REFINEMENT BASED ON DUAL DARK CHANNELS

HAZE REMOVAL WITHOUT TRANSMISSION MAP REFINEMENT BASED ON DUAL DARK CHANNELS HAZE REMOVAL WITHOUT TRANSMISSION MAP REFINEMENT BASED ON DUAL DARK CHANNELS CHENG-HSIUNG HSIEH, YU-SHENG LIN, CHIH-HUI CHANG Department of Computer Science and Information Engineering Chaoyang University

More information

Starting this chapter

Starting this chapter Computer Vision 5. Source, Shadow, Shading Department of Computer Engineering Jin-Ho Choi 05, April, 2012. 1/40 Starting this chapter The basic radiometric properties of various light sources Develop models

More information

Color Content Based Image Classification

Color Content Based Image Classification Color Content Based Image Classification Szabolcs Sergyán Budapest Tech sergyan.szabolcs@nik.bmf.hu Abstract: In content based image retrieval systems the most efficient and simple searches are the color

More information

SINGLE IMAGE FOG REMOVAL BASED ON FUSION STRATEGY

SINGLE IMAGE FOG REMOVAL BASED ON FUSION STRATEGY SINGLE IMAGE FOG REMOVAL BASED ON FUSION STRATEGY ABSTRACT V. Thulasika and A. Ramanan Department of Computer Science, Faculty of Science, University of Jaffna, Sri Lanka v.thula.sika@gmail.com, a.ramanan@jfn.ac.lk

More information

Motivation. Gray Levels

Motivation. Gray Levels Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding

More information

A Statistical Approach to Culture Colors Distribution in Video Sensors Angela D Angelo, Jean-Luc Dugelay

A Statistical Approach to Culture Colors Distribution in Video Sensors Angela D Angelo, Jean-Luc Dugelay A Statistical Approach to Culture Colors Distribution in Video Sensors Angela D Angelo, Jean-Luc Dugelay VPQM 2010, Scottsdale, Arizona, U.S.A, January 13-15 Outline Introduction Proposed approach Colors

More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Detecting and Tracking a Moving Object in a Dynamic Background using Color-Based Optical Flow

Detecting and Tracking a Moving Object in a Dynamic Background using Color-Based Optical Flow www.ijarcet.org 1758 International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Detecting and Tracking a Moving Object in a Dynamic Background using Color-Based Optical Flow

More information

Physics-based Vision: an Introduction

Physics-based Vision: an Introduction Physics-based Vision: an Introduction Robby Tan ANU/NICTA (Vision Science, Technology and Applications) PhD from The University of Tokyo, 2004 1 What is Physics-based? An approach that is principally concerned

More information

Foreground Detection Robust Against Cast Shadows in Outdoor Daytime Environment

Foreground Detection Robust Against Cast Shadows in Outdoor Daytime Environment Foreground Detection Robust Against Cast Shadows in Outdoor Daytime Environment Akari Sato (), Masato Toda, and Masato Tsukada Information and Media Processing Laboratories, NEC Corporation, Tokyo, Japan

More information

Optic Flow and Basics Towards Horn-Schunck 1

Optic Flow and Basics Towards Horn-Schunck 1 Optic Flow and Basics Towards Horn-Schunck 1 Lecture 7 See Section 4.1 and Beginning of 4.2 in Reinhard Klette: Concise Computer Vision Springer-Verlag, London, 2014 1 See last slide for copyright information.

More information

DIGITAL MICROSCOPY CAMERAS

DIGITAL MICROSCOPY CAMERAS DIGITAL MICROSCOPY CAMERAS ACCU-SCOPE and UNITRON digital microscopy cameras are specifically engineered for low light, color critical and high-speed applications for clinical, life science, material science

More information

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images Karthik Ram K.V & Mahantesh K Department of Electronics and Communication Engineering, SJB Institute of Technology, Bangalore,

More information

Color Appearance in Image Displays. O Canada!

Color Appearance in Image Displays. O Canada! Color Appearance in Image Displays Mark D. Fairchild RIT Munsell Color Science Laboratory ISCC/CIE Expert Symposium 75 Years of the CIE Standard Colorimetric Observer Ottawa 26 O Canada Image Colorimetry

More information

A Novel Algorithm for Color Image matching using Wavelet-SIFT

A Novel Algorithm for Color Image matching using Wavelet-SIFT International Journal of Scientific and Research Publications, Volume 5, Issue 1, January 2015 1 A Novel Algorithm for Color Image matching using Wavelet-SIFT Mupuri Prasanth Babu *, P. Ravi Shankar **

More information

Specularity Removal using Dark Channel Prior *

Specularity Removal using Dark Channel Prior * JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 29, 835-849 (2013) Specularity Removal using Dark Channel Prior * School of Information Science and Engineering Central South University Changsha, 410083

More information

Xtreme Starlight Camera

Xtreme Starlight Camera Professional for Professionals! High Definition Xtreme Starlight Camera BENEFITS Can you see COLOR in the dark at 30fps? Ikegami can! Our newly developed 2/3 format CMOS sensor comes to be a big differentiator

More information

Real Time Motion Detection Using Background Subtraction Method and Frame Difference

Real Time Motion Detection Using Background Subtraction Method and Frame Difference Real Time Motion Detection Using Background Subtraction Method and Frame Difference Lavanya M P PG Scholar, Department of ECE, Channabasaveshwara Institute of Technology, Gubbi, Tumkur Abstract: In today

More information

Color Correction for Projected Image on Colored-screen Based on a Camera

Color Correction for Projected Image on Colored-screen Based on a Camera Color Correction for Projected Image on Colored-screen Based on a Camera Dae-Chul Kim a, Tae-Hyoung Lee a, Myong-Hui Choi b, and Yeong-Ho Ha* a a School of Electronics Engineering, Kyungpook Natl. Univ.,

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

Multimedia Technology CHAPTER 4. Video and Animation

Multimedia Technology CHAPTER 4. Video and Animation CHAPTER 4 Video and Animation - Both video and animation give us a sense of motion. They exploit some properties of human eye s ability of viewing pictures. - Motion video is the element of multimedia

More information

A threshold decision of the object image by using the smart tag

A threshold decision of the object image by using the smart tag A threshold decision of the object image by using the smart tag Chang-Jun Im, Jin-Young Kim, Kwan Young Joung, Ho-Gil Lee Sensing & Perception Research Group Korea Institute of Industrial Technology (

More information

Detecting motion by means of 2D and 3D information

Detecting motion by means of 2D and 3D information Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent

More information

Low Cost Motion Capture

Low Cost Motion Capture Low Cost Motion Capture R. Budiman M. Bennamoun D.Q. Huynh School of Computer Science and Software Engineering The University of Western Australia Crawley WA 6009 AUSTRALIA Email: budimr01@tartarus.uwa.edu.au,

More information

Effects Of Shadow On Canny Edge Detection through a camera

Effects Of Shadow On Canny Edge Detection through a camera 1523 Effects Of Shadow On Canny Edge Detection through a camera Srajit Mehrotra Shadow causes errors in computer vision as it is difficult to detect objects that are under the influence of shadows. Shadow

More information

Miniaturized Camera Systems for Microfactories

Miniaturized Camera Systems for Microfactories Miniaturized Camera Systems for Microfactories Timo Prusi, Petri Rokka, and Reijo Tuokko Tampere University of Technology, Department of Production Engineering, Korkeakoulunkatu 6, 33720 Tampere, Finland

More information

ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination

ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall 2008 October 29, 2008 Notes: Midterm Examination This is a closed book and closed notes examination. Please be precise and to the point.

More information

Lecture 22: Basic Image Formation CAP 5415

Lecture 22: Basic Image Formation CAP 5415 Lecture 22: Basic Image Formation CAP 5415 Today We've talked about the geometry of scenes and how that affects the image We haven't talked about light yet Today, we will talk about image formation and

More information

Other approaches to obtaining 3D structure

Other approaches to obtaining 3D structure Other approaches to obtaining 3D structure Active stereo with structured light Project structured light patterns onto the object simplifies the correspondence problem Allows us to use only one camera camera

More information

A Feature Point Matching Based Approach for Video Objects Segmentation

A Feature Point Matching Based Approach for Video Objects Segmentation A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer

More information

Perceptual Effects in Real-time Tone Mapping

Perceptual Effects in Real-time Tone Mapping Perceptual Effects in Real-time Tone Mapping G. Krawczyk K. Myszkowski H.-P. Seidel Max-Planck-Institute für Informatik Saarbrücken, Germany SCCG 2005 High Dynamic Range (HDR) HDR Imaging Display of HDR

More information

2 Depth Camera Assessment

2 Depth Camera Assessment 2 Depth Camera Assessment The driving question of this chapter is how competitive cheap consumer depth cameras, namely the Microsoft Kinect and the SoftKinetic DepthSense, are compared to state-of-the-art

More information

CS4442/9542b Artificial Intelligence II prof. Olga Veksler

CS4442/9542b Artificial Intelligence II prof. Olga Veksler CS4442/9542b Artificial Intelligence II prof. Olga Veksler Lecture 2 Computer Vision Introduction, Filtering Some slides from: D. Jacobs, D. Lowe, S. Seitz, A.Efros, X. Li, R. Fergus, J. Hayes, S. Lazebnik,

More information

A&E Specifications Rev AV p Full HD WDR H.264 Day/Night IP MegaBall Dome. Wall Mount and 4mm Lens. camera with 4mm Lens

A&E Specifications Rev AV p Full HD WDR H.264 Day/Night IP MegaBall Dome. Wall Mount and 4mm Lens. camera with 4mm Lens AV2146DN-04-W AV2146DN-04-D AV2146DN-04-D-LG AV2146DN-3310-W AV2146DN-3310-D AV2146DN-3310-D-LG 1080p Full HD WDR H.264 Day/Night IP MegaBall camera with Wall Mount and 4mm Lens camera with 4mm Lens camera

More information

An Automatic Timestamp Replanting Algorithm for Panorama Video Surveillance *

An Automatic Timestamp Replanting Algorithm for Panorama Video Surveillance * An Automatic Timestamp Replanting Algorithm for Panorama Video Surveillance * Xinguo Yu, Wu Song, Jun Cheng, Bo Qiu, and Bin He National Engineering Research Center for E-Learning, Central China Normal

More information