CSCI 1290: Comp Photo
1 CSCI 1290: Comp Photo, Fall 2018 @ Brown University. James Tompkin. Many slides thanks to James Hays' old CS 129 course, along with all of its acknowledgements.
2 Feedback from Project 0 MATLAB: Live Scripts != Scripts (*.mlx vs. *.m). Live Scripts are like Jupyter notebooks. Live Scripts seem to take a really long time to compute.
3 Project 1 code How are people getting along? What issues are you facing? (Tell me now.)
4 High Dynamic Range Images Slides from Paul Debevec and Alexei A. Efros
5 Problem: Dynamic Range The real world is high dynamic range: relative luminances span from around 1 up to 2,000,000,000.
6 Long Exposure Real world: 10^-6 to 10^6 (high dynamic range) → Picture: 0 to 255
7 Short Exposure Real world: 10^-6 to 10^6 (high dynamic range) → Picture: 0 to 255
8 Image pixel (312, 284) = how many photons?
9 Camera Calibration Geometric How pixel coordinates relate to directions in the world Photometric How pixel values relate to radiance amounts in the world
10 Camera Calibration Geometric How pixel coordinates relate to directions in the world in other images. Photometric How pixel values relate to radiance amounts in the world in other images.
11 The Image Acquisition Pipeline Scene radiance (W/sr/m²) → Lens. Radiance: radiant flux emitted, reflected, transmitted, or received by a given surface. This is a directional quantity, in watts per steradian per square meter. (Often called intensity, which is confusing.)
12 The Image Acquisition Pipeline Lens → Shutter: Scene radiance (W/sr/m²) → Sensor irradiance (W/m²) → Sensor exposure (irradiance × Δt). Irradiance: radiant flux (power) received by a surface per unit area. Not directional (collected at the surface). (Often also called intensity. Great; also confusing.)
13 The Image Acquisition Pipeline Lens → Shutter → CCD → ADC → Remapping: Scene radiance (W/sr/m²) → Sensor irradiance (W/m²) → Sensor exposure (× Δt) → Analog voltages → Digital values (Raw image) → Pixel values (JPEG image)
14 Varying Exposure
15 Camera is not a photometer! Limited dynamic range: perhaps use multiple exposures? Unknown, nonlinear response: not possible to convert pixel values to radiance directly. Solution: recover the response curve from multiple exposures, then reconstruct the radiance map.
16 Key observation: Camera is not a photometer! Scene is static, and while we might not know the absolute value of exposure at each pixel, we know that the scene radiance remains constant across the image sequence.
17 Recovering High Dynamic Range Radiance Maps from Photographs Paul Debevec Jitendra Malik Computer Science Division University of California at Berkeley August 1997
18 Ways to vary exposure Shutter Speed (*) F/stop (aperture, iris) Neutral Density (ND) Filters
19 Shutter Speed Ranges: Canon D30: 30 to 1/4,000 sec. Sony VX2000: ¼ to 1/10,000 sec. Pros: Directly varies the exposure Usually accurate and repeatable Issues: Noise in long exposures
20 The Algorithm Image series: Δt = 1/64 sec, 1/16 sec, 1/4 sec, 1 sec, 4 sec. Pixel value Z = f(exposure); Exposure = Radiance × Δt; log Exposure = log Radiance + log Δt.
21 Imaging system response function: plot of pixel value (0 to 255) against log exposure = log(Radiance × Δt) (CCD photon count).
22 The Algorithm Image series: Δt = 1/64 sec, 1/16 sec, 1/4 sec, 1 sec, 4 sec. Pixel value Z = f(exposure); Exposure = Radiance × Δt; log Exposure = log Radiance + log Δt. Plot of pixel value vs. ln exposure, assuming unit radiance for each pixel.
23 Response Curve Plots of pixel value vs. ln exposure: assuming unit radiance for each pixel, and after adjusting radiances to obtain a smooth response curve.
24 The Math Let g(z) be the discrete inverse response function. For each pixel site i in each image j, we want: ln Radiance_i + ln Δt_j = g(Z_ij). Solve the overdetermined linear system (N = # pixels, P = # images): minimize Σ_{i=1}^{N} Σ_{j=1}^{P} [ln Radiance_i + ln Δt_j − g(Z_ij)]² (the fitting term).
25 The Math Let g(z) be the discrete inverse response function. For each pixel site i in each image j, we want: ln Radiance_i + ln Δt_j = g(Z_ij). Solve the overdetermined linear system (N = # pixels, P = # images): minimize Σ_{i=1}^{N} Σ_{j=1}^{P} [ln Radiance_i + ln Δt_j − g(Z_ij)]² + λ Σ_{z=Z_min}^{Z_max} g″(z)², where the first sum is the fitting term, the second is the smoothness term, g″ is the second derivative, and λ sets the amount of smoothing.
26 Results: Digital Camera Kodak DCS460, 1/30 to 30 sec. Recovered response curve (pixel value vs. log exposure).
27 Reconstructed radiance map
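The radiance map on this slide is reconstructed by combining every exposure's estimate of each pixel, weighted by trustworthiness: ln E_i = Σ_j w(Z_ij)(g(Z_ij) − ln Δt_j) / Σ_j w(Z_ij) (this is Debevec and Malik's Equation 6). A minimal NumPy sketch — the course itself uses MATLAB, and `hat_weight` / `reconstruct_ln_radiance` are illustrative names:

```python
import numpy as np

def hat_weight(z, z_min=0.0, z_max=255.0):
    # Triangle ("hat") weighting: trust mid-range pixel values the most.
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0.5 * (z_min + z_max), z - z_min, z_max - z)

def reconstruct_ln_radiance(Z, ln_dt, g):
    """Z: (P, H, W) integer image stack; ln_dt: (P,) log exposure times;
    g: (256,) recovered inverse response. Returns ln E, the log radiance map."""
    w = hat_weight(Z)
    num = (w * (g[Z] - ln_dt[:, None, None])).sum(axis=0)  # weighted estimates
    den = w.sum(axis=0)
    return num / np.maximum(den, 1e-8)  # guard against all-clipped pixels
```

Each exposure gives an independent estimate g(Z_ij) − ln Δt_j of the same ln E_i; the hat weights down-weight clipped or noisy values at the ends of the pixel range.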
28 Results: Color Film Kodak Gold ASA 100, PhotoCD
29 Recovered Response Curves Red Green Blue RGB
30 The Radiance Map
31 How do we store this?
32 Portable FloatMap (.pfm) 12 bytes per pixel, 4 for each channel (sign, exponent, mantissa). Text header similar to Jeff Poskanzer's .ppm image format: PF, then <binary image data>. Floating-point TIFF is similar.
33 Radiance Format (.pic, .hdr) 32 bits/pixel: Red, Green, and Blue mantissas sharing one Exponent byte. (145, 215, 87, 149) = (145, 215, 87) × 2^(149−136); (145, 215, 87, 103) = (145, 215, 87) × 2^(103−136). Ward, Greg. "Real Pixels," in Graphics Gems II, edited by James Arvo, Academic Press, 1991
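A decode sketch for the shared-exponent encoding above; `rgbe_to_float` is an illustrative name, and the 2^(E−136) scale (i.e., mantissa/256 × 2^(E−128)) follows the convention of Ward's format:

```python
import math

def rgbe_to_float(r, g, b, e):
    # Shared-exponent decode: each 8-bit mantissa is scaled by 2^(e - 136),
    # equivalently (mantissa / 256) * 2^(e - 128). e == 0 encodes black.
    if e == 0:
        return (0.0, 0.0, 0.0)
    scale = math.ldexp(1.0, e - 136)
    return (r * scale, g * scale, b * scale)
```

The shared exponent is why the format is compact: one byte of exponent covers a huge dynamic range while three bytes of mantissa keep relative color precision.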
34 ILM's OpenEXR (.exr) 6 bytes per pixel, 2 for each channel, compressed: 1 sign bit, 5-bit exponent, 10-bit mantissa. Several lossless compression options, 2:1 typical. Compatible with the half datatype in NVIDIA's Cg. Supported natively on GeForce FX and Quadro FX.
35 Now What?
36 Tone Mapping How can we do this? Real world / ray-traced world (radiance): high dynamic range, 10^-6 to 10^6 → Display/printer: 0 to 255. Linear scaling? Thresholding? Suggestions?
37 The Radiance Map Linearly scaled to display device
38 Simple Global Operator Compression curve needs to Bring everything within range Leave dark areas alone In other words Asymptote at 255 Derivative of 1 at 0
39 Global Operator (Reinhard et al.) L_display = L_world / (1 + L_world)
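As a sketch, the operator above can be applied to an intensity channel with the color ratios reattached afterwards — a hedged NumPy version (`reinhard_global` is an illustrative name, and the intensity estimate here is a simple channel mean):

```python
import numpy as np

def reinhard_global(hdr):
    """hdr: (H, W, 3) linear radiance. Compress luminance with L / (1 + L),
    keeping chrominance ratios fixed."""
    L = hdr.mean(axis=2, keepdims=True)       # quick intensity estimate
    L_d = L / (1.0 + L)                       # asymptote at 1, slope 1 at 0
    return hdr * (L_d / np.maximum(L, 1e-8))  # rescale colors by L_d / L
```

Note the curve satisfies the two requirements from the previous slide: it asymptotes (here at 1, which a display maps to 255) and has derivative 1 at 0, so dark areas are left alone.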
40 Global Operator Results
41 Reinhard Operator Darkest 0.1% scaled to display device
42 What do we see? Vs.
43 Issues with multi-exposure HDR Scene and camera need to be static Camera sensors are getting better and better Display devices are fairly limited anyway (although getting better).
44 Recovering HDR radiance maps for the project Additional slides not shown in class [James Hays]
45 Debevec and Malik 1997 Image series: Δt = 1/64, 1/16, 1/4, 1, 4 sec. Pixel value Z = f(exposure) = f(Radiance × Δt). f⁻¹(Z) = Radiance × Δt. g(Z) = ln(Radiance) + ln(Δt), where g = ln ∘ f⁻¹.
46 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). For each pixel site i in each image j, we want: g(Z_ij) = ln E_i + ln Δt_j (Equation 2 in Debevec). g() is the unknown discrete inverse response function. Z_ij is the known (but noisy) pixel value from 0 to 255 for location i in image j. E_i is the unknown radiance at location i. Δt_j is the known exposure time, e.g. 1/16, 1/125, etc., for image j.
47 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). Rearranging: g(Z_ij) − ln E_i = ln Δt_j. One equation per pixel site per image: g(Z_11) − ln E_1 = ln Δt_1; g(Z_12) − ln E_1 = ln Δt_2; g(Z_13) − ln E_1 = ln Δt_3; g(Z_21) − ln E_2 = ln Δt_1; g(Z_22) − ln E_2 = ln Δt_2; g(Z_23) − ln E_2 = ln Δt_3; g(Z_31) − ln E_3 = ln Δt_1; g(Z_32) − ln E_3 = ln Δt_2; g(Z_33) − ln E_3 = ln Δt_3. g(Z_ij) is discrete: 256 unknowns, one per pixel value.
48 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). With the sampled pixel values filled in: g_8 − ln E_1 = ln Δt_1; g_25 − ln E_1 = ln Δt_2; g_68 − ln E_1 = ln Δt_3; g_13 − ln E_2 = ln Δt_1; g_42 − ln E_2 = ln Δt_2; g_117 − ln E_2 = ln Δt_3; g_51 − ln E_3 = ln Δt_1; g_183 − ln E_3 = ln Δt_2; g_254 − ln E_3 = ln Δt_3. g() has 256 unknowns, one per pixel value.
49 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). With the known exposure times filled in (ln 1/64 ≈ −4.16, ln 1/16 ≈ −2.77, ln 1/4 ≈ −1.39): g_8 − ln E_1 = −4.16; g_25 − ln E_1 = −2.77; g_68 − ln E_1 = −1.39; g_13 − ln E_2 = −4.16; g_42 − ln E_2 = −2.77; g_117 − ln E_2 = −1.39; g_51 − ln E_3 = −4.16; g_183 − ln E_3 = −2.77; g_254 − ln E_3 = −1.39. g() has 256 unknowns; we expect g() to vary smoothly.
50 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). g(Z_ij) − ln E_i = ln Δt_j, with 256 unknowns in g(). We expect g() to vary smoothly, so add constraints on the second derivative of g(): (g(x) − g(x−1)) − (g(x+1) − g(x)) = 2g(x) − g(x−1) − g(x+1) = 0.
51 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). Data equations (num_samples × num_images of them): g_8 − ln E_1 = −4.16; g_25 − ln E_1 = −2.77; g_68 − ln E_1 = −1.39; g_13 − ln E_2 = −4.16; g_42 − ln E_2 = −2.77; g_117 − ln E_2 = −1.39; g_51 − ln E_3 = −4.16; g_183 − ln E_3 = −2.77; g_254 − ln E_3 = −1.39. Smoothness equations: 2g_2 − g_1 − g_3 = 0; 2g_3 − g_2 − g_4 = 0; …; 2g_254 − g_253 − g_255 = 0 (254 equations). Plus g_128 = 0 (1 equation) to lock down one point in g().
52 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). We can't trust all of these pixel values equally (values near 0 and 255 are clipped or noisy), so weight each equation by its trustworthiness before solving.
53 Image Sequence (Δt_j = 1/64, 1/16, 1/4 sec). Each equation weighted by trustworthiness: w_8 g_8 − w_8 ln E_1 = w_8 (−4.16); w_25 g_25 − w_25 ln E_1 = w_25 (−2.77); w_68 g_68 − w_68 ln E_1 = w_68 (−1.39); w_13 g_13 − w_13 ln E_2 = w_13 (−4.16); w_42 g_42 − w_42 ln E_2 = w_42 (−2.77); w_117 g_117 − w_117 ln E_2 = w_117 (−1.39); w_51 g_51 − w_51 ln E_3 = w_51 (−4.16); w_183 g_183 − w_183 ln E_3 = w_183 (−2.77); w_254 g_254 − w_254 ln E_3 = w_254 (−1.39). Weighted smoothness: 2w_2 g_2 − w_2 g_1 − w_2 g_3 = 0; 2w_3 g_3 − w_3 g_2 − w_3 g_4 = 0; …; 2w_254 g_254 − w_254 g_253 − w_254 g_255 = 0. Plus g_128 = 0.
54 The weighted data equations (num_samples × num_images of them), the 254 weighted smoothness equations, and g_128 = 0 (1 equation) stack into one linear system. How do we convert this to Ax = b form for MATLAB? A has one row per equation and one column per unknown: the 256 values of g(), plus ln E_i for each of the num_samples pixel locations. Each row has only a few non-zero entries, so A is sparse. Solve with x = A\b.
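A NumPy sketch of the whole solve — assembling A and b and least-squares solving them, as x = A\b does in MATLAB. Hedged: `gsolve` is an illustrative name, and this version uses a uniform smoothness weight λ rather than Debevec's w(z)-weighted smoothness term:

```python
import numpy as np

def gsolve(Z, ln_dt, lam=10.0):
    """Z: (num_pixels, num_images) sampled values in 0..255; ln_dt:
    (num_images,) log exposure times. Returns (g, ln_E)."""
    n, p = Z.shape
    weight = lambda z: float(np.minimum(z, 255 - z) + 1)  # hat weighting (+1 avoids zero rows)
    rows = n * p + 1 + 254
    A = np.zeros((rows, 256 + n))
    b = np.zeros(rows)
    k = 0
    for i in range(n):                     # one data equation per sample per image
        for j in range(p):
            z = int(Z[i, j])
            wij = weight(z)
            A[k, z] = wij                  # +g(Z_ij)
            A[k, 256 + i] = -wij           # -ln E_i
            b[k] = wij * ln_dt[j]          # = ln dt_j
            k += 1
    A[k, 128] = 1.0                        # lock down g(128) = 0
    k += 1
    for z in range(1, 255):                # smoothness: 2g(z) - g(z-1) - g(z+1) = 0
        A[k, z - 1], A[k, z], A[k, z + 1] = lam, -2.0 * lam, lam
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:256], x[256:]
```

On consistent synthetic data (a log-linear response) this recovers g and the log radiances exactly, since the overdetermined system then has a zero-residual solution.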
55 High Dynamic Range as a look
56 High-dynamic range on smartphones Apple, iPhone X/XS/XR: "We have a brand new feature we call Smart HDR." Always-on bracketed exposures: even when you aren't taking pictures, the camera saves to a buffer with varying exposures, then uses that data to do HDR.
57 Google Pixel phones
58
59
60 But often a global operator isn't sufficient! I lose detail. WHAT ABOUT LOCAL TONE MAPPING?
61 Local tone mapping / bilateral filter Flickr user mapgoblin
62
63 Contributions Contrast reduction for HDR images Local tone mapping Preserves details fewer halos than naïve filtering Fast with optimized implementation of bilateral filter (otherwise sort of slow)
64 High-dynamic-range (HDR) images CG Images Multiple exposure photo [Debevec & Malik 1997] Recover response curve HDR value for each pixel HDR sensors
65 Contrast reduction Match the limited contrast of the medium while preserving details. Real world: 10^-6 to 10^6 (high dynamic range) → Picture: low contrast.
66 A typical photo Sun is overexposed Foreground is underexposed
67 Gamma (γ) compression: X → X^γ. Colors are washed out. (Input vs. gamma.)
68 Gamma compression on intensity Colors are OK, but details (intensity high-frequency) are blurred Intensity Gamma on intensity Color
69 Quick and dirty intensity / color separation Compute the intensity (I) by averaging the color channels. Compute the chrominance channels: (R/I, G/I, B/I)
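As code, the quick-and-dirty separation might look like this (a sketch; `split_intensity_color` is an illustrative name):

```python
import numpy as np

def split_intensity_color(rgb):
    # Intensity: average of the color channels.
    # Chrominance: per-channel ratios (R/I, G/I, B/I).
    I = rgb.mean(axis=2, keepdims=True)
    chroma = rgb / np.maximum(I, 1e-8)  # guard against black pixels
    return I, chroma
```

Multiplying I by the chrominance channels recovers the input exactly, so any compression applied to I alone leaves the colors intact.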
70 Gamma compression on intensity Colors are OK, but details (intensity high-frequency) are blurred Intensity Gamma on intensity Color
71 Chiu et al Reduce contrast of low-frequencies Keep high frequencies Low-freq. Reduce low frequency High-freq. Color
72 Quick low/high frequency computation Blur the intensity to get the low frequencies. Subtract the low frequencies from the original to get the high frequencies.
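A sketch of those two steps, using a simple box blur for the low frequencies (illustrative; any blur works here):

```python
import numpy as np

def split_frequencies(I, radius=2):
    # Low frequencies: box-blur the intensity (with edge padding).
    # High frequencies: the residual, so low + high == I exactly.
    k = 2 * radius + 1
    pad = np.pad(I, radius, mode='edge')
    low = np.zeros_like(I, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += pad[dy:dy + I.shape[0], dx:dx + I.shape[1]]
    low /= k * k
    return low, I - low
```

Because high = I − low by construction, the decomposition is lossless: recombining the two layers (after compressing only the low frequencies) is just an addition.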
73 The halo nightmare For strong edges Because they contain high frequency Low-freq. Reduce low frequency High-freq. Color
74 Our approach Do not blur across edges Non-linear filtering Large-scale Output Detail Color
75 Start with Gaussian filtering Here, input is a step function + noise. J = f ⊗ I (output = filter ⊗ input)
76 Start with Gaussian filtering Spatial Gaussian f. J = f ⊗ I
77 Start with Gaussian filtering Output is blurred. J = f ⊗ I
78 Gaussian filter as weighted average Weight of x′ depends on its distance to x: J(x) = Σ_{x′} f(x, x′) I(x′)
79 The problem of edges Here, I(x′) pollutes our estimate J(x); it is too different across the edge. J(x) = Σ_{x′} f(x, x′) I(x′)
80 Principle of Bilateral filtering [Tomasi and Manduchi 1998] Penalty g on the intensity difference: J(x) = (1/k(x)) Σ_{x′} f(x, x′) g(I(x′) − I(x)) I(x′)
81 Bilateral filtering [Tomasi and Manduchi 1998] Spatial Gaussian f: J(x) = (1/k(x)) Σ_{x′} f(x, x′) g(I(x′) − I(x)) I(x′)
82 Bilateral filtering [Tomasi and Manduchi 1998] Spatial Gaussian f, Gaussian g on the intensity difference: J(x) = (1/k(x)) Σ_{x′} f(x, x′) g(I(x′) − I(x)) I(x′)
83 Normalization factor [Tomasi and Manduchi 1998] k(x) = Σ_{x′} f(x, x′) g(I(x′) − I(x)); J(x) = (1/k(x)) Σ_{x′} f(x, x′) g(I(x′) − I(x)) I(x′)
84 Bilateral filtering is non-linear [Tomasi and Manduchi 1998] The weights are different for each output pixel: J(x) = (1/k(x)) Σ_{x′} f(x, x′) g(I(x′) − I(x)) I(x′)
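The full filter above, written brute-force (a sketch for clarity, not the paper's fast implementation; `bilateral_filter` is an illustrative name):

```python
import numpy as np

def bilateral_filter(I, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Brute-force bilateral filter on a 2D array I."""
    H, W = I.shape
    out = np.zeros_like(I, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    f = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial Gaussian f
    pad = np.pad(I, radius, mode='edge')
    for y in range(H):
        for x in range(W):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g = np.exp(-(patch - I[y, x])**2 / (2 * sigma_r**2))  # range Gaussian g
            wts = f * g
            out[y, x] = (wts * patch).sum() / wts.sum()  # k(x) normalization
    return out
```

On a step edge with a small sigma_r, pixels on the far side of the edge get near-zero range weight, so each side is smoothed without mixing across the edge — exactly the non-linearity the slide points out.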
85 Handling uncertainty Sometimes, not enough similar pixels Happens for specular highlights Can be detected using normalization k(x) Simple fix (average with output of neighbors) Weights with high uncertainty Uncertainty
86 Contrast reduction Input HDR image Contrast too high!
87 Contrast reduction Input HDR image Intensity Color
88 Contrast reduction Input HDR image Intensity Large scale Fast Bilateral Filter Color
89 Contrast reduction Input HDR image Intensity Large scale Fast Bilateral Filter Detail Color
90 Contrast reduction Input HDR image Intensity Large scale Reduce contrast Large scale Fast Bilateral Filter Detail Color
91 Contrast reduction Input HDR image Intensity Large scale Reduce contrast Large scale Fast Bilateral Filter Detail Preserve! Detail Color
92 Contrast reduction Input HDR image Output Intensity Large scale Reduce contrast Large scale Fast Bilateral Filter Detail Preserve! Detail Color Color
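The complete pipeline from these slides, sketched in NumPy. Hedged heavily: for brevity the base layer here uses a box blur where the paper uses the bilateral filter (so this stand-in would show halos at strong edges), and `tone_map` / `target_contrast` are illustrative names:

```python
import numpy as np

def box_blur(a, radius):
    # Simple box blur with edge padding (stand-in for the bilateral filter).
    k = 2 * radius + 1
    pad = np.pad(a, radius, mode='edge')
    out = np.zeros_like(a, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def tone_map(hdr, target_contrast=5.0, eps=1e-8):
    I = hdr.mean(axis=2) + eps                  # intensity
    chroma = hdr / I[..., None]                 # color ratios, reattached at the end
    log_I = np.log10(I)
    base = box_blur(log_I, radius=8)            # large-scale layer (paper: bilateral)
    detail = log_I - base                       # detail layer, preserved untouched
    scale = np.log10(target_contrast) / (base.max() - base.min() + eps)
    new_log_I = base * scale + detail           # compress only the base contrast
    new_log_I -= new_log_I.max()                # brightest point maps to 1.0
    return chroma * (10.0 ** new_log_I)[..., None]
```

Working in the log domain makes contrast reduction a multiplication of the base layer; the detail layer is added back unscaled, which is the "Preserve!" arrow on the slide.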
93 Informal comparison Gradient-space [Fattal et al.] Bilateral [Durand et al.] Photographic [Reinhard et al.]
94 Informal comparison Gradient-space [Fattal et al.] Bilateral [Durand et al.] Photographic [Reinhard et al.]
95 Project 2 Recover high dynamic range radiance map from multiple uncalibrated images with known exposure time. Compress high dynamic range image into low dynamic range visualization using bilateral filter based decomposition. Solving linear systems! You might need to brush up.
96 What about avoiding radiance map reconstruction entirely? Exposure Fusion [Mertens, 2007] Not shown in class; for interest / project extra credit.
97 Exposure Fusion Tom Mertens¹, Jan Kautz², Frank Van Reeth¹. ¹EDM, Hasselt University; ²University College London, UK
98 Examples: underexposed, overexposed, and both.
99 High Dynamic Range photography Multi-exposure sequence
100 HDR Photography HDR assembly: LDRs HDR [Mann and Picard, IS&T 95] [Debevec and Malik, SIGGRAPH 97] Tone mapping: HDR LDR [Durand and Dorsey, SIGGRAPH 02] [Reinhard et al., SIGGRAPH 02] [Fattal et al., SIGGRAPH 02] [Li et al., SIGGRAPH 05] [Lischinski et al., SIGGRAPH 06] Exposure Fusion: LDRs LDR
101 Multi-exposure Response curve calibration Assembly Tone mapping bilateral gradient display High Dynamic Range photography
102 Multi-exposure Response curve calibration Assembly Tone mapping bilateral gradient display High Dynamic Range photography Multi-exposure Exposure fusion display
103 Idea: fuse multi-exposure sequence
104 Image Fusion Applications Extended depth-of-field [Ogden et al. 85] [Agarwala et al., SIGGRAPH 04] Multi-sensor photography [Burt et al. 93] Non-photorealistic video [Raskar et al., NPAR 04] Extended DOF Exposure Fusion Manual (Photoshop) Image pyramid [Burt et al. 93] Multisensor: IR + visible
105 [Ogden et al. 85, Burt et al. 93] Pyramid Fusion
106 Pyramid Fusion multi-scale edge filter abs max reconstruct Pyramid decomposition [Ogden et al. 85, Burt et al. 93]
107 Pyramid Fusion Exposure fusion
108 Challenges: 1. Good weights. 2. Good blending. (Multi-exposure sequence × weight maps.)
109
110 well-exposedness contrast saturation
111 Quality measures: well-exposedness weight = f(intensity), where f peaks at mid-range intensities and falls to 0 near 0 and 1; saturation weight = st_dev(R, G, B); contrast weight = h * image, where h is an edge (Laplacian) filter.
112 well-exposedness contrast saturation
113 Exposures Weights Naïve blending Multi-resolution blending
114 Multi-resolution Blending [Burt and Adelson, ACM TOG 83]
115 Multi-Resolution Blending Each exposure × its weight map.
116 Multi-Resolution Blending Laplacian pyramid of each exposure × Gaussian pyramid of its weight map; sum per level, then reconstruct. [Burt and Adelson, ACM TOG 83]
117 intensity weight blended Laplacian pyramid
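A grayscale sketch of the blending scheme: Laplacian pyramids of the inputs, Gaussian pyramids of the (pre-normalized) weights, blend per level, collapse. Hedged: the down/upsampling here is crude 2×2 averaging and nearest-neighbor duplication rather than Burt-Adelson's 5-tap kernel, and dimensions must stay divisible by 2 at each level:

```python
import numpy as np

def down2(a):
    # Crude pyramid downsample: 2x2 block average.
    H, W = a.shape
    return a.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def up2(a):
    # Crude upsample: nearest-neighbor duplication.
    return np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)

def gaussian_pyramid(a, levels):
    pyr = [a]
    for _ in range(levels - 1):
        pyr.append(down2(pyr[-1]))
    return pyr

def laplacian_pyramid(a, levels):
    g = gaussian_pyramid(a, levels)
    return [g[i] - up2(g[i + 1]) for i in range(levels - 1)] + [g[-1]]

def collapse(lp):
    out = lp[-1]
    for level in reversed(lp[:-1]):
        out = up2(out) + level     # telescoping sum reconstructs exactly
    return out

def pyramid_blend(images, weights, levels=3):
    """Blend Laplacian pyramids of the images using Gaussian pyramids of the
    per-pixel weight maps, then reconstruct."""
    lps = [laplacian_pyramid(im, levels) for im in images]
    gps = [gaussian_pyramid(w, levels) for w in weights]
    blended = [sum(gps[k][lvl] * lps[k][lvl] for k in range(len(images)))
               for lvl in range(levels)]
    return collapse(blended)
```

Blending weights at coarse scales is what avoids the seams that naïve per-pixel blending produces: each frequency band is mixed at a spatial scale matched to its features.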
118 Results
119
120
121 Middle exposure Exposure fusion
122 Mean image Exposure fusion
123 Well-exposedness contrast + saturation
124 Pyramid fusion [Ogden85; Burt93] Exposure fusion Single exposure
125 [Durand and Dorsey 02] [Reinhard et al. 02] [Li et al. 05] Exposure Fusion
126 Flash/No Flash Ambient Flash Exposure Fusion Also see: [Eisemann and Durand 04, Petschnigg et al. 04, Agrawal et al. 05]
127 Conclusion Simple, fast, and flexible: pixel selection via perceptual measures (contrast, saturation, well-exposedness) + multi-resolution blending. Few parameters to tweak. Not physically based. Future work: more measures, more applications, real-time GPU implementation for video.
128 Acknowledgments Photos: Min Kim Jesse Levinson Jacques Joffre Amit Agrawal European Regional Development Fund Matlab code
129
130 - Additional slides -
131 Extended DOF Graphcuts [Agrawala 04] Exposure Fusion
132 IR visible Exposure fusion
133 Middle exposure Exposure fusion
134 Mean Image Exposure fusion
135
136 Maui Sunset -2 stops +2 stops Mean image Exposure fusion
137 Maui Sunset -2 stops +2 stops Middle exposure Exposure fusion
138 Mean image Middle image Exposure fusion
139 - End -
More informationProf. Feng Liu. Spring /17/2017. With slides by F. Durand, Y.Y. Chuang, R. Raskar, and C.
Prof. Feng Liu Spring 2017 http://www.cs.pdx.edu/~fliu/courses/cs510/ 05/17/2017 With slides by F. Durand, Y.Y. Chuang, R. Raskar, and C. Rother Last Time Image segmentation Normalized cut and segmentation
More informationEEM 463 Introduction to Image Processing. Week 3: Intensity Transformations
EEM 463 Introduction to Image Processing Week 3: Intensity Transformations Fall 2013 Instructor: Hatice Çınar Akakın, Ph.D. haticecinarakakin@anadolu.edu.tr Anadolu University Enhancement Domains Spatial
More informationVGP352 Week 8. Agenda: Post-processing effects. Filter kernels Separable filters Depth of field HDR. 2-March-2010
VGP352 Week 8 Agenda: Post-processing effects Filter kernels Separable filters Depth of field HDR Filter Kernels Can represent our filter operation as a sum of products over a region of pixels Each pixel
More informationEECS 556 Image Processing W 09. Image enhancement. Smoothing and noise removal Sharpening filters
EECS 556 Image Processing W 09 Image enhancement Smoothing and noise removal Sharpening filters What is image processing? Image processing is the application of 2D signal processing methods to images Image
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More informationTemplates, Image Pyramids, and Filter Banks
Templates, Image Pyramids, and Filter Banks Computer Vision James Hays, Brown Slides: Hoiem and others Reminder Project due Friday Fourier Bases Teases away fast vs. slow changes in the image. This change
More informationVivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT.
Vivekananda Collegee of Engineering & Technology Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT Dept. Prepared by Harivinod N Assistant Professor, of Computer Science and Engineering,
More informationIMAGE DE-NOISING IN WAVELET DOMAIN
IMAGE DE-NOISING IN WAVELET DOMAIN Aaditya Verma a, Shrey Agarwal a a Department of Civil Engineering, Indian Institute of Technology, Kanpur, India - (aaditya, ashrey)@iitk.ac.in KEY WORDS: Wavelets,
More informationComputer Graphics (CS 543) Lecture 10: Normal Maps, Parametrization, Tone Mapping
Computer Graphics (CS 543) Lecture 10: Normal Maps, Parametrization, Tone Mapping Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Normal Mapping Store normals in texture
More informationRadiometry and reflectance
Radiometry and reflectance http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 16 Course announcements Homework 4 is still ongoing - Any questions?
More informationGlobal Illumination. CMPT 361 Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller
Global Illumination CMPT 361 Introduction to Computer Graphics Torsten Möller Reading Foley, van Dam (better): Chapter 16.7-13 Angel: Chapter 5.11, 11.1-11.5 2 Limitation of local illumination A concrete
More informationCS4495 Fall 2014 Computer Vision Problem Set 5: Optic Flow
CS4495 Fall 2014 Computer Vision Problem Set 5: Optic Flow DUE: Wednesday November 12-11:55pm In class we discussed optic flow as the problem of computing a dense flow field where a flow field is a vector
More informationSingle-view 3D Reconstruction
Single-view 3D Reconstruction 10/12/17 Computational Photography Derek Hoiem, University of Illinois Some slides from Alyosha Efros, Steve Seitz Notes about Project 4 (Image-based Lighting) You can work
More informationPrevious Lecture - Coded aperture photography
Previous Lecture - Coded aperture photography Depth from a single image based on the amount of blur Estimate the amount of blur using and recover a sharp image by deconvolution with a sparse gradient prior.
More informationComputer Vision I - Algorithms and Applications: Image Formation Process Part 2
Computer Vision I - Algorithms and Applications: Image Formation Process Part 2 Carsten Rother 17/11/2013 Computer Vision I: Image Formation Process Computer Vision I: Image Formation Process 17/11/2013
More informationCHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN
CHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN CHAPTER 3: IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN Principal objective: to process an image so that the result is more suitable than the original image
More informationProfessional. Technical Guide N-Log Recording
Professional Technical Guide N-Log Recording En Table of Contents N Log: A Primer... 3 Why Use N Log?...4 Filming N Log Footage... 6 Using Camera Controls...10 View Assist...11 Ensuring Consistent Exposure...12
More informationRadiometric Calibration from a Single Image
Radiometric Calibration from a Single Image Stephen Lin Jinwei Gu Shuntaro Yamazaki Heung-Yeung Shum Microsoft Research Asia Tsinghua University University of Tokyo Abstract Photometric methods in computer
More informationSupplemental Document for Deep Photo Style Transfer
Supplemental Document for Deep Photo Style Transfer Fujun Luan Cornell University Sylvain Paris Adobe Eli Shechtman Adobe Kavita Bala Cornell University fujun@cs.cornell.edu sparis@adobe.com elishe@adobe.com
More informationMotivation. Intensity Levels
Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding
More informationDigital Image Processing ERRATA. Wilhelm Burger Mark J. Burge. An algorithmic introduction using Java. Second Edition. Springer
Wilhelm Burger Mark J. Burge Digital Image Processing An algorithmic introduction using Java Second Edition ERRATA Springer Berlin Heidelberg NewYork Hong Kong London Milano Paris Tokyo 12.1 RGB Color
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent
More informationImage Compositing and Blending
Computational Photography and Capture: Image Compositing and Blending Gabriel Brostow & Tim Weyrich TA: Frederic Besse Vignetting 3 Figure from http://www.vanwalree.com/optics/vignetting.html Radial Distortion
More informationRay Tracing Assignment. Ray Tracing Assignment. Ray Tracing Assignment. Tone Reproduction. Checkpoint 7. So You Want to Write a Ray Tracer
Ray Tracing Assignment So You Want to Write a Ray Tracer Goal is to reproduce the following Checkpoint 7 Whitted, 1980 Ray Tracing Assignment Seven checkpoints 1. Setting the Scene 2. Camera Modeling 3.
More informationElectromagnetic waves and power spectrum. Rays. Rays. CS348B Lecture 4 Pat Hanrahan, Spring 2002
Page 1 The Light Field Electromagnetic waves and power spectrum 1 10 10 4 10 6 10 8 10 10 10 1 10 14 10 16 10 18 10 0 10 10 4 10 6 Power Heat Radio Ultra- X-Rays Gamma Cosmic Infra- Red Violet Rays Rays
More informationBuxton & District U3A Digital Photography Beginners Group Lesson 6: Understanding Exposure. 19 November 2013
U3A Group Lesson 6: Understanding Exposure 19 November 2013 Programme Buxton & District 19 September Exploring your camera 1 October You ve taken some pictures now what? (Viewing pictures; filing on your
More informationHigh Dynamic Range Reduction Via Maximization of Image Information
High Dynamic Range Reduction Via Maximization of Image Information A. Ardeshir Goshtasby Abstract An algorithm for blending multiple-exposure images of a scene into an image with maximum information content
More informationRender all data necessary into textures Process textures to calculate final image
Screenspace Effects Introduction General idea: Render all data necessary into textures Process textures to calculate final image Achievable Effects: Glow/Bloom Depth of field Distortions High dynamic range
More informationStorage Efficient NL-Means Burst Denoising for Programmable Cameras
Storage Efficient NL-Means Burst Denoising for Programmable Cameras Brendan Duncan Stanford University brendand@stanford.edu Miroslav Kukla Stanford University mkukla@stanford.edu Abstract An effective
More informationFundamentals of Photography presented by Keith Bauer.
Fundamentals of Photography presented by Keith Bauer kcbauer@juno.com http://keithbauer.smugmug.com Homework Assignment Composition Class will be February 7, 2012 Please provide 2 images by next Tuesday,
More informationLow-level Vision Processing Algorithms Speaker: Ito, Dang Supporter: Ishii, Toyama and Y. Murakami
Low-level Vision Processing Algorithms Speaker: Ito, Dang Supporter: Ishii, Toyama and Y. Murakami Adaptive Systems Lab The University of Aizu Overview Introduction What is Vision Processing? Basic Knowledge
More informationWe ll go over a few simple tips for digital photographers.
Jim West We ll go over a few simple tips for digital photographers. We ll spend a fair amount of time learning the basics of photography and how to use your camera beyond the basic full automatic mode.
More informationComputational Imaging for Self-Driving Vehicles
CVPR 2018 Computational Imaging for Self-Driving Vehicles Jan Kautz--------Ramesh Raskar--------Achuta Kadambi--------Guy Satat Computational Imaging for Self-Driving Vehicles Jan Kautz--------Ramesh Raskar--------Achuta
More informationSolution: filter the image, then subsample F 1 F 2. subsample blur subsample. blur
Pyramids Gaussian pre-filtering Solution: filter the image, then subsample blur F 0 subsample blur subsample * F 0 H F 1 F 1 * H F 2 { Gaussian pyramid blur F 0 subsample blur subsample * F 0 H F 1 F 1
More informationObject Recognition with Invariant Features
Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user
More informationImage Formation: Light and Shading. Introduction to Computer Vision CSE 152 Lecture 3
Image Formation: Light and Shading CSE 152 Lecture 3 Announcements Homework 1 is due Apr 11, 11:59 PM Homework 2 will be assigned on Apr 11 Reading: Chapter 2: Light and Shading Geometric image formation
More informationLight transport matrices
Light transport matrices http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 18 Course announcements Homework 5 has been posted. - Due on Friday
More informationImage Pyramids and Applications
Image Pyramids and Applications Computer Vision Jia-Bin Huang, Virginia Tech Golconda, René Magritte, 1953 Administrative stuffs HW 1 will be posted tonight, due 11:59 PM Sept 25 Anonymous feedback Previous
More informationIntroduction to Computer Vision. Week 8, Fall 2010 Instructor: Prof. Ko Nishino
Introduction to Computer Vision Week 8, Fall 2010 Instructor: Prof. Ko Nishino Midterm Project 2 without radial distortion correction with radial distortion correction Light Light Light! How do you recover
More informationComputer Vision I - Basics of Image Processing Part 1
Computer Vision I - Basics of Image Processing Part 1 Carsten Rother 28/10/2014 Computer Vision I: Basics of Image Processing Link to lectures Computer Vision I: Basics of Image Processing 28/10/2014 2
More informationCell-Sensitive Microscopy Imaging for Cell Image Segmentation
Cell-Sensitive Microscopy Imaging for Cell Image Segmentation Zhaozheng Yin 1, Hang Su 2, Elmer Ker 3, Mingzhong Li 1, and Haohan Li 1 1 Department of Computer Science, Missouri University of Science and
More informationSpecifications CAMEDIA C-8080 WIDE ZOOM
Specifications CAMEDIA C-8080 WIDE ZOOM Model Type CAMEDIA C-8080 WIDE ZOOM Digital camera with 4.5 cm/1.8 inch sunshine colour TFT LCD monitor. Image sensor Image sensor 2/3 inch CCD solid-state image
More information