The CAFADIS camera: a new tomographic wavefront sensor for Adaptive Optics.


1st AO4ELT conference, 05011 (2010) DOI:10.1051/ao4elt/201005011
Owned by the authors, published by EDP Sciences, 2010

The CAFADIS camera: a new tomographic wavefront sensor for Adaptive Optics

J.M. Rodríguez-Ramos 1,a, B. Femenía 2, I. Montilla 2, L.F. Rodríguez-Ramos 2, J.G. Marichal-Hernández 1, J.P. Lüke 1, R. López 2, J.J. Díaz 2, and Y. Martín 2

1 Universidad de La Laguna, Tenerife 38207, Canary Islands, Spain
2 Instituto de Astrofísica de Canarias

Abstract. The CAFADIS camera is a new wavefront sensor (WFS) patented by the Universidad de La Laguna. CAFADIS is based on the concept of the plenoptic camera originally proposed by Adelson and Wang [1], and its most salient feature is its ability to measure wavefront maps and distances to objects simultaneously [2]. This makes CAFADIS an interesting alternative for LGS-based AO systems: from an LGS beacon it can measure both the atmospheric turbulence wavefront and the distance to the beacon, removing the need for an NGS defocus sensor to track changes in the distance to the LGS beacon caused by drifts of the mesospheric Na layer. In principle, the concept can also be employed to recover 3D profiles of the Na layer, allowing the measurement of the distance to the LGS beacon to be optimized. We are currently investigating the possibility of extending the plenoptic WFS into a tomographic wavefront sensor. We show simulations of a plenoptic WFS operated within an LGS-based AO system for the recovery of wavefront maps at different heights. The preliminary results presented here demonstrate the tomographic ability of CAFADIS.

1 Introduction

The plenoptic wavefront sensor, also known as the plenoptic camera or the CAFADIS camera, was originally created to capture the Light Field (LF) [3], a concept extremely useful in computer graphics, where most optics are treated exclusively within the geometric paradigm.
Basically, the Light Field is a four-variable volume representation of all rays and their directions; it therefore allows the image seen from virtually any point of view to be created by synthesis. The search for such a description goes back a long way: as early as the beginning of the twentieth century, F. Ives (1903) described a system with an array of pinholes at the focal plane, and the Nobel prize laureate Lippmann (1908) described the first Light Field capturing device [4][5]. The modern light field description was stated by Adelson and Wang (1992) [1], and a significant step forward was made by Ren Ng (2005) [6], who built a working system and established a four-dimensional Fourier view of the imaging process. Recently, Georgiev (2009) [7] has introduced a number of diffraction considerations and defined the plenoptic camera 2.0.

The use of the plenoptic (CAFADIS) camera for real-time 3D cameras has been patented at the University of La Laguna (Spain) by some of the authors [8], exploiting the capability of refocusing at several distances and selecting the best focus as the distance to the object. The use of plenoptic optics for wavefront measurement was described by Clare and Lane (2005) [9] for the case of point sources. In this paper we describe the use of the plenoptic camera as a wavefront sensor, especially well suited to computing the tomography of the atmospheric turbulence and the height variations of the LGS beacons. After a description of the sensor and of the simulations, the results of simulation tests are presented, which clearly show the viability of the system for tomography of the atmospheric turbulence.

a e-mail: jmramos@ull.es

This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial License, which permits unrestricted use, distribution, and reproduction in any noncommercial medium, provided the original work is properly cited.
Article published by EDP Sciences and available at http://ao4elt.edpsciences.org or http://dx.doi.org/10.1051/ao4elt/201005011
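The introduction mentions that the patented 3D camera refocuses at several distances and takes the best-focused distance as the distance to the object. As a minimal illustration of that principle only, here is a shift-and-add refocusing sketch; the array layout, function names, and the gradient-energy focus metric are assumptions for the example, not details taken from the paper:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lf    : array of shape (U, V, Y, X) -- one (Y, X) sub-aperture
            image per angular coordinate (u, v).
    alpha : refocus parameter; each sub-aperture image is shifted in
            proportion to its angular offset before the views are summed.
    """
    U, V, Y, X = lf.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

def best_focus(lf, alphas):
    """Pick the refocus parameter that maximizes image sharpness
    (gradient energy) -- a stand-in for a depth estimate."""
    def sharpness(img):
        gy, gx = np.gradient(img)
        return np.sum(gy**2 + gx**2)
    return max(alphas, key=lambda a: sharpness(refocus(lf, a)))
```

Sweeping `alpha` and keeping the sharpest result per region is one common way such "best focus equals distance" schemes are realized.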

First conference on Adaptive Optics for Extremely Large Telescopes

Fig. 1. Outline of the plenoptic camera used as wavefront sensor. A microlens array is located at the telescope focus, instead of in a pupil plane as required for the Shack-Hartmann (SH) sensor. If the f-numbers of the telescope and the microlenses are the same, the detector is filled with pupil images of maximum size without overlapping.

2 Description of the plenoptic wavefront sensor

A microlens array with the same f-number as the telescope is placed at its focus (Figure 1), so that many pupil images are obtained at the detector, each of them representing a slightly different point of view. Using the same f-number guarantees that each pupil image is as large as possible without overlapping its neighbors, thus making optimum use of the detector surface. An example of a plenoptic image obtained at the Vacuum Tower Telescope (VTT, Canary Islands, Spain) is shown in Figure 2, depicting an array of 15x19 microlenses imaged on a 1024x1280 uEye CMOS camera. The spider supporting the secondary mirror can be easily identified, together with some vignetting at the lower left side caused by the telescope configuration at the moment of the data acquisition.

In order to understand the behavior of the plenoptic wavefront sensor, it is important to identify the direct relationship between every pupil point and its corresponding image under each microlens. As shown in Figure 3, every pupil coordinate is imaged through each microlens in such a way that all rays passing through the indicated square arrive at one of the pupil images, depending only on the angle of arrival.

This means that the image that would be obtained if the pupil were restricted to this small square can be reconstructed by post-processing the plenoptic image: selecting the value of the corresponding coordinate at every microlens and building an image from all of them. This is best understood by looking at Figure 4, where the plenoptic image and the reordered plenoptic image are shown. In fact, for depth extraction the plenoptic sensor acts as a multiview stereo system, whereas for wavefront phase extraction it can be understood as a global sensor containing as extreme cases the Shack-Hartmann sensor and the pyramid sensor (a 2x2 microlens array at the telescope focus). If diffraction considerations are introduced in addition to geometric optics, as stated by Clare and Lane [9], the image obtained by this post-processing is really the convolution of the geometric image with the transfer function of the microlens, which basically implies a low-pass blurring equivalent to the well-known diffraction-limiting effects found when imaging through a finite aperture.
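The reordering step just described (pick the same pixel coordinate under every microlens and assemble those samples into one image) amounts to a pure array rearrangement. A minimal sketch, assuming a regular, gap-free microlens grid and hypothetical function names:

```python
import numpy as np

def reorder_plenoptic(frame, n_y, n_x, p):
    """Reorder a raw plenoptic frame into sub-pupil views.

    frame : raw sensor image of shape (n_y * p, n_x * p), i.e. an
            n_y x n_x grid of microlens (pupil) images of p x p pixels each.
    Returns an array of shape (p, p, n_y, n_x): views[i, j] is the image
    built from pixel (i, j) under every microlens -- the image the
    telescope would give if its pupil were restricted to coordinate (i, j).
    """
    # split the frame into microlens blocks: (n_y, p, n_x, p)
    blocks = frame.reshape(n_y, p, n_x, p)
    # move the intra-microlens (pupil) coordinates to the front
    return blocks.transpose(1, 3, 0, 2)
```

Each `views[i, j]` is one of the reordered images of Figure 4; comparing their relative displacements is what the following sections build on.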

J.M. Rodríguez-Ramos et al.: The CAFADIS camera, a new tomographic wavefront sensor for AO

Fig. 2. Example of a plenoptic image of the Sun on a plenoptic camera of 19 x 15 microlenses, using a detector of 1280 x 1024 pixels at the VTT telescope (Observatorio del Teide, Canary Islands, Spain).

Fig. 3. Correspondence between pupil and pupil images. Every pupil coordinate is re-imaged at the corresponding position of each pupil image, depending on the arrival angle of the incoming ray.

Fig. 4. Left: plenoptic image. Center: reordered plenoptic image. Right: detail. The plenoptic camera can be understood as a multiview stereo system (plenoptic data extracted from Ren Ng [6]).

With this scheme, both pupil and image planes are sampled simultaneously, and later processing can extract relevant information from the volume above the telescope. The simplest case is the wavefront at the pupil, which can be extracted from the relative displacements of the reordered images from every pupil coordinate.
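Those relative displacements can be estimated, for instance, by locating the peak of the cross-correlation between a reordered view and a reference view. The paper itself uses the Clare and Lane slope estimates; the following integer-pixel FFT cross-correlation sketch (assumed names, generic method) is only an illustration of the displacement measurement:

```python
import numpy as np

def image_shift(ref, img):
    """Integer-pixel shift of `img` relative to `ref`, from the peak of
    their FFT-based cross-correlation (wrap-around geometry assumed)."""
    xcorr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # map wrap-around peak indices to signed shifts
    return tuple(k - n if k > n // 2 else k for k, n in zip(peak, xcorr.shape))
```

In a real sensor the displacements are sub-pixel, so the correlation peak would be interpolated; the integer version keeps the idea visible.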

Fig. 5. An LGS beacon system comprising 5 LGS beacons is considered. Six different turbulent layers are generated using CAOS. The cumulative wavefront phases at the pupil plane, from each point of view, are calculated. The plenoptic frame is simulated including all this information [10].

height (m)    Cn2 (%)
0             23.0
477           29.6
1500          24.6
4838.5        11.0
14924          5.0
17445          0.4

Table 1. Atmospheric turbulence distribution and heights.

3 Simulation and results

In order to demonstrate the tomographic capability of the plenoptic sensor, a simulation has been developed. An LGS-based AO system comprising five LGS beacons, placed at 90 km height, is considered (Figure 5). Six different turbulent layers are generated using CAOS [10]; the turbulence distribution and heights (Table 1) are taken from Femenía (2003) [11]. An 8 m diameter telescope is assumed. The microlens array is conjugated to the LGS Na layer (90 km height), and the detector is placed at the microlens focal plane. Every subpupil is sampled by 32x32 pixels, which will also be the recovered phase resolution. The artificial stars are separated by 35 arcseconds from the optical axis.

The cumulative phases at the telescope pupil are calculated from every point of view (the five LGS beacons). The plenoptic image, containing data from the five artificial laser guide stars in the same frame, is used to recover the cumulative wavefront phases at the telescope pupil. The plenoptic frame associated with the on-axis LGS is shown in Figure 6. The phase gradients at the pupil plane are calculated from the plenoptic frame using the partial derivatives of the wavefront aberration estimated in Clare and Lane (2005) [9]. The cumulative phases are recovered using an expansion over complex exponentials, allowing the use of 2D FFTs. No natural guide star is considered. Each of the original cumulative phases is depicted in the left column of Figure 7.

The adequately restored phases shown in the right column of Figure 7 demonstrate the tomographic ability of the CAFADIS sensor.
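The recovery step above, expanding the phase over complex exponentials so that 2D FFTs can be used, is in the spirit of the standard periodic least-squares integrator: in the Fourier domain the derivative operators become multiplications by i·k, so the phase spectrum follows algebraically from the gradient spectra. The sketch below shows that generic integrator under periodic boundary conditions; it is not the authors' exact estimator, whose details are not given in the paper:

```python
import numpy as np

def integrate_gradients(gy, gx):
    """Least-squares phase from gradient maps (gy, gx), via an expansion
    over complex exponentials: phi_hat = (conj(ky)*gy_hat +
    conj(kx)*gx_hat) / (|ky|^2 + |kx|^2), with ky, kx the (imaginary)
    spectral derivative operators. Periodic boundaries; piston is lost."""
    ny, nx = gy.shape
    ky = 2j * np.pi * np.fft.fftfreq(ny)[:, None]
    kx = 2j * np.pi * np.fft.fftfreq(nx)[None, :]
    denom = np.abs(ky) ** 2 + np.abs(kx) ** 2
    denom[0, 0] = 1.0            # avoid division by zero at the piston mode
    num = np.conj(ky) * np.fft.fft2(gy) + np.conj(kx) * np.fft.fft2(gx)
    phi_hat = num / denom
    phi_hat[0, 0] = 0.0          # piston set to zero by convention
    return np.fft.ifft2(phi_hat).real
```

For pupil-sized phase maps (32x32 per subpupil here) this costs only a few FFTs per frame, which is why the complex-exponential expansion is attractive for real-time use.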

Fig. 6. Section of the plenoptic frame showing the on-axis LGS. The remaining, off-axis LGS present a similar aspect.

4 Discussion and future work

The final phase resolution depends on the number of pixels sampling each microlens, and the depth resolution depends on the same quantity. This implies that increasing the phase resolution also increases the height resolution. In one extreme case, the pyramid sensor (a 2x2 microlens array), both phase and height resolution are maximized (in fact, depth information can also be extracted from the pyramid sensor). Of course, in a system with five LGS beacons, several pyramid sensors would have to be used, conjugated to every beacon or to every turbulent layer. In the other extreme case, the Shack-Hartmann wavefront sensor, the phase resolution depends on the number of subpupils, and the height resolution is minimized (or even lost).

A compromise is possible: a single plenoptic sensor with 6x6 subpupils, each sampled by 84x84 pixels, would be enough to obtain phases with 84x84 pixel resolution using only one detector of 504x504 pixels. Alternatively, in order to avoid contamination on the detector from the neighboring LGS images, the plenoptic image could be sampled by 12x12 subpupils; in this case a 1008x1008 detector is needed. But only one! Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) are already capable of managing such a quantity of data in real time. The most crucial limitations are probably the detector readout speed (detectors must be designed with several parallel outputs) and the fact that the astronomical community is still somewhat reluctant to migrate to FPGAs or GPUs. Of course, a suitable optical arrangement must first be designed in order to place the different LGS images on the same plenoptic detector.
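The detector bookkeeping above (subpupils times pixels per subpupil) is simple but easy to get wrong when trading one against the other; a hypothetical one-line helper just to check the quoted numbers:

```python
def detector_side(n_subpupils, pixels_per_subpupil):
    """Linear detector size in pixels for a plenoptic WFS with
    n_subpupils microlenses across the pupil, each subpupil image
    sampled by pixels_per_subpupil pixels."""
    return n_subpupils * pixels_per_subpupil
```

With 6x6 subpupils of 84x84 pixels this gives a 504x504 detector, and with 12x12 subpupils a 1008x1008 detector, matching the figures in the text.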
Future work will include a detailed analysis of the recovered wavefront phases, simulation and restoration of 3D artificial guide stars, and tomographic wavefront phase restoration from solar observations.

Acknowledgments. This work has been funded by the Programa Nacional I+D+i (Project DPI 2006-07906) of the Ministerio de Educación y Ciencia, and by the European Regional Development Fund (ERDF).

Fig. 7. Rows, top to bottom: from the on-axis LGS and from the off-axis LGS (1)-(4). Left column: original wavefront phases. Right column: recovered wavefront phases. Tilt has been removed in all the frames.

References

1. T. Adelson, J. Wang, Single lens stereo with a plenoptic camera, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992)
2. J. Rodríguez-Ramos et al., Wavefront and distance measurements using the CAFADIS camera, in Astronomical Telescopes, Marseille (2008)
3. R. Ng, Fourier Slice Photography, Stanford University (2005)
4. M.G. Lippmann, Épreuves réversibles donnant la sensation du relief, J. de Phys. 4 VII (1908)
5. G. Lippmann, La photographie intégrale, Académie des Sciences 164 (1908)
6. R. Ng et al., Light field photography with a hand-held plenoptic camera, Technical Report CSTR 2005-02 (Stanford Computer Science, 2005)
7. T. Georgiev, A. Lumsdaine, Superresolution with Plenoptic Camera 2.0, Adobe Tech Report (2009)
8. J. Rodríguez-Ramos et al., ES200800126 ES200600210, International Patent (2006)
9. R.M. Clare, R.G. Lane, Wave-front sensing from subdivision of the focal plane with a lenslet array, J. Opt. Soc. Am. A 22 (2005)
10. M. Carbillet et al., Modelling astronomical adaptive optics I. The software package CAOS, Astronomy and Astrophysics 356 (2005)
11. B. Femenía, N. Devaney, Astronomy and Astrophysics 404 (2003)