Five Dimensional Interpolation: exploring different Fourier operators


Daniel Trad, CREWES, University of Calgary

Summary

Five-dimensional interpolation has become a very popular method to precondition data for migration. Many different implementations have been developed in the last decade, most of them sharing similar dataflows and principles. In this abstract, I explore three different ways to implement the mapping between data and model in the context of Fourier transforms: the multidimensional fast Fourier transform, the discrete Fourier transform, and the non-equidistant fast Fourier transform. I incorporate the three operators inside the same inversion algorithm so that a fair comparison can be made between them.

Introduction

Most implementations of 5D interpolation work by calculating a transform by inversion with a sparseness constraint and then using this transform to map the data onto a new seismic geometry (Liu and Sacchi, 2004). Applying the sparseness constraint to the transform filters out sampling artifacts, because these are, by the definition of the transform, non-sparse. Seen from this high level of generality, all these implementations are very similar. The similarity also extends to other dataflow details, such as overlapping five-dimensional windows. Usually some version of either a conjugate gradient inversion or an anti-leakage inversion is applied to enforce the sparseness constraint (a minimal sketch of this shared workflow is shown below, before the individual operators are described).

So, how do these implementations differ? The most common difference among them is the transformation that connects data and model, in other words, how they represent the data and which basis functions are used. Some implementations use Fourier transforms, some use Hankel transforms, and others use curvelets, frames, Radon transforms, prediction filters (f-x-y), or Green's functions based on modelling/migration operators. Many implementations used in industry are based on Fourier transforms because they are very efficient and very good at representing small details of the data.

One factor that strongly influences the efficiency and flexibility of all these transforms is whether they use exact input spatial locations or an approximation obtained by multidimensional binning, either directly or with an intermediate interpolation. When exact locations are used, these transformations become computationally expensive to invert. When binning is used, they are efficient and can handle large window sizes and gaps, but they lose precision and the flexibility to adapt to narrow azimuths and long offsets. When 5D data from standard geometries are binned, the resulting grids turn out to be very sparse. What is interesting about Fourier transforms is that they seem to adapt well to either approach. The subject of this paper is to explore how Fourier interpolation performs in each case. I compare five-dimensional interpolation with three different operators that apply the three approaches mentioned above: binning, exact locations, and intermediate approximation. Special care is taken to use the same inversion and dataflow for the three cases, so that any differences are strictly confined to the operators.

Operator Implementation

The three most common choices for the Fourier transformation are:
- the fast Fourier transform (FFT) with multidimensional binning and grid adaptation,
- the discrete Fourier transform (DFT) using exact spatial locations, and
- the non-equidistant fast Fourier transform (NFFT) using interpolation plus FFT.
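Because the dataflow and the inversion are identical for the three operators, only the operator evaluation changes. The following is a minimal sketch of that shared sparsity-promoting reconstruction for a single temporal-frequency slice. It uses a simple iterative thresholding loop as a stand-in for the conjugate gradient or anti-leakage solvers mentioned above; the function names (forward, adjoint, synth_full), the unit step length, and the linearly decaying threshold are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def sparse_fourier_interpolation(d_obs, forward, adjoint, synth_full,
                                 n_iter=30, perc=0.99):
    """Sparsity-promoting reconstruction of one temporal-frequency slice.

    d_obs      : complex data values at the observed (live) traces
    forward(m) : maps Fourier coefficients m to the observed trace locations
    adjoint(r) : adjoint of forward (maps a data residual back to coefficients)
    synth_full : maps coefficients to the desired output geometry
    """
    m = np.zeros_like(adjoint(d_obs))
    for it in range(n_iter):
        r = d_obs - forward(m)                 # residual at live traces only
        m = m + adjoint(r)                     # unit step assumes a normalized operator
        # keep only the strongest coefficients; relax the threshold with iterations
        thresh = perc * np.abs(m).max() * (1.0 - it / n_iter)
        m[np.abs(m) < thresh] = 0.0
    return synth_full(m)                       # interpolated traces on the new geometry
```

The three operators described below differ only in how forward, adjoint, and synth_full are evaluated: a binned FFT, an exact-location DFT, or gridding followed by an FFT.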
Fast Fourier transform operator

A multidimensional FFT can be applied to each frequency slice, but binning is required because FFT algorithms assume regularly sampled data. This binning is not trivial, for several reasons explained in Trad (2009, 2014). A complete 5D grid implies a data set acquired with a line interval equal to the group interval, which is much denser than is usually done in practice. A typical 3D survey will fill only a small percentage of the grid cells, typically close to 3%, leaving the algorithm to infill the remaining 97%. Binning along the inline and crossline directions is relatively straightforward; a sensible choice is the nominal bin size, which is usually calculated to match the Fresnel zone size after migration. On the other hand, binning along offset and azimuth, or along inline/crossline offsets, can be quite complicated, depending on the fold of the data, the shot patch pattern and the offset range. Optimal binning is critical for the FFT: coarse binning introduces time jittering that degrades sparseness in the Fourier space and creates a problem with traces that fall into the same bins, while fine binning makes the model size large and difficult to solve. For example, reducing the bin size by half in each direction forces the interpolation to infill 2^4 = 16 times as many grid cells. An industrial application of this algorithm has to adjust the binning size along offset and azimuth to deal with these issues. For example, in this work the user pre-defines the percentage of live traces required, usually 3% for wide azimuth data or 5% for narrow azimuth data; with this information, the algorithm calculates the offset/azimuth binning intervals needed to achieve this target. More sophisticated schemes are possible, but this simple scheme helps to adjust the grid as the data fold changes in different areas of the survey. Another problem with binning is how to adjust the aperture across dimensions so that there are no extrapolated traces; instead, new traces are created only inside the region of support of the acquisition. For example, the azimuth range can be made automatic, depending on the actual coverage, and the minimum/maximum offsets along the inline/crossline directions can adjust to the patch size in the data.

Discrete Fourier transform

The simplest way to calculate the Fourier transform of irregularly sampled data is to use the mathematical definition, which for 4 spatial dimensions is

U_{k_j} = \sum_{i} u_i \, e^{-i \, \mathbf{k}_j \cdot \mathbf{x}_i} .    (1)

In this equation, u_i is the value of the sample at position x_i and U_{k_j} is the value of the DFT at wavenumber k_j; for 5D interpolation, x_i and k_j are four-dimensional vectors. Because the wavenumber axes are usually made regular, k_j is replaced by an index j times a wavenumber interval. Equation 1 contains one nested summation for each dimension, which makes this formulation very slow for 5D. Therefore, although precise and straightforward, the DFT is not necessarily the best choice for wide azimuth data. To speed up the algorithm we can restrict the operator to compute only the wavenumbers inside the f-k volume that have physical significance, with minimum/maximum wavenumbers that vary with temporal frequency. Many details are required to make this formulation practical: precalculating the operator or using sine/cosine tables, careful alignment of memory allocations, loop vectorization, and multi-threading. Parameterization for this approach is analogous to the binning case: although in principle there are no sampling intervals, in practice we need them to calculate the Nyquist frequencies for each spatial dimension. The algorithm has the flexibility to adjust to variable spatial ranges, which makes it possible to adapt, for example, to streamer acquisitions where the azimuth range changes continuously with offset.
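As an illustration of equation (1) and of how the exact-location operator plugs into the inversion sketched above, the following is a dense NumPy version of the DFT pair for one frequency slice. The function names and the explicit matrix formulation are illustrative assumptions; a practical code would precompute tables, restrict the wavenumber range per frequency, vectorize, and multi-thread as described above.

```python
import numpy as np

def dft_forward(coeffs, k, x):
    """Synthesis: map Fourier coefficients to traces at exact 4D positions.
    coeffs: (nk,) complex; k: (nk, 4) wavenumbers; x: (nx, 4) trace coordinates."""
    phase = np.exp(1j * (x @ k.T))       # (nx, nk) basis functions at exact locations
    return phase @ coeffs                # data values at the nx traces

def dft_adjoint(data, k, x):
    """Adjoint, i.e. equation (1): irregular DFT of the data onto the wavenumber grid."""
    phase = np.exp(-1j * (k @ x.T))      # (nk, nx)
    return phase @ data                  # the four nested sums of equation (1), flattened
```

The cost of both functions grows with the product of the number of traces and the number of wavenumbers, which is what makes the exact DFT expensive in five dimensions.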
Non-equidistant Fast Fourier transform

A more efficient approach than the DFT for calculating the Fourier spectrum of irregularly sampled data is to use an intermediate, fast, and localized interpolation to move the traces to the bin centres of the 4D spatial grid, and then apply a multidimensional FFT. We can think of the binning + FFT method described first as a nearest-neighbour pre-interpolation, and of the NFFT as a higher-order pre-interpolation. In this work, I have used the non-equidistant fast Fourier transform (NFFT; Duijndam and Schonewille, 1999) and the libraries from the Technical University of Chemnitz (Keiner et al., 2009). The irregular data are (irregularly) convolved with a smoothing window to relocate the samples onto a regular grid. A multidimensional FFT is then applied to map the resulting regular (bandpass-filtered) signal to the Fourier domain. The effect of the window is removed by deconvolution, which can be done in the Fourier domain since all samples are now at regular locations. One issue is that the library has some constraints on the window parameterization: it requires polynomials with an even number of parameters and a minimum oversampling factor for these windows. This requires careful calculation of padding factors, and research is still ongoing to obtain a general and optimal parameterization.
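To make the spread, FFT, and deconvolve sequence concrete, here is a hypothetical one-dimensional sketch that uses a simple linear (triangle) spreading window; it only illustrates the idea and is not the NFFT3 library's actual kernel or parameterization, which relies on higher-order windows and oversampling.

```python
import numpy as np

def gridding_fft_1d(u, x, n):
    """Estimate the spectrum of irregular samples by gridding + FFT + deconvolution.
    u: sample values; x: positions already scaled to grid units in [0, n)."""
    grid = np.zeros(n, dtype=complex)
    j = np.floor(x).astype(int)
    frac = x - j
    np.add.at(grid, j % n, (1.0 - frac) * u)   # spread each sample onto the left node
    np.add.at(grid, (j + 1) % n, frac * u)     # and onto the right node (triangle window)
    spec = np.fft.fft(grid)                    # FFT of the now-regular signal
    # deconvolve the triangle window: its spectrum is sinc^2 of the grid frequency
    window = np.sinc(np.fft.fftfreq(n)) ** 2
    return spec / np.maximum(window, 1e-6)
```

In the 4D case the same three steps are applied along each spatial dimension, and the window width and oversampling factor control the trade-off between gridding accuracy and FFT cost.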

Examples

Figure 1 shows a regularization test in which the input traces are located at irregular intervals and are decimated by a factor of four. No normal moveout (NMO) correction was applied, to illustrate the behaviour for curved events. The three operators succeed in regularizing the data and binning errors are not visible, but there are some subtle differences in the spectral content for the most curved parts of the events, with the best results for the DFT. For this small example (2D data, 3D interpolation) the differences in running time are negligible. In this example I used shot/receiver coordinates as spatial dimensions, which is feasible in some cases such as 2D. In 3D geometries the acquisition coordinates are usually too far apart, and we commonly use midpoint x/y together with either offset x/y or offset/azimuth, as in the following example.

For sparse 3D data we have to use large interpolation windows to be able to deal with large minimum offsets. Figure 2 shows a comparison for a real data set from Brooks, Alberta (a CO2 sequestration project). Figures 2a and 2b show the test window in the midpoint domain (red square) and the shots and receivers contributing to it. The original geometry was orthogonal, with 100 m line spacing and 10 m group spacing (5 m x 5 m bin size), but I removed every second receiver line from the input, making the receiver line spacing 200 m (or 40 bins). I set the output geometry to a 100 m line interval to recover the missing receiver lines. To allow each window to cover a full box, I set the window size to 40 inlines by 40 crosslines. For the other dimensions I used 20 offsets and 8 azimuths, with intervals of 50 m and 45 degrees. These intervals are in fact too large for the binning approach, but the implementation handles this automatically by creating an interpolation grid of 25 m and 22.4 degrees, respectively (see the sketch below).

Figure 2c shows a portion of the original data with all traces belonging to a removed receiver line (none of these traces were in the input). Figure 2d shows the corresponding traces created by the FFT algorithm. We can identify the same reflectors in both, although the FFT result is a bit cleaner, as expected. Figure 2e contains the traces created by the DFT algorithm. The main difference with respect to the FFT result is stronger filtering, because of the slower convergence of the algorithm with the DFT (all tests used 30 iterations). Also, because of the high memory requirements of the DFT operator, the DFT Nyquist frequencies were somewhat reduced, which can explain some of the additional filtering. Figure 2f shows the result for the NFFT algorithm. There is some low-frequency energy that does not exist in the original traces, which may be explained by insufficient numerical regularization at low frequencies.
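The automatic refinement of the offset/azimuth bin sizes mentioned above, driven by the target percentage of live cells described for the FFT operator (about 3% for wide azimuth data, 5% for narrow azimuth), could look roughly like the following. The function, its halving strategy, and the stopping rule are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def adjust_offset_azimuth_bins(offsets, azimuths, d_off=50.0, d_az=45.0,
                               target_fill=0.03, max_refinements=4):
    """Halve the offset/azimuth bin sizes until the fraction of live cells
    drops to the requested target. offsets in metres, azimuths in degrees;
    d_off and d_az are the starting intervals."""
    for _ in range(max_refinements + 1):
        i_off = np.floor(offsets / d_off).astype(int)
        i_az = np.floor(azimuths / d_az).astype(int)
        n_cells = (i_off.max() + 1) * (i_az.max() + 1)            # grid extent
        n_live = len(np.unique(i_off * (i_az.max() + 1) + i_az))  # occupied cells
        if n_live / n_cells <= target_fill:                       # sparse enough
            break
        d_off, d_az = d_off / 2.0, d_az / 2.0                     # refine the grid
    return d_off, d_az
```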
The computational time was only 8 minutes for the FFT approach, 3 hours for the DFT approach, and 50 minutes for the NFFT approach. For this second test the best results came from the FFT approach, but that is influenced by the large window size and the simple structure of the test data.

Conclusions

In this paper we saw how three different Fourier operators behave during 5D interpolation. The first operator, a standard multidimensional FFT with binning, seems to work well if the binning is carefully implemented. The DFT operator is more precise and flexible but computationally very expensive; it is a good benchmark tool to understand how the approximation of exact locations affects the other operators. The NFFT operator is a compromise in terms of speed and flexibility. When very large windows are required, as in the case of sparse orthogonal geometries, the efficiency and performance of the FFT operator are superior to the other two, but in the presence of complex structure or long offsets, strong curvature in the events may require use of the DFT or NFFT approach.

Acknowledgements

I thank CMC Research Institutes, Inc. (CMC) for access to the seismic data and the CREWES sponsors for contributing to this research. I also gratefully acknowledge support from NSERC (Natural Sciences and Engineering Research Council of Canada) through grant CRDPJ 461179-13.

Fig. 1: Regularization in shot/receiver coordinates from irregular decimated data: a) original without decimation, b) input, c) DFT, d) FFT, e) NFFT.

Fig. 2: Interpolation of receiver lines in inline/crossline/offset/azimuth coordinates. The red square is one window in inline/crossline. a) input geometry contributing to the red window, b) output geometry, c) one missing receiver line (not in the input), d) estimated with FFT, e) DFT, f) NFFT.

References

Claerbout, J., 1992, Earth soundings analysis: Processing versus inversion: Blackwell Scientific Publications, Inc.

Duijndam, A. J. W., and Schonewille, M. A., 1999, Nonuniform fast Fourier transform: Geophysics, 64, No. 2, 539-551.

Keiner, J., Kunis, S., and Potts, D., 2009, Using NFFT 3: a software library for various nonequispaced fast Fourier transforms: ACM Trans. Math. Softw., 36, No. 4, 19:1-19:30. URL http://doi.acm.org/10.1145/1555386.1555388

Liu, B., and Sacchi, M. D., 2004, Minimum weighted norm interpolation of seismic records: Geophysics, 69, No. 6, 1560-1568, http://dx.doi.org/10.1190/1.1836829

Sacchi, M. D., and Ulrych, T. J., 1996, Estimation of the discrete Fourier transform, a linear inversion approach: Geophysics, 61, No. 4, 1128-1136.

Trad, D., 2009, Five-dimensional interpolation: Recovering from acquisition constraints: Geophysics, 74, No. 6, V123-V132.

Trad, D., 2014, Five-dimensional interpolation: New directions and challenges: CSEG Recorder, March, 40-46.