Seismic Noise Attenuation Using Curvelet Transform and Dip Map Data Structure


T. Nguyen* (PGS), Y.J. Liu (PGS)

Summary

The curvelet transform is a well-known tool for the attenuation of coherent and incoherent noise in seismic data. It exploits the fact that signal and noise are usually better separated in the curvelet domain than in the time-space (TX) domain. Coefficients of the transform are not independent, and neighbouring coefficients are strongly correlated, a property that existing curvelet-based noise attenuation algorithms do not fully exploit. In this work we propose a data structure, called a dip map, to describe dip information in seismic data. This information links local curvelet coefficients together during adaptive thresholding or subtraction of curvelet coefficients in seismic denoising algorithms. We used the dip map to improve a curvelet multiple subtraction algorithm, and the results on real data show significant improvement over traditional methods.

Introduction

All acquired seismic data are contaminated by noise that needs to be removed or attenuated before further processing and interpretation can take place. Methods for seismic data denoising can be broadly classified into prediction-filter methods and transform-based methods. In this paper we continue previous work on seismic data denoising using the curvelet transform by introducing a technique that connects transform-domain curvelet coefficients to seismic events in the original time-space (TX) domain.

The curvelet transform has been used successfully to attenuate both incoherent and coherent noise (Neelamani et al., 2008), mainly because of two useful properties: 1. curvelets provide a sparse representation of seismic events, and 2. events of differing dip, scale or TX location are often well separated in the curvelet domain. The curvelet transform represents any 2D image as a linear sum of curvelet dictionary functions weighted by corresponding coefficient values. The data are decomposed into multiple scales, with each scale divided equally into multiple directions. Curvelet basis functions are designed to be simultaneously localised in scale, direction, space and time, so each curvelet coefficient is identified by a multi-index consisting of scale, direction and space-time indices. Each scale-direction pair (curvelet subband) is strictly band-limited, hence each curvelet subband is decimated to a much smaller size than the original data (Nguyen and Chauris, 2010).

An example of the amplitude of complex curvelet-domain coefficients is illustrated in Figure 1 (left). Each coefficient C(i,j,t,x) has scale index i, direction index j, and time-space indices t,x. Figure 1 (right) illustrates the relationship between neighbouring coefficients: coefficients with the same (j,t,x) describe the same event at different frequencies (scales); groups of coefficients in the same curvelet subband (i,j) describe linear seismic events; and coefficients with the same (i,t,x) describe events whose directions fall between curvelet bands. Coefficients with close scale-direction-location indices in the curvelet domain are strongly correlated, because they describe approximately the same seismic events. Our aim is therefore a data structure that links neighbouring curvelet coefficients in noise attenuation algorithms, based on the geometrical characteristics of seismic events in the TX domain.

Figure 1 Left: an example of the complex curvelet representation of a typical seismic shot gather containing primaries and multiples. Right: an example of the relationship of a coefficient with its neighbouring coefficients.

Curvelet transform and the dip map data structure

The dip map is a data structure created to connect seismic events in the TX domain to coefficients in the transform domain. For a 2D TX seismic section, the dip map is a 3D data cube D(t,x,d) with time, space and dip dimensions, calculated from the data. For each location of the TX data, the dip map provides a dip-spectrum vector corresponding to the seismic events at that location. Because the objective of the dip map is to provide dip information for the curvelet transform, and the number of curvelet directions per scale increases with scale level, the number of dips in the dip map is set equal to the number of curvelet directions at the highest scale.
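For illustration, the following is a minimal numpy sketch of one possible way to populate a dip cube D(t,x,d) from a 2D TX section, using local slant stacks over a set of candidate dips. The function name, window length and dip sampling are assumptions made here for illustration only and are not the implementation used in this work.

import numpy as np

def estimate_dip_map(section, dips, win=11):
    # section : 2D array (nt, nx), a TX seismic section
    # dips    : candidate dips in samples per trace, e.g. np.linspace(-2, 2, 32),
    #           with as many entries as curvelet directions at the finest scale
    # returns : dip cube D(t, x, d) of local slant-stack amplitudes
    nt, nx = section.shape
    half = win // 2
    D = np.zeros((nt, nx, len(dips)))
    t = np.arange(nt)
    for ix in range(nx):
        for idip, d in enumerate(dips):
            stack = np.zeros(nt)
            n = 0
            for off in range(-half, half + 1):
                jx = ix + off
                if 0 <= jx < nx:
                    # shift the neighbouring trace along the candidate dip
                    src = t + int(round(d * off))
                    ok = (src >= 0) & (src < nt)
                    shifted = np.zeros(nt)
                    shifted[ok] = section[src[ok], jx]
                    stack += shifted
                    n += 1
            # a strong value at (t, x, d) indicates an event with dip d at that location
            D[:, ix, idip] = np.abs(stack) / max(n, 1)
    return D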
Seismic data contain continuous and locally linear events, so there are continuities between dip map sections corresponding to times that are close to each other. This is illustrated in Figure 2. The two synthetic events have nearly the same dip at three different times. The dip map sections at times t0 and t2 show two seismic events that are separate in direction and location, but it is difficult to separate the two events in the dip map section at time t1, because there the two events have the same x location. At time t1, the dip map sections from times t0 and t2 can help in separating these two crossing events. In practical seismic processing applications using the curvelet transform and dip maps, any algorithm will need to estimate a dip map for the signal component and/or a dip map for the noise component in the case of coherent noise.

Figure 2 Example of a dip map calculated from a TX section, and three time slices of the dip map cube corresponding to times t0, t1 and t2.

Application of dip maps to multiple subtraction

We now describe how the dip map is used in our curvelet-domain adaptive multiple subtraction, where the amount of rotation and scaling is controlled by an analysis step (Nguyen and Dyer, 2016). A typical approach to adaptive multiple subtraction in the curvelet domain is to rotate and scale each curvelet-domain coefficient of the model to match the data coefficient at the same index (Neelamani et al., 2010). The problem with this approach is that it relies on the assumption that primaries and multiples map to different coefficients in the curvelet domain. While primaries and multiples are indeed better separated in the curvelet domain than in the TX domain, crossing primaries and multiples can still map to the same curvelet coefficients. When the multiple-model coefficients are rotated and scaled to match the data coefficients, they are adapted not only to the true multiples but also to the primary component, potentially leading to primary damage.

Our algorithm for multiple subtraction in the curvelet domain (Nguyen and Dyer, 2016) consists of an analysis phase and a subtraction phase. The analysis phase considers a group of nearby curvelet coefficients and estimates constraint parameters for each pair of curvelet coefficients. These parameters control how far the curvelet coefficients of the model are allowed to adapt to the data coefficients in the subtraction phase. By considering a group of coefficients together, the analysis phase can match modelled multiples to real multiple events in the TX domain and estimate suitable adaptation parameters for the corresponding curvelet-domain coefficients. Calculating these parameters requires an estimate of the dip maps of the multiple and primary components of the data, obtained step by step as follows:
1. The multiple model is matched to the real multiples by least-squares filtering (LSF).
2. The multiple dip map D_M is estimated from the adapted model.
3. The multiples are subtracted from the data by curvelet matching with a high level of adaptation.
4. The primary dip map D_P is estimated from this over-adapted subtraction, which contains primary damage.
5. A new primary dip map is interpolated, using the multiple dip map D_M as a weighting mask, to compensate for the masking of primary events by multiples.

In step 3, the multiples are subtracted with high adaptation parameters to ensure that all multiples are removed and the result contains only primary energy. The crucial part of the analysis phase is to estimate a primary dip map from the over-adapted subtraction with primary damage in step 4. This is done by using the property that dip-map events are continuous and linear, and by using the multiple dip map as a weighting mechanism to compensate for primary events that are masked by multiples.
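For concreteness, the analysis phase can be outlined schematically as follows. This is only a sketch of the workflow described above, not the authors' implementation: lsf_match and curvelet_subtract are hypothetical placeholder names for the LSF matching and curvelet-domain adaptive subtraction, estimate_dip_map refers to the dip-map sketch given earlier, and reconstruct_primary_dip_map (step 5) is sketched further below.

def analysis_phase(data, multiple_model, dips):
    # Schematic outline of the five analysis steps; helper functions are
    # hypothetical placeholders.
    adapted_model = lsf_match(multiple_model, data)            # step 1: LSF matching
    d_mult = estimate_dip_map(adapted_model, dips)             # step 2: multiple dip map D_M
    primaries_over = curvelet_subtract(data, adapted_model,    # step 3: over-adapted curvelet
                                       adaptation='high')      #         subtraction
    d_prim = estimate_dip_map(primaries_over, dips)            # step 4: damaged primary dip map D_P
    d_prim = reconstruct_primary_dip_map(d_prim, d_mult)       # step 5: compensate D_P using D_M
    return d_prim, d_mult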

An example of a processing algorithm for the primary dip map reconstruction in step 5 is as follows:
I. Estimate a normalised mask M(t,x,d) from the multiple dip map D_M(t,x,d), with values ranging from 0 to 1; for example, M(t,x,d) = D_M(t,x,d) / max(D_M).
II. For each point (t,x,d) of the primary dip map D_P(t,x,d), estimate a new value that takes the neighbouring values into account:
D'_P(t,x,d) = (1 - M(t,x,d)) D_P(t,x,d) + M(t,x,d) Σ_(t',x') W(t',x') D_P(t',x',d),
where W is an anisotropic smoothing window with its main axis along the direction of the current dip d; its values are derived from M(t,x,d) and are normalised so that they sum to 1. The main idea is that where the multiple mask is high, the value of the primary dip map is compensated by taking into account its values further along the direction of the dip.

After this processing, the analysis phase has estimated dip maps for the primary and multiple components, and these are used to control the adaptation level in the subtraction phase. Using the location (t,x) and direction (j) indices of each curvelet coefficient, the dip maps provide an estimate of the primary and multiple content at that coefficient. The ratio between the multiple and primary dip information controls the amount of scaling and rotation applied to the modelled coefficient to match the data before subtraction. If the amplitude of the multiple dip map is stronger than that of the primary dip map, the level of adaptation is higher; conversely, if the primary dip map is stronger, the level of adaptation is reduced. In this way, primary events are better preserved, while a high level of curvelet adaptation removes more residual multiple energy in areas of weak primaries.
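The following numpy sketch illustrates steps I and II and a possible mapping from the dip ratio to an adaptation level. It is a minimal sketch under stated assumptions: dip cubes of shape (nt, nx, ndip), a simple smoothing along the time axis standing in for the dip-oriented anisotropic window W, and illustrative lower/upper bounds on the adaptation level. The function and parameter names are hypothetical.

import numpy as np

def reconstruct_primary_dip_map(d_prim, d_mult, smooth_len=7):
    # d_prim, d_mult: dip cubes of shape (nt, nx, ndip)
    # Step I: normalised multiple mask in [0, 1]
    mask = d_mult / (d_mult.max() + 1e-12)
    # Step II: where the mask is strong, replace the primary dip value by a
    # smoothed version of its neighbours.  The dip-oriented anisotropic window
    # of the text is approximated here by simple smoothing along the time axis.
    kernel = np.ones(smooth_len) / smooth_len
    smoothed = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode='same'), 0, d_prim)
    return (1.0 - mask) * d_prim + mask * smoothed

def adaptation_level(d_prim, d_mult, low=0.1, high=1.0):
    # Illustrative mapping only: strong multiple dip energy relative to the
    # primary dip energy gives a high adaptation level, and vice versa.
    ratio = d_mult / (d_mult + d_prim + 1e-12)
    return low + (high - low) * ratio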
Examples

We have implemented and successfully tested the above algorithm on a synthetic dataset and on a real deep-water dataset. Here we present the result of a test performed on a very challenging shallow-water broadband dataset from the Barents Sea, acquired with dual-sensor towed-streamer technology. Figure 3 shows the stack of the conventional model-based demultiple result, in which most of the multiples are attenuated, although some residual multiple noise remains where primaries overlap with complex structure and faults. This residual multiple noise is further attenuated by our new curvelet denoising method (Figures 4, 5 and 6). Because curvelet adaptive subtraction relies on the directionality of the transform, and curvelets have a larger number of directions at higher scales, our method tends to work best at frequencies above 20 Hz. We found that at lower frequencies the improved LSF subtraction method produces better results. Both multiple subtraction techniques have been integrated into an intelligent adaptive subtraction program (Perrier et al., 2017) that combines different multiple subtraction techniques to produce the optimum demultiple result.

Conclusions

The advantage of seismic noise attenuation in the curvelet domain is that noise and signal are better separated in the transform domain. However, they are not completely separated, and denoising algorithms need to take nearby coefficients into account in the thresholding or subtraction step. We propose using the dip map data structure to connect local curvelet coefficients in the noise attenuation process. In the specific case of multiple subtraction, matching and subtracting curvelet coefficients independently can lead to primary damage. We avoid this problem by using estimated dip maps of the primary and multiple components to guide the adaptation process. Our example on a real dataset shows that the improved multiple subtraction algorithm removes more residual multiples while preserving primaries.

Acknowledgements

We would like to thank PGS for permission to publish, and our colleagues R. Dyer, S. Perrier, L. Marsiglio, P. Lecocq and S. Brandwood for reviewing this paper.

References

Neelamani, R. et al. [2008] Coherent and random noise attenuation using the curvelet transform. The Leading Edge, February 2008, 240-248.
Neelamani, R. et al. [2010] Adaptive subtraction using complex-valued curvelet transforms. Geophysics, 75(4), V51.
Nguyen, T. and Chauris, H. [2010] The Uniform Discrete Curvelet Transform. IEEE Transactions on Signal Processing, 58(7), 3618-3634.
Nguyen, T. and Dyer, R. [2016] Adaptive multiple subtraction by statistical curvelet matching. SEG Technical Program Expanded Abstracts 2016, 4566-4571.
Perrier, S., Dyer, R., Liu, Y.J., Nguyen, T. and Lecocq, P. [2017] Intelligent adaptive subtraction for multiple attenuation. EAGE 2017.

Figure 3: Stack of the multiple denoising result in the channel domain, using conventional simultaneous Least-Squares Filtering based adaptive subtraction.
Figure 4: Stack of the multiple attenuation result in the channel domain after applying the newly improved method, with a running window of 1000 traces in the space direction.
Figure 5: Difference of the stacked results (scaled by 2), demonstrating that primaries are well preserved while more multiples are attenuated.
Figure 6: Difference of the stacked results in the boxed area of Figure 5 (scaled by 10).