Thursday 23 January 2014

Marine Processing - Part 9 | Demultiples

This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive, and techniques vary with time (and software); the idea, however, is to provide a practical guide for applying seismic processing theory.

The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.

The data are available from New Zealand Petroleum and Minerals (part of the Ministry of Economic Development) under the “Open File” system.
The processing sequence developed so far:

  • Reformat from field data (in this case, SEGY)
  • Apply a combined minimum-phase conversion and anti-alias filter
  • Resample to a 4ms sample interval

  • Assign 2D marine geometry; 12.5m CDP spacing and 60 fold gathers
  • QC shots and near trace plots

  • Amplitude recovery using a T² spherical divergence correction
  • Amplitude QC and Edit using peak and RMS amplitude displays

  • Swell noise suppression using projective filtering
  • Interpolation to 6.25m group interval, 480 channels to unwrap spatially aliased dips
  • Tau-p transform: 500ms robust AGC "wrap", 960 p-values, and a transform range from -1400 m/s to +1400 m/s
  • Tail mute to remove linear arrivals and linear noise
  • Predictive deconvolution: 32ms gap, 480 ms operator
  • Rho filter for low frequency compensation
  • Inverse Tau-p transform
  • Interpolation back to 25m group interval, 120 channels


One of the key issues in marine data is the treatment of multiples. Multiples are delayed reflections that interfere with the “primary” reflections we want to image. The delay occurs because the reflection energy has taken a longer, more complex ray path from source to receiver, reverberating between two highly reflective layers.

While any two layers with high reflectivity can create multiples, in general it is energy reverberating in the water column (i.e. between the seafloor and the sea surface) that is the most important to contend with.

There are three main ways in which to remove multiples:
  1. treat the multiple as a long, ringing wavelet, and use signal processing to simplify it
  2. create a model of the multiple and try to subtract it from the data
  3. use the difference in stacking velocity of the primary and multiple to remove the multiple

We have already used the “signal processing” approach when we applied deconvolution to collapse the reverberations, so any further reduction in multiples is going to require a different technique (or techniques).

The modelling approaches, such as SRME (Surface Related Multiple Elimination), usually require the primary seafloor reflection to be the first clear signal. This limits their application on shallow water lines such as this one, although there are some variations that can address this.

This leaves us with the velocity discrimination methods; I’m going to use the Parabolic Radon Transform (PRT) approach. The Radon transform is a generic mathematical procedure in which input data in the frequency domain are decomposed into a series of events in the Radon domain. The PRT has been the “workhorse” demultiple method for many years; even when SRME is deployed it is often used to overcome limitations in the modelling.

The Tau-p transform is a special case of the Radon transform where Tau (intercept time) and P (slowness, slope, or parabolic curvature) represent the two axes of Tau-p space. The data are summed along a series of parabolic tracks, each described by a curvature value (P). The user defines the number of P-values to use as well as the total P range, which can run from negative to positive.
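
To make that summation concrete, here’s a minimal numpy sketch of the transform in its simplest “stack along parabolas” form; all names are illustrative, and production codes solve a least-squares inversion (usually per frequency) rather than this plain summation:

import numpy as np

def parabolic_radon_stack(gather, offsets, dt, q_values):
    # gather:   (n_samples, n_traces) NMO-corrected CDP gather
    # offsets:  (n_traces,) offsets in metres
    # dt:       sample interval in seconds
    # q_values: (n_q,) far-offset moveouts in seconds (negative = up-dip)
    n_samples, n_traces = gather.shape
    x_far = np.max(np.abs(offsets))
    radon = np.zeros((n_samples, len(q_values)))
    taus = np.arange(n_samples)
    for iq, q in enumerate(q_values):
        # time shift of each trace (in samples) along t(x) = tau + q*(x/x_far)**2
        shifts = q * (offsets / x_far) ** 2 / dt
        for itr in range(n_traces):
            src = taus + shifts[itr]
            idx = np.clip(np.round(src).astype(int), 0, n_samples - 1)
            valid = (src >= 0) & (src <= n_samples - 1)
            # nearest-neighbour sampling for brevity; real codes interpolate
            radon[valid, iq] += gather[idx[valid], itr]
    return radon / n_traces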

One of the key elements of the PRT as a demultiple method is that we usually apply an NMO correction of some sort to the data. This helps do two things: (i) target the primaries and multiples – for example we can use water velocity or a picked primary trend, and (ii) approximate the reflections (which are originally hyperbolic) through a series of parabolic curves. 

There are different ways of considering the parabolic curvature (P); however, one of the easiest is to look at the relative moveout (in ms) of the events at the far offset. This means a flat event has a P-value of zero, up-dip events (NMO corrected with too low a velocity) have negative P-values, and down-dip events (NMO corrected with too high a velocity) have positive P-values.
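
As a sketch of how this ties in with the NMO correction, here’s a minimal nearest-sample NMO routine; the function name, the scale parameter, and the picked time/velocity arrays are illustrative assumptions. Running it with scale=0.9 reproduces the convention above: over-corrected primaries take on negative far-offset moveout, under-corrected multiples positive.

import numpy as np

def nmo_correct(gather, offsets, dt, t_picks, v_picks, scale=0.9):
    # t_picks, v_picks: picked zero-offset times (s) and RMS velocities (m/s)
    # scale=0.9 applies the correction with 90% of the picked velocities
    n_samples, n_traces = gather.shape
    t = np.arange(n_samples) * dt
    v = np.interp(t, t_picks, v_picks) * scale   # velocity at every sample
    out = np.zeros_like(gather)
    for itr, x in enumerate(offsets):
        # hyperbolic travel time for each output (zero-offset) time
        tx = np.sqrt(t ** 2 + (x / v) ** 2)
        idx = np.round(tx / dt).astype(int)
        ok = idx < n_samples
        out[ok, itr] = gather[idx[ok], itr]
    return out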

Here’s the same synthetic as before; this time it has an NMO correction applied with 90% of the primary velocities, so that the primaries are up-dip and the multiples down-dip.


A synthetic CDP gather corrected with 90% of the primary NMO velocity (left) and its parabolic Radon transform (right). The Radon transform is labelled by “far offset moveout” so that the over-corrected, up-dip primaries correspond to negative numbers and the under-corrected, down-dip multiples correspond to positive numbers. Where the primaries and multiples cross over in the X-T domain, they are separated in the PRT domain (circled)

There are some key things to notice in this result. 

Firstly, the PRT is only really effective when we have enough live traces. In the shallow part of the gather (down to ~500ms), where there are only around 20 live traces, the events can still be seen but they are smeared; shallower than about 500ms TWT it is hard to make the PRT demultiple effective.

Secondly, the events form a “cross” shape in the X-T domain; this means that we can damage the primary (or the multiple) if we don’t transform a sufficiently large range of P-values (or we use too few values, causing the signal to be aliased).

Some other general points to consider are:
  • the more P-values and frequencies you have, the larger the matrix required for the transformation. You need a different matrix for each combination of offsets, so regularising the data (and using a routine that saves the matrix) can make a big difference – see the sketch after this list
  • it is much more important to focus on removing the multiples from the near offsets (as these will "stack in") as opposed to the far offsets (which will tend to be suppressed by stacking, or "stack out")
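
To put rough numbers on the first point, here’s a back-of-envelope sketch assuming the usual frequency-domain formulation (the function name and memory figures are illustrative): the per-frequency matrix depends only on the offsets and the P-values, so with a regularised geometry it can be built once and reused.

import numpy as np

def radon_operator(offsets, q_values, freq):
    # L[j, k] = exp(-2i*pi*freq * q[k] * (x[j]/x_far)**2), shape (n_traces, n_q):
    # maps Radon-domain coefficients to traces at one frequency
    x_far = np.max(np.abs(offsets))
    phase = np.outer((offsets / x_far) ** 2, q_values)
    return np.exp(-2j * np.pi * freq * phase)

# e.g. 120 traces x 200 q-values x (say) 180 usable frequencies of
# complex doubles is ~70 MB of operator per unique offset distribution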

Here are some synthetic examples of the PRT demultiple in action, which show how important it is to tune the parameters defining the “multiple-zone”, and check the results!


The synthetic gather (top left) with two different PRT demultiples applied (top centre, top right). The transforms of each gather are also displayed underneath. The demultiple only starts at 550ms TWT to avoid damaging the shallow events, as these are poorly defined in the PRT space where the NMO stretch mute has limited the offsets. The centre panels show the “multiple zone” defined as between +20 and +1600ms “far-offset moveout”; on the right this has been extended to include dips down to -40ms moveout, allowing the deepest multiple with the smallest moveout variation to be attacked

From this the PRT demultiple looks promising; it can manage even small separations between the primary and multiple as long as the velocities have been determined accurately enough. It needs to be tested carefully, however, if the primary and multiple are close in terms of moveout.

Here are some selected CDP gathers from the seismic line. In the shallow part of the section the multiples have been effectively managed by the deconvolution; however, at depth (where the primary signals are weaker) there’s still a lot of multiple energy from more complex ray-paths.

The gathers have been NMO corrected with 90% of the picked primary velocity, so we can be sure the primaries will be up-dip, or at the very least flat.


Selected CDP gathers displaying the deeper data from about 800ms TWT onwards. The gathers are corrected with 90% of the primary velocity. The multiples become more obvious in the deeper section from 1500ms or so – as indicated in yellow

To apply the PRT demultiple we need to look at the “far-offset moveout” range required for the data overall, and to define what part of the data we want to classify as multiples.

CDP gathers NMO corrected with 90% of the stacking velocity with some example measurements of positive and negative far-offset moveout

In this case I’m going to use a range of -300ms to +500ms, and as the primary is over-corrected (through applying a Normal Moveout correction with 90% of the primary velocity) I’m going to classify the multiple as having a far-offset moveout between +24 and +500ms.

I’ll also need to specify how many P-values to use; one way to approach this is to think about the total moveout range we are transforming, and what each P-value will represent. 

With a total range of 800ms (from -300ms to +500ms) I’m going to use 200 P-values; this means that each P-value corresponds to 4ms of moveout at the far offset (which is the sample interval).
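
A trivial check of that arithmetic:

# far-offset moveout range chosen above, in ms
q_min, q_max = -300.0, 500.0
n_q = 200
dq = (q_max - q_min) / n_q
print(dq)   # 4.0 ms per P-value, i.e. the 4ms sample interval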

Finally, I’m going to form the data into supergathers. As I described in the post on geometry, we have different offsets in the “odd” and “even” numbered CDPs, each of which has 60 traces. If we combine these into a single supergather with 120 traces then each gather will have the same offset distribution, and we’ll have better resolution in the PRT domain as well.
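
Here’s a minimal sketch of that supergather step, assuming each input CDP arrives as an (n_samples, 60) array with its own offsets (the helper name is hypothetical): merge the odd and even gathers and sort by offset, so the 120 traces interleave into a single regular offset distribution.

import numpy as np

def make_supergather(gather_odd, offsets_odd, gather_even, offsets_even):
    # each gather is (n_samples, 60); offsets are (60,) arrays in metres
    data = np.concatenate([gather_odd, gather_even], axis=1)   # (n, 120)
    offsets = np.concatenate([offsets_odd, offsets_even])
    order = np.argsort(offsets)   # interleave the two offset sets
    return data[:, order], offsets[order]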

Here are the results – remembering that the demultiple is only applied from 1400ms onwards:

Test of PRT demultiple shown on two CDP gathers (500 and 501), which were formed into a single 120-trace supergather for the application. The gathers on the right have had the multiples with a far-offset moveout between +24 and +500ms suppressed; the data had an NMO correction with 90% of the picked primary velocity applied prior to the PRT demultiple

This seems to have done a pretty effective job of cleaning up the gathers; it will be a lot easier to pick the deeper velocities now, which should help with the imaging.

While it is important to check the results on the stacked data as well (with this dataset the differences in the stack are actually relatively small), the majority of the multiple energy ‘stacks out’, and our main goal is to make it easier to get an improved velocity analysis.

Velocity semblance spectra for CDP 500 before (left) and after (right) the PRT demultiple has been applied; removal of the “ringing” multiple chain (circled in red) has helped in differentiating primaries from multiples

Having used our first-pass velocities to help remove the multiples we can now go back and check or re-pick the deeper velocities to further improve the image.

By: Guy Maslen
