This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive and techniques vary with time (and software); however, the idea is to provide a practical guide for applying seismic processing theory.
The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.
The data are available from New Zealand Petroleum and Minerals (part of the Ministry of Economic Development) under the “Open File” system.
The processing sequence developed so far:
One of the key issues in marine data is dealing with multiples. Multiples are delayed reflections that interfere with the
“primary” reflections we want to image. The delay occurs because the reflection
energy has taken a longer, more complex ray path from source to receiver,
reverberating between two highly reflective layers.
While any two layers with high reflectivity can create a
multiple image, in general it is energy reverberating in the water column (i.e.
between the seafloor and the sea surface) that is the most important issue to
contend with.
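As a rough sense of scale, the extra delay added by each bounce in the water column is simply the two-way travel time of the water layer (at zero offset). Here is a minimal back-of-envelope sketch, with an assumed water depth and velocity:

```python
# Back-of-envelope: each extra bounce in the water column adds roughly
# the two-way time of the water layer (zero-offset approximation).
# Depth and velocity below are assumed, illustrative values.
water_depth_m = 100.0                          # assumed shallow-water depth
v_water = 1480.0                               # typical sea-water velocity (m/s)
twt_water_s = 2.0 * water_depth_m / v_water    # ~0.135 s between multiple orders
print(f"Water-column reverberation period: {twt_water_s * 1000:.0f} ms")
```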
There are three main ways in which to remove multiples:
- treat the multiple as a long, ringing wavelet, and use signal processing to simplify it
- create a model of the multiple and try to subtract it from the data
- use the difference in stacking velocity of the primary and multiple to remove the multiple
We have already used the “signal processing” approach when
we applied deconvolution
to collapse the reverberations. So any further reduction in multiples is going
to require a different technique (or techniques).
The modelling approaches, such as SRME (Surface Related
Multiple Elimination), usually require the primary seafloor reflection to be
the first clear signal. This limits their application on shallow water lines
such as this one, although there are some variations that can address this.
This leaves us with the velocity discrimination methods; I’m
going to use the Parabolic Radon Transform (PRT) approach. The Radon transform is a generic mathematical procedure
where input data in the frequency domain are decomposed into a series of events
in the Radon domain. The PRT has been the “workhorse” demultiple method
for many years; even when SRME is deployed it is often used to overcome
limitations in the modelling.
The Tau-P transform is a special case of the Radon transform
where Tau (intercept time) and P (slowness, slope, or parabolic curvature)
represent the two axes of Tau-P space. The data are summed along a series of
parabolic tracks, each described by a curvature value (P). The user defines the
number of P values to use as well as the total P range, which can span
negative to positive values.
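To make “summing along parabolic tracks” concrete, here is a minimal, brute-force sketch in Python/NumPy. The array shapes, names and units are my assumptions, and practical implementations work frequency by frequency with a least-squares inversion (touched on further below) rather than a simple summation:

```python
# Minimal sketch of a forward parabolic Radon (tau-P) transform:
# sum the gather along parabolas t = tau + p * (x / x_max)**2,
# so P is the moveout of each parabola at the far offset.
import numpy as np

def parabolic_radon_forward(gather, offsets, dt, p_values):
    """gather: (n_samples, n_traces) array; offsets in metres;
    dt: sample interval in seconds; p_values: far-offset moveouts (s)."""
    n_samples, n_traces = gather.shape
    x2 = (offsets / offsets.max()) ** 2            # normalised parabolic term
    taup = np.zeros((n_samples, len(p_values)))
    for ip, p in enumerate(p_values):
        for itrc in range(n_traces):
            shift = int(round(p * x2[itrc] / dt))  # time shift in samples
            if abs(shift) >= n_samples:
                continue
            trace = gather[:, itrc]
            if shift >= 0:
                taup[: n_samples - shift, ip] += trace[shift:]
            else:
                taup[-shift:, ip] += trace[: n_samples + shift]
    return taup
```

Parameterising P as the time shift at the far offset is what lets us describe the P range in milliseconds of far-offset moveout, which is how the parameters are discussed below.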
One of the key elements of the PRT as a demultiple method is
that we usually apply an NMO correction of some sort to the data. This helps in
two ways: (i) it targets the primaries and multiples – for example we can use the
water velocity or a picked primary trend – and (ii) it allows the reflections
(which are originally hyperbolic) to be approximated by a series of parabolic curves.
There are different ways of thinking about the parabolic
curvature (P); one of the easiest is to treat it as the relative moveout (in
ms) of an event at the far offset. This means a flat event has a P value of
zero, up-dip events (NMO corrected with too slow a velocity) have negative P
values, and down-dip events (NMO corrected with too fast a velocity) have
positive P values.
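As a quick sanity check on that sign convention, here is a small sketch (with assumed zero-offset time, far offset and velocities, and a hyperbolic moveout assumption) showing that correcting with too slow a velocity leaves negative (up-dip) far-offset moveout, and too fast a velocity leaves positive (down-dip) moveout:

```python
# Residual far-offset moveout after NMO, to check the sign convention.
# All numerical values are illustrative assumptions.
import numpy as np

def residual_moveout_ms(t0, x_far, v_true, v_nmo):
    """Approximate residual moveout (ms) at the far offset after NMO
    correction with velocity v_nmo, for an event with true velocity v_true."""
    t_true = np.sqrt(t0**2 + (x_far / v_true) ** 2)
    t_assumed = np.sqrt(t0**2 + (x_far / v_nmo) ** 2)
    return 1000.0 * (t_true - t_assumed)

t0, x_far, v_primary = 1.5, 3000.0, 2200.0   # s, m, m/s (assumed)
print(residual_moveout_ms(t0, x_far, v_primary, 0.9 * v_primary))  # negative: up-dip
print(residual_moveout_ms(t0, x_far, v_primary, 1.1 * v_primary))  # positive: down-dip
```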
Here’s the same synthetic as before; this time it has an NMO correction applied with 90% of the primary velocities, so that the primaries are up-dip and the multiples down-dip.
There are some key things to notice in this result.
Firstly, the PRT is only really effective when we have
enough live traces – in the shallow part of the gather (down to ~500ms), with
only 20 live traces, the events can still be seen but they are smeared. Shallower
than about 500ms TWT it is hard to make the PRT demultiple effective.
Secondly, the events form a “cross” shape in the X-T domain;
this means that we can damage the primary (or the multiple) if we don’t
transform a sufficiently large range of P-values (or we use too few values,
causing the signal to be aliased).
Some other general points to consider are:
- the more P-values and frequencies you have, the larger the matrix required for the transformation. You need a different matrix for each combination of offsets, and so regularising the data (and using a routine that saves the matrix) can make a big difference (see the sketch after this list)
- it is much more important to focus on removing the multiples from the near offsets (as these will “stack in”) than from the far offsets (which will tend to be suppressed by stacking, or “stack out”)
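To illustrate the point about the matrices, here is a sketch of the least-squares step behind a frequency-domain parabolic Radon transform. The function names and damping scheme are my assumptions (not a particular package’s API); the point is that the operator depends only on the frequency, the offsets and the P-values – not on the data – which is why regularised offsets allow it to be built and factorised once and reused:

```python
# Sketch of the damped least-squares step for one frequency slice of a
# frequency-domain parabolic Radon transform (assumed formulation).
import numpy as np

def radon_operator(freq, offsets, p_values):
    """Forward operator L with L[j, k] = exp(-2*pi*i*f * p_k * (x_j/x_max)**2).
    Depends only on (freq, offsets, p_values), so for a regularised
    geometry it can be built and cached once per frequency."""
    x2 = (offsets / offsets.max()) ** 2
    return np.exp(-2j * np.pi * freq * np.outer(x2, p_values))

def radon_slice(d_freq, freq, offsets, p_values, damping=0.01):
    """Damped least-squares tau-P model: m = (L^H L + eps*I)^-1 L^H d."""
    L = radon_operator(freq, offsets, p_values)
    LhL = L.conj().T @ L
    rhs = L.conj().T @ d_freq
    n_p = len(p_values)
    eps = damping * np.trace(LhL).real / n_p       # scale-aware damping
    return np.linalg.solve(LhL + eps * np.eye(n_p), rhs)
```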
Here are some synthetic examples of the PRT demultiple in action, which show how important it is to tune the parameters defining the “multiple-zone”, and check the
results!
From this the PRT demultiple looks promising; it can manage
even small separations between the primary and multiple as long as the
velocities have been determined accurately enough. It needs to be tested
carefully, however, if the primary and multiple are close in terms of moveout.
Here are some selected CDP gathers from the seismic line. In
the shallow part of the section the multiples have been effectively managed by
the deconvolution, however at depth (where the primary signals are weaker)
there’s still a lot of multiple energy from more complex ray-paths.
The
gathers have been NMO corrected with 90% of the picked primary velocity, so we
can be sure the primaries will be up-dip, or at the very least flat.
To apply the PRT demultiple we need to establish the overall
“far-offset moveout” range for the data, and to define which part of that
range we want to classify as multiples.
CDP gathers NMO corrected with 90% of the stacking velocity, with some example measurements of positive and negative far-offset moveout.
In this case I’m going to use a range of -300ms to +500ms,
and as the primary is over-corrected (through applying a Normal Moveout
correction with 90% of the primary velocity) I’m going to classify the multiple
as having a far-offset moveout between +24 and +500ms.
I’ll also need to specify how many P-values to use; one way
to approach this is to think about the total moveout range we are transforming,
and what each P-value will represent.
With a total range of 800ms (from -300ms to +500ms) I’m
going to use 200 P-values; this means that each P-value corresponds to 4ms of
moveout at the far offset (which is the sample interval).
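For bookkeeping, here are the same parameter choices in a short sketch (values from above; the variable names are mine):

```python
# Parameter bookkeeping: 200 P-values over an 800 ms range gives a 4 ms
# increment of far-offset moveout, i.e. the sample interval. The "multiple
# zone" is the +24 to +500 ms band described above.
import numpy as np

p_min_ms, p_max_ms, n_p = -300.0, 500.0, 200
dp_ms = (p_max_ms - p_min_ms) / n_p                 # 4.0 ms per P-value
p_values_ms = p_min_ms + dp_ms * np.arange(n_p)     # -300, -296, ..., +496
multiple_zone = (p_values_ms >= 24.0) & (p_values_ms <= 500.0)  # P's to subtract
```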
Finally I’m going to form the data into supergathers. As I
described in the post on geometry
we have different offsets in the “odd” and “even” numbered CDPs, each of
which has 60 traces. If we combine these
into a single supergather with 120 traces then each gather will have the same
offset distribution, and we’ll have better resolution in the PRT domain as
well.
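A minimal sketch of that supergather step (array shapes and names are my assumptions): combine an adjacent odd/even CDP pair and sort the 120 traces by offset, so every supergather sees the same offset distribution:

```python
# Form a 120-trace supergather from an adjacent odd/even CDP pair
# (60 traces each), ordered by offset.
import numpy as np

def make_supergather(gather_odd, offs_odd, gather_even, offs_even):
    """Combine two (n_samples, 60) gathers into one (n_samples, 120)
    supergather with traces sorted by offset."""
    data = np.concatenate([gather_odd, gather_even], axis=1)
    offsets = np.concatenate([offs_odd, offs_even])
    order = np.argsort(offsets)
    return data[:, order], offsets[order]
```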
Here
are the results – remembering that the demultiple is only applied
from 1400ms onwards:
This seems to have done a pretty efficient job at cleaning
up the gathers; it will be a lot easier to pick the deeper velocities now,
which should help with the imaging.
While it is important to check the results on the stacked
data as well (with this dataset the changes to the stack are actually relatively
small), the majority of the multiple energy ‘stacks out’, and our main goal is to
make it easier to get an improved velocity analysis.
Having used our first-pass velocities to help remove the
multiples we can now go back and check or re-pick the deeper velocities to
further improve the image.