This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive and techniques vary with time (and software); however, the idea is to provide a practical guide to applying seismic processing theory.
The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.
The data are available from New Zealand Petroleum and Minerals (part of the Ministry of Economic Development) under the “Open File” system.
The processing sequence developed so far:
The processing sequence we have developed so far gives us the ideal input for predictive (or gap) deconvolution: it is minimum phase, the swell noise and strong-amplitude linear noise have been removed, and much of the spatially aliased, high-frequency dipping energy has been eliminated as well.
On marine datasets, the main goals of predictive deconvolution are to collapse any residual effects caused by the limitations of the tuned airgun array, and to help suppress short-period reverberations in the wavelet. These reverberations arise mainly from energy that undergoes “multiple” reflections between the sea surface and the seafloor, but they can also come from inter-bed multiples where strong reflectors lie close together.
In this dataset I suspect there is also some mode-converted energy – specifically P-S-P mode conversions – especially towards the high shot-point end of the line, where the basement overthrust creates the right conditions for this to happen.
The basic tool we have for looking at the reverberations in a dataset is the autocorrelation function. It mathematically compares the trace with time-shifted copies of itself, usually over a fixed-length design window within the data. The autocorrelation function is always symmetrical about time zero, where there is a strong peak; subsequent peaks indicate lags at which a time-shifted version of the trace is similar to the original.
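If you want to experiment with this, here is a minimal numpy sketch of a windowed, one-sided autocorrelation; the function name and arguments are my own for illustration, and real software adds tapering, per-trace gating and so on.

```python
import numpy as np

def windowed_autocorr(trace, dt, win_start_s, win_len_s, max_lag_s):
    """One-sided autocorrelation of a trace over a fixed design window.

    trace       : 1D numpy array of samples
    dt          : sample interval in seconds
    win_start_s : start of the design window (s)
    win_len_s   : length of the design window (s)
    max_lag_s   : maximum lag to return (s), e.g. 0.3 for a 300ms display
    """
    i0 = int(round(win_start_s / dt))
    n = int(round(win_len_s / dt))
    d = np.asarray(trace, dtype=float)[i0:i0 + n]

    nlag = int(round(max_lag_s / dt))
    # r[k] = sum_t d[t] * d[t + k]; only one side is needed because the
    # autocorrelation is symmetrical about time zero
    r = np.array([np.dot(d[:len(d) - k], d[k:]) for k in range(nlag + 1)])
    return r / r[0]   # normalise so the zero-lag peak is 1.0
```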
Shots from the start and end of the line with an autocorrelation appended to the bottom. The design window for the autocorrelation function is indicated between the blue and yellow lines.
When working with deconvolution, this kind of display should
be your standard approach. I’ve used a bit of ‘trickery’ here in that I have reduced
the record length to 5500ms (for display purposes) and then extended it by
100ms to create a gap between the shots and their autocorrelations.
For the design window, I have defined the start gate using a calculation based on offset (using the speed of sound in water, 1500 m/s, and shifting this down by 200ms), and then made the gate length 2500ms.
You can define the gates manually, but on marine data I
prefer to create a gate that is tied to offset and, if needed, shifted by the
water bottom. In doing so, if you see an anomalous result, it is easier to
back-track and adjust – and of course on large multi-line projects it’s less
work.
The design gate needs to:
- be at least 5-7 times the length of the auto-correlation
- avoid any very strong reflections – usually just the seafloor, but there can be others
- contain reflections – if you can’t see reflections in the gate window, you will get a bad result
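To make the offset-tied gate concrete, here is a small sketch; the `design_gate` name and the streamer geometry in the example line are illustrative, and in practice you would also shift the gate by the water bottom where needed.

```python
import numpy as np

def design_gate(offsets_m, water_vel=1500.0, shift_s=0.2, length_s=2.5):
    """Offset-tied deconvolution design gate; returns (start, end) in seconds.

    The start time follows the direct/water-bottom arrival at the water
    velocity and is pushed down by a fixed shift; the gate length is the
    same for every trace (2500ms here, comfortably more than 5-7 times a
    300ms autocorrelation).
    """
    offsets_m = np.abs(np.asarray(offsets_m, dtype=float))
    start = offsets_m / water_vel + shift_s
    return start, start + length_s

# e.g. a hypothetical 3km streamer with 12.5m group spacing
starts, ends = design_gate(np.arange(100.0, 3100.0, 12.5))
```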
In this case I’ve got an autocorrelation length of 300ms
which should be enough to show the reverberations caused by the water bottom
(at about 80ms); note how reverberant the data is on SP900.
The reason to focus on the autocorrelation is that it is not
just a quality control parameter – it is also used to design the deconvolution
operator we will apply.
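Since the autocorrelation is what actually drives the operator design, here is a hedged single-trace sketch in the classic Wiener (normal-equations) style: the autocorrelation over the design gate fills a Toeplitz system, its solution is the prediction filter, and the corresponding prediction-error filter is convolved with the trace. The function name and defaults are mine for illustration; production code adds per-trace gating, tapering and more careful stabilisation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, dt, gap_ms=24.0, oper_ms=300.0, prewhiten=0.1,
                     design=None):
    """Single-trace predictive (gap) deconvolution via the normal equations.

    trace     : 1D numpy array of samples
    dt        : sample interval in seconds
    gap_ms    : prediction gap in milliseconds
    oper_ms   : prediction operator length in milliseconds
    prewhiten : percent added to the zero lag for numerical stability
    design    : (start_sample, end_sample) of the design gate; whole trace
                if None. The gate should be several times gap + operator.
    """
    trace = np.asarray(trace, dtype=float)
    gap = max(1, int(round(gap_ms * 1e-3 / dt)))
    nop = int(round(oper_ms * 1e-3 / dt))

    d = trace if design is None else trace[design[0]:design[1]]

    # one-sided autocorrelation out to lag gap + operator length
    nlag = gap + nop
    r = np.array([np.dot(d[:len(d) - k], d[k:]) for k in range(nlag + 1)])
    r[0] *= 1.0 + prewhiten / 100.0               # pre-whitening

    # Wiener prediction filter a: Toeplitz system R a = r[gap : gap + nop]
    a = solve_toeplitz(r[:nop], r[gap:gap + nop])

    # prediction-error filter: unit spike at lag 0, -a starting at the gap
    pef = np.zeros(gap + nop)
    pef[0] = 1.0
    pef[gap:] = -a

    return np.convolve(trace, pef)[:trace.size]
```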
You can use more complex designs – such as multiple design windows, for example one above and one below a strong unconformity – but the problem is that this limits the length of each design window, and hence the autocorrelation length that is viable. A longer autocorrelation gives a more stable result!
The other key parameter, alongside the length of the operator (which is defined in turn by the autocorrelation), is the predictive gap. In this case we are not aiming to do much in the way of wavelet shaping or whitening, so a longer, multi-sample gap is preferable to a short one.
This is where things become very subjective. Some people
have strong views on the gap being tied to particular values, or to the first
or second zero crossing of the auto-correlation function and so on – however
all deconvolution code is different, and my advice is to *always* test the gap.
There are three basic approaches to deconvolution we need to
test:
- we can work one trace at a time, in the X-T domain
- we can average autocorrelation functions over multiple traces, or even a shot
- we can apply deconvolution in the Tau-P domain
The first of these is the usual marine “work horse”, but in
situations where the data is noisy the trace-averaging approach can be
effective. Tau-P domain deconvolution is
a special case, as we’ll discuss later.
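As a sketch of the trace-averaging (ensemble) variant, the one-sided autocorrelations of all the traces in a shot can be summed before a single operator is designed and applied to every trace; the `ensemble_pef` name and defaults below are mine, and the design-gate handling from earlier is omitted for brevity.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ensemble_pef(shot, dt, gap_ms=24.0, oper_ms=300.0, prewhiten=0.1):
    """Design one prediction-error filter from autocorrelations summed over
    every trace in a shot (the trace-averaging approach).

    shot : 2D array of shape (ntraces, nsamples)
    """
    gap = max(1, int(round(gap_ms * 1e-3 / dt)))
    nop = int(round(oper_ms * 1e-3 / dt))
    nlag = gap + nop

    # sum the one-sided autocorrelations over all traces in the ensemble
    r = np.zeros(nlag + 1)
    for d in np.asarray(shot, dtype=float):
        r += np.array([np.dot(d[:len(d) - k], d[k:]) for k in range(nlag + 1)])
    r[0] *= 1.0 + prewhiten / 100.0

    a = solve_toeplitz(r[:nop], r[gap:gap + nop])
    pef = np.zeros(gap + nop)
    pef[0] = 1.0
    pef[gap:] = -a
    return pef

# design once from the whole shot, then apply to every trace:
# pef = ensemble_pef(shot, dt, gap_ms=24.0, oper_ms=300.0)
# decon_shot = np.array([np.convolve(tr, pef)[:tr.size] for tr in shot])
```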
For the X-T domain approaches, I generally start with operator tests using a 24ms gap; I run these from about 1.5x the lag of the first peak on the autocorrelation function up to the largest value that makes sense given the design criteria. In this case I might look at 150ms, 250ms and 300ms.
Once I have an operator, I then test gaps – usually 8ms, 16ms, 24ms, 32ms and 48ms, perhaps with a spiking (one sample) gap as well.
The results tend to be pretty subjective, and depend on the
interpreter’s needs, but 24ms is a fairly standard choice.
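In code terms, the test panels are just a loop over the parameter values; the snippet below reuses the hypothetical `predictive_decon` sketch from earlier and assumes `shot` holds the traces of a shot record and `dt` the sample interval.

```python
import numpy as np

# build operator-length panels with a fixed 24ms gap, then gap panels with
# the chosen operator, keyed for later side-by-side display
operator_tests = [150.0, 250.0, 300.0]        # operator lengths in ms
gap_tests = [8.0, 16.0, 24.0, 32.0, 48.0]     # gaps in ms

panels = {}
for oper_ms in operator_tests:
    panels[('oper', oper_ms)] = np.array(
        [predictive_decon(tr, dt, gap_ms=24.0, oper_ms=oper_ms) for tr in shot])

chosen_oper = 300.0                           # picked from the operator tests
for gap_ms in gap_tests:
    panels[('gap', gap_ms)] = np.array(
        [predictive_decon(tr, dt, gap_ms=gap_ms, oper_ms=chosen_oper) for tr in shot])
```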
I’m
not going to fill this post with images of different deconvolution test panels
on shots and stacks – you can see those in Yilmaz (you should probably have access to a copy, I’ve never worked
anywhere that didn’t have one available).
Shots from the start and end of the line; 24ms gap, 300ms operator X-T domain deconvolution applied. Start/end design gates displayed (blue and yellow lines).
Tau-P domain deconvolution is a little different. It is based on the idea that the multiples
are more periodic in the Tau-P domain than in X-T, but has the additional advantage
that you don’t have the same restriction on design gate lengths at far offsets
– and hence can have a longer, more stable operator.
The design process is the same as in the X-T domain, although in general a longer gap (32ms or 48ms) works better, and Tau-P domain deconvolution tends to be a lot more effective than X-T domain deconvolution.
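For reference, a basic forward linear Tau-P (slant stack) transform is conceptually simple: each output trace is a sum along a line t = tau + p·x through the gather. The sketch below (hypothetical `linear_tau_p` function) shows the idea; a usable implementation also needs an inverse transform, anti-alias protection and a sensible slowness range, and the deconvolution itself is then run trace-by-trace on the Tau-P traces before transforming back.

```python
import numpy as np

def linear_tau_p(gather, offsets, dt, slownesses):
    """Forward linear Tau-P (slant stack): sum each trace along t = tau + p*x.

    gather     : 2D array of shape (ntraces, nsamples)
    offsets    : 1D array of source-receiver offsets in metres
    dt         : sample interval in seconds
    slownesses : 1D array of p values in s/m
    """
    ntr, ns = gather.shape
    t = np.arange(ns) * dt
    taup = np.zeros((len(slownesses), ns))
    for ip, p in enumerate(slownesses):
        for itr in range(ntr):
            # sample the trace at t = tau + p*x by linear interpolation,
            # zero outside the recorded times
            taup[ip] += np.interp(t + p * offsets[itr], t, gather[itr],
                                  left=0.0, right=0.0)
    return taup

# e.g. slownesses spanning zero up to the water-velocity slowness
# p = np.linspace(0.0, 1.0 / 1400.0, 201)
# tp = linear_tau_p(shot, offsets, dt, p)
```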
In this case I’ve tested operators from 400ms to 500ms, and
gaps of 24ms, 32ms and 48ms. These tests
are a lot slower to run, of course.
In
practice the 500ms operator and 32ms gap gave the best result.
Shot record from start and end of the line with no deconvolution
Shot record from start and end of the line with 24ms gap, 300ms operator X-T deconvolution
Shot record from start and end of the line with 32ms gap, 500ms operator Tau-P deconvolution
In practice, the differences are relatively minor between
the X-T and Tau-P domain deconvolution results. This is partly because we have
already applied Tau-P domain linear noise suppression, which can have a big
impact on how effective the deconvolution is.
Ultimately the choice of what to use depends on the time and
resources you have available – the Tau-P domain deconvolution is
computationally expensive, but if you are using Tau-P domain linear noise
suppression, these methods can be combined at that stage.
Running a second deconvolution on common-receiver gathers can also help improve the effectiveness of the result, particularly if you have used shot-ensemble or Tau-P domain deconvolution in the first pass.
It’s
also important to review stacked sections – either the entire line or just key
areas – with these tests, to ensure that the results on the stacks match what
you require.
Stacked section with constant velocity, stretch mute, amplitude recovery and swell noise removal (no deconvolution)
Stacked section from above with Tau-P domain muting and deconvolution applied
Nice article, Guy!

But it says almost nothing about the Tau-P transform (linear or parabolic, how we should build the autocorrelation function, how to avoid edge effects, etc.). Quite a poor article.

Sorry you didn't find what you needed in the context of Tau-P deconvolution.

There is some discussion of the Tau-P transform, with examples, in the section on linear noise attenuation, which also indicates it is the "linear Tau-P domain", as opposed to the hyperbolic or parabolic.

In terms of the autocorrelation function, the same rules apply in (linear) Tau-P space as they do in X-T: avoid the strong reflectors, design over data rather than noise, and make sure the design window is long enough to statistically support the operator you are after.

In my experience, things like "edge effects" in Tau-P transforms tend to be highly software-specific and/or related to how much effort you put into addressing spatial aliasing – again, that's partly covered in the previous post on linear noise attenuation.

In general I was aiming to strike a balance between the information available via the SEG Wiki from Yilmaz and things like Rob Hardy's pages (http://www.xsgeo.com/course/decon.htm), while targeting people relatively new to processing.

I'm hoping to find the time to revisit this series in more detail; I was going to focus on a basic land processing guide, but perhaps a more detailed look at deconvolution could be a good topic.