Thursday, 19 December 2013

Marine Processing - Part 8 | Picking Velocities II

This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive and techniques vary with time (and software); however, the idea is to provide a practical guide for applying seismic processing theory.

The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.

The data are available from New Zealand Petroleum and Minerals, part of the Ministry of Economic Development, under the “Open File” system.
The processing sequence developed so far:

  • Reformat from field data (in this case, SEGY)
  • Apply a combined minimum-phase conversion and anti-alias filter
  • Resample to a 4ms sample interval

  • Assign 2D marine geometry; 12.5m CDP spacing and 60 fold gathers
  • QC shots and near trace plots

  • Amplitude recovery using a T2 Spherical divergence correction
  • Amplitude QC and Edit using peak and RMS amplitude displays

  • Swell noise suppression using projective filtering
  • Interpolation to 6.25m group interval, 480 channels to unwrap spatially aliased dips
  • Tau-P transform, 500ms robust AGC "wrap", 960 p-values and transform range from -1400ms-1 to +1400ms-1
  • Tail mute to remove linear arrivals and linear noise
  • Predictive deconvolution: 32ms gap, 480 ms operator
  • Rho filter for low frequency compensation
  • Inverse Tau-P transform
  • Interpolation back to 25m group interval, 240 channels



In the last post we looked at some of the things that you need to bear in mind when you are performing velocity analysis.  

This post continues with that theme and looks at how to identify some key issues as part of our quality control checks.

The most important thing to remember is that whilst we are trying to pick the velocities of seismic reflections, there are other signals that we can pick by mistake.
The main problems we’ll see are:
  • Multiples - low velocity signals that are a delayed version of the primary. The delay is caused by the reflection energy reverberating between two strong reflectors, typically the sea surface and seafloor. There’s a nice diagram of this here.
  • Diffractions - where the energy is scattered (off an object or discontinuity); the tails of the diffraction have anomalously high velocities
The velocity we are picking is the “stacking velocity” – that is to say, it is the velocity which in conjunction with the hyperbolic NMO equation is going to flatten the CDP gathers and allow us to stack the data.

The “stacking velocity” has no other meaning apart from this; however, in the situation where the source-receiver offset is small (so that the angle of incidence is generally less than 45 degrees) and the rock layers are horizontal and isotropic, the “stacking velocity” approximates the “root mean square velocity” (RMS velocity).

This in turn is the RMS of the (P-wave) “interval velocity” within each of the isotropic layers.  There’s a useful page on these relationships here.

Most velocity analysis software will display the interval velocities while you work, which can act as a guide if you know the interval velocities you might expect in some common rock types.
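As a rough sketch of how those interval velocities can be derived from the picks, here is the Dix equation in a few lines of Python, assuming the stacking velocities can be treated as RMS velocities (small offsets, flat isotropic layers); the example picks are invented for illustration:

```python
import numpy as np

def dix_interval_velocities(t0, v_rms):
    """Dix equation: interval velocity of each layer between successive picks.
    t0: two-way times of the picks (s), v_rms: RMS/stacking velocities (m/s)."""
    t0 = np.asarray(t0, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    num = v_rms[1:] ** 2 * t0[1:] - v_rms[:-1] ** 2 * t0[:-1]
    den = t0[1:] - t0[:-1]
    return np.sqrt(num / den)

# Invented example picks: seafloor at 0.4 s TWT, velocities increasing with depth
times = [0.4, 1.0, 2.0, 3.0]                # s TWT
v_stack = [1480.0, 1650.0, 2100.0, 2600.0]  # m/s
print(dix_interval_velocities(times, v_stack))  # ~[1754, 2469, 3385] m/s
```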


Interval velocities (blue) plotted alongside the stacking velocities (black) on a semblance display


While the P-wave velocities for different lithologies vary significantly, the majority of multiples are generated in the water column, so their corresponding interval velocity matches that of seawater. Some typical ranges and values you might find are given below:


  • Seawater – around 1480ms-1 in general, although there is some variation due to salinity and temperature. I’ve seen velocities as high as 1540ms-1 on data from Antarctica, for example.
  • Seafloor sediment – the soft seafloor sediments typically ramp up with compaction, from seawater velocity (1480ms-1) very gradually up to 1750-2000ms-1, depending on thickness and age.
  • Partially compacted sandstone and shale – partially compacted sediments still have fluids trapped within the pores, so the velocities cover a large range. 2500ms-1 is a typical value, but it can push up to 5000ms-1 in older rocks such as the Devonian “Old Red Sandstone”, where the pores have been closed by pressure and diagenesis.
  • Limestone – thin, reefy limestones tend to have velocities close to 3700ms-1, while bulk limestones (or chalks) can be higher.
  • Salt (halite) – usually a constant 4500ms-1, although you have to be aware of minerals like anhydrite that can “raft” in the salt if it becomes ductile.
  • Basalt and granite – “bulk” igneous rocks tend to have high interval velocities of around 5500-6500ms-1.
  • Anhydrite – anhydrite layers typically have very high velocities, around 7000ms-1.


As well as incorrectly identified signals, we also have to watch out for simple mistakes and errors.  We can do this by looking both at the velocities (usually termed an isovel plot) and the stacked section.

Anomalously high or low values, relative to the overall trend, indicate a mis-pick or a problem with the velocity field, and show up as “bull’s-eyes”. These must be edited or removed.


A single velocity mis-pick (2500ms-1 rather than ~1700ms-1) on the isovel plot (left), and the corresponding impact on the stack (right, circled). The result is dimmed/faded reflectors in that region


While a single mis-picked velocity is easy to see on this occasion, a mis-picked trend can be much more challenging to spot. It is important to carefully compare the current stack to the previous one, looking for areas where there are improvements and places where the stack quality has degraded.

It’s also important to look at the stack while you are picking; a sudden lateral change in velocities can dramatically change the overall primary trend, but the multiples may be close to the primary trend you have been following.


The brute section (top), an incorrect velocity trend (middle) and improved picks (bottom); centred on the “tricky” area of the line i.e. the main thrust fault. The incorrect trend has followed slow multiples and fast diffractions, and so misses the high-velocity overthrust (around CDP 1320).


Of course, the stack is only one guide. We also need to look at NMO corrected gathers to make sure that the primaries are flat, the diffractions are up-dip, and the multiples are down-dip. It can be hard to judge “flatness” with a single velocity, so comparing different velocity fields, or looking at 95%, 100%, and 105% of the stacking velocities side by side, is worthwhile.
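A minimal sketch of that kind of side-by-side QC, assuming a simple hyperbolic NMO correction; `gather`, `offsets`, `t` and `v_stack` are hypothetical NumPy arrays rather than anything produced by a specific package:

```python
import numpy as np

def nmo_correct(gather, offsets, t, v_stack):
    """Hyperbolic NMO correction of a CDP gather (nsamples x ntraces).
    offsets in m, t (two-way time) in s, v_stack (velocity at each t0) in m/s."""
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # reflection time at offset x for every zero-offset time t0
        t_x = np.sqrt(t ** 2 + (x / v_stack) ** 2)
        # pull the amplitude at t_x back to t0 (linear interpolation)
        out[:, j] = np.interp(t_x, t, gather[:, j], left=0.0, right=0.0)
    return out

# Side-by-side QC panels at 95%, 100% and 105% of the picked velocities
# (gather, offsets, t and v_stack are assumed to already exist as NumPy arrays)
panels = {pct: nmo_correct(gather, offsets, t, pct * v_stack)
          for pct in (0.95, 1.00, 1.05)}
```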


Two CDP gathers (900 and 1800): NMO corrected with 95% (left), 100% (centre), and 105% (right) of the stacking velocities. CDP 900 looks flattened with 100%, but above 1000ms TWT CDP 1800 looks to have been picked too slow (some upward dip remains) and is best flattened with 105% of the stacking velocity.


At this stage in the processing the limitations we have on picking are the presence of multiples, diffractions, and dip. The velocities in and around the over-thrust at the high CDP end of the line vary rapidly, and it is difficult to identify the correct trend.


To improve further, we need to use the existing velocity field to remove the multiples, and then move on to pre-stack imaging of the data.




By: Guy Maslen

Thursday, 17 October 2013

Marine Processing - Part 7 | Picking Velocities

This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive and techniques vary with time (and software); however, the idea is to provide a practical guide for applying seismic processing theory.

The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.

The data are available from New Zealand Petroleum and Minerals, part of the Ministry of Economic Development, under the “Open File” system.
The processing sequence developed so far:

  • Reformat from field data (in this case, SEGY)
  • Apply a combined minimum-phase conversion and anti-alias filter
  • Resample to a 4ms sample interval

  • Assign 2D marine geometry; 12.5m CDP spacing and 60 fold gathers
  • QC shots and near trace plots

  • Amplitude recovery using a T2 Spherical divergence correction
  • Amplitude QC and Edit using peak and RMS amplitude displays

  • Swell noise suppression using projective filtering
  • Interpolation to 6.25m group interval, 480 channels to unwrap spatially aliased dips
  • Tau-P transform, 500ms robust AGC "wrap", 960 p-values and transform range from -1400ms-1 to +1400ms-1
  • Tail mute to remove linear arrivals and linear noise
  • Predictive deconvolution: 32ms gap, 480 ms operator
  • Rho filter for low frequency compensation
  • Inverse Tau-P transform
  • Interpolation back to 25m group interval, 240 channels

We have now reached a stage where the “pre-processing” of the data is complete, and we have “cleaned” the shot gathers. The next stage is to create a model of the seismic velocities in the sub-surface.

This velocity field will be used in three main ways:
  1. to apply the Normal Moveout (NMO) Correction, as part of the Common MidPoint (CMP, also called Common Depth Point, CDP) processing approach, and thus improve the signal-to-noise ratio
  2. to help remove the multiples (delayed copies of the primary reflection signals caused by reverberation in the sub-surface)
  3. to focus the seismic energy and form a crisp image - at the moment some of the signal is scattered off the sub-surface geology
For the majority of processing sequences, the velocity field is created iteratively. This allows us to improve velocities for the pre-stack time migration; if there are steep dips we may have to pick velocities again.

I’ll cover velocity analysis in a few different posts – this first part will make use of synthetics and look at some of the issues we are going to face.

The basic mechanics of velocity analysis are broadly similar in all packages. At a given CDP location we test-apply different NMO velocities and review the results in multiple analysis windows. We can then select the appropriate velocity, at a given two-way-time value, that best flattens the hyperbolic reflection in the CDP gather.

The main analysis tools that are used are:

Constant Velocity Gathers (CVG) – the CDP is displayed in a series of panels, each NMO corrected using a different constant velocity. The user picks the velocity which best flattens the gather at a given two-way-time value.

Constant Velocity Stacks (CVS) – a range of CDPs centred on the analysis location (typically 11 or 19 CDPs) is displayed as a series of panels, each NMO corrected using a different constant velocity. The user picks the “best” stacked image at each two-way-time value.

Velocity Spectra – a graphical display that shows a measure of trace-to-trace coherence once an NMO correction at a given velocity is applied to the data, creating a contour plot. This is usually semblance, but it can also be stacked amplitude or another coherence measure. The user can then pick a velocity function by focusing on the “bright spots” caused by primary reflections.
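To make the semblance measure concrete, here is a minimal sketch of one way to compute it, assuming a simple hyperbolic NMO correction; the array names are hypothetical and real packages differ in their windowing and normalisation details:

```python
import numpy as np

def semblance_spectrum(gather, offsets, t, v_trial, half_win=0.02):
    """Semblance of a CDP gather (nsamples x ntraces) over trial NMO velocities.
    t in s, v_trial in m/s, half_win = half-length of the smoothing window (s)."""
    dt = t[1] - t[0]
    half = int(half_win / dt)
    kernel = np.ones(2 * half + 1)
    ntr = gather.shape[1]
    spec = np.zeros((len(t), len(v_trial)))
    for iv, v in enumerate(v_trial):
        # NMO-correct the gather with this constant velocity
        corr = np.zeros_like(gather)
        for j, x in enumerate(offsets):
            t_x = np.sqrt(t ** 2 + (x / v) ** 2)
            corr[:, j] = np.interp(t_x, t, gather[:, j], left=0.0, right=0.0)
        num = np.sum(corr, axis=1) ** 2          # (stacked amplitude)^2
        den = ntr * np.sum(corr ** 2, axis=1)    # energy of the traces
        # smooth numerator and denominator over the time window, then divide
        spec[:, iv] = (np.convolve(num, kernel, 'same') /
                       (np.convolve(den, kernel, 'same') + 1e-12))
    return spec
```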


A synthetic CDP gather (left) and its semblance spectrum (right). The velocities are displayed at the top of the semblance plot. The first event is at 400ms (TWT) with a velocity of 1480ms-1 (a typical seawater velocity). Stacking velocities generally, although not always, increase with depth

Other techniques for picking velocities include using a predetermined velocity function as a “guide”. Modified versions of this function, created for example by applying a fixed percentage scaling, can be used to generate a series of “variable velocity” panels (either stacks or NMO corrected gathers). These functions generally form a “fan”, with a narrow range of velocities in the near-surface and a wider range at depth.

The “variable velocity stacks” (VVS) and “variable velocity gathers” (VVG) are probably the most widely used tools for velocity analysis. There is a small risk that the velocity “fan” can be too narrow (missing the primary velocity trend), however modern tools allow you to recalculate this as a check. Most tools also allow the user to “mix and match” these displays, as well as test-apply velocity functions.
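A minimal sketch of how such a velocity “fan” might be built around a guide function; the widening percentages are arbitrary illustrations rather than values used by any particular package:

```python
import numpy as np

def velocity_fan(t, v_guide, n_panels=11, pct_shallow=0.05, pct_deep=0.25):
    """A 'fan' of velocity functions around a guide function: a narrow spread of
    velocities in the near-surface, widening with depth (percentages are arbitrary)."""
    t = np.asarray(t, dtype=float)
    v_guide = np.asarray(v_guide, dtype=float)
    spread = np.linspace(pct_shallow, pct_deep, len(t))  # fractional spread vs. time
    scales = np.linspace(-1.0, 1.0, n_panels)            # -100% .. +100% of the spread
    return np.array([v_guide * (1.0 + s * spread) for s in scales])
```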


A typical set of velocity analysis display windows, with semblance based on a “fan” of velocities around a central function (left), variable velocity gathers that match this fan (centre) and an NMO corrected gather with velocities displayed (right). Note how the gathers are over-corrected (up dip) at low velocities, flat on the central function, and under-corrected (down dip) at high velocities

Velocities and Depth
In the near surface, there is a large ray-path difference between the near-offset and far-offset reflections – this means there is a big “moveout difference” between the near and far offsets. At depth, this variation is reduced. As a result, not only is there less moveout difference at depth, but the accuracy with which we can determine the velocity is also reduced.
 
In practice this means that while we can get a workable stacking velocity at depth quite easily, the velocity we pick could vary a lot from location to location in the deep part of the section. This can produce a “saw-tooth” appearance in the deeper velocities that won’t impact the stack, but could cause issues for other processes that use the velocity field.


The impact of two-way-time on velocity resolution and accuracy. The green ray-paths correspond to the far offset, and the pink ray-paths to the near offset (left). For a shallow event these ray-paths are very different, but at depth they converge. As a result, on the gather (centre) we see less moveout-difference with depth, making it harder to accurately determine the velocity and giving larger “bright spots” on the semblance spectra (right)

NMO Stretch
In the first 1500ms of the data, where the velocities are low, you will have very large NMO corrections – so large that the correction at the top and base of an event can be very different. This “stretches” seismic events and in doing so creates artificially low frequencies that need to be addressed; otherwise they will reduce the frequency content of the stacked image.

We can address this in two ways: (i) using a “stretch mute percentage” which removes the stretched part of the wavelet once it has been distorted too much, or (ii) by manually picking a mute. You need to be aware of how the data is being muted while you are picking velocities.
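As a sketch of approach (i), the snippet below removes samples once a simple stretch measure exceeds a threshold, interpreting a “120%” mute as a stretch factor of 1.2 (conventions vary between packages); the ratio t(x)/t0 is used here as a stand-in for the exact stretch (strictly the derivative dt(x)/dt0), and the array names are hypothetical:

```python
import numpy as np

def stretch_mute(gather, offsets, t, v_stack, max_stretch=1.2):
    """Zero out samples whose approximate NMO stretch exceeds max_stretch
    (1.2 here standing in for a '120%' stretch mute). Apply after NMO correction."""
    out = gather.copy()
    for j, x in enumerate(offsets):
        t_x = np.sqrt(t ** 2 + (x / v_stack) ** 2)
        # approximate stretch as the ratio of moveout time to zero-offset time
        stretch = np.divide(t_x, t, out=np.full_like(t, np.inf), where=t > 0)
        out[stretch > max_stretch, j] = 0.0
    return out
```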


A synthetic CDP gather (left) with NMO applied (right). Note how the reflection signal “smears” at far offsets, especially in the shallow part of the section where the NMO correction is the largest – this is “NMO stretch”. It arises when the NMO correction at the top of the reflection signal is significantly larger than that at the bottom of the reflection

Accuracy with Offset
The NMO equation is an approximation. As a result, its accuracy decreases with increasing offset, meaning it is harder to flatten gathers at far offsets. By adding a fourth-order term to the equation (in addition to the second-order velocity term), the accuracy of the reflection moveout can be improved, allowing you to pick velocities at those far offsets.
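For reference, the two-term (hyperbolic) NMO equation and its three-term extension can be written as below; the quartic coefficient (written here generically as C4) is not quoted in this post, and its exact form depends on the layering and anisotropy assumptions of each implementation:

```latex
% Two-term (hyperbolic) moveout: adequate at small offsets
t^2(x) = t_0^2 + \frac{x^2}{V_{\mathrm{NMO}}^2}

% Three-term moveout: a fourth-order term improves the fit at far offsets
t^2(x) \approx t_0^2 + \frac{x^2}{V_{\mathrm{NMO}}^2} + C_4\,x^4
```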


A gather with offsets from 100m to 3075m (left). A gather with doubled offsets, from 200m to 6150m (right). The semblance spectra for the doubled offsets (right) show improved sensitivity at depth, as a result of the larger offsets. However, the large NMO corrections and stretch factors distort the semblance in the shallow section

Other Signals
The synthetic we are using is of course very “clean” – there are only reflections (and white noise) present, with no other signals to complicate velocity picking. In practice, even just adding in sea-floor multiples can make the primary velocities much more difficult to pick. 


A more complex synthetic including seafloor multiples, with no NMO correction applied (left), the correct primary NMO correction applied (centre), and the semblance spectrum (right). Note how the multiples, with a lower velocity, remain under-corrected on the centre panel  

Picking Velocities on Line TRV-434
The line we are working with has a water depth that corresponds to about 80ms TWT; this means we can expect short-period (water-bottom) multiples as well as inter-bed multiples. The reflectors are also not flat, so the shapes of the reflection hyperbolae will be distorted. Finally, we can also expect some distorted velocities from diffractions, which will give high-velocity anomalies.

A few tips when picking:
  • pick velocity control points no closer than 100ms TWT apart; closer than this it can be hard to ensure that you are not “cycle” skipping, and the interval velocities calculated between points as a QC check start to become unstable
  • seismic velocities almost always increase with depth. You can get small inversions, and with a near-constant velocity (such as in the near-surface sediments in deeper water) these are not worth worrying about. Large inversions usually require a big step down in interval velocity, and you need to consider what geology could produce that change
  • very large velocity “kicks” up or down probably indicate some kind of non-reflection signal.  Multiples give a low velocity trend, and diffractions will give an abnormally high velocity
  • look at the stacks carefully when picking to make sure you are looking at a primary reflection
We typically pick velocities at a fixed increment – say 2km or 4km – and interpolate. This works reasonably well, but where the geology varies rapidly (e.g. a submarine canyon), a tighter increment over that area may be more suitable.
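As a minimal sketch of that interpolation step (not the algorithm of any particular package), assuming the picks are held in a hypothetical dictionary keyed by CDP number, each entry holding (times, velocities) arrays:

```python
import numpy as np

def interpolate_velocity_field(picks, cdps, t):
    """Linearly interpolate picked velocity functions onto every CDP.
    picks: {cdp_number: (times, velocities)} at the analysis locations."""
    locs = sorted(picks)
    # resample each picked function onto a common time axis
    funcs = np.array([np.interp(t, *picks[cdp]) for cdp in locs])
    # then interpolate between analysis locations, time sample by time sample
    field = np.empty((len(cdps), len(t)))
    for i in range(len(t)):
        field[:, i] = np.interp(cdps, locs, funcs[:, i])
    return field
```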

On this line, there is a sedimentary sequence on the low CDP end of the line down to around 3000ms TWT, where we start to lose velocity sensitivity. Under this the velocities are faster. On the high CDP end the structure is more complex, with a metamorphic basement thrust fault block that has been pushed over the sedimentary packages. The complexity of the line makes it challenging, especially when there is still multiple contamination.

When picking velocities you may also be able to generate an “NMO Stretch Mute” to remove the worst of the NMO stretch distortions; ideally you should do this in conjunction with your velocities, although you can pick it separately on NMO corrected CDP gathers.
I’ll discuss how to check and edit your velocities in another post, but for now, here’s the kind of stacked section you should be aiming at with these data:


Two stacks of TRV-434, using a single velocity function (above) and picked velocities (below). Both have a 120% stretch mute applied; note the improved resolution and the reduction in low-frequency, reverberant multiples using the picked velocities





By: Guy Maslen

Tuesday, 13 August 2013

Marine Processing - Part 6 | Predictive Deconvolution

This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive and techniques vary with time (and software); however, the idea is to provide a practical guide for applying seismic processing theory.

The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.

The data are available from New Zealand Petroleum and Minerals, part of the Ministry of Economic Development, under the “Open File” system.
The processing sequence developed so far:

  • Reformat from field data (in this case, SEGY)
  • Apply a combined minimum-phase conversion and anti-alias filter
  • Resample to a 4ms sample interval

  • Assign 2D marine geometry; 12.5m CDP spacing and 60 fold gathers
  • QC shots and near trace plots

  • Amplitude recovery using a T2 Spherical divergence correction
  • Amplitude QC and Edit using peak and RMS amplitude displays

  • Swell noise suppression using projective filtering
  • Interpolation to 6.25m group interval, 480 channels to unwrap spatially aliased dips
  • Linear noise removal via Tau-P domain muting, with removable 500ms robust AGC "wrap", 960 p-values and transform range from -1400ms-1 to +1400ms-1
  • Interpolation back to 25m group interval, 240 channels

The processing sequence we have developed so far gives us the ideal input for predictive (or gap) deconvolution; it is minimum phase, has the swell noise and strong-amplitude linear noise removed, and many of the spatially aliased, high-frequency dipping events have been eliminated as well.

On marine datasets, the main goals of predictive deconvolution are to collapse any residual effects caused by the limitations of the tuned airgun array, and to help suppress short-period reverberations in the wavelet. These reverberations arise mainly from energy that has had “multiple” reflections between the sea surface and seafloor, but they can also come from inter-bed multiples if there are strong reflectors that are close together.

In this dataset I suspect there is also some mode-converted energy – specifically P-S-P mode conversions – particularly at the high shot-point end of the line, where the basement overthrust creates the right conditions for this to happen.
 
The basic tool we have for looking at the reverberations in a dataset is the autocorrelation function. It mathematically compares the trace with time-shifted copies of itself, usually over a fixed-length design window within the trace. The autocorrelation function is always symmetrical about time zero, where there is a strong peak; subsequent peaks indicate lags at which a time-shifted version of the trace is similar to the original.
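A minimal sketch of a windowed, one-sided autocorrelation along these lines; the parameter names are made up and the normalisation details vary between packages:

```python
import numpy as np

def windowed_autocorrelation(trace, dt, gate_start, gate_len, ac_len):
    """One-sided autocorrelation of a single trace over a design gate.
    All times in seconds; returns lags from zero up to ac_len."""
    i0 = int(gate_start / dt)
    i1 = i0 + int(gate_len / dt)
    window = trace[i0:i1]
    nlags = int(ac_len / dt)
    full = np.correlate(window, window, mode='full')
    zero_lag = len(window) - 1                  # centre of the symmetric output
    acf = full[zero_lag:zero_lag + nlags]
    return acf / (acf[0] + 1e-12)               # normalise the zero-lag peak to 1
```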

Shots from the start and end of the line with an auto-correlation appended to the bottom. The design window for the autocorrelation function is indicated between the blue and yellow lines

When working with deconvolution, this kind of display should be your standard approach. I’ve used a bit of ‘trickery’ here in that I have reduced the record length to 5500ms (for display purposes) and then extended it by 100ms to create a gap between the shots and their autocorrelations.

For the design window, I have defined the start of the gate using a calculation based on offset (using the speed of sound in water as 1500ms-1 and shifting this down by 200ms), and then made the gate length 2500ms.
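That gate definition is simple enough to write down directly; a sketch using the values quoted above (1500ms-1 water velocity, a 200ms shift and a 2500ms gate length):

```python
def design_gate(offset_m, v_water=1500.0, shift_s=0.200, gate_len_s=2.500):
    """Offset-dependent deconvolution design gate: the direct-arrival time at the
    water velocity, shifted down by 200ms, with a fixed 2500ms gate length."""
    start = offset_m / v_water + shift_s
    return start, start + gate_len_s

# e.g. a far-offset trace (the offset value is illustrative)
print(design_gate(3075.0))   # -> (2.25, 4.75) seconds
```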

You can define the gates manually, but on marine data I prefer to create a gate that is tied to offset and, if needed, shifted by the water bottom. In doing so, if you see an anomalous result, it is easier to back-track and adjust – and of course on large multi-line projects it’s less work.
The design gate needs to:
  • be at least 5-7 times the length of the auto-correlation
  • avoid any very strong reflections – usually just the seafloor, but there can be others
  • contain reflections – if you can’t see reflections in the gate window, you will get a bad result
In this case I’ve got an autocorrelation length of 300ms which should be enough to show the reverberations caused by the water bottom (at about 80ms); note how reverberant the data is on SP900.

The reason to focus on the autocorrelation is that it is not just a quality control parameter – it is also used to design the deconvolution operator we will apply.
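To illustrate that design step, here is a minimal, generic sketch of predictive (gap) deconvolution built from an autocorrelation via the standard normal equations – a textbook formulation with invented parameter names, not the algorithm of any particular package. The autocorrelation passed in needs to cover at least gap-plus-operator-length of lags:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def predictive_decon(trace, acf, dt, gap=0.032, op_len=0.480, prewhiten=0.001):
    """Design and apply a predictive (gap) deconvolution operator from an
    autocorrelation function; gap and op_len in seconds."""
    a = int(round(gap / dt))                 # prediction distance in samples
    n = int(round(op_len / dt))              # operator length in samples
    r = acf[:n].copy()
    r[0] *= (1.0 + prewhiten)                # pre-whitening for numerical stability
    g = acf[a:a + n]                         # right-hand side: lags gap .. gap+op_len
    pred = solve_toeplitz(r, g)              # prediction filter (normal equations)
    # prediction-error filter: keep the trace, subtract the predictable part
    pef = np.concatenate(([1.0], np.zeros(a - 1), -pred))
    return lfilter(pef, [1.0], trace)
```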

You can use more complex designs – such as multiple design windows, one above and one below a strong unconformity – but the problem is that this limits the length of each design window, and hence the autocorrelation length that is viable. A longer autocorrelation gives a more stable result!

The other key parameter, as well as the length of the operator (defined in turn by the autocorrelation), is the predictive gap. In this case, we are not aiming to do much in the way of wavelet shaping or whitening, so a longer multi-sample gap is preferable to a short one.

This is where things become very subjective. Some people have strong views on the gap being tied to particular values, or to the first or second zero crossing of the auto-correlation function and so on – however all deconvolution code is different, and my advice is to *always* test the gap.

There are three basic approaches to deconvolution we need to test:
  • we can work one trace at a time, in the X-T domain
  • we can average autocorrelation functions over multiple traces, or even a shot
  • we can apply deconvolution in the Tau-P domain
The first of these is the usual marine “workhorse”, but in situations where the data is noisy, the trace-averaging approach can be effective. Tau-P domain deconvolution is a special case, as we’ll discuss later.

For the X-T domain approaches, I generally start with operator-length tests using a 24ms gap; I run these from about 1.5 times the lag of the first peak on the autocorrelation function up to the largest value that makes sense given the design criteria. In this case I might look at 150ms, 250ms and 300ms.
Once I have an operator, I then test gaps – usually 8ms, 16ms, 24ms, 32ms and 48ms, perhaps with a spiking (one-sample) gap as well.
 
The results tend to be pretty subjective, and depend on the interpreter’s needs, but 24ms is a fairly standard choice.

I’m not going to fill this post with images of different deconvolution test panels on shots and stacks – you can see those in Yilmaz (you should probably have access to a copy, I’ve never worked anywhere that didn’t have one available).


Shots from the start and end of the line; a 24ms gap, 300ms operator XT domain deconvolution applied. Start/end design gates displayed (blue, yellow lines) 

Tau-P domain deconvolution is a little different.  It is based on the idea that the multiples are more periodic in the Tau-P domain than in X-T, but has the additional advantage that you don’t have the same restriction on design gate lengths at far offsets – and hence can have a longer, more stable operator.

The design process is the same as in the X-T domain, but in general a longer gap (32ms or 48ms) works better. Overall, Tau-P domain deconvolution is a lot more effective than X-T domain deconvolution.

In this case I’ve tested operators from 400ms to 500ms, and gaps of 24ms, 32ms and 48ms.  These tests are a lot slower to run, of course.

In practice the 500ms operator and 32ms gap gave the best result.

Shot record from start and end of the line with no deconvolution

Shot record from start and end of the line with 24ms gap, 300ms operator XT deconvolution

Shot record from start and end of the line with 32ms gap, 500ms operator Tau-P deconvolution

In practice, the differences are relatively minor between the X-T and Tau-P domain deconvolution results. This is partly because we have already applied Tau-P domain linear noise suppression, which can have a big impact on how effective the deconvolution is.

Ultimately the choice of what to use depends on the time and resources you have available – the Tau-P domain deconvolution is computationally expensive, but if you are using Tau-P domain linear noise suppression, these methods can be combined at that stage.
 
Running a second pass of deconvolution on common receiver gathers can also help improve the effectiveness of the result, particularly if you have used shot-ensemble or Tau-P domain deconvolution in the first pass.
 
It’s also important to review stacked sections – either the entire line or just key areas – with these tests, to ensure that the results on the stacks match what you require.


Stacked section with: constant velocity, stretch mute, amplitude recovery and swell noise suppression (no deconvolution)

Stacked section from above with Tau-P domain muting and deconvolution applied





By: Guy Maslen
