This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive, and techniques vary with time (and software); the idea, however, is to provide a practical guide for applying seismic processing theory.
The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.
The data are available from New Zealand Petroleum and Minerals, part of the Ministry of Economic Development, under the “Open File” system.
The processing sequence developed so far:
We have now reached a stage where the “pre-processing” of the data is complete, and we have “cleaned” the shot gathers. The next stage is to create a model of the seismic velocities in the sub-surface.
This velocity field will be used in three main ways:
- to apply the Normal Moveout (NMO) Correction, as part of the Common MidPoint (CMP, also called Common Depth Point, CDP) processing approach, and thus improve the signal-to-noise ratio
- to help remove the multiples (delayed copies of the primary reflection signals caused by reverberation in the sub-surface)
- to focus the seismic energy and form a crisp image - at the moment some of the signal is scattered off the sub-surface geology
I’ll cover velocity analysis in a few different posts – this first part will make use of synthetics and look at some of the issues we are going to face.
The basic mechanics of velocity analysis are broadly similar in all packages. At a given CDP location we test-apply different NMO velocities and review the results in multiple analysis windows. We can then select the appropriate velocity, at a given two-way-time value, that best flattens the hyperbolic reflection in the CDP gather.
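To make the test-apply step concrete, here is a minimal numpy sketch (not any particular package's implementation). The moveout of a flat reflector follows t(x) = √(t0² + x²/v²), and only the correct trial velocity flattens the event back to t0; the reflector time, velocity, and offsets below are illustrative values.

```python
import numpy as np

def nmo_time(t0, offset, v_nmo):
    """Two-way time of a reflection at a given offset, for
    zero-offset time t0 (s) and NMO velocity v_nmo (m/s)."""
    return np.sqrt(t0**2 + (offset / v_nmo)**2)

# Illustrative reflector: t0 = 1.0 s, true velocity 2000 m/s,
# recorded on offsets of 100-3000 m
offsets = np.linspace(100.0, 3000.0, 30)
t_obs = nmo_time(1.0, offsets, 2000.0)

# Test-apply trial velocities: the NMO-corrected time is
# sqrt(t_obs^2 - (x/v)^2), and the event is flat (residual ~0)
# only when the trial velocity matches the true one
for v_trial in (1800.0, 2000.0, 2200.0):
    t_corr = np.sqrt(t_obs**2 - (offsets / v_trial)**2)
    print(f"v = {v_trial:.0f} m/s: "
          f"max residual = {np.max(np.abs(t_corr - 1.0))*1000:.1f} ms")
```

At 2000 m/s the residual is zero to machine precision; the too-slow and too-fast trials leave the gather over- and under-corrected respectively.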
The main analysis tools that are used are:
Constant Velocity Gathers (CVG) – the CDP is displayed in a series of panels, each is NMO corrected using a different constant velocity. The user picks the velocity which best flattens the gather at a given two-way-time value.
Constant Velocity Stacks (CVS) – a range of CDPs centred on the analysis location (typically 11 or 19 CDPs) are displayed as panels, each is NMO corrected using a different constant velocity. The user picks the “best” stacked image at each two-way-time value.
Velocity Spectra – a graphical display that shows a measure of trace-to-trace coherence once an NMO correction at a given velocity is applied to the data, contoured over velocity and two-way time. The measure is usually semblance, but can also be stacked amplitude or another coherence measure. The user can then pick a velocity function by focusing on the “bright spots” caused by primary reflections.
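For the semblance measure specifically, a minimal sketch (the window length and trace count are arbitrary): semblance is the ratio of stacked energy to total trace energy, reaching 1.0 for a perfectly flattened event and roughly 1/N for incoherent noise on N traces.

```python
import numpy as np

def semblance(window):
    """Semblance of an (n_samples, n_traces) window of NMO-corrected
    data: stacked energy over total trace energy. 1.0 means the
    traces are identical (a flattened primary); incoherent noise
    gives roughly 1/n_traces."""
    n_traces = window.shape[1]
    stack_energy = np.sum(np.sum(window, axis=1) ** 2)
    trace_energy = np.sum(window ** 2)
    return stack_energy / (n_traces * trace_energy + 1e-12)

rng = np.random.default_rng(0)
wavelet = np.sin(np.linspace(0.0, 2.0 * np.pi, 21))
flat = np.repeat(wavelet[:, None], 12, axis=1)   # perfectly flattened event
noisy = rng.standard_normal((21, 12))            # incoherent noise

print(semblance(flat))    # ~1.0
print(semblance(noisy))   # ~1/12
```

A velocity spectrum is just this number computed for every (velocity, two-way-time) cell and contoured.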
Other techniques for picking velocities include using a predetermined velocity function as a “guide”. Modifying this function, by applying a fixed percentage scaling for example, can be used to create a series of “variable velocity” panels (either stacks or NMO corrected gathers). These functions generally form a “fan” with a narrow range of velocities in the near-surface, and a wider range at depth.
The “variable velocity stacks” (VVS) and “variable velocity gathers” (VVG) are probably the most widely used tools for velocity analysis. There is a small risk that the velocity “fan” can be too narrow (missing the primary velocity trend), however modern tools allow you to recalculate this as a check. Most tools also allow the user to “mix and match” these displays, as well as test-apply velocity functions.
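Generating the velocity “fan” itself is simple scaling. A sketch with a hypothetical guide function (the times and velocities below are made up): a fixed percentage spread automatically gives a narrow absolute velocity range in the slow near-surface and a wider range at depth, which is exactly the fan shape described above.

```python
import numpy as np

# Hypothetical guide function: (TWT s, velocity m/s) control points
guide_twt = np.array([0.0, 0.5, 1.5, 3.0, 5.0])
guide_vel = np.array([1480.0, 1600.0, 2000.0, 2800.0, 3500.0])

# One scaled copy of the guide per panel, here 90%-110% in 2% steps
scalings = np.linspace(0.90, 1.10, 11)
fan = np.outer(guide_vel, scalings)   # shape (n_times, n_panels)

# Absolute spread of the fan at the top and bottom of the function
print(f"spread at {guide_twt[0]:.1f} s: {np.ptp(fan[0]):.0f} m/s")
print(f"spread at {guide_twt[-1]:.1f} s: {np.ptp(fan[-1]):.0f} m/s")
```

Each column of `fan` would then drive one NMO-corrected gather (VVG) or stack (VVS) panel.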
Velocities and Depth
In the near surface, there is a large ray-path difference between the near-offset and far-offset reflections – this means there is a big “moveout difference” between the near and far offsets. At depth, this variation is reduced. As a result, not only is there less moveout difference at depth, the accuracy with which we can determine the velocity also reduces.
In practice this means that while we can get a stacking velocity that produces a good stack at depth quite easily, the velocities we pick can vary a lot from location to location in the deep part of the section. This can produce a “saw-tooth” appearance in the deeper velocities that won’t impact the stack, but could cause issues with other processes that use the velocity field.
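You can see this loss of sensitivity with a quick calculation (the times, offsets, and 5% velocity perturbations below are illustrative):

```python
import numpy as np

def moveout_ms(t0, x, v):
    """NMO moveout t(x) - t0 in milliseconds."""
    return (np.sqrt(t0**2 + (x / v)**2) - t0) * 1000.0

x_far = 3000.0
# Shallow event: a 5% velocity change shifts the far-offset moveout a lot
shallow = moveout_ms(0.8, x_far, 1800.0) - moveout_ms(0.8, x_far, 1890.0)
# Deep event: the same 5% change barely moves it, so the pick is far
# less well constrained
deep = moveout_ms(4.0, x_far, 3000.0) - moveout_ms(4.0, x_far, 3150.0)
print(f"shallow: {shallow:.0f} ms   deep: {deep:.0f} ms")
```

The shallow pick moves by tens of milliseconds for a 5% velocity error, while the deep pick barely responds, which is why the deep velocity field can wander without hurting the stack.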
In the first 1500ms of the data, where the velocities are low, you will have very large NMO corrections – so large that the correction at the top and base of an event can be very different. This “stretches” seismic events and in doing so creates artificially low frequencies that need to be addressed; otherwise they will reduce the frequency content of the stacked image.
We can address this in two ways: (i) using a “stretch mute percentage” which removes the stretched part of the wavelet once it has been distorted too much, or (ii) by manually picking a mute. You need to be aware of how the data is being muted while you are picking velocities.
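As a sketch of option (i), here is one common definition of the stretch criterion – the factor t(x)/t0 by which the NMO correction stretches the wavelet – with a 120% threshold. Exact definitions vary between packages, so treat this as illustrative:

```python
import numpy as np

def stretch_mute(t0, offsets, v_nmo, max_stretch=1.2):
    """Boolean mask, True where a sample survives the stretch mute.
    Stretch is measured here as t(x)/t0, the factor by which NMO
    stretches the wavelet; max_stretch=1.2 corresponds to a "120%"
    mute in this convention (conventions differ between packages)."""
    t_x = np.sqrt(t0**2 + (offsets / v_nmo)**2)
    return t_x / t0 <= max_stretch

offsets = np.array([100.0, 1000.0, 2000.0, 3000.0])
# Shallow, slow event: heavy stretch removes all but the near offsets
print(stretch_mute(0.5, offsets, 1500.0))   # [ True False False False]
# Deeper, faster event: all offsets survive
print(stretch_mute(3.0, offsets, 2800.0))   # [ True  True  True  True]
```

Note how aggressive the mute is in the shallow section – this is why the stretch mute and the velocity picks need to be reviewed together.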
Accuracy with Offset
The NMO equation is an approximation. As a result, its accuracy decreases with increasing offset, meaning it’s harder to flatten gathers at far offsets. By adding a fourth-order term to the equation (in addition to the second-order velocity term), the accuracy of the reflection moveout can be improved, allowing you to pick velocities at those far offsets.
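One common form of the higher-order correction is the Taner–Koehler series, t²(x) ≈ c₁ + c₂x² + c₃x⁴, with c₁ = t0² and c₂ = 1/v_rms². A sketch with an invented c₃ term shows how large the second-order (hyperbolic) error can get at far offsets, and that fitting t² against x² recovers the fourth-order coefficient:

```python
import numpy as np

# Taner-Koehler series: t^2(x) ~ c1 + c2*x^2 + c3*x^4, with
# c1 = t0^2 and c2 = 1/v_rms^2. The c3 value here is invented
# purely to illustrate the size of the effect.
t0, v_rms = 1.0, 2000.0
c3 = -2.0e-15                     # s^2/m^4, hypothetical
x = np.linspace(0.0, 4000.0, 41)  # offsets in m
t_true = np.sqrt(t0**2 + x**2 / v_rms**2 + c3 * x**4)

# Second-order (hyperbolic) moveout alone misses the far offsets badly
t_hyp = np.sqrt(t0**2 + x**2 / v_rms**2)
resid_ms = (t_hyp - t_true) * 1000.0
print(f"hyperbolic error at {x[-1]:.0f} m: {resid_ms[-1]:.1f} ms")

# Fitting t^2 against x^2 (in km^2, for numerical conditioning) with a
# quadratic recovers the 4th-order coefficient, scaled by (1000 m/km)^4
coeffs = np.polyfit((x / 1000.0)**2, t_true**2, 2)
print(coeffs[0])   # ~ -2.0e-3
```

With this model the hyperbolic residual at 4 km offset is over 100 ms – far more than enough to destroy the far-offset contribution to the stack.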
The synthetic we are using is of course very “clean” – there are only reflections (and white noise) present, with no other signals to complicate velocity picking. In practice, even just adding in sea-floor multiples can make the primary velocities much more difficult to pick.
Picking Velocities on Line TRV-434
The line we are working with has a water depth that corresponds to about 80ms TWT; this means we can expect short-period water-bottom multiples as well as inter-bed multiples. The geology is also not flat, so the shapes of the reflection hyperbolae will be distorted. Finally, we can also expect some distorted velocities from diffractions, which will give high-velocity anomalies.
A few tips when picking:
- pick velocity control points no closer than 100ms TWT apart; closer than this, it can be hard to ensure that you are not “cycle skipping”, and the interval velocities calculated between points as a QC check start to become unstable
- seismic velocities almost always increase with depth. You can get small inversions, and with a near-constant velocity (such as in the near-surface sediments in deeper water) these are not worth worrying about. Large inversions usually require a big step down in interval velocity, and you need to consider what geology could produce that change
- very large velocity “kicks” up or down probably indicate some kind of non-reflection signal. Multiples give a low velocity trend, and diffractions will give an abnormally high velocity
- look at the stacks carefully when picking to make sure you are looking at a primary reflection
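The interval-velocity QC mentioned in the tips above is usually done with the Dix equation, v_int² = (v₂²t₂ - v₁²t₁)/(t₂ - t₁). A sketch with made-up picks shows both a sensible function and how closely spaced picks blow up:

```python
import numpy as np

def dix_interval(twt, v_rms):
    """Dix interval velocities between consecutive picks
    (twt in s, stacking/RMS velocities in m/s)."""
    t, v = np.asarray(twt), np.asarray(v_rms)
    num = v[1:]**2 * t[1:] - v[:-1]**2 * t[:-1]
    return np.sqrt(num / (t[1:] - t[:-1]))

# A sensible (made-up) picked function: interval velocities come
# out real and increasing with depth
twt = [0.5, 1.0, 1.6, 2.4]
v_ok = [1500.0, 1700.0, 2000.0, 2400.0]
print(dix_interval(twt, v_ok))

# Picks only 50 ms apart: a small 50 m/s wobble in the stacking
# velocity makes the interval velocity imaginary (NaN)
twt_bad = [1.00, 1.05]
v_bad = [2000.0, 1950.0]
with np.errstate(invalid="ignore"):
    print(dix_interval(twt_bad, v_bad))   # [nan]
```

This instability is exactly why the 100ms minimum spacing in the first tip matters: the Dix numerator is a small difference of large numbers, and it falls apart when the picks are too close together.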
On this line, there is a sedimentary sequence on the low-CDP end of the line down to around 3000ms TWT, where we start to lose velocity sensitivity. Beneath this the velocities are faster. On the high-CDP end the structure is more complex, with a metamorphic basement thrust fault block that has been pushed over the sedimentary packages. The complexity of the line makes it challenging, especially when there is still multiple contamination.
When picking velocities you may also be able to generate an “NMO Stretch Mute” to remove the worst of the NMO stretch distortions; ideally you should do this in conjunction with your velocities, although you can pick it separately on NMO corrected CDP gathers.
I’ll discuss how to check and edit your velocities in another post, but for now, here’s the kind of stacked section you should be aiming at with these data:
|Two stacks of TRV-434, using a single velocity function (above) and picked velocities (below). Both have a 120% stretch mute applied; note the improved resolution and the reduction in low-frequency, reverberant multiples using the picked velocities|