Monday 26 May 2014

Marine Processing - Part 10 | Migration (continued)

This sequence of blog posts will build up into a complete description of a 2D marine processing sequence and how it is derived.
No processing sequence is definitive and techniques vary with time (and software); however, the idea is to provide a practical guide for applying seismic processing theory.

The seismic line used is TRV-434 from the Taranaki Basin offshore New Zealand. It is the same marine line used in our basic tutorial dataset.

The data are available from New Zealand Petroleum and Minerals (Ministry of Economic Development) under the “Open File” system.
The processing sequence developed so far:

  • Reformat from field data (in this case, SEGY)
  • Apply a combined minimum-phase conversion and anti-alias filter
  • Resample to a 4ms sample interval


  • Assign 2D marine geometry; 12.5m CDP spacing and 60 fold gathers
  • QC shots and near trace plots


  • Amplitude recovery using a T2 Spherical divergence correction
  • Amplitude QC and Edit using peak and RMS amplitude displays


  • Swell noise suppression using projective filtering
  • Interpolation to 6.25m group interval, 480 channels to unwrap spatially aliased dips
  • Tau-p transform: 500ms robust AGC "wrap", 960 p-values and transform range from -1400 ms⁻¹ to +1400 ms⁻¹
  • Tail mute to remove linear arrivals and linear noise
  • Predictive deconvolution: 32ms gap, 480 ms operator
  • Rho filter for low frequency compensation
  • Inverse Tau-p transform
  • Interpolation back to 25m group interval, 240 channels



  • NMO correct with 90% of primary velocity
  • Form 120-fold supergathers
  • Parabolic Radon Demultiple
    • Transform range -300ms to 500ms far-offset moveout;  200 p-values
    • Multiple  removed from +24 to +500ms far-offset moveout
    • Application from 1400ms onwards
  • Form 60-fold CDP gathers from supergathers
  • Remove 90% NMO correction

  • Re-pick velocities


The last post really covered the theoretical basics of how to apply pre-stack time migration to the dataset, as well as the kinds of parameters you might encounter. In practice there are typically a few more things you will need to think about.

The first is to be aware that Kirchhoff pre-stack time migration is kinematic; that is to say, the migration operator redistributes the seismic energy across the seismic section based on travel times.

Depending on how the algorithm has been implemented you might need to remove the spherical spreading part of your amplitude correction prior to the migration, and then run a new sequence of amplitude recovery tests after the migration.
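If the spherical spreading part of the correction does need to come off, the operation is simply the inverse of the earlier scaling. Here is a minimal Python sketch, assuming the gain applied earlier was a straightforward t² multiply (the function name and the t0 floor are illustrative, not from any particular package):

```python
import numpy as np

def remove_t2_gain(traces, dt=0.004, t0=1e-3):
    """Undo a previously applied t^2 spherical-divergence gain.

    traces: array of shape (n_traces, n_samples) with the gain applied
    dt:     sample interval in seconds
    t0:     small floor to avoid dividing by zero at the first sample
    """
    t = np.maximum(np.arange(traces.shape[1]) * dt, t0)
    return traces / t**2   # inverse of the sample-by-sample t^2 scaling

# Round trip: applying the gain and then removing it recovers the input
rng = np.random.default_rng(0)
data = rng.normal(size=(4, 100))
t = np.maximum(np.arange(100) * 0.004, 1e-3)
restored = remove_t2_gain(data * t**2)
assert np.allclose(restored, data)
```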

Pre-stack time migrated data from TRV-434. The panel on the left has had the spherical divergence correction removed before migration; the panel on the right has not. The cancellation of the migration operators is less effective where the amplitude correction has not been removed (red circled area) resulting in a noisier result with more artefacts


Another place where migration artefacts can be created is at the start and end of the seismic line, where we have incomplete summation of the migration operators. As we get to within one migration operator width of the edge of the data, the quality of the migrated image starts to deteriorate until at the very edge we only have artefacts.

This is most obvious in the “taper on” zone at the start of the line, however you will have similar effects at the end of the line, or where there are any gaps in the data.


The stacked section (left) and pre-stack time migrated section (right) at the start of line TRV-434. Shooting direction is left-to-right. The “taper on” zone has been populated with migration “swings” where there is no more data to cancel out the un-needed parts of the operator (blue boxed area). The image will also be incorrectly formed where there isn’t a full migration operator to contribute to the data


These effects can be managed by scaling down the data at the start and end of the seismic line, using a function that is approximately the same shape as the operator. This is sometimes built into the migration routines as an optional pre-migration scaling. The scaling can also be approximated with a simple function that leaves the data un-scaled at full fold and ramps to zero amplitude at zero fold.
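The simple ramp-to-zero version of this scaling can be sketched in a few lines (the function name and the linear ramp shape are illustrative assumptions; a ramp shaped like the operator itself would be closer to ideal):

```python
import numpy as np

def edge_taper(n_traces, ramp_traces):
    """Per-trace scalars: zero at the line ends, one at full fold.

    ramp_traces is the number of traces over which to ramp, roughly one
    migration-operator width, e.g. aperture / CDP spacing.
    """
    w = np.ones(n_traces)
    ramp = np.linspace(0.0, 1.0, ramp_traces)
    w[:ramp_traces] = ramp            # "taper on" at the start of line
    w[-ramp_traces:] = ramp[::-1]     # "taper off" at the end of line
    return w

# e.g. a 3000m aperture with 12.5m CDP spacing suggests a 240-trace ramp
weights = edge_taper(2000, 240)
assert weights[0] == 0.0 and weights[1000] == 1.0 and weights[-1] == 0.0
```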

The next thing you need to think about is the offsets to use.

The pre-stack time migration works by forming the data into common-offset planes; we want to make sure these will be fully populated since gaps in the data will result in imaging artefacts.
In this case, we have data with 25m spacing between shotpoints and 25m between receivers - which means that the full 120 channels in each shot are divided up between two CDPs. 

There’s a full discussion of this here, which shows how adjacent CDPs will have the same number of traces (60) but different offsets.

We have two choices. We can either: (1) migrate to 120 offset planes corresponding to each offset present in the shot record, or (2) migrate to 60 offset planes corresponding to the fold of the CDPs.

With 120 offset planes, we’ll have gaps that need to be interpolated. We could do this pre-migration or allow the migration to fill in the gaps for us, accepting that it might create some noise artefacts which we will need to remove.

With 60 offset planes, we’d want to choose those that minimise the difference between the actual offset of any given trace, and the target offset plane it will contribute to. 

The simplest way to do this is to average the offsets that will make up each offset plane. In this case the first offset plane will be at 270.5m; half way between the offset for channel 120 (the near offset) and channel 119 (the next offset).
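As a sketch of that averaging, assuming a 258m near offset and a 25m channel increment (values chosen to be consistent with the 270.5m figure above):

```python
import numpy as np

# Offsets for the 120 channels; the 258m near offset and 25m increment
# are assumptions chosen to be consistent with the 270.5m value quoted
offsets = 258.0 + 25.0 * np.arange(120)

# Average adjacent offset pairs to define 60 target offset planes
planes = offsets.reshape(60, 2).mean(axis=1)

assert planes[0] == 270.5                   # (258 + 283) / 2
assert np.allclose(np.diff(planes), 50.0)   # planes are 50m apart
```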

The biggest impact here is on run-time. If we take 120 offset planes and interpolate the data, the migration will take twice as long. Depending on time pressure, migrating to fewer offset planes than the fold (such as 30 planes, 100m apart) can sometimes be a good way to get a result faster.

If there are any strong amplitude contrasts in the data – for example a noisy trace we missed in the edit stage – these are going to also cause problems in the migration. The migrated energy will not sum correctly and will leave an imprint of the migration operator in the data.


Pre-stack time migrated data from line TRV-434; the right hand panel has had a series of 7Hz Ricker wavelets added to the input data at CDP 1100, offset 1733m, which are approximately two orders of magnitude higher than the surrounding data. While these artefacts show through clearly, a single spike on a trace would be more subtle


Unlike other Pre-stack processing, you cannot avoid this kind of amplitude-based artefact with a removable AGC prior to the migration. This would prevent all of the amplitudes summing correctly and would create more noise. The only solution is to check the data carefully for high amplitudes as we did earlier in the sequence.
 
The extent to which you can afford to test all of the parameters and issues worked through here and in the previous post depends upon your time, project objectives, geological complexity and the resources at hand.

A good “baseline” to start off with is a 45-degree dip angle and a 3000m aperture, and migrating to at least as many offset planes as you have fold of coverage. With these limits the aperture will only start to restrict the migration operator around 2-3 seconds TWT.
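That “around 2-3 seconds” figure can be checked with a quick back-of-the-envelope calculation; the 2750 m/s velocity used here is the average interval velocity quoted later in the post, taken as a representative assumption:

```python
import math

aperture = 3000.0              # m
dip_limit = math.radians(45.0)
v_avg = 2750.0                 # m/s, assumed representative velocity

# At depth z a 45-degree operator needs a half-width of z * tan(45) = z,
# so the aperture starts clipping it once z exceeds 3000m
depth_limit = aperture / math.tan(dip_limit)
twt_limit = 2.0 * depth_limit / v_avg    # convert depth to two-way time

assert abs(twt_limit - 2.18) < 0.01      # ~2.2s TWT, i.e. "2-3 seconds"
```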

It can also be worth looking at the approximate dips you want to image. Here there are dips of about 4ms/trace on a post-stack time migration, which (using the average interval velocity of 2750 ms⁻¹ calculated from my velocity functions, and remembering that it is two-way time) corresponds to 5.5m/trace, or an angle of ~24° (tan⁻¹[5.5/12.5]).
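That dip estimate can be reproduced directly:

```python
import math

dip_ms_per_trace = 4.0    # measured on the post-stack time migration
v_avg = 2750.0            # m/s, average interval velocity
cdp_spacing = 12.5        # m

# Two-way time, so halve the time dip before converting to distance
dz_per_trace = (dip_ms_per_trace / 1000.0 / 2.0) * v_avg
dip_angle = math.degrees(math.atan(dz_per_trace / cdp_spacing))

assert abs(dz_per_trace - 5.5) < 1e-9
assert abs(dip_angle - 23.7) < 0.1    # roughly the 24 degrees quoted
```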

This 60-fold migration took 696 seconds to run on my humble laptop (4 x 2.6 GHz cores, multithreaded to 8); this was run in parallel using 5 processes so that I could carry on working (as multi-threading is not quite the same as having real cores).

While different migration implementations use different algorithms and scale differently, here are some runtimes I got:

Fold   Angle        Aperture   Stretch   Run time
60     45 degrees   3000m      120%      696s
60     60 degrees   3000m      120%      799s
60     60 degrees   6000m      120%      1183s
60     75 degrees   6000m      120%      1204s
60     75 degrees   9000m      120%      1313s

As you can see, opening up both the angle and aperture starts to make a big difference to the run-time of the migration. The diagram in this entry shows how these interact and why adjusting only the angle makes little difference. It is also worth remembering in this case that the seismic line is less than 24km long, so wider apertures will only impact the centre of the line.

In these examples I have not modified the stretch mute or the degree of anti-alias filtering; these have an impact on the runtime as well as the image quality at higher angles and bigger ranges.

In this case it makes sense to focus on the central portion of the line – between CDPs 800 and 1500, and between 2000ms and 3500ms two-way-time – as this was the main structural target.


A section of line TRV-434 pre-stack time migrated, with a 15° (left), 30° (centre) and 75° (right) dip limit. Aperture is constant at 6000m


Running a full suite of aperture and dip limit tests can be time consuming and, as in this case, there is often a compromise on the dip limit in terms of image quality and introduced noise. Though I’ve not shown the full suite of tests here, the lower dip angles give a cleaner result - which makes sense when you look at the rough dip calculation above.

I have chosen a 30° angle and a 6000m aperture as it seems to be the best compromise.


Line TRV-434 stacked (top), and with pre-stack time migration applied (bottom). Both sections have a 500ms AGC applied. Migration dip limits were 35° and the aperture used was 6000m


With the migration completed – which, as I mentioned in the last entry, might include re-picking velocities and re-migration of the data – the bulk of the processing is now complete.

Post-migration processing is sometimes termed “finalisation” and usually includes some random noise attenuation, scaling, and filtering before creating a final SEGY output.




By: Guy Maslen

Monday 28 April 2014

Marine Processing - Part 10 | Migration



In practice, we only get coherent reflections (that are in the correct lateral position) when the rock layers are flat and continuous.
 
One way to think of this is to consider the “zero-offset normal ray” section. This is where the reflected ray-path between a source and receiver at the same location at the surface makes a right-angle with the reflecting layer.


An example of “zero offset normal rays” that illustrates why only flat events will be correctly positioned laterally on a normal ray section.  The recorder at the surface has no directional information, so both reflections will appear on the same trace at different times, as shown on the right

In effect the “zero offset normal ray” section is a stack – we have corrected the source-receiver offset to zero using the NMO correction on a CDP gather, and then stacked (summed) the traces together.
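The NMO-then-stack idea can be written out as a toy sketch (nearest-neighbour interpolation and no stretch mute, purely to show the geometry; function and variable names are illustrative):

```python
import numpy as np

def nmo_and_stack(gather, offsets, v_rms, dt):
    """NMO-correct a CDP gather to zero offset, then sum the traces.

    gather:  (n_traces, n_samples) array
    offsets: source-receiver offsets in metres
    v_rms:   RMS velocity in m/s (a single constant here, for brevity)
    dt:      sample interval in seconds

    Nearest-neighbour interpolation keeps the sketch short; real code
    uses sinc/linear interpolation plus a stretch mute.
    """
    n_traces, n_samples = gather.shape
    t0 = np.arange(n_samples) * dt
    stack = np.zeros(n_samples)
    for i, x in enumerate(offsets):
        # hyperbolic travel time at offset x for each zero-offset time
        t = np.sqrt(t0**2 + (x / v_rms) ** 2)
        idx = np.round(t / dt).astype(int)
        ok = idx < n_samples
        stack[ok] += gather[i, idx[ok]]
    return stack / n_traces

# Synthetic flat reflector at 1.0s zero-offset time, 2000 m/s
dt, v = 0.004, 2000.0
offsets = np.linspace(0.0, 1000.0, 11)
gather = np.zeros((11, 500))
for i, x in enumerate(offsets):
    gather[i, int(round(np.sqrt(1.0 + (x / v) ** 2) / dt))] = 1.0
stack = nmo_and_stack(gather, offsets, v, dt)
assert stack.argmax() == 250   # the event stacks in at 1.0s (sample 250)
```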

As well as the “ray path” view we can also think of how waves propagate based on Huygens’ Principle. Where we have discontinuities (such as faults, or intrusions) the secondary wavelets used in Huygens’ construction will not fully constructively or destructively interfere, giving rise to diffractions.

When looking at a stacked section, the key things to remember are that:
  • dipping events will be mispositioned laterally and appear less steep than they really are
  • the steeper the dip the greater the degree of mispositioning
  • discontinuities (such as faults, or intrusions) will give rise to diffractions
  • diffracted events may cross reflected events

In order to resolve these issues we need to migrate the data.


Stacked (left) and migrated (right) portions of the TRV-434 seismic line.  The migrated events are a little steeper and have moved laterally up-dip.  Diffractions caused by faults and events that are not continuous have been collapsed.  Crossing complex events have been resolved.


Migration uses a mathematical model to reposition data to its correct lateral position and collapse diffractions. This usually requires some kind of velocity model; as computation speeds have improved it has become practical to use algorithms that can work with more complex – or realistic – velocity models.

Post stack migrations (of the type shown above) have largely been replaced by pre-stack migration techniques.  In pre-stack migrations the data are formed into offset planes which are migrated to their near-offset position and time, which replaces the NMO correction and the migration stage.

In this case the line we are working on has some fairly steep dips and complex ray-paths; after a couple of rounds of velocity picking you will have come across some situations where the velocities are hard to pick;  this is because we have diffracted events (which do not have hyperbolic moveout) as well as dipping events.
 
Dipping events cause problems not only because the reflection hyperbola is distorted, but because the “common mid-point” is smeared and shifted up-dip.


Illustration of how the common midpoint (CMP, also called CDP) does not contain information from a single sub-surface location (or bin) when the events are dipping. The green ray paths show what we might assume is happening when we collect data by its common mid-points; however, the red ray paths show the actual ray paths for the shortest (near) and longest (far) offsets – these are both displaced up-dip, by differing amounts


Pre-stack migration resolves this issue allowing better velocities to be picked and improved focussing to be obtained on dipping events. For that reason it is common to create a new velocity field after pre-stack time migration, and, in some cases, run a second iteration of the pre-stack time migration with that new velocity model.

Velocities picked on pre-stack time migrated data are a better approximation to the actual RMS velocity of the sub-surface layers, and so can be more useful as a starting point for depth conversion.

A good way to understand the migration parameters is to look at some impulse response functions; this will show you the shape of the migration operator with different parameters.  You can do this by zeroing all of the traces, and placing some suitable ricker wavelets at different time values then running migrations.
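Generating that kind of impulse-response input is straightforward; here is a sketch (the dimensions and wavelet times echo the example in the figure below, and the function names are illustrative):

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    n = int(round(length / dt)) + 1      # odd sample count, peak centred
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def impulse_test_section(n_traces, n_samples, dt, live_trace, times, f=35.0):
    """An all-zero section with Ricker wavelets on a single trace,
    ready to be run through the migration under test."""
    data = np.zeros((n_traces, n_samples))
    w = ricker(f, dt)
    half = len(w) // 2
    for t_s in times:
        i = int(round(t_s / dt))
        data[live_trace, i - half:i - half + len(w)] += w
    return data

# One live trace at CDP index 1100, wavelets at 1.0, 2.0, 3.5 and 5.0s
section = impulse_test_section(2201, 1500, 0.004, 1100, [1.0, 2.0, 3.5, 5.0])
assert np.count_nonzero(section[1099]) == 0      # only one live trace
assert abs(section[1100, 250] - 1.0) < 1e-9      # wavelet peak at 1.0s
```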


An impulse response function showing the full pre-stack time migration operator for a trace at CDP 1100, with an offset of 1283m, and 35Hz Ricker wavelets at 1000ms, 2000ms, 3500ms, 5000ms. Since the pre-stack time migration corrects to zero offset, the operators include a correction for normal moveout. The migration was run with “open” parameters and a migration velocity of 1480 ms⁻¹


The parameters we can vary do two things.  They serve to speed up the migration (by limiting the operator in terms of its spatial extent) as well as reduce or remove artefacts from the data.

The main artefacts we want to eliminate are wavelet stretch, which occurs for the same reason as NMO stretch when picking velocities, and aliasing of the operator shape. The operator becomes aliased because the vertical sampling of the data (time sample interval converted to distance) is usually at a much higher resolution than the horizontal sampling (the CDP spacing).

In this case, we have a 4ms sample interval at a constant velocity of 1480 ms⁻¹, which corresponds to a 2.96m vertical resolution (remembering we are looking at two-way-time), and the CDP spacing is 12.5m.  As a result the operator will be spatially aliased at high angles.


Impulse response function of an 80Hz Ricker wavelet through pre-stack time migration with a constant velocity (left), with a close-up variable-area/wiggle (VAWG) display (right) showing detail of the operator at a high dip angle; note the distortion of the operator wavelet caused by spatial aliasing. To migrate this without aliasing we would need to be able to represent the 80Hz wavelet horizontally at 1480 ms⁻¹, which corresponds to a full wavelength of 18.5m. With two samples per wavelength, this would need a CDP spacing of 9.25m, as opposed to the 12.5m spacing we have
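The numbers in that caption are easy to verify:

```python
v = 1480.0    # m/s, constant velocity used for the impulse responses
dt = 0.004    # s, sample interval

# Vertical sampling: two-way time, so each sample spans v * dt / 2
dz = v * dt / 2.0
assert abs(dz - 2.96) < 1e-9     # 2.96m, versus 12.5m CDP spacing

# To represent an 80Hz wavelet horizontally at 1480 m/s we need
wavelength = v / 80.0            # 18.5m full wavelength
required_cdp = wavelength / 2.0  # two samples per wavelength
assert wavelength == 18.5 and required_cdp == 9.25
```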


The main controls we usually have in a pre-stack time migration are:

Migration Aperture, or Range
  Description: The horizontal distance that a trace is allowed to migrate. Cuts off the migration operator at a specific distance, usually from the CDP position of the trace pre-migration.
  Impact: Reduces how many input traces contribute to each output trace, and so makes the migration run faster. Reduces the ability to image steep-dip, deep events.

Migration Angle
  Description: Limits the maximum angle of the migration, usually measured from the vertical trace at the CDP location out to the operator, so that 90 degrees is horizontal.
  Impact: Reduces how far each trace migrates, so requires less calculation and runs more quickly if applied in the algorithm and not as a mute. Can act as a way of reducing both stretch and aliasing, and is approximately the maximum dip to image.

Anti-Alias Filtering
  Description: Acts to limit the operator shape where it might become aliased or distorted spatially; usually has various degrees (weak, medium, strong) that can be applied.
  Impact: Little impact on the speed of migration, as it can be computationally expensive to compute the limits even though less data is migrated; if applied as a mute it can actually slow down the migration. Avoids artefacts where you have rapid lateral or vertical velocity gradients and a single dip value is inappropriate.

Stretch Muting
  Description: Acts to limit the operator if the shape of the wavelet is too distorted in time. Similar effect to NMO stretch.
  Impact: Reduces low-frequency artefacts in the shallow-mid portion of the data. May cause the near surface to be muted, unless there is a “near angle protection” option.


Pre-stack time migration operators from an impulse response function, with some limits applied. In this case the red line shows the angle parameter, set to 60°, and the blue vertical line shows the effect of limiting the migration aperture to 2500m, which in this case corresponds to 200 CDPs. The input trace is at CDP 1100. With a constant 1480 ms⁻¹ velocity, the angle parameter cuts off the migration before it is spatially aliased, although the shallow event is still stretched. No stretch mute was applied


Pre-stack time migration is a zero-offset 'image ray' migration. This means the results will look like data that has zero offset between source and receiver. However, unlike the NMO correction, these will also be corrected for the dip of any reflectors. In the zero-offset 'image ray' migration, the rays are normal to the surface - not the rock interface.


Zero offset ray-paths for the normal ray (red) and image rays (orange) to a complex interface. The image ray is identical to the normal ray if the interface is horizontal. Where the interface is dipping, the image ray path starts and ends at a surface position (blue circle) that is above the reflection point on the interface. The normal ray path for the dipping interface starts and ends at a different surface location (purple circle), which is laterally mispositioned from the reflection point


The results of a pre-stack time migration will be zero-offset image ray gathers; these will be “flattened gathers” if:
  • the input velocity model was correct
  • we don’t have to worry about anisotropy
  • we don’t have to worry about the ray paths being refracted

In practice, these are all likely to be approximations. The velocities we supplied came from normal-ray gathers, and so, as we identified earlier, they are contaminated by dip.
   
Beyond 45° or so, we need to start worrying about anisotropic behaviours; this is where the simple second-order normal moveout equation breaks down and we must pick higher order terms and use anisotropic migrations.

Finally, if we have significant refractions as a result of rapid lateral or vertical velocity gradients, then we need to think about using a pre-stack depth migration.  

When we examine the gathers after migration we can look at whether we need to modify or re-pick the velocities (sometimes called picking residual moveout, or RMO) before making a final stack.

If the residual moveout is large it might be worth using the new velocity field and running a new migration. This could be another iteration of the same algorithm, however if there are non-hyperbolic components to the residual moveout then you might need to consider if it is worth the time and effort to move to a more sophisticated migration algorithm.


Stacked (top), post-stack time migrated (middle) and pre-stack time migrated (bottom) sections from TRV-434, using the same velocity model and with a 500ms robust AGC applied. The key thing to observe is the improved imaging of the steeper dip, strong amplitude events and reduction in noise between the post- and pre-stack migrations (circled); this is typically where a pre-stack time migration offers advantages



By: Guy Maslen

Thursday 23 January 2014

Marine Processing - Part 9 | Demultiples



One of the key issues in marine processing is dealing with multiples. Multiples are delayed reflections that interfere with the “primary” reflections we want to image. The delay occurs because the reflection energy has taken a more complex and longer ray path from source to receiver, reverberating between two layers that are highly reflective.

While any two layers with high reflectivity can create a multiple image, in general it is energy reverberating in the water column (i.e. between the seafloor and the sea surface) that is the most important issue to contend with.

There are three main ways in which to remove multiples:
  1. treat the multiple as a long, ringing wavelet, and use signal processing to simplify it
  2. create a model of the multiple and try to subtract it from the data
  3. use the difference in stacking velocity of the primary and multiple to remove the multiple

We have already used the “signal processing” approach when we applied deconvolution to collapse the reverberations, so any further reduction in multiples is going to require a different technique.

The modelling approaches, such as SRME (Surface Related Multiple Elimination), usually require the primary seafloor reflection to be the first clear signal. This limits their application on shallow water lines such as this one, although there are some variations that can address this.

This leaves us with the velocity discrimination methods; I’m going to use the Parabolic Radon Transform (PRT) approach. The Radon transform is a generic mathematical procedure where input data in the frequency domain are decomposed into a series of events in the Radon domain. The PRT has been the “workhorse” demultiple method for many years; even when SRME is deployed it is often used to overcome limitations in the modelling.

The Tau-p transform is a special case of the Radon transform where Tau (intercept time) and P (slowness, slope, or parabolic curvature) represent the two axes in Tau-p space. The data are summed along a series of parabolic tracks, each described by a curvature value (P). The user defines the number of P values to use as well as the total P range, which can span negative to positive values.
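The “sum along parabolic tracks” idea can be written out directly. This is a brute-force time-domain sketch, parameterising each parabola by its moveout (in seconds) at the far offset; production implementations work per frequency component with a least-squares inversion, so treat this purely as an illustration of the geometry:

```python
import numpy as np

def forward_parabolic_radon(gather, offsets, dt, q_values):
    """Naive parabolic Radon transform by direct summation.

    Sums the gather along t(x) = tau + q * (x / x_max)^2, where q is
    the moveout (seconds) at the far offset x_max. Returns an array
    of shape (len(q_values), n_samples).
    """
    n_traces, n_samples = gather.shape
    x2 = (offsets / offsets.max()) ** 2
    radon = np.zeros((len(q_values), n_samples))
    for j, q in enumerate(q_values):
        for i in range(n_traces):
            shift = int(round(q * x2[i] / dt))
            if shift >= 0:
                radon[j, : n_samples - shift] += gather[i, shift:]
            else:
                radon[j, -shift:] += gather[i, : n_samples + shift]
    return radon

# A parabolic event with +96ms far-offset moveout, tau = 400ms
dt = 0.004
offsets = np.linspace(0.0, 1000.0, 11)
gather = np.zeros((11, 300))
for i in range(11):
    gather[i, 100 + int(round(0.096 * (offsets[i] / 1000.0) ** 2 / dt))] = 1.0

q_values = np.arange(-0.3, 0.5, 0.004)        # -300ms to +500ms range
radon = forward_parabolic_radon(gather, offsets, dt, q_values)
j, tau = np.unravel_index(radon.argmax(), radon.shape)
assert tau == 100 and abs(q_values[j] - 0.096) < 0.002
```

The event focuses to a single point in the Radon panel at its intercept time and curvature, which is exactly the separation the demultiple exploits.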

One of the key elements of the PRT as a demultiple method is that we usually apply an NMO correction of some sort to the data. This helps do two things: (i) target the primaries and multiples – for example we can use water velocity or a picked primary trend, and (ii) approximate the reflections (which are originally hyperbolic) through a series of parabolic curves. 

There are different ways of considering the parabolic curvature (P) however one of the easiest is to look at the relative moveout (in ms) of the events at the far offset. This means a flat event has a P value of zero, up-dip events (NMO corrected with too slow a velocity) have negative p values, and down-dip events (NMO corrected with too high a velocity) have positive P values.

Here’s the same synthetic as before; this time it has an NMO correction applied with 90% of the primary velocities so that the primaries are up-dip and the multiples down-dip.


A synthetic CDP gather corrected with 90% of the primary NMO velocity (left) and its parabolic Radon transform (right). The Radon transform is labelled by “far offset moveout” so that the over-corrected, up-dip primaries correspond to negative numbers and the under-corrected, down-dip multiples correspond to positive numbers. Where the primaries and multiples cross over in the X-T domain, they are separated in the PRT domain (circled)

There are some key things to notice in this result. 

Firstly, the PRT is only really effective when we have enough live traces – in the shallow part of the gather (down to ~500ms), with only 20 live traces, the events can still be seen but they are smeared. Shallower than about 500ms TWT it is hard to make the PRT demultiple effective.

Secondly, the events form a “cross” shape in the X-T domain; this means that we can damage the primary (or the multiple) if we don’t transform a sufficiently large range of P-values (or we use too few values, causing the signal to be aliased).

Some other general points to consider are:
  • the more P-values and frequencies you have, the larger the matrix required for the transformation. You need a different matrix for each combination of offsets, and so regularising the data (and using a routine that saves the matrix) can make a big difference
  • it is much more important to focus on removing the multiples from the near offsets (as these will “stack in”) as opposed to the far offsets (which will tend to be suppressed by stacking, or “stack out”)

Here are some synthetic examples of the PRT demultiple in action, which show how important it is to tune the parameters defining the “multiple-zone”, and check the results!


The synthetic gather (top left) with two different PRT demultiples applied (top centre, top right). The transforms of each gather are also displayed underneath. The demultiple only starts at 550ms TWT to avoid damaging the shallow events as these are poorly defined in the PRT space where the NMO stretch mute has limited the offsets. The centre panels show the “multiple zone” as being defined between +20 and +1600ms “far-offset moveout”, on the right this has been increased to include dips up to -40ms moveout. This allows the deepest multiple with the smallest moveout variation to be attacked

From this the PRT demultiple looks promising; it can manage even small separations between the primary and multiple as long as the velocities have been determined accurately enough. It needs to be tested carefully, however, if the primary and multiple are close in terms of moveout.

Here are some selected CDP gathers from the seismic line. In the shallow part of the section the multiples have been effectively managed by the deconvolution, however at depth (where the primary signals are weaker) there’s still a lot of multiple energy from more complex ray-paths.

The gathers have been NMO corrected with 90% of the picked primary velocity, so we can be sure the primaries will be up-dip, or at the very least flat.


Selected CDP gathers displaying the deeper data from about 800ms TWT onwards.  The gathers are corrected with 90% of the primary velocity. The multiples become more obvious in the deeper section from 1500ms or so – as indicated in yellow

To apply the PRT demultiple we need to look at the “far offset moveout” range we need for the data overall, and to define what part of the data we want to classify as multiples.

CDP gathers NMO corrected with 90% of the stacking velocity with some example measurements of positive and negative far-offset moveout

In this case I’m going to use a range of -300ms to +500ms, and as the primary is over-corrected (through applying a Normal Moveout correction with 90% of the primary velocity) I’m going to classify the multiple as having a far-offset moveout between +24 and +500ms.

I’ll also need to specify how many P-values to use; one way to approach this is to think about the total moveout range we are transforming, and what each P-value will represent. 

With a total range of 800ms (from -300ms to +500ms) I’m going to use 200 P-values; this means that each P-value corresponds to 4ms of moveout at the far offset (which is the sample interval.)

Finally I’m going to form the data into supergathers. As I described in the post on geometry we have different offsets in the “odd” and “even” numbered CDPs, each of which has 60 traces.  If we combine these into a single supergather with 120 traces then each gather will have the same offset distribution, and we’ll have better resolution in the PRT domain as well.
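A sketch of that supergather formation is below; the specific offsets are assumptions chosen to match the geometry described earlier (a 50m offset increment within each CDP, with adjacent CDPs staggered by 25m, and a notional 258m near offset):

```python
import numpy as np

def form_supergather(traces_a, traces_b, offsets_a, offsets_b):
    """Merge two adjacent 60-trace CDPs into one 120-trace supergather,
    sorted by offset so every supergather sees the same distribution."""
    traces = np.vstack([traces_a, traces_b])
    offsets = np.concatenate([offsets_a, offsets_b])
    order = np.argsort(offsets)
    return traces[order], offsets[order]

# Assumed offsets: 258m near trace, 50m increment per CDP, 25m stagger
off_even = 258.0 + 50.0 * np.arange(60)
off_odd = 283.0 + 50.0 * np.arange(60)
data_even = np.zeros((60, 10))          # dummy trace samples
data_odd = np.ones((60, 10))

traces, offs = form_supergather(data_even, data_odd, off_even, off_odd)
assert len(offs) == 120
assert np.all(np.diff(offs) == 25.0)    # regular 25m offset increment
```

Interleaving the two staggered offset sets turns the 50m sampling of each CDP into a regular 25m sampling across the supergather, which is what improves the resolution in the PRT domain.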

Here are the results – remembering that the demultiple is only applied from 1400ms onwards:

Test of PRT demultiple shown on two CDP gathers (500 and 501) which were formed into a single 120-trace supergather for the application. The gathers on the right have had the multiples with a far-offset moveout between +24 and +500ms suppressed; the data had an NMO correction with 90% of the picked primary velocity applied prior to the PRT demultiple

This seems to have done a pretty efficient job at cleaning up the gathers; it will be a lot easier to pick the deeper velocities now, which should help with the imaging.

While it is important to check the results on the stacked data as well (with this dataset the differences are actually relatively small), the majority of the multiple ‘stacks out’ and our main goal is to make it easier to get an improved velocity analysis.

Velocity semblance spectra for CDP 500 before (left) and after (right) the PRT demultiple has been applied; removal of the “ringing” multiple chain (circled in red) has helped in differentiating primaries from multiples

Having used our first-pass velocities to help remove the multiples we can now go back and check or re-pick the deeper velocities to further improve the image.





By: Guy Maslen
