Wednesday, 27 February 2013

Getting Started with Marine Data


Two of the “Seven Habits of Highly Effective People” are:

  • Begin with the end in mind
  • Put first things first

These are pivotal in seismic data processing; spotting bad assumptions mid-way through processing is much harder than trapping them upfront.

I’m going to describe some of the things to look out for at the start of a marine seismic processing project, with a particular focus on reprocessing of older data.

The first thing you need to know about is the sequence of events that takes place when the data is recorded in the field. A seismic vessel needs to move through the water at a speed of about 5 knots. This means that the ship covers the typical shot-point interval of 25 metres in about 10 seconds. This speed also ensures that the streamer can be held at the right depth by the depth controllers (the ‘birds’).
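As a quick sanity check, that “about 10 seconds” falls straight out of the numbers. Here is a minimal Python sketch; the speed and interval are just the typical values quoted above:

    # How long does the vessel take to cover one shotpoint interval?
    KNOTS_TO_MS = 0.514444           # 1 knot in metres per second

    vessel_speed_knots = 5.0         # typical acquisition speed
    shotpoint_interval_m = 25.0      # typical shotpoint spacing

    speed_ms = vessel_speed_knots * KNOTS_TO_MS         # ~2.6 m/s
    time_per_shot_s = shotpoint_interval_m / speed_ms   # ~9.7 s
    print(f"Time between shots: {time_per_shot_s:.1f} s")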

The locations of every shot to be fired are pre-loaded into the navigation system; when the source is in the right place (on older data this might be when the ship’s antenna is in the right place) the navigation system triggers the gun controller to fire the shot. It is worth mentioning that, prior to the main ‘burst’, the airgun produces some small variations in pressure; this is referred to as the ‘precursor’ and should be removed in processing when creating a minimum-phase equivalent (filter) of the original source wavelet.

At the same time that the shot is fired, some of the key navigation information needs to be “captured” and go into the header of the digital seismic record. This is trivial on modern systems, but back in the 1980s (when 48K of RAM was pretty racy) this was much harder, as the data was written directly to tape. At this point the data is recorded (usually with a back-up copy) and the system is reset for the next shot.

The Observers on the boat (often one junior and one senior per shift) keep track of what is happening by making notes of anything and everything that might impact the data. They also check that the data is “within specification” – the contract will outline key parameters like tow depths, sea state, minimum source configuration and so on. This information is recorded in the Observer’s Log (aka the ‘Obs. Log’).

The exact position of the cable (or cables) is unknown at the time of shooting. The locations are calculated "offline" using a combination of sensors such as compasses along the cable length, acoustic pods which send range-finding ‘pings’, and laser range finders that measure the angle and distance to ‘targets’ attached to floats at the end of the cable (tail-buoys) or on the airgun strings.

This information is used to create an ASCII file of the positional data, usually in P1/90 or P1/84 formats.  I’ll cover this at a later stage in another post.
  
The Observer’s Log is a key piece of “metadata” that can be extremely useful to the processor.
 
Example of an Observer's Log entry

Modern Observer's Logs are digital, but older ones are hand written on paper forms.  That can mean three things – obscure jargon, the odd mistake, and illegible handwriting! 


Some of the common abbreviations and terms you will come across are:

  • Spec. – Specifications; usually “out of spec.”, meaning the data is not meeting requirements. This might be tow depths, sea state, noise levels, wind, feather angle, airgun pressure or airgun array volume. The client doesn’t pay for “out of spec.” data, and gaps need to be re-shot.
  • DNP – Do Not Process; bad or noisy files, “warm-up” shots, or other data that isn’t being paid for and should be excluded.
  • NR – Noise Record; a file recorded without firing the source, used to look at noise levels.
  • BOT – Beginning Of Tape; usually noted with tape, shotpoint and file numbers.
  • Warm-up – shots fired to warm up the airguns, to avoid misfires.
  • SOL – Start Of Line.
  • FSP – First ShotPoint.
  • FCSP – First Chargeable ShotPoint; on a reshoot, the first new shotpoint (after the required overlap) that the client can be charged for.
  • M/F – Misfire; the airgun array fired incorrectly in time or volume.
  • A/F – Autofire; an airgun discharging incorrectly, of its own accord.
  • SI – Seismic Interference; caused by the signals from another seismic vessel some distance off.
  • Ship Noise – hydrophones were developed to detect ships and submarines by their propeller noise; they still do this very well!
  • FA – Feather Angle; the angle at which the cable deviates (feathers) from directly behind the vessel, often caused by currents, tides and wind. This is one of the “specifications”.
  • LGSP – Last Good ShotPoint; before a breakdown or failure.
  • LCSP – Last Chargeable ShotPoint.
  • FGSP – First Good ShotPoint; after a breakdown or failure.
  • EOT – End Of Tape; usually noted with tape, shotpoint and file numbers.
  • EOL – End Of Line.
  • LSP – Last ShotPoint.
  • FFID – Field File Identification Number; the file number on tape.
  • SP – ShotPoint; the unique number for each shotpoint.



Diagram illustrating Feather Angle (taken from "Applied Seismology: A Comprehensive Guide To Seismic Theory And Application" by M. Gadallah & R. Fisher)


Things to check or look out for on the Observer’s Log include:

  • shooting geometry – number of receivers, the distance from the centre of the source array to the centre of the first receiver group, the receiver group separation, maximum offset, and nominal shot point interval
  • field file identification numbers (FFIDs) – are they the same as the shotpoint numbers for the whole line? This is usually true for modern systems, but for older ones you may need to renumber the shotpoints (see the sketch after this list). Shots can be missed for a variety of reasons, but the line usually stays “in spec.” until multiple shots are “dropped”
  • gun delay - some (older) systems start to record data at a fixed time before the airgun fires.  This introduces a fixed “delay” that you will need to remove before processing
  • recording delay - this is when the airgun is fired and there is a set time delay before the start of recording (to save space on the tape)
  • bad traces, noise issues or bad files
  • seismic recording format, sample rate and record length
  • near- and far-channel – is channel 1 the near or the far channel from the vessel?
  • shotpoint numbers - incrementing or decrementing?
  • filters - were any applied in the field?
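The FFID-versus-shotpoint check mentioned above is easy to automate once the data is on disk. Here is a minimal sketch using the open-source segyio library; it assumes the FFID sits in the standard field-record header, that the shotpoint number was loaded into the energy source point header, and the filename is hypothetical – adjust all of these to match your own data:

    import segyio

    # Compare FFIDs against shotpoint numbers straight from the trace headers.
    with segyio.open("line_001.sgy", ignore_geometry=True) as f:
        ffids = f.attributes(segyio.TraceField.FieldRecord)[:]
        sps = f.attributes(segyio.TraceField.EnergySourcePoint)[:]

    mismatches = [(ffid, sp) for ffid, sp in zip(ffids, sps) if ffid != sp]
    if mismatches:
        print(f"{len(mismatches)} traces where FFID != shotpoint, e.g. {mismatches[0]}")
    else:
        print("FFID and shotpoint numbering agree for every trace")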

When you start to read the seismic data, you can then match it back against the Observer’s Log to make sure that everything is as it should be at this first critical stage.

The basic checks you need to do are:

  • trace count - does the number of traces make sense? Calculate this first of all from [LSP-FSP+1] x [number of channels], and then confirm that any missing shots are fully accounted for by the Observer’s Log. Too many traces might mean you have read some DNP, warm-up or noise records (see the sketch after this list).
  • near-trace plot (NTP) – create a plot that shows just the first channel.  Is the time of the direct arrival correct?  A quick “measure and calculate” can highlight a potential issue such as a gun-delay you have missed, or an incorrect near-offset definition
  • shots – ideally look at at least one shot per cable length; check that the direct arrival has the right dip, based on the receiver spacing and the speed of sound in water. You can also look for noise problems and/or display shots that are noted as having issues in the Observer’s Logs.
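As promised, here is a minimal sketch of the trace-count and direct-arrival arithmetic. Every number below is hypothetical – take the real values from the Observer’s Log:

    # Expected trace count and near-trace direct arrival time.
    fsp, lsp = 101, 1500           # first and last shotpoints
    n_channels = 240               # channels per shot
    missing_shots = 3              # shots dropped, per the Observer's Log
    near_offset_m = 150.0          # centre of source to centre of first group
    water_velocity = 1500.0        # approximate speed of sound in water (m/s)

    nominal_traces = (lsp - fsp + 1) * n_channels
    expected_traces = nominal_traces - missing_shots * n_channels
    direct_arrival_ms = 1000.0 * near_offset_m / water_velocity

    print(f"Expected traces: {expected_traces} (nominal {nominal_traces})")
    print(f"Direct arrival on the near trace at ~{direct_arrival_ms:.0f} ms")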

A few traps for the unwary:

  • some data has no low-cut field filter; all you will see is 1-2 Hz or lower “swell noise” and very little else; drop a quick filter over your selected data to QC it (see the sketch after this list); a 5 Hz low cut should be fine

Shot record with no low-cut field filter

  • older data might have odd file numbering;  I’ve encountered Octal file numbers (0,1,2,3,4,5,6,7,10,11 etc) as well as numbers that roll from 000-999 and then loop around to 000.  You’ll need to renumber these carefully based on the Observer’s Logs.
  • auxiliary (extra) channels contain things like near-hydrophone measurements and “time breaks”. They are usually flagged as not being seismic data and ignored, but sometimes they are included, so the channel numbers in a shot might run 1-7 and then start again at 1-240. More editing and renumbering.
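For the low-cut QC mentioned in the first bullet above, something along these lines does the job. This is a minimal sketch using scipy, assuming the shot record is already in memory as a 2D array of traces with a 2 ms sample interval:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def lowcut_for_qc(traces, dt_s, corner_hz=5.0, order=4):
        """Zero-phase low-cut (high-pass) filter, for QC display only.

        traces : 2D array, shape (n_traces, n_samples)
        dt_s   : sample interval in seconds (e.g. 0.002 for 2 ms data)
        """
        sos = butter(order, corner_hz, btype="highpass", fs=1.0 / dt_s, output="sos")
        return sosfiltfilt(sos, traces, axis=-1)

    # e.g. a 240-channel, 6-second shot record at 2 ms (random numbers standing in for data)
    shot = np.random.randn(240, 3000)
    qc_shot = lowcut_for_qc(shot, dt_s=0.002)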

When you start off with a new seismic line and a new vintage, taking the time to carefully check the items mentioned above can prevent a great deal of grief later on!




By: Guy Maslen

Wednesday, 13 February 2013

The Art of Testing


Seismic processing is often described as more of an art than a science. 

I suspect this is because, in practice, the Earth is a complex place. Fundamentally, seismic processing assumes a flat earth made up of homogeneous layers – sometimes a single layer – that have constant properties.  If it were truly that simple, the entire endeavour would be completely automated.

It is the variability of seismic data that makes processing a challenge, and at the heart of meeting that challenge is testing and applying a processing sequence.

For the new processor, testing is probably the biggest challenge.  It’s not just what to test and how to test it, but also keeping track of what you have tested, what worked well, and then applying it to all of the data. 

Experts make this look easy but that’s largely because they have developed habits over a number of years that help them to avoid making mistakes. Rest assured that most of these habits only came about because they made the mistakes in the first place!

The aim of this blog post is to help new processors develop good habits when it comes to testing, so that they can avoid those mistakes altogether.

What kind of mistakes are we talking about?  Here are some of the classics:
  • creating a spectacular test section, then overwriting the processing flow so you have no idea how you did it
  • using the wrong (or different) input files for the tests, with different pre-processing applied so that it is not a fair comparison
  • not being able to explain at a later stage why you chose a particular approach
  • putting the test sequence into “production” processing, and getting a different result

In my experience, testing is seldom well documented in a final processing report. Many reports don’t discuss the things that were tried which didn’t work; this can be very important information at a later date. Reports may also describe what tests were applied, but not what issue was being addressed or why a given result was chosen.

Here are my top ten testing suggestions, to help you become a better processor:

  1. Pick your test line/area carefully
There is always a push to use a test-line that is in the main area of interest, e.g. the well location or the target area.  From a processing flow point of view you ideally want a line that has the greatest geological variation, or the worst noise contamination.  Try and balance these needs if you can; it’s better to test on more lines than be unpleasantly surprised by the results!

  2. Keep a Testing Log
Create a log of the tests you have run; this can be a text file, a Word Document, even a handwritten journal, but it needs to be done. Treat each test as an experiment, and log the objective, method, results and conclusion. Put the science back into the art of testing.
Example of a basic Testing Log

  3. Do not re-use processing flows
Create a new processing flow for each test, and preserve the original.  The flow – and any log file – is a vital part of the “audit trail” that you will need if you have to back-track.  Recording the name of each testing flow in your testing log will save a huge amount of time and effort if things go wrong.

  4. Number your tests
Give each test a unique number.  I tend to use double digits 00, 01, 02 – I’ve never been on a project yet where I needed 99 different tests!  Record this number in the Testing Log.  Tests are not just processing flows, but can include things like a mute you have designed (and tested the application of) or a velocity function you picked.

  5. Number each test panel
I extend the numbering system so that each sub-test has a unique number as well.  The “00” sub-test is always “nothing applied”, and then all the other parameter variations have a separate number.  So for example, if my test 04 was “deconvolution before stack” my test numbers might be:

04_00: nothing applied
04_01: 160ms operator, 24ms gap
04_02: 200ms operator, 24ms gap
04_03: 240ms operator, 24ms gap
etc.

This allows you to extend the testing sequence as much or as little as you need.

  6. Use these numbers in your processing flows
Make the start of each testing flow the unique test number; the use of “double digits” will help to keep these in order when you list them.  Some systems let you run “panel tests” with repeat copies of data, but that’s not always applicable for every test. Some of your tests might end up with a test and sub-test number like this:  08_02_radon200.job

Because you have a Testing Log, you won’t need to try and think up (or remember) long and convoluted processing flow names that describe the test or the parameters. 

  7. Use the same numbering for input and output datasets
If you extend this numbering technique to the output datasets from each test, then you can easily trace an output file back to where it was created.  The datasets have a link to both the test number, and the processing flow, which is simple and easy to remember.  What was the output from 08_02_radon200.job?  Why it was 08_02_radon200.cdps!
This greatly reduces the chance of a labelling error or mistake when reviewing results and making decisions. 

  8. Number your metadata files
Metadata is “data about data”. In the context of testing, it’s all the “other” files and information you create, like mutes, velocities, horizons, filters, design gates, and picks.  Number these too! If 08_02_radon200.job uses the mute you picked as test number 05, then the mute file should start with 05 in the name. Applying the wrong “metadata” is one of the main causes of incorrect processing. 

  9. Create a flow diagram showing how tests are related
Simple test numbering makes this easy, and the diagram is useful as a check that parameters can actually be compared, especially when looking from one technique to another.

Example flow diagram for demultiple processing routes

A flow diagram is useful for spotting omissions or errors, as well as explaining what you did, and why.

  10. Extend tests until you find the ‘end members’

To be sure that you are not looking at a “local minimum” when testing – especially with things like demultiple, filters, gain control and so on – it is a good idea to push the testing of the parameters until the results look terrible. These “end member” tests are quickly discarded, but they show that you have covered the full spectrum of possibilities. Test panels that show artefacts or over-processed data help to confirm that the overall sequence will be effective.




By: Guy Maslen


Wednesday, 6 February 2013

3D Land Geometry

Mention the word “geometry” and people think about shapes and angles, and maybe the time they outsmarted an opponent on the pool table. Like so many other terms in seismic processing, “geometry” carries a slightly different meaning in that it describes the relationship between geophysical data (e.g. shot records) and coordinate information (the specific position on the Earth where that shot took place).

Both the shooting and the recording of where the shooting occurs are handled by the acquisition crew, while relating the two is often left to someone in the processing centre, as not all field crews have the capability to check the navigation/seismic merge in the field. It’s not that common for the person processing the data to also be in the field acquiring it, so it can be easy to fall into a ‘field vs. office’ mentality when errors are found. “They can fix that in the processing” is uttered from time to time by field crews so that they don’t have to fix a problem encountered during acquisition. Alternatively you might hear “if the acquisition crew had just done a better job this wouldn’t be an issue” from a processor complaining about the extra work. The truth lies somewhere in the middle. Despite every effort being taken in the field during acquisition, issues do arise from time to time that lead to the data being recorded with errors, and this is something that you, the processor, have to deal with.

The field can sometimes be a harsh environment. When things go wrong it can range from being soaked in stormy weather to guerrilla encounters and a coup d’état! Not to mention the long days, temperamental equipment and pesky local wildlife; one observer log entry gave “two exploded camels” as the reason for a misfire. Nevertheless, a lot of time can be lost in the processing centre solving navigation errors (e.g. two stations with the same location) that really shouldn’t exist. Field or office, everyone makes mistakes, and you need to have steps in place to lessen the effects.
  
“Garbage in, garbage out” is a famous computer axiom which basically states that if invalid data is entered into a system, the resulting output will also be invalid. Pretty straightforward really. It should then come as no surprise that if you incorrectly set up your geometry (one of the first steps in processing) your results will be incorrect. Don’t underestimate the importance of quality control (QC) at this stage; the price of rewinding an entire project because the geometry was wrong is not one you want to pay.

When merging seismic data with coordinate data you are actually setting up a database (of sorts) which contains all of the survey information; such as shot point definitions, source and receiver coordinates, and elevations. This ‘database’ is accessed for a variety of uses including refraction statics calculations, CDP binning, creating CDP gathers, and populating SEG-Y trace headers.
Geometry database summary
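As an illustration of one of those uses, here is a minimal sketch of CDP binning for a simple 2D line: the midpoint of each source-receiver pair is assigned to a bin along the line. The bin size, coordinates and channel count are purely illustrative:

    # Minimal 2D CDP binning: each source-receiver midpoint falls into a bin.
    def cdp_bin(source_x, receiver_x, origin_x=0.0, bin_size=12.5):
        midpoint = 0.5 * (source_x + receiver_x)
        return int(round((midpoint - origin_x) / bin_size))

    # e.g. a shot at 1000 m into a 240-channel spread, 25 m group interval,
    # with the nearest group 150 m behind the source (all values made up)
    shot_x = 1000.0
    receiver_xs = [shot_x - 150.0 - 25.0 * ch for ch in range(240)]
    cdp_numbers = [cdp_bin(shot_x, rx) for rx in receiver_xs]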
With seismic processing in general, it’s good practice to follow robust QC procedures and there’s no exception when it comes to setting up and proofing geometry. Here are three useful methods:

  • Review the diagnostic text and check that values are consistent. Typical errors are likely to come from seismic numbering, navigation numbering, shot locations, and the receiver spread. If an error message pops up, read it! It may be telling you something important about your data. One example was a client who was ignoring warning messages and was unable to build the geometry. The message was trying to inform them of a 6 m elevation difference from one station to the next. As it turned out, the survey was shot in two phases and, in between, a flood had washed away tons of sediment in the area, leaving a small chasm. Moral of the story: read the observer logs and pay attention to pop-ups. Good software will check data consistency for you and alert you when something doesn’t add up.

Shot position on map with live update diagnostic information


  • You’ve heard it again and again – “A picture is worth a thousand words” – the adage that it is easier to absorb large amounts of data quickly through visualization. If you graphically represent the information by producing a plot or map, it will help you highlight any obvious inconsistencies. You can display source/receiver positions and quickly identify erroneous positions, e.g. a station that is shifted 30 metres away from the spread. It would pay to check the observer logs, as there may have been a valid reason for relocating the position, such as a creek in the way. Other useful information to display graphically includes source/receiver elevations, first-break picks, CDP fold, and bin locations.

Map showing source (red) and receiver positions
Map showing Common Mid Point positions

  • You’ll need to check that the geometry, once applied to the data, is correct. Start by overlaying the offset from the trace headers onto the seismic traces; this can be done by applying an estimated linear velocity to convert offset to time. Check that the apex of the offset curve matches the apex of the seismic record. You can also apply a linear moveout (LMO), choosing a velocity estimate that gives roughly horizontal first breaks. Now you can cycle through the shots: if the header offsets are incorrect, the LMO will be out of sync and there will be a clear 'step' at zero offset (a sketch of this LMO check follows below the figure).
Offset from trace headers (red) does not match shot record; in this case channel numbers are not matched correctly against stations
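Here is the LMO check sketched in numpy. It assumes the traces are a 2D array, the offsets come from the trace headers, and the velocity is just a rough first-break estimate:

    import numpy as np

    def apply_lmo(traces, offsets_m, dt_s, velocity_ms=2000.0):
        """Shift each trace up by offset/velocity; the first breaks should
        flatten out if the header offsets (i.e. the geometry) are correct.

        traces    : 2D array, shape (n_traces, n_samples)
        offsets_m : offsets read from the trace headers, one per trace
        """
        out = np.zeros_like(traces)
        n_samples = traces.shape[1]
        shifts = np.round(np.abs(offsets_m) / velocity_ms / dt_s).astype(int)
        for i, shift in enumerate(shifts):
            if shift < n_samples:
                out[i, : n_samples - shift] = traces[i, shift:]
        return out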

Ideally you want the geometry to be loaded into the trace headers before it reaches you, and for it to be correct. Unfortunately, it’s best to assume that it is incorrect and get into the habit of checking! If it hasn’t already been applied in the field, you should receive the coordinate information in the form of Shell Processing Support (SPS) files. The SPS format was developed by Shell Internationale Petroleum Maatschappij as a common standard for positioning data between their 3D land field crews and the processing centres. The format proved durable enough that it was adopted by the Society of Exploration Geophysicists (SEG) Technical Standards Committee in 1993 (SEG SPS Revision).

Most SPS files today are generated automatically by the seismic recording instrumentation and consist of a set of three files: a Source file [s], a Receiver file [r], and a Cross Reference file [x]. There is also an optional fourth file for comments. The Receiver file contains information about the geophones, such as their ID, type, and position (Easting, Northing, and Elevation). The Source file contains information about the seismic source, namely its position and ID. The Cross Reference file contains information about the Shot ID, and the source and receivers associated with that Shot ID.
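The exact column positions are laid down in the SEG SPS standard and differ between revisions, so take them from the standard document for the revision you actually have. Purely to illustrate the idea, here is a rough sketch that assumes a simplified, whitespace-delimited receiver file with line name, point number, easting, northing and elevation fields (real SPS files are fixed-column):

    # Rough sketch only: real SPS files are fixed-column, so parse the byte
    # positions given in the SEG SPS standard rather than splitting on whitespace.
    def read_receiver_file(path):
        receivers = {}
        with open(path) as f:
            for record in f:
                if not record.startswith("R"):   # receiver records are flagged 'R'
                    continue
                # assumed (simplified) field order for this sketch:
                _, line_name, point, easting, northing, elevation = record.split()[:6]
                receivers[(line_name, point)] = (
                    float(easting), float(northing), float(elevation),
                )
        return receivers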


At the end of the day, QC’ing your geometry is just one of those things that has to be done; kind of like flossing your teeth. Yes, you could not do it and just hope for the best, but I’m willing to bet you’d end up in a situation where pulling teeth (or what feels a lot like it) is involved.




By: Dhiresh Hansaraj

      
