GeoExplo Ltda. 

Santiago, Chile 
Tel(562)3265116  
Email: surveys@newsense.com  Dr. W.E.S.(Ted) Urquhart 
General Frequency Domain Theory for Magnetic Fields

Table of Contents 

4. Frequency Domain Filtering  
4.1 General Theory  
4.2 Basic Principles: Filter Recipes  
4.3 Downward Continuation  
4.4 First Vertical Derivative  
4.5 Matched Regional/Residual Filter  
4.6 Reduction to the Pole  
4.7 Analytic Signal 
The objective of frequency domain filtering is to condition a data set and render the result in a way that makes it easier for the geoscientist to understand and interpret the significance of anomalies in terms of their geologic cause and setting.
In order to correctly use frequency domain techniques to aid in the interpretation of magnetic data, one should have a general understanding of the physics and mathematics involved. The better the understanding, the greater the ability to design filters that will best suit the task at hand. That is not to say that we must all understand every detail of the mathematics, but we should have a qualitative sense of the science involved. Thus there is no attempt here to develop the equations rigorously; instead we present a simple synopsis of the theory.
The general topic of frequency domain theory for magnetic field data is developed in detail by Allan Spector in his Doctoral thesis submitted to the University of Toronto in 1969, titled "Spectral Analysis of Aeromagnetic Data". Much of the formal theory developed below is taken somewhat directly from his text.
A magnetic profile or grid may be represented as an infinite series of sines and cosines. This infinite series is called a Fourier series, and it can represent a function T_{(x,y)} provided the function is periodic, or at least can be considered to be periodic (repeats outside the limits of the data set).
Figure 1. The function T_{(x)} is continuous in x from 0 to X_{m}. T_{(x)} may be considered to be a periodic function with the portion between 0 and X_{m} being one period of the complete function.
The profile between 0 and X_{m} in Figure 1 represents a profile of field measurements. The periodicity of the function is created by repeating the limited sample function. This new function is then periodic with a period equal to the length of the profile (or the dimensions of the area in the two dimensional case) covered by the field measurements.
The function within each period is smoothly varying; at the edges of each period, however, there is a step. This step can only be represented by the summation of an infinite number of terms. Since it is obviously impossible to sum an infinite number of terms, the series is truncated at some point. The truncation will cause ringing in any filter result, so a smooth transition between periods is more desirable. This is achieved by tapering the data. Tapering can be done by damping the function to the mean of the data, as in Figure 2, although this approach reduces the usable length of the profile. Another approach is to extend the data in one (profile) or two (grid) directions by prediction and then taper the result to the mean (this approach is generally preferred). The length of the taper will depend on the results required, but the longer the taper the fewer problems will arise.
Figure 2. The function is tapered by damping the data near the edge to the mean with the use of a cosine roll off filter.
In order to use the Fourier series technique the data must be sampled. Let the function ΔT_{(x,y)} represent the sampling at a number of equally spaced points. This sampling may be adequate to record long wavelengths (λ), but it may not be close enough to record short wavelengths. Only wavelengths longer than twice the sampling interval Δx can be reliably recorded. When λ = 2Δx, the corresponding spatial frequency is called the Nyquist frequency. Frequencies in the sampled field higher than the Nyquist frequency (short wavelengths) cannot be reliably recorded; instead they introduce erroneous energy at frequencies below the Nyquist frequency, by a process called aliasing.
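The folding of frequencies above Nyquist back into the recorded band can be illustrated with a short calculation. This is a minimal sketch in Python; the station spacing and the example frequencies are arbitrary illustrative values:

```python
# Frequencies above the Nyquist frequency fold back ("alias") into the
# recorded band.  Frequencies are in cycles per unit distance.
def apparent_frequency(f, nyquist):
    """Frequency at which a signal of true frequency f appears after
    sampling at the given Nyquist frequency (spectral folding)."""
    f = f % (2.0 * nyquist)        # the sampled spectrum repeats every 2*Nyquist
    return f if f <= nyquist else 2.0 * nyquist - f

dx = 1.0                   # station spacing
nyq = 1.0 / (2.0 * dx)     # Nyquist frequency: wavelength = 2*dx

# A wavelength of exactly 2*dx (f = 0.5) is just recoverable ...
print(apparent_frequency(0.5, nyq))   # -> 0.5
# ... but a wavelength of 1.25*dx (f = 0.8) masquerades as roughly f = 0.2.
print(apparent_frequency(0.8, nyq))
```

This is why the short-wavelength content of a survey is fixed by the station and line spacing: no amount of later filtering can recover wavelengths that aliased at acquisition time.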
The total magnetic field due to an assemblage of bodies can be written as a sum (in the two dimensional case), i.e.,

ΔT_{(x,y)} = Σ_{i=1}^{k} ΔT_{i(x,y)}

The Fourier transform, being a linear operator, can likewise be written as:

ΔT_{(u,v)} = Σ_{i=1}^{k} ΔT_{i(u,v)}
Each individual transform can be expressed as the product of four factors which include the body parameters, i.e.

ΔT_{i(u,v)} = h_{(h_i;r)} · s_{(a_i,b_i,γ_i;u,v)} · m_{(T_0,M_i;θ)} · d_{(ε_i,η_i;u,v)}

where:
h is the depth factor (e.g. in the case of a bottomless prism it is e^{-hr}),
s is the size and orientation factor (a, b are the body dimensions and γ is the azimuth),
m involves the strength and orientation of T_{0}, the field vector, and also the intensity and orientation of the magnetization vector M, where m is:

2πM·[n + i(l sinθ + m cosθ)]·[N + i(L sinθ + M cosθ)]

where (l,m,n) are the direction cosines of the geomagnetic field vector T_{0}, (L,M,N) are the direction cosines of the magnetization vector, and θ = tan^{-1}(u/v). In most cases the magnetization vector is assumed to be parallel to T_{0}. Finally,
d includes the horizontal position of each body (ε,η), i.e. d = e^{i(uε+vη)}.
From an interpretation point of view there are two main observations to be made from the above formulation.
1.  The depth factor h can be divided into three types: bottomless prisms (point pole), laminar bodies (dipole) and bodies with finite depth extent. The depth factor for each of these in log space is as follows:

bottomless prism: ln h = -hr
lamina: ln h = -hr + ln(tr)
finite depth extent: ln h = -hr + ln(1 - e^{-dr})

where r=(u^{2}+v^{2})^{1/2}, t is the thickness of the lamina and d is the depth extent of the finite bodies.
In each case the depth to the top of the bodies h is contained in the e^{-hr} term only. Thus it should be possible to compute the total field at heights other than that at which it was measured. Recomputing the field at a height closer to the bodies, downward continuation, will result in a sharpening of the anomalies with a corresponding reduction of anomaly overlap. Recalculating the field on a higher plane, upward continuation, will result in the suppression of the anomalies from sources closer to the measurement plane.
2.  The strength and orientation factor m allows us to isolate and remove the effect of the direction of the induced and remanent magnetization vectors T_{0} and M. If we divide the Fourier transform of the field by:

[n + i(l sinθ + m cosθ)]

we can recalculate the field as if the inducing field vector were at the pole. By dividing by:

[N + i(L sinθ + M cosθ)]

we can recalculate the field as if the magnetization vector were at the pole. In practice the two terms are considered to be the same, i.e. the magnetization vector is taken to be parallel to the field vector. Thus one would divide by the first term twice to produce the anomaly shapes that would be seen at the magnetic pole. This is sometimes referred to as double reduction to the pole, but is usually referred to simply as Reduction to the Pole (RTP). 
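As an illustration, the double division can be written out in a few lines of numpy. This is a bare sketch, not a production RTP: it assumes purely induced magnetization, assumes the first grid axis points magnetic north (an axis convention chosen here, not taken from the text), and sidesteps the low-latitude instability discussed later by simply clamping near-zero operator values:

```python
import numpy as np

def reduce_to_pole(grid, dx, inc_deg, dec_deg):
    """Double reduction to the pole: divide the transform of the field by
    [n + i(l*sin(theta) + m*cos(theta))]**2, with theta = arctan(u/v) and
    (l, m, n) the direction cosines of the inducing field vector.
    Assumes the magnetization is parallel to the inducing field."""
    inc, dec = np.radians(inc_deg), np.radians(dec_deg)
    l = np.cos(inc) * np.cos(dec)      # north component
    m = np.cos(inc) * np.sin(dec)      # east component
    n = np.sin(inc)                    # vertical component
    u = np.fft.fftfreq(grid.shape[0], d=dx)
    v = np.fft.fftfreq(grid.shape[1], d=dx)
    U, V = np.meshgrid(u, v, indexing="ij")
    theta = np.arctan2(U, V)
    op = (n + 1j * (l * np.sin(theta) + m * np.cos(theta))) ** 2
    op = np.where(np.abs(op) < 1e-6, 1.0, op)   # guard r=0 and unstable terms
    return np.real(np.fft.ifft2(np.fft.fft2(grid) / op))
```

At the pole itself (inclination 90 degrees) the operator is unity and the field is returned unchanged, which makes a convenient sanity check.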
There are many types of filters that can be designed, however, there are a few that are in most common use. These will be examined in the following section where simple "rules of thumb" will be presented along with example results.
In order to begin the filtering process the data must be appropriately conditioned. To reduce edge effects as much as possible the data set must be tapered to zero in a smooth way. The greater the distance between the edge of the data and the zero point the better, except that the computation time penalty goes up as the square of the number of points. Typically, with fairly active magnetic data with the IGRF removed, the distance required for adequate results is 10 points, with diminishing improvement beyond 16.
One method of tapering is achieved by progressively moving outward, using 8 points to predict the 9th. In addition a cosine taper is introduced such that it has a value of one (1) at the edge point of the grid and zero (0) at the last point in the taper (i.e. the 10th point in a ten point taper). In this way the data is brought to zero with certainty within the required number of points but still reflects the data near the edge of the grid. Of course a base level (usually the mean) is removed from the data before the process begins.
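A simplified version of this edge conditioning can be sketched as follows. Two assumptions to note: the extension simply repeats the edge value rather than predicting it from the last 8 points as the text describes, and the taper length is left as a parameter:

```python
import numpy as np

def taper_edges(profile, npad=10):
    """Remove the mean, then extend each end of a profile by npad points
    damped by a cosine roll off: weight close to 1 at the edge sample,
    exactly 0 at the last padded point."""
    d = np.asarray(profile, dtype=float)
    d = d - d.mean()                         # remove the base level first
    # npad cosine weights running smoothly from just below 1 down to 0
    w = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, npad + 1)))[1:]
    left = d[0] * w[::-1]                    # taper the leading edge
    right = d[-1] * w                        # taper the trailing edge
    return np.concatenate([left, d, right])
```

In the two dimensional case the same weights are applied around all four edges of the grid before transforming.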
Next the data is transformed into the frequency domain ready for filtering.
The first thing one should do is calculate the power (or amplitude) spectrum of the data. The power spectrum is usually presented in one of two forms: a radially averaged plot in log-linear space for presentation in graph form, or as a grid displayed either as a contour plot or an image. Figure 3 shows a contour plot of a sample data set, Figure 4 displays the power spectrum as a graph and Figure 5 displays the power spectrum as an image.
Figure 3. Contour map of total field sample data
Figure 4. Radially averaged power spectrum of the total field sample data
Figure 5. Two dimensional display of the power spectrum for the sample data.
Figure 6. Shape of the downward continuation filter.
Figure 7. Shape of power spectrum after downward continuation without damping. Note the position of P3.
If the noise is not in some way suppressed the resulting data will not be of any use. There are several noise reduction strategies, but in this case we will use the most direct, practical and easiest to understand: a COSINE ROLL OFF filter. The purpose of the cosine filter is to slowly taper the downward continuation filter to zero at the high frequencies. If the filter is brought to zero too quickly there will be negative effects in the result (ringing of the data) due to the abrupt loss of the highest frequencies.
The cosine filter is applied at the same time as the downward continuation filter, as displayed in Figure 6. The question remains how to design a reasonable filter in one pass, without making several attempts. It turns out that there is a good "rule of thumb", as follows. Referring to Figure 7 we can see there is a point in the power spectrum where the raw downward continuation result changes from a negative slope at the long wavelengths to a positive slope at the high frequencies. We call this point P3. The spectrum above this point contains frequencies that are too high for the downward continuation and will thus cause the resulting grid to "blow up". There is however some necessary information in these high frequencies, so we must design a filter that slowly suppresses them out to some point where the filter goes to zero. How do we choose this point, called P2? P2 is calculated as follows:
P2 = (0.5 + P3)/2   (i.e. halfway between P3 and the Nyquist frequency of 0.5 cycles per grid cell)
Now we need the point at which to start the roll off (i.e. the point where the cosine damping function starts to take effect). This point is called P1 and is calculated as follows:
P1 = 0.5 × P2
Having these two points we can now use the ROLL OFF filter.
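Putting the pieces together, the damped downward continuation operator can be sketched as below. All frequencies are in cycles per grid cell; P1, P2 and the continuation depth are the parameters chosen from the spectrum as just described:

```python
import numpy as np

def cosine_rolloff(f, p1, p2):
    """Unity below P1, zero above P2, half-cosine transition between."""
    f = np.asarray(f, dtype=float)
    mid = 0.5 * (1.0 + np.cos(np.pi * (f - p1) / (p2 - p1)))
    return np.where(f <= p1, 1.0, np.where(f >= p2, 0.0, mid))

def damped_downward_filter(nx, ny, depth_cells, p1, p2):
    """Downward continuation operator exp(2*pi*f*depth) damped by the
    cosine roll off between P1 and P2."""
    U, V = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny), indexing="ij")
    f = np.hypot(U, V)                       # radial frequency, cycles/cell
    return np.exp(2.0 * np.pi * f * depth_cells) * cosine_rolloff(f, p1, p2)
```

For the worked example later in this section (P3 = 0.20), the parameters would be P1 ≈ 0.17 and P2 = 0.35.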
The resulting shape of the combined downward continuation filter and the cosine roll off is in Figure 8, as well as the resulting power spectrum in Figure 9.
Figure 8. Shape of combined downward continuation and cosine roll off filters.
After some experimentation the user will realize that there is a limit to how far one can downward continue, even with the help of the damping cosine function. As one continues further, the P3 point moves further to the left (long wavelength direction) and eventually so much of the data must be suppressed to stabilize the continuation process that there is little information left in the resulting data. Usually the practical limit is 2 to 3 grid cells.
Figure 9. Shape of power spectrum after application of both the downward continuation and cosine roll off filters.
Using the spectrum in Figure 7, which is the result of applying a 500 metre downward continuation to a 200 metre grid, we would select 0.20 for P3.
Therefore: P2 = (0.5 + 0.2)/2 = 0.35
P1 = 0.5 × 0.35 ≈ 0.17
Combining this filter with the downward continuation results in the spectrum in Figure 9 and the map in Figure 10.
Figure 10. Contour map of the cosine damped downward continuation filter. Move the mouse pointer over the contour map to compare the downward continuation with the original data.
Spatial resolution can also be achieved using the vertical derivative filter. The first vertical derivative filter computes the vertical rate of change of the magnetic field. Where the downward continuation achieves spatial resolution by increasing the amplitude of the high frequencies (shallow sources), the derivative filter suppresses the long wavelengths. Thus there is not such a severe problem with noise, and a roll off filter is usually not necessary unless the original data is particularly noisy. Figure 11 shows the shape of the filter; the results of the first derivative filter are displayed in Figure 12 as a spectrum and in Figure 13 as a map.
Figure 11. Shape of the first vertical derivative filter (vdv)
Figure 12. Power spectrum for 1st vdv
Figure 13. Contour map of first vertical derivative. Move the mouse pointer over the contour map to compare the VDV with the original data.
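In the frequency domain the first vertical derivative is simply multiplication by the radial wavenumber (this follows from differentiating the e^{-hr} depth term with respect to h). A minimal numpy sketch:

```python
import numpy as np

def first_vertical_derivative(grid, dx):
    """First vertical derivative: multiply the transform by
    r = 2*pi*sqrt(u^2 + v^2), which suppresses long wavelengths and
    enhances short ones in direct proportion to frequency."""
    u = np.fft.fftfreq(grid.shape[0], d=dx)
    v = np.fft.fftfreq(grid.shape[1], d=dx)
    U, V = np.meshgrid(u, v, indexing="ij")
    r = 2.0 * np.pi * np.hypot(U, V)         # radial wavenumber
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * r))
```

Note that, unlike downward continuation, the operator grows only linearly with frequency, which is why a roll off is usually unnecessary for reasonably clean data.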
If we examine the shape of a spectrum caused by bottomless bodies (pole sources) at some depth, we will see in Figure 14 that the shape is a straight line with a negative slope defining the depth to the top of the bodies. Of course in the real world the bodies are not all at the same depth, so the line defining the spectrum of the deep bodies will represent the average.
Figure 14. Power spectrum with characteristic points indicated.
The shallow bodies can be considered depth limited (dipolar) and the spectrum of these types of bodies is shown in Figure 14. The depth to the top of the shallow bodies is also defined by the straight line tail of the curve.
Using the shape of the spectrum of these two components of the magnetic field we can design a filter to separate them; thus we are able to remove the deep component from a data set in a way that is matched to the data set, or vice versa. In order to design a filter that best fits the data we must first measure the slopes (i.e. depths) of the deep and shallow sources from the spectrum (Figure 4). It should be noted that it is easier to measure the slopes on a printout, as the data is more stretched out and it is easier to see the straight line segments. Nevertheless we can see in the diagram that the depth to the deep sources is:
d = 1592 metres
And the depth to the shallow sources is:
d = 560 metres
There is one other piece of information that is needed: the Power Ratio. This is defined as the difference between the Y intercepts of the two straight lines. The power ratio for this data set is:
PR = 3.889
We can now use these values to construct a filter that will result in the removal of the deep sources from the data set. The shape of the filter is in Figure 15 and the resulting spectrum is displayed in Figure 16. The resulting map is in Figure 17.
The regional map can be computed using the same parameters, and the results are displayed in Figures 18, 19 and 20.
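The depths and power ratio read from the spectrum are enough to build the separation filter. The sketch below assumes the usual two-ensemble model in which each component's power falls off as e^{-2hr} and the power ratio is the difference of the ln-power intercepts (that specific form is an assumption of this sketch, not spelled out in the text); the residual filter is then the shallow ensemble's share of the modelled amplitude spectrum, and the regional filter is its complement:

```python
import numpy as np

def residual_matched_filter(nx, ny, dx, depth_shallow, depth_deep, power_ratio):
    """Residual (shallow-pass) matched filter
        W(r) = 1 / (1 + b * exp(-(H - h) * r)),
    with r = 2*pi*|frequency|, h and H the shallow and deep source depths,
    and b = exp(power_ratio / 2) from the intercept difference of the two
    straight-line spectrum segments.  The regional filter is 1 - W."""
    u = np.fft.fftfreq(nx, d=dx)
    v = np.fft.fftfreq(ny, d=dx)
    U, V = np.meshgrid(u, v, indexing="ij")
    r = 2.0 * np.pi * np.hypot(U, V)
    b = np.exp(power_ratio / 2.0)
    return 1.0 / (1.0 + b * np.exp(-(depth_deep - depth_shallow) * r))
```

With the values above (h = 560 m, H = 1592 m, PR = 3.889 on a 200 m grid) the filter passes almost nothing at zero frequency and rises smoothly toward unity at the high frequencies.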
Figure 15. Shape of residual filter.
Figure 16. Power spectrum after residual filter
Figure 17. Residual total field map. Move the mouse pointer over the contour map to compare the residual with the original data.
Figure 18. Shape of the regional filter.
Figure 19. Power spectrum after regional filter
Figure 20. Regional magnetic field contour map. Move the mouse pointer over the contour map to compare the regional with the original data.
Since the reduction to the pole filter works on the phase as well as the amplitude, neither the power spectrum of the reduced field nor the shape of the amplitude portion of the filter is very informative; the resulting magnetic field map is in Figure 21.
This type of filtering only works well at magnetic latitudes greater than 30 degrees. For lower latitudes other strategies must be used.
Figure 21. Reduced to the pole total field map (inclination 70 degrees, declination 5 degrees east). Move the mouse pointer over the contour map to compare the RTP with the original data.
It is very important to understand that the reduction to the pole process is in fact an interpretation; the results are not a true representation of the field at the pole, only an approximation. At high magnetic latitudes (above 45 degrees) the RTP rendition is a reasonable representation. Between 30 and 45 degrees the process produces adequate results. Between 15 and 30 degrees some tricks can be used to produce a result that may be useful in conjunction with the original total field data. For magnetic latitudes less than 15 degrees the result is at best a fantasy and should only be used with extreme caution.
The reason that low latitude RTP is problematic is obvious if one considers the physical reality. For example, at the magnetic equator, a magnetic body with a rectangular surface (say 200 metres by 2000 metres) oriented with the long axis in the magnetic north-south direction will have an anomaly at each end and little or no response in the middle. The same body at the pole will have a symmetrical positive anomaly positioned over the center of the body. The reduction to the pole process is thus trying to transform a nil response at the body center at the equator into a positive value in the polar representation. This is of course impossible, because zero cannot be transformed to any value other than zero.
The following examples illustrate this problem:
Figure 22a: The above model results were calculated using a vertical inclination of the magnetic field (pole). 
Figure 22b: The above model results were calculated using a horizontal inclination and a north-south orientation of the magnetic field (equator). 
To further illustrate the reality of reduction to the pole we have assembled a suite of example models calculated at magnetic field inclinations of 0^{o}, 5^{o}, 10^{o}, 30^{o}, 45^{o} and 60^{o}. Each was then transformed to the pole with varying degrees of success (Figure 23). It should be noted as well that model calculations and their transforms are the best case scenario, because model results contain none of the typical noise or artifacts that would normally be in real field data. In real total field data the noise and other artifacts further degrade the result.
Anomaly calculated at 60^{o} inclination reduced to the pole  Anomaly calculated at 45^{o} inclination reduced to the pole  Anomaly calculated at 30^{o} inclination reduced to the pole 
Anomaly calculated at 10^{o} inclination reduced to the pole  Anomaly calculated at 5^{o} inclination reduced to the pole  Anomaly calculated at 0^{o} inclination reduced to the pole 
Figure 23: Suite of reduction to the pole examples. In each frame, placing the mouse in the left third of the picture will reveal the calculated pole result, in the center the reduction to the pole result, and in the right third the model as calculated at the indicated total field inclination. Moving the mouse out of the frame through the top or the bottom will preserve the image that is currently visible.
The concept of the analytic signal of magnetic anomalies was developed by Nabighian (1972, 1984). The analytic signal amplitude is calculated by taking the square root of the sum of the squares of the three directional first derivatives of the magnetic field, as follows:
A_{(x,y)} = ((dT/dx)^{2} + (dT/dy)^{2} + (dT/dz)^{2})^{1/2}
The resulting shape of the analytic signal is independent of the orientation of the magnetization of the source and is centered on the causative body. This has the effect of transforming the shape of the magnetic anomaly from any magnetic inclination to one positive body centered anomaly.
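Following the formula above, the analytic signal amplitude can be sketched with numpy. The derivative choices here are simplifications of this sketch: horizontal derivatives by finite differences and the vertical derivative through the frequency domain (Nabighian's formulation also offers Hilbert-transform routes to the vertical derivative):

```python
import numpy as np

def analytic_signal(grid, dx):
    """Amplitude of the analytic signal:
    sqrt((dT/dx)^2 + (dT/dy)^2 + (dT/dz)^2).  Horizontal derivatives use
    central differences; the vertical derivative multiplies the transform
    by the radial wavenumber r = 2*pi*sqrt(u^2 + v^2)."""
    dTdx, dTdy = np.gradient(grid, dx, dx)   # finite-difference horizontals
    u = np.fft.fftfreq(grid.shape[0], d=dx)
    v = np.fft.fftfreq(grid.shape[1], d=dx)
    U, V = np.meshgrid(u, v, indexing="ij")
    r = 2.0 * np.pi * np.hypot(U, V)
    dTdz = np.real(np.fft.ifft2(np.fft.fft2(grid) * r))
    return np.sqrt(dTdx**2 + dTdy**2 + dTdz**2)
```

The result is non-negative by construction, which is why the transformed anomalies are single-signed and body centered regardless of magnetization direction.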
Anomaly calculated at 60^{o} for a thin rectangular body  Anomaly calculated at 30^{o} for a thin rectangular body  Anomaly calculated at 0^{o} for a thin rectangular body 
Anomaly calculated at 60^{o} for a broad square body  Anomaly calculated at 30^{o} for a broad square body  Anomaly calculated at 0^{o} for a broad square body 
Figure 24: Suite of Analytic Signal calculations. In each frame, placing the mouse in the left third of the picture will reveal the calculated pole result, in the center the Analytic Signal result, and in the right third the model as calculated at the indicated total field inclination. Moving the mouse out of the frame through the top or the bottom will preserve the image that is currently visible.
Nabighian also showed that the analytic signal anomaly peaks over the magnetic north and south edges of the source body. Thus, although the analytic signal transformation produces a map of body centered anomalies, which can be useful at low magnetic latitudes (particularly where reduction to the pole is ineffective), each body will have two peaks, increasing the interpretive complexity. As seen in the example above, narrow bodies (on the order of 1 grid cell) will appear to have one peak due to the proximity of the two peaks, while broader bodies will exhibit two distinct peaks. These two peaks can be used to map the edges of the causative magnetic bodies, but it is difficult to tell the difference between two narrow bodies in close proximity and one broad feature of similar dimensions, as seen in the example below:
Figure 25: Example of the similarity in the Analytic Signal anomaly shape for two closely spaced thin bodies and one broad body. In each frame, placing the mouse in the left half of the picture will reveal the calculated equatorial anomaly, and in the right half the calculated Analytic Signal. Moving the mouse out of the frame through the top or the bottom will preserve the image that is currently visible.