Introduction
Well logs provide the largest source of information available about the variation in lithology and fluid content within the subsurface. As the computational power available to the geological community has increased in recent years, processing and mapping rock properties from digital log data has become easier. Mapping these rock properties, whether for a reservoir model or a regional exploration play, can identify areas where the data does not make sense in view of the other subsurface information. Further study may reveal that a rock type such as anhydrite, which should have identical properties in all wells, actually gives differing readings in some of the wells. This does not mean that the data should be eliminated from the study, only that a higher degree of accuracy is needed for mapping. The process of log normalization adjusts the log data in all of the wells so that they have an identical response in an identical lithology, thus making the log data compatible with each other.
There are numerous reasons why the logging tools do not always provide identical readings in similar lithologies. These may include, but are not limited to: differing vintages of logs, differing logging companies, tool design, borehole environment, on-site tool miscalibration, tool malfunction, and improper scaling.
Goals and Philosophy
Prior to any log normalization, one should always keep in mind the final goal of the project and ask the question, 'How critical is the data to the final interpretation?' For example, if you are mapping a regional porosity trend that is truncated by erosion, with no seismic data to support the truncation edge, is it necessary to have the porosity data to within one porosity unit? Probably not. If, on the other hand, you are asked to provide the pore volume of a reservoir for an economic analysis of a potential CO2 flood, then the time would be well spent.
The objective of well log normalization is to improve the dataset, not create it. The final results using the normalized data are only as valid as the variation (locally and regionally) in the interval used to normalize the data. If that interval turns out to be less consistent than you originally thought, you may end up warping the data to match a lithology that is consistent in one area but erratic across the rest of the study area. In other words, if you are not sure that normalization will improve the quality of the data in a particular well, leave it alone.
Situations arise which make the normalization process more of an art form than a science. Regional shales, frequently the only available high gamma ray and high neutron calibration lithology, may show wide variations due to hole enlargement, or a shaly sand section may lack any suitable low-response 'clean' lithology. It is best to take some time deciding how much variation is indeed present across the study area by laying out some logs scattered across the area. How consistent are the lithologies? Of the consistent lithologies available, do they represent both the high and low curve readings seen on the logs? Do the high and low log readings occur near each other, or are they widely separated in the wellbore? Do these lithologies extend across the study area? Is it necessary to correlate only certain zones for data display, or is it sufficient just to display the entire digital dataset? You might consider computing the average gamma ray value over the normalization interval using the Compute - From Logs - Statistics menu and contouring the results.
It is best to keep good records documenting your process of log normalization. One of the best places to do this is in the project remarks file. This way, others can follow your steps in the future, or you can review your methods when something doesn't go well. Always create new log curves and leave the original log curves intact, since sometimes one has to rethink the process and start over. In general, it is best to let well enough alone and use the results from your best attempt. If the process is unsuccessful, define your next try in terms of changing the original raw data, not another attempt on the already manipulated data.
If there are reasonably good normalization lithologies present in the wellbore, then you should be able to remove approximately 85% of the errors and other 'noise' in the data (Shier, 1991). The remaining 15% of the 'noise' may be due to local changes in lithology, tool inaccuracies, or errors in normalization assumptions.
Which Logs?
Logging tool responses vary; some are consistently more accurate than others. Dan Shier (1991) indicated in general terms the percentage of each well log type that needs to be normalized. The table is reproduced below:
Log Curve % Needing Adjustment
SP 100%
Gamma Ray 90%
Sonic (Compensated) 3% (Uncompensated have higher values)
Density 25%
Compensated/Sidewall Neutron 20%
Old Neutron 100%
Induction <2%
It can be seen that for the SP, gamma ray, and old neutron logs it is best to normalize all of the wells. For curves like the sonic and induction it is best just to go through the dataset, select only those wells which are anomalous, and set them aside for later study.
For the intermediate group of logs, including the density and the compensated/sidewall neutron logs, the first task is to identify the wells in the study which appear to be consistent and to be correct. These are the 'type' wells. The other, more questionable wells are then compared to the nearest 'type' well and accepted as-is or set aside for normalization.
Method Overview
The objective of normalization work is to adjust all log curves so that they give an identical reading in an identical lithology. Because we will be re-scaling and shifting the curve response during normalization it is imperative that the curve be edited for any cycle skips, spikes or any other digitization errors before anything else is done.
Usually two different 'normalization lithologies' are used, one that has relatively high amplitude readings and one that has relatively low amplitude readings on the curve to be normalized. These two lithologies need not be found in a single contiguous zone. The zone(s) containing these 'normalization' lithologies may or may not be separate from the 'objective' zone in which maps will eventually be made.
The 'normalization intervals' should be correlated through all of the logs and the 'normalization zones' defined (if the entire curve is not to be used as the normalization interval). If there are a number of curves to be normalized, it is better to define a normalization zone in the database where all of the separate log 'picks' can be placed, keeping the general well zone from getting cluttered with data items.
Once these zone(s) have been defined in each well, a value representing the high and low log reading must be obtained. There are several methods by which these values can easily be obtained:
Method 1:
Display a histogram of the curve to be normalized over the depth range of the normalization interval (Histogram - Logs - Set Axes and Scales). The histogram shows the character of the log and is an excellent tool for accurately picking the mode, the mean, and the edges of the data. Set axes and scales for the appropriate curve. Define the 'pick' by creating a high and low data item for the curve (example: highgr, lowgr, highsp, lowsp, etc.). Make the curve part of the data item name, especially if more than one curve is to be normalized.
Display the log curve to the right of the histogram; this will help to avoid picking any data 'glitches' when the 'pick' is made. Start picking the values in the histogram by hitting the start button located on the tool bar. With the curve displayed, pick from the histogram the 'high pick' in the high-amplitude interval and the 'low pick' in the low-amplitude interval, toggling between the various data items in the drop-down menu. Right-click the mouse to bring up the pop-up screen and either redraw the screen or go on to the next well. It is best to do all of the wells first, and then come back and review your picks, making any changes as needed. To reduce the time required for this task, you may want to proceed through all wells doing all of the 'high picks', followed by all of the 'low picks'.
After all of the picks have been made, display a frequency plot of each pick (main panel - zone - view/edit - norm. zone - high (low) - display stats). With the statistics displayed, record the mean high and the mean low curve values. These two numbers are the regional high and low normalization values used to normalize the individual well logs in the study.
In the general case, log normalization requires that the log curve be shifted and scale adjusted at the same time. This is accomplished by applying a linear equation to each data point on the curve. The basic equation to normalize any curve is:
CRVNRM = LONRM + (CURVE - PICKLO) * (HINRM - LONRM) / (PICKHI - PICKLO)
CRVNRM = Normalized value
CURVE = Raw value
HINRM = Regional high normalization value (mean of all individual well's high values)
LONRM = Regional low normalization value (mean of all individual well's low values)
PICKHI = Well's high normalization value (i.e. zone.highgr, or zone.highsp)
PICKLO = Well's low normalization value (i.e. zone.lowgr, or zone.lowsp)
Compute the normalized log curve (Compute - Logs - Equation Expression, either loading the normalization equation from the saved equations or entering the expression). Assign values to the equation, entering the regional high and low values as constants and assigning the well's high and low values from the appropriate zone. Create a new curve name for the curve that is being normalized, i.e. grnorm or spnorm.
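For readers working outside the software, the same two-point rescaling can be sketched in a few lines of Python. This is a minimal illustration only; the function name, array contents, and pick values below are hypothetical assumptions, not part of the original workflow, and only the equation itself comes from the text.

import numpy as np

def normalize_curve(curve, pick_lo, pick_hi, lo_nrm, hi_nrm):
    # Two-point linear normalization: shift and rescale the raw curve so that
    # the well's PICKLO/PICKHI map onto the regional LONRM/HINRM values.
    return lo_nrm + (curve - pick_lo) * (hi_nrm - lo_nrm) / (pick_hi - pick_lo)

# Regional values are the means of the individual wells' picks (hypothetical numbers).
high_picks = np.array([118.0, 122.0, 125.0])      # e.g. zone.highgr from each well
low_picks = np.array([22.0, 18.0, 20.0])          # e.g. zone.lowgr from each well
hi_nrm, lo_nrm = high_picks.mean(), low_picks.mean()

gr_raw = np.array([25.0, 60.0, 115.0])            # raw GR samples for one well
gr_norm = normalize_curve(gr_raw, pick_lo=22.0, pick_hi=118.0, lo_nrm=lo_nrm, hi_nrm=hi_nrm)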
Method 2:
This method is very similar to the previous one. A PICKHI and a PICKLO are computed for each individual well instead of being picked from a histogram. Over the defined normalization interval, the value of the curve at the 90th percentile (high) and at the 10th percentile (low) is computed (Compute - Logs - Statistics; check the nth percentile with n = 10 or 90, and compute the arithmetic mean over the normalization interval).
Using the 10th and 90th percentiles also eliminates any spurious data due to data spikes or cycle skips which were not originally edited out of the data. The means of the histograms of these values become the regional low normalization value (mean of the 10th percentile picks) and the regional high normalization value (mean of the 90th percentile picks). These high and low picks can be displayed and modified using the histogram module as the starting point for Method 1 described above. Compute the normalized curve in the same manner, using MEAN90 and MEAN10 for the individual well's normalization values.
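A minimal sketch of the percentile picks, assuming the curve samples over the normalization interval are already loaded into numpy arrays; the well names and sample values are hypothetical. The normalized curve is then computed exactly as in Method 1.

import numpy as np

def percentile_picks(curve_interval):
    # PICKLO and PICKHI taken as the 10th and 90th percentiles of the curve
    # samples within the normalization interval.
    return np.percentile(curve_interval, 10), np.percentile(curve_interval, 90)

# Hypothetical curve samples over the normalization interval for two wells.
wells = {
    "well_a": np.array([18.0, 25.0, 90.0, 110.0, 119.0]),
    "well_b": np.array([22.0, 30.0, 95.0, 112.0, 124.0]),
}
picks = {name: percentile_picks(samples) for name, samples in wells.items()}

# Regional values: means of the individual wells' 10th and 90th percentile picks.
lo_nrm = np.mean([lo for lo, hi in picks.values()])   # mean of MEAN10 picks
hi_nrm = np.mean([hi for lo, hi in picks.values()])   # mean of MEAN90 picks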
Method 3:
In this method, the normalization interval should include both the high and low curve readings. Compute the mean and standard deviation of the curve over the interval. Histogram the means and the standard deviations computed from all of the wells. Record the mean of each (i.e. the mean of the means, MEANT, and the mean of the standard deviations, SDT). These 'means' become the regional or type values. The equation for normalization using the mean and standard deviation is:
CRVNRM = (MEANT - 2*SDT) + (SDT/SDI) * (CURVE - MEANI + 2*SDI)
CRVNRM = Normalized value
CURVE = Raw value
MEANT = Regional mean normalization value
SDT = Regional standard deviation normalization value
MEANI = Well's mean value (i.e. zone.meangr, or zone.meansp)
SDI = Well's standard deviation value (i.e. zone.sdgr or zone.sdsp)
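As with Method 1, the equation can be sketched directly in Python; the regional statistics and the sample array below are assumed values used only for illustration.

import numpy as np

def normalize_mean_std(curve, mean_i, sd_i, mean_t, sd_t):
    # Method 3: shift and rescale so the well's mean and standard deviation
    # over the normalization interval match the regional (type) values.
    return (mean_t - 2.0 * sd_t) + (sd_t / sd_i) * (curve - mean_i + 2.0 * sd_i)

mean_t, sd_t = 70.0, 15.0                      # hypothetical regional (type) statistics
curve = np.array([40.0, 65.0, 95.0])           # raw samples over the normalization interval
curve_norm = normalize_mean_std(curve, mean_i=curve.mean(), sd_i=curve.std(), mean_t=mean_t, sd_t=sd_t)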
Checking the Results:
It is good practice to review the newly normalized curve. Histogram the new curve over any interval and step through the wells, displaying the curve distribution. Note any wells that do not fit the general histogram pattern for the interval and inspect them for any obvious problems. These problem wells may have to be re-normalized.
Special Problems
Cased and Open Boreholes
Wells with log runs over both cased and open hole intervals have to be considered separately. This is especially true with the old neutron logs. In these cases the well has to be separated into cased and open hole sections and the log normalized over each portion separately.
Compaction-Related Changes
Differences in compaction history can affect log response. This is not much of a problem in areas of passive subsidence, but in areas with a complex burial history it can present a problem. A few sedimentary rocks change little with increasing burial depth, most notably low-porosity carbonates and anhydrite. If these rock types can be used as normalization lithologies, the effects of compaction can generally be ignored.
In sand/shale sequences most rocks display a systematic change with depth. Both sands and shales become more resistive, higher in density, lower in apparent neutron porosity and lower in travel time. In these cases, it is best to work with the shale and the shaly siltstones if at all possible.
Density compaction gradients are on the order of one porosity unit per 2000 feet of burial. If the structural relief is 2000 feet or less within the area, compaction changes between wells are not a significant concern. In areas of uniform dip, a linear trend surface generally provides the best regional pattern to which each well can be adjusted. If, however, the residuals from that surface are non-random, further investigation is needed.
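As a rough illustration of the trend-surface idea (the coordinates, values, and least-squares approach shown are assumptions, not a prescription from the text), a first-order surface can be fit to per-well mean values and the residuals examined for a non-random pattern:

import numpy as np

# Hypothetical well coordinates and per-well mean values over the normalization interval.
x = np.array([0.0, 1.0, 2.0, 0.5, 1.5])
y = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
rhob_mean = np.array([2.45, 2.47, 2.50, 2.46, 2.49])

# Fit a first-order trend surface z = a + b*x + c*y by least squares.
A = np.column_stack([np.ones_like(x), x, y])
coeffs, _, _, _ = np.linalg.lstsq(A, rhob_mean, rcond=None)

# Residuals from the surface; a non-random spatial pattern warrants further investigation.
residuals = rhob_mean - A @ coeffs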
Comments on Various Log Types
SP Log
The best procedure for dealing with the SP log is to rescale it in units of percent deflection, where 0% deflection represents the maximum right-hand deflection (maximum shale) and 100% deflection represents the maximum left-hand deflection (clean sand). The first step is to establish a shale baseline for each SP log. This is easily picked interactively from the cross section module (Logs - Pick Baseline Cut-offs). When this data item is subtracted from the digitized curve, the resulting curve should ideally have no drift. If the salinity of the formation water does not change over the digital interval, a histogram of the newly formed driftless curve should provide the needed clean (low) and shale (high) picks. This assumes that a clean sandstone is present within the interval. In cases where only shaly sandstones are represented, it is best to set their maximum left-hand deflection (clean) at 70% or some other arbitrary value (i.e. thick submarine fan sections).
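A minimal sketch of the percent-deflection rescaling, assuming the shale baseline has already been picked; the readings and pick values are hypothetical:

import numpy as np

def sp_percent_deflection(sp, baseline, clean_pick, shale_pick):
    # Subtract the picked shale baseline, then rescale so the shale pick reads
    # 0% deflection and the clean-sand pick reads 100% deflection.
    drift_free = sp - baseline
    return 100.0 * (drift_free - shale_pick) / (clean_pick - shale_pick)

sp_raw = np.array([-5.0, -40.0, -75.0])   # hypothetical SP readings in mV
sp_pct = sp_percent_deflection(sp_raw, baseline=-5.0, clean_pick=-70.0, shale_pick=0.0)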
Gamma Ray Logs
As a rule, regional shale sections are chosen as high-pick gamma ray normalization zones. Highly radioactive, uranium-bearing shales such as the Barnett Shale do not make good normalization intervals. These shales commonly have values which lie outside the range for which some of the older gamma ray detectors were designed (Shier, 1991).
Old gamma ray logs should be converted into API units per the chart listed below, prior to normalization; a brief conversion sketch follows the table.
Old Gamma Ray Log Conversion Units (Hilchie, 1979)
Service Company Conversion Units API Units
Schlumberger 1 ug Ra equiv./ton 16.5
Lane Wells
Series 400 (scintillation) 1 radiation unit 2.16
Series 300 (geiger counter) 20.2 counts/minute 1.0
Series 200 (ionization ch.) 1 standard unit 216.0
PGAC
Type F (geiger counter) 1 microroentgen/hr. 14.0
Type T (scintillation) 1 microroentgen/hr. 15.0
McCullough 1 microroentgen/hr. 10.4
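A minimal sketch of applying these multiplicative factors, assuming the raw curve is stored in one of the listed original units; the dictionary keys and sample readings are illustrative only. (The Lane Wells Series 300 entry is quoted as counts/minute per API unit, so it would divide rather than multiply and is omitted here.)

import numpy as np

# API units per one original unit, taken from the Hilchie (1979) chart above.
API_PER_UNIT = {
    "schlumberger_ug_ra_per_ton": 16.5,
    "lane_wells_400_radiation_unit": 2.16,
    "pgac_type_f_microroentgen_hr": 14.0,
    "mccullough_microroentgen_hr": 10.4,
}

def to_api(curve, unit_key):
    # Multiply an old gamma ray curve by the chart factor to express it in API units.
    return np.asarray(curve, dtype=float) * API_PER_UNIT[unit_key]

gr_old = [3.0, 5.5, 8.0]                              # hypothetical readings in ug Ra equiv./ton
gr_api = to_api(gr_old, "schlumberger_ug_ra_per_ton")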
Sonic Logs
Sonic logs are not subject to sensitivity problems, so only curve shifting is required for normalization. It is best to choose a non-porous, low travel time lithology (limestone or anhydrite) as the normalization lithology.
Neutron Logs
All neutron logs are based on the relationship that porosity is inversely proportional to the logarithm of the counts per second. A relatively small calibration error at the wellsite in the high-porosity range leads to large errors in the apparent porosity of reservoir rocks, whereas a relatively large counts-per-second error in a tight lithology produces only a small inaccuracy in porosity. In many cases the zero-porosity reading is correct, but the high-porosity end is off and a scaling factor must be applied.
Old Neutron Logs
Old neutron curves require one additional step which is not needed for any other curve (Shier, 1991). This step converts a scale that is linear in counts/second (but logarithmic with respect to porosity) to one that is linear with respect to porosity. Shier (1991) describes the method as first picking the PICKHI and PICKLO from modern compensated neutron logs. These same high and low intervals are then recognized on the old neutron logs and their readings recorded. These four data items are entered into the following equation:
CVNORM = antilog of:
(CURVE * (log Rhp - log Rlp) + (Whc * log Rlp) - (Wlc * log Rhp)) / (Whc - Wlc)
CVNORM = Normalized neutron porosity in percent porosity
CURVE = Raw neutron reading in counts/second, etc.
Rlp = Regional value of low porosity lithology in percent porosity
Rhp = Regional value of high porosity lithology in percent porosity
Wlc = Low porosity lithology for the well in counts/second
Whc = High porosity lithology of the well in counts/second
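A minimal sketch of this conversion, using base-10 logarithms (the text does not state the base, so this is an assumption) and hypothetical regional porosities and counts/second picks:

import numpy as np

def old_neutron_to_porosity(curve, r_lp, r_hp, w_lc, w_hc):
    # Linear interpolation in log(porosity) between the well's low- and
    # high-porosity counts/second picks; the antilog recovers percent porosity.
    log_phi = (curve * (np.log10(r_hp) - np.log10(r_lp))
               + w_hc * np.log10(r_lp)
               - w_lc * np.log10(r_hp)) / (w_hc - w_lc)
    return 10.0 ** log_phi

counts = np.array([1200.0, 900.0, 600.0])   # hypothetical old neutron readings, counts/second
phi = old_neutron_to_porosity(counts, r_lp=3.0, r_hp=30.0, w_lc=1300.0, w_hc=500.0)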
Note, if only a few old neutron logs are in the study it is probably best to simply ignore them for mapping purposes.
Density Logs
Nearly all of the non-porous rocks encountered in the petroleum industry have densities in the 2.6 to 2.9 gm/cc range; coal and anhydrite are the exceptions. Within this density range it is usually possible to establish one good normalization lithology. Establishing a second reliable normalization lithology at a lower density level usually proves impossible, because these lower densities are affected by hole enlargement and are frequently blurred by overlapping lithologic variations. Density log normalization is therefore done strictly by curve shifting. The lack of a reliable low-density lithology does not introduce the inaccuracies that might be expected, because the reservoir beds are not very different in density from the normalization lithology used (Shier, 1991).
Resistivity Logs
It is generally best not to normalize any of the resistivity curves. Use them as-is in standard log analysis, correcting for mud resistivity, etc. as needed.
References
Doveton, J. H. and E. Bornemann, 1981, Log Normalization by Trend Surface Analysis, The Log Analyst, v. 22, no. 4, pp. 3-9.
Hilchie, D. W., 1979, Old Electrical Log Interpretation, Colorado School of Mines, Golden, Colorado, 193 p.
Land, W. J., Jr., 1980, SPWLA Ad Hoc Calibration Committee Report: Porosity Log Calibrations, The Log Analyst, v. 21, no. 2, pp. 14-19.
Neinast, G. S. and C. C. Knox, 1973, Normalization of Well Log Data, Society of Professional Well Log Analysts, 14th Annual Symposium, Paper I.
Shier, D. E., 1991, Textbook on Well Log Normalization, Energy Data Services, Englewood, CO.