Scientific Production



Journal Article
06/01/2017

A high-resolution weighted AB semblance for dealing with amplitude-variation-with-offset phenomenon
Velocity analysis is an essential step in seismic reflection data processing. The conventional and fastest method to estimate how velocity changes with depth is to calculate semblance coefficients. Traditional semblance has two problems: low time and velocity resolution and an inability to handle the amplitude-variation-with-offset (AVO) phenomenon. Although a method known as AB semblance can pick peak velocities in areas with an AVO anomaly, it has a lower velocity resolution than conventional semblance. We have developed a weighted AB semblance method that handles both problems simultaneously, introducing two new weighting functions that enhance the resolution of the velocity spectra in the time and velocity directions. In this way, we increase the time and velocity resolution while eliminating the AVO problem. The first weighting function is based on the ratio between the first and second singular values of the time window, improving the resolution of the velocity spectra in the velocity direction. The second weighting function is based on the position of the seismic wavelet in the time window, enhancing the resolution of the velocity spectra in the time direction. We use synthetic and field data examples to show the superior performance of our approach over the traditional one.
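As a point of reference for the weighting idea, here is a minimal sketch of the conventional semblance coefficient that the abstract starts from — not the authors' weighted AB variant; the gather layout, window size, and hyperbolic moveout are illustrative assumptions:

```python
import numpy as np

def semblance(gather, dt, offsets, t0, v, win=5):
    """Conventional semblance for one trial (t0, v) pair, summed over a
    small window of samples around the hyperbolic moveout curve."""
    nt, _ = gather.shape
    num = den = 0.0
    for k in range(-win, win + 1):
        samples = []
        for j, x in enumerate(offsets):
            t = np.sqrt(t0**2 + (x / v)**2) + k * dt  # hyperbolic traveltime
            i = int(round(t / dt))
            if 0 <= i < nt:
                samples.append(gather[i, j])
        s = np.asarray(samples)
        num += s.sum()**2          # energy of the stacked trace
        den += (s**2).sum()        # total energy in the window
    return num / (len(offsets) * den) if den > 0 else 0.0
```

Scanning (t0, v) pairs with this coefficient produces the velocity spectrum whose time and velocity resolution the two weighting functions described above are designed to sharpen.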
Journal Article
06/12/2016

Resolution in crosswell traveltime tomography: The dependence on illumination
The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known but rarely exemplified result. We have revisited resolution in the 2D case using a simple geometric approach based on the angular aperture distribution and the Radon transform properties. We have analytically found that if an isolated interface had dips contained in the angular aperture limits, it could be reconstructed using just one particular projection. By inversion of synthetic data, we found that a slowness field could be approximately reconstructed from a set of projections if the interfaces delimiting the slowness field had dips contained in the available angular apertures. On the one hand, isolated artifacts might be present when the dip is near the illumination limit. On the other hand, in the inverse sense, if an interface is interpretable from a tomogram, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region, it is diffusely imaged, but its interfaces, particularly vertical edges, cannot be resolved and additional artifacts might be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because this anomaly could be an artifact. These results are typical of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination may not be solved by the use of constraints. Crosswell tomograms derived with the use of sparsity constraints, using the discrete cosine transform and Daubechies bases, essentially reproduce the same features seen in tomograms obtained with the smoothness constraint. Interpretation must be done taking into consideration a priori information and the particular limitations due to illumination, as we have determined with a real data case.
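The angular-aperture argument can be made concrete with a toy computation. Under a straight-ray, homogeneous-background assumption (our simplification, not the paper's inversion setup), the available ray angles between the two wells bound the interface dips that can be illuminated:

```python
import numpy as np

def angular_aperture(well_sep, src_depths, rec_depths):
    """Range of straight-ray angles (degrees from horizontal) between
    sources in one vertical well and receivers in another vertical
    well a horizontal distance well_sep apart."""
    angles = [np.degrees(np.arctan2(zr - zs, well_sep))
              for zs in src_depths for zr in rec_depths]
    return min(angles), max(angles)
```

For instance, 200 m of instrumented depth in wells 100 m apart gives an aperture of roughly ±63° from horizontal, so near-vertical features fall outside it — consistent with the unresolved vertical edges noted above.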
Journal Article
05/12/2016

A combined time-frequency filtering strategy for Q-factor compensation of poststack seismic data
Attenuation is one of the main factors responsible for limiting resolution of the seismic method. It damps the higher-frequency components of the signal more strongly, causing the earth to act as a low-pass filter. This loss of high-frequency energy may be partially compensated by application of inverse Q filtering routines. However, such routines often increase the noise level of the data, thereby restricting their use. These filters also require a quality factor profile as an input parameter, which is rarely available. In recent years, alternative methods for stable inverse Q filtering have been presented in the literature, which make it possible to correct the attenuation without introducing so much noise. In addition, new methods have been proposed to estimate the quality factor from seismic reflection data. We have developed a three-stage workflow oriented to attenuation correction in stacked sections. In the first stage, a trace-by-trace estimate of the quality factor is performed along the section. The second stage consists of preparing the data for attenuation compensation, which is performed via a special filtering strategy for efficient noise removal to avoid high-frequency noise bursts. The last stage is the application of a stable inverse Q filtering. As an example, we applied the proposed workflow to a seismic section to compensate for the attenuation caused by shallow gas accumulation. Careful data preparation proved to be a key factor in achieving successful attenuation compensation.
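The "stable" part of the third stage can be illustrated with a gain-limited, amplitude-only inverse Q filter — a hedged sketch of the standard stabilization idea, not the authors' specific routine, which would also correct phase dispersion:

```python
import numpy as np

def inverse_q_amplitude(trace, dt, t0, Q, max_gain_db=20.0):
    """Zero-phase, amplitude-only inverse Q filter for a trace segment
    at traveltime t0. The exponential gain exp(pi*f*t0/Q) is capped at
    max_gain_db so high-frequency noise is not blown up."""
    n = len(trace)
    f = np.fft.rfftfreq(n, dt)
    gain = np.exp(np.pi * f * t0 / Q)
    gain = np.minimum(gain, 10.0 ** (max_gain_db / 20.0))  # stabilization cap
    return np.fft.irfft(np.fft.rfft(trace) * gain, n)
```

Capping the exponential gain (here at 20 dB) is what keeps the filter stable — and it is also why the noise-removal stage that precedes it matters: any residual high-frequency noise sits exactly where the gain is largest.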
Journal Article
01/12/2016

Migration velocity analysis using residual diffraction moveout: a real-data example
Unfocused seismic diffraction events carry direct information about errors in the migration-velocity model. The residual-diffraction-moveout (RDM) migration-velocity-analysis (MVA) method is a recent technique that extracts this information by means of adjusting ellipses or hyperbolas to uncollapsed migrated diffractions. In this paper, we apply this method, which has been tested so far only on synthetic data, to a real data set from the Viking Graben. After application of a plane-wave-destruction (PWD) filter to attenuate the reflected energy, the diffractions in the real data become interpretable and can be used for the RDM method. Our analysis demonstrates that the reflections need not be completely removed for this purpose. Apart from the need to identify and select diffraction events in post-stack depth-migrated sections, the method has a very low computational cost and processing time. To reach an acceptable velocity model of quality comparable to one obtained with common-midpoint (CMP) processing, only two iterations were necessary.
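The ellipse/hyperbola adjustment can be sketched as a linear least-squares fit to points picked on an uncollapsed migrated diffraction. The parameterization below is illustrative, not necessarily the authors' exact one:

```python
import numpy as np

def fit_residual_moveout(x, z, x0):
    """Least-squares fit of z^2 = z0^2 + c*(x - x0)^2 to points (x, z)
    picked on an uncollapsed migrated diffraction with apex at x0.
    In this parameterization, c > 0 (flanks deeper than the apex)
    suggests an undermigrated event (velocity too low), c < 0 an
    overmigrated one, and c ~ 0 a collapsed diffraction."""
    A = np.column_stack([np.ones_like(x), (x - x0) ** 2])
    coef, *_ = np.linalg.lstsq(A, z ** 2, rcond=None)
    z0sq, c = coef
    return np.sqrt(max(z0sq, 0.0)), c
```

Because the model is linear in z², no iterative optimization is needed — part of why the method's computational cost is so low once the events have been picked.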
Journal Article
01/12/2016

A separable strong-anisotropy approximation for pure qP-wave propagation in transversely isotropic media
The wave equation can be tailored to describe wave propagation in vertical-symmetry axis transversely isotropic (VTI) media. The qP- and qS-wave eikonal equations derived from the VTI wave equation indicate that in the pseudoacoustic approximation, their dispersion relations degenerate into a single one. Therefore, when using this dispersion relation for wave simulation, for instance, by means of finite-difference approximations, both events are generated. To avoid the occurrence of the pseudo-S-wave, the qP-wave dispersion relation alone needs to be approximated. This can be done with or without the pseudoacoustic approximation. A Padé expansion of the exact qP-wave dispersion relation leads to a very good approximation. Our implementation of a separable version of this equation in the mixed space-wavenumber domain permits it to be compared with a low-rank solution of the exact qP-wave dispersion relation. Our numerical experiments showed that this approximation can provide highly accurate wavefields, even in strongly anisotropic inhomogeneous media.
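For context, the exact qP phase velocity in VTI media — the quantity whose dispersion relation the Padé expansion approximates — can be evaluated directly from Thomsen parameters. This sketch compares it with the familiar weak-anisotropy approximation; it does not reproduce the authors' Padé coefficients or mixed-domain implementation:

```python
import numpy as np

def vti_qp_phase_velocity(theta, vp0, vs0, eps, delta):
    """Exact qP phase velocity in VTI media (Christoffel solution in
    Thomsen parameters; theta is measured from the symmetry axis)."""
    f = 1.0 - (vs0 / vp0) ** 2
    s2 = np.sin(theta) ** 2
    root = np.sqrt((1.0 + 2.0 * eps * s2 / f) ** 2
                   - 2.0 * (eps - delta) * np.sin(2.0 * theta) ** 2 / f)
    return vp0 * np.sqrt(1.0 + eps * s2 - f / 2.0 + f / 2.0 * root)

def vti_qp_weak(theta, vp0, eps, delta):
    """Thomsen's weak-anisotropy approximation, for comparison."""
    s2 = np.sin(theta) ** 2
    return vp0 * (1.0 + delta * s2 * (1.0 - s2) + eps * s2 ** 2)
```

At theta = 0 both reduce to vp0, and at 90° the exact velocity is vp0*sqrt(1 + 2*eps); the gap between the two curves at intermediate angles is what higher-order approximations such as the Padé expansion are designed to close in strongly anisotropic media.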
Journal Article
01/12/2016

Offset-continuation stacking: Theory and proof of concept
The offset-continuation operation (OCO) is a seismic configuration transform designed to simulate a seismic section as if obtained with a certain source-receiver offset, using the data measured with another offset. Based on this operation, we have introduced the OCO stack, a multiparameter stacking technique that transforms 2D/2.5D prestack multicoverage data into a stacked common-offset (CO) section. Similarly to common-midpoint and common-reflection-surface stacks, the OCO stack does not rely on an a priori velocity model but provides velocity information itself. Because OCO depends on the velocity model used in the process, the method can be combined with trial-stacking techniques for a set of models, thus allowing for the extraction of velocity information. The algorithm consists of data stacking along so-called OCO trajectories, which approximate the common-reflection-point trajectory, i.e., the position of a reflection event in the multicoverage data as a function of source-receiver offset, in dependence on the medium velocity and the local event slope. These trajectories are the ray-theoretical solutions to the OCO image-wave equation, which describes the continuous transformation of a CO reflection event from one offset to another. Stacking along trial OCO trajectories for different values of average velocity and local event slope allows us to determine horizon-based optimal parameter pairs and a final stacked section at arbitrary offset. Synthetic examples demonstrate that the OCO stack works as predicted, almost completely removing random noise added to the data and successfully recovering the reflection events.
Journal Article
01/12/2016

Estimation of quality factor based on peak frequency-shift method and redatuming operator: Application in real data set
Quality factor estimation and correction are necessary to compensate for the seismic energy dissipated during acoustic-/elastic-wave propagation in the earth. In this process, known as Q-filtering in the realm of seismic processing, the main goal is to improve the resolution of the seismic signal, as well as to recover part of the energy dissipated by anelastic attenuation. We have found a way to improve Q-factor estimation from seismic reflection data. Our methodology is based on the combination of the peak-frequency-shift (PFS) method and the redatuming operator. Our innovation is in the way we correct traveltimes when the medium consists of many layers: the traveltime table used in the PFS method is corrected using the redatuming operator. This operation, performed iteratively, allows a more accurate estimation of the Q factor layer by layer. Applications to synthetic and real data (Viking Graben) reveal the feasibility of our analysis.
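The single-layer core of the PFS method has a closed form when the source spectrum is Ricker-like — an assumption made here for illustration; the abstract's contribution is the redatuming-based traveltime correction for the multilayer case, which this sketch does not reproduce:

```python
import numpy as np

def q_from_peak_frequency_shift(fm, fp, t):
    """Peak-frequency-shift Q estimate for a Ricker-like source
    spectrum proportional to f^2 * exp(-f^2/fm^2): attenuation
    exp(-pi*f*t/Q) moves the spectral peak from fm down to fp after
    traveltime t. Setting the derivative of the log-spectrum,
    2/f - 2*f/fm^2 - pi*t/Q, to zero at f = fp and solving for Q
    gives the closed form below."""
    return np.pi * t * fp * fm**2 / (2.0 * (fm**2 - fp**2))
```

Inverting this relation layer by layer requires the traveltime t within each layer, which is exactly where the redatuming operator enters the authors' workflow.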
Journal Article
11/10/2016

An algorithm for wave propagation analysis in stratified poroelastic media
The classic poroelastic theory of Biot, developed in the 1950s, describes the propagation of elastic waves through a porous medium containing a fluid. This theory has been extensively used in various fields dealing with porous media: seismic exploration, oil/gas reservoir characterization, environmental geophysics, earthquake seismology, etc. In this work, we use the Ursin formalism to derive explicit formulas for the analysis of elastic-wave propagation through a stratified 3D porous medium, where the parameters of the medium are piecewise-constant functions of a single spatial variable, depth.
Key words: poroelasticity, Biot system, low-frequency range, layered media, Ursin algorithm
Journal Article
22/07/2016

Relief Geometric Effects on Frequency-Domain Electromagnetic Data
A perpendicular transmitter-receiver coil arrangement used in frequency-domain electromagnetic (FDEM) surveys can deviate from its standard geometric definition due to the relief of the surveyed area, when combined with a large transmitter-receiver distance and a large transmitter loop. This happens because the local relief tilts the equivalent magnetic-moment axis away from the vertical and places transmitter and receiver at different elevations. We study this effect here by substituting an inclined plane for the rugged relief. We have developed a new formulation for the n-layered model that allowed us to investigate the relief geometry effects on FDEM data, restricting the analysis to the two-layer earth model and considering three cases of transmitter-receiver situations controlled by the relief model. This procedure proved very useful to demonstrate how the response curves depart from those obtained for an inclined and a horizontal ground. The results show that small deviations in the verticality of the transmitter-loop axis or in the horizontality of the surficial plane cause significant deviations in the data, even for angles as small as 1°.
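The geometric sensitivity claimed above can be checked with a free-space dipole calculation — a sketch under our own simplifying assumptions (unit magnetic moment, static field, no layered-earth induction): for a nominally perpendicular coil pair the direct coupling is exactly null, and even a 1° tilt of the transmitter axis restores a nonzero primary field at the receiver.

```python
import numpy as np

def dipole_coupling(m_dir, r_vec, rx_dir):
    """Static magnetic field of a unit dipole with axis m_dir,
    evaluated at r_vec and projected onto the receiver-coil axis
    rx_dir (all directions are normalized internally)."""
    r_vec = np.asarray(r_vec, float)
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    m = np.asarray(m_dir, float)
    m = m / np.linalg.norm(m)
    H = (3.0 * np.dot(m, rhat) * rhat - m) / (4.0 * np.pi * r**3)
    rx = np.asarray(rx_dir, float)
    return np.dot(H, rx / np.linalg.norm(rx))
```

With a vertical transmitter axis, 100 m horizontal separation, and a horizontal receiver axis the coupling vanishes; a 1° tilt of the transmitter gives a primary coupling on the order of 10^-9 per unit moment, which is no longer negligible against the secondary (ground) response.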
Journal Article
25/05/2016

How much averaging is necessary to cancel out cross-terms in noise correlation studies?
We present an analytical approach to jointly estimate the correlation window length and the number of correlograms to stack in ambient noise correlation studies, so as to statistically ensure that noise cross-terms cancel out to within a chosen threshold. These estimates provide the minimum amount of data necessary to extract coherent signals in ambient noise studies using noise sequences filtered in a given frequency bandwidth. The inputs for the estimation process are (1) the variance of the cross-correlation energy density, calculated over an elementary time length equal to the largest period present in the filtered data, and (2) the threshold below which the noise cross-terms will be in the final stacked correlograms. The presented theory explains how to adjust the required correlation window length and number of stacks when changing from one frequency bandwidth to another. In addition, this theory provides a simple way to monitor stationarity in the noise. The validity of the deduced expressions has been confirmed with numerical cross-correlation tests using both synthetic and field data.

Key words: Time-series analysis; Interferometry.
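A back-of-the-envelope version of the stacking estimate, assuming the noise cross-terms in successive correlograms are independent so their stack decays as 1/sqrt(N) — the paper's analysis is more complete, also fixing the window length from the longest period in the filtered band:

```python
import numpy as np

def min_stacks(sigma_cross, threshold):
    """Minimum number of independent correlograms to stack so that the
    rms cross-term amplitude, sigma_cross/sqrt(N), drops below the
    chosen threshold."""
    return int(np.ceil((sigma_cross / threshold) ** 2))
```

For example, pushing cross-terms one order of magnitude below their single-window rms level requires about 100 stacks; the quadratic cost of the 1/sqrt(N) decay is why the joint choice of window length and stack count matters.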