[Figure residue from a Piezoresponse Force Microscopy (PFM) applications note: a sketch of the relationship between polarization P, strain S and electric field E; theoretical hysteresis loops whose linear strain-field segments yield the d33 modulus, with polarization switching at the coercive field; height, amplitude, phase and piezoresponse images of PZT, TGS and BFO samples with cross-section profiles; and PFM spectroscopy curves versus bias voltage recorded under 3-10 V AC excitation near the contact resonance frequency.]
In the past, statistical theory has depended primarily on the Riemann integral. VDR depends more heavily on Lebesgue measure. However, the derivations are kept intuitive with respect to those issues insofar as possible. We also wish to make a special acknowledgment of Professor Samuel Kotz. Without his early support and contributions to the topic, this book may not have been possible.
Marvin D. Troutt

[Fragments of the table of contents: Recursions and the Vertical Density; The Logistic Chaos Generator; The Uniform Density; Computations of the Sharkfin Generator; Generalizations of Sharkfin Generators; Re-estimation and Continuous Improvement; Comparisons with Management Coefficients Theory; Inappropriate Convergence; Miscellaneous Remarks; Single Cost Pool Case.]

Although the term VDR was not used at that time, the first paper directly related to the topic (Troutt) focused on the latter case, where a formula for the density of the density function value was derived. Namely, let f(x) be the pdf of a random variable X on R^n.
This result gives, inter alia, a new derivation of the Box-Muller method, as will be developed below. In Troutt, a generalization was given for the case in which V(x), a function on R^n but not necessarily a pdf, and g(v), the density of the ordinate of V(x), are specified. It is then required to find the resulting pdf, f(x), of X.

The pdf g(v) can be called the vertical or ordinate density. In later applications, v often represents a performance score, so that g(v) may be called the performance or performance-score density. There are cases when f(x) and V(x) are given and it is desired to find g(v); the problem is then essentially a special kind of change-of-variables technique. For many applications, however, it is the reverse situation that is of more interest.

That is, we often wish to construct pdfs for special circumstances by starting with V(x) and g(v). VDR techniques often provide a useful alternative strategy for generating random variables; thus, most of the initial applications have been to Monte Carlo simulation.

Original Motivation. Consider a group decision problem that involves choosing a most desirable vector of numbers. [Table 1, "Relative importance of teaching, research and service," listed the responses of 14 departmental members together with their means; the numeric entries were lost in extraction.]
The exercise was conducted to estimate the best relative priorities on research (R), teaching (T) and service (S), respectively, for use in an academic department. Relative priorities are measures of relative importance that have been normalized to sum to unity. However, this MAV function is not explicitly known. Moreover, it would be quite difficult to model such a function, as it would be necessary to consider all the different ways that the priority vector (T, R, S) might be used, along with their own relative frequencies and relative importance measures.

For example, such data might be used in promotion and tenure decisions, hiring decisions, merit pay recommendations, etc. However, how should those activities be weighted according to their frequencies of occurrence and their relative impacts on the welfare of the department? Given these difficulties with a direct approach to modeling an appropriate MAV function, a more expedient alternative was adopted.

The responses in the sample of departmental members may be thought of as individual estimates of the ideal priority vector. The problem is complicated further in that academic departments tend to be self-selecting, and the members are therefore more likely to share common biases (see Figure 1). That is, one would expect the centroid to reflect a similar degree of bias as that shared by individuals in the group, if any. Thus, ideally, an aggregator is desired that filters bias, or at least suggests its presence. An early approach to the problem was proposed in Troutt, which introduced the dome bias model.
That model proposed one mechanism for explaining a shared group bias. The problem was further reexamined in Troutt, Pang and Hou from the point of view of mode estimation, and three new such aggregators were compared for the dome bias model. If the V(x)-function were known explicitly, then each individual estimate could be directly scored.

Considering the distribution of such v-score values led to the concept of the vertical or ordinate density, or pdf, g(v). The question arises as to how g(v) and V(x) should be related to what may be called the spatial pdf f(x) of the individual estimates. It was soon noticed that a typical unimodal f(x), such as the multivariate normal pdf, might itself serve as a V(x)-function. For that case, the question of finding g(v) can be described as that of finding the density of the density itself.

Together with a coincidental review of the Box-Muller method in simulation, these ideas led to the paper Troutt. After that, the more general case in which V(x) is not necessarily a pdf was considered further in Troutt, where, as noted above, the VDR term was first used. At about the same time that these ideas were developed, a related set of estimation techniques began to be considered.

A first version, called maximum decisional efficiency (MDE) estimation, was proposed in Troutt. A variation called maximum performance efficiency (MPE) was applied in Troutt et al. This approach to estimation focuses on achieving the maximum average v-score, and has the advantage that the form of g(v), or f(x), does not have to be specified in advance, as in maximum likelihood formulations.
The approach also has an intuitive rationale. Namely, assuming that an appropriate model, V(x), has been specified for the desirability of the decisions or performance measures in question, the decision-maker or organization should have attempted to maximize its average v-score over past occasions. The approach has been applied in Troutt and Troutt et al. It is also related to frontier regression models, as discussed in Troutt et al. The performance efficiency score, or v-score, has also been applied as a statistic to facilitate further analysis in Alsalem et al.
The organization of the book is as follows. Chapter 1 discusses basic results and covers the original results in the papers of Troutt, along with some extensions to what is called general VDR. Chapter 2 covers the results of Kotz and Troutt on applications of VDR to the ordering of distributions. This chapter also includes some new material on the analysis of correlation into two components, called vertical and contour correlations. Chapter 3 reviews the results of Pang et al.

This chapter deals with multivariate VDR issues. A result by Kozubowski, which proved a conjecture in Troutt, is also discussed. Chapter 4 is devoted to simulation applications; the results of Pang et al. are covered there. Chapter 5 is devoted to some new, unpublished applications of VDR. It contains an application of VDR to finding pdfs of chaotic orbit values for chaos generators on the unit interval.

The results are used to construct a very large class of distinct uniform random number generators based on chaos generators of a particular design. Chapter 6 discusses some miscellaneous applications of VDR. This operationalizes a famous quote of Tolstoy in his celebrated novel Anna Karenina. We relate this to the issue of determining when the consensus of estimators is associated with greater accuracy of the estimate.

In addition, we consider a different way to generalize the normal pdf on the unit interval, and compare it with the entropy approach. Finally, we discuss an application of VDR to density construction that arises in what may be called inverse linear programming. Chapter 7 contains a case application of VDR in what we call behavioral estimation.

The minimum decisional regret (MDR) estimation method is discussed as an application to estimating costs related to production planning. VDR is used to construct a validation technique for that method. Chapter 8 is about the estimation of benchmark costs. Here, we define what may be called benchmark, or most efficient, costs per unit of cost driver. The principle of maximum performance efficiency (MPE) is proposed, and an approach to estimating benchmark unit costs and benchmark cost matrices is derived from this principle.
The Density of the Density Ordinate. Recall that the Box-Muller method generates uncorrelated normal deviates x1 = (-2 ln u1)^(1/2) cos(2π u2) and x2 = (-2 ln u1)^(1/2) sin(2π u2) from a pair of uniform [0,1] observations u1 and u2. Let f(x1, x2) = (2π)^(-1) exp{-(x1² + x2²)/2} be the standard uncorrelated bivariate normal density. Then x1 and x2 are uncorrelated normal deviates. In their paper, Box and Muller proved the validity of the method by reference to properties of the chi-square (χ²) distribution.
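As an illustrative sketch (not from the book; the helper name is ours), the method is only a few lines in Python:

```python
import math
import random

def box_muller(u1: float, u2: float) -> tuple[float, float]:
    """Map a pair of Uniform(0,1] draws to a pair of independent N(0,1) deviates."""
    r = math.sqrt(-2.0 * math.log(u1))   # radius from the first uniform
    theta = 2.0 * math.pi * u2           # angle from the second uniform
    return r * math.cos(theta), r * math.sin(theta)

random.seed(0)
# 1.0 - random() lies in (0, 1], so log() is always defined.
sample = [box_muller(1.0 - random.random(), random.random()) for _ in range(100_000)]
x1s = [x for x, _ in sample]
mean = sum(x1s) / len(x1s)
var = sum((x - mean) ** 2 for x in x1s) / len(x1s)
print(round(mean, 2), round(var, 2))
```

The sample mean and variance should be close to 0 and 1, respectively.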
However, we may give an alternative proof based on the following theorem (Troutt). Theorem 1.1: let f(x) be a pdf on R^n with maximum ordinate m, and let A(v) denote the Lebesgue measure of the set {x : f(x) ≥ v}; then the ordinate V = f(X) has density g(v) = -v A'(v) on [0, m]. For example, consider the case in which f(x) is uniformly distributed on [0,1], and let m denote the maximum ordinate of the density f(x). Then, as suggested by Figure 1.1, the ordinate of this density is concentrated at m, which can be verified directly from the theorem. The reader may find it interesting to compare the ease of this approach with a more routine first-principles one.

The Box-Muller method continues to be popular for the generation of normal random deviates, despite the existence of other methods claimed to be more computationally efficient (see Marsaglia and Bray; Atkinson and Pearce; Ahrens and Dieter; and Kinderman and Ramage). This idea can be generalized to represent a large class of densities, and gives rise to the following definition of vertical density representation (VDR).
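A quick numerical check of the theorem (our own example, not from the text): for the unit exponential pdf f(x) = e^(-x) on [0, ∞), A(v) = -ln v on (0, 1], so g(v) = -v A'(v) = 1; the ordinate f(X) should therefore be uniform on (0, 1]:

```python
import math
import random

random.seed(1)
n = 200_000
# Draw X ~ Exp(1) by inversion, then form the ordinate V = f(X) = exp(-X).
xs = [-math.log(1.0 - random.random()) for _ in range(n)]
vs = [math.exp(-x) for x in xs]

# If V is Uniform(0,1], its mean is 1/2 and its variance is 1/12.
mean_v = sum(vs) / n
var_v = sum((v - mean_v) ** 2 for v in vs) / n
print(round(mean_v, 3), round(var_v, 3))
```

The empirical mean and variance should be close to 0.5 and 1/12 ≈ 0.083.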
Definition 1.1. Let V(x) be a function on R^n with range [a,b], and let g(v) be a given density on [a,b]. Suppose that for each v ∈ [a,b], x is distributed uniformly on the boundary ∂S(v). Then the process which selects v ∈ [a,b] according to g(v), and then x ∈ ∂S(v) according to a uniform density, produces a random variable with a density f(x) on R^n; this is called the vertical density representation (VDR), and the density g(v) is called the vertical or ordinate density.

Based on this definition, we have the following main result. It follows that f(x) is constant on ∂S(v), and hence the level curves of f(x) and V(x) must coincide. Let G(v) denote the CDF for g(v). Thus (1.x). For instance, V(x) might be given as (x - x0)'Q(x - x0), where Q is a real positive definite matrix. In this case, (1.x). When the density of a variable of interest, V(x), is desired, and a density f(x) can be recognized as a function of V(x), then Theorem 1.1 applies.
Hence, by Theorem 1.1 (using the change-of-variables results in Fleming), this distribution is also the chi-square (χ²) distribution with n degrees of freedom (see, for example, Law and Kelton). In general, this will work provided the range of V(x), Ran(V), is the same as the support of g(v). Several examples are presented in this section; some are familiar and some are new. The function V(x) may or may not be chosen as a pdf itself. These examples indicate the versatility of Theorem 1.1. Verification is left to the reader.
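For instance (a sketch with our own choice of dimension), taking V(x) = x'x for a standard normal vector in R^n should reproduce the χ² distribution with n degrees of freedom, whose mean is n and variance 2n:

```python
import random

random.seed(2)
n_dim, n_samp = 5, 100_000
# V(x) = x'x for x ~ N(0, I) in R^n_dim should follow chi-square with n_dim d.o.f.
vals = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n_dim)) for _ in range(n_samp)]
mean_v = sum(vals) / n_samp
var_v = sum((v - mean_v) ** 2 for v in vals) / n_samp
print(round(mean_v, 2), round(var_v, 2))
```

The sample mean and variance should land near 5 and 10 for n = 5.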
Examples in which V(x) itself is a pdf: Examples 1.x. Examples in which V(x) is not a pdf: Examples 1.x (e.g., the Laplace pdf). If f is the standard uncorrelated bivariate normal density on R², then v has the uniform density on [0, (2π)^(-1)]. That is, the ordinate of the standard uncorrelated bivariate normal density is uniformly distributed.
To generate a pair of independent N(0,1) variates, we carry out the following two-stage procedure: (1) randomly generate an ordinate of the density and solve for the isodensity contour associated with that ordinate; (2) randomly generate a point on that contour. The above idea can be extended to general univariate representation cases as follows (Troutt and Pang). Let h(x|v) be the conditional density on the boundary of S(v). Also (1.x). Another particular solution pair can be derived from the Box-Muller method.

Clearly, the associated random variable W is distributed on the interval [-1, 1]. It can easily be checked that f_W(w), the pdf of W, is given by (1.x). From Section 1.x, the original Box-Muller method may be expressed in terms of h(x|v) and g(v) as follows. First we sample v from the density g(v), and then sample x given v according to h(x|v). The resulting x follows the standard normal distribution. Thus we have proven the following theorem (Troutt and Pang): the above procedure is a compositional Monte Carlo method for the standard normal density.
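The two-stage recipe can be sketched concretely for the standard bivariate normal (function and variable names are ours): since the ordinate v is uniform on (0, (2π)^(-1)], we draw v, solve the isodensity contour for its radius r(v) = (-2 ln(2πv))^(1/2), and then pick a uniform point on that circle.

```python
import math
import random

def vdr_normal_pair() -> tuple[float, float]:
    """Stage 1: draw the ordinate v ~ Uniform(0, 1/(2*pi)];
    stage 2: draw a uniform point on the isodensity circle for that v."""
    v = (1.0 - random.random()) / (2.0 * math.pi)       # ordinate in (0, 1/(2*pi)]
    r = math.sqrt(-2.0 * math.log(2.0 * math.pi * v))   # contour radius for ordinate v
    theta = 2.0 * math.pi * random.random()             # uniform angle on the contour
    return r * math.cos(theta), r * math.sin(theta)

random.seed(3)
pairs = [vdr_normal_pair() for _ in range(100_000)]
m1 = sum(x for x, _ in pairs) / len(pairs)
v1 = sum((x - m1) ** 2 for x, _ in pairs) / len(pairs)
print(round(m1, 2), round(v1, 2))
```

This is, of course, Box-Muller in disguise: with v uniform, 2πv plays the role of u1.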
The method should clearly extend to a large class of bell-shaped distributions, except that g(v) will not in general be the uniform pdf. The difference between the VDR method proposed by Troutt and the method in this section may be described as follows: the original VDR approach generates a point only on the boundary of the set S(v), while the method in this section generates a point from the entire set S(v).

In fact, equation (1.x) gives an extension of Theorem 1.1. We may note one further solution of the composition equation (1.x). Thus, since also A(v) ≥ 0, A(v) is itself a pdf with support equal to the range of f(x), say [0, m]. In addition, this can be viewed from the perspective of the acceptance-rejection (AR) method of simulation theory, which is discussed further in Chapter 4; further details on this connection are given in Section 4.x. Previous work on VDR assumed that the conditional density of the variate on isodensity contours is uniform.
In this section, we show the results for more general contour densities. Let x ∈ R^n. There are two questions regarding densities in R^n in this context. First, let a family of densities on the contours be given. Finally, let g(v) be a probability density function on [0, ∞). The first question is as follows: suppose that realizations, v, of V ∈ [0, ∞) are generated according to the pdf g(v); what will be the pdf of the x-realizations on R^n?

Conversely, given a pdf f(x) on R^n and a V(x) of the above form, what are the corresponding g(v) and h(x|v)? These questions are answered by specifying the relationship among V(x), g(v), h(x|v), and f(x). The foregoing relationship has so far been developed only under the assumption that h(x|v) is uniform in x for each respective value of v. Interest in these issues arises from several sources. Such representations provide new avenues for simulation purposes. They also generalize the idea of Lp-norm symmetric densities, and they have been applied to study the tail behavior of univariate densities.
Finally, if V(x) is a performance measure, then VDR enables inferences about the density of V(x) scores from statistical observations related to f(x). Let V(x) be a function on R^n. It was also shown that the vertical density, g(v), is related to f(x) by relation (1.x). The aim of this section is to give a more general form of this particular relation. In fact, using a special change-of-variables formula for integrals, we prove formula (1.x) in Section 1.x. Some examples are given in the next section. In the sequel, we shall assume that the gradient vector ∇V vanishes nowhere on R^n, so that ∇V/|∇V| is the unit normal on the surface.

Theorem 1.x. The proof of Theorem 1.x proceeds as follows. Let I_A be the indicator function of a Borel set A in R^n. Then, using (1.x), this completes the proof.

In fact, it is degenerate. In Section 1.x, a wide variety of different representations of f(x) become possible as the function V(x) is varied. Alternative representations may be useful in developing Monte Carlo simulation techniques. Interest in such representations also arises in the analysis of performance. In this setting, the density h might be expected to measure effects related to the relative orientations of the target and shooter.
Chapter 2. Applications of Vertical Density Representation

Introduction. This chapter presents two applications. The first is an application to the tail behaviour of pdfs and the ordering of their distributions; the discussion builds on the paper of Kotz and Troutt and adds some new results. The second application is to the analysis, or decomposition, of correlation into two components called vertical and contour correlation, respectively. An application to the aggregation of experts is discussed.
This material has not been published previously. For instance, the vertical pdf remains the same if the values of f(x) over any interval are interchanged with those of another interval of equal length. The reader may verify that such interchanges do not change the values of the A(v)-function. Such changes may be called strata shifts; they are applied in Chapter 5 and are also discussed further in Section 6.x. The resulting density may then be called a strata shift density. Also, the vertical pdfs of X and X + b are the same. However, the vertical pdfs will differ for X and cX when |c| ≠ 1.
An approach that normalizes this last kind of difference is proposed next. It follows from Theorem 1.1. We would expect both to be more concentrated at values near zero for relatively thick-tailed distributions. This idea is used further in Section 2.x. To start with, we consider the exponential and Laplace distributions.

It may serve as the measurement, or etalon, for the rate of tail-ordinate decline of a distribution. Further interesting extension work on this topic would be the multivariate case; this will likely relate to the concept of a scalar-valued multivariate hazard rate (Goodman and Kotz). In that theorem, a pdf f(x) was given and it was desired to find the associated vertical density g(v).
Here, we start with g(v) and seek to determine f(x). The new class, which we call the negative power vertical density class, has thicker tails than the Cauchy density. Example 2.x: this class is unbounded at zero, the more so as q increases to 1. Hence, if we can find a symmetric f(x) on (-∞, ∞) for which g(v) is the vertical density, then that f(x) will have thick tails. From Theorem 1.1,

we solve for A(v) in this relationship and then construct an f(x) from that solution. The reader may verify problem (2.x); the solutions are given by the two branches. Similarly, Figure 2.x shows that this is the case. The tail of the resulting f(x) derived above, (2.x), is ultimately higher in ordinate; that is, it is thicker than that of the Cauchy. It is not known whether densities exist with still thicker tails. This is called the finite contour case. The results are applied to correlated unimodal densities on R.
Correlation can be decomposed into two distinct components, called vertical and contour correlation, respectively. Some implications for the consensus of correlated experts are discussed. Thus, it is desired to specify the relationship between V(x), g(v), P(w), and f(x). The following result holds in this case (Theorem 2.x). Proof:

We first consider a neighborhood of x(w). For example, if V(x) is concave, then V(x) is monotone increasing on (-∞, x0] and monotone decreasing on [x0, ∞) for some x0. In this case, f(x) may not be continuous at x0. It may also be noted that the following corollary holds. Corollary 2.x: P(X ≤ x0) is the weighted average of p-(v) with respect to the density g(v).
Similarly, P(X ≥ x0) is the same weighted average of p+(v). The last integral may be considered to be the weighted average of p-(V(x)) on (-∞, x0]. In this case, P(w) may have a different number of components depending on the value of w. In the next section, an application of these results is made to the decomposition of correlation into vertical and contour components. Suppose that these variates have correlation ρ.

A standard normal variate may be given a VDR representation of the type in the previous section as follows. We have, for each variate, call them X1 and X2, the common pdf (2.x). Let the two random variables having the g(v)-densities be denoted V1 and V2. Dependency between the V1 and V2 random variables can be measured by a correlation, ρv. Some notation and preliminary results are needed for the derivation. Define the following two events, and let y be their common value. We next compute the correlation between X1 and X2, noting that their means are zero and their standard deviations are unity.

It seems sensible in both these cases to define the contour correlation by ρc, where (2.x). Although the name contour correlation is expressive of the derivation, ρc could also be called the sign correlation or the error-direction correlation. Table 2.x summarizes the cases. Some implications and other examples are discussed further below in the context of expert estimation-error distributions. The derivation from (2.x) follows.
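The decomposition can be illustrated numerically (an illustrative sketch only, with our own variable names; we use the sign of each variate for the contour component and the pdf ordinate for the vertical component, mirroring the definitions above rather than reproducing the book's exact formulas):

```python
import math
import random

def corr(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a) / n)
    sb = math.sqrt(sum((y - mb) ** 2 for y in b) / n)
    return cov / (sa * sb)

random.seed(4)
rho, n = 0.6, 100_000
pairs = []
for _ in range(n):
    # Correlated standard normal pair via a shared common factor.
    z0, z1 = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((z0, rho * z0 + math.sqrt(1 - rho * rho) * z1))

phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)      # N(0,1) ordinate
px = corr([a for a, _ in pairs], [b for _, b in pairs])            # overall correlation
pc = corr([math.copysign(1, a) for a, _ in pairs],
          [math.copysign(1, b) for _, b in pairs])                 # sign (contour) correlation
pv = corr([phi(a) for a, _ in pairs], [phi(b) for _, b in pairs])  # ordinate (vertical) correlation
print(round(px, 2), round(pc, 2), round(pv, 2))
```

For bivariate normals, the sign correlation is (2/π) arcsin ρ, so pc ≈ 0.41 when ρ = 0.6, while pv is a distinct, smaller positive quantity.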
If the densities were specified initially in f(x) form, i.e., rather than via V(x) and g(v), the corresponding quantities would first have to be derived. However, these steps should not be particularly difficult. In this more general setting, ρc measures the extent to which two random variables tend to be simultaneously above or below their mean values, while ρv measures the extent to which the errors are large or small together. These results may be especially useful in the aggregation or reconciliation of expert estimates. The Bayesian theory of expert aggregation has been developed by Winkler, Clemen and Winkler, and others; see also Genest and Zidek, and West. Correlation of expert errors plays a central role in these Bayesian approaches.

The present results provide a direction for more detailed analysis in this setting by permitting expert errors to have more than one kind of correlation at the same time. The Bayesian theory generally tends to decrease the effective weights of correlated experts in the final aggregated estimates. This is sensible on intuitive grounds, since high correlation suggests that little or no new information is gained by including one or the other such expert.

Put differently, inclusion of both correlated experts on an equal-weights basis would, in effect, double-weight just one independent estimate. Therefore we expect aggregation methods to react by decreasing the assigned weights. At the other extreme of zero correlation between two expert error densities, the experts are treated equally and tend to receive about equal weights in the final aggregate. This also accords well with intuition. However, there are five cases, depending on the values of ρx and ρv, which raise doubts about these results.

In what follows, we assume a focus on two experts out of a possibly larger sample, and that all correlations, ρc, ρv, and ρx, are zero between each of the focus pair and each of the others. From Table 2.x, equal weightings of all the experts would be called for, both by the Bayesian approach and by intuition. If expert one is very accurate (large v-score), then expert two is very inaccurate (small v-score), and vice versa. Intuition suggests in this case that only one or the other should be retained. An obvious selection criterion can be based on imputed error.

Namely, suppose expert one is retained first and expert two is omitted. An equal weighting of the estimates xi of the retained set yields the potential aggregate w1, say. Similarly, if expert two is retained, then an aggregate w2 is obtained.
But the perfect correlation between V1 and V2 shows that the errors of expert one and expert two must always be identical in magnitude. Evidently, such information would be extremely strong in its effect on aggregation. The next three cases consider ρx, and also ρc, to have maximum value for a given value of ρv.

This case is representative of very large values of ρc with very small values of ρv. It is an extreme version of very high error-direction correlation along with zero accuracy correlation (ρv). The large ρc and moderately positive overall ρx-correlation argue for discounting one or both of these experts. Therefore, all that can be said at the present level of analysis is that, whatever errors are imputed to these experts, the signs of such errors must be identical.

This case is the extreme version, with high positive correlations of all three types. However, with ρv merely near to 1.0, the experts' errors need only be near to each other with high probability; and whatever error values are imputed to them, their signs should also be the same with high probability. The analysis of the foregoing cases shows that, for one simple situation in which a pair of experts can be isolated from the rest of a sample, the consensus point estimate depends not only on the ordinary correlation, ρx, but is also particularly sensitive to the ρv-correlation.

This is also to be expected from another viewpoint. In the Bayesian approach, accuracy with respect to bias is handled by initial adjustment of error distributions for known biases, if any. Accuracy as precision is measured only by variances. Collection and use of ρv provides an important additional source of information on precision. In fact, for Case 2 above, the ρv-information was sufficient, along with other assumptions, to actually identify an appropriate consensus point. These results indicate the need for a refinement of the Bayesian approach to reflect the impacts of the ρv and ρc correlation matrices.

Alternatively, an approach that directly assigns appropriate weights might be sought, in which the weights attempt to reflect the implications of the ρv and ρc correlation matrices. A further important consideration, signalled particularly by Cases 2 and 3 and to a lesser extent by Cases 4 and 5, is that weights for aggregating individual experts into a consensus point estimate should apparently depend on the precision assigned to each expert by the weighting method.
A simple operationalization of such dependence of weights on imputed accuracies can be given as follows, without reference to correlations. If weights wi are required to be proportional to these precision measures, then the condition required can be seen to be (2.x). Results for the general univariate case are derived here. These results were applied to the analysis of correlation: correlation can be analyzed into contour and vertical components, thus permitting more detailed analyses involving correlated variables. These results also give some insight into the well-known difficulty of defining the multivariate gamma density.
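A minimal sketch of such precision-proportional weighting (our own example; we take each expert's precision to be the reciprocal of an imputed error variance):

```python
def precision_weights(variances):
    """Weights proportional to precisions 1/sigma_i^2, normalized to sum to one."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    return [p / total for p in precisions]

def consensus(estimates, variances):
    """Precision-weighted consensus point estimate."""
    return sum(w * x for w, x in zip(precision_weights(variances), estimates))

# Three experts: the most precise one (smallest variance) dominates the aggregate.
est = [10.0, 12.0, 20.0]
var = [1.0, 4.0, 16.0]
print([round(w, 3) for w in precision_weights(var)])  # weights 16/21, 4/21, 1/21
print(round(consensus(est, var), 3))                  # 228/21 = 10.857
```

The aggregate is pulled strongly toward the estimate of the most precise expert, as the text's discussion of imputed accuracy suggests it should be.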
See, for example, Law and Kelton. Let ρ^c denote the matrix of contour correlations, ρ^v the matrix of vertical correlations, and ρ^x the overall correlation matrix. Evidently, for one and the same multivariate normal distribution with correlation matrix ρ^x, one could associate a continuum of multivariate gamma densities as ρ^c and ρ^v vary in such a way that their combined effect is ρ^x.

Specifically, let X be the bivariate normal, N(0, Σ), with correlation matrix ρ^x, and let ρ^c be the contour correlation matrix for X1 and X2. Then any number of such bivariate gamma distributions can be associated with the N(0, Σ) density, provided ρ^c is chosen so that ρ^c and ρ^v yield ρ^x. The decomposition of correlation into separate components for accuracy and error direction, respectively, promises to be especially useful for the Bayesian consensus of experts.
In particular, these considerations may be helpful in resolving the intuitive dilemma arising when highly accurate experts are discounted due to correlation. This page intentionally left blank Chapter 3 Multivariate Vertical Density Representation Introduction In this chapter, we develop the theory of vertical density representation VDR in the multivariate case.
We present a formula for the calculation of the conditional probability density of a random vector given its density value. Most of the material given here is based on Pang et al. As we have seen, the concept of VDR, as originally developed, was closely related to the generation of normal random variates. Troutt and Pang obtained a smooth solution hv for equation 3. It is natural to ask whether we can generalize the above procedure to the d-dimensional case with d ≥ 3. As we shall see, the answer is positive.
It is perfectly possible to give a representation of the density function f(xd) of a d-dimensional normal vector Xd in a form similar to Eq. 3. The above findings are valid for a very wide family of distributions. Let Ld denote the Lebesgue measure in Rd. In order to solve this problem, methods such as the Monte Carlo method, genetic algorithms, and the simulated annealing method can be used to reach the global minimum [24].
Compared with gradient-type methods, these methods search the whole model space for the global minimum. However, they are usually time consuming, especially for full 3D inversion, which may involve millions of model parameters. Instead of deterministic inversion, a stochastic inversion approach can be applied to potential field data [24, 25, 26]. Compared with deterministic inversion, stochastic inversion can also provide an uncertainty estimate of model confidence.
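To illustrate how a stochastic search can escape local minima that trap gradient methods, here is a minimal simulated-annealing sketch on a 1-D multimodal misfit. The objective function, cooling schedule, and all parameter values are purely illustrative assumptions, not taken from any particular inversion code.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0,
                        cooling=0.95, n_iter=2000, seed=0):
    """Minimal simulated-annealing sketch for a 1-D misfit functional."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f

# A multimodal toy misfit: quadratic bowl plus oscillations, minimum near x = 2.
misfit = lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(8.0 * x)
x_opt, f_opt = simulated_annealing(misfit, x0=-3.0)
```

The early high-temperature phase lets the walker wander across local basins; as the temperature decays, the search becomes essentially greedy and settles near the deepest basin it has reached.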
Recently, more publications on stochastic inversion of potential field data have begun to appear. Conventional prism-based inversion usually produces diffuse images, even though techniques such as focusing inversion [27] can be used to enforce a sharp boundary between different geological units. In petroleum reconnaissance using the gravity method, it is of great importance to estimate the depth to basement. However, from conventional prism-based inversion it is difficult to pick the correct location of this interface, due to the non-uniqueness of the inversion and the low resolution of the inverted density distribution.
In such environments, the density contrast between sedimentary rocks and basement is usually well known from other information, such as drilling.
The gravity anomaly can then be attributed to the variation of the sediment-basement interface. Based on the method introduced in Section 2, this type of model can be simulated using the column discretization or the Cauchy-type integral approach. In this subsection, we mainly focus on inversion for the sediment-basement interface using the 3D Cauchy-type integral approach.
Within the framework of this approach, we formulate the inversion with respect to the depth to basement, and the density contrast value need not be a constant. Similar to prism-based inversion, we can formulate the inverse problem using the parametric functional introduced in Eq. However, the model parameters now become as follows [19]: Compared with prism-based inversion for the density distribution, the forward modeling operator A for density contrast surface inversion, which is related to the 3D analog of the Cauchy-type integral, is a nonlinear operator, since the gravity data do not have a simple relationship with the depth.
Such an inversion can be solved with gradient-type methods. The sensitivity matrix with respect to the depth to basement and the density contrast values can be calculated by directly differentiating Eq. In some applications, the density contrast values are well known from well logging. In this circumstance, only the depth-to-basement values need to be inverted, and the non-uniqueness of the inversion can be reduced significantly.
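A gradient-type scheme of this kind can be sketched generically: a sensitivity (Jacobian) matrix is built by numerical differentiation of the forward operator, and damped normal equations update the model. The toy forward operator below is a hypothetical stand-in for the nonlinear dependence of gravity data on depth; it is NOT the actual Cauchy-type integral operator.

```python
import numpy as np

def finite_diff_jacobian(forward, m, eps=1e-6):
    """Sensitivity (Jacobian) matrix by forward finite differences."""
    d0 = forward(m)
    J = np.zeros((d0.size, m.size))
    for j in range(m.size):
        mp = m.copy()
        mp[j] += eps
        J[:, j] = (forward(mp) - d0) / eps
    return J

def gauss_newton(forward, d_obs, m0, n_iter=25, damping=1e-3):
    """Damped Gauss-Newton iteration for a nonlinear forward operator."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        r = forward(m) - d_obs                  # data residual
        J = finite_diff_jacobian(forward, m)    # sensitivity matrix
        A = J.T @ J + damping * np.eye(m.size)  # damped normal equations
        m -= np.linalg.solve(A, J.T @ r)
    return m

# Hypothetical toy operator: two data values depending nonlinearly on
# the depths h of two basement columns.
def toy_forward(h):
    return np.array([np.sum(h / (1.0 + h)), np.sum(h ** 2)])

h_true = np.array([1.0, 2.0])
d_obs = toy_forward(h_true)
h_inv = gauss_newton(toy_forward, d_obs, m0=np.array([0.5, 1.5]))
```

The damping term stabilizes the normal equations when the sensitivity matrix is poorly conditioned, at the cost of slightly shorter steps.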
As an example, we consider a 3D sedimentary interface model with the vertical section shown in Figure 6. We consider a constant density value for the basement rock. To be realistic, we assume that the sediment density increases exponentially with depth due to compaction. As a result, the density contrast decreases exponentially with depth as the sediment density approaches the (constant) basement density.
For this model, we consider that the density contrast profile with depth is already known and invert only for the depth to basement. Figure 7 shows a comparison between the true model and the model recovered using the Cauchy-type integral approach.
One can see that the depth to basement and the shape of the sedimentary basin are well recovered by the inversion. Figure 7. Inversion result for the synthetic sediment-basement interface model with exponential density contrast. During the inversion, we use a flat surface as both the initial model and the a priori model. In fact, the inversion method is robust and does not depend strongly on the choice of initial and a priori models. However, one can use a different initial model in order to speed up the convergence of the inversion.
The famous Bouguer slab formula [28] can be used to construct an initial model: h ≈ Δg / (2πGΔρ). As one may note, this formula works properly for a constant or linear density contrast, but it does not provide a good approximation of the depth to basement in the general case, e.g., for an exponential density contrast. In this subsection, we will briefly introduce some other newly developed techniques for gravity inversion.
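The slab-based initial model takes only a few lines to compute. In the sketch below, G is the gravitational constant; the anomaly and contrast values are illustrative, not from any particular survey.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_depth(dg, drho):
    """Initial depth-to-basement estimate from the Bouguer slab formula.

    dg   : residual gravity anomaly at each station, in m/s^2
    drho : constant density contrast magnitude, in kg/m^3

    The infinite-slab relation dg = 2*pi*G*drho*h is inverted for h.
    """
    return np.asarray(dg) / (2.0 * np.pi * G * drho)

# A 1 km thick slab with a 200 kg/m^3 contrast magnitude produces an
# anomaly of 2*pi*G*200*1000, roughly 8.4e-5 m/s^2 (about 8.4 mGal).
dg = 2.0 * np.pi * G * 200.0 * 1000.0
h0 = bouguer_slab_depth(dg, 200.0)
```

Because the slab relation is linear in h, this estimate is exact only for a laterally uniform, constant-contrast slab; elsewhere it serves purely as a starting guess for the iterative inversion.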
These new methods include, but are not limited to, binary inversion, multinary inversion, and joint inversion approaches. In some applications of gravity imaging, such as subsurface tunnel detection and salt structure imaging, the density range of the target and the density of the host rock are usually well known or well constrained [29].
However, conventional inversion in this case will still produce a diffuse image, with the density spread over a continuous range of values. In reality, the inverted density values should cluster near the density of the host rock and the density of the target. In the case of a model with two distinct density values, the continuous inversion parameters m in the original model space can be transformed into a new binary space for inversion [30].
Zhdanov and Cox [29] introduced a multinary inversion approach for geological models with more than two density values for different geological units. Several functions, such as the Heaviside function and the Gaussian function, can be used for the multinary transformation [31]. Within the framework of the multinary approach, the inverted model parameter values are forced into the vicinities of the preselected values. As we know, each geophysical method and data set is sensitive only to a specific model parameter. Due to the inherent non-uniqueness of geophysical data inversion, the model parameters recovered from an individual inversion can be ambiguous.
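To illustrate the idea of a multinary transformation, the sketch below builds one from logistic (smooth Heaviside) steps, so that the output clusters at preselected density levels. The functional form and the level values are assumptions for illustration; they are not the exact transform of [29, 31].

```python
import numpy as np

def multinary_transform(u, levels, sharpness=0.05):
    """Cluster a continuous parameter u at preselected density levels.

    Built from logistic (smooth Heaviside) steps centered midway
    between consecutive levels; larger `sharpness` gives harder
    clustering toward the levels. Illustrative sketch only.
    """
    u = np.asarray(u, dtype=float)
    levels = np.sort(np.asarray(levels, dtype=float))
    out = np.full_like(u, levels[0])
    for lo, hi in zip(levels[:-1], levels[1:]):
        center = 0.5 * (lo + hi)
        # Each sigmoid adds the jump between consecutive levels.
        out += (hi - lo) / (1.0 + np.exp(-sharpness * (u - center)))
    return out

# Hypothetical unit densities (kg/m^3) for three geological units.
levels = [2000.0, 2400.0, 2900.0]
m = multinary_transform(np.array([1900.0, 2400.0, 3100.0]), levels)
```

Because the transform is smooth, its derivative exists everywhere, so it can be composed with a gradient-type inversion while still pushing the recovered values toward the preselected densities.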
Such ambiguity can be reduced by incorporating more a priori information into the inversion. Different geophysical models can be coupled with each other either directly or structurally. For example, density and seismic velocity can be related to each other by empirical equations.
Alternatively, a density model can be related to a velocity model by assuming structural similarity. As a result, it is possible to enforce coupling between different model parameters by inverting the different geophysical data sets simultaneously.
The structural-similarity-based joint inversion can be achieved by minimizing the cross gradients between different model parameters [32, 33, 34]. Within the framework of this approach, the structural similarity between model parameters m1 and m2 can be measured by the cross gradient, defined as follows [32]: t(r) = ∇m1(r) × ∇m2(r). This regularization term is minimized together with the data misfit functional in the joint inversion approach. Zhdanov [31] proposed a new and more flexible joint inversion approach based on a Gramian constraint.
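The cross-gradient measure can be sketched on a 2-D grid as follows; the grid, spacings, and models are illustrative. For structurally identical models (one a scaled copy of the other) the cross gradient vanishes identically.

```python
import numpy as np

def cross_gradient_2d(m1, m2, dx=1.0, dz=1.0):
    """Cross gradient t = grad(m1) x grad(m2) on a 2-D section.

    In 2-D only the out-of-plane component survives:
    t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx).
    Grid spacings dx, dz are illustrative defaults.
    """
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

# A circular density anomaly, and a velocity model that is a scaled
# copy of it: their gradients are parallel everywhere, so t = 0.
z, x = np.mgrid[0:20, 0:20]
m_density = np.where((z - 10) ** 2 + (x - 10) ** 2 < 25, 1.0, 0.0)
m_velocity_similar = 3.0 * m_density
t_similar = cross_gradient_2d(m_density, m_velocity_similar)
```

Summing t² over the grid gives a scalar penalty that is zero for structurally identical models and positive otherwise, which is what the joint inversion minimizes alongside the data misfit.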
Within the framework of this approach, different model parameters are coupled through the Gramian matrix, which can enforce a direct relationship either between the model parameters themselves or between their spatial gradients. A good property of this joint inversion is that the algorithm only enforces coupling where it actually exists and does not introduce artificial coupling where there is no relationship between the model parameters [31]. The joint inversion formulation can clearly be simplified if there exists a model parameter shared by the different geophysical data sets.
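The core idea behind the Gramian constraint, that the determinant of the Gram matrix of two model vectors vanishes exactly when they are linearly related, can be sketched as follows. This is a simplified illustration of the principle, not Zhdanov's full formulation (which can also couple spatial gradients or transformed parameters).

```python
import numpy as np

def gramian(m1, m2):
    """Gramian determinant of two discretized model parameter vectors.

    G = [[<m1,m1>, <m1,m2>], [<m2,m1>, <m2,m2>]]; det(G) = 0 exactly
    when m1 and m2 are linearly related (Cauchy-Schwarz equality), so
    adding det(G) as a penalty enforces coupling only where it can exist.
    """
    m1 = np.asarray(m1, dtype=float).ravel()
    m2 = np.asarray(m2, dtype=float).ravel()
    g = np.array([[m1 @ m1, m1 @ m2],
                  [m2 @ m1, m2 @ m2]])
    return float(np.linalg.det(g))

# Illustrative model vectors: a density model, a linearly coupled
# velocity model (zero penalty), and an unrelated one (positive penalty).
rho = np.array([0.1, 0.4, 0.2, 0.0])
v_coupled = 2.5 * rho
v_uncoupled = np.array([1.0, 0.0, 0.3, 0.9])
```

Minimizing this determinant during inversion drives the two parameter sets toward a linear relationship, but only to the extent that the data allow it.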
For example, the DC resistivity method and the magnetotelluric (MT) method both invert for the electric conductivity. In this scenario, it is therefore unnecessary to formulate the joint inversion using the approach just discussed. Based on this idea, joint inversion for depth to basement using different geophysical data can be greatly simplified by treating the depth to basement as a model parameter shared by the different data sets, such as gravity and MT data.
In the meantime, each method may also have a private model parameter, such as the density contrast for gravity data and the conductivity contrast for MT data. Here we consider a synthetic sediment-basement interface model [19, 35]. The synthetic gravity and MT data are used to recover the sediment-basement interface. Furthermore, we assume that the density contrast and conductivity contrast values are also unknown.
In this circumstance, the inversion of each individual data set is characterized by strong non-uniqueness. By treating the depth to basement as a model parameter shared between the gravity and MT data during the joint inversion, the recovered model parameters are much closer to their true values than those from the individual inversions, as one can see in Figure 8.
Figure 8. A comparison between separate gravity and MT inversions for depth to basement and the joint inversion result for a synthetic sediment-basement interface model.

In this chapter, we have reviewed the 3D gravity forward modeling and inversion problem. We also introduced other techniques, such as the FFT method and the differential equation method, for fast and efficient solution of the gravity forward modeling problem. For the application of sedimentary basin modeling, we introduced the column discretization method and the more advanced Cauchy-type integral method.
In the second section of this chapter, we introduced the gravity inversion problem, starting from conventional prism-based inversion. Following this, we introduced another application of gravity inversion: locating the density contrast interface. We mainly focused on 3D inversion based on the Cauchy-type integral for recovering the depth information.