
    Robert Wolpert

    A variety of methods have been proposed for inference about extremal dependence in multivariate or spatially indexed stochastic processes and time series. Most of these proceed by first transforming the data to some specific extreme-value marginal distribution, often the unit Fréchet, then fitting a family of max-stable processes to the transformed data and exploring dependence within the framework of that model. The marginal transformation, model selection, and model fitting are all possible sources of misspecification in this approach. We propose an alternative, model-free approach based on the idea that substantial information on the strength of tail dependence and its temporal structure is encoded in the distribution of the waiting times between exceedances of high thresholds at different locations. We propose quantifying the strength of extremal dependence, and assessing uncertainty, using statistics based on these waiting times. The method does not rely on any specific underlying model.
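    The waiting-time idea above can be sketched in a few lines: record the times at which each series exceeds a high threshold, then, for each exceedance at one location, measure the wait until the next exceedance at the other. All names and parameter values here are illustrative assumptions, not the paper's implementation; the example uses a shared-shock construction purely to create tail dependence.

    ```python
    import numpy as np

    def exceedance_times(x, u):
        """Indices at which the series x exceeds the high threshold u."""
        return np.flatnonzero(np.asarray(x) > u)

    def cross_waiting_times(x, y, u):
        """For each exceedance of u in x, the waiting time until the next
        exceedance of u in y (exceedances with no later partner are dropped)."""
        tx, ty = exceedance_times(x, u), exceedance_times(y, u)
        waits = []
        for t in tx:
            later = ty[ty >= t]
            if later.size:
                waits.append(int(later[0] - t))
        return waits

    rng = np.random.default_rng(0)
    z = rng.standard_normal(5000)                 # shared shocks induce tail dependence
    x = 0.9 * z + 0.1 * rng.standard_normal(5000)
    y = 0.9 * z + 0.1 * rng.standard_normal(5000)
    u = np.quantile(x, 0.99)                      # high (99%) threshold
    waits = cross_waiting_times(x, y, u)
    print(np.mean(waits))                         # short mean wait suggests strong extremal dependence
    ```

    Under independence the mean wait would be on the order of the mean gap between exceedances; strong tail dependence pulls it toward zero.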
    Often computer models yield massive output; e.g., a weather model will yield the predicted temperature over a huge grid of points in space and time. Emulation of a computer model is the process of finding an approximation to the computer model that is much faster to run than the model itself (a single run of which can often take hours or days). Most successful emulation approaches are statistical in nature, but these have only rarely attempted to deal with massive computer model output. Approaches that have been tried include multivariate emulators, modeling of the output (e.g., through some basis representation, including PCA), and construction of parallel emulators at each grid point, with the methodology typically based on Gaussian processes. These approaches are reviewed, highlighting the startling computational simplicity with which the last approach can be implemented and illustrating its success.
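    One plausible reading of the "startling computational simplicity" of parallel per-grid-point emulators is that, when the grid points share common inputs and kernel hyperparameters, a single Cholesky factorization of the correlation matrix serves every grid point at once. The sketch below assumes exactly that (shared squared-exponential kernel, fixed length scale); the function names and settings are illustrative, not the reviewed methodology.

    ```python
    import numpy as np

    def gp_emulate_grid(X, Y, Xstar, ell=0.1, nugget=1e-6):
        """Independent GP emulators at every grid point, sharing one kernel.
        X: (n, d) design inputs; Y: (n, m) outputs, one column per grid point;
        Xstar: (q, d) new inputs.  Because the correlation matrix depends only
        on the inputs, one O(n^3) factorization serves all m grid points."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / ell**2)
        K = K = k(X, X) + nugget * np.eye(len(X))
        L = np.linalg.cholesky(K)                            # single shared factorization
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # (n, m) weights, all grid points
        return k(Xstar, X) @ alpha                           # (q, m) predictive means

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, (30, 2))                           # 30 model runs, 2 inputs
    Y = np.column_stack([np.sin(3 * X @ w)                   # toy "massive output": 100 grid points
                         for w in rng.uniform(0, 1, (100, 2))])
    pred = gp_emulate_grid(X, Y, X)                          # predict back at the design points
    print(np.abs(pred - Y).max())                            # near-interpolation at the design
    ```

    The per-grid-point cost after the factorization is just one triangular solve per column, which is what makes the approach scale to huge output grids.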
    Summary: We propose a new method for making inference about an unknown measure Γ(dλ) upon observing some values of the Fredholm integral g(ω) = ∫ k(ω, λ) Γ(dλ) of a known kernel k(ω, λ), using Lévy random fields as Bayesian prior distributions to model uncertainty about Γ(dλ). Inference is based on simulation-based MCMC methods. The method is illustrated with a problem in polymer chemistry.
    Estimating the mortality of birds and bats at wind turbines from periodic carcass counts is challenging because carcasses may be removed by scavengers or missed in investigators' searches, leading to undercounting. Existing mortality estimators intended to correct for this give wildly different estimates when search intervals are short. We introduce a new estimator that includes many existing ones as special cases but extends and improves them in three ways, to reflect phenomena discovered in the field:
    * a removal rate by scavengers that decreases as carcasses age;
    * diminishing proficiency of field technicians in discovering carcasses as they age;
    * the possibility that some (but not all) carcasses arriving in earlier search periods may be discovered in the current period; it is this feature that makes the new estimator "partially periodic".
    Both point estimates and 50% and 90% objective Bayes interval estimates of mortality are provided by the new ACME estimator.
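    The core correction idea behind estimators of this family can be illustrated with a deliberately simplified Horvitz-Thompson-style calculation: divide the observed carcass count by the probability that a carcass both persists until the next search (here, exponential removal at a constant rate, which the ACME estimator specifically generalizes) and is then found. Every function name and parameter value below is an illustrative assumption, not the paper's estimator.

    ```python
    import math

    def detection_probability(search_interval, persistence_mean, searcher_efficiency):
        """Probability that a carcass arriving uniformly within a search interval
        survives (exponential removal with the given mean persistence time) until
        the next search and is then discovered."""
        lam = 1.0 / persistence_mean
        I = search_interval
        p_persist = (1.0 - math.exp(-lam * I)) / (lam * I)   # E[still present at search]
        return p_persist * searcher_efficiency

    def mortality_estimate(carcass_count, search_interval, persistence_mean,
                           searcher_efficiency):
        """Naive corrected estimate: observed count / detection probability."""
        g = detection_probability(search_interval, persistence_mean, searcher_efficiency)
        return carcass_count / g

    # 12 carcasses found, weekly searches, 10-day mean persistence, 80% searcher efficiency
    est = mortality_estimate(12, 7.0, 10.0, 0.8)
    print(est)
    ```

    The three extensions listed above replace each constant here (removal rate, searcher efficiency, single-interval carryover) with an age- and period-dependent version.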
    Summary: We characterize all stationary, time-reversible Markov processes whose finite-dimensional marginal distributions (of all orders) are infinitely divisible (MISTI processes, for short). Aside from two degenerate cases (iid and constant), in both discrete and continuous time every such process with full support is a branching process with Poisson or negative binomial univariate marginal distributions and a specific bivariate distribution at pairs of times. As a corollary, we prove that all nondegenerate stationary integer-valued processes constructed by Markov thinning fail to have infinitely divisible multivariate marginal distributions, except for the Poisson.
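    The Poisson case singled out by the corollary can be simulated directly: in the standard Markov thinning ("Poisson AR(1)") construction, each individual survives one step with probability ρ and Poisson((1 − ρ)μ) immigrants arrive, giving a reversible chain with Poisson(μ) stationary marginals. This is a textbook construction offered as illustration, not code from the paper.

    ```python
    import numpy as np

    def poisson_ar1(mu, rho, n, seed=2):
        """Markov thinning chain: binomial thinning of the previous count plus
        Poisson((1 - rho) * mu) innovations.  Stationary marginal: Poisson(mu)."""
        rng = np.random.default_rng(seed)
        x = np.empty(n, dtype=int)
        x[0] = rng.poisson(mu)                       # start in stationarity
        for t in range(1, n):
            x[t] = rng.binomial(x[t - 1], rho) + rng.poisson((1 - rho) * mu)
        return x

    chain = poisson_ar1(mu=4.0, rho=0.6, n=20000)
    print(chain.mean(), chain.var())                 # both near mu = 4, as a Poisson marginal requires
    ```

    A quick check that mean and variance agree (equidispersion) is consistent with the Poisson marginal; for non-Poisson thinning constructions the corollary says infinite divisibility of the joint laws must fail.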
    A review of Monte Carlo methods for approximating the high-dimensional integrals that arise in Bayesian statistical analysis. Emphasis is on the features of many Bayesian applications which make Monte Carlo methods especially appropriate, and on Monte Carlo variance-reduction techniques especially well suited to Bayesian applications. A generalized logistic regression example is used to illustrate the ideas, and high-precision formulas are given for implementing Bayesian Monte Carlo integration.
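    A minimal instance of the kind of Bayesian Monte Carlo integration reviewed here is importance sampling for a posterior expectation: draw from the prior, weight by the likelihood, and compare against a conjugate closed form. The model and all names below are illustrative assumptions, not the review's example.

    ```python
    import numpy as np

    def posterior_mean_is(y, n_draws=200_000, seed=3):
        """Importance-sampling estimate of E[theta | y] for the conjugate model
        theta ~ N(0, 1), y_i | theta ~ N(theta, 1): sample theta from the prior,
        weight each draw by its likelihood."""
        rng = np.random.default_rng(seed)
        theta = rng.standard_normal(n_draws)              # draws from the prior
        loglik = -0.5 * ((y[:, None] - theta[None, :]) ** 2).sum(0)
        w = np.exp(loglik - loglik.max())                 # stabilized importance weights
        return (w * theta).sum() / w.sum()

    y = np.array([0.8, 1.3, 0.5, 1.1])
    est = posterior_mean_is(y)
    exact = y.sum() / (len(y) + 1)                        # conjugate closed-form posterior mean
    print(est, exact)
    ```

    Subtracting the maximum log-likelihood before exponentiating is the basic numerical-stability step; the variance-reduction techniques surveyed in the review refine the choice of sampling distribution beyond the prior.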
    Fecal indicator bacteria (FIB) are commonly used to assess the threat of pathogen contamination in coastal and inland waters. Unlike most measures of pollutant levels, however, FIB concentration metrics, such as most probable number (MPN) and colony-forming units (CFU), are not direct measures of the true in situ concentration distribution. Therefore, there is the potential for inconsistencies among model- and sample-based water quality assessments, such as those used in the Total Maximum Daily Load (TMDL) program. To address this problem, we present an innovative approach to assessing pathogen contamination based on water quality standards that impose limits on parameters of the actual underlying FIB concentration distribution, rather than on MPN or CFU values. Such concentration-based standards link more explicitly to human health considerations, are independent of the analytical procedures employed, and are consistent with the outcomes of most predictive water quality models. We demonstrate how compliance with concentration-based standards can be inferred from traditional MPN values using a Bayesian inference procedure. This methodology, applicable to a wide range of FIB-based water quality assessments, is illustrated here using fecal coliform data from shellfish harvesting waters in the Newport River Estuary, North Carolina. Results indicate that areas determined to be compliant according to the current methods-based standards may actually have an unacceptably high probability of being in violation of concentration-based standards.
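    The Bayesian step, inferring the underlying concentration from dilution-tube outcomes, can be sketched with a simple grid posterior: in a serial-dilution MPN assay, a tube receiving volume v is positive with probability 1 − exp(−c·v), so the tube counts give a likelihood for the true concentration c. This is a generic textbook MPN likelihood with a flat prior on log c, offered as a sketch; the paper's full model, priors, and data are not reproduced here.

    ```python
    import numpy as np

    def concentration_posterior(volumes, n_tubes, n_positive, grid):
        """Normalized grid posterior for the true FIB concentration c (organisms
        per mL) from a serial-dilution assay: a tube of volume v is positive with
        probability 1 - exp(-c * v).  Log-spaced grid ~ flat prior on log c."""
        logpost = np.zeros_like(grid)
        for v, n, k in zip(volumes, n_tubes, n_positive):
            p = np.clip(1.0 - np.exp(-grid * v), 1e-12, 1 - 1e-12)
            logpost += k * np.log(p) + (n - k) * np.log(1.0 - p)
        post = np.exp(logpost - logpost.max())
        return post / post.sum()

    grid = np.logspace(-2, 2, 2000)                 # candidate concentrations per mL
    # 5 tubes each at 10, 1, and 0.1 mL; 5, 3, and 0 positives respectively
    post = concentration_posterior([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 0], grid)
    c_hat = grid[np.argmax(post)]                   # posterior mode
    p_exceed = post[grid > 1.0].sum()               # P(c > 1 per mL | data)
    print(c_hat, p_exceed)
    ```

    The quantity `p_exceed` is exactly the kind of output a concentration-based standard needs: the probability that the true concentration, not the MPN statistic, violates the limit.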
    In this paper, we introduce a class of quite general Lévy processes, with both a diffusion part and a pure-jump component, as a prior distribution for log prices and volatilities in stochastic volatility models. This extends the work of Duffie et al. (2000), who model the jump part of the process as a compound Poisson process. Besides using a general Lévy process, we also allow dependence in the jumps of both (log) prices and volatility. We show how to do option pricing using the change of measure required by the First Fundamental Theorem of Asset Pricing (see Delbaen and Schachermayer (1994)), and we present other models in the literature as particular cases of our model. Finally, we outline a method for hedging European call options on assets driven by infinitely divisible vector processes.
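    The compound-Poisson special case that this work generalizes can be simulated directly: Brownian log-returns plus Poisson-arriving, normally distributed jumps (a Merton-style jump diffusion). The sketch below prices a call by plain Monte Carlo under the simulated measure, with no jump compensation, so it is an illustration of the path dynamics rather than an arbitrage-free price; all names and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def jump_diffusion_paths(s0, mu, sigma, jump_rate, jump_mu, jump_sigma,
                             T, n_steps, n_paths, seed=4):
        """Terminal prices of a jump diffusion: Brownian motion in the log price
        plus a compound Poisson process with N(jump_mu, jump_sigma^2) jumps."""
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        logS = np.full(n_paths, np.log(s0))
        for _ in range(n_steps):
            n_jumps = rng.poisson(jump_rate * dt, n_paths)
            jumps = rng.normal(jump_mu * n_jumps,                  # sum of n_jumps normal jumps
                               jump_sigma * np.sqrt(n_jumps))
            logS += (mu - 0.5 * sigma**2) * dt \
                    + sigma * np.sqrt(dt) * rng.standard_normal(n_paths) + jumps
        return np.exp(logS)

    S_T = jump_diffusion_paths(100.0, 0.05, 0.2, 0.5, -0.05, 0.1,
                               T=1.0, n_steps=252, n_paths=20000)
    call = np.exp(-0.05) * np.maximum(S_T - 100.0, 0).mean()       # discounted MC payoff
    print(call)
    ```

    Replacing the compound Poisson component with a general Lévy jump measure, and letting price and volatility jump dependently, is precisely the extension the abstract describes.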

    And 166 more