Department of Mathematical Sciences

University of Nevada, Las Vegas


Statistics Colloquium/Seminar Series

2008-2009

 

[2006-2008] [2009-2010] [2010-2011]

 

For more information, contact the Colloquium/Seminar Coordinator, Dr. Hokwon Cho

(For Mathematics Department colloquia and seminars, see the Math Dept Seminar page.)

 

Fall 2008

 

·         Friday 3:30 p.m. September 12, CBC C-224: Dr. Ashis SenGupta

Department of Statistics, University of California, Riverside and Applied Statistics Unit, Indian Statistical Institute, Kolkata, India
Title: Directional Data on 3-smooth Manifolds: Probability Models, Independence and Regression Analyses

Abstract: Observations on angular propagations, directional orientations, and even strictly periodic phenomena can be cast in the arena of directional data (DD). Such observations are frequently encountered in almost every sphere of applied science, ranging from agriculture to zoology and from chronotherapy to defence. There has been a paucity of probability distributions to model DD, even on 3-smooth manifolds such as the torus and the cylinder, and hence on their higher-dimensional generalizations. We present here unified approaches to the derivation of such distributions. Tests for orthogonality of the directional random variables are then obtained based on these distributions. Next, models for regression with linear and circular variables are presented and related inference procedures are developed. Both classical and Bayesian approaches are discussed. Finally, the proposed methods are illustrated with several real-life examples.
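
The talk's specific models are not spelled out in the abstract; as a generic, hedged illustration of regression with a linear response and a circular covariate, the Python sketch below fits a first-harmonic (cosine-sine) regression, a standard textbook device for circular predictors. All names and parameter values here are invented for illustration and are not taken from the talk.

    # Generic linear-circular regression sketch (illustration only, not from the talk):
    # a linear response is regressed on a circular covariate by expanding the angle
    # into first-harmonic cos/sin terms and fitting by least squares.
    import numpy as np

    rng = np.random.default_rng(0)

    n = 200
    theta = rng.uniform(0, 2 * np.pi, n)                          # circular covariate
    y = 3.0 + 1.5 * np.cos(theta - 0.8) + rng.normal(0, 0.5, n)   # linear response

    X = np.column_stack([np.ones(n), np.cos(theta), np.sin(theta)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    b0, bc, bs = beta
    amplitude = np.hypot(bc, bs)     # should recover about 1.5
    phase = np.arctan2(bs, bc)       # should recover about 0.8 rad
    print(f"intercept={b0:.2f}, amplitude={amplitude:.2f}, phase={phase:.2f} rad")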

 

·         Friday 11:30 a.m. October 3, CBC C-224: Dr. Ben Kedem

Department of Statistics, University of Maryland, College Park

Title: Bayesian Spatial Prediction

Abstract: We discuss Bayesian spatial/temporal prediction in transformed Gaussian random fields where the transformation belongs to a parametric family. Monte Carlo integration is used in the approximation of the predictive density function, which is easy to implement in this framework. The BTG software for the implementation of the method will be discussed by means of spatial and temporal examples. As a byproduct, we provide a Bayesian way to tackle the distribution problem of average rainfall rate.
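
As a heavily simplified illustration of the idea (not the BTG software itself): with a Box-Cox family of transformations, the predictive density at a new site can be approximated by averaging Gaussian kriging densities over the transformation parameter. The sketch below uses a discrete grid over the Box-Cox parameter in place of Monte Carlo draws and holds the mean and covariance parameters fixed at arbitrary values; all data and settings are invented purely to show the mechanics.

    # Simplified Bayesian prediction in a transformed Gaussian random field:
    # y = g_lambda(z) is modeled as Gaussian with a fixed exponential covariance,
    # and the predictive density at a new site is a mixture of back-transformed
    # Gaussian kriging densities, weighted by a discrete posterior over lambda.
    import numpy as np

    rng = np.random.default_rng(1)

    def boxcox(z, lam):
        return np.log(z) if lam == 0 else (z**lam - 1.0) / lam

    def boxcox_jac(z, lam):                    # |d g_lambda/dz| = z**(lambda - 1)
        return z**(lam - 1.0)

    n = 30
    sites = rng.uniform(0, 1, (n, 2))          # toy observation sites
    z = rng.lognormal(mean=1.0, sigma=0.5, size=n)
    s0 = np.array([0.5, 0.5])                  # prediction site

    def exp_cov(d, sill=1.0, rng_par=0.3):     # fixed exponential covariance
        return sill * np.exp(-d / rng_par)

    D = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
    K = exp_cov(D) + 1e-6 * np.eye(n)
    k0 = exp_cov(np.linalg.norm(sites - s0, axis=1))

    lam_grid = np.linspace(0.0, 1.0, 21)       # discrete support for lambda
    logw, preds = [], []
    for lam in lam_grid:
        y = boxcox(z, lam)
        mu = y.mean()                          # crude plug-in mean on transformed scale
        r = y - mu
        Kinv_r = np.linalg.solve(K, r)
        _, logdet = np.linalg.slogdet(K)
        # Gaussian log-likelihood of the transformed data plus the log Jacobian
        logw.append(-0.5 * (logdet + r @ Kinv_r) + np.log(boxcox_jac(z, lam)).sum())
        m0 = mu + k0 @ Kinv_r                  # simple-kriging conditional mean
        v0 = exp_cov(0.0) - k0 @ np.linalg.solve(K, k0)
        preds.append((m0, max(v0, 1e-9)))

    logw = np.array(logw)
    w = np.exp(logw - logw.max()); w /= w.sum()   # flat prior over the lambda grid

    def predictive_density(z0):
        dens = 0.0
        for wi, lam, (m0, v0) in zip(w, lam_grid, preds):
            y0 = boxcox(z0, lam)
            dens += wi * np.exp(-0.5 * (y0 - m0)**2 / v0) / np.sqrt(2 * np.pi * v0) \
                    * boxcox_jac(z0, lam)
        return dens

    zgrid = np.linspace(0.5, 10.0, 200)
    dvals = np.array([predictive_density(g) for g in zgrid])
    print("predictive density at s0 peaks near z =", round(float(zgrid[dvals.argmax()]), 2))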

 

·         Friday 11:30 a.m. October 10, CBC C-224: Dr. Lurdes Inoue

Department of Biostatistics, University of Washington, Seattle
Title: Modeling Disease Progression

Abstract: In this talk we discuss some modeling approaches to investigate disease progression.  First, we propose a model that links longitudinal biomarker and disease progression. Specifically, we consider an underlying latent disease process that describes the onset of the disease and models the transition to an advanced stage of the disease as dependent on the biomarker levels. Next, we propose a variation of the above model to investigate disease progression using data prospectively collected in a screening study. We illustrate our methods through simulations and a case study in prostate cancer.

 

·         Friday 11:30 a.m. November 7, CBC C-224: Dr. Glen Meeden

Department of Statistics, University of Minnesota, Twin Cities

Title: A Noninformative Bayesian Approach to Finite Population Sampling Using Auxiliary Variables

Abstract: In finite population sampling, prior information is often available in the form of partial knowledge about an auxiliary variable; for example, its mean may be known. In such cases, the ratio estimator and the regression estimator are often used for estimating the population mean of the characteristic of interest. The Polya posterior has been developed as a noninformative Bayesian approach to survey sampling. It is appropriate when little or no prior information about the population is available. Here we show that it can be extended to incorporate such types of partial prior information about auxiliary variables. We will see that it typically yields procedures with good frequentist properties, even in some problems where standard frequentist methods are difficult to apply. Moreover, one does not need to select a model which explicitly relates the characteristic of interest to the auxiliary variables.
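
The basic (unconstrained) Polya posterior can be simulated with a simple urn scheme. As a rough sketch, assuming a simple random sample and ignoring the auxiliary-variable constraints that are the subject of the talk, the Python below completes the population by Polya urn draws and summarizes the simulated population means.

    # Unconstrained Polya posterior sketch: given a sample of size n from a finite
    # population of size N, the N - n unsampled units are imputed by repeatedly
    # drawing a value at random from the current urn and returning it with a copy.
    # (The extension using auxiliary information is not implemented here.)
    import numpy as np

    rng = np.random.default_rng(2)

    def polya_complete(sample, N):
        """Return one simulated completed population of size N."""
        urn = list(sample)
        for _ in range(N - len(sample)):
            urn.append(urn[rng.integers(len(urn))])
        return np.array(urn)

    sample = rng.gamma(shape=2.0, scale=25.0, size=50)   # toy sample, n = 50
    N = 1000                                             # toy population size

    sim_means = np.array([polya_complete(sample, N).mean() for _ in range(2000)])
    print("sample mean:          ", sample.mean().round(2))
    print("Polya posterior mean: ", sim_means.mean().round(2))
    print("95% credible interval:", np.percentile(sim_means, [2.5, 97.5]).round(2))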

 

·         Friday 11:30 a.m. November 21, CBC C-224: Dr. N. Balakrishnan

Department of Statistics, McMaster University

Title: Over/Under-Dispersed Poisson Distributions and Processes

Abstract: In this talk, I will establish several connections of the Poisson weight function to overdispersion and underdispersion. Specifically, I will show that the logconvexity (logconcavity) of the mean weight function is a necessary and sufficient condition for overdispersion (underdispersion) when the Poisson weight function does not depend on the original Poisson parameter. I will also discuss some properties of the weighted Poisson distributions (WPDs). I will then introduce a notion of pointwise duality between two WPDs and discuss some associated properties. Next, after presenting some illustrative examples and providing a discussion on various Poisson weight functions used in practice, I will make some concluding remarks. Finally, I will use these results to introduce and discuss over/under-dispersed Poisson processes.
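
A quick numerical check of the dispersion dichotomy (an illustration on a truncated support, not the talk's proof): for a weight function that does not depend on the Poisson parameter, a log-convex weight should give variance greater than the mean and a log-concave weight variance smaller than the mean. The two weight functions below are arbitrary illustrative choices.

    # Weighted Poisson pmf p_w(k) proportional to w(k) * theta**k / k!
    # (the exp(-theta) factor cancels in the normalization).
    # w(k) = 1/(k+1) is log-convex  -> expect overdispersion (var > mean);
    # w(k) = k + 1   is log-concave -> expect underdispersion (var < mean).
    import numpy as np

    def wpd_moments(weight, theta, kmax=200):
        k = np.arange(kmax + 1)
        logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, kmax + 1)))))
        logp = np.log(weight(k)) + k * np.log(theta) - logfact
        p = np.exp(logp - logp.max())
        p /= p.sum()                          # normalize the truncated pmf
        mean = (k * p).sum()
        var = ((k - mean) ** 2 * p).sum()
        return mean, var

    theta = 2.0
    for name, w in [("log-convex  w(k)=1/(k+1)", lambda k: 1.0 / (k + 1)),
                    ("log-concave w(k)=k+1    ", lambda k: k + 1.0)]:
        mean, var = wpd_moments(w, theta)
        print(f"{name}: mean={mean:.3f}, var={var:.3f}, var/mean={var/mean:.3f}")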

 

Spring 2009

 

·         Fri. 2:30 p.m. January 23, CBC C-224:  Dr. Abel Rodriguez

Department of Applied Mathematics and Statistics, University of California, Santa Cruz
Title: Multilevel Functional Clustering and the Nested Dirichlet Process

Abstract: This talk discusses clustering procedures for nested samples of curves, where multiple profiles are collected for each subject in the study. We start by considering the application of standard functional clustering tools to this problem, which leads to groupings based on the average profile for each subject. After discussing some of the shortcomings of this approach, we present a model based on a generalization of the nested Dirichlet process that uses the information on the distribution of curves to generate the clusters. The method is illustrated using data from the Early Pregnancy Study on hormone profiles across multiple menstrual cycles for a cohort of women. The resulting model simultaneously clusters both curves and subjects, allowing us to identify outlier curves for each group of women, as well as outlying women whose distribution of profiles differs from the rest.

 

·         Fri. 1:00 p.m. February 6, CBC C-224:  Dr. Grace Chiu

Department of Statistics and Actuarial Science, University of Waterloo
Title: Gauging Ecosystem Health with Latent Health Factor Models

Abstract: We propose a model-based approach for constructing ecological health indices through statistical inference. Our latent health factor index (LHFI) is obtained by estimating an unobservable health factor term in a mixed-effects ANOCOVA that directly models the relationship among indicator variables (or metrics) and health. Unlike  conventional indices (e.g. IBI and O/E index) that rely on domain-specific calibrations of metrics against reference conditions whose non-constancy is largely unaccounted for, our methodology (a) involves no explicit reference conditions while metrics are intrinsically "calibrated" in the context of multiple comparisons, and (b) can naturally incorporate spatio-temporal influences on calibration schemes.

 

·         Fri. 11:30 a.m. February 13, CBC C-224:  Dr. Barry Arnold

Department of Statistics, University of California, Riverside
Title: Some models involving hidden truncation in non-Gaussian settings

Abstract: The Azzalini skew-normal density of the form 2φ(x)Φ(λx) can be viewed as having arisen by considering a bivariate random variable (X,Y) with a classical bivariate normal density and focussing on the conditional distribution of X given Y < E(Y). The same family of distributions is encountered if we consider the conditional distribution of X given Y > E(Y). A slightly more general family is provided by considering the conditional distribution of X given Y > y0, where y0 is not necessarily equal to E(Y). The resulting model (which we can call a hidden truncation model, since we only observe X if the unobserved or hidden variable Y exceeds a threshold value) is a flexible extension of the classical univariate normal model with potential to fit a broad spectrum of data configurations which may not be well fitted by a classical normal model. In the present paper we consider several other basic bivariate non-Gaussian models and investigate the nature of their corresponding hidden truncation models. In particular, it is of interest to identify situations in which hidden truncation fails to augment the basic model. Additive component representations provide an alternative to the hidden truncation paradigm in the normal case. It is conjectured that it is only in the normal case that the two models coincide.
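
A small simulation of the hidden truncation construction in the Gaussian case (an illustration, not material from the talk): if (X,Y) is standard bivariate normal with correlation ρ and we keep X only when Y exceeds E(Y) = 0, the retained values follow the skew-normal density 2φ(x)Φ(λx) with λ = ρ/√(1-ρ²), whose mean is ρ√(2/π).

    # Hidden truncation in the bivariate normal case: observe X only when the
    # hidden variable Y exceeds its mean; the result is Azzalini's skew-normal
    # with lambda = rho / sqrt(1 - rho**2) and mean rho * sqrt(2/pi).
    import numpy as np

    rng = np.random.default_rng(3)

    rho = 0.7
    n = 200_000
    xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    x_sel = xy[xy[:, 1] > 0.0, 0]            # keep X only when hidden Y > E(Y) = 0

    lam = rho / np.sqrt(1 - rho**2)
    print("skewness parameter lambda:      ", round(lam, 3))
    print("simulated mean of X | Y > 0:    ", round(float(x_sel.mean()), 3))
    print("skew-normal mean rho*sqrt(2/pi):", round(rho * np.sqrt(2 / np.pi), 3))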

 

·         Fri. 11:30 a.m. February 27, CBC C-224:  Dr. Kevin Quinn

Department of Government & Institute for Quantitative Social Science, Harvard University
Title: Measuring Explicit Political Positions of Media

Abstract: We amass a new, large-scale dataset of newspaper editorials that allows us to calculate fine-grained measures of the political positions of newspaper editorial pages. Collecting and classifying over 1500 editorials adopted by 25 major US newspapers on 495 Supreme Court cases from 1994 to 2004, we apply an item response theoretic approach to place newspaper editorial boards on a substantively meaningful—and long validated—scale of political preferences. We validate the measures, show how they can be used to shed light on the permeability of the wall between news and editorial desks, and argue that the general strategy we employ has great potential for more widespread use.

 

·         Fri. 11:30 a.m. April 24, CBC C-224:  Dr. Charles Davis

President, Environmetrics and Statistics Ltd
Title: A Model for Measurements of Lognormally Distributed Environmental Contaminants

Abstract: Lognormal (LN) distributions are often assumed for environmental contaminants, with perhaps some justification. But decisions are made from measurements, not the unobservable concentrations themselves, and these measurements often do not have LN distributions. Rather, at fixed concentrations the measurements are often normally distributed, and if low-level measurements are unbiased, one obtains negative values; standard LN inference techniques fail in this setting. This reality is universally ignored; measurement values are censored at a Reporting Limit, the negative values are never seen, and we continue to develop (and publish) methods for left-censored LN environmental data.
    A mixture model for such data is presented. The motivating application involves Upper Tolerance Limits (UTLs = upper confidence limits for upper percentiles) which arise in facility surveys for worker protection. We are dealing with ICP-AES measurements for beryllium surface contamination, and have obtained large quantities of uncensored data. We discuss the model and its five physically meaningful parameters in terms of the measurement process. We show that conventional censored-data LN methods provide conservative UTLs that, paradoxically, become more (not less) conservative as the RL decreases. We pay some attention to maximum likelihood estimation using uncensored data, and then present attractive alternate approaches.
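
A toy simulation of the measurement model described above (all parameter values invented for illustration): true concentrations are lognormal, each measurement adds approximately normal instrument error, so some low-level measurements come out negative and the measured values are no longer lognormal; censoring at a reporting limit is exactly what hides those values.

    # Toy version of the measurement model: lognormal true concentrations plus
    # unbiased normal instrument error.  Negative measurements occur at low
    # levels and would be hidden by censoring at a reporting limit (RL).
    import numpy as np

    rng = np.random.default_rng(4)

    n = 100_000
    true_conc = rng.lognormal(mean=-1.0, sigma=1.0, size=n)    # true contamination levels
    meas_sd = 0.15                                             # low-level instrument error
    measured = true_conc + rng.normal(0.0, meas_sd, size=n)    # unbiased, noisy measurements

    RL = 0.25                                                  # hypothetical reporting limit
    print("fraction of negative measurements:", round(float(np.mean(measured < 0)), 4))
    print("fraction below the RL:            ", round(float(np.mean(measured < RL)), 4))
    print("95th percentile, true vs measured:",
          round(float(np.percentile(true_conc, 95)), 3),
          round(float(np.percentile(measured, 95)), 3))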

 


 
