For more information, contact the Colloquium/Seminar Coordinator, Dr. Hokwon Cho
[Abstract] Mapping of small-area mortality risks is a widely used technique in public health and in other areas of statistical application. The commonly used measure of risk, the standardized mortality ratio, is not reliable due to its high variability in areas with low population. Advanced statistical techniques, such as hierarchical modeling, are commonly used to overcome this issue. However, the spatially correlated structures often pose challenges for their implementation and inference. In this talk we will discuss the statistical issues from frequentist and Bayesian perspectives and will offer some new ways of solving the problem.
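As an illustration of the instability the abstract describes (all numbers below are hypothetical, not from the talk), the following sketch simulates standardized mortality ratios, SMR = observed/expected deaths, for a small and a large area:

```python
import numpy as np

# Hypothetical illustration: SMR = observed / expected deaths.
# With a true relative risk of 1, small expected counts make the
# SMR wildly variable, while large expected counts stabilize it.
rng = np.random.default_rng(0)

for expected in (2.0, 200.0):          # small area vs. large area
    observed = rng.poisson(lam=expected, size=10_000)
    smr = observed / expected
    print(f"E={expected:6.1f}  SMR sd={smr.std():.3f}  "
          f"range=({smr.min():.2f}, {smr.max():.2f})")
# The small-area SMRs scatter far from 1; hierarchical models shrink
# them toward a pooled estimate to stabilize the map.
```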
[Abstract] In clinical trials it is quite common to test more than one dose for efficacy against a placebo or an active control arm. Regulatory agencies such as the Food and Drug Administration insist that if multiple comparisons are to be made in a clinical trial, then the family-wise error rate should be controlled, typically at the 5% level. The closed testing procedure developed by Marcus, Peritz and Gabriel has been the mathematical foundation for multiple testing procedures. In general, we need to consider all possible intersections of the null hypotheses of interest. A hypothesis is rejected if its associated test and all tests associated with hypotheses implying it are significant. We shall explore implications of this procedure to generate interesting tests that control the family-wise error rate at a given level.
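The closed testing principle described above can be sketched in a few lines. The hypothetical example below uses a Bonferroni test for each intersection hypothesis (one standard choice of local test, which reduces to Holm's procedure); the talk may of course consider other local tests:

```python
from itertools import combinations

def closed_test(pvals, alpha=0.05):
    """Closed testing with a Bonferroni test for each intersection
    hypothesis: reject the intersection over a set S of nulls when
    min(p_i for i in S) <= alpha / |S|.  An elementary hypothesis
    H_i is rejected only if every intersection containing it is
    rejected, which controls the family-wise error rate at alpha."""
    m = len(pvals)
    reject = []
    for i in range(m):
        ok = True
        for k in range(1, m + 1):
            for S in combinations(range(m), k):
                if i in S and min(pvals[j] for j in S) > alpha / len(S):
                    ok = False
                    break
            if not ok:
                break
        reject.append(ok)
    return reject

# Hypothetical p-values for three dose-vs-placebo comparisons.
print(closed_test([0.011, 0.021, 0.30]))  # -> [True, True, False]
```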
University of Nevada, Las Vegas
[Abstract] We present a sequential method for obtaining approximate confidence limits for the ratio of two independent binomial proportions based on a slightly modified maximum likelihood estimator. Large-sample properties of the proposed sequential estimator are studied.
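Since the sequential modification itself is not detailed in the abstract, the sketch below shows only the classical fixed-sample (Katz) log-transformed interval for the ratio of two binomial proportions, as a point of reference; the estimator and constants are the textbook ones, not the speaker's:

```python
import math

def ratio_ci(x1, n1, x2, n2, z=1.96):
    """Approximate 95% confidence limits for p1/p2 (Katz log method):
    log(phat1/phat2) +/- z * sqrt((1-phat1)/x1 + (1-phat2)/x2).
    A fixed-sample textbook interval, not the sequential procedure
    from the talk."""
    p1, p2 = x1 / n1, x2 / n2
    est = p1 / p2
    se = math.sqrt((1 - p1) / x1 + (1 - p2) / x2)
    return est * math.exp(-z * se), est * math.exp(z * se)

# Hypothetical counts: 30/100 successes vs. 20/100.
print(ratio_ci(30, 100, 20, 100))
```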
[Abstract] Medical research is often interested in finding subgroups within an outlying group. For example, a certain medical condition can be more frequent in a small group that differs from the majority of the population. One approach to finding groups in a data set is cluster analysis. Cluster analysis has been a widely used tool for exploring potential group structure in complex data and has received greater attention in recent years due to data mining and high-dimensional data such as microarrays. In this presentation, I will introduce the split-and-recombine procedure and its application to a medical data set. In addition, analysis results for the same data using other clustering methods will be discussed.
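The split-and-recombine procedure is not described in the abstract, so the following is only a generic clustering illustration: k-means on synthetic data containing a small planted subgroup, with all settings hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

# Generic illustration only: the split-and-recombine procedure from
# the talk is not shown here.  Synthetic data with a small subgroup
# shifted away from the bulk of the "population".
rng = np.random.default_rng(1)
bulk = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
subgroup = rng.normal(loc=4.0, scale=0.5, size=(50, 2))
X = np.vstack([bulk, subgroup])

labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
# The minority cluster should roughly recover the planted subgroup.
print(np.bincount(labels))
```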
[Abstract] One comes across directions as observations in a number of situations. The first inferential question one should answer when dealing with such data is, “Are they isotropic, or uniformly distributed?” The answer to this question goes back in history, which we shall retrace a bit, and we provide an exact and an approximate solution to this so-called “Pearson’s Random Walk” problem.
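For directions on the circle, one classical answer to the uniformity question is the Rayleigh test, whose null distribution is precisely the distance traveled in Pearson's random walk; below is a minimal sketch of its usual large-sample approximation (the talk's exact solution may go further):

```python
import numpy as np

def rayleigh_test(angles):
    """Rayleigh test of uniformity for circular data.  Under
    uniformity, the resultant length of n unit vectors is the
    distance traveled in Pearson's random walk; for large n,
    2*n*Rbar^2 is approximately chi-square with 2 df."""
    n = len(angles)
    C, S = np.cos(angles).sum(), np.sin(angles).sum()
    rbar = np.hypot(C, S) / n
    stat = 2 * n * rbar**2
    pval = np.exp(-stat / 2)          # chi2(2) upper tail
    return stat, pval

rng = np.random.default_rng(2)
print(rayleigh_test(rng.uniform(0, 2 * np.pi, 200)))   # ~ uniform
print(rayleigh_test(rng.vonmises(0.0, 2.0, 200)))      # concentrated
```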
[Abstract] In this talk, parametric fractional imputation is proposed as a frequentist approach to generating imputed values. Using the fractional weights, the E-step of the EM algorithm can be approximated by the weighted mean of the imputed-data likelihood, where the fractional weights are computed from the current value of the parameter estimates. Some computational advantage over the existing methods can be achieved using the idea of importance sampling in the E-step.
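A toy rendering of the mechanics just described may help; everything below (the normal model, the proposal, the constants) is an assumption for illustration, not the speaker's setup:

```python
import numpy as np

# Toy sketch of parametric fractional imputation: y ~ N(mu, 1) with
# some values missing.  Each missing y gets M imputed values drawn
# once from a proposal density; the E-step uses importance-sampling
# ("fractional") weights
#     w_j  propto  f(y_j; mu_current) / f(y_j; mu_proposal),
# so the imputed values never need to be regenerated across EM steps.
rng = np.random.default_rng(3)
y_obs = rng.normal(2.0, 1.0, size=80)            # observed part
n_mis, M = 20, 50                                # missing units, imputations

mu_prop = y_obs.mean()                           # proposal parameter
imp = rng.normal(mu_prop, 1.0, size=(n_mis, M))  # drawn once, reused

mu = 0.0                                         # crude starting value
for _ in range(50):                              # EM iterations
    logw = -0.5 * (imp - mu) ** 2 + 0.5 * (imp - mu_prop) ** 2
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)            # fractional weights
    ey = (w * imp).sum(axis=1)                   # E-step: weighted means
    mu = (y_obs.sum() + ey.sum()) / (len(y_obs) + n_mis)  # M-step
print(mu, y_obs.mean())  # MCAR with no covariates: these roughly agree
```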
[Abstract] This talk will introduce a flexible class of models for relational data based on a hierarchical extension of the two-parameter Poisson-Dirichlet process. The models are motivated by two different applications: 1) A study of cancer mortality rates in the
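The two-parameter Poisson-Dirichlet process underlying these models can be simulated through its Chinese-restaurant scheme; the sketch below draws a single partition and is only a minimal illustration of the base process, not of the hierarchical extension in the talk:

```python
import numpy as np

def pitman_yor_partition(n, d, theta, rng):
    """Draw one partition of n items from the two-parameter
    Poisson-Dirichlet (Pitman-Yor) process via its Chinese-restaurant
    scheme.  With discount d in [0, 1) and strength theta > -d,
    customer i+1 joins an existing table j with probability
    (n_j - d)/(i + theta) and opens a new table with probability
    (theta + d*k)/(i + theta), where k is the current table count."""
    counts = []                                   # table (cluster) sizes
    for i in range(n):
        probs = [c - d for c in counts] + [theta + d * len(counts)]
        probs = np.array(probs) / (i + theta)
        j = rng.choice(len(probs), p=probs)
        if j == len(counts):
            counts.append(1)                      # new cluster
        else:
            counts[j] += 1
    return counts

rng = np.random.default_rng(4)
print(pitman_yor_partition(100, d=0.5, theta=1.0, rng=rng))
```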