Publications

Teaching

  • 15.075 Statistical Thinking and Data Analysis: Spring 2018 TA

    Teaching assistant for an undergraduate course which aims to provide students with a theoretical understanding of fundamental techniques in statistics and data science, including linear regression and hypothesis testing, as well as a toolkit for implementing these techniques in practice.

Project Details

  • High Power Multivariate Testing with Directional Control

    Hypothesizing elaborate cause-effect relationships is a dangerous game. On one hand, if the data support an elaborate relationship, then the underlying model is well supported. However, elaborate relationships often involve testing several different outcomes. For instance, to claim that an economic intervention is effective, examining its impact through several metrics helps increase credibility. When testing multiple outcomes, corrections for multiple comparisons are necessary to avoid making errors at a high rate, and these corrections often dramatically reduce the power of statistical tests. Working with Colin Fogarty and Matt Olson, we have approached the problem of testing several one-sided hypotheses simultaneously with high power in the context of observational studies. Our results are available in Biometrika, and a pre-print is also available on arXiv. Code to implement the methods in the paper is available here.

  • Gaussian Prepivoting

    In finite-population causal inference there are two central null hypotheses: Fisher's sharp null and Neyman's weak null. Fisher's sharp null stipulates that the treatment has no impact on any of the study participants, whereas Neyman's weak null states that the treatment effect is zero on average for those involved in the study. The rich field of randomization testing applies well to tests of Fisher's sharp null, but can, at times, provide anti-conservative inference under Neyman's weak null. On a case-by-case basis, some common test statistics have been modified so that randomization tests can be used to test both nulls with valid Type I error rate control in the large-sample limit. Furthermore, these modifications retain the exactness of randomization testing when examining only Fisher's sharp null. Working with Colin Fogarty, we have constructed a general procedure which modifies a given test statistic by composing it with a suitable cumulative distribution function, yielding a new statistic amenable to randomization inference under both Fisher's and Neyman's nulls. We show that this procedure is broadly applicable by providing a general characterization of the class of statistics for which it may be used. Important examples include the difference in means for rerandomized designs and regression-adjusted estimators in completely randomized experiments (CREs). The paper is available from the Journal of the Royal Statistical Society (Series B), and a pre-print of the paper can be found on arXiv. Some slides from a talk I gave on this work are here.
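To make the power cost of multiple-comparisons corrections (mentioned in the first project above) concrete, here is a small stdlib-only sketch. It is a generic Bonferroni illustration, not the method from the paper: it compares the power of a single one-sided z-test against the power after splitting the significance level across five hypothetical outcomes.

```python
import math


def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def z_critical(alpha):
    """One-sided critical value z with P(Z > z) = alpha, found by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 1.0 - norm_cdf(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0


def power(alpha, effect):
    """Power of a one-sided z-test against a standardized effect size."""
    return norm_cdf(effect - z_critical(alpha))


alpha, n_outcomes, effect = 0.05, 5, 2.0
single_power = power(alpha, effect)                   # one outcome: ~0.64
bonferroni_power = power(alpha / n_outcomes, effect)  # five outcomes: ~0.37
```

Even with only five outcomes, the Bonferroni-corrected test loses a large fraction of its power at the same effect size, which is the cost that motivates methods with higher multivariate power.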

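As a rough illustration of the CDF-composition idea behind prepivoting (a sketch of the general recipe, not the paper's full construction), the code below composes a difference-in-means statistic with a Gaussian CDF using a conservative two-sample standard error, then runs an ordinary permutation test on the prepivoted statistic. All data and function names here are hypothetical.

```python
import math
import random
import statistics


def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def prepivoted_stat(treated, control):
    """Difference in means composed with a Gaussian CDF.

    The standard error is a conservative two-sample estimate; this is an
    illustrative sketch of the composition step, not the paper's exact
    procedure.
    """
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / len(treated)
                   + statistics.variance(control) / len(control))
    return norm_cdf(diff / se)


def randomization_p_value(treated, control, n_perm=2000, seed=1):
    """One-sided randomization p-value for the prepivoted statistic."""
    observed = prepivoted_stat(treated, control)
    pooled = list(treated) + list(control)
    n_t = len(treated)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-randomize treatment labels
        if prepivoted_stat(pooled[:n_t], pooled[n_t:]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)


# Hypothetical data: treated units shifted upward relative to control.
treated = [2.5, 3.1, 2.8, 3.4, 2.9, 3.0, 3.2, 2.7, 3.3, 2.6]
control = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1]
p_value = randomization_p_value(treated, control)
```

Because the Gaussian CDF is monotone, the randomization test on the prepivoted statistic orders permutations the same way as the studentized statistic, so (roughly speaking) exactness under Fisher's sharp null is preserved while the Gaussian reference is what supplies large-sample validity under Neyman's weak null.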
Upcoming Talks

  • Nothing planned.

Contact