I am a PhD candidate at MIT’s Operations Research Center.
I work with Colin Fogarty on problems in causal inference. My earlier work centered on observational studies and the associated sensitivity analyses. More recently I have been working on inference procedures in settings without the risk of unmeasured confounding.
Before coming to MIT, I graduated from Bowdoin College in 2017 with a B.A. in Mathematics. Before turning to operations research, I did research in biology, quantum chemistry, analytic number theory, and random matrix theory.
PhD Student in Operations Research, 2022
Massachusetts Institute of Technology
B.A. in Mathematics, 2017
Bowdoin College
Teaching assistant for an undergraduate course that provides students with a theoretical understanding of fundamental techniques in data science, including linear regression and hypothesis testing, along with a toolkit for their practical implementation.
Duties: assisting students, leading recitations, holding office hours, and grading midterm and final exams.
Hypothesizing elaborate cause-effect relationships is a dangerous game. On one hand, if the data support an elaborate relationship, then the underlying model is well supported. However, elaborate relationships often involve testing several different outcomes. For instance, to claim that an economic intervention is effective, examining its impact through several metrics helps increase credibility. When testing multiple outcomes, corrections for multiple comparisons are necessary to avoid making errors at a high rate; these corrections often dramatically reduce the power of statistical tests. Together with Colin Fogarty and Matt Olson, I have approached the problem of testing several one-sided hypotheses simultaneously with high power in the context of observational studies. We are in the process of publishing the results, and a preprint is currently available on arXiv. Code to implement the methods in the paper is available here.
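To make the multiplicity issue concrete, here is a sketch of Holm's step-down correction, a classical procedure that controls the family-wise error rate when testing several hypotheses at once. This is only an illustrative baseline, not the method developed in the paper; the function name and interface are my own.

```python
import numpy as np

def holm_correction(p_values, alpha=0.05):
    """Holm's step-down procedure: reject hypotheses while controlling
    the family-wise error rate at level alpha across all m tests."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)          # indices of p-values, smallest first
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank)
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break                  # once one test fails, all larger p-values fail too
    return reject
```

For example, with p-values (0.001, 0.04, 0.03) at level 0.05, Holm rejects only the first hypothesis: 0.001 clears the threshold 0.05/3, but 0.03 fails the next threshold 0.05/2, illustrating how corrections can sap power relative to testing each outcome at level 0.05 alone.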
In finite population causal inference there are two central null hypotheses: Fisher's sharp null and Neyman's weak null. Fisher's sharp null stipulates that the treatment has no impact on any of the study participants, whereas Neyman's weak null states that the treatment effect is zero on average for those involved in the study. The rich field of randomization testing applies well to tests of Fisher's sharp null, but can at times provide anti-conservative inference under Neyman's weak null. On a case-by-case basis, some common test statistics have been modified so that randomization tests can be used to test both nulls with valid Type I error rate control in the large-sample limit. Furthermore, these modifications retain the exactness of randomization testing when examining only Fisher's sharp null. Working with Colin Fogarty, we have constructed a general procedure which modifies a given test statistic by composing it with a suitable cumulative distribution function in order to build a new statistic which is amenable to randomization inference under both Fisher's and Neyman's nulls. We show that this procedure is broadly applicable by providing a general characterization of the class of statistics for which it may be used. Important examples include the difference in means for rerandomized designs and regression-adjusted estimators in completely randomized experiments (CREs). We are in the process of publishing the results, and a preprint is currently available on arXiv.
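The basic mechanics of randomization inference under Fisher's sharp null can be sketched as follows: under the sharp null the potential outcomes are fixed, so only the treatment assignment is random, and the null distribution of any statistic can be simulated by re-randomizing the assignment vector. This minimal sketch uses the difference in means in a completely randomized design; it illustrates plain randomization testing only, not the CDF-composition procedure from the paper, and all names here are my own.

```python
import numpy as np

def diff_in_means(y, z):
    """Difference in mean outcomes between treated (z == 1) and control (z == 0)."""
    return y[z == 1].mean() - y[z == 0].mean()

def randomization_test(y, z, statistic, n_draws=10000, seed=0):
    """One-sided randomization test of Fisher's sharp null of no effect.
    Under the sharp null, outcomes y are fixed; we re-randomize the
    assignment z and compare the observed statistic to the null draws."""
    rng = np.random.default_rng(seed)
    observed = statistic(y, z)
    draws = np.array([statistic(y, rng.permutation(z)) for _ in range(n_draws)])
    # One-sided p-value: share of null draws at least as large as observed,
    # with the +1 correction so the test remains valid at finite n_draws.
    return (1 + np.sum(draws >= observed)) / (1 + n_draws)
```

The key caveat motivating the project is visible here: this p-value is exact for the sharp null, but if one only believes Neyman's weak null, using the unmodified difference in means in this way can be anti-conservative in some designs, which is what the CDF-composition modification is built to repair.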