JASON W. MORGAN
Hello, I'm Jason Morgan, a Data Scientist at Nationwide and a
visiting scholar in the Department of Political Science at The Ohio
State University. I use a number of modeling and machine learning
techniques to solve difficult problems, and I have recently been
engaged in a number of projects in this area.
That's my day job. I also maintain an academic research agenda, focused
on models for network analysis, particularly latent space models and
advanced exponential random graph models, as well as the computational
and implementational aspects of estimating these models. I have also done
research on causal inference in the context of natural experiments
(instrumental variables and regression discontinuity designs).
I earned my Ph.D. at OSU, focusing on applied political methodology and
comparative politics. Before coming to OSU, I earned an M.A. in
political science from Boston College and a B.A. in economics and
German studies from Lewis & Clark College.
How Strong is Strong Enough? Strengthening
Instruments through Matching and Weak Instrument Tests
(with Luke Keele). Annals
of Applied Statistics.
In a natural experiment, treatment assignments are made through a
haphazard process that is thought to be as-if random. In one form
of natural experiment, encouragement to accept treatment, rather
than the treatment itself, is assigned in this haphazard process.
This encouragement to accept treatment is often referred to as an
instrument. Instruments can be characterized by different levels
of strength depending on the amount of encouragement. Weak
instruments that provide little encouragement may produce biased
inferences, particularly when assignment of the instrument is not
strictly randomized. A specialized matching algorithm can be used
to strengthen instruments by selecting a subset of matched pairs
where encouragement is strongest. We demonstrate how weak
instrument tests can guide the matching process to ensure that the
instrument has been sufficiently strengthened. Specifically, we
combine a matching algorithm for strengthening instruments and
weak instrument tests in the context of a study of whether turnout
influences party vote share in US elections. It is thought that
when turnout is higher, Democratic candidates will receive a
higher vote share. Using excess rainfall as an instrument, we hope
to observe an instance where unusually wet weather produces lower
turnout in an as-if random fashion. Consistent with statistical
theory, we find that strengthening the instrument reduces
sensitivity to bias from an unobserved confounder.
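As a rough illustration of the kind of weak instrument test discussed above, the sketch below simulates an instrument/treatment pair and computes the first-stage F statistic for the instrument. The variable names and data-generating values are invented for illustration (and written in Python rather than R); they are not taken from the paper, which works with matched pairs rather than a simple regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Illustrative simulated data: z plays the role of the instrument
# (e.g. excess rainfall), d the treatment (turnout), y the outcome.
z = rng.normal(size=n)
d = 0.4 * z + rng.normal(size=n)   # first stage: the instrument shifts treatment
y = -0.5 * d + rng.normal(size=n)  # outcome of interest

def first_stage_F(z, d):
    """F statistic on the instrument in the first-stage OLS of d on z."""
    X = np.column_stack([np.ones_like(z), z])
    beta = np.linalg.lstsq(X, d, rcond=None)[0]
    rss = np.sum((d - X @ beta) ** 2)
    tss = np.sum((d - d.mean()) ** 2)
    # one instrument: F = (TSS - RSS) / (RSS / (n - 2))
    return (tss - rss) / (rss / (len(d) - 2))

F = first_stage_F(z, d)
print(F)  # a common rule of thumb treats F below roughly 10 as weak
```

In the matched-pair setting of the paper, the analogous test is run on the selected subset of pairs, so strengthening the instrument by discarding weakly encouraged pairs directly raises this statistic.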
Modeling Unobserved Heterogeneity in Social Networks with the Frailty
Exponential Random Graph Model (with Christenson). Forthcoming in
Political Analysis.
In the study of social processes, the presence of unobserved
heterogeneity is a regular concern. It should be particularly
worrisome for the statistical analysis of networks, given the
complex dependencies that shape network formation combined with
the restrictive assumptions of related models. In this paper, we
demonstrate the importance of explicitly accounting for unobserved
heterogeneity in exponential random graph models (ERGM) with a
Monte Carlo analysis and two applications that have played an
important role in the networks literature. Overall, these analyses
show that failing to account for unobserved heterogeneity can have
a significant impact on inferences about network formation. The
proposed frailty extension to the ERGM (FERGM) generally
outperforms the ERGM in these cases, and does so by relatively
large margins. Moreover, our novel multilevel estimation strategy
has the advantage of avoiding the problem of degeneracy that
plagues the standard MCMC-MLE approach.
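The core intuition — that ignoring actor-level unobserved heterogeneity biases estimated covariate effects in dyadic models — can be seen in a toy simulation. What follows is a plain dyadic logistic model with ad hoc frailty terms, a minimal Python sketch for intuition only, not the FERGM itself or its multilevel estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dyadic tie model: P(tie_ij) = logit^-1(-1 + x_ij + u_i + u_j), where
# the u's are unobserved actor-level frailties.  All values are invented.
n_nodes = 60
u = rng.normal(scale=1.5, size=n_nodes)        # unobserved heterogeneity
i, j = np.triu_indices(n_nodes, k=1)           # all dyads
x = rng.normal(size=i.size)                    # observed dyadic covariate
eta = -1.0 + x + u[i] + u[j]
y = (rng.random(i.size) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

def logit_fit(X, y, steps=25):
    """Newton-Raphson for a plain logistic regression (frailties ignored)."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ b)))
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # Hessian
        b = b + np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return b

b0, b1 = logit_fit(np.column_stack([np.ones_like(x), x]), y)
print(b1)   # attenuated below the true conditional effect of 1.0
```

Fitting the misspecified model without the frailties shrinks the covariate coefficient toward zero, which is one concrete way unmodeled heterogeneity distorts inferences about network formation.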
Web Timing Attacks Made Practical
(with Timothy Morgan). Blackhat 2015.
This paper addresses the problem of exploiting timing side
channels in web applications. To date, differences in execution
time have been difficult to detect and to exploit. Very small
differences in execution time induced by different security
logics, coupled with the fact that these small differences are
often lost to significant network noise, make their detection
difficult. Additionally, testing for and taking advantage of
timing vulnerabilities is often hampered by the tools available.
To that end, we perform a thorough Monte Carlo comparison of
several statistical techniques meant to identify the existence of
differences in computation time in remote web applications. We
then implement a tool that allows penetration testers to more
thoroughly identify potential exploits.
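One simple way to detect a small timing difference under heavy network noise — in the spirit of the statistical techniques compared in the paper, though not necessarily one of the exact tests it evaluates — is to compare low quantiles of the two timing samples, since the fastest observations are the least contaminated by noise, and assess the gap with a permutation test. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated response times (ms): a 0.5 ms processing difference buried in
# heavy-tailed network noise.  All parameters are illustrative.
def sample(n):
    return rng.exponential(scale=5.0, size=n) + rng.normal(10.0, 0.5, size=n)

fast = sample(2000)
slow = sample(2000) + 0.5   # the vulnerable code path runs slightly longer

def quantile_gap(a, b, q=0.05):
    """Gap between low quantiles: the fastest responses carry the least noise."""
    return np.quantile(b, q) - np.quantile(a, q)

def permutation_p(a, b, reps=1000, q=0.05):
    """One-sided permutation test of the low-quantile gap."""
    observed = quantile_gap(a, b, q)
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(reps):
        perm = rng.permutation(pooled)
        if quantile_gap(perm[:a.size], perm[a.size:], q) >= observed:
            hits += 1
    return (hits + 1) / (reps + 1)

print(permutation_p(fast, slow))   # small p: the timing difference is detectable
```

A naive comparison of sample means would drown this half-millisecond difference in the exponential noise; conditioning on the low quantiles is what makes the signal recoverable.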
dynnet. An R package providing an alternative implementation of latent
space models for static and dynamic networks. Meant as a test bed for
exploring new ideas.
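dynnet itself is written in R; purely for illustration, here is the core of a distance-based latent space model — tie log-odds declining with distance between latent positions — sketched in numpy. The names and parameter values are hypothetical and do not reflect dynnet's API:

```python
import numpy as np

rng = np.random.default_rng(3)

# Core of a distance-based latent space model: node i has a position z_i in
# R^2, and the log-odds of a tie fall off with latent distance.
n = 5
z = rng.normal(size=(n, 2))    # latent positions
alpha = 1.0                    # density (intercept) parameter

dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
p = 1.0 / (1.0 + np.exp(-(alpha - dist)))   # P(tie between i and j)
np.fill_diagonal(p, 0.0)                    # no self-ties
print(np.round(p, 2))
```

Estimation then amounts to inferring the positions z and intercept alpha from an observed adjacency matrix, which is where the computational work lies.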
A reimplementation of the boolean package in R, providing
multiprocessor support, improved performance, and extended capabilities.
An implementation of Rosenbaum's instrumental variable sensitivity
analysis for causal inference. Available in the rbounds package for R
(with Luke Keele).
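For binary outcomes in matched pairs, Rosenbaum's bound takes a particularly simple form: under hidden-bias parameter gamma, the worst-case one-sided p-value is a binomial tail with success probability gamma / (1 + gamma) over the discordant pairs. A minimal sketch of that calculation (rbounds itself is an R package; the study numbers below are hypothetical):

```python
from math import comb

def rosenbaum_bound_p(T, D, gamma):
    """Worst-case one-sided p-value for D discordant matched pairs, T of
    which favor treatment, under hidden-bias parameter gamma (gamma = 1
    recovers the usual randomization-based McNemar test)."""
    p_plus = gamma / (1.0 + gamma)          # worst-case assignment probability
    return sum(comb(D, k) * p_plus**k * (1.0 - p_plus)**(D - k)
               for k in range(T, D + 1))

# Hypothetical study: 55 discordant pairs, 40 favoring treatment.
for g in (1.0, 1.5, 2.0):
    print(g, rosenbaum_bound_p(40, 55, g))
```

Raising gamma until the bound crosses a significance threshold reports how much hidden bias in treatment assignment would be needed to explain away the observed effect.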