I am a financial economist in the Office of Markets at the U.S. Securities and Exchange Commission. I completed my Ph.D. in Finance at the Olin Business School at Washington University in St. Louis. Previously, I worked as a research assistant at the Federal Reserve Board of Governors. My main research field is asset pricing. My current research interests include applications of machine learning, large-scale optimization, robust optimization, and natural language processing in finance.
- My working paper Cash-Hedged Stock Returns with Chase P. Ross and Sharon Y. Ross is a semifinalist for the 2022 FMA best paper award in the “Investments” category.
- I have revised the working paper Expected Returns, Firm Characteristics, and Cardinality Constraints. The revised working paper is available on SSRN.
- I have finished a new working paper titled Cash-Hedged Stock Returns with Chase P. Ross and Sharon Y. Ross. The working paper is available on SSRN.
Expected Returns, Firm Characteristics, and Cardinality Constraints
I propose estimating stochastic discount factors for the cross-section of expected returns under the assumption of worst-case differences between the population moments and sample moments of characteristic-based factor portfolios’ excess returns. I show that a stochastic discount factor estimated under the worst-case difference assumption has a population and out-of-sample Sharpe ratio greater than or equal to a lower bound that can be computed ex-ante with model residuals. I also show that l1 regularization, in the context of SDF estimation, is hard thresholding on the factors’ sample Sharpe ratios, and propose using l0 regularization to estimate sparse SDFs instead. I find that the paper’s worst-case robustness approach to stochastic discount factor estimation works well using both out-of-sample and bootstrapped Sharpe ratios. The characteristic-based l0-sparse SDFs also perform much better than the l1-sparse SDFs and have very similar performance to sparse SDFs built with latent factors from the factor portfolios’ expected returns.
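The hard-thresholding idea above can be sketched in a few lines: keep only the factors whose absolute sample Sharpe ratio clears a cutoff, then fit mean-variance SDF loadings on the survivors. The returns, the cutoff, and the dimensions below are all simulated and illustrative; this is a rough sketch of the selection rule, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated excess returns for 10 characteristic-based factor portfolios
# (T months x N factors). Means, volatilities, and the cutoff are made up.
T, N = 240, 10
R = 0.01 + rng.normal(scale=0.05, size=(T, N))

mu = R.mean(axis=0)                   # sample mean excess returns
sharpe = mu / R.std(axis=0, ddof=1)   # sample (monthly) Sharpe ratios

# Hard thresholding: keep only factors whose absolute sample Sharpe
# ratio clears the cutoff, then fit mean-variance SDF loadings b
# on the surviving factors.
cutoff = 0.05
keep = np.abs(sharpe) > cutoff

Sigma = np.cov(R[:, keep], rowvar=False)
b = np.linalg.solve(Sigma, mu[keep])  # SDF loadings: b = Sigma^{-1} mu
```

The point of the sketch is that the sparsity pattern is decided entirely by the factors' own Sharpe ratios, not jointly with the covariance structure.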
Cash-Hedged Stock Returns
with Chase P. Ross and Sharon Y. Ross
Corporate cash piles vary across companies and over time. A firm’s cash holding is an implicit position in a low-return asset that is correlated across firms. Cash generates variation in beta estimates. We show how investors can hedge out the cash on firms’ balance sheets when making portfolio choices. We decompose stock betas into components that depend on the firm’s cash holding, return on cash, and cash-hedged return. Common asset pricing premia — size, value, and momentum — have large implicit cash positions. Portfolios of cash-hedged premia often have higher Sharpe ratios because firms’ cash returns are correlated.
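One stylized way to see the decomposition above: write the firm's return as a cash-weighted combination of its return on cash and its cash-hedged return, then unwind the implicit cash position. The weight and return figures below are made up for illustration.

```python
# Stylized decomposition: a firm's stock return as a weighted
# combination of its return on cash and its cash-hedged return,
#   r_firm = w * r_cash + (1 - w) * r_hedged,
# where w is the cash share. All numbers are illustrative.
w = 0.25            # cash as a share of the firm's value (assumed)
r_firm = 0.031      # observed monthly stock return (assumed)
r_cash = 0.004      # monthly return on the firm's cash (assumed)

# Back out the cash-hedged return by unwinding the implicit cash position.
r_hedged = (r_firm - w * r_cash) / (1 - w)   # → 0.04

# The beta decomposition follows the same weights: the firm's beta is
# the cash-weighted average of its cash beta and its cash-hedged beta.
```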
Are characteristic interactions important to the cross-section of expected returns?
Characteristic interactions play an important role in describing the cross-section of expected returns. I use a Fama-MacBeth regression modified to accommodate more variables than observations to study the cross-sectional relationship between characteristic interactions and expected returns. The modified Fama-MacBeth regression uses a form of dimension reduction called an envelope, which does not require variable selection or slope regularization. I use the method to estimate the information in 3,655 characteristic interactions about the cross-section of expected returns. About 100 interactions have incremental information about expected returns. Standard long-short portfolios constructed from interaction-based estimates of expected returns have significant risk-adjusted returns compared to standard factor models.
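For readers unfamiliar with the baseline procedure, a standard two-pass Fama-MacBeth estimator can be sketched as follows: run a cross-sectional regression of returns on characteristics each month, then average the monthly slopes and compute t-statistics from their time-series variation. All dimensions and data below are simulated, and the paper's envelope-based modification for more variables than observations is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated panel: T months, N stocks, K (lagged) characteristics.
T, N, K = 120, 500, 3
X = rng.normal(size=(T, N, K))          # characteristics
true_lambda = np.array([0.5, 0.0, -0.3])
ret = X @ true_lambda + rng.normal(scale=5.0, size=(T, N))

# Pass 1: a cross-sectional OLS regression for each month
# (no intercept, for brevity).
slopes = np.empty((T, K))
for t in range(T):
    slopes[t], *_ = np.linalg.lstsq(X[t], ret[t], rcond=None)

# Pass 2: average the monthly slopes; t-stats from their time series.
lam_hat = slopes.mean(axis=0)
t_stat = lam_hat / (slopes.std(axis=0, ddof=1) / np.sqrt(T))
```

With 3,655 interactions and far fewer stocks per month, the first-pass regression above is infeasible as written, which is what motivates the envelope modification in the paper.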
Are Item 1A Risk Factors Priced?
Public companies report “the most significant factors that make” their common stock “speculative or risky” in section “Item 1A. Risk Factors” of their annual filings. This paper uses textual analysis to estimate common risks from Item 1A texts and to study these risks’ effects on public companies’ stock returns. I find that the textual relevance of common Item 1A risks to the cross-section of firms’ Item 1A texts predicts the cross-section of expected stock returns. A factor portfolio aggregating information about returns from fifty individual Item 1A risks has an average monthly return of 0.97% and a risk-adjusted return of 1.06%. Factor portfolios for nineteen individual Item 1A risks have significant average returns. Eighteen individual Item 1A risks provide independent information about stock returns.
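One simple way to operationalize “textual relevance” is the cosine similarity between a firm’s Item 1A text and a common risk’s text under TF-IDF weighting. The toy function and snippets below are purely illustrative stand-ins; the paper’s estimator is not reproduced here.

```python
import math
from collections import Counter

def tfidf_cosine(doc_a, doc_b, corpus):
    """Cosine similarity of two documents under TF-IDF weights.
    A toy sketch of 'textual relevance'; the paper's estimator differs."""
    docs = [set(d.split()) for d in corpus]
    n = len(corpus)

    def idf(term):
        df = sum(term in d for d in docs)
        return math.log((1 + n) / (1 + df)) + 1.0   # smoothed IDF

    def vec(doc):
        tf = Counter(doc.split())
        return {t: c * idf(t) for t, c in tf.items()}

    va, vb = vec(doc_a), vec(doc_b)
    dot = sum(va[t] * vb.get(t, 0.0) for t in va)
    norm = math.sqrt(sum(x * x for x in va.values())) * \
           math.sqrt(sum(x * x for x in vb.values()))
    return dot / norm if norm else 0.0

# Hypothetical snippets standing in for Item 1A text and a common risk.
firm = "supply chain disruption may increase our costs"
risk = "disruption to the global supply chain"
corpus = [firm, risk, "regulatory changes may affect demand"]
score = tfidf_cosine(firm, risk, corpus)
```

A firm whose Item 1A text overlaps heavily with a common risk’s text scores near one; a firm with no shared vocabulary scores zero.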
Generating options-implied probability densities to understand oil market events
with Deepa Dhume Datta and Juan M. Londono
Energy Economics 64 (2017): 440-457.
We investigate the informational content of options-implied probability density functions (PDFs) for the future price of oil. Using a semiparametric variant of the methodology in Breeden and Litzenberger (1978), we investigate the fit and smoothness of distributions derived from alternative PDF estimation methods, and develop a set of robust summary statistics. Using PDFs estimated around episodes of high geopolitical tensions, oil supply disruptions, macroeconomic data releases, and shifts in OPEC production strategy, we explore the extent to which oil price movements are expected or unexpected, and whether agents believe these movements to be persistent or temporary.
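The Breeden and Litzenberger (1978) result underlying the abstract is that the risk-neutral density of the future price equals the discounted second derivative of the call price with respect to the strike, f(K) = e^{rT} ∂²C/∂K². The sketch below approximates this with plain finite differences on synthetic call prices; the rate, maturity, grids, and lognormal density are all assumed, and the paper uses a semiparametric variant rather than this raw estimator.

```python
import numpy as np

# Assumed rate, maturity, and strike grid ($1 apart).
r, T = 0.02, 0.25
K = np.linspace(40.0, 120.0, 81)

# Synthetic call prices from an assumed lognormal terminal-price
# density, standing in for observed market quotes.
S = np.linspace(1.0, 200.0, 2000)       # terminal-price grid
dS = S[1] - S[0]
mu_ln, s_ln = np.log(80.0), 0.2
pdf = np.exp(-(np.log(S) - mu_ln) ** 2 / (2 * s_ln ** 2)) \
      / (S * s_ln * np.sqrt(2 * np.pi))
disc = np.exp(-r * T)
C = np.array([disc * np.sum(np.maximum(S - k, 0.0) * pdf) * dS for k in K])

# Second difference in the strike recovers the density on K[1:-1]:
#   f(K) ~ exp(r*T) * (C(K+dK) - 2 C(K) + C(K-dK)) / dK^2
dK = K[1] - K[0]
density = np.exp(r * T) * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dK ** 2
```

The recovered density is nonnegative and integrates to roughly one over the strike range, which is the kind of fit-and-smoothness property the summary statistics in the paper are designed to assess.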
A Facilitated Interface to Generate a Combined Textual and Graphical Database System Using Widely Available Software
with Corey Lawson, Kirk Larson, Jonathan Van Erdewyk, Christopher Smith, Al Rizzo, Marc Rendell
Journal of Software Engineering and Applications 5.10 (2012): 789.
A Database Management System (DBMS) is the current standard for storing information. A DBMS organizes and maintains a structure for storing data. Databases make it possible to store vast amounts of randomly created information and then retrieve items using associative reasoning in search routines. However, the design of databases is cumbersome. If one is to use a database primarily for direct input of information, each field must be predefined manually, and the fields must be organized to permit coherent data input. This static requirement is problematic: the database table(s) must be predefined and customized at the outset, a difficult proposition since current DBMSs lack a user-friendly front end that allows flexible design of the input model. Furthermore, databases are primarily text-based, making it difficult to process graphical data. We have developed a general, nonproprietary approach to input modeling that uses the known informational architecture to map data to a database and then retrieve the original document in freely editable form.