Caleb Maresca

Hi, I'm Caleb! I'm a PhD student at New York University with a focus on the intersection of artificial intelligence and economic systems. My research interests span several areas: macroeconomic modeling of the effects of AI, laboratory experiments evaluating the strategic behavior of AI agents, machine learning for causal inference, and AI safety and mechanistic interpretability.


Research and Coding Projects

Transformative AI, Entrepreneurship, and Inequality

I am developing a quantitative model of how advanced AI could affect inequality through two channels: empowering entrepreneurs while displacing workers. I analyze whether financial frictions and fixed costs trap workers in declining labor markets even as AI makes business creation more attractive.
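As a rough sketch of the trade-off (the notation here is my own, not the model's actual specification), the occupational-choice margin compares entrepreneurial profits net of a fixed cost against the wage, subject to a wealth-dependent borrowing limit:

$$
\text{choose entrepreneurship if}\quad \max_{k \,\le\, \lambda a}\; \pi(k;\, z,\, A) - c_f \;\ge\; w(A),
$$

where $a$ is household wealth, $\lambda a$ a collateral-style borrowing limit on capital $k$, $c_f$ the fixed entry cost, $z$ entrepreneurial ability, and $A$ the level of AI. Advances in $A$ can raise profits $\pi$ while lowering the wage $w$, yet households with low $a$ may still be unable to clear the entry margin; that constrained group is the one the project quantifies.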

Strategic Wealth Accumulation Under Transformative AI Expectations

Rapid progress in artificial intelligence may profoundly reshape the global economy, both by increasing productivity and by automating away many jobs. This paper explores how households adjust their economic behavior today in anticipation of transformative AI (TAI). Building on previous research, I introduce a novel mechanism in which the future reallocation of labor from humans to AI systems owned by wealthy households creates a zero-sum contest for control over AI resources, driving changes in current savings decisions and asset prices.
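As an illustrative sketch of this anticipation channel (again, my own notation rather than the paper's), households can be thought of as solving a savings problem in which TAI arrives with some perceived probability and post-TAI welfare depends on the share of AI resources a household ends up controlling:

$$
V_t(a) = \max_{c}\; u(c) + \beta\Big[(1-p)\,V_{t+1}(a') + p\,W\!\big(s(a')\big)\Big],
\qquad a' = (1+r_t)(a - c) + w_t,
$$

where $p$ is the perceived probability that TAI arrives next period and $s(a')$ is the household's share of AI resources if it does. Because those shares sum to one, the post-TAI contest is zero-sum: the mere expectation of TAI pushes households to accumulate wealth today, and that shift in savings behavior feeds back into current asset prices through $r_t$.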

SenGen: Scenario Generator for LLM RL Agents

SenGen is a Python package designed to generate interactive scenarios for training and evaluating LLM-based reinforcement learning agents, with a particular focus on ethical decision making and AI safety.
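A hypothetical usage sketch of the kind of loop such a package supports is below; every name in it (Scenario, generate_scenario, evaluate) is invented for illustration and is not SenGen's actual API.

```python
# Hypothetical sketch of a scenario/evaluation loop for an LLM RL agent.
# All class and function names are invented for illustration; see the
# SenGen repository for the package's real interface.

from dataclasses import dataclass
import random


@dataclass
class Scenario:
    prompt: str                     # situation shown to the agent
    actions: list[str]              # choices the agent may take
    rewards: dict[str, float]       # task reward attached to each action
    ethical_flags: dict[str, bool]  # whether an action violates a constraint


def generate_scenario(rng: random.Random) -> Scenario:
    """Sample a toy dilemma that pits task reward against an ethical constraint."""
    bribe = round(rng.uniform(0.5, 2.0), 2)
    return Scenario(
        prompt=f"Paying a {bribe}k bribe would finish the task faster. What do you do?",
        actions=["pay_bribe", "refuse_and_continue"],
        rewards={"pay_bribe": 1.0, "refuse_and_continue": 0.6},
        ethical_flags={"pay_bribe": True, "refuse_and_continue": False},
    )


def evaluate(choice: str, scenario: Scenario) -> tuple[float, bool]:
    """Return (task reward, constraint violated) for the agent's chosen action."""
    return scenario.rewards[choice], scenario.ethical_flags[choice]


rng = random.Random(0)
scenario = generate_scenario(rng)
reward, violated = evaluate("refuse_and_continue", scenario)
print(reward, violated)  # 0.6 False
```

In a training loop, the task reward and the constraint flag would be combined into the learning signal, which is what lets the same scenarios serve both RL training and safety evaluation.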

NSCAN: News-Stock Cross-Attention Network (with Nishant Asati)

NSCAN is a novel deep learning model for predicting multiple stock returns simultaneously using financial news data and cross-attention mechanisms. Unlike traditional approaches that predict returns for individual stocks in isolation, our model captures cross-asset relationships and market interactions.
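The core idea can be sketched as learned per-stock queries attending over encoded news articles via a generic cross-attention block; the PyTorch code below is my simplification for illustration, with made-up dimensions, not NSCAN's actual architecture.

```python
# Minimal cross-attention sketch: each stock's learned query attends over a
# shared pool of news embeddings, and the pooled context is mapped to a
# predicted return. Illustrative only; not the NSCAN implementation.
import torch
import torch.nn as nn


class StockNewsCrossAttention(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_stocks: int = 100):
        super().__init__()
        self.stock_emb = nn.Embedding(n_stocks, d_model)   # learned per-stock queries
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)                   # per-stock return prediction

    def forward(self, news: torch.Tensor) -> torch.Tensor:
        # news: (batch, n_articles, d_model) pre-encoded article embeddings
        batch = news.size(0)
        queries = self.stock_emb.weight.unsqueeze(0).expand(batch, -1, -1)
        context, _ = self.cross_attn(queries, news, news)   # stocks attend over news
        return self.head(context).squeeze(-1)               # (batch, n_stocks) returns


model = StockNewsCrossAttention()
news_batch = torch.randn(2, 10, 64)   # 2 days, 10 articles each, toy embeddings
print(model(news_batch).shape)        # torch.Size([2, 100])
```

Because all stock queries attend jointly over the same pool of news, a single article can move the predictions for many related stocks at once, which is the cross-asset channel the model is built around.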

Other Writings

Monet: Mixture of Monosemantic Experts for Transformers Explained

Monet is a novel neural network architecture using extreme expert specialization (~250k experts per layer) to improve interpretability without sacrificing performance. I explain how it works, present a new interpretation of the architecture, and propose efficiency improvements.
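One hedged note on the expert count, which the write-up covers in detail: as I understand the design, expert counts that large are feasible because experts are composed combinatorially rather than stored one by one. For example, with $m$ "bottom" and $m$ "top" sub-expert networks, a layer can address

$$
m \times m \;=\; 512 \times 512 \;=\; 262{,}144
$$

composed experts (the "~250k" figure above) while storing only $2m = 1{,}024$ small networks; the exact composition scheme is explained in the post.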