My forecasting approach blends applied economics, probabilistic reasoning, and decision science. I aim to produce forecasts that are transparent, testable, and useful for policy and planning. My work draws on ideas from behavioral research, uncertainty analysis, and structured analytic techniques used in both economics and forecasting communities.
Every forecast begins with historical frequencies and long-run patterns. Base rates anchor my initial estimates and prevent overreliance on anecdotes or recent events.
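As a minimal sketch of what anchoring on a base rate looks like in practice (the historical record here is hypothetical):

```python
# Hypothetical historical record for a yes/no event:
# 1 = the event occurred that year, 0 = it did not.
history = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]

# The base rate is simply the long-run frequency; it becomes
# the initial (prior) probability before any case-specific evidence.
base_rate = sum(history) / len(history)
print(base_rate)  # 0.3
```

The point is that the starting number comes from the reference class, not from whatever story is most salient at the moment.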
As new information arrives, I update my probabilities incrementally rather than making large, reactive shifts. This keeps forecasts stable, coherent, and grounded in evidence.
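One common way to formalize incremental updating is Bayes' rule in odds form: each piece of evidence multiplies the odds by a likelihood ratio. The prior and the ratios below are illustrative numbers, not real forecasts:

```python
def bayes_update(prior, likelihood_ratio):
    """Multiply the prior odds by a likelihood ratio; return the posterior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.30                   # base-rate anchored prior
p = bayes_update(p, 1.5)   # modest evidence in favor nudges it up
p = bayes_update(p, 0.8)   # weak evidence against nudges it back down
print(round(p, 3))         # 0.34
```

Because each update is a multiplicative nudge rather than a wholesale replacement, the forecast drifts with the evidence instead of lurching with the news cycle.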
I consider not only the most likely outcome but also the range of plausible scenarios, including tail risks. I treat models as tools, not oracles, and remain aware of their limits—especially in complex or nonlinear systems.
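A simple way to make the range of scenarios and the tail explicit is a small Monte Carlo over a mixture of scenarios. Everything here, including the crisis probability and growth numbers, is a made-up illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def gdp_growth_draw():
    """Hypothetical two-scenario mixture: mostly normal years, rare crisis tail."""
    if random.random() < 0.05:             # 5% tail-risk scenario
        return random.gauss(-4.0, 2.0)     # crisis: deep contraction
    return random.gauss(2.0, 1.0)          # baseline: modest growth

draws = sorted(gdp_growth_draw() for _ in range(10_000))
p5, p50, p95 = draws[500], draws[5000], draws[9500]
print(f"5th pct {p5:.1f}%, median {p50:.1f}%, 95th pct {p95:.1f}%")
```

Reporting the 5th, 50th, and 95th percentiles rather than a single point keeps the tail scenario visible in the forecast instead of buried in the average.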
For qualitative or ambiguous questions, I use structured analytic methods: decomposing problems, identifying key drivers, weighing evidence, and documenting assumptions. This makes my reasoning explicit and reviewable.
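Decomposition can be sketched as breaking one hard question into drivers, each with its own documented probability, where each step is judged conditional on the previous ones holding. The question and numbers below are hypothetical:

```python
# Hypothetical decomposition of "Will the carbon tax pass this session?"
# Each probability is conditional on the steps above it succeeding,
# and each assumption is written down where a reviewer can challenge it.
drivers = {
    "bill reaches a floor vote": 0.60,
    "majority support, given a vote": 0.70,
    "no veto, given passage": 0.90,
}

p_overall = 1.0
for assumption, p in drivers.items():
    p_overall *= p
    print(f"{assumption}: {p:.2f}")

print(f"combined estimate: {p_overall:.3f}")  # 0.378
```

Multiplying through only works because the probabilities are stated as conditionals; the decomposition also shows exactly which assumption to revisit when new evidence arrives.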
I publish my forecasts, track updates over time, and evaluate performance after resolution. This helps refine my judgment and provides a clear record of how my thinking evolves.
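Evaluation after resolution can be made concrete with a standard accuracy measure such as the Brier score; the resolved questions below are invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: probability given vs. what actually happened.
forecasts = [0.80, 0.30, 0.60, 0.10]
outcomes  = [1,    0,    0,    0]
print(round(brier_score(forecasts, outcomes), 3))  # 0.125
```

Scoring every resolved question the same way turns "refining judgment" into something measurable: the number falls as calibration improves, and the forecast history shows where it didn't.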
I use a combination of Python, R, and Stata to build models, analyze data, and visualize uncertainty. These tools allow me to work across both quantitative and qualitative domains—from time series forecasting and econometric modeling to scenario analysis and structured judgment.
My goal is to connect forecasts to real-world decisions. I focus on how economic and environmental outcomes interact, how uncertainty shapes policy choices, and how decision-makers can use probabilistic thinking to improve planning and resilience.