Reed Orchinik
PhD Candidate in Management Science
MIT Sloan School of Management
About
Hi! I'm a PhD candidate in Management Science at MIT Sloan, where I'm advised by Professors Rahul Bhui and David Rand.
My research focuses on how information environments shape cognition, including how we process news, persuasive messages, and marketing communications. Using computational and formal models, experiments, and large-scale data, I link cognitive processes with the environments in which they operate to better understand the (surprisingly rational) foundations of important phenomena like polarization, resistance to pro-social behavior, and consumption.
In 2024, I interned at Microsoft Research in the Computational Social Science group, working on projects about the role of media narratives in shaping beliefs at scale.
I grew up in Phoenix, AZ, and graduated from Swarthmore College in 2019 with high honors in Economics and Political Science. Before my PhD, I worked as a senior research analyst with economists in the Money and Payment Studies group at the Federal Reserve Bank of New York.
Featured Research
Orchinik, R., Martel, C., Rand, D., Bhui, R.
Minor revisions at Management Science
Belief in misinformation has been attributed to digital media environments that promote intuitive thinking, which is thought to foster uncritical acceptance of content. We propose that this intuitive "truth bias" may instead be an ecological adaptation to environments where information is typically accurate. Across a large-scale preregistered survey experiment and an incentivized replication, we test whether intuitions indeed adapt to the base rate of true versus false content. Participants viewed news feeds composed primarily of either true or false headlines. We find that individuals make more, and faster, errors when encountering the less frequent headline type, and fewer errors with the more common type. Computational modeling of the deliberative process reveals that these effects are driven by intuitive responses that function like Bayesian priors about content accuracy and exhibit some persistence. Our findings suggest that susceptibility to misinformation may not merely reflect a cognitive failure, but rather a byproduct of learning from statistical regularities in digital environments.
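To make the "Bayesian prior" idea concrete, here is a minimal sketch, not the paper's own computational model: suppose a reader has learned that a fraction \(\pi\) of headlines in their feed are true and treats \(\pi\) as a prior on any new headline, while \(s\) denotes a noisy accuracy signal formed on reading it (both symbols are illustrative notation introduced here). Bayes' rule then gives

\[
P(\text{true} \mid s) \;=\; \frac{\pi \, P(s \mid \text{true})}{\pi \, P(s \mid \text{true}) + (1-\pi)\, P(s \mid \text{false})}.
\]

When \(\pi\) is high, as in a mostly-true feed, ambiguous headlines default toward "true", so errors concentrate on the rarer false items; the pattern reverses when \(\pi\) is low, matching the base-rate results described above.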
Orchinik, R., Rand, D., Bhui, R.
Revise & Resubmit at Psychological Science
The illusory truth effect, in which repetition breeds belief, is generally understood as a cognitive bias at the core of the psychology of belief. Here, we argue that the effect is instead a rational adaptation to generally high-quality information environments. Using a formal model, we show that increasing belief in repeated statements improves belief accuracy when a source is credible (i.e., likely to tell the truth) but sometimes makes errors. The model unifies key findings in the literature while predicting a testable edge case: the effect should shrink or vanish when a source is not credible. In a large (N=4,947) preregistered experiment quota-matched to the US population, we show that the illusory truth effect is substantially smaller in a low-quality (mostly false) information environment than in a high-quality (mostly true) one, with a majority of participants in the low-quality condition showing no illusory truth effect at all. We provide evidence that this adaptation occurs via intuitions rather than through a controlled process. In a second preregistered experiment (N=2,484) with American partisans, we show that the illusory truth effect does not appear for the presidential candidate (at the time of the study) whom the participant does not trust.
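As a rough illustration of why repetition can be rationally persuasive only for credible sources, consider a minimal Bayesian sketch, with \(p\), \(q\), and \(n\) as illustrative notation not drawn from the paper: a statement is true with prior probability \(p\), and each encounter with it is treated as a conditionally independent report from a source that tells the truth with probability \(q\). The posterior odds after \(n\) encounters are

\[
\frac{P(\text{true} \mid n \text{ encounters})}{P(\text{false} \mid n \text{ encounters})}
\;=\; \frac{p}{1-p}\left(\frac{q}{1-q}\right)^{n}.
\]

Repetition raises belief exactly when \(q > 1/2\) (a credible but fallible source) and lowers it when \(q < 1/2\), consistent with the smaller or absent effect observed in the low-quality condition.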