Information can be highly ambiguous and can mean very different things to different people. Is a claim you see repeatedly on social media true? Is the highly rated restaurant you're considering truly good, or is it buying positive reviews?
I explore how people navigate this uncertainty. I study the cognitive processes we use to interpret ambiguous information and how these processes are shaped by the environments in which we encounter that information. Our perceptions of sources, our prior beliefs, the heuristics and intuitions we rely on, and the narratives we use to tie information together all come from these environments.
In one line of work, I show that people adapt their intuitions about what is true to the quality of an information environment, and that these adaptations provide a rational foundation for cognitive "biases" like the "truth bias" and the "illusory truth effect." In other work, I find that people interpret consensus information as stemming either from the skill of those giving reports or from their possible bias, and that the weight placed on these two possibilities changes how people interpret consensus. This process of inferring how information was generated explains differences in people's reactions to both the scientific consensus on climate change and product reviews. In ongoing work, I study how narratives - the models people use to understand the world - are crafted in political news and how they shape the beliefs people ultimately hold about politics.
Check out my publications and select working papers below, listed by topic!
Orchinik, R., Dubey, R., Gershman, S., Powell, D., Bhui, R. (2024). Learning from and about climate scientists. PNAS Nexus (In Press)
Abstract: Despite overwhelming scientific consensus on the existence of human-caused climate change, public opinion among Americans remains split. Directly informing people of the scientific consensus is among the most prominent strategies for climate communication, yet the reasons for its effectiveness and its limitations are not fully understood. Here, we propose that consensus messaging provides information not only about the existence of climate change but also about the traits of climate scientists themselves. In a large (N = 2,545) nationally representative survey experiment, we examine how consensus information affects belief in human-caused climate change by shaping perceptions of climate scientist credibility. In the control group (N = 847), we first show that people learn both from and about climate scientists when presented with consensus and that perceived scientist credibility (especially skill) mediates up to about 40% of the total effect of consensus information on climate belief. We demonstrate that perceptions of climate scientists are malleable with two novel interventions that increase belief in climate change above and beyond consensus information.
Orchinik, R., Dubey, R., Gershman, S., Powell, D., Bhui, R. (2023). Learning about scientists from climate consensus messaging. Proceedings of the Annual Meeting of the Cognitive Science Society.
Abstract: Informing people of the overwhelming consensus among climate scientists that human-caused climate change is occurring increases belief in the proposition and the importance of policy action. However, consensus may not be interpreted in the same way by everyone; it could emerge from skilled experts converging on the truth, or from a biased cabal working for its own gain. We show that the weight an individual places on the skill and bias of experts affects whether they are persuaded by strong consensus. We demonstrate that beliefs about the skill and bias of pro-consensus scientists (those who express that climate change is occurring) and anti-consensus scientists (those who do not) are central components of a belief system about climate change, determining what individuals learn from climate scientists. However, these characteristics are not fixed, as individuals also learn about scientists from consensus. In this way, people learn both from and about climate scientists given consensus.
Hattersley, M., Orchinik, R., Ludvig, E., Bhui, R. (2023). Preferences for descriptiveness and co-explanation in complex explanations. Proceedings of the Annual Meeting of the Cognitive Science Society.
Abstract: Good explanations can be distinguished from bad ones in different ways, for instance by how well they can explain (i.e., maximise the likelihood of) the available data. Here, we consider two different components of likelihood: descriptiveness (the likelihood of the individual data points) and co-explanation (the likelihood of the specific subset of data under consideration). We consider whether people prefer explanations that are high in descriptiveness versus co-explanation. Moreover, we consider whether people who endorse conspiracy theories show a stronger preference for either quality. In a medical diagnosis task, participants make binary choices between two fictional disease variants: one higher in descriptiveness versus another higher in co-explanation. Overall, participants displayed a weak preference for descriptiveness. This preference, however, did not vary across increasing levels of descriptiveness. Moreover, such preferences were unrelated to conspiracy mentality. Thus, both explanatory virtues may play a role in the appeal of likely explanations.
Intuitions and Beliefs
The Not So Illusory Truth Effect: A Rational Foundation for Repetition Effects, with David Rand and Rahul Bhui
Abstract: The illusory truth effect - the finding that repeated statements are believed more - is understood as a cognitive bias at the core of the psychology of beliefs. Here, we propose that the effect, rather than representing a flaw in human cognition, is a rational adaptation to generally high-quality information environments. Using a formal model, we show that increasing belief in repeated statements improves belief accuracy when a source is credible (i.e., likely to tell the truth) but sometimes makes errors. The theory unifies four key findings in the literature while predicting a testable edge case for the illusory truth effect: when a source is likely to convey falsehoods. Using a large (N = 4,947) pre-registered online experiment, we show that the illusory truth effect is substantially smaller in a low-quality (mostly false) relative to a high-quality (mostly true) information environment. In fact, a majority of participants in the low-quality condition do not demonstrate any illusory truth effect. We identify the deployment of an alternative strategy in the low-quality condition where participants decrease their belief given repetition. Three process-level indicators -- response times, cognitive reflection, and the prior plausibility of items -- confirm an adaptively rational interpretation. In sum, we suggest the illusory truth effect may not be purely illusory, highlighting its adaptive foundations and the ability of people to efficiently navigate complex environments.
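The intuition behind the rational account can be sketched with a few lines of Bayesian updating. This is a minimal illustrative toy model under simplifying assumptions (independent repetitions from a single source with a fixed truth-telling probability q), not the paper's actual formal model: each repetition multiplies the posterior odds that a statement is true by the likelihood ratio q / (1 - q), so belief rises with repetition when q > 0.5 (a high-quality environment) and falls when q < 0.5 (a low-quality one).

```python
# Illustrative sketch only: repetition treated as Bayesian evidence from a
# source whose reports are true with probability q. This is an assumed toy
# model, not the formal model from the paper.

def posterior_belief(prior: float, q: float, k: int) -> float:
    """Posterior probability that a statement is true after hearing it
    k times from a source whose reports are true with probability q."""
    prior_odds = prior / (1 - prior)
    # Each independent repetition multiplies the odds by q / (1 - q).
    posterior_odds = prior_odds * (q / (1 - q)) ** k
    return posterior_odds / (1 + posterior_odds)

# High-quality environment (q = 0.8): belief increases with repetition.
# Low-quality environment (q = 0.2): belief decreases with repetition.
for q in (0.8, 0.2):
    print(q, [round(posterior_belief(0.5, q, k), 3) for k in range(3)])
```

Under these assumptions, a flat prior of 0.5 rises to 0.8 after one repetition when q = 0.8, but falls to 0.2 when q = 0.2, mirroring the reversal of the repetition effect in low-quality environments.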
Adaptive Intuitions Shape Susceptibility to Misinformation, with Cameron Martel, David Rand, Rahul Bhui
[Invited resubmission: Management Science]
Abstract: Belief in misinformation has been attributed to digital media environments that promote intuitive thinking, which is thought to foster uncritical acceptance of content. We propose that this intuitive "truth bias" may be an ecologically rational adaptation to environments where information is typically accurate. Across a large-scale pre-registered survey experiment and an incentivized replication, we test whether intuitions indeed adapt to the base rate of true versus false content. Participants viewed news feeds composed primarily of either true or false headlines. We find that individuals make more—and faster—errors when encountering the less frequent headline type, and fewer errors with the more common type. Computational modeling of the deliberative process reveals these effects are driven by intuitive responses that function like Bayesian priors about content accuracy, which exhibit some persistence. Our findings suggest that susceptibility to misinformation may not merely reflect a cognitive failure, but rather a rational byproduct of learning from statistical regularities in digital environments.
Replicability and generalizability of the repeated exposure effect on moral condemnation of fake news, with Rahul Bhui and David Rand
[Stage 1 Proposal Accepted in Principle: Nature Communications]
Abstract: Repeated exposure to misinformation reduces moral condemnation of those falsehoods, as shown by Effron and Raj (2020) -- and moral condemnation may play an important role in stopping the spread of online misinformation. In this registered report, we conceptually replicate previous findings on the effect of repetition and moral condemnation and investigate the generalizability of the findings, using an updated and larger set of false headlines. We also investigate whether asking for accuracy evaluations of the headlines, a type of accuracy prompt that is standard in repeated exposure tasks, alters the effect of repetition on moral condemnation, as inattention to the veracity of headlines may decrease outrage and thus moral condemnation.
Repetition Does Not Increase Belief in Claims From Distrusted Politicians, with Rahul Bhui and David Rand
Abstract: Repeating falsehoods is a common political tactic, and a large body of research on the illusory truth effect suggests that such repetition should increase belief in these claims. This repetition effect is generally thought to be a low-level cognitive bias that applies broadly across scenarios and people, making it a powerful force in political persuasion. In contrast, we adopt the theoretical framework of adaptive rationality and argue that repetition should not increase belief in claims if they are made by distrusted sources. We test this prediction in a large (N = 2,484) pre-registered experiment in which American partisans are shown real claims made by Donald Trump and Joe Biden, and a randomly selected subset of claims are repeated. Consistent with our predictions, we find that repetition does not increase belief if the participant distrusts the politician making the claim. By showing that source credibility is a powerful moderator of the illusory truth effect, we demonstrate an important limitation on the power of repetition for inducing belief.
Blatantly false news increases belief in more plausible falsehoods, with David Levari, Cameron Martel, Paul Seli, Rahul Bhui, Gordon Pennycook, and David Rand
[Submitted: Journal of Experimental Psychology: General]
Abstract: What are the consequences of exposure to blatant falsehoods and “fake news”? Here we show exposure to highly implausible claims can increase belief in more ambiguous false claims, as they seem more believable in comparison. Participants in five preregistered experiments (N = 5,476) were exposed to lower or higher rates of news headlines that seemed blatantly false, as well as some more plausible true and false headlines. Being exposed to a higher prevalence of extremely implausible headlines increased belief in more ambiguous headlines, regardless of whether they were actually true or false. The effect persisted for headlines describing hypothetical events and actual news headlines, whether headlines were actively evaluated or simply read passively, among liberals and conservatives, and among those high or low in cognitive reflection. Our findings emphasize the importance of reducing exposure to fake news.
Climate Communications and Marketing
Addressing climate change skepticism and inaction using human-AI dialogues, with Gabriela Czarnek, Hause Lin, Henry George Xu, Thomas Costello, Gordon Pennycook, and David Rand
Abstract: Despite scientific consensus on human-caused climate change, skepticism and inaction persist. We ask whether facts and evidence - tailored to each person’s specific concerns by an AI model - can reduce climate skepticism and motivate climate action. Participants first described their main climate change reservation. Of the N = 1,947 who articulated reservations, the most prevalent were the belief that climate change has natural causes (15%), feeling overwhelmed by the problem (10%), and concerns about the economic consequences of climate policies (8%). Participants were then randomized to (1) have a conversation with a Large Language Model (LLM) that was given the goal of addressing their climate reservations, (2) discuss an irrelevant topic with the LLM (i.e., control), or (3) receive static information about the scientific consensus around climate change (i.e., “standard-of-care”) after the control conversation. The LLM treatment significantly reduced participants’ conviction in their specific reservations, while consensus messaging had no significant effect. Both treatments led to significant increases in general pro-climate attitudes, but the LLM treatment was significantly more effective than consensus messaging - particularly for increasing willingness to make sacrifices to address climate change. Pre-treatment beliefs did not moderate treatment effectiveness, and the treatment substantially reduced Republicans’ reservations (although less than for Independents or Democrats). We also find evidence that roughly 35% to 40% of the LLM treatment effect persisted after one month. These findings highlight that it is possible to reach many of the climate skeptical or hesitant with the right facts and evidence.
Getting smart on green branding: Ideological sorting and the targeting of environmentally conscious branding, with Santiago Pardo Sanchez and David Rand
Abstract: Implementing existing green digital technology at scale could lead to a 20% reduction in global emissions annually. However, climate change has become a central focus of the "culture wars," creating a significant hindrance to the adoption of this technology. Using two survey experiments (total N = 2,089), we show that there is substantial heterogeneity in purchasing intentions when the same pro-environmental product is called "smart" or "green." Environmental concern serves as an important moderator, with participants low in environmental concern showing a substantial aversion to green branding. We consider the marketing actions available to firms in this polarized environment and suggest that ideological sorting reduces the burden of targeting. We demonstrate that political party, an easily observable or inferred characteristic, provides a clean market segmentation that correlates highly with environmental concern while capturing additional variation in the efficacy of green branding. Using simulations of multiple targeting schemes corresponding to different levels of sophistication and data access, we estimate that showing consumers the branding that aligns best with their political affiliation can increase purchasing intentions by 4%, performing just as well as a causal machine learning targeting approach. A back-of-the-envelope calculation suggests that a 4% increase in purchasing intentions for just the five products tested here would reduce carbon emissions by the equivalent of 5.4 million flights from London to New York City.
Economics and Finance
Multimarket Contact and Prices: Evidence From an Airline Merger Wave, with Marc Remer
Abstract: We study the US airline merger wave from 2008 through 2013, which included four mergers between Delta/Northwest, United/Continental, Southwest/AirTran, and American/USAir. We first show these mergers occurred between airlines with complementary networks and very little head-to-head competition on overlap, nonstop routes. Consequently, each merger led to minimal changes, on average, in route-level HHI but large increases in multimarket contact. We analyze the causal impact of the mergers on prices using synthetic difference-in-differences and the synthetic control method. We find that merger-induced increases in multimarket contact led to higher prices, especially in the latter two legacy mergers. We therefore find that these mergers led to coordinated price effects. In contrast to the previous literature, we implement econometric methods that satisfy the parallel trends assumption, and we do not find a significant impact on overlap routes, suggesting that a primary channel through which mergers affect prices is an increase in multimarket contact.
Price Effects in U.S. Merger Retrospectives: A Meta-Analytic Approach, with Andrew Olsen and Marc Remer
[Under Review: Review of Economics and Statistics]
Abstract: We conduct a meta-analysis of U.S. merger retrospectives, a large and growing literature that investigates ex post merger outcomes - typically prices - through reduced-form methods. We summarize the state of this literature by comparing the mergers studied to the universe of mergers and by computing a summary price effect with a Bayesian hierarchical model. Papers in our sample study mergers with deal values, industries, and timelines that differ from the universe of mergers, but there is little evidence of publication bias. For the compatible price effects in our sample, our preferred model predicts a mean price effect of 6.17% and a 95% probability that the true mean price effect lies within [0.67%, 11.70%]. Price effects appear to be higher for healthcare and media mergers but do not vary substantially with author characteristics. Our analysis informs both U.S. antitrust policy and meta-science in economics.
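The paper's summary effect comes from a Bayesian hierarchical model; as a rough classical analogue, the kind of pooling involved can be illustrated with a DerSimonian-Laird random-effects meta-analysis. The sketch below is illustrative only, with made-up study effects and variances, and is not the paper's actual estimator or data.

```python
# Illustrative sketch: classical random-effects meta-analysis
# (DerSimonian-Laird), a simpler analogue of hierarchical pooling.
# The numbers in the usage example are invented, not from the paper.

def random_effects_mean(effects, variances):
    """Pooled mean effect across studies, allowing between-study variance."""
    # Fixed-effect (inverse-variance) weights and pooled mean.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fe_mean = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Heterogeneity statistic Q and method-of-moments estimate of tau^2,
    # the between-study variance (truncated at zero).
    Q = sum(wi * (e - fe_mean) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (len(effects) - 1)) / c)
    # Random-effects weights shrink toward equal weighting as tau^2 grows.
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)

# Hypothetical study-level price effects (proportions) and variances.
pooled = random_effects_mean([0.02, 0.06, 0.10], [0.01, 0.01, 0.01])
print(pooled)
```

With equal study variances the random-effects mean reduces to the simple average of the study effects; unequal variances down-weight noisier studies.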