PUBLICATIONS

Journal Articles

Johnson, S.G.B., & Steinerberger, S. (2019). Intuitions about mathematical beauty: A case study in the aesthetic experience of ideas. Cognition, 189, 242–259. pdf

Can an idea be beautiful? Mathematicians often describe arguments as “beautiful” or “dull,” and famous scientists have claimed that mathematical beauty is a guide toward the truth. Do laypeople, like mathematicians and scientists, experience mathematics aesthetically? Three studies suggest that they do. When people rated the similarity of simple mathematical arguments to landscape paintings (Study 1) or pieces of classical piano music (Study 2), their similarity rankings were internally consistent across participants. Moreover, when participants rated beauty and various other potentially aesthetic dimensions for artworks and mathematical arguments, they relied mainly on the same three dimensions for judging beauty—elegance, profundity, and clarity (Study 3). These aesthetic judgments, made separately for artworks and arguments, could be used to predict similarity judgments out-of-sample. These studies also suggest a role for expertise in sharpening aesthetic intuitions about mathematics. We argue that these results shed light on broader issues in how and why humans have aesthetic experiences of abstract ideas.

De Freitas, J., & Johnson, S.G.B. (2018). Optimality bias in moral judgment. Journal of Experimental Social Psychology, 79, 149–163. pdf

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

Johnston, A.M., Sheskin, M., Johnson, S.G.B., & Keil, F.C. (2018). Preferences for explanation generality develop early in biology, but not physics. Child Development, 89, 1110–1119. pdf

One of the core functions of explanation is to support prediction and generalization. However, some explanations license a broader range of predictions than others. For instance, an explanation about biology could be presented as applying to a specific case (e.g., “this bear”) or more generally across “all animals.” The current study investigated how 5- to 7-year-olds (N=36), 11- to 13-year-olds (N=34), and adults (N=79) evaluate explanations at varying levels of generality in biology and physics. Findings revealed that even the youngest children preferred general explanations in biology. However, only older children and adults preferred explanation generality in physics. Findings are discussed in light of differences in our intuitions about biological and physical principles.

Johnston, A.M.*, Johnson, S.G.B.*, Koven, M.L., & Keil, F.C. (2017). Little Bayesians or little Einsteins? Probability and explanatory virtue in children's inferences. Developmental Science, 20, e12483. pdf

Like scientists, children seek ways to explain causal systems in the world. But are children scientists in the strict Bayesian tradition of maximizing posterior probability? Or do they attend to other explanatory considerations, as laypeople and scientists—such as Einstein—do? Four experiments support the latter possibility, demonstrating that 4- to 8-year-old children, like adults, have a robust latent scope bias that leads to inferences that do not maximize posterior probability. When faced with two explanations equally consistent with observed data, where one explanation made an unverified prediction, children consistently preferred the explanation that did not make this prediction (Experiment 1), even when the prior probabilities were identical (Experiment 3). Additional evidence suggests that this latent scope bias may result from the same explanatory strategies used by adults (Experiments 1 and 2), and can be attenuated by strong prior odds (Experiment 4). We argue that children, like adults, rely on ‘explanatory virtues’ in inference—a strategy that often leads to normative responses, but can also lead to systematic error.
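To see the normative benchmark at issue, here is a minimal Bayesian sketch (the notation is illustrative, not taken from the paper). Suppose explanations h1 and h2 have equal priors and are equally consistent with the observed data d, but h2 additionally makes a prediction whose truth is unverified. With the data equally likely under both hypotheses, Bayes' rule assigns equal posteriors:

\[ P(h_1 \mid d) \;=\; \frac{P(d \mid h_1)\,P(h_1)}{P(d)} \;=\; \frac{P(d \mid h_2)\,P(h_2)}{P(d)} \;=\; P(h_2 \mid d) \]

A strict posterior-maximizer should therefore be indifferent between the two; the latent scope bias is the systematic preference for the explanation that makes no unverified prediction.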

Kim, N.S., Johnson, S.G.B., Ahn, W., & Knobe, J. (2017). The effect of abstract versus concrete framing on judgments of biological and psychological bases of behavior. Cognitive Research: Principles and Implications, 2, 17. pdf

Human behavior is frequently described both in abstract, general terms and in concrete, specific terms. We asked whether these two ways of framing equivalent behaviors shift the inferences people make about the biological and psychological bases of those behaviors. In five experiments, we manipulated whether behaviors are presented concretely (i.e., with reference to a specific person, instantiated in the particular context of that person’s life) or abstractly (i.e., with reference to a category of people or behaviors across generalized contexts). People judged concretely framed behaviors to be less biologically based and, on some dimensions, more psychologically based than the same behaviors framed in the abstract. These findings held true for both mental disorders (Experiments 1 and 2) and everyday behaviors (Experiments 4 and 5) and yielded downstream consequences for the perceived efficacy of disorder treatments (Experiment 3). Implications for science educators, students of science, and members of the lay public are discussed.

Johnson, S.G.B., Rajeev-Kumar, G., & Keil, F.C. (2016). Sense-making under ignorance. Cognitive Psychology, 89, 39–70. pdf

Much of cognition allows us to make sense of things by explaining observable evidence in terms of unobservable explanations, such as category memberships and hidden causes. Yet we must often make such explanatory inferences with incomplete evidence, where we are ignorant about some relevant facts or diagnostic features. In seven experiments, we studied how people make explanatory inferences under these uncertain conditions, testing the possibility that people attempt to infer the presence or absence of diagnostic evidence on the basis of other cues such as evidence base rates (even when these cues are normatively irrelevant) and then proceed to make explanatory inferences on the basis of the inferred evidence. Participants followed this strategy in both diagnostic causal reasoning (Experiments 1–4, 7) and in categorization (Experiments 5–6), leading to illusory inferences. Two processing predictions of this account were also confirmed, concerning participants’ evidence-seeking behavior (Experiment 4) and their beliefs about the likely presence or absence of the evidence (Experiment 5). These findings reveal deep commonalities between superficially distinct forms of diagnostic reasoning—causal reasoning and classification—and point toward common inferential machinery across explanatory tasks.

Kim, N.S., Ahn, W., Johnson, S.G.B., & Knobe, J. (2016). The influence of framing on clinicians’ judgments of the biological basis of behaviors. Journal of Experimental Psychology: Applied, 22, 39–47. pdf

Practicing clinicians frequently think about behaviors both abstractly (i.e., in terms of symptoms, as in the Diagnostic and Statistical Manual of Mental Disorders, 5th ed., DSM–5; American Psychiatric Association, 2013) and concretely (i.e., in terms of individual clients, as in DSM–5 Clinical Cases; Barnhill, 2013). Does abstract/concrete framing influence clinical judgments about behaviors? Practicing mental health clinicians (N = 74) were presented with hallmark symptoms of 6 disorders framed abstractly versus concretely, and provided ratings of their biological and psychological bases (Experiment 1) and the likely effectiveness of medication and psychotherapy in alleviating them (Experiment 2). Clinicians perceived behavioral symptoms in the abstract to be more biologically and less psychologically based than when concretely described, and medication was viewed as more effective for abstractly than concretely described symptoms. These findings suggest a possible basis for miscommunication and misalignment of views between primarily research-oriented and primarily practice-oriented clinicians; furthermore, clinicians may accept new neuroscience research more strongly in the abstract than for individual clients.

Johnson, S.G.B., & Rips, L.J. (2015). Do the right thing: The assumption of optimality in lay decision theory and causal judgment. Cognitive Psychology, 77, 42–76. pdf

Human decision-making is often characterized as irrational and suboptimal. Here we ask whether people nonetheless assume optimal choices from other decision-makers: Are people intuitive classical economists? In seven experiments, we show that an agent’s perceived optimality in choice affects attributions of responsibility and causation for the outcomes of their actions. We use this paradigm to examine several issues in lay decision theory, including how responsibility judgments depend on the efficacy of the agent’s actual and counterfactual choices (Experiments 1–3), individual differences in responsibility assignment strategies (Experiment 4), and how people conceptualize decisions involving trade-offs among multiple goals (Experiments 5–6). We also find similar results using everyday decision problems (Experiment 7). Taken together, these experiments show that attributions of responsibility depend not only on what decision-makers do, but also on the quality of the options they choose not to take.

Johnson, S.G.B., & Ahn, W. (2015). Causal networks or causal islands? The representation of mechanisms and the transitivity of causal judgment. Cognitive Science, 39, 1468–1503. pdf

Knowledge of mechanisms is critical for causal reasoning. We contrasted two possible organizations of causal knowledge—an interconnected causal network, where events are causally connected without any boundaries delineating discrete mechanisms; or a set of disparate mechanisms—causal islands—such that events in different mechanisms are not thought to be related even when they belong to the same causal chain. To distinguish these possibilities, we tested whether people make transitive judgments about causal chains by inferring, given A causes B and B causes C, that A causes C. Specifically, causal chains schematized as one chunk or mechanism in semantic memory (e.g., exercising, becoming thirsty, drinking water) led to transitive causal judgments. On the other hand, chains schematized as multiple chunks (e.g., having sex, becoming pregnant, becoming nauseous) led to intransitive judgments despite strong intermediate links (Experiments 1–3). Normative accounts of causal intransitivity could not explain these intransitive judgments (Experiments 4 and 5).

Johnson, S.G.B., & Keil, F.C. (2014). Causal inference and the hierarchical structure of experience. Journal of Experimental Psychology: General, 143, 2223–2241. pdf

Children and adults make rich causal inferences about the physical and social world, even in novel situations where they cannot rely on prior knowledge of causal mechanisms. We propose that this capacity is supported in part by constraints provided by event structure—the cognitive organization of experience into discrete events that are hierarchically organized. These event-structured causal inferences are guided by a level-matching principle, with events conceptualized at one level of an event hierarchy causally matched to other events at that same level, and a boundary-blocking principle, with events causally matched to other events that are parts of the same superordinate event. These principles are used to constrain inferences about plausible causal candidates in unfamiliar situations, both in diagnosing causes (Experiment 1) and predicting effects (Experiment 2). The results could not be explained by construal level (Experiment 3) or similarity-matching (Experiment 4), and were robust across a variety of physical and social causal systems. Taken together, these experiments demonstrate a novel way in which noncausal information we extract from the environment can help to constrain inferences about causal structure.

Book Chapters and Commentaries

Johnson, S.G.B., & Steinerberger, S. (2019). The universal aesthetics of mathematics. Mathematical Intelligencer, 41, 67–70. pdf

Johnson, S.G.B. (2018). Financial alchemists and financial shamans. Behavioral & Brain Sciences, 41, e78. pdf

Professional money management appears to require little skill, yet its practitioners command astronomical salaries. Singh’s theory of shamanism provides one possible explanation: Financial professionals are the shamans of the global economy. They cultivate the perception of superhuman traits, maintain grueling initiation rituals, and rely on esoteric divination rituals. An anthropological view of markets can usefully supplement economic and psychological approaches.

Johnson, S.G.B. (2018). Why do people believe in a zero-sum economy? Behavioral & Brain Sciences, 41, e172. pdf

Zero-sum thinking and aversion to trade pervade our society, yet fly in the face of everyday experience and the consensus of economists. Boyer & Petersen’s (B&P’s) evolutionary model invokes coalitional psychology to explain these puzzling intuitions. I raise several empirical challenges to this explanation, proposing two alternative mechanisms – intuitive mercantilism (assigning value to money rather than goods) and errors in perspective-taking.

Johnson, S.G.B., & Ahn, W. (2017). Causal mechanisms. In M.R. Waldmann (Ed.), Oxford Handbook of Causal Reasoning (pp. 127–146). New York, NY: Oxford University Press. pdf

This chapter reviews empirical and theoretical results concerning knowledge of causal mechanisms—beliefs about how and why events are causally linked. First, we review the effects of mechanism knowledge, showing that mechanism knowledge can trump other cues to causality (including covariation evidence and temporal cues) and structural constraints (the Markov condition), and that mechanisms play a key role in various forms of inductive inference. Second, we examine several theories of how mechanisms are mentally represented—as associations, forces or powers, icons, abstract placeholders, networks, or schemas—and the empirical evidence bearing on each theory. Finally, we describe ways that people acquire mechanism knowledge, discussing the contributions from statistical induction, testimony, reasoning, and perception. For each of these topics, we highlight key open questions for future research.

Dissertation

Johnson, S.G.B. (2016). Cognition as sense-making. Unpublished doctoral dissertation. pdf  précis

Humans must understand their world in order to act on it. I develop this premise into a set of empirical claims concerning the organization of the mind—namely, claims about strategies that people use to bring evidence to bear on hypotheses, and to harness those hypotheses for predicting the future and making choices. By isolating these sense-making strategies, we can study which faculties of mind share common cognitive machinery.

My object in Chapter 1 is to make specific the claim that a common logic of explanation underlies diverse cognitive functions. In this dissertation, the empirical work focuses on causal inference and categorization—the core achievements of higher-order cognition—but there are rumblings throughout psychology, hinting that sense-making processes may be far more general. I explore some of these rumblings and hints.

In Chapters 2–4, I get into the weeds of the biases that afflict our explanatory inferences—necessary side effects of the heuristics and strategies that make such inference possible. Chapter 2 looks at the inferred evidence strategy—a way that reasoners coordinate evidence with hypotheses. Chapter 3 examines our preferences for simple and for complex explanations, arguing that there are elements in explanatory logic favoring simplicity and elements favoring complexity—opponent heuristics that are tuned depending on contextual factors. Chapter 4 studies the aftermath of explanatory inferences—how such inferences are used to predict the future. I show that these inferences are not treated probabilistically, but digitally, as certainly true or false, leading to distortions in predictions.

Chapter 5 considers the origins of these strategies. Given that children and adults are sometimes capable of sophisticated statistical intuition, might these heuristics be learned through repeated experiences with rational inference? Or might the converse be true, with our probabilistic machinery built atop an early-emerging heuristic foundation? I use the inferred evidence strategy as a case study to examine this question.

Chapters 6 and 7 are concerned with how these processes propagate to social cognition and action. Chapter 6 studies how all three of these strategies and associated biases—inferred evidence, opponent simplicity heuristics, and digital prediction—enter into our stereotyping behavior and our mental-state inferences. Chapter 7 looks at how explanatory inferences influence our choices, again using inferred evidence as a case study. We shall find that choice contexts invoke processes that operate on top of explanatory inference, which can lead to choices that are simultaneously less biased but also more incoherent.

In the concluding Chapter 8, I close with a meditation on the broader implications of this research program for human rationality and for probabilistic notions of rationality in particular. Even as our efforts to make sense of things can get us into trouble, they may be our only way of coping with the kinds of uncertainty we face in the world.

Peer-Reviewed Conference Proceedings

Johnson, S.G.B. (in press). Moral reputation and the psychology of giving: Praise judgments track personal sacrifice rather than social good. In Proceedings of the 41st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. pdf

Do we praise altruistic acts because they produce social benefits or because they require a personal sacrifice? On the one hand, utilitarianism demands that we maximize the social benefit of our actions, which could motivate altruistic acts. On the other hand, altruistic acts signal reputation precisely because personal sacrifice is a strong, costly signal. Consistent with the reputational account, these studies find that in the absence of reputational cues, people mainly rely on personal cost rather than social benefit when evaluating prosocial actors (Study 1). However, when reputation is known, personal cost acts as a much weaker signal and plays a smaller role in moral evaluations (Study 2). We argue that these results have far-reaching implications for the psychology and philosophy of altruism, as well as practical import for charitable giving, particularly the effective altruism movement.

Johnson, S.G.B., Murphy, G.L., Rodrigues, M., & Keil, F.C. (in press). Predictions from uncertain moral character. In Proceedings of the 41st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. pdf

People assess others’ moral characters to predict what they will do. Here, we study the computational mechanisms used to predict behavior from uncertain evidence about character. Whereas previous work has found that people often ignore hypotheses with low probabilities, we find that people often account for the possibility of poor moral character even when that possibility is relatively unlikely. There was no evidence that comparable inferences from uncertain non-moralized traits integrate across multiple possibilities. These results contribute to our understanding of moral judgment, probability reasoning, and theory of mind.

Johnson, S.G.B., Royka, A., McNally, P., & Keil, F.C. (in press). When is science considered interesting and important? In Proceedings of the 41st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. pdf

Scientists seek to discover truths that are interesting and important. We characterized these notions by asking laypeople to assess the importance, interestingness, surprisingness, practical value, scientific impact, and comprehensibility of research reported in the journals Science and Psychological Science. These judgments were interrelated in both samples, with interest predicted by practical value, surprisingness, and comprehensibility, and importance predicted mainly by practical value. However, these judgments poorly tracked the academic impact of the research, measured by citation counts three and seven years later. These results suggest that although people have internally reliable notions of what makes science interesting and important, these notions do not track scientific findings’ actual impact.

Johnson, S.G.B., & Steinerberger, S. (2018). The aesthetics of mathematical explanations. In Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 572–577). Austin, TX: Cognitive Science Society. pdf

Mathematicians often describe arguments as “beautiful” or “dull,” and famous scientists have claimed that mathematical beauty is a guide toward the truth. Do laypeople, like mathematicians and scientists, perceive mathematics through an aesthetic lens? We show here that they do. Two studies asked people to rate the similarity of simple mathematical arguments to pieces of classical piano music (Study 1) or to landscape paintings (Study 2). In both cases, there was internal consensus about the pairings of arguments and artworks at greater than chance levels, particularly for visual art. There was also some evidence for correspondence to the aesthetic ratings of undergraduate mathematics students (Study 1) and of professional mathematicians (Studies 1 and 2).

Johnson, S.G.B., & Tuckett, D. (2018). Asymmetric use of information about past and future: Toward a narrative theory of forecasting. In Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 1883–1888). Austin, TX: Cognitive Science Society. pdf

Story-telling helps to define the human experience. Do narratives also inform our predictions and choices? The current study provides evidence that they do, using financial decision-making as an example of a domain where, normatively, publicly available information (about the past or the future) is irrelevant. Despite this, participants used past company performance information to project future price trends, as though using affectively laden information to predict the ending of a story. Critically, these projections were stronger when information concerned predictions about a company’s future performance rather than actual data about its past performance, suggesting that people not only rely on financially irrelevant (but narratively relevant) information for making predictions, but erroneously impose temporal order on that information.

Johnson, S.G.B., Zhang, J., & Keil, F.C. (2018). Psychological underpinnings of zero-sum thinking. In Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 566–571). Austin, TX: Cognitive Science Society. pdf

A core proposition in economics is that voluntary exchanges benefit both parties. We show that people often deny the mutually beneficial nature of exchange, instead using zero-sum thinking. Participants read about simple exchanges of goods and services, judging whether each party to the transaction was better off or worse off afterwards. These studies revealed that zero-sum beliefs are pervasive. These beliefs seem to arise in part due to intuitive mercantilist beliefs that money has value over-and-above what it can purchase, since buyers are seen as less likely to benefit than sellers, and barters are often seen as failing to benefit either party (Study 1). Zero-sum beliefs are greatly reduced by giving reasons for the exchange (Study 2), suggesting that a second mechanism underlying zero-sum thinking is a failure to spontaneously take the perspective of the buyer. Implications for politics and business are discussed.

Johnson, S.G.B., & Hill, F. (2017). Belief digitization in economic prediction. In Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 2314–2319). Austin, TX: Cognitive Science Society. pdf

Economic choices depend on our predictions of the future. Yet, at times predictions are not based on all relevant information, but instead on the single most likely possibility, which is treated as though certainly the case—that is, digitally. Two sets of studies test whether this digitization bias would occur in higher-stakes economic contexts. When making predictions about future asset prices, participants ignored conditional probability information given relatively unlikely events and relied entirely on conditional probabilities given the more likely events. This effect was found for both financial aggregates and individual stocks, for binary predictions about the direction and continuous predictions about expected values, and even when the “unlikely” event explicitly had a probability as high as 30%; further, it was not moderated by investing experience. Implications for behavioral finance are discussed.
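As a schematic rendering of this contrast (hypothetical notation, not drawn from the paper), let A be the more likely conditioning event. A fully probabilistic forecast marginalizes over both possibilities, whereas a digitized forecast conditions only on A:

\[ E[\text{price}] \;=\; P(A)\,E[\text{price} \mid A] + P(\lnot A)\,E[\text{price} \mid \lnot A] \qquad\text{vs.}\qquad E[\text{price}] \;\approx\; E[\text{price} \mid A] \]

On the digitization account, participants' forecasts followed the right-hand rule even when P(¬A) was as high as 0.3.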

Johnson, S.G.B., Johnston, A.M., Koven, M.L., & Keil, F.C. (2017). Principles used to evaluate mathematical explanations. In Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 612–617). Austin, TX: Cognitive Science Society. pdf

Mathematics is critical for making sense of the world. Yet, little is known about how people evaluate mathematical explanations. Here, we use an explanatory reasoning task to investigate the intuitive structure of mathematics. We show that people evaluate arithmetic explanations by building mental proofs over the conceptual structure of intuitive arithmetic, evaluating those proofs using criteria similar to those of professional mathematicians. Specifically, we find that people prefer explanations consistent with the conceptual order of the operations ("9÷3=3 because 3×3=9" rather than "3×3=9 because 9÷3=3"), and corresponding to simpler proofs ("9÷3=3 because 3×3=9" rather than "9÷3=3 because 3+3+3=9"). Implications for mathematics cognition and education are discussed.

Johnson, S.G.B., & Keil, F.C. (2017). Statistical and mechanistic information in evaluating causal claims. In Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 618–623). Austin, TX: Cognitive Science Society. pdf

People use a variety of strategies for evaluating causal claims, including mechanistic strategies (seeking a step-by-step explanation for how a cause would bring about its effect) and statistical strategies (examining patterns of co-occurrence). Two studies examine factors leading one or the other of these strategies to predominate. First, general causal claims (e.g., "Smoking causes cancer") are evaluated predominantly using statistical evidence, whereas statistical evidence is less preferred for specific claims (e.g., "Smoking caused Jack’s cancer"). Second, social and biological causal claims are evaluated primarily through statistical evidence, whereas statistical evidence is deemed less relevant for evaluating physical causal claims. We argue for a pluralistic view of causal learning on which a multiplicity of causal concepts lead to distinct strategies for learning about causation.

Johnson, S.G.B., Valenti, J.J., & Keil, F.C. (2017). Opponent uses of simplicity and complexity in causal explanation. In Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 606–611). Austin, TX: Cognitive Science Society. pdf

People often prefer simpler explanations because they have higher prior probability. However, simpler explanations are not always normatively superior because they often do not fit the data as well as complex explanations. How do people negotiate this trade-off between prior probability (favoring simplicity) and goodness-of-fit (favoring complexity)? Here, we argue that people use opponent heuristics—relying on simplicity as a cue to prior probability but complexity as a cue to goodness-of-fit (Study 1). We also examine factors that lead one or the other heuristic to predominate in a given context. Study 2 finds that people have a stronger simplicity preference in deterministic rather than stochastic contexts, while Study 3 finds that people have a stronger simplicity preference for physical rather than social causal systems. Together, we argue that these cues and contextual moderators act as powerful constraints that help to specify otherwise ill-defined hypothesis comparison problems.

Johnson, S.G.B. (2016). Explaining December 4, 2015: Cognitive science ripped from the headlines. In Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 63–68). Austin, TX: Cognitive Science Society. pdf

Do the discoveries of cognitive science generalize beyond artificial lab experiments? Or do they have little hope of helping us to understand real-world events? Fretting over this question, I bought a copy of the Wall Street Journal and found that the three front-page headlines each connect to my own research on explanatory reasoning. I report tests of the phenomena of inferred evidence, belief digitization, and revealed truth in real-world contexts derived from the headlines. If my own corner of cognitive science has such explanatory relevance to the real world, then cognitive science as a whole must be in far better shape yet.

Johnson, S.G.B., Kim, H.S., & Keil, F.C. (2016). Explanatory biases in social categorization. In Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 776–781). Austin, TX: Cognitive Science Society. pdf

Stereotypes are important simplifying assumptions we use for navigating the social world, associating traits with social categories. These beliefs can be used to infer an individual’s likely social category from observed traits (a diagnostic inference) or to make inferences about an individual’s unknown traits based on their putative social category (a predictive inference). We argue that these inferences rely on the same explanatory logic as other sorts of diagnostic and predictive reasoning tasks, such as causal explanation. Supporting this conclusion, we demonstrate that stereotype use involves four of the same biases known to be used in causal explanation: A bias against categories making unverified predictions (Exp. 1), a bias toward simple categories (Exp. 2), an asymmetry between confirmed and disconfirmed predictions of potential categories (Exp. 3), and a tendency to treat uncertain categorizations as certainly true or false (Exp. 4).

Johnson, S.G.B., Kim, K., & Keil, F.C. (2016). The determinants of knowability. In Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 1577–1582). Austin, TX: Cognitive Science Society. pdf

Many propositions are not known to be true or false, and many phenomena are not understood. What determines which propositions and phenomena are perceived as knowable or unknowable? We tested whether factors related to scientific methodology (a proposition’s reducibility and falsifiability), its intrinsic metaphysics (the materiality of the phenomenon and its scope of applicability), and its relation to other knowledge (its centrality to one’s other beliefs and values) influence knowability. Across a wide range of naturalistic scientific and pseudoscientific phenomena (Studies 1 and 2), as well as artificial stimuli (Study 3), we found that reducibility and falsifiability have strong direct effects on knowability, that materiality and scope have strong indirect effects (via reducibility and falsifiability), and that belief and value centrality have inconsistent and weak effects on knowability. We conclude that people evaluate the knowability of propositions consistently with principles proposed by epistemologists and practicing scientists.

Johnson, S.G.B., Zhang, M.*, & Keil, F.C. (2016). Decision-making and biases in causal-explanatory reasoning. In Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 1967–1972). Austin, TX: Cognitive Science Society. pdf

Decisions often rely on judgments about the probabilities of various explanations. Recent research has uncovered a host of biases that afflict explanatory inference: Would these biases also translate into decision-making? We find that although people show biased inferences when making explanatory judgments in decision-relevant contexts (Exp. 1A), these biases are attenuated or eliminated when the choice context is highlighted by introducing an economic framing (price information; Exp. 1B–1D). However, biased inferences can be “locked in” to subsequent decisions when the judgment and decision are separated in time (Exp. 2). Together, these results suggest that decisions can be more rational than the corresponding judgments—leading to choices that are rational in the output of the decision process, yet irrational in their incoherence with judgments.

De Freitas, J.*, & Johnson, S.G.B.* (2015). Behaviorist thinking in judgments of wrongness, punishment, and blame. In Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 524–529). Austin, TX: Cognitive Science Society. pdf

Moral judgment depends upon inferences about agents’ beliefs, desires, and intentions. Here, we argue that in addition to these factors, people take into account the moral optimality of an action. Three experiments show that even agents who are ignorant about the nature of their moral decisions are held accountable for the quality of their decision—a kind of behaviorist thinking, in that such reasoning bypasses the agent’s mental states. In particular, whereas optimal choices are seen as more praiseworthy than suboptimal choices, decision quality has no further effect on moral judgments—a highly suboptimal choice is seen as no worse than a marginally suboptimal choice. These effects held up for judgments of wrongness and punishment (Experiment 1), positive and negative outcomes (Experiment 2), and agents with positive and negative intentions (Experiment 3). We argue that these results reflect a broader tendency to irresistibly apply the Efficiency Principle when explaining behavior.

Johnson, S.G.B., Merchant, T., & Keil, F.C. (2015). Argument scope in inductive reasoning: Evidence for an abductive account of induction. In Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 1015–1020). Austin, TX: Cognitive Science Society. pdf

Our ability to induce the general from the specific is a hallmark of human cognition. Inductive reasoning tasks ask participants to determine how strongly a set of premises (e.g., Collies have sesamoid bones) imply a conclusion (Dogs have sesamoid bones). Here, we present evidence for an abductive theory of inductive reasoning, according to which inductive strength is determined by treating the conclusion as an explanation of the premises, and evaluating the quality of that explanation. Two inductive reasoning studies found two signatures of explanatory reasoning, previously observed in other studies: (1) an evidential asymmetry between positive and negative evidence, with observations casting doubt on a hypothesis given more weight than observations in support; and (2) a latent scope effect, with ignorance about potential evidence counting against a hypothesis. These results suggest that inductive reasoning relies on the same hypothesis evaluation mechanisms as explanatory reasoning.

Johnson, S.G.B., Merchant, T., & Keil, F.C. (2015). Predictions from uncertain beliefs. In Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 1003–1008). Austin, TX: Cognitive Science Society. pdf

According to probabilistic theories of higher cognition, beliefs come in degrees. Here, we test this idea by studying how people make predictions from uncertain beliefs. According to the degrees-of-belief theory, people should take account of both high- and low-probability beliefs when making predictions that depend on which of those beliefs are true. In contrast, according to the all-or-none theory, people only take account of the most likely belief, ignoring other potential beliefs. Experiments 1 and 2 tested these theories in explanatory reasoning, and found that people ignore all but the best explanation when making subsequent inferences. Experiment 3A extended these results to beliefs fixed only by prior probabilities, while Experiment 3B found that people can perform the probability calculations when the needed probabilities are explicitly given. Thus, people’s intuitive belief system appears to represent beliefs in a ‘digital’ (true or false) manner, rather than taking uncertainty into account.

Johnson, S.G.B., Rajeev-Kumar, G., & Keil, F.C. (2015). Belief utility as an explanatory virtue. In Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 1009–1014). Austin, TX: Cognitive Science Society. pdf

Our beliefs guide our actions. But do potential actions also guide our beliefs? Three experiments tested whether people use pragmatist principles in fixing their beliefs, by examining situations in which the evidence is indeterminate between an innocuous and a dire explanation that necessitate different actions. According to classical decision theory, a person should favor a prudent course of action in such cases, but should nonetheless be agnostic in belief between the two explanations. Contradicting this position, participants believed the dire explanation to be more probable when the evidence was ambiguous. Further, when the evidence favored either an innocuous or a dire explanation, evidence favoring the dire explanation led to stronger beliefs compared to evidence favoring the innocuous explanation. These results challenge classic theories of the relationship between belief and action, suggesting that our system for belief fixation is sensitive to the utility of potential beliefs for taking subsequent action.

Johnston, A.M.*, Johnson, S.G.B.*, Koven, M.L., & Keil, F.C. (2015). Probabilistic versus heuristic accounts of explanation in children: Evidence from a latent scope bias. In Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 1021–1026). Austin, TX: Cognitive Science Society. pdf

Like scientists, children must find ways to explain causal systems in the world. The Bayesian approach to cognitive development holds that children evaluate explanations by applying a normative set of statistical learning and hypothesis-testing mechanisms to the evidence they observe. Here, we argue for certain supplements to this approach. In particular, we demonstrate in two studies that children, like adults, have a robust latent scope bias that conflicts with the laws of probability. When faced with two explanations equally consistent with observed data, where one explanation made an unverified prediction, children consistently preferred the explanation that did not make this prediction (Experiment 1). The bias can be overridden by strong prior odds, indicating that children can integrate cues from multiple sources of evidence (Experiment 2). We argue that children, like adults, rely on heuristics for making explanatory judgments, which often lead to normative responses but can also lead to systematic error.

Johnson, S.G.B., Jin, A., & Keil, F.C. (2014). Simplicity and goodness-of-fit in explanation: The case of intuitive curve-fitting. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 701–706). Austin, TX: Cognitive Science Society. pdf

Other things being equal, people prefer simpler explanations to more complex ones. However, complex explanations often provide better fits to the observed data, and goodness-of-fit must therefore be traded off against simplicity to arrive at the most likely explanation. In three experiments, we examine how people negotiate this tradeoff. As a case study, we investigate laypeople’s intuitions about curve-fitting in visually presented graphs, a domain with established quantitative criteria for trading off simplicity and goodness-of-fit. We examine whether people are well-calibrated to normative criteria, or whether they instead have an underfitting or overfitting bias (Experiment 1), we test people’s intuitions in cases where simplicity and goodness-of-fit are no longer inversely correlated (Experiment 2), and we directly measure judgments concerning the complexity and goodness-of-fit in a set of curves (Experiment 3). To explain these findings, we posit a new heuristic: That the complexity of an explanation is used to estimate its goodness-of-fit to the data.
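The abstract does not name the quantitative criteria it uses; the standard examples in curve-fitting are the Akaike and Bayesian information criteria, which trade goodness-of-fit (via the maximized likelihood) against a complexity penalty in the number of free parameters k:

\[ \mathrm{AIC} \;=\; 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} \;=\; k\ln n - 2\ln\hat{L} \]

where \hat{L} is the maximized likelihood and n the number of data points; lower scores mark a better simplicity/fit trade-off.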

Johnson, S.G.B., Johnston, A.M., Toig, A.E., & Keil, F.C. (2014). Explanatory scope informs causal strength inferences. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 2453–2458). Austin, TX: Cognitive Science Society. pdf

People judge the strength of cause-and-effect relationships as a matter of routine, and often do so in the absence of evidence about the covariation between cause and effect. In the present study, we examine the possibility that explanatory power is used in making these judgments. To intervene on explanatory power without changing the target causal relation, we manipulated explanatory scope—the number of effects predicted by an explanation—in two different ways, finding downstream consequences of these manipulations on causal strength judgments (Experiment 1). Directly measuring perceived explanatory power for the same items also revealed item-by-item correlations between causal strength and explanatory power (Experiment 2). These results suggest that explanatory power may be a useful heuristic for estimating causal strength in the absence of statistical evidence.

Johnson, S.G.B., Rajeev-Kumar, G., & Keil, F.C. (2014). Inferred evidence in latent scope explanations. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 707–712). Austin, TX: Cognitive Science Society. pdf

Explanations frequently lead to predictions that we do not have the time or resources to fully assess or test, yet we must often decide between potential explanations with incomplete evidence. Previous research has documented a latent scope bias, wherein explanations consistent with fewer predictions of unknown truth are preferred to those consistent with more such predictions. In the present studies, we evaluate an account of this bias in terms of inferred evidence: That people attempt to infer what would be observed if those predictions were tested, and then reason on the basis of this inferred evidence. We test several predictions of this account, including whether and how explanatory preferences depend on the reason why the truth of the effect is unknown (Experiment 1) and on the base rates of the known and unknown effects (Experiment 2), and what evidence people see as relevant to deciding between explanations (Experiment 3). These results help to reveal the cognitive processes underlying latent scope biases and highlight boundary conditions on these effects.

Johnson, S.G.B., & Rips, L.J. (2014). Predicting behavior from the world: Naïve behaviorism in lay decision theory. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 695–700). Austin, TX: Cognitive Science Society. pdf

Life in our social world depends on predicting and interpreting other people’s behavior. Do such inferences always require us to explicitly represent people’s mental states, or do we sometimes bypass such mentalistic inferences and rely instead on cues from the environment? We provide evidence for such behaviorist thinking by testing judgments about agents’ decision-making under uncertainty, comparing agents who were knowledgeable about the quality of each decision option to agents who were ignorant. Participants believed that even ignorant agents were most likely to choose optimally, both in explaining (Experiment 1) and in predicting behavior (Experiment 2), and assigned them greater responsibility when acting in an objectively optimal way (Experiment 3).

Johnson, S.G.B., & Rips, L.J. (2013). Good decisions, good causes: Optimality as a constraint on attribution of causal responsibility. In Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 2662–2667). Austin, TX: Cognitive Science Society. pdf

How do we assign causal responsibility for others’ decisions? The present experiments examine the possibility that an optimality constraint is used in these attributions, with agents considered less responsible for outcomes when the decisions that led to those outcomes were suboptimal. Our first two experiments investigate scenarios in which agents are choosing among multiple options, varying the efficacy of the forsaken alternatives to examine the role of optimality in attributing responsibility. Experiment 3 tests whether optimality considerations also play a role in attribution of causality more generally. Taken together, these studies indicate that optimality constraints are used in lay decision theory and in causal judgment.
