The big news about chaos is supposed to be that the smallest of changes in a system can result in very large differences in that system’s behavior. The so-called butterfly effect has become one of the most popular images of chaos. The idea is that the flapping of a butterfly’s wings in Argentina could cause a tornado in Texas three weeks later. By contrast, in an identical copy of the world sans the Argentinian butterfly, no such tornado would have occurred in Texas. The mathematical version of this property is known as sensitive dependence, and such sensitivity has implications for the predictability of future behavior. Clarifying the significance of sensitive dependence is important, since there have always been limits on prediction. Chaos studies have highlighted these implications in fresh ways, enabled forms of mitigation as well as control of chaos, and have led to other implications for how we think about our world.
In addition to exhibiting sensitive dependence, chaotic systems are deterministic and nonlinear and exhibit aperiodic behavior (Lorenz 1963). This entry discusses systems exhibiting these properties and their philosophical implications. For those not familiar with the basic phenomenology of chaos, reading nontechnical treatments such as Smith (2007) or Bishop (2023) is highly recommended. Because of the distinctive nature of quantum chaos, it is treated separately in the Supplement: Quantum Chaos, needed for discussions of broader implications in §6.
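The sensitive dependence described above can be made vivid with a standard toy system. The sketch below is my own illustration, not a model discussed in this entry: the logistic map, a textbook example of chaos, is iterated at its chaotic parameter value from two initial conditions differing by only 10⁻¹⁰, and the gap between the trajectories grows by many orders of magnitude.

```python
# Sensitive dependence in the logistic map x_{n+1} = r * x_n * (1 - x_n),
# which behaves chaotically at r = 4. (Illustrative sketch only.)
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)            # one trajectory
b = logistic_trajectory(0.4 + 1e-10)    # a trajectory starting 1e-10 away
gaps = [abs(x - y) for x, y in zip(a, b)]

# The gap starts at 1e-10 and is soon of order 1: tiny differences
# in initial conditions produce very different later behavior.
print(gaps[0], max(gaps))
```

Despite the system being fully deterministic, the initial microscopic difference is amplified until the two trajectories bear no resemblance to one another, which is exactly why long-term prediction of chaotic systems is so hard.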
Randomness, as we ordinarily think of it, exists when some outcomes occur haphazardly, unpredictably, or by chance. These latter three notions are all distinct, but all have some kind of close connection to probability. Notoriously, there are many kinds of probability: subjective probabilities (‘degrees of belief’), evidential probabilities, and objective chances, to name a few (Hájek 2012), and we might enquire into the connections between randomness and any of these species of probability. In this entry, we focus on the potential connections between randomness and chance, or physical probability. The ordinary way that the word ‘random’ gets used is more or less interchangeable with ‘chancy’, which suggests this Commonplace Thesis—a useful claim to target in our discussion:
(CT)
Something is random iff it happens by chance.
The Commonplace Thesis, and the close connection between randomness and chance it proposes, appears also to be endorsed in the scientific literature, as in this example from a popular textbook on evolution (which also throws in the notion of unpredictability for good measure):
scientists use chance, or randomness, to mean that when physical causes can result in any of several outcomes, we cannot predict what the outcome will be in any particular case. (Futuyma 2005: 225)
Some philosophers are, no doubt, equally subject to this unthinking elision, but others connect chance and randomness deliberately. Suppes approvingly introduces
the view that the universe is essentially probabilistic in character, or, to put it in more colloquial language, that the world is full of random happenings. (Suppes 1984: 27)
However, a number of technical and philosophical advances in our understanding of both chance and randomness open up the possibility that the easy slide between chance and randomness in ordinary and scientific usage—a slide that would be vindicated by the truth of the Commonplace Thesis—is quite misleading. This entry will attempt to spell out these developments and clarify the differences between chance and randomness, as well as the areas in which they overlap in application. It will also aim to clarify the relationship of chance and randomness to other important notions in the vicinity, particularly determinism and predictability (themselves often subject to confusion).
There will be philosophically significant consequences if the Commonplace Thesis is incorrect, and if ordinary usage is misleading. For example, it is intuitively plausible that if an event is truly random it cannot be explained (if it happens for a reason, it isn’t truly random). It might seem then that the possibility of probabilistic explanation is undermined when the probabilities involved are genuine chances. Yet this pessimistic conclusion only follows under the assumption, derived from the Commonplace Thesis, that all chancy outcomes are random. Another interesting case is the role of random sampling in statistical inference. If randomness requires chance, then no statistical inferences on the basis of ‘randomly’ sampling a large population will be valid unless the experimental design involves genuine chance in the selection of subjects. But the rationale for random sampling may not require chance sampling—as long as our sample is representative, those statistical inferences may be reliable. But in that case, we’d be in a curious situation where random sampling wouldn’t have much to do with randomness, and whatever justification for beliefs based on random sampling that randomness is currently thought to provide would need to be replaced by something else.
A final case of considerable philosophical interest is the frequentist approach to objective probability, which claims (roughly) that the chance of an outcome is its frequency in an appropriate series of outcomes (Hájek 2012 §3.4). To avoid classifying perfectly regular recurring outcomes as chancy, frequentists like von Mises (1957) proposed to require that the series of outcomes should be random, without pattern or order. Frequentism may fall with the Commonplace Thesis: if there can be chancy outcomes without randomness, both will fail.
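The worry behind von Mises’s randomness requirement can be made concrete with a small sketch (my own illustration, not an example from the literature): a perfectly regular alternating series has a relative frequency of heads of exactly one half, even though nothing about it is chancy, so bare frequency alone cannot distinguish it from a genuinely random series.

```python
# A perfectly regular series of coin-flip outcomes: HTHTHT...
# Its relative frequency of heads is exactly 0.5, yet each outcome
# is entirely patterned and predictable -- which is why von Mises
# additionally required the series itself to be random (patternless)
# before identifying chance with frequency.
regular = "HT" * 5000
freq = regular.count("H") / len(regular)
print(freq)  # 0.5
```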
The Commonplace Thesis is central to all three examples. As it is widely accepted that probabilistic explanation is legitimate, that random sampling doesn’t need genuine chance (though it can help), and that frequentism is in serious trouble (Hájek 1997), there is already some pressure on the Commonplace Thesis. But we must subject it to closer examination to clarify whether these arguments do succeed, and what exactly it means to say of some event or process that it is random or chancy. Though developing further consequences of this kind is not the primary aim of this entry, it is hoped that what is said here may help to untangle these and other vexed issues surrounding chance and randomness.
Certainty, or the attempt to obtain certainty, has played a central role in the history of philosophy. Some philosophers have taken the kind of certainty characteristic of mathematical knowledge to be the goal at which philosophy should aim. In the Republic, Plato says that geometry “draws the soul towards truth and produces philosophic thought by directing upwards what we now wrongly direct downwards” (527b). Descartes also thought that a philosophical method that proceeds in a mathematical way, enumerating and ordering everything exactly, “contains everything that gives certainty to the rules of mathematics” (Discourse on the Method, PW 1, p. 121). Other philosophers have adopted different models for how to best understand certainty. For example, Aristotle and Aquinas take scientific explanation to be essential to certainty (see Pasnau 2017, pp. 5–7), while Al-Ghazālī thought that certainty arose out of religious practice (see Albertini 2005). For many empiricists, certainty with respect to empirical matters is to be found in basic beliefs that are grounded in some fundamental aspect of perceptual experience (see, e.g., Lewis 1952).
Like knowledge, certainty is an epistemic property of beliefs. (In a derivative way, certainty is also an epistemic property of subjects: S is certain that p just in case S’s belief that p is certain.) Although some philosophers have thought that there is no difference between knowledge and certainty, it has become increasingly common to distinguish them. On this conception, then, certainty is either the highest form of knowledge or the only epistemic property superior to knowledge. One of the primary motivations for allowing kinds of knowledge less than certainty is the widespread sense that skeptical arguments are successful in showing that we rarely or never have beliefs that are certain (see Unger 1975 for this kind of skeptical argument) but do not succeed in showing that our beliefs are altogether without epistemic worth (see, for example, Lehrer 1974, Williams 1999, and Feldman 2003; see Fumerton 1995 for an argument that skepticism undermines every epistemic status a belief might have; and see Klein 1981 for the argument that knowledge requires certainty, which we are capable of having).
As with knowledge, it is difficult to provide an uncontentious analysis of certainty. There are several reasons for this. One is that there are different kinds of certainty, which are easy to conflate. Another is that the full value of certainty is surprisingly hard to capture. A third reason is that there are two dimensions to certainty: a belief can be individually certain at a particular moment, or it can be certain over some greater length of time in a system of beliefs.
Our narrow minds make us believe that we live in strange days. And we do, but these days are not particularly stranger than ever before. With every step we take, we forget the third or praise the ground we have just stepped on. Our world might be destroyed by climate change or nuclear war, but people have lived for thousands of years in worlds no larger than the six houses around them or the small city they inhabited in fear of instant destruction. The theatre we live in now might be as large as the whole world and the angst real, but don’t pretend it’s new.
I don’t have many good reasons for my aversion to this kind of doom-and-gloom. We will always live among nay-sayers and deniers, and the idea of progress is a challenge for many, but don’t forget that these people are also happy to have clean water to drink and to visit a dentist when they have a toothache. Progress does not care what we think; it will move forward, and no one is innocent in this game of who is to blame.
Don’t forget that behind all the nonsense we believe in and are willing to die for are people who bleed when they get cut, mourn their loved ones, and even fall in love for the first time, just like we do, even when these feelings are hidden behind shame and fear. You know for yourself what is hidden inside; we are all afraid to look in the mirror, to look deep into the reflections of our own eyes.
Causal models are mathematical models representing causal relationships within an individual system or population. They facilitate inferences about causal relationships from statistical data. They can teach us a good deal about the epistemology of causation, and about the relationship between causation and probability. They have also been applied to topics of interest to philosophers, such as the logic of counterfactuals, decision theory, and the analysis of actual causation.
Causal modeling is an interdisciplinary field that has its origin in the statistical revolution of the 1920s, especially in the work of the American biologist and statistician Sewall Wright (1921). Important contributions have come from computer science, econometrics, epidemiology, philosophy, statistics, and other disciplines. Given the importance of causation to many areas of philosophy, there has been growing philosophical interest in the use of mathematical causal models. Two major works—Spirtes, Glymour, and Scheines 2000 (abbreviated SGS), and Pearl 2009—have been particularly influential.
A causal model makes predictions about the behavior of a system. In particular, a causal model entails the truth value, or the probability, of counterfactual claims about the system; it predicts the effects of interventions; and it entails the probabilistic dependence or independence of variables included in the model. Causal models also facilitate the inverse of these inferences: if we have observed probabilistic correlations among variables, or the outcomes of experimental interventions, we can determine which causal models are consistent with these observations. The discussion will focus on what it is possible to do “in principle”. For example, we will consider the extent to which we can infer the correct causal structure of a system, given perfect information about the probability distribution over the variables in the system. This ignores the very real problem of inferring the true probabilities from finite sample data. In addition, the entry will discuss the application of causal models to the logic of counterfactuals, the analysis of causation, and decision theory.
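The contrast between observation and intervention that a causal model encodes can be sketched with a toy set of structural equations. This is my own illustration; the variables and coefficients are invented, not taken from SGS or Pearl.

```python
import random

# A causal chain X -> Y -> Z with linear mechanisms.
# Observationally, Z tracks X (z = 3 * (2 * x) = 6x).
# The intervention do(Y := c) overrides Y's own mechanism,
# severing Y from X, so Z no longer depends on X at all.
def sample(do_y=None):
    x = random.gauss(0, 1)                  # exogenous cause
    y = 2 * x if do_y is None else do_y     # mechanism for Y, unless intervened on
    z = 3 * y                               # mechanism for Z
    return x, y, z

x, y, z = sample()              # observation: z equals 6 * x
xi, yi, zi = sample(do_y=1.0)   # intervention: zi is 3.0 whatever xi happens to be
```

This is the sense in which a causal model predicts the effects of interventions: the equations tell us which dependencies survive when a variable is set from outside the system rather than observed.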
Category mistakes are sentences such as ‘The number two is blue’, ‘The theory of relativity is eating breakfast’, or ‘Green ideas sleep furiously’. Such sentences are striking in that they are highly odd or infelicitous, and moreover infelicitous in a distinctive sort of way. For example, they seem to be infelicitous in a different way to merely trivially false sentences such as ‘2+2=5 ’ or obviously ungrammatical strings such as ‘The ran this’.
The majority of contemporary discussions of the topic are devoted to explaining what makes category mistakes infelicitous, and a wide variety of accounts have been proposed including syntactic, semantic, and pragmatic explanations. Indeed, this is part of what makes category mistakes a particularly important topic: a theory of what makes category mistakes unacceptable can potentially shape our theories of syntax, semantics, or pragmatics and the boundaries between them. As Camp (2016, 611–612) explains: “Category mistakes … are theoretically interesting precisely because they are marginal: as by-products of our linguistic and conceptual systems lacking any obvious function, they reveal the limits of, and interactions among, those systems. Do syntactic or semantic restrictions block ‘is green’ from taking ‘Two’ as a subject? Does the compositional machinery proceed smoothly, but fail to generate a coherent proposition or delimit a coherent possibility? Or is the proposition it produces simply one that our paltry minds cannot grasp, or that fails to arouse our interest? One’s answers to these questions depend on, and constrain, one’s conceptions of syntax, semantics, and pragmatics, of language and thought, and of the relations among them and between them and the world.”
Moreover, the question of how to account for the infelicity of category mistakes has implications for a variety of other philosophical questions. For example, in metaphysics, it is often argued that a statue must be distinct from the lump of clay from which it is made because ‘The statue is Romanesque’ is true, while ‘The lump of clay is Romanesque’ is not—indeed, the latter ascription arguably constitutes a category mistake. Correspondingly, an assessment of this argument depends on one’s account of category mistakes (see §4.3 below).
The fear of God is not the beginning of wisdom. The fear of God is the death of wisdom. Skepticism and doubt lead to study and investigation, and investigation is the beginning of wisdom.
The modern world is the child of doubt and inquiry, as the ancient world was the child of fear and faith. (Why I Am an Agnostic)
When we fully understand the brevity of life, its fleeting joys and unavoidable pains; when we accept the fact that all men and women are approaching an inevitable doom: the consciousness of it should make us more kindly and considerate of each other. This feeling should make men and women use their best efforts to help their fellow travelers on the road, to make the path brighter and easier as we journey on. It should bring a closer kinship, a better understanding, and a deeper sympathy for the wayfarers who must live a common life and die a common death. (The Myth of the Soul)
Why I am an Agnostic and Other Essays, Clarence Darrow
This entry is intended as a brief and general introduction to the development of medieval theories of the categories from the beginning of the Middle Ages, in the sixth century, to the Silver Age of Scholasticism, in the sixteenth. This development is fascinating but extraordinarily complex. Scholars are just beginning to take note of the major differences in the understanding of categories and of how these differences are related to the discussion of other major philosophical topics in the Middle Ages. Much work remains to be done, even regarding the views of towering figures, so necessarily we have had to restrict our discussion to only a few major figures and topics. Still, we hope that the discussion will serve as a good starting point for anyone interested in these theories and their history.
1. Issues
Philosophers speak about categories in many different ways. There is one initial, and rather substantial, difference between philosophers who allow a very large number of categories and those who allow only a very small number. The first include among categories such different things as human, green, animal, thought, and justice; the second speak only of very general things such as substance, quality, relation, and the like, as categories. Among twentieth-century authors who allow many categories is Gilbert Ryle (b. 1900, d. 1976). Roderick Chisholm (b. 1916, d. 1999) is an example of those who have only very few. Medieval authors follow Aristotle’s narrow understanding.
The disagreement concerning categories in the history of philosophy does not end there. Even if we restrict the discussion to a small number of items of the sort that Aristotle regards as categories, many issues remain to be settled about them, and philosophers frequently disagree about how to settle them. These issues may be gathered into roughly ten groups.
The first group comprises what may be described roughly as extensional issues; they have to do with the number of categories. The extension of a term consists of the things of which the term can be truthfully predicated. Thus the extension of ‘cat’ consists of all the animals of which it is true to say that they are cats. Philosophers in general frequently disagree on how many categories there are. For example, Aristotle lists up to ten, but gives the impression that the ultimate number is not settled at all. Plotinus (204/5–270) and Baruch Spinoza (1632–77) reduce the number radically, but their views do not by any means establish themselves as definitive. In the Middle Ages the number of categories is always small (ten or less) but it nonetheless varies.
The capability approach is a theoretical framework that entails two normative claims: first, the claim that the freedom to achieve well-being is of primary moral importance and, second, that well-being should be understood in terms of people’s capabilities and functionings. Capabilities are the doings and beings that people can achieve if they so choose – their opportunity to do or be such things as being well-nourished, getting married, being educated, and travelling; functionings are capabilities that have been realized. Whether someone can convert a set of means – resources and public goods – into a functioning (i.e., whether she has a particular capability) crucially depends on certain personal, sociopolitical, and environmental conditions, which, in the capability literature, are called ‘conversion factors.’ Capabilities have also been referred to as real or substantive freedoms as they denote the freedoms that have been cleared of potential obstacles, in contrast to mere formal rights and primary social goods.
Within philosophy, the capability approach has been employed in the development of several conceptual and normative theories within, most prominently, development ethics, political philosophy, public health ethics, environmental ethics and climate justice, and philosophy of education. This proliferation of capability literature has led to questions concerning what kind of framework it is (section 1); how its core concepts should be defined (section 2); how it can be further specified for particular purposes (section 3); what is needed to develop the capability approach into an account of social and distributive justice (section 4); how it relates to non-Western philosophies (section 5); and how it can be and has been applied in practice (section 6).
Herbert Simon introduced the term ‘bounded rationality’ (Simon 1957b: 198; see also Klaes & Sent 2005) as shorthand for his proposal to replace the perfect rationality assumptions of homo economicus with a concept of rationality better suited to cognitively limited agents:
Broadly stated, the task is to replace the global rationality of economic man with the kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist. (Simon 1955a: 99)
Bounded rationality now describes a wide range of descriptive, normative, and prescriptive accounts of effective behavior which depart from the assumptions of perfect rationality. This entry aims to highlight key contributions—from the decision sciences, economics, cognitive- and neuropsychology, biology, physics, computer science, and philosophy—to our current understanding of bounded rationality.
1. Homo Economicus and Expected Utility Theory
Bounded rationality has come to encompass models of effective behavior that weaken, or reject altogether, the idealized conditions of perfect rationality assumed by models of economic man. In this section we state what models of economic man are committed to and their relationship to expected utility theory. In later sections we review proposals for departing from expected utility theory.
The perfect rationality of homo economicus imagines a hypothetical agent who has complete information about the options available for choice, perfect foresight of the consequences from choosing those options, and the wherewithal to solve an optimization problem (typically of considerable complexity) that identifies an option which maximizes the agent’s personal utility.
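The optimization problem described above can be rendered as a toy computation. This is my own illustration; the options, probabilities, and utilities are invented for the example.

```python
# Each option is a lottery: a list of (probability, utility) pairs
# over its possible consequences. The perfectly rational agent of
# homo economicus knows all of this and chooses the option with
# maximal expected utility.
options = {
    "safe":   [(1.0, 50)],                 # a sure 50 utils
    "gamble": [(0.5, 120), (0.5, 0)],      # coin flip: 120 or nothing
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best = max(options, key=lambda name: expected_utility(options[name]))
# expected utilities: safe = 50, gamble = 0.5*120 + 0.5*0 = 60,
# so the expected-utility maximizer chooses "gamble"
print(best)
```

Bounded-rationality models depart from this picture precisely by weakening one or more of its assumptions: complete knowledge of the options, known probabilities and utilities, and the computational wherewithal to perform the maximization.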
We think of a boundary whenever we think of an entity demarcated from its surroundings. There is a boundary (a line) separating Maryland and Pennsylvania. There is a boundary (a circle) isolating the interior of a disc from its exterior. There is a boundary (a surface) enclosing the bulk of this apple. Sometimes the exact location of a boundary is unclear or otherwise controversial (as when you try to trace out the borders of a desert, the edges of a mountain, or even the boundary of your own body). Sometimes the boundary lies skew to any physical discontinuity or qualitative differentiation (as with the border of Wyoming, or the boundary between the upper and the lower halves of a homogeneous sphere). But whether sharp or blurry, natural or artificial, for every object there appears to be a boundary that marks it off from the rest of the world. Events, too, have boundaries — at least temporal boundaries. Our lives are bounded by our births and by our deaths; the soccer game began at 3pm sharp and ended with the referee’s final whistle at 4:45pm. It is sometimes suggested that even abstract entities, such as concepts or sets, have boundaries of their own (witness the popular method for representing the latter by means of simple closed curves encompassing their contents, as in Euler circles and Venn diagrams), and Wittgenstein could emphatically proclaim that the boundaries of our language are the boundaries of our world (1921: prop. 5.6). Whether all this boundary talk is coherent, however, and whether it reflects the structure of the world or simply the organizing activity of our mind, or of our collective practices and conventions, are matters of deep philosophical controversy.
1. Issues
Euclid defined a boundary as “that which is an extremity of anything” (Elements, I, def. 13). Aristotle defined the extremity of a thing as “the first thing beyond which it is not possible to find any part [of the given thing], and the first within which every part is” (Metaphysics, V, 1022a4–5). Together, these two definitions deliver the classic account of boundaries, an account that is both intuitive and comprehensive and offers the natural starting point for any further investigation into the boundary concept. Indeed, although Aristotle’s definition concerned primarily the extremities of spatial entities, it applies equally well in the temporal domain. Just as the Mason-Dixon line marks the boundary between Maryland and Pennsylvania insofar as no part of Maryland can be found on the northern side of the line, and no part of Pennsylvania on its southern side, so “the now is an extremity of the past (no part of the future being on this side of it), and again of the future (no part of the past being on that side of it)” (Physics, VI, 233b35–234a2). Similarly for concrete objects and events: just as the surface of an apple marks its spatial boundary insofar as the apple extends only up to it, so the referee’s whistle marks the temporal boundary of the game insofar as the game protracts only up to it. In the case of abstract entities, such as concepts and sets, the account is perhaps adequate only figuratively. Still, it is telling that one of the Greek words for ‘boundary’, ὅρος, is also a word for ‘definition’: as John of Damascus nicely put it, “definition is the term for the setting of land boundaries taken in a metaphorical sense” (The Fount of Knowledge, I, 8). 
Likewise, it is telling that in point-set topology the standard definition of a set’s boundary (from Hausdorff 1914, §7.2) reflects essentially the same intuition: the boundary, or frontier, of a set A is the set of those points all of whose neighborhoods intersect both A and the complement of A (where a neighborhood of a point p is, intuitively, a set of points that entirely “surround” p). It is not an exaggeration, therefore, to say that the Euclidean-Aristotelian characterization captures a general intuition about boundaries that applies across the board. Nonetheless, precisely this intuitive characterization gives rise to several puzzles that justify philosophical concern.
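Hausdorff’s definition can be mimicked in a discrete setting. The sketch below is my own illustration, and the integer grid with four-cell neighborhoods is only an analogue of a genuine topological space; still, it applies the same test: a point is a boundary point of A exactly when its smallest neighborhood meets both A and the complement of A.

```python
# Discrete analogue of the point-set boundary: take the smallest
# "neighborhood" of a grid point p to be p together with its four
# adjacent cells, and count p as a boundary point of A iff that
# neighborhood intersects both A and A's complement.
def neighborhood(p):
    x, y = p
    return {(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def boundary(A):
    # only points in or adjacent to A can possibly qualify
    candidates = A | {q for p in A for q in neighborhood(p)}
    return {p for p in candidates
            if neighborhood(p) & A and neighborhood(p) - A}

square = {(x, y) for x in range(3) for y in range(3)}  # a 3x3 block
edge = boundary(square)
# the centre (1, 1) is interior (all its neighbors lie in the square),
# while the corner (0, 0) and the adjacent outside cell (-1, 0) both
# have neighborhoods straddling the square and its complement
```

Note that on this test the boundary includes points on both sides of the divide, which matches the topological definition: the frontier of A and the frontier of A’s complement coincide.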
Blame is a common reaction to something of negative normative significance about someone or their behavior. A paradigm case, perhaps, would be when one person wrongs another, and the latter responds with resentment and a verbal rebuke, but of course we also blame others for their attitudes and characters (Eshleman 2004, Smith 2005, Holroyd 2012). Thus blaming scenarios typically involve a wide range of inward and outward responses to a wrongful or bad action, attitude, or character (such responses include: beliefs, desires, expectations, emotions, sanctions, and so on). In theorizing about blame, then, philosophers have typically asked two questions:
Which reactions and interactions constitute blame?
When is it appropriate to respond in these ways?
It is common to approach these questions with a larger theoretical agenda in mind: for example, in an effort to understand the conditions of moral responsibility and the nature of freedom. But the questions are interesting in their own right, especially since blame is such a common feature of our lives. This entry will critically discuss the answers that have been offered in response to the above questions concerning blame, with the aim of shedding some light on blame’s nature, ethics, and significance. (It is blame, rather than praise, that has received the lion’s share of attention from philosophers in recent years, despite the fact that they are a natural pair. Though that is perhaps beginning to change—see King 2023, Lippert-Rasmussen 2024, and Shoemaker 2024 for book-length treatments of blame that also pay serious attention to praise.)
Research on “implicit bias” suggests that people can act on the basis of prejudice and stereotypes without intending to do so. While psychologists in the field of “implicit social cognition” study consumer products, self-esteem, food, alcohol, political values, and more, the most striking and well-known research has focused on implicit biases toward members of socially stigmatized groups, such as African-Americans, women, and the LGBTQ community. For example, imagine Frank, who explicitly believes that women and men are equally suited for careers outside the home. Despite his explicitly egalitarian belief, Frank might nevertheless behave in any number of biased ways, from distrusting feedback from female co-workers to hiring equally qualified men over women. Part of the reason for Frank’s discriminatory behavior might be an implicit gender bias. Psychological research on implicit bias has grown steadily, raising metaphysical, epistemological, and ethical questions.
2. If you want to know what other people think about something that concerns you, you have only to reflect on what you would think of them under the same circumstances. You should regard no one as morally superior to you on this point, and no one as more simple. More often than we think, people notice things we believe we have artfully concealed from them. Of this remark, more than half is true, and that is saying a lot for a maxim composed in one’s thirtieth year.
The Principle of Beneficence in Applied Ethics (SEP)
Beneficent actions and motives have traditionally occupied a central place in morality. Common examples today are found in social welfare programs, scholarships for needy and meritorious students, communal support of health-related research, policies to improve the welfare of animals, philanthropy, disaster relief, programs to benefit children and the incompetent, and preferential hiring and admission policies. What makes these diverse acts beneficent? Are such beneficent acts and policies obligatory or merely the pursuit of optional moral ideals?
These questions have generated a substantial literature on beneficence in both theoretical ethics and applied ethics. In theoretical ethics, the dominant issue in recent years has been how to place limits on the scope of beneficence. In applied and professional ethics, a number of issues have been treated in the fields of biomedical ethics and business ethics.
1. The Concepts of Beneficence and Benevolence
The term beneficence connotes acts or personal qualities of mercy, kindness, generosity, and charity. It is suggestive of altruism, love, humanity, and promoting the good of others. In ordinary language, the notion is broad, but it is understood even more broadly in ethical theory to include effectively all norms, dispositions, and actions with the goal of benefiting or promoting the good of other persons. The language of a principle or rule of beneficence refers to a normative statement of a moral obligation to act for the benefit of others, helping them to further their important and legitimate interests, often by preventing or removing possible harms. Many dimensions of applied ethics appear to incorporate such appeals to obligatory beneficence, even if only implicitly. For example, when apparel manufacturers are criticized for not having good labor practices in factories, the ultimate goal of the criticisms is usually to obtain better working conditions, wages, and other benefits for workers.
Anglophone philosophers of mind generally use the term “belief” to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true. To believe something, in this sense, needn’t involve actively reflecting on it: Of the vast number of things ordinary adults believe, only a few can be at the fore of the mind at any single time. Nor does the term “belief”, in standard philosophical usage, imply any uncertainty or any extended reflection about the matter in question (as it sometimes does in ordinary English usage). Many of the things we believe, in the relevant sense, are quite mundane: that we have heads, that it’s the 21st century, that a coffee mug is on the desk. Forming beliefs is thus one of the most basic and important features of the mind, and the concept of belief plays a crucial role in both philosophy of mind and epistemology. The “mind-body problem”, for example, so central to philosophy of mind, is in part the question of whether and how a purely physical organism can have beliefs. Much of epistemology revolves around questions about when and how our beliefs are justified or qualify as knowledge.
Most contemporary philosophers characterize belief as a “propositional attitude”. Propositions are generally taken to be whatever it is that sentences express (see the entry on propositions). For example, if two sentences mean the same thing (e.g., “snow is white” in English, “Schnee ist weiss” in German), they express the same proposition, and if two sentences differ in meaning, they express different propositions. (Here we are setting aside some complications that might arise concerning indexicals; see the entry on indexicals.) A propositional attitude, then, is the mental state of having some attitude, stance, take, or opinion about a proposition or about the potential state of affairs in which that proposition is true—a mental state of the sort canonically expressible in the form “S A that P”, where S picks out the individual possessing the mental state, A picks out the attitude, and P is a sentence expressing a proposition. For example: Ahmed [the subject] hopes [the attitude] that Alpha Centauri hosts intelligent life [the proposition], or Yifeng [the subject] doubts [the attitude] that New York City will exist in four hundred years. What one person doubts or hopes, another might fear, or believe, or desire, or intend—different attitudes, all toward the same proposition. Discussions of belief are often embedded in more general discussions of the propositional attitudes; and treatments of the propositional attitudes often take belief as the first and foremost example.
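The canonical form “S A that P” can be pictured as a simple data structure. This is a purely illustrative toy, not an analysis from the entry: it merely records that a propositional attitude pairs a subject and an attitude type with a proposition, and that distinct attitudes can target one and the same proposition.

```python
from dataclasses import dataclass

# Toy rendering of "S A that P": subject S, attitude A, proposition P.
@dataclass(frozen=True)
class PropositionalAttitude:
    subject: str      # S, the individual possessing the mental state
    attitude: str     # A, e.g. "believes", "hopes", "doubts", "fears"
    proposition: str  # P, what the attitude is directed toward

p = "Alpha Centauri hosts intelligent life"
hope = PropositionalAttitude("Ahmed", "hopes", p)
doubt = PropositionalAttitude("Yifeng", "doubts", p)
# different subjects and different attitudes, all toward the same proposition
```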