
Chance versus Randomness
Randomness, as we ordinarily think of it, exists when some outcomes occur haphazardly, unpredictably, or by chance. These latter three notions are all distinct, but all have some kind of close connection to probability. Notoriously, there are many kinds of probability: subjective probabilities (‘degrees of belief’), evidential probabilities, and objective chances, to name a few (Hájek 2012), and we might enquire into the connections between randomness and any of these species of probability. In this entry, we focus on the potential connections between randomness and chance, or physical probability. The ordinary way that the word ‘random’ gets used is more or less interchangeable with ‘chancy’, which suggests this Commonplace Thesis—a useful claim to target in our discussion:
- (CT) Something is random iff it happens by chance.
The Commonplace Thesis, and the close connection between randomness and chance it proposes, appears also to be endorsed in the scientific literature, as in this example from a popular textbook on evolution (which also throws in the notion of unpredictability for good measure):
scientists use chance, or randomness, to mean that when physical causes can result in any of several outcomes, we cannot predict what the outcome will be in any particular case. (Futuyma 2005: 225)
Some philosophers are, no doubt, equally subject to this unthinking elision, but others connect chance and randomness deliberately. Suppes approvingly introduces
the view that the universe is essentially probabilistic in character, or, to put it in more colloquial language, that the world is full of random happenings. (Suppes 1984: 27)
However, a number of technical and philosophical advances in our understanding of both chance and randomness open up the possibility that the easy slide between chance and randomness in ordinary and scientific usage—a slide that would be vindicated by the truth of the Commonplace Thesis—is quite misleading. This entry will attempt to spell out these developments and clarify the differences between chance and randomness, as well as the areas in which they overlap in application. It will also aim to clarify the relationship of chance and randomness to other important notions in the vicinity, particularly determinism and predictability (themselves often subject to confusion).
There will be philosophically significant consequences if the Commonplace Thesis is incorrect, and if ordinary usage is misleading. For example, it is intuitively plausible that if an event is truly random it cannot be explained (if it happens for a reason, it isn’t truly random). It might seem then that the possibility of probabilistic explanation is undermined when the probabilities involved are genuine chances. Yet this pessimistic conclusion only follows under the assumption, derived from the Commonplace Thesis, that all chancy outcomes are random. Another interesting case is the role of random sampling in statistical inference. If randomness requires chance, then no statistical inferences on the basis of ‘randomly’ sampling a large population will be valid unless the experimental design involves genuine chance in the selection of subjects. Yet the rationale for random sampling may not require chance sampling: as long as our sample is representative, those statistical inferences may be reliable. In that case, however, we’d be in a curious situation where random sampling wouldn’t have much to do with randomness, and whatever justification for beliefs based on random sampling that randomness is currently thought to provide would need to be replaced by something else.
A final case of considerable philosophical interest is the frequentist approach to objective probability, which claims (roughly) that the chance of an outcome is its frequency in an appropriate series of outcomes (Hájek 2012 §3.4). To avoid classifying perfectly regular recurring outcomes as chancy, frequentists like von Mises (1957) proposed to require that the series of outcomes should be random, without pattern or order. Frequentism may thus fall with the Commonplace Thesis: if there can be chancy outcomes without randomness, both the thesis and frequentism will fail.
The Commonplace Thesis is central to all three examples. As it is widely accepted that probabilistic explanation is legitimate, that random sampling doesn’t need genuine chance (though it can help), and that frequentism is in serious trouble (Hájek 1997), there is already some pressure on the Commonplace Thesis. But we must subject it to closer examination to clarify whether these arguments do succeed, and what exactly it means to say of some event or process that it is random or chancy. Though developing further consequences of this kind is not the primary aim of this entry, it is hoped that what is said here may help to untangle these and other vexed issues surrounding chance and randomness.
Chatbot
Chance and randomness are not the same.
Randomness refers to the structure of a process. A process is random when its outcomes cannot be predicted with certainty and show no usable pattern. In probability theory this means individual outcomes are unpredictable, even though the distribution of outcomes can still be described statistically. A coin toss is treated as random because each result cannot be predicted beforehand.
Chance refers to the occurrence of an event relative to intention, planning, or expectation. When something happens “by chance,” it means it was not arranged, intended, or controlled. It emphasizes contingency: the event could easily have been otherwise.
So the distinction is:
Randomness concerns how outcomes are generated.
Chance concerns how an event appears in relation to plans, expectations, or purposes.
Example:
A coin toss is random. If you happen to flip a coin exactly when a friend calls you, that coincidence is chance.
It is also possible to have chance without true randomness. In a deterministic system, every event could be fully caused and still appear as chance from the perspective of someone who cannot know all the causes. In that case the process is not random in principle, but events still occur “by chance” relative to human plans and knowledge.
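The point can be illustrated with a pseudorandom number generator: a process that is fully deterministic in principle, yet whose outputs look patternless to anyone who does not know the seed. Below is a minimal sketch of a linear congruential generator (the function name `lcg` is illustrative; the constants are the widely used Numerical Recipes values):

```python
# A minimal linear congruential generator. Every output is fully
# determined by the seed and the recurrence, yet the stream of
# values looks "random" to an observer who cannot see the seed.
def lcg(seed, n):
    state = seed
    out = []
    for _ in range(n):
        # Deterministic recurrence: no chance anywhere in the process.
        state = (1664525 * state + 1013904223) % 2**32
        out.append(state / 2**32)  # scale to [0, 1)
    return out

# Same seed, same stream: the process is not random in principle.
assert lcg(42, 5) == lcg(42, 5)
```

Rerunning with the same seed reproduces the sequence exactly, which is precisely why such generators are said to be pseudorandom rather than random.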
In short: randomness describes the nature of a process, while chance describes the contingency of an event from a human viewpoint.
Wikipedia
Randomness
In common usage, randomness is the apparent or actual lack of definite patterns or predictability in information. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if there is a known probability distribution, the frequency of different outcomes over repeated events (or “trials”) is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy.
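The dice claim above can be checked by brute enumeration of the 36 equally likely outcomes; the short sketch below (variable names are illustrative) counts how many ways each sum can arise:

```python
from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two dice
# and count how often each sum occurs.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# A sum of 7 arises in 6 ways, a sum of 4 in only 3, so over
# repeated rolls 7 tends to occur twice as often as 4, even though
# each individual roll is unpredictable.
assert counts[7] == 6 and counts[4] == 3
```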
The fields of mathematics, probability, and statistics use formal definitions of randomness, typically assuming that there is some ‘objective’ probability distribution. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness.
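A standard example of a random process is a symmetric random walk: each step is a random variable taking +1 or -1 with equal probability, individual paths are unpredictable, but the distribution of positions is well described by probability theory. A minimal sketch (the function `random_walk` is an illustrative name, not a standard library API):

```python
import random

# A symmetric random walk: a sequence of random variables where
# each step is +1 or -1 with equal probability. Any individual
# path is unpredictable, but after n steps the position has mean 0
# and variance n.
def random_walk(n, seed=None):
    rng = random.Random(seed)  # seedable for reproducibility
    position = 0
    path = []
    for _ in range(n):
        position += rng.choice([-1, 1])
        path.append(position)
    return path
```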
Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input (such as from random number generators or pseudorandom number generators), are important techniques in science, particularly in the field of computational science. By analogy, quasi-Monte Carlo methods use quasi-random number generators.
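As a toy instance of a Monte Carlo method, one can estimate pi by sampling points uniformly in the unit square and counting the fraction that land inside the quarter circle. The sketch below (the function `estimate_pi` is illustrative, and the fixed seed is only for reproducibility) shows the basic pattern of random input driving a numerical estimate:

```python
import random

# Monte Carlo estimate of pi: sample n points uniformly in the
# unit square; the fraction inside the quarter circle of radius 1
# approaches pi/4 as n grows.
def estimate_pi(n, seed=0):
    rng = random.Random(seed)
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(n)
    )
    return 4 * inside / n
```

The accuracy improves only slowly (the error shrinks roughly as 1/sqrt(n)), which is one motivation for the quasi-Monte Carlo methods mentioned above.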
Random selection, when narrowly associated with a simple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random. According to Ramsey theory, pure randomness (in the sense of there being no discernible pattern) is impossible, especially for large structures. Mathematician Theodore Motzkin suggested that “while disorder is more probable in general, complete disorder is impossible”. Misunderstanding this can lead to numerous conspiracy theories. Cristian S. Calude stated that “given the impossibility of true randomness, the effort is directed towards studying degrees of randomness”. It can be proven that there is an infinite hierarchy (in terms of quality or strength) of forms of randomness.
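The marble example can be sketched directly (the fixed seed and variable names are illustrative): a single draw picks red with probability 1/10, a sample of 10 need not contain exactly 1 red, and yet the long-run frequency of red settles near 1/10:

```python
import random

# The bowl from the text: 10 red and 90 blue marbles.
bowl = ["red"] * 10 + ["blue"] * 90

rng = random.Random(0)            # seeded only for reproducibility
sample = rng.sample(bowl, 10)     # simple random sample, no replacement
reds = sample.count("red")        # expected value 1, but 0 or 2+ are common

# Over many repeated single draws, the frequency of red approaches
# 1/10 even though no individual draw is predictable.
draws = [rng.choice(bowl) for _ in range(100_000)]
freq = draws.count("red") / len(draws)
```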