English
Noun
probabilities (plural)
- Plural of probability
Probability is the likelihood or chance that
something is the case or will happen.
Probability
theory is used extensively in areas such as
statistics,
mathematics,
science and
philosophy to draw
conclusions about the likelihood of potential events and the
underlying mechanics of complex systems.
Interpretations
The word probability does not have a single, consistent
definition. Rather, there are two broad categories of
probability interpretations:
- Frequentists talk about probabilities only when dealing with well-defined random experiments: the relative frequency of an outcome's occurrence, when the experiment is repeated, is a measure of the probability of that random event (see the sketch after this list).
- Bayesians, by contrast, assign probabilities to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility.
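The frequentist reading can be illustrated with a short simulation (a minimal Python sketch added for illustration, not part of the original article): the relative frequency of heads over repeated flips of a fair coin settles toward 1/2.

```python
import random

# Estimate P(heads) for a fair coin by relative frequency.
# As the number of trials grows, the estimate tends toward 0.5.
def relative_frequency(trials: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```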
History
The scientific study of probability is a modern
development.
Gambling shows
that there has been an interest in quantifying the ideas of
probability for millennia, but exact mathematical descriptions of
such problems arose only much later.
According to Richard Jeffrey, "Before the middle
of the seventeenth century, the term 'probable' (Latin probabilis)
meant approvable, and was applied in that sense, univocally, to
opinion and to action. A probable action or opinion was one such as
sensible people would undertake or hold, in the
circumstances."
Aside from some elementary considerations made by
Girolamo
Cardano in the 16th century, the doctrine of probabilities
dates to the correspondence of
Pierre de
Fermat and
Blaise
Pascal (1654).
Christiaan
Huygens (1657) gave the earliest known scientific treatment of
the subject.
Jakob
Bernoulli's
Ars
Conjectandi (posthumous, 1713) and
Abraham
de Moivre's
Doctrine
of Chances (1718) treated the subject as a branch of
mathematics. See
Ian Hacking's
The Emergence of Probability for a history of the early development
of the very concept of mathematical probability.
The theory of errors may be traced back to
Roger
Cotes's Opera Miscellanea (posthumous, 1722), but a memoir
prepared by
Thomas
Simpson in 1755 (printed 1756) first applied the theory to the
discussion of errors of observation. The reprint (1757) of this
memoir lays down the axioms that positive and negative errors are
equally probable, and that there are certain assignable limits
within which all errors may be supposed to fall; continuous errors
are discussed and a probability curve is given.
Pierre-Simon
Laplace (1774) made the first attempt to deduce a rule for the
combination of observations from the principles of the theory of
probabilities. He represented the law of probability of errors by a
curve y = \phi(x), x being any error and y its probability, and
laid down three properties of this curve:
- it is symmetric about the y-axis;
- the x-axis is an asymptote, the probability of
the error \infty being 0;
- the area enclosed is 1, it being certain that an error
exists.
He also gave (1781) a formula for the law of facility
of error (a term due to Lagrange, 1774), but one which led to
unmanageable equations.
Daniel
Bernoulli (1778) introduced the principle of the maximum
product of the probabilities of a system of concurrent
errors.
The
method
of least squares is due to
Adrien-Marie
Legendre (1805), who introduced it in his Nouvelles méthodes
pour la détermination des orbites des comètes (New Methods for
Determining the Orbits of Comets). In ignorance of Legendre's
contribution, an Irish-American writer,
Robert
Adrain, editor of "The Analyst" (1808), first deduced the law
of facility of error,
- \phi(x) = ce^{-h^2x^2},
h being a constant depending on precision of observation, and c a
scale factor ensuring that the area under the curve equals 1. He
gave two proofs, the second being essentially
the same as
John
Herschel's (1850).
Gauss
gave the first proof which seems to have been known in Europe (the
third after Adrain's) in 1809. Further proofs were given by Laplace
(1810, 1812), Gauss (1823),
James Ivory (1825, 1826), Hagen (1837),
Friedrich
Bessel (1838),
W. F.
Donkin (1844, 1856), and
Morgan
Crofton (1870). Other contributors were Ellis (1844),
De
Morgan (1864),
Glaisher (1872), and
Giovanni
Schiaparelli (1875). Peters's (1856) formula for r, the
probable error of a single observation, is well known.
In the
nineteenth
century authors on the general theory included
Laplace,
Sylvestre
Lacroix (1816), Littrow (1833),
Adolphe
Quetelet (1853),
Richard
Dedekind (1860), Helmert (1872),
Hermann
Laurent (1873), Liagre, Didion, and
Karl
Pearson.
Augustus
De Morgan and
George Boole
improved the exposition of the theory.
On the geometric side (see
integral
geometry) contributors to
The
Educational Times were influential (Miller, Crofton, McColl,
Wolstenholme, Watson, and Artemas Martin).
Mathematical treatment
In mathematics, the probability of an
event
A is represented by a real number in the range from 0 to 1 and
written as P(A), p(A) or Pr(A). An impossible event has a
probability of 0, and a certain event has a probability of 1.
However, the converses are not always true: probability 0 events
are not always impossible, nor probability 1 events certain. The
rather subtle distinction between "certain" and "probability 1" is
treated at greater length in the article on "
almost
surely".
The opposite or complement of an event A is the
event [not A] (that is, the event of A not occurring); its
probability is given by P(\mbox{not }A) = 1 - P(A). As an example,
the chance of not rolling a six on a six-sided die is
1 - \tfrac{1}{6} = \tfrac{5}{6}. See
Complementary
event for a more complete treatment.
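A minimal numeric check of the complement rule, using the die example above (an illustrative sketch, not from the original article):

```python
from fractions import Fraction

# Complement rule: P(not A) = 1 - P(A).
# The die example from the text: the chance of not rolling a six.
p_six = Fraction(1, 6)
p_not_six = 1 - p_six
print(p_not_six)  # 5/6
```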
If two events, A and B, are
independent
then the
joint
probability is
- P(A \mbox{ and } B) = P(A \cap B) = P(A)P(B),
for example, if two
coins are flipped the chance of both being heads is
\tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{4}.
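The multiplication rule can be verified by brute-force enumeration of the sample space; the following sketch (illustrative, not from the original article) checks the two-coin example:

```python
from fractions import Fraction
from itertools import product

# Joint probability of independent events: P(A and B) = P(A) * P(B).
# Checked by enumerating the four equally likely outcomes of two flips.
outcomes = list(product("HT", repeat=2))
p_both_heads = Fraction(sum(o == ("H", "H") for o in outcomes), len(outcomes))
assert p_both_heads == Fraction(1, 2) * Fraction(1, 2)
print(p_both_heads)  # 1/4
```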
If two events are
mutually
exclusive then the probability of either occurring is
- P(A \mbox{ or } B) = P(A \cup B) = P(A) + P(B).
For example, the
chance of rolling a 1 or 2 on a six-sided die is
P(1 \mbox{ or } 2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}.
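A similar enumeration (an illustrative sketch, not from the original article) confirms the addition rule for the die example:

```python
from fractions import Fraction

# Addition rule for mutually exclusive events:
# P(A or B) = P(A) + P(B), since A and B cannot both occur.
die = range(1, 7)
p_one_or_two = Fraction(sum(face in (1, 2) for face in die), 6)
assert p_one_or_two == Fraction(1, 6) + Fraction(1, 6)
print(p_one_or_two)  # 1/3
```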
If the events are not mutually exclusive then
- P(A \mbox{ or } B) = P(A) + P(B) - P(A \mbox{ and } B).
For example, when drawing a single card at
random from a regular deck of cards, the chance of getting a heart
or a face card (J, Q, K) (or one that is both) is
\tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26},
because, of the 52 cards in a deck, 13 are hearts, 12 are face
cards, and 3 are both: the 3 cards that are both are included in
each of the "13 hearts" and the "12 face cards" but should only be
counted once.
Conditional
probability is the
probability of some event A,
given the occurrence of some other event B. Conditional probability
is written P(A|B), and is read "the probability of A, given B". It
is defined by
- P(A \mid B) = \frac{P(A \cap B)}{P(B)}.
If P(B)=0 then P(A \mid B) is
undefined.
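As a worked sketch of the definition (the die example here is an assumption chosen for illustration, not from the article), a conditional probability can be computed by counting within the conditioning event:

```python
from fractions import Fraction

# Conditional probability: P(A | B) = P(A and B) / P(B), for P(B) > 0.
# Example: for one roll of a fair die, the probability the face is
# even (A) given that it exceeds 3 (B).
die = range(1, 7)
B = {face for face in die if face > 3}           # {4, 5, 6}
A_and_B = {face for face in B if face % 2 == 0}  # {4, 6}

p_A_given_B = Fraction(len(A_and_B), 6) / Fraction(len(B), 6)
print(p_A_given_B)  # 2/3
```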
Theory
Like other
theories, the
theory of
probability is a representation of probabilistic concepts in
formal terms—that is, in terms that can be considered separately
from their meaning. These formal terms are manipulated by the rules
of mathematics and logic, and any results are then interpreted or
translated back into the problem domain.
There have been at least two successful attempts
to formalize probability, namely the
Kolmogorov
formulation and the
Cox
formulation. In Kolmogorov's formulation (see
probability
space),
sets are
interpreted as
events
and probability itself as a
measure
on a class of sets. In
Cox's
theorem, probability is taken as a primitive (that is, not
further analyzed) and the emphasis is on constructing a consistent
assignment of probability values to propositions. In both cases,
the
laws of
probability are the same, except for technical details.
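A toy version of Kolmogorov's setup can be spelled out concretely; in the sketch below (illustrative, with a uniform die measure assumed), events are subsets of a finite sample space and the axioms are spot-checked by enumeration:

```python
from fractions import Fraction
from itertools import combinations

# A minimal finite probability space in the spirit of the Kolmogorov
# formulation: a sample space, events as subsets, and a measure P that
# is non-negative, assigns 1 to the whole space, and is additive on
# disjoint events.
omega = frozenset({1, 2, 3, 4, 5, 6})  # one roll of a fair die

def P(event: frozenset) -> Fraction:
    return Fraction(len(event), len(omega))  # uniform measure (assumed)

# Enumerate every event (every subset of omega) and check the axioms.
events = [frozenset(s) for r in range(len(omega) + 1)
          for s in combinations(omega, r)]
assert P(omega) == 1
assert all(P(e) >= 0 for e in events)
assert all(P(a | b) == P(a) + P(b)
           for a in events for b in events if not a & b)
```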
There are other methods for quantifying
uncertainty, such as the
Dempster-Shafer
theory and
possibility
theory, but those are essentially different and not compatible
with the laws of probability as they are usually understood.
Applications
Two major applications of probability theory
in everyday life are in
risk assessment and in trade on
commodity
markets. Governments typically apply probabilistic methods in
environmental
regulation where it is called "
pathway
analysis", often
measuring
well-being using methods that are stochastic in nature, and
choosing projects to undertake based on statistical analyses of
their probable effect on the population as a whole. It is not
correct to say that
statistics are involved in
the modelling itself, as typically the assessments of
risk are one-time and thus require
more fundamental probability models, e.g. "the probability of
another 9/11". A
law
of small numbers tends to apply to all such choices and
perception of the effect of such choices, which makes probability
measures a political matter.
A good example is the effect of the perceived
probability of any widespread Middle East conflict on oil prices -
which have ripple effects in the economy as a whole. An assessment
by a commodity trader that a war is more likely vs. less likely
sends prices up or down, and signals other traders of that opinion.
Accordingly, the probabilities are not assessed independently nor
necessarily very rationally. The theory of
behavioral
finance emerged to describe the effect of such
groupthink on pricing, on
policy, and on peace and conflict.
It can reasonably be said that the discovery of
rigorous methods to assess and combine probability assessments has
had a profound effect on modern society. Accordingly, it may be of
some importance to most citizens to understand how odds and
probability assessments are made, and how they contribute to
reputations and to decisions, especially in a
democracy.
Another significant application of probability
theory in everyday life is
reliability. Many consumer products, such as
automobiles and consumer
electronics, utilize
reliability
theory in the design of the product in order to reduce the
probability of failure. The probability of failure is also closely
associated with the product's
warranty.
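As a rough illustration of how reliability theory enters such design work (the component names and reliability numbers below are hypothetical assumptions, not from the article), a series system survives only if every component survives:

```python
# If a product fails when any one of its independent components fails,
# then P(product survives) is the product of component survival
# probabilities. Names and numbers here are illustrative only.
component_reliability = {"motor": 0.99, "controller": 0.995, "sensor": 0.98}

p_survives = 1.0
for r in component_reliability.values():
    p_survives *= r  # independence of component failures is assumed

print(f"P(no failure) = {p_survives:.4f}")  # ~0.9653
print(f"P(failure)    = {1 - p_survives:.4f}")
```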
Relation to randomness
In a
deterministic universe,
based on
Newtonian
concepts, there is no probability if all conditions are known. In
the case of a roulette wheel, if the force of the hand and the
period of that force were known, then the number on which the ball
will stop would be a certainty. Of course, this also assumes
knowledge of inertia and friction of the wheel, weight, smoothness
and roundness of the ball, variations in hand speed during the
turning and so forth. A probabilistic description can thus be more
useful than Newtonian mechanics for analysing the pattern of
outcomes of repeated rolls of a roulette wheel. Physicists face the
same situation in
kinetic
theory of gases, where the system, while deterministic in
principle, is so complex (with the number of molecules typically on
the order of the
Avogadro
constant, 6\cdot 10^{23}) that only a statistical description of its
properties is feasible.
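The roulette point can be made concrete with a simulation (an illustrative sketch; the uniform model of a 37-pocket European wheel is an assumption):

```python
import random
from collections import Counter

# Even if each spin were deterministic in principle, the pattern of
# repeated outcomes is usefully described statistically. Modeling the
# wheel as a uniform draw over pockets 0-36, each pocket's observed
# relative frequency should approach 1/37.
rng = random.Random(42)
spins = Counter(rng.randrange(37) for _ in range(100_000))

print(f"expected frequency: {1 / 37:.5f}")
print(f"observed for 17:    {spins[17] / 100_000:.5f}")
```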
A revolutionary discovery of 20th century physics
was the random character of all physical processes that occur at
microscopic scales and are governed by the laws of
quantum
mechanics. The
wave
function itself evolves deterministically as long as no
observation is made, but, according to the prevailing
Copenhagen
interpretation, the randomness caused by the
wave
function collapsing when an observation is made is
fundamental. This means that
probability
theory is required to describe nature. Others never came to
terms with the loss of determinism.
Albert
Einstein famously
remarked in a letter to
Max Born:
Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt. (I am
convinced that God does not play dice). Although alternative
viewpoints exist, such as that of
quantum
decoherence being the cause of an apparent random collapse, at
present there is a firm consensus among physicists that
probability theory is necessary to describe quantum
phenomena.
Quotations
- Damon
Runyon, "It may be that the race is not always to the swift,
nor the battle to the strong - but that is the way to bet."
- Pierre-Simon
Laplace "It is remarkable that a science which began with the
consideration of games of chance should have become the most
important object of human knowledge." Théorie Analytique des
Probabilités, 1812.
- Richard
von Mises "The unlimited extension of the validity of the exact
sciences was a characteristic feature of the exaggerated
rationalism of the eighteenth century" (in reference to Laplace).
Probability, Statistics, and Truth, p 9. Dover edition, 1981
(republication of second English edition, 1957).