Inductive reasoning is a method of reasoning in which the premises are
viewed as supplying some evidence, but not full assurance, of the truth of the
conclusion.[1] It is also described as a method where one's experiences and
observations, including what is learned from others, are synthesized to come up
with a general truth.[2] Many dictionaries define inductive reasoning as the
derivation of general principles from specific observations (arguing from
specific to general), although there are many inductive arguments that do not
have that form.[3] Inductive reasoning is distinct from deductive reasoning. If
the premises are correct, the conclusion of a deductive argument is certain; in
contrast, the truth of the conclusion of an inductive argument is probable,
based upon the evidence given.[4]
Types:
Generalization:
A generalization (more accurately, an inductive generalization) proceeds from a
premise about a sample to a conclusion about the population.[5] The observation
obtained from this sample is projected onto the broader population.[5] The
proportion Q of the sample has attribute A. Therefore, the proportion Q of the
population has attribute A.
For example, say there are 20 balls, either black or white, in an urn.
To estimate their respective numbers, you draw a sample of four balls and find
that three are black and one is white. An inductive generalization would be
that there are 15 black and 5 white balls in the urn. How much the premises
support the conclusion depends upon (1) the number in the sample group, (2) the
number in the population, and (3) the degree to which the sample represents the
population (which may be achieved by taking a random sample). The hasty
generalization and the biased sample are generalization fallacies.
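In code, the projection is simple arithmetic. A minimal Python sketch (the function name is ours; the numbers are those of the urn example):

    # Project the proportions observed in a sample onto a finite population.
    def project_sample(population_size, sample_counts):
        n = sum(sample_counts.values())  # total sample size
        return {attribute: population_size * count / n
                for attribute, count in sample_counts.items()}

    # Sample of four balls, three black and one white, from an urn of 20:
    print(project_sample(20, {"black": 3, "white": 1}))
    # {'black': 15.0, 'white': 5.0}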
Statistical generalization:
A statistical generalization is a type of inductive argument in which a
conclusion about a population is inferred using a statistically representative
sample.
For example: Of a sizeable random sample of voters surveyed, 66% support
Measure Z. Therefore, approximately 66% of voters support Measure Z. This
inference is highly reliable within a well-defined margin of error, provided
the sample is large and random, and it is readily quantifiable. Compare the preceding
argument with the following. "Six of the ten people in my book club are
Libertarians. Therefore, about 60% of people are Libertarians." The
argument is weak because the sample is non-random and the sample size is very
small. Statistical generalizations are also called statistical projections[6]
and sample projections.[7]
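The "well-defined margin of error" can be made concrete with the normal-approximation interval for a proportion. A Python sketch, assuming for illustration a random sample of 1,000 voters and a 95% confidence level:

    import math

    # Normal-approximation margin of error for a sample proportion p_hat.
    def margin_of_error(p_hat, n, z=1.96):  # z = 1.96 for a 95% level
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    print(round(margin_of_error(0.66, 1000), 3))  # 0.029 -> 66% +/- 3 points
    print(round(margin_of_error(0.60, 10), 3))    # 0.304 -> 60% +/- 30 points

Note that the formula quantifies sampling error only; it cannot repair the book club's non-random sample.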
Anecdotal generalization:
An anecdotal generalization is a type of inductive argument in which a
conclusion about a population is inferred using a non-statistical sample.[8] In
other words, the generalization is based on anecdotal evidence. For example: So
far this year, his son's Little League team has won 6 of 10 games. Therefore,
by season's end, they will have won about 60% of the games. This inference is
less reliable (and thus more likely to commit the fallacy of hasty
generalization) than a statistical generalization, first, because the sample
events are non-random, and second because it is not reducible to mathematical
expression. Statistically speaking, there is simply no way to know, measure, or
calculate the circumstances affecting performance that will obtain in the
future. On a philosophical level, the argument relies on the presupposition
that the operation of future events will mirror the past. In other words, it
takes for granted a uniformity of nature, an unproven principle that cannot be
derived from the empirical data itself. Arguments that tacitly presuppose this
uniformity are sometimes called Humean after the philosopher who was first to
subject them to philosophical scrutiny.[9]
Prediction:
An inductive prediction draws a conclusion about a future instance from a past
and current sample. Like an inductive generalization, an inductive prediction
typically relies on a data set consisting of specific instances of a
phenomenon. But rather than conclude with a general statement, the inductive
prediction concludes with a specific statement about the probability that the
next instance will (or will not) have an attribute shared (or not shared) by
the previous and current instances.[10] Proportion Q of observed members of
group G have had attribute A. Therefore, there is a probability corresponding
to Q that other members of group G will have attribute A when next observed.
Inference regarding past events:
An inference regarding past events is similar to a prediction in that one
draws a conclusion about a past instance from the current and past sample.
Like an inductive generalization, an inductive inference regarding past events
typically relies on a data set consisting of specific instances of a
phenomenon. But rather than conclude with a general statement, the inference
regarding past events concludes with a specific statement about the
probability that a past instance had (or did not have) an attribute shared (or
not shared) by the current and previous instances.[11] Proportion Q of
observed members of group G has attribute A. Therefore, there is a probability
corresponding to Q that other members of group G had attribute A during a past
observation.
Inference regarding current events:
An inference regarding current events is similar to an inference regarding
past events in that one draws a conclusion about a current instance from the
current and past sample. Like an inductive generalization, an inductive
inference regarding current events typically relies on a data set consisting
of specific instances of a phenomenon. But rather than conclude with a general
statement, the inference regarding current events concludes with a specific
statement about the probability that a current instance has (or does not have)
an attribute shared (or not shared) by the previous and current instances.[11]
Proportion Q of observed members of group G has attribute A. Therefore, there
is a probability corresponding to Q that other members of group G have
attribute A during the current observation.
Statistical syllogism:
Main article: Statistical syllogism
A statistical syllogism proceeds from a generalization about a group to a
conclusion about an individual. Proportion Q of the known instances of
population P has attribute A. Individual I is another member of P. Therefore,
there is a probability corresponding to Q that I has A. For example: 90% of
graduates from Excelsior Preparatory school go on to University. Bob is a
graduate of Excelsior Preparatory school. Therefore, Bob will go on to
University. This is a statistical syllogism.[12] Even though one cannot be sure
Bob will attend university, one can be fully assured of the exact probability
for this outcome (given no further information). Arguably the argument is too
strong and might be accused of "cheating". After all, the probability
is given in the premise. Typically, inductive reasoning seeks to formulate a
probability. Two dicto simpliciter fallacies can occur in statistical
syllogisms: "accident" and "converse accident". Argument
Argument from analogy:
Main article: Argument from analogy
The process of analogical
inference involves noting the shared properties of two or more things and from
this basis inferring that they also share some further property:[13] P and Q
are similar in respect to properties a, b, and c. Object P has been observed to
have further property x. Therefore, Q probably has property x also. Analogical
reasoning is very frequent in common sense, science, philosophy, law, and the
humanities, but sometimes it is accepted only as an auxiliary method. A refined
approach is case-based reasoning.[14] Mineral A and Mineral B are both igneous
rocks often containing veins of quartz and most commonly found in South America
in areas of ancient volcanic activity. Mineral A is also a soft stone suitable
for carving into jewelry. Therefore, mineral B is probably a soft stone
suitable for carving into jewelry. This is analogical induction, according to
which things alike in certain ways are more prone to be alike in other ways.
This form of induction was explored in detail by philosopher John Stuart Mill
in his System of Logic, where he states, "[t]here can be no doubt that
every resemblance [not known to be irrelevant] affords some degree of
probability, beyond what would otherwise exist, in favor of the
conclusion."[15] Some thinkers contend that analogical induction is a
subcategory of inductive generalization because it assumes a pre-established
uniformity governing events.[citation needed] Analogical induction requires an
auxiliary examination of the relevancy of the characteristics cited as common
to the pair. In the preceding example, if a premise were added stating that
both stones were mentioned in the records of early Spanish explorers, this
common attribute is extraneous to the stones and does not contribute to their
probable affinity. A pitfall of analogy is that features can be cherry-picked:
while objects may show striking similarities, two things juxtaposed may
respectively possess other characteristics, not identified in the analogy,
that are sharply dissimilar. Thus, analogy can mislead if not all
relevant comparisons are made.
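As a toy illustration only, and not any standard algorithm, one might score an analogy by the overlap of the cited properties; doing so also exposes the relevance pitfall just described:

    # Toy analogy score: fraction of cited properties that are shared
    # (Jaccard similarity). Relevance is NOT captured by the arithmetic.
    def analogy_score(props_a, props_b):
        return len(props_a & props_b) / len(props_a | props_b)

    mineral_a = {"igneous", "quartz veins", "South America", "volcanic areas"}
    mineral_b = {"igneous", "quartz veins", "South America", "volcanic areas"}
    print(analogy_score(mineral_a, mineral_b))  # 1.0: strong, but not certain

    # Adding an extraneous shared property (e.g., "noted by Spanish explorers")
    # would raise the score without adding real support: relevance must be
    # judged outside the computation.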
Causal inference:
Main article: Causal reasoning
A causal inference draws a conclusion about a causal connection based on the
conditions of the occurrence of an effect. Premises about the correlation of
two things can indicate a causal relationship between them, but additional
factors must be confirmed to establish the exact form of the causal
relationship.
Methods:
The two principal methods used to reach inductive
conclusions are enumerative induction and eliminative induction.[16][17]
Enumerative induction:
Enumerative induction is an inductive method in which a
conclusion is constructed based upon the number of instances that support it.
The more supporting instances, the stronger the conclusion.[16][17] The most
basic form of enumerative induction reasons from particular instances to all
instances, and is thus an unrestricted generalization.[18] If one observes 100
swans, and all 100 are white, one might infer a universal categorical
proposition of the form All swans are white. As this reasoning form's premises,
even if true, do not entail the conclusion's truth, this is a form of inductive
inference. The conclusion might be true, and might be thought probably true,
yet it can be false. Questions regarding the justification and form of
enumerative inductions have been central in philosophy of science, as
enumerative induction has a pivotal role in the traditional model of the
scientific method. All life forms so far discovered are composed of cells.
Therefore, all life forms are composed of cells. This is enumerative induction,
also known as simple induction or simple predictive induction. It is a
subcategory of inductive generalization. In everyday practice, this is perhaps
the most common form of induction. For the preceding argument, the conclusion
is tempting but makes a prediction well in excess of the evidence. First, it
assumes that life forms observed until now can tell us how future cases will
be: an appeal to uniformity. Second, the concluding "All" is a bold assertion. A
single contrary instance foils the argument. And last, to quantify the level of
probability in any mathematical form is problematic.[19] By what standard do we
measure our Earthly sample of known life against all (possible) life? For
suppose we do discover some new organism, such as a microorganism
floating in the mesosphere or on an asteroid, and it is cellular. Does the
addition of this corroborating evidence oblige us to raise our probability
assessment for the subject proposition? It is generally deemed reasonable to
answer this question "yes," and for a good many this "yes"
is not only reasonable but incontrovertible. So then just how much should this
new data change our probability assessment? Here, consensus melts away, and in
its place arises a question about whether we can talk of probability coherently
at all without numerical quantification. All life forms so far discovered have
been composed of cells. Therefore, the next life form discovered will be
composed of cells. This is enumerative induction in its weak form. It truncates
"all" to a mere single instance and, by making a far weaker claim,
considerably strengthens the probability of its conclusion. Otherwise, it has
the same shortcomings as the strong form: its sample population is non-random,
and quantification methods are elusive.
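The text notes that quantifying such probabilities is problematic. One classical, and much-debated, proposal is Laplace's rule of succession, sketched here in Python:

    from fractions import Fraction

    # Laplace's rule of succession: after s "successes" in n observations,
    # estimate the probability of success on the next observation.
    def rule_of_succession(s, n):
        return Fraction(s + 1, n + 2)

    # 100 swans observed, all white: probability the next swan is white.
    print(rule_of_succession(100, 100))  # 101/102
    # One more corroborating instance nudges the estimate upward:
    print(rule_of_succession(101, 101))  # 102/103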
Eliminative induction:
Eliminative induction, also called variative induction, is an inductive method
in which a conclusion is constructed based on the variety of instances that
support it. Unlike enumerative induction, eliminative induction reasons based
on the various kinds of instances that support a conclusion, rather than the
number of instances that support it. As the variety of instances increases,
more possible conclusions based on those instances can be identified as
incompatible and eliminated. This, in turn, increases the strength of any
conclusion that remains consistent with the various instances. This type of
induction may use different methodologies such as quasi-experimentation, which
tests and, where possible, eliminates rival hypotheses.[20] Different evidential
tests may also be employed to eliminate possibilities that are entertained.[21]
Eliminative induction is crucial to the scientific method and is used to
eliminate hypotheses that are inconsistent with observations and
experiments.[16][17] It focuses on possible causes instead of observed actual
instances of causal connections.[22]
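A minimal Python sketch of the eliminative idea, with hypotheses and observations invented for illustration: each varied instance eliminates the rivals it contradicts, strengthening whatever survives.

    # Candidate hypotheses as predicates over observations (all illustrative).
    hypotheses = {
        "all observed values are even":           lambda n: n % 2 == 0,
        "all observed values are multiples of 4": lambda n: n % 4 == 0,
        "all observed values are positive":       lambda n: n > 0,
    }

    for observation in [2, 6, 10]:  # varied instances, not mere repetition
        hypotheses = {name: h for name, h in hypotheses.items()
                      if h(observation)}

    print(sorted(hypotheses))  # the "multiples of 4" rival has been eliminated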
History:
Ancient philosophy:
For a move from particular to universal, Aristotle in the 300s BCE used the
Greek word epagogé, which Cicero translated into the Latin word
inductio.[23]
Pyrrhonism:
The ancient Pyrrhonists were the first Western philosophers to point out the
problem of induction: that induction cannot justify the acceptance of universal
statements as true.[23]
Ancient medicine:
The Empiric school of ancient Greek medicine employed epilogism as a method of
inference. 'Epilogism' is a theory-free method that looks at history through
the accumulation of facts without major generalization and with consideration
of the consequences of making causal claims.[24]
Epilogism is an inference which moves entirely within the domain of visible and
evident things; it tries not to invoke unobservables. The Dogmatic school of
ancient Greek medicine employed analogismos as a method of inference.[25] This
method used analogy to reason from what was observed to unobservable forces.
Early modern philosophy:
In 1620, early modern philosopher Francis Bacon
repudiated the value of mere experience and enumerative induction alone. His
method of inductivism required that minute and many-varied observations
uncovering the natural world's structure and causal relations be
coupled with enumerative induction in order to have knowledge beyond the
present scope of experience. Inductivism therefore required enumerative
induction as a component.
David Hume:
The empiricist David Hume's 1740 stance
found enumerative induction to have no rational, let alone logical, basis;
instead, induction was a custom of the mind and an everyday requirement to
live. While observations, such as the motion of the sun, could be coupled with
the principle of the uniformity of nature to produce conclusions that seemed to
be certain, the problem of induction arose from the fact that the uniformity of
nature was not a logically valid principle. Hume was skeptical of the
application of enumerative induction and reason to reach certainty about
unobservables and especially the inference of causality from the fact that
modifying an aspect of a relationship prevents or produces a particular
outcome.
Immanuel Kant:
Awakened from "dogmatic slumber" by a German translation of Hume's
work, Kant sought to explain the possibility of metaphysics. In 1781, Kant's
Critique of Pure Reason introduced rationalism as a path toward knowledge
distinct from empiricism. Kant sorted statements into two types. Analytic
statements are true by virtue of the arrangement of their terms and meanings,
thus analytic statements are tautologies, merely logical truths, true by
necessity. Whereas synthetic statements hold meanings to refer to states of
facts, contingencies. Finding it impossible to know objects as they truly are
in themselves, however, Kant concluded that the philosopher's task should not
be to try to peer behind the veil of appearance to view the noumena, but simply
that of handling phenomena. Reasoning that the mind must contain its own
categories for organizing sense data, making experience of space and time
possible, Kant concluded that the uniformity of nature was an a priori
truth.[26] A class of synthetic statements that was not contingent but true by
necessity was then synthetic a priori. Kant thus saved both metaphysics and
Newton's law of universal gravitation, but as a consequence discarded
scientific realism and developed transcendental idealism. Kant's transcendental
idealism gave birth to the movement of German idealism. Hegel's absolute
idealism subsequently flourished across continental Europe.
Late modern philosophy:
Positivism, developed by Saint-Simon and promulgated in the 1830s by his former
student Comte, was the first late modern philosophy of science. In the
aftermath of the French Revolution, fearing society's ruin, Comte opposed
metaphysics. Human knowledge had evolved from religion to metaphysics to
science, said Comte, which had flowed from mathematics to astronomy to physics
to chemistry to biology to sociology, in that order, describing
increasingly intricate domains. All of society's knowledge had become
scientific, with questions of theology and of metaphysics being unanswerable.
Comte found enumerative induction reliable as a consequence of its grounding in
available experience. He asserted the use of science, rather than metaphysical
truth, as the correct method for the improvement of human society. According to
Comte, scientific method frames predictions, confirms them, and states
laws, positive statements, irrefutable by theology or by metaphysics.
Regarding experience as justifying enumerative induction by demonstrating the
uniformity of nature,[26] the British philosopher John Stuart Mill welcomed
Comte's positivism, but thought scientific laws susceptible to recall or
revision, and Mill also withheld support from Comte's Religion of Humanity. Comte was
confident in treating scientific law as an irrefutable foundation for all
knowledge, and believed that churches, honouring eminent scientists, ought to
focus public mindset on altruism, a term Comte coined, to apply science
for humankind's social welfare via sociology, Comte's leading science. During
the 1830s and 1840s, while Comte and Mill were the leading philosophers of
science, William Whewell found enumerative induction not nearly as convincing,
and, despite the dominance of inductivism, formulated
"superinduction".[27] Whewell argued that "the peculiar import
of the term Induction" should be recognised: "there is some
Conception superinduced upon the facts", that is, "the Invention of a
new Conception in every inductive inference". The creation of Conceptions
is easily overlooked and prior to Whewell was rarely recognised.[27] Whewell
explained: "Although we bind together facts by superinducing upon them a
new Conception, this Conception, once introduced and applied, is looked upon as
inseparably connected with the facts, and necessarily implied in them. Having
once had the phenomena bound together in their minds in virtue of the
Conception, men can no longer easily restore them back to the detached and
incoherent condition in which they were before they were thus
combined."[27] These "superinduced" explanations may well be
flawed, but their accuracy is suggested when they exhibit what Whewell termed
consilience, that is, simultaneously predicting the inductive
generalizations in multiple areas, a feat that, according to Whewell, can
establish their truth. Perhaps to accommodate the prevailing view of science as
inductivist method, Whewell devoted several chapters to "methods of
induction" and sometimes used the phrase "logic of induction",
despite the fact that induction lacks rules and cannot be trained.[27]
In the 1870s, the originator of pragmatism, C. S. Peirce, performed vast
investigations that clarified the basis of deductive inference as a
mathematical proof (as, independently, did Gottlob Frege). Peirce recognized
induction but always insisted on a third type of inference that Peirce
variously termed abduction or retroduction or hypothesis or presumption.[28]
Later philosophers termed Peirce's abduction, etc., Inference to the Best
Explanation (IBE).[29]
Contemporary philosophy:
Bertrand Russell:
Having
highlighted Hume's problem of induction, John Maynard Keynes posed logical
probability as its answer, or as near a solution as he could arrive at.[30]
Bertrand Russell found Keynes's Treatise on Probability the best examination of
induction, and believed that if read with Jean Nicod's Le Problème logique de
l'induction as well as R. B. Braithwaite's review of Keynes's work in the October
1925 issue of Mind, that would cover "most of what is known about
induction", although the "subject is technical and difficult,
involving a good deal of mathematics".[31] Two decades later, Russell
proposed enumerative induction as an "independent logical
principle".[32][33] Russell found: "Hume's skepticism rests entirely
upon his rejection of the principle of induction. The principle of induction,
as applied to causation, says that, if A has been found very often accompanied
or followed by B, then it is probable that on the next occasion on which A is
observed, it will be accompanied or followed by B. If the principle is to be
adequate, a sufficient number of instances must make the probability not far
short of certainty. If this principle, or any other from which it can be
deduced, is true, then the causal inferences which Hume rejects are valid, not
indeed as giving certainty, but as giving a sufficient probability for
practical purposes. If this principle is not true, every attempt to arrive at
general scientific laws from particular observations is fallacious, and Hume's
skepticism is inescapable for an empiricist. The principle itself cannot, of
course, without circularity, be inferred from observed uniformities, since it
is required to justify any such inference. It must, therefore, be, or be
deduced from, an independent principle not based on experience. To this extent,
Hume has proved that pure empiricism is not a sufficient basis for science. But
if this one principle is admitted, everything else can proceed in accordance
with the theory that all our knowledge is based on experience. It must be
granted that this is a serious departure from pure empiricism, and that those
who are not empiricists may ask why, if one departure is allowed, others are
forbidden. These, however, are not questions directly raised by Hume's
arguments. What these arguments prove, and I do not think the proof can be
controverted, is that induction is an independent logical principle,
incapable of being inferred either from experience or from other logical
principles, and that without this principle, science is impossible."[33]
Gilbert Harman:
In a 1965 paper, Gilbert Harman explained that enumerative
induction is not an autonomous phenomenon, but is simply a disguised
consequence of Inference to the Best Explanation (IBE).[29] IBE is otherwise
synonymous with C. S. Peirce's abduction.[29] Many philosophers of science
espousing scientific realism have maintained that IBE is the way that
scientists develop approximately true scientific theories about nature.[34]
Comparison with deductive reasoning:
Argument terminology:
Inductive reasoning is a form of argument that, in contrast to deductive
reasoning, allows for the possibility that a conclusion can be false, even
if all of the premises are true.[35] This difference between deductive and
inductive reasoning is reflected in the terminology used to describe deductive
and inductive arguments. In deductive reasoning, an argument is
"valid" when, assuming the argument's premises are true, the
conclusion must be true. If the argument is valid and the premises are true,
then the argument is "sound". In contrast, in inductive reasoning, an
argument's premises can never guarantee that the conclusion must be true;
therefore, inductive arguments can never be valid or sound. Instead, an
argument is "strong" when, assuming the argument's premises are true,
the conclusion is probably true. If the argument is strong and the premises are
true, then the argument is "cogent".[36] Less formally, an inductive
argument may be called "probable", "plausible",
"likely", "reasonable", or "justified", but never
"certain" or "necessary". Logic affords no bridge from the
probable to the certain. The futility of attaining certainty through some
critical mass of probability can be illustrated with a coin-toss exercise.
Suppose someone tests whether a coin is either a fair one or two-headed. They
flip the coin ten times, and ten times it comes up heads. At this point, there
is a strong reason to believe it is two-headed. After all, the chance of ten
heads in a row is 0.000976: less than one in one thousand. Then, after 100
flips, every toss has come up heads. Now there is virtual certainty
that the coin is two-headed. Still, one can neither logically nor empirically
rule out that the next toss will produce tails. No matter how many times in a
row it comes up heads, this remains the case. If one programmed a machine to
flip a coin over and over continuously, at some point the result would be a
string of 100 heads. In the fullness of time, all combinations will appear. As
for the slim prospect of getting ten out of ten heads from a fair coin, the
outcome that made the coin appear biased, many may be surprised to learn
that any particular sequence of heads and tails is equally unlikely (e.g.,
H-H-T-T-H-T-H-H-H-T), and yet some such sequence occurs in every trial of ten
tosses. That means
all results for ten tosses have the same probability as getting ten out of ten
heads, which is 0.000976. If one records the heads-tails sequences, for
whatever result, that exact sequence had a chance of 0.000976.
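The arithmetic here is easy to verify. A Python sketch ((1/2)^10 is exactly 1/1024, the 0.000976 the text cites):

    import random

    # Chance of ten heads in a row from a fair coin: (1/2) ** 10 = 1/1024.
    print(0.5 ** 10)  # 0.0009765625, the text's 0.000976

    # Any *specific* ten-toss sequence is exactly as unlikely ...
    print(0.5 ** len("HHTTHTHHHT"))  # same 0.0009765625

    # ... yet every trial of ten tosses produces some such sequence.
    trial = "".join(random.choice("HT") for _ in range(10))
    print(trial, "had prior probability", 0.5 ** 10)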
An argument is deductive when the conclusion is necessary given the premises. That is, the
conclusion must be true if the premises are true. If a deductive conclusion
follows duly from its premises, then it is valid; otherwise, it is invalid
(that an argument is invalid is not to say it is false; it may have a true
conclusion, just not on account of the premises). An examination of the
following examples will show that the relationship between premises and
conclusion is such that the truth of the conclusion is already implicit in the
premises. Bachelors are unmarried because we say they are; we have defined them
so. Socrates is mortal because we have included him in a set of beings that are
mortal.
The conclusion for a valid deductive argument is already contained in the
premises since its truth is strictly a matter of logical relations. It cannot
say more than its premises.
Inductive premises, on the other hand, draw their substance from fact and
evidence, and the conclusion accordingly makes a factual claim or prediction.
Its reliability varies proportionally with the evidence. Induction wants to
reveal something new about the world. One could say that induction wants to say
more than is contained in the premises. To better see the difference between
inductive and deductive arguments, consider that it would not make sense to
say: "all rectangles so far examined have four right angles, so the next
one I see will have four right angles." This would treat logical relations
as something factual and discoverable, and thus variable and uncertain.
Likewise, speaking deductively we may permissibly say: "All unicorns can
fly; I have a unicorn named Charlie; Charlie can fly." This deductive
argument is valid because the logical relations hold; we are not interested in
their factual soundness.
Inductive reasoning is inherently uncertain. It only deals in the extent to
which, given the premises, the conclusion is credible according to some theory
of evidence. Examples include a many-valued logic, Dempster–Shafer theory,
or probability theory with rules for inference such as Bayes' rule. Unlike
deductive reasoning, it does not rely on universals holding over a closed
domain of discourse to draw conclusions, so it can be applicable even in cases
of epistemic uncertainty (technical issues with this may arise however; for
example, the second axiom of probability is a closed-world assumption).[37]
Another crucial difference between these two types of argument is that
deductive certainty is impossible in non-axiomatic systems such as reality,
leaving inductive reasoning as the primary route to (probabilistic) knowledge
of such systems.[38] Given that "if A is true then that would cause B, C,
and D to be true", an example of deduction would be "A is true
therefore we can deduce that B, C, and D are true". An example of
induction would be "B, C, and D are observed to be true therefore A might
be true". A is a reasonable explanation for B, C, and D being true. For
example: A large enough asteroid impact would create a very large crater and
cause a severe impact winter that could drive the non-avian dinosaurs to
extinction. We observe that there is a very large crater in the Gulf of Mexico
dating to very near the time of the extinction of the non-avian dinosaurs.
Therefore, it is possible that this impact could explain why the non-avian
dinosaurs became extinct. Note, however, that the asteroid explanation for the
mass extinction is not necessarily correct. Other events with the potential to
affect global climate also coincide with the extinction of the non-avian
dinosaurs, for example the release of volcanic gases (particularly sulfur
dioxide) during the formation of the Deccan Traps in India. Another example of
an inductive argument: All biological life forms that we know of depend on
liquid water to exist. Therefore, if we discover a new biological life form, it
will probably depend on liquid water to exist. This argument could have been
made every time a new biological life form was found, and would have been
correct every time; however, it is still possible that in the future a
biological life form not requiring liquid water could be discovered. As a
result, the argument may be stated less formally as: All biological life forms
that we know of depend on liquid water to exist. Therefore, all biological life
probably depends on liquid water to exist. A classical example of an incorrect
inductive argument was presented by John Vickers: All of the swans we have seen
are white. Therefore, we know that all swans are white. The correct conclusion
would be: we expect all swans to be white.
Succinctly put: deduction is about certainty/necessity; induction is about
probability.[12] Any single assertion will answer to one of these two criteria.
Another approach to the analysis of reasoning is that of modal logic, which
deals with the distinction between the necessary and the possible in a way not
concerned with probabilities among things deemed possible. The philosophical
definition of inductive reasoning is more nuanced than a simple progression
from particular/individual instances to broader generalizations. Rather, the
premises of an inductive logical argument indicate some degree of support
(inductive probability) for the conclusion but do not entail it; that is, they
suggest truth but do not ensure it. In this manner, there is the possibility of
moving from general statements to individual instances (for example,
statistical syllogisms).
Note that the definition of inductive reasoning described here differs from
mathematical induction, which, in fact, is a form of deductive reasoning.
Mathematical induction is used to provide strict proofs of the properties of
recursively defined sets.[39] The deductive nature of mathematical induction
derives from its basis in a non-finite number of cases, in contrast with the
finite number of cases involved in an enumerative induction procedure like
proof by exhaustion. Both mathematical induction and proof by exhaustion are
examples of complete induction. Complete induction is a masked type of
deductive reasoning.
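The deductive principle appealed to by mathematical induction can be stated as a single, deductively valid rule. In standard notation:

\[
\bigl(P(0) \land \forall n\,(P(n) \rightarrow P(n+1))\bigr) \;\rightarrow\; \forall n\, P(n)
\]

Whatever the property P, the universal conclusion follows with certainty from the two premises, which is why the method is deductive despite its name.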
Criticism:
Main article: Problem of induction
Although philosophers at least as far back as the Pyrrhonist philosopher Sextus
Empiricus have pointed out the unsoundness of inductive reasoning,[40] the
classic philosophical critique of the problem of induction was given by the
Scottish philosopher David Hume.[41] Although the use of inductive reasoning
demonstrates considerable success, the justification for its application has
been questionable. Recognizing this, Hume highlighted the fact that our mind
often draws conclusions from relatively limited experiences that appear correct
but which are actually far from certain. In deduction, the truth value of the
conclusion is based on the truth of the premise. In induction, however, the
dependence of the conclusion on the premise is always uncertain. For example,
let us assume that all ravens are black. The fact that there are numerous black
ravens supports the assumption. Our assumption, however, becomes invalid once
it is discovered that there are white ravens. Therefore, the general rule
"all ravens are black" is not the kind of statement that can ever be
certain. Hume further argued that it is impossible to justify inductive
reasoning: this is because it cannot be justified deductively, so our only
option is to justify it inductively. Since this argument is circular, with the
help of Hume's fork he concluded that our use of induction is
unjustifiable.[42] Hume nevertheless stated that even if induction were proved unreliable,
we would still have to rely on it. So instead of a position of severe
skepticism, Hume advocated a practical skepticism based on common sense, where
the inevitability of induction is accepted.[43] Bertrand Russell illustrated
Hume's skepticism in a story about a chicken, fed every morning without fail,
who, following the laws of induction, concluded that this feeding would always
continue, until his throat was eventually cut by the farmer.[44]
In 1963, Karl Popper wrote, "Induction, i.e. inference based on many
observations, is a myth. It is neither a psychological fact, nor a fact of
ordinary life, nor one of scientific procedure."[45][46] Popper's 1972
book Objective Knowledge, whose first chapter is devoted to the problem of
induction, opens, "I think I have solved a major philosophical
problem: the problem of induction".[46] In Popper's schema, enumerative
induction is "a kind of optical illusion" cast by the steps of
conjecture and refutation during a problem shift.[46] An imaginative leap, the
tentative solution is improvised, lacking inductive rules to guide it.[46] The
resulting, unrestricted generalization is deductive, an entailed consequence of
all explanatory considerations.[46]
Controversy continued, however, with Popper's putative solution not generally
accepted.[47] More recently, inductive inference has been shown to be capable
of arriving at certainty, but only in rare instances, as in programs of machine
learning in artificial intelligence (AI).[48] Popper's stance on induction
being an illusion has been falsified: enumerative induction exists. Even so,
Donald Gillies argues that rules of inference related to inductive reasoning
are overwhelmingly absent from science.[48]
Biases:
Inductive reasoning is also
known as hypothesis construction because any conclusions made are based on
current knowledge and predictions.[citation needed] As with deductive
arguments, biases can distort the proper application of inductive argument,
thereby preventing the reasoner from forming the most logical conclusion based
on the clues. Examples of these biases include the availability heuristic,
confirmation bias, and the predictable-world bias. The availability heuristic
causes the reasoner to depend primarily upon information that is readily
available to him or her. People have a tendency to rely on information that is
easily accessible in the world around them. For example, in surveys, when
people are asked to estimate the percentage of people who died from various
causes, most respondents choose the causes that have been most prevalent in the
media such as terrorism, murders, and airplane accidents, rather than causes
such as disease and traffic accidents, which have been technically "less
accessible" to the individual since they are not emphasized as heavily in
the world around them. The confirmation bias is based on the natural tendency
to confirm rather than to deny a current hypothesis. Research has demonstrated
that people are inclined to seek solutions to problems that are more consistent
with known hypotheses rather than attempt to refute those hypotheses. Often, in
experiments, subjects will ask questions that seek answers that fit established
hypotheses, thus confirming these hypotheses. For example, if it is
hypothesized that Sally is a sociable individual, subjects will naturally seek
to confirm the premise by asking questions that would produce answers
confirming that Sally is, in fact, a sociable individual. The predictable-world
bias revolves around the inclination to perceive order where it has not been
proved to exist, either at all or at a particular level of abstraction.
Gambling is one of the most popular examples of predictable-world
bias. Gamblers often begin to think that they see simple and obvious patterns
in the outcomes and therefore believe that they are able to predict outcomes
based upon what they have witnessed. In reality, however, the outcomes of these
games are difficult to predict and highly complex in nature. In general, people
tend to seek some type of simplistic order to explain or justify their beliefs
and experiences, and it is often difficult for them to realise that their
perceptions of order may be entirely different from the truth.[49]
Bayesian inference:
As a logic of induction rather than a theory of belief, Bayesian inference does
not determine which beliefs are a priori rational, but rather determines how we
should rationally change the beliefs we have when presented with evidence. We
begin by committing to a prior probability for a hypothesis based on logic or
previous experience and, when faced with evidence, we adjust the strength of
our belief in that hypothesis in a precise manner using Bayesian logic.
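A Python sketch of such an adjustment, reusing the earlier two-headed-coin example; the 1% prior is an arbitrary assumption for illustration:

    # Bayes' rule: update belief in "the coin is two-headed" after observing
    # a run of heads. The prior (0.01) is an arbitrary illustrative choice.
    def posterior_two_headed(prior, heads_in_a_row):
        likelihood_biased = 1.0                    # two-headed: always heads
        likelihood_fair = 0.5 ** heads_in_a_row    # fair coin: all by chance
        evidence = (likelihood_biased * prior
                    + likelihood_fair * (1 - prior))
        return likelihood_biased * prior / evidence

    print(posterior_two_headed(0.01, 10))   # ~0.91 after ten straight heads
    print(posterior_two_headed(0.01, 100))  # ~1.0, yet never logically certain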
Inductive inference:
Around 1960, Ray Solomonoff founded the theory of universal inductive
inference, a theory of prediction based on observations, for example,
predicting the next symbol based upon a given series of symbols. This is a
formal inductive framework that combines algorithmic information theory with
the Bayesian framework. Universal inductive inference is based on solid
philosophical foundations,[50] and can be considered as a mathematically
formalized Occam's razor. Fundamental ingredients of the theory are the
concepts of algorithmic probability and Kolmogorov complexity.
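In the usual notation, the algorithmic prior assigns to a finite string x the total weight of the programs p that make a fixed universal prefix machine U output a continuation of x:

\[
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
\]

where \(\ell(p)\) is the length of p in bits; shorter (simpler) programs dominate the sum, which is the sense in which the theory formalizes Occam's razor.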