Uncertainty has long presented a difficult obstacle in artificial intelligence. Bayesian learning offers a mathematically rigorous method for dealing with uncertainty based on Bayes' theorem. The theorem establishes a means of calculating the posterior probability that an event will occur given some evidence, from the prior probability of the event and the likelihood that the evidence accompanies the event. Its use in artificial intelligence has met with success in a number of research areas and applications, including the development of cognitive models and neural networks. At the same time, the theory has been criticized as philosophically unrealistic and logistically inefficient.
The aim of artificial intelligence is to provide a computational model of intelligent behavior (Pearl, 1988). Expert systems are designed to embody the knowledge of an expert in a given field. But how do people become experts themselves?
While artificial intelligence can produce Ph.D.-quality experts, a more difficult challenge lies in creating a naive observer. The common sense people use in everyday reasoning poses one of the hardest challenges in building intelligent systems. Common-sense reasoning is often based on incomplete knowledge and is remarkably broad in its application. Intelligent systems have historically been successful in specific domains with well-defined structures. To succeed in a broad arena, they would need either a far greater base of knowledge or the ability to deal with uncertainty and learn. Because the former option is more demanding in resources and assumes that all the appropriate knowledge is obtainable, the latter is an attractive alternative.
Probability theories offer an intuitive guide to changing the beliefs in a system of knowledge in the presence of partial or uncertain information. They allow intelligent systems flexibility and a logical way to update their database of knowledge. The appeal of probability theories in AI lies in the way they express the qualitative relationship among beliefs and can process these relationships to draw conclusions (Pearl, 1988).
One of the most formalized probabilistic theories used in AI relates to Bayes' theorem. Bayesian methods have been used for a variety of AI applications across many disciplines including cognitive modeling, medical diagnosis, learning causal networks, and finance.
Two years after his death, in 1763, Rev. Thomas Bayes' "An Essay towards solving a Problem in the Doctrine of Chances" was published. Bayes is regarded as the first to use probability inductively, and in this now famous paper he established a mathematical basis for probabilistic inference. The idea behind Bayes' method is simple: the probability that an event will occur in future trials can be calculated from the frequency with which it has occurred in prior trials. Let's consider some everyday knowledge...
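As an illustrative sketch (not from Bayes' essay itself), the classic formalization of this idea is Laplace's rule of succession: under a uniform prior over an event's unknown probability, observing the event in s of n prior trials yields a posterior estimate of (s + 1) / (n + 2) that it occurs on the next trial. The function name and example below are hypothetical choices for illustration.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior probability that an event occurs on the next trial,
    given it occurred `successes` times in `trials` prior trials,
    assuming a uniform prior over the event's unknown probability.
    This is Laplace's rule of succession: (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# Having observed the event in 100 of 100 prior trials, the estimate
# is not certainty but (100 + 1) / (100 + 2):
p = rule_of_succession(100, 100)
print(p)  # 101/102
```

Note that even after uniformly positive experience the estimate never reaches 1, reflecting the inductive character of Bayes' method: prior frequencies inform, but never fully determine, beliefs about future trials.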