Bayes Theorem
Chapter 2 - EGM
Introduction
Bayes' theorem is a way of figuring out the probability of something happening based on what we already know. It is named after Thomas Bayes, an 18th-century mathematician and philosopher.
Here's a simple example of how Bayes' theorem works:
Imagine you have a jar of marbles. Some of the marbles are red and some of them are green. You want to figure out the probability that a marble you pick out of the jar will be red.
You know that there are 10 marbles in the jar, and 3 of them are red. So, you can say that the probability of picking a red marble out of the jar is 3/10, or 30%.
Now, imagine that you pick a marble out of the jar, set it aside, and it turns out to be red. You might want to update your estimate of the probability that the next marble you pick will be red. Bayes' theorem helps you do this: taking into account that one red marble is now gone, 2 of the 9 remaining marbles are red, so the updated probability is 2/9, or about 22%. Updating your estimate with new evidence in this way makes your predictions more accurate.
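The marble update above can be checked with a few lines of Python (a minimal sketch; the counts and fractions are exactly those from the example):

```python
from fractions import Fraction

red, total = 3, 10
p_first_red = Fraction(red, total)  # probability the first draw is red: 3/10 = 30%

# After drawing one red marble and setting it aside, one fewer red marble
# and one fewer marble overall remain in the jar.
p_next_red = Fraction(red - 1, total - 1)  # 2/9, about 22%

print(p_first_red, p_next_red)  # prints: 3/10 2/9
```

Using `Fraction` keeps the probabilities exact instead of introducing floating-point rounding.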
Description
Bayes' Theorem is a simple mathematical formula used for calculating conditional probabilities. Bayes' theorem is particularly useful for inferring causes from their effects since it is often fairly easy to discern the probability of an effect given the presence or absence of a cause.
The objective is to discover what sorts of constraints experience tends to impose, and to explain how the person's prior opinions can be used to justify the choice of a posterior probability from among the many that might satisfy a given constraint. This is a kind of "no jumping to conclusions" requirement. We explain it here as a natural result of the idea that rational learners should proportion their beliefs to the strength of the evidence they acquire. (1)
Basic formula
The probability of a hypothesis - conditional on a given body of data - is
"the ratio of the unconditional probability of the conjunction of the hypothesis with the data to the unconditional probability of the data alone."
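In symbols, that quoted definition reads P(Hypothesis|Data) = P(Hypothesis and Data) / P(Data). Since P(Hypothesis and Data) = P(Data|Hypothesis) × P(Hypothesis), it is equivalent to the familiar Bayes formula. A quick numeric check in Python, using made-up probabilities (assumptions for illustration, not numbers from this page):

```python
# Illustrative (assumed) probabilities -- not taken from the page's examples.
p_h = 0.8          # P(H): unconditional probability of the hypothesis
p_d_given_h = 0.6  # P(D|H): probability of the data given the hypothesis
p_d = 0.6          # P(D): unconditional probability of the data

p_h_and_d = p_d_given_h * p_h         # P(H and D) = P(D|H) * P(H)
ratio_form = p_h_and_d / p_d          # the ratio definition quoted above
bayes_form = p_d_given_h * p_h / p_d  # the familiar Bayes form

# Both forms give the same conditional probability P(H|D).
assert abs(ratio_form - bayes_form) < 1e-12
print(round(ratio_form, 2))  # prints: 0.8
```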
Example
P(Model|Data) = P(Data|Model) × P(Model) / P(Data)

- P(Model|Data): what is the probability that the train will run on time, given my experience?
- P(Data|Model): my opinion that the train will run on time, expressed as a probability: 60%
- P(Model): the probability that the train will run on time (railway statistics): 80%
- P(Data): my personal certainty of my opinion/experience, expressed as a probability: 100% (I am very sure of myself)

The calculation gives us: (60% × 80%) / 100% = 48%
In words:
- My belief that only 60% of the trains run on time, combined with being very sure of this opinion, gives me the feeling of dealing with a railroad where 48% of the trains run on time.
- Even if all trains (100%) really ran on time, this would only raise my feeling from 48% to 60%, not to the present railway statistics.
- If I synchronised my opinion and my certainty, I would come very close to the railway statistics.
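The train example, including the what-if scenarios above, can be reproduced with a small helper function (a sketch; the function name is ours, the percentages are the page's, and the last scenario is one possible reading of "synchronising" opinion and certainty):

```python
def bayes(p_data_given_model, p_model, p_data):
    """P(Model|Data) = P(Data|Model) * P(Model) / P(Data), as used on this page."""
    return p_data_given_model * p_model / p_data

# The page's numbers: opinion 60%, railway statistics 80%, certainty 100%.
print(round(bayes(0.60, 0.80, 1.00), 2))  # prints: 0.48

# What if all trains (100%) really ran on time?
print(round(bayes(0.60, 1.00, 1.00), 2))  # prints: 0.6

# One reading of "synchronising": raising both opinion and certainty to 80%
# recovers the railway statistics exactly.
print(round(bayes(0.80, 0.80, 0.80), 2))  # prints: 0.8
```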
Background information
| Source | External link |
|---|---|
| (1) | Bayes' Theorem (Stanford Encyclopedia of Philosophy) |