Bayes' Theorem
Definition
Bayes' Theorem is a formula for updating the probability of a hypothesis (event A) given new evidence (event B). It converts a prior probability into a posterior probability by incorporating the likelihood of the observed evidence.
Why it matters
Bayes' Theorem lets you revise beliefs rationally when new information arrives. It's widely used in medical testing, finance, machine learning, spam filtering, and any setting where conditional probabilities matter.

The formula
P(A|B) = [P(A) · P(B|A)] / P(B)
Where:
* P(A) = prior probability of A (before seeing B)
* P(B) = probability of observing B
* P(B|A) = likelihood: probability of B given A
* P(A|B) = posterior probability of A after observing B
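As a quick illustration (not part of the original article), here is a minimal Python sketch that applies the formula directly; the function name and the sample probabilities are assumptions chosen for clarity.

```python
def bayes_posterior(prior: float, likelihood: float, evidence: float) -> float:
    """Return P(A|B) = P(A) * P(B|A) / P(B)."""
    if evidence <= 0:
        raise ValueError("P(B) must be positive")
    return prior * likelihood / evidence


# Hypothetical numbers: P(A) = 0.01, P(B|A) = 0.9, P(B) = 0.05
print(round(bayes_posterior(prior=0.01, likelihood=0.9, evidence=0.05), 3))  # 0.18
```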

Derivation (brief): P(A|B) = P(A ∩ B) / P(B). Since P(A ∩ B) = P(A) · P(B|A), substituting yields Bayes' formula above.
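The same derivation, written out as displayed math for readers who prefer step-by-step notation:

```latex
\begin{align*}
P(A \mid B) &= \frac{P(A \cap B)}{P(B)} && \text{definition of conditional probability} \\
P(A \cap B) &= P(A)\,P(B \mid A) && \text{multiplication rule} \\
\Rightarrow\quad P(A \mid B) &= \frac{P(A)\,P(B \mid A)}{P(B)}
\end{align*}
```
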
Key concepts
* Prior probability: your initial belief about A before new data.
* Likelihood: how probable the new data is under A.
* Posterior probability: updated belief about A after seeing the data.
Simple examples
* Card example
  * Draw one card from a 52-card deck. P(king) = 4/52 = 1/13 ≈ 7.69%.
  * If you learn the card is a face card (12 face cards total), P(king | face) = 4/12 ≈ 33.33% (verified in the sketch below).
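A short verification sketch (my own, not part of the source example) confirming the card result both by direct counting and by Bayes' formula:

```python
from fractions import Fraction

# Direct conditional probability: P(king | face) = P(king and face) / P(face)
p_king_and_face = Fraction(4, 52)   # every king is a face card
p_face = Fraction(12, 52)
print(p_king_and_face / p_face)     # 1/3

# Same result via Bayes' formula: P(king | face) = P(king) * P(face | king) / P(face)
p_king = Fraction(4, 52)
p_face_given_king = Fraction(1)     # a king is always a face card
print(p_king * p_face_given_king / p_face)  # 1/3
```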

* Derivation example using stocks (intuitive)
  * Let A = "Amazon stock falls" and B = "the DJIA fell."
  * P(A|B) = P(A ∩ B) / P(B). Since P(A ∩ B) = P(A) · P(B|A), you can express P(A|B) = [P(A) · P(B|A)] / P(B). This shows how an initial probability for Amazon (the prior) is updated given evidence about the DJIA.

* Drug test numerical example
  * Test sensitivity = 98% (true positive rate). Specificity = 98% (true negative rate). Prevalence = 0.5% (0.005).
  * Compute P(user | positive):

P(user|+) = (0.98 × 0.005) / [(0.98 × 0.005) + (0.02 × 0.995)]
= 0.0049 / (0.0049 + 0.0199) ≈ 0.1976 ≈ 19.8%.
* Even with a highly accurate test, a low base rate (prevalence) means most positive results are false positives (this calculation is reproduced in the sketch below).
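The same drug-test arithmetic as a runnable sketch, with the denominator built from the law of total probability (the variable names are my own):

```python
sensitivity = 0.98   # P(positive | user)
specificity = 0.98   # P(negative | non-user)
prevalence = 0.005   # P(user)

false_positive_rate = 1 - specificity  # P(positive | non-user) = 0.02

# Law of total probability: P(positive) across users and non-users
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' theorem: P(user | positive)
p_user_given_positive = sensitivity * prevalence / p_positive
print(round(p_user_given_positive, 4))  # 0.1976
```
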
Special considerations
* Base-rate (prior) sensitivity: Posterior results can be dominated by the prior when the event is rare. Ignoring base rates leads to the base-rate fallacy.
* Test characteristics: Sensitivity and specificity directly affect P(B|A) and P(B|¬A).
* Independence and model assumptions: Bayes' calculations assume the probabilities provided are correct and events are modeled appropriately.
* Continuous problems: In Bayesian statistics, priors and posteriors are often continuous distributions; Bayes' rule applies to densities as well (see the sketch below).
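To illustrate the continuous case, here is a small sketch of a conjugate Beta-binomial update (my own example with assumed prior pseudo-counts and data, not from the article), where the prior and posterior are densities over an unknown rate rather than point probabilities:

```python
# Beta-binomial conjugate update: a Beta(a, b) prior on an unknown success rate,
# after observing k successes in n trials, yields a Beta(a + k, b + n - k) posterior.
a_prior, b_prior = 2.0, 2.0      # assumed prior pseudo-counts
k, n = 7, 10                     # assumed observed data

a_post = a_prior + k
b_post = b_prior + (n - k)

prior_mean = a_prior / (a_prior + b_prior)
posterior_mean = a_post / (a_post + b_post)
print(round(prior_mean, 3), round(posterior_mean, 3))  # 0.5 0.643
```
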
When to use Bayes' Theorem
Use Bayes' Theorem when you need the probability of an event given related evidence, for example:
* Interpreting medical or diagnostic tests
* Updating risk assessments in finance
* Inference tasks in machine learning (classification, spam detection); see the sketch after this list
* Any decision-making where beliefs are updated with new data
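As one illustration of the machine-learning use case, here is a toy naive-Bayes-style spam score; the word likelihoods, priors, and the conditional-independence assumption are my own simplifications, not from the article.

```python
import math

# Hypothetical per-word likelihoods P(word | class) and class priors, chosen for illustration.
p_word_given_spam = {"free": 0.30, "meeting": 0.02, "offer": 0.20}
p_word_given_ham  = {"free": 0.03, "meeting": 0.25, "offer": 0.02}
p_spam, p_ham = 0.4, 0.6

def spam_posterior(words):
    """P(spam | words) under the naive (conditional independence) assumption."""
    log_spam = math.log(p_spam) + sum(math.log(p_word_given_spam[w]) for w in words)
    log_ham  = math.log(p_ham)  + sum(math.log(p_word_given_ham[w])  for w in words)
    # Normalize in log space for numerical stability
    m = max(log_spam, log_ham)
    return math.exp(log_spam - m) / (math.exp(log_spam - m) + math.exp(log_ham - m))

print(round(spam_posterior(["free", "offer"]), 3))  # ~0.985 (mostly spam-flavored words)
print(round(spam_posterior(["meeting"]), 3))        # ~0.051 (mostly ham-flavored words)
```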

Key takeaways
* Bayes' Theorem provides a principled way to update probabilities in light of new evidence.
* The prior matters: low-prevalence scenarios can produce counterintuitive posteriors even with accurate tests.
* It is foundational to Bayesian statistics and widely applicable across disciplines.
Bottom line
Bayes' Theorem links prior belief and new evidence to produce a revised probability. Proper use requires careful specification of priors and likelihoods; when applied correctly, it yields clearer, more rational assessments of uncertain events.