Probability axioms and sample spaces
Kolmogorov's axioms formalize probability: non-negativity, normalization (P(Ω) = 1), and countable additivity for mutually exclusive events. All of probability theory follows from these three rules.
- Conditional probability P(A|B) = P(A∩B)/P(B), defined when P(B) > 0 — crucial for Bayesian reasoning.
- Independence: P(A∩B) = P(A)·P(B); multiplying independent die roll probabilities is valid.
- The complement rule: P(Aᶜ) = 1 − P(A) simplifies many calculations.
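These definitions can be checked directly by enumerating a finite sample space. The sketch below (an illustration, not from the source) uses two fair six-sided dice, with events A = "sum is 7" and B = "first die shows 3" chosen as hypothetical examples; exact fractions avoid floating-point noise.

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 ordered pairs from two fair six-sided dice.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """P(event) under the uniform measure on the sample space."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] + w[1] == 7   # sum is 7
B = lambda w: w[0] == 3          # first die shows 3

p_a_and_b = prob(lambda w: A(w) and B(w))
p_a_given_b = p_a_and_b / prob(B)      # P(A|B) = P(A∩B)/P(B)

print(p_a_given_b)                      # 1/6
print(p_a_and_b == prob(A) * prob(B))   # True: A and B are independent
```

Note that A and B come out independent here even though they "feel" related — independence is a property of the numbers, not of intuition.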
Discrete distributions for counting problems
Binomial, geometric, and Poisson distributions model counts of events in discrete trials. They arise naturally in dice, quality control, queueing, and reliability.
- Binomial B(n, p): n trials, probability p of success each; mean np; variance np(1−p).
- Poisson P(λ): rare events in a fixed time window; the limit of the binomial as n→∞, p→0 with np→λ held fixed.
- Dice rolling: uniform on {1,…,s}; mean = (s+1)/2; variance = (s²−1)/12.
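The closed-form moments above can be verified exactly from the definitions, without simulation. This sketch (illustrative, with hypothetical parameter choices) computes mean and variance from the pmf using exact rational arithmetic and compares against the formulas np, np(1−p), (s+1)/2, and (s²−1)/12.

```python
from fractions import Fraction
from math import comb

def binomial_stats(n, p):
    """Exact mean and variance of B(n, p) computed from the pmf."""
    pmf = [Fraction(comb(n, k)) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))
    var = sum(k * k * pk for k, pk in enumerate(pmf)) - mean**2
    return mean, var

def die_stats(s):
    """Exact mean and variance of a fair s-sided die, uniform on {1, ..., s}."""
    faces = range(1, s + 1)
    mean = Fraction(sum(faces), s)
    var = Fraction(sum(f * f for f in faces), s) - mean**2
    return mean, var

# B(10, 1/3): formulas predict mean np = 10/3, variance np(1-p) = 20/9
m, v = binomial_stats(10, Fraction(1, 3))
print(m, v)

# Fair d6 and d20: formulas predict (s+1)/2 and (s^2 - 1)/12
for s in (6, 20):
    m, v = die_stats(s)
    print(s, m, v)
```

Using `Fraction` makes the check an equality, not an approximate comparison.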
Law of large numbers and the CLT
The law of large numbers guarantees that sample averages converge to the true expected value. The Central Limit Theorem says the sampling distribution of the mean approaches normal regardless of the shape of the original distribution, provided it has finite variance.
- LLN: why more simulation trials improve accuracy — error ∝ 1/√n.
- CLT enables normal-based confidence intervals even for non-normal data (n > 30 rule of thumb).
- Monte Carlo integration exploits LLN: approximate integrals with random sampling.
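Monte Carlo integration is a direct application of the LLN: the average of f at uniform random points estimates ∫₀¹ f(x) dx. A minimal sketch, using the hypothetical integrand f(x) = x² (true integral 1/3) and a fixed seed for reproducibility:

```python
import random

def mc_integral(f, n, rng):
    """LLN-based Monte Carlo estimate of the integral of f over [0, 1]:
    the sample mean of f at n uniform random points."""
    return sum(f(rng.random()) for _ in range(n)) / n

rng = random.Random(0)  # fixed seed so the run is reproducible
estimate = mc_integral(lambda x: x * x, 100_000, rng)

# True value is 1/3; per the LLN/CLT the error shrinks like 1/sqrt(n),
# so at n = 100,000 the standard error is roughly 0.001 here.
print(estimate, abs(estimate - 1/3))
```

Doubling the accuracy requires quadrupling n — the 1/√n rate from the bullet above is exactly what makes Monte Carlo cheap in high dimensions and expensive for high precision.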