Monte Carlo


Lecture 24

March 23, 2026

Review

Random Variable Generation

  • Quantile transform method turns uniforms into known distributions.
  • Rejection sampling can be used when quantiles are difficult to compute (but requires a proposal distribution that covers the target).
  • Pseudorandom number generation and importance of seeds.

Monte Carlo Simulation

“Math Is Hard, Let’s Go Simulating”

Suppose we wanted to calculate the probability that a standard Gaussian random variable falls within \((0.8, 0.9)\).

Figure 1: Illustration of integral problem

\[\int_{0.8}^{0.9} \frac{1}{\sqrt{2\pi}} \exp\left(-x^2/2\right)dx\]

It turns out \(\Phi(a) = \int_{-\infty}^a \frac{1}{\sqrt{2\pi}} \exp\left(-x^2/2\right)dx\) has no closed form expression.

An Alternative Approach To Calculating Integrals

Can we reframe this integral as a probability of some outcome?

  1. Generate a “large number” of Gaussian-distributed random variables;
  2. Calculate the proportion that are between 0.8 and 0.9.
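A minimal sketch of these two steps, assuming the Distributions.jl package (variable names here are illustrative):

```julia
using Distributions, Random, Statistics

Random.seed!(1)  # seed for reproducibility
samples = rand(Normal(0, 1), 1_000_000)   # step 1: draw Gaussian samples
p_hat = mean(0.8 .< samples .< 0.9)       # step 2: proportion in (0.8, 0.9)
```

With a million samples, `p_hat` should land very close to the true value of about 0.0278.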

Using Simulation Instead of Integrating

Figure 2: Histogram of samples from Gaussian distribution

The “true” value: 0.0278.

The simulated value: 0.02781.

Approximating Integrals By Simulation

Goal: Estimate \(\mathbb{E}_p\left[f(x)\right]\), \(x \sim p(X)\)

Monte Carlo principle:

  • Sample \(x^1, x^2, \ldots, x^N \sim p(X)\)
  • Estimate \(\mathbb{E}_p\left[f(x)\right] \approx \frac{1}{N}\sum_{n=1}^N f(x^n)\)

In other words: replace calculus with data summaries!
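This principle can be wrapped in a small generic helper (a sketch; `mc_estimate` is an illustrative name, not a library function):

```julia
using Distributions, Statistics

# generic Monte Carlo estimator of E_p[f(X)]
function mc_estimate(f, dist, N)
    x = rand(dist, N)    # sample x¹, …, xᴺ ~ p(X)
    return mean(f.(x))   # average f over the samples
end

# example: E[X²] = 1 for X ~ N(0, 1)
est = mc_estimate(x -> x^2, Normal(0, 1), 100_000)
```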

Monte Carlo Process Schematic

Probability Distribution → (Sample) → Random Samples → (Input) → Model → (Simulate) → Outputs

MC Example: Dice

What is the probability of rolling 4 dice for a total of 19?

Can simulate dice rolls and find the frequency of 19s among the samples.

Code
using Distributions, Plots, Statistics

# roll `n_dice` dice `n_trials` times; return the vector of totals
function dice_roll_repeated(n_trials, n_dice)
    dice_dist = DiscreteUniform(1, 6)
    roll_results = zeros(n_trials)
    for i = 1:n_trials
        roll_results[i] = sum(rand(dice_dist, n_dice))
    end
    return roll_results
end

nsamp = 10000
# roll four dice 10000 times
rolls = dice_roll_repeated(nsamp, 4)

# calculate probability of 19
sum(rolls .== 19) / length(rolls)

# initialize storage for running estimates by sample length
avg_freq = zeros(length(rolls))
std_freq = zeros(length(rolls))

# compute the running frequency of 19s and its standard error
avg_freq[1] = (rolls[1] == 19)
for i = 2:length(rolls)
    avg_freq[i] = (avg_freq[i-1] * (i-1) + (rolls[i] == 19)) / i
    std_freq[i] = 1 / sqrt(i - 1) * std(rolls[1:i] .== 19)
end

plt = plot(
    1,
    xlim = (1, nsamp),
    ylim = (0, 0.1),
    legend = :false,
    tickfontsize=16,
    guidefontsize=16,
    xlabel="Iteration",
    ylabel="Estimate",
    right_margin=8mm,
    color=:black,
    linewidth=3,
    size=(600, 400)
)
hline!(plt, [0.0432], color="red", 
    linestyle=:dash) 

mc_anim = @animate for i = 1:nsamp
    push!(plt, 1, i, avg_freq[i])
end every 100

gif(mc_anim, "figures/mc_dice.gif", fps=10)
Figure 3: Monte Carlo estimation of the dice-roll probability

MC Example: Finding \(\pi\)

How can we use MC to estimate \(\pi\)?

Hint: Think of \(\pi\) as an expected value…

MC Example: Finding \(\pi\)

Finding \(\pi\) by sampling random values from the unit square and computing the fraction in the unit circle.

\[\frac{\text{Area of Circle}}{\text{Area of Square}} = \frac{\pi}{4}\]

Code
using Distributions, Plots, Statistics, Logging

Logging.disable_logging(Logging.Info)

# points on a circle of radius r (for plotting the boundary)
function circleShape(r)
    θ = LinRange(0, 2 * π, 500)
    r * sin.(θ), r * cos.(θ)
end

nsamp = 3000
unif = Uniform(-1, 1)
x = rand(unif, (nsamp, 2))                # uniform samples on the square [-1, 1]²
l = mapslices(v -> sum(v.^2), x, dims=2)  # squared distance from the origin
in_circ = l .< 1                          # inside the unit circle?
pi_est = [4 * mean(in_circ[1:i]) for i in 1:nsamp]  # running estimate of π

plt1 = plot(
    1,
    xlim = (-1, 1),
    ylim = (-1, 1),
    legend = false,
    markersize = 4,
    framestyle = :origin,
    tickfontsize=16,
    grid=:false
    )
plt2 = plot(
    1,
    xlim = (1, nsamp),
    ylim = (3, 3.5),
    legend = :false,
    linewidth=3, 
    color=:black,
    tickfontsize=16,
    guidefontsize=16,
    xlabel="Iteration",
    ylabel="Estimate",
    right_margin=5mm
)
hline!(plt2, [π], color=:red, linestyle=:dash)
plt = plot(plt1, plt2, layout=Plots.grid(2, 1, heights=[2/3, 1/3]), size=(600, 500))

plot!(plt, circleShape(1), linecolor=:blue, lw=1, aspectratio=1, subplot=1)


mc_anim = @animate for i = 1:nsamp
    if l[i] < 1
        scatter!(plt[1], Tuple(x[i, :]), color=:blue, markershape=:x, subplot=1)
    else
        scatter!(plt[1], Tuple(x[i, :]), color=:red, markershape=:x, subplot=1)
    end
    push!(plt, 2, i, pi_est[i])
end every 100

gif(mc_anim, "figures/mc_pi.gif", fps=3)
Figure 4: Monte Carlo estimation of \(\pi\)

Monte Carlo and Uncertainty Propagation

Monte Carlo Simulation: propagate uncertainties from inputs through a model to outputs.

This is an example of uncertainty propagation:

Draw samples from some distribution, and run them through one or more models to find the (conditional) probability of outcomes of interest (for good or bad).

Why Monte Carlo Works

Monte Carlo: Formal Approach

Formally: Monte Carlo estimation computes the expected value of a random quantity \(Y = f(X)\), \(\mu = \mathbb{E}[Y]\).

To do this, generate \(n\) independent and identically distributed values \(Y_1, \ldots, Y_n\). Then the sample estimate is

\[\tilde{\mu}_n = \frac{1}{n}\sum_{i=1}^n Y_i\]

What Makes a Good Statistical Estimator?

Statistical estimators are random, which means we can’t ever guarantee that we get back the “true” value.

However, we might want:

  1. No bias (\(\text{Bias} = \mathbb{E}[\tilde{\mu}_n] - \mu\))
  2. Consistency (\(\tilde{\mu}_n \to \mu\) as \(n \to \infty\))
  3. Well-characterized, ideally small, variance (\(\text{Var}(\tilde{\mu})\))

The Law of Large Numbers

If

  1. \(Y\) is a random variable and its expectation exists and

  2. \(Y_1, \ldots, Y_n\) are independently and identically distributed

Then by the weak law of large numbers, for any \(\varepsilon > 0\):

\[\lim_{n \to \infty} \mathbb{P}\left(\left|\tilde{\mu}_n - \mu\right| \leq \varepsilon \right) = 1\]

The Law of Large Numbers

In other words, eventually Monte Carlo estimates will get within an arbitrary error of the true expectation (which shows consistency).

But how large is large enough?
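One informal way to see consistency is to watch the running estimate approach the true mean as \(n\) grows. A sketch, using a Gaussian with known mean \(\mu = 2\):

```julia
using Distributions, Random, Statistics

Random.seed!(1)
mu = 2.0
Y = rand(Normal(mu, 1.0), 100_000)  # iid draws with known mean

# running sample mean at increasing n; the error shrinks as n grows
for n in (100, 1_000, 10_000, 100_000)
    println(n, " => error ", abs(mean(Y[1:n]) - mu))
end
```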

Monte Carlo Sample Mean

The sample mean \(\tilde{\mu}_n = \frac{1}{n}\sum_{i=1}^n Y_i\) is itself a random variable.

With some assumptions (the mean of \(Y\) exists and \(Y\) has finite variance), the expected Monte Carlo sample mean \(\mathbb{E}[\tilde{\mu}_n]\) is

\[\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n Y_i\right] = \frac{1}{n} \sum_{i=1}^n \mathbb{E}\left[Y_i\right] = \mu\]

Monte Carlo Error

We’d like to know more about the error of this estimate for a given sample size.

\[\begin{aligned} \tilde{\sigma}_n^2 = \mathbb{V}\left[\tilde{\mu}_n\right] &= \mathbb{V}\left[\frac{1}{n}\sum_{i=1}^n Y_i\right] \\ &= \left(\frac{1}{n}\right)^2 \sum_{i=1}^n \mathbb{V}\left[Y_i\right] \\ &= \frac{\sigma_Y^2}{n} \end{aligned}\]

Monte Carlo Standard Error

So as \(n\) increases, the Monte Carlo standard error (MCSE) decreases:

\[\tilde{\sigma}_n = \frac{\sigma_Y}{\sqrt{n}}\]
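In practice \(\sigma_Y\) is unknown, but it can be replaced by the sample standard deviation to get a plug-in MCSE estimate. A sketch:

```julia
using Distributions, Random, Statistics

Random.seed!(1)
n = 10_000
Y = rand(Normal(0, 1), n)
mu_hat = mean(Y)
mcse = std(Y) / sqrt(n)  # plug-in estimate of σ_Y / √n

# a rough 95% interval for μ: mu_hat ± 2 * mcse
println((mu_hat, mcse))
```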

Implications of MCSE

In other words, to decrease the Monte Carlo error by a factor of 10, we need 100 times as many samples. This makes naive Monte Carlo a poor method when high accuracy is needed.

Monte Carlo is an extremely bad method. It should only be used when all alternative methods are worse.

— Sokal, Monte Carlo Methods in Statistical Mechanics, 1996

But…often most alternatives are worse!

When Might We Want to Use Monte Carlo?

If you can compute your integrals analytically or through quadrature, you probably should.

But for many “real” problems, this is either

  1. Not possible (or computationally intractable);
  2. Requires a lot of stylization and simplification.

Key Points

Key Points

  • Monte Carlo: stochastic simulation instead of integration to estimate expected values
  • The Monte Carlo sample mean is unbiased; its variance decreases as \(1/n\) (so the standard error decreases as \(1/\sqrt{n}\)).
  • Be mindful of Monte Carlo standard error for “naive” MC with iid samples.

Upcoming Schedule

Next Classes

Wednesday: Some Advanced Monte Carlo Methods

Friday: Monte Carlo Examples

HW4: Due Friday
