Monte Carlo Simulation
Imitating a specific process to perform probability analysis using various concepts from mathematics.
Monte Carlo Simulation
A method of estimating the value of an unknown quantity using the principles of inferential statistics.
Inferential Statistics
- Population: a set of examples
- Sample: a proper subset of a population
- Key fact: a random sample tends to exhibit the same properties as the population from which it is drawn
We make an inference about the population based upon a set of statistics we compute on the sample.
- If the sample we draw is not random, there is no reason to expect it to exhibit the same properties as the population, as the sketch below illustrates.
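A minimal sketch of this idea (the population and the names here are made up for illustration): the mean of a random sample tracks the population mean, while a deliberately biased sample does not.

```python
import random

random.seed(17)
# Hypothetical population: 100,000 values drawn from a normal distribution
population = [random.gauss(170, 10) for _ in range(100000)]

def mean(xs):
    return sum(xs)/len(xs)

# A random sample tends to exhibit the same properties as the population
randomSample = random.sample(population, 100)
print('Population mean:', round(mean(population), 2))
print('Random sample mean:', round(mean(randomSample), 2))

# A non-random sample (here, the 100 largest values) does not
biasedSample = sorted(population)[-100:]
print('Biased sample mean:', round(mean(biasedSample), 2))
```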
Example:
Given a single coin, estimate the fraction of heads you would get if you flipped the coin an infinite number of times.
- Consider one flip that comes up heads. Would you now say the probability of getting heads next time is 1.0?
- Flip the coin again: heads again.
- Flip the coin 100 times and get heads every time. Now do you think the next flip will come up heads?
Next, suppose that flipping the coin 100 times yields
- 52 heads
- 48 tails

Do you think the probability of the next flip coming up heads is $\frac{52}{100}$?
Given the data, it's your best estimate, but your confidence in that estimate should be low.
Why the difference in confidence?
- Confidence in our estimate depends upon two things:
  - Size of the sample (e.g., 100 versus 2)
  - Variance of the sample (e.g., all heads versus 52 heads)
- As the variance grows, we need a larger sample to have the same degree of confidence. The sketch below makes this concrete.
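Here is a minimal sketch of the coin example (`flipCoins` is a hypothetical helper, not part of the lecture code): with 2 flips the estimate swings wildly between runs, while with 100,000 flips it stays close to 0.5.

```python
import random

def flipCoins(numFlips):
    """Return the fraction of heads in numFlips flips of a fair coin."""
    heads = sum(1 for _ in range(numFlips) if random.random() < 0.5)
    return heads/numFlips

random.seed(0)
for numFlips in (2, 100, 100000):
    # Repeat each experiment a few times to see how much the estimate varies
    estimates = [flipCoins(numFlips) for _ in range(3)]
    print(numFlips, 'flips -> fractions of heads:', estimates)
```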
Looking into Roulette
We don't actually need to simulate roulette: each bet resolves quickly and the possible outcomes are easy to enumerate, so the answers are obvious from the mathematics. That is exactly what makes it useful here, because it allows us to compare simulation results to the actual probabilities.
```python
import random

class FairRoulette():
    def __init__(self):
        self.pockets = []
        # Pockets are numbered 1 to 36
        for i in range(1, 37):
            self.pockets.append(i)
        self.ball = None
        # If you bet on a pocket and win, you get len(pockets) - 1
        # times your bet: bet a dollar and win, and you get $36 back,
        # your dollar plus $35. If you lose, you lose your bet.
        self.pocketOdds = len(self.pockets) - 1

    def spin(self):
        self.ball = random.choice(self.pockets)

    def betPocket(self, pocket, amt):
        if str(pocket) == str(self.ball):
            return amt*self.pocketOdds
        else:
            return -amt

    def __str__(self):
        return 'Fair Roulette'
```
```python
def playRoulette(game, numSpins, pocket, bet, toPrint):
    totPocket = 0
    for i in range(numSpins):
        game.spin()
        totPocket += game.betPocket(pocket, bet)
    if toPrint:
        print(numSpins, 'spins of', game)
        print('Expected return betting', pocket, '=',
              str(100*totPocket/numSpins) + '%\n')
    return totPocket/numSpins
```
```python
# Let's play the game: see what happens when we spin the wheel
# 100 times versus 1,000,000 times, three runs of each.
game = FairRoulette()
for numSpins in (100, 1000000):
    for i in range(3):
        playRoulette(game, numSpins, 2, 1, True)
```
What's going on here is called the law of large numbers, or sometimes Bernoulli's law.
Law of Large Numbers
- In repeated independent tests with the same actual probability $p$ of a particular outcome in each test, the chance that the fraction of times that outcome occurs differs from $p$ converges to zero as the number of trials goes to infinity.
This says that if I were to spin this roulette wheel an infinite number of times, the expected return would be 0%, the true value from the mathematics. And the closer I get to infinity, the closer the observed return will be to that true value.
- The law of large numbers is a theorem from probability and statistics that suggests that the average result from repeating an experiment multiple times will better approximate the true or expected underlying result. (ref: https://machinelearningmastery.com/a-gentle-introduction-to-the-law-of-large-numbers-in-machine-learning/) A quick check is sketched below.
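As a quick check of the law (a sketch using a fair coin with $p = 0.5$ rather than the roulette wheel above), the gap between the observed fraction of heads and $p$ shrinks as the number of trials grows:

```python
import random

random.seed(0)
p = 0.5
for numTrials in (10, 1000, 100000, 1000000):
    heads = sum(1 for _ in range(numTrials) if random.random() < p)
    print(numTrials, 'trials: |fraction of heads - p| =',
          round(abs(heads/numTrials - p), 5))
```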
Independent and Identically Distributed
It is important to be clear that the observations in the sample must be independent.
Gambler's Fallacy
Also known as the Monte Carlo Fallacy, the Gambler's Fallacy occurs when an individual erroneously believes that a certain random event is less likely or more likely, given a previous event or a series of events. This line of thinking is incorrect, since past events do not change the probability that certain events will occur in the future.
"On August 18, 1913, at casino in Monte Carlo, black came up a record twenty-six times in succession[in roulette]...[Threre] was a nerar-panicky rush to bet on red, beginning about the time black had come up a phenomenal fifteen times." --HUff and Geis, How to Take a Chace.
- The probability of 26 consecutive reds: $\frac{1}{67,108,864}$
- The probability of 26 consecutive reds when the previous 25 spins were red: $\frac{1}{2}$ (checked empirically below)
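Streaks of 26 reds are far too rare to sample directly, but we can check the same independence claim on shorter streaks. This is a rough sketch that ignores the green pockets, so red and black are equally likely:

```python
import random

random.seed(0)
spins = [random.choice('RB') for _ in range(1000000)]

# Among spins that follow a run of 5 consecutive reds,
# red still comes up about half the time.
nextAfterStreak = [spins[i+5] for i in range(len(spins)-5)
                   if spins[i:i+5] == list('RRRRR')]
print('P(red | 5 previous reds) ~',
      round(nextAfterStreak.count('R')/len(nextAfterStreak), 3))
```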
There's something that's often confused with the gambler's fallacy, and that's called regression to the mean. The term was coined in 1885 by Francis Galton, whose paper observed:
If somebody's parents are both taller than average, it's likely that the child will be shorter than the parents. Conversely, if the parents are shorter than average, it's likely that the child will be taller than the parents. It is this tendency that Galton called regression to the mean.
Regression to the Mean
ref: https://www.youtube.com/watch?v=1tSqSMOyNFE
We expect that a single result, or the mean result from a small sample, is likely to be close to the central tendency, the mean of the population distribution. It may not be; in fact, it may be very strange or unlikely.
As we increase the sample size, the finding or mean of the sample will move back toward the population mean, back toward the true underlying expected value. This is called regression to the mean or sometimes reversion to the mean.
- Following an extreme random event, the next random event is likely to be less extreme. Let's understand this in roulette.

Example:
If you spin a fair roulette wheel 10 times and get 100% reds (10 reds), that is an extreme event (probability = $\frac{1}{1024}$).
- The gambler's fallacy says that if I were to spin it another 10 times, the results would need to even out: I should get more blacks than usual to make up for the excess reds.
- What regression to the mean says is different: it's likely that in the next 10 spins you will get fewer than 10 reds.

So if you look at the average of all 20 spins, it will be closer to the expected mean of 50% reds than to the 100% of the first 10 spins. That's why it's called regression to the mean: the more samples you take, the closer you will get to the mean. The sketch below simulates this.
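A minimal sketch of that claim (again ignoring the green pockets, so each spin is red with probability 0.5): find runs of 10 spins that are all red, then count the reds in the 10 spins that follow.

```python
import random

random.seed(0)

def redsInTenSpins():
    return sum(1 for _ in range(10) if random.random() < 0.5)

# Collect the follow-up to 100 extreme (all-red) runs
followUps = []
while len(followUps) < 100:
    if redsInTenSpins() == 10:              # extreme event: 10 reds in a row
        followUps.append(redsInTenSpins())  # the next 10 spins
print('Mean reds in the 10 spins after an all-red run:',
      sum(followUps)/len(followUps))
```

The follow-up spins average about 5 reds, so the 20-spin average lands near 75% reds, closer to the 50% mean than the first 10 spins were.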
```python
class EuRoulette(FairRoulette):
    def __init__(self):
        FairRoulette.__init__(self)
        self.pockets.append('0')  # one green pocket

    def __str__(self):
        return 'European Roulette'

class AmRoulette(EuRoulette):
    def __init__(self):
        EuRoulette.__init__(self)
        self.pockets.append('00')  # a second green pocket

    def __str__(self):
        return 'American Roulette'
```
```python
def findPocketReturn(game, numTrials, trialSize, toPrint):
    """Return a list of per-spin returns, one entry per trial."""
    pocketReturns = []
    for t in range(numTrials):
        trialVals = playRoulette(game, trialSize, 2, 1, toPrint)
        pocketReturns.append(trialVals)
    return pocketReturns
```
```python
random.seed(0)
numTrials = 20
resultDict = {}
games = (FairRoulette, EuRoulette, AmRoulette)
for G in games:
    resultDict[G().__str__()] = []
for numSpins in (1000, 10000, 100000, 1000000):
    print('\nSimulate', numTrials, 'trials of',
          numSpins, 'spins each')
    for G in games:
        pocketReturns = findPocketReturn(G(), numTrials,
                                         numSpins, False)
        expReturn = 100*sum(pocketReturns)/len(pocketReturns)
        print('Exp. return for', G(), '=',
              str(round(expReturn, 4)) + '%')
```
We are sampling, which is why the results change; if we ran the simulation with a different seed, we'd get different numbers.
Whenever you are sampling, you can't be guaranteed perfect accuracy. It's always possible to get a weird sample, and that's not to say that you won't get exactly the right answer.
This brings us to what is, in some sense, the fundamental question of all computational statistics: how many samples do we need to look at before we can have real, justifiable confidence in our answer?
As we saw in the coin example above, our intuition tells us that it depends upon the variability in the underlying possibilities.
We have to look at the variability in the data, so let's start with something called variance: $$\sigma^{2}(X) = \frac{\sum_{x \in X}(x-\mu)^2}{|X|}$$
- Why are we squaring here?
  - Squaring has one virtue: it means we don't care whether the difference is positive or negative, and we shouldn't, since it doesn't matter which side of the mean a value is on. But if that were all we wanted, we could take the absolute value.
  - The other thing squaring does is give outliers extra emphasis, because we are squaring the distance from the mean. You can consider that good or bad, but it's worth knowing it's a fact.
Standard Deviation:
- The most important thing to remember is that the standard deviation all by itself is a meaningless number; you always have to think about it in the context of the mean.
- If I tell you the standard deviation is 100 and ask whether it's big or small, you have no idea.
- If the mean is 100 and the standard deviation is 100, it's pretty big. If the mean is a billion and the standard deviation is 100, it's pretty small.
- So you never want to look at just the standard deviation, as the snippet below illustrates.
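A tiny illustration of that point (the numbers are made up): the same standard deviation, expressed as a fraction of the mean, is huge in one context and negligible in the other.

```python
# The same standard deviation reads very differently depending on the mean
for mean, std in ((100, 100), (1000000000, 100)):
    print('mean =', mean, ', std =', std,
          ', std as a fraction of the mean =', std/mean)
```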
```python
def getMeanAndStd(X):
    mean = sum(X)/float(len(X))
    tot = 0.0
    for x in X:
        tot += (x - mean)**2
    std = (tot/len(X))**0.5
    return mean, std
```
Confidence Levels and Intervals
- Instead of estimating an unknown parameter by a single value (e.g., the mean of a set of trials), a confidence interval provides a range that is likely to contain the unknown value, and a confidence level that the unknown value lies within that range.
- "The return on betting a pocket 10,000 times in European roulette is -3.3%. The margin of error is +/- 3.5% with a 95% level of confidence."
How do we compute a confidence interval? Most of the time we compute it using the empirical rule; a quick numerical check of the rule appears below.
- The empirical rule says that if I take the data, find the mean, and compute the standard deviation, then, under some assumptions (a normal distribution):
  - ~68% of the data lies within one standard deviation of the mean
  - ~95% of the data lies within 1.96 standard deviations of the mean
  - ~99.7% of the data lies within 3 standard deviations of the mean
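Here is that check on normally distributed data (a sketch, using Python's random.gauss to generate the samples):

```python
import random

random.seed(0)
data = [random.gauss(0, 1) for _ in range(100000)]
mean = sum(data)/len(data)
std = (sum((x - mean)**2 for x in data)/len(data))**0.5

for numStds in (1, 1.96, 3):
    frac = sum(1 for x in data if abs(x - mean) <= numStds*std)/len(data)
    print('Within', numStds, 'std devs of the mean:',
          round(100*frac, 1), '%')
```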
```python
resultDict = {}
games = (FairRoulette, EuRoulette, AmRoulette)
for G in games:
    resultDict[G().__str__()] = []
for numSpins in (100, 1000, 10000):
    print('\nSimulate betting a pocket for', numTrials,
          'trials of', numSpins, 'spins each')
    for G in games:
        pocketReturns = findPocketReturn(G(), 20,
                                         numSpins, False)
        mean, std = getMeanAndStd(pocketReturns)
        resultDict[G().__str__()].append((numSpins,
                                          100*mean,
                                          100*std))
        print('Exp. return for', G(), '=',
              str(round(100*mean, 3))
              + '%,', '+/- ' + str(round(100*1.96*std, 3))
              + '% with 95% confidence')
```
Assumptions Underlying Empirical Rule
- The mean estimation error is zero
- The distribution of the error in the estimates is normal