Archive for the ‘probability’ Category

## Probability manipulations with Mathematica…

There comes a time when a statistician needs to do some analytic calculations. There are more than a few tools for the job, but I usually prefer Mathematica or Maple. Today, I’m gonna use Mathematica for a simple demonstration.

Let’s base this example on the $U(2 \theta _1-\theta _2\leq x\leq 2 \theta _1+\theta _2)$ distribution.

pfun = PDF[UniformDistribution[{2*Subscript[θ, 1] - Subscript[θ, 2],
2*Subscript[θ, 1] + Subscript[θ, 2]}], x]

$\begin{cases} \frac{1}{2 \theta _2} & 2 \theta _1-\theta _2\leq x\leq 2 \theta _1+\theta _2 \\ 0 & \text{True} \end{cases}$

One of the more tedious calculations is the characteristic function (or, equivalently, the moment generating function evaluated at an imaginary argument). This is straightforward to derive.

cfun=CharacteristicFunction[UniformDistribution[
{2*Subscript[θ, 1]-Subscript[θ, 2],2*Subscript[θ, 1]+Subscript[θ, 2]}],x]

$-\frac{i \left(-e^{i x \left(2 \theta _1-\theta _2\right)}+e^{i x \left(2 \theta _1+\theta _2\right)}\right)}{2 x \theta _2}$.

The Table[] command calculates for us the raw moments for our distribution.

Table[Limit[D[cfun, {x, n}], x -> 0]/I^n, {n, 4}]

$\left\{2 \theta _1,\frac{1}{3} \left(12 \theta _1^2+\theta _2^2\right),2 \theta _1 \left(4 \theta _1^2+\theta _2^2\right),16 \theta _1^4+8 \theta _1^2 \theta _2^2+\frac{\theta _2^4}{5}\right\}$.
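As a sanity check, the first two raw moments can be verified by Monte Carlo. A quick Python sketch, using arbitrary illustrative values $\theta_1 = 2.5$, $\theta_2 = 5$ (these are not from the post):

```python
import random

# Monte Carlo check of the first two raw moments of U(2*t1 - t2, 2*t1 + t2),
# with illustrative parameter values t1 = 2.5, t2 = 5.
t1, t2 = 2.5, 5.0
lo, hi = 2*t1 - t2, 2*t1 + t2

random.seed(1)
draws = [random.uniform(lo, hi) for _ in range(200000)]
m1 = sum(draws) / len(draws)                  # should approach 2*t1
m2 = sum(d*d for d in draws) / len(draws)     # should approach (12*t1**2 + t2**2)/3
```

With 200,000 draws both sample moments land close to the closed-form expressions from the Table[] output above.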

Calculate the sample statistics.

T=List[8.23,6.9,1.05,4.8,2.03,6.95];
{Mean[T],Variance[T]}

$\{4.99333,8.46171\}$.

Now, we can use a simple moment matching technique to get estimates for the parameters.

Solve[{Mean[T] - 2*Subscript[θ, 1] == 0,
-(2*Subscript[θ, 1])^2 + 1/3 (12 Subscript[θ, 1]^2 + Subscript[θ, 2]^2) -
Variance[T] == 0}, {Subscript[θ, 2], Subscript[θ, 1]}]

$\left\{\left\{\theta _1\to 2.49667,\theta _2\to -5.03836\right\},\left\{\theta _1\to 2.49667,\theta _2\to 5.03836\right\}\right\}$.
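The same estimates also follow in closed form, since the mean is $2\theta_1$ and the variance is $\theta_2^2/3$. A small Python sketch reproducing the Solve[] output:

```python
import math

# Moment matching in closed form: E[X] = 2*theta1, Var[X] = theta2**2 / 3.
T = [8.23, 6.9, 1.05, 4.8, 2.03, 6.95]
n = len(T)
mean = sum(T) / n
var = sum((t - mean)**2 for t in T) / (n - 1)   # sample variance, as Variance[] in Mathematica

theta1 = mean / 2
theta2 = math.sqrt(3 * var)   # keep the non-negative root, since theta2 >= 0
```

This gives the positive-root solution directly, matching the second pair returned by Solve[].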

Check the admissible values of $\theta _2$.

Reduce[2 Subscript[θ, 1]-Subscript[θ, 2]<=2 Subscript[θ, 1]+Subscript[θ, 2],
Subscript[θ, 2]]

$\theta _1\in \text{Reals}\&\&\theta _2\geq 0 .$

Then, $\left\{\left\{\theta _1\to 2.49667,\theta _2\to 5.03836\right\}\right\}$.

Categories: probability, statistics

## A von Mises variate…

Inspired by an email that arrived after the previous post on random generation, the following question arose:

How to draw random variates from the Von Mises distribution?

First of all, let’s check the pdf of this distribution: it is $f(x)=\frac{e^{b \cos (x-a)}}{2 \pi I_0(b)}$, for $-\pi \leq x\leq \pi$, where $I_0$ is the modified Bessel function of the first kind.

Ok, I admit that Bessel functions can be a bit frightening, but there is a workaround. The solution is a Metropolis algorithm simulation: it is not necessary to know the normalizing constant, because it cancels in the computation of the acceptance ratio. The following code is adapted from James Gentle’s notes on Mathematical Statistics.

n <- 1000
x <- rep(NA, n)
a <- 1      # half-width of the symmetric uniform proposal
b <- 3      # concentration parameter of the von Mises distribution
yi <- 3     # starting value of the chain
x[1] <- yi

i <- 1
while (i < n) {
  i <- i + 1
  yip1 <- yi + a * (2 * runif(1) - 1)     # propose a move
  if (yip1 < pi & yip1 > -pi) {
    # accept with probability min(1, exp(b * (cos(yip1) - cos(yi))))
    if (exp(b * (cos(yip1) - cos(yi))) > runif(1)) yi <- yip1
  }
  x[i] <- yi                              # store current state (repeated on rejection)
}
hist(x, probability = TRUE, fg = gray(0.7), bty = "7")
lines(density(x, na.rm = TRUE), col = "red", lwd = 2)
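For cross-checking, here is a hypothetical Python port of the same Metropolis sampler (mean direction fixed at 0, concentration $b = 3$; the step width and seed are arbitrary choices):

```python
import math
import random

def rvonmises(n, b=3.0, step=1.0, seed=42):
    """Metropolis sampler for the von Mises density exp(b*cos(y)) / (2*pi*I0(b)).
    The Bessel normalizing constant cancels in the acceptance ratio."""
    rng = random.Random(seed)
    y = 0.0
    out = []
    for _ in range(n):
        prop = y + step * (2 * rng.random() - 1)   # symmetric uniform proposal
        if -math.pi < prop < math.pi:
            # ratio of unnormalized densities at the proposal and current state
            if math.exp(b * (math.cos(prop) - math.cos(y))) > rng.random():
                y = prop
        out.append(y)                              # current state repeats on rejection
    return out

x = rvonmises(5000)
```

With $b = 3$ the chain stays well inside $(-\pi, \pi)$ and concentrates around the mean direction 0, as expected.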

Categories: probability, statistics

## π day!

It’s π-day today, so we’re gonna have a little fun with Buffon’s needle and, of course, R. A well-known approximation to the value of $\pi$ comes from the experiment that Buffon performed, dropping a needle of length $l$ onto a floor ruled with parallel lines a distance $t$ apart. What I do next is simply copy the function estPi from the following file and use an ergodic sample plot… Lame, huh?

estPi <- function(n, l = 1, t = 2) {
  m <- 0
  for (i in 1:n) {
    x <- runif(1, min = 0, max = t/2)        # distance from needle centre to nearest line
    theta <- runif(1, min = 0, max = pi/2)   # acute angle between needle and lines
    if (x < l/2 * sin(theta)) {
      m <- m + 1                             # the needle crosses a line
    }
  }
  return(2*l*n/(t*m))
}

So, an estimate would be…
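A quick Python port of estPi (a sketch under the same setup: needle length $l$, line spacing $t$, and a hypothetical seed for reproducibility) gives a ballpark figure:

```python
import math
import random

def est_pi(n, l=1.0, t=2.0, seed=7):
    """Buffon's needle: a needle of length l crosses one of the parallel
    lines (spacing t) with probability 2*l/(t*pi); invert to estimate pi."""
    rng = random.Random(seed)
    m = 0
    for _ in range(n):
        x = rng.uniform(0, t / 2)             # centre-to-nearest-line distance
        theta = rng.uniform(0, math.pi / 2)   # acute angle with the lines
        if x < (l / 2) * math.sin(theta):
            m += 1
    return 2 * l * n / (t * m)

estimate = est_pi(100000)
```

With 100,000 drops the estimate typically lands within a few hundredths of $\pi$.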
A gambler plays the following game. He starts with $r$ dollars, and is trying to end up with $\alpha$ dollars. At each go he chooses an integer $s$ between 1 and $\min(r,\alpha-r)$ and then tosses a fair coin. If the coin comes up heads, then he wins $s$ dollars, and if it comes up tails then he loses $s$ dollars.
The game finishes if he runs out of money (in which case he loses) or reaches $\alpha$ dollars (in which case he wins). Prove that whatever strategy the gambler adopts (that is, however he chooses each stake based on what has happened up to that point), the probability that the game finishes is 1 and the probability that the gambler wins is $r/\alpha$.
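The standard route is a martingale argument: the fortune is a martingale whatever the stakes, so optional stopping gives $P(\text{win})\cdot\alpha = r$. A quick Monte Carlo check for one admissible strategy (always staking $s=1$), with hypothetical values $r=3$, $\alpha=8$:

```python
import random

def play(r, alpha, rng):
    """One game with unit stakes (s = 1 is always in 1..min(r, alpha - r))."""
    while 0 < r < alpha:
        s = 1
        r += s if rng.random() < 0.5 else -s   # fair coin flip
    return r == alpha                          # True if the gambler wins

rng = random.Random(0)
r, alpha, trials = 3, 8, 20000
wins = sum(play(r, alpha, rng) for _ in range(trials))
win_rate = wins / trials   # should be close to r/alpha = 0.375
```

The empirical win rate sits close to $r/\alpha$, consistent with the claim; the proof itself covers every strategy, not just this one.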