
Posts Tagged ‘simulation’

A von Mises variate…

Inspired by an email that arrived along with the previous random generation post, the following question arose:

How do you draw random variates from the von Mises distribution?

First of all, let’s look at the probability density function: f(x) = \frac{e^{b \cos(x-a)}}{2 \pi I_0(b)}, for -\pi \leq x \leq \pi, where I_0 is the modified Bessel function of the first kind of order zero.

OK, I admit that Bessel functions can be a bit frightening, but there is a workaround. The solution is a Metropolis algorithm simulation: we never need to know the normalizing constant, because it cancels in the computation of the acceptance ratio. The code below is adapted from James Gentle’s notes on Mathematical Statistics.
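Concretely, for a current value y and a proposal y', the acceptance ratio of the von Mises target (taking location a = 0, as the code below does) is

\frac{f(y')}{f(y)} = \frac{e^{b \cos y'} / \left( 2\pi I_0(b) \right)}{e^{b \cos y} / \left( 2\pi I_0(b) \right)} = e^{b\left( \cos y' - \cos y \right)},

so the Bessel constant drops out and only the exponential term has to be computed.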

n <- 1000          # length of the chain
x <- rep(NA, n)    # storage for the simulated values
a <- 1             # half-width of the uniform random-walk proposal
b <- 3             # concentration parameter (b in the pdf above)
yi <- 3            # starting value, inside (-pi, pi)
x[1] <- yi

i <- 1
while (i < n) {
  i <- i + 1
  yip1 <- yi + a * (2 * runif(1) - 1)   # propose a new value
  # accept if the proposal stays in (-pi, pi) and passes the Metropolis test;
  # otherwise keep the current value
  if (yip1 < pi && yip1 > -pi &&
      exp(b * (cos(yip1) - cos(yi))) > runif(1)) yi <- yip1
  x[i] <- yi                            # record the current state of the chain
}
hist(x, probability = TRUE, fg = gray(0.7), bty = "7")
lines(density(x), col = "red", lwd = 2)
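As a quick visual check, one can also overlay the exact von Mises density (location 0, concentration 3), using R’s besselI for the normalizing constant; a minimal sketch:

curve(exp(3 * cos(x)) / (2 * pi * besselI(3, 0)),
      from = -pi, to = pi, add = TRUE, col = "blue", lty = 2)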


In search of a random gamma variate…

March 16, 2010

One of the most common exercises given in Statistical Computing, Simulation, or related classes is the generation of random numbers from a gamma distribution. At first this might seem straightforward, thanks to the well-known relation between exponential and gamma random variables. So it’s easy to get a gamma random variate using the fact that

X_i \sim Exp(\lambda) \Rightarrow \sum_{i=1}^{k} X_i \sim Ga(k, \lambda).

The code to do this is the following

# n exponential(lambda) variates by inverting the uniform cdf
rexp1 <- function(lambda, n) {
  u <- runif(n)
  -log(u) / lambda
}

# one gamma(k, lambda) variate as the sum of k iid exponential(lambda) variates
rgamma1 <- function(k, lambda) {
  sum(rexp1(lambda, k))
}
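A quick sanity check (the mean and variance of a Ga(k,\lambda) variate are k/\lambda and k/\lambda^2), with illustrative values k = 4 and \lambda = 2:

g <- replicate(10000, rgamma1(k = 4, lambda = 2))
mean(g)   # should be close to k/lambda = 2
var(g)    # should be close to k/lambda^2 = 1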

Unfortunately, this works only for an integer shape parameter, k \in \mathbb{N}.

Simulating the maximum…

Studying the distributional properties of the maximum has been gaining more attention lately, given the relative failure of the conventional tools that quants relied on before the recession.

Let’s suppose that an insurance company faces the challenge of covering claims that arrive at random times and with random sizes. The arrival process is (surprise!!!) Poisson with intensity \lambda(t) = 100t(1-t) on [0,1]; since this intensity is maximized at t = 1/2 with value 25, the constant \lambda_0 = 25 can serve as the bound for thinning. The claim sizes are iid lognormal with parameters \mu = 0.75t and \sigma = 1, where t is the inter-arrival time between occurrences i-1 and i.

Some Mathematica code from my Graduate Simulation class is the following.

First we set up the intensity function and the vectors recording the total claims (Claimsv) and the maxima (Maxv):

λ0 = 25; t0 = 1;
λ[t_] := 100 (t - t^2); Claimsv = {}; Maxv = {};

The main program is the following

Do[
  (* simulate one path of the arrival process on [0, t0] by thinning *)
  S = {}; t = 0; n = 0;
  X = -Log[Random[]]/λ0; t = t + X;
  While[t < t0,
    If[Random[] < λ[t]/λ0, n = n + 1; S = Append[S, t]]; (* keep the arrival with probability λ(t)/λ0 *)
    X = -Log[Random[]]/λ0; t = t + X];
  Claims = 0; Maxx = 0;
  S[[0]] = 0; (* put 0 at index 0 (the head) so S[[i - 1]] gives 0 when i == 1 *)
  Do[
    rr = Exp[Random[NormalDistribution[0.75*(S[[i]] - S[[i - 1]]), 1]]]; (* lognormal claim size *)
    If[rr > Maxx, Maxx = rr];
    Claims = Claims + rr, {i, 1, n}];
  AppendTo[Claimsv, Claims];
  AppendTo[Maxv, Maxx],
  {10000}]

Plotting the histograms of the simulated distributions of the total claims and of the maximum, we are nearly finished…

<< Histograms`

Histogram[Claimsv]
Histogram[Maxv]
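For comparison, here is a minimal R sketch of the same thinning scheme (the function name simulate_claims is just illustrative):

simulate_claims <- function(lambda0 = 25, t0 = 1) {
  lambda <- function(t) 100 * t * (1 - t)
  S <- numeric(0)
  t <- rexp(1, lambda0)
  while (t < t0) {
    if (runif(1) < lambda(t) / lambda0) S <- c(S, t)  # keep arrival with prob. lambda(t)/lambda0
    t <- t + rexp(1, lambda0)
  }
  gaps <- diff(c(0, S))                               # inter-arrival times
  claims <- exp(rnorm(length(S), mean = 0.75 * gaps, sd = 1))
  c(total = sum(claims), max = if (length(claims) > 0) max(claims) else 0)
}

sims <- replicate(10000, simulate_claims())           # 2 x 10000 matrix of totals and maxima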

to be continued…