Archive

Posts Tagged ‘distribution’

A von Mises variate…

Inspired by an email that arrived in response to the previous post on random generation, the following question arose:

How to draw random variates from the Von Mises distribution?

First of all, let's look at the pdf of the distribution: f(x):=\frac{e^{b \cos (x-a)}}{2 \pi I_0(b)}, for -\pi \leq x\leq \pi, where I_0 denotes the modified Bessel function of the first kind of order zero.

OK, I admit that Bessel functions can be a bit frightening, but there is a workaround: a Metropolis algorithm simulation. It is not necessary to know the normalizing constant, because it cancels in the computation of the acceptance ratio. The following code is adapted from James Gentle's notes on Mathematical Statistics.

set.seed(1)            # for reproducibility
n <- 1000              # length of the chain
x <- rep(NA, n)
d <- 1                 # half-width of the uniform proposal
b <- 3                 # concentration parameter of the target
yi <- 0                # starting value
x[1] <- yi

for (i in 2:n) {
	yip1 <- yi + runif(1, -d, d)    # symmetric random-walk proposal
	# accept with probability min(1, f(yip1)/f(yi));
	# the Bessel normalizing constant cancels in the ratio
	if (yip1 > -pi && yip1 < pi &&
			exp(b * (cos(yip1) - cos(yi))) > runif(1)) {
		yi <- yip1
	}
	x[i] <- yi                      # record the current state
}

hist(x, probability = TRUE, fg = gray(0.7), bty = "7")
lines(density(x), col = "red", lwd = 2)
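
Since base R actually provides besselI, we can also overlay the exact density on the histogram as a sanity check for the sampler (location 0 and concentration b = 3, matching the code above):

# exact von Mises density; besselI(x, nu) is base R's
# modified Bessel function of the first kind
curve(exp(3 * cos(x)) / (2 * pi * besselI(3, 0)),
      from = -pi, to = pi, add = TRUE, col = "blue", lty = 2, lwd = 2)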

The distribution of rho…

There was a post here about obtaining non-standard p-values for testing the correlation coefficient. The R library SuppDists deals with this problem efficiently.

library(SuppDists)

plot(function(x)dPearson(x,N=23,rho=0.7),-1,1,ylim=c(0,10),ylab="density")
plot(function(x)dPearson(x,N=23,rho=0),-1,1,add=TRUE,col="steelblue")
plot(function(x)dPearson(x,N=23,rho=-.2),-1,1,add=TRUE,col="green")
plot(function(x)dPearson(x,N=23,rho=.9),-1,1,add=TRUE,col="red");grid()

legend("topleft", col=c("black","steelblue","red","green"),lty=1,
		legend=c("rho=0.7","rho=0","rho=-.2","rho=.9"))</pre>

This is how it looks:


Now, let's construct a table of critical values for some (more or less arbitrary) significance levels.

q <- c(.025,.05,.075,.1,.15,.2)
xtabs(qPearson(p=q, N=23, rho = 0, lower.tail = FALSE, log.p = FALSE) ~ q )
# q
#     0.025      0.05     0.075       0.1      0.15       0.2
# 0.4130710 0.3514298 0.3099236 0.2773518 0.2258566 0.1842217

We can calculate p-values as usual too…

1-pPearson(.41307,N=23,rho=0)
# [1] 0.0250003
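
For rho = 0 this agrees with the classical t-transform of the sample correlation, t=r\sqrt{N-2}/\sqrt{1-r^2}\sim t_{N-2}, which gives a quick sanity check of the SuppDists answer:

r <- 0.41307; N <- 23
tstat <- r * sqrt(N - 2) / sqrt(1 - r^2)   # t statistic with N-2 df
pt(tstat, df = N - 2, lower.tail = FALSE)
# approximately 0.025, matching 1 - pPearson above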

In search of a random gamma variate…


One of the most common exercises given in Statistical Computing, Simulation, or related classes is the generation of random numbers from a gamma distribution. At first this might seem straightforward, thanks to the convenient relation between exponential and gamma random variables: it is easy to get a gamma variate using the fact that

X_i \sim \mathrm{Exp}(\lambda) \Rightarrow \sum_{i=1}^{k} X_i \sim \mathrm{Ga}(k,\lambda).

The code to do this is the following

rexp1 <- function(lambda, n) {
  # exponential variates via inversion: if U ~ U(0,1),
  # then -log(U)/lambda ~ Exp(lambda)
  u <- runif(n)
  -log(u) / lambda
}

rgamma1 <- function(k, lambda) {
  # a Ga(k, lambda) variate as the sum of k iid Exp(lambda) variates
  sum(rexp1(lambda, k))
}
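
As a quick check, we can compare the generator against R's built-in rgamma; shape k = 5 and rate lambda = 2 below are chosen arbitrarily for illustration:

set.seed(42)
xs <- replicate(10000, rgamma1(5, 2))   # 10000 variates from our generator
qqplot(xs, rgamma(10000, shape = 5, rate = 2),
       xlab = "rgamma1", ylab = "rgamma")
abline(0, 1, col = "red")               # the points should hug this line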

Unfortunately, this works only for the case k\in \mathbb{N}.
Read more…

A normal philosophy…

The following was sent to me by email. It originates with Youden.

                                 THE
                                NORMAL
                             LAW OF ERROR
                           STANDS OUT IN THE
                         EXPERIENCE OF MANKIND
                        AS ONE OF  THE BROADEST
                       GENERALIZATIONS OF NATURAL
                     PHILOSOPHY . IT SERVES AS THE
                   GUIDING INSTRUMENT IN RESEARCHES
                IN THE PHYSICAL AND SOCIAL SCIENCES AND
               IN MEDICINE AGRICULTURE AND ENGINEERING .
          IT IS AN INDISPENSABLE TOOL FOR THE ANALYSIS AND THE
INTERPRETATION OF THE BASIC DATA OBTAINED BY OBSERVATION AND EXPERIMENT

–W.J. Youden

Youden is one of the truly inspiring statisticians to me.


(Im)Perfect Detectors…

Reliably detecting an event is surely something to worry about in applied science. One of the main models used is the perfect-imperfect detector, with an obvious underlying Poisson process… Two detectors count events generated by a source (e.g., a photon device). The first one detects the events efficiently (perfectly), whereas the other lacks efficiency.

Then X~Poi(λ) and Y~Poi(λp), where p is the detection probability (efficiency) of the imperfect detector, and estimation is (almost) trivial. Assume that m, r are the total counts of X, Y respectively. Furthermore, let k be the total number of observations and n the number of observations of X.

The mle of λ is m/n as usual. What about p?

As a function of p, the likelihood is proportional to \exp\left[ -\lambda p\left( k-n \right) \right]\left( \lambda p \right)^{r}, so taking logarithms, differentiating with respect to p, and plugging in \hat{\lambda}=m/n gives the mle \hat{p}=\frac{nr}{m(k-n)}.

The real question is: what if \frac{nr}{m(k-n)}>1?
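
A small simulation sketch (the parameter values below are only illustrative) shows that the unconstrained mle can indeed exceed 1 when p is close to 1:

set.seed(3)
lambda <- 2; p <- 0.95               # true rate and detection probability
k <- 40; n <- 20                     # k total observations, n on the perfect detector
m <- sum(rpois(n, lambda))           # counts from the perfect detector
r <- sum(rpois(k - n, lambda * p))   # counts from the imperfect detector
n * r / (m * (k - n))                # mle of p; nothing keeps this below 1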

Look it up… S. S. Chitgopekar, “A note on the estimation of the Poisson parameter,” International Journal of Mathematics and Mathematical Sciences, vol. 8, no. 1, pp. 193-196, 1985. [pdf]