User:Timothee Flutre/Notebook/Postdoc/2011/12/14
Learn about mixture models and the EM algorithm

(Caution, this is my own quick-and-dirty tutorial, see the references at the end for presentations by professional statisticians.)
* '''Hypothesis''': let's assume that the data are heterogeneous and that they can be partitioned into <math>K</math> clusters (see examples above). This means that we expect a subset of the observations to come from cluster <math>k=1</math>, another subset to come from cluster <math>k=2</math>, and so on.
* '''Model''': technically, we say that the observations were generated according to a [http://en.wikipedia.org/wiki/Probability_density_function density function] <math>f</math>. More precisely, this density is itself a mixture of densities, one per cluster. In our case, we will assume that each cluster <math>k</math> corresponds to a Normal distribution, whose density is here noted <math>g</math>, with mean <math>\mu_k</math> and standard deviation <math>\sigma_k</math>. Moreover, as we don't know for sure from which cluster a given observation comes, we define the mixture weight <math>w_k</math> to be the probability that any given observation comes from cluster <math>k</math>. As a result, we have the following list of parameters: <math>\theta=(w_1,...,w_K,\mu_1,...,\mu_K,\sigma_1,...,\sigma_K)</math>. Finally, for a given observation <math>x_i</math>, we can write the model:
<math>f(x_i/\theta) = \sum_{k=1}^{K} w_k g(x_i/\mu_k,\sigma_k) = \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi} \sigma_k} \exp \left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right)^2 \right)</math>
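To make this concrete, here is a minimal R sketch of the mixture density above (the function name mix.density and the parameter values are mine, purely for illustration; the notebook's own helper functions appear further down):

## mixture density f(x/theta) of a univariate Gaussian mixture
## (illustrative sketch; parameter values are arbitrary)
mix.density <- function(x, mus, sigmas, mix.weights){
  sum(mix.weights * dnorm(x, mean=mus, sd=sigmas))
}
mix.density(x=1.5, mus=c(0, 6), sigmas=c(1, 1), mix.weights=c(0.4, 0.6))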
* '''Missing data''': it is worth noting that a big piece of information is lacking here. We aim at finding the parameters defining the mixture, but we don't know from which cluster each observation is coming! That's why we need to introduce the following <math>N</math> [http://en.wikipedia.org/wiki/Latent_variable latent variables] <math>Z_1,...,Z_i,...,Z_N</math>, one for each observation, such that <math>Z_i=k</math> means that observation <math>x_i</math> belongs to cluster <math>k</math> ([http://en.wikipedia.org/wiki/Dummy_variable_%28statistics%29 indicators]). This is called the "missing data formulation" of the mixture model. Thanks to this, we can reinterpret the mixture weights: <math>w_k = P(Z_i=k/\theta)</math>. Moreover, we can now define the membership probabilities, one for each observation:

<math>p(k/i) = P(Z_i=k/x_i,\theta) = \frac{w_k g(x_i/\mu_k,\sigma_k)}{\sum_{l=1}^K w_l g(x_i/\mu_l,\sigma_l)}</math>
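As a quick sanity check of this formula, here is a small R sketch computing the membership probabilities of one observation under a two-component mixture (again, the numbers are arbitrary and only meant as an example):

## membership probabilities P(Zi=k/xi,theta) for a single observation
## (illustrative sketch; parameter values are arbitrary)
x <- 1.5
mus <- c(0, 6); sigmas <- c(1, 1); mix.weights <- c(0.4, 0.6)
num <- mix.weights * dnorm(x, mean=mus, sd=sigmas)
p <- num / sum(num)
p          # most of the weight goes to the closer cluster
sum(p)     # equals 1, as probabilities should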
We can now write the complete likelihood of the augmented model (even though we don't need it as such in what follows), where <math>I_k = \{i / Z_i = k\}</math>:

<math>L_{comp}(\theta) = P(X,Z/\theta) = P(X/Z,\theta) P(Z/\theta) = \left( \prod_{k=1}^K \prod_{i \in I_k} g(x_i/\mu_k,\sigma_k) \right) \prod_{i=1}^N P(Z_i/\theta)</math>
And, more useful, here is the incomplete (or marginal) likelihood, assuming all observations are independent:

<math>L_{incomp}(\theta) = P(X/\theta) = \prod_{i=1}^N f(x_i/\theta)</math>
* '''ML estimation''': we want to find the values of the parameters that maximize the likelihood. Instead of maximizing the likelihood itself, we maximize its logarithm, noted <math>l(\theta)</math>: this gives the same solution because the log is monotonically increasing, and it is easier to differentiate the log-likelihood than the likelihood. The maximization then reduces to (i) differentiating the log-likelihood with respect to each parameter, and (ii) finding the value at which each partial derivative is zero. Here is the whole formula for the (incomplete) log-likelihood:

<math>l(\theta) = \sum_{i=1}^N \log(f(x_i/\theta)) = \sum_{i=1}^N \log \left( \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi} \sigma_k} \exp \left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right)^2 \right) \right)</math>
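Since the EM algorithm below monitors convergence through this quantity, here is a short R sketch of how the incomplete log-likelihood can be computed (the name GetLogLikelihood is mine, not from the notebook; dnorm is used instead of the GetUnivariateNormalDensity helper defined further down, to keep the sketch self-contained):

#' Incomplete log-likelihood l(theta) of the data (sketch, not the author's code)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights
GetLogLikelihood <- function(data, mus, sigmas, mix.weights){
  sum(sapply(data, function(x){
    log(sum(mix.weights * dnorm(x, mean=mus, sd=sigmas)))
  }))
}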
* '''MLE analytical formulae''': a few important rules of differentiation are required, but only at a high-school level in maths (see [http://en.wikipedia.org/wiki/Differentiation_%28mathematics%29#Rules_for_finding_the_derivative here]). Let's start by finding the maximum-likelihood estimates of the mean of each cluster:

<math>\frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{\partial}{\partial \mu_k} \log(f(x_i/\theta)) = \sum_{i=1}^N \frac{1}{f(x_i/\theta)} \frac{\partial f(x_i/\theta)}{\partial \mu_k}</math>
As we differentiate with respect to <math>\mu_k</math>, all the other means <math>\mu_l</math> with <math>l \neq k</math> are constant and thus disappear:

<math>\frac{\partial f(x_i/\theta)}{\partial \mu_k} = w_k \frac{\partial g(x_i/\mu_k,\sigma_k)}{\partial \mu_k}</math>
And finally:

<math>\frac{\partial g(x_i/\mu_k,\sigma_k)}{\partial \mu_k} = \frac{x_i - \mu_k}{\sigma_k^2} g(x_i/\mu_k,\sigma_k)</math>
Once we put it all together, we end up with:

<math>\frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{w_k g(x_i/\mu_k,\sigma_k)}{f(x_i/\theta)} \frac{x_i - \mu_k}{\sigma_k^2} = \sum_{i=1}^N p(k/i) \frac{x_i - \mu_k}{\sigma_k^2}</math>
By convention, we note <math>\hat{\mu_k}</math> the maximum-likelihood estimate of <math>\mu_k</math>, i.e. the value at which this partial derivative is zero:

<math>\frac{\partial l(\theta)}{\partial \mu_k} \bigg|_{\mu_k = \hat{\mu_k}} = 0</math>
Therefore, we finally obtain:

<math>\hat{\mu_k} = \frac{\sum_{i=1}^N p(k/i) x_i}{\sum_{i=1}^N p(k/i)}</math>
By doing the same kind of algebra, we differentiate the log-likelihood w.r.t. <math>\sigma_k</math>:

<math>\frac{\partial l(\theta)}{\partial \sigma_k} = \sum_{i=1}^N p(k/i) \left( \frac{(x_i - \mu_k)^2}{\sigma_k^3} - \frac{1}{\sigma_k} \right)</math>
And then we obtain the ML estimates for the standard deviation of each cluster:

<math>\hat{\sigma_k} = \sqrt{\frac{\sum_{i=1}^N p(k/i) (x_i - \hat{\mu_k})^2}{\sum_{i=1}^N p(k/i)}}</math>
The partial derivative of l(θ) w.r.t. w_{k} is tricky. ... <TO DO> ...
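For completeness, here is one standard way to handle it (a sketch of my own, not necessarily the derivation intended above): because the weights are constrained to sum to one, we maximize the log-likelihood under the constraint <math>\sum_{k=1}^K w_k = 1</math> with a [http://en.wikipedia.org/wiki/Lagrange_multiplier Lagrange multiplier] <math>\lambda</math>:

<math>\frac{\partial}{\partial w_k} \left( l(\theta) + \lambda \left( 1 - \sum_{l=1}^K w_l \right) \right) = \sum_{i=1}^N \frac{g(x_i/\mu_k,\sigma_k)}{f(x_i/\theta)} - \lambda = \sum_{i=1}^N \frac{p(k/i)}{w_k} - \lambda = 0</math>

Summing this equation over <math>k</math> and using <math>\sum_{k=1}^K p(k/i) = 1</math> gives <math>\lambda = N</math>, which leads directly to the estimate below.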
Finally, here are the ML estimates for the mixture weights:

<math>\hat{w_k} = \frac{1}{N} \sum_{i=1}^N p(k/i)</math>
#' Generate univariate observations from a mixture of Normals
#'
#' @param K number of components
#' @param N number of observations
GetUnivariateSimulatedData <- function(K=2, N=100){
  mus <- seq(0, 6*(K-1), 6)
  sigmas <- runif(n=K, min=0.5, max=1.5)
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))
  ns <- c(tmp, N - sum(tmp))
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus, sigmas=sigmas,
              mix.weights=ns/N))
}
#' Return probas of latent variables given data and parameters from previous iteration
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
Estep <- function(data, params){
  GetMembershipProbas(data, params$mus, params$sigmas, params$mix.weights)
}

#' Return the membership probabilities P(zi=k/xi,theta)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights w_k=P(zi=k/theta)
#' @return NxK matrix of membership probas
GetMembershipProbas <- function(data, mus, sigmas, mix.weights){
  N <- length(data)
  K <- length(mus)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma)},
      mus, sigmas, mix.weights)))
    unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.weights[-K]))
  })), ncol=K-1, byrow=TRUE)
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}

#' Univariate Normal density
GetUnivariateNormalDensity <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
#' Return ML estimates of parameters
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
Mstep <- function(data, params, membership.probas){
  params.new <- list()
  sum.membership.probas <- apply(membership.probas, 2, sum)
  params.new$mus <- GetMlEstimMeans(data, membership.probas, sum.membership.probas)
  params.new$sigmas <- GetMlEstimStdDevs(data, params.new$mus, membership.probas,
                                         sum.membership.probas)
  params.new$mix.weights <- GetMlEstimMixWeights(data, membership.probas,
                                                 sum.membership.probas)
  return(params.new)
}

#' Return ML estimates of the means (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of means
GetMlEstimMeans <- function(data, membership.probas, sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sum(unlist(Map("*", membership.probas[,k], data))) / sum.membership.probas[k]
  })
}

#' Return ML estimates of the std deviations (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param means Kx1 vector of means
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of std deviations
GetMlEstimStdDevs <- function(data, means, membership.probas, sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sqrt(sum(unlist(Map(function(p_ki, x_i){
      p_ki * (x_i - means[k])^2
    }, membership.probas[,k], data))) / sum.membership.probas[k])
  })
}

#' Return ML estimates of the mixture weights
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of mixture weights
GetMlEstimMixWeights <- function(data, membership.probas, sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    1/length(data) * sum.membership.probas[k]
  })
}
... <TO DO> ...
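The EM driver function itself is still marked as to-do above. Here is a minimal sketch of what it could look like, written only so that the script below runs end to end: the signature is inferred from the call EMalgo(data, params0, 10^(-3), 1000, 1) used below (data, initial parameters, convergence threshold on the log-likelihood, maximum number of iterations, verbosity), it reuses the Estep and Mstep functions above as well as the GetLogLikelihood sketch given earlier, and it is my own sketch, not the author's implementation:

#' Run the EM algorithm (minimal sketch, not the author's implementation)
#'
#' @param data Nx1 vector of observations
#' @param params0 list which components are mus, sigmas and mix.weights
#' @param threshold stop when the log-likelihood gain falls below this value
#' @param max.iters maximum number of iterations
#' @param verbose print progress if > 0
#' @return list with components params (final parameters) and logliks (one value per iteration)
EMalgo <- function(data, params0, threshold=10^(-3), max.iters=1000, verbose=0){
  params <- params0
  logliks <- c()
  loglik.prev <- GetLogLikelihood(data, params$mus, params$sigmas, params$mix.weights)
  for(iter in 1:max.iters){
    membership.probas <- Estep(data, params)          # E step
    params <- Mstep(data, params, membership.probas)  # M step
    loglik <- GetLogLikelihood(data, params$mus, params$sigmas, params$mix.weights)
    logliks <- c(logliks, loglik)
    if(verbose > 0)
      cat(sprintf("iteration %i: log-likelihood = %.4f\n", iter, loglik))
    if(abs(loglik - loglik.prev) < threshold)
      break
    loglik.prev <- loglik
  }
  return(list(params=params, logliks=logliks))
}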
## simulate data
K <- 3
N <- 300
simul <- GetUnivariateSimulatedData(K, N)
data <- simul$obs

## run the EM algorithm
params0 <- list(mus=runif(n=K, min=min(data), max=max(data)),
                sigmas=rep(1, K),
                mix.weights=rep(1/K, K))
res <- EMalgo(data, params0, 10^(-3), 1000, 1)

## check its convergence
plot(res$logliks, xlab="iterations", ylab="log-likelihood",
     main="Convergence of the EM algorithm", type="b")

## plot the data along with the inferred densities
png("mixture_univar_em.png")
hist(data, breaks=30, freq=FALSE, col="grey", border="white", ylim=c(0,0.15),
     main="Histogram of data overlaid with densities inferred by EM")
rx <- seq(from=min(data), to=max(data), by=0.1)
ds <- lapply(1:K, function(k){dnorm(x=rx, mean=res$params$mus[k], sd=res$params$sigmas[k])})
f <- sapply(1:length(rx), function(i){
  res$params$mix.weights[1] * ds[[1]][i] +
  res$params$mix.weights[2] * ds[[2]][i] +
  res$params$mix.weights[3] * ds[[3]][i]
})
lines(rx, f, col="red", lwd=2)
dev.off()

It seems to work well, which was expected as the clusters are well separated from each other...
