User:Timothee Flutre/Notebook/Postdoc/2011/12/14
==Learn about mixture models and the EM algorithm==

''(Caution, this is my own quick-and-dirty tutorial; see the references at the end for presentations by professional statisticians.)''
* '''Motivation and examples''': a large part of any scientific activity is about measuring things, in other words collecting data, and it is not uncommon to collect ''heterogeneous'' data. For instance, we measure the height of individuals without recording their gender, or we measure the expression levels of a gene in several individuals without recording which ones are healthy and which ones are sick, etc. It therefore seems natural to say that the samples come from a mixture of clusters. The aim is then to recover from the data, i.e. to infer, (i) the values of the parameters of the probability distribution of each cluster, and (ii) which cluster each sample comes from.
Revision as of 09:48, 29 December 2011
Denoting by <math>g(x_i/\mu_k,\sigma_k)</math> the Normal density with mean <math>\mu_k</math> and standard deviation <math>\sigma_k</math>, and by <math>w_k</math> the mixing weights, the log-likelihood of the data is:

<math>l(\theta) = \sum_{i=1}^N \log f(x_i/\theta) = \sum_{i=1}^N \log \left( \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi} \sigma_k} \exp \left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right)^2 \right) \right)</math>
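As a quick numerical check of this formula, the log-likelihood is easy to evaluate in R; here is a minimal sketch (the function name is mine, and it uses the built-in <code>dnorm</code> for the Normal density):

```r
#' Log-likelihood of univariate data under a mixture of Normals
#' (a sketch to check the formula numerically; the function name is mine)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.probas Kx1 vector of mixing probas
GetLogLikelihood <- function(data, mus, sigmas, mix.probas){
  sum(sapply(data, function(x){
    log(sum(mix.probas * dnorm(x, mean=mus, sd=sigmas)))
  }))
}
```

With a single component (K=1, w_1=1) it reduces to the usual Normal log-likelihood, which is a convenient sanity check.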
To maximize the log-likelihood, we derive it with respect to each parameter, starting with <math>\mu_k</math>:

<math>\frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{1}{f(x_i/\theta)} \frac{\partial f(x_i/\theta)}{\partial \mu_k}</math>

As we derive with respect to <math>\mu_k</math>, all the other means <math>\mu_l</math> with <math>l \ne k</math> are constant, and thus disappear:

<math>\frac{\partial f(x_i/\theta)}{\partial \mu_k} = w_k \frac{\partial g(x_i/\mu_k,\sigma_k)}{\partial \mu_k}</math>

And finally:

<math>\frac{\partial g(x_i/\mu_k,\sigma_k)}{\partial \mu_k} = \frac{x_i - \mu_k}{\sigma_k^2} g(x_i/\mu_k,\sigma_k)</math>

Once we put all together, we end up with:

<math>\frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{1}{\sigma_k^2} \frac{w_k g(x_i/\mu_k,\sigma_k)}{\sum_{l=1}^K w_l g(x_i/\mu_l,\sigma_l)} (x_i - \mu_k) = \sum_{i=1}^N \frac{1}{\sigma_k^2} p(k/i) (x_i - \mu_k)</math>

where <math>p(k/i)</math> is a shorthand for the membership probability <math>P(z_i=k/x_i,\theta)</math>, i.e. the probability that observation <math>i</math> belongs to cluster <math>k</math>. By convention, we note <math>\hat{\mu_k}</math> the maximum-likelihood estimate of <math>\mu_k</math>:

<math>\left. \frac{\partial l(\theta)}{\partial \mu_k} \right|_{\mu_k=\hat{\mu_k}} = 0</math>

Therefore, we finally obtain:

<math>\hat{\mu_k} = \frac{\sum_{i=1}^N p(k/i) x_i}{\sum_{i=1}^N p(k/i)}</math>

By doing the same kind of algebra, we also obtain the ML estimate of the standard deviation of each cluster:

<math>\hat{\sigma_k} = \sqrt{\frac{\sum_{i=1}^N p(k/i) (x_i - \hat{\mu_k})^2}{\sum_{i=1}^N p(k/i)}}</math>
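These two estimates, together with the usual update of the mixing proportions, <math>\hat{w_k} = \frac{1}{N} \sum_{i=1}^N p(k/i)</math>, are what the M step of the EM algorithm computes. Here is a minimal R sketch, assuming the NxK matrix of membership probabilities computed in the E step (function and argument names are mine):

```r
#' Return ML estimates of the parameters given the membership probabilities
#' (a sketch of the M step; function and argument names are mine)
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix of P(zi=k/xi,theta)
Mstep <- function(data, membership.probas){
  K <- ncol(membership.probas)
  sums <- colSums(membership.probas)  # sum_i p(k/i), for each k
  mus <- as.vector(t(membership.probas) %*% data) / sums  # weighted means
  sigmas <- sapply(1:K, function(k){
    sqrt(sum(membership.probas[,k] * (data - mus[k])^2) / sums[k])
  })
  mix.probas <- sums / length(data)  # new mixing proportions
  return(list(mus=mus, sigmas=sigmas, mix.probas=mix.probas))
}
```

With hard (0/1) memberships, this reduces to the per-cluster sample means and standard deviations, as expected.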
#' Generate univariate observations from a mixture of Normals
#'
#' @param K number of components
#' @param N number of observations
GetUnivariateSimulatedData <- function(K=2, N=100){
  mus <- seq(0, 6*(K-1), 6)
  sigmas <- runif(n=K, min=0.5, max=1.5)
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))
  ns <- c(tmp, N - sum(tmp))
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus,
              sigmas=sigmas, mix.probas=ns/N))
}
#' Return probas of latent variables given data and parameters from previous iteration
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.probas
Estep <- function(data, params){
  GetMembershipProbas(data, params$mus, params$sigmas, params$mix.probas)
}

#' Return the membership probabilities P(zi=k/xi,theta)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.probas Kx1 vector of mixing probas P(zi=k/theta)
#' @return NxK matrix of membership probas
GetMembershipProbas <- function(data, mus, sigmas, mix.probas){
  N <- length(data)
  K <- length(mus)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.proba){
      mix.proba * GetUnivariateNormalDensity(x, mu, sigma)},
      mus, sigmas, mix.probas)))
    unlist(Map(function(mu, sigma, mix.proba){
      mix.proba * GetUnivariateNormalDensity(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.probas[-K]))
  })), ncol=K-1, byrow=TRUE)
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}

#' Univariate Normal density
GetUnivariateNormalDensity <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
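To see how the E and M steps fit together, here is a compact, self-contained sketch of the full EM loop on a toy two-component data set. It re-implements both steps with the built-in <code>dnorm</code> rather than calling the functions above; the data, initial guesses and all names are mine:

```r
# Self-contained sketch of the EM loop for a two-component Normal mixture
set.seed(1859)
data <- c(rnorm(50, mean=0, sd=1), rnorm(50, mean=6, sd=1))
K <- 2
mus <- c(-1, 7)            # initial guesses for the means
sigmas <- c(1, 1)          # initial guesses for the std deviations
mix.probas <- rep(1/K, K)  # initial guesses for the mixing proportions

for (iter in 1:50) {
  # E step: NxK matrix of membership probas p(k/i)
  probas <- sapply(1:K, function(k){
    mix.probas[k] * dnorm(data, mean=mus[k], sd=sigmas[k])
  })
  probas <- probas / rowSums(probas)

  # M step: the ML estimates derived in the text
  sums <- colSums(probas)
  mus <- as.vector(t(probas) %*% data) / sums
  sigmas <- sapply(1:K, function(k){
    sqrt(sum(probas[,k] * (data - mus[k])^2) / sums[k])
  })
  mix.probas <- sums / length(data)
}

mus  # should end up close to the true means, 0 and 6
```

Since the two clusters are well separated here, a few iterations suffice; in harder settings one would monitor the log-likelihood and stop when it no longer increases.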