User:Timothee Flutre/Notebook/Postdoc/2011/12/14


Learn about mixture models and the EM algorithm

(Caution: this is my own quick-and-dirty tutorial; see the references at the end for presentations by professional statisticians.)

  • Motivation and examples: a large part of any scientific activity is about measuring things, in other words collecting data, and it is not uncommon to collect heterogeneous data. For instance, we measure the height of individuals without recording their gender, we measure the expression levels of a gene in several individuals without recording which ones are healthy and which ones are sick, etc. It therefore seems natural to say that the samples come from a mixture of clusters. The aim is then to recover from the data, i.e. to infer, (i) the values of the parameters of the probability distribution of each cluster, and (ii) which cluster each sample comes from.
  • Data: we have N observations, noted [math]\displaystyle{ X = (x_1, x_2, ..., x_N) }[/math]. For the moment, we suppose that each observation [math]\displaystyle{ x_i }[/math] is univariate, i.e. corresponds to a single number.
  • Hypotheses and aim: let's assume that the data are heterogeneous and that they can be partitioned into [math]\displaystyle{ K }[/math] clusters (see examples above). This means that we expect a subset of the observations to come from cluster [math]\displaystyle{ k=1 }[/math], another subset to come from cluster [math]\displaystyle{ k=2 }[/math], and so on.
  • Model: technically, we say that the observations were generated according to a density function [math]\displaystyle{ f }[/math]. More precisely, this density is itself a mixture of densities, one per cluster. In our case, we will assume that each cluster [math]\displaystyle{ k }[/math] corresponds to a Normal distribution, here noted [math]\displaystyle{ g }[/math], with mean [math]\displaystyle{ \mu_k }[/math] and standard deviation [math]\displaystyle{ \sigma_k }[/math]. Moreover, as we don't know for sure which cluster a given observation comes from, we define the mixing probability [math]\displaystyle{ w_k }[/math] to be the probability that any given observation comes from cluster [math]\displaystyle{ k }[/math]. As a result, we have the following list of parameters: [math]\displaystyle{ \theta=(w_1,...,w_K,\mu_1,...,\mu_K,\sigma_1,...,\sigma_K) }[/math]. Finally, for a given observation [math]\displaystyle{ x_i }[/math], we can write the model [math]\displaystyle{ f(x_i/\theta) = \sum_{k=1}^{K} w_k g(x_i/\mu_k,\sigma_k) }[/math], with [math]\displaystyle{ g(x_i/\mu_k,\sigma_k) = \frac{1}{\sqrt{2\pi} \sigma_k} \exp\left(-\frac{1}{2}\left(\frac{x_i - \mu_k}{\sigma_k}\right)^2\right) }[/math].
  • Likelihood: this corresponds to the probability of obtaining the data given the parameters: [math]\displaystyle{ L(\theta) = P(X/\theta) }[/math]. We assume that the observations are independent, i.e. they were generated independently, whether they are from the same cluster or not. Therefore we can write: [math]\displaystyle{ L(\theta) = \prod_{i=1}^N f(x_i/\theta) }[/math].
  • Estimation: now we want to find the values of the parameters that maximize the likelihood. This reduces to (i) differentiating the likelihood with respect to each parameter, and then (ii) finding the value at which each partial derivative is zero. Instead of maximizing the likelihood, we maximize its logarithm, noted [math]\displaystyle{ l(\theta) }[/math]. It gives the same solution because the log is monotonically increasing, but it's easier to differentiate the log-likelihood than the likelihood itself. Here is the whole formula (a small R sketch to compute it is given right after):

[math]\displaystyle{ l(\theta) = \sum_{i=1}^N \log f(x_i/\theta) = \sum_{i=1}^N \log \left( \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi} \sigma_k} \exp\left(-\frac{1}{2}\left(\frac{x_i - \mu_k}{\sigma_k}\right)^2\right) \right) }[/math]
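This log-likelihood is straightforward to compute in R with dnorm. The small helper below is only a sketch to make the formula concrete; its name (GetLogLikelihood) and interface are mine, and it is reused in the EM loop sketched at the end:
#' Log-likelihood of a univariate mixture of Normals (sketch, for illustration)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.probas Kx1 vector of mixing probas P(zi=k/theta)
GetLogLikelihood <- function(data, mus, sigmas, mix.probas){
  sum(sapply(data, function(x){
    log(sum(mix.probas * dnorm(x=x, mean=mus, sd=sigmas)))
  }))
}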

  • Latent variables: here it's worth noting that, although everything seems fine, a crucial piece of information is missing: we aim at finding the parameters defining the mixture, but we don't know which cluster each observation comes from... That's why we need to introduce the following N latent variables [math]\displaystyle{ Z_1,...,Z_i,...,Z_N }[/math], one for each observation, such that [math]\displaystyle{ Z_i=k }[/math] means that [math]\displaystyle{ x_i }[/math] belongs to cluster [math]\displaystyle{ k }[/math]. Thanks to this, we can reinterpret the mixing probabilities: [math]\displaystyle{ w_k = P(Z_i=k/\theta) }[/math]. Moreover, we can now define the membership probabilities, one for each observation: [math]\displaystyle{ P(Z_i=k/x_i,\theta) = \frac{w_k g(x_i/\mu_k,\sigma_k)}{\sum_{l=1}^K w_l g(x_i/\mu_l,\sigma_l)} }[/math]. We will note these membership probabilities [math]\displaystyle{ p(k/i) }[/math] as they will play a big role in the EM algorithm below. Indeed, we don't know the values taken by the latent variables, so we will have to infer their probabilities from the data. Introducing the latent variables corresponds to what is called the "missing data formulation" of the mixture problem.
  • Technical details: only a few calculus rules are required, all at high-school level. Let's start by finding the maximum-likelihood estimate of the mean of each cluster:

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{1}{f(x_i/\theta)} \frac{\partial f(x_i/\theta)}{\partial \mu_k} }[/math]

As we differentiate with respect to [math]\displaystyle{ \mu_k }[/math], all the other means [math]\displaystyle{ \mu_l }[/math] with [math]\displaystyle{ l \ne k }[/math] are constant, and thus disappear:

[math]\displaystyle{ \frac{\partial f(x_i/\theta)}{\partial \mu_k} = w_k \frac{\partial g(x_i/\mu_k,\sigma_k)}{\partial \mu_k} }[/math]

And finally:

[math]\displaystyle{ \frac{\partial g(x_i/\mu_k,\sigma_k)}{\partial \mu_k} = \frac{x_i - \mu_k}{\sigma_k^2} g(x_i/\mu_k,\sigma_k) }[/math]

Putting it all together, we end up with:

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{1}{\sigma_k^2} \frac{w_k g(x_i/\mu_k,\sigma_k)}{\sum_{l=1}^K w_l g(x_i/\mu_l,\sigma_l)} (x_i - \mu_k) = \sum_{i=1}^N \frac{1}{\sigma_k^2} p(k/i) (x_i - \mu_k) }[/math]

By convention, we denote by [math]\displaystyle{ \hat{\mu_k} }[/math] the maximum-likelihood estimate of [math]\displaystyle{ \mu_k }[/math]:

[math]\displaystyle{ \left. \frac{\partial l(\theta)}{\partial \mu_k} \right|_{\mu_k=\hat{\mu_k}} = 0 }[/math]

Therefore, we finally obtain:

[math]\displaystyle{ \hat{\mu_k} = \frac{\sum_{i=1}^N p(k/i) x_i}{\sum_{i=1}^N p(k/i)} }[/math]
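Note that this is simply a weighted average of the observations, with weights given by the membership probabilities; in R, assuming p.ki holds the vector of [math]\displaystyle{ p(k/i) }[/math] for a given cluster and data the observations (both names hypothetical), it reads:
mu.k.hat <- sum(p.ki * data) / sum(p.ki)  # i.e. weighted.mean(data, w=p.ki)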

By doing the same kind of algebra, we differentiate the log-likelihood w.r.t. [math]\displaystyle{ \sigma_k }[/math]:

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial \sigma_k} = \sum_{i=1}^N p(k/i) (\frac{-1}{\sigma_k} + \frac{(x_i - \mu_k)^2}{\sigma_k^3}) }[/math]

And then we obtain the ML estimates for the standard deviation of each cluster:

[math]\displaystyle{ \hat{\sigma_k} = \sqrt{\frac{\sum_{i=1}^N p(k/i) (x_i - \mu_k)^2}{\sum_{i=1}^N p(k/i)}} }[/math]

The partial derivative of [math]\displaystyle{ l(\theta) }[/math] w.r.t. [math]\displaystyle{ w_k }[/math] is trickier because of the constraint [math]\displaystyle{ \sum_{k=1}^K w_k = 1 }[/math]. Handling this constraint with a Lagrange multiplier (which turns out to equal [math]\displaystyle{ -N }[/math]) and multiplying by [math]\displaystyle{ w_k }[/math] leads to the condition:

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial w_k} = \sum_{i=1}^N (p(k/i) - w_k) }[/math]

Finally, here are the ML estimates for the mixture proportions:

[math]\displaystyle{ \hat{w}_k = \frac{1}{N} \sum_{i=1}^N p(k/i) }[/math]
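These three closed-form updates are easy to translate into R. The Mstep function below is only a sketch (its name and interface are mine, chosen to mirror the Estep function given further below); it takes the observations and the NxK matrix of membership probabilities and returns the updated parameters:
#' Update the parameters given the membership probabilities (sketch, for illustration)
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix of P(zi=k/xi,theta)
#' @return list which components are mus, sigmas and mix.probas
Mstep <- function(data, membership.probas){
  K <- ncol(membership.probas)
  sum.probas <- colSums(membership.probas)  # sum_i p(k/i), one value per cluster
  mus <- sapply(1:K, function(k){
    sum(membership.probas[,k] * data) / sum.probas[k]
  })
  sigmas <- sapply(1:K, function(k){
    sqrt(sum(membership.probas[,k] * (data - mus[k])^2) / sum.probas[k])
  })
  mix.probas <- sum.probas / length(data)  # hat{w}_k = (1/N) sum_i p(k/i)
  return(list(mus=mus, sigmas=sigmas, mix.probas=mix.probas))
}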

  • EM algorithm: starting from initial guesses of the parameters, the algorithm alternates between an E step, where the membership probabilities [math]\displaystyle{ p(k/i) }[/math] are computed given the current parameters, and an M step, where the parameters are updated using the closed-form formulas derived above with the membership probabilities held fixed. Each iteration increases the log-likelihood, and we stop when it no longer changes appreciably (a sketch of the full loop is given after the E-step code below).
  • Simulate data:
#' Generate univariate observations from a mixture of Normals
#'
#' @param K number of components
#' @param N number of observations
GetUnivariateSimulatedData <- function(K=2, N=100){
  mus <- seq(0, 6*(K-1), 6)
  sigmas <- runif(n=K, min=0.5, max=1.5)
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))
  ns <- c(tmp, N - sum(tmp))
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus, sigmas=sigmas,
              mix.probas=ns/N))
}
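For instance, one can generate and quickly inspect a data set as follows (the seed is arbitrary):
set.seed(1859)
sim <- GetUnivariateSimulatedData(K=2, N=100)
hist(sim$obs, breaks=30, main="simulated mixture of 2 Normals", xlab="x")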
  • Implement the E step:
#' Return probas of latent variables given data and parameters from previous iteration
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.probas
Estep <- function(data, params){
  GetMembershipProbas(data, params$mus, params$sigmas, params$mix.probas)
}
#' Return the membership probabilities P(zi=k/xi,theta)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.probas Kx1 vector of mixing probas P(zi=k/theta)
#' @return NxK matrix of membership probas
GetMembershipProbas <- function(data, mus, sigmas, mix.probas){
  N <- length(data)
  K <- length(mus)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.proba){
      mix.proba * GetUnivariateNormalDensity(x, mu, sigma)}, mus, sigmas, mix.probas)))
    unlist(Map(function(mu, sigma, mix.proba){
      mix.proba * GetUnivariateNormalDensity(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.probas[-K]))
  })), ncol=K-1, byrow=TRUE)
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}
#' Univariate Normal density
GetUnivariateNormalDensity <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
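  • Put everything together: the code below is only a rough sketch of the full EM loop, not a definitive implementation; it combines the Estep above with the Mstep and GetLogLikelihood helpers sketched earlier (their names and interfaces are mine), starts from crude initial values, and stops when the log-likelihood barely changes:
#' Run the EM algorithm on univariate data (sketch, for illustration)
#'
#' @param data Nx1 vector of observations
#' @param K number of components
#' @param max.iters maximum number of iterations
#' @param tol stop when the log-likelihood increases by less than this
EMalgo <- function(data, K=2, max.iters=100, tol=1e-6){
  ## crude initialization: means drawn from the data, common std deviation, uniform weights
  params <- list(mus=sample(data, K),
                 sigmas=rep(sd(data), K),
                 mix.probas=rep(1/K, K))
  logliks <- GetLogLikelihood(data, params$mus, params$sigmas, params$mix.probas)
  for(iter in 1:max.iters){
    membership.probas <- Estep(data, params)   # E step: update p(k/i)
    params <- Mstep(data, membership.probas)   # M step: update theta
    logliks <- c(logliks, GetLogLikelihood(data, params$mus,
                                           params$sigmas, params$mix.probas))
    if(abs(logliks[iter+1] - logliks[iter]) < tol)
      break
  }
  return(list(params=params, membership.probas=membership.probas,
              logliks=logliks))
}
For example, on the simulated data from above:
res <- EMalgo(sim$obs, K=2)
res$params$mus  # to be compared with sim$mus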
  • References:
    • tutorial: document from Carlo Tomasi (Duke University)
    • introduction to mixture models: PhD thesis from Matthew Stephens (Oxford, 2000)
    • articles on the Bayesian approach: Diebolt and Robert (1994); Richardson and Green (1997); Jasra, Holmes and Stephens (2005)