User:Timothee Flutre/Notebook/Postdoc/2011/12/14


Learn about mixture models and the EM algorithm

(Caution: this is my own quick-and-dirty tutorial; see the references at the end for presentations by professional statisticians.)

  • Motivation: a large part of any scientific activity is about measuring things, in other words collecting data, and it is not infrequent to collect heterogeneous data. It therefore seems natural to assume that the samples come from a mixture of clusters. The aim is thus to recover from the data, i.e. to infer, (i) how many clusters there are, (ii) what the features of these clusters are, and (iii) from which cluster each sample comes. In the following, I will focus on points (ii) and (iii).
  • Data: we have N observations, noted [math]\displaystyle{ X = (x_1, x_2, ..., x_N) }[/math]. For the moment, we suppose that each observation [math]\displaystyle{ x_i }[/math] is univariate, i.e. that it corresponds to a single number.
  • Assumption: let's assume that the data are heterogeneous and that they can be partitioned into [math]\displaystyle{ K }[/math] clusters (in this document, we suppose that [math]\displaystyle{ K }[/math] is known). This means that we expect a subset of the observations to come from cluster [math]\displaystyle{ k=1 }[/math], another subset to come from cluster [math]\displaystyle{ k=2 }[/math], and so on.
  • Model: technically, we say that the observations were generated according to a density function [math]\displaystyle{ f }[/math]. More precisely, this density is itself a mixture of densities, one per cluster. In our case, we will assume that observations from cluster [math]\displaystyle{ k }[/math] are generated from a Normal distribution, whose density is noted [math]\displaystyle{ \phi }[/math] here, with mean [math]\displaystyle{ \mu_k }[/math] and standard deviation [math]\displaystyle{ \sigma_k }[/math]. Moreover, as we don't know for sure from which cluster a given observation comes, we define the mixture weight [math]\displaystyle{ w_k }[/math] (also called mixing proportion) to be the probability that any given observation comes from cluster [math]\displaystyle{ k }[/math]. As a result, we have the following list of parameters: [math]\displaystyle{ \theta=(w_1,...,w_K,\mu_1,...,\mu_K,\sigma_1,...,\sigma_K) }[/math]. Finally, for a given observation [math]\displaystyle{ x_i }[/math], we can write the model (a tiny numerical illustration follows just after the constraints below):

[math]\displaystyle{ f(x_i|\theta) = \sum_{k=1}^{K} w_k \phi(x_i|\mu_k,\sigma_k) = \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi} \sigma_k} \exp \left(-\frac{1}{2}(\frac{x_i - \mu_k}{\sigma_k})^2 \right) }[/math]

The constraints are: [math]\displaystyle{ \forall k, w_k \gt 0 }[/math] and [math]\displaystyle{ \sum_{k=1}^K w_k = 1 }[/math]
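
To make this concrete, here is a tiny R sketch (with made-up parameter values for [math]\displaystyle{ K=2 }[/math], independent of the code further below) evaluating the mixture density at a single point with the built-in dnorm function:

## evaluate the mixture density f(x|theta) at one point; made-up parameters (K=2)
mus <- c(0, 4); sigmas <- c(1, 1.5); mix.weights <- c(0.3, 0.7)
x <- 1.2
f.x <- sum(mix.weights * dnorm(x, mean=mus, sd=sigmas))
f.x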

  • Missing data: it is worth noting that a big piece of information is lacking here. We aim at finding the parameters defining the mixture, but we don't know from which cluster each observation comes! That's why it is useful to introduce the following N latent variables [math]\displaystyle{ Z_1,...,Z_i,...,Z_N }[/math] (also called hidden or allocation variables), one for each observation, such that [math]\displaystyle{ Z_i=k }[/math] means that observation [math]\displaystyle{ x_i }[/math] belongs to cluster [math]\displaystyle{ k }[/math]. This is called the "missing data formulation" of the mixture model. In fact, it is much easier to work out the equations when defining each [math]\displaystyle{ Z_i }[/math] as a vector of length [math]\displaystyle{ K }[/math], with [math]\displaystyle{ Z_{ik}=1 }[/math] if observation [math]\displaystyle{ x_i }[/math] belongs to cluster [math]\displaystyle{ k }[/math], and [math]\displaystyle{ Z_{ik}=0 }[/math] otherwise (indicator variables). Thanks to this, we can reinterpret the mixture weights: [math]\displaystyle{ \forall i, P(Z_i=k|\theta)=w_k }[/math]. Moreover, we can now define the membership probabilities, one for each observation (a small numerical illustration is given right after the likelihoods below):

[math]\displaystyle{ p(k|i) = P(Z_i=k|x_i,\theta) = \frac{w_k \phi(x_i|\mu_k,\sigma_k)}{\sum_{l=1}^K w_l \phi(x_i|\mu_l,\sigma_l)} }[/math]

We can now write the augmented-data likelihood, assuming all observations are independent conditionally on their membership: [math]\displaystyle{ L_{aug}(\theta) = P(X,Z|\theta) = \prod_{i=1}^N P(x_i|Z_i,\theta) P(Z_i|\theta) = \prod_{i=1}^N \left( \prod_{k=1}^K \phi(x_i|\mu_k,\sigma_k)^{Z_{ik}} w_k^{Z_{ik}} \right) }[/math].

And here is the observed-data likelihood (also sometimes called the incomplete-data or marginal likelihood, even though these appellations are misnomers):

[math]\displaystyle{ L_{obs}(\theta) = P(X|\theta) = \prod_{i=1}^N f(x_i|\theta) }[/math]
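
Continuing the tiny example above (same made-up parameter values), here is how the membership probabilities [math]\displaystyle{ p(k|i) }[/math] of a single observation can be computed; this is exactly what the E-step code further below does for all observations at once:

## membership probabilities P(Zi=k|xi,theta) of one observation; made-up parameters (K=2)
mus <- c(0, 4); sigmas <- c(1, 1.5); mix.weights <- c(0.3, 0.7)
x <- 1.2
num <- mix.weights * dnorm(x, mean=mus, sd=sigmas)
num / sum(num)   # the two membership probabilities, summing to 1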

  • EM algorithm - theory: following the motivation stated above, the aim is to estimate the parameters [math]\displaystyle{ \theta }[/math]. However, the [math]\displaystyle{ Z_i }[/math] are not observed. In such settings, the very famous EM algorithm plays a big role. The idea is to iterate two steps, starting from randomly-initialized parameters. First, in the E-step, one computes the conditional expectation of the augmented-data log-likelihood function over the latent variables given the observed data and the parameter estimates from the previous iteration. Second, in the M-step, one maximizes this expected augmented-data log-likelihood function to determine the next iterate of the parameter estimates.
    • E step: [math]\displaystyle{ Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ \ln P(X,Z|\theta) \,|\, X,\theta^{(t)} \right] = \int l_{aug}(\theta) \, P(Z|X,\theta^{(t)}) \, dZ }[/math] where [math]\displaystyle{ l_{aug}(\theta) = \ln L_{aug}(\theta) }[/math] (the integral is in fact a sum here, since the [math]\displaystyle{ Z_i }[/math] are discrete)
    • M step: [math]\displaystyle{ \theta^{(t+1)} = \text{argmax}_{\theta} \, Q(\theta|X,\theta^{(t)}) }[/math] so that [math]\displaystyle{ \forall \theta \in \Theta, Q(\theta^{(t+1)}|X,\theta^{(t)}) \ge Q(\theta|X,\theta^{(t)}) }[/math]

Where does it come from and why does it work? Well, the key property of this procedure is that it is guaranteed never to decrease the observed-data log-likelihood from one iteration to the next. Yes, the observed-data log-likelihood, even though the algorithm only ever works with the augmented-data log-likelihood... The idea is to introduce the hidden variables in order to obtain a lower bound on the observed-data log-likelihood.
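
In a nutshell, here is the standard argument, sketched for completeness and writing [math]\displaystyle{ l_{obs}(\theta) = \ln L_{obs}(\theta) }[/math]. For any distribution [math]\displaystyle{ q }[/math] over the latent variables [math]\displaystyle{ Z }[/math], Jensen's inequality gives:

[math]\displaystyle{ l_{obs}(\theta) = \ln \sum_Z P(X,Z|\theta) = \ln \sum_Z q(Z) \frac{P(X,Z|\theta)}{q(Z)} \ge \sum_Z q(Z) \ln \frac{P(X,Z|\theta)}{q(Z)} }[/math]

with equality when [math]\displaystyle{ q(Z) = P(Z|X,\theta) }[/math]. Choosing [math]\displaystyle{ q(Z) = P(Z|X,\theta^{(t)}) }[/math] makes the bound tight at [math]\displaystyle{ \theta = \theta^{(t)} }[/math], and maximizing the bound over [math]\displaystyle{ \theta }[/math] amounts to maximizing [math]\displaystyle{ Q(\theta|X,\theta^{(t)}) }[/math], since the term [math]\displaystyle{ -\sum_Z q(Z) \ln q(Z) }[/math] does not depend on [math]\displaystyle{ \theta }[/math]. Hence the observed-data log-likelihood cannot decrease from one iteration to the next.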


  • ML estimation: we want to find the values of the parameters that maximize the observed-data likelihood. This amounts to (i) differentiating the log-likelihood with respect to each parameter, and (ii) finding the value at which each partial derivative is zero. Instead of maximizing the likelihood, we maximize its logarithm, noted [math]\displaystyle{ l(\theta) }[/math]. This gives the same solution because the log is monotonically increasing, but it is easier to differentiate the log-likelihood than the likelihood. Here is the full formula for the observed-data log-likelihood:

[math]\displaystyle{ l(\theta) = \log L_{obs}(\theta) = \sum_{i=1}^N \log f(x_i|\theta) = \sum_{i=1}^N \log \left( \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi} \sigma_k} \exp \left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right)^2 \right) \right) }[/math]

  • MLE analytical formulas: only a few basic differentiation rules (high-school level) are required to write down the analytical formulae of the MLEs; a small numerical check of the resulting formulas is sketched at the end of this derivation. Let's start by finding the maximum-likelihood estimate of the mean of each cluster:

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{1}{f(x_i|\theta)} \frac{\partial f(x_i|\theta)}{\partial \mu_k} }[/math]

As we differentiate with respect to [math]\displaystyle{ \mu_k }[/math], all the other means [math]\displaystyle{ \mu_l }[/math] with [math]\displaystyle{ l \ne k }[/math] are constants, and their terms thus vanish:

[math]\displaystyle{ \frac{\partial f(x_i|\theta)}{\partial \mu_k} = w_k \frac{\partial \phi(x_i|\mu_k,\sigma_k)}{\partial \mu_k} }[/math]

And finally:

[math]\displaystyle{ \frac{\partial \phi(x_i|\mu_k,\sigma_k)}{\partial \mu_k} = \frac{x_i - \mu_k}{\sigma_k^2} \phi(x_i|\mu_k,\sigma_k) }[/math]

Putting everything together, we end up with:

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial \mu_k} = \sum_{i=1}^N \frac{1}{\sigma_k^2} \frac{w_k \phi(x_i|\mu_k,\sigma_k)}{\sum_{l=1}^K w_l \phi(x_i|\mu_l,\sigma_l)} (x_i - \mu_k) = \sum_{i=1}^N \frac{1}{\sigma_k^2} \, p(k|i) \, (x_i - \mu_k) }[/math]

By convention, we note [math]\displaystyle{ \hat{\mu_k} }[/math] the maximum-likelihood estimate of [math]\displaystyle{ \mu_k }[/math]:

[math]\displaystyle{ \left. \frac{\partial l(\theta)}{\partial \mu_k} \right|_{\mu_k=\hat{\mu}_k} = 0 }[/math]

Therefore, we finally obtain:

[math]\displaystyle{ \hat{\mu}_k = \frac{\sum_{i=1}^N p(k|i) x_i}{\sum_{i=1}^N p(k|i)} }[/math]

By doing the same kind of algebra, we differentiate the log-likelihood w.r.t. [math]\displaystyle{ \sigma_k }[/math]:

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial \sigma_k} = \sum_{i=1}^N p(k|i) \left( \frac{-1}{\sigma_k} + \frac{(x_i - \mu_k)^2}{\sigma_k^3} \right) }[/math]

And then we obtain the ML estimates for the standard deviation of each cluster:

[math]\displaystyle{ \hat{\sigma}_k = \sqrt{\frac{\sum_{i=1}^N p(k|i) (x_i - \mu_k)^2}{\sum_{i=1}^N p(k|i)}} }[/math]

The partial derivative of [math]\displaystyle{ l(\theta) }[/math] w.r.t. [math]\displaystyle{ w_k }[/math] is trickier because of the constraints on the [math]\displaystyle{ w_k }[/math]. We can handle them by rewriting the weights in terms of unconstrained variables [math]\displaystyle{ \gamma_k }[/math] (softmax function) and differentiating w.r.t. [math]\displaystyle{ \gamma_k }[/math] instead:

[math]\displaystyle{ w_k = \frac{e^{\gamma_k}}{\sum_{l=1}^K e^{\gamma_l}} }[/math]

[math]\displaystyle{ \frac{\partial w_k}{\partial \gamma_j} = \begin{cases} w_k - w_k^2 & \mbox{if }j = k \\ -w_kw_j & \mbox{otherwise} \end{cases} }[/math]

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial \gamma_k} = \sum_{i=1}^N \left( p(k|i) - w_k \right) }[/math]

Finally, here are the ML estimates for the mixture weights:

[math]\displaystyle{ \hat{w}_k = \frac{1}{N} \sum_{i=1}^N p(k/i) }[/math]
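
As a quick sanity check of these formulas, here is a small R sketch (made-up data and parameter values, independent of the code further below) comparing the analytical gradient w.r.t. [math]\displaystyle{ \mu_k }[/math] with a finite-difference approximation of the observed-data log-likelihood:

## numerical check of the gradient of the log-likelihood w.r.t. mu_k (made-up data and parameters)
set.seed(1)
x <- c(rnorm(5, mean=0, sd=1), rnorm(5, mean=4, sd=1))
mus <- c(0.2, 3.8); sigmas <- c(1, 1); ws <- c(0.5, 0.5); k <- 1
loglik <- function(mus){
  sum(log(sapply(x, function(xi){ sum(ws * dnorm(xi, mean=mus, sd=sigmas)) })))
}
p.k <- sapply(x, function(xi){ d <- ws * dnorm(xi, mean=mus, sd=sigmas); d[k] / sum(d) })
grad.analytical <- sum(p.k * (x - mus[k])) / sigmas[k]^2
eps <- 1e-6
grad.numerical <- (loglik(replace(mus, k, mus[k] + eps)) - loglik(mus)) / eps
c(grad.analytical, grad.numerical)   # the two values should be very close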

  • EM algorithm - in practice: note that the formulas above are not closed-form solutions, because the membership probabilities [math]\displaystyle{ p(k|i) }[/math] on their right-hand side depend on the parameters themselves. The EM algorithm therefore starts from initial parameter values (e.g. chosen randomly), then alternates the E step (compute the [math]\displaystyle{ p(k|i) }[/math] given the current parameters) and the M step (update the parameters with the formulas above, keeping the [math]\displaystyle{ p(k|i) }[/math] fixed), until the observed-data log-likelihood stops increasing by more than a small threshold. This is what the R code below implements.
  • R code to simulate data:
#' Generate univariate observations from a mixture of Normals
#'
#' @param K number of components
#' @param N number of observations
#' @param gap difference between all component means
GetUnivariateSimulatedData <- function(K=2, N=100, gap=6){
  mus <- seq(0, gap*(K-1), gap)
  sigmas <- runif(n=K, min=0.5, max=1.5)
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))
  ns <- c(tmp, N - sum(tmp))
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus, sigmas=sigmas,
              mix.weights=ns/N))
}
  • R code for the E step:
#' Return probas of latent variables given data and parameters from previous iteration
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
Estep <- function(data, params){
  GetMembershipProbas(data, params$mus, params$sigmas, params$mix.weights)
}
#' Return the membership probabilities P(zi=k/xi,theta)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights w_k=P(zi=k/theta)
#' @return NxK matrix of membership probas
GetMembershipProbas <- function(data, mus, sigmas, mix.weights){
  N <- length(data)
  K <- length(mus)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma)}, mus, sigmas, mix.weights)))
    unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.weights[-K]))
  })), ncol=K-1, byrow=TRUE)
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}
#' Univariate Normal density
GetUnivariateNormalDensity <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
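
Before moving on, here is a quick check of the E-step output (a small sketch, assuming a data vector and an initial params list such as the ones built in the example at the end):

## sanity check of the E step (sketch); assumes `data` and `params` as in the example at the end
probas <- Estep(data, params)
stopifnot(nrow(probas) == length(data),
          ncol(probas) == length(params$mus),
          all(probas >= 0 & probas <= 1),
          all(abs(rowSums(probas) - 1) < 1e-12))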
  • R code for the M step:
#' Return ML estimates of parameters
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
Mstep <- function(data, params, membership.probas){
  params.new <- list()
  sum.membership.probas <- apply(membership.probas, 2, sum)
  params.new$mus <- GetMlEstimMeans(data, membership.probas,
                                    sum.membership.probas)
  params.new$sigmas <- GetMlEstimStdDevs(data, params.new$mus,
                                         membership.probas,
                                         sum.membership.probas)
  params.new$mix.weights <- GetMlEstimMixWeights(data, membership.probas,
                                                 sum.membership.probas)
  return(params.new)
}
#' Return ML estimates of the means (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of means
GetMlEstimMeans <- function(data, membership.probas, sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sum(unlist(Map("*", membership.probas[,k], data))) /
      sum.membership.probas[k]
  })
}
#' Return ML estimates of the std deviations (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param means Kx1 vector of means
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of std deviations
GetMlEstimStdDevs <- function(data, means, membership.probas,
                              sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sqrt(sum(unlist(Map(function(p_ki, x_i){
      p_ki * (x_i - means[k])^2
    }, membership.probas[,k], data))) /
         sum.membership.probas[k])
  })
}
#' Return ML estimates of the mixture weights
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of mixture weights
GetMlEstimMixWeights <- function(data, membership.probas,
                                 sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    1/length(data) * sum.membership.probas[k]
  })
}
  • R code for the log-likelihood:
#' Return the observed-data log-likelihood of the univariate Normal mixture
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights
GetLogLikelihood <- function(data, mus, sigmas, mix.weights){
  loglik <- sum(sapply(data, function(x){
    log(sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma)
    }, mus, sigmas, mix.weights))))
  }))
  return(loglik)
}
  • R code for the EM loop:
#' Run the EM algorithm until convergence of the log-likelihood or nb.iter iterations
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights (initial values)
#' @param threshold.convergence stop when the log-likelihood increase falls below this value
#' @param nb.iter maximum number of iterations
#' @param verbose verbosity level
EMalgo <- function(data, params, threshold.convergence=10^(-2), nb.iter=10,
                   verbose=1){
  logliks <- vector()
  i <- 1
  if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
  membership.probas <- Estep(data, params)
  params <- Mstep(data, params, membership.probas)
  loglik <- GetLogLikelihood(data, params$mus, params$sigmas,
                             params$mix.weights)
  logliks <- append(logliks, loglik)
  while(i < nb.iter){
    i <- i + 1
    if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
    membership.probas <- Estep(data, params)
    params <- Mstep(data, params, membership.probas)
    loglik <- GetLogLikelihood(data, params$mus, params$sigmas, params$mix.weights)
    if(loglik < logliks[length(logliks)]){
      msg <- paste("the log-likelihood is decreasing:", loglik, "<", logliks[length(logliks)])
      stop(msg, call.=FALSE)
    }
    logliks <- append(logliks, loglik)
    if(abs(logliks[i] - logliks[i-1]) <= threshold.convergence)
      break
  }
  return(list(params=params, membership.probas=membership.probas, logliks=logliks, nb.iters=i))
}
  • Example: and now, let's try it!
## simulate data
K <- 3
N <- 300
simul <- GetUnivariateSimulatedData(K, N)
data <- simul$obs
## run the EM algorithm
params0 <- list(mus=runif(n=K, min=min(data), max=max(data)),
                sigmas=rep(1, K),
                mix.weights=rep(1/K, K))
res <- EMalgo(data, params0, 10^(-3), 1000, 1)
## check its convergence
plot(res$logliks, xlab="iterations", ylab="log-likelihood",
     main="Convergence of the EM algorithm", type="b")
## plot the data along with the inferred densities
png("mixture_univar_em.png")
hist(data, breaks=30, freq=FALSE, col="grey", border="white", ylim=c(0,0.15),
     main="Histogram of data overlaid with densities inferred by EM")
rx <- seq(from=min(data), to=max(data), by=0.1)
ds <- lapply(1:K, function(k){dnorm(x=rx, mean=res$params$mus[k], sd=res$params$sigmas[k])})
f <- sapply(1:length(rx), function(i){
  res$params$mix.weights[1] * ds[[1]][i] + res$params$mix.weights[2] * ds[[2]][i] + res$params$mix.weights[3] * ds[[3]][i]
})
lines(rx, f, col="red", lwd=2)
dev.off()

It seems to work well, which was expected as the clusters are well separated from each other...

The classification of each observation can be obtained via the following command:

## get the classification of the observations
memberships <- apply(res$membership.probas, 1, which.max)
table(memberships)
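
Since the data were simulated, the inferred memberships can also be cross-tabulated with the true cluster labels returned by GetUnivariateSimulatedData (keep in mind that the cluster numbering is arbitrary, so agreement shows up as one dominant count per row rather than necessarily on the diagonal):

## compare inferred memberships with the true, simulated clusters (labels may be permuted)
table(memberships, simul$clusters)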
  • References:
    • introduction (ch.1) of the PhD thesis from Matthew Stephens (Oxford, 2000)
    • tutorial from Carlo Tomasi (Duke University)
    • book "Introducing Monte Carlo Methods with R" from Robert and and Casella (2009)