==Learn about mixture models and the EM algorithm==

''(Caution, this is my own quick-and-dirty tutorial, see the references at the end for presentations by professional statisticians.)''


* '''Motivation''': a large part of any scientific activity is about measuring things, in other words collecting data, and it is not infrequent to collect ''heterogeneous'' data. For instance, we may measure the height of individuals without recording their gender, or the expression level of a gene in several individuals without recording which ones are healthy and which ones are sick. It therefore seems natural to say that the samples come from a mixture of clusters. The aim is thus to recover from the data, i.e. to infer, (i) how many clusters there are, (ii) what the features of these clusters are, and (iii) from which cluster each sample comes. In the following, I will focus on points (ii) and (iii).


* '''Data''': we have N observations, noted <math>X = (x_1, x_2, ..., x_N)</math>. For the moment, we suppose that each observation <math>x_i</math> is univariate, i.e. it corresponds to a single number.


* '''Assumption''': let's assume that the data are heterogeneous and that they can be partitioned into <math>K</math> clusters (in this document, we suppose that <math>K</math> is known). This means that we expect a subset of the observations to come from cluster <math>k=1</math>, another subset to come from cluster <math>k=2</math>, and so on.


* '''Model''': writing down the model usually means starting by writing down the likelihood of the parameters, that is the probability of the data given the parameters. Technically, we say that the observations were generated according to a [http://en.wikipedia.org/wiki/Probability_density_function density function] <math>f</math>. Here, this density is itself a mixture of densities, one per cluster. We will assume that observations from cluster <math>k</math> are generated from a Normal distribution, whose density is noted <math>\phi</math>, with mean <math>\mu_k</math> and standard deviation <math>\sigma_k</math>. Moreover, as we don't know for sure which cluster a given observation comes from, we define the mixture weight <math>w_k</math> (also called mixing proportion) to be the probability that any given observation comes from cluster <math>k</math>. As a result, we have the following list of parameters: <math>\theta=(w_1,...,w_K,\mu_1,...,\mu_K,\sigma_1,...,\sigma_K)</math>. Finally, for a given observation <math>x_i</math>, we can write the model:


<math>f(x_i|\theta) = \sum_{k=1}^{K} w_k \, \phi(x_i|\mu_k,\sigma_k) = \sum_{k=1}^{K} w_k \frac{1}{\sigma_k \sqrt{2\pi}} \exp \left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right) ^2 \right)</math>


The constraints are:
<math>\forall k, w_k \ge 0</math> and <math>\sum_{k=1}^K w_k = 1</math>
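
To get a feel for this density, here is a tiny R sketch (not part of the derivation; the parameter values are arbitrary, chosen only for illustration) evaluating <math>f</math> at one point and checking that it integrates to 1:

 <nowiki>
## toy parameters (arbitrary values, for illustration only): K=2 components
mus <- c(0, 4); sigmas <- c(1, 1.5); mix.weights <- c(0.3, 0.7)
f <- function(x){ sum(mix.weights * dnorm(x, mean=mus, sd=sigmas)) }
f(2.5)                              # mixture density at a single point
integrate(Vectorize(f), -Inf, Inf)  # should be ~1, as for any density
</nowiki>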


* '''Maximum-likelihood''': naturally, we can start by maximizing the likelihood in order to estimate the parameters:


<math>L(\theta) = P(X|\theta) = \prod_{i=1}^N f(x_i|\theta) = \prod_{i=1}^N \sum_{k=1}^K w_k \phi(x_i;\theta_k)</math>


Note that expanding this product over the sums yields <math>K^N</math> terms, which quickly becomes too costly.


As usual, it's easier to deal with the log-likelihood:


<math>l(\theta) = \sum_{i=1}^N ln \left( \sum_{k=1}^K w_k \phi(x_i; \theta_k) \right)</math>


Let's take the derivative with respect to one parameter, e.g. <math>\theta_l</math>:


<math>\frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{1}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} w_l \frac{\partial \phi(x_i; \theta_l)}{\partial \theta_l}</math>


<math>\frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{w_l \phi(x_i; \theta_l)}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} \frac{1}{\phi(x_i; \theta_l)} \frac{\partial \phi(x_i; \theta_l)}{\partial \theta_l}</math>


<math>\frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{w_l \phi(x_i; \theta_l)}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} \frac{\partial ln ( \phi(x_i; \theta_l) )}{\partial \theta_l}</math>


This shows that maximizing the likelihood of a mixture model is like doing a weighted likelihood maximization. However, these weights depend on the parameters we want to estimate! That's why we now switch to the missing-data formulation of the mixture model.




* '''Missing data''': we introduce the following N [http://en.wikipedia.org/wiki/Latent_variable latent variables] <math>Z_1,...,Z_i,...,Z_N</math> (also called hidden or allocation variables), one for each observation, such that <math>Z_i=k</math> means that observation <math>x_i</math> belongs to cluster <math>k</math>. Thanks to this, we can reinterpret the mixture weights: <math>\forall i, P(Z_i=k|\theta)=w_k</math>. In fact, it is much easier to do the maths when defining each <math>Z_i</math> as a vector of length <math>K</math>, with <math>Z_{ik}=1</math> if observation <math>x_i</math> belongs to cluster <math>k</math>, and <math>Z_{ik}=0</math> otherwise ([http://en.wikipedia.org/wiki/Dummy_variable_%28statistics%29 indicator variables]). Moreover, we can now define the membership probabilities, one for each observation:


<math>p(k|i) = P(Z_{ik}=1|x_i,\theta) = \frac{P(Z_{ik}=1 | \theta) p(x_i | Z_{ik}=1,\theta)}{p(x_i | \theta)} = \frac{w_k \phi(x_i|\mu_k,\sigma_k)}{\sum_{l=1}^K w_l \phi(x_i|\mu_l,\sigma_l)}</math>
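
As a quick illustration (again with arbitrary toy parameters), here is how the membership probabilities of a single observation can be computed in R:

 <nowiki>
## toy example (arbitrary parameters): membership probabilities of one observation
mus <- c(0, 4); sigmas <- c(1, 1.5); mix.weights <- c(0.3, 0.7)
x.i <- 2.5
num <- mix.weights * dnorm(x.i, mean=mus, sd=sigmas)
num / sum(num)  # p(k|i) for k=1,...,K; sums to 1
</nowiki>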


The observed-data likelihood (sometimes also called "incomplete" or "marginal", even though these appellations are misnomers) is still written the same way:


<math>L_{obs}(\theta) = P(X|\theta) = \prod_{i=1}^N f(x_i|\theta)</math>


But now we can also write the augmented-data likelihood (sometimes called "complete"), assuming all observations are independent conditionally on their membership:


<math>L_{aug}(\theta) = P(X,Z|\theta) = \prod_{i=1}^N P(x_i|Z_i,\theta) P(Z_i|\theta) = \prod_{i=1}^N \left( \prod_{k=1}^K \phi(x_i|\mu_k,\sigma_k)^{Z_{ik}} w_k^{Z_{ik}} \right)</math>.


Note how easy it is to write thanks to the choice of <math>Z_{ik} \in \{0,1\}</math> compared to <math>Z_i=k</math>.


And here is the augmented-data log-likelihood (useful in the M step of the EM algorithm, see below):


<math>l_{aug}(\theta) = \sum_{i=1}^N \left( \sum_{k=1}^K Z_{ik} ln(\phi(x_i|\mu_k,\sigma_k)) + \sum_{k=1}^K Z_{ik} ln(w_k) \right)</math>
 
Using the [http://en.wikipedia.org/wiki/Plate_notation plate notation], the Gaussian mixture model described here can be represented like [http://en.wikipedia.org/wiki/File:Nonbayesian-gaussian-mixture.svg this].
 
 
* '''EM algorithm - definition''': we first define an objective function, <math>Q</math>, which happens to be the conditional expectation of the augmented-data log-likelihood function, <math>l_{aug}</math>, over the latent variables, <math>Z</math>, given the observed data, <math>X</math>, and the current parameter estimates, <math>\theta</math>. The idea is to iterate two steps, starting from randomly-initialized parameters. In the E-step, one computes this expectation, which amounts to computing the membership probabilities. In the M-step, one maximizes this objective function to obtain the next iterate of the parameter estimates. In equations, it can be written like this:
** E step: <math>Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ ln(P(X,Z|\theta))|X,\theta^{(t)} \right] = \int q(Z|X,\theta^{(t)}) \; l_{aug} \; dZ</math>
** M-step: <math>\theta^{(t+1)} = argmax_{\theta} Q(\theta|X,\theta^{(t)})</math> so that <math>\forall \theta \in \Theta, Q(\theta^{(t+1)}|X,\theta^{(t)}) \ge Q(\theta|X,\theta^{(t)})</math>
 
 
* '''EM algorithm - theory''': stated as above, the algorithm is not necessarily easy to understand immediately (at least it wasn't in my case). Fortunately, Matthew Beal presents it in a clear and simple way in his PhD thesis (see references at the bottom of the page).
 
Here is the observed-data log-likelihood:
 
<math>l_{obs}(\theta) = \sum_{i=1}^N ln \left( f(x_i|\theta) \right)</math>
 
First we introduce the hidden variables by integrating them out:
 
<math>l_{obs}(\theta) = \sum_{i=1}^N ln \left( \int p(x_i,z_i|\theta) dz_i \right)</math>
 
Then, we use any probability distribution <math>q</math> on these hidden variables (in fact, we use a distinct distribution <math>q_{z_i}</math> for each observation):
 
<math>l_{obs}(\theta) = \sum_{i=1}^N ln \left( \int q_{z_i}(z_i) \frac{p(x_i,z_i|\theta)}{q_{z_i}(z_i)} dz_i \right)</math>
 
And here is the great trick, as explained by Beal: "any probability distribution over the hidden variables gives rise to a lower bound on <math>l_{obs}</math>". This is due to [http://en.wikipedia.org/wiki/Jensen%27s_inequality Jensen's inequality] (the logarithm is concave):
 
<math>l_{obs}(\theta) \ge \sum_{i=1}^N \int q_{z_i}(z_i) ln \left( \frac{p(x_i,z_i|\theta)}{q_{z_i}(z_i)} \right) dz_i = \mathcal{F}(q_{z_1}(z_1), ..., q_{z_N}(z_N), \theta)</math>
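
Equivalently (a standard identity, spelled out here for clarity), the gap between <math>l_{obs}</math> and this lower bound is a sum of [http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Kullback-Leibler divergences], one per observation, between <math>q_{z_i}</math> and the posterior <math>p(z_i|x_i,\theta)</math>:

<math>l_{obs}(\theta) - \mathcal{F}(q_{z_1}(z_1), ..., q_{z_N}(z_N), \theta) = \sum_{i=1}^N \int q_{z_i}(z_i) ln \left( \frac{q_{z_i}(z_i)}{p(z_i|x_i,\theta)} \right) dz_i \ge 0</math>

This makes the next two points transparent: maximizing <math>\mathcal{F}</math> over the <math>q_{z_i}</math> means minimizing these divergences, and the bound becomes tight exactly when each <math>q_{z_i}</math> equals the posterior.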
 
At each iteration, the E step maximizes the lower bound (<math>\mathcal{F}</math>) with respect to the <math>q_{z_i}(z_i)</math>:
* E step: <math>q^{(t+1)}_{z_i} \leftarrow argmax_{q_{z_i}} \mathcal{F}(q_z(z), \theta^{(t)}) \forall i</math>
* M step: <math>\theta^{(t+1)} \leftarrow argmax_\theta \mathcal{F}(q^{(t+1)}_z(z), \theta)</math>
 
The E-step amounts to inferring the posterior distribution of the hidden variables <math>q^{(t+1)}_{z_i}</math> given the current parameter <math>\theta^{(t)}</math>:
 
<math>q^{(t+1)}_{z_i}(z_i) = p(z_i | x_i, \theta^{(t)})</math>
 
Indeed, the <math>q^{(t+1)}_{z_i}(z_i)</math> make the bound tight (the inequality becomes an equality):
 
<math>\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int q^{(t+1)}_{z_i}(z_i) ln \left( \frac{p(x_i,z_i|\theta^{(t)})}{q^{(t+1)}_{z_i}(z_i)} \right) dz_i</math>
 
<math>\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( \frac{p(x_i,z_i|\theta^{(t)})}{p(z_i | x_i, \theta^{(t)})} \right) dz_i</math>
 
<math>\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( \frac{p(x_i|\theta^{(t)}) p(z_i|x_i,\theta^{(t)})}{p(z_i | x_i, \theta^{(t)})} \right) dz_i</math>
 
<math>\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( p(x_i|\theta^{(t)}) \right) dz_i</math>
 
<math>\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N ln \left( p(x_i|\theta^{(t)}) \right)</math>
 
<math>\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = l_{obs}(\theta^{(t)})</math>
 
Then, at the M step, we use these statistics to maximize the new lower bound <math>\mathcal{F}</math> with respect to <math>\theta</math>, and therefore find <math>\theta^{(t+1)}</math>.
 
 
* '''EM algorithm - variational''': if the posterior distributions <math>p(z_i|x_i,\theta)</math> are intractable, we can use a variational approach to constrain them to be of a particular, tractable form. In the E step, maximizing <math>\mathcal{F}</math> with respect to <math>q_{z_i}</math> is equivalent to minimizing the [http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Kullback-Leibler divergence] between the variational distribution <math>q(z_i)</math> and the exact hidden variable posterior <math>p(z_i|x_i,\theta)</math>:
 
<math>KL[q_{z_i}(z_i) || p(z_i|x_i,\theta)] = \int q_{z_i}(z_i) ln \left( \frac{q_{z_i}(z_i)}{p(z_i|x_i,\theta)} \right) dz_i</math>
 
As a result, the E step may not always lead to a tight bound.
 
 
* '''Formulas of both steps''': in both steps we need to use <math>Q</math>, whether to evaluate it or maximize it.
 
<math>Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ ln(P(X,Z|\theta))|X,\theta^{(t)} \right]</math>
 
<math>Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ l_{aug}(\theta)|X,\theta^{(t)} \right]</math>
 
<math>Q(\theta|X,\theta^{(t)}) = \sum_{i=1}^N \left( \sum_{k=1}^K \mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] \; ln(\phi(x_i|\mu_k,\sigma_k)) + \sum_{k=1}^K \mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] \; ln(w_k) \right)</math>
 
 
* '''Formulas of the E step''': as indicated above, the E step consists in evaluating <math>Q</math>, i.e. simply evaluating the conditional expectation over the latent variables of the augmented-data log-likelihood given the observed data and the current estimates of the parameters.
 
<math>\mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] = 1 \times P(Z_{ik}=1|x_i,\theta_k^{(t)}) + 0 \times P(Z_{ik}=0|x_i,\theta_k^{(t)}) = P(Z_{ik}=1|x_i,\theta_k^{(t)}) = \frac{w_k^{(t)} \phi(x_i|\mu_k^{(t)},\sigma_k^{(t)})}{\sum_{l=1}^K w_l^{(t)} \phi(x_i|\mu_l^{(t)},\sigma_l^{(t)})} = p(k|i)</math>
 
Note how the conditional expectation over <math>Z_{ik}</math> simply happens to be the posterior of <math>Z_{ik}=1</math>, which of course corresponds to the membership probability.
 
 
* '''Formulas of the M step''': in this step, we need to maximize <math>Q</math> (which, up to a term that does not depend on <math>\theta</math>, is the lower bound <math>\mathcal{F}</math> above) with respect to each <math>\theta_k</math>. A few differentiation rules are required to write down the analytical formulas of the MLEs, but nothing beyond high-school calculus (see [http://en.wikipedia.org/wiki/Differentiation_%28mathematics%29#Rules_for_finding_the_derivative here]).
 
 
* '''M step - weights''': let's start by finding the maximum-likelihood estimates of the weights <math>w_k</math>. But remember the constraint <math>\sum_{k=1}^K w_k = 1</math>. To enforce it, we can use a [http://en.wikipedia.org/wiki/Lagrange_multiplier Lagrange multiplier], <math>\lambda</math>. This means that we now need to maximize the following Lagrange function, <math>\Lambda</math>, in which only the part of <math>Q</math> that depends on the weights is kept:
 
<math>\Lambda(w_k,\lambda) = \sum_{i=1}^N \left( \sum_{k=1}^K p(k|i) ln(w_k) \right) + \lambda (1 - \sum_{k=1}^K w_k)</math>
 
As usual, to find the maximum, we differentiate and set the derivative to zero:
 
<math>\frac{\partial \Lambda}{\partial w_k}(w_k) = \sum_{i=1}^N \left( p(k|i) \frac{1}{w_k} \right) - \lambda</math>
 
<math>\frac{\partial \Lambda}{\partial w_k}(\hat{w}_k^{(t+1)}) = 0</math>
 
<math>\hat{w}_k^{(t+1)} = \frac{1}{\lambda} \sum_{i=1}^N p(k|i)</math>
 
Now, to find the multiplier, we go back to the constraint:
 
<math>\sum_{k=1}^K \hat{w}_k^{(t+1)} = 1 \rightarrow \lambda = \sum_{i=1}^N \sum_{k=1}^K p(k|i) = N</math>
 
Finally:
 
<math>\hat{w}_k^{(t+1)} = \frac{1}{N} \sum_{i=1}^N p(k|i)</math>
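
If the <math>p(k|i)</math> are stored in an N x K matrix, as in the R code further below, this update is simply the vector of column means. A minimal sketch (the toy matrix is made up):

 <nowiki>
## toy 4x2 matrix of membership probabilities p(k|i); each row sums to 1
membership.probas <- matrix(c(0.9, 0.1,
                              0.8, 0.2,
                              0.3, 0.7,
                              0.1, 0.9), ncol=2, byrow=TRUE)
colMeans(membership.probas)  # (1/N) * sum_i p(k|i), one weight per cluster
</nowiki>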
 
 
* '''M step - means''':
 
<math>\frac{\partial Q}{\partial \mu_k} = \sum_{i=1}^N p(k|i) \frac{\partial ln(\phi(x_i|\mu_k,\sigma_k))}{\partial \mu_k}</math>
 
<math>\frac{\partial Q}{\partial \mu_k} = \sum_{i=1}^N p(k|i) \frac{1}{\phi(x_i|\mu_k,\sigma_k)} \frac{\partial \phi(x_i|\mu_k,\sigma_k)}{\partial \mu_k}</math>
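
The derivative of the Normal log-density with respect to its mean is <math>\frac{\partial ln(\phi(x_i|\mu_k,\sigma_k))}{\partial \mu_k} = \frac{x_i - \mu_k}{\sigma_k^2}</math>, so that setting the derivative to zero (and dropping the constant factor <math>1/\sigma_k^2</math>) gives: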
 
<math>\frac{\partial Q}{\partial \mu_k} = 0 = \sum_{i=1}^N p(k|i) (x_i - \hat{\mu}_k^{(t+1)})</math>
 
Finally:
 
<math>\hat{\mu}_k^{(t+1)} = \frac{\sum_{i=1}^N p(k|i) x_i}{\sum_{i=1}^N p(k|i)}</math>
 
 
* '''M step - variances''': same kind of algebra
 
<math>\frac{\partial Q}{\partial \sigma_k} = \sum_{i=1}^N p(k|i) \left( \frac{-1}{\sigma_k} + \frac{(x_i - \mu_k)^2}{\sigma_k^3} \right)</math>
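
Setting this derivative to zero and multiplying by <math>\sigma_k^3</math> gives <math>\sum_{i=1}^N p(k|i) \left( (x_i - \mu_k)^2 - \sigma_k^2 \right) = 0</math>, hence: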
 
<math>\hat{\sigma}_k^{(t+1)} = \sqrt{\frac{\sum_{i=1}^N p(k|i) (x_i - \hat{\mu}_k^{(t+1)})^2}{\sum_{i=1}^N p(k|i)}}</math>
 
 
* '''M step - weights (2)''': we can write them in terms of unconstrained variables <math>\gamma_k</math> ([http://en.wikipedia.org/wiki/Softmax_activation_function softmax function]):
 
<math>w_k = \frac{e^{\gamma_k}}{\sum_{j=1}^K e^{\gamma_j}}</math>
 
<math>\frac{\partial w_k}{\partial \gamma_j} =
\begin{cases}
w_k - w_k^2  & \mbox{if }j = k \\
-w_kw_j & \mbox{otherwise}
\end{cases}</math>


Combining this, via the chain rule, with the derivative of the log-likelihood with respect to the weights, we obtain:

<math>\frac{\partial l(\theta)}{\partial \gamma_k} = \sum_{i=1}^N (p(k|i) - w_k)</math>


Finally:


<math>\hat{w}_k = \frac{1}{N} \sum_{i=1}^N p(k|i)</math>




* '''R code to simulate data''': if you have read this far, nothing is better than implementing the EM algorithm yourself!


 <nowiki>
#' Generate univariate observations from a mixture of Normals
#'
#' @param K number of components
#' @param N number of observations
#' @param gap difference between all component means
GetUnivariateSimulatedData <- function(K=2, N=100, gap=6){
  mus <- seq(0, gap*(K-1), gap)
  sigmas <- runif(n=K, min=0.5, max=1.5)
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))
  ns <- c(tmp, N - sum(tmp))
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus, sigmas=sigmas,
              mix.weights=ns/N))
}
</nowiki>
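
For instance, to eyeball one simulated data set (the seed below is arbitrary):

 <nowiki>
## quick look at one simulated data set
set.seed(1859)
simul <- GetUnivariateSimulatedData(K=2, N=200)
hist(simul$obs, breaks=30, col="grey", border="white",
     main="Simulated mixture of 2 Normals")
</nowiki>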


* '''R code for the E step''':


 <nowiki>
#' Return probas of latent variables given data and parameters from previous iteration
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
Estep <- function(data, params){
  GetMembershipProbas(data, params$mus, params$sigmas, params$mix.weights)
}

#' Return the membership probabilities P(zi=k/xi,theta)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights w_k=P(zi=k/theta)
#' @return NxK matrix of membership probas
GetMembershipProbas <- function(data, mus, sigmas, mix.weights){
  N <- length(data)
  K <- length(mus)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma)}, mus, sigmas, mix.weights)))
    unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.weights[-K]))
  })), ncol=K-1, byrow=TRUE)
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}

#' Univariate Normal density
GetUnivariateNormalDensity <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
</nowiki>
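
A small sanity check (assuming the simulation function above has been sourced as well): at the true parameters, the membership probabilities should form an N x K matrix whose rows sum to 1.

 <nowiki>
## sanity check of the E step on simulated data
set.seed(1859)
simul <- GetUnivariateSimulatedData(K=2, N=50)
probas <- Estep(simul$obs, list(mus=simul$mus, sigmas=simul$sigmas,
                                mix.weights=simul$mix.weights))
dim(probas)               # N x K
summary(rowSums(probas))  # each row should sum to 1
</nowiki>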


* '''R code for the M step''':


 <nowiki>
#' Return ML estimates of parameters
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
Mstep <- function(data, params, membership.probas){
  params.new <- list()
  sum.membership.probas <- apply(membership.probas, 2, sum)
  params.new$mus <- GetMlEstimMeans(data, membership.probas,
                                    sum.membership.probas)
  params.new$sigmas <- GetMlEstimStdDevs(data, params.new$mus,
                                         membership.probas,
                                         sum.membership.probas)
  params.new$mix.weights <- GetMlEstimMixWeights(data, membership.probas,
                                                 sum.membership.probas)
  return(params.new)
}

#' Return ML estimates of the means (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of means
GetMlEstimMeans <- function(data, membership.probas, sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sum(unlist(Map("*", membership.probas[,k], data))) /
      sum.membership.probas[k]
  })
}

#' Return ML estimates of the std deviations (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of std deviations
GetMlEstimStdDevs <- function(data, means, membership.probas,
                              sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sqrt(sum(unlist(Map(function(p.ki, x.i){
      p.ki * (x.i - means[k])^2
    }, membership.probas[,k], data))) /
         sum.membership.probas[k])
  })
}

#' Return ML estimates of the mixture weights
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of mixture weights
GetMlEstimMixWeights <- function(data, membership.probas,
                                 sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    1/length(data) * sum.membership.probas[k]
  })
}
</nowiki>
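
Before wiring everything into a loop, one can run a single E step followed by an M step by hand (a minimal sketch; the starting values below are arbitrary):

 <nowiki>
## one EM iteration by hand (assumes the E-step and M-step functions above are sourced)
set.seed(1859)
simul <- GetUnivariateSimulatedData(K=2, N=200)
params0 <- list(mus=c(-1, 1), sigmas=c(1, 1), mix.weights=c(0.5, 0.5))
probas <- Estep(simul$obs, params0)
params1 <- Mstep(simul$obs, params0, probas)
params1$mix.weights  # updated weights, still summing to 1
</nowiki>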


* '''R code for the log-likelihood''':

 <nowiki>
#' Return the log-likelihood of the data given the parameters
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights
GetLogLikelihood <- function(data, mus, sigmas, mix.weights){
  loglik <- sum(sapply(data, function(x){
    log(sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma)
    }, mus, sigmas, mix.weights))))
  }))
  return(loglik)
}
</nowiki>


* '''R code for the EM loop''':


<nowiki>
EMalgo <- function(data, params, threshold.convergence=10^(-2), nb.iter=10,
                  verbose=1){
  logliks <- vector()
  i <- 1
  if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
  membership.probas <- Estep(data, params)
  params <- Mstep(data, params, membership.probas)
  loglik <- GetLogLikelihood(data, params$mus, params$sigmas,
                            params$mix.weights)
  logliks <- append(logliks, loglik)
  while(i < nb.iter){
    i <- i + 1
    if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
    membership.probas <- Estep(data, params)
    params <- Mstep(data, params, membership.probas)
    loglik <- GetLogLikelihood(data, params$mus, params$sigmas, params$mix.weights)
    if(loglik < logliks[length(logliks)]){
      msg <- paste("the log-likelihood is decreasing:", loglik, "<", logliks[length(logliks)])
      stop(msg, call.=FALSE)
    }
    logliks <- append(logliks, loglik)
    if(abs(logliks[i] - logliks[i-1]) <= threshold.convergence)
      break
  }
  return(list(params=params, membership.probas=membership.probas, logliks=logliks, nb.iters=i))
}
</nowiki>


* '''Example''': and now, let's try it!


 <nowiki>
## simulate data
K <- 3
N <- 300
simul <- GetUnivariateSimulatedData(K, N)
data <- simul$obs

## run the EM algorithm
params0 <- list(mus=runif(n=K, min=min(data), max=max(data)),
                sigmas=rep(1, K),
                mix.weights=rep(1/K, K))
res <- EMalgo(data, params0, 10^(-3), 1000, 1)

## check its convergence
plot(res$logliks, xlab="iterations", ylab="log-likelihood",
     main="Convergence of the EM algorithm", type="b")

## plot the data along with the inferred densities
png("mixture_univar_em.png")
hist(data, breaks=30, freq=FALSE, col="grey", border="white", ylim=c(0,0.15),
     main="Histogram of data overlaid with densities inferred by EM")
rx <- seq(from=min(data), to=max(data), by=0.1)
ds <- lapply(1:K, function(k){dnorm(x=rx, mean=res$params$mus[k], sd=res$params$sigmas[k])})
f <- sapply(1:length(rx), function(i){
  res$params$mix.weights[1] * ds[[1]][i] + res$params$mix.weights[2] * ds[[2]][i] + res$params$mix.weights[3] * ds[[3]][i]
})
lines(rx, f, col="red", lwd=2)
dev.off()
</nowiki>


It seems to work well, which was expected as the clusters are well separated from each other...

The classification of each observation can be obtained via the following command:


 <nowiki>
## get the classification of the observations
memberships <- apply(res$membership.probas, 1, function(x){which(x > 0.5)})
table(memberships)
</nowiki>
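
Since the simulation also returns the true cluster labels, we can go one step further and cross-tabulate them with the inferred classification (a small sketch; which.max is used here instead of a 0.5 threshold so that every observation gets assigned):

 <nowiki>
## compare the inferred classification with the true, simulated clusters
memberships <- apply(res$membership.probas, 1, which.max)
table(inferred=memberships, truth=simul$clusters)
</nowiki>

Keep in mind that the component labels are arbitrary (the likelihood is invariant under label permutations, see below), so the rows of this table may be permuted with respect to the true clusters.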


* '''References''':
** chapter 1 from the PhD thesis of Matthew Stephens (Oxford, 2000) freely available [http://www.stat.washington.edu/stephens/papers/tabstract.html online]
** chapter 2 from the PhD thesis of Matthew Beal (UCL, 2003) freely available [http://www.cse.buffalo.edu/faculty/mbeal/thesis/ online]
** lecture "Mixture Models, Latent Variables and the EM Algorithm" from Cosma Shalizi freely available [http://www.stat.cmu.edu/~cshalizi/uADA/12/ online]
** talk "Graphical Models" from Zubin Ghahramani freely available [http://videolectures.net/mlss2012_ghahramani_graphical_models/ online]
** book "Introducing Monte Carlo Methods with R" from Robert and Casella (2009)
* '''Beyond''':
** many different distributions can be used besides the Normal
** the observations can be multivariate
** we can fit the model using Bayesian methods, e.g. MCMC or Variational Bayes
** we can try to estimate the number of components (K), e.g. by reversible-jump MCMC or via non-parametric Bayes
** there are issues, such as the fact that the EM can get stuck in a local maximum, or that the likelihood is invariant under permutations of the components' labels
** the parameters of each mixture component can depend on some known predictors, giving rise to mixture-of-experts models


<!-- ##### DO NOT edit below this line unless you know what you are doing. ##### -->
<!-- ##### DO NOT edit below this line unless you know what you are doing. ##### -->

Revision as of 11:05, 30 November 2014

Project name <html><img src="/images/9/94/Report.png" border="0" /></html> Main project page
<html><img src="/images/c/c3/Resultset_previous.png" border="0" /></html>Previous entry<html>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</html>Next entry<html><img src="/images/5/5c/Resultset_next.png" border="0" /></html>

Learn about mixture models and the EM algorithm

(Caution, this is my own quick-and-dirty tutorial, see the references at the end for presentations by professional statisticians.)

  • Motivation: a large part of any scientific activity is about measuring things, in other words collecting data, and it is not infrequent to collect heterogeneous data. It seems therefore natural to say that the samples come from a mixture of clusters. The aim is thus to recover from the data, ie. to infer, (i) how many clusters there are, (ii) what are the features of these clusters, and (iii) from which cluster each sample comes from. In the following, I will focus on points (ii) and (iii).
  • Data: we have N observations, noted [math]\displaystyle{ X = (x_1, x_2, ..., x_N) }[/math]. For the moment, we suppose that each observation [math]\displaystyle{ x_i }[/math] is univariate, ie. each corresponds to only one number.
  • Assumption: let's assume that the data are heterogeneous and that they can be partitioned into [math]\displaystyle{ K }[/math] clusters (in this document, we suppose that [math]\displaystyle{ K }[/math] is known). This means that we expect a subset of the observations to come from cluster [math]\displaystyle{ k=1 }[/math], another subset to come from cluster [math]\displaystyle{ k=2 }[/math], and so on.


  • Model: writing down the model usually means starting by writing down the likelihood of the parameters, that is the probability of the data given the parameters. Technically, we say that the observations were generated according to a density function [math]\displaystyle{ f }[/math]. In this case, this density is itself a mixture of densities, one per cluster. In our case, we will assume that observations from cluster [math]\displaystyle{ k }[/math] are generated from a Normal distribution, which density is here noted [math]\displaystyle{ \phi }[/math], with mean [math]\displaystyle{ \mu_k }[/math] and standard deviation [math]\displaystyle{ \sigma_k }[/math]. Moreover, as we don't know for sure from which cluster a given observation comes from, we define the mixture weight [math]\displaystyle{ w_k }[/math] (also called mixing proportion) to be the probability that any given observation comes from cluster [math]\displaystyle{ k }[/math]. As a result, we have the following list of parameters: [math]\displaystyle{ \theta=(w_1,...,w_K,\mu_1,...\mu_K,\sigma_1,...,\sigma_K) }[/math]. Finally, for a given observation [math]\displaystyle{ x_i }[/math], we can write the model:

[math]\displaystyle{ f(x_i|\theta) = \sum_{k=1}^{K} w_k \, \phi(x_i|\mu_k,\sigma_k) = \sum_{k=1}^{K} w_k \frac{1}{\sigma_k \sqrt{2\pi}} \exp \left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right) ^2 \right) }[/math]

The constraints are: [math]\displaystyle{ \forall k, w_k \ge 0 }[/math] and [math]\displaystyle{ \sum_{k=1}^K w_k = 1 }[/math]


  • Maximum-likelihood: naturally, we can start by maximizing the likelihood in order to estimate the parameters:

[math]\displaystyle{ L(\theta) = P(X|\theta) = \prod_{i=1}^N f(x_i|\theta) = \prod_{i=1}^N \sum_{k=1}^K w_k \phi(x_i;\theta_k) }[/math]

Note that, to simply calculate this likelihood, we need to calculate [math]\displaystyle{ K^N }[/math] terms, which is quickly too costly.

As usual, it's easier to deal with the log-likelihood:

[math]\displaystyle{ l(\theta) = \sum_{i=1}^N ln \left( \sum_{k=1}^K w_k \phi(x_i; \theta_k) \right) }[/math]

Let's take the derivative with respect to one parameter, eg. [math]\displaystyle{ \theta_l }[/math]:

[math]\displaystyle{ \frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{1}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} w_l \frac{\partial \phi(x_i; \theta_l)}{\partial \theta_l} }[/math]

[math]\displaystyle{ \frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{w_l \phi(x_i; \theta_l)}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} \frac{1}{\phi(x_i; \theta_l)} \frac{\partial \phi(x_i; \theta_l)}{\partial \theta_l} }[/math]

[math]\displaystyle{ \frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{w_l \phi(x_i; \theta_l)}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} \frac{\partial ln ( \phi(x_i; \theta_l) )}{\partial \theta_l} }[/math]

This shows that maximizing the likelihood of a mixture model is like doing a weighted likelihood maximization. However, these weights depend on the parameters we want to estimate! That's why we now switch to the missing-data formulation of the mixture model.


  • Missing data: we introduce the following N latent variables [math]\displaystyle{ Z_1,...,Z_i,...,Z_N }[/math] (also called hidden or allocation variables), one for each observation, such that [math]\displaystyle{ Z_i=k }[/math] means that observation [math]\displaystyle{ x_i }[/math] belongs to cluster [math]\displaystyle{ k }[/math]. Thanks to this, we can reinterpret the mixture weights: [math]\displaystyle{ \forall i, P(Z_i=k|\theta)=w_k }[/math]. In fact, it is much easier to do the maths when defining each [math]\displaystyle{ Z_i }[/math] as a vector of length [math]\displaystyle{ K }[/math], with [math]\displaystyle{ Z_{ik}=1 }[/math] if observation [math]\displaystyle{ x_i }[/math] belongs to cluster [math]\displaystyle{ k }[/math], and [math]\displaystyle{ Z_{ik}=0 }[/math] otherwise (indicator variables). Moreover, we can now define the membership probabilities, one for each observation:

[math]\displaystyle{ p(k|i) = P(Z_{ik}=1|x_i,\theta) = \frac{P(Z_{ik}=1 | \theta) p(x_i | Z_{ik}=1,\theta)}{p(x_i | \theta)} = \frac{w_k \phi(x_i|\mu_k,\sigma_k)}{\sum_{l=1}^K w_l \phi(x_i|\mu_l,\sigma_l)} }[/math]

The observed-data likelihood (also called sometimes "incomplete" or "marginal", even though these appellations are misnomers) is still written the same way:

[math]\displaystyle{ L_{obs}(\theta) = P(X|\theta) = \prod_{i=1}^N f(x_i|\theta) }[/math]

But now we can also write the augmented-data likelihood (also called sometimes "complete"), assuming all observations are independent conditionally on their membership:

[math]\displaystyle{ L_{aug}(\theta) = P(X,Z|\theta) = \prod_{i=1}^N P(x_i|Z_i,\theta) P(Z_i|\theta) = \prod_{i=1}^N \left( \prod_{k=1}^K \phi(x_i|\mu_k,\sigma_k)^{Z_{ik}} w_k^{Z_{ik}} \right) }[/math].

Note how easy it is to write it thanks to the fact that we chose to use [math]\displaystyle{ Z_{ik} \in \{0,1\} }[/math] compare to [math]\displaystyle{ Z_i=k }[/math].

And here is the augmented-data log-likelihood (useful in the M step of the EM algorithm, see below):

[math]\displaystyle{ l_{aug}(\theta) = \sum_{i=1}^N \left( \sum_{k=1}^K Z_{ik} ln(\phi(x_i|\mu_k,\sigma_k)) + \sum_{k=1}^K Z_{ik} ln(w_k) \right) }[/math]

Using the plate notation, the Gaussian mixture model described here can be represented like this.


  • EM algorithm - definition: we first define an objective function, [math]\displaystyle{ Q }[/math], which happens to be the conditional expectation of the augmented-data log-likelihood function, [math]\displaystyle{ l_{aug} }[/math], over the latent variables, [math]\displaystyle{ Z }[/math], given the observed data, [math]\displaystyle{ X }[/math], and the parameter estimates, [math]\displaystyle{ \theta }[/math]. The idea is to iterate two steps, starting from randomly-initialized parameters. In the E-step, one does an expectation, that is one computes this objective function to determine the membership probabilities. And in the M-step, one maximizes this objective function to determine the next iterate of the parameter estimates. In equations, it can be written like this:
    • E step: [math]\displaystyle{ Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ ln(P(X,Z|\theta))|X,\theta^{(t)} \right] = \int q(Z|X,\theta^{(t)}) \; l_{aug} \; dZ }[/math]
    • M-step: [math]\displaystyle{ \theta^{(t+1)} = argmax_{\theta} Q(\theta|X,\theta^{(t)}) }[/math] so that [math]\displaystyle{ \forall \theta \in \Theta, Q(\theta^{(t+1)}|X,\theta^{(t)}) \ge Q(\theta|X,\theta^{(t)}) }[/math]


  • EM algorithm - theory: stated like this above doesn't necessarily allow oneself to understand it immediately, at least in my case. Hopefully, Matthew Beal presents it in a great and simple way in his PhD thesis (see references at the bottom of the page).

Here is the observed-data log-likelihood:

[math]\displaystyle{ l_{obs}(\theta) = \sum_{i=1}^N ln \left( f(x_i|\theta) \right) }[/math]

First we introduce the hidden variables by integrating them out:

[math]\displaystyle{ l_{obs}(\theta) = \sum_{i=1}^N ln \left( \int p(x_i,z_i|\theta) dz_i \right) }[/math]

Then, we use any probability distribution [math]\displaystyle{ q }[/math] on these hidden variables (in fact, we use a distinct distribution [math]\displaystyle{ q_{z_i} }[/math] for each observation):

[math]\displaystyle{ l_{obs}(\theta) = \sum_{i=1}^N ln \left( \int q_{z_i}(z_i) \frac{p(x_i,z_i|\theta)}{q_{z_i}(z_i)} dz_i \right) }[/math]

And here is the great trick, as explained by Beal: "any probability distribution over the hidden variables gives rise to a lower bound on [math]\displaystyle{ l_{obs} }[/math]". This is due to to the Jensen inequality (the logarithm is concave):

[math]\displaystyle{ l_{obs}(\theta) \ge \sum_{i=1}^N \int q_{z_i}(z_i) ln \left( \frac{p(x_i,z_i|\theta)}{q_{z_i}(z_i)} \right) dz_i = \mathcal{F}(q_{z_1}(z_1), ..., q_{z_N}(z_N), \theta) }[/math]

At each iteration, the E step maximizes the lower bound ([math]\displaystyle{ \mathcal{F} }[/math]) with respect to the [math]\displaystyle{ q_{z_i}(z_i) }[/math]:

  • E step: [math]\displaystyle{ q^{(t+1)}_{z_i} \leftarrow argmax_{q_{z_i}} \mathcal{F}(q_z(z), \theta^{(t)}) \forall i }[/math]
  • M step: [math]\displaystyle{ \theta^{(t+1)} \leftarrow argmax_\theta \mathcal{F}(q^{(t+1)}_z(z), \theta) }[/math]

The E-step amounts to inferring the posterior distribution of the hidden variables [math]\displaystyle{ q^{(t+1)}_{z_i} }[/math] given the current parameter [math]\displaystyle{ \theta^{(t)} }[/math]:

[math]\displaystyle{ q^{(t+1)}_{z_i}(z_i) = p(z_i | x_i, \theta^{(t)}) }[/math]

Indeed, the [math]\displaystyle{ q^{(t+1)}_{z_i}(z_i) }[/math] make the bound tight (the inequality becomes an equality):

[math]\displaystyle{ \mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int q^{(t+1)}_{z_i}(z_i) ln \left( \frac{p(x_i,z_i|\theta^{(t)})}{q^{(t+1)}_{z_i}(z_i)} \right) dz_i }[/math]

[math]\displaystyle{ \mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( \frac{p(x_i,z_i|\theta^{(t)})}{p(z_i | x_i, \theta^{(t)})} \right) dz_i }[/math]

[math]\displaystyle{ \mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( \frac{p(x_i|\theta^{(t)}) p(z_i|x_i,\theta^{(t)})}{p(z_i | x_i, \theta^{(t)})} \right) dz_i }[/math]

[math]\displaystyle{ \mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( p(x_i|\theta^{(t)}) \right) dz_i }[/math]

[math]\displaystyle{ \mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N ln \left( p(x_i|\theta^{(t)}) \right) }[/math]

[math]\displaystyle{ \mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = l_{obs}(\theta^{(t)}) }[/math]

Then, at the M step, we use these statistics to maximize the new lower bound [math]\displaystyle{ \mathcal{F} }[/math] with respect to [math]\displaystyle{ \theta }[/math], and therefore find [math]\displaystyle{ \theta^{(t+1)} }[/math].


  • EM algorithm - variational: if the posterior distributions [math]\displaystyle{ p(z_i|x_i,\theta) }[/math] are intractable, we can use a variational approach to constrain them to be of a particular, tractable form. In the E step, maximizing [math]\displaystyle{ \mathcal{F} }[/math] with respect to [math]\displaystyle{ q_{z_i} }[/math] is equivalent to minimizing the Kullback-Leibler divergence between the variational distribution [math]\displaystyle{ q(z_i) }[/math] and the exact hidden variable posterior [math]\displaystyle{ p(z_i|x_i,\theta) }[/math]:

[math]\displaystyle{ KL[q_{z_i}(z_i) || p(z_i|x_i,\theta)] = \int q_{z_i}(z_i) ln \left( \frac{q_{z_i}(z_i)}{p(z_i|x_i,\theta)} \right) }[/math]

As a result, the E step may not always lead to a tight bound.


  • Formulas of both steps: in both steps we need to use [math]\displaystyle{ Q }[/math], whether to evaluate it or maximize it.

[math]\displaystyle{ Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ ln(P(X,Z|\theta))|X,\theta^{(t)} \right] }[/math]

[math]\displaystyle{ Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ l_{aug}(\theta)|X,\theta^{(t)} \right] }[/math]

[math]\displaystyle{ Q(\theta|X,\theta^{(t)}) = \sum_{i=1}^N \left( \sum_{k=1}^K \mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] \; ln(\phi(x_i|\mu_k,\sigma_k)) + \sum_{k=1}^K \mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] \; ln(w_k) \right) }[/math]


  • Formulas of the E step: as indicated above, the E step consists in evaluating [math]\displaystyle{ Q }[/math], i.e. simply evaluating the conditional expectation over the latent variables of the augmented-data log-likelihood given the observed data and the current estimates of the parameters.

[math]\displaystyle{ \mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] = 1 \times P(Z_{ik}=1|x_i,\theta_k^{(t)}) + 0 \times P(Z_{ik}=0|x_i,\theta_k^{(t)}) = P(Z_{ik}=1|x_i,\theta_k^{(t)}) = \frac{w_k^{(t)} \phi(x_i|\mu_k^{(t)},\sigma_k^{(t)})}{\sum_{l=1}^K w_l^{(t)} \phi(x_i|\mu_l^{(t)},\sigma_l^{(t)})} = p(k|i) }[/math]

Note how the conditional expectation over [math]\displaystyle{ Z_{ik} }[/math] simply happens to be the posterior of [math]\displaystyle{ Z_{ik}=1 }[/math], which of course corresponds to the membership probability.


  • Formulas of the M step: in this step, we need to maximize [math]\displaystyle{ Q }[/math] (also written [math]\displaystyle{ \mathcal{F} }[/math] above), w.r.t. each [math]\displaystyle{ \theta_k }[/math]. A few important rules are required to write down the analytical formulas of the MLEs, but only from a high-school level (see here).


  • M step - weights: let's start by finding the maximum-likelihood estimates of the weights [math]\displaystyle{ w_k }[/math]. But remember the constraint [math]\displaystyle{ \sum_{k=1}^K w_k = 1 }[/math]. To enforce it, we can use a Lagrange multiplier, [math]\displaystyle{ \lambda }[/math]. This means that we now need to maximize the following equation where [math]\displaystyle{ \Lambda }[/math] is a Lagrange function (only the part of Q being a function of the weights is kept):

[math]\displaystyle{ \Lambda(w_k,\lambda) = \sum_{i=1}^N \left( \sum_{k=1}^K p(k|i) ln(w_k) \right) + \lambda (1 - \sum_{k=1}^K w_k) }[/math]

As usual, to find the maximum, we derive and equal to zero:

[math]\displaystyle{ \frac{\partial \Lambda}{\partial w_k}(w_k) = \sum_{i=1}^N \left( p(k|i) \frac{1}{w_k} \right) - \lambda }[/math]

[math]\displaystyle{ \frac{\partial \Lambda}{\partial w_k}(\hat{w}_k^{(t+1)}) = 0 }[/math]

[math]\displaystyle{ \hat{w}_k^{(t+1)} = \frac{1}{\lambda} \sum_{i=1}^N p(k|i) }[/math]

Now, to find the multiplier, we go back to the constraint:

[math]\displaystyle{ \sum_{k=1}^K \hat{w}_k^{(t+1)} = 1 \rightarrow \lambda = \sum_{i=1}^N \sum_{k=1}^K p(k|i) = N }[/math]

Finally:

[math]\displaystyle{ \hat{w}_k^{(t+1)} = \frac{1}{N} \sum_{i=1}^N p(k|i) }[/math]


  • M step - means:

[math]\displaystyle{ \frac{\partial Q}{\partial \mu_k} = \sum_{i=1}^N p(k|i) \frac{\partial ln(\phi(x_i|\mu_k,\sigma_k))}{\partial \mu_k} }[/math]

[math]\displaystyle{ \frac{\partial Q}{\partial \mu_k} = \sum_{i=1}^N p(k|i) \frac{1}{\phi(x_i|\mu_k,\sigma_k)} \frac{\partial \phi(x_i|\mu_k,\sigma_k)}{\partial \mu_k} }[/math]

[math]\displaystyle{ \frac{\partial Q}{\partial \mu_k} = 0 = \sum_{i=1}^N p(k|i) (x_i - \hat{\mu}_k^{(t+1)}) }[/math]

Finally:

[math]\displaystyle{ \hat{\mu}_k^{(t+1)} = \frac{\sum_{i=1}^N p(k/i) x_i}{\sum_{i=1}^N p(k/i)} }[/math]


  • M step - variances: same kind of algebra

[math]\displaystyle{ \frac{\partial Q}{\partial \sigma_k} = \sum_{i=1}^N p(k/i) (\frac{-1}{\sigma_k} + \frac{(x_i - \mu_k)^2}{\sigma_k^3}) }[/math]

[math]\displaystyle{ \hat{\sigma}_k^{(t+1)} = \sqrt{\frac{\sum_{i=1}^N p(k/i) (x_i - \hat{\mu}_k^{(t+1)})^2}{\sum_{i=1}^N p(k/i)}} }[/math]


  • M step - weights (2): we can write them in terms of unconstrained variables [math]\displaystyle{ \gamma_k }[/math] (softmax function):

[math]\displaystyle{ w_k = \frac{e^{\gamma_k}}{\sum_{k=1}^K e^{\gamma_k}} }[/math]

[math]\displaystyle{ \frac{\partial w_k}{\partial \gamma_j} = \begin{cases} w_k - w_k^2 & \mbox{if }j = k \\ -w_kw_j & \mbox{otherwise} \end{cases} }[/math]

[math]\displaystyle{ \frac{\partial l(\theta)}{\partial w_k} = \sum_{i=1}^N (p(k/i) - w_k) }[/math]

Finally:

[math]\displaystyle{ \hat{w}_k = \frac{1}{N} \sum_{i=1}^N p(k/i) }[/math]


  • R code to simulate data: if you read up to there, nothing is better than implementing the EM algorithm yourself!
#' Generate univariate observations from a mixture of Normals
#'
#' @param K number of components
#' @param N number of observations
#' @param gap difference between all component means
GetUnivariateSimulatedData <- function(K=2, N=100, gap=6){
  mus <- seq(0, gap*(K-1), gap)
  sigmas <- runif(n=K, min=0.5, max=1.5)
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))
  ns <- c(tmp, N - sum(tmp))
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus, sigmas=sigmas,
              mix.weights=ns/N))
}

  • R code for the E step:
#' Return probas of latent variables given data and parameters from previous iteration
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
Estep <- function(data, params){
  GetMembershipProbas(data, params$mus, params$sigmas, params$mix.weights)
}

#' Return the membership probabilities P(zi=k/xi,theta)
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights w_k=P(zi=k/theta)
#' @return NxK matrix of membership probas
GetMembershipProbas <- function(data, mus, sigmas, mix.weights){
  N <- length(data)
  K <- length(mus)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma)}, mus, sigmas, mix.weights)))
    unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.weights[-K]))
  })), ncol=K-1, byrow=TRUE)
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}

#' Univariate Normal density
GetUnivariateNormalDensity <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
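
As a quick sanity check, this hand-written density should agree with R's built-in dnorm:
## quick check: should return TRUE (agreement up to numerical precision)
all.equal(GetUnivariateNormalDensity(1.3, mu=2, sigma=0.5),
          dnorm(1.3, mean=2, sd=0.5))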

  • R code for the M step:
#' Return ML estimates of parameters
#'
#' @param data Nx1 vector of observations
#' @param params list which components are mus, sigmas and mix.weights
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
Mstep <- function(data, params, membership.probas){
  params.new <- list()
  sum.membership.probas <- apply(membership.probas, 2, sum)
  params.new$mus <- GetMlEstimMeans(data, membership.probas,
                                    sum.membership.probas)
  params.new$sigmas <- GetMlEstimStdDevs(data, params.new$mus,
                                         membership.probas,
                                         sum.membership.probas)
  params.new$mix.weights <- GetMlEstimMixWeights(data, membership.probas,
                                                 sum.membership.probas)
  return(params.new)
}

#' Return ML estimates of the means (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of means
GetMlEstimMeans <- function(data, membership.probas, sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sum(unlist(Map("*", membership.probas[,k], data))) /
      sum.membership.probas[k]
  })
}

#' Return ML estimates of the std deviations (1 per cluster)
#'
#' @param data Nx1 vector of observations
#' @param means Kx1 vector of means (as estimated in the current M step)
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of std deviations
GetMlEstimStdDevs <- function(data, means, membership.probas,
                              sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sqrt(sum(unlist(Map(function(p.ki, x.i){
      p.ki * (x.i - means[k])^2
    }, membership.probas[,k], data))) /
         sum.membership.probas[k])
  })
}

#' Return ML estimates of the mixture weights
#'
#' @param data Nx1 vector of observations
#' @param membership.probas NxK matrix with entry i,k being P(zi=k/xi,theta)
#' @param sum.membership.probas Kx1 vector of sum per column of matrix above
#' @return Kx1 vector of mixture weights
GetMlEstimMixWeights <- function(data, membership.probas,
                                 sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    1/length(data) * sum.membership.probas[k]
  })
}

  • R code for the log-likelihood:
#' Return the log-likelihood of the data given the parameters of the mixture
#'
#' @param data Nx1 vector of observations
#' @param mus Kx1 vector of means
#' @param sigmas Kx1 vector of std deviations
#' @param mix.weights Kx1 vector of mixture weights
GetLogLikelihood <- function(data, mus, sigmas, mix.weights){
  loglik <- sum(sapply(data, function(x){
    log(sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * GetUnivariateNormalDensity(x, mu, sigma)
    }, mus, sigmas, mix.weights))))
  }))
  return(loglik)
}

  • R code for the EM loop:
#' Run the EM algorithm until convergence of the log-likelihood or nb.iter
#'
#' @param data Nx1 vector of observations
#' @param params list of initial values for mus, sigmas and mix.weights
#' @param threshold.convergence stop when the log-likelihood increase is below this
#' @param nb.iter maximum number of iterations
#' @param verbose print the iteration number if > 0
EMalgo <- function(data, params, threshold.convergence=10^(-2), nb.iter=10,
                   verbose=1){
  logliks <- vector()
  i <- 1
  if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
  membership.probas <- Estep(data, params)
  params <- Mstep(data, params, membership.probas)
  loglik <- GetLogLikelihood(data, params$mus, params$sigmas,
                             params$mix.weights)
  logliks <- append(logliks, loglik)
  while(i < nb.iter){
    i <- i + 1
    if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
    membership.probas <- Estep(data, params)
    params <- Mstep(data, params, membership.probas)
    loglik <- GetLogLikelihood(data, params$mus, params$sigmas, params$mix.weights)
    if(loglik < logliks[length(logliks)]){
      msg <- paste("the log-likelihood is decreasing:", loglik, "<", logliks[length(logliks)])
      stop(msg, call.=FALSE)
    }
    logliks <- append(logliks, loglik)
    if(abs(logliks[i] - logliks[i-1]) <= threshold.convergence)
      break
  }
  return(list(params=params, membership.probas=membership.probas, logliks=logliks, nb.iters=i))
}

  • Example: and now, let's try it!
## simulate data
K <- 3
N <- 300
simul <- GetUnivariateSimulatedData(K, N)
data <- simul$obs

## run the EM algorithm
params0 <- list(mus=runif(n=K, min=min(data), max=max(data)),
                sigmas=rep(1, K),
                mix.weights=rep(1/K, K))
res <- EMalgo(data, params0, 10^(-3), 1000, 1)

## check its convergence
plot(res$logliks, xlab="iterations", ylab="log-likelihood",
     main="Convergence of the EM algorithm", type="b")

## plot the data along with the inferred densities
png("mixture_univar_em.png")
hist(data, breaks=30, freq=FALSE, col="grey", border="white", ylim=c(0,0.15),
     main="Histogram of data overlaid with densities inferred by EM")
rx <- seq(from=min(data), to=max(data), by=0.1)
ds <- lapply(1:K, function(k){dnorm(x=rx, mean=res$params$mus[k], sd=res$params$sigmas[k])})
f <- sapply(1:length(rx), function(i){
  res$params$mix.weights[1] * ds[[1]][i] + res$params$mix.weights[2] * ds[[2]][i] + res$params$mix.weights[3] * ds[[3]][i]
})
lines(rx, f, col="red", lwd=2)
dev.off()

It seems to work well, which was expected as the clusters are well separated from each other...

The classification of each observation can then be obtained by assigning it to the component with the highest membership probability:

## get the classification of the observations
memberships <- apply(res$membership.probas, 1, which.max)
table(memberships)
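
Since the data were simulated, we can also compare the inferred classification and parameters with the truth returned by GetUnivariateSimulatedData, for instance as follows (keep in mind that the labels of the inferred components need not match the labels used in the simulation, so rows and columns may have to be permuted by hand):

## compare with the true cluster labels and parameters from the simulation
table(memberships, simul$clusters)
rbind(true.mus=simul$mus, estim.mus=res$params$mus)
rbind(true.sigmas=simul$sigmas, estim.sigmas=res$params$sigmas)
rbind(true.weights=simul$mix.weights, estim.weights=res$params$mix.weights)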

  • References:
    • chapter 1 of the PhD thesis of Matthew Stephens (Oxford, 2000), freely available online
    • chapter 2 of the PhD thesis of Matthew Beal (UCL, 2003), freely available online
    • lecture "Mixture Models, Latent Variables and the EM Algorithm" by Cosma Shalizi, freely available online
    • talk "Graphical Models" by Zoubin Ghahramani, freely available online
    • book "Introducing Monte Carlo Methods with R" by Robert and Casella (2009)
  • Beyond:
    • many different distributions can be used besides the Normal
    • the observations can be multivariate
    • we can fit the model using Bayesian methods, e.g. MCMC or Variational Bayes
    • we can try to estimate the number of components (K), e.g. by reversible-jump MCMC or via non-parametric Bayes
    • there are known issues, e.g. the EM algorithm can get stuck in a local maximum of the likelihood (a simple multiple-restart workaround is sketched right after this list), and the likelihood is invariant under permutations of the component labels ("label switching")
    • the parameters of each mixture component can depend on some known predictors, giving rise to mixture-of-experts models
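
To illustrate the local-maximum issue mentioned above, a simple and commonly-used workaround is to run the EM algorithm several times from different random initializations and to keep the run with the highest final log-likelihood. Here is a minimal sketch reusing the functions and the simulated data from the example above:

## run EM from several random starting points and keep the best run
nb.restarts <- 10
runs <- lapply(1:nb.restarts, function(r){
  params0 <- list(mus=runif(n=K, min=min(data), max=max(data)),
                  sigmas=rep(1, K),
                  mix.weights=rep(1/K, K))
  EMalgo(data, params0, 10^(-3), 1000, verbose=0)
})
final.logliks <- sapply(runs, function(run){ run$logliks[length(run$logliks)] })
best <- runs[[which.max(final.logliks)]]
final.logliks       # a large spread would indicate that some runs got stuck
best$params$mus     # estimates from the best run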