User:Timothee Flutre/Notebook/Postdoc/2012/08/16
From OpenWetWare
Revision as of 17:25, 4 September 2012
Variational Bayes approach for the mixture of Normals
The latent variables induce dependencies between all the parameters of the model. This makes it difficult to find the parameters that maximize the likelihood. An elegant solution is to introduce a variational distribution of parameters and latent variables, which leads to a re-formulation of the classical EM algorithm. But let's show it directly in the Bayesian paradigm.
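For concreteness (with notation assumed from the earlier entries of this notebook: observations <math>\mathbf{y}</math>, mixture weights <math>w_k</math>, component means <math>\mu_k</math> and variances <math>\sigma_k^2</math> gathered in <math>\Theta</math>), the likelihood of the K-component mixture of Normals shows why direct maximization is hard: the logarithm of a sum does not decompose over components.

<math>p(\mathbf{y} | \Theta, K) = \prod_{n=1}^N \sum_{k=1}^K w_k \, \mathcal{N}(y_n; \mu_k, \sigma_k^2) \; \Longrightarrow \; \mathrm{ln} \, p(\mathbf{y} | \Theta, K) = \sum_{n=1}^N \mathrm{ln} \left( \sum_{k=1}^K w_k \, \mathcal{N}(y_n; \mu_k, \sigma_k^2) \right)</math>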
The constant C is here to remind us that q has the constraint of being a distribution, i.e. of summing to 1, which can be enforced by a Lagrange multiplier. We can then use the concavity of the logarithm (Jensen's inequality) to derive a lower bound of the marginal log-likelihood:

<math>\mathrm{ln} \, p(\mathbf{y} | K) = \mathrm{ln} \int_\Theta \, \mathrm{d}\Theta \int_\mathbf{z} \, \mathrm{d}\mathbf{z} \; q(\mathbf{z}, \Theta) \, \frac{p(\mathbf{y}, \mathbf{z}, \Theta | K)}{q(\mathbf{z}, \Theta)} \ge \int_\Theta \, \mathrm{d}\Theta \int_\mathbf{z} \, \mathrm{d}\mathbf{z} \; q(\mathbf{z}, \Theta) \; \mathrm{ln} \, \frac{p(\mathbf{y}, \mathbf{z}, \Theta | K)}{q(\mathbf{z}, \Theta)}</math>
Let's call this lower bound <math>\mathcal{F}</math> as it is a functional, i.e. a function of functions. To gain some intuition about the impact of introducing q, let's expand <math>\mathcal{F}</math>:

<math>\mathcal{F}(q) = \int_\Theta \, \mathrm{d}\Theta \int_\mathbf{z} \, \mathrm{d}\mathbf{z} \; q(\mathbf{z}, \Theta) \; \mathrm{ln} \, \frac{p(\mathbf{z}, \Theta | \mathbf{y}, K) \; p(\mathbf{y} | K)}{q(\mathbf{z}, \Theta)} = \mathrm{ln} \, p(\mathbf{y} | K) - D_{KL}\left( q(\mathbf{z}, \Theta) \, || \, p(\mathbf{z}, \Theta | \mathbf{y}, K) \right)</math>
From this, it is clear that <math>\mathcal{F}</math> (i.e. a lower bound of the marginal log-likelihood) is the marginal log-likelihood minus the Kullback-Leibler divergence between the variational distribution q and the joint posterior of latent variables and parameters. In practice, we have to make the following crucial assumption of independence on q in order for the calculations to be analytically tractable:

<math>q(\mathbf{z}, \Theta) = q_\mathbf{z}(\mathbf{z}) \; q_\Theta(\Theta)</math>
This means that <math>q_\mathbf{z} \, q_\Theta</math> approximates the joint posterior, and therefore the lower bound will be tight if and only if this approximation is exact, i.e. if and only if the KL divergence is zero. As we ultimately aim at inferring the parameters and latent variables that maximize the marginal log-likelihood, we will use the [http://en.wikipedia.org/wiki/Calculus_of_variations calculus of variations] to find the functions <math>q_\mathbf{z}</math> and <math>q_\Theta</math> that maximize the functional <math>\mathcal{F}</math>:

<math>\mathcal{F}(q_\mathbf{z}, q_\Theta) = \int_\Theta \, \mathrm{d}\Theta \; q_\Theta(\Theta) \; \left( \int_\mathbf{z} \, \mathrm{d}\mathbf{z} \; q_\mathbf{z}(\mathbf{z}) \; \mathrm{ln} \, \frac{p(\mathbf{y}, \mathbf{z} | \Theta, K)}{q_\mathbf{z}(\mathbf{z})} + \mathrm{ln} \, \frac{p(\Theta | K)}{q_\Theta(\Theta)} \right) + C_{\mathbf{z}} + C_{\Theta}</math>
This naturally leads to a procedure very similar to the EM algorithm where, at the E step, we calculate the expectations of the parameters with respect to the variational distributions <math>q_\mathbf{z}</math> and <math>q_\Theta</math>, and, at the M step, we recompute the variational distributions over the parameters.
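To make this alternating procedure concrete, here is a minimal sketch in Python of such a variational loop, for a deliberately simplified model: a 1-D mixture of Normals with known common precision <math>\tau</math>, a Dirichlet prior on the weights, and Normal priors on the means. The function name, the priors, and the simplifications (known precision, fixed iteration count instead of monitoring <math>\mathcal{F}</math>) are illustrative choices, not part of the derivation above.

```python
import numpy as np
from scipy.special import digamma

def vb_mixture(y, K, tau=1.0, alpha0=1.0, m0=0.0, lambda0=1e-3,
               n_iter=50, seed=0):
    """Mean-field VB for a 1-D mixture of Normals with known precision tau.

    Illustrative priors: weights ~ Dirichlet(alpha0), means ~ N(m0, 1/lambda0).
    Returns responsibilities r (N x K, i.e. q_z) and posterior means m (K,).
    """
    rng = np.random.default_rng(seed)
    N = len(y)
    # initialize the responsibilities q_z at random
    r = rng.dirichlet(np.ones(K), size=N)
    for _ in range(n_iter):
        # ----- update q_Theta given q_z (the "M-like" step) -----
        Nk = r.sum(axis=0)                       # effective counts per component
        alpha = alpha0 + Nk                      # Dirichlet posterior of weights
        lam = lambda0 + tau * Nk                 # precision of q(mu_k)
        m = (lambda0 * m0 + tau * (r * y[:, None]).sum(axis=0)) / lam
        # ----- update q_z given q_Theta (the "E-like" step) -----
        # E[ln w_k] under the Dirichlet, E[(y_n - mu_k)^2] under q(mu_k)
        e_ln_w = digamma(alpha) - digamma(alpha.sum())
        sq = (y[:, None] - m) ** 2 + 1.0 / lam
        ln_rho = e_ln_w - 0.5 * tau * sq
        ln_rho -= ln_rho.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(ln_rho)
        r /= r.sum(axis=1, keepdims=True)        # normalize q_z per observation
    return r, m

# toy usage: two well-separated components
y = np.concatenate([np.random.default_rng(1).normal(-5, 1, 100),
                    np.random.default_rng(2).normal(5, 1, 100)])
r, m = vb_mixture(y, K=2, tau=1.0)
```

Each iteration alternates exactly as described above: the means and weights are re-estimated under the current responsibilities, then the responsibilities are recomputed from the expectations of the parameters under their variational distributions.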
We start by writing the functional derivative of <math>\mathcal{F}</math> with respect to <math>q_\mathbf{z}</math>:

<math>\frac{\partial \mathcal{F}}{\partial q_{\mathbf{z}}} = \int_\Theta \, \mathrm{d}\Theta \; q_\Theta(\Theta) \; \left[ \frac{\partial}{\partial q_{\mathbf{z}}} \left( \int_\mathbf{z} \, \mathrm{d}\mathbf{z} \; \left( q_{\mathbf{z}}(\mathbf{z}) \, \mathrm{ln} \, p(\mathbf{y},\mathbf{z}|\Theta,K) - q_{\mathbf{z}}(\mathbf{z}) \, \mathrm{ln} \, q_{\mathbf{z}}(\mathbf{z}) \right) \right) \right] + C_{\mathbf{z}}</math>

<math>\frac{\partial \mathcal{F}}{\partial q_{\mathbf{z}}} = \int_\Theta \, \mathrm{d}\Theta \; q_\Theta(\Theta) \; \left[ \mathrm{ln} \, p(\mathbf{y},\mathbf{z}|\Theta,K) - \mathrm{ln} \, q_{\mathbf{z}}(\mathbf{z}) - 1 \right] + C_{\mathbf{z}}</math>
Then we set this functional derivative to zero. We also make use of a frequent assumption, namely that the variational distribution fully factorizes over each individual latent variable (mean-field assumption):

<math>\frac{\partial \mathcal{F}}{\partial q_{\mathbf{z}}} \bigg|_{q_{\mathbf{z}}^{(t+1)}} = 0 \Longleftrightarrow \forall \, n \; \mathrm{ln} \, q_{z_n}^{(t+1)}(z_n) = \left( \int_\Theta \, \mathrm{d}\Theta \; q_\Theta(\Theta) \; \mathrm{ln} \, p(\mathbf{y},\mathbf{z}|\Theta,K) \right) - 1 + C_{z_n}</math>
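Exponentiating and absorbing the constants into the normalization (a standard step, stated here for completeness) gives the familiar form of the mean-field update:

<math>q_{z_n}^{(t+1)}(z_n) \propto \mathrm{exp} \left( \mathbb{E}_{q_\Theta} \left[ \mathrm{ln} \, p(\mathbf{y},\mathbf{z}|\Theta,K) \right] \right)</math>

where, for a mixture, the terms involving the other <math>z_{n'}</math> (with <math>n' \ne n</math>) are constant in <math>z_n</math> and are absorbed into the normalization as well.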
TODO
TODO