User:Timothee Flutre/Notebook/Postdoc/2011/12/14

==Learn about mixture models and the EM algorithm==
 
* '''Motivation and examples''': it is common to collect heterogeneous data, i.e. data that we suspect come from several clusters. For instance, we may measure the height of individuals without recording their gender, or measure the expression level of a gene in several individuals without recording which ones are healthy and which are sick, etc.
 
* '''Data''': we have <math>N</math> observations, denoted by <math>X = (x_1, x_2, ..., x_N)</math>. For the moment, we suppose that each observation <math>x_i</math> is univariate, i.e. corresponds to a single number.
 
* '''Hypotheses and aim''': let's assume that the data are heterogeneous and that they can be partitioned into <math>K</math> clusters (see the examples above). This means that a subset of the observations comes from cluster <math>k=1</math>, another subset comes from cluster <math>k=2</math>, and so on. The aim is to infer, from the observations alone, the characteristics of each cluster and from which cluster each observation is most likely to come.
 
* '''Model''': technically, we say that the observations were generated by a family of density functions. The density of all the observations is thus a mixture of densities, one per cluster. In our case, we will assume that each cluster <math>k</math> corresponds to a Normal distribution with mean <math>\mu_k</math> and standard deviation <math>\sigma_k</math>. Moreover, as we don't know for sure from which cluster a given observation comes, we define the mixture probability <math>w_k</math> to be the probability that any given observation comes from cluster <math>k</math>, with <math>\sum_{k=1}^{K} w_k = 1</math>. As a result, we have the following list of parameters: <math>\theta=(w_1,...,w_K,\mu_1,...,\mu_K,\sigma_1,...,\sigma_K)</math>. Finally, for a given observation <math>x_i</math>, we can write the model <math>f(x_i|\theta) = \sum_{k=1}^{K} w_k g(x_i|\mu_k,\sigma_k)</math>, with <math>g</math> being the Normal density <math>g(x_i|\mu_k,\sigma_k) = \frac{1}{\sqrt{2\pi} \sigma_k} \exp\left(-\frac{1}{2}\left(\frac{x_i - \mu_k}{\sigma_k}\right)^2\right)</math> (see the simulation sketch below).
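
Below is a minimal simulation sketch of this model (in Python with NumPy; the code and parameter values are illustrative additions, not part of the original entry). It draws observations from a two-component Gaussian mixture and evaluates the mixture density <math>f(x_i|\theta)</math> defined above.

<pre>
import numpy as np

# Illustrative sketch of the mixture model above (parameter values are arbitrary).
rng = np.random.default_rng(seed=0)

# theta = (w_1..w_K, mu_1..mu_K, sigma_1..sigma_K), here with K = 2
w = np.array([0.4, 0.6])        # mixture probabilities, sum to 1
mu = np.array([160.0, 175.0])   # cluster means (e.g. heights in cm)
sigma = np.array([5.0, 6.0])    # cluster standard deviations

def simulate(N):
    """Draw N observations: pick a cluster for each, then draw from its Normal."""
    z = rng.choice(len(w), size=N, p=w)   # hidden cluster labels
    x = rng.normal(mu[z], sigma[z])       # observed values
    return x, z

def mixture_density(x):
    """Evaluate f(x|theta) = sum_k w_k * g(x|mu_k, sigma_k) for each x."""
    x = np.atleast_1d(x)[:, None]         # shape (N, 1), to broadcast over the K clusters
    g = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return (w * g).sum(axis=1)

x, z = simulate(1000)
print(mixture_density(x[:5]))
</pre>

In this sketch, only <code>x</code> would be observed in practice: the cluster labels <code>z</code> are hidden, which is exactly the situation that mixture models (and the EM algorithm) are meant to handle.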




