User:Timothee Flutre/Notebook/Postdoc/2011/11/10

Bayesian model of univariate linear regression for QTL detection

See Servin & Stephens (PLoS Genetics, 2007).


  • Data: let's assume that we obtained data from N individuals. We denote by [math]\displaystyle{ y_1,\ldots,y_N }[/math] the (quantitative) phenotypes (e.g. expression levels of a given gene), and [math]\displaystyle{ g_1,\ldots,g_N }[/math] the genotypes at a given SNP (encoded as allele dose: 0, 1 or 2).


  • Goal: we want to assess the evidence in the data for an effect of the genotype on the phenotype.


  • Assumptions: the relationship between genotype and phenotype is linear; the individuals are not genetically related; there are no hidden confounding factors in the phenotypes.


  • Likelihood: we start by writing the usual linear regression for one individual:

[math]\displaystyle{ \forall i \in \{1,\ldots,N\}, \; y_i = \mu + \beta_1 g_i + \beta_2 \mathbf{1}_{g_i=1} + \epsilon_i \text{ with } \epsilon_i \overset{i.i.d}{\sim} \mathcal{N}(0,\tau^{-1}) }[/math]

where [math]\displaystyle{ \beta_1 }[/math] is in fact the additive effect of the SNP, denoted [math]\displaystyle{ a }[/math] from now on, and [math]\displaystyle{ \beta_2 }[/math] is the dominance effect of the SNP, [math]\displaystyle{ d = a k }[/math], with [math]\displaystyle{ k }[/math] quantifying the degree of dominance ([math]\displaystyle{ k=0 }[/math] corresponds to a purely additive SNP).

Let's now write the model in matrix notation:

[math]\displaystyle{ Y = X B + E \text{ where } B = [ \mu \; a \; d ]^T }[/math]

This gives the following [http://en.wikipedia.org/wiki/Multivariate_normal_distribution multivariate Normal distribution] for the phenotypes:

[math]\displaystyle{ Y | X, \tau, B \sim \mathcal{N}(XB, \tau^{-1} I_N) }[/math]
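Here [math]\displaystyle{ Y }[/math] is the [math]\displaystyle{ N }[/math]-vector of phenotypes, [math]\displaystyle{ X }[/math] is the [math]\displaystyle{ N \times 3 }[/math] design matrix whose [math]\displaystyle{ i }[/math]-th row is [math]\displaystyle{ (1, g_i, \mathbf{1}_{g_i=1}) }[/math], and [math]\displaystyle{ E }[/math] is the vector of errors. To make this concrete, here is a minimal numpy sketch that simulates data under this model and builds the design matrix (all numerical values are illustrative assumptions, not taken from the text):

 import numpy as np
 
 rng = np.random.default_rng(1859)
 
 # simulate genotypes (allele doses 0/1/2) and phenotypes for N individuals
 N = 100
 g = rng.binomial(2, 0.3, size=N)   # allele frequency 0.3 (illustrative)
 mu, a, k = 4.0, 0.5, 0.5           # illustrative intercept, additive effect, dominance ratio
 d = a * k                          # dominance effect
 tau = 1.0                          # error precision
 y = mu + a * g + d * (g == 1) + rng.normal(0.0, 1.0 / np.sqrt(tau), size=N)
 
 # design matrix with i-th row (1, g_i, 1_{g_i=1}), so that Y = X B + E
 X = np.column_stack([np.ones(N), g, (g == 1).astype(float)])
 B_true = np.array([mu, a, d])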

From this conditional distribution, the likelihood of the parameters given the data is:

[math]\displaystyle{ \mathcal{L}(\tau, B) = \mathsf{P}(Y | X, \tau, B) }[/math]

[math]\displaystyle{ \mathcal{L}(\tau, B) = \left(\frac{\tau}{2 \pi}\right)^{\frac{N}{2}} \exp \left( -\frac{\tau}{2} (Y - XB)^T (Y - XB) \right) }[/math]
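Numerically, this likelihood is best evaluated on the log scale to avoid underflow. A small sketch of this computation, assuming Y and X are as in the simulation sketch above:

 import numpy as np
 
 def log_likelihood(tau, B, Y, X):
     """log L(tau, B) = log P(Y | X, tau, B) for the linear model above."""
     N = len(Y)
     resid = Y - X @ B
     return 0.5 * N * np.log(tau / (2.0 * np.pi)) - 0.5 * tau * (resid @ resid)
 
 # e.g. log_likelihood(1.0, B_true, y, X) with the simulated data above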


  • Priors: we use the usual conjugate prior:

[math]\displaystyle{ \mathsf{P}(\tau, B) = \mathsf{P}(\tau) \mathsf{P}(B | \tau) }[/math]

A [http://en.wikipedia.org/wiki/Gamma_distribution Gamma distribution] for [math]\displaystyle{ \tau }[/math]:

[math]\displaystyle{ \tau \sim \Gamma(\kappa/2, \, \lambda/2) }[/math]

which means:

[math]\displaystyle{ \mathsf{P}(\tau) = \frac{\left(\frac{\lambda}{2}\right)^{\kappa/2}}{\Gamma(\frac{\kappa}{2})} \tau^{\frac{\kappa}{2}-1} e^{-\frac{\lambda}{2} \tau} }[/math]

And a multivariate Normal distribution for [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ B | \tau \sim \mathcal{N}(\vec{0}, \, \tau^{-1} \Sigma_B) \text{ with } \Sigma_B = diag(\sigma_{\mu}^2, \sigma_a^2, \sigma_d^2) }[/math]

which means:

[math]\displaystyle{ \mathsf{P}(B | \tau) = \left(\frac{\tau}{2 \pi}\right)^{\frac{3}{2}} |\Sigma_B|^{-\frac{1}{2}} \exp \left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B \right) }[/math]
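As a sketch, here is one draw from this Normal-Gamma prior with numpy; the hyperparameter values are illustrative assumptions, and note that numpy parameterizes the Gamma by shape and scale, so the rate [math]\displaystyle{ \lambda/2 }[/math] becomes the scale [math]\displaystyle{ 2/\lambda }[/math]:

 import numpy as np
 
 rng = np.random.default_rng(1859)
 
 # illustrative hyperparameter values (assumptions, not from the text)
 kappa, lam = 2.0, 1.0                         # tau ~ Gamma(kappa/2, rate=lam/2)
 Sigma_B = np.diag([10.0**2, 1.0**2, 1.0**2])  # prior variances of mu, a, d
 
 tau = rng.gamma(shape=kappa / 2.0, scale=2.0 / lam)      # draw from P(tau)
 B = rng.multivariate_normal(np.zeros(3), Sigma_B / tau)  # draw from P(B | tau)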


  • Joint posterior:

[math]\displaystyle{ \mathsf{P}(\tau, B | Y, X) = \mathsf{P}(\tau | Y, X) \mathsf{P}(B | Y, X, \tau) }[/math]


  • Conditional posterior of [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B, Y | X, \tau)}{\mathsf{P}(Y | X, \tau)} }[/math]

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)}{\int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B} }[/math]

Here and in the following, we neglect all multiplicative factors that do not depend on [math]\displaystyle{ B }[/math] (e.g. the normalization constant, the factor involving [math]\displaystyle{ Y^TY }[/math], etc.):

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) }[/math]

We plug in the prior and the likelihood, keeping only the factors that involve [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp\left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B\right) \exp\left(-\frac{\tau}{2} (Y-XB)^T(Y-XB)\right) }[/math]

We expand:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp\left(-\frac{\tau}{2} \left( B^T \Sigma_B^{-1} B - Y^TXB - B^TX^TY + B^TX^TXB \right)\right) }[/math]

We factorize some terms:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp\left(-\frac{\tau}{2} \left( B^T (\Sigma_B^{-1} + X^TX) B - Y^TXB - B^TX^TY \right)\right) }[/math]

Let's define [math]\displaystyle{ \Omega = (\Sigma_B^{-1} + X^TX)^{-1} }[/math]. Since [math]\displaystyle{ \Sigma_B^{-1} }[/math] and [math]\displaystyle{ X^TX }[/math] are symmetric, [math]\displaystyle{ \Omega^T=\Omega }[/math], i.e. [math]\displaystyle{ \Omega }[/math] is a symmetric matrix. This is particularly useful here because we can insert the identity [math]\displaystyle{ \Omega^T\Omega^{-1}=I }[/math]:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp\left(-\frac{\tau}{2} \left( B^T \Omega^{-1} B - (X^TY)^T\Omega^T\Omega^{-1}B - B^T\Omega^{-1}\Omega^TX^TY \right)\right) }[/math]

This now becomes easy to factorize completely:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp\left(-\frac{\tau}{2} (B - \Omega X^TY)^T\Omega^{-1}(B - \Omega X^TY)\right) }[/math]

We recognize the kernel of a Normal distribution (the term [math]\displaystyle{ (X^TY)^T \Omega X^TY }[/math] produced by completing the square does not depend on [math]\displaystyle{ B }[/math] and was dropped), allowing us to write the conditional posterior as:

[math]\displaystyle{ B | Y, X, \tau \sim \mathcal{N}(\Omega X^TY, \tau^{-1} \Omega) }[/math]
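In code, sampling from this conditional posterior only requires computing [math]\displaystyle{ \Omega }[/math]; a minimal sketch, with Sigma_B, Y, X and tau as in the sketches above:

 import numpy as np
 
 def draw_B_given_tau(Y, X, tau, Sigma_B, rng):
     """One draw from B | Y, X, tau ~ N(Omega X^T Y, Omega / tau)."""
     Omega = np.linalg.inv(np.linalg.inv(Sigma_B) + X.T @ X)
     post_mean = Omega @ X.T @ Y
     return rng.multivariate_normal(post_mean, Omega / tau)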


  • Posterior of [math]\displaystyle{ \tau }[/math]:

Similarly to the equations above:

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau) }[/math]

But now, to handle the second term, we need to integrate over [math]\displaystyle{ B }[/math], thus effectively taking into account the uncertainty in [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B }[/math]

Again, we use the prior and likelihood specified above (for now, everything is kept inside the integral, even factors that don't depend on [math]\displaystyle{ B }[/math]!):

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \tau^{\frac{\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \tau^{N/2} \exp\left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B\right) \exp\left(-\frac{\tau}{2} (Y - XB)^T (Y - XB)\right) \mathsf{d}B }[/math]

As we used a conjugate prior for [math]\displaystyle{ \tau }[/math], we expect a Gamma distribution for the posterior. We can therefore take [math]\displaystyle{ \tau^{N/2} }[/math] out of the integral (it does not depend on [math]\displaystyle{ B }[/math]) and start assembling the shape of a Gamma distribution. We also complete the square inside the exponential:

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \exp \left[-\frac{\tau}{2} \left( (B - \Omega X^T Y)^T \Omega^{-1} (B - \Omega X^T Y) - Y^T X \Omega X^T Y + Y^T Y \right) \right] \mathsf{d}B }[/math]

We recognize the kernel of the conditional posterior of [math]\displaystyle{ B }[/math]. Since the pdf of a Normal distribution integrates to one, the whole integral reduces to [math]\displaystyle{ (2\pi)^{3/2} |\Omega|^{1/2} }[/math], a constant with respect to [math]\displaystyle{ \tau }[/math]:

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \exp\left[-\frac{\tau}{2} \left( Y^T Y - Y^T X \Omega X^T Y \right) \right] }[/math]

We finally recognize a Gamma distribution, allowing us to write the posterior as:

[math]\displaystyle{ \tau | Y, X \sim \Gamma \left( \frac{N+\kappa}{2}, \; \frac{1}{2} \left( Y^T Y - Y^T X \Omega X^T Y + \lambda \right) \right) }[/math]
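Together with the conditional posterior of [math]\displaystyle{ B }[/math] derived above, this allows exact (non-MCMC) sampling from the joint posterior: draw [math]\displaystyle{ \tau }[/math] from its marginal Gamma posterior, then [math]\displaystyle{ B }[/math] given [math]\displaystyle{ \tau }[/math]. A minimal sketch under the same illustrative assumptions as above:

 import numpy as np
 
 def sample_posterior(Y, X, kappa, lam, Sigma_B, n_draws, rng):
     """Exact draws from P(tau, B | Y, X) = P(tau | Y, X) P(B | Y, X, tau)."""
     N = len(Y)
     Omega = np.linalg.inv(np.linalg.inv(Sigma_B) + X.T @ X)
     post_mean = Omega @ X.T @ Y
     shape = (N + kappa) / 2.0
     rate = 0.5 * (Y @ Y - Y @ X @ Omega @ X.T @ Y + lam)
     taus = rng.gamma(shape=shape, scale=1.0 / rate, size=n_draws)
     Bs = np.array([rng.multivariate_normal(post_mean, Omega / t) for t in taus])
     return taus, Bs
 
 # e.g. with the simulated data and hyperparameters above:
 # taus, Bs = sample_posterior(y, X, kappa, lam, Sigma_B, 1000, rng)
 # Bs.mean(axis=0) should then be close to B_true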