User:Timothee Flutre/Notebook/Postdoc/2011/11/10


Bayesian model of univariate linear regression for QTL detection

See Servin & Stephens (PLoS Genetics, 2007).


  • Data: let's assume we obtained data from N individuals. We denote by [math]\displaystyle{ y_1,\ldots,y_N }[/math] the (quantitative) phenotypes (e.g. expression levels at a given gene), and by [math]\displaystyle{ g_1,\ldots,g_N }[/math] the genotypes at a given SNP (encoded as allele dose: 0, 1 or 2).


  • Goal: we want to assess the evidence in the data for an effect of the genotype on the phenotype.


  • Assumptions: the relationship between genotype and phenotype is linear; the individuals are not genetically related; there are no hidden confounding factors in the phenotypes.


  • Likelihood: we start by writing the usual linear regression for one individual

[math]\displaystyle{ \forall i \in \{1,\ldots,N\}, \; y_i = \mu + \beta_1 g_i + \beta_2 \mathbf{1}_{g_i=1} + \epsilon_i \text{ with } \epsilon_i \overset{i.i.d}{\sim} \mathcal{N}(0,\tau^{-1}) }[/math]

where [math]\displaystyle{ \beta_1 }[/math] is in fact the additive effect of the SNP, denoted [math]\displaystyle{ a }[/math] from now on, and [math]\displaystyle{ \beta_2 }[/math] is the dominance effect of the SNP, [math]\displaystyle{ d = a k }[/math] (with [math]\displaystyle{ k }[/math] measuring the degree of dominance).
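
To make this concrete, here is a minimal simulation sketch under this model, in Python with numpy; the sample size, allele frequency and parameter values below are arbitrary illustrative choices, not values from the text:

  import numpy as np
  
  rng = np.random.default_rng(1)
  N = 200                              # sample size (arbitrary)
  f = 0.3                              # allele frequency (arbitrary)
  mu, a, d, tau = 4.0, 0.5, 0.2, 1.0   # hypothetical "true" values
  
  g = rng.binomial(2, f, size=N)                     # allele doses 0/1/2
  eps = rng.normal(0.0, 1.0 / np.sqrt(tau), size=N)  # errors with precision tau
  y = mu + a * g + d * (g == 1) + eps                # phenotypes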

Let's now write the model in matrix notation:

[math]\displaystyle{ Y = X B + E \text{ where } B = [ \mu \; a \; d ]^T }[/math]

where [math]\displaystyle{ Y }[/math] is the [math]\displaystyle{ N \times 1 }[/math] vector of phenotypes, [math]\displaystyle{ X }[/math] is the [math]\displaystyle{ N \times 3 }[/math] design matrix whose [math]\displaystyle{ i }[/math]-th row is [math]\displaystyle{ [1 \; g_i \; \mathbf{1}_{g_i=1}] }[/math], and [math]\displaystyle{ E }[/math] is the vector of errors.

This gives the following multivariate Normal distribution for the phenotypes:

[math]\displaystyle{ Y | X, \tau, B \sim \mathcal{N}(XB, \tau^{-1} I_N) }[/math]

Even though we can write the likelihood as a multivariate Normal, I still keep the term "univariate" in the title because the covariance matrix of [math]\displaystyle{ Y | X, \tau, B }[/math], namely [math]\displaystyle{ \tau^{-1} I_N }[/math], is governed by a single real number, [math]\displaystyle{ \tau }[/math].
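
As a sketch, the response vector [math]\displaystyle{ Y }[/math] and design matrix [math]\displaystyle{ X }[/math] can be built in numpy as follows (continuing the hypothetical g and y simulated above):

  import numpy as np
  
  # g (genotypes) and y (phenotypes) as in the simulation sketch above
  X = np.column_stack([np.ones(len(g)), g, (g == 1).astype(float)])  # N x 3
  Y = y.reshape(-1, 1)                                               # N x 1
  assert X.shape == (len(g), 3) and Y.shape == (len(g), 1)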

The likelihood of the parameters given the data is therefore:

[math]\displaystyle{ \mathcal{L}(\tau, B) = \mathsf{P}(Y | X, \tau, B) }[/math]

[math]\displaystyle{ \mathcal{L}(\tau, B) = \left(\frac{\tau}{2 \pi}\right)^{\frac{N}{2}} \exp \left( -\frac{\tau}{2} (Y - XB)^T (Y - XB) \right) }[/math]
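
For reference, this log-likelihood is straightforward to evaluate numerically; a minimal sketch (the function name and argument layout are mine):

  import numpy as np
  
  def log_lik(tau, B, Y, X):
      """Log-likelihood log P(Y | X, tau, B) of the model above."""
      N = Y.shape[0]
      resid = Y - X @ B   # B is a 3 x 1 vector
      return 0.5 * N * np.log(tau / (2.0 * np.pi)) - 0.5 * tau * float(resid.T @ resid)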


  • Priors: we use conjugate priors so that the posteriors have closed forms. The joint prior factorizes as:

[math]\displaystyle{ \mathsf{P}(\tau, B) = \mathsf{P}(\tau) \mathsf{P}(B | \tau) }[/math]

A Gamma distribution for [math]\displaystyle{ \tau }[/math]:

[math]\displaystyle{ \tau \sim \Gamma(\kappa/2, \, \lambda/2) }[/math]

which means:

[math]\displaystyle{ \mathsf{P}(\tau) = \frac{\left(\frac{\lambda}{2}\right)^{\kappa/2}}{\Gamma(\frac{\kappa}{2})} \tau^{\frac{\kappa}{2}-1} e^{-\frac{\lambda}{2} \tau} }[/math]

And a multivariate Normal distribution for [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ B | \tau \sim \mathcal{N}(\vec{0}, \, \tau^{-1} \Sigma_B) \text{ with } \Sigma_B = \text{diag}(\sigma_{\mu}^2, \sigma_a^2, \sigma_d^2) }[/math]

which means:

[math]\displaystyle{ \mathsf{P}(B | \tau) = \left(\frac{\tau}{2 \pi}\right)^{\frac{3}{2}} |\Sigma_B|^{-\frac{1}{2}} \exp \left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B \right) }[/math]
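
Here is a sketch of how one could draw from these conjugate priors with numpy (function name and hyperparameter values are up to the user; note that numpy's Gamma sampler is parameterized by scale rather than rate):

  import numpy as np
  
  def draw_prior(rng, kappa, lam, sigma2_mu, sigma2_a, sigma2_d):
      """One draw of (tau, B) from the conjugate priors above."""
      # Gamma(kappa/2, lambda/2): numpy uses scale = 1/rate
      tau = rng.gamma(shape=kappa / 2.0, scale=2.0 / lam)
      Sigma_B = np.diag([sigma2_mu, sigma2_a, sigma2_d])
      B = rng.multivariate_normal(np.zeros(3), Sigma_B / tau)  # B | tau
      return tau, B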


  • Joint posterior (1):

[math]\displaystyle{ \mathsf{P}(\tau, B | Y, X) = \mathsf{P}(\tau | Y, X) \mathsf{P}(B | Y, X, \tau) }[/math]


  • Conditional posterior of B:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B, Y | X, \tau)}{\mathsf{P}(Y | X, \tau)} }[/math]

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B | \tau) \mathsf{P}(Y | X, B, \tau)}{\int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B} }[/math]

Here and in the following, we neglect all multiplicative constants that don't involve the parameter of interest (e.g. the normalization constant, terms in [math]\displaystyle{ Y^TY }[/math], etc.):

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) }[/math]

We use the prior and likelihood and keep only the terms in [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp \left( -\frac{\tau}{2} B^T \Sigma_B^{-1} B \right) \exp \left( -\frac{\tau}{2} (Y-XB)^T(Y-XB) \right) }[/math]

We expand:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp \left( -\frac{\tau}{2} \left( B^T \Sigma_B^{-1} B - Y^TXB - B^TX^TY + B^TX^TXB \right) \right) }[/math]

We factorize some terms:

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp \left( -\frac{\tau}{2} \left( B^T (\Sigma_B^{-1} + X^TX) B - Y^TXB - B^TX^TY \right) \right) }[/math]

Let's define [math]\displaystyle{ \Omega = (\Sigma_B^{-1} + X^TX)^{-1} }[/math]. As [math]\displaystyle{ \Sigma_B^{-1} }[/math] and [math]\displaystyle{ X^TX }[/math] are both symmetric, so is their sum, and the inverse of a symmetric matrix is itself symmetric: hence [math]\displaystyle{ \Omega^T=\Omega }[/math]. This is particularly useful here because we can use the following equality: [math]\displaystyle{ \Omega^{-1}\Omega^T=I }[/math].

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp \left( -\frac{\tau}{2} \left( B^T \Omega^{-1} B - (X^TY)^T\Omega^{-1}\Omega^TB - B^T\Omega^{-1}\Omega^TX^TY \right) \right) }[/math]

It is now easy to complete the factorization (the remaining term, [math]\displaystyle{ Y^TX\Omega X^TY }[/math], doesn't depend on [math]\displaystyle{ B }[/math] and is therefore neglected):

[math]\displaystyle{ \mathsf{P}(B | Y, X, \tau) \propto \exp \left( -\frac{\tau}{2} (B - \Omega X^TY)^T\Omega^{-1}(B - \Omega X^TY) \right) }[/math]

We recognize the kernel of a Normal distribution, allowing us to write the conditional posterior as:

[math]\displaystyle{ B | Y, X, \tau \sim \mathcal{N}(\Omega X^TY, \tau^{-1} \Omega) }[/math]
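
In code, [math]\displaystyle{ \Omega }[/math] and the parameters of this conditional posterior are directly computable; a minimal sketch (an explicit inverse is used for readability, though np.linalg.solve or a Cholesky factorization would be numerically safer):

  import numpy as np
  
  def cond_post_B(Y, X, Sigma_B):
      """Mean and scale of B | Y, X, tau (its covariance is Omega / tau)."""
      Omega = np.linalg.inv(np.linalg.inv(Sigma_B) + X.T @ X)
      mean = Omega @ X.T @ Y   # 3 x 1 posterior mean
      return mean, Omega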


  • Posterior of [math]\displaystyle{ \tau }[/math]:

Similarly to the equations above:

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau) }[/math]

But now, to handle the second term, we need to integrate over [math]\displaystyle{ B }[/math], thus effectively taking into account the uncertainty in [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B }[/math]

Again, we use the prior and likelihood specified above. Note that the factor [math]\displaystyle{ \tau^{3/2} }[/math] is kept inside the integral even though it doesn't depend on [math]\displaystyle{ B }[/math]: it will be needed later to recognize a Normal pdf.

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \tau^{\frac{\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \tau^{N/2} \exp \left( -\frac{\tau}{2} B^T \Sigma_B^{-1} B \right) \exp \left( -\frac{\tau}{2} (Y - XB)^T (Y - XB) \right) \mathsf{d}B }[/math]

As we used a conjugate prior for [math]\displaystyle{ \tau }[/math], we know the posterior will also be a Gamma distribution. We can therefore take [math]\displaystyle{ \tau^{N/2} }[/math] out of the integral and start to make a Gamma density appear. We also factorize inside the exponential:

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \exp \left[ -\frac{\tau}{2} \left( (B - \Omega X^T Y)^T \Omega^{-1} (B - \Omega X^T Y) - Y^T X \Omega X^T Y + Y^T Y \right) \right] \mathsf{d}B }[/math]

We recognize the conditional posterior of [math]\displaystyle{ B }[/math]. This allows us to use the fact that the pdf of a Normal distribution integrates to one (the factor [math]\displaystyle{ \tau^{3/2} }[/math] is exactly what its normalization constant requires; the remaining constants don't depend on [math]\displaystyle{ \tau }[/math]):

[math]\displaystyle{ \mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \exp \left[ -\frac{\tau}{2} (Y^T Y - Y^T X \Omega X^T Y) \right] }[/math]

We finally recognize a Gamma distribution, allowing us to write the posterior as:

[math]\displaystyle{ \tau | Y, X \sim \Gamma \left( \frac{N+\kappa}{2}, \; \frac{1}{2} (Y^T Y - Y^T X \Omega X^T Y + \lambda) \right) }[/math]
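
A sketch of the corresponding computation (same caveat about explicit inverses as above):

  import numpy as np
  
  def post_tau(Y, X, Sigma_B, kappa, lam):
      """Shape and rate of the Gamma posterior of tau | Y, X."""
      Omega = np.linalg.inv(np.linalg.inv(Sigma_B) + X.T @ X)
      XtY = X.T @ Y
      shape = (Y.shape[0] + kappa) / 2.0
      rate = 0.5 * float(Y.T @ Y - XtY.T @ Omega @ XtY + lam)
      return shape, rate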


  • Joint posterior (2): sometimes it is said that the joint posterior follows a Normal Inverse Gamma distribution:

[math]\displaystyle{ B, \tau | Y, X \sim \mathcal{N}IG \left( \Omega X^TY, \; \Omega, \; \frac{N+\kappa}{2}, \; \frac{\lambda^\ast}{2} \right) }[/math]

where [math]\displaystyle{ \lambda^\ast = Y^T Y - Y^T X \Omega X^T Y + \lambda }[/math], meaning that [math]\displaystyle{ B | Y, X, \tau \sim \mathcal{N}(\Omega X^TY, \tau^{-1}\Omega) }[/math] and [math]\displaystyle{ \tau | Y, X \sim \Gamma(\frac{N+\kappa}{2}, \frac{\lambda^\ast}{2}) }[/math].
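
This factorization means we can sample exactly from the joint posterior, with no need for MCMC: draw [math]\displaystyle{ \tau }[/math] from its Gamma posterior, then [math]\displaystyle{ B | \tau }[/math] from its Normal conditional. A sketch reusing the two functions above:

  import numpy as np
  
  def draw_joint_post(rng, Y, X, Sigma_B, kappa, lam, n_draws=1000):
      """Exact draws from P(B, tau | Y, X): tau first, then B | tau."""
      mean, Omega = cond_post_B(Y, X, Sigma_B)           # defined above
      shape, rate = post_tau(Y, X, Sigma_B, kappa, lam)  # defined above
      taus = rng.gamma(shape, scale=1.0 / rate, size=n_draws)
      Bs = np.array([rng.multivariate_normal(mean.ravel(), Omega / t) for t in taus])
      return taus, Bs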


  • Marginal posterior of B: we can now integrate out [math]\displaystyle{ \tau }[/math]:

[math]\displaystyle{ \mathsf{P}(B | Y, X) = \int \mathsf{P}(B, \tau | Y, X) \mathsf{d}\tau }[/math]

[math]\displaystyle{ \mathsf{P}(B | Y, X) = \frac{\left(\frac{\lambda^\ast}{2}\right)^{\frac{N+\kappa}{2}}}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \int \tau^{\frac{N+\kappa+3}{2}-1} \exp \left[-\tau \left( \frac{\lambda^\ast}{2} + \frac{1}{2} (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right) \right] \mathsf{d}\tau }[/math]

Here we recognize a Gamma integral, [math]\displaystyle{ \int_0^{+\infty} \tau^{a-1} e^{-b \tau} \mathsf{d}\tau = \Gamma(a) b^{-a} }[/math]:

[math]\displaystyle{ \mathsf{P}(B | Y, X) = \frac{\left(\frac{\lambda^\ast}{2}\right)^{\frac{N+\kappa}{2}} \Gamma(\frac{N+\kappa+3}{2})}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \left( \frac{\lambda^\ast}{2} + \frac{1}{2} (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right)^{-\frac{N+\kappa+3}{2}} }[/math]

And we now recognize a multivariate Student t distribution with [math]\displaystyle{ N+\kappa }[/math] degrees of freedom, location [math]\displaystyle{ \Omega X^TY }[/math] and scale matrix [math]\displaystyle{ \frac{\lambda^\ast}{N+\kappa}\Omega }[/math]:

[math]\displaystyle{ \mathsf{P}(B | Y, X) = \frac{\Gamma(\frac{N+\kappa+3}{2})}{\Gamma(\frac{N+\kappa}{2}) \pi^\frac{3}{2} |\lambda^\ast \Omega|^{\frac{1}{2}} } \left( 1 + \frac{(B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY)}{\lambda^\ast} \right)^{-\frac{N+\kappa+3}{2}} }[/math]
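
With this identification, the marginal posterior can be manipulated directly via scipy.stats.multivariate_t; a sketch (the function name is mine):

  import numpy as np
  from scipy.stats import multivariate_t
  
  def marg_post_B(Y, X, Sigma_B, kappa, lam):
      """Marginal posterior of B as a 3-dimensional Student t."""
      N = Y.shape[0]
      Omega = np.linalg.inv(np.linalg.inv(Sigma_B) + X.T @ X)
      XtY = X.T @ Y
      lam_star = float(Y.T @ Y - XtY.T @ Omega @ XtY + lam)
      nu = N + kappa   # degrees of freedom
      return multivariate_t(loc=(Omega @ XtY).ravel(),
                            shape=(lam_star / nu) * Omega, df=nu)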