Sarah Carratt: Week 11

Background

List of Steps to Analyze DNA Microarray Data

  1. Quantitate the fluorescence signal in each spot
  2. Calculate the ratio of red/green fluorescence
  3. Log transform the ratios
  4. Normalize the ratios on each microarray slide
    • Steps 1-4 are performed by the GenePix Pro software.
    • You will perform the following steps:
  5. Normalize the ratios for a set of slides in an experiment
  6. Perform statistical analysis on the ratios
  7. Compare individual genes with known data
    • Steps 5-7 are performed in Microsoft Excel
  8. Pattern finding algorithms (clustering)
  9. Map onto biological pathways
    • We will use software called STEM for the clustering and mapping
  10. Create mathematical model of transcriptional network

Groups and Submitting the Assignment

Each group will analyze a different microarray dataset:

  • Wild type data from the Schade et al. (2004) paper you read last week.
  • Wild type data from the Dahlquist lab.
  • Δgln3 data from the Dahlquist lab.

For your assignment this week, you will keep an electronic laboratory notebook on your individual wiki page that records all the manipulations you perform on the data and the answers to the questions throughout the protocol.

You will download your assigned Excel spreadsheet from LionShare. Because the Dahlquist Lab data is unpublished, please do not post it on this public wiki. Instead, keep the file(s) on LionShare, which is protected by a password.

  • Groups:
    • Schade et al. (2004) data: Carmen, James
    • Dahlquist lab wild type data: Sarah, Nick
    • Dahlquist lab Δgln3 data: Alondra

Experimental Design

On the spreadsheet, each row contains the data for one gene (one spot on the microarray). The first column (labeled "MasterIndex") numbers the rows in the spreadsheet so that we can match the data from different experiments together later. The second column (labeled "ID") contains the gene identifier from the Saccharomyces Genome Database. Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-4 above having been done for you by the scanner software).

Each of the column headings from the data begins with the experiment name ("wt" for Dahlquist wild type data). "LogFC" stands for "Log2 Fold Change", which is the log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes. Replicates are numbered as "-0", "-1", "-2", etc. after the timepoint.

For the Dahlquist data, the timepoints are t15, t30, t60 (cold shock at 13°C) and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C).
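
If you ever need to handle these column headings programmatically rather than by hand in Excel, the naming convention can be parsed mechanically. The short Python sketch below only illustrates the <experiment>_LogFC_<timepoint>-<replicate> pattern described above; the example headings and the use of Python are assumptions for illustration, not part of the assignment.

import re

# Hypothetical headings following the convention described above.
headings = ["wt_LogFC_t15-1", "wt_LogFC_t15-2", "wt_LogFC_t120-0"]

# <experiment>_LogFC_t<minutes>-<replicate>
pattern = re.compile(r"^(?P<experiment>\w+?)_LogFC_t(?P<minutes>\d+)-(?P<replicate>\d+)$")

for heading in headings:
    match = pattern.match(heading)
    if match:
        print(heading, "->", match.groupdict())
# e.g. wt_LogFC_t15-1 -> {'experiment': 'wt', 'minutes': '15', 'replicate': '1'}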

Instructions

Record Parameters

Begin by recording in your wiki the number of replicates for each time point in your data.

Normalize the ratios for a set of slides in an experiment

To scale and center the data (between-chip normalization), perform the following operations:

  • Insert a new Worksheet into your Excel file, and name it "scaled_centered".
  • Go back to the "compiled_raw_data" worksheet, Select All and Copy. Go to your new "scaled_centered" worksheet, click on the upper, left-hand cell (cell A1) and Paste.
  • Insert two rows in between the top row of headers and the first data row.
  • In cell A2, type "Average" and in cell A3, type "StdDev".
  • You will now compute the Average log ratio for each chip (each column of data). In cell C2, type the following equation:
=AVERAGE(C4:C6190)

and press "Enter". Excel is computing the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can left-click on the beginning cell (let go of the mouse button), scroll down to the bottom of the worksheet, and shift-left-click on the ending cell.

  • You will now compute the Standard Deviation of the log ratios on each chip (each column of data). In cell C3, type the following equation:
=STDEV(C4:C6190)

and press "Enter".

  • Excel will now do some work for you. Copy these two equations (cells C2 and C3) and paste them into the empty cells in the rest of the columns. Excel will automatically change the equation to match the cell designations for those columns.
  • You have now computed the average and standard deviation of the log ratios for each chip. Now we will actually do the scaling and centering based on these values.
  • Insert a new column to the right of each data column and label the top of the new column with the same name as the column to its left, adding "_sc" (for scaled and centered) to the name. For example, "wt_LogFC_t15-1_sc".
  • In cell D4, type the following equation:
=(C4-C$2)/C$3

In this case, we want the data in cell C4 to have the average subtracted from it (cell C2) and be divided by the standard deviation (cell C3). We use the dollar sign symbols in front of the number to tell Excel to always reference that row in the equation, even though we will paste it for the entire column. Why is this important?

  • Copy and paste this equation into the entire column.
  • Repeat the scaling and centering equation for each of the columns of data. You can copy and paste the formula above, but be sure that your equation is correct for the column you are calculating.
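
The scaling and centering you just performed in Excel can also be checked programmatically. Below is a minimal Python sketch of the same between-chip normalization (a per-column z-score); the file names and the use of pandas are assumptions for illustration only. Excel's STDEV is a sample standard deviation, so ddof=1 is used to match.

import pandas as pd

# Hypothetical export of the "compiled_raw_data" worksheet: the first two
# columns are MasterIndex and ID, the remaining columns are log2 ratios,
# one column per chip.
raw = pd.read_csv("wt_compiled_raw_data.csv")

ids = raw[["MasterIndex", "ID"]]
data = raw.drop(columns=["MasterIndex", "ID"])

# Scale and center each chip (column): subtract the column average and
# divide by the column standard deviation, exactly as in =(C4-C$2)/C$3.
scaled = (data - data.mean()) / data.std(ddof=1)

# Label the new columns with the "_sc" suffix, e.g. "wt_LogFC_t15-1_sc".
scaled.columns = [name + "_sc" for name in scaled.columns]

scaled_centered = pd.concat([ids, scaled], axis=1)
scaled_centered.to_csv("wt_scaled_centered.csv", index=False)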

Perform statistical analysis on the ratios

We are going to perform this step on the scaled and centered data you produced in the previous step.

  • Insert a new worksheet into your Excel spreadsheet and name it "statistics".
  • Go back to the "scaled_centered" worksheet, Select All and Copy. Go to your new "statistics" worksheet, click on the upper, left-hand cell (cell A1) and Select "Paste Special" from the Edit menu. A window will open: click on the radio button for "Values" and click OK. This will paste the numerical result into your new worksheet instead of the equation which must make calculations on the fly.
    • There may be some non-numerical values in some of the cells in your worksheet. This is due to errors created when Excel tries to compute an equation on a cell that has no data. We need to go through and remove these error messages before going on to the next step.
    • Scan through your spreadsheet to find an example of the error message. Then go to the Edit menu and select Replace. A window will open; type the text you are replacing in the "Find what:" field. In the "Replace with:" field, enter a single space character. Click on the button "Replace All" and record the number of replacements made in your wiki page.
  • We are now going to work with your scaled and centered Log Fold Changes only, so delete the columns containing the raw Log Fold changes, leaving only the columns that have the "_sc" suffix in their column headings. You may also delete the second and third rows where you computed the average and standard deviations for each chip.
  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the average log fold changes that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<AvgLogFC>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_AvgLogFC_t15".
  • Compute the average log fold change for the replicates for each timepoint by typing the equation:
=AVERAGE(range of cells in the row for that timepoint)

into the second cell below the column heading. For example, your equation might read

=AVERAGE(C2:F2)

Copy this equation and paste it into the rest of the column.

  • Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the averages and then copying and pasting all the columns at once.
  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the T statistic that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<Tstat>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Tstat_t15". You will now compute a T statistic that tells you whether the scaled and centered average log fold change is significantly different than 0 (no change in expression). Enter the equation into the second cell below the column heading:
=AVERAGE(range of cells)/(STDEV(range of cells)/SQRT(number of replicates))

For example, your equation might read:

=AVERAGE(C2:F2)/(STDEV(C2:F2)/SQRT(4))

(NOTE: in this case the number of replicates is 4. Be careful that you are using the correct number of parentheses.) Copy the equation and paste it into all rows in that column. Create the equation for the rest of the timepoints and paste it into their respective columns. Note that you can save yourself some time by completing the first equation for all of the T statistics and then copying and pasting all the columns at once.

  • Go to the empty columns to the right on your worksheet. Create new column headings in the top cells to label the P value that you will compute. Name them with the pattern <Schade, wt, or dGLN3>_<Pval>_<tx> where you use the appropriate text within the <> and where x is the time. For example, "wt_Pval_t15". In the cell below the label, enter the equation:
=TDIST(ABS(cell containing T statistic),degrees of freedom,2)

For example, your equation might read:

=TDIST(ABS(AE2),3,2)

The number of degrees of freedom is the number of replicates minus one, so in our case there are 3 degrees of freedom. Copy the equation and paste it into all rows in that column.
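
For reference, the average log fold change, T statistic, and p value for one timepoint can also be computed outside of Excel. The Python sketch below assumes the scaled and centered replicate columns for a single timepoint (for example, the four wt t15 replicates) have been exported to a numbers-only CSV with one row per gene; the file name is made up. It mirrors =AVERAGE(...)/(STDEV(...)/SQRT(n)) for the T statistic and =TDIST(ABS(t), n-1, 2) for the p value.

import numpy as np
from scipy import stats

# Hypothetical numbers-only CSV: one row per gene, one column per replicate
# at a single timepoint, already scaled and centered.
reps = np.loadtxt("wt_t15_sc.csv", delimiter=",")

n = reps.shape[1]                          # number of replicates (4 for wt t15)
avg_log_fc = reps.mean(axis=1)             # =AVERAGE(range)
sd = reps.std(axis=1, ddof=1)              # =STDEV(range), sample standard deviation
t_stat = avg_log_fc / (sd / np.sqrt(n))    # =AVERAGE(...)/(STDEV(...)/SQRT(n))

# Two-tailed p value with n - 1 degrees of freedom, like =TDIST(ABS(t), n-1, 2).
p_val = 2 * stats.t.sf(np.abs(t_stat), df=n - 1)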

  • Insert a new worksheet and name it "final".
  • Go back to the "statistics" worksheet and Select All and Copy.
  • Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. This is your final worksheet from which we will perform biological analysis of the data.
  • Select all of the columns containing Fold Changes. Select the menu item Format > Cells. Under the number tab, select 2 decimal places. Click OK.
  • Select all of the columns containing T statistics or P values. Select the menu item Format > Cells. Under the number tab, select 4 decimal places. Click OK.
  • Upload the .xls file that you have just created to LionShare. Give Dr. Dahlquist (username kdahlqui) and Dr. Fitzpatrick (username bfitzpatrick) permission to download your file. Send an e-mail to each of us with the link to the file.

Sanity Check: Number of genes significantly changed

Before we move on to the biological analysis of the data, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs and also compare our data analysis with the published results of Schade et al. (2004).

  • Open your spreadsheet and go to the "final" worksheet.
  • Click on cell A1 and select the menu item Data > Filter > Autofilter. Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
  • Click on the drop-down arrow on one of your "Pval" columns. Select "Custom". In the window that appears, set a criterion that will filter your data so that the P value has to be less than 0.05.
    • How many genes have p value < 0.05?
    • What about p < 0.01?
    • What about p < 0.001?
    • What about p < 0.0001?
      • Answer these questions for each timepoint in your dataset.
  • When we use a p value cut-off of p < 0.05, what we are saying is that you would have seen a gene expression change that deviates this far from zero by chance less than 5% of the time.
  • We have just performed 6189 T tests for significance, one per gene. Another way to state what we are seeing with p < 0.05 is that we would expect to see a gene expression change of this magnitude by chance in about 5% of our T tests, or about 309 times. Since more than 309 genes pass this cut-off, we know that some genes are significantly changed. However, we don't know which ones.
    • There is a simple correction that can be made to the p values to increase the stringency called the Bonferroni correction. To perform this correction, multiply the p value by the number of statistical tests performed (in our case 6189) and see whether any of the p values are still less than 0.05.
      • Perform this correction and determine whether and how many of the genes are still significantly changed at p < 0.05 after the Bonferroni correction.
  • The "AvgLogFC" tells us the magnitude of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control. For the timepoint that had the greatest number of genes significantly changed at p < 0.05, answer the following:
    • Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change greater than zero. How many meet these two criteria?
    • Keeping the "Pval" filter at p < 0.05, filter the "AvgLogFC" column to show all genes with an average log fold change less than zero. How many meet these two criteria?
    • Keeping the "Pval" filter at p < 0.05, How many have an average log fold change of > 0.25 and p < 0.05?
    • How many have an average log fold change of < -0.25 and p < 0.05? (These are more realistic values for the fold change cut-offs because it represents about a 20% fold change which is about the level of detection of this technology.)
  • In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
  • What criteria did Schade et al. (2004) use to determine a significant gene expression change? How does it compare to our method?
  • The expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. (Recall that it is specifically mentioned in the Schade et al. (2004) paper.) Find NSR1 in your dataset. Is its expression significantly changed at any timepoint? Record the average fold change and p value for NSR1 for each timepoint in your dataset.
  • Which gene has the smallest p value in your dataset (at any timepoint)? You can find this by sorting your data based on p value (but be careful that you don't cause a mismatch in the rows of your data!) Look up the function of this gene at the Saccharomyces Genome Database and record it in your notebook. Why do you think the cell is changing this gene's expression upon cold shock?
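
The filter counts, the Bonferroni correction, and the fold-change criteria above can also be double-checked outside of Excel. The Python sketch below assumes the "final" worksheet has been exported to a CSV and uses hypothetical column names that follow the protocol's naming pattern ("wt_Pval_t15", "wt_AvgLogFC_t15"); substitute your own file, columns, timepoints, and number of tests.

import pandas as pd

final = pd.read_csv("wt_final.csv")   # assumed export of the "final" worksheet
n_tests = 6189                        # number of T tests performed (one per gene)

# Number of genes passing each p value cut-off for one timepoint.
for cutoff in (0.05, 0.01, 0.001, 0.0001):
    count = (final["wt_Pval_t15"] < cutoff).sum()
    print(f"wt_Pval_t15 < {cutoff}: {count} genes")

# Bonferroni correction: multiply each p value by the number of tests
# (capping at 1) and re-check against 0.05.
bonferroni = (final["wt_Pval_t15"] * n_tests).clip(upper=1)
print("Significant after Bonferroni:", (bonferroni < 0.05).sum())

# Combined p value and fold-change filters from the sanity check questions.
significant = final["wt_Pval_t15"] < 0.05
print("AvgLogFC > 0.25 and p < 0.05:", (significant & (final["wt_AvgLogFC_t15"] > 0.25)).sum())
print("AvgLogFC < -0.25 and p < 0.05:", (significant & (final["wt_AvgLogFC_t15"] < -0.25)).sum())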

Student Response

Number of Replicates

t15: 4, t30: 5, t60: 4, t90: 5, t120: 5

Checklist of Steps

  1. Create new worksheet called "scaled_centered": DONE
    1. Copy data from "compiled_raw_data":DONE
    2. Average log ratios for each chip: DONE
    3. Standard Deviations for each chip: DONE
    4. (data in each cell of a column - average of that column) / standard deviation of that column: DONE
  2. Create new worksheet called "statistics": DONE
    1. Copy data from "scaled_centered" as numbers, not formulas: DONE
    2. Check for errors: DONE
      1. Replaced "no character" with "space" 6936 times
      2. Replaced "#VALUE!" with "space 729 times
    3. Delete unwanted columns and rows: DONE
    4. Create columns using "wt_AvgLogFC_t15" format: DONE
      1. Compute the average log fold change for each of the replicates: DONE
      2. Copy formulas so averages for each replicate are calculated for each gene: DONE
    5. Create columns using "wt_Tstat_t15" format: DONE
      1. Compute T statistic for each time: DONE
      2. Copy formula for each gene: DONE
    6. Create columns using ""wt_Pval_t15" format: DONE
      1. Compute P value for each time: DONE
      2. Copy formula for each gene: DONE
  3. Create new worksheet called "final": DONE
    1. Copy data from "statistics" as numbers, not formulas: DONE
    2. Change fold changes to 2 decimals only: DONE
    3. Change T/P columns to 4 decimals only: DONE
  4. Create folder in LionShare titled "Week 11": DONE
    1. Upload file with data and calculations: DONE
    2. Give administrator permissions to folder to both professors: DONE
    3. Email professors that they now have access to the file: DONE
  5. Sanity checks (see questions in next section for results)
    1. Tests for varying p-values: DONE
    2. Bonferroni correction: DONE
    3. Filters for magnitude of gene expression change: DONE

Answers to Questions

  1. We want to use dollar signs so that every cell in a column is normalized by subtracting the average of that chip (the value in row 2 of the column) and dividing by the standard deviation of that chip (the value in row 3). Without the dollar signs, when the formula is pasted down the column the references shift with it, so the formula would no longer point at the average and standard deviation rows but at whatever raw data happens to sit in those relative positions.
  2. Sanity Check for Time 15
    1. p value < 0.05 = 817 for wt_Pval_t15
    2. p < 0.01 = 208 for wt_Pval_t15
    3. p < 0.001 = 25 for wt_Pval_t15
    4. p < 0.0001 = 2 for wt_Pval_t15
  3. Sanity Check for Time 30
    1. p value < 0.05 = 1231 for wt_Pval_t30
    2. p < 0.01 = 425 for wt_Pval_t30
    3. p < 0.001 = 71 for wt_Pval_t30
    4. p < 0.0001 = 8 for wt_Pval_t30
  4. Sanity Check for Time 60
    1. p value < 0.05 = 1071 for wt_Pval_t60
    2. p < 0.01 = 292 for wt_Pval_t60
    3. p < 0.001 = 41 for wt_Pval_t60
    4. p < 0.0001 = 8 for wt_Pval_t60
  5. Sanity Check for Time 90
    1. p value < 0.05 = 684 for wt_Pval_t90
    2. p < 0.01 = 164 for wt_Pval_t90
    3. p < 0.001 = 14 for wt_Pval_t90
    4. p < 0.0001 = 0 for wt_Pval_t90
  6. Sanity Check for Time 120
    1. p value < 0.05 = 295 for wt_Pval_t120
    2. p < 0.01 = 37 for wt_Pval_t120
    3. p < 0.001 = 5 for wt_Pval_t120
    4. p < 0.0001 = 2 for wt_Pval_t120
  7. P < .05 after Bonferroni corrections
    1. Time 15: 0
    2. Time 30: 1
    3. Time 60: 3
    4. Time 90: 0
    5. Time 120: 0
  8. Special evaluations of Time 60:
    1. Avg FC greater than zero: 2
    2. Avg FC less than zero: 1
    3. Avg FC > 0.25 and p < 0.05: 2
    4. Avg FC < -0.25 and p < 0.05: 1
  9. For Schade et al. (2004), a gene had to show at least a 50% fold change, not the roughly 20% cut-off we used, and the p-value had to be less than 0.03 rather than 0.05. They also used a different method: their calculations were performed by software (GenePix), while we did ours in Excel. To get our p-values, we used four or more replicates, scaled and centered the data, averaged the replicates, and then performed t-tests; only after all of this could we filter on the p-values to find significant data points. Schade et al. used only two replicates per timepoint, not four or five, and they sampled different time intervals.


