Natalie Williams: Electronic Notebook

*[[Dahlquist:GRNmap]]


=== Electronic Notebook ===
===[[Natalie Williams Fall Electronic Notebook 2014 |Fall 2014]]===
This contains all the procedures and tasks that I completed and the trials that I ran in Fall 2014.
====September 2014====
=====September 18, 2014=====
Data Set Up<br>
OpenWetWare familiarization: I became familiar with OpenWetWare code and programming. <br>
MATLAB procedure: the written MATLAB procedure contains the instructions for running the model and obtaining the output with the optimized network weights of the system. <br>
<br>
Network<br>
Ten random networks were made from the original network.
*The original network Excel file was used, and each cell on the network sheet had the following formula in it:
** =IF(RAND()<0.1134,1,0)
*This procedure was done ten times to get these ten random networks
*Each network was saved as rand# (1 - 10)
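Below is a minimal R sketch of the same random-network generation, for illustration only; the procedure above was actually done in Excel with =IF(RAND()&lt;0.1134,1,0). It assumes a 21-gene network (the B2:V22 range used elsewhere in this notebook) and hypothetical output file names.
 # Sketch only: generate ten random adjacency matrices where each cell is 1
 # with probability 0.1134, mirroring the Excel formula =IF(RAND()<0.1134,1,0).
 set.seed(1)                     # reproducibility; not part of the original procedure
 n_genes <- 21                   # assumes a 21-gene network (cells B2:V22)
 for (i in 1:10) {
   rand_net <- matrix(as.integer(runif(n_genes^2) < 0.1134),
                      nrow = n_genes, ncol = n_genes)
   write.csv(rand_net, paste0("rand", i, ".csv"), row.names = FALSE)  # hypothetical file names
 }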


=====September 25, 2014=====
The random network Excel files were run through MATLAB to obtain the optimized weights of each network.
*The output file is saved with the original file name plus _output.xls
Opening the file, the weights of these networks were found on the optimized_network_weights sheet.<br>
<br>
Visualization of the Networks<br>
These files had to be re-saved as .xlsx in order to upload them to GRNsight. GRNsight visualizes the networks, and because the weights indicate how strongly one gene controls another, the resulting edges are drawn in different colors. After each individual random network was visualized, it was compared to the original network. For easier comparison, the proteins were kept in the same order so that the different connectivities could be seen.


====October 2014====
=====October 2, 2014=====
Information was gathered by comparing the original network to the 10 random ones. This information includes:
Nodes: the positive and the negative <br>
Frequencies: the In and Out degrees. These measure how often one gene controls other genes and how often it is controlled by others. They were found with the following formulas:
*=COUNTIF(B2:B22,"<>"&0)
*=COUNTIF(B3:V3,"<>"&0)
*From this, the frequencies were found by looking at how often 0 appeared or 1 appeared, etc.
**For example, =COUNTIF(B23:V23,"=0") for In Degree to see how often 0 occurred
Next, bar graphs were used to compare the weighted networks between a random network and the original network.<br>
After that, the minimum and maximum values from each random network were found.
*The minimum was found using: =MIN(B2:V22)
*The maximum was found using: =MAX(B2:V22)
The sum of the entire optimized_network_weights worksheet was also found.
*=SUM(B2:V22)
The average of the worksheet was also taken for the entire matrix.
*=AVERAGE(B2:V22)
<br>
We used this information to see whether any key factors distinguished the original network from the others. We hoped that it would shed some light on the key differences between the random networks and the original one.
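For comparison, here is a rough R sketch of the same summary computations; the analysis above was done in Excel, and the matrix below is a placeholder. It assumes the orientation used later in this notebook (rows = genes affected, columns = genes controlling).
 # Sketch only: degree counts and summary statistics for one 21x21 weight matrix `net`
 set.seed(2)
 net <- matrix(rnorm(21 * 21) * rbinom(21 * 21, 1, 0.1134), nrow = 21)  # placeholder weights
 out_degree <- colSums(net != 0)   # per column, like =COUNTIF(B2:B22,"<>"&0)
 in_degree  <- rowSums(net != 0)   # per row, like =COUNTIF(B3:V3,"<>"&0)
 table(in_degree)                  # frequency of each in degree, like the =COUNTIF(...,"=0") tallies
 c(min = min(net), max = max(net), sum = sum(net), mean = mean(net))    # =MIN, =MAX, =SUM, =AVERAGE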


=====October 9, 2014=====
I was out of town, so there was nothing I needed to do specifically for this week.


=====October 16, 2014=====
I began the process of the forward simulations of the networks. I had to isolate the deletion strains and see if there was any resemblance between the wild type strain and the four deletion strains.


=====October 23, 2014=====
All the bugs in the system were noted and written down to be fixed.<br>
The forward simulations were rerun. The production and degradation rates from the output were inserted into each of the individual strains. For the network weights of the individual strains, the output from the general workbook sheet, optimized_network_weights, was used.<br>
The deletion strains needed hard 0's across their row on their worksheet. On the optimization_parameters sheet, the following things needed to be altered:
*iestimate = 0.00E+00
*fixed_b = 0
*strains:
**wt/3/0
**dcin5/4/3, where the first number is the sheet, and the second number is the row of the gene within the sheet
**This controlled which strains would be shown after the workbook was run through MATLAB
For the network_b sheet, the optimized_network_b from the general workbook output was used for each individual strain.


=====October 30, 2014=====
The real WT individual strain was compared to the forward simulation WT and deletion strains.<br>
I made a list of transcription factors of the individual strains that did not compare well with the real WT individual. Those transcription factors were going to be looked at more closely and might have been taken away. The parts that I compared were the data points and the fit of the line.


===[[Natalie Williams Fall Electronic Notebook 2015 |Fall 2015]]===

===[[Natalie Williams Fall Electronic Notebook 2016 | Fall 2016]]===

==Spring 2017==
=== January 2017 ===
====Week of January 12, 2017====
Monday & Thursday: Worked on collecting sources for my thesis project. The annotated bibliography is due 20/01. I will be in Boston at that time, but I will still submit my annotated bibliography on time. We had our first lab meeting of the semester on Thursday.


====November 2014====
=====November 6, 2014=====
16 transcription factors were run through YEASTRACT. However, the results have to be reformatted so that GRNsight can visualize the network that results.
  The network I used was created with the following transcription factors:
  ARG80
  CIN5
  GLN3
  HAP4
  HMO1
  NRG2
  RSF2
  RTG3
  STB4
  SWI4
  TBF1
  TOS8
  TYE7
  YHP1
  YOX1
  ZAP1
#Navigate to the Generate Regulation Matrix page [http://www.yeastract.com/formgenerateregulationmatrix.php] on the YEASTRACT website.
#Select the appropriate check boxes for the filters.
#Paste the list of transcription factors into the appropriate field.
#Paste a list of targets into the Target ORF/Genes field, or check the box to consider all ORF/Genes.
#Click the Generate button.
#In the results window that appears, click the link to download the Regulation matrix results file as a Semicolon Separated Values (CSV) file.
#Once you have downloaded the file, launch Microsoft Excel.
#Select the menu item, File > Open and select the file that you downloaded.
#Select Column A.
#Select the menu item, Data > Text to Columns...
#In the first window of the wizard that appears, select the radio button for "Delimited" and click Next.
#In the second window of the wizard that appears, check the box for "Other" under "Delimiters" and type a semicolon in the field to the right and click Finish.
#Select the menu item, File > Save As.  Save the file as an Excel Workbook (.xlsx).
#The orientation of the matrix has to be flipped. A new worksheet must be created by clicking on the new worksheet icon at the bottom of the screen. Name this new worksheet "network".
#Select the adjacency matrix from the first worksheet and copy it to the clipboard. Go to the "network" worksheet and click on cell A1. Select the menu item Edit > Paste Special. In the window that appears, check the box "Transpose" and click OK.
#The labels for the genes in the columns and rows need to match. The "p" of the gene names in the columns must be deleted.
#Paste the following text into cell A1 "rows genes affected/cols genes controlling".
#Save your work, which is now ready for loading into GRNsight. The original sheet can be deleted if you want.
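As an alternative to the Excel steps above, the same reshaping could be sketched in R. This is only a sketch: it assumes the YEASTRACT download was saved as "RegulationMatrix.csv" and that the regulator names (with the trailing "p") sit in its first column.
 # Sketch only: read the semicolon-separated YEASTRACT matrix, transpose it, and strip the "p" suffix
 reg <- read.csv2("RegulationMatrix.csv", check.names = FALSE, stringsAsFactors = FALSE)
 rownames(reg) <- reg[[1]]                      # assumes regulators are listed in the first column
 net <- t(as.matrix(reg[, -1]))                 # rows = genes affected, cols = genes controlling
 colnames(net) <- sub("p$", "", colnames(net))  # delete the trailing "p" from the regulator names
 write.csv(net, "network.csv")                  # the "rows genes affected/cols genes controlling"
                                                # label in cell A1 is added when assembling the workbook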


Results<br>
GRNsight v1.8 had to be used to visualize the networks. Only four of the input selection choices gave network connections among the listed transcription factors.<br>
Documented DNA Binding Evidence
*15 genes
*58 edges
Documented DNA Expression
*15 genes
*31 edges
DNA Expression plus Binding
*15 genes
*58 edges
DNA Expression and Binding
*15 genes
*4 edges
Potential with Motifs
*0 genes
*0 edges
Potential without Motifs
*0 genes
*0 edges
Documented plus Potential
*0 genes
*0 edges
Documented and Potential
*0 genes
*0 edges


=====November 13, 2014=====
I reran MATLAB to see if I got the same results as Dr. Fitzpatrick. I received the same results as Dr. Fitzpatrick. When each deletion strain was compared to the WT strain, the targeted genes that were supposed to be affected were indeed affected.


====Week of January 19, 2017====
Monday: Worked on writing the abstract for the SCSBC at UC Irvine on Saturday, 28/01. The abstract can be found in the Dahlquist Lab repository on GitHub.
Thursday: Not present. Interview at Harvard Medical School.


====Week of January 26, 2017====
Monday: Finished most of the poster that will be presented this upcoming Saturday at the conference. I wrote much of the content and analysis, and Brandon worked on the formatting. Much of the analysis was on the optimized production rates and threshold b values, and on a motif: Hmo1 --> Msn2 --> Cin5 --> Yhp1.
Thursday: Went over the poster during lab meeting. With Dr. Dahlquist's corrections, I updated the poster and uploaded it to the GitHub repository to be edited and reviewed by Dr. Dahlquist before printing.


===[[Natalie Williams Spring Electronic Notebook 2015 |Spring 2015]]===
This contains all the procedures and tasks that I completed and the trials that I ran in Spring 2015. Most of the activities/notes for this semester focused on creating a poster for the various conferences that we attended in the Spring.
====January 2015====
I met the other people that are working on this project - Juan, Trixie, and Grace. For this month, we discussed where the project was heading and what parts of the code needed to be changed. During these meetings, Profs. Dahlquist and Fitzpatrick gave overviews of the research project and all the computational functions that the model requires.


====February 2015====
=====February 6, 2015=====
I reran the protocol for microarray data that I received from Dr. Dahlquist. The protocol can be found [[Dahlquist:Microarray Data Processing in R|here]].
*First, I created folders on my desktop to host the Ontario and GCAT data
**Each folder contained the following:
**#The script for either Ontario or GCAT
**#The target files for those scripts --> Ontario_Targets and GCAT_Targets
**#The .gpr files from the microarrays were also located in the individual folders
*I downloaded and unzipped the files that were listed under the protocol
**Note that the Ontario data was saved in the Ontario folder and the GCAT data in the GCAT folder
*The version of R used was the 32-bit version
*The working directory had to be changed to the folder to which the files to be processed were extracted
*The Ontario script was run first, followed by GCAT
*There will be two different outputs from running GCAT. We want the Final_Normalized_Data.


My file was then sent to Grace J., who then began to compare the results that we got.


=== February 2017 ===
====January 31, 2017 & February 2, 2017====
Monday: Reran the networks derived from dgln3, dhap4, and dzap1 on boulardii 2 for consistency, so that there aren't any discrepancies from running these networks on a different computer.
Thursday:
*Compiled the optimized parameters into one file, as well as the MSE values for individual genes in each of the networks. Each of the networks was visualized again in GRNsight just to ensure that the visualizations match the output optimized weights for each network.
*Received feedback from Dr. Dahlquist on my annotated bibliography as well as additional sources to use for my thesis.


=====February 12/13, 2015=====
I spent this day searching literature for data sets of transcription rates with Grace J. We wrote an abstract to submit to the Undergraduate Research Symposium to present what was done last semester. The abstract was submitted the following Friday.


=====February 19/20, 2015=====
Spring Break


====Week of February 6, 2017====
Monday: Edited the 10 random output sheets that K. Grace Johnson ran last year to make them into input sheets to re-run on boulardii 2.
*I deleted all the output sheets: the sigmas, optimized_network_weights, optimized_expression, and the optimized production and threshold_b
*I copied the production and degradation rates from Brandon's dhap4 network into all the corresponding sheets in the random network input sheets
*The adjacency matrices from the random network files were then copied and pasted into the adjacency matrix of Brandon's file so that all parameters and information would be the same. The only difference was the network and the network weight sheets.
Worked on creating the working abstract for my talk during LMU's Undergraduate Research Symposium.
Thursday: I was not here due to an interview at UCSF's medical school.


=====February 26/27, 2015=====
We worked on the poster for our presentation at the 7th Annual Undergraduate Research Symposium. We compiled the data that we were going to use and present on our poster. For the random networks, we gathered the LSE's to compare to the literature-derived network. We chose Random Networks 1 and 4 because they had the lowest and the highest LSE outputs.


====Week of February 13, 2017====
Monday: I generated some random networks with Brandon's R script to be run on the model. A folder was created to hold all the input and output sheets for the random networks that are run with GRNmap [https://github.com/kdahlquist/DahlquistLab/tree/master/data/bouldardii2_GRNmap_outputs/Random_network_intput_output]. For further analysis, I will also look at the distribution of the in and out degrees of all the random networks compared to the network derived from the dhap4 data.
*Distribution of weights (positive vs. negative) and the overall network
*Are any motifs/connections conserved?
*Any self- or auto-regulators?
*Visualization will also be done via GRNsight
Throughout the next couple of weeks, I will be running the random networks generated on GRNmap.
====Week of February 20, 2017====
Monday: Began to look at the MSE values of the db networks 1 & 5 (derived from wt and dhap4 data) compared to the p values from the ANOVA. For the analysis, I looked at the expression data plots categorized by number of significant p values (B&H p values) at the suggestion of Brandon. Divisions were made as follows:
*Two or more significant p values across all strains
*One significant p value across all strains
*No significant p value across all strains
I then described whether the MSE value that was matched with the p value fit well with the modeled dynamics from the expression plots. I created an Excel file with these comparisons and comments about the fit of the model, which can be found in the Dahlquist Lab repository [https://github.com/kdahlquist/DahlquistLab/blob/master/data/15-gene_networks_analysis/pvalue_MSE_comparison.xlsx pvalue_MSE_comparison].


Thursday: Continued to do analysis of p values and MSE outputs by looking at the expression plots. However, during the meeting, I was told that this was futile and would not generate results because I should be looking at the minMSE for each gene's output. By comparing the MSE:minMSE ratio for each gene, I could see if genes with significant p values had a better or worse fit.


====March 2015====
=====March 5/6, 2015=====
We added more information to our poster.


====Week of February 27, 2017====
Monday: Continued to run the random networks on boulardii 2 (left off at random network 23). Instead of using the expression plots for analysis, I began to compare the MSE values of db network 5 (derived from dhap4) and the random networks with the same number of nodes (15) and edges (28). The last random network included in the file is rand19. On every sheet, I have the MSE value output from the run in GRNmap next to the p values from the ANOVA for the dhap4 strain. Below those comparisons, we see the differences in MSE values of the random network from the db-derived network 5.
*If the number is negative, it suggests that GRNmap reduced the mean square error of that individual gene in the random network;
*however, if the number is positive, then the db-derived network's individual gene saw better modeling/mean square error.
This file can be found in the pvalue_MSE_comparison Excel file [https://github.com/kdahlquist/DahlquistLab/blob/master/data/15-gene_networks_analysis/pvalue_MSE_comparison.xlsx].


Thursday: Continued to run the random networks on boulardii 2 (random network 27 currently running). There are only three remaining random networks (28-30) that need to be run on GRNmap. I carried on with my compilation of the random network MSE value comparisons to db network 5. The last LSE:minLSE comparison made was between db 5 and random network 26. Again, the file can be found [https://github.com/kdahlquist/DahlquistLab/blob/master/data/15-gene_networks_analysis/pvalue_MSE_comparison.xlsx here] in the Dahlquist Lab repository under the file name pvalue_MSE_comparison.xlsx.
Further, on the sheet labeled dhap4, a bar graph comparing the LSE:minLSE ratios for all the GRNs run in MATLAB thus far can be found. I've begun to look at the regulatory relationships identified in the three lowest and three highest ratio random networks.
*The smallest LSE:minLSE ratios
**random networks 15, 16, and 24
*The largest LSE:minLSE ratios
**random networks 5, 7, and 12


=====March 12/13, 2015=====
We continued to edit our poster for the 7th Annual Research Symposium. We edited the results section and changed the layout of the poster. We also edited the abstract that we submitted for the symposium to enter ourselves in the WCBSURC (West Coast Biological Sciences Undergraduate Research Conference). We found out the next day that we were accepted to present on April 25, 2015.
For Friday, we continued to look for production and degradation rates in the literature.


=====March 19/20, 2015=====
For this week, we finalized our poster for presenting at LMU's symposium. The Results section as well as some of the background information were edited. Images, graphs, and the layout of the poster were formatted to be clearer and easier to follow. The graph titles as well as the scales were altered so that each graph had the same axes. Furthermore, some of the section headings were edited to summarize the main finding for each result. We printed out our posters to put them up on Friday morning.
On Friday, we continued to look for the various degradation and production rates of mRNA.


====Week of March 4, 2017====
Spring Break this week. I was in Mammoth for the week.
Sunday: Kristen noticed that random networks 4 & 5 were identical, so I created a new random network (rand 31) to be run in GRNmap. After it was run in the model, the optimization diagnostics showed that random network 31 had a larger LSE:minLSE ratio than random network 5. Therefore, analysis will now be conducted on the following networks with the highest LSE:minLSE ratios:
*Random networks 7, 12, and 31
**Rand7: 1.5202
**Rand12: 1.5080
**Rand31: 1.5001
Thursday: I computed the minMSE values for the DB5 network so that I could use the information for my Symposium presentation. The following protocol was done.
#Using the log2 expression data for the specific strain in an input sheet, average the values for the same timepoints
#* i.e. for wt strain, there are four 15 time point measurements, five 30 time point measurements, and four 60 time point measurements. Therefore, the first gene's average log fold expression change is averaged across four timepoints for 15, five for 30, and four for 60.
#** ABF1 averages: 15 = -1.1878; 30 = -1.1819; 60 = -1.9142
#Next, the difference between each individual log2 expression at a given time point and the average for that given time point was found.
#* i.e. for wt's ABF1 gene, we see the following formula: = t15.1 - avg15, where t15.1 is the individual log2 expression at the first 15 time point and avg15 is the average of the four observed expression changes at the 15 time point replicates
#** ABF1's first 15 time point: = B2 - P2 = t15.1 - avg15 = -2.1071 - (-1.1878) = -0.9193
#Then, the differences are squared so that no negative numbers result and to account for differences seen above and below this difference
#* i.e. for wt's ABF1 gene, we see the following formula in the cell: = B20^2
#The squared differences were then summed up for all the time points and divided by the total number of time points.
#* The formula used was as follows: =SUM(B38:N38)/13 (based off wt's ABF1 gene)
#** i.e. for wt, there are 13 time points (four 15 time points + five 30 time points + four 60 time points = 13)
#** Note that the sum for all these time points differs for each individual strain, such that db4 (dGLN3) has 12 overall time points
#To ensure that these calculations were correct, I first used this procedure to calculate the MSE observed via the model. After I received the same output values, I proceeded to calculate the actual minMSEs.
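A small R sketch of the same minMSE calculation for a single gene is shown below; the numbers are placeholders, not the actual ABF1 data, and the real computation was done in Excel as described above.
 # Sketch only: minMSE for one gene from replicate log2 expression values (hypothetical numbers)
 expr <- list(t15 = c(-2.11, -0.92, -1.10, -0.63),            # four 15-min replicates
              t30 = c(-1.30, -1.05, -1.21, -1.12, -1.23),     # five 30-min replicates
              t60 = c(-2.00, -1.82, -1.93, -1.91))            # four 60-min replicates
 sq_diff <- unlist(lapply(expr, function(x) (x - mean(x))^2)) # squared difference from each timepoint mean
 min_mse <- sum(sq_diff) / length(sq_diff)                    # summed and divided by the 13 timepoints
 min_mse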


====Week of March 12, 2017====
Monday: I worked on completing the analysis of my results. I used Brandon's regulatory relationship workbook to compare the regulatory relationships for DB5 and the three best (15, 16, 24) and worst (7, 12, 31) random networks.
*Process for isolating regulatory relationships
*#Using GRNsight, I visualized the weighted networks of interest and exported the network as a .siv file to isolate the regulatory relationships between regulator and target gene
*#Next, I opened the SIV file in Excel. In a new Excel workbook, I wrote down the relationship between the transcription factor and its target as Regulator --> Target Gene in one cell with the weight of the transcription factor's influence in the column right of it.
*#After I saved all these relationships for the seven networks (DB5, Rand7, Rand12, Rand15, Rand16, Rand24, and Rand31), I compiled all of their regulatory relationships together in a list.
*#Next, I pasted the values that corresponded to a specific node/relationship for each network into the correct cell.
*#*Reading R->L (DB5, Rand7, ..., Rand31)
*#Because Brandon's Excel file already highlighted cells based on the weights within them, stronger activators were colored red; stronger repressors were colored blue, and grey was used for the weak influencers.
Thursday: Presented a first draft of my presentation for LMU's URS. That was the focus of lab this day.
*Finished up the first draft of my presentation
*For further analysis, I included:
**The sum of weights to identify if the network was 'overall repressive (-) or activating (+)'
**The shared nodes between DB5 and the 3 best and 3 worst networks; I found that DB5 shared more nodes with the better networks


=====March 26/27, 2015=====
I was not in the lab this week. From Thursday to Sunday, I attended NSBE's (National Society for Black Engineers) National Convention in Anaheim.


====Week of March 19, 2017====
Monday: Worked on completing my PowerPoint presentation for LMU's Undergraduate Research Symposium. I sat down with Dr. Dahlquist to discuss my presentation and re-work some of the analysis that I did.


Thursday: I practiced my presentation in Seaver 120 before I rehearsed it in front of my lab. Later, I presented my PowerPoint for the symposium to my fellow researchers. I received feedback (overall positive, with minor changes to make). I listened to Kristen's presentation, too, before the end of lab.


====Week of March 26, 2017====
Monday: Worked on a lot of my thesis, writing my discussion.
Thursday: Continued to work on and write my thesis before the holiday (Cesar Chavez). During the lab meeting, we discussed future directions and what we should work on for the remainder of our time in the lab.


====April 2015====
In April, not much was done on mine and Grace J.'s part. For this month, we both completed the homework assignments for my upper-division biomathematics course [[BIOL398-04/S15]]. For the following weeks in April, we worked on the assignments from Week 11 to Week 14. We no longer met on Fridays this month due to a lack of assignments.

=====April 9, 2015=====
Last week was Easter break. This week we met up to discuss our progress with the mRNA production and degradation rates. Grace J. and I combined our findings from the literature into one document and sent it to our mentors and professors - Drs. Dahlquist and Fitzpatrick.
 
=====April 16, 2015=====
We discussed the results that Grace J. got from completing the Assignment due that week. My feedback is seen on my electronic notebook from that course.
 
=====April 23, 2015=====
We talked about the conference, WCBSURC, that we would be attending on Saturday. Again, we discussed Grace's results from completing that week's assignments.
 
=====April 24, 2015=====
The conference went well. A few groups of students came by and visited our poster, wanting to know about our research. A few professors stopped by to ask questions about our methods, results, as well as future ideas and goals for the project.
 
=====April 30, 2015=====
We discussed the plan for summer research.
*Journal Clubs
**12-2; Lunch time meeting
*Weekly meetings where we discuss what occurred during the week
*First week training process of going through all the data
*Focusing on sigmoidal model to get results for publications


==Documents==
===Summer 2015===
*To view the most updated powerpoint click [[Media:Williams wtANOVA Ttest 2.pptx| here]]
*To see the input sheet that was run for the fixed b trial, please click this [[Media:Williams_Input_Scer_Spar_point01_PROF45.xlsx |link]]
*To view the output file from this fixed b trial, click [[Media:Williams Input Scer Spar point01 PROF45 fixedb output.xlsx| here]]
*To see the input sheet that was run from the estimated b, please click [[Media:Williams Input Scer Spar point01 PROF45 estimatedb.xlsx|this]]
*To view the output file from the estimated b, click [[Media:Williams Input Scer Spar point01 PROF45 estimatedb output.xlsx|here]]
*The powerpoint that reviews and analyzes the outputs can be viewed [[Media:Williams Running GRNmap Results.pptx|here]]


===[[Natalie Williams Summer Electronic Notebook 2015 |Summer 2015]]===
This link has all the information for what occurred over the summer. A lot of it was testing the code by changing the initial weights and the threshold b values of the input sheets.
====May 2015====
=====May 18, 2015=====
Purpose: To better understand how the data used for generating the model was obtained. We are using R to normalize the microarray data from the Dahlquist Lab.
<br>
Procedure
*Downloaded and installed R from ([http://cran.r-project.org/bin/windows/base/old/3.1.0/ link to download site]) and the limma package ([[Media:Limma_3.20.1.zip | direct link to download zipped file]])
**The limma folder within the limma zip file must be copied into the R directory
**Limma contains normalization procedures
**Extracted the limma package into R's library folder
*Downloaded the necessary files from links sent by Dr. Dahlquist
<b>Within Array Normalization for the Ontario Chips</b>
<br>
#Launch R x64 3.1.0 (make sure you are using the 64-bit version).
#Change the directory to the folder containing the targets file and the GPR files for the Ontario chips by selecting the menu item File > Change dir... and clicking on the appropriate directory.  You will need to click on the + sign to drill down to the right directory.  Once you have selected it, click OK.
#In R, select the menu item File > Source R code..., and select the Ontario_Chip_Within-Array_Normalization_modified_20150514.R script.
#*You will be prompted by an Open dialog for the Ontario targets file.  Select the file Ontario_Targets_wt-dCIN5-dGLN3-dHAP4-dHMO1-dSWI4-dZAP1-Spar_20150514.csv and click Open.
#*Wait while R processes your files.
<b>Within Array Normalization for the GCAT Chips and Between Array Normalization for All Chips</b>
#These instructions assume that you have just completed the Within Array Normalization for the Ontario Chips in the section above.
#In R, select the menu item File > Source R code..., and select the Within-Array_Normalization_GCAT_and_Merged_Ontario-GCAT_Between-Chip_Normalization_modified_20150514.R script.
#*You will be prompted by an Open dialog for the GCAT targets file.  Select the file GCAT_Targets.csv and click Open.
#*Wait while R processes your files.
#When the processing has finished, you will find two files called GCAT_and_Ontario_Within_Array_Normalization.csv and GCAT_and_Ontario_Final_Normalized_Data.csv in the same folder.
#*Save these files to LionShare and/or to a flash drive.
<b>Visualizations of the Normalized Data</b>
<br>
Create MA Plots and Box Plots for the GCAT Chips
Input the following code, line by line, into the main R window.  Press the enter key after each block of code.
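 # Note: objects referenced in the code below (e.g., RGG, dw, chips, gcatID, MAG, MG2, MAD) are
 # assumed to have been created in the R workspace by the normalization scripts sourced above.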
 
GCAT.GeneList<-RGG$genes$ID
 
lg<-log2((RGG$R-RGG$Rb)/(RGG$G-RGG$Gb))
 
* If you get a message saying "NaNs produced" this is OK, proceed to the next step.
 
r0<-length(lg[1,])
rx<-tapply(lg[,1],as.factor(GCAT.GeneList),mean)
r1<-length(rx)
MM<-matrix(nrow=r1,ncol=r0)
 
for(i in 1:r0) {MM[,i]<-tapply(lg[,i],as.factor(GCAT.GeneList),mean)}
 
MC<-matrix(nrow=r1,ncol=r0)
 
for(i in 1:r0) {MC[,i]<-dw[i]*MM[,i]}
 
MCD<-as.data.frame(MC)
colnames(MCD)<-chips
rownames(MCD)<-gcatID
 
la<-(1/2*log2((RGG$R-RGG$Rb)*(RGG$G-RGG$Gb)))
 
* If you get these Warning messages, it's OK:
:1: In (RGG$R - RGG$Rb) * (RGG$G - RGG$Gb) :
:NAs produced by integer overflow
:2: NaNs produced
 
r2<-length(la[1,])
ri<-tapply(la[,1],as.factor(GCAT.GeneList),mean)
r3<-length(ri)
AG<-matrix(nrow=r3,ncol=r2)
 
for(i in 1:r2) {AG[,i]<-tapply(la[,i],as.factor(GCAT.GeneList),mean)}
 
par(mfrow=c(3,3))
 
for(i in 1:r2) {plot(AG[,i],MC[,i],main=chips[i],xlab='A',ylab='M',ylim=c(-5,5),xlim=c(0,15))}
browser()
 
* Maximize the window in which the graphs have appeared. Save the graphs as a JPEG (File>Save As>JPEG>100% quality...). Once the graphs have been saved, close the window. To continue with the rest of the code, press Enter.
** To make sure that you save the clearest image, do not scroll in the window because a grey bar will appear if you do so.
* The next set of code is for the generation of the GCAT boxplots for the wild-type data.
 
x0<-tapply(MAG$A[,1],as.factor(MAG$genes$ID),mean)
y0<-length(MAG$A[1,])
x1<-length(x0)
AAG<-matrix(nrow=x1,ncol=y0)
 
for(i in 1:y0) {AAG[,i]<-tapply(MAG$A[,i],as.factor(MAG$genes$ID),mean)}
 
par(mfrow=c(3,3))
 
for(i in 1:y0) {plot(AAG[,i],MG2[,i],main=chips[i],xlab='A',ylab='M',ylim=c(-5,5),xlim=c(0,15))}
browser()
 
* Maximize the window in which the graphs have appeared. Save the graphs as a JPEG (File>Save As>JPEG>100% quality...). Once the graphs have been saved, close the window. To continue with the rest of the code, press Enter.
 
par(mfrow=c(1,3))
 
boxplot(MCD,main="Before Normalization",ylab='Log Fold Change',ylim=c(-5,5),xaxt='n')
 
axis(1,at=xy.coords(chips)$x,tick=TRUE,labels=FALSE)
 
text(xy.coords(chips)$x-1,par('usr')[3]-0.6,labels=chips,srt=45,cex=0.9,xpd=TRUE)
 
boxplot(MG2,main='After Within Array Normalization',ylab='Log Fold Change',ylim=c(-5,5),xaxt='n')
 
axis(1,at=xy.coords(chips)$x,labels=FALSE)
 
text(xy.coords(chips)$x-1,par('usr')[3]-0.6,labels=chips,srt=45,cex=0.9,xpd=TRUE)
 
boxplot(MAD[,Gtop$MasterList],main='After Between Array Normalization',ylab='Log Fold Change',ylim=c(-5,5),xaxt='n')
 
axis(1, at=xy.coords(chips)$x,labels=FALSE)
 
text(xy.coords(chips)$x-1,par('usr')[3]-0.6,labels=chips,srt=45,cex=0.9,xpd=TRUE)
 
* Maximize the window in which the plots have appeared. You may not want to actually maximize them because you might lose the labels on the x axis, but make them as large as you can. Save the plots as a JPEG (File>Save As>JPEG>100% quality...). Once the graphs have been saved, close the window.
 
Create MA Plots and Box Plots for the Ontario Chips
*Input the following code, line by line, into the main R window.  Press the enter key after each block of code.
 
Ontario.GeneList<-RGO$genes$Name
 
lr<-log2((RGO$R-RGO$Rb)/(RGO$G-RGO$Gb))
 
* Warning message: "NaNs produced" is OK.
 
z0<-length(lr[1,])
v0<-tapply(lr[,1],as.factor(Ontario.GeneList),mean)
z1<-length(v0)
MT<-matrix(nrow=z1,ncol=z0)
 
for(i in 1:z0) {MT[,i]<-tapply(lr[,i],as.factor(Ontario.GeneList),mean)}
 
MI<-matrix(nrow=z1,ncol=z0)
 
for(i in 1:z0) {MI[,i]<-ds[i]*MT[,i]}
 
MID<-as.data.frame(MI)
colnames(MID)<-headers
rownames(MID)<-ontID
 
ln<-(1/2*log2((RGO$R-RGO$Rb)*(RGO$G-RGO$Gb)))
 
* Warning messages are OK:
:1: In (RGO$R - RGO$Rb) * (RGO$G - RGO$Gb) :
: NAs produced by integer overflow
:2: NaNs produced
 
z2<-length(ln[1,])
zi<-tapply(ln[,1],as.factor(Ontario.GeneList),mean)
z3<-length(zi)
AO<-matrix(nrow=z3,ncol=z2)
 
for(i in 1:z0) {AO[,i]<-tapply(ln[,i],as.factor(Ontario.GeneList),mean)}
 
strains<-c('wt','dCIN5','dGLN3','dHAP4','dHMO1','dSWI4','dZAP1','Spar')
 
*After entering the call browser() below, maximize the window in which the graphs have appeared. Save the graphs as a JPEG (File>Save As>JPEG>100% quality...). Once the graphs have been saved, close the window and press Enter for the next set of graphs to appear.
**The last graph to appear will be the spar graphs.
**The graphs generated from this code are the before Ontario chips
*Be sure to save the 8 graphs before moving on to the next step
for (i in 1:length(strains)) {
  st<-strains[i]
  lt<-which(Otargets$Strain %in% st)
  if (st=='wt') {
      par(mfrow=c(3,5))
  } else {
      par(mfrow=c(4,5))
  }
  for (i in lt) {
    plot(AO[,i],MI[,i],main=headers[i],xlab="A",ylab="M",ylim=c(-5,5),xlim=c(0,15))
  }
  browser()
}
 
*To continue generating plots, press enter.
 
j0<-tapply(MAO$A[,1],as.factor(MAO$genes[,5]),mean)
k0<-length(MAO$A[1,])
j1<-length(j0)
AAO<-matrix(nrow=j1,ncol=k0)
 
for(i in 1:k0) {AAO[,i]<-tapply(MAO$A[,i],as.factor(MAO$genes[,5]),mean)}
 
*Remember, that after entering the call readline('Press Enter to continue'), maximize the window in which the graphs have appeared. Save the graphs as a JPEG (File>Save As>JPEG>100% quality...). Once the graphs have been saved, close the window and press Enter for the next set of graphs to appear.
**Again, the last graphs to appear will be the spar graphs.
**These graphs that are produced are for the after Ontario chips
*Again, be sure to save 8 graphs before moving on to the next part of the code.
for (i in 1:length(strains)) {
  st<-strains[i]
  lt<-which(Otargets$Strain %in% st)
  if (st=='wt') {
      par(mfrow=c(3,5))
  } else {
      par(mfrow=c(4,5))
  }
  for (i in lt) {
    plot(AAO[,i],MD2[,i],main=headers[i],xlab="A",ylab="M",ylim=c(-5,5),xlim=c(0,15))
  }
  browser()
}
*To continue generating plots, press enter.
 
for (i in 1:length(strains)) {
  par(mfrow=c(1,3))
  st<-strains[i]
  lt<-which(Otargets$Strain %in% st)
  if (st=='wt') {
      xcoord<-xy.coords(lt)$x-1
      fsize<-0.9
  } else {
      xcoord<-xy.coords(lt)$x-1.7
      fsize<-0.8
  }
  boxplot(MID[,lt],main='Before Normalization',ylab='Log Fold Change',ylim=c(-5,5),xaxt='n')
  axis(1,at=xy.coords(lt)$x,labels=FALSE)
  text(xcoord,par('usr')[3]-0.65,labels=headers[lt],srt=45,cex=fsize,xpd=TRUE)
  boxplot(MD2[,lt],main='After Within Array Normalization',ylab='Log Fold Change',ylim=c(-5,5),xaxt='n')
  axis(1,at=xy.coords(lt)$x,labels=FALSE)
  text(xcoord,par('usr')[3]-0.65,labels=headers[lt],srt=45,cex=fsize,xpd=TRUE)
  ft<-Otargets$MasterList[which(Otargets$Strain %in% st)]
  boxplot(MAD[,ft],main='After Between Array Normalization',ylab='Log Fold Change',ylim=c(-5,5),xaxt='n')
  axis(1,at=xy.coords(lt)$x,labels=FALSE)
  text(xcoord,par('usr')[3]-0.65,labels=headers[lt],srt=45,cex=fsize,xpd=TRUE)
  browser()
}
*To continue generating the box plots, press enter.
**You will have to save 8 plots before you have completed the procedure. The last box plot is for spar.
* Warnings are OK.
* Zip the files of the plots together and upload to LionShare and/or save to a flash drive.
<br>
<br>
<br>
When doing the transformation of the data, I replaced 477 #VALUE! entries with a single space.
<br>
<br>
<b>Statistical Analysis</b>
 
* For the statistical analysis, we will begin with the file "GCAT_and_Ontario_Final_Normalized_Data.csv" that you generated in the previous step.
* Open this file in Excel and Save As an Excel Workbook <code>*.xlsx</code>.  It is a good idea to add your initials and the date (yyyymmdd) to the filename as well.
* Rename the worksheet with the data "Compiled_Normalized_Data".
** Type the header "ID" in cell A1.
** Insert a new column after column A and name it "Standard Name".  Column B will contain the common names for the genes on the microarray.
*** Copy the entire column of IDs from Column A.
*** Paste the names into the "Value" field of the [http://www.yeastract.com/formorftogene.php ORF List <-> Gene List] tool in [http://www.yeastract.com YEASTRACT]. Then, click on the "Transform" button.
*** Select all of the names in the "Gene Name" column of the resulting table.
*** Copy and paste these names into column B of the <code>*.xlsx</code> file. Save your work.
** Insert a new column on the very left and name it "MasterIndex".  We will create a numerical index of genes so that we can always sort them back into the same order.
*** Type a "1" in cell A2 and a "2" in cell A3.
*** Select both cells. Hover your mouse over the bottom-right corner of the selection until it makes a thin black + sign.  Double-click on the + sign to fill the entire column with a series of numbers from 1 to 6189 (the number of genes on the microarray).
* Insert a new worksheet and call it "Rounded_Normalized_Data".  We are going to round the normalization results to four decimal places because of slight variations seen in different runs of the normalization script.
** Copy the first three columns of the "Compiled_Normalized_Data" sheet and paste it into the first three columns of the "Rounded_Normalized_Data" sheet.
** Copy the first row of the "Compiled_Normalized_Data" sheet and paste it into the first row of the "Rounded_Normalized_Data" sheet.
** In cell C2, type the equation <code>=ROUND(Compiled_Normalized_Data!C2,4)</code>.
** Copy and paste this equation in the rest of the cells of row 2.
** Select all of the cells of row 2 and hover your mouse over the bottom right corner of the selection.  When the cursor changes to a thin black "plus" sign, double-click on it to paste the equation to all the rows in the worksheet.  Save your work.
* Insert a new worksheet and call it "Master_Sheet".
** Go back to the "Rounded_Normalized_Data" sheet and Select All and Copy.
** Click on cell A1 of the "Master_Sheet" worksheet.  Select Paste special > Paste values to paste the values, but not the formulas from the previous sheet.  Save your work.
** There will be some #VALUE! errors in cells where there was missing data for genes that existed on the Ontario chips, but not the GCAT chips.
*** Select the menu item Find/Replace and Find all cells with "#VALUE!" and replace them with a single space character.  Record how many replacements were made to your electronic lab notebook.  Save your work.
* This will be the starting point for our statistical analysis below.
Creating the Worksheet
# Create a new worksheet, naming it "(STRAIN)_ANOVA" as appropriate.  For example, you might call yours "wt_ANOVA".
# Copy all of the data from the "Master_Sheet" worksheet for your strain and paste it into your new worksheet.
# At the top of the first column to the right of your data, create five column headers of the form (STRAIN)_AvgLogFC_(TIME) where (STRAIN) is your strain designation and (TIME) is 15, 30, etc.
# In the cell below the (STRAIN)_AvgLogFC_t15 header, type <code>=AVERAGE(</code>
# Then highlight all the data in row 2 associated with (STRAIN) and t15, press the closing paren key (shift 0),and press the "enter" key.
# This cell now contains the average of the log fold change data from the first gene at t=15 minutes.
# Click on this cell and position your cursor at the bottom right corner. You should see your cursor change to a thin black plus sign (not a chubby white one). When it does, double click, and the formula will magically be copied to the entire column of 6188 other genes.
# Repeat steps (4) through (8) with the t30, t60, t90, and the t120 data.
# Now in the first empty column to the right of the (STRAIN)_AvgLogFC_t120 calculation, create the column header (STRAIN)_ss_HO.
# In the first cell below this header, type <code>=SUMSQ(</code>
# Highlight all the LogFC data in row 2 for your (STRAIN) (but not the AvgLogFC), press the closing paren key (shift 0),and press the "enter" key.
# In the next empty column to the right of (STRAIN)_ss_HO, create the column headers (STRAIN)_ss_(TIME) as in (3).
# Make a note of how many data points you have at each time point for your strain.  For most of the strains, it will be 4, but for dHAP4 t90 or t120, it will be "3", and for the wild type it will be "4" or "5".  Count carefully. Also, make a note of the total number of data points. Again, for most strains, this will be 20, but for example, dHAP4, this number will be 18, and for wt it should be 23 (double-check).
# In the first cell below the header (STRAIN)_ss_t15, type <code>=SUMSQ(<range of cells for logFC_t15>)-<number of data points>*<AvgLogFC_t15>^2</code> and hit enter.
#* The phrase <range of cells for logFC_t15> should be replaced by the data range associated with t15.
#* The phrase <number of data points> should be replaced by the number of data points for that timepoint (either 3, 4, or 5).
#* The phrase <AvgLogFC_t15> should be replaced by the cell number in which you computed the AvgLogFC for t15, and the "^2" squares that value.
#* Upon completion of this single computation, use the Step (7) trick to copy the formula throughout the column.
# Repeat this computation for the t30 through t120 data points.  Again, be sure to get the data for each time point, type the right number of data points, and get the average from the appropriate cell for each time point, and copy the formula to the whole column for each computation.
# In the first column to the right of (STRAIN)_ss_t120, create the column header (STRAIN)_SS_full.
# In the first row below this header, type <code>=sum(<range of cells containing "ss" for each timepoint>)</code> and hit enter.
# In the next two columns to the right, create the headers (STRAIN)_Fstat and (STRAIN)_p-value.
# Recall the number of data points from (13): call that total n.
# In the first cell of the (STRAIN)_Fstat column, type <code>=((n-5)/5)*(<(STRAIN)_ss_HO>-<(STRAIN)_SS_full>)/<(STRAIN)_SS_full></code> and hit enter. 
#* Don't actually type the n but instead use the number from (13). Also note that "5" is the number of timepoints and the dSWI4 strain has 4 timepoints (it is missing t15).
#* Replace the phrase (STRAIN)_ss_HO with the cell designation.
#* Replace the phrase <(STRAIN)_SS_full> with the cell designation.
#* Copy to the whole column.
# In the first cell below the (STRAIN)_p-value header, type <code>=FDIST(<(STRAIN)_Fstat>,5,n-5)</code> replacing the phrase <(STRAIN)_Fstat> with the cell designation and the "n" as in (13) with the number of data points total. (Again, note that the number of timepoints is actually "4" for the dSWI4 strain).  Copy to the whole column.
# Before we move on to the next step, we will perform a quick sanity check to see if we did all of these computations correctly.
#*  Click on cell A1 and click on the Data tab.  Select the Filter icon (looks like a funnel). Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
#* Click on the drop-down arrow on your (STRAIN)_p-value column. Select "Number Filters". In the window that appears, set a criterion that will filter your data so that the p value has to be less than 0.05.
#* Excel will now only display the rows that correspond to data meeting that filtering criterion.  A number will appear in the lower left hand corner of the window giving you the number of rows that meet that criterion.  We will check our results with each other to make sure that the computations were performed correctly.
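For reference, a compact R sketch of the same within-strain ANOVA statistic for a single gene follows; the replicate values are hypothetical, and the actual analysis was done with the spreadsheet formulas above.
 # Sketch only: F statistic and p value for one gene (hypothetical wt-like data, 23 points, 5 timepoints)
 logfc <- list(t15 = c(0.5, 0.3, 0.4, 0.6),
               t30 = c(1.1, 0.9, 1.0, 1.2, 1.0),
               t60 = c(0.2, 0.1, 0.3, 0.2),
               t90 = c(-0.4, -0.5, -0.3, -0.4, -0.6),
               t120 = c(-0.8, -0.9, -0.7, -0.8, -1.0))
 n <- length(unlist(logfc))                                                   # total data points (23 here)
 k <- length(logfc)                                                           # number of timepoints (5)
 ss_HO   <- sum(unlist(logfc)^2)                                              # (STRAIN)_ss_HO
 ss_full <- sum(sapply(logfc, function(x) sum(x^2) - length(x) * mean(x)^2))  # (STRAIN)_SS_full
 Fstat   <- ((n - k) / k) * (ss_HO - ss_full) / ss_full                       # (STRAIN)_Fstat
 pval    <- pf(Fstat, df1 = k, df2 = n - k, lower.tail = FALSE)               # Excel's =FDIST(Fstat, 5, n-5)
 c(F = Fstat, p = pval)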
 
Calculate the Bonferroni and p value Correction
# Now we will perform adjustments to the p value to correct for the multiple testing problem.  Label the next two columns to the right with the same label, (STRAIN)_Bonferroni_p-value.
# Type the equation <code>=<(STRAIN)_p-value>*6189</code>.  Upon completion of this single computation, use the Step (10) trick to copy the formula throughout the column.
# Replace any corrected p value that is greater than 1 by the number 1 by typing the following formula into the first cell below the second (STRAIN)_Bonferroni_p-value header: <code>=IF(r2>1,1,r2)</code>.  Use the Step (10) trick to copy the formula throughout the column.
 
Calculate the Benjamini & Hochberg p value Correction
# Insert a new worksheet named "(STRAIN)_B&H".
# Copy and paste the "MasterIndex", "ID", and "Standard Name" columns from your previous worksheet into the first three columns of the new worksheet.
# For the following, use Paste special > Paste values.  Copy your unadjusted p values from your ANOVA worksheet and paste it into Column D.
# Select all of columns A, B, C, and D. Sort by ascending values on Column D: click the sort button from A to Z on the toolbar and, in the window that appears, sort by column D, smallest to largest.
# Type the header "Rank" in cell E1.  We will create a series of numbers in ascending order from 1 to 6189 in this column.  This is the p value rank, smallest to largest.  Type "1" into cell E2 and "2" into cell E3. Select both cells E2 and E3. Double-click on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 6189.
# Now you can calculate the Benjamini and Hochberg p value correction. Type (STRAIN)_B-H_p-value in cell F1. Type the following formula in cell F2: <code>=(D2*6189)/E2</code> and press enter. Copy that equation to the entire column.
# Type "STRAIN_B-H_p-value" into cell G1.
# Type the following formula into cell G2: <code>=IF(F2>1,1,F2)</code> and press enter. Copy that equation to the entire column.
# Select columns A through G.  Now sort them by your MasterIndex in Column A in ascending order.
# Copy column G and use Paste special > Paste values to paste it into the next column on the right of your ANOVA sheet.
* '''''Upload the .xlsx file that you have just created to LionShare.'''''  Send Dr. Dahlquist an e-mail with the link to the file (e-mail kdahlquist at lmu dot edu).
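The two p value corrections above can also be sketched in R; this is for illustration only (the p values below are random placeholders), and note that R's built-in p.adjust additionally enforces monotonicity, so it can differ slightly from the spreadsheet formula.
 # Sketch only: Bonferroni and Benjamini & Hochberg corrections for 6189 placeholder p values
 set.seed(42)
 p <- runif(6189)                                  # placeholder unadjusted ANOVA p values
 bonferroni <- pmin(1, p * length(p))              # =p*6189, capped at 1
 rank_p     <- rank(p, ties.method = "first")      # rank of each p value, smallest = 1
 bh_manual  <- pmin(1, p * length(p) / rank_p)     # =(p*6189)/rank, capped at 1, as in the spreadsheet
 bh_builtin <- p.adjust(p, method = "BH")          # R's version, with the extra monotonicity step
 sum(bh_builtin < 0.05)                            # count of genes significant after correction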
 
*Anu and I were assigned the wild-type strain to analyze statistically
*The following were the numbers of replicates for our data:
**T15: 4
**T30: 5
**T60: 4
**T90: 5
**T120: 5
**We had a total of 23 chips
*We had to adjust our values to account for the various blanks in the data for the various times. These adjustments were made to make sure that those specific genes had their data corrected for those blank time points.
**53 genes were missing data points
**For T15, there was 1 blank
**For T30, 60, 90, 120, there were 2 blanks per time point
 
<b>Sanity Check: Number of genes significantly changed</b>
* Go to your (STRAIN)_ANOVA worksheet.
* Select row 1 (the row with your column headers) and select the menu item Data > Filter > Autofilter (The funnel icon on the Data tab).  Little drop-down arrows should appear at the top of each column.  This will enable us to filter the data according to criteria we set.
* Click on the drop-down arrow for the unadjusted p value.  Set a criterion that will filter your data so that the p value has to be less than 0.05.
** '''''How many genes have p < 0.05?  and what is the percentage (out of 6189)?'''''
***2377 (38.41%)
** '''''How many genes have p < 0.01? and what is the percentage (out of 6189)?'''''
***1531 (24.74%)
** '''''How many genes have p < 0.001? and what is the percentage (out of 6189)?'''''
***850 (13.73%)
** '''''How many genes have p < 0.0001? and what is the percentage (out of 6189)?'''''
***449 (7.25%)
** '''''How many genes are p < 0.05 for the Bonferroni-corrected p value? and what is the percentage (out of 6189)?'''''
***226 (3.65%)
** '''''How many genes are p < 0.05 for the Benjamini and Hochberg-corrected p value? and what is the percentage (out of 6189)?'''''
***1673 (31.88%)
To view the comparison of all the strains analyzed on this day, please download the following slide: [[Media:Williams WtANOVA.pptx| ANOVA Test]]
 
=====May 19, 2015=====
Purpose: See May 18, 2015. Began with a T test as well as a between strain ANOVA to continue to obtain the final data that will be input into MATLAB.
<br>
<br>
Procedure:
<br>
<br>
<b>Modified T test</b>
* Insert a new worksheet into your Excel workbook and name it "(STRAIN)_ttest", e.g., "wt_ttest"
* Go back to the "Master_Sheet" worksheet for your strain.  Copy the first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet for your strain and paste it into your new worksheet.  Copy the columns containing the data for your strain and paste it into your new worksheet.
* Go to the empty columns to the right on your worksheet.  Create new column headings in the top cells to label the average log fold changes that you will compute.  Name them with the pattern <dHAP4>_<AvgLogFC>_<tx> where you use the appropriate text within the <> and where x is the time.  For me: wt_AvgLFC_15 and so on for every time point.
* Compute the average log fold change for the replicates for each timepoint by typing the equation:
=AVERAGE(''range of cells in the row for that timepoint'')
into the second cell below the column heading.  For example, your equation might read
=AVERAGE(C2:F2)
Copy this equation and paste it into the rest of the column. 
* Create the equation for the rest of the timepoints and paste it into their respective columns.  ''Note that you can save yourself some time by completing the first equation for all of the averages and then copy and paste all the columns at once.''
* Go to the empty columns to the right on your worksheet.  Create new column headings in the top cells to label the T statistic that you will compute.  Name them with the pattern <dHAP4>_<Tstat>_<tx> where you use the appropriate text within the <> and where x is the time.  For example, wt_Tstat_15 and so on for each time point.  You will now compute a T statistic that tells you whether the normalized average log fold change is significantly different than 0 (no change in expression).  Enter the equation into the second cell below the column heading: 
=AVERAGE(''range of cells'')/(STDEV(''range of cells'')/SQRT(''number of replicates''))
For example, your equation might read:
=AVERAGE(C2:F2)/(STDEV(C2:F2)/SQRT(4))
(NOTE: in this case the number of replicates is 4.  Be careful that you are using the correct number of parentheses.)  Copy the equation and paste it into all rows in that column. Create the equation for the rest of the timepoints and paste it into their respective columns.  ''Note that you can save yourself some time by completing the first equation for all of the T statistics and then copy and paste all the columns at once.''
* Go to the empty columns to the right on your worksheet.  Create new column headings in the top cells to label the P value that you will compute.  Name them with the pattern <dHAP4>_<Pval>_<tx> where you use the appropriate text within the <> and where x is the time.  For example, "dHAP4_Pval_t15".  In the cell below the label, enter the equation: 
=TDIST(ABS(''cell containing T statistic''),''degrees of freedom'',2)
For example, your equation might read:
=TDIST(ABS(AE2),3,2)
*The number of degrees of freedom is the number of replicates minus one.  Copy the equation and paste it into all rows in that column.
**The degrees of freedom used were 3 and 4
* As with the ANOVA, we encounter the multiple testing problem here as well.
*P-values less than 0.05
**T15: 1078
***The adjusted p-value: 1075
**T30: 1600
***The adjusted p-value: 1587
**T60: 1837
***The adjusted p-value: 1814
**T90: 759
***The adjusted p-value: 749
**T120: 513
***The adjusted p-value: 509
*Adjustments had to be made to the genes that had missing data points. A total of 53 had blanks.
**The square roots for the Tstat were changed: 15 was altered to 3; 30 was altered to 3; 60 had 2; 90 was altered to 3 and 120 was altered to 3
**The Degrees of Freedom were changed to figure out the p-values: 15, 30, 90, and 120 all had 2 while 60 had 1.
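A minimal R sketch of the modified t test for one gene at one timepoint follows; the replicate values are hypothetical, and the actual analysis used the Excel formulas above.
 # Sketch only: modified t test for one gene at one timepoint (hypothetical replicates)
 reps  <- c(0.8, 1.1, 0.9, 1.2)                                # log2 fold changes at one timepoint
 n     <- length(reps)                                         # number of replicates (4 here)
 tstat <- mean(reps) / (sd(reps) / sqrt(n))                    # =AVERAGE(...)/(STDEV(...)/SQRT(n))
 pval  <- 2 * pt(abs(tstat), df = n - 1, lower.tail = FALSE)   # =TDIST(ABS(tstat), n-1, 2)
 c(t = tstat, p = pval)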
<b>Bonferroni Correction</b>
# Now we will perform adjustments to the p value to correct for the multiple testing problem.  Label the columns to the right with the label, (STRAIN)_Bonferroni-Pval_tx (do this twice in a row).
# Type the equation <code>=<(STRAIN)_Pval_tx>*6189</code>.  Upon completion of this single computation, use the trick to copy the formula throughout the column.
# Replace any corrected p value that is greater than 1 by the number 1 by typing the following formula into the first cell below the second (STRAIN)_Bonferroni-Pval_tx header: <code>=IF(r2>1,1,r2)</code>.  Use the trick to copy the formula throughout the column.
*The resulting p-values less than 0.05
**T15: 0
**T30: 1
**T60: 0
**T90: 0
**T120: 1
<b>Benjamini & Hochberg Correction</b>
# Insert a new worksheet named "(STRAIN)_ttest_B-H".  You will need to perform the procedure below for the p values for each timepoint.  Do them individually one at a time to avoid confusion.
# Copy and paste the "MasterIndex", "ID", and "Standard Name" columns from your previous worksheet into the first three columns of the new worksheet.
# For the following, use Paste special > Paste values.  Copy your unadjusted p values from the first timepoint from your ttest worksheet and paste it into Column D.
# Select all of columns A, B, C, and D. Sort by ascending values on Column D: click the sort button from A to Z on the toolbar and, in the window that appears, sort by column D, smallest to largest.
# Type the header "Rank" in cell E1.  We will create a series of numbers in ascending order from 1 to 6189 in this column.  This is the p value rank, smallest to largest.  Type "1" into cell E2 and "2" into cell E3. Select both cells E2 and E3. Double-click on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 6189.
# Now you can calculate the Benjamini and Hochberg p value correction. Type (STRAIN)_B-H_Pval_tx in cell F1. Type the following formula in cell F2: <code>=(D2*6189)/E2</code> and press enter. Copy that equation to the entire column.
# Type "STRAIN_B-H_Pval_tx" into cell G1.
# Type the following formula into cell G2: <code>=IF(F2>1,1,F2)</code> and press enter. Copy that equation to the entire column.
# Select columns A through G.  Now sort them by your MasterIndex in Column A in ascending order.
# Copy column G and use Paste special > Paste values to paste it into the next column on the right of your ttest sheet.
*The Resulting P-values less than 0.05
**T15: 0
**T30: 85
**T60: 0
**T90: 0
**T120: 1
<br>
<br>
<b>Between Strain ANOVA</b>
 
The detailed description of how this is done can be found on [[Dahlquist:Modified_ANOVA_and_p_value_Corrections_for_Microarray_Data#Comparing_Significant_Changes_in_Expression_Between_Two_Strains | this page.]] A brief version of the protocol appears below.


* All two strain comparisons were performed in MATLAB using the script [[Media:Two_strain_compare_corrected_20140813_3pm.zip | Two_strain_compare_corrected_20140813_3pm.zip (within a zip file)]]:
** Download the zipped script file, extract it to the folder that contains your Excel file with the worksheet named "Master_Sheet".  (The script and Excel file must be in the same folder to work.)
** Launch MATLAB version 2014b.
** In MATLAB, you will need to navigate to the folder containing the script and the Excel file.
*** Near the top of the page, you will see a field that contains the path to the working directory.  Just to the left of it, there is an icon that looks like a folder opening with a green down arrow.  Click on this icon to open a dialog box where you can choose your folder containing the script and Excel file.
*** Once you have selected your folder, the left-hand pane should display the contents of that folder.  To open the MATLAB script, you can double-click on it from that pane.  The code for the script will appear in the center pane.
* You will need to make a few edits to the code, depending on which strain comparison you want to make.
** For the first block of code, the user must input the name of the Excel file (<code>*.xls or .xlsx</code>) to be imported as the variable "filename", the sheet from which the data will be imported as the variable "sheetname", and the two strains that will be compared as the variables "strain1" and "strain2".
*** MATLAB will read either .xls or .xlsx
*** Also note that this script will not work for any comparison involving dSWI4, because the script has been hard-coded to expect 5 timepoints and dSWI4 has only 4.
* The user does not have to modify any of the code from here on.
* The next two lines of code ask the user whether or not they would like to see plots for each gene with an unadjusted p-value < 0.05. If the user does want to see these plots, they enter "1". If they would not like to see these plots, the user enters "0".  When prompted, enter a "1" to see the plots displayed.
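For illustration, the user-edited block at the top of the script might look something like the sketch below; the workbook name is a placeholder and the exact assignment syntax should be checked against the script itself.
<pre>
% Hypothetical example of the user-edited variables described above;
% adjust the filename and strain labels to your own comparison.
filename  = 'Williams_Master_Sheet.xlsx';  % Excel file containing the "Master_Sheet" worksheet
sheetname = 'Master_Sheet';                % sheet from which the data will be imported
strain1   = 'wt';                          % first strain in the comparison
strain2   = 'Spar';                        % second strain (S. paradoxus)
</pre>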
I ran the wild-type vs. Spar comparison ("Spar" denotes S. paradoxus, the yeast strain that I studied for a class assignment in [[BIOL398-04/S15]]).
*p < 0.05: 1498 genes
*B&H p < 0.05: 703 genes
<br>
<br>
<b>Sanity Check</b>
<br>
<br>
To view the comparisons of the individual time points against each other, please view the following powerpoint. This powerpoint contains the slide from yesterday as well as the T test analysis done today.
<br>
[[Media:Williams wtANOVA Ttest.pptx| Statistical Analysis Powerpoint]]
<br>
<br>
<b>Clustering the Genes with STEM</b>
 
# '''Prepare your microarray data file for loading into STEM.'''
#* Insert a new worksheet into your Excel workbook, and name it "(STRAIN)_stem".
#* Select all of the data from your "(STRAIN)_ANOVA" worksheet and Paste special > paste values into your "(STRAIN)_stem" worksheet.
#** Your leftmost column should have the column header "MasterIndex".  Rename this column to "SPOT".  Column B should be named "ID".  Rename this column to "Gene Symbol".  Delete the column named "StandardName".
#** Filter the data on the B-H corrected p value to be > 0.05 (that's '''greater than''' in this case).
#*** Once the data has been filtered, select all of the rows (except for your header row) and delete the rows by right-clicking and choosing "Delete Row" from the context menu.  Undo the filter.  This ensures that we will cluster only the genes with a "significant" change in expression and not the noise.
#** Delete all of the data columns '''''EXCEPT''''' for the Average Log Fold change columns for each timepoint (for example, wt_AvgLogFC_t15, etc.).
#** Rename the data columns with just the time and units (for example, 15m, 30m, etc.).
#** Save your work.  Then use ''Save As'' to save this spreadsheet as Text (Tab-delimited) (*.txt).  Click OK to the warnings and close your file.  (A MATLAB sketch of this data-preparation step appears after this numbered list.)
#*** Note that you should turn on the file extensions if you have not already done so.
# '''Now download and extract the STEM software.'''  [http://www.cs.cmu.edu/~jernst/stem/ Click here to go to the STEM web site].
#* Click on the [http://www.andrew.cmu.edu/user/zivbj/stemreg.html download link], register, and download the <code>stem.zip</code> file to your Desktop.
#* Unzip the file.  In Seaver 120, you can right click on the file icon and select the menu item ''7-zip > Extract Here''.
#* This will create a folder called <code>stem</code>.  Inside the folder, double-click on the <code>stem.jar</code> to launch the STEM program.
<!--#** In Seaver 120, we encountered an issue where the program would not launch on the Windows XP machines due to a lack of memory. (Even though the computers have been upgraded to Windows 7, do this to launch the program.)  To get around this problem, launch STEM from the command line.
#*** Go to the start menu and click on ''Programs > Accessories > Command Prompt''.
#*** You will need to navigate to the directory (folder) in which the STEM program resides.  If you followed the instructions above and extracted the stem folder to the Desktop, type the following:  <code>cd Desktop\stem</code>  and press "Enter".
#*** To launch the program then type: <code>java -mx512M -jar stem.jar -d defaults.txt</code>  and press "Enter".  This will launch the program with less memory allocated to it.-->
# '''Running STEM'''
## In section 1 (Expression Data Info) of the main STEM interface window, click on the ''Browse...'' button to navigate to and select your file.
##* Click on the radio button ''No normalization/add 0''.
##* Check the box next to ''Spot IDs included in the data file''.
## In section 2 (Gene Info) of the main STEM interface window, select ''Saccharomyces cerevisiae (SGD)'' from the drop-down menu for Gene Annotation Source.  Select ''No cross references'' from the Cross Reference Source drop-down menu.  Select ''No Gene Locations'' from the Gene Location Source drop-down menu.
## In section 3 (Options) of the main STEM interface window, make sure that the Clustering Method says "STEM Clustering Method" and do not change the defaults for Maximum Number of Model Profiles or Maximum Unit Change in Model Profiles between Time Points.
## In section 4 (Execute) click on the yellow Execute button to run STEM.
# '''Viewing and Saving STEM Results'''
## A new window will open called "All STEM Profiles (1)".  Each box corresponds to a model expression profile.  Colored profiles have a statistically significant number of genes assigned; they are arranged in order from most to least significant p value.  Profiles with the same color belong to the same cluster of profiles.  The number in each box is simply an ID number for the profile.
##* Click on the button that says "Interface Options...".  At the bottom of the Interface Options window that appears below where it says "X-axis scale should be:", click on the radio button that says "Based on real time".  Then close the Interface Options window.
##*Take a screenshot of this window (on a PC, simultaneously press the <code>Alt</code> and <code>PrintScreen</code> buttons to save the view in the active window to the clipboard) and paste it into a PowerPoint presentation to save your figures.
## Click on each of the SIGNIFICANT profiles (the colored ones) to open a window showing a more detailed plot containing all of the genes in that profile.
##* Take a screenshot of each of the individual profile windows and save the images in your PowerPoint presentation.
##* At the bottom of each profile window, there are two yellow buttons "Profile Gene Table" and "Profile GO Table".  For each of the profiles, click on the "Profile Gene Table" button to see the list of genes belonging to the profile.  In the window that appears, click on the "Save Table" button and save the file to your desktop.  Make your filename descriptive of the contents, e.g. "wt_profile#_genelist.txt", where you replace the number symbol with the actual profile number.
##** Upload these files to [http://lionshare.lmu.edu LionShare] and e-mail a link to Dr. Dahlquist.  (It will be easier to [[BIOL398-04/S15:Help#Compressing_Files_with_7-Zip | zip all the files together]] and upload them as one file).
##* For each of the significant profiles, click on the "Profile GO Table" to see the list of Gene Ontology terms belonging to the profile.  In the window that appears, click on the "Save Table" button and save the file to your desktop.  Make your filename descriptive of the contents, e.g. "wt_profile#_GOlist.txt", where you use "wt", "dGLN3", etc. to indicate the dataset and where you replace the number symbol with the actual profile number.  At this point you have saved all of the primary data from the STEM software and it's time to interpret the results!
##** Upload these files to [http://lionshare.lmu.edu LionShare] and e-mail a link to Dr. Dahlquist. (It will be easier to [[BIOL398-04/S15:Help#Compressing_Files_with_7-Zip | zip all the files together]] and upload them as one file).
# '''Analyzing and Interpreting STEM Results'''
## Select '''''one''''' of the profiles you saved in the previous step for further interpretation of the data.  I suggest that you choose one that has a pattern of up- or down-regulated genes at the early (first three) timepoints.  You and your partner will choose the '''''same''''' profile so that you can compare your results between the two strains.  Answer the following:
##* '''''Why did you select this profile?  In other words, why was it interesting to you?'''''
##* '''''How many genes belong to this profile?'''''
##* '''''How many genes were expected to belong to this profile?'''''
##* '''''What is the p value for the enrichment of genes in this profile?'''''  Bear in mind that we just finished computing p values to determine whether each individual gene had a significant change in gene expression at each time point.  This p value determines whether the number of genes that show this particular expression profile across the time points is significantly more than expected.
##* Open the GO list file you saved for this profile in Excel.  This list shows all of the Gene Ontology terms that are associated with genes that fit this profile.  Select the third row and then choose from the menu Data > Filter > Autofilter.  Filter on the "p-value" column to show only GO terms that have a p value of < 0.05.  '''''How many GO terms are associated with this profile at p < 0.05?'''''  The GO list also has a column called "Corrected p-value".  This correction is needed because the software has performed thousands of significance tests.  Filter on the "Corrected p-value" column to show only GO terms that have a corrected p value of < 0.05.  '''''How many GO terms are associated with this profile with a corrected p value < 0.05?'''''
##* Select 10 Gene Ontology terms from your filtered list (either p < 0.05 or corrected p < 0.05). 
##** Since you and your partner are going to compare the results from each strain for the same cluster, you can either:
##*** Choose the same 10 terms that are in common between strains.
##*** Choose 10 terms that are different between the strains (5 or so from each).
##*** Choose some that are the same and some that are different.
##**'''''Look up the definitions for each of the terms at [http://geneontology.org http://geneontology.org].  For your final lab report, you will discuss the biological interpretation of these GO terms.  In other words, why does the cell react to cold shock by changing the expression of genes associated with these GO terms?  Also, what does this have to do with HAP4 being deleted?'''''
##** To easily look up the definitions, go to [http://geneontology.org http://geneontology.org].
##** Copy and paste the GO ID (e.g. GO:0044848) into the search field at the upper left of the page called "Search GO Data".
##** In the [http://amigo.geneontology.org/amigo/medial_search?q=GO%3A0044848 results] page, click on the button that says "Link to detailed information about <term>, in this case "biological phase"".
##** The definition will be on the next results page, e.g. [http://amigo.geneontology.org/amigo/term/GO:0044848 here].
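As a supplement to step 1 of the STEM procedure above, the tab-delimited STEM input file can also be written from MATLAB. The sketch below is only an outline: the variable names are assumptions for columns presumed to be already loaded from the "(STRAIN)_ANOVA" worksheet, and the 0.05 cutoff on the B-H corrected p value is the one used in the protocol.
<pre>
% Minimal sketch of STEM input-file preparation, assuming these variables
% already hold the relevant "(STRAIN)_ANOVA" columns (names are assumptions):
%   masterIndex : numeric vector of MasterIndex values
%   geneID      : cell array of gene IDs
%   bhPval      : B-H corrected p values
%   avgLogFC    : matrix of AvgLogFC values, one column per timepoint
keep = bhPval <= 0.05;                   % cluster only genes with a significant change
rows = find(keep);
fid  = fopen('wt_stem.txt', 'w');        % tab-delimited text file for STEM
fprintf(fid, 'SPOT\tGene Symbol\t15m\t30m\t60m\t90m\t120m\n');
for i = rows'
    fprintf(fid, '%d\t%s\t%.4f\t%.4f\t%.4f\t%.4f\t%.4f\n', ...
            masterIndex(i), geneID{i}, avgLogFC(i, :));
end
fclose(fid);
</pre>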
<br>
=====May 20, 2015=====
Purpose: See May 18. Continuation of using the results from the statistical analysis to obtain the data that will be further analyzed to construct a GRN, as well as to run the model in MATLAB.
<br>
Procedure
<br>
<b>GenMAPP & MAPP Finder</b>
<br>
Preparing the GenMAPP Input File
* Insert a new worksheet and name it STRAIN_GenMAPP.
* Go back to the "ANOVA" worksheet for your strain and Select All and Copy.
* Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. 
** Delete the columns containing the "ss" calculations, just retaining the individual log fold change data, the average log fold change data, and the p values.  For the Bonferroni and B&H p values, just keep the one column where we replaced all values > 1 with 1.
* Now go to your "_ttest" worksheet.  Copy just the columns containing the P values for the individual timepoints and Paste special > Paste values into your GenMAPP worksheet to the right of the previous data.  For the Bonferroni and B&H p values, just keep one column where we replaced all values > 1 with 1.
* It will be useful if we arrange the columns in a slightly different order: all individual log fold change data, then the ANOVA p values, then the AvgLogFC and p values for the individual timepoints clustered together (e.g., all t15 data together).
* Select all of the columns containing Fold Changes.  Select the menu item Format > Cells.  Under the number tab, select 2 decimal places.  Click OK.
* Select all of the columns containing T statistics or P values.  Select the menu item Format > Cells.  Under the number tab, select 4 decimal places. Click OK.
* We will now format this file for use with GenMAPP.
** Currently, the "MasterIndex" column is the first column in the worksheet.  We need the "ID" column to be the first column.  Select Column B and Cut.  Right-click on Cell A1 and select "Insert cut cells".  This will reverse the position of the columns.
** Insert a new empty column in Column B.  Type "SystemCode" in the first cell and "D" in the second cell of this column.  Use the fill trick from earlier (double-click the plus sign at the lower right-hand corner of the selection) to fill this entire column with "D".
** Make sure to save this work as your .xlsx file.  Now save this worksheet as a tab-delimited text file for use with GenMAPP in the next section.
*Formatting
**All the p values were formatted to 4 decimal places
**All the individual log fold change timepoints, as well as their averages, were formatted to 2 decimal places
<b>Running GenMAPP</b>
<br>
Each time you launch GenMAPP, you need to make sure that the correct Gene Database (.gdb) is loaded.
* Look in the lower left-hand corner of the window to see which Gene Database has been selected.
* If you need to change the Gene Database, select Data > Choose Gene Database.  Navigate to the directory C:\GenMAPP 2 Data\Gene Databases and choose the correct one for your species.
* For the exercise today, if the yeast Gene Database is not present on your computer, you will need to download it. [https://lionshare.lmu.edu/Users/kdahlqui/BIOL478/Sc-Std_20060526.zip Click this link to download the yeast Gene Database.]
* Unzip the file and save it, Sc-Std_20060526.gdb, to the folder C:\GenMAPP 2 Data\Gene Databases.
<b>GenMAPP Expression Dataset Manager Procedure</b>
<br>
* Launch the GenMAPP Program.  Check to make sure the correct Gene Database is loaded.
* Select the Data menu from the main Drafting Board window and choose Expression Dataset Manager from the drop-down list. The Expression Dataset Manager window will open.
* Select New Dataset from the Expression Datasets menu. Select the tab-delimited text file that you formatted for GenMAPP (.txt) in the procedure above from the file dialog box that appears.
* The Data Type Specification window will appear.  GenMAPP is expecting that you are providing numerical data.  If any of your columns has text (character) data, check the box next to the field (column) name.
** The column ''StandardName'' has text data in it, but none of the rest do.
* Allow the Expression Dataset Manager to convert your data.
** This may take a few minutes depending on the size of the dataset and the computer’s memory and processor speed. When the process is complete, the converted dataset will be active in the Expression Dataset Manager window and the file will be saved in the same folder the raw data file was in, named the same except with a .gex extension; for example, MyExperiment.gex.
** A message may appear saying that the Expression Dataset Manager could not convert one or more lines of data. Lines that generate an error during the conversion of a raw data file are not added to the Expression Dataset. Instead, an exception file is created. The exception file is given the same name as your raw data file with .EX before the extension (e.g., MyExperiment.EX.txt). The exception file will contain all of your raw data, with the addition of a column named ~Error~. This column contains either error messages or, if the program finds no errors, a single space character.
*** '''Record the number of errors in your lab notebook.'''
***97 errors were found when running the program. These errors most likely occurred because those genes could not be found or matched in the database, owing to the age of the Gene Database.
* Customize the new Expression Dataset by creating new Color Sets which contain the instructions to GenMAPP for displaying data on MAPPs.
** Color Sets contain the instructions to GenMAPP for displaying data from an Expression Dataset on MAPPs. Create a Color Set by filling in the following different fields in the Color Set area of the Expression Dataset Manager:  a name for the Color Set, the gene value, and the criteria that determine how a gene object is colored on the MAPP. Enter a name in the Color Set Name field that is 20 characters or fewer.  You will have one Color Set per strain per time point.
** The Gene Value is the data displayed next to the gene box on a MAPP. Select the column of data to be used as the Gene Value from the drop down list or select [none].  We will use "Avg_LogFC_" for the appropriate time point.
** Activate the Criteria Builder by clicking the New button.
** Enter a name for the criterion in the Label in Legend field.
** Choose a color for the criterion by left-clicking on the Color box. Choose a color from the Color window that appears and click OK.
** State the criterion for color-coding a gene in the Criterion field.
*** A criterion is stated with relationships such as "this column greater than this value" or "that column less than or equal to that value". Individual relationships can be combined using as many ANDs and ORs as needed. A typical relationship is
[ColumnName] RelationalOperator Value
with the column name always enclosed in brackets and character values enclosed in single quotes. For example:
[Fold Change] >= 2
[p value] < 0.05
[Quality] = 'high'
This is equivalent to the queries that you performed on the command line when working with the PostgreSQL movie database.  GenMAPP uses a graphical user interface (GUI) to help the user format the queries correctly.  The easiest and safest way to create criteria is by choosing items from the Columns and Ops (operators) lists shown in the Criteria Builder. The Columns list contains all of the column headings from your Expression Dataset. To choose a column from the list, click on the column heading. It will appear at the location of the cursor in the Criterion box. The Criteria Builder surrounds the column names with brackets.
 
The Ops (operators) list contains the relational operators that may be used in the criteria: equals ( = ), greater than ( > ), less than ( < ), greater than or equal to
( >= ), less than or equal to ( <= ), is not equal to ( <> ). To choose an operator from the list, click on the symbol. It will appear at the location of the insertion bar (cursor) in the Criterion box. The Criteria Builder automatically surrounds the operators with spaces.
The Ops list also contains the conjunctions AND and OR, which may be used to make compound criteria. For example:
[Fold Change] > 1.2 AND [p value] <= 0.05
Parentheses control the order of evaluation. Anything in parentheses is evaluated first. Parentheses may be nested. For example:
[Control Average] = 100 AND ([Exp1 Average] > 100 OR [Exp2 Average] > 100)
Column names may be used anywhere a value can, for example:
[Control Average] < [Experiment Average]
 
* After completing a new criterion, add the criterion entry (label, criterion, and color) to the Criteria List by clicking the Add button.
** For the yeast dataset, you will create two criteria for each Color Set.  "Increased" will be [<strain>_Avg_LogFC_<timepoint>] > 0.25 AND [<strain>Pval_<timepoint>] < 0.05 and "Decreased" will be [<strain>_Avg_LogFC_<timepoint>] < -0.25 AND [<strain>Pval_<timepoint>] < 0.05.  Make sure that the increased and decreased average log fold change values match the timepoint of the Color Set (see the sketch after this list).
** You may continue to add criteria to the Color Set by using the previous steps.
*** The buttons to the right of the list represent actions that can be performed on individual criteria. To modify a criterion label, color, or the criterion itself, first select the criterion in the list by left-clicking on it, and then click the Edit button. This puts the selected criterion into the Criteria Builder to be modified. Click the Save button to save changes to the modified criterion; click the Add button to add it  to the list as a separate criterion. To remove a criterion from the list, left-click on the criterion to select it, and then click on the Delete button. The order of Criteria in the list has significance to GenMAPP. When applying an Expression Dataset and Color Set to a MAPP, GenMAPP examines the expression data for a particular gene object and applies the color for the first criterion in the list that is true. Therefore, it is imperative that when criteria overlap the user put the most important or least inclusive criteria in the list first. To change the order of the criteria in the list, left-click on the criterion to select it and then click the Move Up or Move Down buttons. No criteria met and Not found are always the last two positions in the list.
**These are the colors used:
***Increased: Salmon/light red
***Decreased: Light baby blue/cyan
* You will also create ColorSets to view the within-strain ANOVA p values for your strain, with criteria for viewing the unadjusted, Bonferroni-corrected, and B&H corrected p values.
**These were the colors used for the ANOVA:
***Bonferroni p-value: light pink
***B&H p-value: light orange/tangerine
***Unadjusted p-value: dirty yellow/pale yellow (so it's not too rough on the eyes)
* Finally, you will create a ColorSet to view the between-strain ANOVA p values for your wt v. STRAIN comparison.
**The colors used were:
***B&H p-value: light orange/tangerine
***Unadjusted p-value: dirty yellow/pale yellow
* Save the entire Expression Dataset by selecting Save from the Expression Dataset menu. Changes made to a Color Set are not saved until you do this.
* Exit the Expression Dataset Manager to view the Color Sets on a MAPP. Choose Exit from the Expression Dataset menu or click the close box in the upper right hand corner of the window.
* '''Upload your .gex file to Lionshare and share it with Dr. Dahlquist.'''  E-mail the link to the file to Dr. Dahlquist.
* Click [https://lionshare.lmu.edu/Users/kdahlqui/BIOL478/Yeast_MAPPs_BIOL478_Spring2014.zip here] to download a zipped set of MAPPs with which to view your Expression Dataset.
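For reference, the "Increased" and "Decreased" criteria defined above amount to a simple threshold filter; a minimal MATLAB sketch is shown below, where the variable names are placeholders for one strain and one timepoint.
<pre>
% Rough equivalent of the GenMAPP "Increased" and "Decreased" criteria above,
% assuming avgLogFC and pval are vectors for one strain and one timepoint.
increased = (avgLogFC >  0.25) & (pval < 0.05);
decreased = (avgLogFC < -0.25) & (pval < 0.05);
fprintf('Increased: %d genes; Decreased: %d genes\n', sum(increased), sum(decreased));
</pre>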
======Genes that were shown to have significant changes in expression level when comparing wild-type S. cerevisiae and S. paradoxus======
{| border="1" class="wikitable"
! Orange
! Yellow
|-
| YAP1
| YHP1
|-
| RTG3
| YOX1
|-
| TBF1
| MET28
|-
| PHD1
| COM2
|-
| NRG1
|}
*Orange: genes whose B&H p-values were < 0.05; Yellow: genes whose unadjusted p-values were < 0.05
*Further comments/observations about the comparison of wt vs. spar:
**YHP1 is odd. Its expression level doesn't seem to increase or decrease with the specific criteria when looking at the time points, yet its overall unadjusted p-value is significant.
**Only up-regulation: YAP1, RTG3, YOX1, PHD1
**Again in looking at TBF1, its expression level doesn't meet the criteria, but its B&H p-value is significant.
**Only down-regulation: MET28, COM2
**NRG1 was initially up-regulated and then down-regulated during the recovery time points.
Using the Within Strain ANOVA to determine which strain to hybridize microarrays:
#OPI1: did not have any significance
#Unadjusted:
#*INO2: p-value of 0.018
#B&H:
#*YAP1: p-value of 0.0001
#Bonferroni:
#*None from list
To test for growth impairment in the cold using the Within Strain ANOVA:
#ARG80, TBF1, YHP1, and NRG1 did not have any significant values
#Unadjusted:
#*None here
#B&H:
#*RSF2: p-value of 0.0126
#*RTG3: p-value of 0.0062
#*YOX1: p-value of 0.0004
#*PHD1: p-value of 0.0001
#Bonferroni:
#*None here
<b>YEASTRACT</b>
<br>
Using YEASTRACT to Infer which Transcription Factors Regulate a Cluster of Genes
<br>
In the previous analysis using STEM, we found a number of gene expression profiles (aka clusters) which grouped genes based on similarity of gene expression changes over time.  The implication is that these genes share the same expression pattern because they are regulated by the same (or the same set) of transcription factors.  We will explore this using the YEASTRACT database.
# Open the gene list in Excel for one of the significant profiles from your STEM analysis.  Choose a cluster with a clear cold shock/recovery up/down or down/up pattern.  You should also choose one of the largest clusters.
#* Copy the list of gene IDs onto your clipboard.
# Launch a web browser and go to the [http://www.yeastract.com/ YEASTRACT database].
#* On the left panel of the window, click on the link to [http://www.yeastract.com/formrankbytf.php ''Rank by TF''].
#* Paste your list of genes from your cluster into the box labeled ''ORFs/Genes''.
#* Check the box for ''Check for all TFs''.
#* Accept the defaults for the Regulations Filter (Documented, DNA binding plus expression evidence)
#* Do '''''not''''' apply a filter for "Filter Documented Regulations by environmental condition".
#* Rank genes by TF using: The % of genes in the list and in YEASTRACT regulated by each TF.
#* Click the ''Search'' button.
# Answer the following questions:
#* In the results window that appears, the p values colored green are considered "significant", the ones colored yellow are considered "borderline significant" and the ones colored pink are considered "not significant".  '''''How many transcription factors are green or "significant"?'''''
#* '''''List the "significant" transcription factors on your wiki page, along with the corresponding "% in user set", "% in YEASTRACT", and "p value".'''''
#**To view this list, please download the following Excel Workbook: [[Media:20150520.SignificantTFs wt NW.xlsx| Significant TFs Workbook]]
#** '''''Are CIN5, GLN3, HAP4, HMO1, SWI4, and ZAP1 on the list?'''''
#***Profile #45: None of those were identified
#***Profile #22: CIN5, HAP4, SWI4, ZAP1 were seen
#***Profile #9: None of those were identified
#***Profile #28: None of those were identified
#***Profile #48: YEASTRACT did not identify any of the genes on this list as significant with its p-value criterion.
# For the mathematical model that we will build, we need to define a ''gene regulatory network'' of transcription factors that regulate other transcription factors.  We can use YEASTRACT to assist us with creating the network.  We want to generate a network with approximately 15-30 transcription factors in it. 
#* You need to select from this list of "significant" transcription factors, which ones you will use to run the model.  You will use these transcription factors and add CIN5, GLN3, HAP4, HMO1, SWI4, and ZAP1 if they are not in your list.  Explain in your electronic notebook how you decided on which transcription factors to include.  Record the list and your justification in your electronic lab notebook.
#* Go back to the YEASTRACT database and follow the link to ''[http://www.yeastract.com/formgenerateregulationmatrix.php Generate Regulation Matrix]''.
#* Copy and paste the list of transcription factors you identified (plus CIN5, HAP4, GLN3, HMO1, SWI4, and ZAP1) into both the "Transcription factors" field and the "Target ORF/Genes" field.
#* We are going to generate several regulation matrices, with different "Regulations Filter" options.
#** For the first one, accept the defaults:  "Documented", "DNA binding '''plus''' expression evidence"
#** Click the "Generate" button.
#** In the results window that appears, click on the link to the "Regulation matrix (Semicolon Separated Values (CSV) file)" that appears and save it to your Desktop.  Rename this file with a meaningful name so that you can distinguish it from the other files you will generate.
#** Repeat these steps to generate a second regulation matrix, this time applying the Regulations Filter "Documented", "'''Only''' DNA binding evidence".
#** Repeat these steps a third time to generate a third regulation matrix, this time applying the Regulations Filter "Documented", "DNA binding '''and''' expression evidence".
 
=====May 25, 2015=====
Purpose: Creating the Input sheet to be run in MATLAB
<br>
Procedure:
# My file was similar to the file "21-genes_50-edges_Dahlquist-data_Sigmoid_estimation.xls", but with my own expression data and network.  You should download this file, change the name, and edit it to include your data.  Make sure to give it a meaningful filename that includes your last name or initials.  [https://github.com/kdahlquist/GRNmap/blob/master/test_files/data_samples/21-genes_50-edges_Dahlquist-data_Sigmoid_estimation.xls?raw=true Click this link to download the sample file from the GRNmap GitHub repository.]
# The first thing you need to do is determine the transcription factors that you are including in your network.  You are going to use the "transposed" Regulation Matrix that you generated from YEASTRACT in the previous section.
#* Copy the transposed matrix from your "network" sheet and paste it into the worksheets called "network" and "network_weights".
#* Note that the transcription factor names have to be in the same order and same format across the top row and first column.  CIN5 does not match Cin5p, so the latter will need to be changed to CIN5 if you have not already done so.
#* It may be easier for you if you put the transcription factors in alphabetical order (using the sort feature in Excel), but whether you leave your list the same as it is from the YEASTRACT assignment or in alphabetical order, make sure it is the same order for all of the worksheets.
# The next worksheet to edit is the one called "degradation_rates".
#* Paste your list of transcription factors from your "network" sheet into the column named "StandardName".  You will need to look up the "SystematicName" of your genes.  YEASTRACT has a feature that will allow you to paste your list of standard names in to retrieve the systematic names [http://www.yeastract.com/formorftogene.php here].
#* Next, you will need to look up the degradation rates for your list of transcription factors.  These rates have been calculated from protein half-life data from a paper by Belle et al. (2006).  Look up the rates for your transcription factors from [[Media:Belle_PNAS_06_degradation_rates_203_TFs.xls | this file]] and include them in your "degradation_rates" worksheet.
#* If a transcription factor does not appear in the file above, use the value "0.027182242" for the degradation rate.
# The next worksheet to edit is the one called "production_rates".
#* Paste the "SystematicName" and "StandardName" columns from your "degradation_rates" sheet into the "production_rates" sheet.
#* The initial guesses for the production rates we are using for the model are two times the degradation rate.  Compute these values from your degradation rates and paste the values into the column titled "ProductionRate" (see the sketch after this list).
# I then inputted two sheets that held the wild type information and the S. paradoxus (spar) information. S. paradoxus is a different species of yeast that has been known to do well in the cold.
#* Put the wild type data in the sheet called "wt".
#* The sample spreadsheet has a worksheet named "dcin5", which I changed to spar.  The instructions below should be followed for each strain sheet.
#* Paste the SystematicName and StandardName columns from one of your previous sheets into this one.
#* The data in this sheet are the log fold changes for each replicate and each timepoint from the "Rounded_Normalized_Data" worksheet from the big Excel workbook in which you computed the [[Dahlquist:Microarray_Data_Analysis_Workflow#Step_6:_Statistical_Analysis | statistics]].  We are only going to use the cold shock timepoints for the modeling.  Thus your column headings for the data should be "15", "30", and "60". There will be multiple columns for each timepoint (typically 4) to represent the replicate data, but they will all have the same name.  For example, you may have four columns with the header "15".
#* Copy and paste the data from your spreadsheet into this one.  You need to include only the data for the genes in your network.  Make sure that the genes are in the same order as in the other sheets.
# The "optimization_parameters" worksheet should have the following values:
#* alpha should be 0.01
#* kk_max should be 1
#* MaxIter should be 1e08 (one hundred million in plain English)
#* TolFun should be 1e-6
#* MaxFunEval should be 1e08 (one hundred million in plain English)
#* TolX should be 1e-6
#* Sigmoid should be 1
#* estimateParams should be 1
#* makeGraphs should be 1
#* fix_P should be 0
#* fix_b should be 1
#* For the parameter "time" (Cell A13), we should have "15", "30", and "60", since these are the timepoints we have in our data.
#* For the parameter "Strain" (Cell A14), replace "dcin5" with the name of the second strain you are using, making sure that the capitalizaiton and spelling is the same as what you named the worksheet containing that strain's expression data.  We are only going to compare two strains, so you can delete the other strain information.
#* For the parameter "Sheet" (Cell A15), give the number of the worksheet from left to right that your "Strain" log2 expression data is in.  Delete any extra numbers because we are only comparing two strains.
# For the parameter "Deletion", leave the zero in cell B15 (corresponding to wt).  In cell C15, put a number corresponding to the position in the list of gene names that the gene that was deleted appears.  In the sample file, CIN5 is number 3 in the list.  Note, disregard the column header in this count and only consider the actual gene names themselves.
#* For the parameter, "simtime", you perform the forward simulation of the expression in five minute increments from 0 to 60 minutes.  Thus, this row should read: simtime should be 0, 5, <...fill by steps of 5...>, 60, each number in a different cell.
# The last sheet you will need to modify is called "network_b".
#* Paste in the list of standard names for your transcription factors from one of your previous sheets.  Note that this sheet does not have a column for the Systematic Name.
#* Cell A1 in the sample files has the text "rows genes affected/cols genes controlling".  I believe you can either have this text in cell A1 or "StandardName".
#* The "threshold" value for each gene should be "0".
# When you have completed the modifications to your file, upload it to [http://lionshare.lmu.edu LionShare] and send Dr. Dahlquist an e-mail with a link to the file.
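The production-rate step above is a simple calculation, and it can be scripted if preferred. The sketch below is a rough outline only: the workbook name and cell ranges are placeholders, and the only values taken from the protocol are the factor of two and the sheet names.
<pre>
% Minimal sketch of the production-rate initial guesses (2x the degradation rate).
% The filename and cell ranges are placeholders; adjust to your own input workbook.
inputFile = 'Williams_GRNmap_input.xlsx';
degRates  = xlsread(inputFile, 'degradation_rates', 'C2:C16');  % one rate per gene (assumed range)
prodRates = 2 * degRates;                                       % initial production-rate guesses
xlswrite(inputFile, prodRates, 'production_rates', 'C2');       % fill the ProductionRate column
</pre>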
 
====== Appendix: Full explanation of the "optimization_parameters" sheet ======
 
* <code>alpha</code>: Penalty term weighting (from an L-curve analysis)
* <code>kk_max</code>: Number of times to re-run the optimization loop: in some cases re-starting the optimization loop can improve performance of the estimation.
* <code>MaxIter</code>: Number of times MATLAB iterates through the optimization scheme. If this is set too low, MATLAB will stop before the parameters are optimized.
* <code>TolFun</code>: How different two least squares evaluations should be before it says it's not making any improvement
* <code>MaxFunEval</code>: maximum number of times it will evaluate the least squares cost
* <code>TolX</code>: How close successive least squares cost evaluations should be before MATLAB determines that it is not making any improvement.
* <code>Sigmoid</code>: <code>=1</code> if sigmoidal model, <code>=0</code> if Michaelis-Menten model
* <code>estimateParams</code>: <code>=1</code> if want to estimate parameters and <code>=0</code> if the user wants to do just one forward run
* <code>makeGraphs</code>: <code>=1</code> to output graphs; <code>=0</code> to not output graphs
* <code>fix_P</code>: <code>=1</code> if the user does not want to estimate the production rate, P, parameter, use initial guess and never change; <code>=0</code> to estimate
* <code>fix_b</code>: <code>=1</code> if the user does not want to estimate the b parameter, use initial guess and never change; <code>=0</code> to estimate
* <code>time</code>: A row containing a list of the time points when the data was collected experimentally. Should correspond to the timepoint column headers in the expression sheets.
* <code>Strain</code>: A row containing a list of all of the strains for which there is expression data in the workbook. Should correspond to the names of the sheets for each strain.
* <code>Sheet</code>: A row where each entry is the order number of the sheet (left to right) that corresponds to the list of strains above.
* <code>Deletion</code>: Gives the index of the gene in the network sheet that has been deleted in each strain listed above. For example, if data has been provided for the CIN5 deletion strain, then give the index number from the network sheet corresponding to CIN5.
* <code>simtime</code>: A list of times for which the forward simulation should be evaluated.
 
=====May 26, 2015=====
Purpose: Running GRNmap and the GRNmodel in MATLAB
<br>
Procedure:
# Download the current version of GRNmap from GitHub.  Version 1.0.6 can be downloaded by following this [https://github.com/kdahlquist/GRNmap/archive/v1.0.6.zip link]. 
#* For the sake of organization, save it into a new folder called "GRNmap" either on your Desktop or within your "Microarray Analysis" folder.
#* Unzip the file by right-clicking on it and choosing 7-zip > Extract here.
# Open the "GRNmap-1.0.6" folder and open the "matlab" subfolder.  Double-click on the file "GRNmodel.m" to open GRNmap in MATLAB 2014b.
# Click on the green triangle "Run" button to run the model.
#* You will be prompted by an Open dialog to find your input file that you created in the previous section.  Browse and select this input file and click OK.
#* Note that the Open dialog will default to show files of <code>*.xlsx</code> only.  If your file is saved as <code>*.xls</code>, you will need to select the drop-down menu to show all files.
#* A window called "Figure 1" will appear.  The counter is showing the number of iterations of the least squares optimization algorithm.  The top plot is showing the values of all the parameters being estimated.  You should see some movement of the diamonds each time the counter iterates.
# Once the model has completed its run, plots showing the expression over time for all of the genes in the network will appear.  Since we selected "makeGraphs = 1" these will automatically be saved as <code>*.jpg</code> files in the same folder as your input file.  Compile the figures into a single PowerPoint file. Please label things clearly, placing an appropriate number of graphs on each page for a readable visual.  Take some care to make sure that the graphs are the same size and the aspect ratio has not been changed. <!--maybe suggest to put graphs for the same gene side by side-->
# Create a new workbook for analyzing the weight data.  In this workbook, create a new sheet: call it estimated_weights. In this new worksheet, create a column of labels of the form ControllerGeneA -> TargetGeneB, replacing these generic names with the standard gene names for each regulatory pair in your network. Remember that columns represent Controllers and rows represent Targets in your network and network_weights sheets.
# Extract the non-zero optimized weights from their worksheet and put them in a single column next to the corresponding ControllerGeneA -> TargetGeneB label (see the sketch after this list).
# Now we will run the model a second time, this time estimating the threshold parameters, b.  Save the input workbook that you previously created as a new file with a meaningful name (e.g. append "estimate-b" to the previous filename), and change fix_b to 0 in the "optimization_parameters" worksheet, so that the thresholds will be estimated. Rerun GRNmodel with the new input sheet.
# Repeat Parts (4) through (6) with the new output.
# Create an empty excel workbook, and copy both sets of weights into a worksheet.
# Create a bar chart in order to compare the "fixed b" and "estimated b" weights.
# Create bar charts to compare the production rates from each run.
# Copy the two bar charts into your powerpoint.
# Visualize the output of each of your model runs with GRNsight.
#* In order for this to work, you need to alter your output workbook slightly.  You need to change the name of the sheet called "out_network_optimized_weights" to "network_optimized_weights"; i.e., delete the "out_" from that sheet name.
#* Arrange the genes in the same order you used to display them previously when you visualized the networks from YEASTRACT for both of your model output runs.  Take a screenshot of each of the results and paste it into your PowerPoint presentation.  Clearly label which screenshot belongs to which run.
#* Note that GRNsight will display differently now that you have estimated the weights.  For positive weights > 0, the edge will be given a regular (pointy) arrowhead to indicate an activation relationship between the two nodes. For negative weights < 0, the edge will be given a blunt arrowhead (a line segment perpendicular to the edge direction) to indicate a repression relationship between the two nodes. The thickness of the edge will vary based on the magnitude of the absolute value of the weight. Larger magnitudes will have thicker edges and smaller magnitudes will have thinner edges. The way that GRNsight determines the edge thickness is as follows. GRNsight divides all weight values by the absolute value of the maximum weight in the matrix to normalize all the values to between zero and 1. GRNsight then adjusts the thickness of the lines to vary continuously from the minimum thickness (for normalized weights near zero) to maximum thickness (normalized weights of 1). The color of the edge also imparts information about the regulatory relationship. Edges with positive normalized weight values from 0.05 to 1 are colored magenta; edges with negative normalized weight values from -0.05 to -1 are colored cyan. Edges with normalized weight values between -0.05 and 0.05 are colored grey to emphasize that their normalized magnitude is near zero and that they have a weak influence on the target gene.
# Upload your PowerPoint, your two input workbooks, and your two output workbooks and link to them in your individual journal.  Also upload the workbook where you made the bar charts comparing the weights from both runs.
#* Interpret the results of the model simulation. 
#** Examine the graphs that were output by each of the runs.  Which genes in the model have the closest fit between the model data and actual data?  Which genes have the worst fit between the model and actual data?  Why do you think that is?  (Hint: how many inputs do these genes have?)  How does this help you to interpret the microarray data? 
#** Which genes showed the largest dynamics over the timecourse?  In other words, which genes had a log fold change that is different than zero at one or more timepoints.  The  p values from the [[BIOL398-04/S15:Week 11 | Week 11]] ANOVA analysis are informative here.  Does this seem to have an effect on the goodness of fit (see question above)?
#** Which genes showed differences in dynamics between the wild type and the other strain your group is using? Does the model adequately capture these differences?  Given the connections in your network (see the visualization in GRNsight), does this make sense? Why or why not?
#** Examine the bar charts comparing the weights and production rates between the two runs. Were there any major differences between the two runs? Why do you think that was? Given the connections in your network (see the visualization in GRNsight), does this make sense? Why or why not?
#** Finally, based on the results of your entire project, which transcription factors are most likely to regulate the cold shock response and why?
#* Based on these results, what future directions do you want to take?
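Steps 5 and 6 above (labeling and extracting the non-zero optimized weights) can also be scripted. The sketch below is an outline only: the output filename is a placeholder, and it assumes the weights sheet has gene names across the first row and down the first column in the same order, so check the indexing against your own output workbook.
<pre>
% Minimal sketch: pull the non-zero optimized weights out of the GRNmap output
% workbook and label them Controller -> Target.  Filename and header layout are
% assumptions; the sheet name is the one produced by GRNmap.
[W, txt] = xlsread('Williams_output_estimation.xlsx', 'out_network_optimized_weights');
genes = txt(1, 2:end);                    % gene names across the top row (controllers)
[tgt, ctrl] = find(W ~= 0);               % rows are targets, columns are controllers
for k = 1:numel(tgt)
    fprintf('%s -> %s\t%g\n', genes{ctrl(k)}, genes{tgt(k)}, W(tgt(k), ctrl(k)));
end
</pre>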
 
 
===Fall 2015===

===Fall 2016===

===Spring 2017===
====January 2017====
=====Week of January 12, 2017=====
Monday & Thursday: Worked on collecting sources for my thesis project. The annotated bibliography is due 20/01. I will be in Boston at that time, but I will still submit my annotated bibliography on time. We had our first lab meeting of the semester on Thursday.

=====Week of January 19, 2017=====
Monday: Worked on writing the abstract for the SCSBC at UC Irvine on Saturday, 28/01. The abstract can be found on the Dahlquist Lab repository on GitHub.
<br>
Thursday: Not present. Interview at Harvard Medical School.

=====Week of January 26, 2017=====
Monday: Finished most of the poster that will be presented this upcoming Saturday at the conference. I wrote much of the content and analysis, and Brandon worked on the formatting. Much of the analysis focused on the optimized production rates and threshold b values, and on the motif Hmo1 --> Msn2 --> Cin5 --> Yhp1.
<br>
Thursday: Went over the poster during lab meeting. With Dr. Dahlquist's corrections, I updated the poster and uploaded it to the GitHub repository to be edited and reviewed by Dr. Dahlquist before printing.

====February 2017====
=====January 31, 2017 & February 2, 2017=====
Monday: Reran the networks derived from dgln3, dhap4, and dzap1 on boulardii 2 for consistency, so that there aren't any discrepancies from running these networks on a different computer.
<br>
Thursday:
*Compiled the optimized parameters into one file, as well as the MSE values for individual genes in each of the networks. Each of the networks was visualized again on GRNsight to ensure that the visualizations match the output optimized weights for each network.
*Received feedback from Dr. Dahlquist on my annotated bibliography as well as additional sources to use for my thesis.

=====Week of February 6, 2017=====
Monday: Edited the 10 random output sheets K. Grace Johnson ran last year to make them into input sheets to re-run on boulardii 2.
*I deleted all the output sheets: the sigmas, optimized_network_weights, optimized_expression, and the optimized production and threshold_b
*I copied the production and degradation rates from Brandon's dhap4 network into all the corresponding sheets in the random network input sheets
Worked on creating the working abstract for my talk during LMU's Undergraduate Research Symposium.
*The adjacency matrices from the random network files were then copied and pasted into the adjacency matrix of Brandon's file so that all parameters and information would be the same. The only difference was the network and the network_weights sheets.
Thursday: I was not here due to an interview at UCSF's medical school.

=====Week of February 13, 2017=====
Monday: I generated some random networks with Brandon's R script to be run on the model. A folder was created to hold all the input and output sheets for the random networks that are run with GRNmap. For further analysis, I will also look at the distribution of the in and out degrees of all the random networks compared to the network derived from the dhap4 data.
*Distribution of weights (positive vs. negative) and the overall network
*Are any motifs/connections conserved?
*Any self- or auto-regulators?
*Visualization will also be done via GRNsight
Throughout the next couple of weeks, I will be running the generated random networks on GRNmap.

=====Week of February 20, 2017=====
Monday: Began to look at the MSE values of the db networks 1 & 5 (derived from wt and dhap4 data) compared to the p values from the ANOVA. For the analysis, I looked at the expression data plots categorized by the number of significant p values (B&H p values) at the suggestion of Brandon. Divisions were made as follows:
*Two or more significant p values across all strains
*One significant p value across all strains
*No significant p values across all strains
I then described whether the MSE value that was matched with the p value fit well with the modeled dynamics from the expression plots. I created an Excel file with these comparisons and comments about the fit of the model, which can be found in the Dahlquist Lab repository as pvalue_MSE_comparison.
<br>
Thursday: Continued the analysis of p values and MSE outputs by looking at the expression plots. However, during the lab meeting I was told that this approach would not generate useful results because I should be looking at the minMSE for each gene's output. By comparing the MSE:minMSE ratio for each gene, I could see whether genes with significant p values had a better or worse fit.

=====Week of February 27, 2017=====
Monday: Continued to run the random networks on boulardii 2 (left off at random network 23). Instead of using the expression plots for analysis, I began to compare the MSE values of db network 5 (derived from dhap4) and the random networks with the same number of nodes (15) and edges (28). The last random network included in the file is rand19. On every sheet, I have the MSE value output from the run in GRNmap next to the p values from the ANOVA for the dhap4 strain. Below those comparisons are the differences in MSE values of the random network from the db-derived network 5.
*If the number is negative, it suggests that GRNmap reduced the mean square error of that individual gene in the random network;
*however, if the number is larger, then the db-derived network's individual gene saw better modeling/mean square error.
This file can be found in the pvalue_MSE_comparison Excel file.
<br>
Thursday: Continued to run the random networks on boulardii 2 (random network 27 currently running). There are only three remaining random networks (28-30) that need to be run on GRNmap. I carried on with my compilation of the random network MSE value comparisons to db network 5. The last LSE:minLSE comparison made was between db 5 and random network 26. Again, the file can be found on the Dahlquist Lab repository under the file name pvalue_MSE_comparison.xlsx.

Further, on the sheet labeled dhap4, a bar graph comparing the LSE:minLSE ratios for all the GRNs run in MATLAB thus far can be found. I've begun to look at the regulatory relationships identified in the three lowest and three highest ratio random networks.
*The smallest LSE:minLSE ratios
**random networks 15, 16, and 24
*The largest LSE:minLSE ratios
**random networks 5, 7, and 12

=====Week of March 4, 2017=====
Spring Break this week. I was in Mammoth for the week.
<br>
Sunday: Kristen noticed that random networks 4 & 5 were identical, so I created a new random network (rand31) to be run in GRNmap. After it was run in the model, the optimization diagnostics showed that random network 31 had a larger LSE:minLSE ratio than random network 5. Therefore, analysis will now be conducted on the following networks with the highest LSE:minLSE ratios:
*Random networks 7, 12, and 31
**Rand7: 1.5202
**Rand12: 1.5080
**Rand31: 1.5001
Thursday: I computed the minMSE values for the DB5 network so that I could use the information for my Symposium presentation. The following protocol was used:
#Using the log2 expression data for the specific strain in an input sheet, average the values for the same timepoints.
#*i.e., for the wt strain, there are four 15-minute, five 30-minute, and four 60-minute measurements. Therefore, the first gene's average log fold expression change is averaged across four replicates for t15, five for t30, and four for t60.
#**ABF1 averages: 15 = -1.1878; 30 = -1.1819; 60 = -1.9142
#Next, the difference between each individual log2 expression at a given time point and the average for that time point was found.
#*i.e., for wt's ABF1 gene, we see the following formula: = t15.1 - avg15, where t15.1 is the individual log2 expression at the first 15-minute replicate and avg15 is the average of the four observed expression changes at the 15-minute replicates
#**ABF1's first 15-minute time point: = B2 - P2 = t15.1 - avg15 = -2.1071 - (-1.1878) = -0.9193
#Then, the differences are squared so that no negative numbers result and to account for differences seen above and below the average.
#*i.e., for wt's ABF1 gene, we see the following formula in the cell: = B20^2
#The squared differences were then summed over all the time points and divided by the total number of time points.
#*The formula used was as follows: =SUM(B38:N38)/13 (based on wt's ABF1 gene)
#**i.e., for wt, there are 13 time points (four 15-minute + five 30-minute + four 60-minute = 13)
#**Note that the total number of time points differs for each individual strain, such that db4 (dGLN3) has 12 overall time points
#To ensure that these calculations were correct, I first used this procedure to calculate the MSE observed via the model. After I obtained the same output values, I proceeded to calculate the actual minMSEs.

=====Week of March 12, 2017=====
Monday: I worked on completing the analysis of my results. I used Brandon's regulatory relationship workbook to compare the regulatory relationships for DB5 and the three best (15, 16, 24) and worst (7, 12, 31) random networks.
*Process for isolating regulatory relationships
*#Using GRNsight, I visualized the weighted networks of interest and exported each network as a .siv file to isolate the regulatory relationships between regulator and target gene.
*#Next, I opened the SIV file in Excel. In a new Excel workbook, I wrote down the relationship between the transcription factor and its target as Regulator --> Target Gene in one cell, with the weight of the transcription factor's influence in the column to the right of it.
*#After I saved all these relationships for the seven networks (DB5, Rand7, Rand12, Rand15, Rand16, Rand24, and Rand31), I compiled all of their regulatory relationships together in a list.
*#Next, I pasted the values that corresponded to a specific node/relationship for each network into the correct cell.
*#*Reading R->L (DB5, Rand7, ..., Rand31)
*#Because Brandon's Excel file already highlighted cells based on the weights within them, stronger activators were colored red, stronger repressors were colored blue, and grey was used for the weak influencers.
Thursday: Presented a first draft of my presentation for LMU's URS. That was the focus of lab this day.
*Finished up the first draft of my presentation
*For further analysis, I included:
**The sum of weights, to identify whether the network was overall repressive (-) or activating (+)
**The shared nodes between DB5 and the 3 best and 3 worst networks; I found that DB5 shared more nodes with the better networks

=====Week of March 19, 2017=====
Monday: Worked on completing my PowerPoint presentation for LMU's Undergraduate Research Symposium. I sat down with Dr. Dahlquist to discuss my presentation and re-work some of the analysis that I did.
<br>
Thursday: I practiced my presentation in Seaver 120 before I rehearsed it in front of my lab. Later, I presented my PowerPoint for the symposium to my fellow researchers. I received feedback (overall positive, with minor changes to make). I listened to Kristen's presentation, too, before the end of lab.

=====Week of March 26, 2017=====
Monday: Worked on my thesis, writing my discussion.
<br>
Thursday: Continued to work on and write my thesis before the holiday (Cesar Chavez Day). During the lab meeting, we discussed future directions/what we should work on for the remainder of our time in the lab.

==Documents==
To view the most updated powerpoint click [[Media:Williams wtANOVA Ttest 2.pptx| here]]
<br>
To see the input sheet that was run for a test trial, please click this [[Media:Williams_Input_Scer_Spar_point01_PROF45.xlsx |link]]


The powerpoint that reviews and analyzes the outputs can be viewed [[Media:Williams Running GRNmap Results.pptx|here]]

==GRNmap Testings==
This is the template for future reports: [[GRNmap Testing Report]]
<br>
[[GRNmap Testing Report: Strain Run Comparisons 2015-05-27]]
<br>
[[GRNmap Testing Report: Non-1 Initial Weight Guesses 2015-05-28]]

==Other Links==
To visit the Dahlquist Lab: [[Dahlquist| click here]]
<br>
To see K. Grace J's Notebook: [[Katherine Grace Johnson Electronic Lab Notebook| click here]]
 




[[Category:Dahlquist Lab]]
[[Category:GRNmap]]