User talk:Darek Kedra/sandbox 30: Difference between revisions

From OpenWetWare
<!---
bamtools filter -in ERR307343_12.Lmex.bwa_mem.Lmex.bwa.sorted.bam  -out chr_1_ERR307343_12.Lmex.bwa_mem.Lmex.bam -region LmxM.01
bamtools filter -in ERR307343_1.Q1.Lmex80.lst475.sorted.bam  -out chr_1_ERR307343_1.Q1.Lmex80.lst475.bam -region LmxM.01
bamtools filter -in ERR307343_2.Q1.Lmex80.lst475.sorted.bam  -out chr_1_ERR307343_2.Q1.Lmex80.lst475.bam -region LmxM.0
/users/rg/dkedra/soft/tmp/BAMUTIL/bamUtil_1.0.7/bamUtil/bin/bam bam2FastQ --in chr_1_ERR307343_1.Q1.Lmex80.lst475.bam --refFile ../../../data/genome/Lmex_genome.fa --firstOut chr_1_ERR307343_1.Q1.Lmex80.lst475.fastq
/users/rg/dkedra/soft/tmp/BAMUTIL/bamUtil_1.0.7/bamUtil/bin/bam bam2FastQ --in chr_1_ERR307343_2.Q1.Lmex80.lst475.bam --refFile ../../../data/genome/Lmex_genome.fa --firstOut chr_1_ERR307343_2.Q1.Lmex80.lst475.fastq
???
http://www.ebi.ac.uk/ena/data/view/SRR924400
Download: SRR1016916
--->
=EMBO Tunis 2014=
From sequencing data to knowledge
== 00 Programs used ==
===sequence pre-processing===
* [http://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=std SRA_toolkit] ver current
* [http://www.bioinformatics.babraham.ac.uk/projects/fastqc/ FastQC] ver 0.11.2
* [http://www.usadellab.org/cms/index.php?page=trimmomatic Trimmomatic ]  ver 0.32
* [http://sourceforge.net/projects/tagdust/ TagDust] ver 2.13
* [http://www.cs.helsinki.fi/u/lmsalmel/coral/ Coral] ver 1.4
=== general tools ===
* [http://hannonlab.cshl.edu/fastx_toolkit/ fastx_toolkit] ver 0.0.13
* [http://samtools.sourceforge.net/ Samtools classic]  ver 0.1.19
* [http://www.htslib.org/ samtools/HTSlib] ver 1.0
* [http://picard.sourceforge.net/ Picard ]  ver 1.119
=== mappers ===
* [http://bio-bwa.sourceforge.net/ BWA] ver 0.7.10
* [http://last.cbrc.jp/ LAST] ver 475
* [http://www.well.ox.ac.uk/~gerton/software/Stampy/ Stampy] stampy-1.0.23r2059.tgz (optional)
=== Splice reader mappings ===
* [https://github.com/indraniel/fqgrep fqgrep] Github version (requires the TRE library below)
* [https://github.com/laurikari/tre/ TRE_library] ver 0.80
=== viewers===
* [http://www.broadinstitute.org/igv/home IGV] ver 2.3.34
* [http://www.broadinstitute.org/igv/download igvtools] ver 2.3.32
=== quantification ===
* [http://ngsutils.org/NGSUtils NGSUtils] ver 0.5.6
* [https://pypi.python.org/pypi/HTSeq HTSeq] ver 0.6.1 (requires NumPy)
=== SNPs discovery ===
* [https://www.broadinstitute.org/gatk/  GATK] ver 3.2-2
==01 Data files used ==
===FASTQ files ===
====L.amazonensis RNA-Seq ====
* http://www.ebi.ac.uk/ena/data/view/SRP016502
==== L. mexicana genomic DNA ====
* http://www.ebi.ac.uk/ena/data/view/ERX280624
==== (extra set)  L.enriettii genomic DNA ====
* http://www.ebi.ac.uk/ena/data/view/SRR835620
== Stuff to read / compare ==
!!! Important: introduce zero- and 1-based positioning of file formats !!!
===File formats ===
* http://biobits.org/samtools_primer.html (file formats)
* http://www.slideshare.net/lindenb/ngsformats
* http://wiki.bits.vib.be/index.php/Next-generation_sequencing
=== VCF ===
* http://www.1000genomes.org/node/101 (VCF)
* http://en.wikipedia.org/wiki/Variant_Call_Format
* http://vcftools.sourceforge.net/ (VCFTools)
=== BED ===
* http://genome.ucsc.edu/FAQ/FAQformat.html#format1
* http://www.broadinstitute.org/igv/BED
* http://www.ensembl.org/info/website/upload/bed.html
* [http://bedtools.readthedocs.org/en/latest/ BEDTools]
=== GFF / GTF ===
* http://www.ensembl.org/info/website/upload/gff.html
* http://www.sequenceontology.org/gff3.shtml
==Genomes and annotations ==
* L. mexicana
http://tritrypdb.org/common/downloads/release-8.0/LmexicanaMHOMGT2001U1103/fasta/data/TriTrypDB-8.0_LmexicanaMHOMGT2001U1103_Genome.fasta
http://tritrypdb.org/common/downloads/release-8.0/LmexicanaMHOMGT2001U1103/gff/data/TriTrypDB-8.0_LmexicanaMHOMGT2001U1103.gff
* L.amazonensis
http://tritrypdb.org/common/downloads/release-8.0/LamazonensisMHOMBR71973M2269/fasta/data/TriTrypDB-8.0_LamazonensisMHOMBR71973M2269_Genome.fasta
* L.enriettii
http://tritrypdb.org/common/downloads/release-8.0/LenriettiiLEM3045/fasta/data/TriTrypDB-8.0_LenriettiiLEM3045_Genome.fasta
* L.major
http://tritrypdb.org/common/downloads/release-8.0/LmajorFriedlin/fasta/data/TriTrypDB-8.0_LmajorFriedlin_Genome.fasta

Revision as of 18:01, 16 September 2014

Why Next Generation Sequencing

Typical use cases of NGS data include:

  1. de novo genome assembly
  2. mapping reads to a known genome
    1. whole genome random reads (DNA)
    2. targeted resequencing: regions selected by long range PCR/hybridisation
    3. exome mapping: DNA fragments are preselected using microarrays/beads with attached exonic sequences, so only fragments hybridising to them are selected
    4. RNASeq: RNA in general, though specific protocols may target just the 5- or 3-prime ends of transcripts, or miRNA via small size selection. RNASeq is the application in which stranded libraries are used, e.g. to differentiate between sense and antisense transcripts
    5. ChIP-Seq: chromatin immunoprecipitation combined with sequencing of DNA fragments bound by histones/transcription factors.
    6. bisulfite sequencing (non-methylated cytosines converted to uracil, read as Ts on the non-methylated sequences)
  3. metagenomics: sequencing populations of microorganisms to investigate the diversity of species/strains

Whole Genome Sequencing

Before you start (semi-obvious tips):

  • if possible, use a haploid genome (see the honey bee genome project, which used haploid drones)
  • next best thing: highly inbred, nearly homozygous lines

Sometimes there are unexpected treasures: Chillingham cattle are probably (almost) homozygous: http://en.wikipedia.org/wiki/Chillingham_cattle

  • there are cases where extra-chromosomal DNA contributes a non-trivial portion of the total DNA (chloroplast genomes are often in the 150kb range, with 40-70 copies per organelle and 40-70 chloroplasts per plant cell). Select tissues/stages with fewer multi-copy DNAs



NGS file formats overview

There are multiple file formats used at various stages of NGS data processing. We can divide them into two basic types:

  • text based (FASTA, FASTQ, SAM, VCF, GTF/GFF, BED, WIG)
  • binary (BAM, BCF, SFF(454 sequencer data, not covered here))

In principle, we can view and manipulate text-based formats without special tools, but we need such tools to access and view the binary formats. To complicate things a bit, the text-based formats are often compressed to save space, making them de facto binary, though still easy to read by eye with standard Unix tools. Also, even though we can read the values in a few columns and tens of rows, we still need dedicated programs to make sense of millions of rows and of encoded columns.

On top of these data/results files, some programs require a companion file (often called an index) for faster access, e.g.:

  • FASTA (.fa & .fai)
  • BAM and BAI formats (suffixes .bam & .bai),
  • VCF (.vcf & .vcf.idx).

The important thing for all files containing positions on chromosomes (mappings, features) is their numbering. To make things complicated, some formats count positions from 1 and others from 0.

  • 1-based formats: GFF/GTF, SAM/BAM, WIG
  • 0-based formats: BED, CN (copy number data, not covered)

Programs usually deduce the correct numbering scheme automatically, but whenever you convert from one format to another, watch out for this "feature".

More info on file formats used by IGV: http://www.broadinstitute.org/igv/?q=book/export/html/16
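Since this mix of 0- and 1-based conventions is a common source of off-by-one errors, it can help to spell the conversion out in code. A minimal sketch (hypothetical helper functions, not taken from any tool mentioned above):

```python
# Convert between BED (0-based, half-open) and GFF/GTF (1-based, inclusive)
# interval conventions. Hypothetical helpers for illustration only.

def bed_to_gff(start, end):
    """BED [start, end) -> GFF [start+1, end]."""
    return start + 1, end

def gff_to_bed(start, end):
    """GFF [start, end] -> BED [start-1, end)."""
    return start - 1, end

# The first 100 bases of a chromosome:
#   BED:  0   100
#   GFF:  1   100
print(bed_to_gff(0, 100))   # (1, 100)
print(gff_to_bed(1, 100))   # (0, 100)
```

Note that the interval length is end - start in BED, but end - start + 1 in GFF.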

Fasta (just a few tricks)

#count number of sequences in multiple fasta
grep -c "^>" multi_fasta.fa

#list sequence names in fasta, display screen by screen with "q" to escape:
grep "^>" multi_fasta.fa | less 

While mature NGS programs will most likely handle complex FASTA sequence names correctly, you may sometimes run into problems with various scripts. To fix this, keep just the first, hopefully unique, word of each sequence name:

#!/usr/bin/env python
"""
name: fix_fasta_header.py
usage: ./fix_fasta_header.py input_fasta.fa > output_fasta.fa

CAVEAT: it does not check if the resulting sequence names are unique
"""
import sys

fn = sys.argv[1]

for line in open(fn):
	if line[0] == ">":
		# keep just the first whitespace-delimited word of the header
		print line.split()[0]
	else:
		print line,

For reformatting sequences so that all of them have 70 columns, you can use the fastareformat program from the exonerate package: https://github.com/nathanweeks/exonerate

fastareformat input.fa > output.fa

For extracting sequences from your FASTA file you can use the pyfasta library:

#single contig/chromosome
pyfasta extract --header --fasta Lmex_genome.sort.fa LmxM.01 > LmxM.01_single.fa

#list of sequences in the desired order
#input seqids.txt file (just two lines with sequence names):
LmxM.01
LmxM.00

#command
pyfasta extract --header --fasta Lmex_genome.sort.fa --file seqids.txt  > LmxM.two_contigs.fa
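If pyfasta is not at hand, extracting a single sequence by name can also be sketched in plain Python (an illustrative, hypothetical function; for large genomes an indexed tool such as pyfasta or samtools faidx is much faster):

```python
def extract_seq(fasta_path, seq_name):
    """Return the FASTA record whose first header word equals seq_name."""
    out, keep = [], False
    for line in open(fasta_path):
        if line.startswith(">"):
            # compare against the first whitespace-delimited word only
            keep = (line[1:].split()[0] == seq_name)
        if keep:
            out.append(line)
    return "".join(out)

# e.g. extract_seq("Lmex_genome.sort.fa", "LmxM.01") would return the
# ">LmxM.01 ..." record, header included, as a single string.
```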


FASTQ

Format and quality encoding checks

Already in the 1990s, when all sequencing was done using the Sanger method, a big breakthrough in genome assembly came when individual bases in the reads (ACTG) were assigned quality values. In short, some parts of sequences had multiple bases with a lower probability of being called right, so it makes sense that matches between high quality bases are given a higher score, be it during assembly or mapping, than e.g. the end of a read with multiple doubtful/unreliable calls. This concept was borrowed by Next Generation Sequencing: while we can hardly read individual bases by eye from the flowgrams, it is still possible for the Illumina/454/etc. software to calculate base qualities. The FASTQ format (files usually have suffixes .fq or .fastq) nowadays contains 4 lines per sequence:

  1. sequence name (should be unique in the file)
  2. sequence string itself with ACTG and N
  3. extra line starting with a "+" sign, which in the past contained a repeat of the sequence name
  4. string of quality values (one letter/character per base) where each letter is translated in a number by the downstream programs

Here is how it looks:

@SRR867768.249999 HWUSI-EAS1696_0025_FC:3:1:2892:17869/1
CAGCAAGTTGATCTCTCACCCAGAGAGAAGTGTTTCATGCTAAGTGGCACTGGTGCAGAACAGTTCTGCAATGG
+
IHIIHDHIIIHIIIIIIHIIIDIIHGGIIIEIIIIIIIIIIIIGGGHIIIHIIIIIIBBIEDGGFHHEIHGIGE

CAVEAT: When planning to do SNP calling using GATK, do not modify the part of the name describing the location on the sequencing lane:

HWUSI-EAS1696_0025_FC:3:1:2892:17869

This part is used for detecting optical duplicates.
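Since the format is strictly four lines per record (in unwrapped files), a minimal FASTQ reader is easy to sketch (illustrative only; no validation of the "+" line or of name uniqueness):

```python
def read_fastq(path):
    """Yield (name, sequence, quality) tuples from a 4-line-per-record FASTQ."""
    with open(path) as fh:
        while True:
            name = fh.readline().rstrip()
            if not name:                    # end of file
                break
            seq = fh.readline().rstrip()
            fh.readline()                   # the "+" separator line
            qual = fh.readline().rstrip()
            yield name[1:], seq, qual       # drop the leading "@"

# e.g. counting reads and bases:
# n_reads = n_bases = 0
# for name, seq, qual in read_fastq("my_reads.fastq"):
#     n_reads += 1
#     n_bases += len(seq)
```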

Unfortunately Solexa/Illumina did not follow the same quality encoding as people doing Sanger sequencing, so there are a few iterations of the standard, with quality encodings using different character ranges. For the inquisitive: http://en.wikipedia.org/wiki/FASTQ_format#Quality

What we need to remember is that we must know which quality encoding our data uses, because this information is required by mappers, and getting it wrong will make our mappings either impossible (some mappers quit when encountering an out-of-range quality value) or at best unreliable.

There are two main quality encodings: Sanger and Illumina 1.3-1.7. Two other terms, offset 33 and offset 64, are also used for describing quality encodings:

  • offset 33 == Sanger / Illumina 1.9
  • offset 64 == Illumina 1.3+ to Illumina 1.7

Therefore, if we do not have direct information from the sequencing facility about which version of the Illumina software was used, we can still find out by investigating the FASTQ files themselves. Instead of going by eye, we use the program FastQC. For the best results/full report we need the whole FASTQ file as input, but for quick and dirty quality encoding recognition 100K reads are enough:

head -400000 my_reads.fastq > 100K_head_my_reads.fastq 
fastqc 100K_head_my_reads.fastq
#we got here 100K_head_my_reads.fastq_fastqc/ directory

grep Encoding 100K_head_my_reads.fastq_fastqc/fastqc_data.txt

#output: 
Encoding	Sanger / Illumina 1.9

CAVEAT: all this works only on unfiltered FASTQ files. Once you remove lower quality bases and the reads containing them, guessing which encoding is present in your files becomes problematic.

Here is a bash script containing an awk one-liner to detect the quality encoding in both gzip-ed and uncompressed FASTQ files.

#!/bin/bash
file=$1

if [[ $file ]]; then
    command="cat"
    if [[ $file =~ .*\.gz ]];then
        command="zcat"
    fi
    command="$command $file | "
fi

command="${command}awk 'BEGIN{for(i=1;i<=256;i++){ord[sprintf(\"%c\",i)]=i;}}NR%4==0{split(\$0,a,\"\");for(i=1;i<=length(a);i++){if(ord[a[i]]<59){print \"Offset 33\";exit 0}if(ord[a[i]]>74){print \"Offset 64\";exit 0}}}'"

eval $command
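The same detection logic is easier to read in Python: any quality character with an ASCII code below 59 can only occur in offset-33 data, and one above 74 only in offset-64 data. A sketch with the same thresholds as the awk one-liner above (it returns None when the file never leaves the ambiguous range, e.g. after quality filtering):

```python
def guess_offset(fastq_path):
    """Guess the FASTQ quality offset (33 or 64) from an unfiltered file."""
    with open(fastq_path) as fh:
        for i, line in enumerate(fh):
            if i % 4 != 3:              # quality strings are every 4th line
                continue
            for ch in line.rstrip("\n"):
                if ord(ch) < 59:        # '!'..':' occur only in offset-33 data
                    return 33
                if ord(ch) > 74:        # beyond 'J' implies offset 64
                    return 64
    return None                         # ambiguous
```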

Types of data

  • read length

from 35bp in some old Illumina reads to 250+ in MiSeq. The current sweet spot is between 70-150bp.

  • single vs paired

Just one side of the insert is sequenced, or sequencing is done from both ends. Single reads are cheaper and faster to produce, but paired reads allow for more accurate mapping and for detection of large insertions/deletions in the genome.

Most of the time, forward and reverse reads face each other end-to-end, pointing inwards from the two ends of the insert.

  • insert length

With the standard protocol, the inserts are anywhere between 200-500bp. Sometimes, especially for de novo sequencing, insert sizes can be smaller (160-180bp) with 100bp long reads, allowing the ends of the two reads to overlap. This can improve genome assembly (e.g. the Allpaths-LG assembler requires such reads). Also, with some mappers (LAST), using the resulting longer joined reads used to give better mappings (covering regions not unique enough for shorter reads) than mapping the two reads single-end. With paired-end mappings the effects are modest.

A program for combining overlapping reads is FLASH (http://ccb.jhu.edu/software/FLASH/). To run it:

flash SRR611712_1.fastq SRR611712_2.fastq -o SRR611712_12.flash

For improving the assembly or improving the detection of larger genome rearrangements there are other libraries with various insert sizes, such as 2.5-3kb or 5kb and more. Often sequencing yields from such libs are lower than from the conventional ones.

  • stranded vs unstranded (RNASeq only)

We can obtain reads just from a given strand using special Illumina wet lab kits. This is of a great value for subsequent gene calling, since we can distinguish between overlapping genes on opposite strands.

quality checking (FastQC)

It is always a good idea to check the quality of the sequencing data prior to mapping. We can analyze average quality, over-represented sequences, number of Ns along the read and many other parameters. The program to use is FastQC, and it can be run in command line or GUI mode.

  • good quality report:

http://www.bioinformatics.babraham.ac.uk/projects/fastqc/good_sequence_short_fastqc.html

  • bad quality FastQC report

http://www.bioinformatics.babraham.ac.uk/projects/fastqc/bad_sequence_fastqc.html

trimming & filtering

Depending on the application, we can try to improve the quality of our data set by removing bad quality reads, clipping the last few problematic bases, or searching for sequencing artifacts, such as Illumina adapters. All this makes much sense for de novo sequencing, where genome assemblies can be improved by data clean-up. It has a lower priority for mapping, especially when we have high coverage: bad quality reads will simply be discarded by the mapper.

You can read more about quality trimming for genome assembly in this blog post by Nick Loman:

http://pathogenomics.bham.ac.uk/blog/2013/04/adaptor-trim-or-die-experiences-with-nextera-libraries/

Trimmomatic

http://www.usadellab.org/cms/index.php?page=trimmomatic From the manual:

Paired End:

java -jar trimmomatic-0.30.jar PE -phred33 input_forward.fq.gz input_reverse.fq.gz output_forward_paired.fq.gz output_forward_unpaired.fq.gz output_reverse_paired.fq.gz output_reverse_unpaired.fq.gz ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36

This will perform the following:

  • Remove adapters
  • Remove leading low quality or N bases (below quality 3)
  • Remove trailing low quality or N bases (below quality 3)
  • Scan the read with a 4-base wide sliding window, cutting when the average quality per base drops below 15
  • Drop reads shorter than 36 bases

Single End:

java -jar trimmomatic-0.30.jar SE -phred33 input.fq.gz output.fq.gz ILLUMINACLIP:TruSeq3-SE.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36

This will perform the same steps, using the single-ended adapter file

Tagdust (for simple unpaired reads)

Tagdust is a program for removing Illumina adapter sequences from the reads containing them. Reads containing 6-8 adapter bases not from the genome will be impossible to map with typical mappers, which often allow just 2 mismatched bases. Tagdust works in unpaired mode, so when using paired reads we have to "mix and match" the two outputs to restore pairing for paired mappings.

tagdust -o my_reads.clean.out.fq -a my_reads.artifact.out.fq adapters.fasta my_reads.input.fq
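The "mix and match" step can be sketched as follows: collect the base read names that survived in each cleaned file, keep only the reads whose mate also survived, and write two re-synchronised files. A hypothetical helper (assumes unwrapped 4-line records and /1, /2 name suffixes):

```python
def base_name(header_line):
    """'@name/1 optional comment' -> 'name' (strip @, /1 or /2, comment)."""
    return header_line.rstrip().split()[0].lstrip("@").rsplit("/", 1)[0]

def read_names(path):
    """Set of base read names in a 4-line-per-record FASTQ file."""
    names = set()
    with open(path) as fh:
        for i, line in enumerate(fh):
            if i % 4 == 0:
                names.add(base_name(line))
    return names

def write_matching(path_in, keep, path_out):
    """Copy only the records whose base name is in the set `keep`."""
    with open(path_in) as fh, open(path_out, "w") as out:
        record = []
        for line in fh:
            record.append(line)
            if len(record) == 4:
                if base_name(record[0]) in keep:
                    out.writelines(record)
                record = []

# keep = read_names("reads_1.clean.fq") & read_names("reads_2.clean.fq")
# write_matching("reads_1.clean.fq", keep, "reads_1.paired.fq")
# write_matching("reads_2.clean.fq", keep, "reads_2.paired.fq")
```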

Error correction

For some applications, like de novo genome assembly, one can correct sequencing errors in the reads by comparing them with other, almost identical reads. One of the programs that performs this and is relatively easy to install and run is Coral.

Coral

web site: http://www.cs.helsinki.fi/u/lmsalmel/coral/ version: 1.4

It requires a machine with a large amount of RAM for correcting typical Illumina files (we ran it with 96GB RAM).

#Illumina reads
./coral -fq input.fq -o output.fq  -illumina  

#454 reads
./coral -fq input.454.fq -o output.454.fq  -454  

#correcting 454 reads with Illumina reads
Coral cannot use more than one input file, therefore one has to combine the Illumina & 454 reads into one FASTQ file, noting the number of Illumina reads used. To prepare such a file:

cat 10Millions_illumina_reads.fq > input_4_coral.fq
cat some_number_of_454_reads.fq >> input_4_coral.fq

## run coral with the command:
coral -454 -fq input_4_coral.fq -o input_4_coral.corrected.fq -i 10000000 -j 10000000

#This will correct just the 454 reads, not the Illumina ones.
#In real life you have to count the number of reads in your Illumina FASTQ file, e.g. (assuming you do not have wrapped sequence/quality lines, i.e. 1 FASTQ read = 4 lines):

wc -l illumina_reads.fq | awk '{print $1/4}' 

#if in doubt, use fastqc to get the numbers

source of published FASTQ data: Short Read Archive vs ENA

While we will often have our data sequenced in-house or provided by collaborators, we can also reuse sequences made public by others. Nobody does everything imaginable with their data, so it is quite likely we can do something new and useful with already published data, even if only treating it as a control for our pipeline. Also, doing exactly the same thing, say assembling genes from RNASeq data, but with newer versions of the software and/or more data, will likely improve on the results of previous studies. There are two main places to get such data sets:

  • European Nucleotide Archive (ENA): http://www.ebi.ac.uk/ena/
  • NCBI Sequence Read Archive (SRA): http://www.ncbi.nlm.nih.gov/sra

For both of them it is a good idea to install program Aspera Connect to reduce download times:

Click on Resources, download and install the web plugin.

* go to ENA: http://www.ebi.ac.uk/ena/
* put Leishmania major RNA
* we get "SRA Experiments 6"
* click on "6"
* one of them is: "Transcriptome Analysis of Leishmania major strain Friedlin", ERX190352

* go to the NCBI SRA: http://www.ncbi.nlm.nih.gov/sra
* put Leishmania major
* click on the checkbox SRA Experiments
* click on Display button
* we get 58 public experiments, 7 of which are RNA
* click on RNA (7) on the left
* note "Whole Genome Sequencing of Leishmania major strain Friedlin", Accession: SRX203187

CAVEAT: not everybody submits the right descriptions of their experiments. In case of doubt, download and map.

Extracting the FASTQ from an SRA archive is easy with the sra-toolkit installed on your computer. You can get it here: https://github.com/ncbi/sratoolkit

fastq-dump SRR1016916.sra

#Result:
SRR1016916.fastq 

You can also get multiple SRA files in one step, without 20 clicks at NCBI, if Aspera is configured: write a shell script with one ascp line per study, preferably generating it with a (Python/Perl/Ruby) script from a list of files to get:


ascp -QTrk1 -l200m -i /home/me/.aspera/connect/etc/asperaweb_id_dsa.openssh \
anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByStudy/sra/SRP/SRP012/SRP012154/  ./

ascp -QTrk1 -l200m -i /home/me/.aspera/connect/etc/asperaweb_id_dsa.openssh \
anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByStudy/sra/SRP/SRP012/SRP012155/  ./

Which one to use? ENA may be easier, as you get gzipped FASTQ files directly. But the NCBI tools may have a better interface at times, so you can search for interesting data sets at NCBI, then store the experiment names and download the fastq.gz files from ENA.


GTF/GFF

The most commonly used sequence annotation format.

GTF Fields:

seqname - name of the chromosome or scaffold; chromosome names can be given with or without the 'chr' prefix.
source - name of the program that generated this feature, or the data source (database or project name)
feature - feature type name, e.g. Gene, Variation, Similarity
start - Start position of the feature, with sequence numbering starting at 1.
end - End position of the feature, with sequence numbering starting at 1.
score - A floating point value.
strand - defined as + (forward) or - (reverse).
frame - One of '0', '1' or '2'. '0' indicates that the first base of the feature is the first base of a codon, '1' that the second base is the first base of a codon, and so on.
attribute - A semicolon-separated list of tag-value pairs, providing additional information about each feature.

Sample GFF output from Ensembl export:

X	Ensembl	Repeat	2419108	2419128	42	.	.	hid=trf; hstart=1; hend=21
X	Ensembl	Repeat	2419108	2419410	2502	-	.	hid=AluSx; hstart=1; hend=303
X	Ensembl	Repeat	2419108	2419128	0	.	.	hid=dust; hstart=2419108; hend=2419128
X	Ensembl	Pred.trans.	2416676	2418760	450.19	-	2	genscan=GENSCAN00000019335
X	Ensembl	Variation	2413425	2413425	.	+	.	
X	Ensembl	Variation	2413805	2413805	.	+	.

And from TriTrypDB (gff3)

LmjF.01 TriTrypDB       CDS     12073   12642   .       -       0       ID=cds_LmjF.01.0040-1;Name=cds;description=.;size=570;Parent=rna_LmjF.01.0040-1
LmjF.01 TriTrypDB       exon    12073   12642   .       -       .       ID=exon_LmjF.01.0040-1;Name=exon;description=exon;size=570;Parent=rna_LmjF.01.0040-1
LmjF.01 TriTrypDB       gene    15025   17022   .       -       .       ID=LmjF.01.0050;Name=LmjF.01.0050;description=carboxylase%2C+putative;size=1998;web_id=LmjF.01.0050;locus_tag=LmjF.01.0050;size=1998;Alias=321438056,389592315,LmjF1.0050,LmjF01.0050,LmjF.01.0050,LmjF01.0050:pep,LmjF01.0050:mRNA

BED

SAM and BAM file formats

The SAM file format stores the results of mapping reads to a genome. It starts with a header describing the format version, the sorting order (SO) of the reads, and the genomic sequences the reads were mapped to. A minimal version looks like this:

@HD	VN:1.0	SO:unsorted
@SQ	SN:1	LN:171001
@PG	ID:bowtie2	PN:bowtie2	VN:2.1.0

It can contain both mapped and unmapped reads (we are mostly interested in the mapped ones). Here is an example:

SRR197978.9007654       177     1       189     0       50M     12      19732327        0       CAGATTTCAGTAGTCTAAACAAAAACGTACTCACTACACGAGAACGACAG      5936A><<4=<=5=;=;?@<?BA@4A@B<AAB9BB;??B?=;<B@A@BCB      XT:A:R  NM:i:3  SM:i:0  AM:i:0  X0:i:58 XM:i:3  XO:i:0  XG:i:0  MD:Z:0A0G0T47
SRR197978.9474929       69      1       238     0       *       =       238     0       GAGAAAAGGCTGCTGTACCGCACATCCGCCTTGGCCGTACAGCAGAGAAC      B9B@BBB;@@;::@@>@<@5&+2;702?;?.3:6=9A5-124=4677:7+
SRR197978.9474929       137     1       238     0       50M     =       238     0       GTTAGCTGCCTCCACGGTGGCGTGGCCGCTGCTGATGGCCTTGAACCGGC      B;B9=?>AA;?==;?>;(2A;=/=<1357,91760.:4041=;(6535;%      XT:A:R  NM:i:0  SM:i:0  AM:i:0  X0:i:62 XM:i:0  XO:i:0  XG:i:0  MD:Z:50

In short, it is a complex format: each line holds detailed information about a mapped read, its quality, mapped position(s), strand, etc. The exact description (with BAM and examples) takes 15 pages: http://samtools.sourceforge.net/SAMv1.pdf There are multiple tools to process and extract information from SAM and its compressed form, BAM, so it is better to learn how to use them than to decipher the format and access it with often slow scripts.
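The second column of each alignment line is the bitwise FLAG (177, 69 and 137 in the example above); decoding it shows at a glance whether a read is paired, unmapped, reverse-complemented, etc. A small sketch using the bit meanings from the SAM specification:

```python
# Bit meanings from the SAM specification (SAMv1.pdf).
SAM_FLAGS = [
    (0x1,   "paired"),
    (0x2,   "proper_pair"),
    (0x4,   "unmapped"),
    (0x8,   "mate_unmapped"),
    (0x10,  "reverse_strand"),
    (0x20,  "mate_reverse_strand"),
    (0x40,  "first_in_pair"),
    (0x80,  "second_in_pair"),
    (0x100, "secondary_alignment"),
    (0x200, "qc_fail"),
    (0x400, "duplicate"),
]

def decode_flag(flag):
    """Return the list of flag names set in a SAM FLAG value."""
    return [name for bit, name in SAM_FLAGS if flag & bit]

print(decode_flag(177))
# ['paired', 'reverse_strand', 'mate_reverse_strand', 'second_in_pair']
```

samtools flagstat and the explain-flags pages of various SAM tools do the same job at scale.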


Mapping Illumina reads to the genome

basic mapping steps

  • indexing

Before we can use a genome for mapping we have to transform it into a format specific to each mapper, allowing much faster search and lower memory usage. This is often called indexing but, to make things worse, indexing a fasta with samtools is not the same as indexing with bwa, bowtie, etc.

  • mapping

This is often the longest step, with options specific for each mapper

  • postprocessing

The output of the mappers is seldom directly usable by downstream programs, which often require sorted and indexed BAM files. So we need to transform the mapper output (often SAM, but sometimes a different format: MAF for LAST, MAP for GEM) to get such BAM files.

bwa

BWA is the default mapper of the state-of-the-art GATK SNP-calling pipeline. Some mappers may be better on some statistics, or equal but faster, but BWA is still a safe choice for genetic mapping. The main problem of BWA is the mapping of paired reads: once one read is mapped to a good location, the second read tends to be placed close to it (taking the insert size into account) even if that mapping on its own would be very doubtful. This may not be a problem for GATK, since mapping qualities and flags are taken into account, but keep it in mind when analysing mapping results on your own. Currently BWA offers 3 different algorithms, each with its limits and strong points. Here is the overview:

  • Illumina reads up to 100bp: BWA-BACKTRACK (the legacy bwa)

commands: "bwa aln" followed by bwa samse/sampe

  • sequences from 70bp up to 1Mbp:
    • Illumina data: "BWA-MEM"

(seeding alignments with maximal exact matches (MEMs) and then extending seeds with the affine-gap Smith-Waterman algorithm (SW))

command: bwa mem

    • very long reads/contigs: "BWA-SW" (Smith Waterman)

command: bwa bwasw

BWA has two different genome indexing algorithms:

  • IS, the default indexing algorithm ("bwa index -a is"), compatible with aln and mem; does not work for genomes larger than 2GB
  • BWT-SW: "bwa index -a bwtsw"; slower, but works for large genomes

Please note that bwtsw indexing (and therefore mapping with BWA-SW) does not work for short genomes


#creating genome index
bwa index -p ref.bwa_is  ref.fa

#mapping single end reads using MEM algorithm
bwa mem ref.bwa_is  reads.fq > reads.bwa_mem.sam

#mapping paired end reads using MEM algorithm
bwa mem ref.bwa_is reads_1.fq reads_2.fq > reads_12.bwa_mem.sam

#mapping short single end reads using the legacy aln algorithm
bwa aln ref.bwa_is short_read.fq > short_read.bwa_aln.sai
bwa samse ref.bwa_is short_read.bwa_aln.sai short_read.fq > short_read.bwa_aln.sam

#mapping paired reads
bwa aln ref.bwa_is  short_read_1.fq > short_read_1.bwa_aln.sai
bwa aln ref.bwa_is  short_read_2.fq > short_read_2.bwa_aln.sai
bwa sampe ref.bwa_is  short_read_1.bwa_aln.sai  short_read_2.bwa_aln.sai   short_read_1.fq   short_read_2.fq >  short_read_12.bwa_aln.sam

#mapping long reads using bwasw algorithm
bwa index -p ref.bwa_sw -a bwtsw ref.fa
bwa bwasw ref.bwa_sw long_read.fq > long_read.bwa_sw.sam

The mode currently recommended by the BWA manual and by GATK, the leading SNP calling software, is MEM.

To create usable BAM files we can process SAM files using Picard's SortSam

java -jar /path/to/SortSam.jar I=reads_vs_reference.bwa.unsorted.sam O=reads_vs_reference.bwa.sorted.bam SO=coordinate VALIDATION_STRINGENCY=SILENT CREATE_INDEX=true
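To check that sorting worked, the @HD header line of the output BAM should now report SO:coordinate (a quick sketch; file name as above):

```shell
# print the header line carrying the sort order; expect SO:coordinate after SortSam
samtools view -H reads_vs_reference.bwa.sorted.bam | grep '^@HD'
```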

For subsequent processing of the mapping files with GATK (SNP calling), it is easier to introduce the necessary information at the mapping stage than to run an extra step using Picard. What GATK requires is the so-called read group info. We will cover it later, but at this stage it is good to know that bwa can be run with extra parameters, saving us one extra step.

#below is the example read group info needed to be passed to bwa on the command line: 
@RG\tID:group1\tSM:sample1\tPL:illumina\tLB:lib1\tPU:unit1 

#here is the mapping step; the quoted string after -R is the read group info from above.
#different samples should have different read group info, like this:

bwa mem -M -R '@RG\tID:group1\tSM:sample1\tPL:illumina\tLB:lib1\tPU:unit1' ref_gen.bwa_is chicken_genomic_short_1.fq chicken_genomic_short_2.fq > chicken_genomic_12_vs_refgen.bwa_mem.rg.sam 
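When mapping several samples, the read group string can be built in a loop so each sample gets its own ID and SM tags. The sample names and file naming below are hypothetical:

```shell
# hypothetical sample list; each gets a distinct read group so GATK can tell them apart
for SAMPLE in sampleA sampleB; do
    RG="@RG\tID:${SAMPLE}\tSM:${SAMPLE}\tPL:illumina\tLB:${SAMPLE}_lib\tPU:${SAMPLE}_pu"
    bwa mem -M -R "$RG" ref_gen.bwa_is \
        ${SAMPLE}_1.fq ${SAMPLE}_2.fq > ${SAMPLE}_vs_refgen.bwa_mem.rg.sam
done
```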

(optional) Stampy

Stampy is a quite slow but at times more accurate mapper, allowing for improvement over simple BWA mappings. The basic usage is as follows:

#creating two special index files 

stampy.py --species=Lmex --assembly=Lmex_toyasembly -G Lmex_toygenome Lmex_genome.nfix.fa
#Result: 
Lmex_toygenome.stidx

stampy.py -g Lmex_toygenome -H Lmex_toyasembly   
#Result:
Lmex_toyasembly.sthash

#remapping reads already mapped with BWA (preferred option)
stampy.py -g Lmex_toygenome -h Lmex_toyasembly -t2 --bamkeepgoodreads -M LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.bam  > LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.stampy.sam

#convert SAM to BAM, sort and index BAM file:
java -jar ~/soft/picard_1.119/SortSam.jar \
I=LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.stampy.sam \
O=LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.stampy.bam \
SO=coordinate VALIDATION_STRINGENCY=SILENT CREATE_INDEX=true

#Result:
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.stampy.bam
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.stampy.bai

LAST

web site: http://last.cbrc.jp/

current version: 475 (Sep 2014)

This is a less popular but sometimes quite useful mapper that reports unique mappings only. It can handle a large number of mismatches: it simply trims away the non-matching parts of a read, as long as what is left is sufficient to secure a unique mapping. It can also be used to map very long reads, and even genome to genome (but then the genome has to be indexed differently). Standard usage:

#create samtools fasta index, used to insert reference sequence info into the SAM/BAM header during SAM to BAM conversion. Creates ref_genome.fa.fai
samtools faidx ref_genome.fa

#index ref_genome for last, with a preference for short, exact matches
lastdb -m1111110  ref_genome.lastdb ref_genome.fa

#map short reads with Sanger (-Q1) quality encoding and a minimum alignment score of 120 (-e120), then filter the output with a score threshold of 150 (-s150). See http://last.cbrc.jp/doc/last-map-probs.txt for more info
lastal -Q1 -e120 ref_genome.lastdb  input_reads.fastq  | last-map-probs.py -s150 > input_reads_vs_ref_genome.last.maf
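To see how many alignments survived the filtering, one can count the MAF alignment records; each alignment block in a MAF file starts with a line beginning with "a" (a quick check; file name as above):

```shell
# each MAF alignment block starts with a line beginning with "a"; count them
grep -c '^a ' input_reads_vs_ref_genome.last.maf
```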

#convert from MAF to SAM format
maf-convert.py sam  input_reads_vs_ref_genome.last.maf >  input_reads_vs_ref_genome.last.sam 

#convert SAM to BAM inserting header
samtools view -but ref_genome.fa.fai input_reads_vs_ref_genome.last.sam -o input_reads_vs_ref_genome.last.unsorted.bam

#sort BAM
samtools sort input_reads_vs_ref_genome.last.unsorted.bam input_reads_vs_ref_genome.last.sorted

#create BAM index (input_reads_vs_ref_genome.last.sorted.bam.bai)
samtools index input_reads_vs_ref_genome.last.sorted.bam 
 
 

Quick and dirty genome 2 genome comparison using LAST

  • Comparing 2-3 Leishmania genomes

Viewing mapping results with IGV

IGV is a Java program primarily for viewing mappings of short reads to a genome, but it can also be used for viewing SNPs (VCF files), genome annotation, or even genome-to-genome alignments (not a typical usage). The order of steps:

  • start IGV (it needs to be started specifying the amount of RAM the program may use; how much is needed depends on the coverage and the number of BAM files opened at the same time. In short, the more RAM assigned, the faster the scrolling)
  • select exactly the same version of the genome, with contigs named the same way as in your BAM files
  • open BAM files (they need to be sorted and indexed), plus any annotation you may need.

To work with IGV:

#assuming that igv.sh is on the PATH:

igv.sh

A large number of genomes is available through IGV's pull-down menus, but we may need to create our own genome for viewing (top menu): Genomes > Create .genome file


We need the FASTA genome reference file and its index (ref_gen.fa, ref_gen.fa.fai). The latter (the FASTA index) we create as follows:

samtools faidx ref_gen.fa 

In the IGV new genome creation menu we have to enter a unique name for our genome, then the FASTA file and, if we have one, also a genome annotation file.

Now we are ready to load some BAM files mapped to our ref_gen sequence.

There are multiple options for displaying the reads. The important things to notice are mismatches in reads, the coverage track, and the paired display.

SNP discovery (GATK)


Prerequisites

  • reference genome fasta file
  • reference genome index
samtools faidx Lmex_genome.fa

Result: Lmex_genome.fa.fai 


  • reference genome dictionary
java -jar ~/soft/picard_1.119/CreateSequenceDictionary.jar R=Lmex_genome.fa O=Lmex_genome.dict

Result: Lmex_genome.dict
  • BAM file preferably bwa mapped (from previous steps)

CAVEAT: GATK requires read group meta-information in BAM files to work. This information is often missing after performing the mappings. If we have not added it during mapping with bwa, we can still fix it easily with AddOrReplaceReadGroups from Picard:

 
java -jar  ~/soft/picard_1.119/AddOrReplaceReadGroups.jar \
I=LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.bam \
O=LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bam \
RGID=ERR307343 RGPL=Illumina RGLB=ERR307343_lib  RGPU=ERR307343_pu RGSM=ERR307343_sm \
VALIDATION_STRINGENCY=LENIENT CREATE_INDEX=TRUE SO=coordinate 


Result: 
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bam 
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bai
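To confirm that the read group information was added, the @RG line should now appear in the BAM header (a quick check):

```shell
# list read group lines from the BAM header; expect ID:ERR307343 etc.
samtools view -H LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bam | grep '^@RG'
```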

GATK pipeline

Genome Analysis Toolkit (GATK) from Broad is the de facto standard for detecting Single Nucleotide Polymorphisms (SNPs). There are very good and extensive manuals available on their site: http://www.broadinstitute.org/gatk/index.php

This is the step-by-step procedure following their Best Practices.

Caveat: GATK requires more than one sequence in your reference genome. If not, it reports a strange error about a wrong IUPAC (sequence) character.

Mark duplicates

At this stage we have mapped reads with read group info as BAM. The next step is to mark duplicate reads (~PCR artifacts) in this file. We can almost always use CREATE_INDEX=true, so we do not need to run an extra indexing step when using Picard utilities.

java -jar ~/soft/picard_1.119/MarkDuplicates.jar \
I=LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bam \
O=LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.bam \
METRICS_FILE=LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bam.metrics \
CREATE_INDEX=true



Results:
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.bam
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.bai
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bam.metrics
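The metrics file reports, among other things, the fraction of duplicate reads. Assuming the standard Picard metrics layout (a commented preamble, then a header line starting with LIBRARY, followed by the values):

```shell
# print the metrics header line and the values line below it (includes PERCENT_DUPLICATION)
grep -A1 '^LIBRARY' LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.bam.metrics
```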
Realign reads around indels

After marking duplicate reads, the next step is to realign reads around indels. This is done in two steps. At this stage it also becomes more and more cumbersome to execute these steps as commands typed on the command line. The solution is to cut and paste them into script files, change the script permissions, and execute those instead.

CAVEAT: notice the small -o in the command

#computing intervals in which reads will be realigned

java -jar  ~/soft/GATK_3.2.2/GenomeAnalysisTK.jar  -T RealignerTargetCreator \
-R Lmex_genome.fa \
-I LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.bam \
-o  LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.target.intervals


Result:
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.target.intervals

#realignment
java -jar  ~/soft/GATK_3.2.2/GenomeAnalysisTK.jar  -T IndelRealigner -R Lmex_genome.fa -I LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.bam -targetIntervals  LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.target.intervals  -o LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.bam  --filter_bases_not_stored 

Results:
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.bam
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.bai

The recommended Best Practices step here is to run base recalibration, meaning that base qualities are re-estimated after taking the mapping results into account. It adds several steps, and while it may be worthwhile, Illumina has become better at estimating the base qualities of its reads, so the results may not justify the extra complexity.

Another optional step (per Best Practices) is to reduce the complexity of the BAM. Since it is not necessary, we will skip it this time, but it is recommended when dealing with multiple / large data sets.

Call mutations (SNPs)
#SNP only:

java -jar  ~/soft/GATK_3.2.2/GenomeAnalysisTK.jar  -T UnifiedGenotyper \
-R Lmex_genome.fa \
-I LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.bam   \
-o LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp.vcf 

Results:
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp.vcf
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp.vcf.idx


#SNP + indels
java -jar  ~/soft/GATK_3.2.2/GenomeAnalysisTK.jar  -T UnifiedGenotyper \
-R Lmex_genome.fa \
-I LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.bam   \
-o LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp_indel.vcf \
 --genotype_likelihoods_model BOTH

Results:
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp_indel.vcf
LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp_indel.vcf.idx
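A quick way to count the called variants is to count the non-header lines in the VCFs (header lines start with '#'):

```shell
# number of variant records in each VCF (every non-header line is one variant)
grep -vc '^#' LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp.vcf
grep -vc '^#' LmxM.01_ERR307343_12.Lmex.bwa_mem.Lmex.rg.md.realigned.snp_indel.vcf
```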

Quantifications of mapped reads

  • Gene quantifications (DNA & RNA levels)

Finding gene ends by mapping post-splice leader and polyA sequences

Viewing mappings and SNPs