Wikiomics:Repeat finding

From OpenWetWare

Revision as of 14:58, 30 July 2010


To simplify, this page assumes eukaryotic genomic DNA repeat finding.

Repeat finding can be divided into two tasks, depending on the availability of a repeat library:

A) a library already exists for the given species (or possibly a closely related one)

or

B) you construct such a library de novo.


Task A is usually a prerequisite step for genome annotation and even BLAST searches. For newly sequenced genomes one should start with B (constructing a species-specific repeat library).

For a more comprehensive list of programs, see TE tools @ Bergman Lab, U. of Manchester, UK


Detecting known repeats

Most commonly used: RepeatMasker

RepeatMasker


  • Online web server [1]
  • command line

You need a FASTA file (it can contain multiple sequences). Type:

repmask your_sequence_in_fasta_format

You will get a file: your_sequence_in_fasta_format.masked --- the name tells all

Species options (choose only one):

-m(us) masks rodent-specific and mammalian-wide repeats
-rod(ent) same as -mus
-mam(mal) masks repeats found in non-primate, non-rodent mammals
-ar(abidopsis) masks repeats found in Arabidopsis
-dr(osophila) masks repeats found in Drosophila
-el(egans) masks repeats found in C. elegans

De novo repeat library construction

For program recommendations based on tests, see: Saha et al., Empirical comparison of ab initio repeat finding programs (2008)

For extensive reviews listing tens of programs:

Keep in mind that the resulting libraries should be further screened for gene families. There are borderline cases where a genome may contain thousands of modified copies of a gene, ranging from seemingly functional copies through pseudogenes and gene fragments to single exons (e.g. the Speer family in rodents).

Consensus Based

One has to have at least a draft of the genome or multiple genomic sequences.

RepeatScout

command line only, requires compilation

Site: http://bix.ucsd.edu/repeatscout/

current version (2010-03): 1.05

Documentation:


Simplest run:

  • build the l-mer frequency table
build_lmer_table -sequence input_genome_sequence.fas -freq output_lmer.frequency

The output_lmer.frequency file can still be quite large (1.7 GB for a 900 MB FASTA file)

  • create a FASTA file containing all candidate repeats
RepeatScout -sequence input_genome_sequence.fas -output output_repeats.fas  -freq output_lmer.frequency

Resources:

    • RAM usage (RepeatScout): > 17 GB for 800 Mb of genomic sequence
    • 9.6 h on a Xeon E7450 @ 2.40 GHz

The output (output_repeats.fas) is a FASTA file with headers (>R=1, >R=232 etc.). It also contains trivial simple repeats (CACACA...) and tandem repeats.

  • filter out short (<50 bp) sequences and remove "anything that is over 50% low-complexity vis a vis TRF or NSEG." (a Perl script)

It requires trf and nseg to be on the PATH, or the environment variables TRF_COMMAND and NSEG_COMMAND set to point to them
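If trf and nseg live outside your PATH, the environment variables can be pointed at the binaries directly. The paths below are placeholders, not part of the RepeatScout docs; adjust them to your installation:

```shell
# Placeholder paths; point these at your actual trf and nseg binaries.
export TRF_COMMAND=/usr/local/bin/trf
export NSEG_COMMAND=/usr/local/bin/nseg
```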

 
filter-stage-1.prl output_repeats.fas > output_repeats.fas.filtered_1 

This step prints a lot of messages.


  • run RepeatMasker on your genome of interest using filtered RepeatScout library
 RepeatMasker  input_genome_sequence.fas -lib output_repeats.fas.filtered_1

This is a very long step (36 h for an 800 Mb draft genome) when run in this default mode. See the discussion page for possible, but so far untested, speedups.

Output used for the next step: input_genome_sequence.fas.out

  • filtering putative repeats by copy number. By default, only sequences occurring more than 10 times in the genome are kept


 cat output_repeats.fas.filtered_1  | filter-stage-2.prl --cat=input_genome_sequence.fas.out > output_repeats.fas.filtered_2

Fast (< 1 min). You can modify the threshold with e.g. "--thresh=20" (only repeats occurring 20+ times will be kept)
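The steps above can be tied together in a small driver script. This is my own dry-run sketch, not part of RepeatScout: run() only echoes each command so you can review the order and file names first; change its body to "$@" to actually execute.

```shell
# Dry-run sketch of the RepeatScout workflow described above.
# run() only prints each command; replace its body with "$@" to execute.
GENOME=input_genome_sequence.fas
run() { echo "+ $*"; }

run build_lmer_table -sequence "$GENOME" -freq output_lmer.frequency
run RepeatScout -sequence "$GENOME" -output output_repeats.fas -freq output_lmer.frequency
run filter-stage-1.prl output_repeats.fas       # redirect stdout to output_repeats.fas.filtered_1
run RepeatMasker "$GENOME" -lib output_repeats.fas.filtered_1
run filter-stage-2.prl --cat="$GENOME.out"      # reads output_repeats.fas.filtered_1 on stdin
```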


Piler (with lastz)

Prerequisites:

Also read the pals manual, in case you need to understand piler's GFF input format: http://www.drive5.com/pals/pals_userguide.htm Both programs install easily on Ubuntu 9.10.

Input

  • fasta file. FASTA identifiers should not contain spaces or special characters (stick to alphanumeric ones)
  • things to keep in mind:
    • the default FASTA output of e.g. Newbler needs to be fixed
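One minimal way to fix such headers with sed (my suggestion, not part of the piler docs): drop everything after the first whitespace in each header, then replace remaining special characters with underscores.

```shell
# Example input with a Newbler-style header (space-separated annotations).
printf '>contig00001 length=500 numreads=10\nACGTACGT\n' > example.fa

# Keep only the first whitespace-delimited token of each header line,
# then replace any remaining special characters with underscores.
sed -e '/^>/ s/[[:space:]].*//' \
    -e '/^>/ s/[^A-Za-z0-9_>]/_/g' example.fa > example.clean.fa

cat example.clean.fa   # header is now just ">contig00001"
```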

Running lastz

lastz my_fasta_input.fa[multiple] my_fasta_input.fa --output=my_fasta_lastz_output.csv \ 
--format=general:name2,zstart2,end2,score,strand2,name1,zstart1,end1,identity  --notrivial --ambiguousn  --markend

Converting lastz output to gff piler input

Python script (still in the testing phase)
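Until that script is available, the conversion can be sketched with awk. The column order matches the lastz command above; zstart* columns are 0-based, so 1 is added for GFF. The PALS-style "Target" attribute syntax here is my assumption and should be checked against the pals user guide before trusting the result.

```shell
# Minimal example of lastz --format=general output (tab-separated), with
# the column order used in the command above; real files start with a
# "#name2 ..." header line and end with a --markend comment line.
printf '#name2\tzstart2\tend2\tscore\tstrand2\tname1\tzstart1\tend1\tidentity\n' >  lastz_example.csv
printf 'chr1\t100\t200\t5000\t+\tchr2\t300\t400\t95.0%%\n'                        >> lastz_example.csv

# Skip '#' lines, shift 0-based zstart columns to 1-based GFF coordinates,
# and emit PALS-style "Target" attributes (verify syntax for your piler).
awk 'BEGIN { OFS = "\t" }
     /^#/ { next }
     { print $1, "lastz", "hit", $2 + 1, $3, $4, $5, ".",
             "Target " $6 " ; " $7 + 1 " " $8 }' \
    lastz_example.csv > lastz_example.gff

cat lastz_example.gff
```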

Running piler

Draft! See: http://www.drive5.com/piler/piler_userguide.html

piler -trs my_fasta_lastz_output.gff -out  my_fasta_lastz_output.trs

Input Reads

This is a pre-assembly repeat finding method.

ReAS (broken install)

Paper

Installation

tar xfvz ReAS_2.02.tar.gz; cd ReAS_2.02/code
  • open N_matchreads.cpp and add the line below, e.g. after "#include<time.h>":
#include <cmath>
  • compile ReAS
make; make install

You will get binaries and Perl modules in ReAS_2.02/bin

  • Put them on $PATH (bash)
export PATH=/your/path/to/ReAS_2.02/bin/:$PATH

Also perltidy finds one error in Clustering.pl. Replace:

my $cluster_dir "cluster_"; #prefix of directory names

with:

my $cluster_dir = "cluster_"; #prefix of directory names

Usage

  • you need to have on your PATH
    • dust from wublast (not tested yet with dustmasker from blast)
    • muscle
    • cross_match from phrap

Read the 00readme file located in ReAS_2.02/code for more detailed instructions.
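Before launching a long run, it is worth verifying the prerequisites listed above are actually visible. A plain POSIX-shell check (my snippet, tool names as listed above):

```shell
# Report which of the ReAS prerequisites are visible on $PATH.
for tool in dust muscle cross_match; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool"
    else
        echo "MISSING: $tool"
    fi
done
```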

To run on 8 CPU cores with otherwise default settings on an already selected set of reads:

nohup reas_all.pl -read 454_subset_4repeat_search.fas -pa 8 -output 454_subset_4repeat_search.fas.cons_reas 2> nohup.reas1.txt &

CAVEAT: at the current stage the above command does not run all scripts (on my computer: multi_run2.sh: line 72: dust: command not found).

External info

  • There is a description of how ReAS was used in the Hydra magnipapillata genome project (see the supplementary information PDF, section "S9. Analysis of repeated sequences"):

http://www.nature.com/nature/journal/v464/n7288/full/nature08830.html

From 10M+ Sanger sequences with an average length of 800+ bp they constructed two libraries:

  • A) 1% of reads, 17-mers with a depth of at least 10 ("especially enriched in the CR1 class of retrotransposons")
  • B) 10% of reads, 17-mers with a depth range of 10 to 100.

""" - to improve the quality of the assembly, only reads that have at least 100 high-depth 17-mers were considered

- ReAS was run on each of the libraries separately

- after retaining only the assembled repeats longer than 500 nucleotides and with a minimal average depth value (as provided by the program) of 10, the two libraries contained 949 and 25,110 repeats, respectively.

- these sequences were then pooled together.

- since the initial ReAS assembly appears to be fragmented and contains many redundant sequences, the final version of the library was produced by running ReAS's join_fragments.pl and rmRedundance.pl scripts.

- the final library contained 3909 reconstructed repetitive elements with an average length of about 1500 nt. """

  • see this thread @Biostar:

http://biostar.stackexchange.com/questions/753/instaling-reas-repeat-finding


For pages on similar topics visit: Wikiomics@OpenWetWare
