To simplify, this page assumes you are looking for repeats in eukaryotic genomic DNA.
Repeat finding can be divided into two tasks, depending on the availability of a repeat library:
A) a library already exists for the given (or a closely related) species;
B) you construct such a library de novo.
Task A is usually a prerequisite for genome annotation and even for BLAST searches. For newly sequenced genomes one should start with B (constructing a species-specific repeat library).
For a more comprehensive list of programs see TE Tools @ Bergman Lab, University of Manchester, UK.
Detecting known repeats
Most commonly used: RepeatMasker
- web site: http://www.repeatmasker.org/
- current version (checked on 2010-03-22): 3.2.8
- documentation: http://www.repeatmasker.org/webrepeatmaskerhelp.html
- available as an online web server or as a command-line tool
You need a sequence in FASTA format (a multi-FASTA file is fine). Type:
RepeatMasker your_sequence_in_fasta_format
You will get a file: your_sequence_in_fasta_format.masked (the name tells all).
species options (choose only one):
- -m(us) masks rodent-specific and mammal-wide repeats
- -rod(ent) same as -mus
- -mam(mal) masks repeats found in non-primate, non-rodent mammals
- -ar(abidopsis) masks repeats found in Arabidopsis
- -dr(osophila) masks repeats found in Drosophila
- -el(egans) masks repeats found in C. elegans
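For example, to mask rodent repeats with one of the species options above (my_genome.fa is a hypothetical file name):

```shell
# Mask rodent repeats in my_genome.fa (hypothetical input file).
# Output: my_genome.fa.masked (masked sequence) plus my_genome.fa.out (annotation table).
RepeatMasker -rod my_genome.fa
```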
De novo repeat library construction
For program recommendations based on tests see: Saha et al., Empirical comparison of ab initio repeat finding programs (2008)
For extensive reviews listing tens of programs:
- Bergman C. Discovering and detecting transposable elements in genome sequences (2007)
- Lerat E. Identifying repeats and transposable elements in sequenced genomes: how to find your way through the dense forest of programs (Nov 2009)
Keep in mind that the resulting libraries should be further screened for gene families. There are borderline cases where a genome may contain thousands of modified copies of a gene, ranging from seemingly functional copies through pseudogenes, gene fragments, and single exons (e.g. the Speer family in rodents).
One has to have at least a draft of the genome or multiple genomic sequences.
RepeatScout
- command line only, requires compilation
- current version (2010-03): 1.05
- PPT presentation describing the algorithm: http://bix.ucsd.edu/repeatscout/repeatscout-ismb.ppt
- publication (PDF): De novo identification of repeat families in large genomes (2005)
- build frequency table
build_lmer_table -sequence input_genome_sequence.fas -freq output_lmer.frequency
The output_lmer.frequency file can still be quite large (1.7 GB for a 900 MB FASTA file).
- create fasta file containing all kinds of repeats
RepeatScout -sequence input_genome_sequence.fas -output output_repeats.fas -freq output_lmer.frequency
- RAM usage (RepeatScout): > 17 GB for 800 MB of genomic sequence
- run time: 9.6 h on a Xeon E7450 @ 2.40 GHz
The output (output_repeats.fas) is a FASTA file with headers (>R=1, >R=232, etc.). It also contains trivial simple repeats (CACACA...) and tandem repeats.
- filter out short (<50 bp) sequences and remove "anything that is over 50% low-complexity vis a vis TRF or NSEG." (a Perl script)
It requires trf and nseg to be on the PATH, or the environment variables TRF_COMMAND and NSEG_COMMAND set to point to their locations.
filter-stage-1.prl output_repeats.fas > output_repeats.fas.filtered_1
this prints tons of messages
- run RepeatMasker on your genome of interest using filtered RepeatScout library
RepeatMasker input_genome_sequence.fas -lib output_repeats.fas.filtered_1
This is a very long step (36 h for an 800 MB draft genome) when run in default mode. See the discussion page for possible, but so far untested, speedups.
Output used for the next step: input_genome_sequence.fas.out
- filter putative repeats by copy number; by default only sequences occurring more than 10 times in the genome are kept
cat output_repeats.fas.filtered_1 | filter-stage-2.prl --cat=input_genome_sequence.fas.out > output_repeats.fas.filtered_2
Fast (< 1 min). You can modify the threshold with e.g. "--thresh=20" (only repeats occurring 20+ times will be kept).
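For convenience, the whole RepeatScout pipeline above can be collected into one driver script. This is an untested sketch: file names are the placeholders used above, and the trf/nseg paths must be adjusted to your system.

```shell
# Save as run_repeatscout.sh -- untested sketch of the full pipeline above;
# all file names are placeholders, adjust paths to your data and tools.
cat > run_repeatscout.sh <<'EOF'
#!/bin/sh
set -e
GENOME=input_genome_sequence.fas          # your (draft) genome assembly
export TRF_COMMAND=/usr/local/bin/trf     # adjust if trf lives elsewhere
export NSEG_COMMAND=/usr/local/bin/nseg   # adjust if nseg lives elsewhere

build_lmer_table -sequence "$GENOME" -freq output_lmer.frequency
RepeatScout -sequence "$GENOME" -output output_repeats.fas -freq output_lmer.frequency
filter-stage-1.prl output_repeats.fas > output_repeats.fas.filtered_1
RepeatMasker "$GENOME" -lib output_repeats.fas.filtered_1     # the slowest step
cat output_repeats.fas.filtered_1 | \
    filter-stage-2.prl --cat="$GENOME.out" > output_repeats.fas.filtered_2
EOF
chmod +x run_repeatscout.sh
```

The final library is output_repeats.fas.filtered_2.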
Piler (with lastz)
- lastz from http://www.bx.psu.edu/miller_lab/ (tested version from 2010-Jan-12)
- piler from http://www.drive5.com/piler/ (tested version 1.0)
Also read the pals manual in case you need to understand piler's GFF input format: http://www.drive5.com/pals/pals_userguide.htm
Both programs install easily on Ubuntu 9.10.
- input: a FASTA file; FASTA identifiers should not contain spaces or special characters (stick to alphanumeric ones)
- things to keep in mind:
- the default FASTA output of e.g. Newbler needs to be fixed accordingly
lastz my_fasta_input.fa[multiple] my_fasta_input.fa --output=my_fasta_lastz_output.csv \
  --format=general:name2,zstart2,end2,score,strand2,name1,zstart1,end1,identity \
  --notrivial --ambiguousn --markend
Converting lastz output to gff piler input
Python script (still in the testing phase)
Draft! See: http://www.drive5.com/piler/piler_userguide.html
piler -trs my_fasta_lastz_output.gff -out my_fasta_lastz_output.trs
ReAS (broken install)
ReAS is a pre-assembly repeat-finding method: it works on raw sequencing reads rather than on an assembled genome.
- get ftp://ftp.genomics.org.cn/pub/ReAS/software/ReAS_2.02.tar.gz
- unpack it in a suitable directory
tar xfvz ReAS_2.02.tar.gz; cd ReAS_2.02/code
- fix two files:
open code/N_matchreads.cpp and add the line below, e.g. after "#include<time.h>":
perltidy finds one error in Clustering.pl. Replace:
my $cluster_dir "cluster_"; #prefix of directory names
with:
my $cluster_dir = "cluster_"; #prefix of directory names
- compile ReAS
make; make install
You will get binaries + perl modules in ReAS_2.02/bin
- Put them on $PATH (bash)
- you also need to have on your PATH:
- dust from WU-BLAST, unknown version (not yet tested with dustmasker from NCBI BLAST)
- muscle (3.6 was used originally)
- cross_match.manyreads from phrap
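The PATH setup from the list above could look like this in bash (the unpack location is an example; the PERL5LIB line is an assumption, needed only if the ReAS scripts cannot find their Perl modules):

```shell
# add the ReAS binaries and Perl modules to the search paths (example location)
export PATH="$HOME/src/ReAS_2.02/bin:$PATH"
# assumption: set PERL5LIB only if the scripts fail to locate their modules
export PERL5LIB="$HOME/src/ReAS_2.02/bin${PERL5LIB:+:$PERL5LIB}"
```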
With a large number of reads and "-pa 1" (see below), cross_match (version 0.990329) crashes with "FATAL ERROR: seq_area_size". Untested workarounds:
- the newest beta version (1.090518) does not have this memory limit, but also does not seem to include cross_match.manyreads(?)
- in the older version of cross_match one can change #define BLOCK_SIZE 10000 in db.c and recompile
- I decreased the input size for testing (265 MB FASTA with 592k 454 reads), which is small enough. Possibly running ReAS with
reas_all.pl -read input.fasta -n 8
may fix it.
Read the 00readme file located in ReAS_2.02/code for more detailed instructions.
To run with default settings with already selected set of reads:
nohup reas_all.pl -read 454_subset_4repeat_search.fas -output 454_subset_4repeat_search.fas.cons_reas 2> nohup.reas1.txt &
CAVEAT: at the current stage the above command does not run all scripts correctly.
- There is a description of how ReAS was used in the Hydra magnipapillata genome project (see the supplementary information PDF, section "S9. Analysis of repeated sequences"):
From 10M+ Sanger sequences of average length 800+ bp they constructed two libraries:
- A) 1% of reads, 17-mers with a depth of at least 10 ("especially enriched in the CR1 class of retrotransposons")
- B) 10% of reads, 17-mers with a depth range of 10 to 100.
""" - to improve the quality of the assembly, only reads that have at least 100 high-depth 17-mers were considered
- ReAS was run on each of the libraries separately
- after retaining only the assembled repeats of length larger than 500 nucleotides and a minimal average depth value (as provided by the program) of 10, the two libraries contained 949 and 25,110 repeats each, respectively.
- These sequences were then pulled together.
- initial ReAS assembly appears to be fragmented and there are many redundant sequences, the final version of the library was produced by running ReAS' join_fragments.pl and rmRedundance.pl scripts.
- final library contained 3909 reconstructed repetitive elements with an average length of about 1500 nt. """
- see this thread @Biostar:
For pages on similar topics visit: Wikiomics@OpenWetWare
- Church DM, Goodstadt L, Hillier LW, Zody MC, Goldstein S, et al. 2009 Lineage-Specific Biology Revealed by a Finished Genome Assembly of the Mouse. PLoS Biol 7(5): e1000112. doi:10.1371/journal.pbio.1000112