The BioBricks Foundation:Standards/Technical/Exchange/Old Discussion


Biobrick Data Exchange Standards: This working group aims to define formats / technologies for the description of biobricks and the exchange (or networking) of biobrick-related data. This document is part of the ongoing discussion on the technical standards mailing list. The main questions to tackle are:

  • Aim -- goal and application scenarios for this standard
  • Biobrick definition -- What is a Biobrick?
  • Data model -- What is the data model needed to describe a biobrick?
  • Technology -- What is the best format / technology for exchange?


Aim / Application scenarios for this standard

Application scenarios [please discuss]

  • data exchange between local / central part registries

Example: "We have a local registry and want to publish the finished Biobricks to the MIT registry." See [[ http://brickit.wiki.sourceforge.net/ | BrickIt project]] for an example local registry system.

  • find suitable parts

Example: "I need a 10-fold PoPs amplifier (input range 0 - 8 PoPs) that works in S. cerevisiae at 25 C temperature; response time doesn't matter but protein production load needs to stay below 100000 AA consumed; Sub-components must not interfere with the MAPK pathway [enter reactions]."

  • download biobrick data into local computer programs

Example: "I want to make a bioinformatic analysis of all RNA-biobricks in the MIT registry." Example: "I want to write a Biobrick DNA design program."

  • simulate Biobrick devices & simulation-aided design

"We want to simulate the behavior of device X and Y with the GePasy program." or "We want to develop simulation-based bio-circuit design programs."

  • distributed annotation of Biobricks

Example: "We have measured the toxicity of 1000 BioBricks from MIT and two other registries. Can we cross-link this data with the registy?"


What is a Biobrick?

Definition

A final definition is beyond the scope of this group. For data exchange purposes we adopt the following draft:

  • BioBricks™ are standard DNA parts that encode basic biological functions (see the BBF home page).
  • A BioBrick has a unique DNA sequence.
  • Basic parts are defined by this DNA sequence.
  • Composite parts are defined as "sequence" of Basic BioBricks, along with intervening "scar" sequences.

Issue: BioBrick formats

(Raik) You can have the "same" biobrick in different formats, e.g. with the prefix/suffix from one of the two suggested protein fusion formats. The core sequence is exactly the same, but having a sample of biobrick X with BioFusion flanks may be of no use if the other biobricks in your freezer are formatted differently. *Does a different prefix/suffix create a different biobrick?* To the assembling experimentalist in the lab it does; to the user of gene synthesis it doesn't really; the system designer or analyst couldn't care less...

Abstraction layers [discussion]

See also: [[ http://biobricks.org/pipermail/standards_biobricks.org/2008-February/thread.html | developing thread ]] on standards mailing list.

(Mac) Should there be a one-to-one relationship between a part's functional definition and its sequence? What if you introduce a silent mutation into a BioBrick -- is there a "different sequence, different part" doctrine, even if the two are functionally equivalent? ... Is this a source code vs. compiled code issue?

(Raik) Right now we seem to follow the unspoken rule that a part is defined by its exact DNA sequence. Any modification creates a new part, which makes sense to the experimentalist because it maps each biobrick to exactly one DNA fragment (which you either have in your freezer or not) and vice versa. Options:

  • keep/fix the sequence-based definition but introduce relations like "ortholog to", "equivalent to", etc.
  • define "reference biobricks" and link variants to them
  • find a more abstract definition ... and create the concept of BB 'implementation' or 'instance'.

(Mac) Perhaps we could do both? Assuming a biobrick always has one and only one DNA sequence, perhaps we could build the data model to support organizing biobricks into families or sets of functionally related parts (sketched below)? Each family could have one canonical biobrick associated with it that works, is available, and exemplifies the function that the family is supposed to have.
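
A rough sketch of this family idea in Python; the class and field names are hypothetical, not part of any agreed standard:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Biobrick:
    part_id: str    # unique ID, e.g. "BBa_0001"
    sequence: str   # the one and only DNA sequence that defines this part

@dataclass
class BiobrickFamily:
    name: str                             # functional label, e.g. "PoPS amplifiers"
    members: List[Biobrick] = field(default_factory=list)  # functionally related variants
    canonical: Optional[Biobrick] = None  # the exemplar that works and is available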

What is the data model needed to describe a biobrick?

Following Ralph's and Barry's mails, Raik suggests splitting this into the following sub-topics (re-organize at leisure).

minimal Biobrick information

The set of minimal information aims to (1) uniquely identify a biobrick, (2) provide sufficient detail for its application and handling in the lab and during assembly, and (3) describe its origin/source and references for human study. (A rough sketch of such a record follows the list below.)

  • unique ID
  • DNA sequence / basic building blocks
  • format ??? (see issue above)
  • short description for humans
  • long description for humans
  • target chassis
  • "collaborating"/complementing biobricks if any
  • feature annotation
  • experience flag
  • ? bug tracker ?
  • ? version / supersedes / history ?
  • source GenBank ID if applicable (with position?)
  • source organism
  • source lab/person
  • references (web / literature)

Biobrick classification

Categorization and anything that helps (1) to fish this part out of the registry and (2) to decide what extra information may be needed.

Intrinsic Classification

Intrinsic classification covers those aspects of Biobrick classification which are defined by the Biobricks themselves. For these the primary focus is defining the vocabularies used to describe Biobricks to the outside world. Broadly speaking, this can include:

  • Identifiers
  • Biobrick taxonomy: defining types or species of Biobricks based on composition, function, etc.

Possible intrinsic classifiers include (a vocabulary sketch follows this list):

  • DNA category: [ AA-coding, RNA-coding [m-/t-/nc-/mi-/si-], regulatory [promoter, rbs, terminator, enhancer], unknown, ...]
  • part category: [ basic, composite ]
  • implementation status: [ planning, building, sequence-verified, function-verified, works ]
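
As an illustration of pinning such vocabularies down in software, a minimal Python sketch using the values listed above (the class names themselves are made up):

from enum import Enum

class PartCategory(Enum):
    BASIC = "basic"
    COMPOSITE = "composite"

class ImplementationStatus(Enum):
    PLANNING = "planning"
    BUILDING = "building"
    SEQUENCE_VERIFIED = "sequence-verified"
    FUNCTION_VERIFIED = "function-verified"
    WORKS = "works"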

Extrinsic Classification

Extrinsic classification refers to those aspects of Biobrick classification which are attributed to Biobricks from external sources or references. The focus is defining the vocabularies for those aspects of the outside world which are related to biobricks.

  • Functional Performance Parameters -- (Raik:) see Characterization below
  • Function: GO identifiers
  • Structure: PFam / Smart protein domains (but see: further annotation)

Characterization

Quantitative data about the part, important for (1) design and (2) implementation of devices containing it, but also for (3) simulation (and design) in network models.

A) for device implementation

  • Device reliability (RNA half-life, protein half-life)
  • Device stability (genetic stability)
  • Device compatibility (with other devices, environmental conditions etc.)

B) for device simulation & design

  • Static device behavior
  • Dynamic device behavior
  • Device interactions (including quantitative data)
  • Power requirements of the device
  • Device reactions + reaction rates

C) Further annotation

  • references to high-throughput data?
  • references to outside, non-standardized information about this part

What is the best format / technology / architecture for exchange?

Model-first: there is a concern that tying ourselves to a format too early will keep us from forming a clear model and cause us to hack up the format. Model-parallel: on the other hand, technology choices also matter for the data model discussion -- different technologies imply different possibilities but also different costs.

Proposed Architectures

For reference, I'm considering a piece of web-accessible software, like the MIT Registry or BrickIt, that has BB data in some sort of persistence layer (be it a relational DB, an object DB, an XML store, a hash store like CouchDB/SimpleDB, or a triple store), offers a human-facing UI, and provides a programmatic interface for third-party software integration that allows *read/write access* with authentication and authorization rules. (See the section on 'Application Scenarios' above.)

XML/DB backend, REST API

The REST architecture is becoming popular as a clean approach for point-to-point data exchange across the web (see the [[ http://biobricks.org/pipermail/standards_biobricks.org/2008-February/000065.html | 'say no to web services' thread ]]). A RESTful architecture might have some advantages: REST is a simpler approach to data access than SOAP, it is easy to work with since it is simply HTTP (GET + POST), and software support is plentiful.

Note that this approach involves a layer of abstraction over the persistence layer. The disadvantage, compared to offering a straight-up SQL/etc. interface, is the additional work of writing that layer. However, you will have to design a layer of abstraction anyhow for the UI (such as a web application serving HTML), and frameworks such as Django and Rails make it easy to expose alternative content types (XML, JSON) in parallel with your human-consumable HTML views.

The advantage is that you get to decouple the internal representation from the public API. This allows you to modify your underlying data store (database, schema, etc.) without breaking the interface that your clients are using. It also lets your application perform data validation, written in the higher-level language of your application rather than in SQL triggers/keys, so the validation logic is not repeated across both the application and the database. Finally, it affords you more power in the authentication/authorization department than simple database logins. This approach (doing validation/auth in the application layer) is that of an Application Database and essentially precludes offering a raw SQL interface.
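
To make this concrete, a minimal sketch of such a layer; the thread mentions Django and Rails, but Flask is used here purely as a compact stand-in, and all routes and field names are invented:

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
PARTS = {}  # stand-in for the real persistence layer (relational DB, XML store, ...)

@app.route("/parts/<part_id>", methods=["GET"])
def get_part(part_id):
    # serve the same record the HTML view would render, here as JSON
    part = PARTS.get(part_id)
    if part is None:
        abort(404)
    return jsonify(part)

@app.route("/parts/<part_id>", methods=["PUT"])
def put_part(part_id):
    # validation and authorization live here in application code,
    # not in SQL triggers or database logins
    record = request.get_json(silent=True)
    if not record or "sequence" not in record:
        abort(400)
    PARTS[part_id] = record
    return jsonify(record), 201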

Triple backend, SPARQL/SPARUL API

If, on the other hand, we elect a triple-based storage format, query languages such as SPARQL and SPARQL/Update (aka SPARUL) offer great power.

Note that, with this approach, the tool could expose the underlying RDF as a SPARQL/SPARUL endpoint, and both the application's web interface and the programmatic API could work against that. The point here is that triples are likely flexible enough to withstand a "schema change", and providing a SPARQL-adhering endpoint is a layer of abstraction that allows you to swap out the underlying triple store if necessary. I am not sure how authentication/authorization and data validation happen in this scenario, as I am less familiar with it.
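
For flavor, a hedged sketch of such a query with rdflib; the bbf prefix comes from the N3 example near the end of this page, while the data URL and property names are invented:

from rdflib import Graph

g = Graph()
# in a real deployment this would be a remote SPARQL endpoint,
# not a locally parsed document
g.parse("http://example.org/parts.n3", format="n3")

query = """
    PREFIX bbf: <http://biobricks.org/ontology/1.1/>
    SELECT ?part ?seq
    WHERE {
        ?part a bbf:biobrick ;
              bbf:sequence ?seq .
    }
"""
for part, seq in g.query(query):
    print(part, seq)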

For rolling up your sleeves and hacking around, you might like to check out object/RDF modeling libraries such as:

The following articles contain a good deal of discussion on the topic of building web applications for the semantic web:

any backend, RDF REST

A web server serving and consuming RDF/XML/N3 documents would combine the REST architecture with the triple format. It is technically quite easy to implement and would allow the growth of a semantic web around the original biobrick definition. So picture an automatic data exchange between a hypothetical BrickIt server in Barcelona (brickit.crg.es) and the MIT registry:

1. update notification

  • parts.mit subscribes to brickit.crg RSS-feed
  • parts.mit receives RSS digest that there is a new biobrick record BBb_F0101 available on brickit.crg

2. Read access

  • parts.mit loads brickit.crg/parts.n3#BBb_F0101
  • -> brickit.crg serves the internal record as N3 document (technically no problem whatsoever, as I discussed on the mailing list before)

3. Write or rather "inverse read"

  • parts.mit parses the document (rdflib, redland...)
  • parts.mit verifies the ontology/content
  • parts.mit inserts a new record (ignores any properties that are not defined in its own ontology -- the crg people may be experimenting with additional data)
  • parts.mit adds a property "owl:sameAs <brickit.crg/parts.n3#BBb_F0101>;" to the new record
  • ...or it may not copy it at all (DRY -- Don't Repeat Yourself), but just link it into the appropriate biobrick families and cache it for faster queries ...

Note that there is no write access in this scenario, which means no authentication is needed either. It is up to the receiver to decide whether or not to ignore the RSS and what to do about the new record.
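
A rough Python sketch of steps 2 and 3 with rdflib; the host names come from the example above, and the ontology check is reduced to a simple namespace test:

from rdflib import Graph, URIRef
from rdflib.namespace import OWL

remote = URIRef("http://brickit.crg.es/parts.n3#BBb_F0101")
local = URIRef("http://parts.mit.edu/parts.n3#BBb_F0101")

# step 2: read access -- fetch and parse the served N3 document
g = Graph()
g.parse("http://brickit.crg.es/parts.n3", format="n3")

# step 3: copy only triples whose predicates our own ontology defines,
# ignoring any experimental extra properties from the crg people
known = Graph()
for s, p, o in g.triples((remote, None, None)):
    if str(p).startswith("http://biobricks.org/ontology/"):
        known.add((local, p, o))

# link the new record back to the original
known.add((local, OWL.sameAs, remote))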

Discussion

The registry biobrick documents can serve as hooks (unique addresses) to link further information into the developing knowledge graph. However, software tools that can gather and integrate distributed RDF information do not yet really seem to be available. Changes to the ontology are decoupled from the data -- the data model can evolve over time with minimal perturbation of existing data. The question is whether this model is feasible with the available tools.

Potential Benchmarks

As the standard evolves, it will be necessary to test that it accomplishes what we set out to do with it. Since the standard will likely have many moving parts and be put to a wide range of applications, a variety of tests for different features and aspects of the standard will be necessary.

This section attempts to define a variety of potential benchmarks for the standard. Ideally, each defines a specific problem whose results are clearly interpretable, while describing the problem in as technologically neutral a manner as possible. The working group should regard these as rough suggestions or guidelines.

Benchmark Descriptions

Document formats

create a new XML format

REST interfaces (e.g. as generated for Django) can publish automatically as XML.

adapt existing CellML, SBML XML formats

create a custom file format

use Turtle/N3 notation for semantic web documents

I somewhat share the reservation about completely new file formats, but the non-readability and general nastiness of XML is also an issue. A good solution, IMO, would be to use the Notation3 format developed by the semantic web folks. It is concise, human-readable and editable (I used it myself some years ago) *AND* is equivalent to the RDF/XML serialization. That means there is a well-defined translation back and forth, and many libraries and tools do the conversion. Being semantic web, it also solves the linking problem (everything is a link).

Quick example (links are not 100% correct). The MIT server could serve the following document for parts.mit/biobricks:

# shortcut definitions for frequently used resources ...
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
@prefix owl: <http://www.w3.org/2002/07/owl#>.
@prefix bbf: <http://biobricks.org/ontology/1.1/>.
@prefix harvard: <http://harvard.edu/registry/parts#>.
@prefix : <http://parts.mit.edu/registry/parts#>.

# define a biobrick hosted at this address
:BBa_0001
       rdf:type        bbf:biobrick;
       bbf:sequence    "AAACCCGGG";
       bbf:similarTo   :BBa_0003, harvard:BBa_J1000, :BBa_00010.

# add information to a biobrick defined elsewhere
harvard:HBB_J1000
       owl:sameAs      :BBa_0001.

... continue for all other biobricks

OK, one can argue about human-readability, but it is at least possible to understand and edit these documents (and they are much better than the equivalent XML).
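
The claimed equivalence with RDF/XML is easy to check with rdflib; a sketch of the round trip (the file name is hypothetical):

from rdflib import Graph

g = Graph()
g.parse("parts.n3", format="n3")    # read the N3 document above
xml = g.serialize(format="xml")     # the same triples as RDF/XML

g2 = Graph()
g2.parse(data=xml, format="xml")    # ... and back again
# no blank nodes in this document, so a straight triple comparison works
assert set(g) == set(g2)            # no information lost in either direction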