We are excited to announce The Fifth International Meeting on Synthetic Biology (SB5.0) on June 15 – 17, 2011 at Stanford University. This meeting will be the first time in over two years that an open and self-defined global community will share, consider, debate, and plan efforts to understand life via building, to make biology easier to engineer, and to work together so that the ramifications of such efforts are most likely to benefit all people and the planet.
Palm Drive at Stanford University
For context, SB4.0 was held at the Hong Kong University of Science & Technology in October 2008. Since then we have seen significant scientific and technical advances, including full genome synthesis, reliable synchronization of multicellular genetic oscillators, and opiate precursor biosynthesis. We have also experienced increased politicization of the field, including consideration of synthetic biology by the US Presidential Bioethics Commission, and ongoing popularization of such work through activities such as the iGEM competition. In addition, given the 7 years since SB1.0 was held at MIT, we find a “second wave” of younger synthetic biology practitioners rising to prominence.
Taken together, June 2011 is the next right time for the world to come together to learn about and help define what is happening in the world of synthetic biology. Please join us.
Please see the links below for schedule, speaker, and registration information. Note that there is no registration deadline at this point; however, there is a limit on the number of registered attendees we can have at the meeting. To guarantee your spot, we encourage you to register and make travel arrangements as soon as possible.
The BioBricks Foundation is pleased to announce that we are now managing technical support for OpenWetWare. The BBF is a nonprofit organization that promotes biotechnology in the public interest. If you like OWW and want to support our efforts to keep it running smoothly, please consider making a tax-deductible contribution of any amount to help pay for hosting, web developer oversight, and other basic needs. You can make a contribution to OpenWetWare here.
This week (June 1 – June 5), OpenWetWare.org will be moving from our current server at Rackspace to a new Rackspace Cloud Server. The new server will be around the same class of machine and will run Ubuntu Linux rather than the existing Red Hat Enterprise Linux release. All backups, including MySQL database dumps and image files, will be stored external to the server via Rackspace Cloud Files. For those of you who are wondering why OWW will be using Ubuntu rather than Red Hat: Wikimedia uses Ubuntu for all of their MediaWiki servers, so using it will keep OWW close to the infrastructure that MediaWiki is tested and developed on.
The move will be done Tuesday night around 11:00 PM EST. We don’t anticipate problems but the server will briefly go down as the IP address is changed. The new server has been configured and, just after changing the IP address, the most recent snapshot of the MySQL databases from the current server will be loaded to the new one and a final file sync will be executed.
There should be no changes in the way MediaWiki and its extensions are handled. LaTeX has been installed on the new server. All extensions are working or are being tweaked before the move.
There will be no upgrade of OWW’s MediaWiki software release until the move is complete. Hard-won experience dictates that reducing variables is the right way to maximize the probability of a successful major server task.
Since OWW uses many virtual hosts, all of these will be tested briefly to make sure they are accessible. This can’t be tested completely until the change. No problems are anticipated, but if any do occur, this is the most likely place they will be.
Please submit any comments or questions to me. Either reply here or use this link and follow the contact instructions in the OpenWetWare wiki.
To look up an article using a Digital Object Identifier (DOI), there’s a cheap and cheerful way to do it based upon the work we did earlier to add access to Pubget.
Even without it, you can always resolve a DOI. Here’s a simple example: let’s say the DOI is 10.1021/ac7018574, an article by Cameron Neylon. If you want to redirect to the document, you can use the public DOI resolver like this:
You can therefore represent a DOI in OpenWetWare like this:
That would display the string doi:10.1021/ac7018574. Clicking the link won’t show you any information about the document; it will simply redirect your browser to the paper.
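The resolver mechanics above can be sketched in a couple of lines of Python. This is a hypothetical illustration, not OWW code; it assumes the public dx.doi.org resolver mentioned in this era of the DOI system (the modern equivalent hostname is doi.org).

```python
def doi_to_url(doi: str) -> str:
    """Build the public resolver URL for a DOI string.

    The resolver answers with an HTTP redirect to the publisher's
    landing page for the article -- no metadata, just the redirect.
    """
    return "http://dx.doi.org/" + doi

# The example DOI from the text:
print(doi_to_url("10.1021/ac7018574"))
```

Pointing a browser at the resulting URL performs the redirect described above.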
In addition to PubMed lookup, Pubget also includes a DOI resolver. You can access it via an RSS feed within OpenWetWare.
I’ve added a page to OpenWetWare with a set of examples and instructions illustrating how this can be done, as well as the previous examples. You can look at the list of examples (and, please, add more!) here:
The result is that the title of the document and a link allowing you to read it are displayed. There’s currently no way to format these strings directly, but that’s coming. Look at the page to see an example. I’ll simplify this to allow an even shorter syntax.
Just as pubget uses information about a university to determine what periodicals you have access to, the DOI system uses a ‘resolver’ to map a DOI into a reference to the periodical and the document described.
Biblio, the reference/citation extension in OpenWetWare, currently does not support DOIs. There is no easy way to import metadata such as periodical, edition, authors, and even title without a bit of hacking. Let me know if anyone is interested in seeing this support added.
If anyone has questions about the use of a feature like this, please let me know. As I mentioned, don’t hold back on adding examples to the page in OWW.
MediaWiki is the software that OpenWetWare.org is built on. We customize it by applying our own styling to the pages, adding our own member-management software, and either writing our own extensions or downloading and installing others. In general, we try like hell not to touch the ‘core’ MediaWiki code. It would be like a lab scientist starting with a standard protocol and then making many modifications to it for his or her own purposes. That’s OK until someone changes the protocol you started with. You then have to figure out how to retrofit your procedures, ingredients, and what-not to the new one. Sometimes it’s easy. But sometimes it’s just too much, and you revert to the original, throwing out your own potentially useful enhancements.
MediaWiki has a number of built-in features. One of them is the “magic link” feature I mentioned in the last message. At some point, someone added a feature to MediaWiki that, when a specific keyword is found, checks whether the next word is recognizable and acts on that basis. In the case I discussed, the keyword was “PMID” followed by a recognizable PubMed number. The rules for such a number are very simple to understand, but they are not always easy to make work reliably. Since any series of digits after the keyword will be seen as a PMID, the wiki page will contain a link to the PubMed database, offering users the chance to see the citation for a paper. If the paper doesn’t exist, the lookup will still take place. As the saying goes, “Garbage in, garbage out”. The change we made was a bit of a “core” change, but we’ve already moved it into an ‘extension’, allowing us to move to a new MediaWiki version without losing our ‘core-like’ change.
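The magic-link behavior described above can be sketched with a short regex-based rewrite. This is an illustrative approximation, not MediaWiki’s actual parser rule (which is more involved); the PubMed URL form is the standard public one.

```python
import re

# Keyword "PMID" followed by whitespace and a run of digits.
PMID_RE = re.compile(r"\bPMID\s+(\d+)\b")

def linkify_pmids(text: str) -> str:
    """Turn each 'PMID <digits>' occurrence into a PubMed link.

    Any run of digits after the keyword is treated as a PMID, even if
    no such paper exists -- "garbage in, garbage out".
    """
    return PMID_RE.sub(
        lambda m: f"[http://www.ncbi.nlm.nih.gov/pubmed/{m.group(1)} PMID {m.group(1)}]",
        text,
    )

print(linkify_pmids("See PMID 12345678 for details."))
```

The simplicity of the number format is exactly what makes this kind of blind rewriting feasible.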
We removed another document identifier that was built into MediaWiki. This one was ‘RFC’. The term came from the Internet: all aspects of Internet standards are documented in a “Request For Comment”. Biopart RFC standards are based upon similar documents and processes. In this case, we removed the RFC “magic links” to keep OpenWetWare from turning references to Biopart RFCs into Internet RFCs.
MediaWiki also supports magic links for ISBN lookups. Again, the very simple syntax of the number itself makes this possible.
One identifier that’s conspicuously absent from this list of global standards is the academic publishing world’s standard, the DOI (Digital Object Identifier). The standard differs from some of the other schemes. For one thing, you need to pay for a license to issue them. Another is that, as far as we have ever determined, you also need to pay for each DOI issued. Unlike the simple, regular PMID, the DOI label itself is a variable-format string that isn’t easy to handle. And once you have located a DOI in a document, a third party needs to step in to do the actual lookup and resolve it to the publication it identifies.
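To make the “variable string” point concrete, here is a hedged sketch of DOI detection, loosely based on Crossref’s published regex recommendation. It is an illustrative approximation, not a complete grammar: publishers choose the suffix, so no pattern is fully reliable.

```python
import re

# "10." prefix, a registrant code, a slash, then a publisher-chosen
# suffix that can contain punctuation -- which is exactly why DOIs are
# harder to detect than the all-digit PMID.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def find_dois(text: str):
    """Return all DOI-like substrings found in text."""
    return DOI_RE.findall(text)

print(find_dois("See doi:10.1021/ac7018574 for the article."))
```

Detection is only half the job; the matched string still has to be handed to a resolver to reach the actual document.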
In the US, the PubMed database is administered by the NIH. Any PMID can be looked up via a single interface. The simplicity of this is what allowed Pubget to step in and cut out the NIH by directing a reader with a valid subscription to the actual text of a PMID-identified reference.
You can do something similar with a DOI, if the content of a string can be identified clearly as a DOI. At some point OWW may add support for the DOI in this manner. That would mean that Biblio references could contain them, and any OWW string that either started with “doi:” or followed the word “DOI” would lead to the article itself. As with Pubget, resolving a DOI to a particular publisher’s online repository does require a subscription to the periodical. Another use would be OWW issuing a DOI for a page. That bears more explanation. But also more research.
I’ll provide a simple example of a DOI lookup via a resolver if people are curious.
But not all the time. By the time we’re reading an OpenWetWare (OWW) page and notice a reference to a published paper, we’ve come to the page through a search engine. At this point, a reference is no longer a starting point for a new search. It’s now contextually enlivened information.
We want to read it now. Our interest has already been qualified. Searching is behind us. Is this the time to go to another site to read about the article and to find more? For me, at this point, I want to decide how relevant an article is by reading it. “Let me decide!”.
PubMed’s website is where OWW references typically end up. The PubMed ID (PMID) is the common identifier for most biological research output. The US NIH acts as a proxy, redirecting us to the actual article. If we have rights to read it via an online subscription, we eventually get to see it.
As we’ve experienced far too often, getting to the article through PubMed isn’t direct. The process looks something like this:
You see a citation.
You want to read it.
You click the link.
You read the reference.
You click the link.
You go to the publisher’s site.
You click the link.
If your institution has a subscription, you read the article.
Pubget is a service that cuts through this process. It allows you to click on a link and read the article.
Using it, the process gets collapsed to this:
You see a citation.
You want to read it.
You click the link.
If your institution has a subscription, you read the article.
Note the steps. One click is now required. The rest is all about thinking, wanting, and reading. So: what if some of the power of this service were brought to bear within OpenWetWare?
Based upon a request last week, we looked into how Pubget works. With no official affiliation with Pubget, I’ve added a number of features to OWW that now make this possible in an almost invisible manner.
What do you have to do to take advantage of Pubget?
If you’re a member of OWW, you can start by going to your preferences page to set your institution. Just log in, click on the “My Preferences” link at the top of the page, and go to the ‘Misc’ tab. It’s the last tab on the right. Look for the list of participating universities here:
Select the name of the institution where your lab is located. Like Pubget itself, this feature will only work when you’re physically working on that network. Once you set this, you need never change it until you graduate! If you’re at home and not at work, Pubget will use the default setting: any subscriptions held by your university will be unavailable until you get back to the lab. This isn’t part of OWW’s Orwellian plot to keep you at your bench for as much of your life as possible; it’s a technical issue. At MIT, we use browser certificates that extend this so that I still get access to the same content at home. I’m sure this isn’t universal, but try it and see how it works. We’re evaluating how we can extend this capability over time to other universities. If your university uses VPN connections, you are virtually on your lab’s network: all subscriptions will continue to work.
Even if your university isn’t on the list, this feature will give you access to the full text of all open-access journals, such as those published by PLoS.
The effect is that a growing number of OWW lookup features will use this setting to pre-set the URLs you use, ‘cutting to the chase’ and making content available when you click, not a few seconds later. If you want to review several articles, those few seconds add up quickly. In addition, Pubget provides a search interface that ‘knows’ the format of many of the most important journals.
What parts of OWW can use Pubget?
General OWW text.
Try this. Edit an OWW article you’ve created containing a pubmed reference. Reformat the reference to use this format:
Save the document. The text is now a clickable reference. If you’ve set your preference, the article will be opened for you to read if it’s available as a PDF, along with associated reference information. You can use the PubMed search to find related content as well. Any existing pages using the ‘PMID’ tag will behave the same way.
If you use the Biblio extension for publishing citations, a Pubget link is now present whenever there’s a PubMed reference for the citation. Just click on the link to read the text.
A Pubget gadget is a code snippet you can create at the pubget site to provide a search using their service. To create one, go to the pubget site and create it using this url:
Hello! Is this thing on? The last 12 months have seen significant life changes (seemingly successful) for many of the people within and around the OWW community. Because OWW is a community (of researchers), we are past due for an update on how we are doing and for open discussion of where we might want to be heading.
First, a few of the changes:
All of the founding researchers who created and obtained funding in support of OWW managed to earn their PhDs from MIT. Many of these folks have successfully launched a new company, Ginkgo BioWorks, in order to help make biology easy to engineer. Indeed!
Lorrie LeJeune, who was our Managing Director, was lured away to become a Senior Editor at Nature Education, an incredible opportunity for her to impact the lives of many learners. Good luck, Lorrie!
The Endy Lab wound down at MIT and has been reborn at Stanford. Personally, I’ve moved twice, sold one condo, bought one house, helped to design and manage the construction of a new laboratory, and have been assembling a new research team. Phew.
Second, what’s not changed:
Bill Flanagan remains gainfully employed at MIT, working to make OWW better and helping to put out the fires that flare up. Simply put, Bill is an incredible resource for OWW and we are ridiculously lucky to have somebody at his skill level and with his strategic perspective at the heart of OWW.
We currently maintain funding from the US National Science Foundation in support of OWW. To clarify one point in the recent and fantastic article by Jakob Sukale, the NSF grant expires 30 April 2010. This grant currently pays for Bill’s salary and our server costs. We are currently underspending on this grant and I will likely ask for a no-cost extension which, if granted, could extend our existing funding runway to April 2011.
Third, who is OWW?
I’ve found it very useful to understand who is actually using OWW. I’d suspected that some people tend to talk about OWW and openness in research but that fewer folks are actually living the dream, so to speak. Well, turns out that thanks to Bill, OWW maintains a statistics page here. There are ~6000 registered OWW users (roughly doubling over the past year). About 50 different users make edits to OWW pages on any given day. About 500 unique users make edits each month. Over 100,000 unique visitors browse OWW each month. This is incredible!
From a different perspective, OWW is incredibly small. We also represent a broader experiment in changing the process of research that is very much in a fragile intermediate stage of its development. Michael Nielsen did a good job of capturing some of the issues in his recent article, “Doing Science in the Open.” Stated differently and from a personal perspective, I would currently be hard pressed to make a successful argument that supporting and using OWW has made the research in my own laboratory significantly better, as judged by our traditionally published results. On the one hand, we had a great experience using OWW as a platform for developing a shared reference standard for measuring promoter activity in vivo. On the other hand, using OWW as it exists today has led to increased frustration with the slow inanities to be found within the conventional research publication process, while simultaneously and naively reducing the pressure to publish more formally and enabling others outside the (v. small) OWW community to “borrow” results without giving credit. Perhaps this shouldn’t be surprising. All said, I’m more invested in OWW than ever before, and am convinced that we are figuring out a new way to do research. We just have a lot of work to do in order to make the transition complete.
So fourth, what’s happening in terms of thinking about where we might go?
Lorrie LeJeune and Jason Kelly did a tremendous job exploring the entire process of research, from brainstorming ideas to promulgating results. Some of these ideas are summarized here. Many interesting questions and debates arise from considering this framing. For example, is OWW about the information and knowledge maintained on our servers, or is it about the community of researchers that produces this content? (personally, I think that the answer is both). Stated differently, should OWW support the process of research or should we focus on the capture and promulgation of research results? (again, I’d vote both). As a different example, does OWW exist primarily in order to stand as a shining beacon of openness in research, or are we simply trying to make the research process better which, given today’s information and communication technology platforms, tends to select for doing many more things in the open? (more on this third example below).
Bill Flanagan and I have been churning through the exciting opportunities that seem to continuously emerge given ongoing advances in information and communication technologies. Some people refer to OWW as a wiki. This makes me cringe. Wikis are great but we likely need to transcend this framing in order to best realize solutions that could be developed in service of our community and our work. You can find many early examples of this, such as Bill’s pilot efforts to integrate OWW with online document systems (e.g., Google Docs).
So, where should we go? My own sense is that we should go meta and support the integration of many web-based tools and communities in support of making the research process better. We will end up doing many more things in the open as a result. We also need to partner more effectively with existing modes and channels of peer review and recognition. But this is just my sense, so please chime in with your two cents, either via our Google discussion group, in the comments below, or by editing the appropriate OWW page. We need to hear from the people who are depending on OWW, or who would use OWW if <blank> happened. Also, for those of us for whom OWW is an essential part of our research existence, please participate in discussions about how to best guarantee the future funding of our operation. We have time to work through different models, but need to start doing so now. Our Discussion list is just a click away.
My life at OWW has been an endless stream of messages articulating Austin’s long-standing feature and technology suggestions, which I slowly get around to adding. The time from “flash” (of insight) to “bang” (of getting the idea online) is longer than I’d like; I hope it will diminish eventually. But for now, this is what it is!
I’ll provide a full list of these extensions when I’ve completed the import of them.
Personally, I think we should all do our best to start keeping him company.
If there’s something you have to do over and over again in OWW to do your work, consider using the discussion area I’ll add, in order to get the ideas flowing.
Expect the first set of Gadgets, with instructions, to be available this week.
If anyone wants to volunteer to help out with testing Gadgets prior to our including them in the central library, please let me know. We’re not limiting inclusion of Gadgets because we want to suppress open science, by the way. It’s just that in programming, anything that can fail, will. I just don’t want an infinite number of new lab notebook pages to be created just because someone wanted to automate his or her own task and didn’t test!
Here’s a link for more information on MediaWiki Gadgets:
By the way, don’t confuse Gadgets with Widgets. We may add Widgets as well. Unlike Gadgets, once enabled, Widgets can be added by anyone to any of their pages. Where Gadgets are more about creating content and using OWW, Widgets will be useful for extending OWW to interact with external data.
What is a Garage Science Workshop-Seminar, you ask? MediaLab-Prado explains:
The socialization of technology and the accessibility of information available on the Web make it increasingly easy for anyone to have the possibility of building a home laboratory. Garage science is nothing new but home laboratories are connected now more than ever before. There are home laboratories of all kinds: technology factories, chemistry or biology labs, artists’ studios, places to rehearse, etc.
These home laboratories have a worldwide scope via the Web, which serves as a space for the dissemination of projects and the exchange of knowledge and techniques. These online communities are accompanied by a proliferation of onsite events, such as dorkbots, barcamps and hackmeetings, where people who only knew each other via the Web can meet face to face and share their achievements and experiences.
The communities formed this way provide citizens with the capacity to develop scientific-technical knowledge comparable to what is produced in the major laboratories. “Citizen science” can serve to explore questions such as: How are the foods we eat made? What possibilities exist in biogenetic research? What is the code that makes the machines we use work? How are those machines manufactured? Based on this knowledge, experimental and critical formulations and objects can be produced proposing new paths and goals in these fields.
Interactivos?’09 aims to explore these practices, where art, science and technology meet. We invite the participants to turn medialab into a garage laboratory where low-cost, accessible materials are used to develop objects and installations that combine software, hardware and biology. There’s license to fail!
As you can see, this event looks to be quite interesting and should be quite inviting for the DIYBio folks and their projects.
Keep in mind that the final call for projects and papers is December 14th, 2008.
Those in the open access movement had watched BioMed Central with keen interest. Founded in 2000, it was the first for-profit open access publisher and advocates feared that when the company was sold, its approach might change. But Cockerill assured editors that a BMC board of trustees “will continue to safeguard BioMed Central’s open access policy in the future.” Springer “has been notable…for its willingness to experiment with open access publishing,” Cockerill said in a release circulated with the email to editors.
No information yet as to how much this acquisition cost.
What do you think about this? Will Springer just “experiment” with open access publishing for a while and then close the gates? Or is this a genuine attempt to join the OA movement?