Antibody crisis: Order from chaos
23 Nov 2016 by Evoluted New Media
As we slowly come to terms with the damage caused by the antibody validation crisis – there is hope, says Dr Anita Bandrowski. But to solve the problem, academics, commercial players and publishers must come together
Science is reeling from outdated practices in reporting, a reagent market that has grown without common robust controls, and a set of incentives that are both pervasive and perverse.

The reporting of science began more than 350 years ago with the Royal Society¹ and, in all truth, has not changed much since. The problem with this is that scientific practice depends on sets of integrated machines, where turning a single knob can change the output of the experiment. The handling of cells or reagents must therefore be tracked meticulously, or the whole experiment risks being moot. Like the machines, reagents have also moved on from ‘buffered salt solution’, for which we paid based on the 9s (i.e. the purity desired), to highly complex molecules such as antibodies and, in extreme cases, kits that stain, activate or deactivate. The problem here is that information needed by scientists – such as a list of ingredients, or the concentration and consistency of internal components between batches – is often not divulged.
The sheer number of antibodies that biological laboratories have to grapple with is in the 2 million range – a figure that should give us pause. Virtually every target in the genome is covered by a number of antibodies that purportedly bind it. The question we should be asking is: how well is the reagent working, and for which applications? This data is not uniformly available from all vendors, and in many cases two or more vendors may sell the same antibody. If there are two or three reagents for a target, it is possible to test them all in a lab, but not if there are hundreds. In that case, one must rely on some metric of antibody performance to choose a few reagents that are likely to work in a given application.
In an extreme case, one company routinely brought products to market within a few weeks of a promising new target being published. However, the generation of an antibody requires at least that long just to immunise an animal and draw blood, suggesting that validation work was not in the company’s pipeline². Proprietary black-box reagents, or those lacking even basic validation, have no place in science. I, for one, am concerned because experimental science is supposed to be about controlling variables, not about using statistics to find trends. With poorly validated or black-box reagents, all we can say is that we observe some behavior in some system; we have no way to get to the underlying cause of a phenomenon.
The incentive structure provides another hindrance to science; it pushes scientists to write more papers and grants and to supervise students less and less³. These demands make practicing science difficult. By science, I mean the tedious work that needs to be done quietly, with few ‘aha!’ moments and many careful measurements. The work where you take data from your graduate students and throw it in the trash in front of them if it is not calibrated or validated properly, as my graduate adviser did. That is what science is: many tedious nights spent doing careful work by individuals who take a long time to come to a decision because they are more concerned about being right than about being famous.
This is why I am thrilled that in all three areas – reporting, reagents and incentives – scientists have started to come together to say that enough is enough. With regards to the reporting of findings, multiple working groups of FORCE11 – a community of scholars, librarians, archivists, publishers and research funders set up to facilitate the change toward improved knowledge creation and sharing – have started to create or bolster standards in reporting everything from data to reagents⁴,⁵. The torch is mainly being carried by journals, which say: “if you would like to publish something called science, then you must tell us exactly what you did, and we will check your methods”. A great example of this is the recent announcement from Cell Press, which scrapped and rebuilt the methods section of every paper in the latest issues of Cell. The aim was to support the publication of the complete protocol and the clear, unambiguous identification of every reagent, including RRIDs (Research Resource Identifiers).

In the reagent market, we have also had good news: the International Working Group for Antibody Validation (IWGAV) has drawn a line in the sand on antibody reagent validation. The group convened experts and came up with a set of guidelines that should serve as a good quick-reference guide for validating antibody experiments, kits included. The paper outlines a set of recommendations designed to show either that an antibody binds specifically to a target under a given set of conditions or that it does not. Most good labs will already validate all reagents that are used, including different lots of the same antibody.
In contrast, researchers who are new to a technique may take the reagent company at face value when it says “this is an antibody against protein X”. Fortunately, the paper, which outlines validation per method, also gives reagent companies a set of guidelines they could follow when they would like to say “this reagent is valid for technique Y, under conditions Z”. Indeed, one company, Thermo Fisher Scientific, has already pledged to take these recommendations to heart as it tests and qualifies antibodies. It would be a real coup if a consortium of companies did the same and we could create order in an absolutely chaotic market.

Incentives are always the most difficult to change, as they are the most entrenched in current thinking and culture. Despite this, we can see some movement in this space, this time from the National Institutes of Health in the US, which announced at the end of 2015 that, starting in 2016, most grantees would have to pay attention to their methods. It has ensured this by adding a scored section on key biological resource validation, which includes antibodies, to almost every grant. While this will not create several more hours a day in each investigator’s lab, it will certainly change the focus, making innovation just a little less important and scientific rigor a little more important.

The data is not in yet and we do not know if any of these steps will be successful in improving rigor in science. What is important is that we are having a conversation, and that this conversation is happening at the highest levels of publishing, academia, industry and government, making me cautiously optimistic that real change is coming.
References
1. www.royalsociety.org/~/media/publishing350/publishing350-exhibition-catalogue.pdf
2. www.genomeweb.com/proteomics/faulty-antibodies-continue-enter-us-and-european-markets-warns-top-clinical-chem
3. www.nature.com/news/robust-research-institutions-must-do-their-part-for-reproducibility-1.18259
4. www.force11.org/group/fairgroup
5. www.force11.org/group/resource-identification-initiative
Author: Dr Anita Bandrowski is a project lead at the Center for Research in Biological Systems at the University of California, San Diego. She trained as an electrophysiologist; however, after a postdoc at Stanford Neurology, her research has focused more on bioinformatics, neuroinformatics and biotechnology.