What it takes to get smart
17 Sep 2015 by Evoluted New Media
Simon Stoddart tells us how to maximise the value and avoid the pitfalls of smart labs
Unless you’ve spent the last 20 years in a cave, you will have spent some time considering how modelling software, computing power and analytics platforms can advance the R&D process. Such technologies, among others, can speed innovation by screening out huge numbers of unpromising experiments and help identify higher value products, as well as automating many of the laborious tasks which once constituted the bulk of lab work.
The upshot is increased speed, risk reduction, increased value of innovation, and step changes in productivity – creating smarter labs where researchers can limit time spent pursuing dead ends and zero in on the right result as quickly as possible.
But many have yet to take full advantage of this capability. Sometimes this is down to lack of time or awareness, but often it is down to a lack of proper planning. A piece of technology is bought with promises that it will do everything, but fails to live up to the hype because it is not used to full effect, not properly set up and integrated, or perhaps was the wrong thing to begin with.
Few dispute the value of making their labs smarter, but this is often seen as a purely technical challenge. We advise a more strategic approach: looking at your overall challenges and working with all stakeholders to identify critical questions and map out how best to answer them. We have seen smarter strategies lead to 15% decreases in time to value, 20% increases in the value of innovation and up to 50% increases in productivity.
Only once you have a clear vision of what you want to achieve should you begin carefully identifying and selecting the technologies that can help you realise the transformative potential of investment in R&D informatics, analytics and modelling.
But before taking any action you should plan for a number of potential pitfalls which limit the effectiveness of smart labs if not properly addressed. In planning your technology strategy, it is important to align your systems with the supporting IT, people, processes and business management.
If you want enthusiastic adopters, the new technologies must benefit the people actually using them. Begin consultation and communication early on. Making users feel part of the process makes a huge difference to adoption. But ultimately they also need to see the benefits to themselves.
For example, we worked with a large pharmaceutical company to transition its Excel-based data capture system to a more innovative information platform, which ensured data could be easily accessed and understood by modellers for better modelling across the drug pipeline.
The problem was that all the benefit went to the modellers and all the extra work to the biologists, so initial interest was low. But a simple solution was found: setting up the system to produce a report after the results had been entered, which the biologists could use in their experiment write-ups. This extra feature was not expensive, but it was the carrot needed to ensure that everyone involved would benefit from the change, and so participate enthusiastically.
The outcome was dramatically improved quality, accessibility and confidence in the data across the organisation. It has virtually eliminated the need for modellers to manually locate and collate data from preclinical in vivo studies (which used to take up to 50% of their time), delivering significant efficiency savings.
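As a rough illustration of that kind of ‘carrot’, the sketch below (in Python, using a purely hypothetical function name and results structure rather than the actual platform’s API) shows how a write-up-ready summary could be generated automatically as a by-product of data entry:

    # Minimal sketch with hypothetical names: once a biologist enters results,
    # the platform hands back a summary they can reuse in their write-up.
    from datetime import date

    def render_writeup_report(experiment_id, results, operator):
        """Build a plain-text summary from the entered results.

        `results` is assumed to be a list of dicts with 'sample_id',
        'readout' and 'units' keys; an illustrative structure, not a
        real product schema.
        """
        lines = [
            f"Experiment {experiment_id}: summary report",
            f"Results entered by {operator} on {date.today().isoformat()}",
            "",
        ]
        for r in results:
            lines.append(f"{r['sample_id']:<12} {r['readout']} {r['units']}")
        return "\n".join(lines)

    print(render_writeup_report(
        "EXP-0042",
        [{"sample_id": "S-001", "readout": 12.3, "units": "nM"},
         {"sample_id": "S-002", "readout": 8.7, "units": "nM"}],
        operator="J. Bloggs",
    ))

The point is not the code itself: the extra output costs little to build, but it gives the people doing the data entry something back.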
A single solution and a common way of working across a multi-site organisation often makes good sense. However this is not always the case, so look before you leap when considering a global implementation.
If you need to move people and work between sites flexibly, then a consistent system will be sensible. But for this to be valuable you need to be confident that everything will be consistent and standardised at the level at which a typical system – say a laboratory information management system (LIMS) – will be used. Even if it is just samples and materials that move around, the relevant data types will need to be standardised.
On the other hand, if the motivation is to reduce licence and support costs, it may be best to decide group by group based on their needs, rather than buying everyone a licence for a level of technology most don’t require. In this case, negotiate a sliding scale of licence discounts upfront, whilst only committing to implementation at the sites with the greatest benefit or most urgent need.
For labs that provide a routine service to other areas of the business, e.g. NMR spectroscopy, the focus will be on delivering a reliable service on a shrinking budget. This may require purpose-built systems dedicated to specific data management tasks, which can be used consistently over a long period.
For research labs, where there is less routine work and more specific needs such as securing IP, requirements will change over time, so solutions that are amenable to that change will be necessary.
Be careful of the word ‘flexibility’: it can mean a supplier can build something to any specification, but once built it will be painful and expensive to change. As part of your requirements analysis, work out what flexibility you will need and make sure that suppliers can demonstrate it against specific, detailed examples.
We have worked with large research companies that ended up buying a lower-tech LIMS than originally expected. Smaller suppliers could demonstrate that users would be able to carry on with configuration and localised implementation in different labs as the system was rolled out, whereas a ‘larger’, better-known system with more features required the ‘configuration’ to be done by the supplier, making reconfiguration difficult and expensive.
In a research lab, processes change all the time. Scientists develop new assays and experiments that generate results which can be differently structured or in new formats. This is often the driver for introducing new systems. However in this moving picture of processes, it is essential to have some fixed points and standards around basic data structures. For example, what do you mean by a batch, sample or lot? How will you in the future be able to compare results between experiments and samples, to track quality trends or to carry out more sophisticated predictive analytics or data mining?
Doing this successfully requires considerable thought and experience in data modelling, data architecture and analysis of metadata.
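To make that concrete, here is a minimal sketch (in Python, with illustrative names and fields that are assumptions rather than a recommended standard) of fixing the basic entities up front so that results stay comparable later:

    # Illustrative only: agree what a batch, sample and result mean,
    # so records from different experiments can be joined later.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict

    @dataclass(frozen=True)
    class Batch:
        batch_id: str      # the agreed identifier for a batch (or lot)
        material: str

    @dataclass(frozen=True)
    class Sample:
        sample_id: str
        batch_id: str      # every sample traces back to exactly one batch
        taken_at: datetime

    @dataclass
    class Result:
        sample_id: str
        assay: str
        value: float
        units: str
        metadata: Dict[str, str] = field(default_factory=dict)  # e.g. instrument, operator

    # Because every Result carries a sample_id and every Sample a batch_id,
    # results can later be compared across experiments, used to track quality
    # trends, or mined, without guessing what a 'batch' meant at the time.

However the entities are named in your own systems, it is the fixed, shared identifiers and metadata that make later analytics possible.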
It is also important to think about application architecture at the conceptual level, to avoid messy and expensive collisions between, say, cross-functional workflow designs (e.g. request management) and integrated functional solutions, as are typical in compound management and high-throughput synthesis or screening.
For example, replacing local systems with a global system may require integrating functions across much of the research workflow, from molecule synthesis and assay requesting, to compound management, data analysis and an electronic laboratory notebook (ELN). The high level of integration between the systems, both old and new, and the regional differences in working practices will mean many unexpected dependencies and other surprises are likely to emerge. Understanding these and how they can be integrated into the new system, as well as decommissioning legacy systems, is a key part of any upgrade.
Much R&D spending these days is outside company boundaries – through academic or corporate partnerships or outsourcing routine activities – and this trend is set to increase. To make it easier to work with external collaborators, companies are opening up their information systems. Approaches range from allowing outsiders direct VPN access to an internal application, to creating cloud-based collaboration spaces outside the corporate firewall.
Alongside these technical challenges, information security, ownership of IP and the relationships between different suppliers must also be considered.
Systems should be designed to facilitate the transfer of materials and results across organisational boundaries. For sample handling, you will likely need to provide collaborators access to your sample IDs and critical context, while protecting your sensitive information from outsiders and viruses. Try to use standard barcoding schemes so that third parties can charge you less for reading in your sample data. Make sure that results systems can accept direct external inputs or, at least, industry-standard formats.
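As a sketch of what this looks like in practice (in Python, with a purely hypothetical ‘ORG-’ identifier scheme and a plain CSV layout standing in for whatever barcode and file standards you actually agree with collaborators):

    # Hypothetical scheme: 'ORG-' prefix, six digits, one check letter.
    import csv
    import io
    import re

    SAMPLE_ID_PATTERN = re.compile(r"^ORG-\d{6}-[A-Z]$")

    def is_valid_sample_id(scanned: str) -> bool:
        """Check a scanned barcode against the agreed identifier scheme."""
        return bool(SAMPLE_ID_PATTERN.match(scanned.strip()))

    def load_external_results(csv_text: str):
        """Accept collaborator results supplied in an agreed CSV layout."""
        rows = []
        for row in csv.DictReader(io.StringIO(csv_text)):
            if not is_valid_sample_id(row["sample_id"]):
                raise ValueError(f"Unrecognised sample ID: {row['sample_id']}")
            rows.append({"sample_id": row["sample_id"],
                         "assay": row["assay"],
                         "value": float(row["value"])})
        return rows

    example = "sample_id,assay,value\nORG-000123-A,IC50,42.0\n"
    print(load_external_results(example))

The simpler and better documented the agreed identifier and file conventions, the less effort (and cost) a third party needs to load your samples and return usable results.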
The most challenging part of these types of projects is often getting all of the disparate IT groups to work together. Achieving this requires strong organisational and communication skills.
Research organisations considering investments in smart labs must fully understand and mitigate the risks of non-alignment between the supporting IT on the one hand, and the people, processes and business management on the other.
Our experience is that the above criteria provide a strong foundation for success. Combining them, and other project tools, with an overall approach that respects the specifics of an organisation’s research environment, level of IT maturity, priorities and problems will ensure you find the most suitable way to manage these risks and get the most out of your technology investment.
The author:
Simon Stoddart is a business analyst and consultant at Tessella with more than 10 years’ experience in pharmaceutical R&D, in particular with screening data management and ELNs.