The PLA2018 central theme will focus discussions on how to empower your eData life cycle.
In the new era of the internet of things and artificial intelligence, the majority of laboratories still have a long way to go in moving from paper-based processes to paperless ones.
At Paperless Lab Academy®, we have spent the past six years building a learning platform to raise organizations' cultural awareness of how to move to less paper-based business processes. Less then becomes more. When well designed and successfully implemented, a digital transformation project reduces data integrity risks, improves effectiveness, drives change management, and increases return on investment while shortening time to market.
For the 6th edition, we propose a journey through the whole data life cycle, focusing on the four major areas of attention we have identified in such digitalisation projects. These form our 2018 central theme: “eConnect, eManage, eDecide, eArchive”.
This first post will mainly develop the first and fourth areas, and there is a real purpose in doing so. Throughout the event we will insist on thinking broadly, on having you reflect on your data life cycle, and on planning your project with the end in mind. All four areas are intrinsically related and interdependent: none of them can work well unless all are correctly designed. It goes without saying that the very last one, eArchiving, would be largely useless if not planned from the beginning. Any “super modern” connections are wasted if, in the end, archived data cannot easily be retrieved and reused, whether for compliance purposes or for reanalysis.
“eConnect”: effective workflow based on self-documenting data capture strategies
Even though data integrity applies across the whole data life cycle, the very first moment of capturing data at the source is nowadays a strong focus of inspectors and auditors. Most lab instruments now come with intelligent embedded software, and labware and sensors increasingly embrace the internet of things, ensuring the correct collection of raw data and its transfer to upper system layers.
Laboratories tend to overlook several critical aspects at this point. While the “eConnect” process should happen as lab technicians operate, reducing manual documentation and human transcription errors, we tend to forget that maintaining information about the source that generated the raw data is essential. A data value may pass through several system layers before reaching final approval, yet it should never lose its original source information. At those upper layers we are no longer working with the source data, so it must remain possible to trace back to the original data at any time.
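As a minimal sketch of the idea, the record below carries its source information (instrument, firmware, operator, capture time) alongside the value, plus a fingerprint that upper layers can recompute to confirm nothing was altered in transit. The field names and structure are illustrative assumptions, not any particular LIMS schema:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class Measurement:
    """A raw data point that keeps its source information at every layer."""
    value: float
    unit: str
    instrument_id: str        # the source that generated the raw data
    instrument_firmware: str
    operator: str
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Hash of the full record so any layer can verify it is unchanged."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

m = Measurement(value=7.42, unit="pH", instrument_id="PH-METER-03",
                instrument_firmware="2.1.0", operator="jdoe")
print(m.fingerprint())
```

Because the fingerprint covers the source fields too, stripping or editing the instrument information at a higher layer would be immediately detectable.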
The interconnections between systems go through data interfaces of many kinds, sometimes including the old-fashioned sneakernet mode, if not outright human transcription on paper. Every interface is a potential data integrity risk: a well-designed interface increases integrity and removes human interaction, while a poorly designed one can become a source of data integrity issues in its own right.
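One common safeguard at an interface boundary is to verify that what arrived is byte-identical to what was sent. A minimal sketch, assuming a simple file-based transfer between two systems (the function names are hypothetical):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large raw-data files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source: Path, destination: Path) -> bool:
    """True only if the destination copy is byte-identical to the source."""
    return sha256_of(source) == sha256_of(destination)
```

The same checksum can be logged on both sides of the interface, giving auditors an objective record that the transfer did not alter the data.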
Finally, in this first stage of collecting data we should not overlook the data coming from collaborators. Collaborators are generators of data and potential sources of information. When they are external, such as academic contributors or outsourced services from CROs, CMOs and service labs, an immediate security concern arises. With the latest GDPR requirements, we need to incorporate a data protection impact assessment, at least for the most vulnerable data. By May 2018, companies will need to have designed their processes accordingly. Additionally, cybersecurity measures to avoid any risk of losing data also need to be considered.
“eArchive”: essentials to secure long-term multi-departmental archiving
As stated above, a key objective of an efficient archival approach is to reduce the struggle of finding the right data. Given the growing digital universe, archiving can no longer be left to the end of a project and considered when it is too late. Nowadays we often hear concerns about legibility and format consistency over time, since a given retention period may end up requiring access to obsolete technologies.
Archiving should be approached and designed so that it reduces risks of multiple types: knowledge limited to one critical person, security breaches, loss of data, and more.
A comprehensive archiving protocol should eliminate the struggle to find data, to the point of no longer desperately looking for the one person who knows where it is. As discussed during the last edition, PLA2017, a corporate master data management and vocabulary model should support correct management and archival, facilitating a flawless track record of the data. Maximum insight can then be extracted from it. This is how eDecide can benefit from comprehensive dashboards that support trustworthy conclusions on trends, patterns and key decision values. See the PerkinElmer 2018 workshop: “eDecide: Delivering on the promise of Enterprise Search & Analytics”.
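The retrieval side of that protocol can be pictured as a searchable index over archive records, so finding data depends on controlled metadata rather than on one person's memory. A minimal sketch, with hypothetical field names standing in for a corporate master data vocabulary:

```python
from dataclasses import dataclass
from typing import Iterator, List

@dataclass(frozen=True)
class ArchiveRecord:
    record_id: str
    study: str            # controlled-vocabulary term from master data management
    instrument: str
    archived_format: str  # ideally an open, long-lived format
    location: str         # where the archived data actually lives

class ArchiveIndex:
    """Minimal searchable index: retrieval by metadata, not by memory."""

    def __init__(self) -> None:
        self._records: List[ArchiveRecord] = []

    def add(self, record: ArchiveRecord) -> None:
        self._records.append(record)

    def find(self, **criteria: str) -> Iterator[ArchiveRecord]:
        """Yield every record whose fields match all the given criteria."""
        for record in self._records:
            if all(getattr(record, k) == v for k, v in criteria.items()):
                yield record
```

A real archive would back this with a database and the corporate vocabulary model, but the design point is the same: every record is retrievable through shared, well-defined metadata.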
These topics and more are what we are looking to develop further over the two days of the 6th edition of the Paperless Lab Academy. The plenary session agenda will provide the audience with very interesting information: real situations from end users, providers sharing their knowledge and lessons learned from customer projects, and thought leaders from Gartner, Accenture and the Pistoia Alliance taking us to the lab of the future. Last but not least, our most appreciated legal contributor will provide the latest information about GDPR in a challenging and attractive way. Stay tuned!
Are you going to miss it? Book your free seat now!