In our previous post, we talked about the importance of data provenance for clinical diagnostics laboratories—that is, tracing the origin and changes to critical data such as electronic health records, analytical results, and workflow records. But no conversation about data is complete without also considering the ontology used.
Proper data management is crucial for labs
If your laboratory wants to scale throughput or make clinical breakthroughs, proper data management is critical to preventing the friction that will slow your organization down. A standardized ontology facilitates interoperability, letting you integrate data with other applications and within workflows. Although the structure of your data may be invisible while it is generated during the lab’s daily work, it becomes extremely apparent the moment you begin reporting or analysis, or integrate one system with another.
A standardized ontology essentially future-proofs your data by ensuring it’s as compatible as possible with the applications, services, or standards you need to interact with (for example, HL7 FHIR, GA4GH, W3C-PROV, and the FDA BioCompute Object project).
“Good data management is not a goal in itself, but rather is the key conduit leading to knowledge discovery and innovation, and to subsequent data and knowledge integration and reuse by the community after the data publication process.” – Wilkinson et al., “The FAIR Guiding Principles for scientific data management and stewardship”
What is an ontology?
In the field of information science, an “ontology” is a formal, reusable specification of the named concepts within a particular domain and the relationships between them; in practice, it is often published as a standard, reusable class hierarchy for data. Think of ontologies as standardized ways of encoding not just the data itself, but the relationships between data points.
Two common ontologies that people around the world use every day are the imperial and metric measurement systems. To illustrate how important these ontologies can be: remember the loss of NASA’s Mars Climate Orbiter in 1999? One engineering team supplied thruster data in imperial units while another interpreted it as metric (i.e., they were using two different unit ontologies), and the result was catastrophic: the spacecraft was lost.
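To make the unit mismatch concrete, here is a minimal sketch (the function name and record format are invented for illustration, not taken from any flight software) of how tagging a value with an explicit unit turns a silent misreading into a detectable, correctable one:

```python
# 1 pound-force second expressed in newton-seconds (SI).
LBF_S_TO_N_S = 4.44822

def to_newton_seconds(value, unit):
    """Normalize an impulse measurement to newton-seconds."""
    if unit == "N.s":
        return value
    if unit == "lbf.s":
        return value * LBF_S_TO_N_S
    # An unrecognized unit fails loudly instead of being silently misread.
    raise ValueError(f"unknown unit: {unit}")

# A bare number would have been assumed to be newton-seconds;
# an annotated value is converted correctly.
print(to_newton_seconds(2.0, "lbf.s"))  # ≈ 8.896
```

The key design choice is that the unit travels with the value, so a consumer either converts correctly or rejects the data outright.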
Furthermore, even in the modern era, ontologies are not always used to the best advantage. Although scientists have been storing data on computers for the past four decades, and the Internet has made sharing it easier, it’s still regularly saved in different formats and lacks the standardized metadata needed for interoperability.
An ontology can help solve these challenges by storing data together with its context: references to other relevant data (metadata). It can also store data in a machine-readable format so other systems can use it. By applying or enforcing an ontology, you can ensure that other systems will understand the data in yours.
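As a sketch of what “data with its context” can look like, here is a record in a JSON-LD-style layout. The field names and IRIs below are hypothetical placeholders, not references to real ontology terms:

```python
import json

# A measurement stored with its context: each field is linked to a
# vocabulary term (placeholder IRIs), so a consuming system can
# interpret the record without guessing what its fields mean.
record = {
    "@context": {
        "analyte": "https://example.org/ontology/analyte",
        "value": "https://example.org/ontology/measured_value",
        "unit": "https://example.org/ontology/unit",
    },
    "sample_id": "S-0042",
    "analyte": "serum_glucose",
    "value": 5.4,
    "unit": "millimole_per_liter",
}

# Machine readable: any consumer can parse the record and resolve the
# @context links to learn what each field denotes.
print(json.dumps(record, indent=2))
```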
Data models, by comparison, are sometimes used to rigidly document a formal hierarchy for data in a software system; the semantic data model, for example, is now seeing early adoption within software engineering. The downside of data models is that they tend to be tied to a particular application and the perspective of its designers. Ontologies, on the other hand, are relatively application-independent: they consist of generic information that can be reused across different applications, making them broadly appealing to sectors, such as clinical diagnostics, that rely on shared data.
Ontologies are embedded in modern software and systems integrations. They are particularly useful when communicating critical data about the real world between separate organizations. Some relatively well-known ontologies include:
- The Human Phenotype Ontology. Used to describe human characteristics, often in research datasets.
- The International Classification of Diseases (ICD). Used to encode diagnoses of patient conditions in electronic health systems worldwide.
- The Current Procedural Terminology (CPT). Used to refer to medical procedures and services.
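In practice, these ontologies appear as coded annotations on records rather than free text. The sketch below is hypothetical (the record layout and helper are invented for illustration, and the codes are commonly cited examples that should be verified against the current HPO and ICD releases):

```python
# A hypothetical patient record annotated with ontology codes instead of
# free text, so separate systems agree on what each entry means.
record = {
    "patient_id": "P-1007",
    "phenotypes": [
        {"system": "HPO", "code": "HP:0001250", "label": "Seizure"},
    ],
    "diagnoses": [
        {"system": "ICD-10", "code": "G40.9", "label": "Epilepsy, unspecified"},
    ],
}

def codes(record, system):
    """Collect all codes from a given coding system in the record."""
    entries = record.get("phenotypes", []) + record.get("diagnoses", [])
    return [e["code"] for e in entries if e["system"] == system]

print(codes(record, "ICD-10"))  # ['G40.9']
```

Because each entry names its coding system, a receiving system can look the code up in the published ontology rather than parsing ambiguous prose.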
Lack of ontology leads to problems
If your lab does not use a standardized ontology, you’ve likely experienced issues such as:
- Inconsistent terminology. This can lead to confusion or collaborative friction between staff.
- Opinionated software, which implicitly enforces a certain way of approaching a business process. This can precipitate change requests to accommodate continuous improvement efforts.
- Inconsistent data formatting from multiple software solutions. This can force laborious and error-prone data mapping exercises.
- Lack of adherence to common standards within the sector at large.
All of this results in challenges when you want to import or export data between systems or perform complex integrations. It also makes it more difficult to collaborate with partners. But worst of all, it can lead to inaccuracies in your reporting or even data that’s potentially useless.
To give a practical laboratory example, imagine that you have a table in your LIMS database with a column titled “volume.” A collaborator also has a database column titled “volume.” Without a consistent ontology applied, there’s no way to know whether these two separate database columns are recording the volume of the same liquid, using the same unit of measurement, or whether the volume is associated with the same kind of measurement at all.
Applying an ontology provides a framework for data, requiring that the individual pieces are connected via relationships. Thus, it provides full context so an exported piece of data is mutually understandable and shareable between different systems and organizations.
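The “volume” example above can be sketched in code. This is a hypothetical illustration (the record layout, field names, and conversion table are invented): records that carry unit and analyte annotations can be merged safely, while unannotated ones are rejected rather than silently combined.

```python
# Conversion factors into milliliters for the units this sketch knows.
TO_ML = {"milliliter": 1.0, "microliter": 0.001, "liter": 1000.0}

def merge_volumes(local, remote):
    """Combine two annotated volume records, normalized to milliliters."""
    for rec in (local, remote):
        # Without unit and analyte annotations, "volume" is ambiguous.
        if "unit" not in rec or "analyte" not in rec:
            raise ValueError(f"unannotated record: {rec}")
        if rec["unit"] not in TO_ML:
            raise ValueError(f"unknown unit: {rec['unit']}")
    if local["analyte"] != remote["analyte"]:
        raise ValueError("records measure different things")
    return {
        "analyte": local["analyte"],
        "unit": "milliliter",
        "values": [local["value"] * TO_ML[local["unit"]],
                   remote["value"] * TO_ML[remote["unit"]]],
    }

ours = {"analyte": "plasma_aliquot", "unit": "milliliter", "value": 1.2}
theirs = {"analyte": "plasma_aliquot", "unit": "microliter", "value": 500}
print(merge_volumes(ours, theirs))  # both values normalized to milliliters
```

Refusing to merge unannotated or mismatched records is the point: the shared ontology makes the ambiguity explicit instead of letting it corrupt downstream analysis.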
Why is ontology important in clinical diagnostics?
A common goal behind using a standardized ontology in clinical software is to ensure all your data adheres to FAIR principles. FAIR means that the data is findable, accessible, interoperable, and reusable by you and others, now and in the future. Making sure your data is FAIR will help improve the infrastructure surrounding scientific data management and facilitate interoperability and exchange of data not only within your own lab, but with labs all over the world.
Another benefit of FAIR principles is that they promote the idea of open science. The movement toward open science is progressing slowly but surely, so using software systems that adhere to a consistent ontology is increasingly important. Also, standardized data is critical for system upgrades and migration to new systems to ensure access to historical records without laborious and expensive cross-mapping work.
Using an ontology to structure your data so that it can be shared across software systems and with others in the clinical diagnostics space is one of the best ways to ensure your lab remains competitive and in the best position to collaborate on groundbreaking new diagnostics.
Contact us if you have any questions or would like to discuss an ontological lab software solution.