Reflections on responsible innovation for AI in healthcare by Valentin Tablan, Chief AI Officer at ieso
I recently took part in the “Regulatory update for AI enabled hardware” panel discussion at the Biotech Showcase, a recording of which you can view here. The conversation was very engaging, so much so that I wanted to share some thoughts about the use of AI in healthcare, data as an enabler, ieso’s experience of working collaboratively with regulators, and the importance of responsible innovation.
Why use AI in healthcare?
The panel focused on regulation, which is about controlling risks, but before getting into the details of that, let's consider the benefits that AI can bring to healthcare. Simply put, there are some exciting advances in medicine that are just not possible without AI. Science and technology have transformed medicine over the last 100 years or so. For hundreds of years, diagnosing physicians could rely only on their senses, through auscultation and palpation. As our understanding of physics, chemistry, and biology has developed, they can now also order X-rays, MRIs, CAT scans, genetic tests, and innumerable blood tests. AI is the next step in that progression, allowing us to develop tools that give clinicians visibility into high-dimensional, multi-variate spaces that are hard to comprehend without technological help. AI also allows us to work objectively with new types of data, such as images and human language, that until now could only be observed subjectively.
When embedded into digital therapeutic solutions, AI can also support care delivery. For example, at ieso we are working on automated therapy systems for the delivery of mental healthcare.
Another question we might ask ourselves is: why the sudden impetus to use AI? Why now?
Data is the enabler
The reason AI is such a mot du jour is that we find ourselves at a point in time when all the requisite ingredients are available: AI scientists have developed the mathematical techniques required to model complex domains and data types, we have access to the large computational resources required to implement these methods in practice, and large data sets are available for training machine learning models.
While the techniques are mostly in the public domain, and access to compute power is commoditised, access to relevant data can be a major challenge. As was discussed on the panel, some medical datasets are available to be licensed, but that raises questions about whether all the required use rights and consents have been secured. I also pointed out that it's important that the data sets used to train AI models are technically suitable, and that the processes that led to their creation are similar enough to the problem the AI system is intended to solve. Implementing innovation responsibly means paying attention to what could go wrong, measuring impact, and mitigating identified risks.
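One concrete way to probe whether a training set matches the intended deployment setting is to compare feature distributions between the two. As a minimal illustration (not a description of any specific pipeline), here is the classic two-sample Kolmogorov-Smirnov statistic in plain Python; the variable names and data are invented for the example:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical cumulative distribution functions of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        x = min(a[i], b[j])
        # Advance past every occurrence of x in both samples (tie handling).
        while i < n and a[i] == x:
            i += 1
        while j < m and b[j] == x:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d

# Identical distributions give a statistic of 0; disjoint ones approach 1.
train_scores = [x / 10 for x in range(100)]       # scores seen at training time
deploy_scores = [x / 10 for x in range(50, 150)]  # a shifted deployment population
print(ks_statistic(train_scores, deploy_scores))  # a large gap (~0.5) flags the shift
```

A large statistic is a prompt to investigate, not a verdict; in practice such checks would sit alongside clinical judgement about how the data were generated.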
As regards data, we are in a fortunate position at ieso: we are able to use data generated from our own work of providing mental healthcare to over 100,000 patients, over the course of more than 500,000 sessions of one-to-one therapy. That data, with the required consent and use rights in place, and once de-identified, gives us the raw material for creating automated systems for the assessment and treatment of mental health conditions.
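De-identification is a deep topic in its own right. As a toy illustration only (emphatically not ieso's actual pipeline), a rule-based first pass might replace obvious identifiers with placeholder tokens; the patterns below are deliberately simplistic:

```python
import re

# Illustrative patterns only: production de-identification must handle far
# more (names, dates, addresses, record numbers, free-text identifiers).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace each match of the patterns above with its placeholder token."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("You can reach me on 020 7946 0123 or at jo@example.com"))
```

Real systems layer statistical and model-based detection on top of rules, and validate recall carefully, since a single missed identifier is a privacy failure.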
Evidence and regulation
The panel focused on the regulatory approach to AI-enabled medical devices, especially from the perspective of the FDA. I am not a regulatory expert, so I can only offer a lay person's understanding of medical device regulation. Simply put, regulatory bodies are concerned with understanding and quantifying the risks that patients are exposed to, and understanding how those risks are managed and mitigated to bring them to an acceptable level. Put in even simpler terms: is the proposed intervention safe and effective?
In order to answer those questions, regulators look at scientific data submitted to them. This brings us back into the realm of scientific best practice. Before trying to convince the regulator that an intervention is safe and effective, responsible innovators have to prove it to themselves first. How to do that is well-established practice: you need to execute appropriately controlled and powered clinical studies. Putting your work through the process of peer review in an academic publication, whilst not strictly required, is also a good way to gain additional confidence that your results hold up to scrutiny.
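"Appropriately powered" has concrete arithmetic behind it. As a sketch only, here is the textbook normal-approximation sample-size formula for a two-arm comparison of means; real trial designs use exact or t-based methods and, above all, a statistician:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(effect_size: float, alpha: float = 0.05,
                        power: float = 0.80) -> int:
    """Participants per arm for a two-sided, two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / effect_size^2 (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# A "medium" standardised effect (Cohen's d = 0.5) at the conventional
# alpha = 0.05 and 80% power needs roughly 63 participants per arm.
print(sample_size_per_arm(0.5))
```

The formula makes the trade-off visible: halving the expected effect size roughly quadruples the required sample, which is why honest effect-size estimates matter before a study starts.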
For us at ieso, it has always been a point of pride that we would only offer a product or service that we would use ourselves, and that we would be willing to recommend to friends and family. We are determined that this will not change as we move from traditional care to digital systems that may automate elements of assessment and treatment. We have always put science at the core of everything we do, and we always rely on rigorous scientific measurement to prove to ourselves the safety and effectiveness of any intervention.
The regulator as a partner
Just because regulatory approval is one of the last stages of product development before commercialisation, it doesn't mean you should only talk to the regulator once the product is completely built and the clinical data has been collected.
To help companies mitigate some of the product development risks, regulators are available to provide early feedback. For example, they will indicate what level of evidence they expect to see in support of a certain submission, or provide feedback on a particular proposed study design. The FDA in the US offers a Pre-Submission process that can be used to engage with them prior to a pre-market notification submission and obtain feedback. We have used that process at ieso, and the output we received from the agency was very informative and valuable. It should be noted, though, that it helps to do your homework first: in order to get clear and precise answers, it helps to ask clear and precise questions. As one of my co-panellists said, "the more you put in the more you get out from an interaction with the FDA", which resonates with our experience.
Responsible innovation
While innovation is paramount to improving access to good quality care, it is important that it is done responsibly. For ieso, being responsible innovators means using the right data to train machine learning models, and we are lucky to have that. It means asking all the hard questions to make sure we are developing safe and effective products. That is not new to us – as a company that is ~10% scientists and ~20% engineers, we are built on people who cherish solving hard problems, and who believe in the value of clear, rigorous measurement.
Get in touch with us if you want to hear more about how we're transforming mental healthcare or, better still, come join us!