In my artificial intelligence course this semester, we were assigned a research project on a topic of our choosing, focused on a novel approach that improved upon existing research in the domain of AI. Right from the start it was a significant challenge; we were given two weeks to submit a well-defined research proposal without any prior learning or experience to lean on. I felt like a third grader being asked to invent a new way to solve algebra problems: I’d heard of it, sure, but I was missing the fundamentals to build on.
But, I was in the same boat as about 60 other students, so I jumped in and started swimming. In those two weeks I consumed more journal articles and research papers than I probably had in my entire career. I began forming an idea of what I could focus on, then looked for a relevant problem others had tried to solve that I felt I could reasonably attempt to improve upon.
The Idea
Drawing inspiration from my wife, I looked at the application of artificial intelligence in the field of medicine, and particularly at tools used by providers to aid in interpreting data. That led me to explainable AI, a rapidly growing field focused on giving insight into how models arrive at their conclusions in order to build confidence in their outputs.
In the area of medicine, the tools a provider uses must be reliable and produce understandable results. The difficulty a non-technical person has in trusting an AI-based tool is that its output bears little visible resemblance to its input, whereas many traditional medical instruments provide more or less direct views of activity in the body.
For example, a machine that performs an ECG on a patient uses electrodes attached to the body to measure the electrical potential differences between them. While the science behind that is fairly technical, it is within the domain of medical expertise, and doctors develop a mental model of how electrical activity in the heart is translated into the ECG report, as well as what the results can signify, in order to use it in practice.
Let’s replace that machine with another one that also performs an ECG, but instead of showing the report, it contains an AI model that interprets the data and outputs a prediction of what the data indicates. Essentially, the machine is saying “Trust me, here’s what’s going on with the patient,” but here the doctor has no mental model to connect activity in the heart to the AI model’s output.
(Likewise, the patient would probably prefer that a human expert confirm the findings of the machine before relying on those findings for directing treatment.)
The Problem
In essence, the machine (or rather, its manufacturer) is expecting the doctor to offload the mental model entirely. When it comes to health and safety, that is not only outside most people’s comfort zone, it also introduces significant risk.
Now scale this up in complexity from the simple example of interpreting ECG reports to diagnosing illnesses or predicting responsiveness to a treatment, tasks that require far more inputs and significantly more training, experience, and mental modeling. It seems like an area ripe for improvement with AI assistance.
Yet there is a significant trust gap between healthcare providers and AI tools. The vast majority of providers do not have the time, experience, or fundamental knowledge to learn the inner workings of a technology that even many computer scientists struggle to understand.
The Solution
That gap is what explainable AI seeks to close, or at the very least greatly narrow. An explainable system is designed to provide non-technical insight into how a model arrives at its conclusions. One common approach trains a smaller “observer” (surrogate) model alongside the primary model; by probing the primary model’s inputs and outputs, the observer learns to approximate its behavior and can then highlight the factors that most influence the primary model’s decisions.
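To make that surrogate idea concrete, here is a minimal sketch in Python with scikit-learn. It is not the setup from my paper; the synthetic data, model choices, and feature names are illustrative assumptions. A small decision tree (the “observer”) is trained to imitate a larger black-box model’s predictions, and the tree’s feature importances serve as a rough explanation of what drives the black box.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for real patient data (illustrative only).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# The "primary" model: accurate, but hard to interpret directly.
primary = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The "observer" (surrogate): a small, interpretable model trained to
# imitate the primary model's predictions rather than the true labels.
observer = DecisionTreeClassifier(max_depth=3, random_state=0)
observer.fit(X, primary.predict(X))

# The surrogate's feature importances approximate which inputs drive
# the primary model's decisions.
for name, importance in sorted(zip(feature_names, observer.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2%}")
```

The key design choice is that the observer is deliberately simple enough for a person to follow; it trades some fidelity to the primary model for a picture a non-expert can actually reason about.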
Explainable AI is designed to close the trust gap by showing its work, justifying its analyses with evidence and reasoning. It gives people the confidence to use highly technical, complex systems in higher-level decision-making, and to provide their own justifications for such use. As artificial intelligence becomes more pervasive in every area of life, explainability will become increasingly vital.
In my paper, which you can read below, I explore using explainable AI to predict the mortality risk of patients admitted to hospitals with greater accuracy than simple metrics such as length of stay. I design and test an explainable system using a dataset of 130,000 in-patient records, and demonstrate how such a system can accept a patient’s record, predict their mortality risk as a percentage, and explain its reasoning by highlighting the most influential factors and the percentage each contributed to the prediction.
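As a rough illustration of that interaction (again, not the actual system from the paper; the columns, model, and synthetic outcome below are assumptions for the sake of a runnable example), the sketch predicts a mortality probability for a single record and estimates each feature’s share of influence by measuring how much the prediction shifts when that feature is replaced with its population average:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for an in-patient dataset (column names are illustrative).
rng = np.random.default_rng(0)
n = 5000
records = pd.DataFrame({
    "age": rng.integers(20, 95, n),
    "num_lab_procedures": rng.integers(1, 80, n),
    "num_medications": rng.integers(1, 40, n),
    "time_in_hospital": rng.integers(1, 14, n),
})
# Synthetic outcome loosely tied to the features, for demonstration only.
risk = 0.03 * records["age"] + 0.2 * records["time_in_hospital"] + rng.normal(0, 2, n)
died = (risk > np.quantile(risk, 0.9)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(records, died)

def explain(record: pd.DataFrame) -> None:
    """Print mortality risk and each feature's approximate share of influence."""
    baseline = model.predict_proba(record)[0, 1]
    print(f"Predicted mortality risk: {baseline:.1%}")

    # Occlusion-style attribution: swap each feature for the population
    # mean and see how far the predicted risk moves.
    shifts = {}
    for col in record.columns:
        perturbed = record.copy()
        perturbed[col] = records[col].mean()
        shifts[col] = abs(baseline - model.predict_proba(perturbed)[0, 1])

    total = sum(shifts.values()) or 1.0
    for col, shift in sorted(shifts.items(), key=lambda kv: -kv[1]):
        print(f"  {col}: {shift / total:.1%} of the explanation")

explain(records.iloc[[0]])
```

The output reads the way a provider would want it to: a single risk percentage for the patient, followed by a ranked list of the factors that mattered most and how much each one contributed.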
Through this project I learned only the basics of how to build and fine-tune AI models, and there is definitely room for improvement. But there is also great potential in such a system for enhancing providers’ abilities to take care of their patients, and I’m excited to see how AI develops in the field of medicine in the coming years.