Pondering the Ethics of Artificial Intelligence in Health Care
Kansas City Experts Team Up on Emerging Topic
Published December 4, 2019, at 9:59 AM
Artificial Intelligence (AI) — the ability of machines to make decisions that normally require human expertise — already is changing our world in countless ways, from self-driving cars to facial-recognition technology.
But the best — and maybe the worst — is yet to come.
AI is being used increasingly in health care, where researchers envision tools such as a radiology instrument that might eliminate the need for tissue samples. Knowing that, the people leading a new project called Ethical-AI for the Center for Practical Bioethics (CPB) are trying to make sure that AI health care tools will be created and used in ethical ways.
The ethical questions the project is raising should, of course, have been considered in a systematic way years ago. But the good news is that the recommendations produced by this effort may help prevent AI health care tools from being badly built or misused.
“We’ve been excited about technology since we landed on the moon,” says Lindsey Jarrett, a researcher for Cerner Corp., a Kansas City-based global health care technology company. “That has put us into a fast pace that I don’t think we were prepared for. Now we’re looking and saying, ‘OK, wait, hold on. Maybe we should re-evaluate this.’ ”
Jarrett is working with Matthew Pjecha, a CPB program associate, to produce a series of ethical guidelines for how AI should — and shouldn’t — be used in health care.
“When we’re talking about (AI in) health care, the stakes get really high really fast,” Pjecha says. “What we’re hoping comes from this project is a robust set of recommendations (about ethics) for people who are designing, implementing and using AI in health care.”
Pjecha, Jarrett and CPB leaders, such as CPB President John G. Carney, worry that if AI tools are created without first thinking about ethical issues, the results can be disastrous for lots of people.
In 2018, for instance, Pjecha gave a presentation at a symposium, attended by Jarrett, in which he looked at an AI instrument used in Arkansas to allocate Medicaid benefits. Because the tool was built without data from a broad segment of the population, its algorithm threw many eligible Medicaid recipients off the program, with severe consequences for them.
Pjecha and Jarrett later decided to work together under the CPB umbrella to help ensure that future AI health care tools are designed properly and ethically.
Once an AI tool has been created, Pjecha says, “if you get outcomes from them that you’re not sure about or uncomfortable with, it’s not easy to go back and find out why you got those.” So it’s vital to make sure that the data that goes into creating AI tools is reliable and not biased in some way.
“What we have learned,” Pjecha says, “is that AI will express the biases that their creators have.”
One way in which technology is affecting health care is through the growing use of “wearable” activity monitors, which track our daily movements and bodily reactions.
But, says Jarrett, “If someone is making really big clinical decisions based on the watch that you’re wearing every day, there are lots of times when that device doesn’t catch everything you need to know.”
Pjecha adds: “I could wear a Fitbit every day of my life and I don’t think a picture of my life would really be captured in it. But those are the numbers. And we have a kind of fascination with the role that numbers play in the provision of health care.”
Without broadly accepted ethical guidelines for AI’s creation and use in health care, Pjecha says, “10 years down the road…we would find ourselves with a health care system that is less relatable and less compassionate and less human. We know that AI systems are quickly going to start outpacing human physicians in certain types of tasks. A good example is recognizing anomalies in imaging.”
AI tools, for instance, already can find imaging irregularities at the pixel level, which human eyes can’t see. “We need to figure out what it means when providers deploy a certain tool that is better qualified to make a type of call than they are,” Pjecha says. “I’m really interested in what happens when one of these systems hypothetically makes a certain determination and a human physician disagrees with it. What kind of trust are we placing in these tools? A lot of these questions are just open.”
And, adds Jarrett, another worry is that big companies such as Amazon and Google are entering the health care sector without knowing much about health care. That could mean even less of the ethical scrutiny needed to make sure AI tools are fair.
So once again, we risk science and technology moving more quickly than our human capacity to understand and control them.
CPB and Cerner both are funding this project, though CPB continues to seek additional investments to support it.
Bill Tammeus, a Presbyterian elder and former award-winning Faith columnist for The Kansas City Star, writes the daily “Faith Matters” blog for The Star’s website and columns for The Presbyterian Outlook and formerly for The National Catholic Reporter. His latest book is The Value of Doubt: Why Unanswered Questions, Not Unquestioned Answers, Build Faith. Email him at wtammeus@gmail.com.