Mayo Platform’s Halamka on new WHO ethics guidelines for AI in medicine
For some, the idea of artificial intelligence in medicine may conjure images of robot doctors making house calls on hoverboards. But AI is already being used in health care today, and the World Health Organization recently laid out new ethics guidelines for artificial intelligence in medical settings.
The guidelines say humans — not machines — should remain the decision-makers, the technology must do no harm, and doctors must be transparent with patients about how it's being used.
Dr. John Halamka is president of the Mayo Clinic Platform, an incubator working on technology and data innovations in health care. He told host Tom Crann in an interview this week that artificial intelligence is intended to augment humans, not replace them. One realm where Halamka sees potential for AI is preventative care.
“This is what we hope the future will bring,” he said. “Keep patients healthy. Keep them out of the hospital, out of the clinic, because we’ve predicted disease and treated it before they develop.”
For instance, clinicians might use heart rate data collected by a patient’s Apple Watch to screen for abnormalities. Halamka anticipates that algorithms will someday allow doctors to predict patients’ risk of developing serious illnesses and intervene before those illnesses take hold.
With better predictive AI, however, come thornier ethical questions. Halamka gave the example of his wife, who was diagnosed with breast cancer in 2011. She’s now cancer-free after chemotherapy, radiation and surgery. Had she been informed of her risk for breast cancer and given the option to take preventative medication, Halamka said it would be up to her and her doctor to weigh the risks and potential side effects — a decision no AI could make for her.
When it comes to the ethics of artificial intelligence, “We need guidelines,” Halamka said. Algorithms, he said, don’t come with an ingredient list or nutrition label. “And so what we need is transparency as to how every algorithm performs, how it was developed.”
He said doctors should know whether algorithms have been tested and applied in ways that will work for their patients.
“It’s that level of transparency — that proof of utility — that we need going forward,” he said.