There’s Great Potential for AI in Health Care, but also Many Questions

U of T Magazine

Prof. Jennifer Gibson, director of U of T’s Joint Centre for Bioethics, is leading a new research project, “Ethics and AI for Health,” to study questions of privacy, responsibility and safety around artificial intelligence. “So often, technologies outpace our ability to address ethical questions,” she says. Some of the important issues Gibson’s project will examine:

Who will benefit from AI?
“If you’re building something that can help patients live better lives, it’s very difficult to prevent someone from using that tool to maximize profit – potentially at the expense of those patients,” says Quaid Morris, a U of T professor in molecular genetics.

A company could develop an AI tool that is very effective at, say, tailoring cancer treatment to individual patients – and then limit its availability to wealthy patients who can pay a lot for it.

Health-care systems vary around the world, which means new AI tools may be applied very differently from place to place. Some jurisdictions might charge for the tool or limit access to certain groups. A public health-care provider might focus machine-learning algorithms on early diagnoses and preventive medicine to lower health-care costs, whereas a corporation might develop customized tools that serve only those who can pay.

Who will protect patients’ privacy?
When working with medical data, Vector Institute researchers follow strict laws and guidelines that protect individuals’ privacy.

But as AI moves from research to application, it could become increasingly difficult to keep genetic and clinical data anonymous. People have grown used to giving up private information to companies such as Facebook, Google, Amazon and Netflix in return for more personalized recommendations. They may well be willing to disclose medical information in return for better care. That information could end up in the hands of insurers or employers, or in the public realm, without a patient’s consent.
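To make the risk concrete, here is a minimal sketch, with entirely made-up records, of the kind of “linkage attack” privacy researchers describe: stripping names from medical data is not enough if the remaining fields can be matched against a public dataset. Every name, field and value below is hypothetical.

```python
# Toy "linkage attack": an anonymized medical table can be joined to a
# public table on quasi-identifiers (postal code, birth year, sex),
# re-identifying a patient even though names were removed.

anonymized_records = [
    {"postal": "M5S", "birth_year": 1984, "sex": "F",
     "diagnosis": "BRCA1 variant"},
]

public_records = [
    {"name": "Jane Doe", "postal": "M5S", "birth_year": 1984, "sex": "F"},
]

for med in anonymized_records:
    for pub in public_records:
        # If all quasi-identifiers match, the "anonymous" diagnosis
        # is now attached to a named individual.
        if all(med[k] == pub[k] for k in ("postal", "birth_year", "sex")):
            print(f"{pub['name']} -> {med['diagnosis']}")
```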

What will happen when the machine is wrong?
No machine will be perfect. There will always be a risk of a wrong diagnosis. And even the best possible data-driven recommendation might still end with a patient not surviving their illness. Who should be held responsible: the medical team, the algorithm designers or the machine itself?

How will we avoid machine bias?
Algorithms carry as much risk of bias as any human. Search for “CEO” in an image search engine, for example, and it will return mostly pictures of white men. Algorithms tend to amplify the sexism and racism built into their training data.

In the health sphere, algorithmic bias could mean the machine recommends the wrong treatment for groups that have historically been underrepresented in health research.
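As an illustration of how bias can be amplified rather than merely reflected, here is a minimal Python sketch, with made-up numbers that are not from the article: a naive learner that always predicts the most common label in its training data turns a 90/10 skew in that data into a 100/0 skew in its output.

```python
# A degenerate "classifier" that predicts the majority class. If the
# training data mostly labels CEOs as men, the skew is not just
# reproduced in the output -- it is amplified to 100 percent.

from collections import Counter

# Hypothetical, skewed training set: 90 male CEOs, 10 female CEOs.
training_labels = ["man"] * 90 + ["woman"] * 10

majority_label, _ = Counter(training_labels).most_common(1)[0]

def predict_ceo_image():
    """Always return the majority class seen in training."""
    return majority_label

# A 90/10 skew in the data becomes a 100/0 skew in the predictions.
predictions = [predict_ceo_image() for _ in range(100)]
print(Counter(predictions))  # Counter({'man': 100})
```

Real machine-learning models are far more sophisticated than this, but the underlying dynamic is the same: patterns that dominate the training data dominate the predictions.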

What will the impact be on doctors and other health-care workers?
Doctors might find themselves freed from repetitive tasks and able to spend more time with patients. Some technicians might find that computers have taken over their work. Other frontline workers – nurses, paramedics – may see their roles change in unexpected ways.

What will we do about unforeseen consequences?
In the near future, researchers expect machine-learning algorithms to empower doctors and patients to make better decisions. They won’t make decisions themselves. But beyond such limited predictions, nobody really knows how far and how fast artificial intelligence will develop or how it will change society. Gibson believes we should be preparing for big changes, not incremental ones.

“We ought to think of this more as a disruptive, revolutionary technology and not find ourselves surprised five years down the road if we are too passive about it,” she says. “It’s not about raising the alarm just for the sake of raising the alarm. It’s about moving forward with intention.”