Unhealthy AI: how artificial intelligence may be detrimental to equity in health care. From the editorial:

There are many questions about whether A.I. actually works in medicine, and where it works: can it pick up pneumonia, detect cancer, predict death? But those questions focus on the technical, not the ethical. And in a health system riddled with inequity, we have to ask: Could the use of A.I. in medicine worsen health disparities?

There are at least three reasons to believe it might.

The first is a training problem. A.I. must learn to diagnose disease on large data sets, and if that data doesn’t include enough patients from a particular background, it won’t be as reliable for them….

Will using A.I. to tell us who might have a stroke, or which patients will benefit from a clinical trial, codify these concerns into algorithms that prove less effective for underrepresented groups?
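To make the training-data problem concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the two patient groups, the single feature, and the flipped feature-outcome relationship that stands in for any distribution shift between groups. It shows how a classifier fit to pooled data in which one group is a small minority can look accurate overall while being unreliable for that group.

```python
# Toy illustration (not any real clinical model): when one group is
# underrepresented in the training data, a classifier fit to the pooled
# data can be far less reliable for that group. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Synthetic patients: one feature determines the label, but the sign
    of the relationship differs by group (a stand-in for any shift in how
    the feature relates to disease across populations)."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    if flip:  # group B: the same feature cuts the other way
        y = 1 - y
    return x, y

# Group A dominates the training set; group B is only 5% of it.
xa, ya = make_group(1900, flip=False)
xb, yb = make_group(100, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized test sets for each group.
xa_t, ya_t = make_group(1000, flip=False)
xb_t, yb_t = make_group(1000, flip=True)
print("accuracy, group A:", accuracy_score(ya_t, model.predict(xa_t)))
print("accuracy, group B:", accuracy_score(yb_t, model.predict(xb_t)))
# The pooled model effectively learns group A's relationship: accuracy is
# near 1.0 for group A and near 0.0 for group B.
```

A model like this would post an excellent aggregate accuracy, which is exactly why evaluating performance per group, rather than overall, is the relevant check.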

Second, because A.I. is trained on real-world data, it risks incorporating, entrenching and perpetuating the economic and social biases that contribute to health disparities in the first place….

In medicine, unchecked A.I. could create self-fulfilling prophecies that confirm our pre-existing biases, especially when used for conditions with complex trade-offs and high degrees of uncertainty. If, for example, poorer patients do worse after organ transplantation or after receiving chemotherapy for end-stage cancer, machine learning algorithms may conclude such patients are less likely to benefit from further treatment — and recommend against it.
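The feedback-loop worry can be sketched in the same toy style. In the synthetic data below, biological benefit is identical across income groups, but observed recovery also depends on post-treatment support, which low-income patients historically received less of; every variable and coefficient is an assumption made up for the example. A model trained on those recoveries then scores an otherwise identical low-income patient below a treatment threshold, which is the first step of the self-confirming loop.

```python
# Sketch of the feedback-loop worry, with entirely synthetic data: if past
# outcomes partly reflect unequal post-treatment support, a model trained
# on those outcomes will rate low-income patients "less likely to benefit,"
# and a recommendation threshold then denies them the treatment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

severity = rng.normal(size=n)            # true clinical need
low_income = rng.integers(0, 2, size=n)  # 1 = low-income patient
support = 1 - low_income                 # historical inequity: less
                                         # post-treatment support

# Biological benefit is identical across groups; observed recovery is not,
# because it also depends on support (all coefficients are invented).
logit = -0.5 - 0.8 * severity + 2.0 * support
recovered = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([severity, low_income])
model = LogisticRegression().fit(X, recovered)

# Predicted "benefit" for two patients with identical disease severity:
same_severity = np.array([[0.0, 0], [0.0, 1]])
p = model.predict_proba(same_severity)[:, 1]
print(f"predicted benefit, higher-income: {p[0]:.2f}, low-income: {p[1]:.2f}")
print("recommend treatment:", p > 0.5)   # roughly [True, False]
# The model penalizes low-income patients for the support they never
# received, not for anything about their disease.
```

If the denied patients then do worse, the next round of training data "confirms" the prediction, which is the self-fulfilling part.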

Finally, even ostensibly fair, neutral A.I. has the potential to worsen disparities if its implementation has disproportionate effects for certain groups…. If an algorithm incorporates residence in a low-income neighborhood as a marker for poor social support, it may recommend minority patients go to nursing facilities rather than receive home-based physical therapy. Worse yet, a program designed to maximize efficiency or lower medical costs might discourage operating on those patients altogether….
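This last mechanism is easy to miss precisely because the algorithm's inputs look neutral. The hypothetical rule below never sees race; it routes patients using a neighborhood-income proxy for social support, and the correlation between residence and race does the rest. The population shares and rates are invented for illustration, and the check they suggest is routine: audit the rule's outputs by group, not just its inputs.

```python
# Sketch of the "neutral input, disparate effect" problem, with synthetic
# data: an algorithm that never sees race can still produce race-correlated
# decisions through a residence-based proxy.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

minority = rng.random(n) < 0.3
# Residential segregation (assumed rates): minority patients are more
# likely to live in a neighborhood the algorithm flags as low-income.
low_income_area = rng.random(n) < np.where(minority, 0.7, 0.2)

# The deployed rule: low-income neighborhood => assume poor social support
# => discharge to a nursing facility instead of home-based physical therapy.
home_pt = ~low_income_area

for label, group in [("minority", minority), ("non-minority", ~minority)]:
    print(f"home-based PT rate, {label}: {home_pt[group].mean():.0%}")
# Roughly 30% vs 80%: a race-blind rule with race-correlated effects.
```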

[AI] may well make care more efficient, more accurate and — if properly deployed — more equitable. But realizing this promise requires being aware of the potential for bias and guarding against it….

But most fundamentally, it means recognizing that humans, not machines, are still responsible for caring for patients. It is our duty to ensure that we’re using AI as another tool at our disposal — not the other way around.

For other posts on the ethics of AI, see here.