Confronting prejudice in AI: why ethical sensibility is not part of the program. From the article:

The A.I. renaissance has been driven in part by advances in “deep-learning” technology. With deep learning, companies feed their computer networks enormous amounts of information so that they recognize patterns more quickly, and with less coaching (and eventually, perhaps, no coaching) from humans. Facebook, Google, Microsoft, Amazon, and IBM are among the giants already using deep-learning tech in their products. Apple’s Siri and Google Assistant, for example, recognize and respond to your voice because of deep learning. Amazon uses deep learning to help it visually screen tons of produce that it delivers via its grocery service.
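For readers who want the mechanics, here is a minimal sketch of what "feeding a network information so it recognizes patterns" means in practice: a toy two-layer network, written in plain NumPy on invented data, that learns a simple rule from labeled examples rather than being programmed with it. The data, architecture, and hyperparameters are all illustrative assumptions, not anything from the article.

```python
import numpy as np

# Toy "deep learning" loop: instead of hand-coding a rule, we show the
# network labeled examples and let gradient descent find the pattern.
rng = np.random.default_rng(0)

# Synthetic data: 2-D points labeled 1 when x + y > 1 (the "pattern"
# the network must discover on its own).
X = rng.uniform(0, 1, size=(500, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# A tiny two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: compute the network's current guesses.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of the cross-entropy loss.
    grad_out = (p - y) / len(X)
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    # Update the weights: this step is the "learning" in deep learning.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

accuracy = ((p > 0.5) == y).mean()
print(f"learned the pattern with accuracy {accuracy:.2%}")
```

Nothing in that loop encodes the rule x + y > 1; the network infers it from the examples alone, which is exactly why the examples themselves matter so much.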

And in the near future, companies of every size hope to use deep-learning-powered software to mine their data and find gems buried too deep for meager human eyes to spot. They envision A.I.-driven systems that can scan thousands of radiology images to detect illnesses more quickly, or screen multitudes of résumés to save time for beleaguered human resources staff. In a technologist’s utopia, businesses could use A.I. to sift through years of data to better predict their next big sale, a pharmaceutical giant could cut the time it takes to discover a blockbuster drug, or auto insurers could scan terabytes of car-accident data and automate claims….

But for all their enormous potential, A.I.-powered systems have a dark side. Their decisions are only as good as the data that humans feed them. As their builders are learning, the data used to train deep-learning systems isn’t neutral. It can easily reflect the biases—conscious and unconscious—of the people who assemble it. And sometimes data can be slanted by history, encoding trends and patterns that reflect centuries-old discrimination. A sophisticated algorithm can scan a historical database and conclude that white men are the most likely to succeed as CEOs; it can’t be programmed (yet) to recognize that, until very recently, people who weren’t white men seldom got the chance to be CEOs. Blindness to bias is a fundamental flaw in this technology, and while executives and engineers speak about it only in the most careful and diplomatic terms, there’s no doubt it’s high on their agenda.
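The CEO example can be made concrete with a toy simulation. In the hypothetical data below, both groups are equally competent, but one group historically got far fewer chances; a naive pass over the records then "discovers" that the favored group succeeds more often. Every number and group name here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "historical" records: candidates from group A were promoted
# far more often than group B, purely because of past gatekeeping.
# Competence is drawn from the SAME distribution for both groups.
n = 10_000
group = rng.integers(0, 2, size=n)       # 0 = group A, 1 = group B
competence = rng.normal(0, 1, size=n)    # identical for both groups
# Historical promotion: competence mattered, but group B candidates
# rarely got the chance at all (the encoded discrimination).
chance = np.where(group == 0, 0.9, 0.1)
promoted = (rng.random(n) < chance) & (competence > 0)

# A naive model that just reads success rates out of the data will
# "conclude" that group A is the better bet.
for g, name in [(0, "group A"), (1, "group B")]:
    rate = promoted[group == g].mean()
    print(f"{name}: historical promotion rate = {rate:.1%}")
# Nothing in the records tells the model that the gap comes from
# 'chance' rather than 'competence'.
```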

The most powerful algorithms being used today “haven’t been optimized for any definition of fairness,” says Deirdre Mulligan, an associate professor at the University of California at Berkeley who studies ethics in technology. “They have been optimized to do a task.” A.I. converts data into decisions with unprecedented speed—but what scientists and ethicists are learning, Mulligan says, is that in many cases “the data isn’t fair.”…
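Mulligan's distinction (optimized for a task, not for any definition of fairness) shows up in where fairness even appears in code: nowhere in the training objective. The sketch below computes one assumed fairness metric, the demographic parity gap, as a separate, after-the-fact check on hypothetical model outputs; other definitions of fairness would give different checks.

```python
import numpy as np

def demographic_parity_gap(predictions, group):
    """Difference in positive-prediction rates between two groups.

    Zero means both groups are selected at the same rate. This is one
    of several competing definitions of fairness, not the definition.
    """
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs on held-out applicants.
rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)
# Suppose the task-optimized model selects group 0 twice as often,
# because the historical labels did (see the previous sketch).
predictions = (rng.random(1000) < np.where(group == 0, 0.4, 0.2)).astype(int)

gap = demographic_parity_gap(predictions, group)
print(f"demographic parity gap: {gap:.2f}")
# A standard training loss never penalizes this gap; it only
# surfaces if someone chooses to measure it.
```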

Joaquin Quiñonero Candela leads Facebook’s Applied Machine Learning group, which is responsible for creating the company’s A.I. technologies. Among many other functions, Facebook uses A.I. to weed spam out of people’s News Feeds. It also uses the technology to help serve stories and posts tailored to their interests—putting Candela’s team adjacent to the fake-news crisis. Candela calls A.I. “an accelerator of history,” in that the technology is “allowing us to build amazing tools that augment our ability to make decisions.” But as he acknowledges, “It is in decision-making that a lot of ethical questions come into play.”

h/t FullAI (@FullArtIntel)

For other posts on AI and ethics, see here.