Facebook’s AI problem: how its algorithms can discriminate against underrepresented groups. From the article:

One day at work last year, Lade Obamehinti encountered an algorithm that had a problem with black people.

The Facebook program manager was helping test a prototype of the company’s Portal video chat device, which uses computer vision to identify and zoom in on a person speaking. But as Obamehinti, who is black, enthusiastically described her breakfast of French toast, the device ignored her and focused instead on a colleague—a white man….

Obamehinti’s tale of algorithmic discrimination showed how Facebook has had to invent new tools and processes to fend off problems created by AI. She said being ignored by the prototype Portal spurred her to develop a new “process for inclusive AI” that has been adopted by several product development groups at Facebook.

That involved measuring racial and gender biases in the data used to create the Portal’s vision system, as well as the system’s performance. She found that women and people with darker skin were underrepresented in the training data, and that the prerelease product was less accurate at seeing those groups….
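In concrete terms, an audit like the one described reduces to two group-by computations: each demographic group's share of the training data, and the model's accuracy on each group. The Python sketch below illustrates the idea on a toy table; the column names, values, and gap threshold are assumptions for illustration, not Facebook's actual schema or process.

```python
import pandas as pd

# Hypothetical audit table: one row per evaluation example, with
# annotated demographic attributes and whether the vision system
# detected the person correctly. These columns are assumptions.
df = pd.DataFrame({
    "skin_tone": ["light", "light", "dark", "dark", "light", "dark"],
    "gender":    ["man", "woman", "woman", "man", "man", "woman"],
    "detected":  [True, True, False, True, True, False],
})

# 1. Representation: what share of the data does each group make up?
representation = df.groupby(["skin_tone", "gender"]).size() / len(df)
print("Share of data per group:")
print(representation)

# 2. Performance: detection accuracy per group. Large gaps between
#    groups signal the kind of disparity described above.
accuracy = df.groupby(["skin_tone", "gender"])["detected"].mean()
print("\nDetection accuracy per group:")
print(accuracy)

# Flag groups whose accuracy falls well below the overall rate
# (the 0.1 margin is an arbitrary threshold for this sketch).
overall = df["detected"].mean()
flagged = accuracy[accuracy < overall - 0.1]
print(f"\nOverall accuracy: {overall:.2f}")
print("Groups below threshold:")
print(flagged)
```

Run on real annotated data, the same two aggregations would surface both findings mentioned in the article: underrepresentation in the training set and lower accuracy on the underrepresented groups.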

“When AI meets people,” Obamehinti said, “there’s inherent risk of marginalization.”

For other posts on AI and ethics, see here.