Re-weighing the task of AI ethics: companies take a step back. From the article:

Microsoft, which has taken the ethical implications of AI so seriously that president Brad Smith met with Pope Francis in February to discuss how to best create responsible systems, is reconsidering a proposal to add AI ethics to its formal list of product audits.

In March, Microsoft executive vice president of AI and Research Harry Shum told the crowd at MIT Technology Review’s EmTech Digital Conference that the company would someday add AI ethics reviews to a standard checklist of audits for products to be released. However, a Microsoft spokesperson said in an interview that the plan was only one of “a number of options being discussed,” and its implementation isn’t guaranteed….

On April 4, Google executives pulled the plug on the Advanced Technology External Advisory Council (ATEAC), a group of executives, engineers and advocates formed to examine the ethical implications of its artificial intelligence products and services. The council, which existed for less than a week, faced opposition from the start from employees who circulated a petition titled “Googlers Against Transphobia and Hate” calling for the removal of member Kay Cole James, president of conservative think tank The Heritage Foundation….

Ten days after ATEAC ended, the Wall Street Journal reported Google dissolved a similar board in the United Kingdom created to assess ethical use of AI in health care technologies….

In February, Tesla founder and CEO Elon Musk, who once called artificial intelligence “humanity’s biggest threat,” stepped down from the board of OpenAI, a research ethics nonprofit he cofounded in 2015 to address the issue. In a now-deleted Twitter post, Musk said Tesla was competing for some of the same people as OpenAI and that he “didn’t agree with some of what the OpenAI team wanted to do.”

For other posts on ethics and AI, see here.