As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.
To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios….
Effectively, [Ariel] Procaccia wanted to demonstrate how a voting-based system could provide a solution to the ethical AI question, and he believes his algorithm can effectively infer the collective ethical intuitions present in the Moral Machine’s data. “We are not saying that the system is ready for deployment,” he told The Outline. “But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”…
Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions. These researchers believe that aggregating the collective moral views of a crowd on various issues — like the Moral Machine does with self-driving cars — to create this framework would result in a system that’s better than one built by an individual.
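In its simplest form, the aggregation idea both groups describe amounts to collecting many individual judgments per scenario and taking the majority view. The sketch below is a deliberately simplified illustration of that idea, not Procaccia’s actual algorithm (which models individual preferences rather than tallying raw votes); the scenario names and vote data are invented for the example.

```python
# Simplified sketch: aggregate crowd judgments on moral dilemmas by
# majority vote. Scenarios and votes here are hypothetical examples,
# not data from the Moral Machine.
from collections import Counter

def aggregate_votes(votes):
    """Return the majority choice for each scenario.

    votes: dict mapping a scenario name to a list of individual choices.
    """
    return {
        scenario: Counter(choices).most_common(1)[0][0]
        for scenario, choices in votes.items()
    }

votes = {
    "swerve_or_stay": ["swerve", "stay", "swerve", "swerve"],
    "passenger_or_pedestrian": ["pedestrian", "pedestrian", "passenger"],
}
print(aggregate_votes(votes))
```

A real system would need far more than this: it would have to generalize from voted-on scenarios to unseen ones, weigh how decisively the crowd split, and handle ties, which is where the learned-preference models in the research come in.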