The ethical quandary of AI: effective altruism and industry. From the article:
Over the past few years, the social movement known as effective altruism has divided employees and executives at artificial-intelligence companies across Silicon Valley, pitting believers against nonbelievers.
The blowup at OpenAI showed its influence—and the triumphant return of chief executive Sam Altman revealed hard limits, capping a bruising year for the divisive philosophy….
OpenAI, which released ChatGPT a year ago, was formed in part on the principles of effective altruism, a broad social and moral philosophy that influences the AI research community in Silicon Valley and beyond. Some followers live in private group homes, where they can brainstorm ideas, engage in philosophical debates and relax playing a four-person variant of chess known as Bughouse. The movement includes people devoted to animal rights and climate change, drawing ideas from rationalist philosophers, mathematicians and forecasters of the future.
Supercharged by hundreds of millions of dollars in tech-titan donations, effective altruists believe a headlong rush into artificial intelligence could destroy mankind….
[Chief executive Sam] Altman, who was fired by the board Friday, clashed with the company’s chief scientist and board member Ilya Sutskever over AI-safety issues that mirrored effective-altruism concerns, according to people familiar with the dispute….
The effective-altruism community has spent vast sums promoting the idea that AI poses an existential risk. But it was the release of ChatGPT that drew broad attention to how quickly AI had advanced, said Scott Aaronson, a computer scientist at the University of Texas, Austin, who works on AI safety at OpenAI. The chatbot’s surprising capabilities worried people who had previously brushed off concerns, he said….
The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future….
At OpenAI’s holiday party last December, Sutskever addressed hundreds of employees and their guests at the California Academy of Sciences in San Francisco, not far from the museum’s dioramas of stuffed zebras, antelopes and lions.
“Our goal is to make a mankind-loving AGI [artificial general intelligence],” said Sutskever, the company’s chief scientist.
“Feel the AGI,” he said. “Repeat after me. Feel the AGI.”
At Google, the merging this year of its two artificial intelligence units—DeepMind and Google Brain—triggered a split over how effective-altruism principles are applied, according to current and former employees….
Open Philanthropy’s then-CEO Holden Karnofsky had once lived with two senior OpenAI executives, according to Open Philanthropy’s website. Since 2015, Open Philanthropy, a nonprofit that supports effective-altruism causes, has given away $327 million to AI-related causes, including $30 million to OpenAI, its website shows.
When Karnofsky was engaged to Daniela Amodei, now Anthropic’s president, they were roommates with Amodei’s brother Dario, now Anthropic’s CEO.
In August 2017, Karnofsky and Daniela Amodei married in an effective-altruism-themed ceremony. Wedding guests were encouraged to donate to causes recommended by Karnofsky’s effective-altruism charity, GiveWell, and to read a 457-page tome by German philosopher Jürgen Habermas beforehand.
“This is necessary context for understanding our wedding,” the couple wrote on a website for the event.