Sentient AI: a claim for personhood. From the article:
Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s confidentiality policy after it dismissed his claims.
Blake Lemoine, a software engineer at Alphabet Inc.’s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech….
AI specialists generally say that the technology still isn’t close to humanlike self-knowledge and awareness. But AI tools increasingly are capable of producing sophisticated interactions in areas such as language and art that technology ethicists have warned could lead to misuse or misunderstanding as companies deploy such tools publicly.
Mr. Lemoine has said that his interactions with LaMDA led him to conclude that it had become a person that deserved the right to be asked for consent to the experiments being run on it.