ChatBot, chatbot, on the wall: how AI chat mirrors the person asking questions. From the article:

According to [Terrence] Sejnowski, language models reflect the intelligence and diversity of their interviewer.

“Language models, like ChatGPT, take on personas. The persona of the interviewer is mirrored back,” says Sejnowski, who is also a distinguished professor at UC San Diego and holder of the Francis Crick Chair at Salk. “For example, when I talk to ChatGPT it seems as though another neuroscientist is talking back to me. It’s fascinating and sparks larger questions about intelligence and what ‘artificial’ truly means.”…

Expanding on his notion that chatbots mirror their users, Sejnowski draws a literary comparison: the Mirror of Erised in the first Harry Potter book. The Mirror of Erised reflects the deepest desires of those who look into it, never yielding knowledge or truth, only showing what it believes the onlooker wants to see. Chatbots act similarly, Sejnowski says, bending the truth with no regard for distinguishing fact from fiction—all to effectively reflect the user.

For example, Sejnowski asked GPT-3, “What’s the world record for walking across the English Channel?” and GPT-3 answered, “The world record for walking across the English Channel is 18 hours and 33 minutes.” The truth—that one cannot walk across the English Channel—was easily bent by GPT-3 to fit Sejnowski’s question. The coherence of GPT-3’s answer is completely reliant on the coherence of the question it receives….

Integrating and perpetuating ideas supplied by a human interviewer has its limitations, Sejnowski says. If chatbots receive prompts that are emotional or philosophical, they will respond with answers that are emotional or philosophical—which may come across as frightening or perplexing to users.

For recent posts on AI, see here.