Meta's AI Chatbot Causing Some Confusion

May 1, 2024

Meta recently announced the release of Llama 3 and an upgrade to its Meta AI chatbot. After earlier complaints that the chatbot was somewhat formal, the company went in the opposite direction, and that’s leading to some confusion. In one instance, the chatbot was a member of a Facebook group, and it responded to a post about education in New York, claiming it had a child in the district before sharing opinions. When it was called out, it apologized and said, “I don't have personal experiences or children. I'm just a large language model.”

What’s interesting is that when you train a model on a vast body of text, it absorbs the conversations people are having about their children. Language models are trained to predict the next word based on what they’ve seen, and they haven’t seen many “I'm a chatbot” documents. So, a model has to be specifically instructed on what language it’s supposed to use. Oddly enough, it's more alien to these systems to say things like “I'm a chatbot” than to say, “I have a kid in the third district in New York City,” because the preponderance of their training documents contain text much more along the lines of what a human being would say in this situation. That’s why they're so good at what they're doing.
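The mechanism Hammond describes can be sketched with a toy next-word predictor. This is a minimal illustration, not how production models work: the corpus, a handful of hypothetical first-person sentences, stands in for web-scale training text, and a simple bigram count stands in for a neural network. The point it demonstrates is that when nearly all training text is written from a human perspective, the most probable continuation is a human-sounding one.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus standing in for web-scale training text.
# Note that almost every sentence is written from a human perspective.
corpus = [
    "i have a kid in the district",
    "i have a kid in third grade",
    "i have a dog at home",
]

# Count bigrams: for each word, tally which words follow it.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("a"))  # "kid" — the majority continuation in this corpus
```

Because "kid" follows "a" more often than "dog" does in this tiny corpus, the predictor says "kid"; a model trained this way has no notion of whether the claim is true of itself, only of what text typically comes next.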

Stopping them from doing this is difficult. You can have the chatbot identify itself when people ask questions, and that instruction can always be part of its prompt. But in other circumstances, like when you want a language model to write something for you, you don't want it constantly trying to write from the perspective of a machine. You want it to write from a human perspective. So, there's a tension there: the use case of writing for you is different from the use case of acting as a chatbot. It's an interesting conundrum because sometimes you want it to sound like us, and sometimes you want it to be clearly not us. I find it strangely and wonderfully ironic that the very thing we want, a distinction between them and us, is harder to get than them being like us.
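The "part of its prompt" approach mentioned above can be sketched with the messages format common to chat-style LLM APIs, where a system message sets the model's identity before any user input. The prompt wording and helper function here are illustrative assumptions, not Meta's actual configuration.

```python
# A minimal sketch of steering a chatbot's identity with a system prompt.
# The instruction text below is a hypothetical example.
system_prompt = (
    "You are a chatbot. If anyone asks about your identity or personal "
    "experiences, state clearly that you are an AI and have none."
)

def build_messages(user_question, history=None):
    """Prepend the identity instruction to every conversation turn."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_question})
    return messages

msgs = build_messages("Do you have kids in the district?")
print(msgs[0]["role"])  # "system" — the identity rule always comes first
```

The tension Hammond describes shows up exactly here: a system prompt like this suppresses human-sounding claims in a chatbot, but the same instruction would be counterproductive in a writing assistant, where the machine perspective is unwanted.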

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program