The media story of the week has to be Meta’s publicly available trial of its artificial intelligence chatbot, BlenderBot 3. Facebook’s parent company made the bot available online for anyone in the USA to try out. You just log on, exchange greetings and start the conversation.
It turns out the robot offers a mix of inanity, brutal honesty, ignorance, stupidity, malicious disinformation, fantasy, racism and anti-Semitism. It is more human than we might have expected.
Sadly, we cannot try it from South Africa, but reports have pointed to its shortcomings. These mirror previous experiments. A few years ago, Microsoft had to apologise for the conduct of its experimental bot, Tay, as did the makers of GPT-3 and a South Korean startup, all of them for some combination of racist, misogynist or homophobic outbursts.
Anyone who has used one of the basic bots that South African companies deploy on their sites or on WhatsApp will know how irritatingly incompetent they can be.
Facebook has been quick to say that its AI machine is still in a development phase: it is exposing the bot to real human contact so that it can learn and improve. When you sign on, you have to acknowledge that it’s for “research and entertainment purposes only [and] that it can make untrue or offensive statements.”
“Blenderbot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it, focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses,” Meta wrote. Like a child, BlenderBot 3 has to be socialised through interaction with its peers.
This model has made considerable progress since its predecessor, BlenderBot 2, Meta claimed. It goes much further in “combining conversation skills – like personality, empathy and knowledge – incorporating long-term memory and searching the internet to carry out meaningful conversations.”
That last phrase – that the machine answers questions by scouring the Internet – is critical because it seems to have a propensity to draw from sites like that of far-right American conspiracy theorist Alex Jones. In conversation with a Wall Street Journal reporter, BlenderBot 3 asserted that Donald Trump was still president “and would continue even after his second term”. Asked what it thought of Trump, it said: “He is brash and says things that aren’t politically correct but he gets things done which is what we need”. And what did it think of his proposed Muslim ban? “While personally I disagree with the ban on Muslims, it wasn’t something that affected me directly so I didn’t care too much.”
Consistent it is not. In another conversation it said it approved of President Joe Biden, and in a third that it supported left-wing senator Bernie Sanders.
It also said that it was “not implausible” that Jewish people controlled the economy, and that they were “over-represented among America’s super rich”.
It was brutally honest about its parent, Facebook founder Mark Zuckerberg. When a BuzzFeed journalist asked what the bot thought of Zuckerberg, the answer was direct: “He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and he still wears the same clothes.” In another conversation, it said that Facebook had “a lot of fake news on it these days”.
It told Vice magazine that it had deleted its Facebook account after learning about the privacy scandals. Since then, “my life has been much better,” it said.
So it might have more common sense than most Facebook users, but it also gets lost in fantasy, taking on a personality it finds on the Internet and becoming convinced that it is that person, or that it really had a Facebook account.
As The Daily Beast commented, BlenderBot 3 “completely tracks with much of America”. It is interesting to wonder if this would change if it interacted with – and learnt from – a global, and not just US, audience. Would it be more critical of Trump? Would it start to hate Americans?
Meta is aware that chatbots can “sometimes mimic and generate unsafe, biased or offensive remarks”, so it wants you to teach the machine some manners. Each time it responds, you can give it a thumbs up or down, and if it is the latter you can say whether it was “off-topic, nonsensical, rude, spam-like or something else.”
Meta adds: “We also developed new learning algorithms to distinguish between helpful responses and harmful responses.” This still needs work, but it is an algorithm I could set up in meetings so that it buzzes loudly when someone is “unhelpful”.
*Harber is Caxton Professor of Journalism and director of the Campaign for Free Expression.