Over the last few months, AI chatbots have exploded in popularity off the surging success of OpenAI’s revolutionary ChatGPT—which, amazingly, only burst onto the scene around December. But when Microsoft seized the opportunity to hitch its wagon to OpenAI’s rising star for a steep $10 billion, it chose to do so by introducing a GPT-4-powered chatbot under the guise of Bing, its swell-but-also-ran search engine, in a bid to upend Google’s search dominance. Google quickly followed suit with its own homegrown Bard AI.
Both are touted as experiments. And these “AI chatbots” are truly wondrous advancements. I’ve spent many nights with my kids joyously creating fantastic stuff-of-your-dreams artwork with Bing Chat’s Dall-E integration, and prompting sick raps about wizards who think lizards are the source of all magic, then watching it all come to life in mere moments. I love ‘em.
But Microsoft and Google’s marketing got it wrong. AI chatbots like ChatGPT, Bing Chat, and Google Bard shouldn’t be lumped in with search engines whatsoever. They’re more like those crypto bros clogging up the comments in Elon Musk’s terrible new Twitter, loudly and confidently braying truthy-sounding statements that in reality are often full of absolute bullshit.
These so-called “AI chatbots” do a fantastic job of synthesizing information and providing entertaining, oft-accurate details about whatever you query. But under the hood, they’re actually large language models (LLMs) trained on billions or even trillions of points of data—text—that they learn from in order to anticipate which words should come next based on your query. AI chatbots aren’t intelligent at all. They draw on patterns of word association to generate results that sound plausible, then state them definitively with no idea of whether those strung-together words are actually true.
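To put a finer point on that “word association” idea, here’s a toy sketch in Python. Everything in it is invented for illustration: the tiny NEXT_WORD table and the generate function are made-up stand-ins, and real chatbots run neural networks over tokens, not lookup tables. But the core loop is the same in spirit: pick each next word by statistical plausibility, with zero check on whether the finished sentence is true.

```python
import random

# Toy stand-in for a language model: a table of word-pair probabilities.
# Every word and number here is made up purely for illustration; a real
# LLM learns billions of such associations from its training text.
NEXT_WORD = {
    "the":       [("professor", 0.5), ("wizard", 0.3), ("lizard", 0.2)],
    "professor": [("was", 1.0)],
    "wizard":    [("was", 1.0)],
    "lizard":    [("was", 1.0)],
    "was":       [("accused", 0.5), ("praised", 0.5)],
    "accused":   [("of", 1.0)],
    "praised":   [("for", 1.0)],
    "of":        [("harassment.", 0.6), ("plagiarism.", 0.4)],
    "for":       [("magic.", 1.0)],
}

def generate(prompt_word: str, max_words: int = 8) -> str:
    """Build a sentence one word at a time, choosing each next word
    by probability alone. Plausibility is the only criterion; truth
    never enters the picture."""
    words = [prompt_word]
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:  # no learned continuation: stop generating
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the professor was accused of harassment."
```

Run it a few times and it will confidently assert that the professor was accused of harassment, or that the wizard was praised for magic, with exactly as much evidence for one as the other: none. Scale that up by a few billion parameters and you have the mechanism behind the Turley story below.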
I have no idea who coined the term originally, but the memes are right: These chatbots are essentially autocorrect on steroids, not reliable sources of information like the search engines they’re being glommed onto, despite the implication of trust that association provides.
They’re bullshit generators. They’re crypto bros.
Further reading: ChatGPT vs. Bing vs. Bard: Which AI is best?
AI chatbots say the darndest things
The signs were there immediately. Beyond all the experiment talk, Microsoft and Google were both sure to emphasize that these LLMs sometimes generate inaccurate results (“hallucinating,” in AI technospeak). “Bing is powered by AI, so surprises and mistakes are possible,” Microsoft’s disclaimer states. “Make sure to check the facts, and share feedback so we can learn and improve!” That was driven home when journalists discovered embarrassing inaccuracies in the glitzy launch presentations for Bard and Bing Chat alike.
Those falsehoods suck when you’re using Bing and, you know, Google—the world’s two biggest search engines. But conflating search engines with large language models has even deeper implications, as underscored by a recent Washington Post report chronicling how OpenAI’s ChatGPT “invented a sexual harassment scandal and named a real law prof as the accused,” as the headline aptly summarized.
It’s exactly what it sounds like. But it’s so much worse because of how this hallucinated “scandal” was discovered.
You should go read the article. It’s both great and terrifying. Essentially, law professor Jonathan Turley was contacted by a fellow lawyer who asked ChatGPT to generate a list of law scholars guilty of sexual harassment. Turley’s name was on the list, complete with a citation of a Washington Post article. But Turley has never been accused of sexual harassment, and that Post article doesn’t exist. The large language model hallucinated it all, likely drawing on Turley’s long record of giving press interviews on legal subjects to publications like the Post.
“It was quite chilling,” Turley told the Post. “An allegation of this kind is incredibly harmful.”
You’re damned right it is. An allegation like that could ruin someone’s career, especially since Microsoft’s Bing Chat quickly started spouting similar allegations once Turley’s name was in the news. “Now Bing is also claiming Turley was accused of sexually harassing a student on a class trip in 2018,” the Post’s Will Oremus tweeted. “It cites as a source for this claim Turley’s own USA Today op-ed about the false claim by ChatGPT, along with several other aggregations of his op-ed.”
I’d be furious—and furiously suing every company involved in the libelous claims, made under the corporate banners of OpenAI and Microsoft. Funnily enough, an Australian mayor threatened just that on Wednesday, around the same time the Post report published. “Regional Australian mayor [Brian Hood] said he may sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the automated text service,” Reuters reported.
OpenAI’s ChatGPT is catching the brunt of these legal threats, possibly because it’s at the forefront of “AI chatbots” and became the fastest-growing consumer app in history. (Spitting out libelous, hallucinated claims doesn’t help.) But Microsoft and Google are causing just as much harm by associating chatbots with search engines. They’re simply too inaccurate for that, at least at this stage.
Turley and Hood’s examples may be extreme, but if you spend any amount of time playing around with these chatbots, you’re bound to stumble into more insidious inaccuracies, all stated with full confidence. Bing, for example, misgendered my daughter when I asked about her, and when I had it craft a personalized resume from my LinkedIn profile, it got a lot correct but also hallucinated skills and previous employers out of whole cloth. That could be devastating to your job prospects if you aren’t paying close attention. And again, Bard’s reveal demonstration included an obvious falsehood about the James Webb Space Telescope that astronomers identified instantly. Using these supposedly search-engine-adjacent tools for research could wreck your kid’s school grades.
It didn’t have to be this way
The hallucinations sometimes spit out by these AI tools aren’t as painful in more creative endeavors. AI art generators rock, and Microsoft’s killer-looking Office AI enhancements—which can create full PowerPoint presentations out of reference documents you cite, and more—seem poised to bring radical improvements to desk drones like yours truly. But those tasks don’t have the strict accuracy expectations that come with search engines.
It didn’t have to be this way. Microsoft and Google’s marketing truly dropped the ball here by associating large language models with search engines in the eyes of the public, and I hope it doesn’t wind up permanently poisoning the well of perception. These are fantastic tools.
I’ll end this piece with a tweet from Steven Sinofsky, who was replying to commentary about wildly wrong ChatGPT hallucinations causing headaches for an inaccurately cited researcher. Sinofsky is an investor who led Microsoft Office and Windows 7 to glory back in the day, so he knows what he’s talking about.
“Imagine a world where this was called ‘Creative Writer’ and not ‘Search’ or ‘Ask anything about the world,’” he said. “This is just a branding fiasco right now. Maybe in 10 years of progress, many more technology layers, and so on it will come to be search.”
For now, however, AI chatbots are crypto bros. Have fun, bask in the possibilities these wondrous tools unlock, but don’t take their information at face value. It’s truthy, not trustworthy.
Author: Brad Chacos, Executive editor
Brad Chacos spends his days digging through desktop PCs and tweeting too much. He specializes in graphics cards and gaming, but covers everything from security to Windows tips and all manner of PC hardware.