![](https://techcrunch.com/wp-content/uploads/2023/12/google-bard-gemini-v2.jpg?w=711)
If you needed more evidence that GenAI is prone to making stuff up, Google's Gemini chatbot, formerly Bard, thinks the 2024 Super Bowl already happened. It even has the (fictional) statistics to back it up.
Per a Reddit thread, Gemini, powered by Google's GenAI models of the same name, is answering questions about Super Bowl LVIII as if the game wrapped up yesterday, or weeks before. Like many bookmakers, it seems to favor the Chiefs over the 49ers (sorry, San Francisco fans).
Gemini embellishes quite creatively, in at least one case giving a player stats breakdown suggesting Kansas City Chiefs quarterback Patrick Mahomes ran 286 yards for two touchdowns and an interception, versus Brock Purdy's 253 rushing yards and one touchdown.
![Gemini Super Bowl](https://techcrunch.com/wp-content/uploads/2024/02/wait-superbowl-2024-already-happened-v0-naqjhg7fm0ic1.webp)
Image Credits: /r/smellymonster
It's not just Gemini. Microsoft's Copilot chatbot, too, insists the game ended, and supplies inaccurate citations to back up the claim. But (perhaps reflecting a San Francisco bias!) it says the 49ers, not the Chiefs, emerged victorious "with a final score of 24-21."
![Copilot Super Bowl](https://techcrunch.com/wp-content/uploads/2024/02/Screenshot-2024-02-11-at-7.29.40 PM.png)
Image Credits: Kyle Wiggers / TechCrunch
Copilot is powered by a GenAI model similar, if not identical, to the model underpinning OpenAI's ChatGPT (GPT-4). But in my testing, ChatGPT was loath to make the same mistake.
![ChatGPT Super Bowl](https://techcrunch.com/wp-content/uploads/2024/02/Screenshot-2024-02-11-at-7.56.28 PM.png)
Image Credits: Kyle Wiggers / TechCrunch
It's all rather silly, and possibly resolved by now, given that this reporter had no luck replicating the Gemini responses in the Reddit thread. (I'd be shocked if Microsoft wasn't working on a fix as well.) But it also illustrates the major limitations of today's GenAI, and the dangers of placing too much trust in it.
GenAI models have no real intelligence. Fed an enormous number of examples, usually sourced from the public web, AI models learn how likely data (e.g. text) is to occur based on patterns, including the context of any surrounding data.
This probability-based approach works remarkably well at scale. But while the range of words and their probabilities are likely to result in text that makes sense, it's far from certain. LLMs can generate something that's grammatically correct but nonsensical, for instance, like the claim about the Golden Gate. Or they can spout mistruths, propagating inaccuracies in their training data.
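The pattern-learning idea described above can be sketched with a toy, vastly simplified stand-in (a bigram word model, nothing like Gemini's actual architecture): it predicts the next word purely from co-occurrence counts in its training text, and nothing in it checks whether the output is true.

```python
from collections import Counter, defaultdict

# Hypothetical miniature "training corpus"; the model only sees patterns here.
training_text = (
    "the chiefs won the super bowl "
    "the niners won the super bowl "
    "the chiefs won the game"
)

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word -- a pattern, not a fact."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("chiefs"))  # prints "won", because that's what the corpus says
```

The point of the sketch: the model will confidently emit "won" after "chiefs" regardless of whether any game was actually played, which is the same failure mode, writ small, as a chatbot reporting a final score for a Super Bowl that hasn't happened yet.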
Super Bowl disinformation certainly isn't the most harmful example of GenAI going off the rails. That distinction probably lies with endorsing torture, reinforcing ethnic and racial stereotypes or writing convincingly about conspiracy theories. It is, however, a useful reminder to double-check statements from GenAI bots. There's a decent chance they're not true.