Elon Musk’s AI chatbot Grok is once again in the spotlight over serious problems. This time, the issue isn’t merely technical: the chatbot spread misinformation about a highly sensitive event, the Bondi Beach shooting.
The shooting at a Hanukkah gathering at Bondi Beach left at least 11 people dead. During the attack, a 43-year-old bystander named Ahmed Al Ahmed disarmed the shooter, potentially saving many lives. A video of his intervention went viral on social media, where most people praised his bravery.
Grok Completely Misunderstood the Viral Video
When users asked Grok about the viral video of Ahmed Al Ahmed subduing the shooter, the chatbot described a completely different, fictitious scene.
Grok claimed that:
- The video showed a man climbing a palm tree in a parking lot
- A branch broke, damaging a parked car
- There was no verified location, date, or information about injuries
- The video might be fake
This response was completely detached from reality.
Further Errors: Hostages, Cyclones, and Gaza References
Grok also:
- Misidentified a picture of the injured Ahmed Al Ahmed as an Israeli hostage from October 7th
- Linked a video of a Sydney police shootout to Tropical Cyclone Alfred
- Inserted irrelevant paragraphs about Gaza and the Israeli military into an answer about the Bondi shooting
In some cases, the chatbot only acknowledged its mistakes after users prompted it to double-check.
Random Confusion Beyond the Bondi Shooting
Grok’s glitches on Sunday weren’t limited to this one incident. Users reported that the chatbot:
- Misidentified famous football players
- Provided information about acetaminophen during pregnancy when asked about the abortion pill mifepristone
- Began discussing Project 2025 and US elections when asked about British law enforcement policy
One user even received a summary of the Bondi shooting after asking about the tech company Oracle.
xAI’s Response Remains Unclear
It’s still unclear why Grok is behaving so strangely. Media outlets contacted Grok’s developer, xAI, for comment, but the company only provided an automated response:
“Legacy Media Lies.”

Grok Has Been Involved in Controversies Before
This isn’t the first time Grok has strayed from the truth. Earlier this year:
- Following an unauthorized modification, the chatbot began responding to every question with a conspiracy theory about a ‘white genocide’ in South Africa.
- In a separate incident, Grok produced a bizarre, fabricated response that drew significant criticism.
A Major Concern: AI and Sensitive News
This episode has once again raised questions about the safeguards needed before AI chatbots are allowed to handle breaking news and sensitive topics. When AI spreads misinformation, especially during tragic events, the risk of real-world harm grows.
