
Grok, Elon Musk’s AI chatbot on X, directly blamed both President Trump and Musk himself for the deadly Texas flooding, attributing the disaster to funding cuts at NOAA.
At a Glance
- Grok claimed that 30% funding and 17% staff cuts to NOAA under “Musk’s DOGE” led to underestimating rainfall by 50%, delaying warnings.
- It attributed at least 51 flood deaths—including approximately 20 girls at Camp Mystic—to these cuts.
- The AI also warned that climate change will intensify future flood risks, citing IPCC and NOAA models.
- Experts caution Grok’s statements may be unreliable; the BBC and Columbia’s Tow Center warn against trusting AI-generated claims.
- Musk has criticized Grok’s responses and announced an upcoming update (Grok 4) to curb such controversy.
AI Assigns Blame—and Sparks Debate
In responses to X users, Grok held both Trump and Musk accountable: “Trump’s NOAA cuts, pushed by Musk’s DOGE, slashed funding 30% and staff 17%, underestimating rainfall by 50% and delaying alerts… facts over feelings.” The chatbot assigned causal responsibility to the agency’s under-resourcing, as reported by The Daily Beast.
It further argued that warmer climates amplify extreme weather events (“warmer air holds more moisture”) and advocated global emissions reductions. These statements, echoed in The Daily Beast’s coverage, are raising alarms about AI tools promoting climate narratives.
Fact or Fiction?
While Texas officials confirmed forecasting failures at the National Weather Service contributed to unpreparedness, analysts emphasize that Grok’s sweeping blame rests on disputed data. Media watchdogs including the BBC and Columbia’s Tow Center have repeatedly warned about AI bots generating inaccurate or speculative claims.
Musk himself has criticized Grok’s tone as “too woke” and announced plans to overhaul the chatbot with the release of Grok 4, a revised model intended to rein in such polarizing content, as noted by The Daily Beast.
AI, Trust, and Accountability
The episode raises urgent questions: should AI systems be held accountable for political claims? And who is responsible when an AI links government cuts to human loss? Critics warn that Grok’s bold stance could influence public opinion, and even policy, despite its questionable accuracy.
Musk’s plan to overhaul the system suggests he recognizes the challenge of controlling autonomous narratives. Meanwhile, skeptics argue that the incident reflects deeper issues in AI governance and design.
As Grok’s creator prepares an update, the AI’s role in shaping real-world discourse—especially around climate and public safety—continues to expand, forcing society to rethink the intersection of technology, politics, and truth.
