
In a shocking incident on October 18, 2025, two people narrowly escaped drowning after relying on inaccurate tide timing provided by the AI chatbot ChatGPT. The pair had asked for the time of low tide, but the AI's answer sent them into the water during an unexpected high tide, highlighting the risks of depending on generative AI for safety-critical decisions.

The Coastal Misadventure

The two ventured into the water on the strength of the AI's response, expecting a low tide. Instead, they were caught off guard when the tide rose suddenly and left them trapped. Environmental conditions at the time, including high waves and strong currents, compounded the danger. Eyewitness accounts describe a rapidly worsening situation, with the pair struggling against the tide.

According to Futurism's report on the incident, the pair trusted the chatbot's answer and entered the water unaware of the impending high tide.

Querying ChatGPT for Tide Data

The individuals had asked ChatGPT about the timing of the low tide, expecting precise, real-time information. The AI’s response, as recorded in the interaction log, was “low tide expected around 2 PM.” This inaccurate information led the pair to believe they were entering the water during a safe period.

According to the October 18, 2025 reporting, they queried the AI from their mobile devices on-site.

Immediate Dangers Faced

The two faced a life-threatening situation, caught in the currents and struggling to stay afloat for more than 20 minutes. One of the victims was quoted as saying, "We thought we had hours based on what ChatGPT said," reflecting both their surprise and the peril they faced.

According to the coverage, the location is known for hazards such as rip currents, which amplified the consequences of the AI's error.

Rescue and Survival

Fortunately, nearby beachgoers and lifeguards intervened in time and, using flotation devices, pulled the pair to safety. Medical evaluations after the rescue confirmed that, despite significant exhaustion and the risk of hypothermia, the victims did not sustain any long-term injuries.

The quick human response played a crucial role in the fortunate outcome of the October 18, 2025 incident, as detailed in the reporting.

AI Limitations in Safety Contexts

The incident raises the question of why ChatGPT provided flawed tide data. The model's training cutoff and its lack of live environmental feeds are likely contributing factors: tide times vary by location and date, so a language model with no connection to real-time data cannot reliably answer such queries. The case is a stark illustration of how AI inaccuracies in time-sensitive questions can go dangerously wrong.

Experts have warned about the risks of relying on AI for outdoor activities, concerns echoed in Futurism's October 18, 2025 analysis of the event.

Lessons for AI Users

This incident is a reminder to verify AI outputs against official sources, such as NOAA tide charts, before acting on them; one way to pull those predictions directly is sketched below. Reflecting on their experience, the victims said, "Never again will we trust an app over a signpost," underscoring the importance of personal responsibility when using AI.
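For readers who want to check tide times against an authoritative source, NOAA's public Tides and Currents (CO-OPS) API serves the same predictions that back its official tide charts. The Python sketch below pulls today's predicted high and low tides for a single station; the station ID shown (9414290, San Francisco, CA) is only an example, since the article does not name where the incident took place.

```python
"""Minimal sketch: fetch today's high/low tide predictions from NOAA CO-OPS.

The endpoint and parameters follow NOAA's public Tides & Currents API.
The station ID passed at the bottom is an example only; the article does
not identify the beach involved in the incident.
"""
import json
import urllib.parse
import urllib.request

NOAA_API = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter"


def fetch_tide_extremes(station: str) -> list[dict]:
    """Return today's predicted high/low tides for a NOAA station."""
    params = urllib.parse.urlencode({
        "date": "today",
        "station": station,
        "product": "predictions",
        "datum": "MLLW",        # Mean Lower Low Water reference level
        "interval": "hilo",     # only the high/low extremes, not the 6-minute series
        "time_zone": "lst_ldt", # local station time, including daylight saving
        "units": "english",
        "format": "json",
    })
    with urllib.request.urlopen(f"{NOAA_API}?{params}", timeout=10) as resp:
        data = json.load(resp)
    return data.get("predictions", [])


if __name__ == "__main__":
    # Example station: 9414290 (San Francisco, CA)
    for p in fetch_tide_extremes("9414290"):
        label = "High" if p["type"] == "H" else "Low"
        print(f"{label} tide: {p['t']} ({p['v']} ft)")
```

Even an official prediction is no substitute for on-site signage, posted warnings, and lifeguard guidance, which is precisely the point the victims' own reflection makes.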

The incident has also sparked regulatory discussions, as noted in the coverage of the 2025 near-drowning. As AI continues to evolve and permeate everyday life, understanding its limitations and using it responsibly is crucial.