OpenAI released Sora, its video generation tool, and within days the internet was flooded with hyperrealistic fake disasters, resurrected celebrities, and AI-generated homeless people that sent police on wild goose chases. If you’re wondering whether we’re ready for this technology, the answer is definitively no.
The Hurricane That Never Happened
This week, videos of “Hurricane Melissa” went viral across social platforms: torrential flooding, destroyed homes, emergency evacuations. The footage looked professional, the kind of thing you’d see on cable news. Thousands shared it, meteorologists had to issue statements debunking it, and emergency services fielded calls from concerned citizens. There was just one problem: the storm was real, but the viral footage wasn’t. None of those clips showed the actual hurricane. They were generated by Sora.
https://apnews.com/article/hurricane-melissa-ai-sora-video-682d8acff33af4509d615e742698d99a
This isn’t a one-off. OpenAI pitched Sora as a creative tool, but what it’s actually become is a disinformation engine running at scale. Videos of deceased public figures giving speeches they never gave. Politicians saying things they never said. Natural disasters that never occurred. And because the quality is so high, platforms can’t moderate it fast enough, algorithms keep recommending it, and users have no reliable way to verify what they’re seeing.
https://www.latimes.com/business/story/2025-10-26/sora-the-bizarre-mind-bending-ai-slop-machine
When AI Pranks Waste Real Resources
Then there’s the “homeless man prank” that’s been spreading across social media: pranksters use AI to generate photorealistic images of a stranger inside their home and send them to family members, who panic and call 911. Police departments dispatch officers, who arrive to find nothing, because the person never existed. Multiple departments have now issued official warnings telling people to stop the prank, which is a sentence I didn’t expect to write in 2025. But here’s the thing: every fake call is a real resource diverted from actual emergencies. It’s not harmless. It’s dangerous.
https://abcnews.go.com/GMA/Living/police-departments-issue-warnings-ai-homeless-man-prank/story?id=126563187
Your Face Is Now Your Voice
Here’s a new one to worry about: researchers demonstrated this week that AI can synthesize a convincing voice clone from nothing but a photograph of your face. Not a video. Not an audio sample. Just a photo. And it’s reportedly accurate enough for phishing: scammers can call victims’ family members, impersonate them, and demand money. Voice authentication for banking? Probably useless now. And since everyone’s face is plastered across social media, there’s no practical defense.
Not Everything Is Terrible
Amid the chaos, there’s at least one genuinely positive development: Google AI independently generated two novel cancer therapy hypotheses this week, both of which were validated experimentally. This isn’t analysis—it’s discovery. The system didn’t just crunch numbers; it proposed original scientific ideas that humans then confirmed in the lab. For pharmaceutical research, this is transformative. For everyone else, it’s a reminder that the same technology capable of flooding the internet with lies is also capable of saving lives.
https://aiagentstore.ai/ai-agent-news/2025-october
ChatGPT for $100
Andrej Karpathy, formerly of Tesla and OpenAI, released nanochat this week: a complete, open recipe showing how anyone can train a small ChatGPT-style model for about $100 in roughly four hours of rented GPU time. The result is a toy next to frontier models, but the barrier to entry just collapsed. What used to require a research lab and millions in funding now requires a laptop, a credit card, and a few hours on a cloud GPU node. Expect an explosion of AI developers, and an explosion of competition.
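To make concrete what that recipe automates, here is a minimal sketch of the core of any such pipeline: next-token prediction with a small transformer. This is not Karpathy’s code, just a toy character-level illustration in PyTorch with made-up hyperparameters; the real recipe runs the same loop over billions of tokens on rented GPUs, then fine-tunes on chat data.

```python
# Toy illustration (not Karpathy's code): the training loop behind every
# ChatGPT-style model is next-token prediction with a transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "hello world. this is a tiny language model demo. " * 200
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

vocab, ctx, dim = len(chars), 32, 64  # made-up toy hyperparameters

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(ctx, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        t = x.shape[1]
        h = self.tok(x) + self.pos(torch.arange(t))
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        return self.head(self.blocks(h, mask=mask))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
for step in range(200):
    ix = torch.randint(0, len(data) - ctx - 1, (16,))
    x = torch.stack([data[i:i + ctx] for i in ix])          # input tokens
    y = torch.stack([data[i + 1:i + ctx + 1] for i in ix])  # shifted targets
    loss = F.cross_entropy(model(x).reshape(-1, vocab), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:
        print(step, round(loss.item(), 3))
```

Scale the corpus to billions of tokens, the model to a few hundred million parameters, add supervised fine-tuning on conversations, and that is roughly the $100 recipe. The novelty is the packaging and the price, not the math.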
Bonus: AI Security Theater
School districts using AI-powered weapons detection systems have been stopping students carrying bags of chips—because the algorithm flagged the packaging as a firearm. Parents are demanding the systems be removed. Students are being detained for snacks. And the whole thing is a reminder that when AI gets it wrong, real people suffer the consequences. When it gets it right, we call it dystopian surveillance. There’s no winning.
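The failure mode here is basic base-rate arithmetic. With illustrative numbers (assumptions, not any vendor’s published rates), even a scanner that is 99% accurate on harmless objects produces a steady stream of false alarms, because real weapons are vanishingly rare:

```python
# Base-rate sketch with made-up numbers; no vendor publishes these rates.
scans_per_day = 2000          # assumption: students scanned daily in a district
p_weapon = 1 / 100_000        # assumption: real weapons are vanishingly rare
false_positive_rate = 0.01    # assumption: "99% accurate" on harmless items

false_alarms = scans_per_day * (1 - p_weapon) * false_positive_rate
true_detections = scans_per_day * p_weapon
print(f"false alarms/day: ~{false_alarms:.0f}")        # ~20
print(f"real detections/day: ~{true_detections:.2f}")  # ~0.02
# Under these assumptions, essentially every alert is a bag of chips.
```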
What This Means
We’re now living on an internet where you can’t trust video, you can’t trust photos, and you can’t even trust voices. Sora generates disaster footage convincing enough to fool the people sharing it. Scammers clone voices from profile pictures. Police departments waste resources chasing AI-generated ghosts. But the same technology is also accelerating cancer research and democratizing access to AI development. The problem isn’t the technology; it’s that the people using it for harm outnumber the people using it for good, and platforms have no idea how to stop them.

