Artificial intelligence has officially made it impossible to trust what you see. This month, OpenAI released its new iPhone app Sora, a text-to-video generator that can create shockingly realistic clips from simple prompts — from “a dog being arrested for stealing steak at Costco” to “a raccoon on an airplane.”
Sora has instantly become both a viral sensation and a societal concern. The app generates lifelike, cinematic videos in seconds, and it is already the most downloaded free app on Apple’s App Store this week. But while many are using it for entertainment, others are exploiting it for misinformation: spreading fake “security footage,” false news reports, and fabricated viral videos.
“Our brains are powerfully wired to believe what we see,” said Ren Ng, a computer science professor at UC Berkeley. “We must learn to pause and question whether a video actually happened.”
Sora’s realism far exceeds that of earlier AI video tools from Meta and Google. Users describe it as a “game changer” and, some warn, the end of visual truth. Videos were once the gold standard of proof; now any clip could be synthetic.
Hollywood studios have already raised alarms, accusing Sora users of generating content that infringes on copyrighted characters and scenes. OpenAI CEO Sam Altman responded that the company will soon offer rights holders tools to control — and profit from — how their IP is used.
A New Era of AI-Driven Fakery
Sora is currently in an invite-only beta, but access codes are spreading fast on Reddit and Discord. Users can create clips, animate real photos into motion, and share the results across TikTok and Instagram, fueling an explosion of hyperrealistic AI content.
Within days of launch, users produced convincing hoaxes: fake dashcam crashes, doctored news segments, even bogus health claims. Despite OpenAI’s safeguards banning sexual content, political deepfakes, and extremist material, users have already found ways to test the system’s limits.
“Nobody will accept videos as proof anymore,” warned Lucas Hansen, founder of the AI education nonprofit CivAI.
OpenAI says it embeds a visible watermark and digital provenance metadata (C2PA content credentials) in Sora videos to trace them back to the app. But the watermark can be cropped out, and the metadata is easily stripped when a clip is trimmed, re-encoded, or re-uploaded.
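Where that metadata survives, it is machine-readable. As a minimal sketch, assuming exiftool is installed and that any credentials surface as C2PA- or JUMBF-labeled tags (assumptions about the tooling, not a statement of Sora’s exact format), one could dump a file’s metadata and search for them:

```python
import json
import subprocess

def find_content_credentials(path: str) -> list[str]:
    """List metadata tags that look like C2PA/JUMBF provenance data.

    A heuristic sketch, not a verifier: assumes exiftool is on PATH and
    that credentials, if present, appear as tags whose group or name
    mentions "c2pa" or "jumbf". An empty result does NOT mean a clip is
    authentic; provenance metadata rarely survives re-encoding.
    """
    out = subprocess.run(
        ["exiftool", "-json", "-a", "-G1", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(out.stdout)[0]  # exiftool emits one JSON object per file
    return [k for k in tags if "c2pa" in k.lower() or "jumbf" in k.lower()]

if __name__ == "__main__":
    print(find_content_credentials("clip.mp4"))  # hypothetical file name
```

The asymmetry is the problem: a surviving credential supports authenticity of origin, but its absence proves nothing.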
Spotting What’s Real — and What Isn’t
Experts say telltale signs remain — for now. Sora clips are usually short (under 10 seconds) and may contain minor visual flaws, like misspelled signs or speech that’s slightly off-sync. But these “glitches” are vanishing quickly as the technology evolves.
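Clip length is the easiest of those signs to check programmatically. As a rough sketch, assuming ffmpeg’s ffprobe is installed and using the 10-second figure above as a (quickly aging) threshold:

```python
import json
import subprocess

SORA_TYPICAL_MAX_SECONDS = 10.0  # rough figure cited above; will age quickly

def clip_duration(path: str) -> float:
    """Return a video's duration in seconds via ffprobe (part of ffmpeg)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return float(json.loads(out.stdout)["format"]["duration"])

if __name__ == "__main__":
    d = clip_duration("clip.mp4")  # hypothetical file name
    if d <= SORA_TYPICAL_MAX_SECONDS:
        print(f"{d:.1f}s: very short, consistent with (not proof of) AI generation")
    else:
        print(f"{d:.1f}s: longer than typical Sora output")
```

A duration check is only a weak filter, of course; longer fakes can be stitched together from multiple short clips.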
“Social media is a complete dumpster,” said Hany Farid, a digital forensics expert and UC Berkeley professor. “One of the only reliable ways to avoid fake videos may soon be to stop scrolling TikTok and Instagram altogether.”
Sora represents a turning point in digital media: a world where seeing is no longer believing. As AI blurs the final line between fiction and fact, trust may soon depend less on what we watch — and more on where it comes from.