
AI Video Wave: OpenAI's Sora and Meta Fuel 'Slop' Flood Concerns

Published 19 hours ago · 3 minute read
Uche Emeka

OpenAI, the company behind ChatGPT, has entered the emerging market of AI-generated video with the release of its new Sora social media app. Launched on Tuesday, the iPhone application aims to capture the attention of users currently engaged with short-form video platforms such as TikTok, YouTube, Instagram, and Facebook. Sora allows users to create highly imaginative videos, ranging in style from anime to hyper-realistic, depicting scenes of themselves doing almost anything imaginable.

The introduction of a continuous stream of AI-generated content on social media has sparked considerable debate and concern, particularly around the phenomenon termed “AI slop.” Critics worry that this proliferation of synthetic content could overshadow authentic human creativity and degrade the overall information ecosystem. Jose Marichal, a political science professor at California Lutheran University who studies how AI is restructuring society, noted that these realistic-looking videos are compelling even when implausible, which helps them draw users in.

OpenAI’s official launch video for Sora exemplified this capability, featuring an AI-generated version of CEO Sam Altman delivering his introduction from fantastical settings, including a psychedelic forest, the moon, and a stadium filled with cheering fans watching rubber duck races. The app is currently exclusive to Apple devices and available in the U.S. and Canada. This move follows a similar initiative by Meta, which recently launched its own feed of AI short-form videos within its Meta AI app, featuring CEO Mark Zuckerberg posting AI-generated content like a cartoon version of himself and an army of fuzzy creatures.

Both Sora and Meta’s Vibes product are designed with high personalization in mind, recommending new videos based on user engagement. Marichal observes that social media feeds are already inundated with such content, from fictional animal scenarios to easily debunked fake natural disaster reports. He emphasizes that while humans are naturally inclined to seek out extraordinary information, the danger arises when such content dominates online discourse. Marichal warns that an information environment lacking truth or trust can impede the rational decision-making needed for collective governance. The result, he argues, is a drift toward either extreme skepticism or absolute certainty, pushing society away from liberal and representative democracies and leaving people either manipulated or manipulators.

Recognizing these concerns, OpenAI addressed potential issues in its launch announcement, stating that “Concerns about doomscrolling, addiction, isolation, and (reinforcement learning)-sloptimized feeds are top of mind.” The company pledged to “periodically poll users on their wellbeing” and provide options to adjust their feeds, with a built-in bias towards recommending posts from friends over strangers. This acknowledges the urgent need to balance innovative AI capabilities with societal well-being and information integrity.
