
Sora is showing us how broken deepfake detection is

OpenAI's new video generation app, Sora, highlights how inadequate current deepfake detection methods are. Whatever the merits of the C2PA provenance standard, it fails to protect viewers from misleading content generated with Sora, which can produce harmful videos of public figures and copyrighted characters. Users have reported seeing their likenesses misused in offensive contexts, raising concerns about the implications of this technology for social media.

