Since the release of OpenAI’s video AI Sora 2, clips featuring historical figures have been flooding social media platforms. Stalin steals chips from McDonald’s, Cleopatra feeds an admirer to crocodiles. In an interview with Der Spiegel, Roland Meyer explains why these seemingly entertaining videos are dangerous.
AI models such as Sora are trained on enormous amounts of data, including historical recordings, films and computer games. The AI does not synthesise a historically accurate picture from this data, however; it merely reproduces how we imagine a particular era to have looked.
POV (point of view) videos are particularly popular, showing, for example, the everyday life of a gladiator or a passenger on the Titanic as if they had had a smartphone with them.
It becomes dangerous when such images are passed off as authentic and genuine historical sources are displaced. A chilling example is AI-generated imagery of Nazi concentration camps circulating as supposedly real photographs.
According to Roland Meyer, commercial AI video generators have an extreme problem with sexist and racist bias. Many right-wing political actors use AI-generated imagery for propaganda purposes, promising to restore a nostalgic past that never existed.
AI extracts recurring patterns from existing images, and the most salient patterns are often stereotypical and discriminatory. This risks reproducing a one-sided view of history rather than opening up new perspectives. Historical research, by contrast, means breaking new ground and bringing previously invisible people and stories into focus.
The DIZH bridge professor calls for critical media literacy: with every video, we should ask who produced it, why, and with what intention. Archives and museums are gaining importance because they preserve authentic documents as a corrective to the flood of synthetic content.
The interview with Roland Meyer appeared in Der Spiegel on 15 November 2025.