The number is increasing rapidly: this is how IT forensic experts uncover deepfakes

Deepfake porn can have serious consequences for those falsely depicted. (Photo: picture alliance/dpa / Marcus Brandt)
Whether on TikTok or in porn videos: there are more and more deepfakes on the internet. Specialists sometimes use simple tricks to expose AI-generated fakes. An expert provides insights into the work of IT forensic experts.
Their number is increasing rapidly: on social media, deepfakes provide fun and entertainment at best. On porn sites they can destroy people's reputations, and in the political arena they can influence elections. Deepfakes are deceptively real, AI-generated or manipulated images, videos and audio recordings. They are increasingly being recognized as a problem on the political agenda.
But how do you uncover a deepfake? Nicolas Müller leads a research group on deepfakes at the Fraunhofer Institute for Applied and Integrated Security in Garching near Munich. An initial analysis, he says, can be done with your own eyes and ears.
“Take an interview with a person: you can check it. This was supposedly recorded in Berlin on a certain day. Does the weather match? With a video you can also check: are all the shadows coming from the right direction?”
You can do this manually: “On a still image, you draw lines from the shadows to the objects casting them and check whether all of these lines, when extended upwards, meet at a common point. If they don't, then it's very likely a deepfake, at least if it's an outdoor scene with only one light source.”
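Müller's shadow test can be sketched as a small geometry exercise: from a still image you take a few hand-marked point pairs, one point on an object and the matching point on its shadow, and check whether the lines through those pairs roughly meet at a single point. The Python sketch below only illustrates that idea under the single-light-source assumption; it is not a tool the Fraunhofer group uses, and the pixel coordinates are made up.

```python
import itertools
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points, via the cross product."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection point of two homogeneous lines; None if (nearly) parallel."""
    pt = np.cross(l1, l2)
    return None if abs(pt[2]) < 1e-9 else pt[:2] / pt[2]

def lighting_consistent(pairs, tol=50.0):
    """
    pairs: list of (object_point, shadow_point) tuples in pixel coordinates,
    marked by hand on a still image. Returns True if all object-shadow lines
    converge near one common point, as expected for a single light source.
    'tol' (allowed spread in pixels) is an arbitrary choice for this sketch.
    """
    lines = [line_through(obj, sh) for obj, sh in pairs]
    pts = [intersection(a, b) for a, b in itertools.combinations(lines, 2)]
    pts = [p for p in pts if p is not None]
    if not pts:
        return True  # all lines parallel: light source at infinity, still consistent
    spread = float(np.max(np.std(np.array(pts), axis=0)))
    return spread < tol

# Made-up coordinates for three objects and their shadows (consistent lighting):
pairs = [((150, 50), (100, 200)), ((300, 58), (300, 210)), ((450, 43), (500, 190))]
print("lighting consistent:", lighting_consistent(pairs))
```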
Another indicator is a moment where a voice can be heard while the mouth of the person speaking is closed: “Then you can check: is that just a uniform delay between the video and the soundtrack, or has the AI made a mistake?”
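A rough way to operationalize that question is to check whether a single constant offset explains the mismatch between audio and lip movement. The sketch below assumes two per-frame signals have already been extracted by upstream tools (speech loudness from the audio track and mouth openness from a face tracker, both hypothetical here) and simply searches for the best constant lag; if even the best shift leaves the signals poorly correlated, the problem is not just a uniform delay.

```python
import numpy as np

def best_constant_lag(audio_energy, mouth_openness, max_lag=25):
    """
    Search over constant frame offsets and return the lag that best aligns
    per-frame speech energy with per-frame mouth openness, plus the
    correlation achieved at that lag. A low correlation even at the best lag
    suggests the mismatch is not a simple audio/video delay.
    """
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    best = (0, -1.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            corr = np.corrcoef(a[lag:], m[:len(m) - lag])[0, 1]
        else:
            corr = np.corrcoef(a[:lag], m[-lag:])[0, 1]
        if corr > best[1]:
            best = (lag, corr)
    return best

# Hypothetical signals; in practice these come from the real audio and video.
audio = np.random.rand(500)
mouth = np.roll(audio, 3) + 0.1 * np.random.rand(500)  # same signal, delayed 3 frames
lag, corr = best_constant_lag(audio, mouth)
print(f"best lag: {lag} frames, correlation after shift: {corr:.2f}")
```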
There are other classic indicators of a deepfake: artifacts around the mouth, skin tones on the neck and upper body that don't match, a hand with six fingers, an object that merges with a hand or floats above it.
Another clue is the metadata – even if it is missing. “If you create deepfakes with current AI models like Gemini or ChatGPT, then the metadata will say that it is AI-generated. But you can also throw the metadata away.”
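Checking the metadata can start with something as simple as dumping an image's EXIF fields: generators may record themselves in fields like "Software" or attach provenance data, and a completely empty result is itself a clue, since, as Müller says, the metadata may simply have been thrown away. A minimal sketch using Pillow, with a hypothetical file name:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path):
    """Print an image's readable EXIF fields; an empty result is itself a clue."""
    exif = Image.open(path).getexif()
    if not exif:
        print("no EXIF metadata: possibly stripped, or freshly generated")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        # Fields such as 'Software' sometimes name the generating tool.
        print(f"{name}: {value}")

dump_exif("suspect.jpg")  # hypothetical file name
```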
In addition to a critical eye and your own logic, there are also AI tools to quickly expose deepfakes: “In the end, a numerical value between 0 and 100 comes out. 0 stands for real, 100 stands for fake. Normally a deepfake has a value around 95,” says Müller.
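Under the hood, such a tool is a classifier whose output is mapped onto that 0-to-100 scale and compared against a threshold. The snippet below only illustrates how such a score would be read; the threshold is an illustrative choice, not a value from any specific Fraunhofer tool.

```python
def interpret_score(score, threshold=90.0):
    """
    score: detector output on a 0-100 scale, 0 meaning 'real' and 100 'fake',
    the convention Müller describes. The threshold here is an illustrative
    choice; real deployments tune it against known false-positive rates.
    """
    verdict = "likely a deepfake" if score >= threshold else "no fake detected"
    return f"{verdict} (score {score:.0f}/100)"

print(interpret_score(95))  # the typical value Müller cites for a deepfake
```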
However, the AI models are continually being improved. “The models are converging towards output that is almost indistinguishable from real material.” Nevertheless, the researcher does not view the situation pessimistically: “It's like in IT security: the attacker improves, and the defense then follows suit.”
Jens Kramosch from the company Leak.Red in Erfurt approaches a deepfake like an investigator: “It's like arriving at a crime scene. It's best to take in the overall picture first. I pay attention, for example, to the hair, the hairline, the blinking and the skin texture.” The skin in deepfakes is often too smooth. “It's also important not to look just at the center of the image but at the edges: do the lines match? Do the shadows match? There are various AI models that focus on the object in the middle and neglect what's around it.”
The hardest deepfake to recognize is one with a single person in the middle of the image and a diffuse background. “For a woman in a sea of flowers, for example, it is much harder for the AI to render it in a deceptively realistic way.” Then comes the metadata: is the geolocation correct? Time and place? “Sometimes it says midnight and 1970 as the year it was created.”
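The "1970" Kramosch mentions is the Unix epoch, a default that many systems fall back to when no real creation time is set, so even a simple plausibility check on the EXIF timestamp helps. A small, hypothetical sketch:

```python
from datetime import datetime

def timestamp_is_plausible(exif_datetime, earliest_year=1995):
    """
    EXIF dates look like '2023:06:14 18:32:07'. Dates in 1970 (the Unix
    epoch) or other implausibly early years are the kind of default values
    Kramosch describes. The cutoff year is an arbitrary choice.
    """
    try:
        dt = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S")
    except (TypeError, ValueError):
        return False  # missing or malformed is itself worth noting
    return dt.year >= earliest_year

print(timestamp_is_plausible("1970:01:01 00:00:00"))  # False: classic default value
```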
“We can use our AI to create a chain of evidence,” says Kramosch; victims can commission this. “If the deepfake is online on Instagram, for example, we can freeze it as evidence, and then it can no longer be changed. It is genuinely secured forensically.”
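The "freezing" Kramosch describes amounts to preserving a copy together with proof that it has not been changed since capture. A standard building block for that, sketched below, is a cryptographic hash recorded alongside the source and a timestamp; this is a generic illustration, not Leak.Red's actual process.

```python
import hashlib
import json
from datetime import datetime, timezone

def freeze_evidence(path, source_url):
    """
    Record a SHA-256 fingerprint of a downloaded file together with its
    source and a UTC capture time. Any later change to the file changes the
    hash, so the record documents that the preserved copy is unaltered.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "file": path,
        "source": source_url,
        "sha256": digest,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Hypothetical file and URL:
print(freeze_evidence("deepfake_post.mp4", "https://example.com/post"))
```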
Kramosch also observes that AI keeps getting better, constantly outdoing itself. “If we talk about AI deepfakes that went online a year ago: since then there have been 23 better models. I think that by the end of the year we will really have reached the point where laypeople can no longer tell the difference on social media.”
In the future, we will have to rethink things: “At some point there will be almost only deepfakes, or only artificially generated content, and then it will be important to be able to say: this is an original. Then we need something like the blue tick on Instagram, a kind of digital stamp of authenticity. That may sound dramatic now, but we'll be there next year.”
But will deepfakes remain identifiable as fakes with the help of AI tools? “I'm just saying yes now because I'm optimistic and because of course every AI has a certain pattern. But the last few months show that development is happening very, very quickly.”
“AI detectors are often black box systems. They determine with a probability whether it is a deepfake, but they do not necessarily provide an explanation as to why,” warned Tobias Wirth from the German Research Center for Artificial Intelligence in February.
AI can recognize patterns and indicators that are difficult or impossible for the human eye to perceive, for example subtle discrepancies at the pixel level. In court, however, this is problematic, because assessing the evidence requires statements that can be explained and verified.
Sources used: Frank Christiansen, dpa





