Authors: Samantha Klier and Harald Baier

DFRWS EU 2026

Abstract

Linking an image to its origin is a fundamental task in digital forensics, often addressed through Source Camera Identification (SCI) based on Sensor Pattern Noise (SPN). However, recent advances in AI-enhanced smartphone photography challenge the reliability of SPN. At the same time, noise-based identification approaches have been transferred successfully to AI-generated images. We therefore investigate whether the noise patterns of AI-generated images interfere with those of modern smartphones and analyze the implications for standard procedures. Our empirical evaluation reveals that the noise in AI-generated images is not predominantly additive, contradicting prior assumptions. Furthermore, we show that fingerprints of AI image generators can identify corresponding images only when the prompted resolution matches. Additionally, the standard Peak-to-Correlation Energy (PCE) threshold leads to high false-positive rates (61% for Adobe Firefly Image 4 and 100% for ChatGPT 5) when comparing AI images to smartphone fingerprints. We demonstrate that simple center-cropping eliminates these false positives without reducing true-positive identification performance. Our findings highlight the need for updated forensic methodologies in light of the growing influence of software on imaging pipelines.
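To make the comparison pipeline concrete, the following is a minimal sketch, not the authors' implementation: a center-crop step and a simplified PCE computed via FFT-based circular cross-correlation between a noise residual and a camera fingerprint. Array shapes, the crop size, the peak-exclusion window, and all function names are illustrative assumptions; inputs are assumed to be grayscale NumPy arrays of equal size.

```python
import numpy as np

def center_crop(img: np.ndarray, size: int = 1024) -> np.ndarray:
    """Crop a square region of size x size pixels from the image center.

    The crop size is clamped to the image dimensions, so smaller images
    are returned as the largest centered square that fits.
    """
    h, w = img.shape[:2]
    size = min(size, h, w)
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

def pce(residual: np.ndarray, fingerprint: np.ndarray, ignore: int = 5) -> float:
    """Simplified PCE: squared correlation peak over the mean squared
    correlation outside a small window around the peak.

    Both inputs must have the same shape; circular cross-correlation is
    computed in the frequency domain.
    """
    f = np.fft.fft2(residual) * np.conj(np.fft.fft2(fingerprint))
    cc = np.real(np.fft.ifft2(f))
    peak = cc.max()
    py, px = np.unravel_index(cc.argmax(), cc.shape)
    # Exclude a neighborhood around the peak from the energy estimate.
    mask = np.ones_like(cc, dtype=bool)
    mask[max(0, py - ignore):py + ignore + 1,
         max(0, px - ignore):px + ignore + 1] = False
    energy = np.mean(cc[mask] ** 2)
    return peak ** 2 / energy
```

In this sketch, cropping both the questioned image's residual and the fingerprint to a matching central region before calling `pce` mirrors the center-cropping step the abstract reports as eliminating false positives; a matching residual yields a PCE orders of magnitude above that of an unrelated one.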
