Forensic Similarity for Digital Images

A new deep learning technique to expose manipulated multimedia, falsified images, and fake news


(edited image credit: reddit user /u/rombouts)

Owen Mayer & Matthew C. Stamm
Multimedia and Information Security Lab
Electrical and Computer Engineering
Drexel University, Philadelphia, PA, USA

Article / PDF
Code
Cite
MISL Website
Interactive Demos: Page 1 Page 2 Page 3


Overview

In this article, we introduce a new deep learning technique that reliably determines whether two small image regions contain the same or different forensic traces. By identifying localized differences in forensic traces, we can expose image forgeries, including those used to create fake news.

Forensic traces are the visually imperceptible signals embedded in an image during the capture process and any subsequent post-processing. Localized differences in forensic traces are a strong indication that an image has been tampered with, whereas regions of an unmodified image will be forensically similar to one another.

In this article, we propose a new multimedia forensics system composed of two Convolutional Neural Network (CNN) feature extractors in a hard-sharing (weight-sharing) configuration, a shallow comparative neural network called the "Similarity network," and an associated deep learning training procedure. The whole system, shown below, takes two image patches as input and outputs a single score that measures their forensic similarity.
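
The structure described above can be sketched in TensorFlow Keras as follows. This is a minimal illustration rather than the exact architecture from the paper: the patch size, layer widths, and the concatenation-based comparison are assumptions made for readability.

```python
import tensorflow as tf

PATCH_SIZE = 256  # assumed patch size for illustration


def build_feature_extractor():
    """CNN feature extractor applied to a single image patch.
    Layer choices here are illustrative; see the paper/repo for the real ones."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu',
                               input_shape=(PATCH_SIZE, PATCH_SIZE, 3)),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(200, activation='relu'),  # forensic feature vector
    ])


def build_forensic_similarity_model():
    patch_a = tf.keras.Input(shape=(PATCH_SIZE, PATCH_SIZE, 3))
    patch_b = tf.keras.Input(shape=(PATCH_SIZE, PATCH_SIZE, 3))

    # "Hard sharing": a single feature extractor instance, so both patches
    # pass through identical weights.
    extractor = build_feature_extractor()
    feat_a = extractor(patch_a)
    feat_b = extractor(patch_b)

    # Shallow "Similarity network" that compares the two feature vectors
    # and outputs a single forensic-similarity score in [0, 1].
    combined = tf.keras.layers.Concatenate()([feat_a, feat_b])
    hidden = tf.keras.layers.Dense(64, activation='relu')(combined)
    score = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)

    return tf.keras.Model(inputs=[patch_a, patch_b], outputs=score)
```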

While this is a supervised learning approach, our technique is, importantly, effective on forensic traces that were not used to train the system. This property is critical for analyzing image forgeries found outside of the laboratory setting.

Full technical details are found in the IEEE T-IFS article. Additionally, the project Git repository contains the necessary TensorFlow model definitions, pre-trained weights, and code to evaluate images as well as to train the system from scratch.
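
For a sense of how evaluation might look, here is a hedged sketch that crops two patches from an image and scores their forensic similarity. It reuses the illustrative build_forensic_similarity_model and PATCH_SIZE from the sketch above and a hypothetical weights file; it is not the repository's actual API.

```python
import numpy as np

# Reuses the illustrative model from the sketch above -- not the repo's API.
model = build_forensic_similarity_model()
# model.load_weights("pretrained_weights.h5")  # hypothetical weights file


def compare_patches(img, top_left_a, top_left_b, size=PATCH_SIZE):
    """Crop two patches from `img` (H x W x 3, floats in [0, 1]) at the given
    top-left (row, col) corners and return their forensic-similarity score."""
    (ya, xa), (yb, xb) = top_left_a, top_left_b
    patch_a = img[ya:ya + size, xa:xa + size][np.newaxis]
    patch_b = img[yb:yb + size, xb:xb + size][np.newaxis]
    return float(model.predict([patch_a, patch_b], verbose=0)[0, 0])

# Scores near 1 suggest the two patches share the same forensic traces;
# scores near 0 suggest different processing histories.
```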

Interactive Demo

This demo puts you in the role of a forensic investigator. Examine the following image for traces of tampering.

Instructions:

Patches that have different forensic traces from the selected reference patches will be highlighted in red.

Select a reference patch in an unaltered portion of the image, and the forensic similarity tool will expose the tampered regions.

When multiple reference patches are selected, the demo highlights patches that are, on average, forensically different from the references.
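
A minimal sketch of this averaging rule, assuming the per-patch similarity scores have already been computed by the network; the 0.5 threshold is an assumption for illustration, not a value taken from the paper.

```python
import numpy as np


def highlight_different_patches(scores, threshold=0.5):
    """Flag patches that are, on average, forensically different from the references.

    scores: array of shape (num_references, num_patches), where scores[i, j]
            is the forensic-similarity score between reference i and patch j.
    Returns a boolean mask over patches (True = highlight in red).
    """
    mean_scores = scores.mean(axis=0)  # average over reference patches
    return mean_scores < threshold     # low similarity -> different forensic trace


# Example: two reference patches compared against five image patches.
scores = np.array([[0.90, 0.80, 0.20, 0.85, 0.10],
                   [0.95, 0.75, 0.30, 0.90, 0.20]])
print(highlight_different_patches(scores))  # [False False  True False  True]
```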

In this example, the left person's shirt has been edited with a brush tool to remove rain-drop stains (editing by reddit user /u/rombouts on /r/PhotoshopRequest). Selecting reference patches on the right person and/or the background will reveal that the shirt has undergone a different processing history.

More demos here! And here! And here, too!

Related Works

This article builds on several works from the MISL group, including:


Copyright © 2020 Owen Mayer