An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture

Publication: Contribution to journal / Conference contribution in journal / Contribution to newspaper › Journal article › Research › peer review

Standard

An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture. / Hashmi, Mohammad Farukh; Ashish, B. Kiran Kumar; Keskar, Avinash G.; Bokde, Neeraj Dhanraj; Yoon, Jin Hee; Geem, Zong Woo.

In: IEEE Access, Vol. 8, 9102991, 01.2020, pp. 101293-101308.

Harvard

Hashmi, MF, Ashish, BKK, Keskar, AG, Bokde, ND, Yoon, JH & Geem, ZW 2020, 'An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture', IEEE Access, vol. 8, 9102991, pp. 101293-101308. https://doi.org/10.1109/ACCESS.2020.2998330

APA

Hashmi, M. F., Ashish, B. K. K., Keskar, A. G., Bokde, N. D., Yoon, J. H., & Geem, Z. W. (2020). An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture. IEEE Access, 8, 101293-101308. [9102991]. https://doi.org/10.1109/ACCESS.2020.2998330

CBE

Hashmi MF, Ashish BKK, Keskar AG, Bokde ND, Yoon JH, Geem ZW. 2020. An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture. IEEE Access. 8:101293-101308. https://doi.org/10.1109/ACCESS.2020.2998330

Vancouver

Hashmi MF, Ashish BKK, Keskar AG, Bokde ND, Yoon JH, Geem ZW. An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture. IEEE Access. 2020 Jan;8:101293-101308. 9102991. https://doi.org/10.1109/ACCESS.2020.2998330

Author

Hashmi, Mohammad Farukh ; Ashish, B. Kiran Kumar ; Keskar, Avinash G. ; Bokde, Neeraj Dhanraj ; Yoon, Jin Hee ; Geem, Zong Woo. / An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture. In: IEEE Access. 2020 ; Vol. 8. pp. 101293-101308.

Bibtex

@article{813d19b62d9349458d300794ac8b9bd8,
title = "An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture",
abstract = "In recent years, advances in deep learning have made it easy to synthetically generate face swaps with GANs and other tools. These swaps are highly realistic, leaving few traces that are classifiable by human eyes. Such manipulations, known as 'DeepFakes', are mostly anchored in video formats. Realistic fake videos and images of this kind are used to create a ruckus and degrade the quality of public discourse on sensitive issues; defamation, political distress, blackmail and many other forms of fake cyber terrorism are envisioned. This work proposes a microscopic-typo comparison of video frames. The temporal-detection pipeline compares minute visual traces on the faces in real and fake frames using a Convolutional Neural Network (CNN) and stores the abnormal features for training. A total of 512 facial landmarks were extracted and compared. Parameters such as eye blinking, lip sync, and eyebrow movement and position are the main deciding factors for classifying visual data as real or counterfeit. A Recurrent Neural Network (RNN) pipeline learns from these extracted features and then evaluates the visual data. The model was trained on a set of videos, consisting of real and fake counterparts, collected from multiple websites. The proposed algorithm and designed network set a new benchmark for detecting visual counterfeits and show how this system can achieve competitive results on any fake generated video or image.",
keywords = "Convolutional neural networks (CNN), DeepFakes, Facial landmarks, Generative adversarial network (GANs), Recurrent neural network (RNN), Visual counterfeits",
author = "Hashmi, {Mohammad Farukh} and Ashish, {B. Kiran Kumar} and Keskar, {Avinash G.} and Bokde, {Neeraj Dhanraj} and Yoon, {Jin Hee} and Geem, {Zong Woo}",
year = "2020",
month = jan,
doi = "10.1109/ACCESS.2020.2998330",
language = "English",
volume = "8",
pages = "101293--101308",
journal = "IEEE Access",
issn = "2169-3536",
publisher = "Institute of Electrical and Electronics Engineers",
}

RIS

TY - JOUR

T1 - An Exploratory Analysis on Visual Counterfeits Using Conv-LSTM Hybrid Architecture

AU - Hashmi, Mohammad Farukh

AU - Ashish, B. Kiran Kumar

AU - Keskar, Avinash G.

AU - Bokde, Neeraj Dhanraj

AU - Yoon, Jin Hee

AU - Geem, Zong Woo

PY - 2020/1

Y1 - 2020/1

N2 - In recent years, advances in deep learning have made it easy to synthetically generate face swaps with GANs and other tools. These swaps are highly realistic, leaving few traces that are classifiable by human eyes. Such manipulations, known as 'DeepFakes', are mostly anchored in video formats. Realistic fake videos and images of this kind are used to create a ruckus and degrade the quality of public discourse on sensitive issues; defamation, political distress, blackmail and many other forms of fake cyber terrorism are envisioned. This work proposes a microscopic-typo comparison of video frames. The temporal-detection pipeline compares minute visual traces on the faces in real and fake frames using a Convolutional Neural Network (CNN) and stores the abnormal features for training. A total of 512 facial landmarks were extracted and compared. Parameters such as eye blinking, lip sync, and eyebrow movement and position are the main deciding factors for classifying visual data as real or counterfeit. A Recurrent Neural Network (RNN) pipeline learns from these extracted features and then evaluates the visual data. The model was trained on a set of videos, consisting of real and fake counterparts, collected from multiple websites. The proposed algorithm and designed network set a new benchmark for detecting visual counterfeits and show how this system can achieve competitive results on any fake generated video or image.

AB - In recent years, advances in deep learning have made it easy to synthetically generate face swaps with GANs and other tools. These swaps are highly realistic, leaving few traces that are classifiable by human eyes. Such manipulations, known as 'DeepFakes', are mostly anchored in video formats. Realistic fake videos and images of this kind are used to create a ruckus and degrade the quality of public discourse on sensitive issues; defamation, political distress, blackmail and many other forms of fake cyber terrorism are envisioned. This work proposes a microscopic-typo comparison of video frames. The temporal-detection pipeline compares minute visual traces on the faces in real and fake frames using a Convolutional Neural Network (CNN) and stores the abnormal features for training. A total of 512 facial landmarks were extracted and compared. Parameters such as eye blinking, lip sync, and eyebrow movement and position are the main deciding factors for classifying visual data as real or counterfeit. A Recurrent Neural Network (RNN) pipeline learns from these extracted features and then evaluates the visual data. The model was trained on a set of videos, consisting of real and fake counterparts, collected from multiple websites. The proposed algorithm and designed network set a new benchmark for detecting visual counterfeits and show how this system can achieve competitive results on any fake generated video or image.

KW - Convolutional neural networks (CNN)

KW - DeepFakes

KW - Facial landmarks

KW - Generative adversarial network (GANs)

KW - Recurrent neural network (RNN)

KW - Visual counterfeits

UR - http://www.scopus.com/inward/record.url?scp=85086699417&partnerID=8YFLogxK

U2 - 10.1109/ACCESS.2020.2998330

DO - 10.1109/ACCESS.2020.2998330

M3 - Journal article

AN - SCOPUS:85086699417

VL - 8

SP - 101293

EP - 101308

JO - IEEE Access

JF - IEEE Access

SN - 2169-3536

M1 - 9102991

ER -
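
The abstract describes a Conv-LSTM hybrid: a CNN extracts per-frame facial features, and a recurrent network learns their temporal dynamics (eye blinking, lip sync, eyebrow movement) to classify videos as real or counterfeit. The following is a minimal PyTorch sketch of that general architecture, not the authors' implementation; all layer sizes, input resolution, and the class `ConvLSTMDetector` are illustrative assumptions.

```python
# Hypothetical sketch of a Conv-LSTM detector in the spirit of the abstract:
# a small CNN encodes each face crop into a feature vector, an LSTM consumes
# the per-frame features over time, and a linear head emits real/fake logits.
# All dimensions are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ConvLSTMDetector(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # CNN encoder for one 3x64x64 face crop (assumed input size).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # LSTM aggregates the per-frame features across the video.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # real vs. counterfeit logits

    def forward(self, frames):
        # frames: (batch, time, 3, 64, 64)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])  # one logit pair per video

model = ConvLSTMDetector()
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 videos, 8 frames each
print(tuple(logits.shape))  # (2, 2)
```

In practice, the features fed to the recurrent stage would come from detected facial landmarks and motion cues rather than raw frames, as the abstract indicates; the sketch only shows how the convolutional and recurrent stages compose.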