Text-To-Image Synthesis Method Evaluation Based on Visual Patterns

Publication: Contribution to book/anthology/report/proceeding › Conference contribution in proceedings › Research › peer review

Standard

Text-To-Image Synthesis Method Evaluation Based on Visual Patterns. / Sommer, William Lund; Iosifidis, Alexandros.

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings. IEEE, 2020. pp. 4097-4101. 9053034.

Harvard

Sommer, WL & Iosifidis, A 2020, Text-To-Image Synthesis Method Evaluation Based on Visual Patterns. in 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings., 9053034, IEEE, pp. 4097-4101, 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020, Barcelona, Spain, 04/05/2020. https://doi.org/10.1109/ICASSP40776.2020.9053034

APA

Sommer, W. L., & Iosifidis, A. (2020). Text-To-Image Synthesis Method Evaluation Based on Visual Patterns. In 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings (pp. 4097-4101). [9053034] IEEE. https://doi.org/10.1109/ICASSP40776.2020.9053034

CBE

Sommer WL, Iosifidis A. 2020. Text-To-Image Synthesis Method Evaluation Based on Visual Patterns. In 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings. IEEE. pp. 4097-4101. https://doi.org/10.1109/ICASSP40776.2020.9053034

MLA

Sommer, William Lund and Alexandros Iosifidis. "Text-To-Image Synthesis Method Evaluation Based on Visual Patterns". 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings. IEEE. 2020, 4097-4101. https://doi.org/10.1109/ICASSP40776.2020.9053034

Vancouver

Sommer WL, Iosifidis A. Text-To-Image Synthesis Method Evaluation Based on Visual Patterns. In 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings. IEEE. 2020. p. 4097-4101. 9053034 https://doi.org/10.1109/ICASSP40776.2020.9053034

Author

Sommer, William Lund ; Iosifidis, Alexandros. / Text-To-Image Synthesis Method Evaluation Based on Visual Patterns. 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings. IEEE, 2020. pp. 4097-4101

Bibtex

@inproceedings{a56e60475be7437b985b1b56065b3a4c,
title = "Text-To-Image Synthesis Method Evaluation Based on Visual Patterns",
abstract = "A commonly used evaluation metric for text-to-image synthesis is the Inception score (IS) [1], which has been shown to be a quality metric that correlates well with human judgment. However, IS does not reveal properties of the generated images indicating the ability of a text-to-image synthesis method to correctly convey semantics of the input text descriptions. In this paper, we introduce an evaluation metric and a visual evaluation method allowing for the simultaneous estimation of the realism, variety and semantic accuracy of generated images. The proposed method uses a pre-trained Inception network [2] to produce high dimensional representations for both real and generated images. These image representations are then visualized in a 2-dimensional feature space defined by the t-distributed Stochastic Neighbor Embedding (t-SNE) [3]. Visual concepts are determined by clustering the real image representations, and are subsequently used to evaluate the similarity of the generated images to the real ones by classifying them to the closest visual concept. The resulting classification accuracy is shown to be a effective gauge for the semantic accuracy of text-to-image synthesis methods.",
keywords = "Data visualization, Evaluation metrics, Text-to-Image Synthesis",
author = "Sommer, {William Lund} and Alexandros Iosifidis",
year = "2020",
doi = "10.1109/ICASSP40776.2020.9053034",
language = "English",
pages = "4097--4101",
booktitle = "2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings",
publisher = "IEEE",
note = "2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 ; Conference date: 04-05-2020 Through 08-05-2020",

}

RIS

TY - GEN

T1 - Text-To-Image Synthesis Method Evaluation Based on Visual Patterns

AU - Sommer, William Lund

AU - Iosifidis, Alexandros

PY - 2020

Y1 - 2020

N2 - A commonly used evaluation metric for text-to-image synthesis is the Inception score (IS) [1], which has been shown to be a quality metric that correlates well with human judgment. However, IS does not reveal properties of the generated images indicating the ability of a text-to-image synthesis method to correctly convey the semantics of the input text descriptions. In this paper, we introduce an evaluation metric and a visual evaluation method allowing for the simultaneous estimation of the realism, variety and semantic accuracy of generated images. The proposed method uses a pre-trained Inception network [2] to produce high-dimensional representations for both real and generated images. These image representations are then visualized in a 2-dimensional feature space defined by the t-distributed Stochastic Neighbor Embedding (t-SNE) [3]. Visual concepts are determined by clustering the real image representations, and are subsequently used to evaluate the similarity of the generated images to the real ones by classifying them to the closest visual concept. The resulting classification accuracy is shown to be an effective gauge for the semantic accuracy of text-to-image synthesis methods.

AB - A commonly used evaluation metric for text-to-image synthesis is the Inception score (IS) [1], which has been shown to be a quality metric that correlates well with human judgment. However, IS does not reveal properties of the generated images indicating the ability of a text-to-image synthesis method to correctly convey the semantics of the input text descriptions. In this paper, we introduce an evaluation metric and a visual evaluation method allowing for the simultaneous estimation of the realism, variety and semantic accuracy of generated images. The proposed method uses a pre-trained Inception network [2] to produce high-dimensional representations for both real and generated images. These image representations are then visualized in a 2-dimensional feature space defined by the t-distributed Stochastic Neighbor Embedding (t-SNE) [3]. Visual concepts are determined by clustering the real image representations, and are subsequently used to evaluate the similarity of the generated images to the real ones by classifying them to the closest visual concept. The resulting classification accuracy is shown to be an effective gauge for the semantic accuracy of text-to-image synthesis methods.

KW - Data visualization

KW - Evaluation metrics

KW - Text-to-Image Synthesis

UR - http://www.scopus.com/inward/record.url?scp=85089209423&partnerID=8YFLogxK

U2 - 10.1109/ICASSP40776.2020.9053034

DO - 10.1109/ICASSP40776.2020.9053034

M3 - Article in proceedings

AN - SCOPUS:85089209423

SP - 4097

EP - 4101

BT - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings

PB - IEEE

T2 - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020

Y2 - 4 May 2020 through 8 May 2020

ER -
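
Note on the evaluation pipeline described in the abstract: Inception features are extracted for real and generated images, the features are jointly embedded into 2-D with t-SNE, the real-image embeddings are clustered into visual concepts, and each generated image is classified to its closest concept, with the resulting classification accuracy used as the metric. The Python sketch below illustrates one possible reading of that pipeline; it is not the authors' code. All names (semantic_accuracy, real_feats, gen_feats, expected_concepts) are hypothetical, as are the choice of k-means for clustering and the default number of concepts.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def semantic_accuracy(real_feats, gen_feats, expected_concepts,
                      n_concepts=10, seed=0):
    """Nearest-visual-concept accuracy for generated images (illustrative sketch).

    real_feats, gen_feats: Inception feature matrices (n_samples x n_features),
    extracted beforehand. expected_concepts: hypothetical array giving, for each
    generated image, the concept index expected from its input text description
    (assumed to already use the same indexing as the discovered clusters).
    """
    # Jointly embed real and generated features into a 2-D space with t-SNE.
    embedded = TSNE(n_components=2, random_state=seed).fit_transform(
        np.vstack([real_feats, gen_feats]))
    real_emb = embedded[: len(real_feats)]
    gen_emb = embedded[len(real_feats):]

    # Visual concepts: clusters of the real-image embeddings (k-means assumed).
    concepts = KMeans(n_clusters=n_concepts, random_state=seed).fit(real_emb)

    # Classify each generated image to its closest visual concept and compare
    # against the concept expected from its text description.
    predicted = concepts.predict(gen_emb)
    return float(np.mean(predicted == np.asarray(expected_concepts)))

The same 2-D embedding can also be plotted directly, so the visualization doubles as the qualitative check of realism and variety that the abstract describes, while the classification accuracy provides the quantitative measure of semantic accuracy.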