
Improving text-to-image generation with object layout guidance

Bibliographic Details
Published in: Multimedia tools and applications, 2021-07, Vol. 80 (18), p. 27423-27443
Main Authors: Zakraoui, Jezia; Saleh, Moutaz; Al-Maadeed, Somaya; Jaam, Jihad Mohammed
Format: Article
Language:English

Description:
Automatically generating realistic images directly from a story text is a very challenging problem: it cannot be addressed with a single image generation approach, mainly because of the semantic complexity of the story text constituents. In this work, we propose a new approach that decomposes the task of story visualization into three phases: semantic text understanding, object layout prediction, and image generation and refinement. We first simplify the text into scene-graph triples that encode the semantic relationships between the story objects. We then introduce an object layout module that captures the features of these objects from the corresponding scene graph. Specifically, the module aggregates individual object features from the scene graph together with averaged or likelihood object features produced by a graph convolutional neural network; all of these features are concatenated into semantic triples that are fed to the image generation framework. For the image generation phase, we adopt a scene-graph image generation framework as stage I, whose output is refined by a StackGAN-based stage II conditioned on the object layout module and the stage-I output image. Our approach renders object details in high-resolution images while keeping the image structure consistent with the input text. To evaluate its performance, we use the COCO dataset and compare against three baselines, sg2im, StackGAN, and AttnGAN, in terms of image quality and user evaluation. The assessment results show that our object layout guidance significantly outperforms these baselines in the accuracy of semantic matching and the realism of the generated images representing the story text sentences.
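
The object layout module described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of the idea, not the authors' code: the class name, feature dimensions, and the use of mean-pooling for the "averaged" object features are assumptions made for illustration, and a single linear layer stands in for the sg2im-style graph convolutional network.

```python
# Hypothetical sketch of the object layout module described in the abstract.
# Names, dimensions, and the stand-in "GCN" are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class ObjectLayoutModule(nn.Module):
    def __init__(self, obj_dim=128, gcn_dim=128):
        super().__init__()
        # Stand-in for the graph convolutional network that propagates
        # information along scene-graph edges (sg2im uses several such layers).
        self.gcn = nn.Linear(obj_dim, gcn_dim)

    def forward(self, obj_feats):
        # obj_feats: (num_objects, obj_dim) individual object embeddings
        # taken from the scene-graph nodes.
        gcn_feats = torch.relu(self.gcn(obj_feats))      # per-object GCN features
        avg_feats = gcn_feats.mean(dim=0, keepdim=True)  # "averaged" object features
        avg_feats = avg_feats.expand_as(gcn_feats)
        # Concatenate individual and aggregated features into the semantic
        # triples that are fed to the image generation framework.
        return torch.cat([obj_feats, gcn_feats, avg_feats], dim=-1)

# Toy usage: three objects from triples such as (boy, riding, bike).
layout = ObjectLayoutModule()
objects = torch.randn(3, 128)
print(layout(objects).shape)  # torch.Size([3, 384])
```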
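Likewise, the two-stage generation scheme (scene-graph-to-image as stage I, StackGAN-style refinement as stage II) can be sketched as function composition. Everything below is hypothetical glue: the placeholder networks, signatures, and the assumed 64x64 stage-I and 256x256 stage-II resolutions are illustrative choices, modeled on how StackGAN upsamples a low-resolution first-stage image under conditioning.

```python
# Hypothetical wiring of the two generation stages described in the abstract.
# Stage I: sg2im-style generator from layout features (64x64 output assumed);
# stage II: StackGAN-style refiner conditioned on the layout features and the
# stage-I image (256x256 output assumed). Both networks are placeholders.
import torch
import torch.nn as nn

class StageI(nn.Module):
    """Placeholder for the scene-graph image generation framework (sg2im-like)."""
    def __init__(self, feat_dim=384):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 3 * 64 * 64)

    def forward(self, layout_feats):
        pooled = layout_feats.mean(dim=0)  # aggregate over objects
        return torch.tanh(self.fc(pooled)).view(1, 3, 64, 64)

class StageII(nn.Module):
    """Placeholder for the StackGAN-style refiner: upsample under conditioning."""
    def __init__(self, feat_dim=384):
        super().__init__()
        self.cond = nn.Linear(feat_dim, 3 * 64 * 64)
        self.up = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)
        self.refine = nn.Conv2d(6, 3, kernel_size=3, padding=1)

    def forward(self, img_lr, layout_feats):
        cond = self.cond(layout_feats.mean(dim=0)).view(1, 3, 64, 64)
        x = torch.cat([img_lr, cond], dim=1)         # layout + stage-I image
        return torch.tanh(self.refine(self.up(x)))   # (1, 3, 256, 256)

feats = torch.randn(3, 384)  # e.g., output of the layout module sketch above
img = StageII()(StageI()(feats), feats)
print(img.shape)  # torch.Size([1, 3, 256, 256])
```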
DOI: 10.1007/s11042-021-11038-0
ISSN: 1380-7501
EISSN: 1573-7721
Source: ABI/INFORM Global; Springer Nature: Jisc Collections: Springer Nature Read and Publish 2023-2025: Springer Reading List
Subjects:
Artificial neural networks
Computer Communication Networks
Computer Science
Data Structures and Information Theory
Image processing
Image quality
Image resolution
Layouts
Modules
Multimedia Information Systems
Performance evaluation
Semantics
Sentences
Special Purpose and Application-Based Systems