Photographic Text-To-Image Synthesis via Multi-Turn Dialogue Using Attentional GAN
Generating a natural-looking image from text is a challenging problem. To address it, this paper introduces a novel approach for synthesizing a photo-realistic image from a caption through multi-turn dialogue: the user adjusts the image turn by turn with additional captions, integrating human judgment into the synthesis loop. At each turn, the input is passed to a dialogue state tracker to extract a context feature, from which the generator produces an image. If the image does not match the user's expectations, the user provides another dialogue turn, and the system conditions on both the new input and the previously generated image to produce a new one. In this manner, the user can iteratively steer the image toward what was imagined. We performed extensive experiments on the CUB and COCO datasets, generating a realistic image at each turn, and obtained an Inception Score (IS) of 4.38 ± 0.05 with an R-precision of 67.96 ± 5.27% on CUB, and an IS of 26.12 ± 0.24 with an R-precision of 91.00 ± 2.31% on COCO. The work could further be enhanced to synthesize higher-quality images, integrate voice input, and generate video from stories. This research is limited to 256×256 images at each turn.
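The turn-by-turn interaction described above can be sketched as a simple control loop. The sketch below is illustrative only: `embed_caption`, `update_state`, and `generate` are hypothetical stand-ins for the paper's text encoder, dialogue state tracker, and attentional generator, showing only how each turn conditions on both the new caption and the previous image.

```python
import numpy as np

EMBED_DIM = 128
IMG_SIZE = 256  # each turn produces a 256x256 image, as in the paper


def embed_caption(caption: str) -> np.ndarray:
    # Hypothetical text encoder: deterministic pseudo-embedding per caption.
    rng = np.random.default_rng(abs(hash(caption)) % (2**32))
    return rng.standard_normal(EMBED_DIM)


def update_state(state: np.ndarray, caption_emb: np.ndarray) -> np.ndarray:
    # Hypothetical dialogue state tracker: blends the running context
    # feature with the embedding of the newest dialogue turn.
    return 0.5 * state + 0.5 * caption_emb


def generate(state: np.ndarray, prev_image: np.ndarray) -> np.ndarray:
    # Stand-in generator: combines the context feature with the image
    # from the previous turn (a real model would be an attentional GAN).
    intensity = float(np.tanh(state).mean())
    return 0.5 * prev_image + 0.5 * intensity * np.ones((IMG_SIZE, IMG_SIZE, 3))


def dialogue_synthesis(captions):
    # Multi-turn loop: each turn refines the previous image using the
    # context feature extracted by the dialogue state tracker.
    state = np.zeros(EMBED_DIM)
    image = np.zeros((IMG_SIZE, IMG_SIZE, 3))
    for caption in captions:
        state = update_state(state, embed_caption(caption))
        image = generate(state, image)
    return image


img = dialogue_synthesis(["a small red bird", "with a short pointed beak"])
print(img.shape)  # (256, 256, 3)
```

The key structural point is that `generate` receives both the updated context feature and the previous turn's image, so each refinement builds on what the user has already seen rather than starting from scratch.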
Copyright: © Khwopa Engineering College and Khwopa College of Engineering