Image Synthesis using U-net: Sketch 2 Image
DOI: https://doi.org/10.3126/injet.v2i2.78613

Keywords: Generative Adversarial Networks, Image Synthesis, Sketch-to-Image, Conditional GANs, Deep Learning

Abstract
Image synthesis plays an important role in digital art, fashion design, and law enforcement, among other fields. In this paper, we introduce Sketch2Image, an automatic system for converting hand-drawn sketches into realistic images using Conditional Generative Adversarial Networks (cGANs). The model employs a U-Net-based encoder-decoder generator to produce high-quality images with fine detail. A feasibility study covering technical, operational, economic, and scheduling factors establishes the project's practicability and effectiveness. Development follows an incremental approach, enabling iterative refinement and performance improvement. The system is evaluated with metrics such as Mean Squared Error (MSE), the Structural Similarity Index (SSIM), and adversarial loss. Experimental results confirm the system's capacity to create visually realistic and contextually relevant images, with potential applications in creative and investigative fields.
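To illustrate the evaluation metrics named in the abstract, the sketch below computes MSE and a simplified, single-window form of SSIM in pure NumPy. This is a minimal illustration, not the paper's evaluation code: the standard SSIM averages the same statistic over local sliding windows (as implemented, e.g., in scikit-image), and the image arrays here are random stand-ins for a generated image and its ground-truth target.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a - b) ** 2))

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified single-window SSIM over the whole image.

    Uses the standard stabilizing constants C1 = (0.01 * L)^2 and
    C2 = (0.03 * L)^2, where L is the dynamic range of the pixel values.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(
        ((2 * mu_a * mu_b + c1) * (2 * cov + c2))
        / ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))
    )

rng = np.random.default_rng(0)
# Stand-ins for a synthesized image and its target (grayscale, values in [0, 1]).
generated = rng.random((64, 64))
target = np.clip(generated + rng.normal(0.0, 0.05, (64, 64)), 0.0, 1.0)

print(f"MSE:  {mse(generated, target):.4f}")   # near 0 for similar images
print(f"SSIM: {global_ssim(generated, target):.4f}")  # near 1 for similar images
```

Lower MSE and higher SSIM both indicate closer agreement between the generated image and the ground truth; adversarial loss, by contrast, is supplied by the cGAN discriminator during training rather than computed pairwise like this.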
License
Copyright (c) 2025 International Journal on Engineering Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.