Generative adversarial networks (GANs) for generating face images

Authors

  • Dolly Indra Universitas Muslim Indonesia, Indonesia
  • Muh Wahyu Hidayat Universitas Muslim Indonesia, Indonesia
  • Fitriyani Umar Universitas Muslim Indonesia, Indonesia

DOI:

https://doi.org/10.21107/kursor.v13i1.422

Keywords:

Dataset, Generative Adversarial Network, StyleGAN2-ADA, Face

Abstract

The advancement of artificial intelligence technology, particularly deep learning, presents significant potential in facial image processing. Generative Adversarial Networks (GANs), a type of deep learning model, have demonstrated remarkable capabilities in generating high-quality synthetic images through a competitive training process between a generator, which creates new data, and a discriminator, which evaluates its authenticity. However, public facial datasets such as CelebA and FFHQ are limited in representing global demographic diversity and raise privacy concerns. This study aims to generate realistic synthetic facial datasets using the StyleGAN2-ADA architecture, a specialized variant of GAN, trained from scratch on two types of datasets (private and public), each containing 480 images. The public dataset used is FFHQ (Flickr-Faces-HQ), known for its broad facial variation and high-quality images. Evaluation is conducted using the Fréchet Inception Distance (FID), a metric that assesses image quality by comparing the feature distributions of real and generated images. Results indicate that training from scratch on the public dataset (FFHQ) with a batch size of 16 and a learning rate of 0.0025 achieves an FID score of 85.67 and a performance of 86.46% at Tick 100, whereas the private dataset, under the same conditions, yields an FID score of 98.59 with a performance of 18.54%. The training-from-scratch approach with the public dataset thus proves more effective at generating high-quality synthetic facial images than the same approach with the private dataset. In conclusion, this approach supports the optimal generation of realistic synthetic facial data.
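The FID metric used in the evaluation compares the mean and covariance of feature embeddings (in practice, Inception-network activations) of real and generated images: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^½). A minimal NumPy sketch of that formula follows; the random feature arrays stand in for Inception embeddings and are purely illustrative, not part of the study:

```python
import numpy as np
from numpy.linalg import eigh

def sqrtm_psd(mat):
    # Matrix square root of a symmetric positive semi-definite matrix
    # via eigendecomposition (clipping tiny negative eigenvalues).
    vals, vecs = eigh(mat)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(feats_real, feats_gen):
    # FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2}),
    # using the symmetric form C_r^{1/2} C_g C_r^{1/2}, which has the
    # same trace of the matrix square root as C_r C_g.
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c_r = np.cov(feats_real, rowvar=False)
    c_g = np.cov(feats_gen, rowvar=False)
    c_r_half = sqrtm_psd(c_r)
    covmean = sqrtm_psd(c_r_half @ c_g @ c_r_half)
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(c_r + c_g - 2.0 * covmean))

# Illustrative stand-in features: "generated" features are shifted,
# so their FID against the real set should be clearly worse than
# the real set compared with itself (which is ~0).
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 8))
fake = rng.normal(0.5, 1.0, size=(1000, 8))
print(fid(real, real))  # near zero
print(fid(real, fake))  # clearly larger
```

Lower FID means the generated feature distribution is closer to the real one, which is why the FFHQ run's 85.67 is read as better than the private dataset's 98.59.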


References

[1] U. L. Yuhana, I. Imamah, C. Fatichah, and B. J. Santoso, “Effectiveness of Deep Learning Approach for Text Classification in Adaptive Learning,” J. Ilm. Kursor, vol. 11, no. 3, p. 137, 2022, doi: 10.21107/kursor.v11i3.285.

[2] V. Walter and A. Wagner, “Probabilistic simulation of electricity price scenarios using Conditional Generative Adversarial Networks,” Energy AI, vol. 18, no. September, 2024, doi: 10.1016/j.egyai.2024.100422.

[3] D. Indra, H. M. Fadlillah, Kasman, L. B. Ilmawan, and H. Lahuddin, “Classification of good and damaged rice using convolutional neural network,” Bull. Electr. Eng. Informatics, vol. 11, no. 2, pp. 785–792, 2022, doi: 10.11591/eei.v11i2.3385.

[4] H. Basri, P. Purnawansyah, H. Darwis, and F. Umar, “Klasifikasi Daun Herbal Menggunakan K-Nearest Neighbor dan Convolutional Neural Network dengan Ekstraksi Fourier Descriptor,” J. Teknol. dan Manaj. Inform., vol. 9, no. 2, pp. 79–90, 2023, doi: 10.26905/jtmi.v9i2.10350.

[5] Preeti, M. Kumar, and H. K. Sharma, “A GAN-Based Model of Deepfake Detection in Social Media,” Procedia Comput. Sci., vol. 218, pp. 2153–2162, 2022, doi: 10.1016/j.procs.2023.01.191.

[6] M. A. N. Ul Ghani, K. She, M. A. Rauf, M. Alajmi, Y. Y. Ghadi, and A. Algarni, “Securing synthetic faces: A GAN-blockchain approach to privacy-enhanced facial recognition,” J. King Saud Univ. - Comput. Inf. Sci., vol. 36, no. 4, p. 102036, 2024, doi: 10.1016/j.jksuci.2024.102036.

[7] J. Liao, T. Guha, and V. Sanchez, “Self-supervised random mask attention GAN in tackling pose-invariant face recognition,” Pattern Recognit., vol. 159, no. October 2024, p. 111112, 2025, doi: 10.1016/j.patcog.2024.111112.

[8] R. Hidayat, M. O. B. Wibowo, B. Y. Satria, and A. Winursito, “Implementation of Face Recognition Using Geometric Features Extraction,” J. Ilm. Kursor, vol. 11, no. 2, p. 83, 2022, doi: 10.21107/kursor.v11i2.284.

[9] E. Y. Puspaningrum, B. Nugroho, and A. Istifariyanto, “Preprocessing With Symmetrical Face and Gamma Correction For Face Recognition Under Varying Illumination With Robust Regression Classification,” vol. 9, no. 2, pp. 49–55, 2017.

[10] A. Kishore, A. Kumar, and N. Dang, “Enhanced Image Restoration by GANs using Game Theory,” Procedia Comput. Sci., vol. 173, no. 2019, pp. 225–233, 2020, doi: 10.1016/j.procs.2020.06.027.

[11] G. C. Oliveira et al., “Robust deep learning for eye fundus images: Bridging real and synthetic data for enhancing generalization,” Biomed. Signal Process. Control, vol. 94, p. 106263, 2024, doi: 10.1016/j.bspc.2024.106263.

[12] A. Melnik et al., “Face Generation and Editing With StyleGAN: A Survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 46, no. 5, pp. 3557–3576, 2024, doi: 10.1109/TPAMI.2024.3350004.

[13] H. Farhood, I. Joudah, A. Beheshti, and S. Muller, “Advancing student outcome predictions through generative adversarial networks,” Comput. Educ. Artif. Intell., vol. 7, no. June, p. 100293, 2024, doi: 10.1016/j.caeai.2024.100293.

[14] Z. Liu, P. Luo, X. Wang, and X. Tang, “Large-scale CelebFaces Attributes (CelebA) Dataset,” Multimedia Laboratory, The Chinese University of Hong Kong. Accessed: Nov. 24, 2024. [Online]. Available: https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html

[15] T. Karras, S. Laine, and T. Aila, “Flickr-Faces-HQ Dataset (FFHQ),” GitHub. Accessed: Nov. 24, 2024. [Online]. Available: https://github.com/NVlabs/ffhq-dataset?tab=readme-ov-file

[16] B. Yilmaz and R. Korn, “A Comprehensive guide to Generative Adversarial Networks (GANs) and application to individual electricity demand,” Expert Syst. Appl., vol. 250, no. April, p. 123851, 2024, doi: 10.1016/j.eswa.2024.123851.

[17] J. Lee, D. Jung, J. Moon, and S. Rho, “Advanced R-GAN: Generating anomaly data for improved detection in imbalanced datasets using regularized generative adversarial networks,” Alexandria Eng. J., vol. 111, no. October 2024, pp. 491–510, 2025, doi: 10.1016/j.aej.2024.10.084.

[18] K. Khairunnisak, G. M. Fahmi, and D. Suhartono, “Implementasi Steganografi Gambar Menggunakan Algoritma Generative Adversarial Network,” SINTECH (Science Inf. Technol. J., vol. 6, no. 1, pp. 47–57, 2023, doi: 10.31598/sintechjournal.v6i1.1258.

[19] V. Walter and A. Wagner, “Probabilistic simulation of electricity price scenarios using Conditional Generative Adversarial Networks,” Energy AI, vol. 18, no. June, p. 100422, 2024, doi: 10.1016/j.egyai.2024.100422.

[20] G. Pasqualino, L. Guarnera, A. Ortis, and S. Battiato, “MITS-GAN: Safeguarding medical imaging from tampering with generative adversarial networks,” Comput. Biol. Med., vol. 183, no. March, p. 109248, 2024, doi: 10.1016/j.compbiomed.2024.109248.

[21] F. Olaoye and K. Potter, “Generative Adversarial Networks (GANs) for Cyber Threat Intelligence,” J. Cybersecurity, no. October, 2024.

[22] A. Gun et al., “High-resolution knee plain radiography image synthesis using style generative adversarial network,” J. Orthop. Res., vol. 41, pp. 84–93, 2023, doi: 10.1002/jor.25325.

[23] M. Yates, G. Hart, R. Houghton, M. Torres Torres, and M. Pound, “Evaluation of synthetic aerial imagery using unconditional generative adversarial networks,” ISPRS J. Photogramm. Remote Sens., vol. 190, no. October 2021, pp. 231–251, 2022, doi: 10.1016/j.isprsjprs.2022.06.010.

[24] Fawaidul Badri, M. Taqijuddin Alawiy, and Eko Mulyanto Yuniarno, “Deep Learning Architecture Based on Convolutional Neural Network (Cnn) in Image Classification,” J. Ilm. Kursor, vol. 12, no. 2, pp. 83–92, 2023, doi: 10.21107/kursor.v12i2.349.

[25] H. Tufail, A. Ahad, I. Puspitasari, I. Shayea, P. J. Coelho, and I. M. Pires, “Deep Learning in Smart Healthcare: A GAN-based Approach for Imbalanced Alzheimer’s Disease Classification,” Procedia Comput. Sci., vol. 241, no. 2019, pp. 146–153, 2024, doi: 10.1016/j.procs.2024.08.021.

[26] F. Handayani and M. Mustikasari, “Sentiment Analysis of Electric Cars Using Recurrent Neural Network Method in Indonesian Tweets,” J. Ilm. Kursor, vol. 10, no. 4, pp. 153–158, 2020, doi: 10.21107/kursor.v10i4.233.

[27] I. J. Goodfellow, “Generative Adversarial Networks,” Mach. Learn. Data Sci. Handb. Data Min. Knowl. Discov. Handbook, Third Ed., pp. 375–400, 2023, doi: 10.1007/978-3-031-24628-9_17.

[28] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, “Analyzing and improving the image quality of stylegan,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 8107–8116, 2020, doi: 10.1109/CVPR42600.2020.00813.

[29] J. Pries, S. Bhulai, and R. van der Mei, “Evaluating a face generator from a human perspective,” Mach. Learn. with Appl., vol. 10, no. February, p. 100412, 2022, doi: 10.1016/j.mlwa.2022.100412.

[30] T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen, and T. Aila, “Training generative adversarial networks with limited data,” Adv. Neural Inf. Process. Syst., vol. 33, 2020.

[31] D. A. Talib and A. A. Abed, “Real-Time Deepfake Image Generation Based on Stylegan2-ADA,” Rev. d’Intelligence Artif., vol. 37, no. 2, pp. 397–405, 2023, doi: 10.18280/ria.370216.

Published

2025-07-26

Issue

Section

Articles