
Projects in Generative Adversarial Networks


Python Projects in Generative Adversarial Networks for Masters and PhD

    Project Background:
    Generative Adversarial Networks (GANs) have emerged as a revolutionary approach to generative modeling, aiming to learn and generate realistic data samples that resemble the training data distribution. The project is rooted in addressing the challenges and potential of GANs across various domains, including computer vision, natural language processing, and generative art. A GAN consists of two neural networks, a generator and a discriminator, engaged in a min-max game: the generator tries to produce authentic-looking samples, while the discriminator attempts to differentiate between real and generated samples. This project delves into advancing the capabilities of GANs by exploring novel architectures, training techniques, and applications. By harnessing the power of GANs, this project seeks to contribute to high-quality synthetic data generation, transfer learning, data augmentation, and creative content generation.
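The min-max game described above can be sketched as a minimal training loop. The toy setup below (tiny fully connected networks, a 1-D Gaussian target distribution, illustrative hyperparameters) is a sketch of the alternating generator/discriminator updates, not a production recipe:

```python
# Minimal GAN training loop on 1-D Gaussian data (illustrative sketch).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps latent noise to 1-D samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores samples as real (1) vs. generated (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0  # target distribution N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: non-saturating loss, push D(fake) toward 1.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(f"final d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

The discriminator sees `fake.detach()` so that its update does not backpropagate into the generator; the generator's own step then reuses the live `fake` tensor.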

    Problem Statement

  • GANs can suffer from mode collapse, where the generator fails to explore the entire data distribution and only generates a limited set of samples.
  • GAN training is often unstable, with the generator and discriminator oscillating in performance and leading to difficulties in convergence.
  • Existing metrics for evaluating GAN performance may not accurately capture the quality and diversity of generated samples.
  • GANs may struggle to generalize well to unseen data distributions or exhibit biases based on the training data, affecting their applicability in diverse scenarios.

    Aim and Objectives

  • Enhance the quality and diversity of generated samples in Generative Adversarial Networks (GANs).
  • Develop novel GAN architectures to mitigate mode collapse and improve sample diversity.
  • Investigate training stabilization techniques to improve convergence and reduce training instability in GANs.
  • Explore and refine evaluation metrics to accurately assess the quality, realism, and diversity of generated samples.
  • Enhance the generalization capabilities of GANs to unseen data distributions and reduce biases in generated samples.
  • Apply GANs to various domains such as computer vision, natural language processing, and generative art to demonstrate their versatility and effectiveness.

    Contributions to Generative Adversarial Networks

  • Proposed novel GAN architectures that mitigate mode collapse and improve sample diversity.
  • Developed training stabilization techniques to enhance convergence and reduce training instability in GANs.
  • Contributed refined evaluation metrics for accurately assessing the quality, realism, and diversity of generated samples.
  • Improved the generalization capabilities of GANs to unseen data distributions and reduced biases in generated samples.

    Deep Learning Algorithms for Generative Adversarial Networks

  • Deep Convolutional Generative Adversarial Network (DCGAN)
  • Wasserstein Generative Adversarial Network (WGAN)
  • Conditional Generative Adversarial Network (CGAN)
  • InfoGAN (Information Maximizing Generative Adversarial Network)
  • CycleGAN (Cycle-Consistent Generative Adversarial Network)
  • Progressive Growing of GANs (PGGAN)
  • StyleGAN (Style-Based Generative Adversarial Network)
  • Self-Attention Generative Adversarial Network (SAGAN)
  • BigGAN (Big Generative Adversarial Network)
  • Autoencoding GAN (AE-GAN)
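As one concrete instance from the list above, a DCGAN-style generator upsamples a latent vector to an image through transposed convolutions. The sketch below follows the common 100-dimensional-latent, 64x64-output layout; the channel widths are the conventional defaults, used here for illustration rather than as a tuned architecture:

```python
# Sketch of a DCGAN-style generator: transposed convolutions upsample a
# latent vector to a 64x64 RGB image (illustrative layer sizes).
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, latent_dim=100, base_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent (N, 100, 1, 1) -> (N, 512, 4, 4)
            nn.ConvTranspose2d(latent_dim, base_channels * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base_channels * 8), nn.ReLU(True),
            # -> (N, 256, 8, 8)
            nn.ConvTranspose2d(base_channels * 8, base_channels * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels * 4), nn.ReLU(True),
            # -> (N, 128, 16, 16)
            nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels * 2), nn.ReLU(True),
            # -> (N, 64, 32, 32)
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels), nn.ReLU(True),
            # -> (N, 3, 64, 64); Tanh maps pixels into [-1, 1]
            nn.ConvTranspose2d(base_channels, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

gen = DCGANGenerator()
img = gen(torch.randn(2, 100, 1, 1))
print(img.shape)
```

Each stride-2 transposed convolution doubles the spatial resolution, so the 1x1 latent grows to 4, 8, 16, 32, and finally 64 pixels per side.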

    Datasets for Generative Adversarial Networks

  • MNIST
  • CelebA
  • CIFAR-10
  • LSUN
  • ImageNet
  • Fashion-MNIST
  • COCO (Common Objects in Context)
  • Cityscapes
  • LSUN Bedrooms
  • LSUN Cars
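Whichever dataset is chosen, GAN pipelines typically rescale 8-bit pixels to [-1, 1] so real samples match the Tanh output range of a typical generator. A minimal sketch of that preprocessing step, with a random uint8 array standing in for an MNIST-shaped batch (no actual dataset download is assumed):

```python
# GAN-style preprocessing sketch: scale 8-bit images to [-1, 1] so real
# samples match a Tanh generator's output range. The random uint8 batch
# below is a stand-in for a batch from any of the datasets listed above.
import numpy as np

def to_gan_range(images_uint8):
    """Map pixel values from [0, 255] to [-1, 1] as float32."""
    return images_uint8.astype(np.float32) / 127.5 - 1.0

batch = np.random.randint(0, 256, size=(16, 28, 28, 1), dtype=np.uint8)  # MNIST-shaped
scaled = to_gan_range(batch)
print(scaled.min() >= -1.0, scaled.max() <= 1.0)
```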

    Performance Metrics

  • Inception Score (IS)
  • Fréchet Inception Distance (FID)
  • Precision and Recall (PR)
  • Fréchet Distance (FD)
  • Mean Squared Error (MSE)
  • Structural Similarity Index (SSIM)
  • Peak Signal-to-Noise Ratio (PSNR)
  • Wasserstein Distance
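Of these, FID is the most widely reported: it is the Fréchet distance between two Gaussians fitted to feature activations of real and generated images. A minimal NumPy/SciPy sketch of the computation, with random vectors standing in for the Inception-v3 features used in practice:

```python
# FID sketch: Frechet distance between Gaussians fitted to feature
# activations. Real use extracts Inception-v3 features; random vectors
# stand in here (illustrative only).
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    mu1, mu2 = feats_a.mean(0), feats_b.mean(0)
    c1 = np.cov(feats_a, rowvar=False)
    c2 = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))
b = rng.normal(size=(500, 8))
print(abs(fid(a, a)) < 1e-6)  # identical feature sets give (numerically) zero FID
print(fid(a, b) >= 0.0)
```

Lower FID is better; comparing a feature set against itself yields zero up to numerical error from the matrix square root.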

    Software Tools and Technologies

    Operating System: Ubuntu 18.04 LTS 64bit / Windows 10
    Development Tools: Anaconda3, Spyder 5.0, Jupyter Notebook
    Language Version: Python 3.9
    Python Libraries:
    1. Python ML Libraries:

  • Scikit-Learn
  • NumPy
  • Pandas
  • Matplotlib
  • Seaborn

    Supporting Tools:

  • Docker
  • MLflow

    2. Deep Learning Frameworks:

  • Keras
  • TensorFlow
  • PyTorch

    Additional Divergence Metrics:

  • Total Variation Distance (TVD)
  • Jensen-Shannon Divergence (JSD)
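Both of these divergence metrics can be computed directly on discrete histograms of real and generated samples. A minimal NumPy sketch (the natural-log base for JSD is an assumption here; base 2 is also common):

```python
# Sketch of the two divergence metrics above, computed on discrete
# probability histograms with NumPy (illustrative only).
import numpy as np

def tvd(p, q):
    """Total Variation Distance: half the L1 distance between distributions."""
    return 0.5 * np.abs(p - q).sum()

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon Divergence: symmetrized, bounded KL to the midpoint."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
print(round(tvd(p, q), 6))  # 0.3
print(jsd(p, p))            # 0.0 (identical distributions)
```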