
Challenges
GANs (Generative Adversarial Networks)
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this by deriving backpropagation signals from a competitive process between a pair of networks. The representations learned by GANs can be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification.
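To make the competitive process concrete, below is a minimal sketch (assuming PyTorch) of the two-network setup: a generator maps noise to fake samples, a discriminator scores real versus fake, and the generator's backpropagation signal comes from the discriminator. The toy data, network sizes and hyperparameters are illustrative assumptions only, not the project's actual configuration.

# Minimal GAN training loop on 1-D toy data (illustrative sketch, assumed setup).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 1

# Generator maps noise to fake samples; discriminator outputs P(sample is real).
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 3.0   # toy "real" data ~ N(3, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: the training signal is backpropagated through the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()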
Limitations with GAN
The big disadvantage is that these networks are very hard to train. The objective these networks try to optimize is a loss function that essentially has no closed form (unlike standard loss functions such as log-loss or squared error). Optimizing this loss function is therefore very hard and requires a lot of trial and error regarding the network structure and training protocol. Since RNNs are generally more fickle than CNNs, this is likely why few people, if any, have been able to apply GANs to anything more complex than images, such as text or speech.
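For reference, the objective the two networks compete over is the standard minimax value function from Goodfellow et al. (2014):

    min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]

There is no closed-form optimum for the generator under this objective, so training alternates gradient steps on D and G, and it is this alternation that makes the procedure so sensitive to architecture and hyperparameter choices.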
Training a GAN also demands excellent knowledge of the arguments being passed through the model, and Python 3 has relatively few libraries for implementing GANs.
So, we decided to apply a Restricted Boltzmann Machine (RBM) combined with an RNN instead.
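Since the implementation itself is not shown here, the following is only an illustrative sketch of the RBM component, trained with one step of contrastive divergence (CD-1) in NumPy; the RNN that would supply time-dependent biases in an RNN-RBM is omitted, and all sizes and data are placeholder assumptions.

# Binary RBM trained with CD-1 on toy data (illustrative sketch, assumed setup).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 4, 0.1

W = rng.normal(0, 0.01, size=(n_visible, n_hidden))  # visible-hidden weights
b_v = np.zeros(n_visible)                             # visible biases
b_h = np.zeros(n_hidden)                              # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 parameter update for a single binary visible vector v0."""
    global W, b_v, b_h
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Negative phase: reconstruct visibles, then re-infer hidden probabilities.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Gradient approximation: <v h>_data - <v h>_model.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

# Toy training loop on random binary vectors.
for _ in range(100):
    cd1_update((rng.random(n_visible) < 0.5).astype(float))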
