Generating Images with Recurrent Adversarial Networks
1 Introduction
This work integrates GANs with sequential generation. At each time step t, the GAN takes Gaussian noise as input, generates the current part of the image, and writes it onto a canvas; the parts accumulated along the time axis form the final image. Unrolling the gradient-descent-based optimization that produces the target image yields a recurrent computation, in which a convolutional encoder network extracts a code from the current canvas. This code, together with the code of the reference image, is fed into a decoder that decides on an update to the canvas.
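The unrolled computation described above can be sketched as follows. The linear maps, tanh nonlinearities, and dimensions are placeholder assumptions standing in for the learned convolutional encoder and decoder, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
T, img_dim, code_dim = 4, 32, 8  # hypothetical sizes

# Hypothetical linear maps standing in for the conv encoder and the decoder.
W_enc = rng.normal(0, 0.1, (code_dim, img_dim))
W_dec = rng.normal(0, 0.1, (img_dim, 2 * code_dim))

x_ref = rng.normal(size=img_dim)      # reference image
canvas = np.zeros(img_dim)            # blank canvas

for t in range(T):
    code_canvas = np.tanh(W_enc @ canvas)   # encoder reads the current canvas
    code_ref = np.tanh(W_enc @ x_ref)       # encoder reads the reference image
    # decoder sees both codes and decides on an update to the canvas
    update = np.tanh(W_dec @ np.concatenate([code_canvas, code_ref]))
    canvas = canvas + update                # accumulate the update on the canvas
```

Each iteration corresponds to one unrolled step of the optimization: read the canvas, compare against the reference, and apply an additive correction.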
2 About GAN
The game is played between a generative model G and a discriminative model D: G generates samples that are hard for D to distinguish from real data, while D tries to avoid being fooled by G.
The objective function is

min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]    (1)

Early in training, the term log(1 - D(G(z))) saturates, so insufficient gradient flows through the generative model G; as the gradient magnitudes shrink, G stops learning. We therefore replace the generator objective with

max_G E_{z ~ p_z(z)}[log D(G(z))]    (2)
and train G and D separately. The parameters are updated according to the following rules, where each player takes a gradient step on its own objective:

theta_D <- theta_D + alpha * grad_{theta_D} ( E_x[log D(x)] + E_z[log(1 - D(G(z)))] )
theta_G <- theta_G + alpha * grad_{theta_G} E_z[log D(G(z))]
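As a concrete illustration of these alternating updates, here is a minimal sketch on a hypothetical 1-D toy problem: real data drawn from N(3, 1), a linear generator G(z) = a*z + b, and a logistic discriminator D(x) = sigmoid(w*x + c). None of these choices come from the text; the generator ascends the non-saturating objective (2).

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.0   # generator parameters (toy linear generator)
w, c = 0.1, 0.0   # discriminator parameters (toy logistic discriminator)
lr = 0.05

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    x = rng.normal(3.0, 1.0, size=64)   # real samples
    z = rng.normal(0.0, 1.0, size=64)   # noise from the prior
    g = a * z + b                       # fake samples G(z)

    # discriminator: ascend E[log D(x)] + E[log(1 - D(G(z)))]
    dx, dg = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * (np.mean((1 - dx) * x) + np.mean(-dg * g))
    c += lr * (np.mean(1 - dx) + np.mean(-dg))

    # generator: ascend the non-saturating objective E[log D(G(z))], eq. (2)
    dg = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - dg) * w * z)
    b += lr * np.mean((1 - dg) * w)
```

After training, the generator's offset b drifts toward the real mean of 3, illustrating how the alternating updates let the two players shape each other.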
3 Model
We propose sequential modeling with GANs. The obvious advantage of sequential modeling is that repeatedly generating outputs conditioned on previous states breaks the problem of modeling a complicated data distribution into a sequence of simpler problems.
GRAN: Generative Recurrent Adversarial Networks.
The generator G consists of a recurrent feedback loop that takes a sequence of noise samples drawn from the prior distribution p(z).
At each time step t, a sample z from the prior is passed to a function f(.) together with a hidden state h(c,t), where h(c,t) encodes the previous drawing C(t-1). C(t), the content drawn on the canvas at time t, is the output of f(.). The hidden state h(c,t) is produced by applying a function g(.) to the previous drawing C(t-1); g(.) can be seen as mimicking the inverse of f(.). In this work we use
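A minimal sketch of this recurrence, with small linear maps and tanh nonlinearities as illustrative stand-ins for the learned networks f(.) and g(.) (the sizes and the final squashing are assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
T, z_dim, h_dim, canvas_dim = 5, 8, 16, 32  # hypothetical sizes

# Hypothetical linear maps standing in for the learned networks:
# g(.) encodes the previous canvas into h(c,t); f(.) maps [z, h(c,t)]
# to the content drawn at step t.
W_g = rng.normal(0, 0.1, (h_dim, canvas_dim))
W_f = rng.normal(0, 0.1, (canvas_dim, z_dim + h_dim))

def g(C_prev):
    return np.tanh(W_g @ C_prev)          # encode previous drawing C(t-1)

def f(z, h_c):
    return np.tanh(W_f @ np.concatenate([z, h_c]))  # decode into canvas content

C = np.zeros(canvas_dim)                  # blank canvas C(0)
for t in range(T):
    z = rng.normal(size=z_dim)            # noise sample from the prior p(z)
    h_c = g(C)                            # h(c,t): encoding of C(t-1)
    C = C + f(z, h_c)                     # accumulate C(t) on the canvas

x = np.tanh(C)  # final image: accumulated canvas squashed to a valid range
```

The key structural point is that f(.) never sees the canvas directly, only through the code h(c,t) produced by g(.), which is why g(.) can be read as an approximate inverse of f(.).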
The influence of the noise vector on the generated image.
We sample a noise vector z from p(z) and use the same noise at every time step.
In our experiments, it is more time-consuming to find a set of
On the other hand, adding different noise at each step increases the model's capability of generating
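The two sampling schemes under comparison can be made concrete as follows (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, z_dim = 5, 8

# Scheme 1: one noise vector sampled from p(z), reused at every time step.
z_fixed = rng.normal(size=z_dim)
same_noise = [z_fixed for _ in range(T)]

# Scheme 2: a fresh noise vector at each time step, giving the model
# more degrees of freedom across steps.
fresh_noise = [rng.normal(size=z_dim) for _ in range(T)]
```

Under Scheme 1 every step of the recurrence is driven by the same latent code, so variation across steps comes only from the evolving canvas; under Scheme 2 each step receives an independent source of randomness.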