Deep Autoregressive Network (ICML’14)

This is one of the early papers on generative modeling. The work has been on arXiv since Oct 2013, before the reparameterization trick was popularized [2, 3]. It is interesting to look back and see the challenges of training a model with stochastic layers.

Model Architecture

Deep Autoregressive Network (DARN) is a flexible deep generative model with the following architectural features: First, its stochastic layers are stackable, which improves representational power. Second, deterministic layers can be inserted between the stochastic layers to add capacity. Third, a richer autoregressive model such as NADE or EoNADE can be used in place of the simple linear autoregressive layer, which also improves representational power.

 

[Figure: DARN architecture]

The main difference from the VAE [2] is that the hidden units are binary vectors (similar to a restricted Boltzmann machine). The VAE requires continuous hidden units unless we approximate the discrete units with Gumbel-Softmax.

DARN does not assume a particular parametric form for its prior p(h) or for the conditional distributions p(x|h) and q(h|x). The vanilla VAE assumes a standard Gaussian prior and a Gaussian posterior with a diagonal covariance matrix. This can be either a good or a bad thing for DARN.

Sampling

Since DARN is an autoregressive model, it needs to sample one unit at a time, from the top hidden layer all the way down to the observed layer.
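To make this concrete, below is a minimal sketch of ancestral sampling for one autoregressive binary layer, assuming a simple linear autoregressive parameterization; the weights W and b, the optional context bias coming from the layer above, and the layer size are all hypothetical.

```python
import torch

def sample_autoregressive_binary(W, b, context=None):
    """Sample a binary vector h one unit at a time; unit j is conditioned
    on the previously sampled units h_{<j} (plus an optional per-unit bias
    coming from the layer above)."""
    n = b.shape[0]
    h = torch.zeros(n)
    for j in range(n):
        # the logit of unit j may only depend on already-sampled units h_{<j}
        logit = b[j] + (W[j, :j] * h[:j]).sum()
        if context is not None:
            logit = logit + context[j]
        h[j] = torch.bernoulli(torch.sigmoid(logit))
    return h

# usage: a hypothetical 16-unit stochastic layer
W = 0.1 * torch.randn(16, 16)
b = torch.zeros(16)
h = sample_autoregressive_binary(W, b)
```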

Minimum Description Length

This is my favorite section of the paper. There is a strong connection between information theory and the variational lower bound. In the EM algorithm, we use Jensen's inequality to derive a smooth function that acts as a lower bound on the log likelihood; this lower bound is often called the evidence lower bound (ELBO).

The same lower bound can be derived from an information-theoretic perspective. From Shannon's source coding theorem, the description length of x given h is:

L(x|h) = -\log_2 p(x|h)

If h is a compressed version of x, then we need to transmit h along with the residual in order to reconstruct the original message x.

The main idea is simple: a less predictable event requires more bits to encode, and a shorter code is better because we transmit fewer bits over the wire. Hence, we want to minimize the expected description length of the message x:

L(x) = \sum_h q(h|x)(L(h) + L(x|h))

Here, q(h|x) is the encoding distribution over h. The description length of the representation h is defined as:

L(h) = -\log_2 p(h) + \log_2 q(h|x)

Finally, the entire description length or Helmholtz variational free energy is:

L(x) = -\sum_h q(h|x)(\log_2 p(x,h) - \log_2 q(h|x)) \qquad (1)

This formula is exactly the negative ELBO (measured in bits) when h is discrete, so minimizing the description length is equivalent to maximizing the ELBO.
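Expanding (1) and applying Jensen's inequality makes the connection explicit:

L(x) = -\sum_h q(h|x) \log_2 \frac{p(x,h)}{q(h|x)} \geq -\log_2 \sum_h q(h|x) \frac{p(x,h)}{q(h|x)} = -\log_2 p(x)

so the expected description length L(x) upper-bounds the optimal code length -\log_2 p(x), i.e., it is the negative ELBO expressed in bits.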

Learning

The variational free energy in (1) is intractable because it requires a summation over all configurations of h. DARN employs a sampling approach to learn the parameters.

The expectation is approximated by sampling h \sim q(h|x), after which we can compute a Monte Carlo estimate of the gradient of (1). However, this estimator has high variance, and the authors use a few tricks to keep the variance low (see the appendix).
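Because the latent units are discrete, the reparameterization trick does not apply directly; a score-function (REINFORCE-style) estimator is one standard choice. The sketch below illustrates that class of estimator only and is not the paper's exact procedure; `encoder`, `decoder`, and `prior_logp` are hypothetical modules (with `decoder(h)` assumed to return a torch distribution over x), and the variance-reduction tricks are omitted.

```python
import torch

def free_energy_step(x, encoder, decoder, prior_logp, optimizer):
    """One Monte Carlo gradient step on the variational free energy
    (negative ELBO, in nats here) with binary latents."""
    q = torch.distributions.Bernoulli(logits=encoder(x))
    h = q.sample()                             # discrete sample: no gradient flows through it

    log_q = q.log_prob(h).sum(dim=-1)          # log q(h|x)
    log_p = prior_logp(h) + decoder(h).log_prob(x).sum(dim=-1)   # log p(x,h)
    f = log_q - log_p                          # per-example free energy

    # score-function surrogate: its gradient equals E_q[f * grad log q + grad f]
    surrogate = (f.detach() * log_q + f).mean()

    optimizer.zero_grad()
    surrogate.backward()
    optimizer.step()
    return f.mean().item()
```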

Closing

DARN is one of the early papers that uses stochastic layers as part of its model. Optimization through these layers posed a few challenges, such as the high variance of the Monte Carlo gradient estimates.

References:

[1] Gregor, Karol, et al. “Deep autoregressive networks.” arXiv preprint arXiv:1310.8499 (2013).

[2] Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013).

[3] Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. “Stochastic backpropagation and approximate inference in deep generative models.” arXiv preprint arXiv:1401.4082 (2014).


Towards a Neural Statistician (ICLR'17)

One extension of the VAE is to add a hierarchical structure. In the classical VAE, the latent variable is drawn from a standard Gaussian prior. Here, the prior is instead learned from the data so that each dataset has its own prior distribution.

The generative process is:

  • Draw a dataset-level context \mathbf{c} \sim N(\mathbf{0}, \mathbf{I})
  • For each data point in the dataset
    • Draw a latent vector \mathbf{z} \sim P(\cdot | \mathbf{c})
    • Draw a sample \mathbf{x} \sim P(\cdot | \mathbf{z})
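
A minimal sketch of this generative process, using diagonal-Gaussian conditionals purely for illustration; `prior_net` (for p(z|c)) and `decoder` (for p(x|z)) are hypothetical networks that return a mean and a log-variance.

```python
import torch

def generate_dataset(prior_net, decoder, n_points, c_dim):
    """Ancestral sampling: one context c per dataset, one z per datapoint."""
    c = torch.randn(c_dim)                                  # c ~ N(0, I)
    xs = []
    for _ in range(n_points):
        mu_z, logvar_z = prior_net(c)                       # parameters of p(z|c)
        z = mu_z + torch.exp(0.5 * logvar_z) * torch.randn_like(mu_z)
        mu_x, logvar_x = decoder(z)                         # parameters of p(x|z)
        x = mu_x + torch.exp(0.5 * logvar_x) * torch.randn_like(mu_x)
        xs.append(x)
    return torch.stack(xs)
```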

The likelihood of the dataset is:

p(D) = \int p(c) \big[ \prod_{x \in D} \int p(x|z;\theta)p(z|c;\theta)dz \big]dc

The paper defines approximate inference networks q(z|x,c;\phi) and q(c|D;\phi) to optimize a variational lower bound. The lower bound on the log likelihood of a single dataset is:

\mathcal{L}_D = E_{q(c|D;\phi)}\big[ \sum_{x \in D} \big( E_{q(z|c,x;\phi)}[\log p(x|z;\theta)] - D_{KL}(q(z|c,x;\phi)||p(z|c;\theta)) \big) \big] - D_{KL}(q(c|D;\phi)||p(c))

The interesting contribution of this paper is the statistic network q(c|D;\phi), which approximates the posterior distribution over the context c given the dataset D. This inference network has an encoder that maps each datapoint to a vector e_i = E(x_i), a pooling layer that aggregates the e_i into a single vector (this paper uses an element-wise mean), and a final network that maps the pooled vector to the parameters of a diagonal Gaussian.
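A minimal sketch of such a statistic network; layer sizes and depths are hypothetical, and the paper's version is deeper (e.g., with extra post-pooling layers).

```python
import torch
import torch.nn as nn

class StatisticNetwork(nn.Module):
    """q(c|D): encode each datapoint, mean-pool over the dataset,
    then output the parameters of a diagonal Gaussian over c."""
    def __init__(self, x_dim, h_dim, c_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, h_dim), nn.ReLU())
        self.to_mu = nn.Linear(h_dim, c_dim)
        self.to_logvar = nn.Linear(h_dim, c_dim)

    def forward(self, dataset):          # dataset: (n_points, x_dim)
        e = self.encoder(dataset)        # e_i = E(x_i)
        pooled = e.mean(dim=0)           # element-wise mean over the dataset
        return self.to_mu(pooled), self.to_logvar(pooled)
```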

[Figure: Neural Statistician architecture]

This model works surprisingly well on many tasks such as topic modeling, transfer learning, and one-shot learning.

Reference:

https://arxiv.org/abs/1606.02185 (Poster ICLR 2017)

Improved Variational Autoencoders for Text Modeling using Dilated Convolutions (ICML’17)

One reason that a VAE with an LSTM decoder is less effective than an LSTM language model is that the LSTM decoder tends to ignore the conditioning information from the encoder. This paper uses a dilated CNN as the decoder and improves perplexity on held-out data.

Language Model

A language model factorizes the probability of a word sequence as:

p(\textbf{x}) = \prod_t p(x_t | x_1, x_2, \cdots, x_{t-1})

An LSTM language model uses this conditional distribution to predict the next word.

By adding a contextual random variable \textbf{z} [2], the model can be expressed as:

p(\textbf{x}, \textbf{z}) = p(\textbf{z}) \prod_t p(x_t | x_1, x_2, \cdots, x_{t-1}, \textbf{z})

The second model is more flexible, as it explicitly models global variation in the sequential data. Without careful training, however, the VAE-based language model often degenerates into a standard language model because the decoder learns to ignore the latent variable produced by the encoder.

Dilated CNN

The authors replace the LSTM decoder with a dilated CNN decoder in order to control the contextual capacity. When the effective context window of the convolution is large, the decoder covers a long context and resembles an LSTM; when it is small, the model behaves more like a bag of words. The size of this window controls the contextual capacity, i.e., how much past context we use to predict the current word.

[Figure: dilated CNN language model]

Stacking dilated CNN layers is crucial for good performance because it grows the context window exponentially with depth. WaveNet [3] also uses this approach.
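A minimal sketch of a stack of causal dilated convolutions over embedded tokens; all sizes are hypothetical, and the conditioning on the latent vector z (e.g., concatenating it to the inputs at every step) is omitted. Padding only on the left by (kernel_size - 1) * dilation keeps the convolution causal, so position t never sees tokens after t.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv(nn.Module):
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                 # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))       # pad only the left side => causal
        return F.relu(self.conv(x))

class DilatedDecoder(nn.Module):
    """Receptive field grows exponentially with the number of layers."""
    def __init__(self, channels, kernel_size=3, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            CausalDilatedConv(channels, kernel_size, dilation=2 ** i)
            for i in range(num_layers))

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)              # residual connection
        return x
```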

Conclusion

By pairing the VAE with a more suitable decoder, it can now perform well on the language modeling task. Since textual sequences do not contain a lot of variation, we may not notice an obvious improvement; more significant gains may appear on more complex sequential data such as speech or audio signals. Also, the experimental results show that the dilated CNN is a better decoder than the LSTM, but the improvements in perplexity and NLL over a standard LSTM language model are still incremental. We hope to see stronger VAE-based language models in the future.

References:

[1] Yang, Zichao, et al. “Improved Variational Autoencoders for Text Modeling using Dilated Convolutions.” arXiv preprint arXiv:1702.08139 (2017).

[2] Bowman, Samuel R., et al. “Generating sentences from a continuous space.” arXiv preprint arXiv:1511.06349 (2015).

[3] Oord, Aaron van den, et al. “Wavenet: A generative model for raw audio.” arXiv preprint arXiv:1609.03499 (2016).

Sequence Autoencoder

Back in 2010, the RNN was shown to be a good architecture for language modeling [3] due to its ability to remember previous context. In this post, we explore a few RNN architectures for learning document representations.

Semi-supervised Sequence Learning [2] (NIPS 2015)

This model uses two RNNs: the first as an encoder and the second as a decoder.

[Figure: LSTM sequence autoencoder]

Instead of learning to generate a different output sequence as in the seq2seq model [1], this model learns to reconstruct its input; hence, it is a sequence autoencoder. LSTMs are used in this paper. The unsupervised model is used to pretrain LSTMs for downstream tasks such as sentiment analysis, text classification, and object classification.
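A minimal sketch of a sequence autoencoder, with hypothetical vocabulary and hidden sizes; the encoder's final state initializes the decoder, which is trained with teacher forcing to reproduce the input tokens (token id 0 is assumed to be a <bos> symbol).

```python
import torch
import torch.nn as nn

class SequenceAutoencoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        _, state = self.encoder(self.embed(tokens))  # final (h, c) summarizes the input
        # teacher forcing: the decoder sees the previous token, predicts the current one
        bos = torch.zeros_like(tokens[:, :1])        # assume id 0 is <bos>
        dec_in = self.embed(torch.cat([bos, tokens[:, :-1]], dim=1))
        dec_out, _ = self.decoder(dec_in, state)
        return self.out(dec_out)                     # logits for reconstructing each token

model = SequenceAutoencoder(vocab_size=10000)
tokens = torch.randint(1, 10000, (4, 12))
logits = model(tokens)
loss = nn.functional.cross_entropy(logits.reshape(-1, 10000), tokens.reshape(-1))
```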

A Hierarchical Neural Autoencoder for Paragraphs and Documents [4] (ACL 2015)

This model introduces a hierarchical LSTM to learn document structure. The architecture has both an encoder and a decoder. The encoder processes the sequence of word tokens in each sentence, and the final LSTM output represents that sentence. A second LSTM then takes the sequence of sentence vectors and outputs a document vector.

The decoder works in the reverse direction. It takes the document vector and feeds it to an LSTM to decode sentence vectors; each sentence vector is then fed to another LSTM to decode the words in that sentence.

The authors also introduce an attention mechanism to put emphasis on particular sentences, which boosts the performance of the hierarchical model.

[Figure: hierarchical autoencoder model]

Generating Sentences from a Continuous Space [5]

This model combines the RNNLM [3] with a variational autoencoder. The architecture is again composed of an encoder and a decoder and attempts to reconstruct the given input. An additional stochastic layer converts the encoder output into the mean and variance of a Gaussian distribution, and the document representation is sampled from this distribution. The decoder takes this representation and reconstructs the input word by word through another LSTM.

[Figure: VAE-RNNLM architecture]

Training a VAE under this architecture poses a challenge due to component collapsing, where the posterior collapses onto the prior and the decoder ignores the latent code. The authors use KL annealing, incrementally increasing the weight of the KL loss over time. This modification helps the model learn a much better representation.
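A minimal sketch of KL cost annealing with a linear warm-up; the exact schedule and hyperparameters in the paper differ, so this is only illustrative.

```python
def kl_weight(step, warmup_steps=10000):
    """Linearly anneal the KL weight from 0 to 1 over warmup_steps."""
    return min(1.0, step / warmup_steps)

def vae_loss(recon_loss, kl_div, step):
    # early in training the KL term is nearly ignored, so the decoder
    # first learns to use the latent code before q(z|x) is pulled toward the prior
    return recon_loss + kl_weight(step) * kl_div
```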

Conclusion

There are a few more RNN architectures that learn document or sentence representations. The goal of learning the representation can vary: if the goal is to generate realistic text or dialogue, it is critical to retain syntactic accuracy as well as semantic information; if the goal is to obtain a global view of the given document, we may gloss over syntactic details and focus more on semantic meaning. These three models show how RNN architectures can be used for such tasks.

References:

[1] Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. “Sequence to sequence learning with neural networks.” Advances in neural information processing systems. 2014.

[2] Dai, Andrew M., and Quoc V. Le. “Semi-supervised sequence learning.” Advances in Neural Information Processing Systems. 2015.

[3] Mikolov, Tomas, et al. “Recurrent neural network based language model.” Interspeech. Vol. 2. 2010.

[4] Li, Jiwei, Minh-Thang Luong, and Dan Jurafsky. “A hierarchical neural autoencoder for paragraphs and documents.” arXiv preprint arXiv:1506.01057 (2015).

[5] Bowman, Samuel R., et al. “Generating sentences from a continuous space.” arXiv preprint arXiv:1511.06349 (2015).

 

Pixel Recurrent Neural Networks (ICML’16)

This paper proposes a novel autoregressive model that generates an image pixel by pixel. The idea makes sense because each pixel is correlated with its neighboring pixels: the same object tends to have similar colors and gradients. A similar idea is exploited in image compression formats such as JPEG, where nearby pixel values change slowly.

With this intuition, an autoregressive model assigns a probability to the observed sequence \vec x as p(\vec x) = \prod_{i=1}^{n^2} p(x_i | x_{<i}). For an image, each pixel is conditioned on the previously seen pixels.

A recurrent neural network such as an LSTM has been an effective architecture for sequence learning, so an LSTM can be applied to image generation as well. There are two main challenges with using an autoregressive model for an image: first, an image is a 2-dimensional object, and squeezing it into a 1-d vector loses spatial information; second, training pixel by pixel is time-consuming and hard to parallelize because we have to process the previous pixels before the current one. This paper attempts to solve these problems.

Modeling a color image

Each pixel x_i has 3 values: the intensities of the red, green, and blue channels. The conditional distribution is factorized as:

p(x_i|\vec x_{<i}) = p(x_{i,R}|\vec x_{<i})p(x_{i,G}|\vec x_{<i},x_{i,R})p(x_{i,B}|\vec x_{<i},x_{i,R},x_{i,G})

Each channel is conditioned on the previously generated channels of the same pixel as well as all previous pixels.

Pixels as Discrete Values

This is a neat idea. Each channel value is treated as a discrete variable and modeled with a softmax over 256 values. Using a discrete distribution provides more flexibility because we do not assume any particular shape for the distribution.

Now we will dive into the proposed architectures:

Row LSTM

When we apply an LSTM to 2-dimensional input, we compute the hidden states row by row given the current pixel values and the previous states. In this architecture, the previous states are the hidden states from the row above. A context window controls how much information is captured from that row. If we set the context window to 3 (taking the hidden states of the top-left, top, and top-right pixels):

h_{r, c} = f( h_{r-1, c-1}, h_{r-1, c}, h_{r-1, c+1}, x_{r, c})

This computation can be parallelized across all pixels within a row, so the image is processed row by row.

Diagonal BiLSTM

One limitation of the Row LSTM is its lack of full context from the rows above. By using a bi-directional LSTM, this architecture captures both forward and backward dependencies. Furthermore, each hidden state depends on the top-left and left hidden states:

h_{r,c} = f(h_{r-1,c-1}, h_{r,c-1}, x_{r,c})

However, the input image needs to be skewed, by shifting each row one pixel to the right of the row above, so that this operation can be parallelized column by column.

Pixel CNN

The sequential models are slow because they must process the previous states step by step. Using convolutional layers to capture spatial information is possible, but naively applying a convolutional kernel violates the autoregressive assumption: the current pixel must not see future pixels, since those pixels are unknown at generation time. Hence, a mask is applied to the kernel to prevent the current pixel from seeing future pixels. This trick is pretty neat.
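A minimal sketch of a masked convolution for a single channel ('mask A' style, where the current pixel itself is also hidden); the per-channel masking needed for the RGB factorization above is omitted.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel is zeroed so that output (r, c) only sees pixels
    above it or to its left in the same row (and not itself)."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        mask = torch.ones_like(self.weight)           # (out_ch, in_ch, kh, kw)
        kh, kw = mask.shape[-2:]
        mask[:, :, kh // 2, kw // 2:] = 0             # center pixel and everything to its right
        mask[:, :, kh // 2 + 1:, :] = 0               # every row below the center
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.conv2d(x, self.weight * self.mask, self.bias,
                                    self.stride, self.padding, self.dilation)

conv = MaskedConv2d(1, 16, kernel_size=5)
out = conv(torch.randn(1, 1, 28, 28))                 # -> (1, 16, 28, 28)
```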

Results

The Diagonal BiLSTM has the best performance of the three architectures in terms of negative log-likelihood (reported in nats). The autoregressive model does not rely on a low-dimensional manifold assumption and models the conditional distributions directly. However, the computation is quite expensive for large inputs.

References:

https://arxiv.org/pdf/1601.06759.pdf

Skip-Thought (NIPS’15)

This paper proposes a deep learning model to learn a representation of word sequences. The model is inspired by the skip-gram model from word embeddings: Skip-Thought assumes that a given sentence is related to its preceding and succeeding sentences. Hence, the model has two main components: an encoder and decoders. The encoder uses a GRU to encode a sentence into a fixed-length vector, the hidden state of the GRU. Two decoders then take the encoder output as input and predict each word of the previous and next sentences, respectively. The output from the encoder is treated as a bias input to both decoders.

Skip-Thought is an unsupervised learning model that learns a general sentence representation, in contrast to existing works that use supervisory signals to learn high-quality but task-specific representations.

Vocabulary expansion is necessary to encode unseen words into the fixed-length vector. The authors learn a linear transformation from a pretrained word-embedding space to the encoder's word-embedding space.
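A minimal sketch of this vocabulary expansion, assuming `w2v_shared` and `rnn_shared` hold the pretrained (e.g., word2vec) and encoder embeddings for the words present in both vocabularies; here the linear map is fit with ordinary least squares.

```python
import torch

def fit_vocab_expansion(w2v_shared, rnn_shared):
    """Fit a linear map W with least squares so that w2v_shared @ W ≈ rnn_shared."""
    return torch.linalg.lstsq(w2v_shared, rnn_shared).solution

def embed_unseen(w2v_vector, W):
    """Map an out-of-vocabulary word's pretrained vector into the encoder space."""
    return w2v_vector @ W
```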

There are a few variants of the Skip-Thought model in the experiments: the encoder can be a uni-directional or a bi-directional GRU, and concatenating the outputs of these two encoders gives the best performance.

The empirical results on various tasks and benchmarks demonstrate that Skip-Thought learns a robust sentence representation whose performance is competitive with supervised models.

Reference:

https://arxiv.org/pdf/1506.06726.pdf

Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention (SIGIR’17)

This paper proposes a content-based recommendation model that uses an attention mechanism. The task is to recommend videos or images to a user based on the user's interactions with the content, such as view counts or clicks. This is an implicit feedback problem.

The key motivation is that in video or image recommendation, users rarely like an entire video or image; they only pay attention to a subset of the content. For a video, only some frames are appealing; for an image, only some regions are interesting. The attention mechanism is designed to capture this.

The model resembles SVD++ because the overall rating has both latent factor model and neighborhood model:

\hat R_{ij} = (\textbf{u}_i + \sum_{l \in R(i)} \alpha(i, l)\textbf{p}_l)^T \textbf{v}_j \\  =\textbf{u}_i^T \textbf{v}_j + \sum_{l \in R(i)} \alpha(i, l)\textbf{p}_l^T \textbf{v}_j

The first term is a standard latent factor model: an embedding vector for the user and one for the item. The second term is a neighborhood model: the more similar item j is to the items that user i has interacted with, the higher the predicted rating.

The difference is that SVD++ uses uniform weights, whereas this model learns the weights from data. Furthermore, the neighborhood model does not compute a dot product between two item embedding vectors, but between the item vector and an auxiliary item vector, which is used to characterize users based on the set of items they interacted with.

The model architecture has three parts. The first part computes an item vector as a weighted combination of all of its components, e.g., a vector representation for a video with 100 frames. This representation \bar x_l = \sum_m \beta(l, m) x_{lm} is a personalized item representation because the weight (attention) \beta(l, m) is computed from a nonlinear function b(i, l, m) that takes the user and the item's components.

The second part computes a neighborhood item vector, \sum_{l \in R(i)} \alpha(i,l) \textbf{p}_l. The attention \alpha(i, l) is computed from a function a(i, l) that takes the user i, the item l, the auxiliary item vector \textbf{p}_l, and the component-level representation \bar x_l. This neighborhood vector is then added to the user vector \textbf{u}_i.
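A minimal sketch of the item-level attention and the final prediction; the attention MLP and all dimensions below are hypothetical, and the component-level attention is assumed to have already produced the \bar x_l vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ItemLevelAttention(nn.Module):
    """Score each interacted item l for user i, softmax-normalize, and add the
    attention-weighted neighborhood vector to the user embedding."""
    def __init__(self, dim):
        super().__init__()
        self.att = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, u_i, v_l, p_l, xbar_l):
        # u_i: (dim,); v_l, p_l, xbar_l: (n_items, dim) for the items in R(i)
        n = v_l.shape[0]
        feats = torch.cat([u_i.expand(n, -1), v_l, p_l, xbar_l], dim=-1)
        alpha = F.softmax(self.att(feats).squeeze(-1), dim=0)    # alpha(i, l)
        return u_i + (alpha.unsqueeze(-1) * p_l).sum(dim=0)      # augmented user vector

def predict_rating(user_vec_aug, v_j):
    return user_vec_aug @ v_j                                    # \hat R_{ij}
```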

The final part performs pairwise learning. This paper uses BPR (Bayesian Personalized Ranking) to minimize a ranking loss between positive and negative samples. The evaluation uses a leave-one-out protocol, keeping each user's last interaction as the test set, and measures hit rate and NDCG.
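A minimal sketch of the BPR objective over (user, positive item, negative item) scores, with regularization omitted.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    """Bayesian Personalized Ranking: push each observed (positive) item's
    predicted score above that of a sampled unobserved (negative) item."""
    return -F.logsigmoid(pos_scores - neg_scores).mean()

# usage with hypothetical predicted scores \hat R_{ij}
pos = torch.tensor([2.1, 0.3, 1.5])
neg = torch.tensor([0.4, 0.9, -0.2])
loss = bpr_loss(pos, neg)
```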

The hit rate of the proposed model is better than that of SVD++. I think the authors should include FISM as a baseline, since that model also performs well on implicit feedback tasks. Furthermore, SVD++ does not perform well on top-N tasks since it is optimized for the RMSE metric, so SVD++ is not a strong baseline.

Overall, what I like about this work is the hierarchical model that constructs a personalized item representation using attention. This idea makes a lot of sense because (1) people focus on different parts of the content, so the item vector should be personalized, and (2) the model has a lot of parameters, so it works well on large video and image datasets.

 

 

 

References:

http://www.comp.nus.edu.sg/~xiangnan/papers/sigir17-AttentiveCF.pdf