In the Papers, Vol. 1, 2021
A roundup of papers of note from the last few weeks.
- Taming Transformers for High-Resolution Image Synthesis
- A Survey on Neural Network Interpretability
- A Modern Introduction to Online Learning
- GAN-Control: Explicitly Controllable GANs
- FcaNet: Frequency Channel Attention Networks
- Learning from others’ mistakes: Avoiding dataset biases without modeling them
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Soft-DTW: a Differentiable Loss Function for Time-Series
- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
- A Survey on Visual Transformer