ML/DL research spans foundational works that shape the field and novel approaches that push its boundaries: many recent papers build extensively on these foundations, while others introduce innovations that redefine the state of the art. This list tracks the foundational and novel ML/DL papers I aim to study, marking them off as I progress.
- [ ] Deep Residual Learning for Image Recognition - ResNet
- [ ] Improving Neural Networks by Preventing Co-adaptation of Feature Detectors - Dropout
- [ ] Adam: A Method for Stochastic Optimization - Adam optimizer
- [ ] Auto-Encoding Variational Bayes - VAE
- [ ] Generative Adversarial Nets - GAN
- [ ] Generating Sequences with Recurrent Neural Networks - LSTM
- [ ] You Only Look Once: Unified, Real-Time Object Detection - YOLO
- [ ] Mask R-CNN
- [ ] Attention Is All You Need - Transformer
- [ ] U-Net: Convolutional Networks for Biomedical Image Segmentation
- [ ] ImageNet Classification with Deep Convolutional Neural Networks - AlexNet
- [ ] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- [ ] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks - CycleGAN
- [ ] Wasserstein GAN
- [ ] Semi-Supervised Classification with Graph Convolutional Networks - GCN
- [ ] Improving Language Understanding by Generative Pre-Training - GPT
- [ ] Learning Transferable Visual Models From Natural Language Supervision - CLIP
- [ ] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale - ViT
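As a taste of the first item on the list, the core idea of ResNet is the residual (skip) connection: instead of learning a mapping directly, a block learns a residual function F(x) and outputs F(x) + x. Below is a minimal toy sketch of that idea in plain NumPy, using a two-layer fully connected residual function. This is just an illustration of the skip connection, not the paper's convolutional architecture; the weight shapes and initialization here are my own choices for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: y = relu(F(x) + x), where F(x) = w2 @ relu(w1 @ x).

    The key point from the ResNet paper is the "+ x" skip connection,
    which lets the block default to (near) identity when F is small.
    """
    f = w2 @ relu(w1 @ x)  # residual function F(x)
    return relu(f + x)     # add the input back in (skip connection)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)  # same shape as x: (8,)
```

Note that with zero weights the block reduces to `relu(x)`, i.e. the input passes through almost unchanged, which is exactly the "easy to learn identity" property the paper argues makes very deep networks trainable.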
For tips on reading papers, see How to Read Papers. If there are any papers you think I should add to the list, reach out to me at s7jang[at]uwaterloo[dot]ca.