Interesting articles and research papers from the DL/ML area are appearing at an exponential rate. Here I link to some of the online articles I find interesting. My blog articles are here
2021-06
2017-11
- An introduction to Generative Adversarial Networks (with code in TensorFlow)
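  For context (this equation is not quoted from the linked tutorial), the standard GAN minimax objective from Goodfellow et al. (2014), where G is the generator and D the discriminator:

  ```latex
  \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
    + \mathbb{E}_{z \sim p_z}\!\left[\log\!\left(1 - D(G(z))\right)\right]
  ```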
- Deep Image Prior
  "Deep Image Prior": super-resolution, inpainting, and denoising without training on a dataset or using pretrained networks, with results comparable to learned methods (formulation below).
- Distilling a Neural Network Into a Soft Decision Tree
  G. Hinton describes a way of using a trained neural net to create a type of soft decision tree that generalizes better than one learned directly from the training data.
- Are GANs Created Equal? A Large-Scale Study
  This study shows that many GAN papers over the last year or so were observing sampling error rather than true improvement. Genuinely new architectures such as StackGAN, Progressive GAN, and CycleGAN are real advances, but this exhaustive empirical study shows that several of the new loss functions for basic GANs perform about the same as the original GAN loss.
- StarGAN
  StarGAN: learning a single model that translates between multiple domains without supervision (previous works translated only between two domains without supervision).
- Career: How to build a Portfolio as a Machine Learning/Data Science Engineer in industry?
- Basics: An overview of gradient descent optimization algorithms
  GD variants, challenges, GD optimization algorithms, parallelizing and distributing SGD, and SGD optimizations (a sketch of three of the update rules follows below).
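  A minimal NumPy sketch (my own toy example, not from the article) of three of the surveyed update rules: vanilla SGD, momentum, and Adam, applied to the quadratic loss f(θ) = ½‖θ‖²:

  ```python
  import numpy as np

  def grad(theta):                 # gradient of f(theta) = 0.5 * ||theta||^2
      return theta

  theta = np.array([5.0, -3.0])
  lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
  m = np.zeros_like(theta)         # Adam first moment
  s = np.zeros_like(theta)         # Adam second moment

  for t in range(1, 201):
      g = grad(theta)
      # vanilla SGD would be:   theta -= lr * g
      # momentum would be:      v = beta1 * v + g; theta -= lr * v
      m = beta1 * m + (1 - beta1) * g          # Adam (used below)
      s = beta2 * s + (1 - beta2) * g**2
      m_hat = m / (1 - beta1**t)               # bias correction
      s_hat = s / (1 - beta2**t)
      theta -= lr * m_hat / (np.sqrt(s_hat) + eps)

  print(theta)                     # converges towards the minimum at [0, 0]
  ```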
- Business questions engineers should ask when interviewing at ML/AI companies
- On the State of the Art of Evaluation in Neural Language Models
  The authors re-evaluate regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models.
- A Regularized Framework for Sparse and Structured Neural Attention (GitHub code)
  A framework for sparse and structured attention, built upon a smoothed max operator (sparsemax sketch below).
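  The framework recovers sparsemax (Martins & Astudillo, 2016) as one special case of its smoothed max operator. A minimal NumPy sketch of the sparsemax projection, which yields attention weights that are exactly zero outside a small support:

  ```python
  import numpy as np

  def sparsemax(z):
      """Project scores z onto the probability simplex (yields sparse weights)."""
      z_sorted = np.sort(z)[::-1]                    # scores in decreasing order
      cumsum = np.cumsum(z_sorted)
      ks = np.arange(1, len(z) + 1)
      support = ks[1 + ks * z_sorted > cumsum]       # prefix of valid indices
      k = support[-1]                                # support size
      tau = (cumsum[k - 1] - 1.0) / k                # threshold
      return np.maximum(z - tau, 0.0)

  print(sparsemax(np.array([2.0, 1.0, -1.0])))       # [1., 0., 0.] -> sparse
  ```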
- What's wrong with convolutional neural networks? By Geoffrey Hinton (Reddit, Quora)
- Dynamic Routing Between Capsules, Matrix capsules with EM routing, and a Keras implementation of CapsNet from the NIPS 2017 paper "Dynamic Routing Between Capsules"
  A capsule is a group of neurons whose outputs represent different properties of the same entity (the squash non-linearity is sketched below).
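  The routing paper's squash non-linearity maps a capsule's total input s to an output with the same direction but length in [0, 1), so that the length can encode the probability that the entity exists. A minimal NumPy sketch:

  ```python
  import numpy as np

  def squash(s, eps=1e-9):
      """v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||), Eq. (1) in the paper."""
      norm_sq = np.sum(s ** 2)
      return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

  print(np.linalg.norm(squash(np.array([3.0, 4.0]))))  # ~0.96, always < 1
  ```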
- Regularizing Neural Networks by Penalizing Confident Output Distributions
  Penalizing low-entropy output distributions acts as a strong regularizer in supervised learning (sketch below).
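  A minimal NumPy sketch of the confidence penalty as I read the paper: subtract β times the output entropy H(p) from the cross-entropy, so overconfident (low-entropy) predictions incur extra loss (β is a tuning knob):

  ```python
  import numpy as np

  def confidence_penalty_loss(logits, label, beta=0.1, eps=1e-12):
      p = np.exp(logits - logits.max())
      p /= p.sum()                                  # softmax
      ce = -np.log(p[label] + eps)                  # cross-entropy
      entropy = -np.sum(p * np.log(p + eps))        # H(p)
      return ce - beta * entropy                    # penalize low entropy

  print(confidence_penalty_loss(np.array([4.0, 0.5, 0.5]), label=0))
  ```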
- Don't Decay the Learning Rate, Increase the Batch Size
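  As I read the paper, each step that would have divided the learning rate by some factor instead multiplies the batch size by that factor, leaving the learning rate fixed. A hypothetical schedule sketch (the factor, step epochs, and base values below are made up):

  ```python
  # Hypothetical sketch: replace each learning-rate decay step with a
  # proportional batch-size increase (the learning rate stays fixed).
  def schedule(epoch, base_lr=0.1, base_batch=128, factor=5, steps=(60, 120, 160)):
      scale = factor ** sum(epoch >= s for s in steps)
      return base_lr, base_batch * scale            # lr fixed, batch grows

  for e in (0, 60, 120):
      print(e, schedule(e))                         # batch: 128 -> 640 -> 3200
  ```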
- Layer Normalization
  One way to reduce training time is to normalize the activities of the neurons (sketch below).
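  A minimal NumPy sketch: unlike batch normalization, each sample is normalized using the mean and variance of its own features, so the computation is independent of batch size:

  ```python
  import numpy as np

  def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
      mu = x.mean(axis=-1, keepdims=True)           # per-sample mean
      var = x.var(axis=-1, keepdims=True)           # per-sample variance
      return gamma * (x - mu) / np.sqrt(var + eps) + beta

  x = np.array([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]])
  print(layer_norm(x))   # each row is normalized independently of the batch
  ```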
2017-10
- One pixel attack for fooling deep neural networks
  The authors show that changing a single pixel can induce a DNN to classify an image incorrectly (search sketched below).
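  A toy sketch of the attack's search: differential evolution over a single candidate pixel (x, y, r, g, b) that minimizes the true class's confidence. `predict` below is a stand-in linear model, not one of the paper's networks:

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  W = rng.standard_normal((3, 10))          # stand-in "network": linear + softmax

  def predict(img):
      logits = img.mean(axis=(0, 1)) @ W    # mean color -> 10 class logits
      e = np.exp(logits - logits.max())
      return e / e.sum()

  def apply_pixel(img, cand):
      out = img.copy()
      x, y = int(cand[0]) % 32, int(cand[1]) % 32
      out[x, y] = np.clip(cand[2:], 0.0, 1.0)
      return out

  img = rng.random((32, 32, 3))             # toy "image"
  true_class = int(np.argmax(predict(img)))

  pop = rng.random((20, 5)) * np.array([32, 32, 1, 1, 1])   # candidate pixels
  fit = np.array([predict(apply_pixel(img, c))[true_class] for c in pop])

  for _ in range(30):                       # differential-evolution generations
      for i in range(len(pop)):
          a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
          trial = a + 0.5 * (b - c)         # mutation (crossover omitted)
          f = predict(apply_pixel(img, trial))[true_class]
          if f < fit[i]:                    # keep pixels that lower confidence
              pop[i], fit[i] = trial, f

  print("true-class confidence after attack:", fit.min())
  ```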
- Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Rate: [4/5]
  Recommended by A. Karpathy; a comprehensive tutorial and survey of recent advances towards enabling efficient processing of DNNs.
- Learning to Optimize with Reinforcement Learning
  Can we learn ML algorithms instead of designing them manually?
- Introducing Hybrid lda2vec Algorithm, When word2vec is enough: Part 1, Part 2
  The lda2vec algorithm: a neural network is not always required to find word vectors. (lda2vec: Tools for interpreting natural language, tensor decompositions, lda2vec-tf)
- Learning a Hierarchy
  OpenAI on learning a hierarchical policy to solve mazes. GitHub, ArXiv
- Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
- AutoML - deeper than deep?
- Deep Learning (DLSS) and Reinforcement Learning (RLSS) Summer School, Montreal 2017
  Good introductory videos and slides for DL and RL.
2017-09
- Information bottleneck: New Theory Cracks Open the Black Box of Deep Learning
  - Paper: ArXiv
  - Parameterizing the Bottleneck: Google DeepVIB
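  For reference, the information bottleneck objective (Tishby et al.): compress the input X into a representation T while retaining information about the label Y, with β trading off the two terms:

  ```latex
  \min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
  ```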
- Pointer Networks