Overview
You do not need to read the sections in order!
Part 1
Part 1 covers general-purpose concepts.
- 000 Machine Learning (ML)
- 0000 ML Terms & Concepts
- 0001 Rule-based ML
- 0002 Learning-based ML
- 0003 Learning Methods
- 001 TensorFlow (TF)
- 0010 TF Basic
- 0011 TF Advanced
- 0012 TF Master
- 002 Deep Learning (DL) Part 1
- 0020 DL Terms & Concepts
- 0021 Gradient Descent & Momentum
- 0022 Back Propagation
- 0023 Loss & Metric
- 0024 Activation Function
- 0025 Initialization
- 003 Image Processing
- 0030 Preprocessing & Augmentation
- 0031 Popular Image Dataset
- 004 Deep Learning (DL) Part 2
- 0040 Multi-layer Perceptron (MLP)
- 0041 Norm Penalty
- 0042 Dropout
- 0043 Convolutional Neural Network (CNN)
- 0044 Adaptive Learning Rate
- 0045 Batch Normalization (BN)
- 0046 Recurrent Neural Network (RNN)
- 005 Sequence Processing
- 0050 Preprocessing & Masking
- 0051 Popular Sequence Dataset
- 006 Deep Learning (DL) Part 3
- 0060 Autoencoder (AE)
- 0061 Language Model (LM)
- 0062 Word Embedding
- 0063 Residual Connection
- 0064 Sequence-to-Sequence (Seq2Seq)
- 0065 Encoder-Decoder Model
- 007 Pretrained Model
- 0070 AlexNet
- 0071 VGGNet
- 0072 Inception
- 0073 ResNet
- 008 Speech Processing
- 0080 Speech Preprocessing
- 0081 Popular Speech Dataset
Part 2
Part 2 covers topics organized by keyword, studied through the related papers.
- 110 Attention Model
- 1100 Learning to Align
- 111 Generative Adversarial Network (GAN)
- 1110 GAN & DCGAN
- 1111 Stabilizing Techniques
- 112 Transfer Learning
- 1120 Adopting Pretrained Model
- 1121 Knowledge Distillation
- 1122 FitNets
- 1123 Net2Net
- 113 Various Network
- 1130 Maxout Network
- 1131 Network in Network
- 1132 Highway Network
- 114 Various Normalization
- 1140 Layer Normalization
- 1141 Weight Normalization
- 1142 Cosine Normalization
- 115 Restricted Boltzmann Machine (RBM)
- 1150 Energy-based Model
- 1151 Contrastive Divergence
- 1152 RBM Pretraining
- 1153 Deep Belief Network (DBN)
- 116 Denoising Autoencoder (DAE)
- 117 Variational Autoencoder (VAE)
- 118 Connectionist Temporal Classification (CTC)
- 119 Efficient Softmax
- 120 Large Image Processing
- 1200 Sliding Window
- 1201 Image Pyramid
- 1202 Non-maximum Suppression (NMS)
- 1203 Overfeat
- 121 Ladder Network
- 122 CNN Visualization
- 1220 DeCAF
- 123 Image Segmentation
- 1230 R-CNN
- 1231 Fast R-CNN
- 1232 Faster R-CNN
- 1233 You Only Look Once (YOLO)
- 124 Pruning & Compression
- 1240 SqueezeNet
- 1241 Dense-Sparse-Dense (DSD)
- 125 Fixed Point Network
- 1250 Retraining Technique
- 1251 BinaryConnect
- 1252 BinaryNet
- 1253 Quantized Neural Networks
- 126 Neural Machine Translation (NMT)
- 127 Memory Model
- 128 Autoregressive Generative Model
- 129 Curriculum Learning
- 130 Distributed Training
- 131 Question & Answer
- 132 Recommendation System
- 133 Density Estimation
- 134 Domain Adaptation
- 135 Super Resolution
- 136 Beam Search & Decoding
- 137 Data Visualization
- 1370 Principal Component Analysis (PCA)
- 1371 Linear Discriminant Analysis (LDA)
- 1372 t-SNE
- 138 One-shot Learning
- 139 Energy-based Training
- 140 Style Transfer
- 1400 Neural Artistic Style
- 141 Hardware Optimization
- 142 RNN Training Techniques
- 1420 Zoneout
- 1421 Scheduled Sampling
- 1422 Teacher & Professor Forcing
- 143 Time-varying RNN
- 1430 Hierarchical Multiscale RNN
- 144 Why Deep Learning Works Well
- 145 Speech Generation
- 1450 WaveNet
- 1451 Deep Voice
- 1452 SampleRNN
- 146 Speech Recognition
- 1460 Deep Speech
- 147 Complex CNN
- 1470 All Convolutional Net
- 1471 Densely Connected CNN
- 1472 U-Net
- 148 Stochastic Behaviors
- 1480 Stochastic Pooling
- 1481 Stochastic Backpropagation
- 1482 Stochastic Depth
- 149 Adversarial Attack
- 150 Image Captioning
- 151 Sentence & Document Compression
- 152 Efficient CNN
- 1520 Separable Filter
- 153 Efficient RNN
- 1530 Quasi-RNN
- 154 Neural Turing Machine (NTM)
- 155 Training Without Gradient Descent