Instructor: CampusX
Validity Period: 1095 days
Description:
A comprehensive deep learning course spread across 100 days, covering a wide range of topics from basics to advanced concepts. Ideal for beginners and enthusiasts looking to enhance their understanding of deep learning models and techniques.
Course Content:

| Day | Video Lecture | Duration (mm:ss) | Notes | Access |
|---|---|---|---|---|
| Important | | | Readme | |
| Important | | | 100 Days of DL Complete Notes (959 pages) | |
| Day 1 | 100 Days of Deep Learning \| Course Announcement | 19:00 | 01_Course_Announcementocr (7 pages) | Preview |
| Day 2 | What is Deep Learning? Deep Learning Vs Machine Learning | 67:00 | 02_What is Deep Learning_Deep Learning Vs Machine Learningocr (22 pages) | Preview |
| Day 3 | Types of Neural Networks \| History of Deep Learning \| Applications of Deep Learning | 33:00 | 03_Types of Neural Networks History of Deep Learningocr (24 pages) | Preview |
| Day 4 | What is a Perceptron? Perceptron Vs Neuron \| Perceptron Geometric Intuition | 39:00 | 04_What_is_perceptron Perceptron vs Neuron Perceptron Geometric Intuitionocr (7 pages) | Preview |
| Day 5 | Perceptron Trick \| How to train a Perceptron \| Perceptron Part 2 \| Deep Learning Full Course | 52:00 | 05_Perceptron Trick How to train a perceptron Part 2ocr (7 pages) | Preview |
| Day 6 | Perceptron Loss Function \| Hinge Loss \| Binary Cross Entropy \| Sigmoid Function | 59:00 | 06_Perceptron Loss Function Hinge Loss Binary Cross Entropy Sigmoid Functionocr (9 pages) | |
| Day 7 | Problem with Perceptron | 8:00 | 07_Problem with perceptronocr (8 pages) | |
| Day 8 | MLP Notation | 13:00 | 08_MLP Notationocr (8 pages) | |
| Day 9 | Multi Layer Perceptron \| MLP Intuition | 38:00 | 09_Multi Layer Perceptron MLP Intuitionocr (11 pages) | |
| Day 10 | Forward Propagation \| How a neural network predicts output? | 16:00 | 10_Forward Propagation How a neural network predicts outputocr (5 pages) | |
| Day 11 | Customer Churn Prediction using ANN \| Keras and Tensorflow \| Deep Learning Classification | 35:00 | 11_Customer Churn Prediction using ANN Keras and Tensorflow Deep Learning Classificationocr (9 pages) | |
| Day 12 | Handwritten Digit Classification using ANN \| MNIST Dataset | 29:00 | 12_Handwritten Digit Classification using ANN MNIST Datasetocr (11 pages) | |
| Day 13 | Graduate Admission Prediction using ANN | 18:00 | 13_Graduate Admission Prediction using ANNocr (7 pages) | |
| Day 14 | Loss Functions in Deep Learning \| Deep Learning \| CampusX | 60:00 | 14_Loss Functions in Deep Learning Deep Learning CampusXocr (8 pages) | |
| Day 15 | Backpropagation in Deep Learning \| Part 1 \| The What? | 54:00 | 15_Backpropagation in Deep Learning Part 1 The Whatocr (7 pages) | |
| Day 16 | Backpropagation Part 2 \| The How \| Complete Deep Learning Playlist | 60:00 | 16_Backpropagation Part 2 The How Complete Deep Learning Playlistocr (8 pages) | |
| Day 17 | Backpropagation Part 3 \| The Why \| Complete Deep Learning Playlist | 40:00 | 17_Backpropagation Part 3 The Why Complete Deep Learning Playlistocr (10 pages) | |
| Day 18 | MLP Memoization \| Complete Deep Learning Playlist | 25:00 | 18_MLP Memoization Complete Deep Learning Playlistocr (6 pages) | |
| Day 19 | Gradient Descent in Neural Networks \| Batch vs Stochastic vs Mini Batch Gradient Descent | 38:00 | 19_Gradient_Descent_in_Neural_Network Batch_vs_Stochastics_vs_Mini_Batchocr (10 pages) | |
| Day 20 | Vanishing Gradient Problem in ANN \| Exploding Gradient Problem \| Code Example | 32:00 | 20_Vanishing Gradient Problem in ANN Exploding Gradient Problem Code Exampleocr (9 pages) | |
| Day 21 | How to Improve the Performance of a Neural Network | 30:00 | 21_How to Improve the Performance of a Neural Networkocr (9 pages) | |
| Day 22 | Early Stopping In Neural Networks \| End to End Deep Learning Course | 12:00 | 22_Early Stopping In Neural Networks End to End Deep Learning Courseocr (8 pages) | |
| Day 23 | Data Scaling in Neural Network \| Feature Scaling in ANN \| End to End Deep Learning Course | 17:00 | 23_Data Scaling in Neural Network Feature Scaling in ANN End to End Deep Learning Courseocr (6 pages) | |
| Day 24 | Dropout Layer in Deep Learning \| Dropouts in ANN \| End to End Deep Learning | 28:00 | 24_Dropout Layer in Deep Learning Dropouts in ANN End to End Deep Learningocr (6 pages) | |
| Day 25 | Dropout Layers in ANN \| Code Example \| Regression \| Classification | 19:00 | 25_Dropout Layers in ANN Code Example Regression Classificationocr (9 pages) | |
| Day 26 | Regularization in Deep Learning \| L2 Regularization in ANN \| L1 Regularization \| Weight Decay in ANN | 36:00 | 26_Regularization in Deep Learning L2 Regularization in ANN L1 Regularization Weight Decay in ANNocr (7 pages) | |
| Day 27 | Activation Functions in Deep Learning \| Sigmoid, Tanh and Relu Activation Function | 45:00 | 27_Activation Functions in Deep Learning Sigmoid, Tanh and Relu Activation Functionocr (14 pages) | |
| Day 28 | Relu Variants Explained \| Leaky Relu \| Parametric Relu \| Elu \| Selu \| Activation Functions Part 2 | 33:00 | 28_Relu Variants Explained Elu Selu Activation Functions Part 2ocr (28 pages) | |
| Day 29 | Weight Initialization Techniques \| What not to do? \| Deep Learning | 49:00 | 29_Weight Initialization Techniques What not to do Deep Learningocr (12 pages) | |
| Day 30 | Xavier/Glorot And He Weight Initialization in Deep Learning | 21:00 | 30_Xavier Glorat And He Weight Initialization in Deep Learningocr (8 pages) | |
| Day 31 | Batch Normalization in Deep Learning \| Batch Learning in Keras | 44:00 | 31_Batch Normalization in Deep Learning Batch Learning in Kerasocr (15 pages) | |
| Day 32 | Optimizers in Deep Learning \| Part 1 \| Complete Deep Learning Course | 23:00 | 32_Optimizers in Deep Learning Part 1 Complete Deep Learning Courseocr (7 pages) | |
| Day 33 | Exponentially Weighted Moving Average or Exponential Weighted Average \| Deep Learning | 19:00 | 33_Exponentially Weighted Moving Average or Exponential Weighted Average Deep Learningocr (15 pages) | |
| Day 34 | SGD with Momentum Explained in Detail with Animations \| Optimizers in Deep Learning Part 2 | 38:00 | 34_SGD with Momentum Explained in Detail with Animations Optimizers in Deep Learning Part 2ocr (13 pages) | |
| Day 35 | Nesterov Accelerated Gradient (NAG) Explained in Detail \| Animations \| Optimizers in Deep Learning | 28:00 | 35_Nesterov Accelerated Gradient (NAG) Explained in Detail Animations Optimizers in Deep Learningocr (8 pages) | |
| Day 36 | AdaGrad Explained in Detail with Animations \| Optimizers in Deep Learning Part 4 | 26:00 | 36_AdaGrad Explained in Detail with Animations Optimizers in Deep Learning Part 4ocr (7 pages) | |
| Day 37 | RMSProp Explained in Detail with Animations \| Optimizers in Deep Learning Part 5 | 13:00 | 37_RMSProp Explained in Detail with Animations Optimizers in Deep Learning Part 5ocr (10 pages) | |
| Day 38 | Adam Optimizer Explained in Detail with Animations \| Optimizers in Deep Learning Part 5 | 13:00 | 38_Adam Optimizer Explained in Detail with Animations Optimizers in Deep Learning Part 5ocr (8 pages) | |
| Day 39 | Keras Tuner \| Hyperparameter Tuning a Neural Network | 66:00 | 39_Keras Tuner Hyperparameter Tuning a Neural Networkocr (10 pages) | |
| Day 40 | What is Convolutional Neural Network (CNN) \| CNN Intuition | 27:00 | 40_What is Convolutional Neural Network (CNN) CNN Intutionocr (22 pages) | |
| Day 41 | CNN Vs Visual Cortex \| The Famous Cat Experiment \| History of CNN | 15:00 | 41_CNN_Vs_Visual_Cortex_The_Famous_Cat_Experiment_History_of_CNNocr (8 pages) | |
| Day 42 | CNN Part 3 \| Convolution Operation | 29:00 | 42_CNN_Part_3_Convolution_Operationocr (18 pages) | |
| Day 43 | Padding & Strides in CNN \| CNN Lecture 4 \| Deep Learning | 24:00 | 43_Padding & Strides in CNN CNN Lecture 4 Deep Learningocr (15 pages) | |
| Day 44 | Pooling Layer in CNN \| MaxPooling in Convolutional Neural Network | 28:00 | 44_Pooling Layer in CNN MaxPooling in Convolutional Neural Networkocr (20 pages) | |
| Day 45 | CNN Architecture \| LeNet-5 Architecture | 20:00 | 45_CNN Architecture LeNet -5 Architectureocr (9 pages) | |
| Day 46 | Comparing CNN Vs ANN \| CampusX | 18:00 | 46_Comparing CNNVs_ANN_CampusXocr (7 pages) | |
| Day 47 | Backpropagation in CNN \| Part 1 \| Deep Learning | 36:00 | 47_Backpropagation in CNN Part 1 Deep Learningocr (11 pages) | |
| Day 48 | CNN Backpropagation Part 2 \| How Backpropagation works on Convolution, Maxpooling and Flatten Layers | 43:00 | 48_CNN Backpropagation Part 2 How Backpropagation works on Convolution, Maxpooling and Flatten Layersocr (8 pages) | |
| Day 49 | Cat Vs Dog Image Classification Project \| Deep Learning Project \| CNN Project | 27:00 | 49_Cat Vs Dog Image Classification ProjectDeep Learning Project CNN Projectocr (6 pages) | |
| Day 50 | Data Augmentation in Deep Learning \| CNN | 27:00 | 50_Data Augmentation in Deep Learning CNNocr (5 pages) | |
| Day 51 | Pretrained models in CNN \| ImageNET Dataset \| ILSVRC \| Keras Code | 24:00 | 51_Pretrained models in CNN ImageNET Dataset ILSVRC Keras Codeocr (9 pages) | |
| Day 52 | What does a CNN see? \| Visualizing CNN Filters and Feature Maps \| CampusX | 13:00 | 52_What does a CNN see Visualizing CNN Filters and Feature Maps CampusXocr (5 pages) | |
| Day 53 | What is Transfer Learning? Transfer Learning in Keras \| Fine Tuning Vs Feature Extraction | 34:00 | 53_What is Transfer Learning_Transfer Learning in Keras Fine Tuning_Vs_Feature_Extractionocr (17 pages) | |
| Day 54 | Keras Functional Model \| How to build non-linear Neural Networks? | 26:00 | 54ocr (5 pages) | |
| Day 55 | Why RNNs are needed \| RNNs Vs ANNs \| RNN Part 1 | 30:00 | 55_Why RNNs are needed RNNs Vs ANNs RNN Part 1ocr (13 pages) | |
| Day 56 | Recurrent Neural Network \| Forward Propagation \| Architecture | 42:00 | 56_Recurrent Neural Network Forward Propagation Architectureocr (21 pages) | |
| Day 57 | RNN Sentiment Analysis \| RNN Code Example in Keras \| CampusX | 37:00 | 57_RNN Sentiment Analysis RNN Code Example in Keras CampusXocr (7 pages) | |
| Day 58 | Types of RNN \| Many to Many \| One to Many \| Many to One RNNs | 22:00 | 58_ Types of RNN Many to Many One to Many Many to One RNNsocr (7 pages) | |
| Day 59 | How Backpropagation works in RNN \| Backpropagation Through Time | 34:00 | 59_How Backpropagation works in RNN Backpropagation Through Timeocr (5 pages) | |
| Day 60 | Problems with RNN \| 100 Days of Deep Learning | 32:00 | 60_Problems with RNN 100 Days of Deep Learningocr (9 pages) | |
| Day 61 | LSTM \| Long Short Term Memory \| Part 1 \| The What? \| CampusX | 42:00 | 61_LSTM Long Short Term Memory Part 1 The WhatCampusXocr (12 pages) | |
| Day 62 | LSTM Architecture \| Part 2 \| The How? \| CampusX | 70:00 | 62_LSTM ArchitecturePart 2 The How_CampusXocr (12 pages) | |
| Day 63 | LSTM \| Part 3 \| Next Word Predictor Using LSTM \| CampusX | 60:00 | 63_LSTM Part 3 Next Word Predictor Using CampusXocr (11 pages) | |
| Day 64 | Gated Recurrent Unit \| Deep Learning \| GRU \| CampusX | 86:00 | 64_Deep RNNs Stacked RNNs Stacked LSTMs Stacked GRUs CampusXocr (12 pages) | |
| Day 65 | Deep RNNs \| Stacked RNNs \| Stacked LSTMs \| Stacked GRUs \| CampusX | 45:00 | 65_Gated Recurrent Unit Deep Learning GRU CampusXocr (10 pages) | |
| Day 66 | Bidirectional RNN \| BiLSTM \| Bidirectional LSTM \| Bidirectional GRU | 26:00 | 66_Bidirectional RNN BiLSTM Bidirectional LSTM Bidirectional GRUocr (7 pages) | |
| Day 67 | The Epic History of Large Language Models (LLMs) \| From LSTMs to ChatGPT \| CampusX | 87:00 | 67_The Epic History of Large Language Models (LLMs) From LSTMs to ChatGPT_CampusXocr (21 pages) | |
| Day 68 | Encoder Decoder \| Sequence-to-Sequence Architecture \| Deep Learning \| CampusX | 74:00 | 68_Encoder_Decoder_Sequence_to_Sequence_Architecture_Deep_Learning_CampusXocr (29 pages) | |
| Day 69 | Attention Mechanism in 1 video \| Seq2Seq Networks \| Encoder Decoder Architecture | 41:00 | 69_Attention Mechanism in 1 video_Seq2Seq Networks Encoder Decoder Architectureocr (13 pages) | |
| Day 70 | Bahdanau Attention Vs Luong Attention | 53:00 | 70_Bahdanau Attention Vs Luong Attentionocr (18 pages) | |
| Day 71 | Introduction to Transformers \| Transformers Part 1 | 60:00 | 71_Introduction to Transformers Transformers Part 1ocr (24 pages) | |
| Day 72 | What is Self Attention \| Transformers Part 2 \| CampusX | 23:00 | 72_What is Self Attention Transformers Part 2 CampusXocr (8 pages) | |
| Day 73 | Self Attention in Transformers \| Deep Learning \| Simple Explanation with Code! | 83:00 | 73_Self Attention in Transformers Deep Learning Simple Explanation with Codeocr (21 pages) | |
| Day 74 | Scaled Dot Product Attention \| Why do we scale Self Attention? | 51:00 | 74_Scaled Dot Product Attention Why do we scale Self Attentionocr (7 pages) | |
| Day 75 | Self Attention Geometric Intuition \| How to Visualize Self Attention \| CampusX | 21:00 | 75_Self Attention Geometric Intuition How to Visualize Self Attention CampusXocr (9 pages) | |
| Day 76 | Why is Self Attention called "Self"? \| Self Attention Vs Luong Attention in Depth Lecture \| CampusX | 23:00 | 76_Why is Self Attention calledSelf Attention Vs Luong Attention in Depth Lecture CampusXocr (7 pages) | |
| Day 77 | What is Multi-head Attention in Transformers \| Multi-head Attention Vs Self Attention \| Deep Learning | 38:00 | 77_What is Multi-head Attention in Transformers Multi-head Attention v Self Attention Deep Learningocr (12 pages) | |
| Day 78 | Positional Encoding in Transformers \| Deep Learning \| CampusX | 73:00 | 78_Positional Encoding in Transformers Deep Learning CampusXocr (24 pages) | |
| Day 79 | Layer Normalization in Transformers \| Layer Norm Vs Batch Norm | 47:00 | 79_Layer Normalization in Transformers Layer Norm Vs Batch Normocr (13 pages) | |
| Day 80 | Transformer Architecture \| Part 1 Encoder Architecture \| CampusX | 55:00 | 80_Transformer Architecture Part 1 Encoder Architecture CampusXocr (21 pages) | |
| Day 81 | Masked Self Attention \| Masked Multi-head Attention in Transformer \| Transformer Decoder | 61:00 | 81_Masked Self Attention Masked Multi-head Attention in Transformer Transformer Decoderocr (15 pages) | |
| Day 82 | Cross Attention in Transformers \| 100 Days Of Deep Learning \| CampusX | 34:00 | 82_Cross Attention in Transformers 100 Days Of Deep Learning CampusXocr (4 pages) | |
| Day 83 | Transformer Decoder Architecture \| Deep Learning \| CampusX | 48:00 | 83_Transformer Decoder Architecture Deep Learning CampusXocr (10 pages) | |
| Day 84 | Transformer Inference \| How Inference is done in Transformer? \| Deep Learning \| CampusX | 45:00 | 84_Transformer Inference How Inference is done in Transformer Deep Learning CampusXocr | |
After successful purchase, this item will be added to your Library.