---
license: mit
datasets:
- MuzzammilShah/people-names
language:
- en
model_name: Batch Normalization for Neural Networks
library_name: pytorch
tags:
- makemore
- batch-normalization
- neural-networks
- andrej-karpathy
---
# Batch Normalization for Neural Networks: Makemore (Part 3)
In this repository, I implemented **Batch Normalization** within a neural network framework to enhance training stability and performance, following Andrej Karpathy's approach in the **Makemore - Part 3** video.
## Overview
This implementation focuses on:
- **Normalizing activations and gradients**.
- Addressing initialization issues.
- Utilizing Kaiming initialization to prevent saturation of activation functions (see the sketch after this list).
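
The core idea can be illustrated in a few lines of PyTorch. The following is a minimal sketch of a batch-normalized hidden layer with Kaiming-scaled weights in the spirit of the Makemore setup; the hyperparameters and variable names (`n_embd`, `n_hidden`, `block_size`, and so on) are assumptions for the example, not necessarily those used in this repository.

```python
import torch

g = torch.Generator().manual_seed(42)

# Illustrative hyperparameters (assumed for this sketch).
vocab_size, block_size = 27, 3       # characters a-z plus '.', context length
n_embd, n_hidden = 10, 200
fan_in = n_embd * block_size

# Parameters. The (5/3) / sqrt(fan_in) factor is the Kaiming gain for tanh;
# it keeps pre-activations near unit variance so tanh does not saturate.
C  = torch.randn((vocab_size, n_embd), generator=g)
W1 = torch.randn((fan_in, n_hidden), generator=g) * ((5/3) / fan_in**0.5)
W2 = torch.randn((n_hidden, vocab_size), generator=g) * 0.01
b2 = torch.zeros(vocab_size)

# Batch-norm parameters: learnable gain/bias plus running statistics for inference.
# No bias is needed before batch norm; bnbias takes over that role.
bngain = torch.ones((1, n_hidden))
bnbias = torch.zeros((1, n_hidden))
bnmean_running = torch.zeros((1, n_hidden))
bnstd_running  = torch.ones((1, n_hidden))

def forward(Xb, training=True):
    emb = C[Xb].view(Xb.shape[0], -1)          # (batch, block_size * n_embd)
    hpreact = emb @ W1                         # hidden-layer pre-activations
    if training:
        bnmean = hpreact.mean(0, keepdim=True)
        bnstd  = hpreact.std(0, keepdim=True)
        with torch.no_grad():                  # exponential moving average of stats
            bnmean_running.mul_(0.999).add_(0.001 * bnmean)
            bnstd_running.mul_(0.999).add_(0.001 * bnstd)
    else:
        bnmean, bnstd = bnmean_running, bnstd_running
    hpreact = bngain * (hpreact - bnmean) / (bnstd + 1e-5) + bnbias  # batch norm
    h = torch.tanh(hpreact)                    # normalized activations stay unsaturated
    return h @ W2 + b2                         # logits over the vocabulary
```

At inference time the running mean and standard deviation are used instead of per-batch statistics, so single examples can be evaluated without a batch.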
Additionally, **visualization graphs** were created at the end to analyze the effects of these techniques on the training process and model performance.
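
The kind of diagnostic plot referred to above can be reproduced with a short matplotlib snippet. This is only a sketch of the idea, using randomly generated pre-activations rather than the repository's actual model:

```python
import torch
import matplotlib.pyplot as plt

# Saturation check: histogram of tanh activations for one batch.
# With well-scaled (Kaiming) or batch-normalized pre-activations the histogram
# is bell-shaped; with a deliberately too-large scale, as below, most mass piles
# up near -1 and +1, where tanh is flat and gradients vanish.
hpreact = torch.randn(32, 200) * 3.0   # oversized scale, to show saturation
h = torch.tanh(hpreact)

plt.hist(h.view(-1).tolist(), bins=50)
plt.title("tanh activation distribution")
plt.xlabel("activation value")
plt.ylabel("count")
plt.show()
```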
## Documentation
For a better reading experience and detailed notes, visit my **[Road to GPT Documentation Site](https://muzzammilshah.github.io/Road-to-GPT/Makemore-part3/)**.
## Acknowledgments
Notes and implementations inspired by the **Makemore - Part 3** video by [Andrej Karpathy](https://karpathy.ai/).
For more of my projects, visit my [Portfolio Site](https://muhammedshah.com).