
Batch Normalization

What is Batch Normalization?

Batch normalization is a technique for improving the performance and stability of neural networks. It is commonly used in deep learning, where it has been shown to accelerate the training of deep neural networks and improve their ability to generalize to new data.

Batch normalization works by normalizing the activations of the neurons in a network across a batch of training examples. It does this by subtracting the batch mean from each activation and dividing the result by the batch standard deviation, so that the resulting activations have a mean of zero and a standard deviation of one. In the standard formulation, these normalized activations are then scaled and shifted by learnable parameters, which lets the network recover any representation it needs. This stabilizes the distribution of activations and reduces the internal covariate shift that can occur during training.
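The normalization step can be sketched in a few lines of NumPy. This is a minimal illustration of the idea described above, not a production implementation; the names gamma, beta, and eps are illustrative, with eps added for numerical stability when the variance is small.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize activations x of shape (batch_size, num_features)
    across the batch dimension, then apply a learnable scale (gamma)
    and shift (beta)."""
    mean = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                       # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta               # learnable scale and shift

# Example: a batch of 4 examples with 3 features each
x = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0],
              [4.0, 8.0, 12.0]])
gamma = np.ones(3)   # initial scale
beta = np.zeros(3)   # initial shift

out = batch_norm(x, gamma, beta)
print(out.mean(axis=0))  # approximately 0 for each feature
print(out.std(axis=0))   # approximately 1 for each feature
```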

In addition to improving the performance and stability of neural networks, batch normalization can also reduce the need for careful weight initialization and make it easier to use higher learning rates. These benefits make it a popular choice for practitioners working with deep learning models.
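In practice, most practitioners use a framework's built-in layer rather than writing the normalization by hand. The sketch below assumes PyTorch and shows where a batch normalization layer typically sits in a small network, between a linear layer and its activation; the layer tracks batch statistics during training and running averages for inference.

```python
import torch
import torch.nn as nn

# A small fully connected network with batch normalization after the
# first linear layer; nn.BatchNorm1d handles the per-batch statistics
# and the learnable scale/shift internally.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(32, 20)   # a batch of 32 examples, 20 features each

model.train()             # use per-batch statistics during training
logits = model(x)

model.eval()              # use running-average statistics at inference time
with torch.no_grad():
    logits = model(x)
```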
