
Tutorial 97 - Deep Learning terminology explained - Batch size, iterations and epochs

Code associated with these tutorials can be downloaded from here: https://github.com/bnsreenu/python_for_image_processing_APEER

The batch size defines the number of samples that propagate through the network before the model parameters are updated.

Each batch of samples goes through one full forward and one full backward propagation.
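
To make the terminology concrete, here is a minimal sketch of a mini-batch training loop; the toy linear model, data, and learning rate are assumptions made purely for illustration and are not taken from the tutorial code:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))          # 3000 training samples, 10 features (assumed toy data)
y = X @ rng.normal(size=10)              # toy regression targets
w = np.zeros(10)                         # model parameters

batch_size = 32
epochs = 3                               # kept small for this sketch
lr = 0.01

for epoch in range(epochs):              # one epoch = one pass over all samples
    for start in range(0, len(X), batch_size):      # one iteration per batch
        xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        pred = xb @ w                                # "forward propagation" for this batch
        grad = xb.T @ (pred - yb) / len(xb)          # gradient ("backward propagation")
        w -= lr * grad                               # parameters updated once per batch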

Example:

Total training samples (images) = 3000
batch_size = 32
epochs = 500

Then…
32 samples will be taken at a time to train the network.
To go through all 3000 samples it takes 3000/32 ≈ 94 iterations, which equals 1 epoch.
This process continues 500 times (epochs).
You may be limited to small batch sizes based on your system hardware (RAM + GPU).
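
Plugging the numbers from this example into code, with a hedged illustration of where batch_size and epochs go in a Keras-style fit() call (model, X_train, and y_train are placeholder names, not taken from the tutorial repository):

import math

n_samples = 3000
batch_size = 32
epochs = 500

iterations_per_epoch = math.ceil(n_samples / batch_size)   # 3000/32 -> 94 iterations = 1 epoch
total_updates = iterations_per_epoch * epochs               # 94 * 500 = 47000 parameter updates

# In Keras, these values are passed directly to fit(); `model`, `X_train`,
# and `y_train` are placeholders here:
# model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs)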

Smaller batches mean each gradient descent step is computed from fewer samples and is therefore noisier, so the algorithm may take longer to converge.

However, it has been observed that very large batches can significantly degrade the quality of the model, as measured by its ability to generalize.

A batch size of 32 or 64 is a good starting point.

Summary:
Larger batch sizes make faster progress through the training data, but do not always converge as quickly.
Smaller batch sizes move through the data more slowly, but can converge faster.

Video from the ZEISS arivis channel. Published April 5, 2021; duration 00:14:37.