In this video, we'll see how stochastic gradient descent can improve the learning algorithm by reducing training time, reducing memory usage, and making it less likely to get stuck in local minima.
0:00 - intro
1:04 - stochastic gradient descent
3:46 - reduced convergence on local minima
4:06 - challenge
4:43 - solution
5:51 - code demo
-- Code -----------------------
https://github.com/ben519/nnets-for-y...
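-- SGD Sketch -----------------------
For a quick preview before watching: below is a minimal, hypothetical NumPy sketch of a stochastic gradient descent loop fitting a linear model. It is illustrative only and is not taken from the linked repo; the data, learning rate, and batch size are made up for the example.

import numpy as np

# Sketch: fit y = X @ w with stochastic gradient descent.
# Each step estimates the gradient from one small random batch instead of the
# full dataset, which keeps per-step time and memory low and adds noise that
# can help the optimizer avoid settling into shallow local minima.

rng = np.random.default_rng(0)

# Toy data: 1,000 samples, 3 features, known true weights plus a little noise.
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)      # initial weights
lr = 0.05            # learning rate (made-up value)
batch_size = 32      # samples per gradient estimate (made-up value)
n_epochs = 20

for epoch in range(n_epochs):
    order = rng.permutation(len(X))               # reshuffle the data each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)  # mean-squared-error gradient on the batch
        w -= lr * grad                             # stochastic update

print("estimated weights:", w)   # should land close to [2.0, -1.0, 0.5]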
-- Vids & Playlists ---------------------------------
Google Colab - • Introduction to Google Colab
NumPy - • Python NumPy For Your Grandma
Pandas - • Python Pandas For Your Grandpa
Neural Networks - • Neural Networks For Your Dog
-- Subscribe To Mailing List ---------------------------------
https://eepurl.com/hC1Pmj
-- Music ---------------------------------
Elite Estate by The Galaxy News
Seaside Samba by King Flamingo
-- Support -----------------------
https://merchonate.com/gormanalysis