Arguably the most important hyperparameter, the learning rate, roughly speaking, controls how fast your neural net "learns". So why don't we just amp this up and live life in the fast lane?
Not that simple. Remember, in deep learning, our goal is to minimize a loss function. If the learning rate is too high, each update step overshoots the minimum, so the loss starts jumping all over the place and never converges.
And if the learning rate is too small, the model will take way too long to converge, as illustrated above.
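To make this concrete, here's a minimal sketch of plain gradient descent on a toy one-parameter loss, L(w) = w². The `gradient_descent` helper and the specific learning rates are just illustrative choices, not anything from a real training setup, but they show all three regimes: divergence, painfully slow progress, and healthy convergence.

```python
def gradient_descent(lr, steps=20, w0=5.0):
    """Minimize the toy loss L(w) = w**2 with plain gradient descent."""
    w = w0
    losses = []
    for _ in range(steps):
        grad = 2 * w          # dL/dw for L(w) = w**2
        w = w - lr * grad     # the gradient descent update rule
        losses.append(w ** 2)  # track the loss after each step
    return losses

# Too high: the update w <- w - lr * 2w = (1 - 2*lr) * w flips sign and
# grows in magnitude whenever lr > 1, so the loss blows up.
print(gradient_descent(lr=1.1)[:5])

# Too low: w shrinks by a factor of only 0.98 per step, so the loss
# creeps toward zero very slowly.
print(gradient_descent(lr=0.01)[:5])

# Reasonable: w shrinks by 0.8 per step and the loss drops fast.
print(gradient_descent(lr=0.1)[:5])
```

On this toy loss you can read the behavior straight off the update rule: the multiplier on w each step is (1 - 2*lr), so anything that keeps its absolute value below 1 converges, and the closer it is to 0, the faster. Real loss surfaces are far messier, but the same tradeoff applies.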