We all agree that the learning rate can be seen as a way to control overfitting, just like dropout or batch size. But I'm writing this answer because I think the following statements from Amir's answer and comments are misleading:
> adding more layers/neurons increases the chance of over-fitting. Therefore it would be better if you decrease the learning rate over time.

> Since adding more layers/nodes to the model makes it prone to over-fitting [...] taking small steps towards the local minima is recommended
It's actually the OPPOSITE! A smaller learning rate will increase the risk of overfitting!
Quoting from *Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates* (Smith & Topin, 2018), a very interesting read btw:
> There are many forms of regularization, such as large learning rates, small batch sizes, weight decay, and dropout. Practitioners must balance the various forms of regularization for each dataset and architecture in order to obtain good performance. Reducing other forms of regularization and regularizing with very large learning rates makes training significantly more efficient.
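To make this concrete, here is a minimal sketch of the one-cycle schedule that paper uses, via PyTorch's built-in `OneCycleLR`. The toy model, random data, and all hyper-parameter values below are purely illustrative, not taken from the paper:

```python
import torch
from torch import nn

# Toy model and dummy data; everything here is illustrative.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One-cycle policy: ramp the learning rate up to a large max_lr, then
# anneal it back down -- the large-lr phase acts as a strong regularizer.
epochs, steps_per_epoch = 5, 100
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1.0, epochs=epochs, steps_per_epoch=steps_per_epoch)

for epoch in range(epochs):
    for _ in range(steps_per_epoch):
        x = torch.randn(32, 10)          # dummy batch
        y = torch.randint(0, 2, (32,))   # dummy labels
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        scheduler.step()  # one scheduler step per *batch*, not per epoch
```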
So, as Guillaume Chevalier said in his first comment, if you add regularization, decreasing the learning rate might be a good idea if you want to keep the overall amount of regularization constant. But if your goal is to increase the overall amount of regularization, or if you reduced other means of regularization (e.g., decreased dropout, increased batch size), then the learning rate should be increased.
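One common way to keep that balance when the batch size changes is the linear scaling rule: keep the ratio of learning rate to batch size, and hence the gradient noise that provides the implicit regularization, roughly constant. A minimal sketch, with purely illustrative numbers:

```python
def scaled_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """Linear scaling rule: keep lr / batch_size (and thus the gradient
    noise acting as implicit regularization) roughly constant."""
    return base_lr * new_batch_size / base_batch_size

# e.g., going from batches of 256 at lr 0.1 to batches of 1024:
print(scaled_lr(0.1, 256, 1024))  # 0.4 -- larger batches, larger learning rate
```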
Related (and also very interesting): *Don't Decay the Learning Rate, Increase the Batch Size* (Smith et al., ICLR 2018).
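The idea there is the mirror image of a step decay schedule: wherever you would have divided the learning rate by some factor, multiply the batch size by that factor instead. A minimal sketch, where the dataset, milestones, and growth factor are all illustrative rather than taken from the paper:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative dummy dataset, milestones, and growth factor.
dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))
batch_size, factor, milestones = 32, 5, {30, 60, 80}

for epoch in range(90):
    if epoch in milestones:
        batch_size *= factor  # where a decay schedule would do lr /= factor
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for x, y in loader:
        pass  # a normal training step at a *fixed* learning rate goes here
```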