
I am currently looking at different documents to understand back-propagation, mainly this document. On page 3, there is an equation involving the $\epsilon$ symbol:
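For context, the kind of equation I mean is the standard gradient-descent weight update (written in my own notation here, not copied verbatim from the document):

$$\Delta w_{ij} = -\epsilon \, \frac{\partial E}{\partial w_{ij}}$$

where $E$ is the error function and $w_{ij}$ is a single weight.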

While I understand the main part of the equation, I don't understand the $\epsilon$ factor. Searching for the meaning of $\epsilon$ in math, I found that it can denote (for example) an error value to be minimized. But why would I multiply by the error? The error is already denoted by $E$ here anyway.

Shouldn't $\epsilon$ be the learning rate in this equation? That would make the most sense to me: we want to calculate by how much to adjust the weight, and since we already compute the gradient, the only thing missing is a multiplication by the learning rate. The thing is, isn't the learning rate usually denoted by $\alpha$?
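To make my interpretation concrete, here is how I picture the role of the learning rate in a single update step. This is a toy sketch with a made-up loss function, not code from the document; the names `grad_E` and `update` are my own:

```python
def grad_E(w):
    # dE/dw for the toy loss E(w) = (w - 2)^2,
    # which has its minimum at w = 2.
    return 2.0 * (w - 2.0)

def update(w, lr):
    # The learning rate (epsilon or alpha, depending on the text)
    # scales the negative gradient to give the weight change.
    return w - lr * grad_E(w)

# Repeated updates should move the weight toward the minimum.
w = 0.0
for _ in range(50):
    w = update(w, lr=0.1)
```

Without the learning-rate factor the step size would just be the raw gradient, which is usually too large to converge, so multiplying by a small factor like $\epsilon$ is exactly the missing piece I'd expect.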