Gradient direct assignment is missing #126
When using direct assignment, where the tensor being assigned is a function of the parameters, those parameters don't seem to get a gradient, i.e.

params.W1[i] = params.W2 * x

will result in a zero gradient for W2. The following code is a minimal test case:
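The original snippet isn't shown above; a sketch consistent with the description might look like the following, assuming the standard torch-autograd calling convention and a 3x3 parameter W (the exact function body may have differed):

```lua
local torch = require 'torch'
local autograd = require 'autograd'

-- Overwrite params.W1 row by row with rows of params.W, then take the mean.
-- Every element of W flows into the loss with weight 1/9, so dLoss/dW
-- should be torch.ones(3, 3) / 9.
local function f(params)
   for i = 1, 3 do
      params.W1[i] = params.W[i]   -- direct assignment into a tensor
   end
   return torch.mean(params.W1)
end

local df = autograd(f)
local grads = df({W = torch.randn(3, 3), W1 = torch.zeros(3, 3)})
print(grads.W)  -- reported behavior: all zeros instead of torch.ones(3, 3) / 9
```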
The gradient of W here is zero, while it should be torch.ones(3, 3) / 9. When I try to run the same thing with {optimize=true}, I get an error.
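For reference, a minimal sketch of invoking the optimized mode, assuming the options-table form implied by the {optimize=true} mention; the error output itself is not reproduced in the issue text:

```lua
-- Same f as above, wrapped with code optimization enabled. Per the report,
-- this call errors out rather than returning (incorrect) zero gradients.
local dfOpt = autograd(f, {optimize = true})
local gradsOpt = dfOpt({W = torch.randn(3, 3), W1 = torch.zeros(3, 3)})
```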
Comments

Oh, without …

Making a clone of … Passing …

After some debugging, I tracked it down to a bug in the gradient of …

Awesome, thanks. Can you open a PR, and I'll merge it right quick? On Fri, May 27, 2016 at 4:34 PM, Bart van Merriënboer wrote: …

I realized that when …