Artificial Neural Networks
Calculating the Gradient Descent
We can calculate the actual value of the gradient by differentiating E with respect to each weight w_i:
\[
\begin{aligned}
\frac{\partial E}{\partial w_i}
&= \frac{\partial}{\partial w_i} \, \frac{1}{2} \sum_d (t_d - o_d)^2 \\
&= \frac{1}{2} \sum_d \frac{\partial}{\partial w_i} (t_d - o_d)^2 \\
&= \frac{1}{2} \sum_d 2\,(t_d - o_d)\, \frac{\partial}{\partial w_i} (t_d - o_d) \\
&= \sum_d (t_d - o_d)\, \frac{\partial}{\partial w_i} \left( t_d - \vec{w} \cdot \vec{x}_d \right)
\end{aligned}
\]
\[
\frac{\partial E}{\partial w_i} = \sum_d (t_d - o_d)(-x_{i,d})
\]
So the weight update (training) rule is
\[
\Delta w_i = -\eta \, \frac{\partial E}{\partial w_i} = \eta \sum_{d \in D} (t_d - o_d)\, x_{id}
\]
where \(x_{id}\) is the single input component \(x_i\) for training example \(d\).
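For concreteness, here is a minimal NumPy sketch of batch gradient-descent training for a linear unit using this update rule. The function name, learning rate, epoch count, and generated data are illustrative assumptions rather than values from the slides.

```python
import numpy as np

def train_linear_unit(X, t, eta=0.01, epochs=500):
    """Batch gradient descent for a linear unit o_d = w . x_d.

    Each epoch applies Delta w_i = eta * sum_d (t_d - o_d) * x_{i,d},
    i.e. a step in the direction of the negative gradient of E.
    """
    w = np.zeros(X.shape[1])           # start from the zero weight vector
    for _ in range(epochs):
        o = X @ w                      # outputs o_d for every example d in D
        w += eta * (X.T @ (t - o))     # summed update over all examples
    return w

# Illustrative usage: targets generated from a known weight vector.
X = np.random.randn(50, 3)
true_w = np.array([2.0, -1.0, 0.5])
t = X @ true_w
print(train_linear_unit(X, t))         # should come out close to true_w
```

With a small enough learning rate \(\eta\), repeated application of this batch update drives the learned weights toward the weights that generated the targets.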