Vector.RNN
Recurrent neural network.
A recurrent neural network takes a state and a value and returns a new state and a new value.
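The `rnn` type itself is not shown on this page; a minimal sketch of the shape the description implies (a step function from a state and an input to a new state and an output) might look like the following. The type definition and the toy `running_sum` instance are assumptions for illustration, not the library's actual definitions.

```ocaml
(* Hypothetical shape of the rnn type: a step function taking a state
   and an input, returning a new state and an output. *)
type ('s, 'a, 'b) rnn = 's -> 'a -> 's * 'b

(* A toy instance: a running sum over integers, where the state and the
   output are both the sum so far. *)
let running_sum : (int, int, int) rnn =
  fun state x ->
    let state' = state + x in
    (state', state')

let (final_state, last_output) = running_sum 10 5
```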
val gated_recurrent_unit :
weight_state:
(Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref) ->
weight:
(Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref) ->
bias:
(Algebra.Vector.t Stdlib.ref
* Algebra.Vector.t Stdlib.ref
* Algebra.Vector.t Stdlib.ref) ->
(Algebra.Vector.t, Algebra.Vector.t, Algebra.Vector.t) rnn
Gated recurrent unit (GRU) layer. The arguments are the state and then the input.
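To make the three-component parameter tuples concrete, here is a sketch of one GRU step over plain float arrays. The component ordering (update gate, reset gate, candidate) and the use of bare arrays in place of `Algebra.Linear.t` / `Algebra.Vector.t` are assumptions, not the library's API.

```ocaml
(* A minimal GRU step over float arrays. The three components of each
   labelled argument are assumed to be (update gate, reset gate,
   candidate); this mirrors the library's argument names but not its
   actual types. *)
let sigmoid x = 1.0 /. (1.0 +. exp (-. x))

(* Matrix-vector product: [m] is an array of rows. *)
let mat_vec m v =
  Array.map (fun row ->
      Array.fold_left (+.) 0.0 (Array.map2 ( *. ) row v))
    m

let add = Array.map2 (+.)
let mul = Array.map2 ( *. )

let gru_step ~weight_state:(u_z, u_r, u_h) ~weight:(w_z, w_r, w_h)
    ~bias:(b_z, b_r, b_h) h x =
  (* z: update gate, r: reset gate, h_cand: candidate hidden state. *)
  let z = Array.map sigmoid (add (add (mat_vec w_z x) (mat_vec u_z h)) b_z) in
  let r = Array.map sigmoid (add (add (mat_vec w_r x) (mat_vec u_r h)) b_r) in
  let h_cand =
    Array.map tanh (add (add (mat_vec w_h x) (mat_vec u_h (mul r h))) b_h) in
  (* Interpolate between the old state and the candidate. *)
  let h' = add (mul (Array.map (fun zi -> 1.0 -. zi) z) h) (mul z h_cand) in
  (h', h')

(* 1-dimensional example with all-zero parameters: z = 0.5, candidate = 0,
   so the new state is half the old state. *)
let zero = [| [| 0.0 |] |]
let zv = [| 0.0 |]
let (h', _) =
  gru_step ~weight_state:(zero, zero, zero) ~weight:(zero, zero, zero)
    ~bias:(zv, zv, zv) [| 2.0 |] [| 1.0 |]
```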
val long_short_term_memory :
weight_state:
(Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref) ->
weight:
(Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref
* Algebra.Linear.t Stdlib.ref) ->
bias:
(Algebra.Vector.t Stdlib.ref
* Algebra.Vector.t Stdlib.ref
* Algebra.Vector.t Stdlib.ref
* Algebra.Vector.t Stdlib.ref) ->
(Algebra.Vector.t * Algebra.Vector.t, Algebra.Vector.t, Algebra.Vector.t) rnn
Long short-term memory (LSTM) layer. In terms of dimensions, the state weights map hidden to hidden, the weights map inputs to hidden, and the biases are over the hidden units.
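The state here is a pair (per the `Algebra.Vector.t * Algebra.Vector.t` state type), conventionally the hidden state and the cell state. A sketch of one step over plain float arrays follows; the gate ordering (input, forget, output, candidate) and the bare-array types are assumptions, not the library's API.

```ocaml
(* A minimal LSTM step over float arrays. The four components of each
   labelled argument are assumed to be (input gate, forget gate,
   output gate, candidate); the state is (hidden, cell). *)
let sigmoid x = 1.0 /. (1.0 +. exp (-. x))

let mat_vec m v =
  Array.map (fun row ->
      Array.fold_left (+.) 0.0 (Array.map2 ( *. ) row v))
    m

let add = Array.map2 (+.)
let mul = Array.map2 ( *. )

let lstm_step ~weight_state:(u_i, u_f, u_o, u_g) ~weight:(w_i, w_f, w_o, w_g)
    ~bias:(b_i, b_f, b_o, b_g) (h, c) x =
  (* Each gate combines the input, the hidden state and a bias. *)
  let gate act u w b =
    Array.map act (add (add (mat_vec w x) (mat_vec u h)) b) in
  let i = gate sigmoid u_i w_i b_i in
  let f = gate sigmoid u_f w_f b_f in
  let o = gate sigmoid u_o w_o b_o in
  let g = gate tanh u_g w_g b_g in
  (* New cell state: forget part of the old cell, add the gated candidate. *)
  let c' = add (mul f c) (mul i g) in
  (* New hidden state: output gate applied to the squashed cell state. *)
  let h' = mul o (Array.map tanh c') in
  ((h', c'), h')

(* 1-dimensional example with all-zero parameters: every sigmoid gate is
   0.5 and the candidate is 0, so c' = 0.5 * c. *)
let zero = [| [| 0.0 |] |]
let zv = [| 0.0 |]
let ((h', c'), _) =
  lstm_step ~weight_state:(zero, zero, zero, zero)
    ~weight:(zero, zero, zero, zero) ~bias:(zv, zv, zv, zv)
    ([| 0.0 |], [| 2.0 |]) [| 1.0 |]
```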
val unfold :
(Algebra.Vector.t, Algebra.Vector.t, Algebra.Vector.t) rnn ->
Algebra.Vector.t t ->
Algebra.Vector.t t list ->
Algebra.Vector.t t
Unfold an RNN over a list of inputs, so that updating is done only after all the steps have run. This applies the RNN in bulk, to a whole list of input values at once, and returns the final output.
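A plausible reading of `unfold`, sketched over the hypothetical step-function `rnn` type from above: thread the state through the input list and keep the last output. Returning an option on an empty input is a choice made here for the sketch; the library's actual behaviour on empty input is not documented on this page.

```ocaml
(* Hypothetical rnn type, as in the sketch above. *)
type ('s, 'a, 'b) rnn = 's -> 'a -> 's * 'b

(* Fold the step function over the inputs, keeping the final output.
   Returns [None] on an empty input list (an assumption of this sketch). *)
let unfold (rnn : ('s, 'a, 'b) rnn) (s0 : 's) (xs : 'a list) : 'b option =
  let _, y =
    List.fold_left
      (fun (s, _) x ->
         let s', y = rnn s x in
         (s', Some y))
      (s0, None) xs
  in
  y

(* Toy instance: a running sum, unfolded over three inputs. *)
let running_sum : (int, int, int) rnn =
  fun s x -> let s' = s + x in (s', s')

let last = unfold running_sum 0 [1; 2; 3]
```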
val bulk_state :
('s, 'a, Algebra.Vector.t) rnn ->
's t ->
'a t array ->
's t
Same as above, but only the state is kept.
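Under the same hypothetical step-function `rnn` type, keeping only the state is a plain fold over the input array, discarding each step's output:

```ocaml
(* Hypothetical rnn type, as in the sketches above. *)
type ('s, 'a, 'b) rnn = 's -> 'a -> 's * 's

(* Keep only the state: fold the step function over the inputs and drop
   the per-step outputs. *)
let bulk_state (rnn : 's -> 'a -> 's * 'b) (s0 : 's) (xs : 'a array) : 's =
  Array.fold_left (fun s x -> fst (rnn s x)) s0 xs

(* Toy instance: a running sum over an array of inputs. *)
let running_sum s x = let s' = s + x in (s', s')
let s = bulk_state running_sum 0 [| 1; 2; 3 |]
```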