Module Vector.RNN

Recurrent neural network.

type ('s, 'a, 'b) rnn = 's t -> 'a t -> 's t * 'b t

A recurrent neural network takes a state and a value and returns a new state and a new value.
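A minimal sketch of the idea, assuming for illustration that the enclosing type [t] is the identity (the real [t] comes from the surrounding Vector module); the toy [running_sum] RNN below is hypothetical and not part of this interface:

```ocaml
(* For illustration only: pretend the module's ['a t] is just ['a]. *)
type 'a t = 'a

(* The rnn type: state and input in, new state and output out. *)
type ('s, 'a, 'b) rnn = 's t -> 'a t -> 's t * 'b t

(* A toy RNN whose state is a running sum and whose output is that sum. *)
let running_sum : (float, float, float) rnn =
  fun state input ->
    let state' = state +. input in
    (state', state')

let () =
  let s1, y1 = running_sum 0.0 2.0 in
  let s2, _ = running_sum s1 3.0 in
  assert (y1 = 2.0);
  assert (s2 = 5.0)
```

Threading the state explicitly like this is what lets the same step function be iterated over a sequence.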

val gated_recurrent_unit : weight_state: (Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref) -> weight: (Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref) -> bias: (Algebra.Vector.t Stdlib.ref * Algebra.Vector.t Stdlib.ref * Algebra.Vector.t Stdlib.ref) -> (Algebra.Vector.t, Algebra.Vector.t, Algebra.Vector.t) rnn

Gated recurrent unit (GRU) layer. The arguments of the resulting function are the state and then the input.
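A sketch of the standard GRU update on plain float arrays, to show how the three weight/bias triples are used. The helpers [matvec], [(+@)] and [( *@ )] are hand-rolled stand-ins for Algebra.Linear and Algebra.Vector operations, and the argument order (state, then input) follows the description above; none of these names are from the real module:

```ocaml
let sigmoid x = 1.0 /. (1.0 +. exp (-. x))

(* Matrix-vector product and element-wise sum/product on float arrays. *)
let matvec m v =
  Array.map (fun row -> Array.fold_left ( +. ) 0.0 (Array.map2 ( *. ) row v)) m
let ( +@ ) = Array.map2 ( +. )
let ( *@ ) = Array.map2 ( *. )

(* GRU step: z is the update gate, r the reset gate, h_tilde the candidate. *)
let gru ~weight_state:(uz, ur, uh) ~weight:(wz, wr, wh) ~bias:(bz, br, bh) h x =
  let z = Array.map sigmoid (matvec wz x +@ matvec uz h +@ bz) in
  let r = Array.map sigmoid (matvec wr x +@ matvec ur h +@ br) in
  let h_tilde = Array.map tanh (matvec wh x +@ matvec uh (r *@ h) +@ bh) in
  (* New state interpolates between old state and candidate. *)
  let h' = (Array.map (fun zi -> 1.0 -. zi) z *@ h) +@ (z *@ h_tilde) in
  (h', h')
```

With all weights and biases zero, both gates evaluate to 0.5 and the candidate to 0, so the state is simply halved at each step.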

val long_short_term_memory : weight_state: (Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref) -> weight: (Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref * Algebra.Linear.t Stdlib.ref) -> bias: (Algebra.Vector.t Stdlib.ref * Algebra.Vector.t Stdlib.ref * Algebra.Vector.t Stdlib.ref * Algebra.Vector.t Stdlib.ref) -> (Algebra.Vector.t * Algebra.Vector.t, Algebra.Vector.t, Algebra.Vector.t) rnn

Long short-term memory or LSTM layer. In terms of dimensions, the state weights map hidden to hidden, the input weights map inputs to hidden, and the biases are sized for the hidden layer.
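The pair in the state type corresponds to the LSTM's cell and hidden vectors. A sketch of the standard update on plain float arrays, with the same hypothetical helpers standing in for Algebra.Linear and Algebra.Vector:

```ocaml
let sigmoid x = 1.0 /. (1.0 +. exp (-. x))
let matvec m v =
  Array.map (fun row -> Array.fold_left ( +. ) 0.0 (Array.map2 ( *. ) row v)) m
let ( +@ ) = Array.map2 ( +. )
let ( *@ ) = Array.map2 ( *. )

(* LSTM step: state is (cell, hidden); f/i/o are forget/input/output gates. *)
let lstm ~weight_state:(uf, ui, uo, uc) ~weight:(wf, wi, wo, wc)
    ~bias:(bf, bi, bo, bc) (c, h) x =
  let gate act w u b = Array.map act (matvec w x +@ matvec u h +@ b) in
  let f = gate sigmoid wf uf bf in
  let i = gate sigmoid wi ui bi in
  let o = gate sigmoid wo uo bo in
  let c_tilde = gate tanh wc uc bc in
  let c' = (f *@ c) +@ (i *@ c_tilde) in
  let h' = o *@ Array.map tanh c' in
  ((c', h'), h')
```

Only the hidden vector is emitted as output; the cell vector travels solely through the state, which is why the state type is a pair while the output type is a single vector.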

Unfold an RNN so that updating is done after n steps.

val bulk : ('s, 'a, 'b) rnn -> 's t -> 'a t array -> 's t * 'b array t

Apply the RNN in bulk, to a whole array of input values at once.
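One plausible shape for such a function, sketched on plain values (the real signature wraps state and outputs in the module's [t]): thread the state through the array in index order, collecting each step's output.

```ocaml
type ('s, 'a, 'b) rnn = 's -> 'a -> 's * 'b

(* Fold the RNN step over the inputs, keeping the outputs. *)
let bulk (step : ('s, 'a, 'b) rnn) (init : 's) (inputs : 'a array) =
  let state = ref init in
  let outputs =
    Array.init (Array.length inputs) (fun i ->
        let s', y = step !state inputs.(i) in
        state := s';
        y)
  in
  (!state, outputs)
```

For example, running the earlier running-sum RNN over [| 1; 2; 3 |] from state 0 yields the final state 6 and the prefix sums [| 1; 3; 6 |].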

val bulk_state : ('s, 'a, Algebra.Vector.t) rnn -> 's t -> 'a t array -> 's t

Same as bulk, but only the final state is kept.
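When only the state matters, the outputs need not be materialized at all; a sketch of this variant, again on plain values rather than the module's [t]:

```ocaml
type ('s, 'a, 'b) rnn = 's -> 'a -> 's * 'b

(* Fold the RNN step over the inputs, discarding every output. *)
let bulk_state (step : ('s, 'a, 'b) rnn) (init : 's) (inputs : 'a array) =
  Array.fold_left (fun s x -> fst (step s x)) init inputs
```

This is an ordinary left fold, so it runs in constant extra space regardless of the number of inputs.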