Interactive module
Recurrent Network Visualizer
Train an Elman RNN to classify short sequences. Watch hidden state flow through time as the network learns to remember what matters.
How it works
- Hidden state h_t carries a summary of everything seen so far. It is updated at every time step: h_t = tanh(Wxh·x_t + Whh·h_{t-1} + b).
- Recurrent weight Whh connects h_{t-1} to h_t — the connection that lets the network remember. Bright circles = strongly activated hidden units.
- BPTT (backpropagation through time) unrolls the sequence and sends the error gradient backwards through each time step to update all three weight matrices (Wxh, Whh, and the output weights Why).
- Loss is binary cross-entropy between the network’s output ŷ and the true label. The goal is to drive it toward zero.
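The forward pass and loss described above can be sketched in a few lines of NumPy. The dimensions (scalar inputs, 8 hidden units) and the random initialization scale are assumptions for illustration, not the visualizer's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8  # assumed sizes: scalar inputs, 8 hidden units

Wxh = rng.normal(scale=0.2, size=(n_hid, n_in))   # input -> hidden
Whh = rng.normal(scale=0.2, size=(n_hid, n_hid))  # hidden -> hidden (the "memory" weights)
Why = rng.normal(scale=0.2, size=(1, n_hid))      # hidden -> output
b = np.zeros(n_hid)

def forward(xs):
    """Unroll the RNN over one sequence; return ŷ and every hidden state."""
    hs = [np.zeros(n_hid)]  # h_0 starts at zero
    for x in xs:
        # h_t = tanh(Wxh·x_t + Whh·h_{t-1} + b)
        hs.append(np.tanh(Wxh @ np.atleast_1d(x) + Whh @ hs[-1] + b))
    y_hat = 1.0 / (1.0 + np.exp(-(Why @ hs[-1])))  # sigmoid readout in (0, 1)
    return y_hat.item(), hs

def bce(y_hat, y):
    """Binary cross-entropy between prediction ŷ and label y ∈ {0, 1}."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y_hat, hs = forward([0.1, 0.3, 0.6, 0.9])  # a rising sequence
loss = bce(y_hat, 1)                        # loss against label "class 1"
```

Note that only the final hidden state reaches the output layer, so anything the network needs about earlier time steps must survive in h_t — that is what Whh is trained to do.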
Trending Up vs Down — sequences that consistently rise are class 1; sequences that fall are class 0. The network must track the direction of change over time.
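A generator for this task might look like the following sketch. The sequence length, slope, and noise level are assumptions, not the module's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(length=8):
    """Hypothetical data generator: a noisy rising line is class 1, a falling one is class 0."""
    label = int(rng.random() < 0.5)           # pick a class at random
    slope = 0.1 if label == 1 else -0.1       # direction of the trend
    xs = 0.5 + slope * np.arange(length) + rng.normal(scale=0.02, size=length)
    return xs, label

xs, label = make_sample()
```

The noise is small relative to the total rise or fall, so the direction of change — not any single value — is the signal the network must track.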
Press Train to start, or use Step to advance one sample at a time.
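One "Step" — a forward pass, BPTT, and a gradient update — can be sketched end to end as below. This is a minimal self-contained sketch under assumed sizes and learning rate; the gradients follow from the tanh update and the sigmoid + cross-entropy output (for which dL/d(logit) = ŷ − y):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8  # assumed sizes

# Parameter names mirror the equations above; shapes are assumptions.
Wxh = rng.normal(scale=0.2, size=(n_hid, n_in))
Whh = rng.normal(scale=0.2, size=(n_hid, n_hid))
Why = rng.normal(scale=0.2, size=(1, n_hid))
b = np.zeros(n_hid)
c = np.zeros(1)

def train_step(xs, y, lr=0.05):
    """One training step: forward pass, BPTT, SGD update. Returns the loss."""
    global Wxh, Whh, Why, b, c
    # Forward: unroll the sequence, keeping every hidden state for BPTT.
    hs = [np.zeros(n_hid)]
    for x in xs:
        hs.append(np.tanh(Wxh @ np.atleast_1d(x) + Whh @ hs[-1] + b))
    y_hat = 1.0 / (1.0 + np.exp(-(Why @ hs[-1] + c)))           # sigmoid output
    loss = -(y * np.log(y_hat + 1e-9) + (1 - y) * np.log(1 - y_hat + 1e-9))

    # Backward: sigmoid + BCE gives dL/d(logit) = ŷ - y.
    dz = y_hat - y
    dWhy, dc = np.outer(dz, hs[-1]), dz
    dWxh, dWhh, db = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(b)
    dh = Why.T @ dz
    for t in range(len(xs), 0, -1):
        da = dh * (1 - hs[t] ** 2)                  # back through tanh
        dWxh += np.outer(da, np.atleast_1d(xs[t - 1]))
        dWhh += np.outer(da, hs[t - 1])
        db += da
        dh = Whh.T @ da                             # pass gradient to h_{t-1}

    # SGD update on all parameters.
    Wxh -= lr * dWxh; Whh -= lr * dWhh; Why -= lr * dWhy
    b -= lr * db; c -= lr * dc
    return loss.item()

# Alternate rising and falling sequences, as the task describes.
losses = []
for i in range(400):
    label = i % 2
    slope = 0.1 if label == 1 else -0.1
    losses.append(train_step(0.5 + slope * np.arange(8), label))
```

"Train" in the module presumably just runs steps like this in a loop; the loss readout should trend downward as the recurrent weights learn to carry the direction of change forward.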
Readouts
- step — sequences seen so far
- loss — training loss; lower = better
- ŷ — network prediction: near 0 = class 0, near 1 = class 1
- test loss — loss on a held-out sample
- test acc — running average of test accuracy