Interactive module
Neural Network Visualizer
Watch a small feedforward network learn in real time — weights, activations, and the decision boundary all update live.
How it works
- Weights are the learnable numbers on each connection. Thicker edges = larger magnitude. Green = positive, red = negative.
- Activations are each neuron’s output value. Brighter nodes = higher activation.
- Loss measures how wrong the network’s prediction is. The goal of training is to drive it toward zero.
- Backpropagation traces the error backwards through the network to work out how much each weight contributed to the mistake, then nudges every weight in the direction that reduces the loss.
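The cycle the bullets describe (forward pass → loss → backpropagation → weight update) can be sketched in a few lines of NumPy. This is an illustrative toy, not the module's actual code: the layer sizes, activations, learning rate, and variable names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR in its 0/1 form (illustrative; the visualizer's own dataset may differ)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-4-1 network: weights are the learnable numbers on each connection
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # output activation (sigmoid)
    return h, p

losses = []
for _ in range(5000):
    h, p = forward(X)
    losses.append(float(np.mean((p - y) ** 2)))  # MSE loss: lower = better

    # Backpropagation: chain rule applied layer by layer, output first
    dp = 2 * (p - y) / len(X)   # dL/dp
    dz2 = dp * p * (1 - p)      # through the sigmoid
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = dz2 @ W2.T             # error traced back to the hidden layer
    dz1 = dh * (1 - h ** 2)     # through the tanh
    dW1, db1 = X.T @ dz1, dz1.sum(0)

    # Adjust every weight against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

Each pass through the loop is one "step" in the visualizer's sense: a forward pass to get activations, a loss measurement, and a backward pass that updates every weight at once.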
XOR — the classic non-linear problem. Class 0 lives in the top-left and bottom-right quadrants; class 1 in the other two. No straight line can separate them, so the hidden layer is essential.
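The claim that no straight line separates the quadrants can be checked empirically. This is a sketch: the sampling range and the quadrant labeling rule are assumptions inferred from the description above, not taken from the module's source.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample points in [-1, 1]^2 and label them by quadrant:
# class 1 where x and y share a sign (top-right, bottom-left),
# class 0 where they differ (top-left, bottom-right).
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
labels = (pts[:, 0] * pts[:, 1] > 0).astype(int)

# Brute-force search over lines: project every point onto a direction w,
# then try each projected value as a decision threshold.
best_linear_acc = 0.0
for theta in np.linspace(0.0, np.pi, 180):
    w = np.array([np.cos(theta), np.sin(theta)])
    s = pts @ w
    for b in s:
        hits = ((s > b) == labels).mean()
        # A line may call either side "class 1", so take the better flip.
        best_linear_acc = max(best_linear_acc, hits, 1.0 - hits)

# best_linear_acc stays well below 1.0: no line gets all four quadrants
# right, which is why the hidden layer is essential.
```

Intuitively, any half-plane that captures one class-1 quadrant must miss the opposite one and swallow parts of the class-0 quadrants in between, so a linear model is stuck; the hidden layer lets the network carve out two regions instead of one.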
Press Train to start, or use Step to advance one sample at a time.
step—samples seen
loss—lower = better
ŷ (prediction)—near 0 = class 0, near 1 = class 1
test loss—held-out sample
test acc—running average