forward-forward

Train without backprop using Hinton's Forward-Forward algorithm

Layer-by-Layer Training

Each layer is trained independently, in sequence:

- Layer 1: 784 → 500
- Layer 2: 500 → 500

Hyperparameters

- Learning rate: 0.030
- Epochs: 100
- Goodness threshold: 2.0

About Forward-Forward

Geoffrey Hinton's Forward-Forward algorithm (2022) trains neural networks without backpropagation. Instead of a backward pass, each layer learns independently from two forward passes: it maximizes a "goodness" score (the sum of its squared activations) for positive examples and minimizes it for negative ones.
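A minimal NumPy sketch of one local Forward-Forward update for a single layer, using the layer width, learning rate, and threshold shown above. The small weight initialization, the ReLU activation, and the softplus-style loss are assumptions about this demo's implementation; the key property is that the gradient is computed within the layer only, so nothing backpropagates across layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def ff_layer_step(W, b, x, positive, theta=2.0, lr=0.03):
    """One local Forward-Forward update for a single layer.

    No gradient flows between layers: the loss is defined on this
    layer's own goodness g = sum(h**2) and differentiated by hand.
    Positive data is pushed toward g > theta, negative toward g < theta.
    """
    z = x @ W + b                        # pre-activations (batch, d_out)
    h = np.maximum(z, 0.0)               # ReLU activations
    g = (h ** 2).sum(axis=1)             # goodness per example
    sign = 1.0 if positive else -1.0
    # loss = softplus(-sign * (g - theta)); its derivative w.r.t. g:
    dg = -sign / (1.0 + np.exp(sign * (g - theta)))
    dz = (dg[:, None] * 2.0 * h) * (z > 0)   # chain rule through g and ReLU
    W -= lr * x.T @ dz / len(x)
    b -= lr * dz.mean(axis=0)
    return g

# A few positive-pass steps on random data: goodness should rise.
W = rng.normal(0.0, 0.005, size=(784, 500))
b = np.zeros(500)
x_pos = rng.random((8, 784))
g_before = ff_layer_step(W, b, x_pos, positive=True)
for _ in range(50):
    g_after = ff_layer_step(W, b, x_pos, positive=True)
```

Calling the same function with `positive=False` on wrongly-labeled data flips the sign of the update, pushing goodness below the threshold.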

The label is encoded in the first 10 input pixels. Positive examples use the correct label; negative examples use a random wrong label.
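The label-embedding step described above can be sketched as follows. The one-hot, max-intensity encoding and the helper names are assumptions about this demo's implementation; only "label in the first 10 pixels" and "random wrong label for negatives" come from the text:

```python
import numpy as np

def embed_label(x, y, num_classes=10):
    """Write a one-hot label into the first `num_classes` pixels.
    x: (batch, 784) flattened images in [0, 1]; y: (batch,) int labels."""
    x = x.copy()
    x[:, :num_classes] = 0.0             # clear the label region
    x[np.arange(len(x)), y] = 1.0        # assumed: label pixel at max intensity
    return x

def negative_labels(y, num_classes=10, rng=None):
    """Pick a uniformly random *wrong* label for each example."""
    rng = rng or np.random.default_rng(0)
    offset = rng.integers(1, num_classes, size=len(y))  # never 0 => never correct
    return (y + offset) % num_classes

x = np.random.default_rng(1).random((4, 784))
y = np.array([0, 3, 9, 5])
x_pos = embed_label(x, y)                 # positive data: correct labels
y_neg = negative_labels(y)
x_neg = embed_label(x, y_neg)             # negative data: wrong labels
```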
