Train without backprop using Hinton's Forward-Forward algorithm
Geoffrey Hinton's Forward-Forward algorithm (2022) trains neural networks without backpropagation. Instead, each layer learns independently by maximizing a "goodness" score (in the paper, the sum of squared activations) for positive examples and minimizing it for negative ones.
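A minimal NumPy sketch of one layer's local training step, assuming goodness is the sum of squared activations, a logistic loss against a goodness threshold, and hypothetical sizes (784 inputs, 500 units) and learning rate. The gradient is computed by hand for this layer only; no error signal crosses layer boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: 784-pixel inputs, 500 hidden units.
W = rng.normal(0, 0.05, size=(784, 500))
THRESHOLD = 2.0   # goodness threshold (theta in the paper)
LR = 0.03         # assumed learning rate

def goodness(h):
    """Goodness = sum of squared activations, per example."""
    return (h ** 2).sum(axis=1)

def layer_forward(x):
    # Length-normalize the input so goodness information from the
    # previous layer is discarded, then apply ReLU.
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    return np.maximum(x @ W, 0.0)

def local_update(x_pos, x_neg):
    """One Forward-Forward step for this layer alone: raise goodness
    on positive data, lower it on negative data."""
    global W
    for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        h = np.maximum(xn @ W, 0.0)
        # Logistic loss on sign * (goodness - threshold); the gradient
        # of softplus(-sign * (g - theta)) w.r.t. g is -sign * (1 - p).
        p = 1.0 / (1.0 + np.exp(-sign * (goodness(h) - THRESHOLD)))
        grad_pre = -sign * (1.0 - p)[:, None] * 2.0 * h * (h > 0)
        W -= LR * xn.T @ grad_pre / len(x)
```

A full network would stack several such layers, each running its own `local_update` on the (normalized) output of the layer below.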
The label is one-hot encoded in the first 10 input pixels. Positive examples use the correct label; negative examples use a randomly chosen wrong label.
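The label-overlay scheme above can be sketched as follows; `overlay_label` and `make_pos_neg` are hypothetical helper names, and images are assumed to be flattened to 784 pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

def overlay_label(images, labels, num_classes=10):
    """Write a one-hot label into the first 10 pixels of each
    flattened image."""
    x = images.copy()
    x[:, :num_classes] = 0.0
    x[np.arange(len(x)), labels] = 1.0  # pixel value 1 marks the label
    return x

def make_pos_neg(images, labels, num_classes=10):
    """Positive data carry the true label; negative data carry a
    uniformly random *wrong* label."""
    pos = overlay_label(images, labels, num_classes)
    # Shift each true label by 1..9 (mod 10) to guarantee a wrong one.
    wrong = (labels + rng.integers(1, num_classes, size=len(labels))) % num_classes
    neg = overlay_label(images, wrong, num_classes)
    return pos, neg
```

At test time, a common approach is to overlay each of the 10 candidate labels in turn and pick the one that yields the highest total goodness.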