The Dead Path

6 grids · reconstruct grid #4

Grid Sequence
Backpropagation through the ReLU at layer 3 zeros the gradient for every neuron whose forward-pass pre-activation was non-positive, since ReLU's local derivative is 0 there and 1 where the pre-activation is positive. In this layer, the pre-activations in the top two rows were negative, so their gradient contributions are exactly zero. The bottom three rows pass the gradient arriving from the layer above through unchanged.
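The masking rule described above can be sketched in NumPy. The grid size and all values below are illustrative assumptions, not taken from the puzzle; only the row pattern (top two rows negative, bottom three positive) mirrors the description.

```python
import numpy as np

# Hypothetical 5x5 layer-3 pre-activations: top two rows negative,
# bottom three rows positive (values are illustrative only).
pre_activation = np.vstack([
    -np.ones((2, 5)),   # top two rows: negative pre-activations
    np.ones((3, 5)),    # bottom three rows: positive pre-activations
])

# Gradient arriving from the layer above during the backward pass
# (a uniform placeholder value, for illustration).
upstream_grad = np.full((5, 5), 0.5)

# ReLU backward pass: local derivative is 1 where the forward
# pre-activation was positive, 0 where it was non-positive.
grad = upstream_grad * (pre_activation > 0)

print(grad[:2])  # top two rows: gradients zeroed
print(grad[2:])  # bottom three rows: upstream gradient unchanged
```

The multiplication by the boolean mask `(pre_activation > 0)` is exactly the "dead path": any cell whose forward pre-activation was non-positive contributes nothing to gradients further upstream.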