This interactive visualization demonstrates how a shallow neural network with ReLU activations works for regression problems.
A shallow neural network extends a simple perceptron by adding a hidden layer of neurons between the input and the output.
Each neuron in the hidden layer computes: h_i = ReLU(w_i x + b_i)
Where ReLU(z) = max(0, z) is the activation function that introduces non-linearity.
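The sketch below shows this forward pass in NumPy. It assumes the standard construction for a shallow regression network: the output is a weighted sum of the hidden activations plus a bias (the variable names w, b, v, c are illustrative, not taken from the visualization's code).

```python
import numpy as np

def forward(x, w, b, v, c):
    """Shallow ReLU network: h_i = ReLU(w_i * x + b_i), output y = sum_i v_i * h_i + c."""
    h = np.maximum(0.0, w * x + b)   # hidden activations (ReLU clips negatives to zero)
    return v @ h + c                 # weighted sum of hidden units plus output bias

# Example with 3 hidden neurons and a scalar input (made-up parameter values)
w = np.array([1.0, -0.5, 2.0])   # input-to-hidden weights
b = np.array([0.0, 0.3, -1.0])   # hidden biases
v = np.array([0.7, -1.2, 0.5])   # hidden-to-output weights
c = 0.1                          # output bias
print(forward(1.5, w, b, v, c))
```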
The size of each hidden neuron visually indicates its contribution to the output. Green connections represent positive weights, while red connections represent negative weights.
While a single perceptron can only fit linear functions, a neural network with at least one hidden layer can approximate any continuous function on a bounded input range to arbitrary accuracy, given enough hidden neurons (the universal approximation theorem). This makes such networks powerful tools for regression tasks.
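As a rough illustration of that capability, here is a minimal NumPy sketch that trains a shallow ReLU network on a toy 1D regression problem by gradient descent on the MSE. The data, hidden-layer size, and learning rate are hypothetical choices, not values from the visualization; the point is only that a piecewise-linear ReLU model can bend itself to a non-linear curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a noisy sine curve (hypothetical example data)
x = np.linspace(-3, 3, 100)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Shallow network with H hidden ReLU units
H = 20
w = rng.standard_normal(H)        # input-to-hidden weights
b = rng.standard_normal(H)        # hidden biases
v = rng.standard_normal(H) * 0.1  # hidden-to-output weights
c = 0.0                           # output bias
lr = 0.01

for step in range(5000):
    # Forward pass: hidden activations and predictions for all points
    z = np.outer(x, w) + b        # (100, H) pre-activations
    h = np.maximum(0.0, z)        # ReLU
    pred = h @ v + c              # (100,) predictions

    # Mean squared error and its gradient w.r.t. the predictions
    err = pred - y
    mse = np.mean(err ** 2)
    dpred = 2 * err / len(x)

    # Backpropagation (gradients derived by hand for this small model)
    dv = h.T @ dpred
    dc = dpred.sum()
    dh = np.outer(dpred, v)
    dz = dh * (z > 0)             # ReLU derivative is 1 where z > 0, else 0
    dw = x @ dz
    db = dz.sum(axis=0)

    # Gradient descent update
    w -= lr * dw; b -= lr * db; v -= lr * dv; c -= lr * dc

print(f"final MSE: {mse:.4f}")
```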
The visualization shows the Mean Squared Error (MSE), which measures how well the model fits the data points: MSE = (1/n) * Σ (y_i − ŷ_i)², the average squared difference between predictions and targets. Lower MSE indicates a better fit.
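A tiny worked example with made-up numbers shows how MSE averages the squared residuals:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])   # observed targets (hypothetical)
y_pred = np.array([1.1, 1.8, 3.4])   # model predictions (hypothetical)
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # (0.01 + 0.04 + 0.16) / 3 = 0.07
```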