Neural Network Regression Visualization

This interactive visualization demonstrates how a shallow neural network with ReLU activations performs regression.

About Shallow Neural Networks

A shallow neural network extends beyond a simple perceptron by adding a hidden layer of neurons. This visualization demonstrates:

  • One input feature (x)
  • A hidden layer with an adjustable number of neurons, each using ReLU activation
  • One output neuron (y) that computes a weighted sum of the hidden-layer activations
  • How these networks can model non-linear relationships

Each neuron i in the hidden layer computes hᵢ = ReLU(wᵢx + bᵢ), where ReLU(z) = max(0, z) is the activation function that introduces non-linearity.
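For concreteness, here is a minimal NumPy sketch of this forward pass. The parameter names (w, b for the hidden layer; v, c for the output neuron) and the presence of an output bias are illustrative assumptions, not the visualization's actual internals.

    import numpy as np

    def forward(x, w, b, v, c):
        # Hidden layer: h_i = ReLU(w_i * x + b_i), for each input value in x
        h = np.maximum(0.0, np.outer(np.atleast_1d(x), w) + b)
        # Output neuron: weighted sum of the hidden activations, plus a bias
        return h @ v + c

    # Example: three hidden neurons with hand-picked parameters
    w = np.array([1.0, -1.0, 2.0])   # hidden weights w_i
    b = np.array([0.0, 0.5, -1.0])   # hidden biases b_i
    v = np.array([0.5, 1.0, -0.25])  # output weights, one per hidden neuron
    c = 0.1                          # output bias (assumed for this sketch)
    print(forward([-1.0, 0.0, 1.0], w, b, v, c))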

Network Architecture

The size of each hidden neuron visually indicates its contribution to the output. Green connections represent positive weights, while red connections represent negative weights.
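As a rough sketch of how such an encoding might be computed (the function names and scaling constants here are hypothetical, not taken from the visualization's source):

    def connection_style(weight):
        # Sign picks the color described above; magnitude sets the line width.
        color = "green" if weight >= 0 else "red"
        width = 1.0 + 4.0 * min(abs(weight), 1.0)  # clamp so large weights stay readable
        return color, width

    def neuron_radius(output_weight):
        # A neuron's size grows with how strongly it contributes to the output.
        return 10.0 + 20.0 * abs(output_weight)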

Regression Capabilities

While a single perceptron can only fit linear functions, a network with even one hidden layer can approximate any continuous function on a bounded interval to arbitrary accuracy, given enough hidden neurons (the universal approximation theorem). This makes such networks powerful tools for regression tasks.
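To make this concrete, here is a minimal NumPy sketch under stated assumptions: the target is sin(x), every hidden unit's input weight is fixed at 1 (so neuron i computes ReLU(x − bᵢ)), and only the output weights are fit, by ordinary least squares rather than by whatever training the visualization itself uses.

    import numpy as np

    # Target: a smooth non-linear function on [-3, 3]
    x = np.linspace(-3, 3, 200)
    y = np.sin(x)

    # Hidden layer: ReLU units with biases ("knots") spread over the input range.
    n_hidden = 20
    knots = np.linspace(-3, 3, n_hidden)
    H = np.maximum(0.0, x[:, None] - knots[None, :])  # h_i = ReLU(x - b_i)
    H = np.hstack([H, np.ones((len(x), 1))])          # column for the output bias

    # Fit the output weights by least squares and measure the fit.
    v, *_ = np.linalg.lstsq(H, y, rcond=None)
    y_hat = H @ v
    print("MSE:", np.mean((y - y_hat) ** 2))

Increasing n_hidden adds more kinks to the resulting piecewise-linear curve and drives the MSE down, which is exactly the effect of raising the adjustable neuron count in the visualization.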

Mean Squared Error

The visualization shows the Mean Squared Error (MSE), the average squared difference between each data point's observed value and the model's prediction: MSE = (1/N) Σᵢ (yᵢ − ŷᵢ)². Lower MSE indicates a better fit.
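In code, the computation is a one-liner (the helper name mse is just illustrative):

    import numpy as np

    def mse(y_true, y_pred):
        # Average of the squared residuals between observations and predictions
        return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

    print(mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # 0.02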