Perceptron Visualization Demo

This interactive visualization demonstrates how a perceptron with two inputs and one output performs binary classification.

About Perceptrons

A perceptron is the simplest form of a neural network. It consists of:

  • Input features (in this case, x₁ and x₂)
  • Weights (w₁ and w₂) that determine the importance of each input
  • A bias term (b) that shifts the decision boundary
  • An activation function (step function in this case)

The perceptron computes a weighted sum of its inputs plus the bias: w₁x₁ + w₂x₂ + b

Then it applies the activation function to produce an output (see the Python sketch after this list):

  • Output = 1 (red) if w₁x₁ + w₂x₂ + b ≥ 0
  • Output = 0 (blue) if w₁x₁ + w₂x₂ + b < 0
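
A minimal sketch of this computation in Python, using illustrative weights and bias rather than the demo's actual values:

```python
def perceptron(x1, x2, w1, w2, b):
    """Step-activation perceptron: weighted sum of the inputs plus the bias."""
    weighted_sum = w1 * x1 + w2 * x2 + b
    return 1 if weighted_sum >= 0 else 0  # 1 -> red, 0 -> blue

# Illustrative parameters (not taken from the demo)
w1, w2, b = 1.0, -2.0, 0.5

print(perceptron(2.0, 1.0, w1, w2, b))  # 1: 2.0 - 2.0 + 0.5 = 0.5 >= 0
print(perceptron(0.0, 1.0, w1, w2, b))  # 0: 0.0 - 2.0 + 0.5 = -1.5 < 0
```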

Decision Boundary

The green line represents the decision boundary, where w₁x₁ + w₂x₂ + b = 0.

This boundary is a straight line in 2D space; solving for x₂ (assuming w₂ ≠ 0) gives:

x₂ = (-w₁x₁ - b) / w₂
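
Since the boundary is a straight line, evaluating this equation at two values of x₁ is enough to draw it. A small sketch, reusing the illustrative weights from the example above:

```python
def boundary_x2(x1, w1, w2, b):
    """x2 coordinate of the decision boundary at a given x1 (requires w2 != 0)."""
    return (-w1 * x1 - b) / w2

w1, w2, b = 1.0, -2.0, 0.5

# Two points determine the straight green line
for x1 in (0.0, 4.0):
    print(x1, boundary_x2(x1, w1, w2, b))  # (0.0, 0.25) and (4.0, 2.25)
```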

Limitations

A single perceptron can only learn linearly separable functions; it cannot represent XOR, for example. This is why multiple perceptrons are combined into multi-layer neural networks to solve more complex problems.
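
As a concrete illustration of this limitation, the sketch below trains a single perceptron with the classic perceptron learning rule (an assumption made for illustration; the demo itself may not include training) on two truth tables: it converges on AND, which is linearly separable, but never reaches zero errors on XOR.

```python
def train_perceptron(samples, epochs=20, lr=1):
    """Classic perceptron learning rule; returns weights, bias, and the last epoch's error count."""
    w1 = w2 = b = 0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
            err = target - out  # -1, 0, or +1
            if err != 0:
                errors += 1
                w1 += lr * err * x1
                w2 += lr * err * x2
                b += lr * err
        if errors == 0:  # every sample classified correctly
            break
    return w1, w2, b, errors

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND))  # converges: final error count is 0
print(train_perceptron(XOR))  # never converges: final error count stays above 0
```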