This interactive visualization demonstrates how a perceptron with 2 inputs and 1 output works for binary classification.
A perceptron is the simplest form of a neural network. It consists of:
- two inputs, x₁ and x₂
- a weight for each input, w₁ and w₂
- a bias term, b
- an activation function that converts the weighted sum into the output
The perceptron computes a weighted sum of its inputs plus the bias: w₁x₁ + w₂x₂ + b
Then it applies the activation function to produce an output. For binary classification this is a step function: the output is 1 if the weighted sum is at least 0, and 0 otherwise.
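The computation above can be sketched in a few lines of Python. The weight and bias values here are illustrative, not taken from the visualization:

```python
def perceptron(x1, x2, w1=1.0, w2=1.0, b=-1.5):
    # Weighted sum of inputs plus bias
    weighted_sum = w1 * x1 + w2 * x2 + b
    # Step activation: 1 if the sum is at least 0, else 0
    return 1 if weighted_sum >= 0 else 0

# With these example weights the perceptron acts like a logical AND:
print(perceptron(0, 0))  # 0
print(perceptron(1, 1))  # 1
```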
The green line represents the decision boundary, where w₁x₁ + w₂x₂ + b = 0.
This boundary is a straight line in 2D space, defined by:
x₂ = (-w₁x₁ - b) / w₂ (assuming w₂ ≠ 0)
Perceptrons can only learn linearly separable functions, that is, problems where a single straight line can separate the two classes. This is why multiple perceptrons are combined into multi-layer neural networks to solve more complex problems.
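The classic example is XOR, which no single perceptron can represent because its classes are not linearly separable. A sketch of how combining perceptrons gets around this: one layer computes OR and NAND, and a second perceptron ANDs them together. The weights below are hand-picked for illustration:

```python
def p(x1, x2, w1, w2, b):
    # Single perceptron with a step activation
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

def xor(x1, x2):
    h1 = p(x1, x2, 1, 1, -0.5)    # OR
    h2 = p(x1, x2, -1, -1, 1.5)   # NAND
    return p(h1, h2, 1, 1, -1.5)  # AND of the two hidden outputs

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # matches XOR: 0, 1, 1, 0
```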