Perceptron
The perceptron is the basic building block of neural networks. It computes a weighted sum of its inputs plus a bias, then applies an activation function. Multiple perceptrons form a layer; stacking layers creates deep networks.
Key Properties
- Weighted sum: z = w1x1 + w2x2 + … + wnxn + b
- Activation: output = f(z)
- A single perceptron can only learn linearly separable functions (it famously cannot learn XOR)
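The properties above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: the function names and the AND-gate example are assumptions, and training uses the classic perceptron learning rule with a step activation.

```python
def step(z):
    # Step activation: fires (1) when the weighted sum crosses the threshold
    return 1 if z >= 0 else 0

def predict(weights, bias, x):
    # Weighted sum z = w1*x1 + ... + wn*xn + b, then activation
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return step(z)

def train(samples, labels, lr=0.1, epochs=20):
    # Perceptron learning rule: nudge weights by lr * (target - prediction) * input.
    # Converges for linearly separable data (perceptron convergence theorem).
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# AND is linearly separable, so a single perceptron can learn it
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # → [0, 0, 0, 1]
```

Swapping the labels to XOR (`[0, 1, 1, 0]`) makes training fail no matter how many epochs run, since no single line separates the two classes.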
Related
- Activation Functions (applied to perceptron output)
- Linear Models (perceptron without activation is linear)