Category Archives: Machine Learning

Notes on Perceptron. Part 7: Practical Convergence vs. Theoretical for Separable Data

This post illustrates that, on separable data, PLA in practice converges much faster than the theoretical estimate suggests.
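
For reference, the theoretical estimate in question is usually stated as a cap on the number of updates T. A standard form of the bound (assuming R = max_i ‖x_i‖ is the largest sample norm and γ is the separation margin; the post's own symbols may differ):

```latex
% Worst-case bound on the number of PLA updates for separable data,
% with R the largest sample norm and \gamma the separation margin:
T \le \frac{R^2}{\gamma^2}
```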

Notes on Perceptron. Part 6: Experimenting with Learning Rate

Here we compare Pocket PLA, Adaline, and several Adaline variations with an adjustable learning rate on artificial training data.

Notes on Perceptron. Part 5: Adaline

The adaptive linear neuron (or element), aka Adaline, can be viewed as a variation of PLA whose update is scaled by the magnitude of the mismatch. While not universally the best choice of classifier learning algorithm, Adaline is an improvement over the original PLA.
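
A minimal sketch of the idea (hypothetical names, not the post's code; labels assumed in {-1, +1}): instead of a fixed ±1 step, the update is proportional to the residual y − w·x.

```python
import numpy as np

def adaline_epoch(w, X, y, eta=0.01):
    """One pass of the Adaline (delta) rule. X is assumed to carry
    a leading bias column of ones; labels y are in {-1, +1}."""
    for x_i, y_i in zip(X, y):
        residual = y_i - np.dot(w, x_i)  # signed mismatch magnitude
        w = w + eta * residual * x_i     # step proportional to the error
    return w
```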

Notes on Perceptron. Part 4: Convergence Theorem

This post outlines the proof of the PLA convergence theorem. While the algorithm's convergence is not obvious, the proof hinges on just two key inequalities.
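
In one standard formulation (assuming a unit-norm separator w* with margin γ, R = max_i ‖x_i‖, and w_0 = 0; the post's own notation may differ), the two inequalities after T updates are:

```latex
% The inner product with the separator grows at least linearly in T,
% while the squared norm of the weights grows at most linearly in T:
w_T \cdot w^* \ge T\gamma,
\qquad
\lVert w_T \rVert^2 \le T R^2.
% By Cauchy-Schwarz, T\gamma \le \lVert w_T \rVert \le \sqrt{T}\,R,
% which yields the bound T \le R^2 / \gamma^2.
```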

Notes on Perceptron. Part 3: The Pocket Algorithm and Non-Separable Data

Here we look at the Pocket algorithm, which addresses an important practical issue: plain PLA is unstable and does not converge on a non-separable training dataset.
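
A minimal sketch of the idea (hypothetical names; labels in {-1, +1}, bias column assumed in X): run ordinary PLA updates, but keep in your "pocket" the best weights seen so far, so the output is stable even without convergence.

```python
import numpy as np

def pocket_pla(X, y, max_iter=1000):
    """Pocket PLA sketch: standard PLA updates, remembering the
    weights with the fewest misclassifications seen so far."""
    w = np.zeros(X.shape[1])
    best_w, best_errors = w.copy(), np.inf
    for _ in range(max_iter):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size < best_errors:      # better than the pocket? keep it
            best_w, best_errors = w.copy(), wrong.size
        if wrong.size == 0:               # separable case: done early
            break
        i = wrong[0]                      # pick a misclassified point
        w = w + y[i] * X[i]               # standard PLA update
    return best_w
```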

Notes on Perceptron. Part 2: PLA Visualization

A couple of 2D visualizations show the behavior of the plain-vanilla Perceptron Learning Algorithm (PLA).
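
As one possible sketch (hypothetical names, not the post's code): with weights w = (w0, w1, w2), the decision line w0 + w1·x + w2·y = 0 can be drawn over a scatter of the two classes.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_boundary(X, y, w):
    """Scatter the two classes and draw the line w0 + w1*x + w2*y = 0.
    Assumes 2D points in X, labels y in {-1, +1}, and w[2] != 0."""
    plt.scatter(X[y > 0, 0], X[y > 0, 1], marker='+', label='+1')
    plt.scatter(X[y < 0, 0], X[y < 0, 1], marker='o', label='-1')
    xs = np.linspace(X[:, 0].min(), X[:, 0].max(), 2)
    plt.plot(xs, -(w[0] + w[1] * xs) / w[2])  # solve the line for y
    plt.legend()
    plt.show()
```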

Artificial Linearly Separable Test Data in Python

Generating artificial test data for Machine Learning (ML) algorithms is an important step in their development. This post discusses generating and plotting linearly separable test data for binary classifiers such as the Perceptron.
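
One possible sketch (hypothetical names, not the post's actual code): draw random 2D points, pick a random target line, and label each point by the side it falls on, which makes the data linearly separable by construction.

```python
import numpy as np

def make_separable(n=100, seed=0):
    """Generate 2D points labeled +/-1 by a random line; the data
    is linearly separable by construction."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(n, 2))       # points in the unit square
    w_true = rng.normal(size=3)               # random line: w0 + w1*x + w2*y
    X_aug = np.column_stack([np.ones(n), X])  # prepend a bias coordinate
    y = np.sign(X_aug @ w_true)
    keep = y != 0                             # drop points exactly on the line
    return X[keep], y[keep]
```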

Notes on Perceptron. Part 1: The Perceptron Learning Algorithm

The Perceptron Learning Algorithm (PLA) is one of the simplest Machine Learning (ML) algorithms. This post goes through an elementary example to illustrate how PLA works.
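
A minimal sketch of the core loop (hypothetical names; labels in {-1, +1}, bias column assumed already appended to X): whenever a point is misclassified, nudge the weights toward it.

```python
import numpy as np

def pla(X, y, max_iter=1000):
    """Plain-vanilla PLA: repeatedly pick a misclassified point and
    add y_i * x_i to the weights until all points are classified."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size == 0:        # all points correct: converged
            return w
        i = wrong[0]
        w = w + y[i] * X[i]        # the perceptron update
    return w
```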