Tuesday, February 17, 2015
This post illustrates that, on linearly separable data, PLA converges in practice much faster than the theoretical estimate suggests.
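As a minimal sketch of the kind of comparison involved (not the post's own experiment; the data and separator below are illustrative choices of mine), one can run vanilla PLA on random separable data and print the update count next to the classical bound (R / ρ)²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 2D points labeled by a fixed hyperplane, with a small margin
# enforced so the data are cleanly separable.
w_true = np.array([0.2, 1.0, -1.0])                 # illustrative separator
X = np.hstack([np.ones((400, 1)), rng.uniform(-1, 1, (400, 2))])
X = X[np.abs(X @ w_true) > 0.05]                    # enforce a positive margin
y = np.sign(X @ w_true)

# Vanilla PLA: fix a misclassified point until none remain, counting updates.
w, updates = np.zeros(3), 0
while (miss := np.flatnonzero(np.sign(X @ w) != y)).size > 0:
    i = miss[0]
    w, updates = w + y[i] * X[i], updates + 1

# Classical bound on the number of updates: (R / rho)^2,
# where rho is the margin of the (unit-normalized) true separator.
rho = np.min(y * (X @ w_true)) / np.linalg.norm(w_true)
R = np.max(np.linalg.norm(X, axis=1))
print(f"PLA updates: {updates}, theoretical bound: {(R / rho) ** 2:.0f}")
```

On runs like this the actual update count is typically orders of magnitude below the bound.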
Saturday, January 17, 2015
Here we compare Pocket PLA, Adaline, and several Adaline variations with adjustable learning rates, using artificial training data.
Saturday, December 20, 2014
The adaptive linear neuron (or element), aka Adaline, is a variation of PLA whose update is scaled by the magnitude of the mismatch. While not universally the best choice of classifier learning algorithm, Adaline can be viewed as an improvement over the original PLA.
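A rough sketch of that scaled update, assuming the usual least-squares formulation (the function name and the learning rate η are illustrative, not from the post):

```python
import numpy as np

def adaline_epoch(X, y, w, eta=0.01):
    """One pass of the Adaline rule: each update is scaled by the residual
    y - w.x, so larger mismatches between the label and the linear
    activation produce larger corrections (unlike PLA's fixed-size step)."""
    for x_i, y_i in zip(X, y):
        residual = y_i - x_i @ w        # mismatch magnitude drives the step
        w = w + eta * residual * x_i
    return w
```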
Monday, December 15, 2014
This post discusses an outline of the PLA convergence theorem. While the algorithm's convergence is not obvious, its proof hinges on only two key inequalities.
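For reference, a sketch of those two inequalities in common textbook notation (w* a unit-norm separating vector, w_T the weights after T updates; the symbols are mine, not necessarily the post's):

```latex
\begin{align*}
  w^{*\top} w_T &\;\ge\; T\rho,
      & \rho &= \min_n\, y_n\, w^{*\top} x_n \;>\; 0,\\
  \lVert w_T \rVert^2 &\;\le\; T R^2,
      & R &= \max_n \lVert x_n \rVert.
\end{align*}
```

Since the projection onto w* cannot exceed the norm of w_T (Cauchy–Schwarz with unit-norm w*), combining the two yields T ≤ R² / ρ², a finite bound on the number of updates.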
Saturday, December 13, 2014
Here we look at the Pocket algorithm, which addresses an important practical issue with PLA: its instability and lack of convergence on non-separable training datasets.
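A minimal sketch of the idea, assuming the standard formulation (function name and parameters are mine): run ordinary PLA updates, but keep the best weights seen so far "in the pocket".

```python
import numpy as np

def pocket_pla(X, y, max_updates=1000, rng=np.random.default_rng(0)):
    """Pocket PLA sketch: perform ordinary PLA updates, but remember the
    weights with the fewest training errors seen so far, so the returned
    answer cannot degrade even when the data are non-separable."""
    w = np.zeros(X.shape[1])
    best_w, best_errors = w.copy(), np.inf
    for _ in range(max_updates):
        miss = np.flatnonzero(np.sign(X @ w) != y)
        if miss.size < best_errors:              # new best: update the pocket
            best_w, best_errors = w.copy(), miss.size
        if miss.size == 0:
            break
        i = rng.choice(miss)                     # standard PLA step on a random miss
        w = w + y[i] * X[i]
    return best_w
```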
Thursday, November 27, 2014
A couple of 2D visualizations for the Perceptron Learning Algorithm (PLA) show the behavior of plain vanilla PLA.
Monday, November 24, 2014
Generating artificial test data for Machine Learning (ML) algorithms is an important step in their development. This post discusses generation and plotting of linearly separable test data for binary classifiers like Perceptron.
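One common recipe for such data, sketched below under my own naming (the post's exact construction may differ): sample points uniformly, label them with a random hyperplane, and discard points too close to it.

```python
import numpy as np

def separable_data(n=100, dim=2, margin=0.05, rng=np.random.default_rng(1)):
    """Sample points uniformly in [-1, 1]^dim, label them with a random
    hyperplane through the origin, and drop points closer than `margin`
    to it, so the two classes are cleanly linearly separable.
    Oversamples by 4x so enough points survive the margin filter."""
    w = rng.normal(size=dim + 1)                 # random separator (bias + weights)
    w /= np.linalg.norm(w)
    X = np.hstack([np.ones((4 * n, 1)), rng.uniform(-1, 1, (4 * n, dim))])
    signed = X @ w
    keep = np.abs(signed) > margin               # enforce a positive margin
    return X[keep][:n], np.sign(signed[keep][:n])
```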
Wednesday, November 12, 2014
The Perceptron Learning Algorithm (PLA) is one of the simplest Machine Learning (ML) algorithms. This post goes through an elementary example to illustrate how PLA works.
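In the spirit of that elementary example (the numbers below are toy values of my choosing, not the post's), a single PLA step looks like this:

```python
import numpy as np

# The point x = (1, 2, 1) with label y = +1 is misclassified by
# w = (0, 0, 0), since sign(w.x) = 0 != +1, so PLA adds y * x.
w = np.array([0.0, 0.0, 0.0])
x, y = np.array([1.0, 2.0, 1.0]), 1
if np.sign(w @ x) != y:
    w = w + y * x          # the PLA update rule: w <- w + y * x
print(w)                   # [1. 2. 1.]: now sign(w.x) = +1, point fixed
```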