Notes on Perceptron. Part 7: Practical Convergence vs. Theoretical for Separable Data

This post illustrates that, for linearly separable data, practical convergence of PLA is achieved much faster than the theoretical estimate suggests. (Continued)
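
A minimal sketch of this kind of experiment, with made-up data and a hypothetical separator w_star (not the post's actual setup): run vanilla PLA on random separable points and compare the observed number of updates with the theoretical bound (R/γ)².

```python
# Made-up data: a hypothetical true separator w_star labels random points.
import numpy as np

rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0, 0.5])
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sign(X @ w_star)

# Vanilla PLA: sweep the data, fixing one misclassified point at a time.
w, updates = np.zeros(3), 0
converged = False
while not converged:
    converged = True
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:      # misclassified (or on the boundary)
            w += yi * xi            # PLA update
            updates += 1
            converged = False

# Theoretical bound from the convergence theorem: t <= (R / gamma)^2,
# with R the data radius and gamma the normalized margin of w_star.
R = np.linalg.norm(X, axis=1).max()
gamma = (y * (X @ w_star)).min() / np.linalg.norm(w_star)
print(f"updates: {updates}, theoretical bound: {(R / gamma) ** 2:.0f}")
```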

AI Planning: Logistics Puzzles in Cartesian World

In this post we discuss a simple breadth-first forward state-space solver for puzzles that can be called “logistics puzzles”. The puzzle about a farmer crossing a river with a cabbage, a goat, and a wolf is probably the best-known example of such puzzles. (Continued)
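
For a taste of the approach, here is a minimal BFS sketch for the farmer puzzle; the state encoding (who is on the left bank) and the helper names are illustrative, not the post's actual code.

```python
from collections import deque

ITEMS = {"farmer", "wolf", "goat", "cabbage"}

def safe(left):
    # Unsafe if the goat shares a bank with the wolf or the cabbage
    # while the farmer is on the other bank.
    for bank in (left, ITEMS - left):
        if "farmer" not in bank and "goat" in bank and {"wolf", "cabbage"} & bank:
            return False
    return True

def moves(left):
    # The farmer crosses alone or with one item from his current bank.
    on_left = "farmer" in left
    src = left if on_left else ITEMS - left
    for cargo in [None] + [i for i in src if i != "farmer"]:
        crossing = {"farmer"} | ({cargo} if cargo else set())
        yield left - crossing if on_left else left | crossing

start, goal = frozenset(ITEMS), frozenset()   # states: who is on the left bank
queue, parent = deque([start]), {start: None}
while queue:
    state = queue.popleft()
    if state == goal:
        break
    for nxt in moves(state):
        if nxt not in parent and safe(nxt):
            parent[nxt] = state
            queue.append(nxt)

plan, s = [], goal
while s is not None:                          # walk parents back to the start
    plan.append(sorted(s))
    s = parent[s]
print(*reversed(plan), sep="\n")
```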

Notes on Perceptron. Part 6: Experimenting with Learning Rate

Here we compare Pocket PLA, Adaline, and some Adaline variations with an adjustable learning rate, using artificial training data. (Continued)
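
A toy version of such a comparison, with made-up data and an arbitrary grid of learning rates (not the post's actual experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sign(X @ np.array([2.0, -1.0]) + 0.3)
Xb = np.c_[np.ones(len(X)), X]               # prepend a bias column

for eta in (0.001, 0.01, 0.1):
    w = np.zeros(3)
    for _ in range(50):                      # fixed number of epochs
        for xi, yi in zip(Xb, y):
            w += eta * (yi - w @ xi) * xi    # Adaline: error-scaled update
    errors = np.sum(np.sign(Xb @ w) != y)
    print(f"eta={eta}: {errors} points misclassified")
```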

Bare-bones Tasks Manager in Python

Utilizing a multi-core architecture in Python scripts can be challenging, even though a number of libraries for that already exist. For simple cases, the following home-brewed solution, consisting of a small TaskManager class, can be a viable option.
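
A minimal sketch of the idea, built on the standard multiprocessing module; the class below is illustrative and not the post's actual TaskManager.

```python
import multiprocessing as mp

class TaskManager:
    """Runs submitted (func, args) tasks on a pool of worker processes."""

    def __init__(self, workers=None):
        self._pool = mp.Pool(processes=workers or mp.cpu_count())
        self._pending = []

    def add_task(self, func, *args):
        self._pending.append(self._pool.apply_async(func, args))

    def results(self):
        """Blocks until all tasks finish; returns results in submission order."""
        out = [p.get() for p in self._pending]
        self._pool.close()
        self._pool.join()
        return out

def square(x):          # work functions must be picklable (module level)
    return x * x

if __name__ == "__main__":  # guard required on spawn-based platforms
    tm = TaskManager(workers=4)
    for n in range(8):
        tm.add_task(square, n)
    print(tm.results())
```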

(Continued)

Notes on Perceptron. Part 5: Adaline

Adaptive linear neuron (or element), aka Adaline, can be viewed as a variation of PLA with the update scaled by the magnitude of the mismatch. While not universally the best choice of classifier learning algorithm, Adaline is an improvement over the original PLA.
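
For reference, the two update rules side by side, in standard textbook notation (η is the learning rate):

```latex
% PLA takes a fixed-size step, and only on mistakes; Adaline scales
% every step by the residual (y - w^T x).
\begin{align*}
  \text{PLA:}     &\quad w \leftarrow w + y\,x
                   \quad\text{only when } y\,(w^{\mathsf{T}} x) \le 0,\\
  \text{Adaline:} &\quad w \leftarrow w + \eta\,(y - w^{\mathsf{T}} x)\,x
                   \quad\text{for every sample } (x, y).
\end{align*}
```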

(Continued)

Notes on Perceptron. Part 4: Convergence Theorem

This post discusses an outline of the PLA convergence theorem. While the algorithm's convergence is not obvious, its proof hinges on only two key inequalities.
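
In the usual notation (w* a unit-norm separator with margin γ, all samples bounded by ‖x‖ ≤ R, and w_t the weight vector after t updates, starting from zero), the two inequalities and the bound they yield are, in standard textbook form:

```latex
% Alignment with w* grows linearly in the number of updates t,
% while the norm of w_t grows at most like sqrt(t):
\begin{align*}
  w_t \cdot w^{*} \;\ge\; t\,\gamma,
  \qquad
  \lVert w_t \rVert^{2} \;\le\; t\,R^{2}.
\end{align*}
% Chaining them through the Cauchy--Schwarz inequality bounds t:
\begin{equation*}
  t\,\gamma \;\le\; w_t \cdot w^{*} \;\le\; \lVert w_t \rVert \;\le\; \sqrt{t}\,R
  \quad\Longrightarrow\quad
  t \;\le\; \frac{R^{2}}{\gamma^{2}}.
\end{equation*}
```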

(Continued)

Notes on Perceptron. Part 3: The Pocket Algorithm and Non-Separable Data

Here we look at the Pocket algorithm, which addresses an important practical issue: PLA's instability and lack of convergence on a non-separable training dataset.
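
A minimal sketch of the Pocket idea (illustrative, not the post's code): take ordinary PLA steps, but keep "in the pocket" the best weights seen so far, as measured by training error on the whole dataset.

```python
import numpy as np

def pocket_pla(X, y, max_updates=1000, seed=2):
    """X is assumed to already carry a bias column; y holds labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    best_w, best_err = w.copy(), np.sum(np.sign(X @ w) != y)
    for _ in range(max_updates):
        wrong = np.flatnonzero(y * (X @ w) <= 0)
        if wrong.size == 0:
            return w                      # separable: current w is perfect
        i = rng.choice(wrong)             # ordinary PLA step on a random mistake
        w = w + y[i] * X[i]
        err = np.sum(np.sign(X @ w) != y)
        if err < best_err:                # pocket the improvement
            best_w, best_err = w.copy(), err
    return best_w
```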

(Continued)

Notes on Perceptron. Part 2: PLA Visualization

A couple of 2D visualizations show the behavior of the plain vanilla Perceptron Learning Algorithm (PLA).
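
For a flavor of such a picture, here is a small matplotlib sketch with made-up data and weights (not the post's actual figures):

```python
# The boundary is the line w0 + w1*x1 + w2*x2 = 0, drawn over the two classes.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(100, 2))
w = np.array([0.1, 2.0, -1.0])                     # [bias, w1, w2]
y = np.sign(w[0] + X @ w[1:])

plt.scatter(X[y > 0, 0], X[y > 0, 1], marker="+", label="+1")
plt.scatter(X[y < 0, 0], X[y < 0, 1], marker="o", label="-1")
xs = np.array([-1.0, 1.0])
plt.plot(xs, -(w[0] + w[1] * xs) / w[2], "k--", label="boundary")
plt.legend()
plt.show()
```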

(Continued)

Artificial Linearly Separable Test Data in Python

Generating artificial test data for Machine Learning (ML) algorithms is an important step in their development. This post discusses generating and plotting linearly separable test data for binary classifiers such as the Perceptron.
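
One simple recipe, sketched below (not necessarily the post's exact approach): pick a random line and label random points by which side of it they fall on.

```python
import numpy as np

def separable_data(n, seed=4):
    """Random points in [-1, 1]^2 labeled by the side of a random line."""
    rng = np.random.default_rng(seed)
    a, b, c = rng.uniform(-1, 1, size=3)    # separating line a + b*x1 + c*x2 = 0
    X = rng.uniform(-1, 1, size=(n, 2))
    y = np.sign(a + X @ np.array([b, c]))
    keep = y != 0                           # drop (unlikely) on-the-line points
    return X[keep], y[keep]

X, y = separable_data(100)
print(X[:3], y[:3])
```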

(Continued)

Notes on Perceptron. Part 1: The Perceptron Learning Algorithm

The Perceptron Learning Algorithm (PLA) is one of the simplest Machine Learning (ML) algorithms. This post goes through an elementary example to illustrate how PLA works.
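
The whole algorithm fits in a few lines; here is a minimal sketch (illustrative, not the post's code):

```python
import numpy as np

def pla(X, y):
    """Vanilla PLA; terminates only if the data is linearly separable."""
    Xb = np.c_[np.ones(len(X)), X]          # prepend a bias column
    w = np.zeros(Xb.shape[1])
    while True:
        wrong = [i for i in range(len(y)) if y[i] * (Xb[i] @ w) <= 0]
        if not wrong:
            return w                        # every point classified correctly
        i = wrong[0]
        w += y[i] * Xb[i]                   # the whole learning rule: w += y*x
```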

(Continued)