This post outlines the proof of the PLA convergence theorem. While the convergence of the algorithm is not obvious, the proof hinges on only two key inequalities.
The Perceptron Learning Algorithm was outlined in this earlier post.
Notations and Convergence Conditions
The input vectors in the input set are $(m+1)$-dimensional and have the first coordinate fixed to 1. The vector defining the linear functional on the space of input vectors is denoted by $\mathbf{w}$; it is also $(m+1)$-dimensional and its first component is called the bias. The dot product on the linear space at hand is defined in the standard (Euclidean) fashion and is denoted by $\mathbf{w} \cdot \mathbf{x}$.
Each vector in the training set of vectors $\mathbf{x}_1, \dots, \mathbf{x}_N$ is assigned to one of the two classes denoted by +1 and -1. Let’s denote the assignment mapping by $d$, so that $d(\mathbf{x}_i) \in \{+1, -1\}$. The assignment is linearly separable if there exists $\mathbf{w}^*$ such that $d(\mathbf{x}_i)\,(\mathbf{w}^* \cdot \mathbf{x}_i) > 0$ for all $i$.
By assuming linear separability achieved with the vector $\mathbf{w}^*$, we can define a quantity $\gamma$ that can be treated as the minimal separation from the decision boundary:
(1) $\gamma = \min_i d(\mathbf{x}_i)\,(\mathbf{w}^* \cdot \mathbf{x}_i) > 0$.
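As a hypothetical worked instance (numbers chosen only for illustration): for a single training vector $\mathbf{x}_1 = (1, 2, 0)$ with $d(\mathbf{x}_1) = +1$ and the separating vector $\mathbf{w}^* = (0, 1, 0)$, the quantity in (1) is $\gamma = (+1)\,(\mathbf{w}^* \cdot \mathbf{x}_1) = 2$.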
We will also assume that the PLA starts its iterations with $\mathbf{w}(0) = \mathbf{0}$ and that the weight vector after the $i$-th update is denoted by $\mathbf{w}(i)$.
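To keep the notation concrete, here is a minimal sketch of the loop the proof below analyzes, assuming the standard update rule $\mathbf{w}(k+1) = \mathbf{w}(k) + d(\mathbf{x}(k))\,\mathbf{x}(k)$ with a unit learning rate; the function and variable names are illustrative only, not taken from the earlier post.

```python
# A minimal PLA sketch in the notation above (assumes the standard update
# rule with unit learning rate; illustrative, not a reference implementation).
import numpy as np

def pla(X, d, max_updates=10_000):
    """X: array of shape (N, m+1) with the first column fixed to 1,
    d: array of labels in {+1, -1}. Returns (w, number_of_updates)."""
    w = np.zeros(X.shape[1])              # w(0) = 0
    for n in range(max_updates):
        scores = d * (X @ w)              # x_i is misclassified when d_i * (w . x_i) <= 0
        misclassified = np.where(scores <= 0)[0]
        if misclassified.size == 0:       # every vector classified correctly: stop
            return w, n
        i = misclassified[0]              # pick a misclassified vector x(n)
        w = w + d[i] * X[i]               # w(n+1) = w(n) + d(x(n)) x(n)
    return w, max_updates
```

The convergence theorem below bounds the number of times the update line can execute.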
Proof Idea
The main idea of the proof is to obtain two bounds on the growth of the squared norm $\|\mathbf{w}(n)\|^2$.
- The first (quadratic lower) bound is obtained by counting updates and using the minimal separation (1): the squared norm in question grows at least quadratically with the number of iterations. This bound does not depend on the particular choice of the misclassified vector used to update $\mathbf{w}(k)$ at each step.
- The second (linear upper) bound is obtained by direct use of the update rule: $\mathbf{x}(k)$ is picked because it is misclassified, so $\mathbf{w}(k)$ is “anti-aligned” with $d(\mathbf{x}(k))\,\mathbf{x}(k)$ and the cross term in the expansion of the squared norm is non-positive, which limits the growth of $\|\mathbf{w}(n)\|^2$ from above by a linear function of $n$.
A combination of both bounds gives the desired result and an estimate of the maximum number of iterations required for the PLA to converge.
Convergence Theorem Proof
Unfolding $\mathbf{w}(n)$ via the update rule $\mathbf{w}(k+1) = \mathbf{w}(k) + d(\mathbf{x}(k))\,\mathbf{x}(k)$, where $\mathbf{x}(k)$ is the misclassified vector picked at step $k$, and using $\mathbf{w}(0) = \mathbf{0}$, we get:
(2) $\mathbf{w}(n) = \sum_{k=0}^{n-1} d(\mathbf{x}(k))\,\mathbf{x}(k),$
which, when multiplied (in the dot-product sense) by $\mathbf{w}^*$ and combined with (1), gives:
(3) $\mathbf{w}^* \cdot \mathbf{w}(n) = \sum_{k=0}^{n-1} d(\mathbf{x}(k))\,(\mathbf{w}^* \cdot \mathbf{x}(k)) \ge n\,\gamma.$
Next we use the Cauchy-Schwarz inequality to obtain:
(4) $\|\mathbf{w}^*\|^2\,\|\mathbf{w}(n)\|^2 \ge (\mathbf{w}^* \cdot \mathbf{w}(n))^2,$
which, by combining (3) and (4), leads to:
(5) $\|\mathbf{w}(n)\|^2 \ge \frac{n^2\,\gamma^2}{\|\mathbf{w}^*\|^2},$
which gives the first component of the proof: the squared norm of the weight vector grows at least as fast as $n^2$ as updates accumulate.
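As a hypothetical worked instance (numbers chosen only for illustration): if the separating vector is normalized so that $\|\mathbf{w}^*\| = 1$ and the minimal separation is $\gamma = 0.1$, then after $n = 50$ updates inequality (5) already guarantees $\|\mathbf{w}(50)\|^2 \ge 50^2 \cdot 0.1^2 = 25$.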
To obtain the upper bound on the growth of $\|\mathbf{w}(n)\|^2$ we first consider the equalities:
(6) $\|\mathbf{w}(k+1)\|^2 = \|\mathbf{w}(k) + d(\mathbf{x}(k))\,\mathbf{x}(k)\|^2,$
(7) $\|\mathbf{w}(k+1)\|^2 = \|\mathbf{w}(k)\|^2 + \|\mathbf{x}(k)\|^2 + 2\,d(\mathbf{x}(k))\,(\mathbf{w}(k) \cdot \mathbf{x}(k)),$
where (6) is the update rule by definition and (7) expands it using the Euclidean dot product (note that $d(\mathbf{x}(k))^2 = 1$). By the definition of the PLA the vector $\mathbf{x}(k)$ was picked because it was misclassified, hence the last term in (7) is non-positive: $d(\mathbf{x}(k))\,(\mathbf{w}(k) \cdot \mathbf{x}(k)) \le 0$. This leads to the inequalities:
(8) $\|\mathbf{w}(k+1)\|^2 \le \|\mathbf{w}(k)\|^2 + \|\mathbf{x}(k)\|^2$
and
(9) $\|\mathbf{w}(k+1)\|^2 - \|\mathbf{w}(k)\|^2 \le \|\mathbf{x}(k)\|^2.$
Summation of inequality (9) for $k$ in the range from $0$ to $n-1$, together with the initial condition $\mathbf{w}(0) = \mathbf{0}$, results in:
(10) $\|\mathbf{w}(n)\|^2 \le \sum_{k=0}^{n-1} \|\mathbf{x}(k)\|^2 \le n\,\max_i \|\mathbf{x}_i\|^2,$
where $\max_i \|\mathbf{x}_i\|^2$ denotes the maximum of the squared norms of the training vectors $\mathbf{x}_i$.
The inequality (10) gives the second component of the proof: the growth of the squared norm of the weight vector is limited from above by a linear function of $n$.
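Continuing the hypothetical numbers from the worked instance above: if in addition every training vector satisfies $\|\mathbf{x}_i\|^2 \le 4$, then (10) caps the same quantity at $\|\mathbf{w}(50)\|^2 \le 50 \cdot 4 = 200$. The lower bound (5) grows as $n^2$ while this upper bound grows only as $n$, so the two must eventually collide.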
Finally, it immediately follows from (5) and (10) that the algorithm has to stop: otherwise the quadratic lower bound would eventually exceed the linear upper bound. The maximum iteration number $n_{\max}$ for stopping can be obtained from the equality where both estimates of the growth are combined,
(11) $\frac{n_{\max}^2\,\gamma^2}{\|\mathbf{w}^*\|^2} = n_{\max}\,\max_i \|\mathbf{x}_i\|^2,$
i.e.
(12) $n_{\max} = \frac{\|\mathbf{w}^*\|^2\,\max_i \|\mathbf{x}_i\|^2}{\gamma^2}.$
Qualitatively, the smaller the minimal separation $\gamma$ of the training data, the longer the algorithm may take to converge. In practice the number of iterations required for the PLA to converge can be considerably smaller than the estimate above. However, neither $\gamma$ nor $\mathbf{w}^*$ is known in advance for an arbitrary dataset, so there is no easy way to calculate the estimate (12) upfront, without running the algorithm, or even to conclude whether it will converge or not.
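To illustrate the last point, here is a hypothetical experiment, reusing the pla() function from the sketch above; the dataset and all names are made up for illustration. It runs the PLA on a randomly generated separable dataset, then computes the estimate (12) post hoc from the separator used to generate the labels and compares it with the actual number of updates (with the worked numbers used earlier, (12) would give $n_{\max} = 1 \cdot 4 / 0.1^2 = 400$).

```python
# Illustrative only: compare the actual number of PLA updates with the
# post-hoc estimate (12). Reuses pla() from the sketch in the notation section.
import numpy as np

rng = np.random.default_rng(0)
N, m = 500, 2
X = np.hstack([np.ones((N, 1)), rng.uniform(-1.0, 1.0, size=(N, m))])  # first coordinate fixed to 1
w_star = np.array([0.1, 1.0, -1.0])   # the vector used to generate (linearly separable) labels
X = X[np.abs(X @ w_star) >= 0.1]      # keep the data clearly separated so gamma in (1) is not tiny
d = np.sign(X @ w_star)

w, n_updates = pla(X, d)

gamma = np.min(d * (X @ w_star))      # minimal separation, as in (1)
bound = np.linalg.norm(w_star) ** 2 * np.max(np.sum(X ** 2, axis=1)) / gamma ** 2  # estimate (12)
print(f"updates performed: {n_updates}, estimate (12): {bound:.0f}")
```

On draws like this the actual update count is typically far below the estimate, in line with the remark above.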
The proof above was borrowed from [1] and is close to what [2] suggests as a series of steps in an exercise for Chapter 1.
References
[1] S. Haykin, “Neural Networks: A Comprehensive Foundation”, Prentice-Hall, 1999, 842 pages.
[2] Y. S. Abu-Mostafa, M. Magdon-Ismail, H.-T. Lin, “Learning From Data”, AMLBook, 2012, 213 pages.