in AI and Deep Learning by (50.2k points)

I'm reviewing the Toronto perceptron MATLAB code.

The code is

function [w] = perceptron(X,Y,w_init)

w = w_init;
for iteration = 1 : 100                  % <-- in practice, use some stopping criterion!
    for ii = 1 : size(X,2)               % cycle through training set
        if sign(w'*X(:,ii)) ~= Y(ii)     % wrong decision?
            w = w + X(:,ii) * Y(ii);     % then add (or subtract) this point to w
        end
    end
    sum(sign(w'*X)~=Y)/size(X,2)         % show misclassification rate
end

I was trying to work out how to apply this function to a data matrix X and targets Y, but I don't know how to call it. I understand that it returns a vector of weights which can then be used to classify.

Could you please give an example and explain it?

I've tried

X = [0 0; 0 1; 1 1]
Y = [1 0; 2 1]
w = [1 1 1]

Result = perceptron( X, Y, w )
??? Error using ==> mtimes
Inner matrix dimensions must agree.
Error in ==> perceptron at 15
if sign(w'*X(:,ii)) ~= Y(ii)

Result = perceptron( X, Y, w' )
??? Error using ==> ne
Matrix dimensions must agree.
Error in ==> perceptron at 19
sum(sign(w'*X)~=Y) / size(X,2);

Thanks

Thank you for the answers. I have one more question: if I change Y to [0, 1], what happens to the algorithm?

So no input data will work with Y = [0, 1] with this perceptron code, right?

1 Answer

by (107k points)

You should first understand the meaning of each of the inputs:

Here, X is an input matrix of examples with size M x N, where M is the dimension of the feature vector, and N the number of samples. 

Since the perceptron prediction is based on the linear term w'*X + b, you have to supply one extra dimension in X which is constant, usually set to 1, so that the bias term b is "built in" to X. In the following code for X, the last entry of X is set to 1 in all samples.
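As a minimal sketch of that step (assuming your raw features are in a matrix called Xraw, a name used here only for illustration), you can append the constant row yourself:

% append a row of ones so the bias b is learned as part of w
Xraw = rand(2, 10);                 % 2 features, 10 samples (toy data)
X = [Xraw; ones(1, size(Xraw,2))];  % X is now 3 x 10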

Y is the classification for each sample from X (the classification you want the perceptron to learn), so it should be an N-dimensional row vector: one output for each input example.

Since the perceptron is a binary classifier, Y should have only 2 distinct possible values. Looking at the code, you see that it checks the sign of the prediction, which tells you that the allowed values of Y are -1 and +1 (and not 0 and 1, for example).
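If your labels happen to be coded as 0/1 (as in your follow-up question), the code will not work as-is, but a simple remapping to -1/+1 before calling the function is enough. A small sketch, assuming Y01 holds the 0/1 labels:

% remap {0,1} labels to {-1,+1} so they can match sign(w'*X)
Y01 = [0 1 1 0];
Y = 2*Y01 - 1;      % gives [-1 1 1 -1]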

w is the weight vector that you will be trying to learn.

% input samples
X1 = [rand(1,100); rand(1,100); ones(1,100)];    % class '-1'
X2 = [rand(1,100); 1+rand(1,100); ones(1,100)];  % class '+1'
X  = [X1, X2];

% output classes in {-1,+1}
Y = [-ones(1,100), ones(1,100)];

% init weight vector
w = [.5 .5 .5]';

% call perceptron
wtag = perceptron(X, Y, w);

% predict
ytag = wtag'*X;

% plot prediction over original data
figure; hold on
plot(X1(1,:), X1(2,:), 'b.')
plot(X2(1,:), X2(2,:), 'r.')
plot(X(1,ytag<0), X(2,ytag<0), 'bo')
plot(X(1,ytag>0), X(2,ytag>0), 'ro')
legend('class -1', 'class +1', 'pred -1', 'pred +1')
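If you also want a numeric check of the learned weights, you can compute the training accuracy from the same quantities the example already produces (wtag, X, Y). A small sketch:

% fraction of training samples whose predicted sign matches the label
acc = mean(sign(wtag'*X) == Y)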
