MATLAB Answers

What percentage of my target data should be 1 and what percentage should be 0?

jack nn on 29 Jun 2015
Commented: jack nn on 4 Jul 2015
Hi everybody, I am a beginner and I want to use an SVM for classification of my data. Suppose that my training data are like below:
X1 and X2 are the inputs of my network (X1 and X2 are features that we extracted) and Y is the output of my network. Now I have a question: if I have 15700 samples for training my network, how many of them should have label 1 and how many should have label 0 (my network has 2 classes)? Should there be any particular proportion between the labels of my classes? What percentage of my target data should be 1 and what percentage should be 0? If 800 of my labels are 1 and 14900 are 0, will my network work correctly? Thanks


Answers (1)

Martin Brown on 29 Jun 2015
It partially depends on whether the data / distributions are separable or overlapping.
Assuming the data is separable (it probably isn't), the numbers don't matter too much as long as you have exemplars (support vectors) which lie close to the margin boundary and hence determine the decision boundary. Generally, the more data you train with the better, as you'll have a richer pool of potential support vectors, and the relative numbers don't matter.
If the data is not separable, the numbers should typically reflect the prior class probabilities, i.e. how the examples are drawn from the real world. You give an example where about 5% are class 1 and 95% are class 0. If this reflects the fact that class 1 examples are much rarer in real life than class 0, then this is appropriate. However, if the classes are very overlapping (based on your choice of features), it may be that the classifier would just learn to say class 0 all the time, as that would be right 95% of the time, but it would not be predictive in any sense. So if you have imbalanced class distributions, as you seem to suggest, make sure that the features have enough discriminatory power to predict the rare class in some cases.
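For illustration, here is a minimal sketch (assuming the Statistics Toolbox function fitcsvm is available, and using X and Y as placeholder names for your 15700x2 feature matrix and 0/1 label vector) that checks per-class recall rather than overall accuracy, which is what exposes the "always say the majority class" failure:

    mdl   = fitcsvm(X, Y);              % X: 15700x2 features, Y: 0/1 labels
    Ypred = predict(mdl, X);            % use a held-out test set in practice

    C = confusionmat(Y, Ypred);         % rows = true class, columns = predicted
    recall0 = C(1,1) / sum(C(1,:));     % fraction of class-0 samples recovered
    recall1 = C(2,2) / sum(C(2,:));     % fraction of class-1 samples recovered
    fprintf('Recall class 0: %.2f, class 1: %.2f\n', recall0, recall1)

If recall for the rare class is near zero while overall accuracy looks high, the classifier has collapsed onto the majority class.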

  3 Comments

jack nn on 29 Jun 2015
Hi, thanks dear Martin Brown.
Based on what you said: "However, if the classes are very overlapping (based on your choice of features), it may be that the classifier would just learn to say class 0 all the time, as that would be right 95% of the time, but it would not be predictive in any sense..."
Can I remove some of the rows where Y is zero? For example, reduce the data to 10000 rows, where Y is 1 in 800 of them and Y is 0 in 9200 of them? Thanks
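(For concreteness, the row removal described above could be sketched like this; X and Y are assumed to hold the features and labels:)

    idx1 = find(Y == 1);                       % all 800 class-1 rows
    idx0 = find(Y == 0);                       % all 14900 class-0 rows
    idx0 = idx0(randperm(numel(idx0), 9200));  % keep a random 9200 of them

    keep = [idx1; idx0];
    Xsub = X(keep, :);                         % 10000x2 reduced feature matrix
    Ysub = Y(keep);                            % 800 ones, 9200 zeros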
Martin Brown on 1 Jul 2015
I don't fully understand your comment/question, but if you remove data according to the proportions in which the classes occur in the data set (their prior class probabilities, assuming the data has been collected in an unbiased way), then you're simply subsampling the data.
If you're deleting rows not in proportion to their prior probabilities, you'd be producing a biased classifier (strictly speaking an SVM doesn't produce an "easy" probabilistic classifier, but it is similar in some senses). By removing data, you'd be assigning a higher weighting to one type of error. This may be correct in some cases (medical diagnosis, fraud detection), but you should be prepared to justify these weightings. Something like
has a decent description of this.
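As a sketch of that idea (the 10x penalty is an arbitrary illustration, not a recommended value), fitcsvm lets you state such a weighting explicitly through its Cost option instead of deleting rows:

    cost = [0  1;   % row 1: true class 0, cost 1 for predicting class 1
            10 0];  % row 2: true class 1, cost 10 for predicting class 0
    mdl = fitcsvm(X, Y, 'Cost', cost);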
