Reaching max # of iterations using libsvm

Hi,

I’m trying to run a classification using trial-wise images both for training and testing, and I got the warning “Warning: reaching max number of iterations” many times. This sounds like the problem described in this Cross Validated question: “libsvm ‘reaching max number of iterations’ warning and cross-validation”.

I am just wondering if there is a “right” way to get around this problem in the realm of neuroimaging. I’m pretty much using the defaults when making the cfg struct.

Thanks!
Defne

Hi Defne,

Not sure what the issue is. If the data are not scaled, I would consider scaling them. Is it possible that you are running an extremely large classification problem? In that case it might make sense to adjust the defaults used for libsvm and increase the number of iterations manually.

If you check the TDT defaults, you will see a field called something like defaults.software.classification.train or similar (I don’t know the exact name off the top of my head). From this you can see how you would need to manually adjust the corresponding field in your cfg, i.e. cfg.software.classification.train (or whatever the field is called). Once you have called TDT or have it on your Matlab path, just type svmtrain into the console; it will print all libsvm options you can adjust.
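For example, something along these lines might work (a sketch only; as said, the exact field name is a guess, so check the defaults struct that TDT creates and adjust accordingly):

    % Hedged sketch: override the libsvm training options through the cfg.
    % The field name below follows my guess above and may differ in your
    % TDT version; verify it against the output of decoding_defaults.
    cfg = decoding_defaults;
    cfg.software.classification.train = '-s 0 -t 0 -c 1 -e 0.001 -q';  % libsvm option string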

Hope this helps!
Martin

Thanks Martin!

Unfortunately, for the purposes of my research question, I am hesitant to scale my data. It may just be that I have a lot of data.

It also looks like the software specifications are no longer created in decoding_defaults.

It looks like the possible parameters are the following:

Usage: model = svmtrain(training_label_vector, training_instance_matrix, 'libsvm_options');
libsvm_options:
-s svm_type : set type of SVM (default 0)
    0 -- C-SVC (multi-class classification)
    1 -- nu-SVC (multi-class classification)
    2 -- one-class SVM
    3 -- epsilon-SVR (regression)
    4 -- nu-SVR (regression)
-t kernel_type : set type of kernel function (default 2)
    0 -- linear: u'*v
    1 -- polynomial: (gamma*u'*v + coef0)^degree
    2 -- radial basis function: exp(-gamma*|u-v|^2)
    3 -- sigmoid: tanh(gamma*u'*v + coef0)
    4 -- precomputed kernel (kernel values in training_instance_matrix)
-d degree : set degree in kernel function (default 3)
-g gamma : set gamma in kernel function (default 1/num_features)
-r coef0 : set coef0 in kernel function (default 0)
-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize : set cache memory size in MB (default 100)
-e epsilon : set tolerance of termination criterion (default 0.001)
-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
-v n : n-fold cross validation mode
-q : quiet mode (no outputs)
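For context, these options seem to be passed to svmtrain as a single string in the third argument. A sketch with purely illustrative values (train_labels and train_data are hypothetical variable names):

    % Illustrative only: linear C-SVC with a looser termination tolerance
    % (-e) and a larger kernel cache (-m).
    model = svmtrain(train_labels, train_data, '-s 0 -t 0 -c 1 -e 0.01 -m 500 -q');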

Do you have suggestions about which of these should be modified? I was hoping to find something called “iterations” or the like, but I was not able to, and unfortunately I don’t know much about how SVMs work.

Thanks for your help!

Before diving too deeply into this, it would probably help if you could share more of your error messages or warnings, to get a better idea of what’s going on. That said, a lot of data would indeed mean training takes very long. There is actually no issue with scaling data: you can simply make all numbers smaller by the same factor. It helps the classifier find a solution faster and may solve your problem.
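As a minimal sketch of what I mean (variable names are hypothetical), you would estimate one common factor from the training data and apply the same factor everywhere:

    % Minimal sketch: rescale by one common factor, estimated from the
    % training set only, so the relative pattern across voxels and trials
    % is preserved and no information leaks from the test set.
    scale_factor = max(abs(train_data(:)));
    train_data_scaled = train_data / scale_factor;
    test_data_scaled  = test_data  / scale_factor;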

Martin