Evaluating the choice of Gaussian SVM-RBF kernel parameters is essential for obtaining multi-class classification results with minimum error. The optimization of multi-class classification with non-linear SVM depends heavily on the kernel function and its parameters. The Gaussian RBF kernel is recommended for obtaining maximum non-linear SVM classification results on a new dataset [1]-[4], because, with suitable values of the cost (C) and gamma (γ) / sigma (σ) parameters, it matches the performance of the linear kernel in classification optimization. Parameter estimation, in the form of constant values for the soft margin (C) and the kernel parameter (γ), is needed to classify a new test dataset with maximum accuracy using the non-linear Gaussian SVM-RBF kernel [3], [4]. With the right values, parameters C and γ keep both the bias (a measure of the error contribution) and the variance (a measure of the deviations) low when different training datasets are used in the Cross Validation method. To maximize the results of non-linear multi-class SVM classification with the Gaussian RBF kernel, the use of geometric decorative motif datasets for training and testing must be evaluated. The Grid Search method with Cross Validation can be used to evaluate how successfully the RBF kernel parameters are estimated. Grid Search is a search model for the right RBF kernel parameter values that tests candidate values over a certain interval [2]: each value in the interval is tested, and each following candidate is an exponential step from the previous one. After the best parameter values are found, another test is conducted over a smaller range around them. In short, this paper discusses the evaluation of parameter testing using the Grid Search method with Cross Validation, implemented to optimize the accuracy value of Indonesian batik image classification [5].
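To make the procedure concrete, the sketch below shows how such an exponential Grid Search with Cross Validation could be run. It is a minimal illustration, assuming Python with scikit-learn (the paper does not name its implementation) and placeholder features X and labels y:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Placeholder features and labels standing in for the batik dataset.
X = np.random.rand(80, 20)
y = np.repeat([1, 2, 3, 4], 20)

# Exponentially spaced candidates: successive values differ by a factor 2^2.
param_grid = {
    "C":     [2.0 ** k for k in range(-3, 18, 2)],   # 2^-3 ... 2^17
    "gamma": [2.0 ** k for k in range(-17, 4, 2)],   # 2^-17 ... 2^3
}

# 10-fold Cross Validation scores every (C, gamma) combination and keeps
# the one with the highest mean accuracy.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

After the best coarse-grid combination is found, the same call can be repeated with a narrower grid around it, exactly as the text describes.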
The training and test datasets used in this experiment consist of four classes of geometric decorative ornaments on the texture of Indonesian batik motifs (figure 1). This batik dataset is used because each class shares the same geometric decorative motifs while its motif patterns are very diverse. This diversity of motif patterns leads to high complexity in the separation between the classes, because the class separation in the classification function (hyperplane) cannot be done linearly (it is non-linear). Increasing the accuracy value of non-linear multi-class classification on a dataset of traditional batik images requires determining the kernel parameters, which has not been done in previous studies. Parameter determination is also necessary because it can produce a good new feature space (of high dimension); the experiment thus yields a maximum hyperplane to be applied to image datasets with geometric decorative motifs. Moreover, because the dataset in this study is limited, an experiment is needed to determine new parameter values for the non-linear multi-class classification of the SVM-RBF kernel with the Grid Search and Cross Validation methods in order to obtain the maximum accuracy value.
Geometric decoration is an abstract ornamental motif in the form of circles, rectangles, curved lines, zigzags, and/or triangles, as found in the decorative motifs of batik images (figure 1). An Indonesian batik image is a work of fine art produced on a piece of white fabric textile: it decorates the textile surface by holding back the dye. Batik is an Indonesian heritage showing the intelligence of the ancestors in creating beauty on a piece of cloth. The color is retained on the textile surface by applying liquid wax using the traditional tools called "writing canthing (canting tulis)" and "stamp canthing (canthing cap)", as shown in figure 2. This color retention process is known as wax-resist dyeing [6], [7]. Indonesian batik has been recognized by UNESCO since 2009 on the "Representative List of the Intangible Cultural Heritage of Humanity".
There are 80 images in the training and test datasets, divided into 4 classes: class 1 consists of 20 images, class 2 of 16 images, class 3 of 19 images, and class 4 of 25 images. Figure 1 shows examples of the class differentiation based on batik motifs, which vary greatly. The batik image features used are the result of feature extraction with the Discrete Wavelet Transform (DWT) at level 3 with the Daubechies 2 (db2) scaling-function coefficients. DWT level 3 with db2 was the best result from a comparison of decomposition levels and scaling-function coefficient types in a previous study [8]. DWT-2D is a highly effective method to apply to the texture of geometric decorative motifs of traditional batik images: DWT can produce good features for images possessing multi-resolution space and varying image scale transformations, and it can produce features that distinguish image intensity across sub-band spaces [9].
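As an illustration of this feature extraction step, the following minimal sketch uses the PyWavelets library (an assumption; the paper does not name its tooling) to compute a level-3 db2 2-D DWT. The exact feature definition of [8] is not given here, so the mean-magnitude and energy statistics per sub-band are illustrative choices only:

```python
import numpy as np
import pywt

def dwt_features(image):
    """Illustrative texture features: mean absolute value and energy of
    every sub-band from a level-3 Daubechies-2 (db2) 2-D DWT.
    The exact feature definition of the cited study [8] is not given
    here, so these statistics are an assumption."""
    coeffs = pywt.wavedec2(image, "db2", level=3)
    # coeffs = [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)]
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        feats.append(np.mean(np.abs(band)))   # mean magnitude
        feats.append(np.mean(band ** 2))      # energy
    return np.array(feats)

# Example on a synthetic 128x128 grayscale texture.
features = dwt_features(np.random.rand(128, 128))
print(features.shape)  # 10 sub-bands -> 20 features
```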
Class 1: "Ceplok" motif (circle-shaped)
Class 2: "Kawung" motif (Javanese: sugar palm fruit motif with 4 corners)
Class 3: "Nitik" motif (dot; tiny square dot)
Class 4: "Parang" motif (big knife, longer than a knife but shorter than a sword)
Figure 1. Batik motif classes (F. Budiman, et al., 2017)
Figure 2. Tools used to create batik: (a) "canting"; (b) "stamp/cap"
The accuracy of the classification method resulting from the SVM training process relies heavily on the kernel function and the parameter values selected. SVM classification optimization for non-linear multi-class cases is therefore conducted in the next test phase by analysing the Gaussian / Radial Basis Function (RBF) kernel parameters. The optimal RBF parameter values depend strongly on the dataset used. Optimization of the non-linear multi-class SVM-RBF classification results with the Grid Search and Cross Validation processes is done to minimize over- and under-fitting and to obtain the combination of RBF kernel parameter values, within the parameter space, that produces the maximum classification accuracy value. Parameters C and γ (= 1/(2σ^2)) with the right values keep the bias (a measure of the error contribution) and the variance (a measure of the deviations) low when a different training dataset is used with the v-fold Cross Validation method.
Table 1 shows that a high C value with a low γ value causes over-fitting; conversely, a low C value with a high γ value leads to under-fitting [10]. The smaller the C value, the more feature points close to the hyperplane are ignored as support vectors, and the larger the maximum margin becomes (figure 3). A higher gamma value (γ) increases the support vector area and the flexibility of the decision boundary (hyperplane), so that each support vector does not exert a broad influence (figure 4).
Table 1. The influence of C and γ values

|           | Low Variance                   | High Variance              |
|-----------|--------------------------------|----------------------------|
| Low Bias  | right γ value? right C value?  | low γ value, high C value  |
| High Bias | high γ value, low C value      |                            |

Note: high bias → under-fitting; high variance → over-fitting.
Figure 3. SVM-RBF kernel classification for C = 512 and C = 1
Figure 4. SVM-RBF kernel classification for γ = 2^-1 and γ = 2^-17
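The effect described above can be reproduced on toy data. The sketch below, assuming scikit-learn and synthetic two-class data in place of the batik features, counts support vectors for the parameter settings of figures 3 and 4:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Toy non-linear data standing in for the batik features (an assumption).
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

for C, gamma in [(512, 2.0 ** -1), (1, 2.0 ** -1), (1, 2.0 ** -17)]:
    clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
    # A large C keeps fewer points as support vectors and narrows the
    # margin; a tiny gamma flattens the boundary towards a linear one.
    print(f"C={C:>3}, gamma=2^{int(np.log2(gamma))}: "
          f"{len(clf.support_)} support vectors")
```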
Several different test datasets are required to obtain classification results with a minimum error rate. A problem arises when a test of the SVM-RBF parameters tuned on the training dataset yields good accuracy on one test dataset: will it also yield good accuracy on different datasets? To test different datasets when the number of datasets is limited, the Cross Validation (CV) method can be applied in the Grid Search process [2], [11]. The CV method divides the dataset randomly into v partitions (v-fold), each partition indexed 1 through v. The dataset is commonly divided into 10 partitions, i.e. 10-fold Cross Validation [9], [12], [13]. For 10 partitions, the test is conducted 10 times in a leave-one-out fashion: one partition in turn becomes the test dataset and the other (v-1) partitions form the training dataset, as shown in table 2.
Table 2. 10-fold Cross Validation

| Testing No | Training Dataset             | Test Dataset |
|------------|------------------------------|--------------|
| 1          | v1+v2+v3+v4+v5+v6+v7+v8+v9  | v10          |
| 2          | v1+v2+v3+v4+v5+v6+v7+v8+v10 | v9           |
| 3          | v1+v2+v3+v4+v5+v6+v7+v9+v10 | v8           |
| 4          | v1+v2+v3+v4+v5+v6+v8+v9+v10 | v7           |
| 5          | v1+v2+v3+v4+v5+v7+v8+v9+v10 | v6           |
| 6          | v1+v2+v3+v4+v6+v7+v8+v9+v10 | v5           |
| 7          | v1+v2+v3+v5+v6+v7+v8+v9+v10 | v4           |
| 8          | v1+v2+v4+v5+v6+v7+v8+v9+v10 | v3           |
| 9          | v1+v3+v4+v5+v6+v7+v8+v9+v10 | v2           |
| 10         | v2+v3+v4+v5+v6+v7+v8+v9+v10 | v1           |
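A minimal sketch of this partitioning, assuming scikit-learn's KFold as a stand-in for the paper's own implementation:

```python
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(1, 81)  # the 80 batik feature-vector records
kf = KFold(n_splits=10)     # partitions v1 ... v10

# Each round holds one partition out as the test dataset and trains on
# the other nine, reproducing the pattern of table 2.
for round_no, (train_idx, test_idx) in enumerate(kf.split(indices), 1):
    print(f"test {round_no}: train on {len(train_idx)} records, "
          f"test on {len(test_idx)} records")
```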
For every combination of parameters C and γ, the classification is carried out 10 times with 10 different training and test datasets (10-fold). This ensures that no excessive over-fitting occurs when the test is repeated with different testing data; each combination of parameters C and γ is thus tested 10 times. In addition, a random and balanced selection of training and test datasets is made in each test for classes 1, 2, 3 and 4 from the feature vector file consisting of 80 batik image records. However, training and testing on the 80 traditional batik images with the 10-fold CV leave-one-out process on randomly selected image feature vectors yields only 8 feature vectors per test dataset in each test. Since 8 feature vectors is still too small for a test dataset, random selection is instead carried out with a hold-out process, using about 30% of the records for the test dataset and the rest for the training dataset, to increase the number of test records in this study.
The hold-out process, i.e. randomly selecting records for the test data, is conducted within the CV on the feature vector file containing 80 batik image records. From class 1 ("ceplok" motif, records 1-20), 6 records are selected; from class 2 ("kawung" motif, records 21-36), 4 records; from class 3 ("nitik" motif, records 37-55), 5 records; and from class 4 ("parang" motif), 7 records. The random selection is repeated for each test of a parameter C and γ combination. The result of applying the hold-out process to the 80 image feature vectors can be seen in table 3, which describes 10 classifications, each with a different training and testing split: each training dataset contains 58 image feature vectors and each test dataset consists of 22 image feature vectors. The feature vector columns describe the 1st to 10th training-and-test classifications; index '1' indicates that the record is training data, while index '0' indicates that it is test data.
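The stratified hold-out draw described above might be sketched as follows, assuming scikit-learn; the 6/4/5/7 per-class test counts are only approximately reproduced by the proportional split:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Class sizes from the text: 20 "ceplok", 16 "kawung", 19 "nitik",
# 25 "parang" records, 80 in total.
y = np.repeat([1, 2, 3, 4], [20, 16, 19, 25])
records = np.arange(1, 81)

# One stratified hold-out draw: 22 of 80 records (about 30%) become
# test data, spread over the classes roughly as described above.
train_rec, test_rec = train_test_split(
    records, test_size=22, stratify=y, random_state=0)
print(len(train_rec), "training records,", len(test_rec), "test records")
```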
Table 3. Result of Cross Validation (columns 1-10 give the feature vector flags for training and test classifications No. 1-10; '1' = training record, '0' = test record)

| Record Number | Class | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1  | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 |
| 2  | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 |
| 3  | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 |
| 4  | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 |
| 5  | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| …  | … | … | … | … | … | … | … | … | … | … | …  |
| 78 | 4 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| 79 | 4 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
| 80 | 4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 |
Optimizing the estimation of the best parameter values in this experiment uses ranges of parameter values; searching over such value ranges is commonly called the Grid Search method. An initial trial of the Grid Search method is done with a wide range of RBF kernel parameter values for C and γ. Then, after the best parameters are found, testing with a narrower range of values is conducted to obtain the parameter values that produce the best classification accuracy. There is in fact no prescribed range for the Grid Search method; the wider the parameter range is, the more effective the search is at finding the best C and γ combination and thus at significantly increasing the accuracy of image recognition classification. In the initial trial, the RBF parameter ranges are estimated as C = {2^-3, 2^-1, 2^1, 2^3, 2^5, ..., 2^13, 2^15, 2^17} and γ = {2^-17, 2^-15, 2^-13, ..., 2^-5, 2^-3, 2^-1, 2^1, 2^3}; ten classifications in each trial of a C and γ combination are carried out using the 10 training datasets and 10 test datasets from the cross validation result with the hold-out process. There is no range requirement for estimating the C and γ test values of the Gaussian RBF kernel parameters: the wider the range of values for these parameters, the more effectively the Grid Search parameter search with v-fold CV finds a good combination of C and γ (= 1/(2σ^2)).
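Under the same assumptions as the earlier sketches (scikit-learn, placeholder features), the coarse-stage search over this space could look like the following; the mean_accuracy helper is hypothetical and simply averages test accuracy over the 10 hold-out splits:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((80, 20))                       # placeholder features
y = np.repeat([1, 2, 3, 4], [20, 16, 19, 25])  # class sizes from the text

# Ten stratified hold-out splits, as produced by the cross validation
# with hold-out process (table 3).
splits = [train_test_split(X, y, test_size=22, stratify=y, random_state=i)
          for i in range(10)]

C_grid = [2.0 ** k for k in range(-3, 18, 2)]      # 2^-3 ... 2^17
gamma_grid = [2.0 ** k for k in range(-17, 4, 2)]  # 2^-17 ... 2^3

def mean_accuracy(C, gamma):
    """Mean test accuracy of one (C, gamma) pair over the 10 splits."""
    return np.mean([SVC(kernel="rbf", C=C, gamma=gamma)
                    .fit(Xtr, ytr).score(Xte, yte)
                    for Xtr, Xte, ytr, yte in splits])

best = max(((mean_accuracy(C, g), C, g) for C in C_grid for g in gamma_grid))
print("best mean accuracy %.3f at C=2^%g, gamma=2^%g"
      % (best[0], np.log2(best[1]), np.log2(best[2])))
```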
The best combination of C and γ values will produce maximum accuracy in the classification results. In a previous experiment (Renukadevi and Thangaraj, 2013) [3], the classification results with the SVM-RBF kernel were limited to parameter combinations using 10-fold Cross Validation and RBF kernel parameters with the constant value C = 0.125 and three γ values (0.125, 0.25, and 0.75); thus it does not apply Grid Search. Ranges built by adding exponentially increasing parameter values appear in other previous studies: C and γ values ranging from 0.001 to 10,000 (Syarif, Prugel-Bennett, and Wills, 2016) [13]; and C between 1 and 1000 with σ between 1 and 100 [12]. In Gaspar et al. [12] and Syarif et al. [13], however, parameter estimation optimization is not the goal of the study; instead, the parameters are used to test the application of RBF, polynomial, and sigmoid kernel functions to obtain maximum accuracy values in SVM classification. Thus, there is no parameter optimization process for testing classification accuracy on a particular dataset.
The method used in this experiment first produces a feature file for all images, distributed across the classes, using the 3-level decomposition Daubechies 2 (db2) Discrete Wavelet Transform (DWT) feature extraction method. This feature extraction method is based on the writers' previous research [8]. v-fold Cross Validation (CV) with v = 10 (10-fold CV) is then applied to this feature file for each parameter value test. The choice of 10-fold CV is based on the results of 10-fold, 8-fold, 6-fold, 4-fold, and 2-fold tests on the multi-class non-linear SVM-RBF classification, which show that the smaller the number of folds, the higher the image recognition error: the smaller the training set, the worse it represents the hyperplane and margins of each class. The best parameter value estimate in this study is then computed over a range of parameter values, following the Grid Search method, and each parameter value is tested 10 times with different test feature datasets from the 10-fold CV. This ensures that the parameter values used in the classification lead to a relatively similar accuracy value in each test.
The one-against-all method of non-linear SVM classification with four classes consists of four non-linear binary-class SVM classifications. Accordingly, the parameter values C and γ must maximize the classification results for all four hyperplanes and their margins (+1 and -1). In the initial trial over the estimated RBF parameter ranges C = {2^-3, 2^-1, 2^1, 2^3, 2^5, ..., 2^13, 2^15, 2^17} and γ = {2^-17, 2^-15, 2^-13, ..., 2^-5, 2^-3, 2^-1, 2^1, 2^3}, the first step uses the standard parameters of binary classification, C = 1 and γ = 0.5. Ten classifications using the 10-fold CV are run for this initial combination of C = 1 and γ = 0.5, using 10 training datasets and 10 test datasets. With the RBF kernel parameters C = 1 and γ = 0.5, the SVM classification of test feature data from classes 1, 2 and 3 is still poorly recognized: the accuracy value stays low, below 0.5, because these parameters cannot produce hyperplanes and margins that properly recognize the testing images for classes 1, 2, and 3. The high bias with C = 1 and γ = 0.5 causes under-fitting for classes 1, 2 and 3, since the margin area generated by the support vectors does not have a broad enough influence to gather a class. The classification accuracy remains roughly the same for parameter combinations drawn from γ = {2^3, 2^1, 2^-1} and C = {2^-3, 2^-1, 2^0}: the combinations C = 2^-1, γ = 0.5; C = 2^-3, γ = 0.5; C = 1, γ = 2^1; and C = 1, γ = 2^3 all give accuracy values similar to C = 1, γ = 0.5, still below 0.5. Consequently, a relatively high possibility of under-fitting in recognizing the classes of the test feature data remains. Furthermore, the greater the C value, the smaller the number of support vectors, and the much narrower the margin around the hyperplane; this can lead to over-fitting, i.e. assigning the wrong class area to feature vectors near the hyperplane. Also, the smaller the value of γ, the fewer the support vectors, and the margin formed will have a smooth decision boundary that approaches a linear form.
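A minimal sketch of the one-against-all setup, assuming scikit-learn's OneVsRestClassifier and placeholder features:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X = np.random.rand(80, 20)                     # placeholder features
y = np.repeat([1, 2, 3, 4], [20, 16, 19, 25])  # the four motif classes

# One-against-all: one binary RBF classifier (one hyperplane) per class,
# all sharing the same C and gamma, here the initial trial C=1, gamma=0.5.
ova = OneVsRestClassifier(SVC(kernel="rbf", C=1, gamma=0.5)).fit(X, y)
print(len(ova.estimators_), "binary classifiers")  # 4
```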
Maximum recognition with low bias and low variance, within the parameter ranges determined in this study, is obtained at parameter values C = 2^7 and γ = 2^-15, where the number of correctly recognized feature vectors is 17 to 19 of the 22 feature vectors of the test dataset in every test. The classification results and accuracy values are obtained from 10 tests with C = 2^7 and γ = 2^-15 using a different test dataset in each test. The use of parameters C ≥ 2^9 and γ ≤ 2^-17 increases the measure of deviations (variance), which increases over-fitting in determining the class of the test feature data. Consequently, the next Grid Search test is carried out in the smaller range 2^5 < C < 2^9 and 2^-17 < γ < 2^-15; the range used by the Grid Search is C = {2^6.5, 2^6.75, 2^7, 2^7.25, 2^7.5, 2^7.75, 2^8} and γ = {2^-14.5, 2^-14.75, 2^-15, 2^-15.25, 2^-15.5, 2^-15.75, 2^-16}. A comparison against the accuracy value for C = 2^7 and γ = 2^-15 is then made over this range. The accuracy values of classification with C = {2^6.50, 2^6.75, 2^7, 2^7.25, 2^7.5} and γ = 2^-15 (table 4) show that for C values smaller than 2^7 (2^6.75, 2^6.5), the bias increases, so more under-fitting occurs and the accuracy value in each classification trial tends to decrease. As table 4 also shows, for C values higher than 2^7 (2^7.25, 2^7.5), the accuracy value likewise decreases in each classification test, because the increase in variance for C > 2^7 causes over-fitting. The parameter combinations of C = 2^7 with γ = {2^-14.5, 2^-14.75, 2^-15, 2^-15.25, 2^-15.5} (table 5) similarly show decreased accuracy values compared with the combination C = 2^7, γ = 2^-15 in every classification test with different test data: values γ > 2^-15 (2^-14.5, 2^-14.75) show an increase in bias that leads to more misclassification due to under-fitting, while values γ < 2^-15 (2^-15.25, 2^-15.5) increase the variance, creating more misclassification due to over-fitting.
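The refined space can be expressed as follows; the commented line assumes the hypothetical mean_accuracy helper from the coarse-stage sketch above:

```python
# Refined search space around the coarse optimum C = 2^7, gamma = 2^-15.
fine_C = [2.0 ** e for e in (6.5, 6.75, 7.0, 7.25, 7.5, 7.75, 8.0)]
fine_gamma = [2.0 ** e for e in (-14.5, -14.75, -15.0, -15.25,
                                 -15.5, -15.75, -16.0)]
print(len(fine_C) * len(fine_gamma), "candidate combinations")

# The same 10-split scoring loop from the coarse stage can be reused:
# best = max((mean_accuracy(C, g), C, g)
#            for C in fine_C for g in fine_gamma)
```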
Table 4. Accuracy values for parameter values C = {2^6.50, 2^6.75, 2^7, 2^7.25, 2^7.5} and γ = 2^-15 (all columns use γ = 2^-15; cell values are accuracy)

| Test No | C=2^7 | C=2^6.50 | C=2^6.75 | C=2^7.25 | C=2^7.50 |
|---------|-------|----------|----------|----------|----------|
| 1  | 0.864 | 0.818 | 0.864 | 0.864 | 0.818 |
| 2  | 0.773 | 0.727 | 0.773 | 0.727 | 0.773 |
| 3  | 0.864 | 0.773 | 0.818 | 0.864 | 0.773 |
| 4  | 0.864 | 0.864 | 0.864 | 0.818 | 0.864 |
| 5  | 0.773 | 0.773 | 0.727 | 0.773 | 0.773 |
| 6  | 0.818 | 0.773 | 0.818 | 0.818 | 0.818 |
| 7  | 0.818 | 0.682 | 0.818 | 0.773 | 0.727 |
| 8  | 0.864 | 0.864 | 0.909 | 0.909 | 0.864 |
| 9  | 0.773 | 0.682 | 0.773 | 0.727 | 0.682 |
| 10 | 0.773 | 0.727 | 0.773 | 0.682 | 0.727 |
Table 5. Accuracy values for parameter values C = 2^7 and γ = {2^-14.5, 2^-14.75, 2^-15, 2^-15.25, 2^-15.5} (all columns use C = 2^7; cell values are accuracy)

| Test No | γ=2^-15 | γ=2^-14.5 | γ=2^-14.75 | γ=2^-15.25 | γ=2^-15.5 |
|---------|---------|-----------|------------|------------|-----------|
| 1  | 0.864 | 0.727 | 0.818 | 0.818 | 0.773 |
| 2  | 0.773 | 0.773 | 0.727 | 0.727 | 0.727 |
| 3  | 0.864 | 0.818 | 0.727 | 0.818 | 0.773 |
| 4  | 0.864 | 0.727 | 0.773 | 0.818 | 0.773 |
| 5  | 0.773 | 0.727 | 0.773 | 0.727 | 0.773 |
| 6  | 0.818 | 0.773 | 0.818 | 0.773 | 0.727 |
| 7  | 0.818 | 0.727 | 0.727 | 0.727 | 0.818 |
| 8  | 0.864 | 0.909 | 0.909 | 0.818 | 0.864 |
| 9  | 0.773 | 0.682 | 0.727 | 0.773 | 0.773 |
| 10 | 0.773 | 0.727 | 0.727 | 0.682 | 0.727 |
Optimization of the RBF kernel parameters has been carried out in this experiment to obtain the maximum accuracy value in the non-linear multi-class SVM classification method for recognizing images with geometric decorative motifs. Optimization with the Grid Search method and 10-fold Cross Validation with hold-out generates 10 test datasets and 10 training datasets from randomly selected feature vectors. With 10 tests for each combination of C and γ, each using different test datasets, the smallest range of C and γ combinations with low bias and low variance, producing the highest classification accuracy value, can be obtained.
The best result of this method of determining optimal parameter values, over the space C = {2^6.5, 2^6.75, 2^7, 2^7.25, 2^7.5, 2^7.75, 2^8} and γ = {2^-14.5, 2^-14.75, 2^-15, 2^-15.25, 2^-15.5, 2^-15.75, 2^-16}, is the parameter combination C = 2^7 and γ = 2^-15. It is obtained using the Grid Search testing method and Cross Validation with a hold-out process that uses about 30% of the records for the test dataset and the rest for the training dataset. The combination C = 2^7 and γ = 2^-15 is used to evaluate the accuracy of the optimal value determination method in this study, ensuring that there is no large difference in accuracy across different training and testing image datasets with different numbers of test records.
In the trial analysis, the use of parameter combinations in the ranges C = {2^6.5, 2^6.75, 2^7, 2^7.25, 2^7.5, 2^7.75, 2^8} and γ = {2^-14.5, 2^-14.75, 2^-15, 2^-15.25, 2^-15.5, 2^-15.75, 2^-16} shows no significant changes, and the best result remains the combination C = 2^7 and γ = 2^-15. The results of this study indicate that, to obtain high accuracy in the identification and/or recognition of traditional batik with textures possessing geometric decorative motifs, multi-scale patterns, and multi-color resolution, appropriate parameter values are needed to optimize the performance of the SVM-RBF kernel classification. These optimal parameter values and the smaller range of parameter values can serve as a reference for digital image recognition of textures with geometric decorative motifs using SVM-RBF kernel classification.
1. Hofmann, Martin, "Support Vector Machines: Kernels and the Kernel Trick", Bamberg University, 2006.
2. Hsu, Chih-Wei, Chang, Chih-Chung, and Lin, Chih-Jen, "A Practical Guide to Support Vector Classification", Department of Computer Science, National Taiwan University, 2010.
3. Renukadevi, N. T., and Thangaraj, P., "Performance Evaluation of SVM-RBF Kernel for Medical Image Classification", Global Journal of Computer Science and Technology Graphics & Vision, vol. 13, issue 4, Global Journals Inc., USA, 2013.
4. Rosales-Perez, Alejandro, Escalante, Hugo Jair, Gonzales, Jesus A., and Reyes-Garcia, Carlos A., "Bias and Variance Optimization for SVMs Model Selection", Proceedings of the Twenty-Sixth International Florida Artificial Intelligence Research Society Conference, 2013.
5. Budiman, F., Suhendra, A., Agushinta, D., and Tarigan, A., "Determination of SVM-RBF Kernel Space Parameter to Optimize Accuracy Value of Indonesian Batik Images Classification", Journal of Computer Science, 13(11):590-599, 2017.
6. Riyanto, Rahayu, Slamet, and Pamungkas, Wisnu, "Handbook of Indonesian Batik", The Institute for Research and Development of Handicraft and Batik Industries, Yogyakarta, 1997.
7. Tirta, Iwan, "BATIK: Sebuah Lakon", Gaya Favorit Press, Jakarta, 2009.
8. Budiman, F., Suhendra, A., Agushinta, D., and Tarigan, A., "Wavelet Decomposition Levels Analysis for Indonesia Traditional Batik Classification", Journal of Theoretical & Applied Information Technology, 92(2):389-394, 2016.
9. Virmani, Jitendra, Kumar, Vinod, Kalra, Naveen, and Khandelwal, Niranjan, "SVM-Based Characterization of Liver Ultrasound Images Using Wavelet Packet Texture Descriptors", Journal of Digital Imaging, 26:530-543, Springer, 2013.
10. Boser, B., Guyon, I., and Vapnik, V., "A Training Algorithm for Optimal Margin Classifiers", Fifth Annual Workshop on Computational Learning Theory, New York: ACM Press, 1992.
11. Tsamardinos, Ioannis, Rakhshani, Amin, and Lagani, Vincenzo, "Performance-Estimation Properties of Cross-Validation-Based Protocols with Simultaneous Hyper-Parameter Optimization", International Journal on Artificial Intelligence Tools, vol. XX, no. X:1-30, World Scientific Publishing Company, 2015.
12. Gaspar, Paulo, Carbonell, Jaime, and Oliveira, Jose Luis, "On the Parameter Optimization of Support Vector Machines for Binary Classification", Journal of Integrative Bioinformatics, 9(3):201, 2012.
13. Syarif, Iwan, Prugel-Bennett, Adam, and Wills, Gary, "SVM Parameter Optimization Using Grid Search and Genetic Algorithm to Improve Classification Performance", Telkomnika Journal, 14(4):1502-1509, 2016.