SHOGUN  v1.1.0
CMKL Class Reference

Detailed Description

Multiple Kernel Learning.

A support vector machine based method for use with multiple kernels. In Multiple Kernel Learning (MKL), the kernel weights $\bf\beta$ are estimated during training in addition to the SVM coefficients $\bf\alpha$ and the bias term $b$. The resulting kernel method can be stated as

\[ f({\bf x})=\sum_{i=0}^{N-1} \alpha_i \sum_{j=0}^M \beta_j k_j({\bf x}, {\bf x_i})+b . \]

where $N$ is the number of training examples, $\alpha_i$ are the weights assigned to each training example, $\beta_j$ are the weights assigned to each sub-kernel, $k_j(x,x')$ are the sub-kernels, and $b$ is the bias.

The kernels have to be chosen a priori. In MKL, $\alpha_i$, $\beta$ and the bias $b$ are determined by solving the following optimization program

\begin{eqnarray*} \mbox{min} && \gamma-\sum_{i=1}^N\alpha_i\\ \mbox{w.r.t.} && \gamma\in R, \alpha\in R^N \nonumber\\ \mbox{s.t.} && {\bf 0}\leq\alpha\leq{\bf 1}C,\;\;\sum_{i=1}^N \alpha_i y_i=0 \nonumber\\ && \frac{1}{2}\sum_{i,j=1}^N \alpha_i \alpha_j y_i y_j k_k({\bf x}_i,{\bf x}_j)\leq \gamma,\;\; \forall k=1,\ldots,K\nonumber\\ \end{eqnarray*}

where $C$ is a pre-specified regularization parameter.

Within shogun this optimization problem is solved using semi-infinite programming (SIP). For 1-norm MKL, one of the two approaches described in the following reference is used:

Soeren Sonnenburg, Gunnar Raetsch, Christin Schaefer, and Bernhard Schoelkopf. Large Scale Multiple Kernel Learning. Journal of Machine Learning Research, 7:1531-1565, July 2006.

The first approach (also called the wrapper algorithm) wraps around a single-kernel SVM, alternately solving for $\alpha$ and $\beta$. It uses a traditional SVM to generate new violated constraints and thus only requires a single-kernel SVM; any of the SVMs contained in shogun can be used. In the MKL step either a linear program is solved via glpk or cplex, the weights are computed analytically, or a Newton step (for norms > 1) is performed.

The second approach, which performs interleaved optimization and is much faster but also more memory demanding, is integrated into the chunking-based SVMlight.

In addition, the sparsity of MKL can be controlled by the choice of the $L_p$-norm regularizing $\beta$, as described in

Marius Kloft, Ulf Brefeld, Soeren Sonnenburg, and Alexander Zien. Efficient and accurate lp-norm multiple kernel learning. In Advances in Neural Information Processing Systems 21. MIT Press, Cambridge, MA, 2009.

An alternative way to control sparsity is elastic-net regularization, which can be formulated as the following optimization problem:

\begin{eqnarray*} \mbox{min} && C\sum_{i=1}^N\ell\left(\sum_{k=1}^Kf_k(x_i)+b,y_i\right)+(1-\lambda)\left(\sum_{k=1}^K\|f_k\|_{\mathcal{H}_k}\right)^2+\lambda\sum_{k=1}^K\|f_k\|_{\mathcal{H}_k}^2\\ \mbox{w.r.t.} && f_1\in\mathcal{H}_1,f_2\in\mathcal{H}_2,\ldots,f_K\in\mathcal{H}_K,\,b\in R \nonumber\\ \end{eqnarray*}

where $\ell$ is a loss function. Here $\lambda$ controls the trade-off between the two regularization terms. $\lambda=0$ corresponds to $L_1$-MKL, whereas $\lambda=1$ corresponds to the uniform-weighted combination of kernels ( $L_\infty$-MKL). This approach was studied by Shawe-Taylor (2008) "Kernel Learning for Novelty Detection" (NIPS MKL Workshop 2008) and Tomioka & Suzuki (2009) "Sparsity-accuracy trade-off in MKL" (NIPS MKL Workshop 2009).

Definition at line 93 of file MKL.h.
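
The following is a minimal usage sketch, not a verbatim example from the shogun distribution: it assumes the CMKLClassification subclass listed below, a CCombinedKernel holding the sub-kernels, and auxiliary classes (CSimpleFeatures, CGaussianKernel, CLibSVM, CLabels) whose exact constructors may differ in your shogun version; train_matrix and train_labels are placeholder data objects. Only the CMKL/CSVM/CKernelMachine/CMachine methods documented on this page are essential.

  // build features and two-class labels (placeholder data objects)
  CSimpleFeatures<float64_t>* feats = new CSimpleFeatures<float64_t>(train_matrix);
  CLabels* labels = new CLabels(train_labels);

  // combine several sub-kernels k_j; MKL learns their weights beta_j
  CCombinedKernel* kernel = new CCombinedKernel();
  kernel->append_kernel(new CGaussianKernel(10, 0.5));
  kernel->append_kernel(new CGaussianKernel(10, 2.0));
  kernel->init(feats, feats);

  CMKLClassification* mkl = new CMKLClassification();
  mkl->set_constraint_generator(new CLibSVM());     // single-kernel SVM used in the SIP/wrapper step
  mkl->set_kernel(kernel);
  mkl->set_labels(labels);
  mkl->set_C(1.0, 1.0);                             // SVM regularization
  mkl->set_mkl_norm(1.0);                           // 1-norm MKL: sparse kernel weights
  mkl->set_mkl_epsilon(1e-4);                       // optimization accuracy for the kernel weights
  mkl->set_interleaved_optimization_enabled(true);  // fast SVMlight-based interleaved mode
  mkl->train();
  CLabels* out = mkl->apply();                      // predict on the training features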

Inheritance diagram for CMKL (graph not shown): CMKL derives from CSVM, which in turn derives from CKernelMachine, CMachine, and CSGObject.

Public Member Functions

 CMKL (CSVM *s=NULL)
virtual ~CMKL ()
void set_constraint_generator (CSVM *s)
void set_svm (CSVM *s)
CSVM * get_svm ()
void set_C_mkl (float64_t C)
void set_mkl_norm (float64_t norm)
void set_elasticnet_lambda (float64_t elasticnet_lambda)
void set_mkl_block_norm (float64_t q)
void set_interleaved_optimization_enabled (bool enable)
bool get_interleaved_optimization_enabled ()
float64_t compute_mkl_primal_objective ()
virtual float64_t compute_mkl_dual_objective ()
float64_t compute_elasticnet_dual_objective ()
void set_mkl_epsilon (float64_t eps)
float64_t get_mkl_epsilon ()
int32_t get_mkl_iterations ()
virtual bool perform_mkl_step (const float64_t *sumw, float64_t suma)
virtual float64_t compute_sum_alpha ()=0
virtual void compute_sum_beta (float64_t *sumw)
virtual const char * get_name () const
- Public Member Functions inherited from CSVM
 CSVM (int32_t num_sv=0)
 CSVM (float64_t C, CKernel *k, CLabels *lab)
virtual ~CSVM ()
void set_defaults (int32_t num_sv=0)
virtual SGVector< float64_t > get_linear_term ()
virtual void set_linear_term (SGVector< float64_t > linear_term)
bool load (FILE *svm_file)
bool save (FILE *svm_file)
void set_nu (float64_t nue)
void set_C (float64_t c_neg, float64_t c_pos)
void set_epsilon (float64_t eps)
void set_tube_epsilon (float64_t eps)
float64_t get_tube_epsilon ()
void set_qpsize (int32_t qps)
float64_t get_epsilon ()
float64_t get_nu ()
float64_t get_C1 ()
float64_t get_C2 ()
int32_t get_qpsize ()
void set_shrinking_enabled (bool enable)
bool get_shrinking_enabled ()
float64_t compute_svm_dual_objective ()
float64_t compute_svm_primal_objective ()
void set_objective (float64_t v)
float64_t get_objective ()
void set_callback_function (CMKL *m, bool(*cb)(CMKL *mkl, const float64_t *sumw, const float64_t suma))
- Public Member Functions inherited from CKernelMachine
 CKernelMachine ()
virtual ~CKernelMachine ()
void set_kernel (CKernel *k)
CKernel * get_kernel ()
void set_batch_computation_enabled (bool enable)
bool get_batch_computation_enabled ()
void set_linadd_enabled (bool enable)
bool get_linadd_enabled ()
void set_bias_enabled (bool enable_bias)
bool get_bias_enabled ()
float64_t get_bias ()
void set_bias (float64_t bias)
int32_t get_support_vector (int32_t idx)
float64_t get_alpha (int32_t idx)
bool set_support_vector (int32_t idx, int32_t val)
bool set_alpha (int32_t idx, float64_t val)
int32_t get_num_support_vectors ()
void set_alphas (SGVector< float64_t > alphas)
void set_support_vectors (SGVector< int32_t > svs)
SGVector< int32_t > get_support_vectors ()
SGVector< float64_t > get_alphas ()
bool create_new_model (int32_t num)
bool init_kernel_optimization ()
virtual CLabels * apply ()
virtual CLabels * apply (CFeatures *data)
virtual float64_t apply (int32_t num)
- Public Member Functions inherited from CMachine
 CMachine ()
virtual ~CMachine ()
virtual bool train (CFeatures *data=NULL)
virtual void set_labels (CLabels *lab)
virtual CLabels * get_labels ()
virtual float64_t get_label (int32_t i)
void set_max_train_time (float64_t t)
float64_t get_max_train_time ()
virtual EClassifierType get_classifier_type ()
void set_solver_type (ESolverType st)
ESolverType get_solver_type ()
virtual void set_store_model_features (bool store_model)
- Public Member Functions inherited from CSGObject
 CSGObject ()
 CSGObject (const CSGObject &orig)
virtual ~CSGObject ()
virtual bool is_generic (EPrimitiveType *generic) const
template<class T >
void set_generic ()
void unset_generic ()
virtual void print_serializable (const char *prefix="")
virtual bool save_serializable (CSerializableFile *file, const char *prefix="")
virtual bool load_serializable (CSerializableFile *file, const char *prefix="")
void set_global_io (SGIO *io)
SGIO * get_global_io ()
void set_global_parallel (Parallel *parallel)
Parallel * get_global_parallel ()
void set_global_version (Version *version)
Version * get_global_version ()
SGVector< char * > get_modelsel_names ()
char * get_modsel_param_descr (const char *param_name)
index_t get_modsel_param_index (const char *param_name)

Static Public Member Functions

static bool perform_mkl_step_helper (CMKL *mkl, const float64_t *sumw, const float64_t suma)

Protected Member Functions

virtual bool train_machine (CFeatures *data=NULL)
virtual void init_training ()=0
void perform_mkl_step (float64_t *beta, float64_t *old_beta, int num_kernels, int32_t *label, int32_t *active2dnum, float64_t *a, float64_t *lin, float64_t *sumw, int32_t &inner_iters)
float64_t compute_optimal_betas_via_cplex (float64_t *beta, const float64_t *old_beta, int32_t num_kernels, const float64_t *sumw, float64_t suma, int32_t &inner_iters)
float64_t compute_optimal_betas_via_glpk (float64_t *beta, const float64_t *old_beta, int num_kernels, const float64_t *sumw, float64_t suma, int32_t &inner_iters)
float64_t compute_optimal_betas_elasticnet (float64_t *beta, const float64_t *old_beta, const int32_t num_kernels, const float64_t *sumw, const float64_t suma, const float64_t mkl_objective)
void elasticnet_transform (float64_t *beta, float64_t lmd, int32_t len)
void elasticnet_dual (float64_t *ff, float64_t *gg, float64_t *hh, const float64_t &del, const float64_t *nm, int32_t len, const float64_t &lambda)
float64_t compute_optimal_betas_directly (float64_t *beta, const float64_t *old_beta, const int32_t num_kernels, const float64_t *sumw, const float64_t suma, const float64_t mkl_objective)
float64_t compute_optimal_betas_block_norm (float64_t *beta, const float64_t *old_beta, const int32_t num_kernels, const float64_t *sumw, const float64_t suma, const float64_t mkl_objective)
float64_t compute_optimal_betas_newton (float64_t *beta, const float64_t *old_beta, int32_t num_kernels, const float64_t *sumw, float64_t suma, float64_t mkl_objective)
virtual bool converged ()
void init_solver ()
bool init_cplex ()
void set_qnorm_constraints (float64_t *beta, int32_t num_kernels)
bool cleanup_cplex ()
bool init_glpk ()
bool cleanup_glpk ()
bool check_lpx_status (LPX *lp)
- Protected Member Functions inherited from CSVM
virtual float64_t * get_linear_term_array ()
- Protected Member Functions inherited from CKernelMachine
virtual void store_model_features ()

Protected Attributes

CSVM * svm
float64_t C_mkl
float64_t mkl_norm
float64_t ent_lambda
float64_t mkl_block_norm
float64_t * beta_local
int32_t mkl_iterations
float64_t mkl_epsilon
bool interleaved_optimization
float64_t * W
float64_t w_gap
float64_t rho
CTime training_time_clock
CPXENVptr env
CPXLPptr lp_cplex
LPX * lp_glpk
bool lp_initialized
- Protected Attributes inherited from CSVM
SGVector< float64_t > m_linear_term
bool svm_loaded
float64_t epsilon
float64_t tube_epsilon
float64_t nu
float64_t C1
float64_t C2
float64_t objective
int32_t qpsize
bool use_shrinking
bool(* callback )(CMKL *mkl, const float64_t *sumw, const float64_t suma)
CMKL * mkl
- Protected Attributes inherited from CKernelMachine
CKernel * kernel
bool use_batch_computation
bool use_linadd
bool use_bias
float64_t m_bias
SGVector< float64_t > m_alpha
SGVector< int32_t > m_svs
- Protected Attributes inherited from CMachine
float64_t max_train_time
CLabels * labels
ESolverType solver_type
bool m_store_model_features

Additional Inherited Members

- Public Attributes inherited from CSGObject
SGIO * io
Parallel * parallel
Version * version
Parameter * m_parameters
Parameter * m_model_selection_parameters

Constructor & Destructor Documentation

CMKL ( CSVM *  s = NULL)

Constructor

Parameters
s: SVM to use as constraint generator in MKL SIP

Definition at line 21 of file MKL.cpp.

~CMKL ( )
virtual

Destructor

Definition at line 40 of file MKL.cpp.

Member Function Documentation

bool check_lpx_status ( LPX *  lp)
protected

check glpk error status

Returns
if in good status

Definition at line 175 of file MKL.cpp.

bool cleanup_cplex ( )
protected

cleanup cplex

Returns
if cleanup was successful

Definition at line 119 of file MKL.cpp.

bool cleanup_glpk ( )
protected

cleanup glpk

Returns
if cleanup was successful

Definition at line 166 of file MKL.cpp.

float64_t compute_elasticnet_dual_objective ( )

compute ElasticnetMKL dual objective

Returns
computed dual objective

Definition at line 581 of file MKL.cpp.

float64_t compute_mkl_dual_objective ( )
virtual

compute mkl dual objective

Returns
computed dual objective

Reimplemented in CMKLRegression.

Definition at line 1513 of file MKL.cpp.

float64_t compute_mkl_primal_objective ( )

compute mkl primal objective

Returns
computed mkl primal objective

Definition at line 185 of file MKL.h.
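
After training, the two objectives above can be used to check how well MKL has converged. The following is a sketch only (it assumes a trained CMKL-derived object named mkl; std::fabs, std::max and printf require <cmath>, <algorithm> and <cstdio> and are used purely for illustration):

  // relative duality gap of the MKL problem after training
  float64_t primal = mkl->compute_mkl_primal_objective();
  float64_t dual   = mkl->compute_mkl_dual_objective();
  float64_t gap    = std::fabs(primal - dual) / std::max(std::fabs(primal), 1.0);
  printf("mkl iterations=%d, relative gap=%g\n", mkl->get_mkl_iterations(), gap);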

float64_t compute_optimal_betas_block_norm ( float64_t *  beta,
const float64_t *  old_beta,
const int32_t  num_kernels,
const float64_t *  sumw,
const float64_t  suma,
const float64_t  mkl_objective 
)
protected

given the alphas, compute the corresponding optimal betas

Parameters
beta: new betas (kernel weights)
old_beta: old betas (previous kernel weights)
num_kernels: number of kernels
sumw: 1/2*alpha'*K_j*alpha for each kernel j
suma: (sum over alphas)
mkl_objective: the current mkl objective
Returns
new objective value

Definition at line 656 of file MKL.cpp.

float64_t compute_optimal_betas_directly ( float64_t *  beta,
const float64_t *  old_beta,
const int32_t  num_kernels,
const float64_t *  sumw,
const float64_t  suma,
const float64_t  mkl_objective 
)
protected

given the alphas, compute the corresponding optimal betas

Parameters
beta: new betas (kernel weights)
old_beta: old betas (previous kernel weights)
num_kernels: number of kernels
sumw: 1/2*alpha'*K_j*alpha for each kernel j
suma: (sum over alphas)
mkl_objective: the current mkl objective
Returns
new objective value

Definition at line 692 of file MKL.cpp.

float64_t compute_optimal_betas_elasticnet ( float64_t *  beta,
const float64_t *  old_beta,
const int32_t  num_kernels,
const float64_t *  sumw,
const float64_t  suma,
const float64_t  mkl_objective 
)
protected

given the alphas, compute the corresponding optimal betas

Parameters
beta: new betas (kernel weights)
old_beta: old betas (previous kernel weights)
num_kernels: number of kernels
sumw: 1/2*alpha'*K_j*alpha for each kernel j
suma: (sum over alphas)
mkl_objective: the current mkl objective
Returns
new objective value

Definition at line 462 of file MKL.cpp.

float64_t compute_optimal_betas_newton ( float64_t *  beta,
const float64_t *  old_beta,
int32_t  num_kernels,
const float64_t *  sumw,
float64_t  suma,
float64_t  mkl_objective 
)
protected

given the alphas, compute the corresponding optimal betas

Parameters
beta: new betas (kernel weights)
old_beta: old betas (previous kernel weights)
num_kernels: number of kernels
sumw: 1/2*alpha'*K_j*alpha for each kernel j
suma: (sum over alphas)
mkl_objective: the current mkl objective
Returns
new objective value

Definition at line 781 of file MKL.cpp.

float64_t compute_optimal_betas_via_cplex ( float64_t *  beta,
const float64_t *  old_beta,
int32_t  num_kernels,
const float64_t *  sumw,
float64_t  suma,
int32_t &  inner_iters 
)
protected

given the alphas, compute the corresponding optimal betas using a lp for 1-norm mkl, a qcqp for 2-norm mkl and an iterated qcqp for general q-norm mkl.

Parameters
beta: new betas (kernel weights)
old_beta: old betas (previous kernel weights)
num_kernels: number of kernels
sumw: 1/2*alpha'*K_j*alpha for each kernel j
suma: (sum over alphas)
inner_iters: number of internal iterations (for statistics)
Returns
new objective value

Definition at line 973 of file MKL.cpp.

float64_t compute_optimal_betas_via_glpk ( float64_t *  beta,
const float64_t *  old_beta,
int  num_kernels,
const float64_t *  sumw,
float64_t  suma,
int32_t &  inner_iters 
)
protected

given the alphas, compute the corresponding optimal betas using a lp for 1-norm mkl

Parameters
beta: new betas (kernel weights)
old_beta: old betas (previous kernel weights)
num_kernels: number of kernels
sumw: 1/2*alpha'*K_j*alpha for each kernel j
suma: (sum over alphas)
inner_iters: number of internal iterations (for statistics)
Returns
new objective value

Definition at line 1316 of file MKL.cpp.

virtual float64_t compute_sum_alpha ( )
pure virtual

compute the beta-independent term of the objective, e.g., sum_i alpha_i in the 2-class MKL case

Implemented in CMKLClassification, CMKLOneClass, and CMKLRegression.

void compute_sum_beta ( float64_t *  sumw)
virtual

compute 1/2*alpha'*K_j*alpha for each kernel j (beta dependent term from objective)

Parameters
sumw: vector of size num_kernels to hold the result

Definition at line 1470 of file MKL.cpp.
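
A hypothetical sketch of how this beta-dependent term and the beta-independent compute_sum_alpha() fit together (in normal use this loop runs inside train(); it assumes a CMKL-derived object mkl whose CCombinedKernel reports the number of sub-kernels via get_num_subkernels()):

  int32_t num_kernels = kernel->get_num_subkernels();   // one entry per sub-kernel k_j
  float64_t* sumw = new float64_t[num_kernels];
  mkl->compute_sum_beta(sumw);                           // sumw[j] = 1/2*alpha'*K_j*alpha
  float64_t suma = mkl->compute_sum_alpha();             // beta-independent term
  mkl->perform_mkl_step(sumw, suma);                     // recompute kernel weights beta
  delete[] sumw;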

virtual bool converged ( )
protectedvirtual

check if mkl converged, i.e. 'gap' is below epsilon

Returns
whether mkl converged

Definition at line 402 of file MKL.h.

void elasticnet_dual ( float64_t *  ff,
float64_t *  gg,
float64_t *  hh,
const float64_t &  del,
const float64_t *  nm,
int32_t  len,
const float64_t &  lambda 
)
protected

helper function to compute the elastic-net objective

Definition at line 554 of file MKL.cpp.

void elasticnet_transform ( float64_t *  beta,
float64_t  lmd,
int32_t  len 
)
protected

helper function to compute the elastic-net sub-kernel weights

Definition at line 343 of file MKL.h.

bool get_interleaved_optimization_enabled ( )

get state of optimization (interleaved or wrapper)

Returns
true if interleaved optimization is used; wrapper otherwise

Definition at line 176 of file MKL.h.

float64_t get_mkl_epsilon ( )

get mkl epsilon for weights (optimization accuracy for kernel weights)

Returns
epsilon for weights

Definition at line 213 of file MKL.h.

int32_t get_mkl_iterations ( )

get number of MKL iterations

Returns
mkl_iterations

Definition at line 219 of file MKL.h.

virtual const char* get_name ( ) const
virtual
Returns
object name

Reimplemented from CSVM.

Definition at line 258 of file MKL.h.

CSVM* get_svm ( )

get SVM that is used as constraint generator in MKL SIP

Returns
svm

Definition at line 130 of file MKL.h.

bool init_cplex ( )
protected

init cplex

Returns
if init was successful

Definition at line 70 of file MKL.cpp.

bool init_glpk ( )
protected

init glpk

Returns
if init was successful

Definition at line 155 of file MKL.cpp.

void init_solver ( )
protected

initialize solver such as glpk or cplex

Definition at line 52 of file MKL.cpp.

virtual void init_training ( )
protectedpure virtual

check run before starting training (e.g. to check whether the labeling is a two-class labeling in the classification case)

Implemented in CMKLClassification, CMKLOneClass, and CMKLRegression.

bool perform_mkl_step ( const float64_t *  sumw,
float64_t  suma 
)
virtual

perform single mkl iteration

given the sum of alphas, the objectives for the current alphas for each kernel, and the current kernel weighting, compute the corresponding optimal kernel weighting (all via get/set_subkernel_weights in CCombinedKernel)

Parameters
sumw: vector of 1/2*alpha'*K_j*alpha for each kernel j
suma: scalar sum_i alpha_i etc.

Definition at line 395 of file MKL.cpp.

void perform_mkl_step ( float64_t *  beta,
float64_t *  old_beta,
int  num_kernels,
int32_t *  label,
int32_t *  active2dnum,
float64_t *  a,
float64_t *  lin,
float64_t *  sumw,
int32_t &  inner_iters 
)
protected

perform single mkl iteration

given the alphas, compute the corresponding optimal betas

Parameters
beta: new betas (kernel weights)
old_beta: old betas (previous kernel weights)
num_kernels: number of kernels
label: (from svmlight label)
active2dnum: (from svmlight active2dnum)
a: (from svmlight alphas)
lin: (from svmlight linear components)
sumw: 1/2*alpha'*K_j*alpha for each kernel j
inner_iters: number of required internal iterations
static bool perform_mkl_step_helper ( CMKL *  mkl,
const float64_t *  sumw,
const float64_t  suma 
)
static

callback helper function calling perform_mkl_step

Parameters
mkl: MKL object
sumw: vector of 1/2*alpha'*K_j*alpha for each kernel j
suma: scalar sum_i alpha_i etc.

Definition at line 239 of file MKL.h.
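
This static helper has exactly the callback type expected by CSVM::set_callback_function (listed above among the inherited CSVM members). The following is a sketch only, since CMKL installs this callback itself when training; it merely illustrates the relationship between the two:

  CSVM* svm = mkl->get_svm();    // constraint-generating SVM
  svm->set_callback_function(mkl, CMKL::perform_mkl_step_helper);
  // while the SVM trains it can now hand (sumw, suma) back to MKL,
  // which responds by updating the kernel weights beta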

void set_C_mkl ( float64_t  C)

set C mkl

Parameters
C: new C_mkl

Definition at line 140 of file MKL.h.

void set_constraint_generator ( CSVM *  s)

SVM to use as constraint generator in MKL SIP

Parameters
s: svm

Definition at line 110 of file MKL.h.

void set_elasticnet_lambda ( float64_t  elasticnet_lambda)

set elasticnet lambda

Parameters
elasticnet_lambda: new elastic net lambda (must be 0<=lambda<=1); lambda=0: L1-MKL, lambda=1: Linfinity-MKL

Definition at line 374 of file MKL.cpp.

void set_interleaved_optimization_enabled ( bool  enable)

set state of optimization (interleaved or wrapper)

Parameters
enable: if true interleaved optimization is used; wrapper otherwise

Definition at line 167 of file MKL.h.

void set_mkl_block_norm ( float64_t  q)

set block norm q (used in block norm mkl)

Parameters
q: mixed norm (1<=q<=inf)

Definition at line 387 of file MKL.cpp.

void set_mkl_epsilon ( float64_t  eps)

set mkl epsilon (optimization accuracy for kernel weights)

Parameters
eps: new weight_epsilon

Definition at line 207 of file MKL.h.

void set_mkl_norm ( float64_t  norm)

set mkl norm

Parameters
norm: new mkl norm (must be greater than or equal to 1)

Definition at line 365 of file MKL.cpp.
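
The three ways of controlling sparsity described in the class overview map onto the setters above. A sketch with illustrative values (normally only one of them is chosen per training run):

  mkl->set_mkl_norm(1.0);           // L_1-norm MKL: sparse kernel weights
  mkl->set_mkl_norm(2.0);           // L_p-norm MKL with p>1: non-sparse weights (Newton step)
  mkl->set_elasticnet_lambda(0.5);  // elastic net: interpolates between L_1 (lambda=0) and L_inf (lambda=1)
  mkl->set_mkl_block_norm(1.5);     // block-norm MKL with mixed norm 1<=q<=inf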

void set_qnorm_constraints ( float64_t *  beta,
int32_t  num_kernels 
)
protected

set qnorm mkl constraints

Definition at line 1564 of file MKL.cpp.

void set_svm ( CSVM *  s)

SVM to use as constraint generator in MKL SIP

Parameters
s: svm

Definition at line 119 of file MKL.h.

bool train_machine ( CFeatures *  data = NULL)
protectedvirtual

train MKL classifier

Parameters
data: training data (parameter can be avoided if distance or kernel-based classifiers are used and distance/kernels are initialized with train data)
Returns
whether training was successful

Reimplemented from CMachine.

Definition at line 193 of file MKL.cpp.
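
As the note on the data parameter says, the features can either be passed in explicitly or the kernel can be initialized with them beforehand. A sketch (feats is a hypothetical training feature object; training is started through the public train() inherited from CMachine):

  kernel->init(feats, feats);   // kernel already initialized with the training data
  mkl->train();                 // no data argument needed
  // or, equivalently, let the machine initialize the kernel itself:
  mkl->train(feats);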

Member Data Documentation

float64_t* beta_local
protected

sub-kernel weights on the L1-term of ElasticnetMKL

Definition at line 466 of file MKL.h.

float64_t C_mkl
protected

C_mkl

Definition at line 451 of file MKL.h.

float64_t ent_lambda
protected

Sparsity trade-off parameter used in ElasticnetMKL; must be 0<=lambda<=1; lambda=0: L1-MKL, lambda=1: Linfinity-MKL

Definition at line 459 of file MKL.h.

CPXENVptr env
protected

env

Definition at line 487 of file MKL.h.

bool interleaved_optimization
protected

whether to use mkl wrapper or interleaved opt.

Definition at line 472 of file MKL.h.

CPXLPptr lp_cplex
protected

lp

Definition at line 489 of file MKL.h.

LPX* lp_glpk
protected

lp

Definition at line 494 of file MKL.h.

bool lp_initialized
protected

if lp is initialized

Definition at line 497 of file MKL.h.

float64_t mkl_block_norm
protected

Sparsity trade-off parameter used in block norm MKL; should be 1 <= mkl_block_norm <= inf

Definition at line 463 of file MKL.h.

float64_t mkl_epsilon
protected

mkl_epsilon for multiple kernel learning

Definition at line 470 of file MKL.h.

int32_t mkl_iterations
protected

number of mkl steps

Definition at line 468 of file MKL.h.

float64_t mkl_norm
protected

norm used in mkl must be > 0

Definition at line 453 of file MKL.h.

float64_t rho
protected

objective after mkl iterations

Definition at line 480 of file MKL.h.

CSVM* svm
protected

wrapper SVM

Definition at line 449 of file MKL.h.

CTime training_time_clock
protected

measures training time for use with get_max_train_time()

Definition at line 483 of file MKL.h.

float64_t* W
protected

partial objectives (one per kernel)

Definition at line 475 of file MKL.h.

float64_t w_gap
protected

gap between iterations

Definition at line 478 of file MKL.h.


The documentation for this class was generated from the following files:

MKL.h
MKL.cpp
SHOGUN Machine Learning Toolbox - Documentation