
Multiclass Transfer Learning from Unconstrained Priors

Transfer Learning (TL) aims to improve learning performance on a target problem using knowledge extracted from related source tasks. Labelled data are generally abundant in the source domain, and we call a model built while learning on one source task a "prior".

The vast majority of the TL methods proposed for visual recognition in recent years address object category detection and assume strong control over the priors. This is a strict condition that concretely limits the applicability of such approaches in many settings. Moreover, the lack of a multiclass formulation in most existing TL algorithms prevents their use on real object categorization problems.

The main contribution of our ICCV paper is a multiclass transfer learning algorithm that can take advantage of priors built over different features and with different learning methods than those used for the new target task. This is achieved by treating the priors as experts that evaluate the new incoming data and transfer their confidence outputs, which are used to augment the feature space of the target data.
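
As a rough illustration of the augmentation step (not code from the released package), the MATLAB sketch below assumes a hypothetical cell array priors of scoring functions, one per prior model and each returning one confidence value per sample, and an N-by-d target feature matrix X. In the actual algorithm each prior output feeds its own kernel in the MKL combination; the sketch only shows the plain concatenation idea.

    % Each prior scores the target samples; its confidence outputs become
    % extra feature dimensions appended to the original descriptors.
    scores = zeros(size(X,1), numel(priors));
    for j = 1:numel(priors)
        scores(:,j) = priors{j}(X);   % confidence of the j-th prior on each sample
    end
    Xaug = [X, scores];               % augmented feature space for the target task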

Multi Kernel Transfer Learning

We model our learning algorithm using the structural risk minimization principle, with a group-norm regularization term that tunes the level of sparsity over the prior models, avoiding transfer from sources that are not informative for a given target task. The learning process is thus defined by solving an optimization problem that decides both from where and how much to transfer, within a principled multiclass formulation. We show that it is possible to cast the problem within the Multi Kernel Learning (MKL) framework, and to solve it efficiently with off-the-shelf MKL solvers.
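
As a rough sketch in our own notation (see the paper for the exact statement), the training objective has the shape of a (2,p) group-norm regularized multiclass SVM. Assuming F feature maps \phi_j, covering the original target features plus one map per prior output, the problem reads

    \min_{w_1,\dots,w_F}\ \frac{\lambda}{2}\Big(\sum_{j=1}^{F}\|w_j\|_2^p\Big)^{2/p}
      \;+\; \frac{1}{N}\sum_{i=1}^{N}\ \max_{y\neq y_i}\Big|1-\sum_{j=1}^{F}\big\langle w_j,\ \phi_j(x_i,y_i)-\phi_j(x_i,y)\big\rangle\Big|_{+}

with 1 < p <= 2: values of p close to 1 push the solution to be sparse over the groups, i.e. over the priors, while p = 2 spreads the weight across all of them. This group-norm structure is what lets a single optimization decide both from where and how much to transfer.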

We build on recent work [F. Orabona et al., CVPR 2010] that solves the problem in the primal, resulting in a computationally efficient method that scales well with the number of priors. We call the proposed method Multi Kernel Transfer Learning (MKTL).

 

Source Code

The software for the MKTL algorithm described in our ICCV paper can be downloaded below. The demo runs the experiment on the Animals with Attributes dataset, as in Section 4.2 of the paper, and produces results for:

 

  • MKTL

  • No-Transfer baseline: standard supervised learning on the target task, SVM with RBF kernel.

  • Prior-Features baseline: the outputs of the prior models are used as feature descriptors for the target task, learning with a linear SVM (see the sketch after this list).
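
As an illustration of the Prior-Features baseline, and not the code shipped in the package, the following MATLAB sketch assumes a hypothetical cell array priors of scoring functions and uses the LIBSVM interface (svmtrain/svmpredict) for the linear SVM step:

    function acc = prior_features_baseline(priors, Xtr, ytr, Xte, yte)
    % Prior outputs as feature descriptors: F priors give F-dimensional samples.
    F = numel(priors);
    Ftr = zeros(size(Xtr,1), F);
    Fte = zeros(size(Xte,1), F);
    for j = 1:F
        Ftr(:,j) = priors{j}(Xtr);   % confidence of prior j on the training data
        Fte(:,j) = priors{j}(Xte);   % and on the test data
    end
    % Linear SVM on the prior-output descriptors ('-t 0' selects the linear
    % kernel; LIBSVM handles the multiclass case internally, one-vs-one).
    model = svmtrain(ytr, Ftr, '-t 0 -c 1');
    [pred, stats] = svmpredict(yte, Fte, model);  % stats(1) is the accuracy in %
    acc = stats(1);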

 

All scripts are implemented in MATLAB. The code has been tested in a Linux environment with MATLAB 7.8.0.347 (R2009a).

 

>> MKTLsoftware.tar.gz (12.2 MB)

 

To use this software, please cite the following paper: L. Jie*, T. Tommasi*, B. Caputo (* equal authorship), "Multiclass Transfer Learning from Unconstrained Priors", ICCV 2011.
