
Transfer Learning

The ability to learn from few examples is a hallmark of human intelligence. Our perceptual system is known to be able to learn to detect a new visual object category from a brief exposure to a single example (one-shot learning). Machine learning tools, in general, provide very few guarantees when small samples are considered: without additional information, learning from few examples reduces to an ill-posed optimization problem.

A possible solution is transfer learning: using knowledge learned from previous related tasks. The basic intuition is that, if a system has already learned k categories, learning the (k+1)-th should be easier, even from little information.

We propose a discriminative method for learning visual categories from few examples. The strategy consists in constraining the new learning model to stay close to a subset of the pre-trained models that encode prior knowledge. The algorithm automatically decides from where to transfer (which known categories to rely on), how much to transfer (the degree of adaptation), and whether it is worth transferring anything at all.
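The CVPR paper gives the full formulation; the fragment below is only a minimal MATLAB sketch of the underlying idea, not the released code. It assumes a linear least-squares model regularized towards a weighted combination of prior models, and it picks the transfer weights by a brute-force leave-one-out estimate over a hypothetical candidate set (no transfer, or a single prior at a time). All names (transfer_sketch, Wprior, lambda) are illustrative assumptions.

% Hedged sketch: adapt a least-squares model towards prior models.
% X      : n x d training features for the new category
% y      : n x 1 labels in {-1,+1}
% Wprior : d x k matrix whose columns are pre-trained (prior) models
% lambda : regularization parameter
function [w, beta] = transfer_sketch(X, y, Wprior, lambda)
    [n, d] = size(X);
    k = size(Wprior, 2);

    % Candidate transfer weights: no transfer, or one prior model at a time.
    candidates = [zeros(k,1), eye(k)];

    best_err = inf;
    for c = 1:size(candidates, 2)
        beta_c = candidates(:, c);
        w0 = Wprior * beta_c;              % combination of prior models

        % Leave-one-out error by explicit retraining (affordable for small n).
        err = 0;
        for i = 1:n
            idx = [1:i-1, i+1:n];
            w_i = (lambda*eye(d) + X(idx,:)'*X(idx,:)) \ ...
                  (lambda*w0 + X(idx,:)'*y(idx));
            err = err + (sign(X(i,:)*w_i) ~= y(i));
        end

        if err < best_err
            best_err = err;
            beta = beta_c;
        end
    end

    % Final model trained on all points with the selected transfer weights.
    w0 = Wprior * beta;
    w = (lambda*eye(d) + X'*X) \ (lambda*w0 + X'*y);
end

In the actual method the transfer weights are a real-valued vector over all prior models and are selected through a closed-form leave-one-out estimate; the retraining loop above is only meant to make the "from where / how much / whether to transfer" decision concrete.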

Experimental Setup

As research on knowledge transfer is still in its infancy, especially in object recognition, to our knowledge there is neither an official testbed database nor a standard experimental setup. In our CVPR paper we overcame this problem by proposing a reproducible setting that can serve in the future as a reference benchmark for transfer learning methods. We designed the experiments to test the behaviour of the algorithm in different scenarios:

  • Related prior knowledge;

  • Unrelated prior knowledge;

  • Mixed prior knowledge;

  • Increasing prior knowledge.

We considered a subset of the Caltech 256 database: it is rich in number of categories and it has a clear taxonomy, which helps in identifying "related" and "unrelated" classes. Note that subsets of the Caltech 256 were also used in other works on knowledge transfer in computer vision (L. Fei-Fei, ICCV 2003 and PAMI 2006; A. Zweig and D. Weinshall, ICCV 2007). Considering several cues is also important to show that the behaviour of the transfer learning method does not depend on the choice of features.

 

>> data.tar.gz (12.9 Mb) - features for the subset of the Caltech 256 database used in the CVPR experiments.

Source Code

The software for the Multi Model Knowledge Transfer method described in the CVPR paper can be downloaded below. All scripts are implemented in MATLAB. The code has been tested under Linux using MATLAB 7.1.0.183 (R14) Service Pack 3.

 

>> cross_val.tar.gz (203.2 Mb) - scripting code to run the initial cross validation and prepare the prior knowledge files.

>> KTsoftware.tar.gz (19.1 Kb) - source code to run the knowledge transfer experiments.

 

To use this software, please cite the following paper: T. Tommasi, F. Orabona, B. Caputo, "Safety in Numbers: Learning Categories from Few Examples with Multi Model Knowledge Transfer", CVPR 2010.
