Many visual datasets are traditionally used to analyze the performance of different learning techniques. The evaluation is usually done within each dataset, so it is questionable whether such results are a reliable indicator of true generalization ability. In
T. Tommasi, N. Quadrianto, B. Caputo, C.H. Lampert, ACCV 2012.
we proposed an algorithm to exploit existing data resources when learning a new multiclass problem. Our main idea is to identify an image representation that decomposes orthogonally into two subspaces: a part specific to each dataset, and a part generic to, and therefore shared between, all the considered source sets. This allows us to use the generic representation as unbiased reference knowledge for a novel classification task.
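The orthogonal split described above can be illustrated with a small NumPy sketch. This is not the learned MUST decomposition; it only shows, with a randomly chosen orthonormal basis and hypothetical dimensions, how a feature vector separates into a generic part and a dataset-specific part that are mutually orthogonal and sum back to the original.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_generic = 10, 4  # hypothetical feature and generic-subspace dimensions

# Orthonormal basis of R^d via QR; the first columns play the role of the
# generic (shared) subspace, the rest the dataset-specific one.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
U_gen, U_spec = Q[:, :d_generic], Q[:, d_generic:]

P_gen = U_gen @ U_gen.T     # projector onto the generic subspace
P_spec = U_spec @ U_spec.T  # projector onto the dataset-specific subspace

x = rng.standard_normal(d)          # a feature vector
x_gen, x_spec = P_gen @ x, P_spec @ x

# The two components are orthogonal and together reconstruct the feature.
assert abs(x_gen @ x_spec) < 1e-10
assert np.allclose(x_gen + x_spec, x)
```

In MUST, the corresponding subspaces are learned jointly from all source datasets rather than fixed a priori as here.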
By casting the method in the multi-view setting, we also make it possible to use different features for different databases. We call the algorithm MUST, Multitask Unaligned Shared knowledge Transfer. Through extensive experiments on five public datasets, we show that MUST consistently improves cross-dataset generalization performance.
>> ACCV12_data.zip (434.3 MB)
This compressed archive contains the GIST (*_gr.mat) and PHOG (*_phog.mat) features for each class of the datasets used in the paper's experiments. Further data can be downloaded directly from the URLs given in the paper's footnotes (Section 5.2).