Document Type: Article
Faculty of IT & Computer Engineering, Urmia University of Technology, Urmia, Iran.
In most pattern recognition tasks, the training (source domain) and test (target domain) data are usually assumed to follow a similar distribution and share the same feature space. However, in many real-world applications, particularly in visual recognition, this assumption is frequently violated; this is known as the domain shift problem. Domain adaptation and transfer learning are promising techniques for learning an effective and robust classifier that tackles the shift problem. In this paper, we propose a novel domain adaptation scheme, entitled Joint Distribution Adaptation via Feature and Model Matching (JDAFMM), in which feature transformation and model matching are optimized jointly. This joint optimization yields a robust model with a feasible feature transformation and adapted model parameters. By introducing a regularizer on both the marginal and conditional distribution shifts across domains, we adapt to data drift as much as possible while minimizing the empirical risk and maximizing the consistency between the manifold and the prediction function. We conduct extensive experiments evaluating the proposed model against other machine learning and domain adaptation methods on three types of visual benchmark datasets. Our experiments illustrate that JDAFMM significantly outperforms baseline and state-of-the-art methods.
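To make the notion of marginal and conditional distribution shift concrete, the following is a minimal sketch (not the paper's actual formulation) of how such shifts are commonly measured empirically, using a linear-kernel Maximum Mean Discrepancy between domain means; the function names and the use of target pseudo-labels for the conditional term are illustrative assumptions.

```python
import numpy as np

def marginal_mmd(Xs, Xt):
    # Empirical MMD between marginal distributions (linear kernel):
    # squared Euclidean distance between the two domain means.
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def conditional_mmd(Xs, ys, Xt, yt_pseudo):
    # Sum of per-class mean discrepancies. True target labels are
    # unavailable, so pseudo-labels stand in for them (an assumption
    # typical of joint distribution adaptation methods).
    total = 0.0
    for c in np.unique(ys):
        Xt_c = Xt[yt_pseudo == c]
        if len(Xt_c) == 0:
            continue  # skip classes absent from the target pseudo-labels
        Xs_c = Xs[ys == c]
        total += float(np.sum((Xs_c.mean(axis=0) - Xt_c.mean(axis=0)) ** 2))
    return total
```

A joint-adaptation objective would then penalize a weighted sum of both terms alongside the empirical risk of the classifier, so that the learned feature transformation and model parameters reduce the cross-domain discrepancy while fitting the source labels.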