Inductive transfer
From Wikipedia, the free encyclopedia
Inductive transfer (also called transfer learning, among other terms) is the retention and application of knowledge learned on one or more tasks to efficiently learn a new task. While all learning involves generalization across problem instances, transfer learning emphasizes the transfer of knowledge across domains, tasks, and distributions that are similar but not identical. For example, learning to recognize chairs might help in learning to recognize tables, and learning to play checkers might improve the learning of chess. While people are adept at inductive transfer, even across widely disparate domains, there is little associated computational learning theory and there are few systems that exhibit knowledge transfer.
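The following Python sketch illustrates one simple form of inductive transfer: a representation learned on a data-rich source task is reused as a fixed feature extractor for a related, data-poor target task. The synthetic datasets, model choices, and the helper function hidden_features are hypothetical and serve only as an illustration; they are not drawn from any particular system discussed here.

# Illustrative sketch only: the data and model choices below are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Source task: plenty of labelled data (e.g., recognizing chairs).
X_source = rng.randn(1000, 20)
y_source = (X_source[:, :5].sum(axis=1) > 0).astype(int)

# Target task: a related concept with far fewer labels (e.g., recognizing
# tables), sharing some of the same underlying features.
X_target = rng.randn(60, 20)
y_target = (X_target[:, :5].sum(axis=1) + 0.5 * X_target[:, 5] > 0).astype(int)

# 1. Learn a hidden representation on the source task.
source_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
source_net.fit(X_source, y_source)

# 2. Transfer: reuse the source network's hidden layer as a fixed feature
#    extractor for the target task, then fit only a small classifier on top.
def hidden_features(net, X):
    # First hidden layer activations (MLPClassifier's default activation is ReLU).
    W, b = net.coefs_[0], net.intercepts_[0]
    return np.maximum(X @ W + b, 0.0)

target_clf = LogisticRegression(max_iter=1000)
target_clf.fit(hidden_features(source_net, X_target), y_target)
print("Training accuracy on the target task using transferred features:",
      target_clf.score(hidden_features(source_net, X_target), y_target))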
At the Neural Information Processing Systems conference in 1995 (NIPS 1995), Danny Silver and Rich Caruana, along with co-organizers Jon Baxter, Tom Mitchell, Lorien Pratt, and Sebastian Thrun, led a successful two-day workshop on "Learning to Learn" that focused on the need for machine learning methods that retain and reuse learned knowledge. The motivation for the meeting was the recognition that machine learning systems would benefit from manipulating knowledge learned from related experience, and that this would enable them to move beyond task-specific tabula rasa systems. The workshop resulted in a series of articles published in special issues of Connection Science [CS 1996] and Machine Learning [vol. 28, 1997], and a book entitled "Learning to Learn" [Pratt and Thrun 1998].
Research on inductive transfer has continued under a variety of names: learning to learn, lifelong learning, knowledge transfer, transfer learning, multi-task learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, meta-learning, and incremental/cumulative learning. The recent burst of activity in this area is illustrated by research on multi-task learning with kernel methods and Bayesian networks, which has established new frameworks for capturing task relatedness to improve learning [Ando and Zhang 04, Bakker and Heskes 03, Jebara 04, Evgeniou and Pontil 04, Evgeniou, Micchelli and Pontil 05, Chapelle and Harchaoui 05].
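One way such frameworks capture task relatedness, in the spirit of regularized multi-task learning [Evgeniou and Pontil 04], is to model each task's weight vector as a shared component plus a task-specific offset, w_t = w_0 + v_t. The following Python sketch implements this idea with an augmented feature map and a single ridge regression; the synthetic data, regularization settings, and helper function multitask_features are assumptions made for the example rather than the published method.

# Illustrative sketch: synthetic tasks and settings are assumptions for the example.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(1)
n_tasks, n_per_task, d = 3, 40, 10

# Synthetic related tasks: a common weight vector plus small per-task deviations.
w_shared = rng.randn(d)
tasks = []
for t in range(n_tasks):
    w_t = w_shared + 0.3 * rng.randn(d)
    X = rng.randn(n_per_task, d)
    y = X @ w_t + 0.1 * rng.randn(n_per_task)
    tasks.append((X, y))

# Feature-map trick: represent each example as [X, 0, ..., X, ..., 0] so that a
# single Ridge fit learns (w_0, v_1, ..., v_T) jointly; the shared block couples
# the tasks and encodes their relatedness.
def multitask_features(X, task_id):
    blocks = [X] + [X if t == task_id else np.zeros_like(X) for t in range(n_tasks)]
    return np.hstack(blocks)

X_all = np.vstack([multitask_features(X, t) for t, (X, _) in enumerate(tasks)])
y_all = np.concatenate([y for _, y in tasks])

model = Ridge(alpha=1.0).fit(X_all, y_all)
w0 = model.coef_[:d]                      # shared component across tasks
v = model.coef_[d:].reshape(n_tasks, d)   # per-task offsets
print("Correlation between recovered and true shared weights:",
      np.corrcoef(w0, w_shared)[0, 1])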
At NIPS 2005, a follow-on workshop entitled "Inductive Transfer: 10 Years Later" was organized by Silver and Caruana, along with co-organizers Goekhan Bakir, Kristin Bennett, Massimiliano Pontil, Stuart Russell, and Prasad Tadepalli. This workshop examined the progress made in the intervening ten years, the questions and challenges that remain, and the opportunities for new applications of inductive transfer systems. The workshop organizers identified three major goals: (1) to summarize the work thus far in the area of inductive transfer so as to develop a taxonomy of research indicating open questions; (2) to share new theories, approaches, and algorithms regarding the accumulation and use of learned knowledge for the purpose of more effective and efficient learning; and (3) to discuss a more formal inductive transfer community (or special interest group) that might begin by offering a website, benchmarking data and methods, shared software, and links to various research programs and other web resources. An example is the Machine Life-Long Learning website at http://birdcage.acadiau.ca:8080/ml3/. Additional information can be found on the workshop website at http://iitrl.acadiau.ca/itws05/.
In 2005, the US Defense Advanced Research Projects Agency (DARPA) launched a multi-year research program in Transfer Learning.