Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
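The key idea in the abstract is that every task shares one interface: the model consumes a task-prefixed string and emits a string. Below is a minimal sketch of that interface, assuming the Hugging Face `transformers` library is installed and using the publicly released `t5-small` checkpoint; the library and checkpoint are assumptions of this sketch, though the task prefixes follow the conventions described in the paper.

```python
# Minimal sketch of the text-to-text interface, assuming the Hugging Face
# `transformers` library and the publicly released `t5-small` checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as plain text with a task prefix; the model
# answers in plain text as well, whether the task is translation,
# summarization, or classification.
prompts = [
    "translate English to German: The house is wonderful.",
    "summarize: state-of-the-art results were achieved on many benchmarks "
    "covering summarization, question answering, and text classification.",
    "cola sentence: The course is jumping well.",  # acceptability judgment
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding for brevity; the decoding strategy is orthogonal
    # to the text-to-text framing itself.
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because classification targets are also strings (e.g. "acceptable" for the CoLA prompt), the same model, loss, and decoding loop serve every task with no task-specific heads.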
Colin Raffel
Noam Shazeer
Adam Roberts
Katherine Lee
Sharan Narang
Michael Matena
Yanqi Zhou
Wei Li
Peter J. Liu
Raffel et al. (2019) studied this question.
www.synapsesocial.com/papers/6984b6e33ee498a9db49a3e6 — DOI: https://doi.org/10.48550/arxiv.1910.10683