Pre-Trained Models: Past, Present and Future

Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI). Owing to sophisticated pre-training objectives and huge model parameters, large-scale PTMs can effectively capture knowledge from massive labeled and unlabeled data. By storing knowledge into huge parameters and fine-tuning on specific tasks, the rich knowledge implicitly encoded in huge parameters can benefit a variety of downstream tasks, which has been extensively demonstrated via experimental verification and empirical analysis. It is now the consensus of the AI community to adopt PTMs as backbone for downstream tasks rather than learning models from scratch. In this paper, we take a deep look into the history of pre-training, especially its special relation with transfer learning and self-supervised learning, to reveal the crucial position of PTMs in the AI development spectrum. Further, we comprehensively review the latest breakthroughs of PTMs. These breakthroughs are driven by the surge of computational power and the increasing availability of data, towards four important directions: designing effective architectures, utilizing rich contexts, improving computational efficiency, and conducting interpretation and theoretical analysis. Finally, we discuss a series of open problems and research directions of PTMs, and hope our view can inspire and advance the future study of PTMs.
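
The workflow at the center of the abstract, taking a pre-trained model as the backbone and fine-tuning it on a downstream task instead of training from scratch, can be illustrated with a minimal sketch. This code is not from the paper; it assumes the Hugging Face transformers library and PyTorch, and uses BERT on a toy binary sentence-classification task purely as a hypothetical example.

    # Minimal pre-train-then-fine-tune sketch (illustrative only, not from the paper).
    # Assumes: pip install torch transformers
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Load a pre-trained backbone (BERT) with a freshly initialized classification head.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    # A toy labeled downstream dataset (binary sentiment), used here only for illustration.
    texts = ["a great movie", "a boring movie"]
    labels = torch.tensor([1, 0])
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    # Fine-tune: a gradient step updates the pre-trained parameters on the new task.
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()

In practice fine-tuning runs for several epochs over a real dataset; the point is only that the backbone's parameters start from pre-training rather than random initialization.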

Bibliographic Details
Published in: arXiv.org, 2021-08
Main Authors: Xu, Han; Zhang, Zhengyan; Ding, Ning; Gu, Yuxian; Liu, Xiao; Huo, Yuqi; Qiu, Jiezhong; Yao, Yuan; Zhang, Ao; Zhang, Liang; Han, Wentao; Huang, Minlie; Qin, Jin; Lan, Yanyan; Liu, Yang; Liu, Zhiyuan; Lu, Zhiwu; Qiu, Xipeng; Song, Ruihua; Tang, Jie; Wen, Ji-Rong; Yuan, Jinhui; Zhao, Wayne Xin; Zhu, Jun
Format: Article
Language: English
Subjects: Artificial intelligence; Empirical analysis; Mathematical models; Parameters; Supervised learning; Training
EISSN: 2331-8422
Published: Ithaca: Cornell University Library, arXiv.org
Source: Publicly Available Content Database
Online Access: https://www.proquest.com/docview/2541573609