
Few-shot Learning with Prompting Methods

Bibliographic Details
Main Authors: Bahrami, Morteza, Mansoorizadeh, Muharram, Khotanlou, Hassan
Format: Conference Proceeding
Language: English
Description
Summary: Today, labeled data is essential in natural language processing, yet obtaining an adequate amount of it is challenging. For many tasks the required training data is hard to collect. In machine translation, for example, acceptable performance requires a large amount of data in the target language, which we may be unable to gather; hence the need for few-shot learning. Recently, a method called prompting has emerged, in which a text input is converted, via a fixed template, into text with a new structure that contains a blank space. Given the prompted text, a pre-trained language model fills the blank with the best word. Prompting can help with few-shot learning, and even with zero-shot learning, where no labeled data is available at all. Recent works combine large language models such as GPT-2 and GPT-3 with prompting to perform tasks such as machine translation without any labeled training data, but models with such a massive number of parameters require powerful hardware. Pattern-Exploiting Training (PET) and iterative Pattern-Exploiting Training (iPET) were introduced to perform few-shot learning using prompting with smaller pre-trained language models such as BERT and RoBERTa; for example, on the Yahoo text classification dataset, iPET with RoBERTa reaches 70% accuracy using only ten labeled examples. This paper reviews research on few-shot learning with a new paradigm in natural language processing, which we dub prompt-based learning or, in short, prompting.
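The cloze-style prompting the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, assuming the Hugging Face transformers library and the roberta-base checkpoint; the template, input sentence, and label words are illustrative assumptions, not the paper's own pattern.

# Cloze-style prompting with a masked language model (illustrative sketch).
from transformers import pipeline

# RoBERTa uses "<mask>" as its blank token.
fill_mask = pipeline("fill-mask", model="roberta-base")

# Wrap the raw input in a prompt template that leaves one blank.
text = "The new graphics card renders games at twice the frame rate."
prompt = f"{text} This text is about <mask>."

# The pre-trained model proposes words for the blank; the top candidates
# can then be mapped onto class labels (e.g., "technology" vs. "sports").
for candidate in fill_mask(prompt, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))

Mapping the model's top candidate words onto class labels is, in essence, how PET and iPET turn a small pre-trained masked language model into a few-shot classifier.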
ISSN:2049-3630
DOI:10.1109/IPRIA59240.2023.10147172