Can ChatGPT's Responses Boost Traditional Natural Language Processing?
| Published in: | IEEE Intelligent Systems, 2023-09, Vol. 38 (5), p. 5-11 |
|---|---|
| Main Authors: | , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Summary: | The use of foundation models is expanding steadily, especially since the launch of ChatGPT and the release of other foundation models. These models have shown emergent capabilities that let them solve problems they were never specifically trained on. A previous work demonstrated these emergent capabilities on affective computing tasks; performance was comparable to that of traditional natural language processing (NLP) techniques but fell short of specialized trained models, such as a fine-tuned RoBERTa language model. In this work, we extend that study by exploring whether ChatGPT holds novel knowledge that can enhance existing specialized models when the two are fused. We do so by investigating the utility of ChatGPT's verbose responses for solving a downstream task, as well as the utility of fusing those responses with existing NLP methods. The study covers three affective computing problems: sentiment analysis, suicide tendency detection, and big-five personality assessment. The results show that ChatGPT does hold novel knowledge that can improve existing NLP techniques through fusion, whether early or late. |
| ISSN: | 1541-1672, 1941-1294 |
| DOI: | 10.1109/MIS.2023.3305861 |
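
The early and late fusion mentioned in the summary can be pictured with a short sketch. The snippet below is a hypothetical illustration, not the authors' code: it assumes feature vectors have already been extracted for each input text (for example, from a fine-tuned RoBERTa) and for ChatGPT's verbose response about that text, and it uses random placeholder arrays with a scikit-learn logistic regression purely to show the two fusion patterns. Early fusion concatenates the two feature sets before training a single classifier; late fusion trains separate classifiers and combines their predicted probabilities.

```python
# Hypothetical sketch of early vs. late fusion of ChatGPT-response features
# with conventional NLP features; placeholder data, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assumed, already-extracted feature matrices (one row per example):
#   text_feats: embeddings of the original text (e.g., from a fine-tuned RoBERTa)
#   gpt_feats:  embeddings of ChatGPT's verbose response about that text
text_feats = rng.random((100, 768))
gpt_feats = rng.random((100, 768))
labels = rng.integers(0, 2, size=100)  # e.g., binary sentiment labels

# Early fusion: concatenate both feature sets, then train one classifier.
early_input = np.hstack([text_feats, gpt_feats])
early_clf = LogisticRegression(max_iter=1000).fit(early_input, labels)
early_probs = early_clf.predict_proba(early_input)

# Late fusion: train one classifier per feature set, then average their probabilities.
text_clf = LogisticRegression(max_iter=1000).fit(text_feats, labels)
gpt_clf = LogisticRegression(max_iter=1000).fit(gpt_feats, labels)
late_probs = (text_clf.predict_proba(text_feats) + gpt_clf.predict_proba(gpt_feats)) / 2
late_preds = late_probs.argmax(axis=1)
```

In the setting the summary describes, the same pattern would apply to each of the three tasks (sentiment analysis, suicide tendency detection, big-five personality assessment), with the choice of fusion stage determining whether ChatGPT's response is merged at the feature level or at the prediction level.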