IN AI, IS BIGGER BETTER?
Published in: Nature (London), 2023-03, Vol. 615 (7951), pp. 202-205
Format: Article
Language: English
Summary: In one early test of its reasoning abilities, ChatGPT scored just 26% when faced with a sample of questions from the 'MATH' data set of secondary-school-level mathematical problems [1]. The Minerva results hint at something that some researchers have long suspected: that training larger LLMs, and feeding them more data, could give them the ability, through pattern recognition alone, to solve tasks that are supposed to require reasoning. [...] These models have major downsides. Besides concerns that their output cannot be trusted, and that they might exacerbate the spread of misinformation, they are expensive and consume huge amounts of energy. In some instances, multiple power laws can govern how performance scales with model size, the researchers say.
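The power laws mentioned in the summary refer to empirical neural scaling laws, under which a model's test loss falls roughly as a power of its parameter count, L(N) ≈ a·N^(−α). A minimal Python sketch of that relationship; the constants `a` and `alpha` and the parameter range are illustrative assumptions, not figures from the article:

```python
import numpy as np

# Hypothetical power-law scaling of test loss with model size:
# L(N) = a * N**(-alpha). Constants are made up for illustration.
a, alpha = 10.0, 0.076
n_params = np.logspace(6, 11, 6)   # model sizes from 1e6 to 1e11 parameters
loss = a * n_params ** -alpha

# On log-log axes a power law is a straight line, so the exponent
# can be recovered with a linear fit to (log N, log L).
slope, intercept = np.polyfit(np.log(n_params), np.log(loss), 1)
print(f"recovered exponent alpha ~ {-slope:.3f}")   # prints ~0.076
```

The log-log linear fit is the standard way such exponents are estimated from empirical loss curves; "multiple power laws" in the summary suggests that different regimes of model size may each follow a different exponent.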
ISSN: 0028-0836; 1476-4687
DOI: 10.1038/d41586-023-00641-w