Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

Bibliographic Details
Published in: arXiv.org 2024-04
Main Authors: Abdin, Marah, Jacobs, Sam Ade, Awan, Ammar Ahmad, Aneja, Jyoti, Awadallah, Ahmed, Awadalla, Hany, Bach, Nguyen, Bahree, Amit, Bakhtiari, Arash, Behl, Harkirat, Benhaim, Alon, Bilenko, Misha, Bjorck, Johan, Bubeck, Sébastien, Cai, Martin, Mendes, Caio César Teodoro, Chen, Weizhu, Chaudhary, Vishrav, Chopra, Parul, Del Giorno, Allie, de Rosa, Gustavo, Dixon, Matthew, Eldan, Ronen, Iter, Dan, Garg, Amit, Goswami, Abhishek, Gunasekar, Suriya, Haider, Emman, Hao, Junheng, Hewett, Russell J, Huynh, Jamie, Javaheripi, Mojan, Jin, Xin, Kauffmann, Piero, Karampatziakis, Nikos, Kim, Dongwoo, Khademi, Mahmoud, Kurilenko, Lev, Lee, James R, Lee, Yin Tat, Li, Yuanzhi, Chen, Liang, Liu, Weishung, Lin, Eric, Lin, Zeqi, Madan, Piyush, Mitra, Arindam, Modi, Hardik, Nguyen, Anh, Norick, Brandon, Patra, Barun, Perez-Becker, Daniel, Portet, Thomas, Pryzant, Reid, Qin, Heyang, Radmilac, Marko, Rosset, Corby, Roy, Sambudha, Ruwase, Olatunji, Saarikivi, Olli, Saied, Amin, Salim, Adil, Santacroce, Michael, Shah, Shital, Shang, Ning, Sharma, Hiteshi, Song, Xia, Tanaka, Masahiro, Wang, Xin, Ward, Rachel, Wang, Guanhua, Witte, Philipp, Wyatt, Michael, Xu, Can, Xu, Jiahang, Yadav, Sonali, Yang, Fan, Yang, Ziyi, Yu, Donghan, Zhang, Chengruidong, Zhang, Cyril, Zhang, Jianwen, Zhang, Li Lyna, Zhang, Yi, Zhang, Yue, Zhang, Yunan, Zhou, Xiren
Format: Article
Language: English
Description
Summary: We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench).
ISSN: 2331-8422
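
Illustrative usage (not part of the record): The summary notes that phi-3-mini is aligned for a chat format and is small enough to be deployed on a phone. The sketch below shows what chat-style inference could look like with the Hugging Face transformers library; the checkpoint name "microsoft/Phi-3-mini-4k-instruct", the dtype/device settings, and the example prompt are assumptions for illustration and are not taken from the record above.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed public instruct checkpoint; not specified in the record above.
model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Older transformers releases may additionally require trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# The summary says the model is aligned for a chat format, so the prompt is
# built with the tokenizer's chat template rather than passed as raw text.
messages = [{"role": "user", "content": "Explain what MMLU measures in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))

Note that on-phone deployment would typically use a quantized (e.g., 4-bit) build of the weights rather than the full-precision checkpoint loaded here.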