
Meet BloombergGPT: A Large Language Model With 50 Billion Parameters That Has Been Trained on a Variety of Financial Data

By Aneesh Tickoo, MarkTechPost

The 2020 release of GPT-3 served as a compelling demonstration of the advantages of training extremely large autoregressive language models. GPT-3, with 175 billion parameters (a 100-fold increase over GPT-2), performed exceptionally well on a wide range of LLM tasks, including reading comprehension, open-ended question answering, and code generation, and many subsequent models have reproduced this level of performance. Moreover, the evidence shows that very large models display emergent behaviours: their scale allows them to acquire abilities that are unavailable to smaller models. A famous example of emergent behaviour is few-shot prompting, where a model learns a task from just a handful of examples supplied in the prompt. As language models grow in parameter count, this ability improves well beyond random performance.
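To make the idea concrete, here is a minimal sketch of what a few-shot prompt looks like in practice. The sentiment-classification task, the example reviews, and the prompt layout are illustrative assumptions, not something taken from the article; the resulting string could be sent to any completion-style LLM.

# A minimal sketch of few-shot prompting, assuming a hypothetical
# sentiment-classification task. The model is shown a few worked
# examples inside the prompt, then asked to complete a new one.

examples = [
    ("The product arrived broken and support was useless.", "negative"),
    ("Setup took two minutes and it works flawlessly.", "positive"),
    ("Battery life is fine, nothing special.", "neutral"),
]

query = "The screen is gorgeous but it overheats constantly."

# Build the prompt: each example is a (review, label) pair; the final
# line leaves the label blank for the model to fill in.
prompt_lines = ["Classify the sentiment of each review."]
for text, label in examples:
    prompt_lines.append(f"Review: {text}\nSentiment: {label}")
prompt_lines.append(f"Review: {query}\nSentiment:")
prompt = "\n\n".join(prompt_lines)

print(prompt)
# A sufficiently large model can typically complete the missing label
# from these few examples alone, with no task-specific fine-tuning,
# which is exactly the emergent few-shot ability described above.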

