
The PaLM 2 model was trained on 3.6 trillion tokens, compared with 780 billion tokens for its predecessor.
Google's new Large Language Model (LLM), PaLM 2, unveiled by the tech giant last week, uses nearly five times as much training data as its 2022 predecessor, helping it perform tasks such as coding, math, and creative writing.
According to internal documents seen by CNBC, the PaLM 2 model introduced at the Google I/O conference was trained on 3.6 trillion tokens. Tokens, which are strings of words, are an essential building block for training LLMs, because the model learns from them to predict the next word in a sequence.
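To make the idea of tokens concrete, here is a minimal sketch in Python. Since Google's PaLM 2 tokenizer is not public, it uses OpenAI's open-source tiktoken library as a stand-in; the token ids and fragments it prints are specific to that tokenizer, not to PaLM 2.

# A minimal tokenization sketch. Google's PaLM 2 tokenizer is not public,
# so OpenAI's open-source tiktoken library stands in here (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common BPE vocabulary

text = "PaLM 2 was trained on 3.6 trillion tokens."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens: {token_ids}")
# Each id decodes back to a short text fragment, not necessarily a whole word:
print([enc.decode([t]) for t in token_ids])

Token counts like the 3.6 trillion cited above are tallies of exactly these units across the entire training corpus.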
The previous version of Google's PaLM, which stands for Pathways Language Model, was released in 2022 and was trained on 780 billion tokens.
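The "nearly five times" figure follows directly from those two token counts; a quick check:

# Quick arithmetic check of the "nearly five times" claim.
palm_2_tokens = 3.6e12  # 3.6 trillion (PaLM 2)
palm_1_tokens = 780e9   # 780 billion (PaLM, 2022)
print(palm_2_tokens / palm_1_tokens)  # ~4.62, i.e. nearly five times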
W...