
Internal document: Google trained PaLM 2 on 3.6T tokens and 340B parameters, compared to 780B tokens and 540B parameters for the original PaLM in 2022 (Jennifer Elias/CNBC)





Jennifer Elias / CNBC:

Internal document: Google trained PaLM 2 on 3.6T tokens and 340B parameters, compared to 780B tokens and 540B parameters for the original PaLM in 2022 — Google’s PaLM 2 large language model uses nearly five times as much text data for training as its predecessor LLM, CNBC has learned.









