Meta Heats Up the AI Race With Their State-Of-The-Art Foundation Language Model LLaMA | Synced
Source: Synced | AI Technology & Industry Review
Meta AI reveals the technical details of its LLaMA collection of foundation language models in the new paper LLaMA: Open and Efficient Foundation Language Models. The LLaMA models were trained on trillions of tokens and achieve performance competitive with state-of-the-art models such as GPT-3 and PaLM, while being much smaller and using only publicly available training data.