Meta has released Llama 3.1, a family of large language models available in three sizes: 405 billion, 70 billion, and 8 billion parameters. The new models bring significant improvements in multilingual performance and long-context handling, enabling more nuanced understanding of multiple languages and the ability to process much longer inputs.
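To put those parameter counts in practical terms, here is a back-of-the-envelope sketch of the memory needed just to hold each model's weights at 16-bit precision (roughly 2 bytes per parameter). These are illustrative estimates, not official hardware requirements, and they exclude activation and KV-cache memory:

```python
# Rough memory-footprint estimate for each Llama 3.1 size.
# Assumption: weights stored in fp16/bf16, i.e. ~2 bytes per parameter;
# real deployments may use quantization (e.g. 8-bit or 4-bit) to shrink this.
BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(num_params: float) -> float:
    """Approximate gigabytes needed to hold the weights alone in fp16."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

for name, params in [("8B", 8e9), ("70B", 70e9), ("405B", 405e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of fp16 weights")
```

By this estimate the 8B model fits on a single consumer GPU, while the 405B model requires a multi-GPU server even before accounting for activations, which is why the smaller sizes matter for accessibility.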
Key features of Llama 3.1 include native support for multilingual dialogue, a context window extended to 128K tokens, and enhanced reasoning capabilities. The models are designed to be more efficient for developers, with optimizations that reduce inference costs while maintaining high accuracy.
Meta emphasizes that Llama 3.1 is open-source, allowing researchers and developers to fine-tune and deploy the models for a wide range of applications. The release marks a major step forward in making advanced AI accessible, particularly for non-English languages and tasks requiring long-form comprehension.