Meta AI announced the release of Llama 2, the latest update to its large language artificial intelligence (AI) model, on July 18.
The company said in an accompanying Twitter post:
“We believe an open approach with broad accessibility is the right one for the development of today’s AI models…. Today, we’re releasing Llama 2 … available for free for research & commercial use.”
The new release exceeds Llama 1 in several ways. Llama 2 was trained on 40% more data than its predecessor, with pre-trained models trained on 2 trillion tokens, offers a longer context length, and is available in sizes of up to 70 billion parameters.
The use of public data is another focus of the announcement. Llama 2 was pre-trained on publicly available online data. Its fine-tuned model, Llama-2-chat, makes use of publicly available instruction datasets and over 1 million human annotations.
Meta’s announcement provides data showing that Llama 2 “outperforms” competing open-source language models when tested against several benchmarks including coding, reasoning, proficiency, and knowledge tests. In the introduction of an attached paper, Meta asserts that other publicly released language models cannot fully compete with closed models such as OpenAI’s GPT, Google’s Bard, and Anthropic’s Claude at present.
Meta has also published a statement declaring it intends to take a responsible approach to AI. There, it asserts that “visibility, scrutiny and trust” and an open approach will be beneficial. The statement has been signed by dozens of industry members.
Llama 2 is part of a bigger picture
The announcement represents the latest step toward making Llama public. Meta released Llama under a non-commercial license in February. Shortly after that limited release, Llama’s source code leaked.