
LLaMA 2: Meta Launches Its New Versatile Open-Source AI Model

This week, Meta, Facebook’s parent company, made waves in the artificial intelligence (AI) market by releasing an open-source large language model (LLM) designed to compete with its big tech rivals’ more restrictive practices.

In contrast to the closely guarded proprietary models introduced by Google, OpenAI, and others, Meta is openly releasing the code and data underpinning LLaMA 2 so that researchers all over the world can build upon and develop the technology.

Mark Zuckerberg, CEO of Meta, has frequently emphasised the value of open-source software in fostering innovation.

“Open-source drives innovation because it enables many more developers to build with new technology,” Zuckerberg said in a Facebook post. “It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues.”

Because developers all over the world can now access, examine, analyse, and build upon the foundation model, LLaMA 2’s open-source nature may well lead to significant advances in AI. It is a bold step that could democratise the rapidly developing field of artificial intelligence by giving programmers powerful tools to create cutting-edge software and solutions.

LLaMA 2 is available in three sizes, depending on the model you select: 7 billion, 13 billion, and 70 billion parameters. For comparison, Google’s Bard (based on LaMDA) and OpenAI’s GPT-3.5 series contain 137 billion and up to 175 billion parameters, respectively. OpenAI famously declined to disclose GPT-4’s parameter count in its published report. A model’s performance and accuracy typically scale with its number of parameters, but larger models also require more computing power and training data.
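To make the compute trade-off between the three sizes concrete, here is a rough back-of-the-envelope sketch of how much memory each checkpoint occupies. The parameter counts (7B, 13B, 70B) come from the article; the 2-bytes-per-weight figure is an assumption based on the common convention of storing weights in 16-bit (fp16/bf16) precision.

```python
# Rough memory footprint of LLaMA 2 checkpoints.
# Parameter counts are from the article; 2 bytes/weight assumes
# fp16/bf16 storage (an assumption, not from the article).
PARAM_COUNTS = {"7B": 7e9, "13B": 13e9, "70B": 70e9}
BYTES_PER_PARAM = 2  # fp16/bf16

def checkpoint_gb(params: float) -> float:
    """Approximate checkpoint size in gigabytes (1 GB = 1e9 bytes)."""
    return params * BYTES_PER_PARAM / 1e9

for name, count in PARAM_COUNTS.items():
    print(f"LLaMA 2 {name}: ~{checkpoint_gb(count):.0f} GB")
# LLaMA 2 7B:  ~14 GB
# LLaMA 2 13B: ~26 GB
# LLaMA 2 70B: ~140 GB
```

This is why the 7B model can run on a single consumer GPU while the 70B model typically needs multiple accelerators or aggressive quantization.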

LLaMA 2’s training approach is also interesting in how it differs from more conventional methods.
