Meta gives nod to weaponizing Llama – but only for the good guys

Change of mind follows discovery China was playing with it uninvited?


Meta has historically restricted its LLMs from uses that could cause harm – but that has apparently changed. The Facebook giant has announced it will allow the US government to use its Llama model family for, among other things, defense and national security applications. Nick Clegg, Meta's president of global affairs, wrote yesterday that Llama, already available to the public under various conditions, was now available to US government agencies – as well as a number of commercial partners including Anduril, Lockheed Martin, and Palantir.

Meta told The Register all of its Llama models have been made available to the US government and its contractors. Llama – which is described by Meta as open source though it really isn't – is already being used by Uncle Sam's partners such as Oracle to improve aircraft maintenance, and by Scale AI "to support specific national security team missions." IBM, through watsonx, is bringing Llama to national security agencies' self-managed datacenters and clouds, according to Clegg.



"These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish US open source standards in the global race for AI leadership," Clegg asserted. The new permission for the federal government and its contractors to use Llama for national security purposes conflicts with the model's general-public acceptable use policy, which specifically prohibits use in "military, warfare, nuclear industries or applications, espionage" or "operation of critical infrastructure, transportation technologies, or heavy machinery." Even so, we're told nothing's changing – outside of the deal Clegg announced.

"Our Acceptable Use Policy remains in place," a Meta spokesperson told us. "However, we are allowing the [US government] and companies that support its work to use Llama, including for national security and other related efforts in compliance with relevant provisions of international humanitarian law." As Meta clears Llama for US defense use, Chinese giant Tencent has stepped up its own AI game by introducing Hunyuan-Large – claiming it's the largest "open source" transformer-based mixture of experts (MoE) model with a total of 389 billion parameters.

It isn't open source; the terms of use have various limiting conditions, such as ones that kick in if you have more than 100 million monthly users. In an arXiv paper this week, Tencent's Hunyuan team boasted that the model has 52 billion activated parameters and the ability to handle up to 256,000 tokens. The team claims that Hunyuan-Large surpasses Meta's Llama 3.1-70B in benchmark performance.

"We conduct a thorough evaluation of Hunyuan-Large's superior performance across various benchmarks including language understanding and generation, logical reasoning, mathematical problem-solving, coding, long-context, and aggregated tasks, where it outperforms Llama 3.1-70B and exhibits comparable performance when compared to the significantly larger Llama 3.1-405B model," Tencent declared in its write-up.

However, there are some important nuances to consider. For example, unlike Meta's dense models, which utilize all parameters simultaneously, Hunyuan-Large activates only 52 billion of its total 389 billion parameters at any given time.
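The sparse-activation idea behind that 52-of-389-billion figure can be sketched in a few lines. This is a toy illustration of mixture-of-experts routing, not Tencent's actual code; the function names, shapes, and top-2 routing choice are our own assumptions. A small gating network scores every expert, but only the top-k experts' weight matrices are ever multiplied against the input:

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Toy mixture-of-experts layer: route input x through only the
    top_k highest-scoring experts, leaving the rest untouched."""
    logits = gate_w @ x                       # one gating score per expert
    top = np.argsort(logits)[-top_k:]         # indices of the chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over selected experts only
    # Only the chosen experts' parameters participate in this forward pass.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.standard_normal(d)
gate_w = rng.standard_normal((n_experts, d))            # gating network
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, top_k=2)            # 2 of 16 experts used
```

Scaled up, this is why the model's per-token compute tracks the 52 billion "activated" parameters rather than the full 389 billion it stores.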

Additionally, its benchmark results rely on synthetic data and specific tests – leaving its broader real-world performance yet to be fully validated. Beyond research settings, the team says Tencent's AI chatbot Yuanbao has run on the MoE-based model since early 2024, and the tech giant also uses Hunyuan models to enhance "thousands of scenarios" within its applications.

Clegg waxed philosophical throughout his blog post about how the success of Llama's ostensibly open design was fundamental to American economic and national security needs. "In a world where national security is inextricably linked with economic output, innovation and job growth, widespread adoption of American open source AI models serves both economic and security interests," Clegg wrote. "We believe it is in both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere."

Clegg went on to argue that open standards for AI will increase transparency and accountability – which is why the US has to get serious about making sure its vision for the future of the tech becomes the world standard. "The goal should be to create a virtuous circle, helping the United States retain its technological edge while spreading access to AI globally and ensuring the resulting innovations are responsible and ethical, and support the strategic and geopolitical interests of the United States and its closest allies," Clegg explained. To that end, Meta told Bloomberg, similar offers for the use of Llama by government entities were extended to the US's "Five Eyes" intelligence partners: Canada, the UK, Australia, and New Zealand.

But let's not forget the self-serving aspect of this deal. It was just days ago, during Meta's Q3 earnings call, that Mark Zuckerberg asserted that opening up Llama would benefit his company, too – by ensuring its AI designs become a sort of de facto standard. "As Llama gets adopted more, you're seeing folks like Nvidia and AMD optimize their chips more to run Llama specifically well, which clearly benefits us," Zuckerberg told investors listening to the earnings call.

"So it benefits everyone who's using Llama, but it makes our products better rather than if we were just on an island building a model that no one was kind of standardizing around in the industry." The announcement is perfectly timed to give Llama a patriotic paint job after news broke last week that researchers in China reportedly had built Llama-based AI models for military applications. Meta maintained that China's use of Llama was unauthorized and contrary to its acceptable use policy.

And that's inviolable – except for the US government and its allies, apparently. ®