"Llama 2 is perhaps best known for being part of another scandal. In November, Chinese researchers used Llama 2 as the foundation for an AI model used by the Chinese military, Reuters reported. Responding to the backlash, Meta told Reuters that the researchers' reliance on a “single" and "outdated" was "unauthorized," then promptly reversed policies banning military uses and opened up its AI models for US national security applications, TechCrunch reported.
"We are pleased to confirm that we’re making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," a Meta blog said. "We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies."
Because Meta's models are open-source, they "can easily be used by the government to support Musk’s goals without the company’s explicit consent," Wired suggested.
It's hard to track where Meta's models may have been deployed in government so far, and it's unclear why DOGE relied on Llama 2 when Meta has made advancements with Llama 3 and 4.
Not much is known about DOGE's use of Llama 2. Wired's review of records showed that DOGE deployed the model locally, "meaning it's unlikely to have sent data over the Internet," addressing a privacy concern that many government workers had raised.
In an April letter sent to Russell Vought, director of the Office of Management and Budget, more than 40 lawmakers demanded a probe into DOGE's AI use, warning that it posed "serious security risks" and could "have the potential to undermine successful and appropriate AI adoption.""
https://arstechnica.com/tech-policy/2025/05/musks-doge-used-metas-llama-2-not-grok-for-govt-slashing-report-says/
#USA #Trump #DOGE #Austerity #AI #GenerativeAI #Llama #Grok