Chinese research institutions affiliated with the People’s Liberation Army (PLA) have reportedly used Meta’s open-source large language model, Llama 2 13B, to create an AI tool with potential military applications. As reported by Reuters, a research paper published in June reveals that researchers, including those from the PLA’s research division, used an early version of Meta’s Llama as the foundation for a tool they refer to as “ChatBIT”.
The six Chinese researchers are said to have come from three institutions, including the Academy of Military Science (AMS).
PLA-affiliated Chinese institutions ‘customize Meta’s AI’
The report suggests that the researchers customized Meta’s Llama model with their own parameters to develop a military-oriented AI tool designed to gather and process intelligence and deliver accurate and reliable information for operational decision-making.
The paper describes how ChatBIT was fine-tuned and “optimized for dialogue and question-answering tasks in the military field.”
The model reportedly outperformed some other AI systems, performing at roughly 90% of the capability of OpenAI’s ChatGPT-4. However, the authors did not disclose specific performance metrics or confirm whether ChatBIT is currently in active deployment, as noted in the Reuters report.
Sunny Cheung, an associate fellow at the Jamestown Foundation specializing in China’s use of technology, is quoted as saying: “It’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes.”
Meta faces challenges enforcing usage restrictions
Meta, which advocates for an open-release approach for many of its AI systems, including Llama, imposes specific usage restrictions. For example, organizations with more than 700 million monthly active users are required to obtain a license from Meta, and the terms expressly forbid the use of these models for military, nuclear, espionage, or other sensitive purposes.
Nonetheless, the company acknowledges that, due to the open nature of its models, its ability to enforce compliance is limited. Molly Montgomery, Meta’s director of public policy, told Reuters: “Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy.”
The researchers mentioned that the model was trained on only 100,000 military dialogue records—a relatively small dataset compared to other large language models.
Earlier this week, ReadWrite reported that the Biden administration finalized restrictions limiting U.S. investments in advanced technologies in China, such as AI. Citing national security concerns, officials have been developing the rules over the past year, and they are now set to take effect on January 2.
ReadWrite has reached out to Meta for comment.
Featured image: Canva