Friday, November 1, 2024

Exclusive: Chinese researchers develop AI model for military use on back of Meta’s Llama


By James Pomfret and Jessie Pang

(Reuters) – Top Chinese research institutions linked to the People’s Liberation Army have used Meta’s publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts.

In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as a base for what they call “ChatBIT”.

The researchers used the Llama 2 13B large language model (LLM), which Meta released in 2023, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and to offer accurate and reliable information for operational decision-making.

ChatBIT was fine-tuned and “optimised for dialogue and question-answering tasks in the military field”, the paper said, and was found to outperform some other AI models that were roughly 90% as capable as OpenAI’s powerful ChatGPT-4. The researchers did not elaborate on how they defined performance or specify whether the AI model had been put into service.

“It’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes,” said Sunny Cheung, associate fellow at the Jamestown Foundation, who specialises in China’s emerging and dual-use technologies, including AI.

Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a licence from the company.

Its terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to U.S. defence export controls, as well as for the development of weapons and content intended to “incite and promote violence”.

However, because Meta’s models are public, the company has limited ways of enforcing those provisions.

In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse.

“Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy,” Molly Montgomery, Meta’s director of public policy, told Reuters in a phone interview.
