Monday, December 23, 2024

UK, US, and Canada join forces on cybersecurity and AI research

The UK, the US, and Canada are set to collaborate on research, development, testing, and evaluation of technologies related to artificial intelligence (AI), cybersecurity, resilient systems, and information domain-related innovations.

This initiative by the UK Ministry of Defence, the US Defense Advanced Research Projects Agency (DARPA), and the Canadian Department of National Defence will focus on critical research areas in support of defence and security.

By developing methodologies, algorithms, capabilities, and tools, the collaboration aims to address real-world challenges. These developments will be applied to new concepts of operations.

The Defence Science and Technology Laboratory (Dstl) will represent the UK in this effort, while Defence Research and Development Canada (DRDC) will lead Canada’s participation.

According to the UK Ministry of Defence, the partnership comes at a time of rapid technological advancement and increasingly complex global security challenges. The effort is intended to streamline research across the three nations, reducing duplication and enhancing the effectiveness of shared programmes.

It also aims to accelerate the transition of new technologies from research to operational use, minimising technological risks.

“Our international research collaborations with both the US and Canada are some of our most vital and enduring partnerships,” said the UK Ministry of Defence science and technology director, Nick Joad. “This agreement cements our collective commitments to advancing emerging technologies such as cyber security and artificial intelligence to enhance the defence and security of our nations.”

US, UK and Canada build a CASTLE

One of the key projects currently underway is the Cyber Agents for Security Testing and Learning Environments (CASTLE) programme, which trains AI systems to autonomously defend networks against sophisticated cyber threats.

Several other areas of research are being explored within the collaboration. These include the integration of AI with human teams, particularly in military medical triage, the creation of trustworthy AI systems that can withstand attacks from well-resourced adversaries, and the protection of information domains.

Furthermore, there is a focus on developing tools to enhance system resilience and security, such as techniques for the rapid certification of software.

This collaboration was reinforced during a symposium convened by DARPA in the summer of 2024, where representatives from the UK, US, and Canadian governments gathered to discuss ongoing and future research efforts. 

“The trilateral collaboration is a big step toward enhancing our understanding in the outlined research and development thrust areas,” said DARPA’s director, Stefanie Tompkins. “Working with our international partners on science and technology helps us all leverage each other’s individual strengths in order to develop much greater collective capability.”

UK pursues AI conference in San Francisco

Last week, the UK government announced plans to hold a conference in San Francisco in November 2024 to engage with AI developers on implementing commitments made at the AI Seoul Summit. The programme will focus on AI safety, serving as a precursor to the AI Action Summit, which will be held in France in February 2025.

In another related development, the Council of Europe introduced the first legally binding international treaty on AI earlier this month. Signed by the UK, Israel, the US, and the European Union (EU), along with Council of Europe member states, the treaty aims to ensure that AI use is in line with human rights, democracy, and the rule of law.

Prior to this, in July, regulators in the UK, the US, and the EU issued a joint communiqué calling for effective, balanced competition in the generative AI space.

