Defence Minister Bill Blair says Canada is working on incorporating artificial intelligence into its military, but the technology won’t replace humans.
Blair made the remarks at a summit in Seoul, South Korea, where Canada was among 61 countries that endorsed a new document on responsible military use of AI.
Canada is working on making the Canadian Armed Forces an “AI-enabled” organization by 2030 under a strategy launched earlier this year, Blair said.
He said it’s “critical that we harness this technology both effectively and ethically,” according to a copy of his remarks at the Responsible AI in the Military Domain summit.
That includes using AI to “improve the work of our military personnel but not to replace it,” Blair said.
“That is why we have committed to ensuring that humans will always remain at the forefront of significant decisions with appropriate accountability mechanisms remaining in place.”
He added the strategy also emphasizes working with allies to “ensure that AI technologies are not only developed efficiently but also effectively integrated and managed.”
The strategy document says Canada’s allies are moving fast to adopt AI and warns that Canada must keep pace, noting the technology is also becoming more accessible to potential adversaries.
Countries supporting the “blueprint for action” in Seoul include allies such as the United States, United Kingdom, Germany and France, according to a list posted on the summit’s website. China and Russia are not on the list of countries endorsing the agreement. Israel is also absent.
The meeting in South Korea followed an inaugural summit in the Netherlands last year at which countries, including Canada, supported an earlier document.
The “blueprint” from this week’s event says AI “holds extraordinary potential to transform every aspect of military affairs,” but also warns military use of AI could pose humanitarian, societal and ethical risks.
The document says the technology should be used in accordance with applicable national and international law, and that “responsibility and accountability can never be transferred to machines.”
It also calls for “safeguards to reduce the risks of malfunctions or unintended consequences,” including from biases in data or algorithms.
Humans should be able to understand, explain and trust outputs from AI systems, it says.
The document also calls for a shared understanding and an open discussion on the application of AI in the military domain.
It says “AI applications in the military domain should be developed, deployed and used in a way that maintains and does not undermine international peace, security and stability.”