United Nations (UN) group of experts hosts first formal inter-governmental discussion on AI in armed conflict.
What are AI weapons?
Artificial intelligence (AI, also machine intelligence, MI) is intelligence displayed by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals.
The term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
Autonomous weapons select and engage targets without human intervention.
They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria.
What are the concerns with future weapons?
Throughout history, the capacity to wield new technologies has changed how wars are fought and how the strategic balance between attack and defence is maintained.
The norms around what is considered acceptable in warfare have also evolved in response to new technologies.
Since the 19th century, those norms have been codified in international humanitarian law, which is more or less universally accepted as regulating armed conflict among civilised nations.
Recent advances in artificial intelligence (AI) pose a new challenge to these norms.
Many technology leaders are worried about autonomous systems taking life-and-death decisions without “meaningful human supervision or control”.
How are these concerns to be addressed?
Tech billionaires around the world recently signed an open letter warning that the weaponisation of AI-based technologies risks unleashing lethal dangers.
The letter called on the UN to find a way to protect human society from the dangers of automated weapons.
In many areas of technological complexity, alternative governance models have emerged, such as the ‘multi-stakeholder’ approach to Internet governance.