
More than 60 countries have agreed on the need to control weapons with AI

With the emergence of artificial intelligence, it became clear that it would sooner or later be put to military use, and that raised a number of serious ethical questions. For example, if an AI system is granted the right to take human lives, how will it exercise that right?

Last week, The Hague hosted REAIM 23, the first international conference on the responsible use of artificial intelligence in the military domain. The summit was co-hosted by the Netherlands and South Korea and attended by more than 60 countries.

At the end of the summit, the participants (with the exception of Israel) signed a joint call to action stating that the countries they represent are committed to using AI in accordance with international law and without undermining the principles of “international security, stability and accountability.”

Participants at REAIM 23 also discussed the reliability of military AI, the unintended consequences of its use, the risks of escalation, and the degree of human involvement in decision-making.

According to critics, the document is not legally binding and leaves many problems unresolved, including the use of AI in armed conflicts and the deployment of UAVs controlled by artificial intelligence.

And such concerns are far from unfounded. Lockheed Martin, one of the largest U.S. military contractors, reported that its new training jet was flown entirely by AI for roughly 20 hours in the air. And former Google CEO Eric Schmidt has voiced concern that AI itself could provoke military conflicts, including ones involving nuclear weapons.
