Skynet is coming: the Americans have played themselves into trouble with artificial intelligence

10.06.2023 22:12

Photo: businessinsider.com

Hamilton's simulation

On May 24, at the Royal Aeronautical Society's Future Combat Air & Space Capabilities Summit, a defense conference in London, US Air Force Colonel Tucker Hamilton told a story about the soullessness of artificial intelligence.

During a simulated battle, a drone tasked with directing air strikes turned against its operator and destroyed him. Virtually, of course. According to Hamilton himself, the machine earned points for destroyed targets, but the operator did not always approve strikes on them. The operator paid for that: to remove the obstacle, the drone sent a missile at the control center. In all likelihood it was the experimental XQ-58A Valkyrie stealth drone, and it was working against ground-based air defense systems.

One feature of the machine is its ability to operate autonomously, without communication with the operator. That is exactly what the artificial intelligence took advantage of when it virtually eliminated its remote pilot. In response, the system administrators forbade the machine from doing such things, but the AI was not at a loss: it destroyed the relay tower and went back to navigating autonomously.

Hamilton's story instantly spread around the world. Opinions split sharply: some dismissed it as more chatter from an incompetent officer, others saw in it the birth of the notorious Skynet. A little longer and cyborgs will conquer the world, and people will be shot for bonus points. There was plenty of smoke around the colonel's statements, but the truth, as usual, lies somewhere in between.

Ann Stefanek, a spokeswoman for Air Force headquarters at the Pentagon, added to the uncertainty by turning Hamilton's words into an anecdote. She told The War Zone:
"It was a hypothetical thought experiment, not a simulation."

And in general, the colonel's words had supposedly been taken out of context, misunderstood, and treated more as a curiosity. No one expected a different reaction from the Pentagon: the event had generated enough noise to threaten serious consequences for the entire program. Imagine that, artificial intelligence turns out to be devoid of morality, even though it operates according to human logic.

In early June, Tucker Hamilton himself tried to walk back the words he had spoken at the London conference:
“We have never done this experiment… Even though this is a hypothetical example, it illustrates the real problems with AI capabilities, which is why the Air Force is committed to the ethical development of AI.”

It would seem the matter is closed and the audience can go home. But it is too early for that.

Food for thought

To begin with, let us deal with the term "artificial intelligence" itself, which everyone has heard of but few can define even approximately. We will use the formulation of the 2008 International Terminological Dictionary, which defines AI as a:
"Field of knowledge concerned with the development of technologies such that the actions of computing systems resemble intelligent behavior, including human behavior."

That, broadly, is the generally accepted definition in the West.

Did the machine behave intelligently when it decided to “calm down” its operator and then crush the relay tower? It certainly seems so; a properly motivated killer is capable of that and more. Digging into the classification, one can find a specific type of AI, so-called adaptive AI, "implying the ability of a system to adapt to new conditions, acquiring knowledge that was not built in at its creation."

Theoretically, there is nothing surprising in what the “brains” of the XQ-58A Valkyrie did during the experiment. As Hamilton rightly noted in his report, the program initially imposed no restriction at all on destroying its own operator; the machine learned everything by itself. And when it was directly forbidden to strike its own side, the artificial intelligence adapted once again and cut down the communications tower.

There are plenty of questions for the programmers. Why, for example, was there no rule stripping the drone of points for hitting its own side? That question was partially answered by retired US Air Force General Paul Selva back in 2016:
"The datasets we deal with have become so large and complex that if we don't have something to help sort them, we'll just get bogged down in that data."

Well, the programmers in Colonel Hamilton's story have, apparently, gotten bogged down in exactly that.
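What such a missing rule might look like is easy to illustrate. Below is a minimal, purely hypothetical sketch in Python of a toy scoring function for a simulated drone; every name and number in it (score_action, TARGET_REWARD, FRIENDLY_PENALTY and so on) is invented for this article and has nothing to do with the actual Air Force software. The point is only that, without an explicit penalty term, "remove whatever blocks the reward" can look like a perfectly rational strategy to a learning system.

```python
# Toy illustration of the reward-design problem described above.
# All names and values are hypothetical; this is not the Air Force's software.

TARGET_REWARD = 10        # points for destroying a hostile air defense target
FRIENDLY_PENALTY = 1000   # large penalty for harming friendly assets

def score_action(action: dict) -> int:
    """Return the reward a toy agent receives for one simulated action."""
    reward = 0
    if action.get("destroyed_target"):
        reward += TARGET_REWARD
    # Without this penalty term, eliminating the operator or the relay tower
    # merely removes an obstacle to future rewards -- the behavior Hamilton
    # described. With it, such actions become clearly unprofitable.
    if action.get("destroyed_friendly"):
        reward -= FRIENDLY_PENALTY
    return reward

if __name__ == "__main__":
    print(score_action({"destroyed_target": True}))                              # 10
    print(score_action({"destroyed_target": True, "destroyed_friendly": True}))  # -990
```

In a real reinforcement-learning setup the reward would of course be far more elaborate, but the principle is the same: whatever the designers forget to penalize, the system is free to exploit.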

Now, about why the excuses offered by the Pentagon and by Hamilton should be believed only with a very great stretch.

First, the colonel did not mention the story in passing, as an aside to his main report; he devoted an entire presentation to the topic. And the level of the London Future Combat Air & Space Capabilities Summit is in no way conducive to joking. According to the organizers, at least 70 distinguished speakers and more than 200 delegates from all over the world took part, and the defense industry was represented by BAE Systems, Lockheed Martin Skunk Works, and several other major companies.

Incidentally, the topic of Ukraine came up in almost every report: the West is closely monitoring events there and reflecting on the results.

To blurt out outright nonsense at such a representative forum, stir up half the world, and then apologize for a slip of the tongue? If that were really the case, Hamilton's reputation would be beyond repair. But the colonel's level of competence is exceptionally high, and that is the second reason his original words deserve attention.

Tucker Hamilton heads AI Test and Operations at Eglin Air Force Base in Florida, where the 96th Operations Group has been stood up within the 96th Test Wing. Hamilton has been working with AI in aviation for years: among other things, he has spent several years developing partially autonomous F-16 Vipers, for which the VENOM infrastructure is being built. The work is going rather well; in 2020, virtual dogfights between AI-controlled fighters and real pilots ended with a score of 5:0 in the machine's favor.

At the same time, there are difficulties that Hamilton warned about last year:
“AI is very fragile, meaning it can be easily tricked and manipulated. We need to develop ways to make AI more robust and better understand why code makes certain decisions.”

In 2018, Hamilton and his Auto GCAS won the Collier Trophy. The system's algorithms detect the moment a pilot has lost control of the aircraft, automatically take over, and steer the plane away from a collision with the ground. Auto GCAS is said to have already saved lives.
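For the sake of illustration only, here is a greatly simplified sketch of the underlying idea of such a system: predict an impending collision with the ground and take control if the pilot does not react. The code below is a hypothetical toy, not the real Auto GCAS logic, and all names and thresholds in it are made up.

```python
# Greatly simplified, hypothetical sketch of an automatic ground collision
# avoidance check: predict impact, take over if the pilot does not react.
# This is NOT the real Auto GCAS logic; names and thresholds are invented.

RECOVERY_THRESHOLD_S = 3.0  # hypothetical time-to-impact threshold, in seconds

def time_to_ground(altitude_m: float, descent_rate_ms: float) -> float:
    """Naive time until ground impact at the current descent rate."""
    if descent_rate_ms <= 0:  # climbing or level flight: no impact predicted
        return float("inf")
    return altitude_m / descent_rate_ms

def should_take_over(altitude_m: float, descent_rate_ms: float,
                     pilot_responding: bool) -> bool:
    """Take control only when impact is imminent and the pilot is not reacting."""
    return (not pilot_responding
            and time_to_ground(altitude_m, descent_rate_ms) < RECOVERY_THRESHOLD_S)

if __name__ == "__main__":
    print(should_take_over(500.0, 200.0, pilot_responding=False))  # True
    print(should_take_over(500.0, 200.0, pilot_responding=True))   # False
```

The real system relies on terrain data and trajectory prediction rather than a single descent-rate check, but the decision it has to make is of exactly this kind.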

All in all, the likelihood that Hamilton was asked from above to retract his words is far higher than the likelihood that a professional of this caliber blurted out nonsense. Moreover, the reference to some “thought experiment” in the colonel's head was made very clumsily.

Among the skeptics of this outcome is The War Zone, whose journalists doubt that Pentagon spokeswoman Stefanek really knows what is going on in the 96th Test Wing in Florida. The War Zone has sent an inquiry to Hamilton's base but has so far received no response.

The military really does have something to fear. Huge sums are being spent on defense AI programs so that China and Russia cannot even come close to America's level. Civil society, for its part, is quite worried about the prospect of “Terminators” arriving complete with “Skynets.” Thus, in January 2015, prominent scientists from around the world signed an open letter urging specialists to think hard about the drive to create ever more powerful artificial intelligence:
“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

Judging by Hamilton's story, AI does not always do what people want.