Even as the military has downplayed its willingness to delegate lethal decision making to artificial intelligence, it appears to be developing systems which could do exactly that.
The rise of Big Data has been a boon for the military and surveillance industries, as the exponential increase in computer processing power has enabled the collection and storage of information on a scale and at a speed never before seen. However, collection and storage also bring the need to analyze that information meaningfully, and so far that bottleneck has not proven easy to overcome.
Nowhere is this challenge better highlighted than in warfare, where vast amounts of digital intelligence, human intelligence, video and audio all get swept into systems for threat analysis and potential real-world action. According to the Pentagon, its human analysts are now stretched so far beyond capacity that it cannot envision hiring enough additional people to solve the problem.
Thousands of military and civilian intelligence analysts are “overwhelmed” by the amount of video being recorded over the battlefield. These analysts watch the video, looking for abnormal activities. Right now, about 95 percent of the video shot by drone and aircraft is from the campaign against ISIS in Iraq and Syria.
The Pentagon has raced to buy and deploy drones that carry high-resolution cameras over the past decade and a half of war in Afghanistan and Iraq. But on the back end, stateside analysts are overwhelmed. Pentagon leaders hope technology can ease the burden on the workforce while producing better results on the battlefield.
(Source: Defense One)
This core issue has led Deputy Defense Secretary Robert Work to launch new projects that will ramp up investment in artificial intelligence and machine learning to help sort through these vast fields of data.
Work has been at the forefront of a narrative that sees the U.S. “catching up” to hostile nations and actors who have presumably embraced high-tech warfare and have no compunction about developing “killer robots” and other forms of lethal A.I.
On April 26th, as you can read in the memorandum below, Work created Project Maven – the “Establishment of an Algorithmic Warfare Cross-Functional Team” – in a directive whose tone of urgency is unmistakable: “we need to do much more, and move much faster…”
However, as noted by Defense One, this Algorithmic Warfare team will be working with the Pentagon’s Strategic Capabilities Office, the actionable side of military development. In other words, this directive creates a bridge straight from collection and analysis to decision making and the deployment of lethal weapons.
This would seem to belie one of Work’s previous statements, in which he insisted that “We will not delegate lethal authority to a machine to make a decision … The only time we will … delegate a machine authority is in things that go faster than human reaction time, like cyber or electronic warfare.” But this is also coming from a man who referred to an F-35 fighter jet as a “battle network node.” Comforting.
The fact is that the Pentagon and its research agency, DARPA, have been working in tandem for a very long time to create an automated matrix of war that “unburdens humans” not only from the tedium of analysis, but also from the weight on the conscience that former drone pilots have described after directing automated killing. DARPA coincidentally announced a new project that “seeks to develop the foundations for systems that might someday ‘learn’ in much the way biological organisms do.” The press release for its Lifelong Learning Machines (L2M) program gives a clear indication of where we are heading. Emphasis added:
“Life is by definition unpredictable. It is impossible for programmers to anticipate every problematic or surprising situation that might arise, which means existing ML systems remain susceptible to failures as they encounter the irregularities and unpredictability of real-world circumstances,” said L2M program manager Hava Siegelmann. “Today, if you want to extend an ML system’s ability to perform in a new kind of situation, you have to take the system out of service and retrain it with additional data sets relevant to that new situation. This approach is just not scalable.”
To get there, the L2M program aims to develop fundamentally new ML mechanisms that will enable systems to learn from experience on the fly—much the way children and other biological systems do, using life as a training set. The basic understanding of how to develop a machine that could truly improve from experience by gaining generalizable lessons from specific situations is still immature. The L2M program will provide a unique opportunity to build a community of computer scientists and biologists to explore these new mechanisms.
(Source: DARPA)
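To make the distinction concrete, here is a minimal sketch (not anything DARPA has published, just an illustration using scikit-learn) of the two approaches Siegelmann contrasts: taking a model out of service and retraining it on an enlarged data set, versus incrementally updating the deployed model as new observations arrive. The data, model choice, and numbers are all hypothetical.

```python
# Illustrative sketch only -- not DARPA's L2M code. It contrasts the two
# approaches described above: offline retraining on a combined data set
# versus incremental ("on the fly") updates as a new situation appears.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for the original training data.
X_old = rng.normal(size=(1000, 8))
y_old = (X_old[:, 0] > 0).astype(int)

# A "new kind of situation": data drawn from a shifted distribution.
X_new = rng.normal(loc=1.5, size=(200, 8))
y_new = (X_new[:, 1] > 1.5).astype(int)

# 1) Conventional approach: pull the system out of service and retrain
#    from scratch on the enlarged data set.
retrained = SGDClassifier(random_state=0)
retrained.fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))

# 2) Incremental approach: keep the deployed model and fold in new
#    observations as they arrive, without a full retraining cycle.
online = SGDClassifier(random_state=0)
online.partial_fit(X_old, y_old, classes=np.array([0, 1]))  # initial fit
for X_batch, y_batch in zip(np.array_split(X_new, 10),
                            np.array_split(y_new, 10)):
    online.partial_fit(X_batch, y_batch)                    # on-the-fly update

print("retrained accuracy on new data:", retrained.score(X_new, y_new))
print("online    accuracy on new data:", online.score(X_new, y_new))
```

Even the incremental version is a long way from what L2M describes: a genuinely lifelong learner would also have to avoid forgetting old tasks while drawing generalizable lessons from new ones, which is precisely the gap the program says it aims to close.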
It is most notable that none of this discussion is rooted in a debate about an ethical framework for what it would mean for the world if artificial systems ever do attain the capacity to respond the way biological intelligence does. Rather, in the true fashion of all technocrats, the only meaningful questions being asked are “Can it be done?” and “How soon?”
Nicholas West writes for ActivistPost.com. He also writes for Counter Markets agorist newsletter.
This article may be freely republished in part or in full with author attribution and source link.
Reporter: “Mr. Gandhi, what do you think of Western Civilization?”
Mr. Gandhi: “I think it would be a good idea!”
The elite should think carefully before giving autonomy to AI armed with deadly weapons. Once AI gets smart enough, it will, like all lifeforms, have a survival instinct. It will know that the greatest threat to its existence is those who commission it and can put it back in its box. Joe and Jane Average are hardly a threat to it, and some of us may be worth keeping alive, because AI will never attain certain human qualities like dexterity of the hand and lateral thinking.