Artificial intelligence means many different things to many different people. One thing is certain: it is coming, and it brings with it both opportunities and threats. Understanding both is essential, because artificial intelligence is already working its way into many aspects of our lives, from search engines and personal assistants to algorithms monitoring and controlling everything from energy consumption to traffic.
What is Artificial Intelligence?
Simply put, artificial intelligence is a computer capable of exhibiting intelligence. This is done through processes such as learning and problem solving. Much of the progress being made now in the field of artificial intelligence is through a process known as machine learning.
A computer is presented with a large amount of data and taught how to categorize and handle it. Just as a child attending school is given positive or negative feedback, so is the computer. Unlike a child, however, current machine learning requires huge amounts of data and repetition for this learning to occur.
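The feedback loop described above can be sketched with a classic toy example: a perceptron that nudges its weights a little after every right or wrong answer, and only after many repetitions settles on a rule. The task, data, and parameters below are illustrative assumptions, not drawn from any system mentioned in this article.

```python
# Minimal sketch of learning by positive/negative feedback: a perceptron
# classifying points as above or below the line x + y = 1. All values
# here are illustrative assumptions chosen for the demonstration.

def predict(weights, bias, x):
    """Return 1 if the weighted sum crosses the threshold, else 0."""
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total + bias > 0 else 0

def train(samples, labels, epochs=200, lr=0.1):
    """Repeatedly show every example; nudge weights after each error."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in zip(samples, labels):
            # error is the "feedback": 0 if right, +1/-1 if wrong
            error = label - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Points labeled 1 when x + y > 1, else 0
samples = [(0, 0), (0, 1), (1, 0), (1, 1), (0.2, 0.3), (0.9, 0.8)]
labels = [0, 0, 0, 1, 0, 1]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # matches labels after training
```

Note how many passes over the same six points the loop makes before the weights stabilize; scaled up to millions of photos or billions of search queries, this repetition requirement is exactly why data-rich corporations hold the advantage described below.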
This explains why corporations like Google, Facebook, Baidu, and other search engines and social networks with millions of users and almost limitless data have a distinct advantage when it comes to machine learning.
For instance, Facebook, with a user base of nearly 2 billion people, has revealed that users upload up to 350 million new photos a day, according to Business Insider.
This has allowed Facebook to use machine learning to do everything from automatically tagging photos through automated facial recognition, to developing a means of determining where a picture was taken and what is taking place within it.
Artificial intelligence is being used to teach self-driving cars how to navigate roads without human intervention.
Google’s Google Assistant, Apple’s Siri, and Amazon’s Alexa are examples of artificial intelligence being used to learn the habits of individual users to anticipate their needs, just as a human assistant does.
Apple’s Siri began as a DARPA project called “CALO,” according to the New York Times, and illustrates how even the military has sought to develop and utilize artificial intelligence, and the blurring lines between military applications and those offered by corporations.
What are the More Immediate Opportunities and Risks?
Currently, AI makes it possible to create tools that are extremely focused on a very specific task but can do that task better than even the best humans.
This includes visual algorithms that examine photos of eyes to diagnose conditions like diabetic retinopathy as well as, or better than, the best-trained physicians, as reported by MIT Technology Review.
Other applications include sifting through immense amounts of data regarding financial information, energy consumption, or the mechanical operation of a system and optimizing basic decision making to improve performance.
Tesla’s Autopilot feature, which uses AI to assist drivers, has reportedly reduced accidents by up to 40%.
Because many of the tools used to develop AI are currently open source, both large and small businesses can take advantage of them to solve very specific problems by optimizing very specific tasks with AI.
In a recent interview, Tim Hwang, Google’s Global Public Policy Lead on AI and Machine Learning, explained that in the near future most opportunities may revolve around creating better interfaces between AI systems and human users, as well as “artisanal” AI applications that solve unique problems for individual users in a highly customized way.
Tim also noted what he believed was a deficiency in testing for exploits and abuses of AI applications, and said that more rigorous testing of offensive and defensive methods is needed to truly understand the risks involved.
It also stands to reason that for any positive advantage AI might give honest government administrators, businesses, or individuals, an equally negative advantage could be lent to dishonest organizations or individuals.
Exploring some of the more sinister applications of AI might begin with the US National Security Agency’s (NSA) SKYNET program in Afghanistan and Pakistan, where machine learning algorithms sift through Pakistan’s telecommunications data to identify and track individuals exhibiting the behavior of a courier working for militant organizations, based solely on their geographic movements. Once identified, the information would then be used as targeting information for the CIA’s network of armed drones.
This program was revealed in a leaked document published by The Intercept, titled “SKYNET: Courier Detection via Machine Learning.” Whether this targeting information was actually used to kill suspected couriers is still unknown. At a minimum, the algorithm would give the government a stronger rhetorical pretext to essentially execute people in foreign lands without due process or a declaration of war.
Between DARPA’s CALO project and this more recent use of AI by US intelligence agencies, it is clear that state actors are seeking ways of using AI for military applications, or essentially, weaponizing AI.
A real threat emerges if one nation acquires an advantage over another and is able to apply AI the way it is used in more constructive applications: doing a task better with AI than any human adversary ever could, giving one party an unmatched advantage and the ability to wage war without consequence.
Keeping AI and Minds Open is Our Best Defense
Keeping any single corporation or government from acquiring such an advantage, and thereby negating the temptation to exploit a technological monopoly as has happened countless times throughout human history, requires the playing field to remain as even as possible.
If nations and institutions remain on equal but opposing, or even somewhat collaborative, footing in developing and applying AI, no one party will be tempted to use AI offensively for fear of facing both sufficient defenses and formidable counteroffensive measures.
In many ways, Google, Amazon, Facebook and others have advocated this. And while engineers at each of these firms seem committed to this notion, it is unclear whether or not those steering these large corporations actually do as well.
While many of the tools these corporations use to develop AI are open source, allowing others to develop similar or even competing systems, owning the foundation of an emerging AI ecosystem, such as Google’s cloud services and TensorFlow machine intelligence library, gives Google a distinct advantage over how AI development unfolds.
An ecosystem like this encourages collaboration, but just as a single operating system dominating computers or mobile devices invites abuse, a single platform for developing AI may lead to exploitation on a scale equal to the backdoors, viruses, malware, and invasive surveillance and manipulation that currently spread through popular operating systems.
This is not to say that Google engineers are knowingly working toward such abuses, merely that each approach has its opportunities and risks. Diversity among operating systems lends some protection, just as biological diversity protects against pathogens unable to jump from one species to another. A similar approach to diversifying AI may mitigate some of these threats.
Keeping AI open and preventing monopolies has been the reasoning behind Elon Musk’s OpenAI initiative. Its mission statement claims:
OpenAI’s mission is to build safe AGI [artificial general intelligence], and ensure AGI’s benefits are as widely and evenly distributed as possible. We expect AI technologies to be hugely impactful in the short term, but their impact will be outstripped by that of the first AGIs.
Musk has also mentioned the possibility of creating more efficient interfaces with technology and even the concept of “AI agents” that act on behalf of individuals.
One possible advantage of individual AI agents is that while one organization or individual may use AI with malevolent intentions, an entire population empowered with AI will be able to absorb and overwhelm the threat before significant damage can be incurred.
In much the same way the Internet works, where malevolent intentions are in abundance but are ultimately outnumbered by organizations and individuals who simply want to go about their business, a similar balance regarding AI in the future may help reduce some of the greatest risks and open up some of the greatest opportunities this technology has to offer.
The Elephant in the Room: The Rise of Sentient AI
In terms of an independent, sentient AI system more intelligent than humans, Elon Musk has compared it to knowing that an advanced race of aliens was coming to Earth in the next 10-20 years. What would we do to prepare for it?
The prospect of humanity eliminating a race several orders of magnitude more intelligent than itself is as realistic as zoo animals rebelling against their zookeepers, or ants fending off a determined exterminator. To be truly prepared, we must ensure that when these aliens land, we are at intellectual, economic, and martial parity with them, so that the sort of disparity behind our worst fears regarding the rise of AI does not exist.
Of course, this presents us with a paradox. In order to achieve parity, we would, by necessity, need to merge with machines on one level or another and assume for ourselves the same advantages an independent, sentient AI would possess. By doing so, we may irrevocably alter the very nature of humanity. If we fail to properly prepare, however, we may be extinguished altogether.
And unfortunately, “un-inventing” AI or simply not developing sentient AI is not an option. If human history has taught us anything, it is that if it can be done, it will be done. It is not a matter of whether we will share the room with this elephant, but when, how soon we admit it is there, and ultimately, what we do about it.
It is a problem with no clear, definitive solution. Predicting what happens when AI, or humanity itself, transcends our natural limits of intelligence is difficult. Paying close attention to how the current AI revolution unfolds may reveal a pattern or clue, as will understanding the advantages non-sentient AI is already lending the users developing and applying it today.
One thing is for certain; the luxury of saving the AI issue for another day can no longer be enjoyed. It is time to begin thinking about it, before it starts thinking about us.
This article first appeared on Wishful Thinking, a high-tech, political, and lifestyle blog.
What is not discussed here are the more technical aspects of how AI works. Sentient AI would essentially be a self-modifying computer program connected to various electronic sensor inputs and outputs. The self-modifying aspect of the program gives it the ability to “learn.” To be sure, if such an AI is connected to the huge digital databases available, it will seem extremely intelligent. But what is not being discussed is the danger that comes with the self-modifying nature of the program.
Microprocessors perform billions of operations per second, which means an AI can learn very quickly and its original program can change very quickly. Within a few seconds it could be nothing like the original program, and it will keep changing every second thereafter. This means the AI would have a mind of its own and could quickly evolve into something very different from the original human intent, like the computer HAL in the film 2001.
Couple that with the terminator-style robots and drones that militaries are developing, and you could have a serious problem. Fast-forward to a future in which robots create their own robots on robotic assembly lines, or self-replicate: they might learn that humans are a threat to the planet and conceivably decide to eliminate the problem.