Tech Giants Team up to Sell Artificial Intelligence to the Public Despite Dire Predictions

By Josie Wales

Technology powerhouses Microsoft, IBM, Facebook, Google, and Amazon announced yesterday that they have joined forces to create the Partnership on Artificial Intelligence (AI) to Benefit People and Society.

According to their website, the non-profit organization was “[e]stablished to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

The non-profit’s website makes clear that the coalition’s main goal is to convince the public that these companies are researching and developing artificial intelligence ethically and for people’s benefit. For people to want to purchase AI products when they become available, they first have to trust the underlying technology, and right now, not many people do.

It should come as no surprise that software programmed by humans, who are flawed, will reflect and even amplify those human flaws. The White House released a report in May highlighting the major potential for discrimination in “Big Data.” A quick look at the table of contents shows that mitigating discrimination is a challenge in every area the report examines. A report published by ProPublica found that risk assessment programs used in courtrooms across the nation turned up significant racial disparities, falsely labeling black defendants as future criminals at nearly twice the rate of white defendants.
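
The disparity ProPublica describes is, at bottom, a gap in false positive rates: the share of people who did not go on to reoffend but were nonetheless labeled high risk, broken out by group. As a rough illustration only (the records below are invented for this sketch and are not ProPublica’s data), here is how such a check might look in Python:

```python
# Illustrative sketch only: measuring false positive rates by group.
# The records below are invented for demonstration; they are not real data.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still labeled high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return float("nan")
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    group_rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(group_rows), 2))
```

With these made-up numbers, group A’s false positive rate comes out at roughly double group B’s, which is the kind of gap the ProPublica analysis reported between black and white defendants.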

Though the public is skeptical of AI, most people are unaware of these shortcomings.

Even so, a poll conducted by the British Science Association shows “60 per cent think that the use of robots or programmes equipped with artificial intelligence (AI) will lead to fewer jobs within ten years, and 36 per cent of the public believe that the development of AI poses a threat to the long term survival of humanity.”

In 2014, theoretical physicist and author Stephen Hawking penned an op-ed in the Independent cautioning against complacency about the risks of AI:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.

While Hawking acknowledged and praised advancements in technology using AI, which have undoubtedly improved the quality of life for many, he warned in a 2014 interview that “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

In 2015, Hawking signed an open letter released at that year’s International Joint Conference on Artificial Intelligence warning that artificial intelligence could be more dangerous than nuclear weapons. He was joined by Apple, Inc. co-founder Steve Wozniak and inventor and tech mogul Elon Musk, along with thousands of other industry figures and AI researchers. The letter was put together by the Future of Life Institute and warns of the very serious dangers of AI weapons:

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Musk has gone so far as to say that “[w]ith artificial intelligence we are summoning the demon,” calling AI “our biggest existential threat.” He has stated his investment in the AI research company DeepMind is intended to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Even Bill Gates stated last year:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

It’s important to point out the myriad ways in which AI is saving lives and improving the quality of life of millions across the globe. It’s even more important to be careful not to embrace a technology that could easily slip out from under our control. Earlier this year, TechCrunch published a thought-provoking article by Doc Huston on the subject, which pointed out that “no one knows where the actual crossover point — the edge or tipping point — exists, and thus we mortals are unlikely to be able to prevent it from occurring. Said differently, there is a very high probability that we will misjudge where that crossover point is and will thus go beyond the key threshold. Overshooting is the norm in biology and in most, if not all, evolving systems, but especially man-made ones.”

While many of these AI research companies are conducting what is likely harmless research with good intentions, and their work may well mitigate threats to humans, Huston warns “there is no reason to assume…that governments and military organizations throughout the world will play by the same rules. Rather, as all of history tells us, they will bend or break rules however they see fit under the claim that the ends justify the means. That is classic realpolitik — if we don’t do it, ‘they’ will…and we lose.”

After all, Microsoft, IBM, Facebook, Google, and Amazon already have ties to the government in one form or another, and the Department of Defense is planting firm roots in Silicon Valley.

