Nicholas West
Activist Post
The projected arrival of The Singularity, the point at which artificial intelligence surpasses that of humans, keeps moving closer. In his 2005 book The Singularity Is Near, Ray Kurzweil put the date at 2045.
According to a new TEDx lecture by robotics pioneer David Hanson (posted below), that date is likely to arrive much sooner, based on current trends of exponential growth in computing and technology.
The question remains: what will this new superintelligence really look like?
Many scientists and robotics experts are beginning to sound the alarm that the proper parameters for robot development and interaction with humans have not been established within a foolproof ethical framework. This has led human rights organizations and universities such as Cambridge to issue formal cautions against the inevitable rise of “Terminator robots” outfitted with all of the current moral capacity of the human beings who have given us modern weapons of warfare. Such a superintelligence would identify humans themselves as part of the threat matrix and seek to eradicate us.
This concern has spawned an unresolved dialogue at the United Nations, and even the U.S. military is seeking ways to create moral, ethical robots. While I would hardly suggest that the United Nations or the U.S. military are the places to look for help on issues of peace, the debate about what we can collectively do to prevent dystopian science fiction from becoming reality needs to intensify, and quickly.
Hanson sees many upsides to human-robot cooperation and to the evolution of humanoid robots with creative abilities similar to our own, but he asserts that we need to incorporate more humanity into robot development. One then has to wonder: what type of human? The type that seeks hierarchical control and total domination, and that fabricates wars to reshape the geopolitical landscape? Clearly that would be a step backward, not forward. Can a robot be taught only empathy and compassion, and none of the more destructive human traits?
It’s an important lecture covering a future that is fast approaching and will affect nearly everyone. In fact, far more of this technology already exists than you might imagine. Please leave your comments about what we should be doing:
- Do we implement some sort of international ban on creating killer robots?
- Do we permit the current level of autonomous warfare systems, such as drones, but demand that humans give the final directives and ultimately bear full responsibility for any violations?
- Do we issue a moratorium on merging artificial intelligence with robots until a full range of independent research can be thoroughly considered?
We look forward to your thoughts.
Hat tip: 33rd Square
Recently From Nicholas West:
- Emotional Robots Aim to Become “Man’s New Best Friend”
- Mind Control Scientists Claim Ability To Turn Off Consciousness
- Predictive Technology: A New Tool For The Thought Police
- Robots to Get Internet Cloud Brain: “Wikipedia For Robots”
- Artificial Intelligence Researchers Want Survival of the Fittest for Robots