By B.N. Frank
According to a recent survey, a significant percentage of people worldwide support the idea of replacing their lawmakers with Artificial Intelligence (A.I.). Ditto on entrusting a “nanny robot” to take care of their children. After all, people make mistakes. Of course, A.I. makes mistakes too, so it guarantees nothing better, and a growing list of incidents verifies this.
From Wired:
Don’t End Up on This Artificial Intelligence Hall of Shame
A list of incidents that caused, or nearly caused, harm aims to prompt developers to think more carefully about the tech they create.
When a person dies in a car crash in the US, data on the incident is typically reported to the National Highway Traffic Safety Administration. Federal law requires that civilian airplane pilots notify the National Transportation Safety Board of in-flight fires and some other incidents.
The grim registries are intended to give authorities and manufacturers better insights on ways to improve safety. They helped inspire a crowdsourced repository of artificial intelligence incidents aimed at improving safety in much less regulated areas, such as autonomous vehicles and robotics. The AI Incident Database launched late in 2020 and now contains 100 incidents, including #68, the security robot that flopped into a fountain, and #16, in which Google’s photo organizing service tagged Black people as “gorillas.” Think of it as the AI Hall of Shame.
The AI Incident Database is hosted by Partnership on AI, a nonprofit founded by large tech companies to research the downsides of the technology. The roll of dishonor was started by Sean McGregor, who works as a machine learning engineer at voice processor startup Syntiant. He says it’s needed because AI allows machines to intervene more directly in people’s lives, but the culture of software engineering does not encourage safety.
“Often I’ll speak with my fellow engineers and they’ll have an idea that is quite smart, but you need to say ‘Have you thought about how you’re making a dystopia?’” McGregor says. He hopes the incident database can work as both a carrot and stick on tech companies, by providing a form of public accountability that encourages companies to stay off the list, while helping engineering teams craft AI deployments less likely to go wrong.
The database uses a broad definition of an AI incident as a “situation in which AI systems caused, or nearly caused, real-world harm.” The first entry in the database collects accusations that YouTube Kids displayed adult content, including sexually explicit language. The most recent, #100, concerns a glitch in a French welfare system that can incorrectly determine people owe the state money. In between there are autonomous vehicle crashes, like Uber’s fatal incident in 2018, and wrongful arrests due to failures of automatic translation or facial recognition.
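For readers curious what such a record might look like in code, here is a minimal sketch. It is purely illustrative: the `Incident` dataclass, its field names, and the `search` helper are assumptions for this example, not the project's actual schema or API (consult incidentdatabase.ai for those). The four sample records are the incidents named in this article.

```python
from dataclasses import dataclass


@dataclass
class Incident:
    """Hypothetical shape of one AI Incident Database record.

    Field names are illustrative only; the real project defines
    its own schema and export formats.
    """
    incident_id: int
    title: str


def search(incidents: list[Incident], keyword: str) -> list[Incident]:
    """Return incidents whose title mentions the keyword (case-insensitive)."""
    needle = keyword.lower()
    return [i for i in incidents if needle in i.title.lower()]


# The four incidents cited in the article above.
SAMPLE = [
    Incident(1, "YouTube Kids reportedly displayed adult content"),
    Incident(16, "Google Photos tagged Black people as 'gorillas'"),
    Incident(68, "Security robot drove itself into a fountain"),
    Incident(100, "French welfare-system glitch miscalculated debts owed"),
]

if __name__ == "__main__":
    for hit in search(SAMPLE, "robot"):
        print(f"#{hit.incident_id}: {hit.title}")
    # -> #68: Security robot drove itself into a fountain
```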
A.I. can be used to create misleading information and disturbing online content. It can be used to discriminate and to invade our privacy. It can be used to replace human jobs.
Like all electronics, A.I. devices are built with conflict minerals. Obsolete A.I.-driven technology will inevitably add more non-recyclable, toxic e-waste to landfills. Of course, proponents will want to replace obsolete A.I. with new A.I., so the humanitarian and environmental devastation required to create and sustain it will continue indefinitely. Doesn’t all of this qualify for the “Hall of Shame”?
Activist Post reports regularly about unsafe technology. For more information, visit our archives.