Jon Rappoport
Activist Post
Research on simulating the human brain is marching forward. Corporations are attempting to build devices that talk to their users in a “realistic” fashion.
These computers would continuously update profiles of their owners, seeking to read their emotional states and preferences and respond to them.
The old phrase, “the machine age,” takes on new meaning. Sellers are betting that consumers want machines that understand them. This bet has a corollary: human-to-human interaction is just too complicated and unpredictable.
Instead, machines can be programmed to reflect their users. Narcissism wins.
“I’m your machine. I’m not here to criticize you or challenge you. I’m here to be like you and serve your needs. I’m here to talk to you in ways you understand and appreciate.”
This is a far cry from the robotic telephone operator who puts you on hold for 20 minutes. This is friendship. This is happiness.
There’s one major stumbling block. The emotional range of an alive and alert human is too wide, too subtle, and too varied to embed in a machine that is supposed to stand in as a friend and companion.
The response to that problem is: reduce the range of the human user.
This campaign has been underway for some time. Watch movies, watch television shows and video games, listen to popular music, listen to politicians. It’s all about reduction. Simplification. Lowest common denominators.
Observe the slogans of social movements. If you have the stomach for it, go into a public school and watch what teachers are doing to your children.
Check out New Age-type spiritual movements. Notice how they tend to sell oversimplified slogans and encourage focusing on empty generalizations.
You see, the individual is too complex for this new machine age. His range of feeling and thought must be diminished.
Eventually, he’ll interact with a sophisticated talking computer and feel right at home. He’ll believe his emotions are being mirrored and appreciated.
Reduction. Never proliferation.
If you’ve ever studied infomercials, you know the whole business is based on back-end sales. It’s not the product you buy for $19.95, it’s the products they can hook you into after you spend the $19.95.
So it is with Google Glass. It’s all about the apps that’ll be attached.
Glass gives the wearer shorthand reality as he taps in. That’s what it’s for. The user is “on the go.” If he’s driving his Lexus and suddenly thinks about Plato, he’s not going to download the full text of The Republic to mull over while he’s crashing into big trucks on the Jersey Turnpike. He’s going to take a shorthand summary. A few lines.
People want boiled-down info while they’re on the move. Reduction. The “essentials.”
This is perfectly in line with the codes of the culture. Ads, quick-hitter seminars, headlines, two-sentence summaries, ratings for products, news with no context. Stripped-down.
Well, here is a look at right now. A student at Stanford is developing a Google Glass app that “reads other people.”
From SFGate, 8/26/13, “Google Glass being designed to read emotions”: “The [emotion-recognition] tools can analyze facial expressions and vocal patterns for signs of specific emotions: Happiness, sadness, anger, frustration, and more.”
This is the work of Catalin Voss, an 18-year-old student at Stanford, and his start-up company, Sension.
So you’re wearing Google Glass at a meeting, and it checks out the guy across the table, who has an empty expression on his mug, and, above your right eye, you see the word “neutral.” Now he smiles, and the word “happy” appears.
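To see how blunt that reduction is, here is a minimal sketch, in Python, of the kind of pipeline such an app implies. Everything in it, the feature names, the thresholds, the labels, is hypothetical; this illustrates the general technique, not Sension’s actual method.

```python
# A purely illustrative sketch: continuous facial measurements collapse
# into a single word. All features, thresholds, and labels here are
# hypothetical, not Sension's actual model.

from dataclasses import dataclass

@dataclass
class FaceFeatures:
    mouth_curvature: float  # > 0 means corners up, < 0 means corners down
    brow_lowering: float    # 0..1, how far the brows are knit
    eye_openness: float     # 0..1

def classify(face: FaceFeatures) -> str:
    """Collapse a multidimensional expression into one label."""
    if face.brow_lowering > 0.6:
        return "angry"
    if face.mouth_curvature > 0.3:
        return "happy"
    if face.mouth_curvature < -0.3:
        return "sad"
    return "neutral"  # irony, grief, boredom, awe: all "neutral"

# The guy across the table smiles politely while worrying about his job:
print(classify(FaceFeatures(mouth_curvature=0.4,
                            brow_lowering=0.2,
                            eye_openness=0.8)))
# -> "happy". The mixed state is gone; only the label survives.
```

Whatever the real model looks like, a classifier’s output is always one label from a fixed list. The subtlety lives, or dies, in what that list leaves out.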
This information is supposed to guide you in your communication. The number of things that can go wrong? Count the ways, if you’re able. I’m personally looking forward to that guy across the table saying, “Hey, you, schmuck with the Glass, what is your app saying about me now? Angry?” That should certainly enhance the communication.
Or a husband, just back from his 12-mile morning bike ride, enters his Palo Alto home, wearing Glass, of course, and as he looks at his wife, who is sitting at the kitchen table reading a book, he sees the word “sad” appear above his eye. “Honey,” he says, recalling the skills he picked up in a 26-minute webinar, “have you been pursuing a negative line of thinking?”
She slowly gazes up at the goggle-eyed monster in his spandex and grasshopper helmet, rises from her chair and tosses a plate of hot eggs in his face. YouTube, please!
But wait. There’s more. The Glass app is also being heralded as a step forward in “machine-human relationships.” Paired with voice assistants like Google Now and Siri, computers will be able to respond not only to the content of the user’s words, but also to his tone, his feelings.
This should be a real marvel. The emotion-recognition tool is all about reduction. It shrinks human feelings to simplistic labels. Therefore, what machines say back to humans will be something to behold.
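The same shrinkage applies to the voice. Another toy sketch, with equally made-up features and cutoffs: two crude numbers, loudness and agitation, stand in for everything a voice can carry.

```python
# An equally illustrative sketch for tone of voice. The features and
# cutoffs are invented for demonstration; real systems are fancier,
# but the output is still one label from a short list.

import math

def vocal_label(samples: list[float]) -> str:
    """Map raw audio samples (-1.0..1.0) to one coarse label."""
    # RMS energy: a rough proxy for loudness.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Zero-crossing rate: a rough proxy for pitch and agitation.
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    zcr = crossings / len(samples)
    if rms > 0.5 and zcr > 0.1:
        return "upset"
    if rms < 0.1:
        return "sad"
    return "neutral"

# A quiet voice reading a eulogy and a quiet voice reading a grocery
# list come out identical:
print(vocal_label([0.05 * math.sin(i / 10) for i in range(16_000)]))
# -> "sad"
```

Whatever the machine says back can only be as rich as the label it starts from.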
Machine version of NLP, anyone?
The astonishing thing about this new app is that many tech people are so on board with it. In other words, they believe that human feelings can be broken down and worked with on an androidal basis, with no loss incurred. These people are already boiled down, cartoonized.
You think you’ve observed predictive programming in movies? That’s nothing. The use of apps like this one will help bring about a greater willingness on the part of humans to reduce their own thoughts and feelings to…FIT THE SPECS OF THE MACHINES AND THE SOFTWARE.
Count on it.
This isn’t really about machines acting more like humans. It’s about humans acting like machines.
The potential range of human emotions is extraordinary. Our language, when used with imagination, actually extends that range. It’s something called art.
No matter how subtle the machines and their emotion-recognition algorithms become, there will always be a wide, wide gap between what they produce and the expression of humans.
The most profound kind of mind control seeks to eliminate that gap by encouraging us to mimic technology. That means people will think and feel less, and what they think and feel will mean less.
The machines won’t say, “I’m sorry, I can’t identify that emotion, it’s too complex.” They’ll say “sad” or “happy” or “upset” or whatever they have to say to give the appearance that they’re on top of the human condition.
Eventually, significant numbers of people will tailor their self-awareness to what the machines point to, name, label, declare.
The wolf becomes a lamb, the lamb becomes a flea.
And peace prevails. You can wear it and see with it.
Eventually, realizing that Glass is too obvious and obnoxious and bulky, companies will develop something they might call Third Eye, a chip the size of half a grain of rice, made flat, and inserted under the skin of the forehead.
Perfect. Invisible. Of course, cops will have them. And talk to them.
“I’m parked at the corner of Wilshire and Westwood. Suspicious male standing outside the Harmon Building.”
“I see him. Searching relevant data.”
Which means any past arrests, race, conditions noted in his medical records, tax status, questionable statements he’s made in public or private, significant known associates, group affiliations, etc. And present state of mind.
The cop: “Recommendation?”
“Passive-aggressive, right now he’s peaking at 3.2 on the Hoover Bipolar scale. Bring subject into custody for general questioning.”
“Will do.”
No one will wonder why, because such analysis resonates with the vastly reduced general perception of what reality is all about.
People mimic how machines see them and adjust their human thinking accordingly.
Hand and glove, key and lock. Wonderful.
As the cop is transporting the suspect to the station, Third Eye intercedes: “Sorry, Officer Crane, it took me a minute to dig further. Suspect is an important business associate of (REDACTED). This is a catch and release. Repeat, catch and release. Printing out four backstage passes to Third Memorial Rolling Stones concert at the Hollywood Bowl. Apologize profusely, give subject the tickets, and release him immediately.”
“I copy.”
“This arrest and attendant communication is being deleted…now.”
Here is another long-term trend that has conspired to produce humans who want to interact with machines in a virtual world: child entitlement.
Give a child what he wants when he wants it. Every time. Become a slave to your child’s immediate needs. (And when you’re exhausted from that routine, just set him up in front of the television set, where he can experience fast-cutting shows that entrain his brain to accept a shortened attention span. More reduction.)
It’s easy. And 30 years from now, a child won’t even want his parents, because his companion, friend, and guide, his personal machine, a little cube he carries around with him, will understand him so much better.
“Good morning, Jimmy. It’s me again, your friend Oz. How are you feeling? Happy, sad? Let me do a quick scan. I see you’re a little sad…”
Jon Rappoport is the author of two explosive collections, The Matrix Revealed and Exit From the Matrix. Jon was a candidate for a US Congressional seat in the 29th District of California. Nominated for a Pulitzer Prize, he has worked as an investigative reporter for 30 years, writing articles on politics, medicine, and health for CBS Healthwatch, LA Weekly, Spin Magazine, Stern, and other newspapers and magazines in the US and Europe. Jon has delivered lectures and seminars on global politics, health, logic, and creative power to audiences around the world. You can sign up for his free emails at www.nomorefakenews.com