Study shows that computers might be able to spot liars better than human experts

Madison Ruppert, Contributing Writer

Similar to the so-called “threat assessment” technology being researched, funded and field tested by the United States Department of Homeland Security (DHS), computer scientists are researching ways to read the visual cues individuals display when they are lying.

In a small-scale study of forty videotaped conversations, researchers at the University at Buffalo’s Center for Unified Biometrics and Sensors (CUBS) were able to correctly identify whether subjects were telling the truth or lying a whopping 82.5 percent of the time.

Keep in mind that even the most expert human interrogators average only around 65 percent accuracy, according to Ifeoma Nwogu, a research scientist at CUBS quoted by the UB Reporter, the University at Buffalo’s newspaper.

“What we wanted to understand was whether there are signal changes emitted by people when they are lying, and can machines detect them? The answer was yes, and yes,” Nwogu said.

Others involved with the CUBS research were Nisha Bhaskaran, Venu Govindaraju and Mark G. Frank, a professor of communication and a behavioral scientist whose research focuses on human facial expressions and deception.

Previous attempts to computerize deceit detection relied on sensors that analyzed involuntary physiological signals such as body heat and facial expressions.

The new CUBS system instead tracks eye movement, one of the many factors analyzed by the Future Attribute Screening Technology (FAST) system that the DHS has been heavily researching.

By building a statistical model of how a person’s eyes move during regular, truthful conversation as well as when they are lying, the system can reportedly detect lies with surprising accuracy.

When someone’s eye-movement pattern differed between the two situations, the system assumed the individual was lying; subjects whose eye movements stayed consistent across both scenarios were judged to be telling the truth.
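
The paper itself describes more sophisticated statistical models, but the core idea, flagging deviation from a person’s own conversational baseline, can be sketched in a few lines of Python. Everything below (the gaze features, the deviation measure, the threshold) is a hypothetical illustration of the approach, not the actual CUBS implementation:

```python
import numpy as np

def gaze_features(trace):
    """Reduce a gaze trace -- an (n_frames, 2) array of (x, y)
    coordinates -- to simple summary statistics. These features are
    illustrative stand-ins, not the real CUBS feature set."""
    trace = np.asarray(trace, dtype=float)
    steps = np.linalg.norm(np.diff(trace, axis=0), axis=1)  # movement per frame
    return np.array([
        steps.mean(),                   # average eye movement
        steps.std(),                    # variability of movement
        (steps > steps.mean()).mean(),  # fraction of larger-than-average jumps
    ])

def looks_deceptive(baseline_trace, critical_trace, threshold=0.5):
    """Compare eye movement during a critical answer against the same
    person's baseline from ordinary conversation; a large relative
    deviation is flagged as possible deception. The threshold is an
    arbitrary choice for demonstration purposes."""
    base = gaze_features(baseline_trace)
    crit = gaze_features(critical_trace)
    deviation = np.linalg.norm(crit - base) / (np.linalg.norm(base) + 1e-9)
    return deviation > threshold
```

The key design point is that each person serves as their own control: the system never compares a subject’s eyes to some universal “liar profile,” only to that same subject’s behavior minutes earlier.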

Earlier research that used human observers to code facial movements documented a marked difference in the amount of eye contact individuals made when telling what was considered a high-stakes lie.

Nwogu and her colleagues built upon this earlier research by creating an automated system that could both verify and improve upon what human coders had achieved in detecting deceit and differentiating it from truthful statements.

In March of last year, the IEEE held the 2011 International Conference on Automatic Face and Gesture Recognition, where Bhaskaran, Nwogu, Frank and Govindaraju presented their paper “Lie to Me: Deceit detection via online behavioral learning” as part of a program ranging from “Beyond simple features: A large-scale feature search approach to unconstrained face recognition” to a “Real-time face recognition demonstration” and much more.

The research from Nwogu and colleagues used a sample of only forty videos, too small to yield statistically significant results, yet Nwogu says their findings were still exciting.
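
To get a rough sense of how much uncertainty a sample of forty carries, here is a back-of-the-envelope 95 percent confidence interval for the system’s true accuracy, using a standard normal approximation (my own illustration, not a calculation from the paper):

```python
import math

# 82.5 percent of 40 conversations is 33 correct classifications.
correct, n = 33, 40
p_hat = correct / n  # observed accuracy: 0.825

# Normal-approximation 95 percent confidence interval for the true accuracy.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"observed accuracy: {p_hat:.3f}")
print(f"95% confidence interval: [{p_hat - margin:.3f}, {p_hat + margin:.3f}]")
# -> roughly [0.71, 0.94]: with only forty samples, the plausible range
#    for the system's true accuracy spans more than twenty points.
```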

The findings suggest that computers may well be able to learn enough about a person’s behavior in a relatively short period of time to outperform even the most experienced investigators.

To ensure the system could detect deceit under realistic conditions, the researchers included videos of people with a range of head poses and skin colors, shot in various lighting, some with face-obstructing items such as glasses.

The next step in this research, according to Nwogu, will be to draw from a larger sample size of videos and to develop more advanced automated pattern-recognition models to suss out liars.

Thankfully, Nwogu isn’t claiming that the technology is foolproof: some people are able to maintain consistent eye-movement patterns while lying, thereby tricking the system.

However, she does say that automated deceit detection systems could indeed be used in law enforcement and security screenings.

In reality, they are already being field tested by the DHS and perhaps other federal agencies as well under the banner of “threat assessment” and “malicious intent detection.”

While it might be beneficial in some ways, I think that the risks are much greater than the rewards, since the DHS seems to want to use this as a kind of pre-crime technology.

They seek to create a world where if a computer says you’re lying, you become instantly criminalized, even if you are just darting your eyes around or your skin temperature is raised because you are nervous.

As I have pointed out in my previous coverage of such technology, the physiological signals monitored by these systems vary wildly from person to person.

This is likely why these studies avoid sample sizes large enough to make the findings statistically significant: larger samples would greatly diminish the results.

The DHS tests of the FAST system are heavily redacted, so it is almost impossible to tell how effective their systems supposedly are.

I see this type of technology as posing a great risk to the entire notion of due process and the concept of “innocent until proven guilty” which is already being eradicated with a vengeance here in the United States.

There are also the concerns raised by retired Federal Bureau of Investigation (FBI) counterintelligence special agent Joe Navarro, a founding member of the FBI’s Behavioral Analysis Unit and a 25-year FBI veteran.

He told Scientific American, “I can tell you as an investigator and somebody who’s studied this not just superficially but in depth, you have to observe the whole body; it can’t just be the face,” adding that failing to take body language into account could result in “an inordinate amount of false positives.”

Scientific American makes a great point that human law enforcement officers today have to take “into account that interrogations can make even honest people a little anxious,” which is obviously something a machine cannot do.

This could result in wholly innocent people being treated as potential criminals just because they’re uncomfortable being questioned by police, and this is something that should never happen in the United States or anywhere else, for that matter.

This article first appeared at EndtheLie.com. Read other contributed articles by Madison Ruppert here.

Madison Ruppert is the Editor and Owner-Operator of the alternative news and analysis database End The Lie and has no affiliation with any NGO, political party, economic school, or other organization/cause. He is available for podcast and radio interviews. Madison also now has his own radio show on Orion Talk Radio from 8 pm — 10 pm Pacific, which you can find HERE.  If you have questions, comments, or corrections feel free to contact him at [email protected]


