By Chris Burt
Businesses deploying emotion recognition algorithms need to be aware of the legal risks that they could incur in a regulatory environment that is inconsistent across the United States, and becoming more so as states pass data privacy laws governing biometrics and other personal information. This is the view advanced by Lena Kempe of LK Lawfirm in an article in the American Bar Association’s Business Law Today.
Emotion AI comes with some of the same concerns as biometric technologies, such as risks to data privacy and bias. It also introduces the possibility that individuals whose emotions are read by automated systems could be manipulated by them.
Kempe suggests that the market is growing. The article cites a forecast from market analyst Valuates, which sets revenues in the field at $1.8 billion in 2022, and predicts rapid growth to $13.8 billion by 2032 as businesses attempt to improve online user experiences and organizations address mental health and wellbeing.
Kempe also notes that Affectiva was performing advertising research for a quarter of the companies in the Fortune 500 as of 2019. A year and a half later, the company said that share had risen to 28 percent, and today it stands at 26 percent, somewhat undercutting the claim of rapid growth.
Emotion AI uses data such as the text and emojis contained in social media posts; facial expressions, body language, and eye movements captured by cameras; and the tone, pitch, and pace of voices captured by microphones and shared over the internet. Biometric data such as heart rate can also be used to detect and identify emotions, as can behavioral data like gestures.
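To make the range of signals concrete, here is a minimal sketch of how such inputs might be represented before being fed to a hypothetical emotion-classification model. The structure and field names are illustrative assumptions, not drawn from Kempe's article or any particular product.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: the fields below are assumptions about the kinds of
# signals an emotion AI system might collect, not a real vendor schema.
@dataclass
class EmotionSignalRecord:
    # Text signals, e.g. a social media post including emojis
    post_text: Optional[str] = None
    # Camera-derived signals
    facial_expression: Optional[str] = None
    eye_movement: Optional[str] = None
    body_language: Optional[str] = None
    # Microphone-derived signals: tone, pitch, and speaking pace
    voice_tone: Optional[str] = None
    voice_pitch_hz: Optional[float] = None
    speaking_pace_wpm: Optional[float] = None
    # Physiological signal
    heart_rate_bpm: Optional[float] = None
    # Any identifier that can link the record back to a person is what pulls
    # the data into the scope of GDPR and U.S. state privacy laws.
    user_id: Optional[str] = None
```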
If this data or its output can directly identify a person, or can reasonably be linked to an individual, it falls under the category of personal information. This, in turn, brings it into the scope of the European Union's General Data Protection Regulation (GDPR) and a raft of diverse U.S. state data privacy laws. In some cases outlined by Kempe, the information can qualify as sensitive personal data, triggering further restrictions under the GDPR and state law.
The frequent use of biometric data for emotion AI also introduces regulatory risk from Illinois’ Biometric Information Privacy Act (BIPA) and similar laws being passed or considered elsewhere around the country.
Kempe advises businesses to cover any emotion data in comprehensive privacy notices, to minimize the data they collect and store, to anonymize it where possible, and to review and update policies so that data handling is limited to the specific purpose for which it is collected. They should also implement opt-in consent where sensitive personal data is involved, along with robust security measures.
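As a rough illustration of two of those practices, the following is a minimal sketch, assuming a hypothetical processing pipeline, of purpose-based data minimization and pseudonymization of identifiers. The field names, purposes, and salt handling are assumptions for illustration, not a compliance recipe or Kempe's own guidance.

```python
import hashlib
import os

# Purpose limitation (assumed example purposes): only retain the signals
# needed for the declared processing purpose.
ALLOWED_FIELDS_BY_PURPOSE = {
    "ad_testing": {"facial_expression", "voice_pitch_hz"},
    "wellbeing_check": {"heart_rate_bpm", "speaking_pace_wpm"},
}

# Assumption: a salt supplied via the environment; in practice it would need
# proper key management.
SALT = os.environ.get("EMOTION_DATA_SALT", "rotate-me")

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted hash. Note that hashing is
    pseudonymization, not necessarily anonymization in the GDPR sense."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field not required for the declared purpose, and swap the
    direct identifier for a pseudonymous reference."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose, set())
    kept = {k: v for k, v in record.items() if k in allowed}
    if "user_id" in record:
        kept["subject_ref"] = pseudonymize(record["user_id"])
    return kept
```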
She also sets out legal strategies for avoiding bias and manipulation, which are largely related to transparency and risk management.
The unsettled regulatory environment and market for emotion AI and affective computing force companies that are using the technologies to keep abreast of ongoing changes, Kempe says, lest their excitement for a deeper understanding of their users lead to feelings of violation or betrayal, and lawsuits.
Source: Biometric Update
Chris Burt is managing editor and industry analyst at Biometric Update. He has also written nonfiction about information technology, dramatic arts, sports culture, and fantasy basketball, as well as fiction about a doomed astronaut. He lives in Toronto. You can follow him on Twitter @AFakeChrisBurt.