Hard to prove harm: Google wins lawsuit over facial recognition

By Danica Sergison

A lawsuit over facial recognition privacy has been dismissed by a judge in Chicago, who found that the plaintiff didn’t suffer “concrete injuries”.

As new privacy laws attempt to address the different ways that companies collect, store and use biometric data, it’s also important to keep an eye on how the courts are interpreting and applying legislation.

In a recent court case, a federal judge ruled against a claim that Google had violated Illinois privacy laws by using uploaded pictures to create “face templates” without individuals’ consent.  The plaintiffs in the case either uploaded photos to an Android phone or had photos of themselves uploaded by others.  The photos were analyzed using Google’s facial recognition software, which created a face template that was allegedly used to recognize individuals’ personal characteristics, including age, gender, race and location.

As reported by Courthouse News, the 2008 Illinois Biometric Information Privacy Act was one of the first laws to attempt to regulate biometric privacy.  The law is also unique in that it allows private individuals to sue for damages when their rights under the law are violated, including as a class action.  Civil remedies like this can provide a way for individuals to be compensated for harms and losses, as well as provide an additional financial incentive for companies to respect individual privacy rights.

Proving harm, or substantial risk

The challenge with civil remedies is that they generally require a demonstration of harm or injury – and not all types of harm or injury are recognized by courts, or easy to prove.  Privacy violations fall into this category, and the text of the decision illustrates some of the reasons why.

While Google collected, stored and applied facial recognition processes to individuals’ photos without consent, part of the reason the legal case failed was that the plaintiffs couldn’t demonstrate that they had actually been harmed.  They argued that their photographs and related personal information could be sold, used internally for profit or advertising, or inadvertently compromised in a data breach.

However, the judge found that even though the information was gathered without consent, no actual harm had occurred.  It wasn’t enough that the plaintiffs feared their information would be compromised, used for profit, or sold in the future – that harm had to actually happen, or be substantially likely to happen.  Evidence of recent breaches at Google (Google+) and potential future uses of the technology was not enough to persuade the judge that the risk of harm was high enough for the claim to proceed.

Privacy laws need effective enforcement tools

Cases like this illustrate some key areas of difficulty in drafting privacy legislation.  While harm doesn’t always need to be proven for the state to impose a fine or other penalty, cases brought by individuals are generally held to a different standard.  In a world where privacy breaches can have swift and dramatic consequences for personal health and safety, we need enforcement tools that can address risk proactively, rather than waiting until the damage is done.

About the author

Danica Sergison
Danica is the Associate Director of Legal and Regulatory Affairs at Advocis, the Financial Advisors Association of Canada. As a tech enthusiast, she often writes about the law, privacy and the intersections between tech and social issues. Tweet her @DanicaSergison.