FTC Bans Rite Aid From Using AI Facial Recognition After It Falsely Tagged Consumers as Shoplifters
In a complaint filed in federal court, the Federal Trade Commission (FTC or Commission) has alleged that Rite Aid used facial recognition technology in its stores without taking reasonable steps to prevent harm to consumers who were falsely accused by employees of shoplifting or other criminal activity. The complaint also charges that Rite Aid violated a 2010 Commission data security order by failing to adequately oversee its technology service providers.
In a proposed order to settle the FTC’s charges, Rite Aid has agreed to a five-year ban on using facial recognition technology for surveillance purposes. The settlement imposes many other requirements to prevent harm to consumers if Rite Aid deploys any automated systems that use biometric information to track shoppers or flag them as security risks in the future.
Samuel Levine, the director of the FTC’s Bureau of Consumer Protection, said, “Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk. Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.”
The Complaint
In its complaint, the FTC alleges that from at least October 2012 to July 2020, Rite Aid used artificial intelligence (AI) facial recognition technology in hundreds of its stores to identify patrons that it had previously deemed likely to engage in shoplifting or other criminal behavior in order to “drive and keep persons of interest out of [Rite Aid’s] stores.” The technology generated “match alerts” to Rite Aid’s employees, including by email or mobile phone app notifications, indicating that people who had entered its stores were matches for entries in Rite Aid’s watchlist database.
After receiving these alerts, Rite Aid employees were directed to take action against the people who had triggered the supposed matches, including subjecting them to increased surveillance; banning them from entering or making purchases at the stores; publicly and verbally accusing them of past criminal activity in front of friends, family, acquaintances and other shoppers; detaining them or subjecting them to searches; and calling the police to report that they had engaged in criminal activity. In numerous instances, the match alerts were false: the technology frequently misidentified a person who had entered a store as someone in Rite Aid’s watchlist database.
The complaint states that the failures of Rite Aid’s technology caused substantial injury to consumers, especially to Black, Asian, Latino and women consumers, who were at increased risk of being incorrectly matched with an image in the company’s watchlist database.
The complaint cites many examples of Rite Aid’s facial recognition technology making errors, including the following:
- During a five-day period, Rite Aid’s facial recognition technology generated over 900 separate alerts in more than 130 stores in New York; Los Angeles; Philadelphia; Baltimore; Detroit; Sacramento; Delaware; Seattle; Manchester, New Hampshire; and Norfolk, Virginia, all claiming to match a single image of one person in the database. “In multiple instances, Rite Aid employees took action, including asking consumers to leave stores, based on matches to this enrollment.”
- Employees at a Rite Aid store in The Bronx uploaded an image to Rite Aid’s database on May 16, 2020. Between that date and July 2020, Rite Aid’s facial recognition technology generated over 1,000 match alerts for the same person – nearly 5 percent of all match alerts generated by Rite Aid’s facial recognition technology during this time period.
- A Rite Aid employee stopped and searched an 11-year-old girl because of a false match. The girl’s mother reported that she missed work because her daughter was so distraught about the incident.
- Rite Aid’s facial recognition technology generated an alert indicating that a Black woman was a match for an enrollment image that Rite Aid employees described as depicting “a white lady with blonde hair.” “In response to the alert, Rite Aid employees called the police and asked the woman to leave the store before realizing the alert was a false positive.”
- Many other Rite Aid customers, shopping for medicine, food, and other basics, were wrongly searched, accused and kicked out of stores. They were humiliated in front of their friends, families and strangers.
Although approximately 80 percent of Rite Aid stores are located in plurality-White areas, the complaint alleges that Rite Aid concentrated its use of the technology in stores where most patrons were likely to be Black, Asian and Latino. Specifically, the complaint states that most of the stores in which Rite Aid installed the technology were located in New York City; Los Angeles; San Francisco; Sacramento; Philadelphia; Baltimore; Detroit; Atlantic City; Seattle; Portland, Oregon; and Wilmington, Delaware.
The complaint charges that Rite Aid failed to take reasonable measures to prevent harm to consumers who were erroneously accused by employees of shoplifting or other criminal activity because the facial recognition technology had falsely flagged them as matching someone previously identified as a shoplifter or other wrongdoer.
The complaint alleges that Rite Aid did not inform consumers that it was using facial recognition technology in its stores, and employees were discouraged from revealing this information. “Rite Aid specifically instructed employees not to reveal Rite Aid’s use of facial recognition technology to consumers or the media.”
According to the complaint, the technology generated thousands of false-positive match alerts, and it was more likely to generate false positives in stores located in plurality-Black and plurality-Asian communities than in plurality-White communities. A majority of Rite Aid’s facial recognition enrollments were assigned the match alert instruction “Approach and Identify,” which meant employees should approach the person, ask the person to leave and, if the person refused, call the police.
The complaint charges that Rite Aid contracted with two companies (which are not named) to help create its database of images of “persons of interest” – people Rite Aid believed had engaged in criminal activity at one of its stores. The complaint alleges that Rite Aid failed to properly vet these vendors: the vendors themselves disclaimed the accuracy of their matching technology, and Rite Aid allegedly failed to test or assess the technology from either vendor before deploying it.
Each entry in Rite Aid’s database – the company called these entries “enrollments” – contained a name, year of birth, and information related to criminal or “dishonest” behavior in which the individual had allegedly engaged, along with one or more images. Rite Aid collected tens of thousands of these images, many of them low quality, from its security cameras, employee phone cameras and news stories. Rite Aid directed store security employees to “push for as many enrollments as possible,” resulting in a watchlist database that included tens of thousands of people.
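To make the structure concrete, here is a minimal sketch of what one of these enrollment records might have looked like, based only on the fields the complaint describes; the `Enrollment` class and every field name are illustrative assumptions, not Rite Aid’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Enrollment:
    """Hypothetical model of a watchlist 'enrollment' record.

    The complaint describes entries containing a name, a year of
    birth, images (often low quality), notes on alleged criminal or
    "dishonest" conduct, and a match alert instruction such as
    "Approach and Identify". All field names here are assumptions.
    """
    name: str
    year_of_birth: int
    # Per the complaint, images came from security cameras,
    # employee phone cameras and news stories.
    image_paths: list[str] = field(default_factory=list)
    alleged_conduct: str = ""  # e.g., suspected shoplifting at a store
    alert_instruction: str = "Approach and Identify"  # most common instruction, per the complaint
```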
The complaint alleges that Rite Aid failed to take reasonable measures to prevent harm to consumers. Among other things, Rite Aid:
- Failed to assess, consider or take reasonable steps to mitigate risks to consumers associated with its implementation of facial recognition technology, including risks associated with misidentification of consumers at higher rates depending on their race or gender.
- Failed to take reasonable steps to test, assess, measure, document or inquire about the accuracy of its facial recognition technology before deploying the technology.
- Failed to take reasonable steps to prevent the use of low-quality images in connection with its facial recognition technology, increasing the likelihood of false-positive match alerts.
- Failed to take reasonable steps to train or oversee employees tasked with operating facial recognition technology and interpreting and acting on match alerts.
- Failed to take reasonable steps, after deploying the technology, to regularly monitor or test its accuracy, including by failing to implement any procedure for tracking the rate of false-positive facial recognition matches or actions taken on the basis of false-positive matches (a minimal sketch of such tracking follows this list).
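The last item above is the kind of control that, per the complaint, was missing entirely. As a rough illustration only, and not anything the proposed order prescribes, a minimal tracking procedure could log each match alert’s outcome and report per-store false-positive rates for periodic review; the class and store identifiers below are hypothetical.

```python
from collections import defaultdict

class MatchAlertLog:
    """Minimal sketch of false-positive tracking for match alerts.

    A hypothetical illustration of the kind of control the complaint
    says was missing; not a prescribed or complete compliance program.
    """

    def __init__(self):
        self.counts = defaultdict(lambda: {"alerts": 0, "false_positives": 0})

    def record(self, store_id: str, confirmed_match: bool) -> None:
        """Record one match alert and whether staff confirmed the match."""
        entry = self.counts[store_id]
        entry["alerts"] += 1
        if not confirmed_match:
            entry["false_positives"] += 1

    def false_positive_rate(self, store_id: str) -> float:
        """False-positive rate for one store (0.0 if no alerts logged)."""
        entry = self.counts[store_id]
        return entry["false_positives"] / entry["alerts"] if entry["alerts"] else 0.0


log = MatchAlertLog()
log.record("store_1001", confirmed_match=False)  # a false positive
log.record("store_1001", confirmed_match=True)   # a confirmed match
print(log.false_positive_rate("store_1001"))     # 0.5
```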
For example, the complaint states that Rite Aid did not highlight racial or gender bias as a risk during an internal presentation advocating expansion of Rite Aid’s facial recognition program following its pilot deployment. Instead, Rite Aid’s internal presentation identified only two risks associated with the program: “Media attention and customer acceptance.”
The problems with the technology were obvious enough that Rite Aid’s own employees expressed frustration with the system’s accuracy, citing the rate of false-positive match alerts generated for enrollments from geographically distant stores. Yet Rite Aid failed to take reasonable steps to monitor or test the accuracy of the system.
The Order
The proposed order would ban Rite Aid from using any facial recognition or analysis system for security or surveillance purposes at its stores or online for five years. In addition, Rite Aid would have to delete the photos or videos collected in the facial recognition system it operated between 2012 and 2020, and any data, models or algorithms derived from the photos or videos.
The proposed order covers Rite Aid’s use of all automatic biometric security or surveillance systems, not only facial recognition and analysis systems. If Rite Aid uses any such automated system in the future, it must implement a comprehensive monitoring program with strong technical and organizational controls. The monitoring program must address the potential risks to consumers posed by any automatic biometric system the company may implement, and the proposed order would put broad provisions in place to ensure appropriate training, testing and evaluation. Before deploying any automatic biometric security or surveillance system, Rite Aid will need proof that the system is accurate. If Rite Aid later has reason to believe that the system’s inaccuracies contribute to a risk of harm to consumers, it must shut the system down.
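As a toy illustration of that gate-and-shutdown logic, and emphatically not the order’s actual terms, deployment could be conditioned on documented accuracy evidence and operation halted when observed inaccuracy crosses a harm threshold. Both numeric thresholds below are invented placeholders; the order sets no figures.

```python
def may_deploy(validated_accuracy: float, accuracy_threshold: float = 0.99) -> bool:
    """Gate deployment on documented accuracy evidence.

    The 0.99 threshold is an invented placeholder: the proposed order
    requires proof of accuracy but does not set a numeric bar.
    """
    return validated_accuracy >= accuracy_threshold

def must_shut_down(observed_false_positive_rate: float, harm_threshold: float = 0.01) -> bool:
    """Flag shutdown when inaccuracy suggests a risk of consumer harm.

    The order's actual trigger is qualitative ("reason to believe"
    inaccuracies contribute to a risk of harm); 0.01 is a placeholder.
    """
    return observed_false_positive_rate > harm_threshold

# Example: a system validated at 99.5% accuracy could deploy, but a 3%
# observed false-positive rate in the field would trigger shutdown.
print(may_deploy(0.995))     # True
print(must_shut_down(0.03))  # True
```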
Under the settlement, if Rite Aid has an automatic biometric security or surveillance system in place in the future, it must give individualized, written notice to any consumer the company adds to the system and to anyone against whom it takes action as a result. Rite Aid also would have to implement a robust consumer complaint procedure. In addition, the company would have to clearly disclose to consumers, at retail locations and online, that it is using automatic biometric security or surveillance, and the notices must be placed where consumers can read them in time to avoid the collection of their biometric information.
In addition, Rite Aid must implement a comprehensive information security program, obtain biennial assessments of that program from a third-party assessor, and provide an annual certification to the FTC from its CEO stating that Rite Aid is in compliance with the proposed order.
Because Rite Aid is currently in bankruptcy, the proposed settlement is subject to the bankruptcy court’s approval. It will go into effect after approval from the bankruptcy court and the federal district court as well as modification of the 2010 order by the Commission.
The Commission Vote
The Commission voted 3-0 to authorize staff to file the complaint and proposed order against Rite Aid.
Commissioner Alvaro Bedoya issued a statement: “We often talk about how surveillance ‘violates rights’ and ‘invades privacy.’ We should; it does. What cannot get lost in those conversations is the blunt fact that surveillance can hurt people.” He said that the Rite Aid case is “part of a much broader trend of algorithmic unfairness – a trend in which new technologies amplify old harms.” He added that industry should understand that “this Order is a baseline for what a comprehensive algorithmic fairness program should look like. Beyond giving people notice, industry should carefully consider how and when people can be enrolled in an automated decision-making system, particularly when that system can substantially injure them.” Finally, he noted that “there are some decisions that should not be automated at all; many technologies should never be deployed in the first place. I urge legislators who want to see greater protections against biometric surveillance to write those protections into legislation and enact them into law.”
Why It Matters
The FTC’s proposed order is groundbreaking and provides a roadmap for future AI testing and compliance. The order will require Rite Aid to implement comprehensive safeguards to prevent harm to consumers when using AI facial recognition and other automated systems that use biometric information to track consumers or flag them as security risks. It will also require Rite Aid to discontinue any such technology if it cannot control potential risks to consumers.
The complaint makes clear that the FTC will be vigilant in taking law enforcement action against companies that it believes are using unfair or faulty biometric surveillance and data security practices. Notably, the FTC has signaled it will act not only when the harm is financial but also when companies use technology that leads to embarrassment, harassment, biased treatment or other nonmonetary injury. The FTC is likely to bring cases against other companies engaged in similar practices.
According to Fight for the Future, an advocacy group, other major retail chains in the United States use facial recognition technology. In a Washington Post article, the group’s director said, “The message to corporate America is clear: stop using discriminatory and invasive facial recognition now, or get ready to pay the price.”
If your company is using AI or other automated biometric surveillance technology, it is important to provide proper notice; vet your vendors; and test, assess and monitor the technology to ensure that its performance meets the standards set out in the FTC’s proposed order. Otherwise, the risk of an FTC enforcement action is high.