Top Considerations For Retailers Using AI To Combat Theft

(January 12, 2024, 1:20 PM EST) --
By Kyle Miller, Gaurav Gupte and Alexis Martinez
The Federal Trade Commission's Dec. 19, 2023, enforcement action[1] prohibiting Rite Aid Corp. from using facial recognition technology in its stores for five years, after the company failed to implement reasonable procedures when deploying the technology, marks a significant shift in the regulatory landscape for retailers' collection of biometric information and use of artificial intelligence.

Retailers that want to use AI tools to combat rising retail theft should take reasonable measures when implementing such technology, both to comply with applicable laws, rules and regulations and to prevent harm to customers.

FTC Complaint

The complaint involved the use of AI-based facial recognition technology to identify customers who may have been engaged in shoplifting or theft.[2]

The FTC alleged that Rite Aid used contractors to compile "persons of interest" databases using images gleaned from store cameras, employee cameras, or news stories.[3] When a customer entered a store, the system would capture an image of the customer and compare it to the database.[4]

For each potential match, the system generated a confidence score expressing its degree of certainty that the two images depicted the same person.[5]

If the confidence score for a match exceeded a selected threshold, a match alert was issued to employees' store-issued phones.[6] The alert did not, however, include the confidence score, so employees generally did not know how confident the system was in the match.[7]
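To make this mechanism concrete, the following is a minimal Python sketch of the confidence-threshold alerting flow the complaint describes. Every name and value here (evaluate_match, send_alert, the 0.80 threshold) is a hypothetical illustration, not Rite Aid's or any vendor's actual system. The sketch also shows one straightforward mitigation the complaint implies was absent: surfacing the confidence score to employees.

```python
# Hypothetical sketch of the alerting flow described in the FTC complaint.
# The threshold value and all function names are illustrative assumptions.

ALERT_THRESHOLD = 0.80  # assumed; the complaint does not disclose the threshold used


def send_alert(poi_id: str, confidence: float) -> None:
    """Deliver a match alert to employees' store-issued phones (stubbed here)."""
    print(f"Match alert: person of interest {poi_id} (confidence {confidence:.0%})")


def evaluate_match(confidence: float, poi_id: str) -> None:
    """Issue an alert when the model's confidence clears the threshold."""
    if confidence >= ALERT_THRESHOLD:
        # Per the complaint, Rite Aid's alerts omitted the confidence score,
        # leaving employees unable to judge the system's certainty. Passing
        # the score through to the alert, as here, is one simple mitigation.
        send_alert(poi_id, confidence)


evaluate_match(0.93, "POI-1042")  # fires an alert
evaluate_match(0.41, "POI-2210")  # below threshold; no alert
```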

As a result, the facial recognition system generated numerous false positives: alerts that incorrectly matched a customer to a person of interest in the database. These false-positive matches were especially likely for Black, Latino, Asian and female customers.[8]

FTC Proposed Order Signals Use of Disgorgement Remedies Ahead for AI Models

The FTC's order bans the company from using facial recognition or analysis systems for the next five years, requires the retailer to implement significant monitoring programs for its other biometric systems, and requires it to develop a comprehensive information security program.[9]

Most notably, the FTC is also requiring the deletion or destruction of all photos and videos of customers used or collected in connection with the operation of the retailer's facial recognition system, as well as any data, models, or algorithms derived in whole or in part therefrom.[10]

The retailer must notify all third parties — including vendors that received photos or videos of customers from the retailer — of the order, instructing the deletion of the same, including any data, models, or algorithms derived in whole or in part therefrom.[11] This action represents a significant milestone in the FTC's use of model disgorgement as a tool to combat perceived customer harm stemming from AI-based technologies.

The proposed order reaches not only the retailer but also any third party that used the data, because disgorged data effectively poisons any model trained on it: models derived in whole or in part from that data must also be deleted.

If AI vendors used those in-store photos or videos as training data, returning the tool to a state in which its model is not derived, even in part, from the retailer's data may be difficult and costly.
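One way a vendor could limit that exposure is to tag every training record with its source, so that data subject to a disgorgement order can at least be identified and excluded before retraining. The sketch below is purely illustrative; the schema and field names are assumptions, not drawn from the FTC order or any vendor's actual tooling.

```python
# Hypothetical per-record provenance tagging. Identifying affected records
# is only the first step: any model already trained on them would still
# need to be retrained or deleted under an order like the FTC's.

from dataclasses import dataclass


@dataclass
class TrainingRecord:
    record_id: str
    image_path: str
    source: str  # the retailer or broker that supplied the image


def purge_source(records: list[TrainingRecord], source: str) -> list[TrainingRecord]:
    """Return the dataset with all records from a disgorged source removed."""
    return [r for r in records if r.source != source]


dataset = [
    TrainingRecord("1", "img/0001.jpg", "retailer_a"),
    TrainingRecord("2", "img/0002.jpg", "broker_b"),
]
clean = purge_source(dataset, "retailer_a")  # only broker_b's record remains
```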

Considerations for Retailers

Algorithm disgorgement is emerging as the FTC's preferred remedy for improper AI-based data collection.

Disgorgement requires companies to delete both the data scraped or otherwise collected without users' knowledge and the algorithms or products that compile and process that data. At the same time, AI-based facial recognition is a potent tool for retailers confronting the rising incidence of retail theft nationwide.

As such, retailers can take several concrete steps to use AI-based facial recognition in stores responsibly and effectively.

Retailers should verify that the artificial intelligence and machine learning facial recognition models are trained on diverse datasets. Training on more diverse datasets may help counter the disproportionate singling out of people of color and promote more equitable outcomes.
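As a purely illustrative example of what such verification might look like, the sketch below tallies a training set's composition by demographic group. It assumes the vendor supplies group annotations (the hypothetical "group" labels here); many real datasets lack such labels, in which case retailers should press vendors for documentation of how diversity was assessed.

```python
# Hypothetical audit of training-set composition by annotated group.
from collections import Counter


def composition(group_labels: list[str]) -> dict[str, float]:
    """Share of training images attributed to each demographic group."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


print(composition(["group_a", "group_a", "group_a", "group_b"]))
# {'group_a': 0.75, 'group_b': 0.25} -- a skew worth raising with the vendor
```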

Retailers should test and assess the accuracy of the AI model before deploying it.

Retailers should work closely with vendors to ensure the model works accurately, and should provide high-quality images to reduce false-positive results before deploying the technology in stores. Retailers should also consider the specific risks to consumers if the model is inaccurate, and take steps to mitigate those risks.
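Because the complaint emphasizes that false positives fell disproportionately on certain groups, any pre-deployment test should break error rates out by group rather than reporting a single aggregate number. Below is a minimal, hypothetical sketch of such a check; the field names and test data are assumptions for illustration only.

```python
# Hypothetical pre-deployment check: false-positive rate per group.
from collections import defaultdict


def false_positive_rates(results: list[dict]) -> dict[str, float]:
    """Each result has 'group', 'alerted' (model fired) and 'is_match' (truth)."""
    false_alerts = defaultdict(int)
    non_matches = defaultdict(int)
    for r in results:
        if not r["is_match"]:  # only true non-matches can yield false positives
            non_matches[r["group"]] += 1
            if r["alerted"]:
                false_alerts[r["group"]] += 1
    return {g: false_alerts[g] / n for g, n in non_matches.items()}


test_probes = [
    {"group": "a", "alerted": True, "is_match": False},
    {"group": "a", "alerted": False, "is_match": False},
    {"group": "b", "alerted": False, "is_match": False},
]
print(false_positive_rates(test_probes))  # {'a': 0.5, 'b': 0.0}
```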

Retailers should consistently monitor the results of the AI-based data collection. Retailers should create robust policies to regularly test the AI model and ensure that the data collection is being conducted within the parameters initially set by the retailer.
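As a hypothetical illustration of such monitoring, the sketch below keeps a rolling window of confirmed alert outcomes and flags the model for review when the false-positive rate drifts past a tolerance the retailer set before deployment. The class, window size and threshold are all assumptions, not any vendor's actual interface.

```python
# Hypothetical rolling monitor for false-positive drift in production.
from collections import deque


class MatchMonitor:
    def __init__(self, window: int = 500, max_fp_rate: float = 0.05):
        self.outcomes: deque[bool] = deque(maxlen=window)  # True = confirmed false positive
        self.max_fp_rate = max_fp_rate

    def fp_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def record(self, was_false_positive: bool) -> None:
        self.outcomes.append(was_false_positive)
        if self.fp_rate() > self.max_fp_rate:
            print(f"WARNING: rolling false-positive rate {self.fp_rate():.1%} "
                  f"exceeds the {self.max_fp_rate:.0%} tolerance; review the model.")


monitor = MatchMonitor(window=3, max_fp_rate=0.50)
for outcome in (False, True, True):
    monitor.record(outcome)  # the third record trips the warning (rate 66.7%)
```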

Retailers should promote and verify employee compliance. Employees should be trained on the usage and limitations of AI-based facial recognition.

Employees should also be required to report any false-positive matches to a dedicated team. The training should be documented, and employee verification of training should be maintained.

Retailers should provide conspicuous notice to customers of the use of facial recognition in stores, including clearly visible notice regarding the collection of customer biometric information.

Retailers should implement a robust data security program. To protect customer privacy, the program should include regular testing, expert audits and compliance with current data processing standards.

Retailers should review agreements with third parties engaged in developing AI models. Regular review can help retailers mitigate the risks associated with disgorgement remedies that regulators may seek.

Conclusion

Retailers should work to mitigate risks posed by AI-based facial recognition, particularly when collecting customer data on a massive scale.

As the FTC signals an increased willingness[12] to pursue costly remedies such as algorithm disgorgement, retailers should ensure their practices keep pace with these developments.



Kyle W. Miller is a partner, and Gaurav Gupte and Alexis Martinez are associates, at Dentons.

Dentons associate Jessica Bartolacci and shareholder Matthew H. Clark contributed to this article.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] Fed. Trade Comm'n, Decision and Order, Fed. Trade Comm'n v. Rite Aid Corp., F.T.C. Docket No. C-4308 (Dec. 19, 2023), https://www.ftc.gov/system/files/ftc_gov/pdf/rite_aid_administrative_order.pdf.

[2] Fed. Trade Comm'n, Complaint for Permanent Injunction and Other Relief, Fed. Trade Comm'n v. Rite Aid Corp., F.T.C. Case No. 2:23-cv-5023 at 1-2 (E.D. Pa. 2023), https://www.ftc.gov/system/files/ftc_gov/pdf/2023190_riteaid_complaint_filed.pdf.

[3] Id. at 2.

[4] Id.

[5] Id. at 7.

[6] Id.

[7] Id.

[8] Id. at 2.

[9] Fed. Trade Comm'n, Decision and Order, supra note 1, at 6.

[10] Id.

[11] Id. at 6-7.

[12] Fed. Trade Comm'n, Policy Statement of the Federal Trade Commission on Biometric Information and Section 5 of the Federal Trade Commission Act, FTC (May 18, 2023), https://www.ftc.gov/system/files/ftc_gov/pdf/p225402biometricpolicystatement.pdf (last visited January 8, 2024).

For a reprint of this article, please contact reprints@law360.com.
