PH Privacy

DOJ to Evaluate AI Compliance Programs

October 10, 2024

By Aaron Charfoos, Michelle A. Reed & Jeremy Berkowitz

The Department of Justice (DOJ) recently raised the stakes for businesses under investigation that use artificial intelligence (AI). The Evaluation of Corporate Compliance Programs (ECCP) outlines the criteria federal prosecutors consider when determining how effective an organization’s compliance program is and deciding whether to pursue legal action against the organization. With the updated ECCP, prosecutors will now assess how businesses manage AI risks when evaluating corporate compliance programs.

Deputy Attorney General Lisa Monaco explained the basis for inclusion, noting that “Where AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences—for individual and corporate defendants alike.” She further explained that the Criminal Division will incorporate “assessment of disruptive technology risks—including risks associated with AI—into its guidance on Evaluation of Corporate Compliance Programs.”

What artificial intelligence is in scope?

The artificial intelligence to be evaluated is broad in scope and includes machine learning, reinforcement learning, transfer learning, and generative AI: “no system should be considered too simple to qualify as a covered AI system due to a lack of technical complexity.”

What should businesses evaluate?

The risk management areas to be evaluated include:

  • How AI risk is assessed in conjunction with the enterprise risk management program
  • Whether policies and procedures give both content and effect to ethical norms and mitigate the risks identified by the company as part of its risk assessment process
  • How organizations conduct training on AI usage for all directors, officers, relevant employees, and, where appropriate, agents and business partners

These criteria come at a time of increased regulatory activity around AI. The EU AI Act was signed into law in March 2024 and will take effect in stages over the next two years, requiring “deployers” of AI systems to provide notice of AI use and to regularly assess their activities depending on the level of risk. Several states, most notably California and Colorado, have passed AI bills into law that require disclosure and assessment of AI activities. President Biden signed an Executive Order last year directing a number of federal agencies to take action on AI for the industries in their purview.

As organizations continue to navigate the growing use of AI and the regulations that govern it, companies should understand:

  • How they are using AI
  • The types of data they are collecting, processing, and generating as a result of AI, particularly sensitive data
  • Controls they have in place to monitor this activity, including safeguards for human autonomy and bias prevention
  • The risks of such technology to the business

Our Data Privacy and Cybersecurity practice regularly works with clients to address issues related to privacy and AI risks. If you have any questions, please do not hesitate to contact any member of our team.

Practice Areas

Data Privacy and Cybersecurity

Privacy and Cybersecurity Solutions Group

For More Information

Aaron Charfoos

Partner, Litigation Department

Michelle A. Reed

Partner, Litigation Department

Jeremy Berkowitz

Senior Privacy Director and Deputy Chief Privacy Officer