White House Issues Sweeping AI Executive Order
November 01, 2023
By Aaron Charfoos, John Gasparini, & Kimia Favagehi
Introduction
On October 30, 2023, the Biden-Harris Administration unveiled a sweeping Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The Executive Order represents the most significant action taken thus far by the Federal government to address the risks and challenges posed by the rapid development of AI systems. While the focus of the Executive Order is on AI risks, regulation, and development, it also addresses AI’s extensive impacts on the federal government and various sectors and issue areas, such as healthcare, national security, critical infrastructure, and consumer protection.
Notably, the Executive Order also addresses cybersecurity and data privacy risks associated with AI. Below, we’ve summarized some of the Executive Order’s key elements, with a closer look at its implications for data privacy and cybersecurity. As is so often the case with Executive Orders, the real substance will come as the agencies carry out their new mandates. While a major step, the Executive Order represents only the beginning of months and years of agency action across the federal government, with impacts that cannot yet be fully anticipated.
AI and Cybersecurity
The Executive Order focuses on the impact of AI on cybersecurity in different contexts, such as with respect to safety in general, critical infrastructure, financial institutions, and cyber defense.
Safety and Security. The Executive Order prioritizes cybersecurity by calling for safe and secure AI, specifically requiring standardized evaluations of AI systems. Additionally, the Executive Order directs the Secretary of Commerce (through the National Institute of Standards and Technology) to develop a generative AI companion resource to its existing AI Risk Management Framework, and to develop guidance for evaluating and auditing AI risks in cybersecurity.
Critical Infrastructure and Cybersecurity. In addition to the general safety and security of AI systems, the Executive Order establishes various cybersecurity-driven requirements for the protection of critical infrastructure from physical and cyber-attacks. It is not clear whether and how those requirements will interact with the forthcoming requirements of the Cyber Incident Reporting for Critical Infrastructure Act – draft regulations implementing that law are due to be published by the end of March 2024.
Financial Institutions and Cybersecurity. The Executive Order also requires the Secretary of the Treasury to issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
Cyber Defense. Not all of the Executive Order’s provisions focus on AI risk. The Order also addresses the potential advantages that AI may present, specifically how to capitalize on AI’s potential to improve U.S. cyber defenses. The Executive Order requires the Secretary of Defense and the Secretary of Homeland Security to conduct an operational pilot project to identify, develop, evaluate, and deploy AI capabilities to aid in the discovery and remediation of vulnerabilities in critical U.S. government systems and networks, and to prepare a report of the results.
AI and Privacy
Biden’s Executive Order also emphasizes the privacy risks associated with AI. Specifically, the White House calls on Congress to pass comprehensive, bipartisan data privacy legislation to protect all Americans. The Executive Order also directs the following privacy-related initiatives:
- Directing the Secretary of Commerce (through the National Institute of Standards and Technology) to create guidelines for agencies to evaluate the efficacy of privacy protections, specifically with respect to agencies’ use of privacy-enhancing technologies;
- Advancing research, development, and implementation related to privacy-enhancing technologies, specifically calling on the Secretary of Energy to fund the creation of a Research Coordination Network (RCN) dedicated to advancing privacy research; and
- Ordering agency experts to examine ways to mitigate privacy risks potentially exacerbated by AI, such as AI’s facilitation of the collection and use of personal information.
Other Considerations
In addition to cybersecurity and data privacy, the Order addresses AI’s impact on an array of other areas, such as immigration, consumer protection, civil rights, national security, trade controls, and critical infrastructure.
Additionally, the Executive Order calls for the responsible and effective government use of AI, directing agency guidance on AI, assisting agencies with acquiring specified AI products and services, and accelerating the hiring of AI professionals.
Looking Ahead
The White House is no stranger to developments in the AI space. Just last year, the Biden-Harris Administration released the Blueprint for an AI Bill of Rights, setting out five key principles for AI in the U.S. However, Monday’s Executive Order marks the beginning of a number of binding initiatives to promote the responsible use of AI across the executive branch.
In fact, following the release of the Executive Order, on November 1, 2023, Vice President Harris announced that the Office of Management and Budget will release for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The guidance seeks to establish AI governance structures in federal agencies, and even provides recommendations for managing risk in federal procurement of AI.
Paul Hastings attorneys will be closely monitoring developments in this space, and remain available to answer questions and assist clients in navigating the ever-shifting AI landscape.