Client Alert
The Impact of AI Legislation on Employment
August 20, 2019
Jessica Mendelson & Emily Stover
In the last decade, we have witnessed the rapid evolution of Artificial Intelligence (“AI”), which will soon impact every phase of life, from birth to death, politics to war, education to emotion, and jobs to unemployment. Given the inevitable and imminent power of AI, regulation is critical. But how, where, and by whom should AI be regulated? These questions—and countless others—must be answered. On June 21, 2019, the Paul Hastings Employment Law Department endeavored to answer such questions by sponsoring an elite panel of experts in a discussion titled “Elimination of Work—AI Legislation.”
The panel, which was moderated by Jennifer Baldocchi, the Co-Chair of Paul Hastings’ Los Angeles Employment Law Department, and Bradford Newman, the Chair of Paul Hastings’ Employee Mobility and Trade Secrets Practice, included: Reggie Davis, former General Counsel at DocuSign; Hillary Cain, Director of Technology & Innovation Policy at Toyota North America; Dr. Luis Videgaray Caso, Director at MIT’s Geopolitics and Artificial Intelligence World Project and former Foreign Minister of Mexico; and Mike Belote, President of California Advocates.
Assumptions/Framework
To frame the discussion, the panelists agreed on a series of assumptions, specifically that worker displacement by AI is: (1) inevitable, (2) harmful to society when accompanied by masses of unemployed workers, and (3) not necessarily detrimental. These assumptions represent the underlying tension in regulating AI—namely, how best to protect workers and the economy while simultaneously respecting innovation and the demands of capitalism.
Current Legislation and Approaches
To date, the majority of attempts at AI regulation have been academic in nature. For example, state and local governments have established commissions to identify and study the implications of AI. By contrast, little proposed legislation offers actual, practical solutions for regulating AI. Currently, there are two primary approaches to AI legislation: (1) protectionist and (2) revenue-focused.
Protectionist legislation focuses on protecting workers and jobs, while revenue-focused legislation focuses on the financial benefits expected to result from companies implementing AI. One panelist observed that, to date, the protectionist approach has been employed more frequently, and that there seem to be “more and more policies designed to study how to regulate, rather than actually regulate.” He cited the U.S. government as an example of a body engaged in protectionist legislation, and expressed concern that it would likely continue to play catch-up, adopting protectionist legislation that would stifle innovation and require lawyers to “work around” the regulations.
Revenue-based legislation levies taxes or attaches other fundraising mechanisms to the use of AI. One panelist opined that such AI taxation is inevitable: whether we like it or not, the scope and societal impact of AI-caused worker displacement, coupled with the massive reduction in payroll expense for covered entities and the resulting loss in government revenue, will require businesses to play a substantial role in funding society’s efforts to respond to and retrain displaced workers. Because mass worker displacement left unchecked has the potential to cause serious societal disruption, he argued, taxation should not be a provocative proposition.
Other panelists, however, expressed concerns about revenue-based regulation. One questioned whether a tax would really work: given the number of tax loopholes in countries like the United States, would companies actually comply with tax regulations on a quickly changing and difficult-to-regulate product like AI? Another panelist expressed concern about how the tax revenue would be used, explaining that many similar policies in the past have produced either (1) ineffective retraining programs, or (2) effective retraining programs that no one actually used. As such, careful deliberation would be needed to create a quality training program that displaced workers would actually complete.
Global vs. Local Regulation
In discussing how to implement AI regulation, the panelists confronted the issue of whether regulation ought to be global or regional in nature.
Global Regulation
The panelists universally agreed that artificial intelligence is by nature a global phenomenon and, as such, cannot be regulated purely locally. As one panelist put it, artificial intelligence is a geopolitical issue that concerns people worldwide, and it must be regulated with that consideration in mind. Another panelist agreed, expressing concern that divergent artificial intelligence regulations across countries might give countries with weaker regulations an advantage in the global marketplace and stifle competition.
Recently, the Organisation for Economic Co-operation and Development (“OECD”) released a set of five principles for the regulation of artificial intelligence, to which multiple countries (including the United States) have agreed:
AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being;
AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society;
There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them;
AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed; and
Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Our panelists supported the OECD’s human-centered policies. They also stressed the importance of adopting a similar framework in global regulations designed to protect data, avoid bias, ensure transparency, and protect workers. Although our panelists questioned whether worldwide regulation is realistic, they agreed that, at a minimum, a framework of fundamental principles for regulating artificial intelligence must be established. One panelist suggested that Europe, not the United States, would likely play the key leadership role in the global regulation of artificial intelligence.
U.S. Regulation
The panelists also discussed the regulation of artificial intelligence within the United States. There was general agreement that the more uniform the regulation across the states, the better: if each state adopts a different regulatory framework, compliance will be difficult, especially for small companies.
Even so, the panelists expressed concern that federal regulation may not be the answer, given the deadlock in Washington. As one panelist put it, “I have zero confidence in the current U.S. Congress to regulate anything.” In part because of this stagnation at the federal level, various states, including California and New York, have taken it upon themselves to introduce bills touching on artificial intelligence. For example, California recently adopted the California Consumer Privacy Act, and the legislature has discussed various bills designed to regulate autonomous vehicles. Similarly, New York recently passed a bill to establish a temporary commission to study the regulation of artificial intelligence.
California
Since the panel was held in San Francisco, the panelists also took a moment to discuss the California-specific factors that impact AI legislation in this state. First, California has historically been dominated by Democrats rather than Republicans; although raising taxes requires a two-thirds vote of the legislature, that threshold can currently be met by the Democratic caucus alone. Second, labor wields power in California’s Capitol, and there is growing hostility toward the technology industry, which is seen as arrogant. Third, California is dangerously dependent on income tax, which makes up a substantial portion of the state’s revenue; significant job displacement could therefore cause serious problems for state revenue. All of these factors must be taken into account in regulating artificial intelligence in California.
Privacy Issues
Finally, the panel discussed the privacy implications of regulating artificial intelligence. The panelists agreed that it is not realistic to regulate artificial intelligence without taking privacy into account: artificial intelligence is algorithmic technology that runs on data, the use of which has inherent privacy implications, so the two will need to be regulated together. Another panelist expressed concern that, at present, the average consumer frequently has no notice of, or choice about, whether to disclose his or her data. A third panelist suggested one possible approach for consumers: a one-time consent, under which a consumer’s data would be made available to anyone for certain approved types of use, so as to encourage positive data use.
Conclusion
Ultimately, there is still a long way to go before successful AI legislation is achieved. Paul Hastings is equipped to advise on AI issues. Contact Paul Hastings’ Artificial Intelligence Practice to learn more.