Hogan Lovells 2024 Election Impact and Congressional Outlook Report
California remains at the forefront of U.S. AI regulatory developments.
Following the Governor’s executive order on Generative AI (GenAI) published last year, California state agencies have worked to implement its provisions, publishing GenAI procurement guidelines and a risk assessment methodology for the ethical and responsible deployment of GenAI. These resources are aimed at helping state entities evaluate the risks associated with GenAI systems.
For the private sector, the guidelines offer insight into what California regulators view as critical components of GenAI compliance, which may signal the direction California is moving in regulating GenAI more generally. They also lay out the requirements that companies selling GenAI products to California state agencies will need to meet.
Last year, California Governor Gavin Newsom issued an executive order directing California agencies to study GenAI technologies and develop additional guidance on the procurement of these technologies. In March, the Newsom administration released GenAI Guidelines that state agencies must follow if they purchase or use AI to generate content, such as analyses of health care claims or tax filing data. These AI procurement guidelines follow a trend proliferating across states: providing high-level requirements focused on higher-risk activities. The California Department of Technology (CDT) Office of Information Security also released its GenAI Toolkit and Generative AI Intelligence Risk Assessment to aid state entities in evaluating GenAI systems.
State entity responsibilities. The GenAI Guidelines highlight the responsibility of each state entity to deploy GenAI in an ethical, transparent, and trustworthy manner. State entities are directed to conduct an inventory of all GenAI uses and submit it to the CDT. Additionally, each state entity director and their executive leadership teams, including their Chief Information Officers (CIOs), are directed to:
GenAI Procurement. The GenAI Guidelines list a set of requirements for state entities seeking to procure GenAI. State entities will be required to assign a GenAI subject matter expert to assist with contract management functions and to report all contracts intended to purchase GenAI. Many of the requirements will also impact companies that wish to sell GenAI to state entities:
GenAI risk assessment and management. The GenAI Guidelines require each state entity’s CIO to complete a risk assessment created by CDT. That risk assessment is based on NIST’s AI Risk Management Framework and must be completed regardless of the system’s intended use. Essential questions to address from the outset include:
Similar to other regulatory frameworks (in particular, those in the EU), the assessment categorizes risks into separate levels (low, moderate, high) and identifies the criteria state entities can use to determine the appropriate level. Under Phase 1 of the risk assessment, state CIOs complete a series of questions aimed at evaluating the implementation and implications of using GenAI solutions within the state organization. The topics covered include an assessment of the project overview and organizational need, an analysis of alternative solutions, an examination of the type of GenAI system being sought (including whether the system will be shared with other state entities or third parties), the completion of relevant impact assessments, and an evaluation of financial considerations such as the long-term viability and maintenance of integrated GenAI systems.
Once Phase 1 is completed, and if the GenAI risk level is rated moderate or high, state entities must complete Phase 2, which lists specific controls that state entities must incorporate before procuring and deploying higher risk GenAI systems. These controls include:
For the full list, see here.
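The two-phase gating described above can be sketched in code. The sketch below is purely illustrative: the low/moderate/high tiers and the moderate-or-high trigger for Phase 2 come from the Guidelines, but the questionnaire fields, criteria, and function names are hypothetical assumptions, not CDT’s actual methodology.

```python
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"


def phase_1_risk_level(answers: dict) -> RiskLevel:
    """Hypothetical Phase 1 triage: map questionnaire answers to a risk tier.

    The tiers mirror the CDT assessment's low/moderate/high levels; the
    criteria below are illustrative placeholders, not CDT's scoring rules.
    """
    if answers.get("affects_individual_rights") or answers.get("automates_decisions"):
        return RiskLevel.HIGH
    if answers.get("shared_with_third_parties") or answers.get("uses_sensitive_data"):
        return RiskLevel.MODERATE
    return RiskLevel.LOW


def requires_phase_2(level: RiskLevel) -> bool:
    # Per the Guidelines, moderate- and high-risk systems must complete
    # the Phase 2 controls before procurement and deployment.
    return level in (RiskLevel.MODERATE, RiskLevel.HIGH)
```

For example, a system flagged as automating decisions would be triaged as high risk and would have to clear the Phase 2 controls before procurement, while a system matching none of the criteria would stop at Phase 1.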
State workforce GenAI training. The GenAI Guidelines recommend that state entities incorporate training on emerging technologies, such as GenAI, into the mandatory Information Privacy and Security Training for all state employees. They also outline a phased approach to workforce training. Phase 1 focuses on broad educational and training programs for executive-level personnel, followed by specialized training for roles such as legal, labor, and privacy specialists. Phase 2 covers developing GenAI skills among program staff to enhance operational efficiency and support the delivery of safe, high-quality, equitable services (including training for technical and cybersecurity experts). Phase 3 involves foundational education for the general workforce before deployment of GenAI tools.
The developments around AI governance are not limited to imposing requirements on the public sector. California legislators are also aiming to regulate the private sector’s use of AI by addressing many of the same concerns covered by the GenAI Guidelines and risk assessment. In March, California continued to move forward on SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which would govern developers of frontier models. Additionally, other AI-related bills have moved forward and are currently in committee. These include:
These developments provide concrete examples of the types of questions and risk mitigations that state regulators consider when evaluating compliance activities associated with the procurement, deployment, and use of GenAI technologies. While these technologies may be new to many organizations, the California resources underscore that the diligence involved in integrating them is not, and they set out baseline expectations from potential regulators.
Organizations seeking to incorporate AI systems, including GenAI functionalities, into their business can adapt existing compliance procedures and policies to address AI-specific issues and risks. Key principles such as oversight and accountability, needs assessment, performance monitoring, leadership and management training, due diligence and risk assessments, and proper contracting can be leveraged to support appropriate internal governance and to demonstrate to regulators that the organization treats the use of AI systems with the appropriate care and attention. As organizations build out and assess their own AI safety and compliance efforts, these resources can be a helpful starting point or reference for identifying gaps and areas for improvement.
Authored by
Nathan Salminen, Alyssa Golay, and Pat Bruny.