How states and the private sector can work together to shape AI governance

COMMENTARY | The technological prowess of the private sector and the public sector’s governance culture make for a strong combination as states and localities look to integrate the technology.
With the rapid development and proliferation of artificial intelligence in almost every aspect of life, AI governance — establishing oversight, compliance and a consistent operational framework — is key to the successful and ethical integration of AI.
This is especially true when it comes to integrating AI into public services, and it's why state governments and the private sector should partner to get it right.
The Trump administration has signaled its interest in keeping access to AI widespread to fuel innovation while protecting Americans from "high-impact" AI systems. And while the implications of the latest executive orders are unclear, Congress has not moved forward on an AI regulatory framework, despite bipartisan agreement on the need for one.
Ideally, to avoid a patchwork of regulations across the country, Congress will step up and provide consistent guardrails that spur innovation while mitigating AI risks. In the meantime, many states have collaborated with the private sector to establish best practices, with promising signs.
While the private sector tends to lead the government in the adoption of new technologies, many state agencies are ahead of the governance curve due to regulatory demands, greater aversion to risk and experience with handling sensitive citizen data.
This combined expertise can generate innovative results. For example, several states, including Wisconsin, Massachusetts, Rhode Island, Alabama, New Jersey and Arkansas, have chartered public-private AI task forces to assess risks and opportunities and make recommendations on leveraging AI for public service delivery.
The Wisconsin task force announced an AI action plan in July 2024, recommending policy directions and investments to help the state capitalize on the AI transformation. The Massachusetts task force helped establish and shape the Massachusetts AI Hub, launched in December 2024, which will serve as the central state entity for AI collaboration and innovation across academia, industry and government.
The Rhode Island task force is set to develop a road map for AI usage in the state by this summer. Elsewhere, Utah has enacted the Artificial Intelligence Policy Act, which established a government office to work with industry on proposals "to foster innovation and safeguard public safety."
Last month, North Carolina hired an AI industry pioneer with 20 years of experience helping public and private sector organizations implement emerging and transformational technologies responsibly. In a newly created role that recognizes the importance of AI adoption and governance, the executive will "ensure the ethical, transparent and accountable integration of AI technologies into public services to support innovation while managing associated risks."
The state also announced a partnership with OpenAI to use ChatGPT to analyze publicly available data to improve the efficiency of government services, such as identifying discrepancies in state financial audits. These moves are not surprising. North Carolina has been at the vanguard of government data use for years, having launched the NC Government Data Analytics Center, the nation’s first enterprise data management program, in 2014.
The recent steps are game-changers for North Carolina, and states can learn a lot from collaborating directly with and tapping industry knowledge. However, having subject matter expertise on specific AI tools and data usage is just one piece of the puzzle.
Proper AI governance requires a big picture approach to AI adoption, anticipating and mitigating potential negative impacts and reflecting organizational values — all of which will be important to establish from the outset when integrating AI into public services. The private sector is not only experienced with the best use cases for AI, but is a valuable collaborator in navigating the potential challenges — and ethical dilemmas — an organization might face when integrating AI into government services.
With its forward-footed efforts on AI, North Carolina is clearly leading the charge for the use of AI in public services. Other states should follow its example and that of the leading states above.
While Congress considers comprehensive AI legislation, states can't afford to wait for guidance. They should collaborate now with the private sector experts who have spent their careers developing, integrating and understanding the risks and opportunities of AI technologies, because AI could improve the lives of every citizen.
Steven Tiell is a technology, strategy, governance and innovation executive with more than 25 years of experience driving innovation and transformation across multiple industries and continents. He is currently Global Head of AI Governance Advisory at SAS, part of its Data Ethics Practice, where he works with executives at SAS customers to establish AI governance and bring about trustworthy AI transformations.