New Great Democracy Initiative report argues for AI procurement reform
The proliferation of AI has raised urgent questions about how the government should harness and regulate this powerful technology. Last week, President Trump issued an Executive Order, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” Two weeks earlier, the White House issued governmentwide guidance on “regulatory and non-regulatory approaches” to AI in the private sector. The juxtaposition of these frameworks reveals a disquieting truth: the government is procuring unregulated AI tools from private actors whose financial motivations and legal sensitivities may not align with those of the government or the people it serves.
A new report from the Great Democracy Initiative, Federal Procurement of Artificial Intelligence: Perils and Possibilities, argues that federal procurement reform must be part of the Biden-Harris agenda for trustworthy and responsible algorithmic governance. Authored by David Rubenstein (James R. Ahrens Chair in Constitutional Law, and Director of the Robert J. Dole Center for Law & Government, at Washburn University School of Law), the report further argues that the government’s purchasing power and insistence on ethical AI can spur market innovation and galvanize public trust in this technology. For government and industry alike, public trust in AI is essential but precariously brittle.
“Under the right conditions, AI systems can solve complex problems, reduce administrative burdens, and optimize resource allocations. Under the wrong conditions, AI systems can lead to widespread discrimination, invasions of privacy, dangerous concentrations of power, and the erosion of democratic norms. Thus, achieving the right conditions for AI is critical,” said Rubenstein, “and federal procurement law can advance that mission. More than a marketplace, the acquisition gateway must be reimagined as a policymaking space.”
Toward that objective, the report offers a set of legal prescriptions to align federal procurement law with the imperatives of ethical algorithmic governance:
- First, federal lawmakers should mandate the creation of a government-wide inventory report that includes clear information on each AI system used by federal agencies and the vendors that supplied them.
- Second, federal lawmakers should require agencies to prepare pre-acquisition “AI risk assessments,” which can be used and updated throughout the procurement process to help manage a portfolio of AI risks relating to transparency, accountability, data privacy, security, and algorithmic bias.
- Third, federal lawmakers should integrate ethical AI considerations into existing regulations for source selection and contractual award. Doing so will force agency officials and vendors to think more critically—and competitively—about the AI tools passing through the acquisition gateway for government use.
“By no means is federal procurement law the sole solution to the challenges of algorithmic governance. But it must be part of the solution,” said Rubenstein. “Currently, the government is investing huge sums of taxpayer dollars to acquire AI systems that may be unusable, either because they are not trustworthy or because they fail to pass legal muster. Even if an AI system clears those hurdles, it may still violate federal anti-discrimination laws, privacy laws, and domain-specific laws and regulations. Litigation will no doubt surface these tensions; indeed, it already has. Yet much of that screening can occur through federal procurement in ways that are more efficient and effective, and that take place before harm is done.”