Executive Order on AI says a lot of the right things, but requires follow-through to ensure real change
For immediate release: October 30, 2023
Caitlin Seeley George
The sweeping order directs agencies to take steps toward addressing the existing harms of AI, but whether any of us will actually be safer remains unclear, particularly when it comes to AI and law enforcement agencies.
The Biden Administration has released its long-anticipated Executive Order on Artificial Intelligence. The 100+ page document lays out various areas for action (including banking, education, healthcare, housing, and the workplace) and primarily directs federal agencies to develop standards for use that minimize harms while maximizing benefits for the U.S.
The following statement can be attributed to Caitlin Seeley George (she/her), campaigns and managing director at Fight for the Future:
“It’s far from breaking news that Artificial Intelligence is exacerbating discrimination and bias, but it’s a positive step for the Biden Administration to acknowledge these harms and direct agencies to address them in this Executive Order.
However, it’s hard to say that this document, on its own, represents much progress. Biden has given his agencies the power to actually do something on AI. In the best-case scenario, agencies take every action the Executive Order makes possible and use all their resources to implement positive change for the benefit of everyday people. For example, agencies like the FTC have already taken some action to rein in abuses of AI, and this Executive Order could supercharge such efforts, unlocking the federal government’s ability to put critical guardrails in place to address the harmful impacts of AI.
But there’s also the possibility that agencies do the bare minimum, a choice that would render this Executive Order toothless and waste another year of our lives while vulnerable people continue to lose housing and job opportunities, experience increased surveillance at school and in public, and be unjustly targeted by law enforcement, all due to biased and discriminatory AI.
It’s impossible to ignore the gaping hole in this Order when it comes to law enforcement agencies’ use of AI. Some of the most harmful uses of AI are currently being perpetrated by law enforcement, from predictive policing algorithms and pretrial assessments to biometric surveillance systems like facial recognition. Many AI tools marketed to law enforcement require massive amounts of data that is often unjustly procured via data brokers. These systems deliver discriminatory outcomes, particularly for Black people and other people of color. As written, the primary action the Executive Order requires regarding law enforcement use of racially biased and actively harmful AI is for agencies to produce reports. Reports are miles away from the specific, strong regulatory directives that would bring accountability to this shadow market of harmful tech that law enforcement increasingly relies upon.
We cannot stress enough that if the Biden Administration fails to put real limits on how law enforcement uses AI, this effort will ultimately fail to address the biggest threats that AI poses to our civil rights.
A good portion of the Executive Order focuses on ways to maximize the opportunities that AI presents. People often say that if the AI cat is already out of the bag, we might as well ensure that it benefits the U.S. as much as possible. But it’s critical that the federal government focus not only on expanding its use of AI, but also on the cases where it must be restricted. Agencies directed to set “standards” must consider cases where AI should not be used.
Specifically, we believe there are high-impact uses where AI decision-making should not be allowed at all, including hiring and firing in the workplace; law enforcement suspect identification, parole, probation, sentencing, and pretrial release and detention; and military actions. While the Executive Order may call for the development of “best practices” in these areas, we argue the term is a misnomer: there is no “best” way to use automated decision-making in cases where the consequences are so significant. People’s lives and livelihoods depend on the Administration aggressively drawing lines that should not be crossed, and that will now require follow-through from the agencies.”
####