RELEASE: Rep. Summer Lee Introducing the Eliminating BIAS Act with Senator Ed Markey to Combat Algorithmic Discrimination in Federal Agencies

Nov 01, 2024
Press

For Immediate Release
Contact:
Vaibhav Vijay, Vaibhav.Vijay@mail.house.gov
Kyla Gill, Kyla.Gill@mail.house.gov

Rep. Summer Lee Introducing the Eliminating BIAS Act with Senator Ed Markey to Combat Algorithmic Discrimination in Federal Agencies

Washington, D.C. – Today, Rep. Summer L. Lee (D-PA) and Senator Edward J. Markey (D-MA) introduced the Eliminating Bias in Algorithmic Systems (BIAS) Act to address the mounting risks posed by artificial intelligence (AI) bias within federal agencies. This legislation mandates that every federal agency utilizing, funding, or overseeing AI establish a civil rights office dedicated to combating algorithmic bias, discrimination, and associated harms. The bill requires these offices to submit regular reports to Congress detailing their efforts to monitor and mitigate AI-driven discrimination, providing vital insights and recommendations for further action. Senator Markey introduced a Senate companion bill in December.

“Algorithmic bias doesn’t just affect individuals; it deepens systemic inequalities that marginalized communities face every day. With the Eliminating BIAS Act, we’re demanding transparency, accountability, and protection from technologies that—without oversight—could cause irreversible harm to Black, brown, low-income, and other vulnerable communities,” said Rep. Summer Lee. “This legislation is a proactive step to ensure AI works to lift people up, not shut them out.”

“From housing to health care to national security, algorithms are making consequential decisions, diagnoses, recommendations, and predictions that can significantly alter our lives,” said Senator Markey. “As AI deployment continues to increase, the federal government must protect the marginalized communities that have already been facing the greatest consequences from Big Tech’s reckless actions. My Eliminating BIAS Act will ensure that the government has the proper tools, resources, and personnel to protect these communities and mitigate AI’s dangerous effects, while providing Congress with critical information to address algorithmic harms.”

“As AI develops rapidly with little to no guardrails, we’ve already seen instances of discriminatory bias in this technology, causing undue harm to marginalized communities. It is imperative that action is taken to address these biases, especially as AI technologies are increasingly used by federal agencies,” said Nicole Gill, Co-founder and Executive Director of Accountable Tech. “Through increased oversight, the Eliminating BIAS Act offers the government an opportunity to lead by example and work to mitigate this potential harm. We thank Representative Lee and Senator Markey for introducing this legislation, and we urge Congress to act swiftly to reduce bias in AI technologies.”

The Eliminating BIAS Act is co-sponsored by Reps. Eleanor Holmes Norton (DC-00), Ayanna Pressley (MA-07), Bennie Thompson (MS-02), Rashida Tlaib (MI-12), Cori Bush (MO-01), Suzanne Bonamici (OR-01), Dwight Evans (PA-03), Jesús “Chuy” García (IL-04), Al Green (TX-09), and Raúl Grijalva (AZ-07).

The Eliminating BIAS Act is endorsed by Lawyers’ Committee for Civil Rights Under Law, Leadership Conference on Civil and Human Rights, Center for Democracy and Technology, National Urban League, Electronic Privacy Information Center (EPIC), Free Press Action, Public Knowledge, Accountable Tech, Demand Progress, Fight for the Future, Common Sense Media, Center for Digital Democracy, Common Cause, Open Technology Institute, Upturn, National Hispanic Media Coalition, Asian Americans Advancing Justice – AAJ, UnidosUS, The Trevor Project, National Action Network, Fairplay, National Council of Negro Women, and Access Now.

Background 

The Eliminating BIAS Act addresses the growing concern of bias in AI and algorithmic systems used in critical sectors like healthcare, finance, and public services. Federal agencies increasingly rely on these technologies to make decisions that profoundly impact people’s lives; however, unchecked algorithmic systems have been shown to unfairly target vulnerable communities. Facial recognition systems have been shown to misidentify Black women at the highest rates. Companies have deployed algorithms that exclude users of certain gender identities from viewing job advertisements, block advertisements for clothing designed for people with disabilities, and charge higher interest rates to minority groups.

AI bias has become a critical issue affecting communities across the U.S., including Pittsburgh. In law enforcement, AI systems can unfairly target Black and brown communities, deepening systemic discrimination. This was evident in 2020 when Pittsburgh police deployed facial recognition technology with little oversight during Black Lives Matter protests, raising significant concerns over privacy, racial bias, and unchecked surveillance.

On a federal level, AI’s rapid adoption in decision-making often outpaces the necessary oversight to prevent discrimination. For example, the Department of Justice’s National Institute of Corrections has promoted the use of risk assessment tools despite documented biases that disproportionately impact Black defendants. Similarly, the Department of Veterans Affairs (VA) used an algorithm called the Care Assessment Needs (CAN) score, intended to predict health outcomes, which was found to underestimate health risks for Black veterans.

Key Provisions of the Eliminating BIAS Act

The Eliminating BIAS Act requires federal agencies to:

  • Establish a Civil Rights Office: Each agency must have a dedicated civil rights office to identify, prevent, and address algorithmic bias, ensuring staff have the expertise to analyze and rectify discriminatory outcomes.
  • Biennial Reporting: Civil rights offices will submit reports to Congress detailing the risks posed by algorithmic systems, actions taken to mitigate these risks, and recommended legislative or administrative measures.
  • Interagency Coordination: An interagency working group will facilitate best practices, coordinating across federal agencies to protect civil rights in AI and ensure fair treatment for all communities.
