FBI, DEA Deployment of AI Raises Privacy, Civil Rights Concerns

By Anthony Kimery

A congressionally mandated audit of the Drug Enforcement Administration's (DEA) and Federal Bureau of Investigation's (FBI) efforts to integrate AI such as biometric facial recognition and other emerging technologies has identified significant privacy and civil rights concerns that warrant careful scrutiny of the two agencies' initiatives.

The 34-page audit report – which the 2023 National Defense Authorization Act required the Department of Justice's (DOJ) Inspector General (IG) to conduct – found that the FBI's and DEA's integration of AI is fraught with ethical dilemmas, regulatory inadequacies, and potential impacts on individual liberties.

The IG said the integration of AI into the DEA and FBI’s operations holds promise for enhancing intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

The two agencies’ nascent AI initiatives, as described in the IG’s audit, illustrate the tension between technological advancement and the safeguarding of individual liberties. As the FBI and DEA navigate these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun to integrate AI and biometric identification into their intelligence collection and analysis processes, the IG report underscores that both agencies are in the nascent stages of this integration and face administrative, technical, and policy-related challenges. These difficulties not only slow down the integration of AI, but they also exacerbate concerns about ensuring the ethical use of AI, particularly regarding privacy and civil liberties.

One of the foremost challenges is the lack of transparency associated with commercially available AI products. The IG report noted that vendors often embed AI capabilities within their software, creating a black-box scenario where users, including the FBI, lack visibility into how the algorithms function or make decisions. The absence of a software bill of materials (SBOM) — a comprehensive list of software components — compounds the problem, raising significant privacy concerns as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel … stated that most commercially available AI products do not have adequate transparency of their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are in a product unless the FBI receives a SBOM.”

The IG said “SBOMs remain uncommon” and that “undisclosed embedded AI tools could result in FBI personnel utilizing AI capabilities unknowingly and without such tools having been subjected to the FBI’s AI governance. Additionally, an FBI official expressed concern about the fact that vendors are not required to obtain independent testing of their products to verify the accuracy of data models used in embedded AI capabilities.”
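To make the SBOM concept concrete, the sketch below shows how a reviewer might scan a vendor-supplied SBOM for components that hint at embedded AI capabilities. It assumes a CycloneDX-style JSON SBOM and uses a simple keyword heuristic; the component names and the AI_HINTS list are hypothetical illustrations, not drawn from the audit or any agency system.

```python
import json

# Hypothetical keyword heuristic: strings in component names or descriptions
# that *may* indicate an embedded AI/ML capability. Illustrative only.
AI_HINTS = ("tensorflow", "pytorch", "onnx", "face", "recognition", "model")

def flag_possible_ai_components(sbom_json: str) -> list[str]:
    """Return names of SBOM components whose metadata hints at embedded AI."""
    sbom = json.loads(sbom_json)
    flagged = []
    for component in sbom.get("components", []):
        text = " ".join(
            str(component.get(field, "")) for field in ("name", "description")
        ).lower()
        if any(hint in text for hint in AI_HINTS):
            flagged.append(component.get("name", "<unnamed>"))
    return flagged

# A minimal, made-up CycloneDX-style SBOM for demonstration purposes.
EXAMPLE_SBOM = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"name": "logging-lib", "description": "structured logging"},
        {"name": "face-match-engine", "description": "embedded recognition model"},
    ],
})

if __name__ == "__main__":
    print(flag_possible_ai_components(EXAMPLE_SBOM))  # ['face-match-engine']
```

A heuristic like this can only surface candidates for human review; without a vendor-supplied SBOM in the first place, as the IG notes, there is nothing to scan at all.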

The FBI’s AI Ethics Council (AIEC), which was established to ensure compliance with ethical principles and federal laws, faces a substantial backlog in reviewing and approving AI use cases. This backlog, which averaged 170 days for pending reviews in 2024, highlights systemic inefficiencies that may delay safeguards against privacy violations. Furthermore, while the AIEC’s ethical framework aligns with guidelines from the Office of the Director of National Intelligence (ODNI), the evolving policy landscape creates uncertainty, delaying critical decisions and leaving open the risk of non-compliance with emerging regulations.

The deployment of AI in the context of national security also raises acute civil rights issues, particularly regarding the potential for racial or ethnic bias. Tools like facial recognition systems, often scrutinized for their propensity to misidentify individuals from marginalized communities, exemplify these risks. The FBI and DEA must navigate the dual mandate of national security and law enforcement, meaning that AI applications will often operate in contexts with high stakes for personal freedoms.

Although the FBI has initiated steps to document AI use cases and develop an overarching governance policy, the incomplete integration of ethical considerations into operational workflows poses risks. Without robust oversight mechanisms and transparency, AI systems could facilitate unwarranted surveillance, eroding public trust and violating constitutional protections against unreasonable searches and seizures.

The DEA’s use of AI further complicates the picture. With its sole AI tool sourced externally, the DEA relies heavily on other U.S. Intelligence Community elements, limiting its control over the tool’s design and implementation. Such reliance not only constrains accountability, but it also exposes DEA operations to the risks inherent in third-party AI systems, including biases that could unfairly target specific groups.

Both agencies cited recruitment and retention challenges as significant barriers to adopting AI responsibly. The IG said the inability to attract technical talent, particularly individuals equipped to address AI’s ethical and legal implications, leaves gaps in the agencies’ capacity to mitigate risks. In addition, “many individuals with the right technical skills are unable to pass background investigations,” the IG reported.

Budgetary constraints further hinder the acquisition and independent testing of AI tools, increasing reliance on commercially available systems with unknown biases or limitations.

The IG said FBI personnel pointed out that “it can be challenging to test and deploy a new system without a research and development budget because it is difficult to justify using limited funds to test unproven technology when operations supporting the mission are so critical. This is in contrast to other intelligence agencies, which according to an FBI official, have research and development budgets that allow them to test and deploy new technology. FBI personnel have submitted proposals to ODNI when internal funding was not available, but those sources of funding are not guaranteed.”

Modernizing IT infrastructure is another critical hurdle. Legacy systems impede the integration of AI, and inadequate data architectures exacerbate issues related to data quality and security. Poorly managed data systems could inadvertently expose sensitive personal information to breaches or misuse, further endangering privacy and civil rights.

“Due to limited resources and a lack of strategic planning, federal agencies often struggle to ensure that data architecture remains modern and instead use outdated information systems, even when those systems themselves require significant resources to maintain,” the IG’s report says. “Such systems can frustrate the move to AI because they can be difficult to integrate with newer technologies, lack features essential for modern data science tasks, struggle to handle today’s large and complex datasets, and often require more time and manual effort from their users. FBI personnel also noted that the movement of data and AI tools across classification levels is complicated and requires additional funding to address.”

“Additionally,” the IG said, “capturing quality data is fundamental to allow an organization to utilize data for decisions by implementing processes to ensure that incoming data is accurate, consistent, and relevant.”
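As a concrete illustration of that principle, the hedged sketch below validates incoming records before they enter an analytic pipeline, checking the completeness and consistency properties the IG describes. The field names and rules are hypothetical and not taken from any FBI or DEA system.

```python
from datetime import datetime, timezone

# Hypothetical schema for an incoming record; names are illustrative only.
REQUIRED_FIELDS = ("record_id", "source", "collected_at", "content")

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality problems; an empty list means the record passes."""
    problems = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing or empty field: {field}")
    # Consistency: timestamps must parse, carry a timezone, and not lie in the future.
    ts = record.get("collected_at")
    if ts:
        try:
            when = datetime.fromisoformat(ts)
            if when.tzinfo is None:
                problems.append("collected_at lacks a timezone")
            elif when > datetime.now(timezone.utc):
                problems.append("collected_at is in the future")
        except ValueError:
            problems.append("collected_at is not a valid ISO-8601 timestamp")
    return problems

print(validate_record({"record_id": "r-1", "source": "tipline",
                       "collected_at": "2024-05-01T12:00:00+00:00",
                       "content": "example"}))  # []
```

Gate checks of this kind are routine data-engineering practice; the IG's point is that without them, downstream AI tools inherit whatever errors the ingest pipeline lets through.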

The IG highlighted a number of actions the FBI and DEA can take to address the concerns raised by the audit. For one thing, both agencies should evaluate how AI can be integrated ethically and effectively to improve intelligence collection while protecting individual rights. Also, strengthening the AIEC and similar mechanisms with sufficient resources to handle increased AI adoption is critical for upholding ethical standards.

Mandating SBOMs and independent testing for all AI tools would ensure that the FBI and DEA – and other agencies – can verify the safety and legality of their applications. Also, the IG recommended implementing routine assessments to evaluate the potential impact of AI tools on civil liberties, particularly in surveillance contexts.

Source: Biometric Update

Anthony Kimery is the former Editor-in-Chief and co-founder of Homeland Security Today. He managed the magazine and daily online news operations and wrote the award-winning “Kimery Report,” which covered a broad spectrum of HS-related issues, from public health preparedness to intelligence collection. He has 30-plus years of broad institutional knowledge and expertise in homeland/national security matters as an editor, analyst, and consultant. He also serves as an Advisory Board Member of Mississippi College’s Center for Counterterrorism Studies.
