Generative AI in European Policing

The landscape of law enforcement is undergoing a major shift. While still in its early stages, Generative Artificial Intelligence (GenAI) is no longer a buzzword and is rapidly qualifying as a new General-Purpose Technology (OECD, 2025). Its current trajectory, marked by massive performance gains, industry-wide adoption and a developing high-performance computing ecosystem, suggests it will soon become as foundational to policing as the radio once was.

While the “hype” is significant, the path to production maturity for AI systems is estimated at approximately five years (Brohan, 2025). In policing, the journey from administrative support to high-risk investigative tools over the coming years requires a nuanced, tiered approach, because integrating AI into police work is an evolution of capabilities rather than a single leap.

The immediate priority for police organisations is reducing administrative friction by making paperwork more efficient:

  • Automated translation and text correction.
  • Grounded Large Language Models for knowledge management.
  • Retrieval-Augmented Generation (RAG), which many current solutions still lack. In a police environment this capability is essential to ensure that AI outputs are grounded in specific, verifiable legal databases and case files rather than general internet data (a minimal sketch follows this list).
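To make the RAG idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the toy corpus, the bag-of-words scoring and the prompt template stand in for the vetted vector store and approved model endpoint a real police deployment would require.

```python
from collections import Counter
import math

# Toy "verified" corpus standing in for legal databases and case files.
CORPUS = {
    "art2-aia": "Article 2 of the EU AI Act defines its scope, including a national security exemption.",
    "fria-guide": "High-risk AI systems used by police require a Fundamental Rights Impact Assessment (FRIA).",
    "case-4711": "Case 4711: witness statement recorded on 3 May, translated from French.",
}

def _vec(text):
    """Bag-of-words vector; a real system would use dense embeddings."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k corpus passages most similar to the query."""
    q = _vec(query)
    ranked = sorted(CORPUS.items(),
                    key=lambda item: _cosine(q, _vec(item[1])),
                    reverse=True)
    return [f"[{doc_id}] {text}" for doc_id, text in ranked[:k]]

def grounded_prompt(query):
    """Build a prompt that constrains the model to the retrieved sources."""
    context = "\n".join(retrieve(query))
    return ("Answer ONLY from the sources below and cite each [id] used.\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

print(grounded_prompt("Which assessment does a high-risk police AI system need?"))
```

The point of the pattern is the last function: the model is only ever asked to answer from retrieved, citable passages, which is what makes its output auditable against verified sources.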

In the medium term, as the technology matures, the focus will shift from administration to investigation.

  • Transformer-based OCR for licence-plate recognition and document digitisation.
  • Graph Neural Networks (GNNs) for automating complex investigative workflows by identifying patterns and links in disparate data points (see the first sketch after this list).
  • Multimodal Large Language Models (MLLMs) that process text, images and audio simultaneously. In practice, this means automated report generation from agentic sensors (such as bodycams) and more efficient DNA and fingerprint identification.
  • Edge computing and federated learning: processing data directly on devices for speed, while federated models preserve privacy by keeping sensitive data localised (see the second sketch after this list).
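For the GNN bullet, the sketch below shows the core message-passing step of a graph convolutional layer in plain NumPy: each entity (node) updates its features from its neighbours’. The four-node entity graph, its features and the weight matrix are all invented toy values; in a real system the weights would be learned from data.

```python
import numpy as np

# Hypothetical entity graph: suspect, phone, vehicle, address.
# An edge means two entities co-occur somewhere in case data.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

A_hat = A + np.eye(4)                       # add self-loops
d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt    # symmetric normalisation

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 8))                 # toy node features
W = rng.normal(size=(8, 8))                 # toy (normally learned) weights

# One GCN layer: ReLU(Â H W). Stacking such layers lets distant
# entities influence each other, surfacing indirect links.
H_next = np.maximum(A_norm @ H @ W, 0)
print(H_next.shape)                         # (4, 8): updated embedding per entity
```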
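And for the edge/federated bullet, a minimal federated-averaging (FedAvg) sketch, again in NumPy. Two simulated devices each hold private data and train a toy linear model locally; only the weight vectors ever leave a device, never the raw records. The dataset shapes and learning rate are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Private datasets that never leave their devices (e.g. bodycam units).
devices = [
    (rng.normal(size=(20, 3)), rng.normal(size=20)),
    (rng.normal(size=(30, 3)), rng.normal(size=30)),
]

w_global = np.zeros(3)
for _ in range(50):
    # Each device refines the current global model on its own data...
    local_ws = [local_step(w_global, X, y) for X, y in devices]
    # ...and only the resulting weights are averaged centrally,
    # weighted by how much data each device holds.
    sizes = [len(y) for _, y in devices]
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("Global weights after 50 rounds:", w_global)
```

The privacy property comes from the protocol, not the mathematics: the central server only ever sees weight updates, so the sensitive records stay on the device, which is exactly the property the bullet describes.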

The primary challenge in all of this is compliance with national law and international regulation, such as the EU AI Act. Since some of these AI activities will be classified as “high-risk”, guardrails are needed in the form of Fundamental Rights Impact Assessments (FRIA), strict human oversight and transparent data governance (European Parliament, 2024). EU member states appear to be starting dedicated processes to transpose the EU AI Act into national law, focusing on sectoral oversight, regulatory sandboxes and a tiered penalty system (Maria and Rauer, 2025).

There is also a more critical point to highlight in relation to the national security exemption defined in Article 2 of the EU AI Act. The risk is that “national security” is a fluid concept, adapting to the political climate and to situations of perceived national risk, and possibly applied with varying intensity across the Union. In such situations, powerful AI systems could escape FRIA and transparency requirements (Statewatch, 2025).

Secondary concerns are data sovereignty, security, and AI-ready data, i.e. ensuring that “datasets are optimized for AI applications, enhancing accuracy and efficiency” (Gartner, 2025).

Furthermore, the primary universal guardrail currently cited is “strict human oversight”. I would argue, however, that this often overlooks the “human-in-the-loop” fallacy, also known as automation bias. Psychological studies in France and Germany have shown that when an AI flags a suspect or generates a “fact” in a report, a human is more likely to agree with the AI than to challenge it critically, especially in real-time environments and under time pressure (European Union Agency for Fundamental Rights, 2024/2025). Without specific training, officers are in danger of becoming merely a moral “crumple zone”, absorbing legal liability without adding any genuine oversight.

Finally, it seems clear that the true potential of AI in law enforcement lies in its ability to work with data at scale and to adapt. Unlike previous technologies that failed to move beyond niche applications, GenAI thrives on a connected ecosystem of complementary technologies, such as ultra-fast networks and specialised chips.

In these fast-changing and chaotic times, it becomes ever more important to transform AI from a black-box novelty into a transparent, regulated and efficient partner in public safety. While these systems are here to stay for protection and investigation, they carry the latent potential to devolve into instruments of extreme political control. Without rigorous oversight, they risk stifling the fundamental rights of expression and assembly.

Therefore, the ultimate safeguard is neither the algorithm nor the human as an individual, but the “rule of law” in a democratic setting, ensuring that as AI scales, it remains a servant of the people and of justice rather than an architect of control (European Digital Rights, 2025/2026).

Reference list:

Brohan, M. (2025) McKinsey forecasts up to $5 trillion in agentic commerce sales by 2030, Digital Commerce 360. Available at: https://www.digitalcommerce360.com/2025/10/20/mckinsey-forecast-5-trillion-agentic-commerce-sales-2030/ (Accessed: 7 February 2026). 

European Parliament and Council of the European Union (2024) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence … (Artificial Intelligence Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (Accessed: 8 February 2026).

European Union Agency for Fundamental Rights (FRA) (2024/2025) Reports on bias in algorithmic decision-making. Available at: https://fra.europa.eu/en/publication/2025/assessing-high-risk-ai (Accessed: 23 February 2026).

European Digital Rights (2025/2026) The AI Act isn’t enough: closing the dangerous loopholes that enable rights violations, EDRi. Available at: https://edri.org/our-work/the-ai-act-isnt-enough-closing-the-dangerous-loopholes-that-enable-rights-violations/ (Accessed: 24 February 2026).

Gartner (2025) Gartner predicts over 40% of agentic AI projects will be canceled by end of 2027, Gartner, 25 June. Available at: https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027 (Accessed: 7 February 2026). 

Gartner (2025) Gartner Hype Cycle identifies top AI innovations in 2025, Gartner. Available at: https://www.gartner.com/en/newsroom/press-releases/2025-08-05-gartner-hype-cycle-identifies-top-ai-innovations-in-2025 (Accessed: 8 February 2026).

Khandabattu, H. (2025) The latest hype cycle for artificial intelligence goes beyond GenAI, Gartner. Available at: https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence (Accessed: 7 February 2026).

Maria, D. and Rauer, N. (2025) Luxembourg law addresses EU AI Act enforcement, Pinsent Masons. Available at: https://www.pinsentmasons.com/out-law/news/luxembourg-law-addresses-eu-ai-act-enforcement (Accessed: 8 February 2026).

OECD (2025) Is Generative AI a General Purpose Technology? (EN). Available at: https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/06/is-generative-ai-a-general-purpose-technology_6c76e7b2/704e2d12-en.pdf (Accessed: 7 February 2026).

Statewatch (2025) Automating Authority. Available at: https://www.statewatch.org/media/4888/eu-automating-authority-report-4-25.pdf (Accessed: 25 February 2026).
