Exploring AI for Decision Support, Trend Detection and Analysis, Resource Allocation and Management, Across the Profession
Law enforcement agencies are operating in an era of data abundance. Agencies have more information than ever, including computer-aided dispatch logs, records management data, digital evidence, sensor feeds, and especially video. Advances in artificial intelligence (AI) and analytics offer new ways to make use of these vast data streams, but many agencies have limited resources and time to convert that data into timely insights that inform strategic decisions for improved public safety.
Agencies face pressure to allocate limited resources efficiently amid new and evolving challenges. Using AI and other advanced technologies as a “force multiplier” for smaller agencies is one way to level the playing field.
As of 2018, there were nearly 18,000 individual state and local law enforcement agencies in the U.S. Ninety-three percent of those agencies have fewer than 100 sworn officers (BJS 2022), and agencies in this size range typically have significantly fewer resources. Often what we hear or read about law enforcement agencies leveraging AI comes from the larger departments that make up the remaining 7% of agencies (approximately 1,300 departments nationwide).
The vast majority of smaller to mid-sized law enforcement agencies likely fall along a continuum, from thinking about AI, to experimenting with it in limited ways, to using it through a purchased tool or solution. This doesn’t mean there are no innovators or early adopters among them—there absolutely are—but we need to consider the realities of the profession and where most agencies are in AI adoption: just starting out and moving forward cautiously.
The Big Picture
Law enforcement has leveraged innovative technologies for decades: early examples of AI use in policing have been around for years, from gunshot detection systems and license plate readers to body-worn cameras and predictive policing solutions intended to forecast crime hotspots. Many of these tools leverage AI capabilities to turn raw inputs into actionable insights.
For the vast majority of agencies, the attainable and optimal use of AI doesn’t align with hyped-up futuristic visions of AI-driven policing. Instead, law enforcement can leverage AI for a wide range of needs and tasks that every agency faces in the pursuit of efficient and effective public safety outcomes, including:
- Early identification of public safety trends
- Deployment of resources with better precision
- Reduced administrative burden, improved policies, training, and communications
Technology providers will continue to offer more and more solutions leveraging AI and, as a result, more agencies will use them. But agencies can and should consider how they leverage AI for everyday use cases.
At a recent Commission on Accreditation for Law Enforcement Agencies (CALEA) Conference, NPI facilitated a discussion of AI adoption. The audience included 217 law enforcement attendees who responded in real time using an interactive tool. What we learned helps us see the realities across the profession:
- Only 38% of agency representatives in the audience reported currently using AI
- 20% said they were not using AI
- 32% said they were pilot-testing or evaluating AI tools
Attendees acknowledged the need to pilot-test solutions before adoption. Most reported taking a cautious approach to what they would allow AI to do, and most agreed that there is a need for training on acceptable uses.
This, we believe, is the reality for most agencies in 2026.
This moderated approach may be substantially influenced by resources and access, but it may also offer advantages for a careful and measured approach to adoption. It’s an approach that allows room for the creation of governance frameworks, careful testing before deployment, and strict rules for ethical usage.
With that context, let’s turn to what the research tells us so far about how emerging technologies—including but not limited to AI—can impact policing.
What We Know
Public Perceptions of AI Use in Policing
Trustworthiness is a major factor in the acceptance of a variety of emerging technologies (Guler, Kula, & Boke, 2025). Research suggests that it’s critical to align AI implementation with what the public expects in terms of fairness, trust, and transparency: these are key factors for legitimizing its usage in law enforcement. While high-profile cases make for great opportunities to introduce the need for AI or other tools, research finds that efforts should be directed toward improving public understanding of the organizational requirements in addition to the technical requirements necessary for AI implementation (Schiff et al., 2025).
Analyzing Body Camera Video for Training, Evaluation, and Performance
AI dramatically reduces the burden of monitoring and reviewing the massive volumes of video produced by cameras and devices. As one example of this in practice, NPI has partnered with agencies to analyze large volumes of body-worn camera footage using AI-assisted tools to produce agency-level assessments of officer-community interactions on a weekly timescale rather than monthly. In one such project, analysis of thousands of videos found that 93% of officer contacts met or exceeded performance standards while still highlighting areas for further attention.
NPI’s research has demonstrated the technology’s use in evaluative contexts and has validated the reliability of this approach, finding that AI-scored interactions align closely with independent human raters (Dolly et al., 2024; Dolly et al., 2025).
Natural Language Processing for Records and Reports
Police work generates countless written reports and video recording transcripts. AI can digest these documents to surface insights and even draft summaries of police reports. While early adopters report significant time savings from AI-assisted police report writing, two studies involving random assignment did not find similar results, though officers and supervisors reported positive views and favorable perceptions of time savings (Adams et al., 2026; Boehme et al., 2025).
AI Use Cases Are Rapidly Expanding, But A Framework Is Needed for AI Adoption
Research suggests that AI use should be centered on practical applicability and that legitimate use cases are essential to ensure applications are operationally valid and provide benefit to policing and communities. Eric Halford in the U.K. has produced a risk-assessed listing of over 40 functions or use cases, including responding to the public, criminal investigation, intelligence analysis, and workforce management, and further proposes a framework for AI adoption in policing (Halford, 2025).
The takeaway is that while AI holds strong capabilities to analyze complex and massive data sets far faster than a person, the technology is still evolving and requires human involvement and oversight, as well as strong governance frameworks.
Where The Gaps Remain
Significant gaps in research and practice are expected to persist for some time. Technologies are evolving rapidly, which means our testing and research will necessarily lag somewhat behind the state of play. And despite the advancements we’re seeing, one age-old problem continues to warrant attention: the quality and completeness of data. While data is more abundant than ever, data quality and completeness haven’t kept pace.
Research highlights significant gaps in what algorithms and technologies can do. A 2024 RAND study, for instance, found that despite an abundance of police data, its usability is severely limited: information is siloed in incompatible systems, often incomplete or inaccurate, and not readily analyzable. This limits the effectiveness of any AI, since bad (or fragmented) data going in means bad (or, at best, partial) insights coming out (Barnum et al., 2024; Adams et al., 2026).
Beyond the technology itself, we continue to observe major gaps in adoption and governance frameworks tailored to policing, leaving the profession susceptible to a variety of challenges, including questions about the appropriate, ethical use of these technologies and what safeguards and accountability exist for protecting privacy and civil liberties. Preparing our workforce to leverage these tools in ways that are responsible, effective, and just will take a significant investment of time and energy.
What Agencies Can Do Now
Though it’s tempting to point to early adopters and suggest following their path, it’s important to recognize the differences in agency sizes, resources, and contexts. These all play a significant role in determining what sorts of adoption strategies are practical for any given law enforcement agency. That said, all agencies have opportunities to effectively leverage AI, and they should start by developing a governance framework incorporating legal, ethical, and safety considerations to inform how they will and won’t use the technology.
Here are five things agencies can do to prepare to leverage AI for improved public safety:
1. Get Your Data in Order
A strong data foundation is essential. AI can’t help if you don’t have the data you need, including staffing allocations, incident report details, metadata and tagging associated with videos, etc. Agencies should invest in cleaning and organizing their data into analyzable formats that can be easily searched, compared, and processed by analytical tools. Where your systems have the capability to collect useful data, ensure it’s collected. Where data is often missing, identify the cause and address it. New technologies aren’t always needed: even basic spreadsheets can be helpful in many instances. Data quality assessments and roadmaps can be useful in preparing to leverage AI capabilities. In many cases, the solutions only require clean-up, clear policy and SOPs, and/or internal training.
2. Create an Agency AI Acceptable Use Policy
Many municipal and local governments have established AI use policies that may allow a range of AI use cases. We recommend that law enforcement agencies establish their own AI use policy within those guidelines, tailored to address the unique needs of law enforcement. Multiple agencies have published their own AI use policies online, and multiple organizations have published recommendations and model policy proposals that can be considered.
3. Leverage AI and Analytics for Quick Wins
Many tools exist that agencies can leverage without IT procurements or system modifications, though compliance with applicable law, policy, and contracts can’t be overlooked. For example, the body-worn camera analysis mentioned earlier requires no tech installation and can provide your agency with something most have never before had: an analysis of your agency’s contacts with the community that considers officer and community member actions and responses as well as the nature of the call, providing a comprehensive view of what’s happening on the street. Agencies can also deploy tools to analyze internal operations—such as response times, case backlogs, or overtime drivers—to spot inefficiencies. Agency representatives participating in our recent CALEA conference session reported using AI to improve email communications, draft press releases and other communications content, and prepare talking points for meetings.
4. Pilot AI in Lower-Risk Areas
Identify use cases where AI can save time or improve insight without making life-and-death decisions. Agencies such as the St. Louis (MO) County Police are using AI chatbots to triage non-emergency calls or public queries, and others are improving agency communications with the public and developing early drafts of policies and SOPs. The goal is to free up staff from tedious tasks so they can focus on complex policing work. Any pilot should be coupled with evaluation: measure whether it actually saves time or improves outcomes.
5. Maintain Human Oversight and Ethics
Keeping “a human in the loop” is essential. AI outputs should inform, not dictate, decision-making. For instance, if a tool flags a particular area for increased patrol, treat it as one input alongside officer knowledge and community feedback, not an automatic order. If using an AI-generated report summary, an officer should verify every critical detail. This guardrail is crucial because current AI can sometimes be wrong or context-blind. Human oversight is a critical component of any AI policy and should be written into it explicitly.
The Bottom Line
Leaders should establish clear policies around AI use, emphasizing that these tools are there to assist officers, not replace their judgment. Ethical considerations (privacy, civil rights, transparency) need to be baked in from the start. Here are some concrete steps to take:
- Designate an internal lead and/or form a review committee.
- Consult legal advisors on the use of data and AI within government systems.
- Communicate with the public about what tools are being used and what data they use.
- Ensure there’s an oversight process for the use of AI.
Taking these steps now will position agencies to capitalize on AI and analytics in a responsible, effective manner.
AI is a powerful tool, but not a magic fix. Commanders and officers who combine the best of technology with human insight and community collaboration are likely to see the greatest gains in public safety and organizational performance. By starting with solid data practices, incremental innovation, and ethical oversight, police leaders can make progress today while setting the stage for more transformative uses of AI tomorrow.
Sources
Adams, I. T., Barter, M., McLean, K., et al. (2026). “No man’s hand: artificial intelligence does not improve police report writing speed.” Journal of Experimental Criminology, 22, 137–154.
Barnum, J. D., Cahill, M. E., Woods, D., Lucey, K. D., Vermeer, M. J. D., & Jackson, B. A. (2024). “Better Measures of Justice: Identifying High-Priority Needs to Improve Data and Metrics in Policing.” RAND.
Bureau of Justice Statistics (2022). Census of State and Local Law Enforcement Agencies, 2018.
Boehme, H., Adams, I. T., Barter, M., Jr., I. A. G., & McLean, K. (2025). “Writing at the Speed of Hype: Officers’ Post-Experimental Perceptions of AI Report Writing.” CrimRxiv.
Dolly, C., Wender, J., Hansen, E., Gallardo, R., Coleman, J. E., Lande, B., Ta-Johnson, V., Tomlinson, M., Ben-Yosef, G., & Tu, P. (2024). Multi-modal analysis of body-worn camera recordings: Evaluating novel methods for measuring police implementation of procedural justice (Award No. 2020-R2-CX-0010). National Policing Institute.
Dolly, C., & Tomlinson, M. (2025). “Multi-modal analysis of body-worn camera recordings: Evaluating novel methods for measuring police implementation of procedural justice.” Presented at the American Society of Evidence-Based Policing.
Dolly, C., Weisburd, D., Valdovinos Olson, M., & Dong, B. (2025). Improving police encounters in pedestrian stops: The Phoenix quasi-experiment (Award No. 2019-R2-CX-0025). National Policing Institute.
Halford, E. (2025). “The Transformer Led Policing Model: a framework for applying generative artificial intelligence in policing.” Policing: A Journal of Policy and Practice, 19.
Guler, A., Kula, S., & Boke, K. (2025). “Examining public support for AI in policing: the role of perceived procedural justice.” Police Practice and Research, 26(6), 673–695. https://doi.org/10.1080/15614263.2025.2516535
National Policing Institute (2025). “Partners with Polis Solutions to Offer New Data Analytics Tools and Services for Law Enforcement.” National Policing Institute.
Schiff, K. J., Schiff, D. S., Adams, I. T., McCrain, J., & Mourtgos, S. M. (2025). “Institutional Factors Driving Citizen Perceptions of AI in Government: Evidence from a Survey Experiment on Policing.” Public Administration Review, 85(2), 451–467.