Part One: AI as Infrastructure for Fairer Justice

By Unilink on 10-Sep-2025 14:42:01


The UK Ministry of Justice has confirmed new Artificial Intelligence (AI) plans, making it clear that AI is no longer a speculative concept in the justice sector. Soon, it will be a practical force helping to identify prisoners at high risk of violence. 

While public and media attention often focuses on a narrow set of applications, such as surveillance, enforcement or fictional scenarios like the Tom Cruise film Minority Report, concentrating on these alone risks overlooking AI’s broader potential to improve justice services in more constructive, human-centred ways.

At Unilink, we view AI not as a standalone solution but as a multi-functional foundational layer that supports smarter decision-making and helps simplify complex processes for users and practitioners alike.

Intelligent self-service tools

We have already begun integrating AI functionality into our existing platforms. For example, our self-service platform is being enhanced with chatbots that help users navigate its services. These tools can answer questions, suggest appropriate next steps and even personalise support. For prison staff, this means fewer routine enquiries and more time spent on higher-value, interpersonal work.

Real-time translation

AI is also helping to reduce access barriers in custody settings, where linguistic diversity often creates communication challenges between staff and individuals who speak little or no English. With AI-powered translation now integrated into our systems, we can offer real-time translation, both written and spoken, across almost all languages, including Arabic, Urdu and many others. Such translation solutions not only help staff and prisoners understand each other, but also make key services and support more inclusive.

In international contexts, this capability becomes even more important. We are beginning to deploy systems that allow translated instructions, messages and FAQs to be accessed in the user’s preferred language, increasing engagement and reducing miscommunication at scale. Answers to FAQs can be read on screen, or spoken aloud by the system in the required language.

As with all our AI interfaces, we prioritise plain language and accessibility to ensure tools are usable for people with a range of literacy and digital skill levels.

From reaction to prediction

More broadly, AI is helping us shift from reactive models to more predictive, proactive ones. Our AIM (Alert, Intervene, Monitor) platform is designed to identify changes in behaviour patterns that might otherwise go unnoticed. For example, it can recognise patterns such as regular attendance at appointments, avoidance of conflict and engagement with rehabilitation programmes, among many other factors. These trends can then inform decisions around early interventions, with a focus on supporting progress rather than merely reacting to incidents.
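To make the idea of trend-based early intervention concrete, here is a minimal illustrative sketch in Python. It is not the AIM platform's actual logic: every signal name, score weight and threshold below is an invented assumption, shown only to convey how weekly behavioural signals could be combined and a sustained drop flagged for human review.

```python
# Hypothetical sketch only: names, weights and thresholds are invented
# for illustration and do not reflect the real AIM implementation.

def engagement_score(week):
    """Combine simple weekly behavioural signals into one number."""
    return (week["appointments_attended"]
            + week["programme_sessions"]
            - 2 * week["conflict_incidents"])

def flag_for_early_intervention(history, drop_threshold=3):
    """Flag when the latest week's score falls well below the prior average."""
    if len(history) < 2:
        return False
    scores = [engagement_score(w) for w in history]
    baseline = sum(scores[:-1]) / (len(scores) - 1)
    return baseline - scores[-1] >= drop_threshold

weeks = [
    {"appointments_attended": 3, "programme_sessions": 2, "conflict_incidents": 0},
    {"appointments_attended": 3, "programme_sessions": 2, "conflict_incidents": 0},
    {"appointments_attended": 1, "programme_sessions": 0, "conflict_incidents": 1},
]
print(flag_for_early_intervention(weeks))  # the sharp drop in the final week is flagged
```

In a real system such a flag would only ever prompt a practitioner to look more closely, which reflects the point made below: the aim is to surface insights for professional judgment, not to replace it.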

Real-world impact

Each of these examples demonstrates how AI can change the way the justice system operates. Unilink is committed to ensuring transparency of AI recommendations and to addressing the risk of inaccuracies or hallucinations in an ethical way.

Designed to support professionals

Our aim is not to replace professional judgment but to support it in line with MoJ guidelines. Each AI feature we implement is intended to ease operational pressure and surface insights that might otherwise be overlooked. In a sector where decisions directly affect people's lives, this kind of support is essential.

Crucially, we do not view AI as a finished product but as a flexible capability that must be applied carefully and refined through real-world use. That is why we are collaborating with justice partners and frontline experts to ensure our tools meet both technical standards and day-to-day operational needs.

In Part 2 of this series, we will explore the principles and safeguards that guide our work, from data governance and transparency to ethical model design, and how these ensure that justice technology remains accountable to the people it serves.