Artificial Intelligence (AI) is rapidly weaving itself into the fabric of nearly every sector. From finance and healthcare to public services and the creative industries, claims abound that machine learning and automation will transform the way we work. Social value is no different. A growing number of voices suggest that AI could cut through complexity, speed up reporting, and even solve that long-standing headache of measuring what truly matters to our communities.
It’s a seductive proposition. But can AI actually deliver on that promise?
The hope and the hype
Measuring social value has always been inherently difficult. Legislation and policy, such as the Public Services (Social Value) Act 2012 and Procurement Policy Note 06/20, have rightly raised the bar, while methodologies like TOMs (Themes, Outcomes and Measures) and SROI (Social Return on Investment) offer structured pathways. Yet deciding which outcomes to prioritise, how to evidence them properly, and, crucially, how to interpret their significance still requires human judgement.
AI appears to offer a fast-track solution. Large language models and predictive algorithms can process data far faster than any team of people: they can scan thousands of documents in seconds, surface trends, and produce tidy reports. For commissioners and suppliers under pressure to show results, the appeal is obvious. However, faster isn’t necessarily better, and efficiency isn’t the same as credibility.
The pitfalls of handing over the keys
Social value isn’t a universal, standardised dataset that you can simply pump into an algorithm. It is inherently local, contextual, and deeply human. What’s important in a small rural village may not hold the same weight in a bustling city centre. The same project or intervention can generate wildly different results depending on the people involved. If AI is trained on vast, generalised datasets, there’s a real risk it will flatten this essential nuance into neat figures that simply don’t reflect the reality of people’s lives.
Transparency is another major worry. If an AI tool produces a number but nobody can satisfactorily explain how it arrived at that figure, does that build confidence, or does it ultimately erode it? Policymakers and practitioners are starting to ask these probing questions. Influential reports, such as the House of Lords’ analysis of AI in the public sector, have underscored both the genuine promise and the significant risks of relying too heavily on algorithms without diligent human oversight.
A measured approach
This is the very thinking behind the Social Value Engine. Our technology is now AI-enabled, but it always operates within a framework that guarantees rigour and accountability. The AI helps automate parts of the reporting process and makes it easier to generate insights quickly, but everything remains rooted in transparent, open-source proxies and a methodology accredited by Social Value International. AI should support better decisions; it shouldn’t make them for you.
Looking ahead
The question, then, isn’t whether AI can help measure social value. It can, in valuable but limited ways. The real challenge lies in how we choose to use it.
If we treat AI as a shortcut that allows us to shirk human responsibility, we risk eroding trust and repeating the costly mistakes of ‘social value washing’. If, however, we use it to support robust, transparent, and evidence-based practice, then it has the potential to become part of a stronger, more credible system.
Social value is, fundamentally, about people and their communities. No algorithm will ever replace that. But when deployed with care, AI can help us spend less time wrestling with spreadsheets and more time making sure the benefits of social value are actually felt where they matter most.