To start on a positive note, library data is the most interesting public data there is. Libraries have around 4,000 branches across the UK and over 15,000 mobile library stops. Data on catalogues, locations, opening hours, issues, and footfall is largely underused, but has huge potential.
Libraries Connected released an interim report on their accreditation for libraries work. This looks at how library services can be quality-assessed.
Library leaders normally tend towards positivity, but the overriding emotion in the report seems to be one of caution, if not fear: worried about judgements, defensive over standards, and afraid of traditional data. The report itself is good: it accurately reflects views from workshops, surveys, and meetings.
What is ‘traditional’ library data? I’d say firstly what is available: the location of libraries and mobile library stops, PCs, library opening hours, library stock, digital services, lists of events. Then usage of those things: membership, library footfall, checkouts, renewals, PC bookings, enquiries, and event attendance.
People discuss what libraries should measure and collect, but the data already exists in systems. They’re really talking about what data to look at. Libraries tend to hold too much data, and not look at it enough.
In the report, few of these datasets are mentioned at all, but when they are it is within a negative context:
“There is a need to change the narrative around what libraries do through the collection of metrics which move away from footfall and towards experiential data.”
“Include data/metrics which go beyond footfall – genuine impact measures.”
“Superficial measures – footfall, loans, hours”
Interim report on library accreditation
This isn’t cherry-picking quotes: those are all the mentions of library data. Existing data is dismissed as superficial, lacking genuine impact, and unrelated to the experience of using a library.
It is common to be critical of library data, but normally only along the lines of “we shouldn’t only look at this”, or, more defensively, “this doesn’t fully represent what we do”. That seems reasonable, though it can often come from not using data well.
But the report displays active hostility to library data. The comment about data being superficial came when the report was describing pitfalls to be careful not to fall into. In other words: be careful not to look at footfall, loans, or opening hours.
It can be frustrating to see certain data dismissed when it’s significantly underused. Not to mention insulting to campaigns against reductions in opening hours and stock. Those things aren’t superficial; they relate to:
- How attractive the library is to the public
- How often the library is visited
- When people are able to visit the library
They seem relevant to a quality framework.
It’s easy to be cynical about these things. When the first lockdown hit and library services went digital-only, the main news stories were huge increases in digital usage: 200%, 400%, 700%. Those weren’t then reported as superficial, even when the increases came from low baselines, such as 50 users rising to 150.
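The arithmetic behind that point is worth making concrete: a headline percentage says nothing about absolute numbers. A minimal sketch (the user counts are illustrative, taken from the example above):

```python
# Percentage increases look dramatic when the baseline is small.
# The figures here are illustrative only.
def percent_increase(before: float, after: float) -> float:
    """Percentage increase from `before` to `after`."""
    return (after - before) / before * 100

print(percent_increase(50, 150))       # 200.0: a "200% increase", but only 100 extra users
print(percent_increase(10000, 11000))  # 10.0: a modest 10%, but 1,000 extra users
```

The same absolute change reads very differently depending on the baseline, which is why headline percentages alone are a poor measure.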
We seem to be stuck in a situation where data is positive or negative. Either good for advocacy or bad for it. Useful for funding, or better avoided. Evidence of good, or poor, performance. And it’s none of those things. Data is always useful. If it’s only used as a way of criticising libraries then of course they will hate it and turn against it. Data should be used for operational running and improvement of services.
An alternative view
Everyone will have their own ideas around public library quality. But the public library service belongs to the people. To assess the quality of a service, the group that should be least involved is libraries, except as users themselves. The task is to enable the public to make their own assessment. That means publishing as much open data as possible.
Funding and funders are mentioned a number of times in the report. But the primary library funders are the public. They control what happens in the local authority, and provide the money. There is little mention of the public or library campaigners.
There was an opportunity to link this work to another. The library open data schemas were developed in collaboration with library services in 2019/2020 (disclaimer: I helped). They are designed to provide a structure through which library services can share data about themselves publicly, to enable collaboration and further analysis. Details on this were sent to the accreditation project, but there was no reply.
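One practical benefit of publishing against a shared schema is that anyone consuming the data can check its structure automatically. A minimal sketch, assuming a hypothetical ‘library locations’ dataset; the column names below are illustrative placeholders, not the actual fields defined by the schemas:

```python
import csv
import io

# Hypothetical required columns for a 'library locations' dataset.
# These names are placeholders; the real schemas define their own fields.
REQUIRED_COLUMNS = {"Library name", "Address", "Postcode", "Latitude", "Longitude"}

def validate_headers(csv_text: str) -> set:
    """Return the set of required columns missing from a CSV's header row."""
    reader = csv.reader(io.StringIO(csv_text))
    headers = set(next(reader))
    return REQUIRED_COLUMNS - headers

sample = (
    "Library name,Address,Postcode,Latitude,Longitude\n"
    "Central Library,1 High St,AB1 2CD,51.5,-0.1\n"
)
print(validate_headers(sample))  # set(): no missing columns
```

A check like this is what makes data from 150 different library services combinable: every consumer can rely on the same structure without contacting each service individually.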
I would propose the following as one aspect of a quality framework for libraries.
- Library services achieve a baseline level of quality by releasing each dataset within the data schemas as open data
- Additional levels of quality are achieved by:
  - Increasing the frequency of updates to the data (monthly, weekly, daily)
  - Encouraging re-use by the public and third parties
  - Extending the datasets to contain data particular to the service
- Further credit is achieved by demonstrating internal service improvement, or external public benefit, from using the data
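Purely as an illustration of how such levels might stack, here is a sketch of that framework as a scoring function. The number of levels and the scoring are my own assumptions, not part of the accreditation work or the schemas:

```python
# Illustrative only: a possible scoring of the framework sketched above.
# Level 0 = no open data; level 1 = baseline; up to 4 with all extras.
def quality_level(published_all_schemas: bool, update_frequency: str,
                  reuse_encouraged: bool, extended_datasets: bool) -> int:
    """Hypothetical quality level for a library service, 0 to 4."""
    if not published_all_schemas:
        return 0
    level = 1  # baseline: every schema dataset released as open data
    if update_frequency in ("monthly", "weekly", "daily"):
        level += 1
    if reuse_encouraged:
        level += 1
    if extended_datasets:
        level += 1
    return level

print(quality_level(True, "daily", True, True))  # 4
```

The point of the sketch is that each criterion is independently checkable from the published data itself, rather than relying on self-assessment.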
There will be many other aspects of quality. But this one would enable modern-day sharing of data, align with government and local government digital standards and strategy, and put data into the hands of the public. It would also directly deliver service improvement.
This is a bit of a pipe dream. The accreditation work seems unlikely to include anything like that, and that’s OK.
As this is an interim report, much remains to be seen. But it appears to be an internal tool for library service heads to (optionally) accredit themselves with bronze/silver/gold status, with little reference to the public. The audience for that accreditation is funders of various kinds.
Perhaps that’s a good thing: if this is an avenue for more money to come to libraries then it’s hard to criticise. But unfortunately it doesn’t appear to be something that will relate to the public experience of using libraries, library open data, or open standards.