What the Academic AI Impact Study shows about productivity, capacity, and human oversight in library workflows

March 25, 2026 | 8 min read

Academic libraries are being asked to do more with the same, or fewer, resources. Staff capacity is stretched. Metadata backlogs persist. Course support workflows face recurring time pressure. In that environment, AI is no longer being explored only as a future possibility. For some libraries, it is already being used as practical operational support inside everyday workflows.

A new independent study by Emerging Strategy on behalf of Clarivate looks closely at what that support actually means in practice. Based on interviews with 11 library professionals across 8 academic institutions, the study focuses on two core workflow areas: metadata creation and cataloguing, and course reading list support, specifically through Alma Metadata Assistant and Leganto Syllabus Assistant.

The study makes clear that AI does not remove the need for expertise. When introduced deliberately, it can reduce manual effort, accelerate first-pass preparation, and help libraries expand what is operationally feasible while preserving professional oversight. Libraries reported gains in productivity and throughput, but also stressed that review, judgment, and governance remain essential.


Productivity gains are real, but they are workflow-specific

One reason this study is useful is that it stays grounded in specific workflows rather than generic claims about AI.

In course reading list workflows, the most immediate value came from removing manual citation entry. Before AI-assisted processing, staff described list creation as dominated by searching, transcription, and follow-up. With Leganto Syllabus Assistant, libraries could upload or paste a syllabus and receive a populated draft list, shifting staff effort from typing and searching to review and refinement.

That translated into meaningful time savings. Across interviews, participants reported that reading lists that previously took 15 to 45 minutes to build manually could now be created in 2 to 5 minutes. In more complex cases, lists that previously took more than a full day of focused work could be drafted in one to two hours and then refined by staff.

The impact went beyond speed alone. Earlier visibility into course materials helped libraries surface items they already owned, reduce follow-up with faculty, and make more reading lists available to students earlier in the process. The study reports that 50–60% of reading lists were immediately available after AI processing in some workflows, especially where course materials were already held by the library.


In metadata workflows, AI helps remove friction at the first-pass stage

The metadata side of the study is just as important, especially for libraries dealing with backlog work, inconsistent descriptive quality, or minimal-level cataloguing.

Here, Alma Metadata Assistant was most valuable when it reduced manual transcription and initial descriptive work. Rather than entering fields line by line, staff could upload images of covers and tables of contents or run existing records through enrichment, receiving a populated draft record to review and refine.

For large-scale enrichment work, the gains could be substantial. At Universidad Tecnológica de Bolívar, the transition from manual transcription to AI-assisted cataloguing resulted in a reported 70–80% reduction in processing time. Records that previously required hours of transcription and assembly could be generated in 30 to 60 seconds for the initial draft, and the technical services team reported being able to process 200 to 600 records daily. New books that once took three to four days to appear in discovery were frequently available the same day they arrived.

That is a strong example of productivity and capacity improvement in a workflow where retrospective enrichment had previously been difficult to sustain manually. The study shows that AI assistance can make previously infeasible enrichment work tractable, shifting it from optional to operationally meaningful.


Productivity is not the whole story

At the same time, the study is valuable precisely because it does not reduce everything to speed.

One case from a large research university in the Western United States shows why. There, Alma Metadata Assistant did not reduce per-record processing time. In fact, the average time per record increased from 23 to 28 minutes. But the AI-assisted records included summaries, contents notes, subject terms, and genre fields that would otherwise have taken significantly longer to add manually or would have been omitted entirely under a minimal-level approach. The reported value was not speed alone, but richer and more consistent metadata, reduced research burden, and better support for materials that would otherwise remain under-described.

That matters because it shows AI’s role in libraries is not only about producing more output faster. In some workflows, the benefit is that staff can produce a stronger first draft, reduce unstructured research effort, and devote their attention where it adds the most value.

The study makes clear that cataloguing judgment remains essential. AI helps make review and refinement more manageable within real staffing constraints.


Human oversight remains the operating principle

This is one of the most important findings in the report.

Across both course support and metadata workflows, interviewees consistently emphasized that AI changes where effort is spent, but not who remains accountable. The study is explicit that AI does not change professional judgment or accountability, does not eliminate the need for review and correction, and does not remove the need for governance. Treating AI outputs as drafts rather than finished products was one of the clearest patterns across successful implementations.

Libraries that reported the strongest outcomes introduced AI gradually, focused on high-friction workflows, and kept clear ownership over standards, review practices, and configuration. Adoption was most effective when it was library-led and selective, not universal.

That makes the study especially relevant for institutions that are still evaluating AI. It does not present adoption as inevitable or uniform. Instead, it shows that value depends on scope, governance, and fit with local workflows.


What this means for Alma and Leganto users

For Alma and Leganto users, the study offers something more concrete than abstract AI messaging. It connects AI to identifiable operational outcomes.

In Leganto workflows, that means less manual citation handling, faster draft reading list creation, earlier student access to materials, and more room for libraries to move from reactive to proactive support. In Alma metadata workflows, it means reducing transcription bottlenecks, making large-scale enrichment more feasible, and helping cataloguers spend more time on validation, normalization, and standards-based judgment rather than first-pass assembly.

The study also reinforces something practical for current and prospective users: these tools are most useful when applied to well-defined workflows where libraries are trying to recover time, reduce friction, or increase capacity without compromising oversight. That is where the operational case becomes strongest.


A more useful question for libraries

The study does not argue that every library should adopt AI immediately. It asks a more practical question: can existing workflows continue to absorb growing demands without additional tools that reduce friction and expand capacity? For the institutions interviewed, AI became useful at the point where manual workflows alone could no longer keep up.

That is what makes this research worth reading. It moves the conversation away from broad promises and toward real workflow conditions: where time is lost, where staff effort is concentrated, what becomes feasible, and what still depends on human expertise.

If you want to see how academic libraries are applying AI in metadata and course support workflows today, the full Academic AI Impact Study offers the deeper evidence, examples, and case studies behind these findings. It is a useful resource for anyone evaluating how AI fits into library operations in a way that is practical, governed, and aligned with professional standards.

Read the Academic AI Impact Study here.

