Responsible Use of LLMs in Research: Moving Beyond the Hype

13 January 2025

The surge in Large Language Model (LLM) capabilities has sparked excitement about AI’s potential to transform scientific research. While these tools promise to help researchers analyse complex datasets and accelerate innovation, realising their potential requires more than just deploying new technology. A recent workshop by the Accelerate Programme for Scientific Discovery brought together UK researchers to explore what it takes to use LLMs responsibly and effectively in research.

Beyond the AI Hype

“A lot of conversations about AI for science frame AI as a magic solution to long-standing scientific challenges. We need to reframe how we think about AI – tools like LLMs could help drive scientific advances, but only if we get real about what it takes to deploy them effectively,” says Jessica Montgomery, Director of ai@cam and the Accelerate Programme for Scientific Discovery.

This sentiment was echoed by Maya Indira Ganesh, Associate Director at the Leverhulme Centre for the Future of Intelligence: “Rather than the fantastical futures imagined by tech billionaires, we need more storytelling about how communities of scholars and practitioners do the ‘real’ work of building technology.” Such innovation is typically needs-driven, iterative, and practical, taking place in specific contexts rather than shiny labs.

Building Foundations for Responsible Use

Successful deployment of LLMs in research requires robust technical and human infrastructure. Workshop participants identified several critical considerations:

Data Quality and Domain Knowledge

Responsible LLM use begins with a critical examination of the data that shapes these tools. Researchers must understand not just what data went into their models, but the broader implications of these choices. Alessandro Trevisan, a PhD student in Digital Humanities at the University of Cambridge, encourages researchers to ask fundamental questions: “Who designed these models, within what socio-cultural or economic contexts (e.g.: Western, for-profit, academic, etc.), and what were their objectives in developing and deploying them?”

This scrutiny extends beyond just data quality to consider representation and bias. Researchers need to understand whose perspectives are centred in the training data and whose are marginalised. This is particularly crucial in academic research, where LLMs’ tendency to amplify dominant viewpoints could inadvertently narrow the scope of scientific inquiry rather than expand it.

Scientific Integrity

As LLMs become more integrated into research workflows, maintaining scientific rigour becomes increasingly complex. A key concern is the potential for LLMs to ‘hallucinate’ or generate plausible but inaccurate information, which could propagate through the scientific literature if not carefully checked. “To counter this risk, the AI for science community can take lessons from the ‘slow science’ movement. LLM adoption should be accompanied by approaches to its use that encourage thoughtful reflection and experimentation, and that promote rigorous analysis,” Montgomery explains.

Workshop participants emphasised the importance of developing domain-specific benchmarks and evaluation criteria. This includes designing robust experiments that can properly account for uncertainties in AI-generated results and establishing clear protocols for verifying LLM outputs against traditional scientific methods. The goal is not just to use LLMs effectively, but to ensure they enhance rather than compromise the quality of scientific research.
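
As one illustration of what such a verification protocol might look like in practice, the minimal Python sketch below scores model answers against an expert-curated reference set and flags mismatches for human review. All names and data here are hypothetical assumptions for illustration, not tooling discussed at the workshop.

```python
# Minimal sketch of a domain-specific verification harness.
# All names and data are hypothetical; replace with an expert-curated benchmark.
from dataclasses import dataclass


@dataclass
class BenchmarkItem:
    question: str   # prompt drawn from the domain benchmark
    reference: str  # answer verified by a domain expert


def verify_outputs(items, generate, normalise=str.strip):
    """Compare model outputs against expert-verified references.

    `generate` is any callable mapping a question to a model answer;
    mismatches are returned so a human reviewer can inspect them rather
    than letting unchecked outputs propagate into the analysis.
    """
    flagged = []
    for item in items:
        answer = generate(item.question)
        if normalise(answer).lower() != normalise(item.reference).lower():
            flagged.append((item.question, item.reference, answer))
    accuracy = 1 - len(flagged) / len(items)
    return accuracy, flagged


if __name__ == "__main__":
    # Toy benchmark standing in for an expert-curated, domain-specific one.
    benchmark = [
        BenchmarkItem("What is the boiling point of water at 1 atm (°C)?", "100"),
        BenchmarkItem("Chemical symbol for sodium?", "Na"),
    ]
    # Placeholder "model" -- in practice this would call an LLM.
    fake_model = lambda q: {"Chemical symbol for sodium?": "Na"}.get(q, "unknown")
    accuracy, for_review = verify_outputs(benchmark, fake_model)
    print(f"accuracy={accuracy:.2f}, items flagged for expert review={len(for_review)}")
```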

Transparency and Documentation

Open science practices are crucial for maintaining research integrity in an era of AI-assisted research. This includes clear documentation about data sources, training methods, and parameters for use. As Montgomery notes, “AI methods that are more explainable, that embed domain knowledge, that allow exploration of causal relationships in data, or that integrate different data types to support more sophisticated simulations of complex systems could all contribute to more effective use of AI in research.”

Documentation needs to go beyond technical specifications to include the rationale behind key decisions in the research process. This includes explaining why particular datasets were chosen, how potential biases were addressed, and what steps were taken to verify results. Such transparency not only supports reproducibility but also helps build trust in AI-assisted research findings across the scientific community.
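
To make this concrete, a documentation record for a single AI-assisted analysis step might look something like the following sketch. The field names are illustrative assumptions rather than an established schema, but they capture the kinds of information described above: data sources and why they were chosen, how bias was handled, how results were verified, and why an LLM was used at all.

```python
# Minimal sketch of a provenance record for an AI-assisted analysis step.
# Field names and values are illustrative assumptions, not a standard schema.
import json
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class LLMUsageRecord:
    model: str            # model name and version used
    prompt: str           # exact prompt or prompt template
    data_sources: list    # datasets used, and why they were chosen
    bias_mitigation: str  # how potential biases were considered or addressed
    verification: str     # steps taken to check the output
    rationale: str        # why an LLM was used for this step at all
    date_run: str = field(default_factory=lambda: date.today().isoformat())


if __name__ == "__main__":
    record = LLMUsageRecord(
        model="example-llm-v1 (hypothetical)",
        prompt="Summarise the methods section of each paper in the corpus.",
        data_sources=["open-access corpus X, chosen for domain coverage"],
        bias_mitigation="Summaries spot-checked against papers from under-represented groups.",
        verification="10% random sample re-read by two domain experts.",
        rationale="Manual summarisation of 5,000 papers was not feasible within the project.",
    )
    # Store the record alongside the results so the workflow can be audited and reproduced.
    with open("llm_usage_record.json", "w") as fh:
        json.dump(asdict(record), fh, indent=2)
```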

The Path Forward

The workshop highlighted that both individual researchers and institutions have roles to play in ensuring responsible LLM adoption. Research organisations need to update ethical review processes, encourage interdisciplinary collaboration, and provide time for training and experimentation.

“It’s up to researchers to maintain a scientific scepticism when using AI and take responsibility for the ethical use of AI in academic research,” emphasises Trevisan. This includes being transparent about AI usage, providing documentation, and addressing concerns about privacy and bias.

As Ganesh concludes, “We already write papers, make toolkits, and do workshops, but there is still much work to be done in crafting inspiring narratives about the behind-the-scenes creativity and collaboration that goes into the making and doing of science and technology.”

The responsible adoption of LLMs in research is a balance between innovation and scepticism. These tools offer remarkable possibilities for accelerating scientific discovery, but their successful integration depends on thoughtful implementation that prioritises scientific integrity, transparency, and ethical reflection. By fostering open dialogue about successes and failures alike, maintaining rigorous standards, and building robust support structures, the research community can realise the potential of LLMs while safeguarding the foundations of good science. The path forward requires not just technological advancement, but a commitment to responsible innovation that puts scientific quality at its core.