ESRC Digital Good Network


Generative AI and the Digital Good: Representation, bias, and applications to misinformation

Part of the ‘Digital Good Research Fund 2024: call for applications’ webinar series

This online webinar took place on Friday 9 February 2024.

You can watch a recording of it below.
Please note that it includes an additional recording responding to questions that were not answered during the live webinar.

In the second round of the Digital Good Research Fund, we encouraged applications using quantitative or computational methods, as this represented a gap in our existing research portfolio that we wanted to fill.

This webinar, led by Dr Scott A. Hale from our management team, reflects on the use of large language models (LLMs) and generative AI in computational social science. LLMs are a type of artificial intelligence (AI) that uses deep learning techniques and large datasets to understand, summarise, generate and predict text-based content. The webinar considers how downstream users of LLMs need stronger evaluations in order to understand which languages, domains, and tasks fall within an LLM's training and which fall outside it.

This webinar considers:

  • Applying LLMs to identify misinformation narratives.
  • How current approaches to Reinforcement Learning from Human Feedback (RLHF) fail to address harmful, stereotypical outputs in non-Western contexts.
  • An in-progress study that aims to better understand what different people want from LLMs and how they perceive generative AI output.

It also covers details of the 2024 Digital Good Research Fund, which is now closed.

Dr Scott A. Hale

Emerging Methodologies Lead, Digital Good Network; Associate Professor and Senior Research Fellow at the Oxford Internet Institute

As well as being a member of the Digital Good Network management team, Scott is Associate Professor at the Oxford Internet Institute, University of Oxford, Director of Research at Meedan, and a Fellow of the Alan Turing Institute. His applied Natural Language Processing and Machine Learning research seeks to achieve more equitable access to quality information online. He also builds open-source tools for fact-checking and facilitates academic-practitioner collaborations.
