Red teaming large language models (LLMs) for resilience to scientific disinformation

The red teaming event brought together 40 health and climate postgraduate students with the objective of scrutinising and bringing attention to potential vulnerabilities in large language models (LLMs).
Since 2023
In the United States of America

Parent organization:

The Royal Society

Org. type: Non-profit / charity / foundation
Project type: Document
Last modified: Nov 12, 2025
Added: Feb 20, 2025