Red teaming large language models (LLMs) for resilience to scientific disinformation
The red teaming event brought together 40 health and climate postgraduate students with the objective of scrutinising and drawing attention to potential vulnerabilities in large language models (LLMs).