Red-teaming is the practice of testing a computer system from the perspective of an opponent to identify potential attack vectors.
The UK government's cybersecurity program works with internal and external experts to test its technology.
Autonomous Ethical Hacking
An open-source, extensible knowledge base of AI failures
ARVA's mission is to empower communities to recognize, diagnose, and manage vulnerabilities in the AI that affects them.
The red teaming event brought together 40 health and climate postgraduate students to scrutinise and draw attention to potential vulnerabilities in large language models (LLMs).
Humane Intelligence is a tech nonprofit building a community of practice around algorithmic evaluations.
The lab focuses on improving the software security of projects that advance OTF’s Internet freedom goals by ensuring that the code, data, and people behind the tools have what they need to create a safer...