
AI RISK LIBRARY RESOURCES
AI Risk Repository
The AI Risk Repository has three parts:
The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
SAIF Risk Assessment tool
Google’s Secure AI Framework (SAIF) helps you assess the AI risks in your business and offers practical guidance on controls to address them. This self-assessment is aimed at security practitioners, to help identify which AI risks may be most relevant to you.
Use this assessment to start conversations and guide further research.
The Foundation Model Transparency Index
A comprehensive assessment of the transparency of foundation model developers.
As part of the May 2024 version of the FMTI, developers prepared reports containing information related to the FMTI's 100 transparency indicators. We hope that these reports provide a model for how companies can regularly disclose important information about their foundation models.
Catalogue of Tools & Metrics
There are tools and metrics out there that help AI actors to build and deploy AI systems that are trustworthy. However, these tools and metrics are often hard to find and absent from the ongoing AI policy discussions.
This catalogue makes it easier to find tools and metrics by providing a one-stop-shop for helpful approaches, mechanisms and practices for trustworthy AI.
Carbon Tracker - Climate risks of AI
Carbontracker tracks hardware power consumption and local energy carbon intensity during training to provide accurate measurements and predictions of the operational carbon footprint.
It’s important for organizations, institutions, and businesses alike to track their carbon footprint. We need to be mindful of the environmental effects of AI compute, and to share ways to optimize and coordinate for better use of resources.
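At its core, the operational footprint Carbontracker reports is measured energy consumption multiplied by the local grid's carbon intensity during the run. A minimal sketch of that underlying calculation (the function name and the sample figures are illustrative, not Carbontracker's actual API):

```python
def operational_carbon_footprint(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Operational CO2-equivalent emissions in grams:
    measured hardware energy draw times local grid carbon intensity."""
    return energy_kwh * intensity_g_per_kwh

# Hypothetical example: a training run drawing 120 kWh on a grid
# emitting 250 gCO2eq per kWh at the time of training.
footprint_g = operational_carbon_footprint(120.0, 250.0)
print(f"{footprint_g / 1000:.1f} kg CO2eq")  # 30.0 kg CO2eq
```

Because grid intensity varies by region and hour, tools like Carbontracker sample it continuously during training rather than using a single average figure.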
AI Impact Navigator (AUS)
The AI Impact Navigator is a framework for companies to use in assessing and measuring the impact and outcomes of their use of AI systems.
Using a continuous improvement cycle known as Plan, Act, Adapt, the Navigator provides a way for company leaders to communicate and discuss what’s working, what they’ve learned, and what their AI impact is.
AI Risk Management Framework (RMF)
In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
LIST AI Sandbox (Beta)
Making sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly is a critical issue to which LIST wants to contribute its scientific and technological expertise. This concern is also a priority of the European Parliament, and the upcoming AI Act will establish rules for providers and users depending on the level of risk posed by artificial intelligence.
Auditing Large Language Models: A Three-Layered Approach
In this article, we address that gap by outlining a novel blueprint for how to audit LLMs. Specifically, we propose a three-layered approach. We show how audits, when conducted in a structured and coordinated manner on all three levels, can be a feasible and effective mechanism for identifying and managing some of the ethical and social risks posed by LLMs.
Video Resources: AI Risks
All Tech is Human
19 DECEMBER 2024
AI and Marginalized Communities
In this webinar, we will explore the intersection of AI and marginalized communities, highlighting the challenges while advocating for technological advancement that is designed for inclusivity, equity, and justice.
Spread the Word
Please let us know of great tools we should add to our repository and promote, other initiatives around the world we can learn from, and expert consultants to work with. Please contact us if you want to partner with us to help build a collaborative AI future.