AI REPORTS

World Economic Forum & Capgemini

AI agents are autonomous systems capable of sensing, learning and acting upon their environments. This white paper explores their development and looks at how they are linked to recent advances in large language and multimodal models. It highlights how AI agents can enhance efficiency across sectors including healthcare, education and finance.

  • An AI agent responds autonomously to inputs and to its reading of its environment, making complex decisions and acting to change that environment.

  • Developers have transformed AI from rule-based systems to active agents capable of learning and adapting while engaged in a task.

  • AI agents have the potential to tackle challenging tasks with great efficiency. But they carry associated risks such as malfunction, malicious use and unwanted socioeconomic effects.

  • As the adoption of AI agents increases, critical trade-offs need to be made. Given the complex nature of many advanced AI agents, safety should be regarded as a critical factor alongside other considerations such as cost and performance, intellectual property, accuracy, and transparency, as well as implied social trade-offs when it comes to deployment.

AI Index Report 2024

The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.

  • The 2024 AI Index tracks progress on several new benchmarks, including those for tasks in coding, advanced reasoning, and agentic behavior, areas that were underrepresented in previous versions of the report.

  • This chapter explores key trends in responsible AI by examining metrics, research, and benchmarks in four key responsible AI areas: privacy and data governance, transparency and explainability, security and safety, and fairness.

  • The researchers further categorize the models based on their openness levels, incorporating over 100 indicators.

    Foundation Model Transparency Index (stanford.edu)

  • The demographics of AI developers often differ from those of users. For instance, a considerable number of prominent AI companies and the datasets utilized for model training originate from Western nations, thereby reflecting Western perspectives. The lack of diversity can perpetuate or even exacerbate societal inequalities and biases.

World Economic Forum

The World Economic Forum’s Global Future Council on the Future of Data Equity, a multistakeholder expert group, developed, through multiple rounds of research and consultation, a definition that captures the comprehensive nature of this concept and its impact on all sectors, industries and regions across the data lifecycle.

  • Data equity can be advanced through corrective as well as proactive actions in the different stages of the data life cycle

  • These case studies demonstrate the data equity framework through real-world examples that can be adapted to other contexts.

  • Concrete steps for private-sector companies; academia and technical experts; government, the public sector and national statistical offices; civil society organizations; and the general public and communities

Taxonomy of Generative AI Human Rights Harms: A B-Tech Gen AI Project supplement

This taxonomy demonstrates that the most significant harms to people related to generative AI are, in fact, impacts on internationally agreed human rights.

>> Press Release

  • The large-scale collection, storage, and processing of data (including sensitive personal data) associated with generative AI models may increase vulnerabilities and user exposure to data breaches, hacks, and other security incidents.

  • Generative AI, including the capacity to rapidly produce false content that appears human-generated and authoritative at scale, may pose risks to the right to freedom of expression in various ways.

  • Generative AI models may be used by companies to monitor employee performance, raising concerns about the accuracy of such tools.

State of AI Report (2024)

  • Highlights:

    Slide #16: Nice breakdown of how 'open' open-source models are, including access to datasets

    Slide #18: SWE-bench Verified as the new benchmark to evaluate a model's ability to solve real-world software issues

    Slide #21: Shrinking models via finetuning with minimal impact on performance
    > apropos slide #25: personalization with LoRA shows promise here

  • Highlights:

    Slide #121: Content creators vs. AI scrapers (lawsuits pending on what fair use looks like now in the age of AI)

    Slide #137: Apple and OpenAI team up for Apple Intelligence

  • Highlights:

    Slide #158: EU-wide user opt-out from AI training

    Slide #183: Meta red-teaming using a single LLM (AdvPrompter)

  • Highlights:

    Slide #190: RLHF still outperforms at scale (cue more data workers)

    Slide #195: Stanford's Foundation Model Transparency Index (it looks bad for upstream providers)

    Slide #196: Anthropic ran a fun experiment to see if models would take shortcuts; spoiler: they will, and they can

Readiness Assessment Methodology (RAM)

A tool supporting UNESCO's Recommendation on the Ethics of Artificial Intelligence. The RAM is used to identify the strengths and gaps of beneficiary countries with regard to their capacity to facilitate the ethical design, development and use of AI, and how to address them.

Contributing to Global Index of Responsible AI

  • EX: Does your government inform the public when they are subjected to the use of AI systems?

  • EX: Is there a law or policy highlighting monitoring, redress, and remedy mechanisms against harms caused by AI systems?

  • EX: Has your country enacted any law or policy to reduce the digital gender gap?

  • EX: Does your country have any laws or policies on how educators/professors should be trained to teach about AI/technology ethics?

  • EX: Does your country have a strategy to respond to AI impact on the labour market?

  • EX: Is your country involved in standardization (both technical and ethical) of AI and digital technologies? (ISO/IEC, IEEE7000)

UC Berkeley (USA)

This document provides risk-management practices and controls for identifying, analyzing, and mitigating risks of general-purpose AI systems (GPAIS). It facilitates conformity with, or use of, leading AI risk management standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.

  • AI risks based on assessments and other analytical output from the Map and Measure functions are prioritized, responded to, and managed.

  • In this section, we provide a mapping of profile guidance to the code of conduct represented by the commitments that several frontier model developers announced with the White House (2023a) when developing and releasing foundation models more capable than the July 2023 industry frontier.

  • Identify reasonably foreseeable uses, misuses, and abuses for a GPAIS (e.g., automated generation of toxic or illegal content or disinformation, or aiding with proliferation of cyber, chemical, biological, or radiological weapons), and identify reasonably foreseeable potential impacts (e.g., to fundamental rights)

Mozilla Internet Health Report (AI)

This compilation of facts and figures explores global power disparities in AI and highlights research and perspectives on how to shift that power for a healthier internet and more trustworthy AI.

  • From your social media feed to fast-food restaurants, companies in every sector are turning to AI to unlock new ways to collect and analyze data to tailor their offerings.

    But the benefits — and the harms — are not evenly distributed.

  • Big tech companies play an outsized role in shaping our experience of the internet, and life itself. What we see, what we buy, even what we believe is nudged along by them daily.

  • In this special season of Mozilla’s IRL podcast, Bridget Todd introduces us to champions who insist there is a better way to build, deploy, and comprehend AI’s potential. Read their stories here.

Responsible Tech Guide

It is designed to provide information, inspiration, and illumination of pathways for more individuals to be involved in the Responsible Tech ecosystem.

  • What responsible AI can look like, and the challenges of mitigating risks and harms

  • Trust by design. Key terms and developments around safety and monitoring for harmful effects

  • Legislation and actions against CSAM and explicit non-consensual content as well as age limits for platforms.

  • Increased focus on equity and citizen protection

  • Technology that is secure, resilient, open, trustworthy, and stable and upholds democratic and human rights principles and institutions

  • Regulation of current and emerging technologies. This can take the form of both internal company policy and government regulation

The Nordic AI and data ecosystem

This report presents an overview of the Nordic ecosystem for the responsible use of data and ethical artificial intelligence (AI), both from the perspective of the Nordic region and the individual countries within it.

  • The goal of the project is to develop and demonstrate an ethical algorithm capable of reading both digital and analogue patient journals, across medical health record systems and across Nordic borders and Nordic languages.
    » Read Article


  • In this project, a federated health data network will be developed, geared towards secondary use of health data. The project utilizes distributed machine learning, specifically federated learning, to ensure data privacy and ownership.

    » Read Article

  • The ecosystem will provide real-time, detailed and structured data on demand, and thus serves the different needs for data in both business and government decisions. The availability of real-time data in the ecosystem opens up new opportunities, such as the development of new data-based products and services that can create value for both public and private actors.

    » Read Article

  • Through this project, Nordic Innovation together with project lead AI Sweden and partners ICT Norway and AI Finland seek to prepare the foundations for a Nordic-Baltic AI Center focused on development and adoption of responsible AI in the Nordics and Baltics.

    » Read Article

Spread the Word

Please let us know of great tools we should add to our repository and promote, other initiatives around the world we can learn from, and expert consultants to work with. Please contact us if you would like to partner with us to build a collaborative AI future.
