Since the launch of ChatGPT in 2022, the appetite for generative artificial intelligence (Generative AI) investment has been described as “insatiable.” Within the first six months of 2023, funding for Generative AI-based tools and solutions leaped to more than five times what it was in 2022, with venture capital (VC) firms investing heavily in this new sector. The global Generative AI market is expected to reach US$200.73 billion by 2032.
Generative AI refers to algorithmic systems that can create (or “generate”) new content—including audio, images, text, and even computer code. Like any new technology, these systems pose potential risks, including, but not limited to, the amplification of existing societal biases and inequities, undermining the right to privacy, and accelerating the spread of mis- and disinformation.
It is therefore crucial for companies and investors to embrace a rights-respecting approach to the design, development and deployment of Generative AI technology.
The responsibility to address and mitigate these risks, as well as prevent actual harms, lies not only with states and the companies developing Generative AI products, but also with investors, including the VC firms that are funding many of the largest Generative AI start-ups.
According to the United Nations Guiding Principles on Business and Human Rights (UN Guiding Principles), companies and investors have a responsibility to respect all human rights wherever they operate in the world and throughout their operations.
Yet, as this research undertaken by Amnesty International and the Business & Human Rights Resource Centre demonstrates, leading VC firms are largely failing to meet their responsibility to address risks and actual harms, including by failing to conduct human rights due diligence.
Amnesty International USA and the Business & Human Rights Resource Centre assessed the practices of the 10 venture capital funds that had invested the most in Generative AI companies, along with the two start-up accelerators with the most active investments in such companies. They first reviewed the publicly available information about each VC firm and accelerator’s human rights policies, then sent detailed letters to the General Counsels or other senior partners of each of these funds to interrogate the findings.
This analysis showed that leading VC firms and start-up accelerators fall critically short of their responsibility to conduct human rights due diligence when investing in Generative AI start-ups. Our key findings include:
- Only three out of the 12 firms mention a public commitment to considering responsible technology in their investments
- Only one out of the 12 firms mentions an explicit commitment to human rights
- Only one out of the 12 firms states that it conducts due diligence for human rights-related issues when deciding to invest in companies and when selecting Limited Partners (LPs), the investors who provide the funding for venture capital firms’ own investment funds
- Only one of the 12 firms currently provides support to its portfolio companies on responsible technology issues, though two others are in the process of implementing such support