2023 was a record year for AI incidents
The start of this decade saw a notable surge in AI incidents, and projections suggested that 2023 could set a new record.¹ Now that the year has concluded, let's look back at what actually happened, examining the AI incidents and the trends throughout the year.
Key insights
- In 2023, there were 121 recorded AI incidents, a 30% increase over the previous year. This figure accounts for one-fifth of all AI incidents documented from 2010 to 2023, making 2023 the year with the most AI incidents on record.
- OpenAI led in the number of AI incidents — it was involved in over a quarter of all cases, mainly as the developer and/or deployer (the entity responsible for implementing and making the system operational). Microsoft followed with 17 mentions, 10 of them as the deployer and 7 as the harmed party. Google and Meta rank third and fourth with 10 and 5 AI incidents, respectively.
- Various public figures were the subjects of AI incidents, including deepfake videos and AI-generated graphic images. Influential figures affected in 2023 include Pope Francis²,³, Tom Hanks⁴, Scarlett Johansson⁵, Joe Rogan⁶ and Andrew Tate⁷.
- Influential politicians were among the targets of AI incidents, including former US presidents Donald Trump³ and Barack Obama⁸, current US president Joe Biden³, and UK Labour Party leader Keir Starmer⁹,¹⁰. In 2024, a record year for elections¹¹, we may see a rise in the number of AI incidents involving politicians as a means to influence the electoral process.
- The number of registered AI incidents declined toward the end of 2023, with 14 and 22 incidents reported in the third and fourth quarters, respectively. This contrasts sharply with the 54 and 33 incidents reported in the first and second quarters. After the upsurge in AI incidents at the end of 2022 and the beginning of 2023¹, it appears that companies may be starting to implement measures to prevent such incidents.
Methodology and sources
We retrieved data on AI incidents from 2010 to 2023 from the AI Incident Database on January 16, 2024. For this study, we aggregated incident counts annually, and quarterly for the year 2023. Data on alleged deployers, developers, and harmed parties were aggregated per entity, following the categorization outlined by the AI Incident Database.
In the AI Incident Database, a "deployer" refers to an entity or organization responsible for implementing or putting into operation the AI systems involved in an incident. A "developer" refers to the entity or organization that creates, designs, and constructs the AI software or system implicated in an incident. Note that for the company analysis, incidents mentioning ChatGPT were categorized as OpenAI-related.
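The aggregation described above can be sketched in a few lines of Python. This is a minimal illustration using made-up sample records (the dates and entities below are hypothetical, not real incidents from the database), showing quarterly counts and per-entity counts with ChatGPT mentions folded into OpenAI:

```python
from collections import Counter
from datetime import date

# Hypothetical sample records, loosely mimicking fields from an
# AI Incident Database export. These are illustrative only.
incidents = [
    {"date": date(2023, 1, 12), "deployer": "OpenAI",    "developer": "OpenAI"},
    {"date": date(2023, 2, 3),  "deployer": "Microsoft", "developer": "OpenAI"},
    {"date": date(2023, 5, 20), "deployer": "Google",    "developer": "Google"},
    {"date": date(2023, 11, 8), "deployer": "ChatGPT",   "developer": "ChatGPT"},
]

def quarter(d: date) -> str:
    """Map a date to a quarter label, e.g. '2023-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

def normalize(entity: str) -> str:
    """Fold ChatGPT mentions into OpenAI, as in the company analysis."""
    return "OpenAI" if entity == "ChatGPT" else entity

# Quarterly incident counts for 2023.
quarterly = Counter(quarter(i["date"]) for i in incidents if i["date"].year == 2023)

# Per-entity counts across the deployer and developer roles.
by_entity = Counter()
for i in incidents:
    for role in ("deployer", "developer"):
        by_entity[normalize(i[role])] += 1

print(quarterly)   # counts per quarter
print(by_entity)   # counts per entity, ChatGPT merged into OpenAI
```

Running this on the real database export would simply mean loading the full set of records in place of the sample list; the grouping logic stays the same.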
For the complete research material behind this study, visit here.