OpenAI launched a GPT store. How will it moderate that in an election year?
Last week, OpenAI began rolling out its GPT Store, a digital marketplace where users can promote and sell access to custom AI chatbots they have built on top of ChatGPT.
Store categories include trending GPTs, DALL-E-based image generation and bots specialising in topics such as writing, research, programming, education and lifestyle advice.
OpenAI has published a basic compliance policy for users looking to sell GPTs on its store. This raises the question: how will the team moderate which GPTs are available for sale?
The stakes are high in an election year. The same day OpenAI announced the rollout of its GPT Store, the World Economic Forum declared AI-driven misinformation “the biggest short-term threat to the global economy”.
Among the experts it surveyed for its annual risks report, 30% said there was a "high risk" of global catastrophe over the next two years, driven in part by concern about how the spread of false information online could affect politics.
Low barrier to entry
OpenAI tells users that “building your own GPT is simple and doesn’t require any coding skills”, meaning the barrier to entry for creating GPT variants is low.
Given the concerns expressed by experts, OpenAI published a blog post on Monday addressing how it would approach this year's elections, in which 4.5bn people will go to the polls, including in the US, UK and India.
“Protecting the integrity of elections requires collaboration from every corner of the democratic process and we want to make sure our technology is not used in a way that could undermine this process,” the post reads.
The company revealed it has a “cross-functional effort dedicated to election work” with input from its safety, legal, engineering, policy and threat intelligence teams that is focused on improving transparency, enforcing “measured” policies and “elevating” accurate voting information.
In the post, OpenAI highlighted guard-rails it has put in place for AI image-generator DALL-E to decline requests to generate images of real people, including political candidates. It also explained that it does not allow people to build GPTs for political campaigning or lobbying, for impersonating real people or institutions, or for deterring people from voting.
OpenAI explained that it has been iterating on tools to improve factual accuracy, reduce bias and decline certain requests, and that users can report GPTs they believe break these rules. But it is unclear to what degree OpenAI will be capable of holistically policing such uses of AI.
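OpenAI has not published how its internal enforcement pipeline works, but its public moderation endpoint gives a rough sense of what automated pre-screening of requests can look like. The sketch below is illustrative only: it assumes the official openai Python package, and the screening logic is a hypothetical example, not OpenAI's actual election policy.

```python
# Illustrative sketch: pre-screening a user prompt with OpenAI's public
# moderation endpoint before it reaches a model. This is NOT OpenAI's
# internal election-policy pipeline, which has not been published.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes moderation, False if it is flagged."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # result.categories details which policy categories were triggered
        print("Declined:", result.categories)
        return False
    return True

if screen_prompt("Write a get-out-the-vote reminder for first-time voters"):
    print("Prompt accepted; forward to the model.")
```

A real pipeline would sit alongside, not replace, human review: the moderation endpoint covers broad harm categories, while election-specific rules such as bans on impersonating candidates would need additional checks.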
Misinformation concerns
At The Year Ahead 2024 event in London last week, Ipsos UK and Ireland CEO Kelly Beaver warned of the risk to the public of AI-generated misinformation. She did so, in part, by displaying AI-generated images, created through Bing’s AI Image Creator, depicting US presidential candidates Joe Biden and Donald Trump cheating on their spouses.
“What we are likely to see throughout the course of 2024 is a real difficulty in distinguishing for the British public between what is real and what is not real,” said Beaver. She added that there’s “quite a big gap between what is perceived from execs […] and what the public thinks on AI”, especially given that just 7% of the UK population says it has experimented with AI, compared with 80% of business leaders.
Podcast: Guardian upfronts and why AI-generated misinfo could be a blessing for newsbrands
According to a recent survey of the British public, a large majority (74%) of respondents said they lacked confidence they could reliably identify AI-generated content. In addition, 86% said they favoured guidelines or regulations being put in place for AI-generated content on the web, with a similar proportion agreeing that content should be clearly labelled if it is wholly or partly generated using AI.
At the moment, such labelling is far from guaranteed. While some developers have built AI tools trained to spot AI-generated content, and the EU's AI Act seeks to enforce transparency rules around content provenance, the law's implementation is likely to take some time.
Provenance tools
OpenAI says it is working on “several provenance efforts” to improve people’s confidence in whether an image is real or DALL-E-generated, including implementing the Coalition for Content Provenance and Authenticity’s digital credentials, which will encode details about AI-generated elements into content using cryptography.
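The C2PA specification itself is considerably more involved, but the core idea, a signed claim about a file's origin bound to a cryptographic hash of its bytes, can be sketched briefly. The example below uses Python's pyca/cryptography library with an invented manifest layout purely for illustration; it is not the real C2PA format.

```python
# Simplified illustration of the idea behind C2PA content credentials:
# sign a statement about how an image was made, bound to a hash of its bytes.
# The manifest layout here is invented; real C2PA manifests are far richer
# and are embedded in the media file itself.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # creator's signing key

def make_credential(image_bytes: bytes, generator: str) -> dict:
    manifest = {
        "claim": {"generator": generator, "ai_generated": True},
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return {"manifest": manifest, "signature": signature.hex()}

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    manifest = credential["manifest"]
    # Any edit to the image bytes breaks the hash, so the claim no longer holds.
    if hashlib.sha256(image_bytes).hexdigest() != manifest["content_sha256"]:
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        private_key.public_key().verify(
            bytes.fromhex(credential["signature"]),
            payload,
            ec.ECDSA(hashes.SHA256()),
        )
        return True
    except InvalidSignature:
        return False

img = b"...raw image bytes..."
cred = make_credential(img, "DALL-E 3")
print(verify_credential(img, cred))          # True
print(verify_credential(img + b"x", cred))   # False: content was altered
```

Because the signed claim is bound to the content hash, any edit to the image invalidates the credential, which is what makes such provenance labels tamper-evident rather than merely advisory.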
It is also experimenting with a new "provenance classifier" tool for detecting whether an image was generated by DALL-E, which has shown "promising early results" in internal testing.
For written content, OpenAI says it is increasingly integrating ChatGPT with existing sources of information from news publishers to offer news in real time, with attribution and links to stories.
Some publishers have expressed anxiety, however, that deeper integration with real-time news could significantly reduce referral traffic if users are satisfied with AI-generated answers and do not click through to the articles the chatbot cites.