Yesterday, on May 16, the Praytell AI Team tuned in to the Senate committee hearing where OpenAI's Sam Altman testified about AI tools, their potential impact, and the need for regulation. The discussion primarily focused on understanding the tools and assessing the challenges of regulation.
The biggest takeaway from the hearing is that Altman—alongside Christina Montgomery, IBM's chief privacy and trust officer—called for robust government regulation of companies developing AI tools with Large Language Models.
It’s rare for an industry to appear before Congress and plead to be regulated.
In the hearing, Altman called for the following regulation:
- A new government agency charged with licensing AI models against government standards, with the power to revoke licenses
- Safety standards for AI models, including required safety testing
- Independent audits by outside experts
There were also a few areas of discussion that have a potential impact on our work in the Communications industry, and which we’ll continue to track:
Who Owns The Data?
Altman avoided answering questions about the ownership of training data for AI models. However, he did suggest that individuals might be allowed to opt out of having their data used for AI training. It’s unclear how this would work.
How Does Advertising Play Into This?
AI tools have the potential to radically shake up modern advertising, and AI-generated creative may be the least of it: AI chat can personalize content and communicate with you directly in a highly persuasive way. Altman said "hyper-targeted advertising is definitely coming," along with the risks that come with it. Once advertising persuasion makes its way onto chat platforms, there are big implications for how we make consumer decisions, run elections, and process disinformation.
The question was also raised about using ad-supported models for AI tools — essentially, making tools more addictive and time-consuming in exchange for ad impressions. Altman said there were downsides to that approach (including server capacity costs) and that he doesn't plan to use an ad model for OpenAI's tools.
How Does This Impact The News Media?
Questions probed how ChatGPT-style tools will change the ways people find and consume news, potentially exacerbating the decline of local newsrooms. AI tools could "train" on local news stories only for people to receive summaries of those stories via chatbot, with no traffic or other benefit flowing back to the local news org. Altman acknowledged the importance of the issue but offered no concrete answer.
This dynamic plays into a larger issue with the future of media that Praytell is monitoring closely. Last week, Google unveiled its own AI chat-driven search features, which answer queries in natural language with embedded hyperlinks. This follows on the heels of Bing integrating ChatGPT into its search.
There's a risk of major disparity in how media sources are treated in this new search/surfacing interface.
One possible outcome: media giants like the NYTimes, CNN, and Fox dominate the link-out results, leaving fewer opportunities for smaller outlets that depend on search traffic. This should continue to be a big topic in the months ahead.
Do The Regulators Know Enough?
A striking feature of the hearing, one that also surfaced during the recent questioning of TikTok's CEO, is that senators over the age of 50 (with many of those leading the hearing well over 70) appeared ill-equipped to ask probing questions about AI as it relates to the modern web and tech industry.
There's a risk that if senators and senior government officials are operating on limited information, AI companies could more easily shape regulation to benefit themselves.
Praytell will continue to monitor AI news as it impacts our work.