Harris to Meet with CEOs on Artificial Intelligence Risks


WASHINGTON – Vice President Kamala Harris will meet with the CEOs of four major companies developing artificial intelligence on Thursday as the Biden administration rolls out a set of initiatives designed to ensure the rapidly evolving technology improves lives without endangering people's rights and safety.

The Democratic administration plans to announce a $140 million investment to create seven new AI research institutes, administration officials told reporters during a preview of the effort.

In addition, the White House Office of Management and Budget is expected to release guidance in the coming months on how federal agencies can use AI tools. There will also be an independent commitment from top AI developers to participate in a public evaluation of their systems in August at the DEF CON hacker convention in Las Vegas.

Harris and administration officials plan on Thursday to discuss the risks they see in the current development of AI with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI. The message from government leaders to business is that the companies have a role to play in reducing risk and can work with the government.

President Joe Biden noted last month that AI can help fight disease and climate change, but could also harm national security and disrupt the economy in destabilizing ways.

The release of the ChatGPT chatbot this year has intensified debate about AI and the government's role in overseeing the technology. Because AI can generate human-like writing and fake images, it raises ethical and societal concerns.

OpenAI, which developed ChatGPT, has kept secret the data on which its AI systems were trained. This makes it difficult for people outside the company to understand why ChatGPT produces biased or false responses to requests or to address concerns about the theft of copyrighted works.

Companies worried about being responsible for something in their training data also might not have the incentive to track it properly, said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

“I think it might not be possible for OpenAI to break down all of its training data to a level of detail that would be really helpful in terms of some of the concerns around consent, privacy, and licensing,” Mitchell said in an interview on Tuesday. “From what I know of tech culture, it just doesn’t happen.”

Theoretically, at least, some sort of disclosure law could compel AI vendors to open their systems to closer scrutiny by third parties. But with AI systems built on previous models, it won’t be easy for companies to offer greater transparency after the fact.

“I think it’s really going to be up to governments whether that means throwing away all the work you’ve done or not,” Mitchell said. “Of course, I imagine that at least in the United States, the decisions will lean towards the companies and support the fact that this has already been done. It would have such massive ramifications if all of these companies were to basically throw away all this work and restart.”


Joanna Swanson

Joanna Swanson is Europe correspondent at the Thomson Reuters Foundation, based in Brussels, covering politics, culture, business, climate change, society, economies and inclusive tech. With a specific focus on breaking news, she has covered some of the world's most significant stories.