‘Good at tasks, not jobs…’: OpenAI CEO Sam Altman on AI’s impact on jobs at Senate hearing
Altman said at the hearing that one of his biggest fears is labor market disruption, and called on Congress to help deal with the fallout.
At the hearing, he said: “I expect there will be a significant impact on jobs, but exactly what it looks like is very difficult to predict.”
“As our quality of life gets higher and as machines and tools we make can help us live better lives, the bar is raised for what we do,” Altman said. “I’m very optimistic about how great the jobs of the future will be.”
He added: “I think it’s important to understand that GPT-4 is a tool, not a creature, which is easy to get confused. And it’s a tool that people have a great deal of control over in how they use it.”
“And secondly, GPT-4 and other similar systems are good at doing tasks, not jobs. And so you already see people using GPT-4 to do their jobs much more efficiently, helping them with tasks.”
“Now GPT-4, I think, will fully automate some tasks. And it will create new ones that we think will be much better.”
Earlier in March, Altman said in an interview with ABC News that he was “a little scared” of the AI chatbot’s potential and that ChatGPT could “eliminate” many human jobs. “We have to be careful here,” Altman said. “I think people should be happy that we’re a little bit scared of this.”
Returning to the hearing, Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and law, called the displacement of workers “perhaps the worst nightmare” of AI developments, highlighting the importance of training workers in new skills as part of what he called a “looming new industrial revolution.”
Blumenthal also opened the hearing with an audio clip that sounded like his voice but was actually generated by AI tools, with remarks written by ChatGPT and read in a cloned version of his voice. Minnesota Democrat Amy Klobuchar joked with Tennessee Republican Marsha Blackburn about whose state had the better musicians, according to ChatGPT.
The first major Senate hearing on artificial intelligence covered everything from the lighthearted wonders of generative AI to dire warnings about existential threats to society and democracy.
Altman told Congress that government intervention will be critical to mitigate the risks posed by increasingly powerful AI systems. “As this technology advances, we understand that people are concerned about how it could change the way we live. We are too,” said the OpenAI CEO.
He proposed the creation of a US or global agency that would license the most powerful AI systems and have the power to “revoke that license and ensure that safety standards are adhered to”.
His San Francisco-based startup gained a lot of public attention after releasing ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like answers.
What began as a panic among educators over ChatGPT being used to cheat on homework assignments has spread to wider concerns about the ability of the latest generation of “generative AI” tools to mislead people, spread untruths, violate copyright protections and upend some jobs.
And while there’s no immediate sign that Congress will enact sweeping new AI rules, as European lawmakers are doing, societal concerns brought Altman and other tech CEOs to the White House earlier this month and led U.S. agencies to pledge to crack down on harmful AI products that violate existing civil rights and consumer protection laws.
Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could do “significant damage to the world” and that “if this technology goes wrong, it could go pretty wrong.”
But he later suggested that a new regulatory body impose safeguards that would block AI models that could “replicate and exfiltrate themselves into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans to relinquish control.
That focus on a distant “science fiction trope” of superpowered AI could make it more difficult to take action on already existing harms that require regulators to dig deep into data transparency, discriminatory behavior and the potential for deception and disinformation, said a former Biden administration official who co-authored the Blueprint for an AI Bill of Rights.
“It’s the fear of these (super-powered) systems and our lack of understanding of them that’s collectively freaking everyone out,” said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice in the White House Office of Science and Technology Policy. “This fear, which is very unfounded, is a distraction from all the concerns we face now.”
(With input from agencies)