Sam Altman’s world tour hopes to reassure AI Doomers | WIRED
Excitement around the London appearance of OpenAI CEO Sam Altman could be felt in the queues outside the University College London building ahead of his talk on Wednesday afternoon. Hundreds of eager students and admirers of OpenAI's chatbot ChatGPT came to see the UK leg of Altman's world tour, which is scheduled to take in around 17 cities. This week he visited Paris and Warsaw; last week he was in Lagos. Next, he heads to Munich.
But alongside the queue stood a small group of protesters who had traveled to express their concern that AI is advancing too fast. "Sam Altman is willing to bet humanity on the hope of some supernatural utopia," one protester shouted through a loudspeaker. Ben, another protester who declined to share his last name in case it affected his job prospects, is also worried. "We are particularly concerned about the future development of AI models that could endanger humanity."
Addressing an audience of nearly 1,000 people, Altman did not seem fazed. Dressed in an edgy blue suit with green patterned socks, he spoke in succinct answers, always straight to the point, and his tone was upbeat as he explained how he thinks AI can revive the economy. "I am delighted that this technology can deliver the productivity gains that have been lacking in the past few decades," he said. While he did not mention the protest outside, he did acknowledge concerns about how AI could be widely used to spread misinformation.
"Humans are already very good at creating misinformation, and maybe GPT models make this easier. But that's not what I'm afraid of," he said. "I think one thing that will be different [with AI] is the interactivity, personalization, and persuasiveness of these systems."
While OpenAI plans to build ChatGPT in a way that makes it refuse to spread misinformation, and plans to create monitoring systems, it will be difficult to mitigate these harms once the company releases open source models to the public, as it announced a few weeks ago it intends to do. "The techniques we can apply to our own systems at OpenAI are not going to work the same way there," he said.
Despite that caveat, Altman said it is important that artificial intelligence not be unduly controlled while the technology is still evolving. The European Parliament is currently debating legislation called the AI Act, whose rules would shape how companies can develop such models and could create an AI office to monitor compliance. The UK, however, has decided to spread responsibility for AI among existing regulators, including those governing human rights, health and safety, and competition, rather than creating a dedicated watchdog.
"I think it's important to strike the right balance here," Altman said, referring to the debates currently underway among policymakers around the world about how to craft rules for AI that protect society from potential harm without restricting innovation. "The correct answer is probably something between the traditional European-UK approach and the traditional American approach," he said. "I hope this time we can all work it out together."