Sam Altman made the warning as OpenAI released the latest version of its language AI model, GPT-4. Photograph: The Washington Post/Getty Images

‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence


Sam Altman stresses need to guard against negative consequences of technology, as company releases new version, GPT-4

Sam Altman, CEO of OpenAI, the company that developed the controversial consumer-facing artificial intelligence application ChatGPT, has warned that the technology comes with real dangers as it reshapes society.

Altman, 37, stressed that regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity. “We’ve got to be careful here,” Altman told ABC News on Thursday, adding: “I think people should be happy that we are a little bit scared of this.

“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”

Q&A

AI explained: why do chatbots make errors?


Large language models (LLMs) do not understand things in a conventional sense – and they are only as good, or as accurate, as the information with which they are provided.

They are essentially machines for matching patterns. Whether the output is “true” is not the point, so long as it matches the pattern.

If you ask a chatbot to write a biography of a moderately famous person, it may get some facts right, but then invent other details that sound like they should fit in biographies of that sort of person.

And it can be wrongfooted: ask GPT-3 whether one pound of feathers weighs more than two pounds of steel, and it will focus on the fact that the question looks like the classic trick question. It will not notice that the numbers have been changed.

Google’s rival to ChatGPT, called Bard, had an embarrassing debut when a video demo of the chatbot showed it giving the wrong answer to a question about the James Webb space telescope.

Read more: Seven top AI acronyms explained


But despite the dangers, he said, it could also be “the greatest technology humanity has yet developed”.

The warning came as OpenAI released the latest version of its language AI model, GPT-4, less than four months after the original version launched and became the fastest-growing consumer application in history.

In the interview, Altman said that although the new version was “not perfect”, it had scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT math test. It could also write computer code in most programming languages, he said.

Fears over consumer-facing artificial intelligence, and artificial intelligence in general, focus on humans being replaced by machines. But Altman pointed out that AI only works under direction, or input, from humans.

“It waits for someone to give it an input,” he said. “This is a tool that is very much in human control.” But he said he had concerns about which humans had input control.

“There will be other people who don’t put some of the safety limits that we put on,” he added. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”

Many users of ChatGPT have encountered a machine whose responses can be defensive to the point of paranoia. In a demonstration for the TV news outlet, GPT-4 conjured up recipes from the contents of a fridge.

The Tesla CEO, Elon Musk, one of the first investors in OpenAI when it was still a non-profit company, has repeatedly issued warnings that AI or AGI – artificial general intelligence – is more dangerous than a nuclear weapon.

Musk voiced concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its ethics oversight division. “There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!” Musk tweeted in December. This week, Musk fretted, also on Twitter, which he owns: “What will be left for us humans to do?”

On Thursday, Altman acknowledged that the latest version uses deductive reasoning rather than memorization, a process that can lead to bizarre responses.

“The thing that I try to caution people the most is what we call the ‘hallucinations problem’,” Altman said. “The model will confidently state things as if they were facts that are entirely made up.

“The right way to think of the models that we create is a reasoning engine, not a fact database,” he added. While the technology could act as a database of facts, he said, “that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorize.”

What you get out depends on what you put in, the Guardian recently warned in an analysis of ChatGPT. “We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully.”
