Expert on government-commissioned AI threat report: A lot of hype, but a good plan

TheStreet spoke with the Director of the Institute for Data Science at New Jersey Institute of Technology about the report.

Ian Krietzberg

Fast Facts

  • Gladstone AI last week published a nearly 300-page government-commissioned report detailing the “catastrophic” risks posed by AI.
  • TheStreet sat down with David Bader, the Director of the Institute for Data Science at New Jersey Institute of Technology, to break down the report.
  • “Certainly there’s a lot of hype,” Bader said, “but there’s also a lot of real-world threats.”

Last week, Gladstone AI published a report — commissioned for $250,000 by the U.S. State Department — that detailed the apparent “catastrophic” risks posed by untethered artificial intelligence technology. It was first reported on by Time.

The report, the result of more than 200 interviews with AI researchers, leading AI labs and prominent AI executives, was compiled over the past year by Gladstone’s Edouard Harris, Jeremie Harris and Mark Beall. It warns that, if it remains unregulated, AI could “pose an extinction-level threat to the human species.”

The report, which TheStreet reviewed in full, focuses on two key risks: the weaponization of AI and a potential loss of human control. In terms of weaponization, the report warns that models can be used to power everything from mass disinformation campaigns to large-scale cyberattacks, going on to suggest that advanced, future models might be able to assist in the creation of biological weaponry.

The risk of control, according to the report, is based on future, highly advanced (and theoretical) models that, in achieving “superhuman” capabilities, “may engage in power-seeking behaviors,” becoming “effectively uncontrollable.”

Though the report says there is evidence to support this claim, it does not explore that evidence.

The lines of effort

To address these two risks, the report lays out five lines of effort.

The government, according to the report, should establish an interim set of safeguards, including the creation of an AI Observatory to monitor developments in the space, a set of responsible safeguards for developers and an AI Safety Task Force to enforce those safeguards. This first line of effort also calls for greater control over the advanced AI supply chain.

The report also calls for the funding of open AI safety research, as well as the establishment of deployment safety standards.

The report’s fourth line of effort calls for the establishment of an AI regulatory agency, which would have licensing powers over the companies developing the tech.

And the fifth line of effort calls for the creation of international safeguards and an international rule-making and licensing agency that would oversee and monitor AI projects around the globe.

The AGI of it all

The glaring hole in the report — as pointed out by a number of experts on X — is that it is predicated on catastrophic risks stemming from the possible future creation of artificial general intelligence (AGI), a theoretical AI system that would have human-adjacent knowledge.

“There’s no science in X risk.” — Dr. Suresh Venkatasubramanian

AGI, however, does not exist, and many experts do not believe it will ever be possible, especially considering that researchers have yet to understand the whys and hows of human intelligence, cognition or consciousness; replicating that poorly understood human reality is, therefore, a significant challenge.

“It’s a ploy by some. It’s an actual belief by others. And it’s a cynical tactic by even more,” Dr. Suresh Venkatasubramanian, an AI researcher and professor who in 2021 served as a White House tech advisor, said of the so-called AI extinction risk last year. “It’s a great degree of religious fervor sort of masked as rational thinking.”

“I believe that we should address the harms that we are seeing in the world right now that are very concrete,” he added. “And I do not believe that these arguments about future risks are either credible or should be prioritized over what we’re seeing right now. There’s no science in X risk.”

And though current language models might seem intelligent, researchers have dismissed that seeming intelligence as nothing but a mirage; the models are, in actuality, predictive generators trained on an enormous amount of content.

The report, acknowledging that it views AGI as the driver behind all this AI risk — and making no mention of the many current harms caused by the tech — even states that while companies including OpenAI, Google DeepMind, Anthropic and Nvidia have suggested AGI is only a few years away, “they may also face incentives to exaggerate their near-term capabilities.”

The report says that, in an effort to address this problem, it asked a handful of technical sources what the odds are of AI leading to “global and irreversible effects.” The lowest estimate was 4%; the highest was 20%.

But Harris later added in a post on X that he and his team surveyed only “5-10” people, saying that the “20% was slightly more anomalous, but the folks working on the most cutting edge systems gave higher numbers in every case.”

The report goes on to acknowledge that, with regard to a potential loss of control, “there is no direct empirical evidence that a future AGI system will behave dangerously,” adding that, on the flip side, there is no evidence to suggest that AGI will behave safely, either.

Without that evidence, the report instead relies on “theoretical arguments guided by experiments on today’s less capable AI systems,” an approach that it says has “significant limitations.”

“The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist, told the New Yorker. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”

On the same day the report was published, Beall left Gladstone to launch what he called “the first AI safety Super PAC,” according to VentureBeat. His plan is to “run a national voter education campaign on AI policy.”

He told VentureBeat that the PAC has already secured initial investments and plans to raise millions of dollars in the coming weeks and months.

Sifting through the hype

While many dismissed the report for resting on arguments unsupported by evidence or science, others lauded it, not for its fear of nonexistent AGI, but for its action-oriented suggestions, which could allow the government to better rein in the industry.

The nonprofit Control AI said that it is “heartening to see this taken seriously at the highest levels,” adding that “we should be well-prepared, and regulations and guard rails should be in place to ensure that it only benefits humanity.”

Ed Newton-Rex, the CEO of the nonprofit Fairly Trained, called the report “extremely important” for recommending that the government “act fast” on AI.

TheStreet sat down with David Bader, the Director of the Institute for Data Science at New Jersey Institute of Technology, to discuss the report. Bader acknowledged that “certainly there’s a lot of hype, but there’s also a lot of real-world threats.”

“Now is the time that the world has to have these conversations.” — Dr. David Bader

With the technology moving as quickly as it is, he said, people are hearing everything from promises of an AI-fueled utopia to warnings of an AI-fueled apocalypse.

“My thought is that there are some real concerns to think about, but we do have some time to think about it,” he said. “This report raises some interesting directions for trying to understand what to do.”

According to Bader, the report’s recommendation to safeguard the supply chains behind AI technology, in an effort to shore up what it found to be lacking security at the labs building the tech, is a good one.

He did say, however, that there are a number of other risks and harms posed by AI that are important not to ignore, including algorithmic bias, harms stemming from deepfake creation and mounting problems concerning disinformation, cyberattacks and self-driving vehicles.

He went on to acknowledge the national security risks laid out in the report, as well as the value of the five lines of effort, saying that they represent a number of good ways forward in terms of policies and laws regarding the mitigation of AI risk.

Still, he said that he is pessimistic that the government will be able to “put the genie back in the bottle and control AI,” even through laws and regulations. “I think this regulation is probably going to be one of the hardest regulations to create,” he added, citing the difficulty inherent in the report’s lofty goals of international regulatory efforts.

But when it comes to the AGI of it all, Bader isn’t sold.

“I’m still a little bit leery that that’s something that we’ll achieve. I think the hype over AI at the moment … we see it isn’t a panacea of excellent and fantastic information,” Bader said. “It hallucinates a lot. There’s a lot of bias. We’re getting there but I still think it’s a long way off before we see AGI.”

“There is still a lot of hype with AI but it’s getting better and better every day,” he added. “Now is the time that the world has to have these conversations.”

Why not shut it down?

In this conversation about the risk of world-destroying AI, stemming from the report, I asked Bader: “Why not shut it down?”

“Every technology, every basic and foundational technology we create, can be used for good purposes and it can be used for nefarious purposes,” Bader said. “So whether it’s a ballpoint pen, or whether it is a weather satellite or whether it’s a new medicine, everything that we create can do good or do harm.”

AI, he said, may be able to help humanity mitigate climate change (an effort some companies are already exploring), help solve geopolitical crises or help feed populations around the world.

“When we look at these technologies, I don’t think the right thing to do is shut it down. If we did that we would be left in this country without automobiles, without electricity, without lightbulbs, without all the technology that got us to where we are today,” Bader said.

“I think technology has a way of making lives better. So I’m more of an optimist that we should continue developing these technologies, but then we have to understand how to mitigate the risks and reduce the potential bad uses of that technology.”

BY IAN KRIETZBERG

Ian Krietzberg is a tech reporter for TheStreet. He covers artificial intelligence companies, safety and ethics extensively. As an offshoot of his tech beat, Ian also covers Elon Musk and his many companies, namely SpaceX and Tesla.

https://www.thestreet.com/technology/expert-on-government-commissioned-ai-threat-report-a-lot-of-hype-but-a-good-plan

David A. Bader
Distinguished Professor and Director of the Institute for Data Science

David A. Bader is a Distinguished Professor in the Department of Computer Science at New Jersey Institute of Technology.