In 2023, ChatGPT maker OpenAI was breached, but the source code and customer data were not accessed, according to the company.
The New York Times reported on July 4, 2024, that OpenAI experienced an undisclosed breach in mid-2023.
The NYT notes that the attacker did not access the systems that house and build the AI, but did steal conversations from an employee forum. OpenAI did not publicly disclose the incident nor inform the FBI because, it maintains, no data about customers or partners was stolen, and the breach was not considered a threat to national security. The firm concluded that the attack was the work of a single individual with no known ties to any foreign government.
Nevertheless, the incident led to internal staff discussions over how seriously OpenAI was addressing security concerns.
“After the breach, Leopold Aschenbrenner, an OpenAI technical program manager, focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets,” writes the NYT.
He was recently fired, ostensibly for leaking information (but more likely because of the memo). Aschenbrenner has a somewhat different take on the official leak story. In a podcast with Dwarkesh Patel (June 4, 2024), he said: “OpenAI claimed to employees that I was fired for leaking. I and others have pushed them to say what the leak was. Here’s their response in full: Sometime last year, I had written a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI. I shared that with three external researchers for feedback. That’s the leak… Before I shared it, I reviewed it for anything sensitive. The internal version had a reference to a future cluster, which I redacted for the external copy.”
Clearly, OpenAI is not a happy ship, with differing opinions on how it operates, how it should operate, and where it should go. The concern is not so much about ChatGPT (which is gen-AI) but about the future of AGI (artificial general intelligence).
The former ultimately reworks knowledge it has learned (largely from scraping the internet), while the latter will be capable of original reasoning. Gen-AI is not considered a threat to national security, although it may increase the scale and sophistication of current cyberattacks.
AGI is a different matter. It will be capable of developing new threats in cyber, the kinetic battlefield, and intelligence – and OpenAI, DeepMind, Anthropic and other leading AI firms are racing to be first to market. The worry over the 2023 OpenAI breach is that it may indicate a lack of security preparedness that could seriously jeopardize national security in the future.
“A lot of the drama comes from OpenAI really believing they’re building AGI. That isn’t just a marketing claim,” said Aschenbrenner, adding, “What gets people is the cognitive dissonance between believing in AGI and not taking some of the other implications seriously. This technology will be incredibly powerful, both for good and bad. That implicates national security issues. Are you protecting the secrets from the CCP? Does America control the core AGI infrastructure or does a Middle Eastern dictator control it?”
As we get closer to achieving AGI, the cyber threats will shift from criminals to elite nation-state attackers – and we see repeatedly that our security is inadequate to protect against them. On the back of a relatively minor breach at OpenAI (and we should assume it was no worse than the firm told its employees), Aschenbrenner raised general and genuine concerns over security – and for that, it seems, he was fired.