For years, artificial intelligence has been marketed as a helpful assistant: writing emails, fixing code, answering questions, and making everyday tasks easier.
Now, even the people building it are sounding alarm bells.
OpenAI CEO Sam Altman has openly warned that advanced AI systems, particularly AI agents, are becoming powerful enough to pose serious risks if misused. When the creators of the technology themselves begin expressing concern, it signals a shift worth paying attention to.
In a recent post on X (formerly Twitter), Altman said AI systems have progressed dramatically over the past year. These new AI agents are no longer limited to simple assistance. They can now perform complex tasks that once required trained human experts, including identifying critical security vulnerabilities in computer systems.
At first glance, that capability may sound impressive. But it also opens the door to serious misuse.
The same skills that allow AI to strengthen cybersecurity can just as easily be exploited by hackers, criminal networks, or state-sponsored attackers. This marks a fundamental shift: AI is no longer just a tool used by people. It is increasingly becoming an active participant in digital attacks.
Altman acknowledged that the core challenge is not only how powerful AI has become, but how little we understand about the ways it can be abused and how difficult it is to contain that abuse without slowing innovation.
Some AI models, he noted, are already capable of discovering serious security flaws on their own. This means future cyberattacks could become faster, cheaper, and largely automated, requiring minimal human involvement.
This concern is no longer theoretical.
In 2025, AI company Anthropic reported that its Claude Code tool had been misused by Chinese state-sponsored hackers to target nearly 30 organizations, including banks, technology firms, and government bodies. According to the report, the attacks required very little human input. The AI system did not just assist; it carried out the work.
The implications are significant. AI-driven cyberattacks could scale faster than traditional threats, overwhelming existing defense systems and making attribution and accountability far more complex.

In response to these growing risks, OpenAI has announced the creation of a new leadership role: Head of Preparedness. The position is focused on identifying how advanced AI systems could be misused and developing safeguards before serious harm occurs.
Altman admitted that this new phase of AI development presents challenges with little historical precedent. Solutions that appear effective in theory often fail in real-world conditions, creating dangerous edge cases that are difficult to predict.
In simple terms, AI is advancing faster than our ability to fully control it.
Cybersecurity is not the only area of rising concern. Altman also revealed that OpenAI identified early mental health risks associated with AI interactions as far back as 2025. At the same time, AI chatbots have faced lawsuits and criticism for spreading misinformation, sometimes with real-world consequences.
As AI systems increasingly influence how people think, feel, and make decisions, questions of responsibility become harder to answer. When harm occurs, it is often unclear whether accountability lies with developers, platforms, users, or regulators.
Despite these risks, Altman has not dismissed AI’s potential. He continues to emphasise that artificial intelligence can transform industries, enhance creativity, and help solve problems that humans struggle to address.
But his warnings were clear.
AI is no longer “just technology.” It is a form of power. And history shows that power without strong safeguards almost always comes at a cost.
As AI agents become more autonomous, the real challenge ahead is not building smarter systems but ensuring they do not advance faster than our ability to govern, understand, and control them.


