Australia’s financial regulator has warned banks that advanced artificial intelligence could lead to larger, faster, and more difficult-to-control cyber attacks. This puts extra pressure on banks to improve how they manage AI-related risks.
The Australian Prudential Regulation Authority (APRA) told banks on Thursday that the industry is falling behind the rapid pace of AI development, and current information-security measures are struggling to keep up.
APRA says banks are lagging behind the pace of AI change
In its letter to banks, APRA warned that the rapid growth of AI could pose an increasing threat to Australia’s financial sector.
According to Reuters, the regulator specifically warned about frontier AI models such as Anthropic’s Claude Mythos, saying these tools could help malicious actors find weaknesses and further increase the probability, speed, and scale of cyber attacks.
This warning matters because APRA is not just talking about possible future risks. The regulator said its own review found that much of the industry’s information security is not keeping up with how fast AI is changing.
The Economic Times also reported that APRA told banks many firms rely too heavily on AI vendor presentations and product summaries without closely examining the risks these tools might introduce.
Mythos becomes part of the financial-sector risk debate
The warning also puts fresh attention on Anthropic’s Claude Mythos, which Reuters said has high-level coding capabilities and has already raised concern among experts because of its potentially unprecedented ability to identify cybersecurity vulnerabilities.
Anthropic launched Claude Mythos Preview under Project Glasswing, a tightly restricted access program that includes major technology firms such as Amazon, Microsoft, Nvidia, and Apple. Anthropic did not immediately respond to Reuters’ request for comment.
By naming a specific advanced AI model, APRA’s message is more than just a general call for better cybersecurity.
It shows that Australian regulators now see the latest AI systems as a direct risk to the financial sector, especially since these systems can help attackers find weaknesses faster than older tools could.
Boards and security teams are being told to do more
APRA said it heard “clear recognition” from banks that the sector needs a “step change in cyber practices” and ongoing improvements to protect IT assets in an “evolving threat environment.”
APRA noted many bank boards are still building the technical skills needed to address AI-related risks and provide proper oversight.
The regulator also said that while banks have strict security procedures, some controls were not designed to keep up with how quickly AI is advancing.
This means the real issue is not just having cyber policies, but making sure those policies are strong enough for a world where AI can speed up both finding and exploiting weaknesses.
Banks say they are investing heavily, but the pressure is rising
The Australian Banking Association pushed back against suggestions that the sector is unprepared. Association chief executive Simon Birmingham said banks continually review their cyber-risk measures and are ready to handle new AI technologies.
He also said Australian banks have strong cybersecurity defenses and spend “billions each year” to keep their systems safe and protect against threats.
Still, the warning comes as AI’s credit and strategic implications for the sector grow. S&P Global also warned on Thursday that AI will affect the credit ratings of Asia-Pacific financial institutions over the next one to five years.
S&P noted that many banks in the region have big technology budgets that could help reduce risks and lower costs, but the overall impact on the financial sector may not be the same everywhere.
The bigger message from Canberra is that AI is no longer just about making banks more productive. It is now also a test of security and governance.