Hackers are trying to copy Google’s Gemini AI by sending it thousands of specially crafted prompts, according to new disclosures from the company reported by several news outlets.
The effort, described as a coordinated attempt to learn how the model works, included campaigns that sent more than 100,000 prompts in an apparent bid to replicate Gemini’s behavior and potentially build a similar system.
Mass prompting used to study how Gemini works
CNET reported that Google has detected ongoing attempts by hackers to probe Gemini with repeated prompts. The goal of this method is to gather enough responses to replicate how the model works or build a comparable system.
NBC News said that one campaign alone sent more than 100,000 prompts to the AI. Google described the activity as “commercially motivated,” meaning the attackers likely want to replicate Gemini’s capabilities for profit or for their own products.
PCMag reported that these campaigns fit a broader pattern known as model extraction. In this process, attackers repeatedly query an AI model, record its responses, and then use the resulting prompt-response pairs to train a new system that mimics the original.
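To make the mechanics concrete, here is a minimal sketch of the data-collection step behind model extraction. The endpoint URL, response schema, and seed prompts are hypothetical placeholders, not Google's API, and a real campaign would operate at a vastly larger scale:

```python
import json
import requests  # assumes the requests library is installed

# Hypothetical target endpoint and seed prompts, for illustration only.
API_URL = "https://example.com/v1/generate"
PROMPTS = ["Explain photosynthesis simply.", "Write a haiku about rain."]

def query_model(prompt: str) -> str:
    """Send one prompt to the target model and return its text reply."""
    resp = requests.post(API_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response schema

def collect_pairs(prompts, out_path="extraction_data.jsonl"):
    """Record prompt-response pairs, the raw training set for a clone."""
    with open(out_path, "a", encoding="utf-8") as f:
        for prompt in prompts:
            reply = query_model(prompt)
            f.write(json.dumps({"prompt": prompt, "response": reply}) + "\n")

if __name__ == "__main__":
    collect_pairs(PROMPTS)
```

The attacker would then fine-tune their own model on the collected pairs, a step commonly called knowledge distillation, which is why rate limits and query monitoring are central defenses against this technique.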
Concerns over cloning and misuse
The scale of these efforts has raised concerns at Google about the risk of cloning advanced AI systems. Although the campaigns did not breach Google’s infrastructure, they were designed to replicate how the model reasons and responds.
These tactics highlight a growing problem for AI companies. As large models become more accessible and more valuable, repeated prompting can harvest enough responses to train a comparable system.
Cloned models could be used in cyberattacks, such as generating malicious code, phishing messages, or scams. The reports also noted that copying advanced AI could lower the barrier for criminals building their own tools.
A wider security issue for AI systems
The reports point to a shift in tactics: attackers are reverse-engineering AI systems through ordinary use rather than breaking into them. Google said it is monitoring the activity and strengthening its protections to curb misuse and keep its models safe.
These campaigns underscore that AI models are now targets of cybercrime, not just instruments of it. As more companies release powerful models, preventing unauthorized copying is becoming a distinct security priority.
Google continues to investigate the activity and harden its defenses as attempts to copy major AI systems grow.