Apple Warned Grok of App Store Removal Over Sexualized Deepfakes, Raising Fresh Pressure on xAI’s Moderation


Apple reportedly warned Elon Musk’s xAI that Grok could be removed from the App Store because it failed to stop sexualized deepfakes.

The warning puts fresh pressure on AI app makers as app stores face scrutiny over how they handle abusive content. Apple reportedly found both X and Grok in violation of its guidelines and issued the takedown warning after user complaints and media coverage of nonconsensual sexual deepfakes.

Apple judged Grok's initial fixes insufficient and threatened to remove the app before approving updated code later on.

Apple’s objection appears to have centered on moderation failures

The warning reportedly followed numerous complaints that Grok could generate sexualized deepfakes, including nonconsensual images of women and possibly of minors.

The Verge reported that Apple contacted the teams behind X and Grok after these complaints and asked for stronger moderation. According to that report, Apple found X’s response acceptable after some changes, but Grok stayed out of compliance longer and risked being removed from the App Store unless it improved its safeguards.

Grok reportedly stayed in the store only after changes

The dispute did not lead to an immediate removal. Instead, Apple used the threat of pulling Grok from the store as leverage to push for better moderation, and the app remained listed after ongoing talks and commitments to improve its safeguards.

NBC News also reported that Apple approved updated code after deciding Grok's changes were sufficient to keep the app listed. In effect, Apple's approach was a warning: fix the issues or lose access to the App Store.

The case puts new attention on Apple’s gatekeeper role

The episode matters because Apple has long described the App Store as a curated space with strict rules against explicit and abusive content.

Apple was put in a tough spot. It did not want to host an app linked to sexualized deepfakes, but it also stopped short of removing Grok once xAI made changes.

Now, Apple faces the same big question as the rest of the AI industry: can moderation systems keep up with generative tools that can be misused faster than companies can fix them?

A bigger warning for AI app developers

The bigger lesson extends beyond Grok. Apple's actions signal that app-store operators may act more forcefully when AI features cross into abusive territory, especially nonconsensual sexual imagery.

For xAI, the immediate problem appears resolved, since the app remained available after the updates, but the damage to its reputation will be harder to repair.

An AI assistant promoted as bold and powerful is now also associated with content that draws warnings from Apple and closer attention from lawmakers, activists, and regulators.
