How X users can limit Grok’s access to their images amid AI abuse concerns
X users have raised alarm over Grok, the platform’s AI assistant, being used to alter images of women by removing or changing clothing when prompted by third parties.
Critics argue that this practice enables non-consensual sexualised imagery and, in extreme cases involving minors, could result in the creation of material that meets the legal definition of child sexual abuse material (CSAM).
While Grok does not alter images on its own, the fact that other users can prompt the AI to reinterpret publicly posted photos has exposed serious gaps in consent, safety, and accountability.
Understanding what controls do exist is therefore critical.
Disable Grok’s access to your data
The most direct action users can take is to limit Grok’s access to their content and data.
To do this:
- Open X
- Go to Settings and privacy
- Select Privacy and safety
- Tap Grok & Third-party collaborators
Disable options that allow:
- Your posts to be used by Grok
- Your data to be shared with third-party AI collaborators
- Your content to be used for AI training or improvement
This does not prevent other users from prompting Grok with your images, but it does stop the platform itself from ingesting your content for AI development or analysis.
Restrict the visibility of your images
Because AI tools can currently be applied to publicly visible images, reducing visibility is one of the few effective safeguards.
Users should consider:
- Setting their account to protected
- Limiting who can reply to their posts
- Avoiding posting identifiable images publicly
- Removing older images that are no longer necessary
Private or protected posts significantly reduce exposure to AI misuse.
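For users with many years of posts, removing older images by hand is impractical. The X API v2 does allow programmatic deletion of your own posts, and the sketch below shows one way this could be scripted with the third-party `tweepy` library. This is an illustrative sketch under assumptions, not an endorsed tool: it assumes you have an authenticated `tweepy.Client` with write access to your own account (paid API access may be required), and the age cutoff and media check are hypothetical choices you should review before deleting anything.

```python
from datetime import datetime, timedelta, timezone

def is_stale_media_post(created_at, has_media, max_age_days=365, now=None):
    """Return True for posts that carry media and are older than the cutoff.

    Pure helper so the selection rule can be reviewed (and tested) separately
    from any irreversible deletion.
    """
    now = now or datetime.now(timezone.utc)
    return has_media and (now - created_at) > timedelta(days=max_age_days)

def delete_stale_media_posts(client, user_id, max_age_days=365):
    """Page through a user's own timeline and delete old media posts.

    `client` is assumed to be an authenticated tweepy.Client with
    user-context (write) credentials -- a hypothetical setup step
    not shown here. Deletion is irreversible; dry-run first.
    """
    pagination_token = None
    while True:
        resp = client.get_users_tweets(
            user_id,
            max_results=100,
            pagination_token=pagination_token,
            tweet_fields=["created_at", "attachments"],
        )
        for tweet in resp.data or []:
            # Posts with photo/video attachments carry an "attachments" field.
            has_media = bool(getattr(tweet, "attachments", None))
            if is_stale_media_post(tweet.created_at, has_media, max_age_days):
                client.delete_tweet(tweet.id)  # irreversible -- review first
        pagination_token = (resp.meta or {}).get("next_token")
        if not pagination_token:
            break
```

Keeping the age/media rule in a separate pure function means you can print the matching posts first and only call the deletion loop once you are satisfied with the selection.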
Act immediately if AI-generated content crosses a line
If you encounter Grok-generated imagery that:
- Sexualises a person without consent
- Alters clothing in a sexualised way
- Appears to involve a minor
You should:
- Report the content directly on X
- Select categories related to sexual exploitation or unsafe AI use
- Preserve evidence
- Escalate to relevant authorities if minors may be involved
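On the "preserve evidence" step: beyond taking screenshots, it can help to record a cryptographic hash and timestamp for each saved file, so you can later show the material has not been altered since capture. The snippet below is a minimal illustrative sketch of that idea; the file and log names are hypothetical, and this is no substitute for guidance from platform safety teams or law enforcement.

```python
import hashlib
import json
import time
from pathlib import Path

def record_evidence(path, log_file="evidence_log.json"):
    """Hash a saved screenshot/file and append a timestamped record.

    Appends {file, sha256, recorded_at_utc} to a JSON log so the file's
    integrity can be demonstrated later. Illustrative sketch only.
    """
    data = Path(path).read_bytes()
    entry = {
        "file": str(path),
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_at_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    log = Path(log_file)
    records = json.loads(log.read_text()) if log.exists() else []
    records.append(entry)
    log.write_text(json.dumps(records, indent=2))
    return entry
```

Keep the original files unmodified alongside the log; re-hashing a file and comparing against the logged digest confirms it is the same evidence you originally captured.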
IOL News
