
X Reviews Grok Code as Backlash Over Non-Consensual Images Continues

Elon Musk’s X remains locked in an ideological battle over whether users should be able to create non-consensual nudes via its built-in Grok AI chatbot. X users maintain that they should be allowed to do whatever they like, and that X is only being targeted because it allows free speech.

So, to be clear, X is taking a stand against efforts to stop its users from generating fake nudes of people. Which is a function that no one needs, and nobody should have access to.

But Elon clearly sees this as something of value, which probably tells you all you need to know about this debate up front.

Over the past week, several regions have either threatened or enacted restrictions on X over the issue.

Other nations are also considering further action, which could see X lose millions of users, if indeed these lead to regional bans of the app.

But as noted, Musk and his supporters are framing this as an ideological issue, claiming that those seeking to take action against the platform are merely trying to control X, because its free speech approach threatens their existing power structures.

Which is not correct. And given the amount of misinformation on X, the platform’s only real contribution to this debate is rumor and conspiracy theories.

Musk had seemingly hoped to enlist the help of the White House in pushing back against the criticisms, which, again, relate only to Grok’s capacity to generate non-consensual images of people, something that X could restrict, and should be looking to limit.

But while the Trump team is generally in support of Elon’s perspective, it seems that Musk won’t be getting any major trade sanctions to fend off potential bans.

So now, Musk appears to be taking a step back, and looking to address these concerns, rather than dismissing them.

According to the U.K. government and the EU Commission, X has informed them that it’s taking “additional measures” to stop Grok from generating sexualized images of women and children.

X has told both groups that it’s working to align with local laws on content, which may or may not prohibit the generation of deepfake nudes.

And indeed, Elon himself remains defiant, claiming that Grok is already told not to violate local rules.

As Musk posted:

“I’m not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

So it sounds like X, the company, may be looking to mend fences, and avoid any potential fines or restrictions, while Musk maintains his ongoing stance to his supporters that the system effectively monitors itself, and that any illegal content is the fault of the user who generated the image, not the system.

Which is probably not a legal defense (it’s definitely not), and it’ll be interesting to see if and how X looks to address these concerns, and whether any major changes are implemented, before regulatory groups move to the next stage.

I mean, if X ends up getting fined, you can bet that Musk will be on the phone to President Trump straight away, seeking to declare war on every nation that dares to push back.

But it seems unlikely to be a fight that Musk will win, because again, the only fix these groups are asking for is that X update Grok to stop it from generating fake nude images of people in the app.

That’s what this whole fight is about, which seems like a no-brainer, and not something that anyone should be defending in principle.
