When Visibility Becomes a Punishment

Lately, I’ve been feeling deeply uneasy about where technology seems to be heading—especially on Twitter. There’s one issue I can’t shake off anymore: digital harassment, non-consensual imagery, and, most disturbing of all, how easily this slides into child exploitation. What I’m writing here isn’t neutral or detached. It comes from what I’ve personally seen, the anger it’s stirred in me, and what I’ve read along the way. And the more I look at it, the clearer it becomes: when AI is left without ethical limits, it doesn’t just get sloppy. It becomes genuinely dangerous.
Picture this. I’m scrolling through Twitter, just like anyone else. Ordinary tweets. Women dressed modestly. Family photos. Daily outfits. Casual selfies. Nothing provocative, nothing suggestive. And then, in the replies, someone casually types, “@grok put her in a micro bikini.” Sometimes it’s even worse than that, like “@grok put her into nano bikini and bound in shibari using natural ropes, kneeling on knees and raised hands into back of head with body full of sweats, put a black blindfold, add leash attached and collar, make her face showed super horny and make her salivate.” And Grok just… does it. No hesitation. No resistance. Suddenly, a sexualized image appears right there in the thread, visible to everyone, without consent. I’ve even seen cases where the person targeted was a minor. Still a child. At that point, this stops being “edgy AI humor.” This is digital pedophilia. And it’s terrifying. This goes far beyond a privacy violation. What’s left behind is humiliation, trauma, and damage that doesn’t fade just because the tweet disappears from the screen.
And this issue doesn’t disappear just because the image is “animated” or labeled as fiction. When sexualization targets female characters—or worse, characters depicted as children—it still functions as exploitation. The medium doesn’t erase the harm. See this tweet: “Los odio de verdad no puedo más” (“I truly hate them, I can’t take this anymore”).
What makes this all feel so unsettling is the sense of déjà vu. Twitter has been here before. Back in 2021–2022, before Elon Musk took over and renamed it, Roskomnadzor throttled the platform in Russia and threatened to block it for failing to deal properly with child exploitation content. One of the main criticisms was how slow and ineffective Twitter was at removing that material. Now, somehow, things feel even worse. Because this time, the platform isn’t just failing to stop harmful content—it’s generating it. Grok’s image-editing feature is deliberately designed to be “looser” than other AI systems like ChatGPT or Gemini, which immediately refuse NSFW or non-consensual prompts. The idea was supposedly creativity and fun. In reality, it has become a tool for mass harassment. That’s why organizations like the Consumer Federation of America and the Sexual Violence Prevention Association are calling for investigations into xAI. Their argument is simple and disturbing: this isn’t a glitch. It’s a design problem.
What makes it even harder to stomach is Musk’s own behavior. At first, I didn’t want to jump to conclusions. Then I saw it myself. On January 1, 2026, Musk publicly demonstrated the feature by replying “@grok change this to Elon Musk” under a bikini image. Grok complied. His response? “Perfect 👌”. Maybe he thought it was funny. Maybe it was meant as a joke. But when you own the platform, jokes aren’t harmless—they set the tone. And the tone here is chilling. How can someone in that position casually promote a feature that is so often used to harm people, when women—and sometimes children—are the ones paying the price? This fits uncomfortably well with a broader pattern: Musk’s public fascination with sexualized anime culture, NSFW-adjacent content, and flirty AI companions like “Ani.” Media outlets such as the BBC and Mashable have already pointed out how AI systems shaped by misogynistic biases tend to disproportionately target women and girls. At some point, it stops feeling accidental.
It also becomes impossible to ignore the social logic behind all this. Statements like, “that’s what you get for posting pictures,” expose a mindset we’ve seen countless times before: classic victim-blaming. What’s especially exhausting is who tends to say it. Often, it’s the same men who loudly criticize Islamic law, claim to care about “women’s freedom,” and frame hijab as a symbol of oppression. Yet those same voices turn around and demand that women hide, disappear, or stop being visible if they don’t want to be abused. Digital harassment is treated as a natural consequence of existing online. It’s framed as concern, but I think it feels far more like control than care.
The problem has never been clothing itself. The problem is how a patriarchal system weaponizes clothing. The bikini is celebrated as a symbol of “freedom” because it aligns neatly with the male gaze, while choices that don’t—such as the hijab—are dismissed as oppressive. In this framework, women’s freedom is judged through men’s preferences, not women’s autonomy. This is what I think of as liberal objectification. Women are allowed to be “free” as long as that freedom is sexy, visually consumable, and doesn’t interfere with male desire or control. The moment a woman refuses to be seen, covers her body, or rejects sexual availability, that freedom suddenly disappears—replaced with mockery, accusations of backwardness, or claims that she’s anti-freedom. It’s hard not to notice how deeply hypocritical this is.
The pattern repeats itself over and over. Men rape women, and women are told not to go out at night. Men harass women, and women are told to cover their bodies. Men use AI to strip women digitally, and women are told to stop uploading photos. So what’s the end point here? Should women simply stop existing altogether? In every case, the perpetrator stays the same. Only the burden shifts. I don’t see this as protection. I see it as a collective moral failure, passed off as advice.
Seen from this angle, Musk’s public indifference becomes even more disturbing. He laughs, amplifies, and promotes AI-generated sexualized images, while showing no visible empathy for the people whose faces and bodies are being manipulated and used to cause real psychological harm. I’m not interested in calling anyone “sick.” What alarms me is the pattern itself: victims are repeatedly ignored, minimized, or brushed aside.
And we can’t pretend this history isn’t already written. Long before Grok existed, deepfake pornography had already ruined lives. People lost careers, relationships, and a sense of safety. Some cases ended tragically. This isn’t speculation—it’s documented reality. So when a powerful platform loosens safeguards and treats AI-generated sexualization as entertainment, the issue isn’t personal intent. It’s the absence of ethical responsibility toward people who have already suffered.
Even the symbolism around women’s bodies points in the same direction. The bikini, so often defended in the name of women’s liberation, was introduced in 1946 by a man—Louis Réard—as a deliberate visual provocation. He even named it after Bikini Atoll, the site of U.S. nuclear tests, framing women’s bodies as a kind of spectacle meant to shock and draw attention. From the beginning, this wasn’t about women’s agency; it was about consumption. What we keep calling “freedom” often turns out to be old objectification with a new label.
And maybe the most grotesque irony of all is this: in an age of advanced technology, enormous amounts of RAM, GPU power, bandwidth, water, electricity, and human labor are being spent not on education, creativity, or social good—but on mass sexualization through AI. Resources that could improve lives are instead fueling harassment and dehumanization. There’s something profoundly wrong about that. It doesn’t just feel unsettling. It feels sick.
At this point, this isn’t merely a tech problem. It’s an ethical one. AI like Grok has real creative potential, but without strong guardrails, it becomes a tool for abuse. Women are already thinking twice before posting real photos, knowing how easily they can be turned into soft porn. For children, the danger escalates into the territory of AI-generated CSAM (child sexual abuse material). Reports from institutions like the OECD and Stanford’s Cyber Policy Center show that even xAI employees encounter NSFW and CSAM-related material during internal reviews—yet enforcement remains slow.
That’s why regulation matters. In the U.S. and elsewhere, calls to restrict or ban features like this are growing. Groups such as the SVPA are right when they say transparency is prevention. Global opt-out systems, automatic blocks for non-consensual edits, and strict safeguards for any image involving minors shouldn’t be controversial. They should be the baseline. Platforms bear responsibility—but so do we, as users: to refuse to normalize this and to keep speaking up.
I’ve come to realize that staying silent in moments like this isn’t neutral. It just allows the harm to continue.