Can a single policy change reshape what counts as acceptable content across a whole site? On Feb. 7, 2018, Reddit updated its rules to bar sexual imagery posted without consent, explicitly calling out depictions that were faked. That move removed communities centered on face-swapped material and signaled a shift in how moderators and users handle misuse of likenesses.
In plain terms: the site stepped in to stop edits that placed someone’s face into explicit images or videos without permission. The change mattered immediately for people whose photos could be misused and for the moderators who enforce community standards.
This article will explain what was taken down, what may still appear, and why reporting and moderation remain crucial. It will also show how this decision influenced other platforms and what practical steps everyday users can take to protect themselves.
Key Takeaways
- The 2018 rule update targeted nonconsensual, faked sexual imagery and led to subreddit bans.
- Removed posts included face-swapped images and edited videos using someone’s likeness.
- Moderation and reporting are essential tools for protecting people and content integrity.
- The policy set expectations that affected other platforms and broader site standards.
- The article will cover policy details, the tech behind edits, enforcement, and comparisons.
What Reddit banned and when the policy changed
A key policy shift on Feb. 7, 2018, redefined manipulated sexual imagery as involuntary pornography and tightened enforcement.
The Feb. 7, 2018 update: “involuntary pornography” and faked depictions
On that date the site expanded its ban on nonconsensual sexual imagery to explicitly include “depictions that have been faked.”
“Do not post images or video of another person for the specific purpose of faking explicit content or soliciting ‘lookalike’ pornography.”
This wording made clear the ban covers both original and manipulated images and videos when a person lacks consent. It also targeted posts meant to create or solicit imitation explicit content.
How the rules were split
The update separated one combined rule into two: involuntary pornography, and sexual or suggestive material involving minors. That split gave moderators clearer guidance and faster enforcement paths.
What “without consent” means for images and videos
“Without consent” means a person never agreed to have their face, body, or likeness used in explicit content. This can include footage taken legally but later altered to show someone in sexual scenes.
| Policy area | Focus | Enforcement result |
|---|---|---|
| Involuntary pornography | Faked and nonconsensual depictions | Removal and subreddit bans |
| Minor-related material | Sexual or suggestive content with minors | Strict removal, reporting to authorities |
| Solicitation rules | Requests for “lookalike” explicit content | Prohibited and moderated |
Why AI-generated porn became a flashpoint on Reddit
What started as a novelty in image editing quickly became a serious consent and safety problem for many people.
How deepfakes work at a high level
Deepfakes use machine learning: a model trains on many photos and video clips, then maps one person’s face onto another person’s body. The result can be convincing video and imagery produced without the depicted person’s involvement.
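To make that concrete, here is a minimal PyTorch sketch of the shared-encoder, two-decoder autoencoder design popularized by early face-swap tools. Everything in it is an illustrative assumption: the layer sizes, the 64x64 crops, and the names `Encoder`, `Decoder`, `decoder_a`, and `decoder_b` are chosen for readability, not taken from any real tool.

```python
# Conceptual sketch of a classic face-swap autoencoder
# (simplified, assumed architecture; not any specific tool's code).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared across identities: learns pose, expression, and lighting."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One per identity: learns to render a single person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's face crops
decoder_b = Decoder()  # would be trained only on person B's face crops

# The swap is just routing: encode a frame of person A, then render it
# with person B's decoder, yielding B's likeness in A's pose.
frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a cropped face frame
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

The structure also previews the next point: the more photos a decoder sees of one face, the more faithfully it renders that face in new poses, which is why data-rich targets produce the most convincing fakes.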

Why celebrities and real people were frequent targets
Public figures have vast libraries of photos and footage, so models learn faces faster and produce realistic results. That abundance of data made celebrities early targets for manipulated sexual content.
Smaller online footprints don’t stop harm. Attackers can still build fakes of real people using fewer photos, which raises risk for private individuals.
User concerns and the role of requests
People warned that nonconsensual sexual content can humiliate and cause lasting reputational damage. Many framed such material as a form of abuse or sexual violence, even when simulated.
Posts asking for “lookalike” pornography or targeted requests act as harassment. They move production from novelty toward deliberate harm and make moderation urgent.
- Easy tools + fast sharing = rapid spread of problematic content.
- Community discussion often mixed curiosity with ethical and legal worry.
- Because of that, platforms treated the issue as a consent problem, not just adult content.
Subreddits and communities removed or affected by the crackdown
When moderators moved to enforce the new rules, several large communities that hosted manipulated material were taken offline.
Major bans and what they hosted
r/deepfakes (about 90,000 subscribers) was closed; it had been a central hub for sharing face-swap videos, scripts, and tutorials. r/deepfakesNSFW (about 23,000) focused on explicit edits and was also removed. r/YouTubefakes was shut down for similar reasons, as it aggregated manipulated videos taken from mainstream clips.
Why r/CelebFakes mattered
r/CelebFakes centered on photoshopped static images rather than model-generated clips. Moderators initially left it up, but the community was removed soon after because the content still fell under the site’s rule against explicit material posted without permission.
What stayed up and why
Discussion spaces for tooling and technique, like FakeApp forums, were not automatically removed. r/SFWdeepfakes remained active because it focused on non‑pornographic face swaps and parodic edits, which fit the platform’s allowed content criteria.
Edge cases and ongoing enforcement
Smaller, niche communities tried rebranding or using coded language to avoid detection. Some mirror sites and off‑site links also appeared, which moderators track over months and years.
- Major removals reduced centralized distribution but did not eliminate the ability to create or share manipulated media.
- Admins and volunteer moderators often follow up on re‑uploads, mirror communities, and repeat offenders.
Takeaway: Bans cut off large hubs and signaled policy intent, but the capability to produce and share manipulated images and videos persists across the wider website ecosystem.
How Reddit plans to enforce the ban on AI-generated porn
Stopping manipulated sexual imagery starts with reports from the person affected.
Reporting and moderation: why first-party reports matter most
Site staff said first-party reports drive investigations. When the person depicted files a report, moderators treat it as strong evidence that consent was not given.
That direct request speeds action because it ties a user identity to the claim and lets admins verify information faster.

Case-by-case review and context: how posts get evaluated
Moderators and admins evaluate posts for intent, framing, and whether a person is being targeted.
They consider community context, whether the post solicits imitation material, and whether it appears malicious or accidental.
What to do if your face or body is used without consent
If you find edited images or videos of yourself, take these actions quickly:
- Capture URLs and screenshots for records.
- Use site reporting tools and make a clear request that the content be removed.
- Message community moderators and supply identity documentation if asked.
- Check other social media and websites in case the same material was reposted.
Automated systems can miss altered uploads, so user action remains central. You are not overreacting; reporting is the right step when someone posts exploitative material. A simple way to keep organized records of what you find is sketched below.
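As one way to handle the record-keeping steps above, here is a small Python sketch that appends each sighting to a local JSON log. The file name, fields, and example values are hypothetical; a spreadsheet or notes app works just as well, as long as URLs and timestamps are preserved.

```python
# Minimal evidence log for reporting misuse: records each URL with a
# UTC timestamp and an optional screenshot path in a local JSON file.
# The file name and record fields are illustrative, not a platform rule.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("evidence_log.json")

def log_evidence(url: str, screenshot: str | None = None, note: str = "") -> None:
    """Append one record, keeping an auditable trail for reports."""
    records = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    records.append({
        "url": url,
        "screenshot": screenshot,  # path to a saved screenshot, if any
        "note": note,
        "seen_at": datetime.now(timezone.utc).isoformat(),
    })
    LOG_FILE.write_text(json.dumps(records, indent=2))

# Hypothetical usage: the URL and paths below are placeholders.
log_evidence(
    "https://example.com/post/123",
    screenshot="shots/post123.png",
    note="Edited image using my photo; reported via site tools.",
)
```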
How Reddit’s move compares to other platforms banning deepfakes
Other websites adopted bans that treated fabricated explicit imagery as a harm, not a novelty.
Major platforms like Pornhub, Gfycat, Discord, and Twitter took similar steps. Each framed manipulated sexual images and video as a policy violation and a safety risk.
Pornhub framed nonconsensual faked pornography as akin to revenge porn and called it a form of sexual assault.
“We take a hard stance against revenge porn,” said Pornhub’s vice president Corey Price.
Gfycat cited its terms against objectionable content. Discord and Twitter tightened rules and removal flows to stop quick sharing across social networks.
Consistent enforcement matters. When one site bans content, people often move it to other platforms. Aligned policies and faster takedowns reduce easy redistribution.
The tech shift also forced change. Tools like FakeApp lowered the bar for creating convincing fakes, so platforms had to keep updating their systems and moderation in the years that followed.
| Platform | Policy focus | Enforcement |
|---|---|---|
| Pornhub | Nonconsensual simulated pornography | Removal, public statements likening it to sexual assault |
| Gfycat | Objectionable edited images | Content takedowns, terms enforcement |
| Discord / Twitter | Rapid sharing of manipulated media | Faster reports, clearer rules for moderators |
Practical takeaway: If content is removed from one site, it may still appear elsewhere. Check platform rules and report on each website where the material appears.
Conclusion
Consent is the clear line: the 2018 update treated nonconsensual, faked sexual material as involuntary pornography and cut off major distribution hubs. That move changed how platforms remove harmful posts and enforce their terms.
People, and especially women, reported real harm: distress, humiliation, and damaged relationships followed when likenesses appeared in explicit content without permission.
If you find exploitative posts, document URLs and screenshots, then use platform reporting tools. Accurate information and quick action help moderators take down content and improve systems.
Bottom line: this was part of an industry shift toward seeing manufactured sexual media as abuse, not a prank. Expect ongoing updates to policies and moderation as social media and creation tools evolve.