
SECTION 230 & AI-GENERATED NON-CONSENSUAL INTIMATE IMAGERY


By: Antara Varma

Edited By: Nikki Stancik

Photo by Martin Gee / Scientific American

As generative artificial intelligence (AI) capabilities advance, hyper-realistic AI-generated pornography, a form of non-consensual intimate imagery (NCII), has become increasingly accessible. In 2020, a breakout report found that a single AI bot operating on the messaging platform Telegram had produced deepfake nude images of more than 100,000 women (Ray, 2020). In 2025, Grok, the generative chatbot built by Elon Musk's AI company xAI, generated 10,000 such images in the span of a week (Mulholland, 2024).

A significant portion of this spike in AI-generated content is child sexual abuse material (CSAM). According to the Internet Watch Foundation, 2025 was the "worst year on record" for online child sexual abuse, with a 26,362% increase in publicly available AI-generated CSAM from 2024 (Internet Watch Foundation, 2026). Given such extreme rates of AI-generated NCII and CSAM, it is essential to understand the culpability of AI platforms and to regulate their behavior accordingly through decreased protections and increased corporate liability.

Under Title 18 of the U.S. Code, Section 2252, it is a federal offense to possess, distribute, or transport child pornography. Section 2256 extends this prohibition to AI-generated CSAM, specifying that the federal definition covers any visual depiction, "including computer-generated images," that is indistinguishable from a minor engaged in sexually explicit conduct (Legal Information Institute, 2012).

This is further reinforced by the 2025 TAKE IT DOWN Act, which federally criminalizes the publication of non-consensual intimate imagery for adults as well as minors. It explicitly includes AI-generated content, or “digital forgeries.” The act also requires websites to remove reported images within 48 hours, establishes penalties for offenders, and provides victims with avenues for civil recourse (Congress, 2025).

TAKE IT DOWN was passed following significant public outrage against major AI companies, which, rather than independently regulating AI-generated NCII, have taken deliberate steps to increase its profit potential.

Consider xAI's Grok, which, according to independent analyst Genevieve Oh, generates 84 times more sexualized deepfakes per hour than the other top five deepfake sites combined. Rather than meaningfully inhibiting the creation of NCII and CSAM on Grok, xAI responded to international calls for regulation by merely limiting image and video capabilities to paid subscribers. This does not block malicious actors from creating NCII and CSAM using Grok Imagine; it merely charges them $30 to $300 for the service. Similarly, Grok "restricts" access to its 3D anime-style erotic chatbots only in that they cost $30 a month to access in a browser; they remain free and largely accessible to children as young as 12 on the Grok app (Titcomb, 2025). In this way, Grok has regulated access to AI CSAM and NCII on the basis of cost rather than on the basis of responsible or ethical restriction, allowing xAI to commodify and profit from the demand for such content.

The only potential exception to this pattern of cost-based regulation is xAI's attempt to "geo-block" nudification services and reduce NCII and CSAM in targeted "jurisdictions where it is illegal." But this measure is easily circumvented with a VPN and has failed to prevent access in multiple "geo-blocked" locations (Ratcliffe, 2026). Thus, it temporarily assuages public and governmental anxiety surrounding NCII and CSAM without genuinely threatening access or profit potential.

TAKE IT DOWN, in this regard, has not been sufficient to curb CSAM and NCII because it is inherently limited in its application. It regulates at the level of the individual, prosecuting the person who "knowingly" published the AI-generated NCII. It does not regulate the companies or software that enable this material's production, nor does it criminalize NCII generated for personal gratification and possession that remains unpublished and unshared (Congress, 2025). In practice, this means that nine months after TAKE IT DOWN went into effect, the U.S. has not been geo-blocked as a "jurisdiction where such content is illegal," and NCII and CSAM generation remains largely accessible through paid subscriptions and careful prompting. In that regard, AI companies remain unaccountable for the violent, graphic, and often illegal content generated through their software. Individual prosecutions may have increased, but the fundamental threat of platforms as unregulated hubs for NCII and CSAM remains.

This gap in platform-wide accountability is largely due to a lack of legal liability. Intuitively, one can understand the content generated by AI as being a co-creation between the user who prompts the action and the software that creates it. However, in reality, the majority of liability for content produced by generative AI chatbots falls on the user. This is because under Section 230 of the Communications Decency Act (1996), interactive computer services are shielded from liability for user-generated content, as they are treated as conduits for third-party material rather than publishers (Brannon & Holmes, 2024).

When it comes to hosting or distributing NCII, online platforms are often protected by Section 230, provided the content was created by a user and the platform did not materially contribute to creating it. This standard has yet to be tailored to generative AI, but under the current understanding, platforms must "knowingly facilitate or participate" in violating a federal criminal law to incur liability.

The most effective method to curtail the production of AI-generated NCII and CSAM would be to address this lack of corporate liability by explicitly exempting generative AI chatbots from Section 230 of the Communications Decency Act. Under Section 230, platforms that host, organize, or display third-party information ("interactive computer services") are not treated as the speaker of the content they provide (Brannon & Holmes, 2024). These are passive platforms, such as Instagram or Pinterest, which simply display externally sourced information and content. Generative AI, by contrast, does not merely present existing content; it functions as an "information content provider," responsible "in whole or in part" for the "creation or development" of information provided through the Internet and other interactive computer services. As such, it is not protected from liability for third-party content under Section 230 (Waheed, 2024). Exempting generative AI, therefore, aligns with Section 230's original purpose of protecting passive hosts, not active content creators.

This principle has precedent in Fair Housing Council of San Fernando Valley v. Roommates.com (2008) and FTC v. Accusearch (2009), which collectively establish that platforms that induce or contribute to illegal content, albeit using third-party source material, are not eligible for Section 230 protections. With the TAKE IT DOWN Act becoming federal law this year, generative AI platforms that contribute, even incidentally, to NCII and CSAM would fall into this unprotected category.

Finally, beyond executing the true intent of Section 230, exempting generative AI would act as an important deterrent against negligence by corporate actors. Despite consistent public and regulatory backlash, efforts to mitigate the harm of AI-generated pornography have been slow. This is evident in the ongoing proliferation of paid erotic modes of Grok and ChatGPT, and in the slow de-escalation of Grok's image-generation capabilities from publicly available, to paid, to nominally "geo-blocked" yet accessible through a VPN (X, 2026).

Thus, rather than depending entirely on companies to ethically regulate AI-generated NCII in spite of profit incentives, it would be more efficient to give corporations a vested interest in preventing the proliferation of harmful and illegal content by officially differentiating AI capable of creating original, harmful content from the passive media platforms Section 230 was designed to protect (Wilson, 2026).

Overall, protecting children and vulnerable adults from sexual exploitation, abuse, and harassment through AI-generated non-consensual intimate imagery is essential to preserving personal dignity and reducing the public threat of psychological and sexual violence. To provide such protection, however, legislators must expand corporate liability for illegal NCII and CSAM and reduce Section 230 protections for generative AI platforms.


The views expressed in this publication are the authors' own and do not necessarily reflect the position of The Rice Journal of Public Policy, its staff, or its Editorial Board.
References

Brannon, V. C., & Holmes, E. N. (2024, January 4). Section 230: An Overview. Congress.gov. https://www.congress.gov/crs-product/R46751

Congress. (2025). The TAKE IT DOWN Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images. Congress.gov. https://www.congress.gov/crs-product/LSB11314

D'Anastasio, C. (2026, January 7). Musk's Grok AI Generated Thousands of Undressed Images Per Hour on X. Bloomberg. https://www.bloomberg.com/news/articles/2026-01-07/musk-s-grok-ai-generated-thousands-of-undressed-images-per-hour-on-x


Legal Information Institute. (2012). 18 U.S. Code § 2252 - Certain activities relating to material involving the sexual exploitation of minors. LII / Legal Information Institute. https://www.law.cornell.edu/uscode/text/18/2252

Mulholland, J. (2024). Bonta Orders xAI to Halt AI Deepfakes, Issues Five-Day Deadline. State Affairs. https://pro.stateaffairs.com/ca/ai/xai-deepfake-investigation-bonta-ultimatum

Ratcliffe, R. (2026, January 18). "Still here!": X's Grok AI tool accessible in Malaysia and Indonesia despite ban. The Guardian. https://www.theguardian.com/technology/2026/jan/18/grok-x-ai-tool-still-accessible-malaysia-despite-ban-vpns

Ray, S. (2020, October 20). Bot Generated Fake Nudes Of Over 100,000 Women Without Their Knowledge, Says Report. Forbes. https://www.forbes.com/sites/siladityaray/2020/10/20/bot-generated-fake-nudes-of-over-100000-women-without-their-knowledge-says-report/

Sciacca, B., Mazzone, A., Loftsson, M., O'Higgins Norman, J., & Foody, M. (2023). Nonconsensual Dissemination of Sexual Images Among Adolescents: Associations With Depression and Self-Esteem. Journal of Interpersonal Violence. https://doi.org/10.1177/08862605231165777

Titcomb, J. (2025, July 16). Musk launches AI Grok girlfriend available to 12-year-olds. The Telegraph. https://www.telegraph.co.uk/business/2025/07/16/ai-girlfriend-musk-app-12-year-olds/

Waheed, N. (2024, September 4). Section 230 and its Applicability to Generative AI: A Legal Analysis. Center for Democracy and Technology. https://cdt.org/insights/section-230-and-its-applicability-to-generative-ai-a-legal-analysis/

Wilson, T. (2026, January 10). Musk says outcry over X's Grok service is "excuse for censorship." BBC News. https://www.bbc.com/news/articles/ce3kqzepp5zo

X. (2026). X (Formerly Twitter). https://x.com/Safety/status/2011573102485127562