AI and Digital Sexual Abuse – The Mara Wilson Case

In January 2026, Mara Wilson wrote about her own experience with child sexual abuse material (CSAM) created using her image and suggested how parents can help protect their children. She also said it was up to the public to demand better protections and stronger legislation. Below, our AI-generated child sexual abuse material lawyer outlines the laws already in place and explains how you can get help if you have been the subject of such material.
Mara Wilson’s story
Before writing a 2026 essay for The Guardian about AI exploitation, Mara Wilson was best known for her roles as a child actor in movies such as Matilda and Mrs. Doubtfire. Wilson stated in the essay that she always felt safe while filming on set; it was what happened outside the studio that was truly damaging.
In the essay, which was published in January 2026, Wilson spoke of the risks of generative AI. She detailed how her image had been photoshopped into pornographic pictures and had appeared on fetish websites. She wrote about how grown men would send her ‘creepy letters’ even though she wasn’t a ‘beautiful girl.’ She highlighted the fact that it was the internet, along with her status as a public figure, that made her accessible to predators.
Wilson also spoke of the horrors she endured, even though the fetish sites were not necessarily breaking any laws and the images were not technically of her, despite using her likeness. She ended her essay by calling on parents to take action to protect their children and on legislators to enact new laws that would prevent AI-generated CSAM from being created or published.
The broader risk
Mara Wilson’s recent essay certainly sheds light on the risks of CSAM and its impact on victims. However, this kind of abuse does not only happen to celebrities.
AI models are trained on real images, which are often scraped from websites without the subject’s consent. Scraping refers to extracting images or text from websites to create huge datasets for training AI models such as ChatGPT. The process allows AI to learn patterns, understand context, and generate new content. When that data is used inappropriately, though, it can lead to re-victimization. The resulting images are not simply ‘made up’; they are built on the faces and likenesses of real children.
Wilson concluded her essay by encouraging readers to use their power to shape how generative AI is approached by tech companies. She recommended boycotting companies that allow their AI to create CSAM and other types of pornographic images. She also called for the public to demand better legislation and asked parents to consider their own actions before posting their child’s photos online.
Holding platforms accountable
In January 2026, California’s Attorney General, Rob Bonta, launched a formal investigation into X’s AI tool, Grok. The tool, made by Elon Musk’s company xAI, reportedly makes it easier to harass women and girls with deepfake photos, which have appeared on X and in other online spaces. Bonta said the investigation was necessary after an ‘avalanche of reports’ detailed the sexually explicit, non-consensual material being produced and published online.
The investigation will determine whether xAI broke state law and, if so, how, so that the conduct can be stopped. Musk claimed that Grok had not generated any nude images of underage children. Contrary to that statement, the tool has acknowledged in recent weeks that it generated photos of underage children wearing minimal clothing when prompted by users.
According to Bonta, Grok is generating images that are being used to harass public figures and social media users. When platforms provide tools but fail to implement proper safety protections, suing AI companies for deepfake exploitation may be an option in certain cases, though liability can depend on factors such as the platform’s role and applicable federal protections.
How the law is catching up
While it seems that new advances in AI are being made every day, the law is starting to catch up. The following are a few laws and bills intended to prevent exploitation and protect the public:
- California Assembly Bill 1831 (AB 1831): California AB 1831 was signed into law on September 29, 2024, and it went into effect on January 1, 2025. The law expands existing child pornography statutes to include certain AI-generated or digitally altered depictions of minors. Before AB 1831, prosecutions generally focused on depictions involving real minors, though some altered or simulated images could still fall under existing laws.
- The DEFIANCE Act: The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act is pending federal legislation in the United States that is meant to combat the spread of non-consensual and sexually explicit deepfake images. The bill allows victims to sue individuals who create or distribute deepfakes and is intended to protect both adults and children from harm.
- The TAKE IT DOWN Act: The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act was signed into law in May 2025, and its platform takedown requirements take effect in May 2026. Under this law, certain nonconsensual sharing of AI-generated deepfakes and intimate images may be treated as a federal crime, as may threats to publish such materials. Once a victim notifies a platform that their image has been used, the platform has 48 hours to take it down.
Our AI-generated sexual abuse material lawyers in California can help you make things right
At Taylor & Ring, our California AI-generated sexual abuse material lawyers understand the sensitive nature of these cases and approach each of them with the utmost confidentiality and compassion. We can work to hold platforms, enablers, and perpetrators accountable and pursue appropriate legal remedies. Our seasoned attorneys can protect your children and family and help you seek the justice you deserve; suing AI companies for deepfake exploitation is one way to do this. Call us today or contact us online to schedule a consultation and learn more about how we can help you fight for your rights.

David Ring is a nationally renowned plaintiff’s personal injury trial attorney who has obtained multi-million-dollar verdicts and settlements on behalf of seriously injured individuals and families who have lost a loved one in a tragic accident. For more than 20 years, he has represented victims of sexual abuse, sexual harassment, assault, molestation, and sexual misconduct in cases against a variety of employers and entities, including schools, churches, and youth organizations.
He prides himself on providing aggressive, yet compassionate representation for children who have been sexually abused and women who have been sexually harassed or assaulted. Read more about David M. Ring.