I reported a deepfake of Caitlin Clark on X. Instead of it getting removed, here's what happened

Stephen Noh


When the Pacers opened the playoffs against the Bucks on April 19, Caitlin Clark decided to lend her support in person. As she sat in one of Gainbridge Fieldhouse's suites, she was featured on the jumbotron. Loud cheers erupted from the 17,214 fans in attendance. 

Clark, proudly wearing a yellow Pacers t-shirt, smiled for the camera. She motioned for two small children to join her on the screen. The three laughed and waved together. It was a sweet moment that the Pacers social media team captured and later posted on the platform X, formerly known as Twitter.

Three days later, an account on X took that video, used AI technology, and tweeted out an objectionable fake clip (known as a "deepfake"). The beginning of the video was the same, featuring Clark waving to the crowd in her Pacers shirt. But instead of showing her inviting the children on-screen, the latter half of the video was altered in an offensive manner.

That fake video quickly went viral. As of July 2, it had accrued 9.7 million views, compared to just 330,000 for the Pacers' original video. X users immediately slammed the deepfake, urging others to report it. I was one of many users who did exactly that, hoping to get it removed.

What happened next is typical of a new environment on X, where deepfakes of women are a growing problem.

X's response to deepfakes

Within 24 hours of reporting the Clark video, I received an email from X. It informed me that the video was not in violation of any of the site's sensitive media rules and would be allowed to stay up. 

Users eventually added a community note to the tweet, noting that "this is a deepfake sexually explicit content depicted without the person's consent. This is illegal in states with crimes specific to deepfakes and/or covered by existing laws in every state for criminal harassment. These laws are supported by Republicans if that helps." 

As of July 2, the tweet remains up, continuing to generate revenue for the account. Multiple follow-up requests to speak to a member of X's moderation team went unanswered, as did two interview requests sent to the person behind the account. Several subsequent attempts to report the tweet under other violations of X's terms of service were also unsuccessful.

Clark is far from the only female athlete to be targeted by deepfake technology. Deepfake sites have featured several WNBA players, including Clark, Angel Reese, and Cameron Brink. Reese has spoken out about the issue publicly.

Explicit deepfakes of Reese continue to circulate on X. On June 6, I reported one and was able to get it removed. Reese's deepfake was more explicit than Clark's, evidently crossing the line that separates what moderators will and will not remove.

The deepfake community on X is well aware of where that line stands. Users have begun sharing the prompts they feed X's AI bot, Grok, in order to produce pornographic or other sexualized images that get around the site's moderation filters. 

Many women have been left frustrated by X's failure to take these posts down. Some have shared tips with each other on how to change their personal settings to prevent Grok from altering their photos.

Politicians are trying to curb the proliferation of these deepfakes. On May 20, President Trump signed into law the Take It Down Act, which had bipartisan support and enacted stricter penalties for the distribution of non-consensual intimate imagery as well as deepfakes created by artificial intelligence.

"Today, we're making it totally illegal," Trump said in his comments to the press.

The law "requires certain online platforms to promptly remove such depictions upon receiving notice of their existence," according to the language in the bill. Platforms are required to remove depictions within 48 hours of notification.

X CEO Linda Yaccarino attended Trump's announcement and pledged the company's support for the act. Nevertheless, X appears to be having trouble complying with the new law.

Deepfakes exist on other social media sites too. But X stands out in its approach to curbing the issue, according to Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, who has extensively researched the surge in deepfakes.

"There’s so much more that could be done against this enormous problem. X in particular is a laggard," Mantzarlis told Sporting News. 

Elon Musk, a self-proclaimed free speech absolutist, disbanded X's 100-person Trust and Safety council shortly after taking over the company in 2022. Within his first four months, he had slashed the overall staff by 80 percent.

Those cuts have severely impacted moderation of the site. X drew widespread criticism in January of 2024 after numerous pornographic deepfakes of Taylor Swift appeared on the site. Swift fans flooded the site with positive videos of her to make the explicit videos harder to find, and X temporarily disabled searches of her name. Those deepfakes were viewed over 45 million times before eventually being removed. 

In the aftermath of the Swift incident, Musk pledged to hire 100 full-time employees for a new Trust and Safety center in Austin by the end of 2024. Whether those hires were ever made could not be confirmed.

Musk's own attitudes towards political deepfakes have been inconsistent with his site's policies. In July of 2024, he shared a deepfake video of Kamala Harris from his personal account with the caption "this is amazing." 

Musk has also used the courts to fight deepfake legislation. In November of 2024, X sued California over a state law targeting deepfakes used in elections. X argued the law violated the First Amendment and would lead to censorship on its platform.

One day after the Clark deepfake tweet was sent, X released a statement through its Global Government Affairs team decrying the State of Minnesota's similar efforts to prohibit deepfake election-related content. 

X argued that "while the law’s reference to banning 'deep fakes' might sound benign, in reality it would criminalize innocuous, election-related speech, including humor, and make social media platforms criminally liable for not censoring such speech. Instead of defending democracy, this law would erode it."

The statement went on to note with pride that X was the only social media platform challenging Minnesota's statute, and to argue that its Community Notes program was sufficient to combat falsely generated content.

What can be done to curb deepfakes 

There is some progress being made to stop the flood of deepfakes. One of the biggest websites, Mr Deepfakes, was shut down in May. The site had more than 650,000 users and hosted pornographic deepfake videos featuring several female athletes, including WNBA players. 

Much more needs to be done, and the problem is growing larger day by day. So-called "nudifier" apps are making it easier than ever to generate explicit deepfakes, and they continue to grow in reach.

Mantzarlis believes the Take It Down Act would be more effective if it also targeted those apps, in addition to going after the creators and disseminators of deepfakes. The legislation does not mention the apps at all.

Mantzarlis also believes that the big tech companies need to do more to combat those apps.

"Nudifier websites are not as sophisticated as Meta," Mantzarlis said. "If the big platforms threw as much money and as many researchers as necessary, this would significantly reduce their reach. It wouldn’t eliminate it altogether of course, but it would make them very hard to access through mainstream platforms."

Tech companies haven't made that necessary level of investment in stopping the problem. As a result, deepfakes continue to flood social media sites. The problem affects wide swaths of women: six percent of American teens have been targeted by an AI nudifier, according to one recent study.

Clark is one of the most prominent athletes in the world, with a legion of fans who bring awareness to issues involving her. If one of her deepfakes can't get removed from X, what does that mean for everyone else? 

If you have been a victim of deepfakes or explicit images posted without your consent, the Cyber Civil Rights Initiative and StopNCII offer resources to help. 

Stephen Noh

Stephen Noh started writing about the NBA as one of the first members of The Athletic in 2016. He covered the Chicago Bulls, both for major outlets and independent newsletters, for six years before joining The Sporting News in 2022. Stephen is also an avid poker player and wrote for PokerNews while covering the World Series of Poker from 2006 to 2008.