
Giorgia Meloni took the fake pictures in stride, joking that whoever created them had “improved” her
Sexualized deepfakes are becoming a growing problem around the world, and not even prime ministers are immune from being targeted.
Italian Prime Minister Giorgia Meloni shared on X that she had been targeted by what she called “zealous opponents” after AI-made sexualized images of her began circulating online.
The fake images appeared to show Meloni sitting on a bed in underwear. One social media user replied by suggesting that the pictures were “shameful and unworthy of the institutional role she holds.”
The prime minister seemed to take the fake pictures in stride, posting: “I must admit that whoever created them, at least in the attached case, has also improved me quite a bit.”
Meloni then used the moment to issue a much wider warning about the spread of AI deepfakes. She urged people to think carefully before sharing images online, especially when the content may be fake or designed to humiliate someone.
Her warning also carried legal weight, as deepfakes made to harm people are illegal in Italy.
Deepfake laws in Italy
Italy became one of the first countries in the European Union to take a direct legal stance against harmful AI deepfakes, passing legislation in 2025 that criminalizes the use of artificial intelligence to cause harm to others. That includes the creation of sexualized deepfake content.
This is not the first time Meloni has been linked to deepfake abuse. Doctored images of her previously appeared on a pornographic website that also featured altered images of other high-profile women.
In 2024, she sued two men for around $108,000 after they allegedly posted fake videos of her on a pornographic website based in the US.

The Italian PM warned people to be very careful of what they share online. Photo by Alessandro Bremec/NurPhoto via Getty Images
In her post about the latest images, Meloni warned: “Check before you believe, and believe before you share. Because today it’s happening to me; tomorrow it could happen to anyone.”
“Deepfakes are a dangerous tool, because they can deceive, manipulate, and strike anyone. I can defend myself. Many others cannot.”
Meloni is far from the only public figure to speak out about deepfakes, as actors, podcast hosts, and musicians have also found themselves targeted by fake explicit content.
Scarlett Johansson (2018)

Scarlett Johansson has been a victim of AI deepfakes. Jamie McCarthy/Getty Images
The Washington Post reported that Scarlett Johansson’s face had been placed onto dozens of pornographic videos, including one clip described as “leaked” footage that had been viewed 1.5 million times.
The Marvel and Jurassic World star told the publication: “Trying to protect yourself from the internet and its depravity is basically a lost cause, for the most part.”
“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired.”
Xochitl Gomez (2024)

The Marvel star had explicit AI videos of her made without her consent when she was a teenager. Axelle/Bauer-Griffin/FilmMagic
Marvel star Xochitl Gomez, who was just 17 at the time, said she had seen several explicit deepfakes of herself on X. Her team had tried to get them removed, but she said those efforts did not work.
She told The Squeeze podcast: “It made me weirded out and I didn’t like it and I wanted it taken down.”
“It wasn’t because I felt like it was invading my privacy, more just like it wasn’t a good look for me. This has nothing to do with me. And yet it’s on here with my face.”
Bobbi Althoff (2024)
Podcast host Bobbi Althoff, who has interviewed stars including Drake and Offset, also warned her followers after a sexually explicit AI-generated video of her began trending on X.
She said: “The reason I’m trending is 100% not me & is definitely AI generated.”
Her post was another reminder of how quickly fake sexual content can spread once it starts gaining attention online.

Bobbi Althoff has also been a victim of AI deepfakes. Chad Salvador/Variety via Getty Images
Taylor Swift (2024)

Taylor Swift was infamously a victim of deepfakes. Gareth Cattermole/TAS24/Getty Images for TAS Rights Management
Taylor Swift was also targeted in 2024, when AI-generated explicit images of her spread across X and Telegram.
One of the fake images was reportedly viewed 47 million times before the scandal led to renewed pressure on tech platforms and lawmakers.
The scale of the spread showed how hard it can be to stop fake intimate images once they begin moving across major social platforms.
The scandal led many US politicians to call for tougher rules on AI deepfakes, especially when the images are sexual and shared without consent.
For victims, the damage can happen fast. Even if an image is later removed, copies can be saved, reposted, and shared across other sites.
That is why Meloni’s warning focused not just on the people who create deepfakes, but also on those who share them without checking whether they are real.
Are sexualized deepfakes illegal in the US?
On May 19, 2025, President Donald Trump signed the TAKE IT DOWN Act into law. The full name is the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act.
The law created a major federal framework to fight non-consensual intimate imagery, including AI-generated deepfakes. It makes it a crime to publish intimate images, real or AI-generated, of minors or of adults who have not consented.
It also requires websites and online platforms to provide a clear takedown process when someone discovers an unauthorized intimate image of themselves.

The TAKE IT DOWN Act was signed into law in 2025. Photo by Chip Somodevilla/Getty Images
Last month brought the first conviction under the TAKE IT DOWN Act. According to the Department of Justice, James Strahler II, a 37-year-old man from Columbus, Ohio, pleaded guilty to cybercrimes involving both real and AI-generated sexually explicit images.
Prosecutors said Strahler used AI to create pornographic videos that showed at least one adult victim engaged in sex acts with her own father, then distributed those videos to the victim’s co-workers. They also said he harassed at least six adult female victims between December 2024 and June 2025.
Authorities said Strahler had installed more than 24 AI platforms and over 100 web-based AI models on his phone. He also pleaded guilty to cyberstalking, producing obscene visual representations of child sexual abuse, and publication of digital forgeries, with sentencing to be decided at a later hearing.