Sponsored AI NSFW Ads on Facebook: Why You Are Seeing Them
AM:news · 4 months ago

Alarming reports are surfacing in Aruba this week. Many users are seeing explicit videos while scrolling through their daily feeds.

These are not posts from random accounts. They are clearly labeled as “Sponsored.”

This means someone is paying Meta to show this content to you. Residents are reporting one specific video frequently: an AI-generated woman in bed. This content is bypassing the safety filters that usually block nudity.

Why is this happening? How can a huge company like Facebook allow this?

The answer is simple but scary. Scammers are using a trick called “Cloaking.”

The “Bait and Switch” Trick

Facebook uses automated review bots ("robots") to check ads before they go live. Scammers know this. So, they created a way to trick the robots.

Here is how “Cloaking” works:

  1. The Safe Disguise: The scammer submits an ad. When Facebook’s robot checks the link, it sees a safe website. It might look like a blog about pillows or sleeping tips.
  2. The Approval: The robot sees nothing wrong. It marks the ad as “Safe” and approves it.
  3. The Switch: The ad goes live. Now, a real person in Aruba clicks the link. The scammer’s system detects that you are a human, not a robot.
  4. The Trap: It instantly redirects you to the explicit video or a scam site.

The robot saw a pillow blog. You see sponsored AI NSFW ads on Facebook. It is a digital bait-and-switch.
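The four steps above boil down to one decision on the scammer's server: is this visitor a review bot or a human? Here is a minimal, illustrative sketch of that logic. All the names, URLs, and bot signatures are hypothetical examples for explanation only; they are not Meta's actual review infrastructure or any real scammer's code.

```python
# Illustrative sketch of a cloaking server's routing decision.
# Signatures and URLs are hypothetical, chosen only to show the idea.

KNOWN_BOT_SIGNATURES = ("facebookexternalhit", "facebot", "crawler", "bot")

DECOY_URL = "https://example.com/pillow-blog"      # what the review bot sees
PAYLOAD_URL = "https://example.com/real-landing"   # what a human visitor sees

def choose_destination(user_agent: str) -> str:
    """Serve the safe decoy page to review bots, the payload to humans."""
    ua = user_agent.lower()
    if any(sig in ua for sig in KNOWN_BOT_SIGNATURES):
        return DECOY_URL    # steps 1-2: the robot sees a harmless site
    return PAYLOAD_URL      # steps 3-4: a real person gets redirected

# The reviewer's crawler identifies itself, so it gets the decoy:
print(choose_destination("facebookexternalhit/1.1"))
# An ordinary phone browser does not, so it gets the trap:
print(choose_destination("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0)"))
```

Real cloaking systems are more elaborate (they also check IP ranges, data-center addresses, and click timing), but the principle is exactly this: one link, two destinations.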

It Is Not Just Aruba

This is a global problem. It is not happening only in Aruba.

Scammers are using new AI tools to make these videos fast. If Facebook blocks one video, the AI makes a new one in seconds. The new video looks the same to you, but its underlying file data is slightly different, so its digital fingerprint no longer matches the blocked copy. This tricks the filters again.
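Why does a tiny change defeat a blocklist? Many exact-match filters compare a file's cryptographic hash against a list of banned hashes, and changing even one byte produces a completely different hash. A minimal illustration using Python's standard `hashlib` (the byte strings stand in for video files):

```python
import hashlib

# Two "videos" that would look identical to a viewer but differ by one byte.
original = b"fake-video-bytes-standing-in-for-a-real-file"
variant = original + b"\x00"  # one extra padding byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(variant).hexdigest()

print(h1 == h2)  # False: a hash blocklist treats them as unrelated files
```

This is why platforms also use perceptual hashing and vision models, which compare what the content looks like rather than its exact bytes; the article's point is that scammers keep generating fresh variants faster than those filters adapt.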

These scammers often use hacked Facebook accounts. These accounts are old and have a history of paying for ads. This makes Facebook trust them more.

What You Should Do

These ads are dangerous. They appear in public feeds where children can see them.

Do not interact with these posts.

  • Do NOT Comment: Many people comment to complain. Don’t do this. Comments tell Facebook the post is popular. This makes the ad spread to more people.
  • DO Report It: Click the three dots (…) on the post. Select “Report ad.”
  • DO Hide It: After reporting, select “Hide ad.”

We do not know when Meta will close this specific loophole. For now, the best defense is to ignore, report, and keep scrolling.

Sources

Technical Analysis of “Cloaking” & Malvertising

  • Invernizzi, L., Thomas, K., Kapravelos, A., Comanescu, O., Picod, J., & Bursztein, E. (2016). Cloak of visibility: Detecting when machines browse a different web. 2016 IEEE Symposium on Security and Privacy (SP), 743–758. https://doi.org/10.1109/SP.2016.50
  • Teoh, T. T., Zhang, Y., & Chen, H. (2024). PhishDecloaker: Detecting CAPTCHA-cloaked phishing websites via hybrid vision-based interactive models. 33rd USENIX Security Symposium (USENIX Security 24). https://www.usenix.org/system/files/usenixsecurity24-teoh.pdf
  • Thomas, K., Bursztein, E., Grier, C., Ho, G., Jagpal, N., Kapravelos, A., McCoy, D., Nappa, A., Paxson, V., Pearce, P., Provos, N., & Rajab, M. A. (2015). Ad injection at scale: Assessing deceptive advertisement modifications. 2015 IEEE Symposium on Security and Privacy, 151–167. https://doi.org/10.1109/SP.2015.17


Regulatory Context (Meta, EU, & Oversight)

  • Söderlund, K., Engström, E., Haresamudram, K., Larsson, S., & Strimling, P. (2024). Regulating high-reach AI: On transparency directions in the Digital Services Act. Internet Policy Review, 13(1). https://doi.org/10.14763/2024.1.1746
  • Bayer, J. (2025). Zuckerberg’s strategy: Leveraging Trump to defy European regulation? Verfassungsblog. https://doi.org/10.59704/e94f730ca2e6b631
