Picture this: a dedicated charity honoring the achievements of female photographers suddenly silenced by a massive social media platform, all because an automated system confused the word "Heroines" in its name with a notorious drug. That's the ordeal faced by Hundred Heroines, and it's a stark reminder of how automated moderation can go badly wrong in the fight against real threats.
It raises an uncomfortable question: are we placing too much blind trust in AI, or is the deeper problem how big tech companies enforce their community standards? Here's the full story.
It all started when the UK-based charity Hundred Heroines saw its Facebook group abruptly removed, accompanied by a curt message stating that the page violated the company's community standards on drugs. After more than a month of persistent appeals, the photography-focused organization finally celebrated the reinstatement of its group. The culprit? Meta's AI tools had mistakenly identified it as promoting the class-A drug heroin, apparently because the word "heroine" in the charity's name is, minus its final "e", the name of the drug.
This wasn't an isolated incident for the Gloucestershire-based group, which showcases the talents of women in photography: its Facebook presence has been shut down twice in 2025 over alleged drug-promotion violations. The most recent takedown came in September, and after another round of appeals the Hundred Heroines: Women in Photography Today page was restored last week, without any accompanying explanation or apology from the tech giant.
The charity's founder, Dr. Del Barrett, a former president of the Royal Photographic Society, described the impact as "devastating" for an organization that depends heavily on Facebook to draw in visitors. She explained that the system flags the term "heroin", effectively misreading "heroine" as the drug, and that once a page is flagged there is no easy way to reach a human reviewer. With about 75% of the charity's audience arriving through the platform, each ban severely hampers its outreach.
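To see how this kind of false positive can arise, here is a minimal Python sketch of substring-based keyword filtering. It is purely illustrative and assumes nothing about Meta's real pipeline; the BANNED_TERMS set and naive_flag function are hypothetical names invented for this example.

```python
# A deliberately naive keyword filter: flag any text containing a banned
# substring. This is NOT Meta's actual system; BANNED_TERMS and naive_flag
# are hypothetical names used only to illustrate the failure mode.
BANNED_TERMS = {"heroin"}

def naive_flag(text: str) -> bool:
    """Return True if any banned term appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)

# "Heroines" contains the substring "heroin", so the charity's name is flagged.
print(naive_flag("Hundred Heroines: Women in Photography Today"))  # True (false positive)
print(naive_flag("Landscape photography tips"))                    # False
```

Notably, even a simple word-boundary match (a regex like \bheroin\b) would not fire on "Heroines", which is part of what makes repeated bans of this kind so baffling to the groups affected.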
Founded in 2020, Hundred Heroines operates a physical space in Nailsworth, near Stroud, housing an impressive collection of around 8,000 items centered on the contributions of female photographers throughout history. This mix-up not only disrupts their digital visibility but also underscores the challenges small charities face in the digital age, where a simple name can trigger automated red flags.
To provide some context, Meta ramped up its monitoring of drug-related groups in 2024 amid the ongoing opioid crisis in the US, where, tragically, more than 80,000 people died of overdoses that year. The company strictly prohibits the buying and selling of drugs on its platforms and says it has "robust measures" to detect and remove such content. In an official statement, Meta emphasized its commitment to safety: "We recognise the significance of the drug crisis and are committed to using our platforms to keep people safe … and strict enforcement of our community standards."
However, when these systems err and label harmless groups as violators, the result is a frustrating, almost Kafkaesque experience in which feedback forms become the sole avenue for correction. Meta says AI is central to its content review process, enabling it to spot and remove problematic material before reports even come in. Some flagged items are routed to human review teams, but Barrett reported no such interaction during the charity's appeals.
"We thought, 'should we change our name?' But why should we? Why have we got to mess with our brand just because of Facebook?" Barrett questioned, capturing the absurdity and the potential overreach of automated moderation. She added, "It sort of verges on scary and laughable. You think these bots are running the world and they can’t tell the difference between a woman and an opioid. Heaven help us."
This situation echoes broader criticism Meta faced earlier this year, when thousands of Facebook and Instagram accounts were mass-banned or suspended. Users blamed the AI moderation tools for the mistakes, and while the company admitted to a "technical error" affecting Facebook Groups, such as the wrongful targeting of a meme-sharing group for allegedly violating rules on "dangerous organisations or individuals", it denied any widespread increase in erroneous enforcement across its platforms and said it was addressing the summer glitch behind the issues.
The broader point is easy to miss: in a world increasingly reliant on algorithms for fairness and safety, incidents like this expose the human cost of automation gone awry. For anyone new to how this works, think of moderation AI as a pattern-matcher: highly efficient at catching bad actors who sell drugs online at scale, but without the contextual judgment to recognize that a charity's name is not a drug listing.
Meta has been contacted for further comment on this specific case, but the broader debate lingers: are tech giants doing enough to balance safety with accuracy? Should small organizations bear the brunt of these errors, or is it time for more accountability and transparency in AI decisions? If you or someone you know has been unfairly flagged by a platform's algorithms, share your experience in the comments below.