Part 2 of WSJ 6/7/23 article:
Such work is vital because law-enforcement agencies lack the resources to investigate more than a tiny fraction of the tips NCMEC receives, said Levine of UMass. That means the platforms have primary responsibility to prevent a community from forming and normalizing child sexual abuse.
Meta has struggled with these efforts more than other platforms because of both weak enforcement and design features that promote content discovery of legal as well as illicit material, Stanford found.
The Stanford team found 128 accounts offering to sell child-sex-abuse material on Twitter, less than a third the number they found on Instagram, which has a far larger overall user base than Twitter. Twitter didn’t recommend such accounts to the same degree as Instagram, and it took them down far more quickly, the team found.
Among other platforms popular with young people, Snapchat is used mainly for its direct messaging, so it doesn’t help create networks. And TikTok’s platform is one where “this type of content does not appear to proliferate,” the Stanford report said.
Twitter didn’t respond to requests for comment. TikTok and Snapchat declined to comment.
David Thiel, chief technologist at the Stanford Internet Observatory, said, “Instagram’s problem comes down to content-discovery features, the ways topics are recommended and how much the platform relies on search and links between accounts.” Thiel, who previously worked at Meta on security and safety issues, added, “You have to put guardrails in place for something that growth-intensive to still be nominally safe, and Instagram hasn’t.”
The platform has struggled to oversee a basic technology: keywords. Hashtags are a central part of content discovery on Instagram, allowing users to tag and find posts of interest to a particular community—from broad topics such as #fashion or #nba to narrower ones such as #embroidery or #spelunking.
Pedophiles have their chosen hashtags, too. Search terms such as #pedobait and variations on #mnsfw (“minor not safe for work”) had been used to tag thousands of posts dedicated to advertising sex content featuring children, rendering them easily findable by buyers, the academic researchers found. Following queries from the Journal, Meta said it is in the process of banning such terms.
In many cases, Instagram has permitted users to search for terms that its own algorithms know may be associated with illegal material. In such cases, a pop-up screen for users warned that “These results may contain images of child sexual abuse,” and noted that production and consumption of such material causes “extreme harm” to children. The screen offered two options for users: “Get resources” and “See results anyway.”
In response to questions from the Journal, Instagram removed the option for users to view search results for terms likely to produce illegal images. The company declined to say why it had offered the option.
The pedophilic accounts on Instagram mix brazenness with superficial efforts to veil their activity, researchers found. Certain emojis function as a kind of code, such as an image of a map—shorthand for “minor-attracted person”—or one of “cheese pizza,” which shares its initials with “child pornography,” according to Levine of UMass. Many declare themselves “lovers of the little things in life.”
Accounts identify themselves as “seller” or “s3ller,” and many state their preferred form of payment in their bios. These seller accounts often convey the child’s purported age by saying they are “on chapter 14,” or “age 31” followed by an emoji of a reverse arrow.
Some of the accounts bore indications of sex trafficking, said Levine of UMass, such as one displaying a teenager with the word WHORE scrawled across her face.
Some users claiming to sell self-produced sex content say they are “faceless”—offering images only from the neck down—because of past experiences in which customers have stalked or blackmailed them. Others take the risk, charging a premium for images and videos that could reveal their identity by showing their face.
Many of the accounts show users with cutting scars on the inside of their arms or thighs, and a number of them cite past sexual abuse.
Even glancing contact with an account in Instagram’s pedophile community can trigger the platform to begin recommending that users join it.
Sarah Adams, a Canadian mother of two, has built an Instagram audience discussing child exploitation and the dangers of oversharing on social media. Given her focus, Adams’ followers sometimes send her disturbing things they’ve encountered on the platform. In February, she said, one messaged her with an account branded with the term “incest toddlers.”
Adams said she accessed the account—a collection of pro-incest memes with more than 10,000 followers—for only the few seconds it took to report it to Instagram, then tried to forget about it. But over the course of the next few days, she began hearing from horrified parents. When they viewed her Instagram profile, she said, Instagram recommended "incest toddlers" to them as a result of her contact with the account.
A Meta spokesman said that “incest toddlers” violated its rules and that Instagram had erred on enforcement. The company said it plans to address such inappropriate recommendations as part of its newly formed child safety task force.
As with most social-media platforms, the core of Instagram's recommendations is based on behavioral patterns, not on matching a user's interests to specific subjects. This approach is efficient at increasing the relevance of recommendations, and it works most reliably for communities that share a narrow set of interests.
In theory, this same tightness of the pedophile community on Instagram should make it easier for Instagram to map out the network and take steps to combat it. Documents previously reviewed by the Journal show that Meta has done this sort of work in the past to suppress account networks it deems harmful, such as with accounts promoting election delegitimization in the U.S. after the Jan. 6 Capitol riot.
Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.
Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.
Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.
A Meta spokesman acknowledged that Meta had received the reports and failed to act on them. A review of how the company handled reports of child sex abuse found that a software glitch was preventing a substantial portion of user reports from being processed, and that the company’s moderation staff wasn’t properly enforcing the platform’s rules, the spokesman said. The company said it has since fixed the bug in its reporting system and is providing new training to its content moderators.
Even when Instagram does take down accounts selling underage-sex content, they don’t always stay gone. Under the platform’s internal guidelines, penalties for violating its community standards are generally levied on accounts, not users or devices. Because Instagram allows users to run multiple linked accounts, the system makes it easy to evade meaningful enforcement. Users regularly list the handles of “backup” accounts in their bios, allowing them to simply resume posting to the same set of followers if Instagram removes them.
In some instances, Instagram’s recommendations systems directly undercut efforts by its own safety staff. After the company decided to crack down on links from a specific encrypted file transfer service notorious for transmitting child sex content, Instagram blocked searches for its name.
Instagram’s AI-driven hashtag suggestions didn’t get the message. Despite refusing to show results for the service’s name, the platform’s autofill feature recommended that users try variations on the name with the words “boys” and “CP” added to the end.
The company tried to disable those hashtags as part of its response to the Journal's queries. But within a few days Instagram was again recommending new variations of the service's name that also led to accounts selling purported underage-sex content.
Following the company’s initial sweep of accounts brought to its attention by Stanford and the Journal, UMass’s Levine checked in on some of the remaining underage seller accounts on Instagram. As before, viewing even one of them led Instagram to recommend new ones. Instagram’s suggestions were helping to rebuild the network that the platform’s own safety staff was in the middle of trying to dismantle.
A Meta spokesman said its systems to prevent such recommendations are currently being built. Levine called Instagram’s role in promoting pedophilic content and accounts unacceptable.
“Pull the emergency brake,” he said. “Are the economic benefits worth the harms to these children?”

Write to Jeff Horwitz at <jeff.horwitz@wsj.com> and Katherine Blunt at <katherine.blunt@wsj.com>