The Washington Post

Racists and Taliban supporters have flocked to Twitter’s new audio service after executives ignored warnings

Employees who complained about the lack of moderation say they were sidelined

Updated December 10, 2021 at 4:36 p.m. EST|Published December 10, 2021 at 11:50 a.m. EST
Twitter co-founder Jack Dorsey, who stepped down as the network's CEO on Nov. 29, 2021. (Prakash Singh/AFP/Getty Images)

Earlier this year, as Twitter raced to roll out Spaces, its new live audio chat feature, some employees asked how the company planned to make sure the service didn’t become a platform for hate speech, bullying and calls to violence.

In fact, there was no plan. In a presentation to colleagues shortly before its public launch in May, a top Twitter executive, Kayvon Beykpour, acknowledged that people were likely to break Twitter’s rules in the audio chats, according to an attendee who spoke on the condition of anonymity to describe internal matters. But he and other Twitter executives — convinced that Spaces would help revive the sluggish company — refused to slow down.

Fast-forward six months, and those problems have become reality. Taliban supporters, white nationalists, and anti-vaccine activists sowing coronavirus misinformation have hosted live audio broadcasts on Spaces that hundreds of people have tuned in to, according to researchers, users and screenshots viewed by The Washington Post. Other Spaces conversations have disparaged transgender people and Black Americans. These chats are neither policed nor moderated by Twitter, the company acknowledges, because it does not have human moderators or technology that can scan audio in real time.

“Dear @TwitterSafety, ISIS recruiters are active on Twitter Spaces. They are openly inciting genocide of Shiite people,” wrote one Twitter user in late November, who included a short recorded clip of a Spaces chat where a user made derogatory comments about Shiite Muslims. Two people who participated in the hour-long conversation, including an Afghan journalist, told The Washington Post that the host declared support for the Islamic State terrorist group and said that Shiite Muslims deserved to die.

The botched launch of Spaces is a sign of how ongoing turmoil at Twitter is causing it to overlook hate, polarization, and extremism — and to repeat the mistakes that have long plagued Silicon Valley companies. Twitter in particular has struggled under absentee leadership in recent years. Last week, CEO Jack Dorsey unexpectedly resigned amid a campaign by investors to innovate more rapidly.


That investor pressure led the company to “ship fast and learn in public” — pushing out products before they are tested for safety, said current and former employees who spoke on the condition of anonymity to discuss sensitive matters. It’s an ethos that has often backfired on social platforms, including as recently as October, when a Facebook whistleblower came forward with documents showing the social media giant was aware of the societal harms its products caused. In many instances, it pressed forward with them anyway.

Now, as social media companies charge into new arenas — from audio to virtual reality to cryptocurrency — they are at risk of not implementing the lessons of the past. Even before Spaces was launched to Twitter’s more than 200 million daily users, employees began raising alarms about the potential for live audio broadcasts to be used to sow hate and extremism. But those who called for Twitter to slow down have been sidelined by managers, according to four current and former employees who spoke on condition of anonymity to discuss sensitive matters.

Twitter’s new CEO, Parag Agrawal, wasted no time in announcing a major reorganization of the company last week. He jettisoned two executives — including one who had been brought in to “detoxify” the platform — while consolidating power over the company’s core products under the hard-charging Beykpour. None of those products is more central to the company’s growth strategy than Spaces, according to the employees, where Twitter believes it has a chance to become a leader in the hot new medium of live audio.

“Ensuring people’s safety and encouraging healthy conversations, while helping hosts and listeners to control their experience, have been key priorities since the beginning of [Spaces’] development,” said spokeswoman Viviana Wiewall, who acknowledged the lack of real-time moderation for audio. “We’re exploring avenues in this regard, but it’s not something that we have available at this time.”

Wiewall noted that technology capable of scanning audio in real time either did not exist or was in its nascent stages. But she said some defenses against harmful audio content exist. That includes using existing software that detects problematic keywords — such as racist slurs — to scan the titles of Spaces chats, as well as depending on users to report potential rule-breaking content.


But this safety system has already failed to detect several harmful and rule-breaking Spaces, including one hosted by users in November titled, “lets be honest, tr@nswomen are born dishonest, frauds, liars, and deceivers” and another called “Ask a White Nationalist anything.”

A third, titled in part “Blck PPL deserve NUFFIN,” was attended by 400 listeners in November and likely got around keyword detection because another part of the title misspelled the n-word.

Worse, Twitter’s software mistakenly helped some of these conversations go viral. The company’s software identified the chats as popular because many listeners were tuning into them, and as a result promoted them to more users. Wiewall acknowledged the issue and blamed it on a software “bug” that she said had been addressed.

“Dumpster fire does not even describe” the way Spaces was managed, said a former member of that team. They added that chaotic and arrogant leadership led to the problems. “When you work on something for months and no one is gathering any findings about how bad people might use this, well then it’s not shocking that months later you have Taliban or racists using the platform.”

The current and former employees said other products have also been rushed out amid renewed Wall Street pressure to grow. A service called Tip Jar, which allows Twitter users to send payments to other users who post content such as performances or stand-up comedy routines, invaded people’s privacy when it inadvertently exposed home addresses shortly after its launch in May. Twitter said the issue was PayPal’s fault but subsequently added a warning label letting people know their personal information could be exposed.

As tech giants venture into new territories, experts and lawmakers say that they have not resolved their existing problems and could further amplify them with newer, less-moderated technologies.

“Twitter spent six years creating a strong set of procedures to take dangerous content off its platform. And then it created a whole new platform where those procedures don’t work,” said Emerson Brooking, a resident senior fellow at the Atlantic Council’s Digital Forensics Lab who studies how extremists use social media. “Spaces is totally ungoverned.”


The race to audio

Twitter raced to develop its audio tool last year amid intense competition from an upstart audio chat competitor called Clubhouse. The rival enabled a panel of speakers in a virtual “room” to hold a live conversation, with audience members invited “onstage” by the hosts.

Launched in March 2020, Clubhouse boomed to 10 million active users as celebrities and others turned to audio for the sort of intimate, spontaneous conversations and interactive live events that were no longer possible during the pandemic.

Around the same time, activist investors from the hedge fund Elliott Management gained a large stake in Twitter and a seat on the company’s board. Elliott campaigned to oust Dorsey and issued a list of demands that Twitter pick up the pace of product development, which had stagnated in recent years.

Soon, Spaces — a nascent experiment at the time — was moved to the No. 1 slot on the company’s priority road map. By early 2021, Beykpour had reassigned over 100 people to work on Spaces — versus roughly a dozen in 2020. He had turned to a friend and former colleague at the live-video streaming start-up Periscope, Alex Khoshnevissan, to lead Spaces.

But conflicts arose early on. In the first months of 2021, the product was in “beta” mode as Twitter tested it with a small number of power users. People involved in Spaces say that Khoshnevissan was a disorganized and mercurial manager who was more focused on creating a vision than on the day-to-day work of building the audio tool. They say he was reluctant to share information and would go on abstract rants. In one document obtained by The Post, the subject line of an email from him about the project was a “murmuration of starlings.”

As the company got ready to launch Spaces to Twitter users with over 600 followers in May, employees began asking whether Spaces would deploy technological tools to scan audio for rule violations.


Twitter, like Facebook and YouTube, has built extensive tools in recent years to spot slurs, deep fakes, bots, and disinformation networks. The companies also employ third-party moderators — Twitter has over 2,000 of them today — to read posts and enforce rules.

The employees were told, however, that the technology to do so for Spaces did not exist and that the small number of human moderators was incapable of listening to tens of thousands of conversations in multiple languages in real time. People who suggested that the company should slow down to build better safety technology were dismissed or excluded from meetings, according to three of the employees. One of them recalled at least five different meetings where the lack of moderation was raised as a problem to Khoshnevissan and other managers.

As Spaces finally launched, Khoshnevissan sent another memo describing a host of problems. He lamented that “@TwitterSpace has lost its fearless, open, and transparent communication.” He also said he was disappointed and that people needed to do better. “We can definitely do this — please consider this a call for help and collaboration.”

But months into the public launch, the number of people listening to Spaces was trending downward — dropping to below a million in July from more than 1.5 million in May, according to an internal chart obtained by The Post.

Khoshnevissan demanded that the team quintuple the number of listeners by the end of the year after seeing those numbers. “We need to be moving with extreme urgency — existential urgency, like Spaces depends on it, because it does,” he wrote in another note obtained by The Post. “At this rate Spaces will not exist in six months, and not just as a figure of speech.”

Twitter’s Wiewall said she had no comment on the internal dynamics or on Khoshnevissan. He declined to comment.

In a statement after this story published, Wiewall said that the moderation plan for Spaces also included prioritizing reports of problematic audio conversations for review, and that the company had created a separate team to moderate audio after it receives reports. She said the company is working on building technological tools to enforce its rules proactively.

The Wild West

In August, as the U.S. withdrew from Afghanistan, the militant Islamist insurgency, the Taliban, took over the country’s government. Social media companies had to decide whether to ban the group, which has conducted public beheadings of its enemies and engaged in violence against women.

Facebook continued to ban the Taliban, but Twitter allowed the organization to maintain an official presence as the de facto government. The company said it would continue to enforce its rules against any violating content posted by the Taliban or their supporters.

Soon after the takeover, Taliban supporters around the world started creating Spaces, hosting dozens of conversations in English and Pashto to gain support for the militant group, said the Atlantic Council’s Brooking. In some conversations he listened to, he said, people would discuss conspiracy theories about the terrorist attacks of 9/11, while others were openly anti-Semitic. At times, cryptocurrency enthusiasts would join the conversations to encourage the Taliban to use cryptocurrency to bypass the global financial system. In other cases, white nationalists would hop on to pro-Taliban Spaces to make negative comments about Israel and Jews.

Twitter prohibits language that dehumanizes people on the basis of religion and national origin. It also prohibits using its platform to “further” illegal activities, such as money laundering.

Brooking said he was shocked because Twitter appeared to be enforcing its policies against pro-Taliban tweets — policies that he agreed with — but ignoring dozens of audio chats where rules were being broken and ideas that were known to radicalize people spread. Twitter also did not allow Spaces to be recorded, making it nearly impossible for researchers to track what was said. (Twitter has since allowed recording and says it also saves audio on its own servers for up to 90 days.)


By fall, other extremists, including white nationalists and conspiracy theorists, were starting to host and promote Spaces, said Rita Katz, executive director of SITE Intelligence Group.

Some users contacted by The Post who had reported problems said the company was too slow to respond.

Last month, a Twitter user screenshotted and posted eight different pro-Taliban forums in a bid to gain the company’s attention. “As Taliban members are always using Twitter ‘spaces’ to hold meetings, I’m just going to screenshot everyday until somebody does something about it,” the user tweeted. The person, who said his name was Alex Gomez and that he was a graduate student in biostatistics at Florida International University, said Twitter did not immediately delete many of the accounts he had flagged — enabling the same people to create new audio conversations every day.

At this point, a full-blown debate was taking place at Twitter, with employees at team meetings flagging pro-Taliban Spaces due to the risk of allowing a known extremist group to use Spaces unmoderated.

Extremists, including the far-right in the U.S., are known to be early adopters of emerging technologies, experts said. Clubhouse was plagued with similar problems.

“This is straight out of their playbook,” said Colin P. Clarke, research director at the Soufan Group, who studies terrorists. “They’re going to extract every bit of recruitment and propaganda out of it before it gets taken away.”

Spaces was also being used by pro-democracy activists and dissidents in both Hong Kong and Iran, the experts noted.

Sarah T. Roberts, professor at UCLA’s Center for Critical Internet Inquiry, said the Spaces problems reminded her of a pattern of similar mistakes by tech companies. Facebook raced, in 2015, to launch live video streaming, she said. Soon, people were filming themselves committing suicide, and murders and mass shootings were posted live.

“Audio is the wild West,” she said.