YouTube hired therapists for staff dealing with disturbing content
A year after its launch, the YouTube site kept crashing. Customer satisfaction was low and frustration was high. For one user, though, it was the final straw — and he called the office to complain. “I need to goddamn masturbate, and I can’t do that when you don’t have all those videos up,” he raged. “Get your s–t together, you goddamn whores!”
In “Like, Comment, Subscribe: Inside YouTube’s Chaotic Rise to World Domination” (Viking), author Mark Bergen charts the story of the world’s second most visited website, from its origins as a simple video-sharing platform to its place today as what he calls “the video scaffolding of the internet.”
YouTube was launched on Feb. 14, 2005, by former PayPal employees Jawed Karim, Chad Hurley and Steve Chen. The first video ever uploaded to the site was a short clip titled “Me at the zoo,” showing Karim visiting the San Diego Zoo and marveling at the length of the elephants’ trunks.
Fast-forward 17 years, and there are now more than 500 hours of video uploaded to the site every minute. As Bergen writes: “Imagine the longest movie you’ve ever seen. A ‘Lord of the Rings,’ maybe. Now imagine watching it one hundred times in a row, and you still have not sat through the footage added to YouTube every sixty seconds.”
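To put Bergen’s comparison in rough numbers, here is a back-of-the-envelope sketch in Python (the 4.2-hour running time is an assumed stand-in, roughly the extended “Return of the King” cut):

    # Back-of-the-envelope check of Bergen's comparison.
    UPLOAD_HOURS_PER_MINUTE = 500  # footage added to YouTube every 60 seconds
    MOVIE_HOURS = 4.2              # assumed: one very long movie

    viewings = UPLOAD_HOURS_PER_MINUTE / MOVIE_HOURS
    print(f"One minute of uploads is about {viewings:.0f} back-to-back viewings")
    # Prints about 119 -- so even 100 viewings in a row falls short, as Bergen says.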
While Karim’s initial clip was short and sweet, YouTube’s popularity – it now has around 2 billion visitors each month – has meant that it has, at times, struggled to keep on top of some of the more questionable content that is uploaded. Whether it’s graphic clips or conspiracy theories, ISIS videos, alt-right advocates or porn, the platform grapples constantly with protecting its users on the one hand and preserving free speech on the other.
In its formative years, staff would routinely filter content manually, working 24/7 to ensure that nothing posted breached the site’s guidelines. The so-called SQUAD (Safety, Quality, and User Advocacy Department) team was dedicated to removing anything that could cause distress. But seeing explicit, exploitative or disturbing content on a daily basis took its toll on workers. In time, YouTube even employed therapists to help them cope.
There was always something wacky or weird being uploaded. Once, after Google’s $1.65 billion takeover in 2006, a gaggle of the new parent company’s execs walked past the moderators’ screens just as the team was deliberating over some bizarre footage from Japan, in which women used octopuses in sexual maneuvers. “The managers hurriedly hid the screens, nervous that the suits would freak out,” writes Bergen.
When the team couldn’t decide if a video contravened the guidelines, they called in the lawyers. During one phone call, a moderator even had to describe in exacting detail the entirety of Robin Thicke’s “Blurred Lines” music video so that the company lawyer, Lance Kavanaugh, could make the call as to whether the semi-nude models appearing in it were “artistic” or “sexually gratifying.”
He was driving at the time.
But the platform’s soaring popularity meant YouTube “could barely police its own backyard.”
Never was that more evident than in the fall of 2007, when an 18-year-old Finnish student, Pekka-Eric Auvinen, posted several videos about school shootings, including Columbine, as well as clips of himself using firearms. His final post, titled “Jokela High School Massacre: 11/7/2007,” was never flagged. On the day in question, he walked into the school armed with a semiautomatic pistol and killed eight people before turning the gun on himself.
Live-streaming brought new problems. In 2019, self-confessed white supremacist Brenton Tarrant broadcast live the initial stages of his attacks on two mosques in Christchurch, New Zealand, in which he killed 51 people. He even name-dropped YouTube’s biggest star, PewDiePie, during the rampage, urging viewers to subscribe to PewDiePie’s channel.
Similarly, the sheer volume of footage uploaded from the “Arab Spring” uprisings in the early 2010s, and more recently, some horrific footage broadcast by the terrorist group ISIS and its followers, presented the site with even more problems. “YouTube was caught in the vortex,” says Bergen. “The company had rushed to expand across the globe, pushing citizens to broadcast in every language and nation they could, without putting enough staff in these countries to watch videos or deal with politics on the ground.”
But YouTube, backed by the ingenuity and financial muscle of its parent company, has come a long way in tackling contentious clips, developing advanced machine-learning algorithms that can now outperform humans. “Early on they couldn’t detect a butt from a peach, leaving that to humans,” writes Bergen, “but now they developed skin-detection algorithms to remove obscene stuff automatically.”
They even have a “trashy video classifier” to ensure the home page doesn’t become too low-rent.
The algorithms aren’t perfect, though.
One recent update to the skin-detection system, for example, saw videos from scores of grumpy bodybuilders disappear because it “couldn’t differentiate porn from Speedos.”
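For a sense of why such systems misfire, here is a toy Python sketch of a rule-based skin detector. It illustrates a classic published color heuristic, not YouTube’s actual code, and the 0.4 threshold is an arbitrary assumption; a swimwear frame and an explicit frame can trip exactly the same rule:

    import numpy as np

    def skin_pixel_mask(rgb):
        # Boolean mask of likely skin-tone pixels in an H x W x 3 RGB array,
        # using a classic rule-of-thumb (Kovac et al., 2003). Modern systems
        # use learned models rather than hand-written color rules.
        r, g, b = (rgb[..., i].astype(int) for i in range(3))
        spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
        return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
                & (np.abs(r - g) > 15) & (r > g) & (r > b))

    def flag_frame(rgb, threshold=0.4):
        # Flag a frame for review if too much of it looks like skin.
        # The threshold is assumed for illustration: it fires on Speedos
        # as readily as on nudity -- the failure mode described above.
        return skin_pixel_mask(rgb).mean() > threshold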