Photos by Michael Starghill
On the afternoon of June 1, Kaitlyn Siragusa received a notification from Instagram: One of her posts had been taken down for violating the platform’s content policies. If she were to break the rules again, the alert warned, her 1.6 million-follower account could be disabled. Siragusa, a 25-year-old Twitch streamer known to fans as “Amouranth,” was puzzled. She was confident that the photo, which showed her standing poolside in a striped bikini, fully complied with Instagram’s policies.
It was the latest of nearly a dozen posts that Instagram had recently deleted, including some that she had uploaded long before their removals. In one case, she got a notification alerting her to the deletion of an “Instagram Story” that had publicly expired from her profile 10 days earlier. (Stories automatically disappear after 24 hours, though they may be saved to the poster’s private archives.) The streamer’s content had been disappearing so frequently over the previous several weeks, Siragusa told HuffPost, that she’d started a spreadsheet to keep track.
Four days later, on June 5, Siragusa received a mysterious email from someone claiming to work for Cognizant, an American tech firm hired to do content moderation for Facebook and Instagram.
“I’m sure you’ve noticed recently than [sic] many of your posts and stories have been removed,” said the email, one of several that Siragusa shared with HuffPost. “Perhaps we can reach an agreement privately.”
For monthly payments of 0.25 bitcoin (about $2,600), the emailer proposed, Siragusa could rest assured that her content would stay up. The emailer signed off as “Tampa” — the location of a Cognizant facility where content moderators have described being severely overworked and underpaid.
At first, Siragusa ignored the email. She’d never heard of Cognizant, and she assumed she was just being targeted by a scammer who was reporting her posts as violating Instagram’s rules over and over again until they disappeared and then posing as a content moderator to try to extort her. But there was something odd about the next messages that “Tampa” sent her in July, following more photo takedowns.
“I am sure by now you have noticed that two additional posts have been struck and removed. Your posts made on April 3 at 10:18 AM, and July 2 at 11:54 AM,” said one of the emails. Another simply stated: “July 12 at 10:34 AM.”
When a photo or video is posted to Instagram, it doesn’t feature a publicly visible timestamp, just a date. There are little-known ways to manually extract timestamps from existing posts — but not for those that have already come down. And yet, somehow, “Tampa” had correctly listed the exact upload times of Siragusa’s recently deleted Instagram photos, including one that she’d posted more than three months earlier.
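For context, one widely circulated technique works only on live posts: the numeric media ID behind a post's shortcode embeds its upload time in its high bits. The alphabet, bit layout, and epoch constant below are community-documented observations, not an official Instagram API, and could change or be wrong; treat this as a rough sketch of the idea rather than a reliable tool:

```python
import datetime

# Instagram's URL shortcodes are a base-64-style encoding of a numeric media ID,
# and the ID's high bits encode the creation time in milliseconds.
# IG_EPOCH_MS is a community-documented offset, NOT an official API value.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
IG_EPOCH_MS = 1314220021721  # assumption: Instagram's internal epoch, in ms

def shortcode_to_media_id(shortcode: str) -> int:
    media_id = 0
    for ch in shortcode:
        media_id = media_id * 64 + ALPHABET.index(ch)
    return media_id

def media_id_to_datetime(media_id: int) -> datetime.datetime:
    # The top bits of the ID are a millisecond offset from the epoch above.
    ms = (media_id >> 23) + IG_EPOCH_MS
    return datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)
```

The key limitation is the one the story turns on: the shortcode and media ID exist only while the post is live. Once a post is deleted, its page is gone, and so is any way to extract the timestamp this way.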
Siragusa was able to confirm the times by checking the removal notifications she’d received from Instagram, such as the one shown above on the left. (After taking down a user’s content, Instagram sends them a private notification including a blurred copy of the post, the reason for its deletion, the date and time it was posted, and a link that can be used to appeal the removal.)
“When I saw the timestamps, that’s when I got concerned,” said Siragusa, who earns a living by video-streaming and online modeling, and relies on Instagram to promote herself. “I realized this was probably an actual person with Instagram.”
Siragusa had her agent get in contact with Facebook, which owns Instagram, to discuss the matter last month. The agent was able to arrange and record a phone call with a Facebook representative ― a rare line of communication that’s normally unavailable to the vast majority of Facebook users. Audio of the recording was shared with HuffPost.
During the 20-minute call, Siragusa’s agent explained three times that a self-proclaimed Instagram content moderator was trying to extort Siragusa for content protection.
“I can’t answer for the emails that you’re receiving,” responded the representative, who adamantly repeated that Siragusa’s posts had been “rightfully removed” for being “sexually suggestive.” The poolside photo was deemed inappropriate for Instagram in part because Siragusa had “her hand near her chest, and things like that,” according to the representative. A photo of her in a Hooters jersey, which was flagged as “nudity or pornography,” was removed because “she’s, uh, pouring water on herself.”
In the spring of this year, Instagram quietly started shadow-banning users’ borderline content that does not actually violate the company’s community guidelines, including vaguely “inappropriate” and “sexually suggestive” posts. Such content is subject to algorithmic demotion but, per Instagram’s own public policies, not deletion. When Siragusa’s agent reminded the Facebook representative that “sexually suggestive” content does not meet Instagram’s criteria for removal, the representative doubled down, stating: “We can’t allow that kind of content on our platform.”
Frustrated and confused, having received no real answers from Facebook despite her rare direct access, Siragusa reached out to HuffPost.
“I feel like it is almost impossible for individuals — even those with millions of followers — to face down Facebook,” she said.
Contacted by HuffPost, Cognizant declined to comment for this story, instead deferring to its client. In a statement, a Facebook spokesperson said that the representative with whom Siragusa’s agent had spoken is part of a team that “isn’t versed in the details of the Community Guidelines nor is it their responsibility to communicate them.”
Regarding the emails Siragusa received from “Tampa,” Facebook told HuffPost: “We take accusations like this very seriously. We investigated this matter and did not find any evidence of abuse.”
According to the Facebook spokesperson, Instagram also determined that all three of the deleted posts for which “Tampa” provided timestamps had been flagged to content moderators by Instagram’s proactive artificial intelligence technology — not from user reports — as spam, nudity and sexual solicitation.
The fact that Siragusa’s posts were flagged by AI doesn’t wholly eliminate the possibility that “Tampa” is just an opportunistic scammer out in the world, with no relation to Cognizant or Instagram. But it does make that possibility rather unlikely. With AI flagging the content, “Tampa” would have had to diligently track each of Siragusa’s Instagram pictures, manually collect the timestamps for all of them, wait for them to go down on their own, and then contact her with the posts’ details.
This extremely patient outside scammer — who apparently never bothered to speed things up by reporting Siragusa’s posts — must have also known about Cognizant’s Tampa facility at the time of the initial email to Siragusa, which was sent weeks before the first press mention of Cognizant’s content moderation operations in the city.
“Tampa” also exhibited precise knowledge of non-public Instagram policy. In that initial email, while trying to entice Siragusa to pay for protection, “Tampa” told her that content rules can be “strictly or loosely enforced” and that once Instagram removes a user’s post, the user’s account is restricted for exactly 14 days, during which impressions generated by the account are limited.
Instagram confirmed to HuffPost that the information about the 14-day restriction period is accurate and that the company had not shared it publicly.
“Tampa” did not respond to HuffPost when contacted by email. It is unclear if he or she has also solicited bribes from other Instagram users.
Over the past several days, three more of Siragusa’s Instagram photos have come down, right at the end of her most recent 14-day restriction period — meaning her account’s engagement will be limited for another two weeks at least. This time, the accompanying Instagram notifications didn’t give specific reasons for the removals — stating only that each post “goes against Community Guidelines” — and offered no option to appeal.
It’s disconcerting that Instagram seems to “pick winners and losers by fiat,” said Siragusa. “There is something extremely suspect in the way this whole ordeal has played out.”
How the Dumb Design of a WWII Plane Led to the Macintosh
The B-17 Flying Fortress rolled off the drawing board and onto the runway in a mere 12 months, just in time to become the fearsome workhorse of the US Army Air Forces during World War II. Its astounding toughness made pilots adore it: The B-17 could roar through angry squalls of shrapnel and bullets, emerging pockmarked but still airworthy. It was a symbol of American ingenuity, held aloft by four engines, bristling with a dozen machine guns.
Imagine being a pilot of that mighty plane. You know your primary enemy—the Germans and Japanese in your gunsights. But you have another enemy that you can’t see, and it strikes at the most baffling times. Say you’re easing in for another routine landing. You reach down to deploy your landing gear. Suddenly, you hear the scream of metal tearing into the tarmac. You’re rag-dolling around the cockpit while your plane skitters across the runway. A thought flickers across your mind about the gunners below and the other crew: “Whatever has happened to them now, it’s my fault.” When your plane finally lurches to a halt, you wonder to yourself: “How on earth did my plane just crash when everything was going fine? What have I done?”
For all the triumph of America’s new planes and tanks during World War II, a silent reaper stalked the battlefield: accidental deaths and mysterious crashes that no amount of training ever seemed to fix. And it wasn’t until the end of the war that the Air Force finally resolved to figure out what had happened.
To do that, the Air Force called upon a young psychologist at the Aero Medical Laboratory at Wright-Patterson Air Force Base near Dayton, Ohio. Paul Fitts was a handsome man with a soft Tennessee drawl, analytically minded but with a shiny wave of Brylcreemed hair, Elvis-like, which projected a certain suave nonconformity. Decades later, he’d become known as one of the Air Force’s great minds, the person tasked with the hardest, weirdest problems—such as figuring out why people saw UFOs.
For now, though, he was still trying to make his name with a newly minted PhD in experimental psychology. Having an advanced degree in psychology was still a novelty; with that novelty came a certain authority. Fitts was supposed to know how people think. But his true talent was realizing that he didn’t.
When the thousands of reports about plane crashes landed on Fitts’s desk, he could have easily looked at them and concluded that they were all the pilot’s fault—that these fools should have never been flying at all. That conclusion would have been in keeping with the times. The original incident reports themselves would typically say “pilot error,” and for decades no more explanation was needed. This was, in fact, the cutting edge of psychology at the time. Because so many new draftees were flooding into the armed forces, psychologists had begun to devise aptitude tests that would find the perfect job for every soldier. If a plane crashed, the prevailing assumption was: That person should not have been flying the plane. Or perhaps they should have simply been better trained. It was their fault.
But as Fitts pored over the Air Force’s crash data, he realized that if “accident prone” pilots really were the cause, there would be randomness in what went wrong in the cockpit. These kinds of people would get hung up on anything they operated. It was in their nature to take risks, to let their minds wander while landing a plane. But Fitts didn’t see noise; he saw a pattern. And when he went to talk to the people involved about what actually happened, they told of how confused and terrified they’d been, how little they understood in the seconds when death seemed certain.
The examples slid back and forth on a scale of tragedy to tragicomic: pilots who slammed their planes into the ground after misreading a dial; pilots who fell from the sky never knowing which direction was up; the pilots of B-17s who came in for smooth landings and yet somehow never deployed their landing gear. And others still, who got trapped in a maze of absurdity, like the one who, having jumped into a brand-new plane during a bombing raid by the Japanese, found the instruments completely rearranged. Sweaty with stress, unable to think of anything else to do, he simply ran the plane up and down the runway until the attack ended.
Fitts’s data showed that during one 22-month period of the war, the Air Force reported an astounding 457 crashes just like the one in which our imaginary pilot hit the runway thinking everything was fine. But the culprit was maddeningly obvious for anyone with the patience to look. Fitts’s colleague Alphonse Chapanis did the looking. When he started investigating the airplanes themselves, talking to people about them, sitting in the cockpits, he also didn’t see evidence of poor training. He saw, instead, the impossibility of flying these planes at all. Instead of “pilot error,” he saw what he called, for the first time, “designer error.”
The reason all those pilots were crashing when their B-17s were easing into a landing was that the flaps and landing gear controls looked exactly the same. The pilots were simply reaching for the landing gear, thinking they were ready to land. And instead, they were pulling the wing flaps, slowing their descent, and driving their planes into the ground with the landing gear still tucked in. Chapanis came up with an ingenious solution: He created a system of distinctively shaped knobs and levers that made it easy to distinguish all the controls of the plane merely by feel, so that there was no chance of confusion even when flying in the dark.
By law, that ingenious bit of design—known as shape coding—still governs landing gear and wing flaps in every airplane today. And the underlying idea is all around you: It’s why the buttons on your videogame controller are differently shaped, with subtle texture differences so you can tell which is which. It’s why the dials and knobs in your car are all slightly different, depending on what they do. And it’s the reason the virtual buttons on your smartphone adhere to a pattern language.
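Shape coding has a direct software analogue: give each control its own distinct type, so that reaching for the wrong one fails loudly instead of silently doing the wrong thing. A minimal sketch, with hypothetical lever classes invented purely for illustration:

```python
# Each cockpit control gets its own "shape" — here, its own type — so the
# wing-flap lever can never be mistaken for the landing-gear lever.
class GearLever:
    pass

class FlapLever:
    pass

def deploy_landing_gear(lever):
    # Refuse any control that isn't shaped like the gear lever.
    if not isinstance(lever, GearLever):
        raise TypeError("wrong control: this lever does not lower the gear")
    return "landing gear down"
```

Calling `deploy_landing_gear(FlapLever())` raises an error immediately, the software equivalent of a pilot’s hand noticing the wrong knob by feel before the plane hits the runway.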
But Chapanis and Fitts were proposing something deeper than a solution for airplane crashes. Faced with the prospect of soldiers losing their lives to poorly designed machinery, they invented a new paradigm for viewing human behavior. That paradigm lies behind the user-friendly world that we live in every day. They realized that it was absurd to train people to operate a machine and assume they would act perfectly under perfect conditions.
Instead, designing better machines meant figuring out how people acted without thinking, in the fog of everyday life, which might never be perfect. You couldn’t treat humans as perfectly rational sponges for training. You had to take them as they were: distracted, confused, irrational under duress. Only by imagining them at their most limited could you design machines that wouldn’t fail them.
This new paradigm took root slowly at first. But by 1984—four decades after Chapanis and Fitts conducted their first studies—Apple was touting a computer for the rest of us in one of its first print ads for the Macintosh: “On a particularly bright day in Cupertino, California, some particularly bright engineers had a particularly bright idea: Since computers are so smart, wouldn’t it make sense to teach computers about people, instead of teaching people about computers? So it was that those very engineers worked long days and nights and a few legal holidays, teaching silicon chips all about people. How they make mistakes and change their minds. How they refer to file folders and save old phone numbers. How they labor for their livelihoods, and doodle in their spare time.” (Emphasis mine.) And that easy-to-digest language molded the smartphones and seamless technology we live with today.
Along the long and winding path to a user-friendly world, Fitts and Chapanis laid the most important brick. They realized that as much as humans might learn, they would always be prone to err—and they inevitably brought presuppositions about how things should work to everything they used. This wasn’t something you could teach out of existence. In some sense, our limitations and preconceptions are what it means to be human—and only by understanding those presumptions could you design a better world.
Today, this paradigm shift has produced trillions in economic value. We now presume that apps that reorder the entire economy should require no instruction manual at all; some of the most advanced computers ever made now come with only cursory instructions that say little more than “turn it on.” This is one of the great achievements of the last century of technological progress, with a place right alongside GPS, Arpanet, and the personal computer itself.
It’s also an achievement that remains unappreciated because we assume this is the way things should be. But with the assumption that even new technologies need absolutely no explaining comes a dark side: When new gadgets make assumptions about how we behave, they force unseen choices upon us. They don’t merely defer to our desires. They shape them.
User friendliness is simply the fit between the objects around us and the ways we behave. So while we might think that the user-friendly world is one of making user-friendly things, the bigger truth is that design doesn’t rely on artifacts; it relies on our patterns. The truest material for making new things isn’t aluminum or carbon fiber. It’s behavior. And today, our behavior is being shaped and molded in ways both magical and mystifying, precisely because it happens so seamlessly.
I got a taste of this seductive, user-friendly magic recently, when I went to Miami to tour a full-scale replica of Carnival Cruise’s so-called Ocean Medallion experience. I began my tour in a fake living room, with two of the best-looking project staffers pretending to be husband and wife, showing me how the whole thing was supposed to go.
Using the app, you could reserve all your activities before you even boarded the ship. And once on board, all you needed to carry was a disk the size of a quarter; with it, any one of the 4,000 touchscreens on the ship could beam you personalized information, such as which way you needed to go for your next reservation. The experience recalled not just scenes from Her and Minority Report, but computer-science manifestos from the late 1980s that imagined a suite of gadgets that would adapt to who you are, morphing to your needs in the moment.
Behind the curtains, in the makeshift workspace, a giant whiteboard wall was covered with a sprawling map of all the inputs that flow into some 100 different algorithms that crunch every bit of a passenger’s preference behavior to create something called the “Personal Genome.” If Jessica from Dayton wanted sunscreen and a mai tai, she could order them on her phone, and a steward would deliver them in person, anywhere across the sprawling ship.
The server would greet Jessica by name, and maybe ask if she was excited about her kitesurfing lesson. Over dinner, if Jessica wanted to plan an excursion with friends, she could pull up her phone and get recommendations based on the overlapping tastes of the people she was sitting with. If some of them liked fitness and others loved history, then maybe they’d all enjoy a walking tour of the market at the next port.
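At its core, that group-recommendation idea reduces to a simple overlap score: pick the excursion whose tags match the tastes of the most people at the table. The names and tags below are hypothetical, and Carnival’s actual algorithms are surely far more elaborate, but a minimal sketch looks like this:

```python
def recommend_for_group(guest_tastes, excursions):
    """Pick the excursion whose tags overlap with the most guests' tastes.

    guest_tastes: dict mapping guest name -> set of interest tags
    excursions:   dict mapping excursion name -> set of tags it matches
    """
    def score(tags):
        # Count how many guests have at least one taste this excursion satisfies.
        return sum(1 for tastes in guest_tastes.values() if tags & tastes)

    return max(excursions, key=lambda name: score(excursions[name]))

guests = {"Jessica": {"fitness"}, "Sam": {"history"}, "Lee": {"history", "food"}}
options = {
    "spin class": {"fitness"},
    "museum visit": {"history"},
    "market walking tour": {"fitness", "history", "food"},
}
```

With these made-up inputs, the walking tour wins because it satisfies all three guests, while the spin class and museum each satisfy only a subset.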
Jessica’s Personal Genome would be recalculated three times a second by 100 different algorithms using millions of data points that encompassed nearly anything she did on the ship: How long she lingered on a recommendation for a sightseeing tour; the options that she didn’t linger on at all; how long she’d actually spent in various parts of the ship; and what’s nearby at that very moment or happening soon. If, while in her room, she had watched one of Carnival’s slickly produced travel shows and seen something about a market tour at one of her ports of call, she’d later get a recommendation for that exact same tour when the time was right. “Social engagement is one of the things being calculated, and so is the nuance of the context,” one of the executives giving me the tour said.
It was like having a right-click for the real world. Standing on the mocked-up sundeck, knowing that anything I wanted, or might want, would find its way to me through the app or the screens that lit up around the cruise ship as I walked, it wasn’t hard to see how many other businesses might try to do the same thing. In the era following World War II, the idea that designers could make the world easier to understand was a breakthrough.
But today, “I understand what I should do” has become “I don’t need to think at all.” For businesses, intuitiveness has now become mandatory, because there are fortunes to be made by making things just a tad more frictionless. “One way to view this is that creating this kind of frictionless experience is an option. Another way to look at it is that there’s no choice,” said John Padgett, the Carnival executive who had shepherded the Ocean Medallion to life. “For millennials, value is important. But hassle is more important, because of the era they’ve grown up in. It’s table stakes. You have to be hassle-free to get them to participate.”
By that logic, the real world was getting to be disappointing when compared with the frictionless ease of this increasingly virtual world. Taken as a whole, Carnival’s vision for seamless customer service that can anticipate your every whim was like an Uber for everything, powered by Netflix recommendations for meatspace. And these are in fact the experiences that many more designers will soon be striving for: invisible, everywhere, perfectly tailored, with no edges between one place and the next. Padgett described this as a “market of one,” in which everything you saw would be only the thing you want.
The Market of One suggests to me a break point in the very idea of user friendliness. When Chapanis and Fitts were planting the seeds of the user-friendly world, they had to find the principles that underlie how we expect the world to behave. They had to preach the idea that products built on our assumptions about how things should work would eventually make even the most complex things easy to understand.
Steve Jobs’ dream of a “bicycle for the mind”—a universal tool that might expand the reach of anyone—has arrived. High technology has made our lives easier, made us better at our jobs, created jobs that never existed before, and brought the people we care about closer to us. But friction also has value: It’s friction that makes us question whether we do in fact need the thing we want. Friction is the path to introspection. Infinite ease quickly becomes the path of least resistance; it saps our free will, making us submit to someone else’s guess about who we are. We can’t let that pass. We have to become cannier, more critical consumers of the user-friendly world. Otherwise, we risk blundering into more crashes that we’ll only understand after the worst has already happened.
Excerpted from USER FRIENDLY: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play by Cliff Kuang with Robert Fabricant. Published by MCD, an imprint of Farrar, Straus and Giroux, on November 19, 2019. Copyright © 2019 by Cliff Kuang and Robert Fabricant. All rights reserved.
A Tesla Cybertruck Mishap, a Massive Data Leak, and More News
Hackers are stealing and Elon is squealing, but first: a cartoon about subscription dreams.
Here’s the news you need to know, in two minutes or less.
Meet the Tesla Cybertruck, Elon Musk’s Ford-fighting pickup truck
Tesla CEO Elon Musk last night unveiled his newest baby, an all-electric pickup called the Tesla Cybertruck. He demonstrated that it can take a sledgehammer to the door with nary a scratch, and he also accidentally demonstrated that it can’t take a ball to the window. But behind the showmanship and Elon’s audible disbelief at the onstage mishap is a truck with a 500-mile range and the torque that comes from an electric motor. It represents an important new market expansion for Tesla. Now it just has to actually put the darn thing into production.
1.2 billion records found exposed online in a single server
Hackers have long used stolen personal data to break into accounts and wreak havoc. And a dark web researcher found one data trove sitting exposed on an unsecured server. The 1.2 billion records don’t include passwords, credit card numbers, or Social Security numbers, but they do contain cell phone numbers, social media profiles, and email addresses—a great start for someone trying to steal your identity.
Fast Fact: 2025
That’s the year NASA expects to launch the first dedicated mission to Europa, where water vapor was recently discovered. The mission to Jupiter’s moon will involve peering beneath Europa’s icy shell for evidence of life.
News You Can Use:
Here’s how to hide nasty replies to your tweets on Twitter.