
How Google took on China—and lost

https://www.technologyreview.com/s/612601/how-google-took-on-china-and-lost/


Google’s first foray into Chinese markets was a short-lived experiment. Google China’s search engine was launched in 2006 and abruptly pulled from mainland China in 2010 amid a major hack of the company and disputes over censorship of search results. But in August 2018, the investigative journalism website The Intercept reported that the company was working on a secret prototype of a new, censored Chinese search engine, called Project Dragonfly. Amid a furor from human rights activists and some Google employees, US Vice President Mike Pence called on the company to kill Dragonfly, saying it would “strengthen Communist Party censorship and compromise the privacy of Chinese customers.” In mid-December, The Intercept reported that Google had suspended its development efforts in response to complaints from the company’s own privacy team, who learned about the project from the investigative website’s reporting.

Observers talk as if the decision about whether to reenter the world’s largest market is up to Google: will it compromise its principles and censor search the way China wants? This misses the point—this time the Chinese government will make the decisions.

Google and China have been locked in an awkward tango for over a decade, constantly grappling over who leads and who follows. Charting that dance over the years reveals major shifts in China’s relationship with Google and all of Silicon Valley. To understand whether China will let Google back in, we must understand how Google and China got here, what incentives each party faces—and how artificial intelligence might have both of them dancing to a new tune.  

The right thing to do?

When www.google.cn launched in 2006, the company had gone public only two years before. The iPhone did not yet exist, nor did any Android-based smartphones. Google was about one-fifth as large and valuable as it is today, and the Chinese internet was seen as a backwater of knockoff products that were devoid of innovation. Google’s Chinese search engine represented the most controversial experiment to date in internet diplomacy. To get into China, the young company that had defined itself by the motto “Don’t be evil” agreed to censor the search results shown to Chinese users.

Central to that decision by Google leadership was a bet that by serving the market—even with a censored product—they could broaden the horizons of Chinese users and nudge the Chinese internet toward greater openness.

At first, Google appeared to be succeeding in that mission. When Chinese users searched for censored content on google.cn, they saw a notice that some results had been removed. That public acknowledgment of internet censorship was a first among Chinese search engines, and it wasn’t popular with regulators.

“The Chinese government hated it,” says Kaiser Kuo, former head of international communications for Baidu. “They compared it to coming to my house for dinner and saying, ‘I will agree to eat the food, but I don’t like it.’” Google hadn’t asked the government for permission before implementing the notice but wasn’t ordered to remove it. The company’s global prestige and technical expertise gave it leverage. China might be a promising market, but it was still dependent on Silicon Valley for talent, funding, and knowledge. Google wanted to be in China, the thinking went, but China needed Google.

Google’s censorship disclaimer was a modest victory for transparency. Baidu and other search engines in China soon followed suit. Over the next four years, Google China fought skirmishes on multiple fronts: with the Chinese government over content restrictions, with local competitor Baidu over the quality of search results, and with its own corporate leadership in Mountain View, California, over the freedom to adapt global products for local needs. By late 2009, Google controlled more than a third of the Chinese search market—a respectable share but well below Baidu’s 58%, according to data from Analysys International.

In the end, though, it wasn’t censorship or competition that drove Google out of China. It was a far-reaching hacking attack known as Operation Aurora that targeted everything from Google’s intellectual property to the Gmail accounts of Chinese human rights activists. The attack, which Google said came from within China, pushed company leadership over the edge. On January 12, 2010, Google announced, “We have decided we are no longer willing to continue censoring our results on Google.cn, and so over the next few weeks we will be discussing with the Chinese government the basis on which we could operate an unfiltered search engine within the law, if at all.”

The sudden reversal blindsided Chinese officials. Most Chinese internet users could go about their online lives with few reminders of government controls, but the Google announcement shoved cyberattacks and censorship into the spotlight. The world’s top internet company and the government of the most populous country were now engaged in a public showdown.

“[Chinese officials] were really on their back foot, and it looked like they might cave and make some kind of accommodation,” says Kuo. “All of these people who apparently did not give much of a damn about internet censorship before were really angry about it. The whole internet was abuzz with this.”

But officials refused to cede ground. “China welcomes international Internet businesses developing services in China according to the law,” a foreign ministry spokeswoman told Reuters at the time. Government control of information was—and remains—central to Chinese Communist Party doctrine. Six months earlier, following riots in Xinjiang, the government had blocked Facebook, Twitter, and Google’s YouTube in one fell swoop, fortifying the “Great Firewall.” The government was making a bet: China and its technology sector did not need Google search to succeed.

Google soon abandoned google.cn, retreating to a Hong Kong–based search engine. In response, the Chinese government decided not to fully block services like Gmail and Google Maps, and for a while it allowed sporadic access from the mainland to the Hong Kong search engine too. The two sides settled into a tense stalemate.

Google’s leaders seemed prepared to wait it out. “I personally believe that you cannot build a modern knowledge society with that kind of [censorship],” Google chairman Eric Schmidt told Foreign Policy in 2012. “In a long enough time period, do I think that this kind of regime approach will end? I think absolutely.”

Role reversal

But instead of languishing under censorship, the Chinese internet sector boomed. Between 2010 and 2015, there was an explosion of new products and companies. Xiaomi, a hardware maker now worth over $40 billion, was founded in April 2010. A month earlier Meituan, a Groupon clone that turned into a juggernaut of online-to-offline services, was born; it went public in September 2018 and is now worth about $35 billion. Didi, the ride-hailing company that drove Uber out of China and is now challenging it in international markets, was founded in 2012. Chinese engineers and entrepreneurs returning from Silicon Valley, including many former Googlers, were crucial to this dynamism, bringing world-class technical and entrepreneurial chops to markets insulated from their former employers in the US. Older companies like Baidu and Alibaba also grew quickly during these years.

The Chinese government played contradictory roles in this process. It cracked down on political speech in 2013, imprisoning critics and instituting new laws against “spreading rumors” online—a one-two punch that largely suffocated political discussion on China’s once-raucous social-media sites. Yet it also launched a high-profile campaign promoting “mass entrepreneurship and mass innovation.” Government-funded startup incubators spread across the country, as did government-backed venture capital.

That confluence of forces brought results. Services like Meituan flourished. So did Tencent’s super-app WeChat, a “digital Swiss Army knife” that combines aspects of WhatsApp, PayPal, and dozens of other apps from the West. E-commerce behemoth Alibaba went public on the New York Stock Exchange in September 2014, selling $25 billion worth of shares—still the most valuable IPO in history.

Amidst this home-grown success, the Chinese government decided to break the uneasy truce with Google. In mid-2014, a few months before Alibaba’s IPO, the government blocked virtually all Google services in China, including many considered essential for international business, such as Gmail, Google Maps, and Google Scholar. “It took us by surprise, as we felt Google was one of those valuable properties [that they couldn’t afford to block],” says Charlie Smith, the pseudonymous cofounder of GreatFire, an organization that tracks and circumvents Chinese internet controls.

The Chinese government had pulled off an unexpected hat trick: locking out the Silicon Valley giants, censoring political speech, and still cultivating an internet that was controllable, profitable, and innovative.

AlphaGo your own way

With the Chinese internet blossoming and the government not backing down, Google began to search for ways back into China. It tried out less politically sensitive products—an “everything but search” strategy—but with mixed success.

In 2015, rumors swirled that Google was close to bringing its Google Play app store back to China, pending Chinese government approval—but the promised app store never materialized. This was followed by a partnership with Mobvoi, a Chinese smart-watch maker founded by an ex-Google employee, to make voice search available on Android Wear in China. Google later invested in Mobvoi, its first direct investment in China since 2010.

In March 2017, there were reports that authorities would allow Google Scholar back in. They didn’t. Reports that Google would launch a mobile-app store in China together with NetEase, a Chinese company, similarly came to naught, though Google was permitted to relaunch its smartphone translation app.

Then, in May 2017, a showdown between AlphaGo, the Go-playing program built by Google sibling company DeepMind, and Ke Jie, the world’s number one human player, was allowed to take place in Wuzhen, a tourist town outside Shanghai. AlphaGo won all three games in the match—a result that the government had perhaps foreseen. Live-streaming of the match within China was forbidden, and not only in the form of video: as the Guardian put it, “outlets were banned from covering the match live in any way, including text commentary, social media, or push notifications.” DeepMind broadcast the match outside China.

During this same period, Chinese censors quietly rolled back some of the openings that Google’s earlier China operations had catalyzed. In 2016, Chinese search engines began removing the censorship disclaimers that Google had pioneered. In 2017, the government launched a new crackdown on virtual private networks (VPNs), software widely used for circumventing censorship. Meanwhile, Chinese authorities began rolling out extensive AI-powered surveillance technologies across the country, constructing what some called a “21st-century police state” in the western region of Xinjiang, home to the country’s Muslim Uighurs.

Despite the retrograde climate, Google capped off 2017 with a major announcement: the launch of a new AI research center in Beijing. Google Cloud’s Chinese-born chief scientist, Fei-Fei Li, would oversee the new center. “The science of AI has no borders,” she wrote in the announcement of the center’s launch. “Neither do its benefits.” (Li left Google in September 2018 and returned to Stanford University, where she is a professor.)

If the research center was a public symbol of Google’s continued efforts to gain a foothold in China, Google was also working quietly to accommodate Chinese government restrictions. Dragonfly, the censored-search-engine prototype, which has been demonstrated for Chinese officials, blacklists key search terms; it would be operated as part of a joint venture with an unnamed Chinese partner. The documents The Intercept obtained said the app would still tell users when results had been censored.

Other aspects of the project are particularly troubling. Prototypes of the app reportedly link users’ searches to their mobile-phone number, opening the door to greater surveillance and possibly arrest if people search for banned material.

In a speech to the Dragonfly team, later leaked by The Intercept, Ben Gomes, Google’s head of search, explained Google’s aims. China, he said, is “arguably the most interesting market in the world today.” Google was not just trying to make money by doing business in China, he said, but was after something bigger. “We need to understand what is happening there in order to inspire us,” he said. “China will teach us things that we don’t know.”

In early December, Google CEO Sundar Pichai told a Congressional committee that “right now we have no plans to launch in China,” though he would not rule out future plans. The question is, if Google wants to come back to China, does China want to let it in?

China’s calculus

To answer that question, try thinking like an advisor to President Xi Jinping.

Bringing Google search back certainly has upsides. China’s growing number of knowledge workers need access to global news and research, and Baidu is notoriously bad at turning up relevant results from outside China. Google could serve as a valuable partner to Chinese companies looking to expand internationally, as it has demonstrated in a patent-sharing partnership with Tencent and a $550 million investment in e-commerce giant JD. Google’s reentry would also help legitimize the Communist Party’s approach to internet governance, a signal that China is an indispensable market—and an open one—as long as you “play by the rules.”

But from the Chinese government’s perspective, these potential upsides are marginal. Chinese citizens who need to access the global internet can still usually do so through VPNs (though it is getting harder). Google doesn’t need to have a business in China to help Chinese internet giants gain business abroad. And the giants of Silicon Valley have already ceased their public criticism of Chinese internet censorship, and instead extol the country’s dynamism and innovation.

By contrast, the political risks of permitting Google to return loom large to Xi and his inner circle. Hostility toward both China and Silicon Valley is high and rising in American political circles. A return to China would put Google in a political pressure cooker. What if that pressure—via antitrust action or new legislation—effectively forced the company to choose between the American and Chinese markets? Google’s sudden exit in 2010 marked a major loss of face for the Chinese government in front of its own citizens. If Chinese leaders give the green light to Project Dragonfly, they run the risk of that happening again.

A savvy advisor would be likely to think that these risks—to Xi, to the Communist Party, and to his or her own career—outweighed the modest gains to be had from allowing Google’s return. The Chinese government oversees a technology sector that is profitable, innovative, and driven largely by domestic companies—an enviable position to be in. Allowing Google back in would only diminish its leverage. Better, then, to stick with the status quo: dangle the prospect of full market access while throwing Silicon Valley companies an occasional bone by permitting peripheral services like translation.

Google’s gamble

Google does have one factor in its favor. If it first entered China during the days of desktop internet, and departed at the dawn of the mobile internet, it is now trying to reenter in the era of AI. The Chinese government places high hopes on AI as an all-purpose tool for economic activity, military power, and social governance, including surveillance. And Google and its Alphabet sibling DeepMind are the global leaders in corporate AI research.

This is probably why Google has held publicity stunts like the AlphaGo match and an AI-powered “Guess the Sketch” game on WeChat, as well as taking more substantive steps like establishing the Beijing AI lab and promoting Chinese use of TensorFlow, an artificial-intelligence software library developed by the Google Brain team. Taken together, these efforts constitute a sort of artificial-intelligence lobbying strategy designed to sway the Chinese leadership.

This pitch, however, faces problems on at least three battlegrounds: Beijing; Washington, DC; and Mountain View, California.

Chinese leaders have good reason to feel they’re already getting the best of both worlds. They can take advantage of software development tools like TensorFlow and they still have a prestigious Google research lab to train Chinese AI researchers, all without granting Google market access.

In Washington, meanwhile, American security officials are annoyed that Google is actively courting a geopolitical rival while refusing to work with the Pentagon on AI projects because its employees object to having their work used for military ends.

Those employees are the key to the third battleground. They’ve demonstrated the ability to mobilize quickly and effectively, as with the protests against US defense contracts and a walkout last November over how the company has dealt with sexual harassment. In late November more than 600 Googlers signed an open letter demanding that the company drop the Dragonfly project, writing, “We object to technologies that aid the powerful in oppressing the vulnerable.” Daunting as these challenges sound—and high as the costs of pursuing the Chinese market may be—they haven’t entirely deterred Google’s top brass. Though the development of Dragonfly appears to have, at the very least, paused, the wealth and dynamism that make China so attractive to Google also mean the decision of whether or not to do business there is no longer the company’s to make.

“I know people in Silicon Valley are really smart, and they’re really successful because they can overcome any problem they face,” says Bill Bishop, a digital-media entrepreneur with experience in both markets. “I don’t think they’ve ever faced a problem like the Chinese Communist Party.”

Matt Sheehan is a fellow at MacroPolo and worked with Kai-Fu Lee on his book AI Superpowers.



How the Dumb Design of a WWII Plane Led to the Macintosh

The B-17 Flying Fortress rolled off the drawing board and onto the runway in a mere 12 months, just in time to become the fearsome workhorse of the US Air Force during World War II. Its astounding toughness made pilots adore it: The B-17 could roar through angry squalls of shrapnel and bullets, emerging pockmarked but still airworthy. It was a symbol of American ingenuity, held aloft by four engines, bristling with a dozen machine guns.

Imagine being a pilot of that mighty plane. You know your primary enemy—the Germans and Japanese in your gunsights. But you have another enemy that you can’t see, and it strikes at the most baffling times. Say you’re easing in for another routine landing. You reach down to deploy your landing gear. Suddenly, you hear the scream of metal tearing into the tarmac. You’re rag-dolling around the cockpit while your plane skitters across the runway. A thought flickers across your mind about the gunners below and the other crew: “Whatever has happened to them now, it’s my fault.” When your plane finally lurches to a halt, you wonder to yourself: “How on earth did my plane just crash when everything was going fine? What have I done?”

For all the triumph of America’s new planes and tanks during World War II, a silent reaper stalked the battlefield: accidental deaths and mysterious crashes that no amount of training ever seemed to fix. And it wasn’t until the end of the war that the Air Force finally resolved to figure out what had happened.

To do that, the Air Force called upon a young psychologist at the Aero Medical Laboratory at Wright-Patterson Air Force Base near Dayton, Ohio. Paul Fitts was a handsome man with a soft Tennessee drawl, analytically minded but with a shiny wave of Brylcreemed hair, Elvis-like, which projected a certain suave nonconformity. Decades later, he’d become known as one of the Air Force’s great minds, the person tasked with the hardest, weirdest problems—such as figuring out why people saw UFOs.

For now, though, he was still trying to make his name with a newly minted PhD in experimental psychology. Having an advanced degree in psychology was still a novelty; with that novelty came a certain authority. Fitts was supposed to know how people think. But his true talent was realizing that he didn’t.


When the thousands of reports about plane crashes landed on Fitts’s desk, he could have easily looked at them and concluded that they were all the pilot’s fault—that these fools should have never been flying at all. That conclusion would have been in keeping with the times. The original incident reports themselves would typically say “pilot error,” and for decades no more explanation was needed. This was, in fact, the cutting edge of psychology at the time. Because so many new draftees were flooding into the armed forces, psychologists had begun to devise aptitude tests that would find the perfect job for every soldier. If a plane crashed, the prevailing assumption was: That person should not have been flying the plane. Or perhaps they should have simply been better trained. It was their fault.

But as Fitts pored over the Air Force’s crash data, he realized that if “accident prone” pilots really were the cause, there would be randomness in what went wrong in the cockpit. These kinds of people would get hung up on anything they operated. It was in their nature to take risks, to let their minds wander while landing a plane. But Fitts didn’t see noise; he saw a pattern. And when he went to talk to the people involved about what actually happened, they told of how confused and terrified they’d been, how little they understood in the seconds when death seemed certain.

The examples slid back and forth on a scale from tragic to tragicomic: pilots who slammed their planes into the ground after misreading a dial; pilots who fell from the sky never knowing which direction was up; the pilots of B-17s who came in for smooth landings and yet somehow never deployed their landing gear. And others still, who got trapped in a maze of absurdity, like the one who, having jumped into a brand-new plane during a bombing raid by the Japanese, found the instruments completely rearranged. Sweaty with stress, unable to think of anything else to do, he simply ran the plane up and down the runway until the attack ended.

Fitts’s data showed that during one 22-month period of the war, the Air Force reported an astounding 457 crashes just like the one in which our imaginary pilot hit the runway thinking everything was fine. But the culprit was maddeningly obvious for anyone with the patience to look. Fitts’s colleague Alfonse Chapanis did the looking. When he started investigating the airplanes themselves, talking to people about them, sitting in the cockpits, he also didn’t see evidence of poor training. He saw, instead, the impossibility of flying these planes at all. Instead of “pilot error,” he saw what he called, for the first time, “designer error.”

The reason why all those pilots were crashing when their B-17s were easing into a landing was that the flaps and landing gear controls looked exactly the same. The pilots were simply reaching for the landing gear, thinking they were ready to land. And instead, they were pulling the wing flaps, slowing their descent, and driving their planes into the ground with the landing gear still tucked in. Chapanis came up with an ingenious solution: He created a system of distinctively shaped knobs and levers that made it easy to distinguish all the controls of the plane merely by feel, so that there was no chance of confusion, even when flying in the dark.

By law, that ingenious bit of design—known as shape coding—still governs landing gear and wing flaps in every airplane today. And the underlying idea is all around you: It’s why the buttons on your videogame controller are differently shaped, with subtle texture differences so you can tell which is which. It’s why the dials and knobs in your car are all slightly different, depending on what they do. And it’s the reason the virtual buttons on your smartphone adhere to a pattern language.

But Chapanis and Fitts were proposing something deeper than a solution for airplane crashes. Faced with the prospect of soldiers losing their lives to poorly designed machinery, they invented a new paradigm for viewing human behavior. That paradigm lies behind the user-friendly world that we live in every day. They realized that it was absurd to train people to operate a machine and assume they would act perfectly under perfect conditions.

Instead, designing better machines meant figuring out how people acted without thinking, in the fog of everyday life, which might never be perfect. You couldn’t assume humans to be perfectly rational sponges for training. You had to take them as they were: distracted, confused, irrational under duress. Only by imagining them at their most limited could you design machines that wouldn’t fail them.

This new paradigm took root slowly at first. But by 1984—four decades after Chapanis and Fitts conducted their first studies—Apple was touting a computer for the rest of us in one of its first print ads for the Macintosh: “On a particularly bright day in Cupertino, California, some particularly bright engineers had a particularly bright idea: Since computers are so smart, wouldn’t it make sense to teach computers about people, instead of teaching people about computers? So it was that those very engineers worked long days and nights and a few legal holidays, teaching silicon chips all about people. How they make mistakes and change their minds. How they refer to file folders and save old phone numbers. How they labor for their livelihoods, and doodle in their spare time.” (Emphasis mine.) And that easy-to-digest language molded the smartphones and seamless technology we live with today.

Along the long and winding path to a user-friendly world, Fitts and Chapanis laid the most important brick. They realized that as much as humans might learn, they would always be prone to err—and they inevitably brought presuppositions about how things should work to everything they used. This wasn’t something you could teach out of existence. In some sense, our limitations and preconceptions are what it means to be human—and only by understanding those presumptions could you design a better world.

Today, this paradigm shift has produced trillions in economic value. We now presume that apps that reorder the entire economy should require no instruction manual at all; some of the most advanced computers ever made now come with only cursory instructions that say little more than “turn it on.” This is one of the great achievements of the last century of technological progress, with a place right alongside GPS, Arpanet, and the personal computer itself.

It’s also an achievement that remains unappreciated because we assume this is the way things should be. But with the assumption that even new technologies need absolutely no explaining comes a dark side: When new gadgets make assumptions about how we behave, they force unseen choices upon us. They don’t merely defer to our desires. They shape them.


User friendliness is simply the fit between the objects around us and the ways we behave. So while we might think that the user-friendly world is one of making user-friendly things, the bigger truth is that design doesn’t rely on artifacts; it relies on our patterns. The truest material for making new things isn’t aluminum or carbon fiber. It’s behavior. And today, our behavior is being shaped and molded in ways both magical and mystifying, precisely because it happens so seamlessly.

I got a taste of this seductive, user-friendly magic recently, when I went to Miami to tour a full-scale replica of Carnival Cruise’s so-called Ocean Medallion experience. I began my tour in a fake living room, with two of the best-looking project staffers pretending to be husband and wife, showing me how the whole thing was supposed to go.

Using the app, you could reserve all your activities way before you boarded the ship. And once on board, all you needed to carry was a disk the size of a quarter; using that, any one of the 4,000 touchscreens on the ship could beam you personalized information, such as which way you needed to go for your next reservation. The experience recalled not just scenes from Her and Minority Report, but computer-science manifestos from the late 1980s that imagined a suite of gadgets that would adapt to who you are, morphing to your needs in the moment.

Behind the curtains, in the makeshift workspace, a giant whiteboard wall was covered with a sprawling map of all the inputs that flow into some 100 different algorithms that crunch every bit of a passenger’s preference behavior to create something called the “Personal Genome.” If Jessica from Dayton wanted sunscreen and a mai tai, she could order them on her phone, and a steward would deliver them in person, anywhere across the sprawling ship.

The server would greet Jessica by name, and maybe ask if she was excited about her kitesurfing lesson. Over dinner, if Jessica wanted to plan an excursion with friends, she could pull up her phone and get recommendations based on the overlapping tastes of the people she was sitting with. If only some people like fitness and others love history, then maybe they’ll all like a walking tour of the market at the next port.

Jessica’s Personal Genome would be recalculated three times a second by 100 different algorithms using millions of data points that encompassed nearly anything she did on the ship: How long she lingered on a recommendation for a sightseeing tour; the options that she didn’t linger on at all; how long she’d actually spent in various parts of the ship; and what was nearby at that very moment or happening soon. If, while in her room, she had watched one of Carnival’s slickly produced travel shows and seen something about a market tour at one of her ports of call, she’d later get a recommendation for that exact same tour when the time was right. “Social engagement is one of the things being calculated, and so is the nuance of the context,” one of the executives giving me the tour said.
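Carnival hasn’t published how the Personal Genome actually works, but a minimal sketch makes the basic move concrete: turn dwell time and recency into interest scores, then rank activities by how well their tags match. Everything below is invented for illustration (the tags, weights, and half-life are assumptions, not Carnival’s system).

```python
# Hypothetical sketch only -- not Carnival's actual "Personal Genome".
# Weights each behavioral signal by dwell time, decays older signals,
# and ranks catalog activities whose tags match the strongest interests.
import time
from collections import defaultdict

def preference_scores(events, now=None, half_life_hours=24.0):
    """events: iterable of (timestamp, interest_tag, dwell_seconds)."""
    now = now or time.time()
    scores = defaultdict(float)
    for ts, tag, dwell in events:
        age_hours = (now - ts) / 3600.0
        decay = 0.5 ** (age_hours / half_life_hours)  # recent behavior counts more
        scores[tag] += dwell * decay
    return scores

def recommend(events, catalog, top_n=3):
    """catalog: mapping of activity name -> set of interest tags."""
    scores = preference_scores(events)
    ranked = sorted(
        catalog.items(),
        key=lambda item: sum(scores.get(tag, 0.0) for tag in item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_n]]

# A passenger who lingered on a market-tour video gets market tours ranked first.
now = time.time()
events = [(now - 600, "markets", 45.0), (now - 7200, "fitness", 5.0)]
catalog = {
    "Old-town market walk": {"markets", "history"},
    "Sunrise deck yoga": {"fitness"},
    "Kitesurfing lesson": {"watersports", "fitness"},
}
print(recommend(events, catalog, top_n=2))
```

A production system would recompute these scores continuously and fold in context like location and upcoming events, as the article describes; the sketch only shows the core preference-weighting idea.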


It was like having a right-click for the real world. Standing on the mocked-up sundeck, knowing that whatever I wanted would find me, and that whatever I might want would find its way either onto the app or the screens that lit up around the cruise ship as I walked around, it wasn’t hard to see how many other businesses might try to do the same thing. In the era following World War II, the idea that designers could make the world easier to understand was a breakthrough.

But today, “I understand what I should do” has become “I don’t need to think at all.” For businesses, intuitiveness has now become mandatory, because there are fortunes to be made by making things just a tad more frictionless. “One way to view this is that creating this kind of frictionless experience is an option. Another way to look at it is that there’s no choice,” said John Padgett, the Carnival executive who had shepherded the Ocean Medallion to life. “For millennials, value is important. But hassle is more important, because of the era they’ve grown up in. It’s table stakes. You have to be hassle-free to get them to participate.”

By that logic, the real world was getting to be disappointing when compared with the frictionless ease of this increasingly virtual world. Taken as a whole, Carnival’s vision for seamless customer service that can anticipate your every whim was like an Uber for everything, powered by Netflix recommendations for meatspace. And these are in fact the experiences that many more designers will soon be striving for: invisible, everywhere, perfectly tailored, with no edges between one place and the next. Padgett described this as a “market of one,” in which everything you saw would be only the thing you want.

The Market of One suggests to me a break point in the very idea of user friendliness. When Chapanis and Fitts were laying the seeds of the user-friendly world, they had to find the principles that underlie how we expect the world to behave. They had to preach the idea that products built on our assumptions about how things should work would eventually make even the most complex things easy to understand.

Steve Jobs’ dream of a “bicycle for the mind”—a universal tool that might expand the reach of anyone—has arrived. High technology has made our lives easier; made us better at our jobs, and created jobs that never existed before; it has made the people we care about closer to us. But friction also has value: It’s friction that makes us question whether we do in fact need the thing we want. Friction is the path to introspection. Infinite ease quickly becomes the path of least resistance; it saps our free will, making us submit to someone else’s guess about who we are. We can’t let that pass. We have to become cannier, more critical consumers of the user-friendly world. Otherwise, we risk blundering into more crashes that we’ll only understand after the worst has already happened.


Excerpted from USER FRIENDLY: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play by Cliff Kuang with Robert Fabricant. Published by MCD, an imprint of Farrar, Straus and Giroux, on November 19, 2019. Copyright © 2019 by Cliff Kuang and Robert Fabricant. All rights reserved.





A Tesla Cybertruck Mishap, a Massive Data Leak, and More News

Hackers are stealing and Elon is squealing, but first: a cartoon about subscription dreams.

Here’s the news you need to know, in two minutes or less.

Want to receive this two-minute roundup as an email every weekday? Sign up here!

Today’s News

Meet the Tesla Cybertruck, Elon Musk’s Ford-fighting pickup truck

Tesla CEO Elon Musk last night unveiled his newest baby, an all-electric pickup called the Tesla Cybertruck. He demonstrated that it can take a sledgehammer to the door with nary a scratch, and he also accidentally demonstrated that it can’t take a ball to the window. But behind the showmanship and Elon’s audible disbelief at the onstage mishap is a truck with a 500-mile range and the torque that comes from an electric motor. It represents an important new market expansion for Tesla. Now it just has to actually put the darn thing into production.

1.2 billion records found exposed online in a single server

Hackers have long used stolen personal data to break into accounts and wreak havoc. And a dark web researcher found one data trove sitting exposed on an unsecured server. The 1.2 billion records don’t include passwords, credit card numbers, or Social Security numbers, but they do contain cell phone numbers, social media profiles, and email addresses—a great start for someone trying to steal your identity.

Fast Fact: 2025

That’s the year NASA expects to launch the first dedicated mission to Europa, where water vapor was recently discovered. The mission to Jupiter’s moon will involve peering beneath Europa’s icy shell for evidence of life.

WIRED Recommends: The Gadget Lab Newsletter

First of all, you should sign up for WIRED’s Gadget Lab newsletter, because every Thursday you’ll get the best stories about the coolest gadgets right in your inbox. Second of all, it will give you access to early Black Friday and Cyber Monday deals so you can get your shopping done early.

News You Can Use:

Here’s how to hide nasty replies to your tweets on Twitter.

This daily roundup is available as a newsletter. You can sign up right here to make sure you get the news delivered fresh to your inbox every weekday!


How Wily Teens Outwit Bathroom Vape Detectors

Last spring, students at Hinsdale Central High School discovered six vaping detectors in bathrooms and locker rooms around campus. About 20 miles southwest of Chicago, Hinsdale Central has been battling on-campus vaping for years. Administrators tried making students take online courses if they were caught with ecigarettes; they talked to law enforcement; the Village of Hinsdale even passed an ordinance that would make it easier for officers to ticket minors caught with the devices. To no avail. And the detectors? Students simply ripped them off the walls.

Ecigarettes, which are easy to conceal and, until recently, came in a dazzling array of sweet, fruity, and dessert flavors, are hugely popular among teenagers. A recent study found that 28 percent of high schoolers and 11 percent of middle schoolers frequently vape. So schools across the country are spending thousands of dollars to outfit their campuses with vaping detectors, only to find that the devices can’t stand up to wily teens and that policing student behavior isn’t the same as permanently changing it.

Like smoke detectors, vape detectors are relatively unintrusive. They don’t even record video or audio—they just register the chemical signature of vaping aerosol, then send an email or text alert to school officials.

Some schools say they’re a useful deterrent. A district in Sparta, New Jersey, started off with two detectors and is planning to install more. Freeman School District in Washington installed detectors a few weeks ago. “They’ve been very effective, and we’re glad we have them,” says superintendent Randy Russell, who noted that the detectors already helped catch one young vaper in the act.

But at Hinsdale, even before the teens subjected them to blunt force trauma, the devices hadn’t lived up to expectations. “By the time we get there the kids are gone,” says Kimm Dever, an administrator at Hinsdale Central. Dever says the devices also went off randomly, and administrators couldn’t tell which kids were vaping and which just happened to be in the bathroom when the devices alerted.

Revere Schools in Bath, Ohio, reported similar problems. Revere spent around $15,000 to install 16 detectors in its middle and high schools at the beginning of the school year. Parents were thrilled, but administrators rarely made it to the bathroom in time to catch the vapers mid-puff. “It was like chasing ghosts,” says Jennifer Reece, a spokesperson for the school district. In theory, school officials could consult footage from hallway cameras to triangulate which students were in the bathroom when the detectors went off. “That also takes up time, and we don’t always have that type of time,” Reece says.

Revere bought detectors with grant money from the state Attorney General’s Office. Now, Reece often gets questions from other school districts about the devices. “If they don’t have grant money I don’t know if it’s worth [the cost],” she says.

If vaping has become the cool thing to do among students, then buying vape detectors is the big trend for school districts. Derek Peterson, the CEO of Soter Technologies, which makes the Flysense detector that Revere installed, says the company is fielding about 700 orders a month. “We have more schools coming to us than we know what to do with,” he says. IPVideo, which makes a number of cameras and other gadgets for schools, sells a Halo detector that also claims to distinguish between THC and nicotine vapor. The detectors can integrate with school camera systems so it’s easier for administrators to figure out which students are in the bathroom, and both companies’ detectors cost roughly $1,000 apiece. Flysense charges an additional annual fee.

The sensors are chemical detectors that go off when the levels of certain chemicals in the room change. Most schools say they do sense the vapor and that they’ve caught students because of them. But kids are clever. Some exhale into their backpacks or sleeves, where the aerosol dissipates before wafting up to the detector. Other kids resort to AP physics–level subterfuge. They exhale into the toilet and flush, creating a vacuum that sucks the aerosol into the pipes. “There’s nothing we can do about that,” says Peterson. “There’s no sensing that could ever change the laws of physics.”

The problem is that detectors alone can’t change students’ behavior. It’s important for schools to analyze their goals, says Bonnie Halpern-Felsher, a developmental psychologist at Stanford who studies teen vaping. Vape detectors might help catch offending kids so they can be punished, she says, but “if the goal is to prevent and stop, vape detectors are not the way to go.”

Peterson agrees and is already getting in on the education angle, offering a #NoVaping package that includes brochures, posters, and suggestions for class presentations.

Between 2017 and 2019, the California Department of Justice distributed more than $12 million to California school districts trying to deter vaping through a number of measures including installing detectors, hiring school resource officers, and running educational programs.

One of those districts was Las Virgenes Unified, which serves around 11,500 students northwest of Los Angeles. In October 2018, Las Virgenes spent half of its grant, some $50,000, to install Flysense detectors at its two high schools and three middle schools. “The technology is good. They work,” says superintendent Dan Stepenosky. But he combines the detectors with other measures. When students are caught vaping, they’re sent to a 90-minute meeting with their parents and an addiction counselor. The school dispatched administrators to nearby gas stations, grocery stores, and convenience stores to remind people not to sell ecigarettes to kids under 21. The school even partners with law enforcement to run sting operations on businesses in the community that sell ecigarettes to minors. So far they’ve conducted over 250 operations complete with undercover officers and marked bills.

But the most important element hasn’t been the sting operations, the crackdowns on local retailers, or the detectors. “The most impactful has been the education piece,” says Stepenosky. The district holds seminars for parents and teachers, and it hired extra deans to focus on student wellness and included information about ecigarettes in school curricula.

These strategies are comprehensive, and they demand a lot of resources. One school in South Dakota raised money from the local community to buy its sensors. Other school districts are suing Juul, blaming the company’s marketing for creating a new generation of nicotine-addicted kids. Those districts hope to get payouts that will alleviate the huge financial burden of running addiction counseling and education programs. Stepenosky received over a million dollars from the California Department of Justice, and he’s already applying for more funding for next year.

