
A4 to A5: How Apple outflanked its fragmented competition in silicon

Apple’s A4 project helped deliver the custom silicon for the very first iPad, positioning the company in a race against time with the world’s leading mobile chip designers. Although it ran into stiff competition along the way, Apple succeeded with an implementation that was markedly different from its peers. Here’s how they did it.

Apple A4

26 years of platform establishment in just 6

IBM’s 1982 selection of an Intel processor for the original PC incrementally snowballed into the vast critical mass of an installed base of users that funded constant enhancement of the x86 architecture. The profits Intel was earning from sales of millions of x86 chips actually allowed it to transcend architectural deficiencies in its original design.

Intel could even incorporate concepts from competing RISC architectures while remaining compatible with the increasingly large installed base of DOS and Windows software developed for x86. By 1995 the global installed base of PCs—over 90% of which were x86 machines—reached 242 million. By 2002 it had passed a half billion, and by 2008 it exceeded 1 billion.

In 2008, Apple initiated its A4 project and began using the chip in 2010. Just six years later, Apple announced that it had reached an active installed base of 1 billion devices on its own–over 90% of which were powered by its Ax processors.

An installed base of that size and scale had taken 26 years to develop in PCs, and required the combined sales of every PC maker contributing to Intel’s power and weight. Apple achieved this on its own. Conversely, that also means Apple’s vast sales of premium iOS hardware did not contribute to its competitors’ third party silicon development. And in tandem, the suffering sales of higher-end iOS competitors proved detrimental to outside silicon advancement.

A motive for A4 and beyond

Over just the course of 2010, Apple shifted from building mobile devices using fairly generic Samsung parts to releasing its new iPad, iPhone 4, the fourth generation 4G iPod touch, and a new iOS-based Apple TV all using its freshly developed A4 custom SoC funded in partnership with Samsung.

This occurred just as Apple shifted from selling 20M iPhones and 10M Macs in fiscal 2009 to selling 40M iPhones, 13.6M Macs, and 7.5M iPads in 2010. Apple doubled itself, as both a phone maker and a computer maker, in one year—and most of that new growth was powered by one new chip.

Apple’s new A4-related unit sales within 2010 outnumbered those of its entire Intel-powered Mac product line. And while Intel’s chips were far more powerful and valuable, Apple’s A4 was powering device revenues in its first year roughly on par with Apple’s $17B Mac business.

Ten years later, Apple still hasn’t moved its Macs from Intel’s x86 chips to a custom architecture. Doing so would require developing a range of state-of-the-art silicon, from economical desktop CPUs to chips for power-efficient notebooks, high-performance notebooks, and high-end workstations, all to serve a market of around 20M units annually. It would also mean accepting the risk of failing or falling behind Intel, and giving up the benefits of sharing x86 economies of scale.

Apple’s Macs also depend on an ecosystem of software that would have to be ported. On its mobile devices, Apple had avoided that problem by launching the iPhone and then iPad as an entirely new platform that made no pretense of running existing third-party Mac software. Apple ported its own apps, including Safari, Mail, and iTunes to ARM, and could then leverage its eventual installed base to create a new ecosystem of apps for iOS.

There was also one existing Mac model Apple could port to ARM: Apple TV. It was originally implemented in 2007 as a limited Intel Mac with an Nvidia GPU. It wasn’t really capable of running any third-party Mac software. At the end of 2010, Apple released a new iOS-based Apple TV using an A4 chip, enabling it to shave the price from $299 to $99.

Apple could move Apple TV from Intel to a radically simpler A4-powered box at one-third the cost | Source: iFixIt

Migrating Apple TV to an iOS-based ARM device was a no-brainer. There had been essentially no significant market for Apple’s $300 TV box “hobby,” while at $99 Apple TV turned into a billion-dollar annual business when including the iTunes media it helped to sell.

Apple TV could tag along in the wake of the critical mass of new mobile devices Apple sold in 2010, which represented a 25M-plus unit opportunity, all using effectively the same SoC design. If Apple could sustain that growth, it could keep investing in new generations of the A4 and easily pay for the very expensive work of custom silicon design.
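
To see why that volume mattered, consider a rough amortization sketch. Every figure below (the R&D cost, per-chip manufacturing cost, and unit counts) is an illustrative assumption, not Apple's actual numbers; the point is only how quickly a fixed silicon design bill shrinks per device once tens of millions of units share one SoC.

```python
# Illustrative only: every figure here is an assumption, not Apple's actual
# R&D cost, chip cost, or unit volume.
def per_unit_silicon_cost(rd_cost: float, chip_cost: float, units: int) -> float:
    """Amortize a fixed chip-design (R&D) bill across every device sharing the SoC."""
    return rd_cost / units + chip_cost

RD_COST = 150_000_000   # assumed cost of designing one custom SoC generation
CHIP_COST = 15.0        # assumed manufacturing cost per chip

for units in (1_000_000, 25_000_000, 100_000_000):
    cost = per_unit_silicon_cost(RD_COST, CHIP_COST, units)
    print(f"{units:>11,} units -> ${cost:,.2f} of silicon cost per device")
```

Under those assumed numbers, the design cost dominates at a few million units but nearly vanishes into the per-chip cost at iPhone-class volumes, which is the economy of scale a 25M-plus unit year buys.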

In fact, there was no choice. Apple had to invest in new generations of A4 long before the chip even became available and could prove itself in device sales. This required massive foresight and vision, along with incredible confidence that its huge upfront investments would pay off. Most remarkably, Apple had never done this before.

Samsung’s motive: conservatively copy what works

Over the same year, Samsung used a codeveloped version of effectively the same chip, branded Hummingbird and later renamed Exynos 3, in its Galaxy clones of iPhone and iPad and in the Nexus S it built for Google, and it also supplied the chip to the new Chinese Android maker Meizu.

Samsung’s entire smartphone business in 2010 was less than 25M units in total; the company claimed shipments of 10M Galaxy S phones in 2010. Nexus S, which Samsung was making for Google to sell under its own brand, was lavished with attention but it did not sell in commercially significant numbers. That meant that despite having effectively the same chip as Apple, Samsung was making far less effective use of it.

Samsung was already globally positioned as an established phone maker. It viewed Apple both as an extremely valuable partner, uniquely capable of sustainably buying up incredible amounts of its RAM and other components, and as an emerging new threat in phones. Samsung’s own Omnia Windows Mobile and Symbian smartphones had both just been crushed by iPhone.

Samsung’s strategy aimed to take ideas from Apple that were attracting customers. It wasn’t betting the company on a bold new vision. It was seeking to protect its existing business from further erosion by Apple. That helps to explain why Samsung didn’t design an aggressive pipeline of new silicon in parallel with Apple. It expected to beat Apple at its own game and then return to its former comfortable position as a smartphone leader.

Samsung began closely following Apple’s designs

Samsung’s increasingly bold copies of iPhone and iPad raised the prospect that Apple was once again going to be forced to compete against a copy of its own work, reliving the history of the 1990s when Microsoft appropriated every facet of the Macintosh in Windows, while benefitting from economies of scale driven by Intel’s x86 chips.

To avoid that fate in the 2010s, Apple set out to develop a future of custom mobile silicon that could give its own products an edge against Samsung. It then executed that strategy as if its entire future was on the line–because it was.

After A4: a coherent Apple strategy in silicon

After 2010, it was clear that Apple and Samsung were pursuing dramatically different silicon strategies. At the start of the next year, Apple launched iPad 2 powered by a new dual-core A5 with dual-core PowerVR SGX543 graphics.

While the media had been largely skeptical about iPad’s prospects when it first appeared, Apple had confidently developed successive generations of silicon technology that enabled it to deliver a much more powerful version within a year. Part of that confidence clearly came from massive volumes of iPhone 4 sales that would share its chip.

Steve Jobs noted during his introduction of iPad 2 that Apple was the first company to ship a dual-core tablet in volume, and emphasized that A5 delivered twice the CPU power and that its new GPU was up to 9 times faster, while running at the same power consumption of A4. No other phone maker had any plans to deliver that kind of jump in graphics performance for their phones.

Additionally, phone and tablet makers licensing Android faced obstacles that complicated the task of adding dramatic new features such as FaceTime. They could either develop their own unique features that only worked on their models, or wait for Google to deliver an Android-wide feature that dictated the technology they would have to support, while not adding any unique value to their offerings.

iPad 2 added a front-facing camera supporting FaceTime video chat, just months after iPhone 4 introduced it

Later that spring, Apple also released a new version of iPhone 4 for Verizon’s CDMA 3G network. It was widely assumed that Apple would have to replace its A4 with Qualcomm’s Snapdragon to work on Verizon, but Apple instead swapped out its Infineon GSM baseband processor for a Qualcomm CDMA modem, still paired with the same A4 application processor. That move, largely invisible to third-party developers, greatly simplified the architecture of Apple’s iOS devices while maximizing the economies of scale Apple was building for its A-series chips.

Apple had already enjoyed a huge surge of iPhone 4 sales with AT&T, and now it was selling another huge batch of the same phone running the same A4 to subscribers tied to Verizon and other CDMA carriers globally. Apple’s first original chip design was a huge success, and its second generation was already in play.

Apple A5

Apple A5 | Source: Chipworks

Apple subsequently used the iPad’s A5 in the iPhone 4S. The A5 also included a new Image Signal Processor for camera intelligence and face detection, along with Audience EarSmart noise cancellation hardware to enhance voice recognition, supporting the new Siri feature Apple used to launch iPhone 4S. Siri wasn’t supported on iPad 2, but the fact that both devices shared the same chip contributed to economies of scale that lowered Apple’s production costs.

Some media observers were puzzled that Apple was shipping A5 devices with just 512MB of RAM at a time when Android licensees were packing in 1GB or more. While it was popular to claim that Apple’s hardware engineering was simply stingy, a better answer was that its software engineering was far superior, giving it the luxury of getting by with less memory. That not only made its devices cheaper to build, but also helped to sustain battery life, because RAM installed in a device draws power whether it is in use or not.

That year, Microsoft’s Windows lead Steven Sinofsky noted that “a key engineering tenet of Windows 8” involved efforts “to significantly reduce the overall runtime memory requirements of the core system.”

While Apple remained quiet on the subject, Sinofsky cited Microsoft’s Performance team as detailing that “minimizing memory usage on low-power platforms can prolong battery life,” adding that “in any PC, RAM is constantly consuming power. If an OS uses a lot of memory, it can force device manufacturers to include more physical RAM. The more RAM you have on board, the more power it uses, the less battery life you get. Having additional RAM on a tablet device can, in some instances, shave days off the amount of time the tablet can sit on your coffee table looking off but staying fresh and up to date.”
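
Sinofsky’s point lends itself to a quick back-of-envelope check. Every constant in the sketch below (battery capacity, baseline standby draw, and DRAM self-refresh power per gigabyte) is an assumption chosen for illustration, not a measurement of any Apple or Windows device.

```python
# Rough, illustrative standby estimate. All constants are assumptions for the
# sketch, not measured values for any particular tablet or phone.
BATTERY_WH = 25.0        # assumed battery capacity, in watt-hours
BASE_IDLE_W = 0.05       # assumed standby draw of the SoC, radios, etc., in watts
DRAM_W_PER_GB = 0.02     # assumed DRAM self-refresh power per gigabyte, in watts

def standby_days(ram_gb: float) -> float:
    """Days the device could idle on one charge, given how much RAM must stay refreshed."""
    idle_watts = BASE_IDLE_W + DRAM_W_PER_GB * ram_gb
    return BATTERY_WH / idle_watts / 24

for ram in (0.5, 1.0, 2.0):
    print(f"{ram:.1f} GB of RAM -> about {standby_days(ram):.1f} days of standby")
```

With those assumptions, moving from 512MB to 1GB of RAM costs roughly two and a half days of coffee-table standby, the same order of trade-off Sinofsky was describing.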

Bloggers often drew attention to the fact that Android and Windows mobile devices had more RAM, without understanding that the extra memory reflected less efficient software and came at the cost of battery life; over time, this became obvious.

Apple was rapidly developing two totally different products using the same silicon engine in 2011, effectively subsidizing specialized features for iPhone and iPad at the same time using a processor architecture designed to cover multiple bases. iPhone sales for the fiscal year nearly doubled to 72M, and iPad units quadrupled to more than 32M.

At the same time, Apple could also reuse the A5 to ship lower-priced new tiers of products the next year: a third-generation Apple TV, fifth-generation iPod touch, and a new iPad mini. Apple was investing in a rocketing trajectory of new silicon chips but also wringing every last drop of value from the work it created.

Samsung’s wild fragmentation

Samsung wasn’t able to use or sell a version of Apple’s A5. Instead, it subsequently released its own new Exynos 4, which claimed only a 30% increase in CPU and 50% increase in GPU power over the previous Hummingbird.

With the apparent loss of its access to Intrinsity’s optimizations, it was easy to see why Samsung couldn’t keep up with the A5’s CPU. But its much more dramatic shortfall in graphics capabilities came from Samsung’s choice to use basic ARM Mali graphics rather than licensing the same PowerVR GPU Apple was using.

Samsung used its Exynos 4 chip in some models of its Galaxy SII flagship smartphone, but it also sold versions of that phone using Qualcomm’s Snapdragon S3 with Adreno graphics, some with a Broadcom/VideoCore chip, as well as selling a version using TI’s OMAP 4, which did use PowerVR graphics.

Similarly, Samsung’s 2011 update to its original Galaxy Tab also used its Exynos 4. But that same year the company also joined Google in releasing multiple new Android 3.0 Honeycomb tablets, and those models used Nvidia’s Tegra 2, following the reference design Google had dictated for its new tablet platform. Later in the year Samsung also released a version of its Galaxy SII phone with a Tegra 2 chip as well.

Samsung Galaxy Tab 10.1 didn’t use the company’s own Exynos 4 chip | Source: PCWorld

Complicating the rollout of Honeycomb Android tablets was the fact that Google had initially announced plans to enter the netbook market with Chrome OS in 2009, with the intent of getting the first Chromebooks to market by the middle of 2010.

That confident plan to take over the once-booming netbook category–and perhaps disrupt the market for conventional notebooks like Apple’s MacBook–was completely thrown for a loop by Apple’s launch of the original iPad. Google had to postpone the launch of Chrome OS netbooks to the end of 2010 and then again to the middle of 2011.

So Google was now floating two mobile computing platforms at once: Android tablets and Chromebooks. Samsung dutifully followed with a little bit of both, rushing out the first generation of Chrome OS netbooks with Intel Atom chips in parallel with its own rebel Galaxy Tab and new Honeycomb tablets. Apple had one product serving that market.

Samsung was just delivering its first Chromebooks in 2011

If that weren’t enough, Samsung delivered a Windows 7 tablet also using an Intel Atom processor. Samsung’s bizarre, scattershot strategy in SoCs was also reflected in operating systems, where it continued to ship Android, Bada, and Windows Phone handsets alongside its Chromebooks and TouchWiz, Honeycomb and Windows 7 tablets, waiting to see what might stick where.

What Samsung ultimately determined was that customers wanted to buy products that looked and worked like Apple’s. It even documented its thinking in internal memos that were later used as evidence in Apple’s patent infringement lawsuits.

The Linux-based Bada and Windows Phone lost favor at Samsung largely because Android was enabling a closer approximation of Apple’s work. Samsung stated a preference to sell its own Bada platform or to work with Microsoft rather than the far less experienced Google. But there wasn’t a demand for handsets that weren’t closely copying iPhone.

Just within 2011, Samsung delivered multiple versions of 7, 7.7, 8.9 and 10.1-inch tablets, even before attracting any significant installed base of tablet users. Samsung’s terrible tablet strategy resulted in equally terrible sales. A “top secret” Samsung document from February 2012 (below) detailed total 2011 U.S. Galaxy Tab sales had only reached around 1 million, contrasted with Apple’s 17.4 million iPads, 5 million Kindle Fire by Amazon, and 1.5 million B&N Nook sales in the same region.

That document also made it obvious that Strategy Analytics’ “quite smooth” estimate of two million Galaxy Tabs sold in just the final quarter of 2010, which had supposedly caused a crash in Apple’s iPad “market share” at the hands of 7-inch Android tablets, was completely false.

An emerging media narrative that “Android is winning,” based on estimated unit sales that were often simply made up, resulted in a comfortable detachment from reality. It enabled Android licensees to continue pursuing strategies that were clearly not working, with the blessing of media observers who refused to critically examine claims from vendors or from marketing groups offering free data to propagate as part of their efforts in “influencing consumer behavior and buying preferences.”

iPad wasn’t expected to survive

Rather than predicting that Apple’s investments in mobile silicon and its tightly managed strategies were setting it up to own the tablet market, the tech media nearly unanimously assumed in 2011 that Google’s Android Honeycomb partners would quickly take over Apple’s iPad business, in part because of the marketing line that their Android tablets were capable of running Adobe Flash.

In reality, first-generation Honeycomb tablets barely even worked at all, because Google’s software wasn’t finished and its licensees’ hardware had been rushed to market. They were also priced higher than iPad.

Honeycomb turned out to be Google’s third major flop after Google TV and [email protected] (or its fourth, if you count Chromebooks), but media critics continued to give the company the benefit of the doubt every year across the last decade as it rolled out a series of initiatives that failed as regularly as Microsoft’s parade of CES catastrophes in the decade prior. And there is a common thread: both companies relied upon a sea of hardware partners who were themselves not very good at introducing successful products.

In parallel with Honeycomb, there were also still expectations that RIM’s new Blackberry PlayBook might have some impact based on the company’s established base of business customers, or that HP might be able to wield its position as a leading PC maker to establish sales of its upcoming TouchPad. HP had just acquired Palm to obtain webOS in order to give it an alternative to Microsoft’s perpetually failing Windows Tablet PC platform.

Jobs didn’t expect them to survive

After its first year of sales, iPad was finally becoming acknowledged as a success. However, that success was largely seen as a short term aberration. As 2011 approached, analysts asked Jobs about the supposed “avalanche of tablets poised to enter the market.” He answered that there were really “only a handful of credible entrants,” but noted that “they use 7-inch screens,” which he said “isn’t sufficient to create great tablet apps.”

Jobs stated “we think the 7 inch tablets will be dead on arrival, and manufacturers will realize they’re too small and abandon them next year. They’ll then increase the size, abandoning the customers and developers who bought into the smaller format.”

Steve Jobs’ appearance at the iPad 2 event projected a confident, strategic vision that Apple would continue to follow

It ended up taking more than a year for many makers to give up on 7-inch tablets—Google kept trying to sell them until the end of 2014—but Jobs’ anticipation of what would be successful in tablets ended up being presciently correct.

Having a clear, confident, strategic vision allowed Apple to make long term plans, which included developing custom silicon optimized specifically to competently deliver that vision. The chasm between Apple’s strategic focus and the scattershot, seemingly random efforts by Google and its licensees led by Samsung would continue to grow, as the next segment will examine.

How the Dumb Design of a WWII Plane Led to the Macintosh

The B-17 Flying Fortress rolled off the drawing board and onto the runway in a mere 12 months, just in time to become the fearsome workhorse of the US Air Force during World War II. Its astounding toughness made pilots adore it: The B-17 could roar through angry squalls of shrapnel and bullets, emerging pockmarked but still airworthy. It was a symbol of American ingenuity, held aloft by four engines, bristling with a dozen machine guns.

Imagine being a pilot of that mighty plane. You know your primary enemy—the Germans and Japanese in your gunsights. But you have another enemy that you can’t see, and it strikes at the most baffling times. Say you’re easing in for another routine landing. You reach down to deploy your landing gear. Suddenly, you hear the scream of metal tearing into the tarmac. You’re rag-dolling around the cockpit while your plane skitters across the runway. A thought flickers across your mind about the gunners below and the other crew: “Whatever has happened to them now, it’s my fault.” When your plane finally lurches to a halt, you wonder to yourself: “How on earth did my plane just crash when everything was going fine? What have I done?”

For all the triumph of America’s new planes and tanks during World War II, a silent reaper stalked the battlefield: accidental deaths and mysterious crashes that no amount of training ever seemed to fix. And it wasn’t until the end of the war that the Air Force finally resolved to figure out what had happened.

To do that, the Air Force called upon a young psychologist at the Aero Medical Laboratory at Wright-Patterson Air Force Base near Dayton, Ohio. Paul Fitts was a handsome man with a soft Tennessee drawl, analytically minded but with a shiny wave of Brylcreemed hair, Elvis-like, which projected a certain suave nonconformity. Decades later, he’d become known as one of the Air Force’s great minds, the person tasked with the hardest, weirdest problems—such as figuring out why people saw UFOs.

For now, though, he was still trying to make his name with a newly minted PhD in experimental psychology. Having an advanced degree in psychology was still a novelty; with that novelty came a certain authority. Fitts was supposed to know how people think. But his true talent was to realize that he didn’t.

When the thousands of reports about plane crashes landed on Fitts’s desk, he could have easily looked at them and concluded that they were all the pilot’s fault—that these fools should have never been flying at all. That conclusion would have been in keeping with the times. The original incident reports themselves would typically say “pilot error,” and for decades no more explanation was needed. This was, in fact, the cutting edge of psychology at the time. Because so many new draftees were flooding into the armed forces, psychologists had begun to devise aptitude tests that would find the perfect job for every soldier. If a plane crashed, the prevailing assumption was: That person should not have been flying the plane. Or perhaps they should have simply been better trained. It was their fault.

But as Fitts pored over the Air Force’s crash data, he realized that if “accident prone” pilots really were the cause, there would be randomness in what went wrong in the cockpit. These kinds of people would get hung up on anything they operated. It was in their nature to take risks, to let their minds wander while landing a plane. But Fitts didn’t see noise; he saw a pattern. And when he went to talk to the people involved about what actually happened, they told of how confused and terrified they’d been, how little they understood in the seconds when death seemed certain.

The examples slid back and forth on a scale of tragedy to tragicomic: pilots who slammed their planes into the ground after misreading a dial; pilots who fell from the sky never knowing which direction was up; the pilots of B-17s who came in for smooth landings and yet somehow never deployed their landing gear. And others still, who got trapped in a maze of absurdity, like the one who, having jumped into a brand-new plane during a bombing raid by the Japanese, found the instruments completely rearranged. Sweaty with stress, unable to think of anything else to do, he simply ran the plane up and down the runway until the attack ended.

Fitts’ data showed that during one 22-month period of the war, the Air Force reported an astounding 457 crashes just like the one in which our imaginary pilot hit the runway thinking everything was fine. But the culprit was maddeningly obvious for anyone with the patience to look. Fitts’ colleague Alfonse Chapanis did the looking. When he started investigating the airplanes themselves, talking to people about them, sitting in the cockpits, he also didn’t see evidence of poor training. He saw, instead, the impossibility of flying these planes at all. Instead of “pilot error,” he saw what he called, for the first time, “designer error.”

The reason why all those pilots were crashing when their B-17s were easing into a landing was that the flaps and landing gear controls looked exactly the same. The pilots were simply reaching for the landing gear, thinking they were ready to land. And instead, they were pulling the wing flaps, slowing their descent, and driving their planes into the ground with the landing gear still tucked in. Chapanis came up with an ingenious solution: He created a system of distinctively shaped knobs and levers that made it easy to distinguish all the controls of the plane merely by feel, so that there’s no chance of confusion even if you’re flying in the dark.

By law, that ingenious bit of design—known as shape coding—still governs landing gear and wing flaps in every airplane today. And the underlying idea is all around you: It’s why the buttons on your videogame controller are differently shaped, with subtle texture differences so you can tell which is which. It’s why the dials and knobs in your car are all slightly different, depending on what they do. And it’s the reason your virtual buttons on your smartphone adhere to a pattern language.

But Chapanis and Fitts were proposing something deeper than a solution for airplane crashes. Faced with the prospect of soldiers losing their lives to poorly designed machinery, they invented a new paradigm for viewing human behavior. That paradigm lies behind the user-friendly world that we live in every day. They realized that it was absurd to train people to operate a machine and assume they would act perfectly under perfect conditions.

Instead, designing better machines meant figuring out how people acted without thinking, in the fog of everyday life, which might never be perfect. You couldn’t assume humans to be perfectly rational sponges for training. You had to take them as they were: distracted, confused, irrational under duress. Only by imagining them at their most limited could you design machines that wouldn’t fail them.

This new paradigm took root slowly at first. But by 1984—four decades after Chapanis and Fitts conducted their first studies—Apple was touting a computer for the rest of us in one of its first print ads for the Macintosh: “On a particularly bright day in Cupertino, California, some particularly bright engineers had a particularly bright idea: Since computers are so smart, wouldn’t it make sense to teach computers about people, instead of teaching people about computers? So it was that those very engineers worked long days and nights and a few legal holidays, teaching silicon chips all about people. How they make mistakes and change their minds. How they refer to file folders and save old phone numbers. How they labor for their livelihoods, and doodle in their spare time.” (Emphasis mine.) And that easy-to-digest language molded the smartphones and seamless technology we live with today.

Along the long and winding path to a user-friendly world, Fitts and Chapanis laid the most important brick. They realized that as much as humans might learn, they would always be prone to err—and they inevitably brought presuppositions about how things should work to everything they used. This wasn’t something you could teach out of existence. In some sense, our limitations and preconceptions are what it means to be human—and only by understanding those presumptions could you design a better world.

Today, this paradigm shift has produced trillions in economic value. We now presume that apps that reorder the entire economy should require no instruction manual at all; some of the most advanced computers ever made now come with only cursory instructions that say little more than “turn it on.” This is one of the great achievements of the last century of technological progress, with a place right alongside GPS, Arpanet, and the personal computer itself.

It’s also an achievement that remains unappreciated because we assume this is the way things should be. But with the assumption that even new technologies need absolutely no explaining comes a dark side: When new gadgets make assumptions about how we behave, they force unseen choices upon us. They don’t merely defer to our desires. They shape them.


User friendliness is simply the fit between the objects around us and the ways we behave. So while we might think that the user-friendly world is one of making user-friendly things, the bigger truth is that design doesn’t rely on artifacts; it relies on our patterns. The truest material for making new things isn’t aluminum or carbon fiber. It’s behavior. And today, our behavior is being shaped and molded in ways both magical and mystifying, precisely because it happens so seamlessly.

I got a taste of this seductive, user-friendly magic recently, when I went to Miami to tour a full-scale replica of Carnival Cruise’s so-called Ocean Medallion experience. I began my tour in a fake living room, with two of the best-looking project staffers pretending to be husband and wife, showing me how the whole thing was supposed to go.

Using the app, you could reserve all your activities way before you boarded the ship. And once on board, all you needed to carry was a disk the size of a quarter; using that, any one of the 4,000 touchscreens on the ship could beam you personalized information, such as which way you needed to go for your next reservation. The experience recalled not just scenes from Her and Minority Report, but computer-science manifestos from the late 1980s that imagined a suite of gadgets that would adapt to who you are, morphing to your needs in the moment.

Behind the curtains, in the makeshift workspace, a giant whiteboard wall was covered with a sprawling map of all the inputs that flow into some 100 different algorithms that crunch every bit of a passenger’s preference behavior to create something called the “Personal Genome.” If Jessica from Dayton wanted sunscreen and a mai tai, she could order them on her phone, and a steward would deliver them in person, anywhere across the sprawling ship.

The server would greet Jessica by name, and maybe ask if she was excited about her kitesurfing lesson. Over dinner, if Jessica wanted to plan an excursion with friends, she could pull up her phone and get recommendations based on the overlapping tastes of the people she was sitting with. If only some people like fitness and others love history, then maybe they’ll all like a walking tour of the market at the next port.

Jessica’s Personal Genome would be recalculated three times a second by 100 different algorithms using millions of data points that encompassed nearly anything she did on the ship: How long she lingered on a recommendation for a sightseeing tour; the options that she didn’t linger on at all; how long she’d actually spent in various parts of the ship; and what’s nearby at that very moment or happening soon. If, while in her room, she had watched one of Carnival’s slickly produced travel shows and seen something about a market tour at one of her ports of call, she’d later get a recommendation for that exact same tour when the time was right. “Social engagement is one of the things being calculated, and so is the nuance of the context,” one of the executives giving me the tour said.
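
Carnival didn’t disclose how those algorithms actually weigh their inputs, but the dinner-table scenario, recommending an excursion that overlaps everyone’s tastes, can be sketched in a few lines. The guest tags, excursion tags, and scoring rule below are entirely hypothetical.

```python
# Hypothetical sketch of overlap-based group recommendations; Carnival's real
# "Personal Genome" algorithms are not public, and these tags are invented.
from typing import Dict, Set, List

def recommend_for_group(guest_tastes: Dict[str, Set[str]],
                        excursions: Dict[str, Set[str]]) -> List[str]:
    """Rank excursions by how many guests share at least one of an excursion's tags."""
    def coverage(tags: Set[str]) -> int:
        return sum(1 for tastes in guest_tastes.values() if tastes & tags)
    return sorted(excursions, key=lambda name: coverage(excursions[name]), reverse=True)

table = {
    "Jessica": {"fitness", "water sports"},
    "Sam": {"history", "food"},
    "Priya": {"history", "fitness"},
}
options = {
    "market walking tour": {"history", "food", "fitness"},  # a walk, so it covers the fitness fan too
    "kitesurfing lesson": {"water sports", "fitness"},
    "museum coach tour": {"history"},
}
print(recommend_for_group(table, options))
# -> ['market walking tour', 'kitesurfing lesson', 'museum coach tour']
```

Ranking options by how many people at the table they cover is what surfaces the compromise pick, the market walking tour, ahead of any one guest’s individual favorite.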

It was like having a right-click for the real world. Standing on the mocked-up sundeck, knowing that whatever I wanted would find me, and that whatever I might want would find its way either onto the app or the screens that lit up around the cruise ship as I walked around, it wasn’t hard to see how many other businesses might try to do the same thing. In the era following World War II, the idea that designers could make the world easier to understand was a breakthrough.

But today, “I understand what I should do” has become “I don’t need to think at all.” For businesses, intuitiveness has now become mandatory, because there are fortunes to be made by making things just a tad more frictionless. “One way to view this is that creating this kind of frictionless experience is an option. Another way to look at it is that there’s no choice,” said John Padgett, the Carnival executive who had shepherded the Ocean Medallion to life. “For millennials, value is important. But hassle is more important, because of the era they’ve grown up in. It’s table stakes. You have to be hassle-free to get them to participate.”

By that logic, the real world was getting to be disappointing when compared with the frictionless ease of this increasingly virtual world. Taken as a whole, Carnival’s vision for seamless customer service that can anticipate your every whim was like an Uber for everything, powered by Netflix recommendations for meatspace. And these are in fact the experiences that many more designers will soon be striving for: invisible, everywhere, perfectly tailored, with no edges between one place and the next. Padgett described this as a “market of one,” in which everything you saw would be only the thing you want.

The Market of One suggests to me a break point in the very idea of user friendliness. When Chapanis and Fitts were laying the seeds of the user-friendly world, they had to find the principles that underlie how we expect the world to behave. They had to preach the idea that products built on our assumptions about how things should work would eventually make even the most complex things easy to understand.

Steve Jobs’ dream of a “bicycle for the mind”—a universal tool that might expand the reach of anyone—has arrived. High technology has made our lives easier; made us better at our jobs, and created jobs that never existed before; it has made the people we care about closer to us. But friction also has value: It’s friction that makes us question whether we do in fact need the thing we want. Friction is the path to introspection. Infinite ease quickly becomes the path of least resistance; it saps our free will, making us submit to someone else’s guess about who we are. We can’t let that pass. We have to become cannier, more critical consumers of the user-friendly world. Otherwise, we risk blundering into more crashes that we’ll only understand after the worst has already happened.


Excerpted from USER FRIENDLY: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play by Cliff Kuang with Robert Fabricant. Published by MCD, an imprint of Farrar, Straus and Giroux November 19th 2019. Copyright © 2019 by Cliff Kuang and Robert Fabricant. All rights reserved.

A Tesla Cybertruck Mishap, a Massive Data Leak, and More News

Hackers are stealing and Elon is squealing, but first: a cartoon about subscription dreams.

Here’s the news you need to know, in two minutes or less.

Want to receive this two-minute roundup as an email every weekday? Sign up here!

Today’s News

Meet the Tesla Cybertruck, Elon Musk’s Ford-fighting pickup truck

Tesla CEO Elon Musk last night unveiled his newest baby, an all-electric pickup called the Tesla Cybertruck. He demonstrated that it can take a sledgehammer to the door with nary a scratch, and he also accidentally demonstrated that it can’t take a ball to the window. But behind the showmanship and Elon’s audible disbelief at the onstage mishap is a truck with a 500-mile range and the torque that comes from an electric motor. It represents an important new market expansion for Tesla. Now it just has to actually put the darn thing into production.

1.2 billion records found exposed online in a single server

Hackers have long used stolen personal data to break into accounts and wreak havoc. And a dark web researcher found one data trove sitting exposed on an unsecured server. The 1.2 billion records don’t include passwords, credit card numbers, or Social Security numbers, but they do contain cell phone numbers, social media profiles, and email addresses—a great start for someone trying to steal your identity.

Fast Fact: 2025

That’s the year NASA expects to launch the first dedicated mission to Europa, where water vapor was recently discovered. The mission to Jupiter’s moon will involve peering beneath Europa’s icy shell for evidence of life.

WIRED Recommends: The Gadget Lab Newsletter

First of all, you should sign up for WIRED’s Gadget Lab newsletter, because every Thursday you’ll get the best stories about the coolest gadgets right in your inbox. Second of all, it will give you access to early Black Friday and Cyber Monday deals so you can get your shopping done early.

News You Can Use:

Here’s how to hide nasty replies to your tweets on Twitter.

This daily roundup is available as a newsletter. You can sign up right here to make sure you get the news delivered fresh to your inbox every weekday!

How Wily Teens Outwit Bathroom Vape Detectors

Last spring, students at Hinsdale Central High School discovered six vaping detectors in bathrooms and locker rooms around campus. About 20 miles southwest of Chicago, Hinsdale Central has been battling on-campus vaping for years. Administrators tried making students take online courses if they were caught with ecigarettes; they talked to law enforcement; the Village of Hinsdale even passed an ordinance that would make it easier for officers to ticket minors caught with the devices. To no avail. And the detectors? Students simply ripped them off the walls.

Ecigarettes, which are easy to conceal and, until recently, came in a dazzling array of sweet, fruity, and dessert flavors, are hugely popular among teenagers. A recent study found that 28 percent of high schoolers and 11 percent of middle schoolers frequently vape. So schools across the country are spending thousands of dollars to outfit their campuses with vaping detectors, only to find that the devices can’t stand up to wily teens and that policing student behavior isn’t the same as permanently changing it.

Like smoke detectors, vape detectors are relatively unintrusive. They don’t even record video or audio—they just register the chemical signature of vaping aerosol, then send an email or text alert to school officials.
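
The vendors don’t publish their detection logic, but the register-then-alert behavior schools describe can be sketched as a simple rolling-baseline threshold check. The readings, window, spike ratio, and alert hook below are hypothetical and do not reflect how Flysense or Halo actually work.

```python
# Hypothetical sketch of a vape detector's alert loop: flag readings that spike
# well above a rolling baseline, then notify staff. Not based on any vendor's
# actual (unpublished) algorithm.
from collections import deque

class AerosolDetector:
    def __init__(self, spike_ratio: float = 1.5, window: int = 60):
        self.recent = deque(maxlen=window)  # rolling window of recent sensor readings
        self.spike_ratio = spike_ratio      # how far above baseline counts as a vaping event

    def reading(self, level: float) -> bool:
        """Process one air-quality reading; return True if an alert was sent."""
        if len(self.recent) == self.recent.maxlen:
            baseline = sum(self.recent) / len(self.recent)
            if level > baseline * self.spike_ratio:
                self.alert(level, baseline)
                return True                 # the spike is not folded into the baseline
        self.recent.append(level)
        return False

    def alert(self, level: float, baseline: float) -> None:
        # Stand-in for the email or text message the real detectors send to administrators.
        print(f"Vaping suspected: reading {level:.1f} vs baseline {baseline:.1f}")
```

A detector built this way only knows that the air changed, not who changed it, which matches administrators’ complaint that an alert alone can’t say which kids in the bathroom were actually vaping.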

Some schools say they’re a useful deterrent. A district in Sparta, New Jersey, started off with two detectors and is planning to install more. Freeman School District in Washington installed detectors a few weeks ago. “They’ve been very effective, and we’re glad we have them,” says superintendent Randy Russell, who noted that the detectors already helped catch one young vaper in the act.

But at Hinsdale, even before the teens subjected them to blunt force trauma, the devices hadn’t lived up to expectations. “By the time we get there the kids are gone,” says Kimm Dever, an administrator at Hinsdale Central. Dever says the devices also went off randomly, and administrators couldn’t tell which kids were vaping and which just happened to be in the bathroom when the devices alerted.

Revere Schools in Bath, Ohio, reported similar problems. Revere spent around $15,000 to install 16 detectors in its middle and high schools at the beginning of the school year. Parents were thrilled, but administrators rarely made it to the bathroom in time to catch the vapers mid-puff. “It was like chasing ghosts,” says Jennifer Reece, a spokesperson for the school district. In theory, school officials could consult footage from hallway cameras to triangulate which students were in the bathroom when the detectors went off. “That also takes up time, and we don’t always have that type of time,” Reece says.

Revere bought detectors with grant money from the state Attorney General’s Office. Now, Reece often gets questions from other school districts about the devices. “If they don’t have grant money I don’t know if it’s worth [the cost],” she says.

If vaping has become the cool thing to do among students, then buying vape detectors is the big trend for school districts. Derek Peterson, the CEO of Soter Technologies, which makes the Flysense detector that Revere installed, says the company is fielding about 700 orders a month. “We have more schools coming to us than we know what to do with,” he says. IPVideo, which makes a number of cameras and other gadgets for schools, sells a Halo detector that also claims to distinguish between THC and nicotine vapor. The detectors can integrate with school camera systems so it’s easier for administrators to figure out which students are in the bathroom, and both companies’ detectors cost roughly $1,000 apiece. Flysense charges an additional annual fee.

The sensors are chemical detectors that go off when the levels of certain chemicals in the room change. Most schools say they do sense the vapor and that they’ve caught students because of them. But kids are clever. Some exhale into their backpacks or sleeves, where the aerosol dissipates before wafting up to the detector. Other kids resort to AP physics–level subterfuge. They exhale into the toilet and flush, creating a vacuum that sucks the aerosol into the pipes. “There’s nothing we can do about that,” says Peterson. “There’s no sensing that could ever change the laws of physics.”

The problem is that detectors alone can’t change students’ behavior. It’s important for schools to analyze their goals, says Bonnie Halpern-Felsher, a developmental psychologist at Stanford who studies teen vaping. Vape detectors might help catch offending kids so they can be punished, she says, but “if the goal is to prevent and stop, vape detectors are not the way to go.”

Peterson agrees and is already getting in on the education angle, offering a #NoVaping package that includes brochures, posters, and suggestions for class presentations.

Between 2017 and 2019, the California Department of Justice distributed more than $12 million to California school districts trying to deter vaping through a number of measures including installing detectors, hiring school resource officers, and running educational programs.

One of those districts was Las Virgenes Unified, which serves around 11,500 students northwest of Los Angeles. In October 2018, Las Virgenes spent half of its grant, some $50,000, to install Flysense detectors at its two high schools and three middle schools. “The technology is good. They work,” says superintendent Dan Stepenosky. But he combines the detectors with other measures. When students are caught vaping, they’re sent to a 90-minute meeting with their parents and an addiction counselor. The school dispatched administrators to nearby gas stations, grocery stores, and convenience stores to remind people not to sell ecigarettes to kids under 21. The school even partners with law enforcement to run sting operations on businesses in the community that sell ecigarettes to minors. So far they’ve conducted over 250 operations complete with undercover officers and marked bills.

But the most important element hasn’t been the sting operations, the crackdowns on local retailers, or the detectors. “The most impactful has been the education piece,” says Stepenosky. The district holds seminars for parents and teachers, and it hired extra deans to focus on student wellness and included information about ecigarettes in school curricula.

These strategies are comprehensive, and they demand a lot of resources. One school in South Dakota raised money from the local community to buy its sensors. Other school districts are suing Juul, blaming the company’s marketing for creating a new generation of nicotine-addicted kids. Those districts hope to get payouts that will alleviate the huge financial burden of running addiction counseling and education programs. Stepenosky received over a million dollars from the California Department of Justice, and he’s already applying for more funding for next year.

