Regina Barzilay’s office at MIT affords a clear view of the Novartis Institutes for Biomedical Research. Amgen’s drug discovery group is a few blocks beyond that. Until recently, Barzilay, one of the world’s leading researchers in artificial intelligence, hadn’t given much thought to these nearby buildings full of chemists and biologists. But as AI and machine learning began to perform ever more impressive feats in image recognition and language comprehension, she began to wonder: could it also transform the task of finding new drugs?
The problem is that human researchers can explore only a tiny slice of what is possible. It’s estimated that there are as many as 10⁶⁰ potentially drug-like molecules—more than the number of atoms in the solar system. But traversing seemingly unlimited possibilities is what machine learning is good at. Trained on large databases of existing molecules and their properties, the programs can explore all possible related molecules.
Drug discovery is a hugely expensive and often frustrating process. Medicinal chemists must guess which compounds might make good medicines, using their knowledge of how a molecule’s structure affects its properties. They synthesize and test countless variants, and most are failures. “Coming up with new molecules is still an art, because you have such a huge space of possibilities,” says Barzilay. “It takes a long time to find good drug candidates.”
By speeding up this critical step, deep learning could offer far more opportunities for chemists to pursue, making drug discovery much quicker. One advantage: machine learning’s often quirky imagination. “Maybe it will go in a different direction that a human wouldn’t go in,” says Angel Guzman-Perez, a drug researcher at Amgen who is working with Barzilay. “It thinks differently.”
Others are using machine learning to try to invent new materials for clean-tech applications. Among the items on the wish list are improved batteries for storing power on the electric grid and organic solar cells, which could be far cheaper to make than today’s bulky silicon-based ones.
Such breakthroughs have become harder and more expensive to attain as chemistry, materials science, and drug discovery have grown mind-bogglingly complex and saturated with data. Even as the pharmaceutical and biotech industries pour money into research, the number of new drugs based on novel molecules has been flat over the last few decades. And we’re still stuck with lithium-ion batteries that date to the early 1990s and designs for silicon solar cells that are also decades old.
The complexity that has slowed progress in these fields is where deep learning excels. Searching through multidimensional space to come up with valuable predictions is “AI’s sweet spot,” says Ajay Agrawal, an economist at the Rotman School of Management in Toronto and author of the best-selling Prediction Machines: The Simple Economics of Artificial Intelligence.
In a recent paper, economists at MIT, Harvard, and Boston University argued that AI’s greatest economic impact could come from its potential as a new “method of invention” that ultimately reshapes “the nature of the innovation process and the organization of R&D.”
Iain Cockburn, a BU economist and coauthor of the paper, says: “New methods of invention with wide applications don’t come by very often, and if our guess is right, AI could dramatically change the cost of doing R&D in many different fields.” Much of innovation involves making predictions based on data. In such tasks, Cockburn adds, “machine learning could be much faster and cheaper by orders of magnitude.”
In other words, AI’s chief legacy might not be driverless cars or image search or even Alexa’s ability to take orders, but its ability to come up with new ideas to fuel innovation itself.
Ideas are getting expensive
Late last year, Paul Romer won the economics Nobel Prize for work done during the late 1980s and early 1990s that showed how investments in new ideas and innovation drive robust economic growth. Earlier economists had noted the connection between innovation and growth, but Romer provided an exquisite explanation for how it works. In the decades since, Romer’s conclusions have been the intellectual inspiration for many in Silicon Valley and help account for how it has attained such wealth.
But what if our pipeline of new ideas is drying up? Economists Nicholas Bloom and Chad Jones at Stanford, Michael Webb, a graduate student at the university, and John Van Reenen at MIT looked at the problem in a recent paper called “Are ideas getting harder to find?” (Their answer was “Yes.”) Looking at drug discovery, semiconductor research, medical innovation, and efforts to improve crop yields, the economists found a common story: investments in research are climbing sharply, but the payoffs are staying constant.
From an economist’s perspective, that’s a productivity problem: we’re paying more for a similar amount of output. And the numbers look bad. Research productivity, the amount of output a given number of researchers produces, is declining by around 6.8% annually for the task of extending Moore’s Law, which requires that we find ways to pack ever more and smaller components onto a semiconductor chip in order to keep making computers faster and more powerful. (It takes more than 18 times as many researchers to double chip density today as it did in the early 1970s, they found.) For improving seeds, as measured by crop yields, research productivity is dropping by around 5% each year. For the US economy as a whole, it is declining by 5.3%.
The rising price of big ideas
It is taking more researchers and money to find productive new ideas, according to economists at Stanford and MIT. That’s a likely factor in the overall sluggish growth in the US and Europe in recent decades. The graph below shows the pattern for the overall economy, highlighting US total factor productivity (by decade average and for 2000–2014)—a measure of the contribution of innovation—versus the number of researchers. Similar patterns hold for specific research areas.
Any negative effect of this decline has been offset, so far, by the fact that we’re putting more money and people into research. So we’re still doubling the number of transistors on a chip every two years, but only because we’re dedicating far more people to the problem. We’ll have to keep doubling our investment in research and development every 13 years just to keep treading water.
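The compounding behind those numbers is easy to verify. Here is a minimal back-of-the-envelope check in Python; the decline rates are the ones the economists report, and the time spans simply follow from them:

```python
import math

# Annual declines in research productivity reported by the economists
chip_decline = 0.068     # semiconductor research (extending Moore's Law)
economy_decline = 0.053  # US economy as a whole

# At 6.8% a year, how long until 18x as many researchers are needed?
years_to_18x = math.log(18) / math.log(1 + chip_decline)
print(round(years_to_18x))  # roughly the span from the early 1970s to today

# At 5.3% a year, how often must total R&D effort double just to stand still?
doubling_time = math.log(2) / math.log(1 + economy_decline)
print(round(doubling_time))  # about 13 years
```

The two answers, about 44 years and about 13 years, line up with the 18-fold increase since the early 1970s and the 13-year doubling requirement cited above.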
It could be, of course, that fields like crop science and semiconductor research are getting old and the opportunities for innovation are shriveling up. However, the researchers also found that overall growth tied to innovation in the economy was slow. Any investments in new areas, and any inventions they have generated, have failed to change the overall story.
The drop in research productivity appears to be a decades-long trend. But it is particularly worrisome to economists now because we’ve seen an overall slowdown in economic growth since the mid-2000s. At a time of brilliant new technologies like smartphones, driverless cars, and Facebook, growth is sluggish, and the portion of it attributed to innovation—called total factor productivity—has been particularly weak.
The lingering effects of the 2008 financial collapse could be hampering growth, says Van Reenen, and so could continuing political uncertainties. But dismal research productivity is undoubtedly a contributor. And he says that if the decline continues, it could do serious damage to future prosperity and growth.
It makes sense that we’ve already picked much of what some economists like to call the “low-hanging fruit” in terms of inventions. Could it be that the only fruit left is a few shriveled apples on the farthest branches of the tree? Robert Gordon, an economist at Northwestern University, has been a strong proponent of that view. He says we’re unlikely to match the flourishing of discovery that marked the late 19th and early 20th centuries, when inventions such as electric light and power and the internal-combustion engine led to a century of unprecedented prosperity.
If Gordon is right, and there are fewer big inventions left, we’re doomed to a dismal economic future. But few economists think that’s the case. Rather, it makes sense that big new ideas are out there; it’s just getting more expensive to find them as the science becomes increasingly complex. The chances that the next penicillin will just fall into our laps are slim. We’ll need more and more researchers to make sense of the advancing science in fields like chemistry and biology.
It’s what Ben Jones, an economist at Northwestern, calls “the burden of knowledge.” Researchers are becoming more specialized, making it necessary to form larger—and more expensive—teams to solve problems. Jones’s research shows that the age at which scientists reach their peak productivity is going up: it takes them longer to gain the expertise they need. “It’s an innate by-product of the exponential growth of knowledge,” he says.
“A lot of people tell me our findings are depressing, but I don’t see it that way,” says Van Reenen. Innovation might be more difficult and expensive, but that, he says, simply points to the need for policies, including tax incentives, that will encourage investments into more research.
“As long as you put resources into R&D, you can maintain healthy productivity growth,” says Van Reenen. “But we have to be prepared to spend money to do it. It doesn’t come free.”
Giving up on science
Can AI creatively solve the kinds of problems that such innovation requires? Some experts are now convinced that it can, given the kinds of advances shown off by the game-playing machine AlphaGo.
AlphaGo mastered the ancient game of Go, beating the reigning champion, by studying the nearly unlimited possible moves in a game that has been played for several thousand years by humans relying heavily on intuition. In doing so, it sometimes came up with winning strategies that no human player had thought to try. Likewise, goes the thinking, deep-learning programs trained on large amounts of experimental data and chemical literature could come up with novel compounds that scientists never imagined.
Might an AlphaGo-like breakthrough help the growing armies of researchers poring over ever-expanding scientific data? Could AI make basic research faster and more productive, reviving areas that have become too expensive for businesses to pursue?
The last several decades have seen a massive upheaval in our R&D efforts. Since the days when AT&T’s Bell Labs and Xerox’s PARC produced world-changing inventions like the transistor, solar cells, and laser printing, most large companies in the US and other rich economies have given up on basic research. Meanwhile, US federal R&D investments have been flat, particularly for fields other than life sciences. So while we continue to increase the number of researchers overall and to turn incremental advances into commercial opportunities, areas that require long-term research and a grounding in basic science have taken a hit.
The invention of new materials in particular has become a commercial backwater. That has held back needed innovations in clean tech—stuff like better batteries, more efficient solar cells, and catalysts to make fuels directly from sunlight and carbon dioxide (think artificial photosynthesis). While the prices of solar panels and batteries are falling steadily, that’s largely because of improvements in manufacturing and economies of scale, rather than fundamental advances in the technologies themselves.
It takes an average of 15 to 20 years to come up with a new material, says Tonio Buonassisi, a mechanical engineer at MIT who is working with a team of scientists in Singapore to speed up the process. That’s far too long for most businesses. It’s impractical even for many academic groups. Who wants to spend years on a material that may or may not work? This is why venture-backed startups, which have generated much of the innovation in software and even biotech, have long given up on clean tech: venture capitalists generally need a return within seven years.
“A 10x acceleration [in the speed of materials discovery] is not only possible, it is necessary,” says Buonassisi, who runs a photovoltaic research lab at MIT. His goal, and that of a loosely connected network of fellow scientists, is to use AI and machine learning to get that 15-to-20-year time frame down to around two to five years by attacking the various bottlenecks in the lab, automating as much of the process as possible. A faster process gives the scientists far more potential solutions to test, allows them to find dead ends in hours rather than months, and helps optimize the materials. “It transforms how we think as researchers,” he says.
It could also make materials discovery a viable business pursuit once again. Buonassisi points to a chart showing the time it took to develop various technologies. One of the columns labeled “lithium-ion batteries” shows 20 years.
Another, much shorter column is labeled “novel solar cell”; at the top is “2030 climate target.” The point is clear: we can’t wait another 20 years for the next breakthrough in clean-tech materials.
AI startups in drugs and materials
| What they do | Why it matters |
| --- | --- |
| Use neural networks to search through large databases to find small drug-like molecules that bind to targeted proteins. | Identifying such molecules with desirable properties, such as potency, is a critical first step in drug discovery. |
| Develop a combination of robotics and AI to speed up the discovery and development of new materials and chemicals. | It takes more than a decade to develop a material. Cutting that time could help us tackle problems such as climate change. |
| Use artificial intelligence to search for oligonucleotide molecules to treat genetic diseases. | Oligonucleotide treatments hold promise against a range of diseases, including neurodegenerative and metabolic disorders. |
The AI-driven lab
“Come to a free land”: that is how Alán Aspuru-Guzik invites a US visitor to his Toronto lab these days. In 2018 Aspuru-Guzik left his tenured position as a Harvard chemistry professor, moving with his family to Canada. His decision was driven by a strong distaste for President Donald Trump and his policies, particularly those on immigration. It didn’t hurt, however, that Toronto is rapidly becoming a mecca for artificial-intelligence research.
As well as being a chemistry professor at the University of Toronto, Aspuru-Guzik holds a position at the Vector Institute for Artificial Intelligence. It’s the AI center cofounded by Geoffrey Hinton, whose pioneering work on deep learning and neural networks is largely credited with jump-starting today’s boom in AI.
In a notable 2012 paper, Hinton and his coauthors demonstrated that a deep neural network, trained on a huge number of pictures, could identify a mushroom, a leopard, and a dalmatian dog. It was a remarkable breakthrough at the time, and it quickly ushered in an AI revolution using deep-learning algorithms to make sense of large data sets.
Researchers rapidly found ways to use such neural networks to help driverless cars navigate and to spot faces in a crowd. Others modified the deep-learning tools so that they could train themselves; among these tools are GANs (generative adversarial networks), which can fabricate images of scenes and people that never existed.
In a 2015 follow-up paper, Hinton provided clues that deep learning could be used in chemistry and materials research. His paper touted the ability of neural networks to discover “intricate structures in high-dimensional data”—in other words, the same networks that can navigate through millions of images to find, say, a dog with spots could sort through millions of molecules to identify one with certain desirable properties.
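The analogy can be made concrete with a toy “virtual screen.” Everything in the sketch below is fabricated for illustration: each molecule is a random 64-bit fingerprint, the desirable property is a hidden linear rule, and the model is a one-layer network (logistic regression). A real pipeline would use learned molecular representations and measured assay data, not random bits.

```python
import numpy as np

rng = np.random.default_rng(42)
n_bits = 64

# Synthetic ground truth: a hidden weight vector decides which
# fingerprints count as "active" (having the desired property).
true_w = rng.normal(size=n_bits)

def label(fps):
    return (fps @ true_w > 0).astype(float)

# Training set of random molecular "fingerprints" with known labels
train = rng.integers(0, 2, size=(2000, n_bits)).astype(float)
y = label(train)

# Fit a one-layer network (logistic regression) by gradient descent
w = np.zeros(n_bits)
for _ in range(300):
    z = np.clip(train @ w, -30, 30)        # clip to avoid overflow in exp
    p = 1 / (1 + np.exp(-z))
    w -= 0.1 * train.T @ (p - y) / len(train)

# Screen a larger unseen "library" and keep the top-scoring candidates
library = rng.integers(0, 2, size=(10000, n_bits)).astype(float)
scores = library @ w
top = library[np.argsort(scores)[-100:]]
print(f"hit rate among top 100 candidates: {label(top).mean():.0%}")
```

The point of the toy is only that a trained model can rank an enormous candidate pool so that the handful of molecules a chemist actually synthesizes are far more likely to be hits than a random draw.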
Energetic and bubbling with ideas, Aspuru-Guzik is not the type of scientist to patiently spend two decades figuring out whether a material will work. And he has quickly adapted deep learning and neural networks to attempt to reinvent materials discovery. The idea is to infuse artificial intelligence and automation into all the steps of materials research: the initial design and synthesis of a material, its testing and analysis, and finally the multiple refinements that optimize its performance.
On a freezing cold day early this January, Aspuru-Guzik has his hat pulled tightly down over his ears but otherwise seems oblivious to the bitter Canadian weather. He has other things on his mind. For one thing, he’s still waiting for the delivery of a $1.2 million robot, now on a ship from Switzerland, that will be the centerpiece for the automated, AI-driven lab he has envisioned.
In the lab, deep-learning tools like GANs and a related technique called an autoencoder will imagine promising new materials and figure out how to make them. The robot will then make the compounds; Aspuru-Guzik wants to create an affordable automated system that would be able to spit out new molecules on demand. Once the materials have been made, they can be analyzed with instruments such as a mass spectrometer. Additional machine-learning tools will make sense of that data and “diagnose” the material’s properties. These insights will then be used to further optimize the materials, tweaking their structures. And then, Aspuru-Guzik says, “AI will select the next experiment to make, closing the loop.”
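That propose-make-measure-learn loop can be caricatured in a few lines of Python. Everything here is invented for illustration: the “material” is a pair of design parameters, the “robot” is a toy function with a hidden optimum, and the “AI” is a crude nearest-neighbor surrogate standing in for a generative model and real instruments.

```python
import random

random.seed(0)

def measure(x):
    # Stand-in for synthesizing and analyzing a material: a toy
    # "property" that peaks at a hidden best design (0.7, 0.3).
    return -(x[0] - 0.7) ** 2 - (x[1] - 0.3) ** 2

def surrogate_score(x, history):
    # Crude surrogate model: predict a candidate's property from the
    # measured value of its nearest already-tested neighbor.
    _, result = min(
        history,
        key=lambda h: (h[0][0] - x[0]) ** 2 + (h[0][1] - x[1]) ** 2,
    )
    return result

# Seed the loop with a few random "experiments"
history = []
for _ in range(3):
    x = (random.random(), random.random())
    history.append((x, measure(x)))

# Closed loop: the model picks the next experiment, the "robot" runs it,
# and the result feeds back into the model.
for _ in range(20):
    candidates = [(random.random(), random.random()) for _ in range(50)]
    pick = max(candidates, key=lambda c: surrogate_score(c, history))
    history.append((pick, measure(pick)))

best_x, best_y = max(history, key=lambda h: h[1])
print(best_x, best_y)  # best design found so far
```

The real system replaces each toy piece with heavy machinery, but the control flow, selecting the next experiment from everything measured so far, is the loop Aspuru-Guzik describes.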
Once the robot is in place, Aspuru-Guzik expects to make some 48 novel materials every two days, drawing on the machine-learning insights to keep improving their structures. That’s one promising new material every hour, an unprecedented pace that could completely transform the lab’s productivity.
It’s not all about simply dreaming up “a magical material,” he says. To really change materials research, you need to attack the entire process: “What are the bottlenecks? You want AI in every piece of the lab.” Once you have a proposed structure, for example, you still need to figure out how to make it. It can take weeks to months to solve what chemists call “retrosynthesis”—working backwards from a molecular structure to figure out the steps needed to synthesize such a compound. Another bottleneck comes in making sense of the reams of data produced by analytic equipment. Machine learning could speed up each of those steps.
What motivates Aspuru-Guzik is the threat of climate change, the need for improvements in clean tech, and the essential role of materials in producing such advances. His own research is looking at novel organic electrolytes for flow batteries, which can be used to store excess electricity from power grids and pump it back in when it’s needed, and at organic solar cells that would be far cheaper than silicon-based ones. But if his design for a self-contained, automated chemical lab works, he suggests, it could make chemistry far more accessible to almost anyone. He calls it the “democratization of materials discovery.”
“This is where the action is,” he says. “AIs that drive cars, AIs that improve medical diagnostics, AIs for personal shopping—the economic growth from AIs applied to scientific research may swamp the economic impact from all those other AIs combined.”
The Vector Institute, Toronto’s magnet for AI research, sits less than a mile away. From the windows of the large open office space, you can look across at Ontario’s parliament building. The proximity of experts in AI, chemistry, and business to the province’s seat of government in downtown Toronto isn’t a coincidence. There’s a strong belief among many in the city that AI will transform business and the economy, and increasingly, some are convinced it will radically change how we do science.
Still, if it is to do that, a first step is convincing scientists it is worthwhile.
Amgen’s Guzman-Perez says many of his peers in medicinal chemistry are skeptical. Over the last few decades the field has seen a series of supposedly revolutionary technologies, from computational design to combinatorial chemistry and high-throughput screening, that have automated the rapid production and testing of multiple molecules. Each has proved somewhat helpful but limited. None, he says, “magically get you a new drug.”
It’s too early to know for sure whether deep learning could finally be the game-changer, he acknowledges, “and it’s hard to know the time frame.” But he takes encouragement from the speed at which AI has transformed image recognition and other search tasks.
“Hopefully, it could happen in chemistry,” he says.
We’re still waiting for the AlphaGo moment in chemistry and materials—for deep-learning algorithms to outwit the most accomplished human in coming up with a new drug or material. But just as AlphaGo won with a combination of uncanny strategy and an inhuman imagination, today’s latest AI programs could soon prove themselves in the lab.
And that has some scientists dreaming big. The idea, says Aspuru-Guzik, is to use AI and automation to reinvent the lab with tools such as the $30,000 molecular printer he hopes to build. It will then be up to scientists’ imagination—and that of AI—to explore the possibilities.
How the Dumb Design of a WWII Plane Led to the Macintosh
The B-17 Flying Fortress rolled off the drawing board and onto the runway in a mere 12 months, just in time to become the fearsome workhorse of the US Air Force during World War II. Its astounding toughness made pilots adore it: The B-17 could roar through angry squalls of shrapnel and bullets, emerging pockmarked but still airworthy. It was a symbol of American ingenuity, held aloft by four engines, bristling with a dozen machine guns.
Imagine being a pilot of that mighty plane. You know your primary enemy—the Germans and Japanese in your gunsights. But you have another enemy that you can’t see, and it strikes at the most baffling times. Say you’re easing in for another routine landing. You reach down to deploy your landing gear. Suddenly, you hear the scream of metal tearing into the tarmac. You’re rag-dolling around the cockpit while your plane skitters across the runway. A thought flickers across your mind about the gunners below and the other crew: “Whatever has happened to them now, it’s my fault.” When your plane finally lurches to a halt, you wonder to yourself: “How on earth did my plane just crash when everything was going fine? What have I done?”
For all the triumph of America’s new planes and tanks during World War II, a silent reaper stalked the battlefield: accidental deaths and mysterious crashes that no amount of training ever seemed to fix. And it wasn’t until the end of the war that the Air Force finally resolved to figure out what had happened.
To do that, the Air Force called upon a young psychologist at the Aero Medical Laboratory at Wright-Patterson Air Force Base near Dayton, Ohio. Paul Fitts was a handsome man with a soft Tennessee drawl, analytically minded but with a shiny wave of Brylcreemed hair, Elvis-like, which projected a certain suave nonconformity. Decades later, he’d become known as one of the Air Force’s great minds, the person tasked with the hardest, weirdest problems—such as figuring out why people saw UFOs.
For now, though, he was still trying to make his name with a newly minted PhD in experimental psychology. Having an advanced degree in psychology was still a novelty; with that novelty came a certain authority. Fitts was supposed to know how people think. But his true talent was to realize that he didn’t.
When the thousands of reports about plane crashes landed on Fitts’s desk, he could have easily looked at them and concluded that they were all the pilot’s fault—that these fools should have never been flying at all. That conclusion would have been in keeping with the times. The original incident reports themselves would typically say “pilot error,” and for decades no more explanation was needed. This was, in fact, the cutting edge of psychology at the time. Because so many new draftees were flooding into the armed forces, psychologists had begun to devise aptitude tests that would find the perfect job for every soldier. If a plane crashed, the prevailing assumption was: That person should not have been flying the plane. Or perhaps they should have simply been better trained. It was their fault.
But as Fitts pored over the Air Force’s crash data, he realized that if “accident prone” pilots really were the cause, there would be randomness in what went wrong in the cockpit. These kinds of people would get hung up on anything they operated. It was in their nature to take risks, to let their minds wander while landing a plane. But Fitts didn’t see noise; he saw a pattern. And when he went to talk to the people involved about what actually happened, they told of how confused and terrified they’d been, how little they understood in the seconds when death seemed certain.
The examples slid back and forth on a scale of tragedy to tragicomic: pilots who slammed their planes into the ground after misreading a dial; pilots who fell from the sky never knowing which direction was up; the pilots of B-17s who came in for smooth landings and yet somehow never deployed their landing gear. And others still, who got trapped in a maze of absurdity, like the one who, having jumped into a brand-new plane during a bombing raid by the Japanese, found the instruments completely rearranged. Sweaty with stress, unable to think of anything else to do, he simply ran the plane up and down the runway until the attack ended.
Fitts’s data showed that during one 22-month period of the war, the Air Force reported an astounding 457 crashes just like the one in which our imaginary pilot hit the runway thinking everything was fine. But the culprit was maddeningly obvious for anyone with the patience to look. Fitts’s colleague Alphonse Chapanis did the looking. When he started investigating the airplanes themselves, talking to people about them, sitting in the cockpits, he also didn’t see evidence of poor training. He saw, instead, the impossibility of flying these planes at all. Instead of “pilot error,” he saw what he called, for the first time, “designer error.”
The reason why all those pilots were crashing when their B-17s were easing into a landing was that the flaps and landing gear controls looked exactly the same. The pilots were simply reaching for the landing gear, thinking they were ready to land. And instead, they were pulling the wing flaps, slowing their descent, and driving their planes into the ground with the landing gear still tucked in. Chapanis came up with an ingenious solution: He created a system of distinctively shaped knobs and levers that made it easy to distinguish all the controls of the plane merely by feel, so that there was no chance of confusion even when flying in the dark.
By law, that ingenious bit of design—known as shape coding—still governs landing gear and wing flaps in every airplane today. And the underlying idea is all around you: It’s why the buttons on your videogame controller are differently shaped, with subtle texture differences so you can tell which is which. It’s why the dials and knobs in your car are all slightly different, depending on what they do. And it’s the reason your virtual buttons on your smartphone adhere to a pattern language.
But Chapanis and Fitts were proposing something deeper than a solution for airplane crashes. Faced with the prospect of soldiers losing their lives to poorly designed machinery, they invented a new paradigm for viewing human behavior. That paradigm lies behind the user-friendly world that we live in every day. They realized that it was absurd to train people to operate a machine and assume they would act perfectly under perfect conditions.
Instead, designing better machines meant figuring out how people acted without thinking, in the fog of everyday life, which might never be perfect. You couldn’t treat humans as perfectly rational sponges for training. You had to take them as they were: distracted, confused, irrational under duress. Only by imagining them at their most limited could you design machines that wouldn’t fail them.
This new paradigm took root slowly at first. But by 1984—four decades after Chapanis and Fitts conducted their first studies—Apple was touting a computer for the rest of us in one of its first print ads for the Macintosh: “On a particularly bright day in Cupertino, California, some particularly bright engineers had a particularly bright idea: Since computers are so smart, wouldn’t it make sense to teach computers about people, instead of teaching people about computers? So it was that those very engineers worked long days and nights and a few legal holidays, teaching silicon chips all about people. How they make mistakes and change their minds. How they refer to file folders and save old phone numbers. How they labor for their livelihoods, and doodle in their spare time.” (Emphasis mine.) And that easy-to-digest language molded the smartphones and seamless technology we live with today.
Along the long and winding path to a user-friendly world, Fitts and Chapanis laid the most important brick. They realized that as much as humans might learn, they would always be prone to err—and they inevitably brought presuppositions about how things should work to everything they used. This wasn’t something you could teach out of existence. In some sense, our limitations and preconceptions are what it means to be human—and only by understanding those presumptions could you design a better world.
Today, this paradigm shift has produced trillions in economic value. We now presume that apps that reorder the entire economy should require no instruction manual at all; some of the most advanced computers ever made now come with only cursory instructions that say little more than “turn it on.” This is one of the great achievements of the last century of technological progress, with a place right alongside GPS, Arpanet, and the personal computer itself.
It’s also an achievement that remains unappreciated because we assume this is the way things should be. But with the assumption that even new technologies need absolutely no explaining comes a dark side: When new gadgets make assumptions about how we behave, they force unseen choices upon us. They don’t merely defer to our desires. They shape them.
User friendliness is simply the fit between the objects around us and the ways we behave. So while we might think that the user-friendly world is one of making user-friendly things, the bigger truth is that design doesn’t rely on artifacts; it relies on our patterns. The truest material for making new things isn’t aluminum or carbon fiber. It’s behavior. And today, our behavior is being shaped and molded in ways both magical and mystifying, precisely because it happens so seamlessly.
I got a taste of this seductive, user-friendly magic recently, when I went to Miami to tour a full-scale replica of Carnival Cruise’s so-called Ocean Medallion experience. I began my tour in a fake living room, with two of the best-looking project staffers pretending to be husband and wife, showing me how the whole thing was supposed to go.
Using the app, you could reserve all your activities well before you boarded the ship. And once on board, all you needed to carry was a disk the size of a quarter; with it, any one of the 4,000 touchscreens on the ship could beam you personalized information, such as which way you needed to go for your next reservation. The experience recalled not just scenes from Her and Minority Report, but computer-science manifestos from the late 1980s that imagined a suite of gadgets that would adapt to who you are, morphing to your needs in the moment.
Behind the curtains, in the makeshift workspace, a giant whiteboard wall was covered with a sprawling map of all the inputs that flow into some 100 different algorithms that crunch every bit of a passenger’s preference behavior to create something called the “Personal Genome.” If Jessica from Dayton wanted sunscreen and a mai tai, she could order them on her phone, and a steward would deliver them in person, anywhere across the sprawling ship.
The server would greet Jessica by name, and maybe ask if she was excited about her kitesurfing lesson. Over dinner, if Jessica wanted to plan an excursion with friends, she could pull up her phone and get recommendations based on the overlapping tastes of the people she was sitting with. If some people like fitness and others love history, then maybe they'll all like a walking tour of the market at the next port.
Jessica’s Personal Genome would be recalculated three times a second by 100 different algorithms using millions of data points that encompassed nearly anything she did on the ship: how long she lingered on a recommendation for a sightseeing tour; the options that she didn’t linger on at all; how long she’d actually spent in various parts of the ship; and what was nearby at that very moment or happening soon. If, while in her room, she had watched one of Carnival’s slickly produced travel shows and seen something about a market tour at one of her ports of call, she’d later get a recommendation for that exact same tour when the time was right. “Social engagement is one of the things being calculated, and so is the nuance of the context,” one of the executives giving me the tour said.
It was like having a right-click for the real world. Standing on the mocked-up sundeck, knowing that whatever I wanted would find me, and that whatever I might want would find its way either onto the app or the screens that lit up around the cruise ship as I walked around, it wasn’t hard to see how many other businesses might try to do the same thing. In the era following World War II, the idea that designers could make the world easier to understand was a breakthrough.
But today, “I understand what I should do” has become “I don’t need to think at all.” For businesses, intuitiveness has now become mandatory, because there are fortunes to be made by making things just a tad more frictionless. “One way to view this is that creating this kind of frictionless experience is an option. Another way to look at it is that there’s no choice,” said John Padgett, the Carnival executive who had shepherded the Ocean Medallion to life. “For millennials, value is important. But hassle is more important, because of the era they’ve grown up in. It’s table stakes. You have to be hassle-free to get them to participate.”
By that logic, the real world was getting to be disappointing when compared with the frictionless ease of this increasingly virtual world. Taken as a whole, Carnival’s vision for seamless customer service that can anticipate your every whim was like an Uber for everything, powered by Netflix recommendations for meatspace. And these are in fact the experiences that many more designers will soon be striving for: invisible, everywhere, perfectly tailored, with no edges between one place and the next. Padgett described this as a “market of one,” in which everything you saw would be only the thing you want.
The Market of One suggests to me a break point in the very idea of user friendliness. When Chapanis and Fitts were planting the seeds of the user-friendly world, they had to find the principles that underlie how we expect the world to behave. They had to preach the idea that products built on our assumptions about how things should work would eventually make even the most complex things easy to understand.
Steve Jobs’ dream of a “bicycle for the mind”—a universal tool that might expand the reach of anyone—has arrived. High technology has made our lives easier, made us better at our jobs, created jobs that never existed before, and brought the people we care about closer to us. But friction also has value: It’s friction that makes us question whether we do in fact need the thing we want. Friction is the path to introspection. Infinite ease quickly becomes the path of least resistance; it saps our free will, making us submit to someone else’s guess about who we are. We can’t let that pass. We have to become cannier, more critical consumers of the user-friendly world. Otherwise, we risk blundering into more crashes that we’ll only understand after the worst has already happened.
Excerpted from USER FRIENDLY: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play by Cliff Kuang with Robert Fabricant. Published by MCD, an imprint of Farrar, Straus and Giroux November 19th 2019. Copyright © 2019 by Cliff Kuang and Robert Fabricant. All rights reserved.
A Tesla Cybertruck Mishap, a Massive Data Leak, and More News
Hackers are stealing and Elon is squealing, but first: a cartoon about subscription dreams.
Here’s the news you need to know, in two minutes or less.
Want to receive this two-minute roundup as an email every weekday? Sign up here!
Today’s News
Meet the Tesla Cybertruck, Elon Musk’s Ford-fighting pickup truck
Tesla CEO Elon Musk last night unveiled his newest baby, an all-electric pickup called the Tesla Cybertruck. He demonstrated that it can take a sledgehammer to the door with nary a scratch, and he also accidentally demonstrated that it can’t take a ball to the window. But behind the showmanship and Elon’s audible disbelief at the onstage mishap is a truck with a 500-mile range and the torque that comes from an electric motor. It represents an important new market expansion for Tesla. Now it just has to actually put the darn thing into production.
1.2 billion records found exposed online in a single server
Hackers have long used stolen personal data to break into accounts and wreak havoc. And a dark web researcher found one data trove sitting exposed on an unsecured server. The 1.2 billion records don’t include passwords, credit card numbers, or Social Security numbers, but they do contain cell phone numbers, social media profiles, and email addresses—a great start for someone trying to steal your identity.
Fast Fact: 2025
That’s the year NASA expects to launch the first dedicated mission to Europa, where water vapor was recently discovered. The mission to Jupiter’s moon will involve peering beneath Europa’s icy shell for evidence of life.
WIRED Recommends: The Gadget Lab Newsletter
First of all, you should sign up for WIRED’s Gadget Lab newsletter, because every Thursday you’ll get the best stories about the coolest gadgets right in your inbox. Second of all, it will give you access to early Black Friday and Cyber Monday deals so you can get your shopping done early.
News You Can Use:
Here’s how to hide nasty replies to your tweets on Twitter.