Quantum computing’s ‘Hello World’ moment

Does quantum computing really exist? It’s fitting that for decades this field has been haunted by the fundamental uncertainty of whether it would, eventually, prove to be a wild goose chase. But Google has collapsed this nagging superposition with research not just demonstrating what’s called “quantum supremacy,” but more importantly showing that this also is only the very beginning of what quantum computers will eventually be capable of.

This is, by all indications, an important moment in computing, but it is also very esoteric and technical in many ways. Consider, however, that in the ’60s, the decision to build computers with electronic transistors must have seemed rather esoteric as well. Yet that decision was, in a way, the catalyst for the entire Information Age.

Most of us were not lucky enough to be involved with that decision or to understand why it was important at the time. We are lucky enough to be here now — but understanding takes a bit of explanation. The best place to start is perhaps with computing and physics pioneers Alan Turing and Richard Feynman.

‘Because nature isn’t classical, dammit’

The universal computing machine envisioned by Turing and others of his generation was brought to fruition during and after World War II, progressing from vacuum tubes to hand-built transistors to the densely packed chips we have today. With it evolved an idea of computing that essentially said: If it can be represented by numbers, we can simulate it.

That meant that cloud formation, object recognition, voice synthesis, 3D geometry, complex mathematics — all that and more could, with enough computing power, be accomplished on the processor-RAM-storage machines that had become standard.

But there were exceptions. Although some were obscure things like mathematical paradoxes, it became clear as the field of quantum physics evolved that it might be one of them. It was Feynman who proposed in the early ’80s that if you want to simulate a quantum system, you’ll need a quantum system to do it with.

“I’m not happy with all the analyses that go with just the classical theory, because nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” he concluded, in his inimitable way. Classical computers, as he deemed what everyone else just called computers, were insufficient to the task.


Richard Feynman made the right call, it turns out.

The problem? There was no such thing as a quantum computer, and no one had the slightest idea how to build one. But the gauntlet had been thrown, and it was like catnip to theorists and computer scientists, who since then have vied over the idea.

Could it be that with enough ordinary computing power, power on a scale Feynman could hardly imagine — data centers with yottabytes of storage and exaflops of processing — we can in fact simulate nature down to its smallest, spookiest levels?

Or could it be that with some types of problems you hit a wall, and that you can put every computer on Earth to a task and the progress bar will only tick forward a percentage point in a million years, if that?

And, if that’s the case, is it even possible to create a working computer that can solve that problem in a reasonable amount of time?

In order to prove Feynman correct, you would have to answer all of these questions. You’d have to show that there exists a problem that is not merely difficult for ordinary computers, but that is effectively impossible for them to solve even at incredible levels of power. And you would have to not just theorize but create a new computer that not just can but does solve that same problem.

By doing so you would not just prove a theory, you would open up an entirely new class of problem-solving, of theories that can be tested. It would be a moment when an entirely new field of computing first successfully printed “hello world” and was opened up for everyone in the world to use. And that is what the researchers at Google and NASA claim to have accomplished.

In which we skip over how it all actually works


One of the quantum computers in question. I talked with that fellow in the shorts about microwave amps and attenuators for a while.

Much has already been written on how quantum computing differs from traditional computing, and I’ll be publishing another story soon detailing Google’s approach. But some basics bear mentioning here.

Classical computers are built around transistors that, by holding or vacating a charge, signify either a 1 or a 0. By linking these transistors together into more complex formations they can represent data, or transform and combine it through logic gates like AND and NOR. With a complex language specific to digital computers that has evolved for decades, we can make them do all kinds of interesting things.
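To make that concrete, here is a toy sketch in Python (purely illustrative, since real chips do this in silicon rather than software) showing how a single gate like NOR is enough to build the others:

```python
# Classical bits and gates as plain boolean functions. NOR alone is
# "universal": NOT, OR, and AND can all be composed from it.
def NOR(a, b):
    return 1 - (a | b)

def NOT(a):
    return NOR(a, a)

def OR(a, b):
    return NOT(NOR(a, b))

def AND(a, b):
    return NOR(NOT(a), NOT(b))

print(AND(1, 1), OR(1, 0), NOR(0, 0))  # -> 1 1 1
```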

Quantum computers are actually quite similar in that they have a base unit on which they perform logic to carry out various tasks. The difference is that the unit is more complex: a qubit, which represents a much richer mathematical space than simply 0 or 1. Instead, its state may be thought of as a location on a sphere, a point in 3D space. The logic is also more complicated, but still relatively basic (and helpfully still called gates): that point can be adjusted, flipped, and so on. Yet the qubit, when observed, is also digital, providing what amounts to either a 0 or 1 value.
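For the curious, here is a minimal single-qubit sketch in Python with NumPy. It illustrates the state-on-a-sphere idea and nothing more; it bears no resemblance to Google’s actual control systems:

```python
import numpy as np

# A qubit's state is a unit vector in C^2: |0> = [1, 0], |1> = [0, 1].
state = np.array([1.0, 0.0], dtype=complex)

# Gates are 2x2 unitary matrices that move the point around the sphere.
X = np.array([[0, 1], [1, 0]], dtype=complex)                # a "flip"
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # into superposition

state = H @ (X @ state)

# Observation is digital: 0 or 1, with probabilities |amplitude|^2.
probs = np.abs(state) ** 2
print(probs, np.random.choice([0, 1], size=10, p=probs))
```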

By virtue of representing a value in a richer mathematical space, these qubits and manipulations thereof can perform new and interesting tasks, including some which, as Google shows, we had no ability to do before.

A quantum of contrivance

In order to accomplish the tripartite task summarized above, first the team had to find a task that classical computers found difficult but that should be relatively easy for a quantum computer to do. The problem they settled on is in a way laughably contrived: Being a quantum computer.

In a way it makes you want to just stop reading, right? Of course a quantum computer is going to be better at being itself than an ordinary computer will be. But it’s not actually that simple.

Think of a cool old piece of electronics — an Atari 800. Sure, it’s very good at being itself and running its programs and so on. But any modern computer can simulate an Atari 800 so well that it could run those programs in orders of magnitude less time. For that matter, a modern computer can be simulated by a supercomputer in much the same way.

Furthermore, there are already ways of simulating quantum computers — they were developed in tandem with real quantum hardware so performance could be compared to theory. These simulators and the hardware they simulate differ widely, and have been greatly improved in recent years as quantum computing became more than a hobby for major companies and research institutions.


This shows the “lattice” of qubits as they were connected during the experiment (colored by the amount of error they contributed, which you don’t need to know about).

To be specific, the problem was simulating the output of a random sequence of gates and qubits in a quantum computer. Briefly stated, when a circuit of qubits does something, the result is, as with other computers, a sequence of 0s and 1s. If it isn’t calculating anything in particular, those numbers will be random — but crucially, they are “random” in a very specific, predictable way.

Think of a pachinko ball falling through its gauntlet of pins, holes and ramps. The path it takes is random in a way, but if you drop 10,000 balls from the exact same position into the exact same maze, there will be patterns in where they come out at the bottom — a spread of probabilities, perhaps more at the center and less at the edges. If you were to simulate that pachinko machine on a computer, you could test whether your simulation is accurate by comparing the output of 10,000 virtual drops with 10,000 real ones.

It’s the same with simulating a quantum computer, though of course rather more complex. Ultimately however the computer is doing the same thing: simulating a physical process and predicting the results. And like the pachinko simulator, its accuracy can be tested by running the real thing and comparing those results.
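That verification loop is easy to mock up. Here is a Python sketch with a simple Galton board standing in for the pachinko machine (my own toy stand-in, not the statistical test the team actually used):

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_balls(n_balls, n_pins=12):
    """Each pin bounces the ball left or right with equal probability;
    the final slot is just the number of rightward bounces."""
    return rng.binomial(n_pins, 0.5, size=n_balls)

# "Real" machine vs. simulator: compare the spread of 10,000 drops each.
real = np.bincount(drop_balls(10_000), minlength=13) / 10_000
sim = np.bincount(drop_balls(10_000), minlength=13) / 10_000

# Total variation distance: 0 means identical spreads, 1 means disjoint.
print(f"distance: {0.5 * np.abs(real - sim).sum():.4f}")  # small => they agree
```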

But just as it is easier to simulate a simple pachinko machine than a complex one, it’s easier to simulate a handful of qubits than a lot of them. After all, qubits are already complex. And when you get into questions of interference, slight errors and which direction they’d go, etc. — there are, in fact, so many factors that Feynman decided at some point you wouldn’t be able to account for them all. And at that point you would have entered the realm where only a quantum computer can do so — the realm of “quantum supremacy.”

Exponential please, and make it a double

After 1,400 words, there’s the phrase everyone else put right in the headline. Why? Because quantum supremacy may sound grand, but it’s only a small part of what was accomplished, and in fact this result in particular may not last forever as an example of having reached those lofty heights. But to continue.

Google’s setup, then, was simple. Set up randomly created circuits of qubits, both in its quantum computer and in the simulator. Start simple with a few qubits doing a handful of operational cycles and compare the time it takes to produce results.

Bear in mind that the simulator is not running on a laptop next to the fridge-sized quantum computer, but on Summit — a supercomputer at Oak Ridge National Lab currently rated as the most powerful single processing system in the world, and not by a little. It has 2.4 million processing cores, a little under 3 petabytes of memory, and hits about 150 petaflops.

At these early stages, the simulator and the quantum computer happily agreed — the numbers they spat out, the probability spreads, were the same, over and over.

But as more qubits and more complexity got added to the system, the time the simulator took to produce its prediction increased. That’s to be expected, just like a bigger pachinko machine. At first the times for actually executing the calculation and simulating it may have been comparable — a matter of seconds or minutes. But those numbers soon grew hour by hour as they worked their way up to 54 qubits.

When it got to the point where it took the simulator five hours to verify the quantum computer’s result, Google changed tack. More qubits isn’t the only way quantum computing gets more complex (and besides, they couldn’t add any more to their current hardware). Instead, they started performing more rounds of operations with a given circuit, which adds all kinds of complexity to the simulation for a lot of reasons that I couldn’t possibly explain.

For the quantum computer, doing another round of calculations takes a fraction of a second, and even multiplied thousands of times to get the required number of runs to produce usable probability numbers, it only ended up taking the machine several extra seconds.


You know it’s real because there’s a chart. The dotted line (added by me) is the approximate path the team took, first adding qubits (x-axis) and then complexity (y-axis).

For the simulator, verifying these results took a week — a week, on the most powerful computer in the world.

At that point the team had to stop doing the actual simulator testing, since it was so time-consuming and expensive. Yet even so, no one really claimed that they had achieved “quantum supremacy.” After all, it may have taken the biggest classical computer ever created thousands of times longer, but it was still getting done.

So they cranked the dial up another couple notches. 54 qubits, doing 25 cycles, took Google’s Sycamore system 200 seconds. Extrapolating from its earlier results, the team estimated that it would take Summit 10,000 years.

What happened is what the team called a double exponential increase. It turns out that adding qubits and cycles to a quantum computer adds a few microseconds or seconds every time — a linear increase. But every qubit you add to a simulated system makes that simulation exponentially more costly to run, and it’s the same story with cycles.

Imagine if you had to do whatever number of push-ups I did, squared, then squared again. If I did 1, you would do 1. If I did 2, you’d do 16. So far no problem. But by the time I get to 10, I’d be waiting for weeks while you finish your 10,000 push-ups. It’s not exactly analogous to Sycamore and Summit, since adding qubits and cycles had different and varying exponential difficulty increases, but you get the idea. At some point you just have to call it. And Google called it when the most powerful computer in the world would still be working on something when, in all likelihood, this planet will be a smoking ruin.
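Here is a toy cost model in Python that captures the shape of the problem. Every constant is invented; only the rough curves (linear for the hardware, exponential for a state-vector simulator) reflect the real asymmetry:

```python
# Invented constants, illustrative shapes only.
def hardware_seconds(qubits, cycles, shots=1000, per_op=25e-9):
    # On the real machine, runtime grows roughly linearly with circuit size.
    return qubits * cycles * per_op * shots

def simulator_seconds(qubits, cycles, per_amplitude=1e-9):
    # A state-vector simulator tracks 2**qubits amplitudes and must
    # update all of them on every cycle.
    return (2 ** qubits) * cycles * per_amplitude

for q in (20, 30, 40, 50):
    hw, sim = hardware_seconds(q, 20), simulator_seconds(q, 20)
    print(f"{q} qubits: hardware ~{hw:.4f}s, simulator ~{sim / 3600:.1e}h")
```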

It’s worth mentioning here that this result does in a way depend on the current state of supercomputers and simulation techniques, which could very well improve. In fact, IBM published a paper just before Google’s announcement suggesting that, theoretically, it could significantly reduce the time necessary for the task described. But it seems unlikely that classical systems will improve by multiple orders of magnitude and put the supremacy claim back in question. After all, if you add a few more qubits or cycles, the problem gets multiple orders of magnitude harder again. Even so, advances on the classical front are both welcome and necessary for further quantum development.

‘Sputnik didn’t do much, either’

So the quantum computer beat the classical one soundly on the most contrived, lopsided task imaginable, like pitting an apple versus an orange in a “best citrus” competition. So what?

Well, as Hartmut Neven, founder of Google’s Quantum AI lab, pointed out, “Sputnik didn’t do much either. It just circled the Earth and beeped.” And yet we always talk about an industry having its “Sputnik moment” — because that was when something went from theory to reality, and began the long march from reality to banality.


The ritual passing of the quantum computing core.

That seemed to be the attitude of the others on the team I talked with at Google’s quantum computing ground zero near Santa Barbara. Quantum superiority is nice, they said, but it’s what they learned in the process that mattered, by confirming that what they were doing wasn’t pointless.

Basically, it’s possible that a result like theirs could be achieved whether or not quantum computing really has a future. Pointing to one of the dozens of nearly incomprehensible graphs and diagrams I was treated to that day, hardware lead and longtime quantum theorist John Martinis explained one crucial result: the quantum computer wasn’t doing anything weird and unexpected.

This is very important when doing something completely new. It was entirely possible that in the process of connecting dozens of qubits and forcing them to dance to the tune of the control systems, flipping, entangling, disengaging, and so on — well, something might happen.

Maybe it would turn out that systems with more than 14 entangled qubits in the circuit produce a large amount of interference that breaks the operation. Maybe some unknown force would cause sequential qubit photons to affect one another. Maybe sequential gates of certain types would cause the qubit to decohere and break the circuit. It’s these unknown unknowns that have caused so much doubt over whether, as asked at the beginning, quantum computing really exists as anything more than a parlor trick.

Imagine discovering that in digital computers, if you linked too many transistors together, they all spontaneously lost their charge and went to 0. That would put a huge limitation on what a transistor-based digital computer was capable of doing. Until now, no one knew if such a limitation existed for quantum computers.

“There’s no new physics out there that will cause this to fail. That’s a big takeaway,” said Martinis. “We see the same errors whether we have a simple circuit or complex one, meaning the errors are not dependent on computational complexity or entanglement — which means the complex quantum computing going on doesn’t have fragility to it because you’re doing a complex computation.”

They operated a quantum computer at complexities higher than ever before, and nothing weird happened. And based on their observations and tests, they found no reason to believe they can’t take this same scheme up to, say, a thousand qubits and even greater complexity.

Hello world

That is the true accomplishment of the work the research team did. They found out, in the process of achieving the rather overhyped milestone of quantum superiority, that quantum computers are something that can continue to get better and to achieve more than simply an interesting experimental result.

This was by no means a given — like everything else in the world, quantum or classical, it’s all theoretical until you test it.

It means that sometime soonish, though no one can really say when, quantum computers will be something people will use to accomplish real tasks. From here on out, it’s a matter of getting better, not proving the possibility; of writing code, not theorizing whether code can be executed.

It’s going from Feynman’s proposal that a quantum computer would be needed to actually using a quantum computer for whatever you need it for. It’s the “hello world” moment for quantum computing.

Feynman, by the way, would probably not be surprised. He knew he was right.

Google’s paper describing their work was published in the journal Nature. You can read it here.


How Google took on China—and lost

https://www.technologyreview.com/s/612601/how-google-took-on-china-and-lost/


Google’s first foray into Chinese markets was a short-lived experiment. Google China’s search engine was launched in 2006 and abruptly pulled from mainland China in 2010 amid a major hack of the company and disputes over censorship of search results. But in August 2018, the investigative journalism website The Intercept reported that the company was working on a secret prototype of a new, censored Chinese search engine, called Project Dragonfly. Amid a furor from human rights activists and some Google employees, US Vice President Mike Pence called on the company to kill Dragonfly, saying it would “strengthen Communist Party censorship and compromise the privacy of Chinese customers.” In mid-December, The Intercept reported that Google had suspended its development efforts in response to complaints from the company’s own privacy team, who learned about the project from the investigative website’s reporting.

Observers talk as if the decision about whether to reenter the world’s largest market is up to Google: will it compromise its principles and censor search the way China wants? This misses the point—this time the Chinese government will make the decisions.

Google and China have been locked in an awkward tango for over a decade, constantly grappling over who leads and who follows. Charting that dance over the years reveals major shifts in China’s relationship with Google and all of Silicon Valley. To understand whether China will let Google back in, we must understand how Google and China got here, what incentives each party faces—and how artificial intelligence might have both of them dancing to a new tune.  

The right thing to do?

When www.google.cn launched in 2006, the company had gone public only two years before. The iPhone did not yet exist, nor did any Android-based smartphones. Google was about one-fifth as large and valuable as it is today, and the Chinese internet was seen as a backwater of knockoff products that were devoid of innovation. Google’s Chinese search engine represented the most controversial experiment to date in internet diplomacy. To get into China, the young company that had defined itself by the motto “Don’t be evil” agreed to censor the search results shown to Chinese users.

Central to that decision by Google leadership was a bet that by serving the market—even with a censored product—they could broaden the horizons of Chinese users and nudge the Chinese internet toward greater openness.

At first, Google appeared to be succeeding in that mission. When Chinese users searched for censored content on google.cn, they saw a notice that some results had been removed. That public acknowledgment of internet censorship was a first among Chinese search engines, and it wasn’t popular with regulators.

“The Chinese government hated it,” says Kaiser Kuo, former head of international communications for Baidu. “They compared it to coming to my house for dinner and saying, ‘I will agree to eat the food, but I don’t like it.’” Google hadn’t asked the government for permission before implementing the notice but wasn’t ordered to remove it. The company’s global prestige and technical expertise gave it leverage. China might be a promising market, but it was still dependent on Silicon Valley for talent, funding, and knowledge. Google wanted to be in China, the thinking went, but China needed Google.

Google’s censorship disclaimer was a modest victory for transparency. Baidu and other search engines in China soon followed suit. Over the next four years, Google China fought skirmishes on multiple fronts: with the Chinese government over content restrictions, with local competitor Baidu over the quality of search results, and with its own corporate leadership in Mountain View, California, over the freedom to adapt global products for local needs. By late 2009, Google controlled more than a third of the Chinese search market—a respectable share but well below Baidu’s 58%, according to data from Analysys International.

In the end, though, it wasn’t censorship or competition that drove Google out of China. It was a far-reaching hacking attack known as Operation Aurora that targeted everything from Google’s intellectual property to the Gmail accounts of Chinese human rights activists. The attack, which Google said came from within China, pushed company leadership over the edge. On January 12, 2010, Google announced, “We have decided we are no longer willing to continue censoring our results on Google.cn, and so over the next few weeks we will be discussing with the Chinese government the basis on which we could operate an unfiltered search engine within the law, if at all.”

The sudden reversal blindsided Chinese officials. Most Chinese internet users could go about their online lives with few reminders of government controls, but the Google announcement shoved cyberattacks and censorship into the spotlight. The world’s top internet company and the government of the most populous country were now engaged in a public showdown.

“[Chinese officials] were really on their back foot, and it looked like they might cave and make some kind of accommodation,” says Kuo. “All of these people who apparently did not give much of a damn about internet censorship before were really angry about it. The whole internet was abuzz with this.”

But officials refused to cede ground. “China welcomes international Internet businesses developing services in China according to the law,” a foreign ministry spokeswoman told Reuters at the time. Government control of information was—and remains—central to Chinese Communist Party doctrine. Six months earlier, following riots in Xinjiang, the government had blocked Facebook, Twitter, and Google’s YouTube in one fell swoop, fortifying the “Great Firewall.” The government was making a bet: China and its technology sector did not need Google search to succeed.

Google soon abandoned google.cn, retreating to a Hong Kong–based search engine. In response, the Chinese government decided not to fully block services like Gmail and Google Maps, and for a while it allowed sporadic access from the mainland to the Hong Kong search engine too. The two sides settled into a tense stalemate.

Google’s leaders seemed prepared to wait it out. “I personally believe that you cannot build a modern knowledge society with that kind of [censorship],” Google chairman Eric Schmidt told Foreign Policy in 2012. “In a long enough time period, do I think that this kind of regime approach will end? I think absolutely.”

Role reversal

But instead of languishing under censorship, the Chinese internet sector boomed. Between 2010 and 2015, there was an explosion of new products and companies. Xiaomi, a hardware maker now worth over $40 billion, was founded in April 2010. A month earlier Meituan, a Groupon clone that turned into a juggernaut of online-to-offline services, was born; it went public in September 2018 and is now worth about $35 billion. Didi, the ride-hailing company that drove Uber out of China and is now challenging it in international markets, was founded in 2012. Chinese engineers and entrepreneurs returning from Silicon Valley, including many former Googlers, were crucial to this dynamism, bringing world-class technical and entrepreneurial chops to markets insulated from their former employers in the US. Older companies like Baidu and Alibaba also grew quickly during these years.

The Chinese government played contradictory roles in this process. It cracked down on political speech in 2013, imprisoning critics and instituting new laws against “spreading rumors” online—a one-two punch that largely suffocated political discussion on China’s once-raucous social-media sites. Yet it also launched a high-profile campaign promoting “mass entrepreneurship and mass innovation.” Government-funded startup incubators spread across the country, as did government-backed venture capital.

That confluence of forces brought results. Services like Meituan flourished. So did Tencent’s super-app WeChat, a “digital Swiss Army knife” that combines aspects of WhatsApp, PayPal, and dozens of other apps from the West. E-commerce behemoth Alibaba went public on the New York Stock Exchange in September 2014, selling $25 billion worth of shares—still the most valuable IPO in history.

Amidst this home-grown success, the Chinese government decided to break the uneasy truce with Google. In mid-2014, a few months before Alibaba’s IPO, the government blocked virtually all Google services in China, including many considered essential for international business, such as Gmail, Google Maps, and Google Scholar. “It took us by surprise, as we felt Google was one of those valuable properties [that they couldn’t afford to block],” says Charlie Smith, the pseudonymous cofounder of GreatFire, an organization that tracks and circumvents Chinese internet controls.

The Chinese government had pulled off an unexpected hat trick: locking out the Silicon Valley giants, censoring political speech, and still cultivating an internet that was controllable, profitable, and innovative.

AlphaGo your own way

With the Chinese internet blossoming and the government not backing down, Google began to search for ways back into China. It tried out less politically sensitive products—an “everything but search” strategy—but with mixed success.

In 2015, rumors swirled that Google was close to bringing its Google Play app store back to China, pending Chinese government approval—but the promised app store never materialized. This was followed by a partnership with Mobvoi, a Chinese smart-watch maker founded by an ex-Google employee, to make voice search available on Android Wear in China. Google later invested in Mobvoi, its first direct investment in China since 2010.

In March 2017, there were reports that authorities would allow Google Scholar back in. They didn’t. Reports that Google would launch a mobile-app store in China together with NetEase, a Chinese company, similarly came to naught, though Google was permitted to relaunch its smartphone translation app.

Then, in May 2017, a showdown between AlphaGo, the Go-playing program built by Google sibling company DeepMind, and Ke Jie, the world’s number one human player, was allowed to take place in Wuzhen, a tourist town outside Shanghai. AlphaGo won all three games in the match—a result that the government had perhaps foreseen. Live-streaming of the match within China was forbidden, and not only in the form of video: as the Guardian put it, “outlets were banned from covering the match live in any way, including text commentary, social media, or push notifications.” DeepMind broadcast the match outside China.

During this same period, Chinese censors quietly rolled back some of the openings that Google’s earlier China operations had catalyzed. In 2016, Chinese search engines began removing the censorship disclaimers that Google had pioneered. In 2017, the government launched a new crackdown on virtual private networks (VPNs), software widely used for circumventing censorship. Meanwhile, Chinese authorities began rolling out extensive AI-powered surveillance technologies across the country, constructing what some called a “21st-century police state” in the western region of Xinjiang, home to the country’s Muslim Uighurs.

Despite the retrograde climate, Google capped off 2017 with a major announcement: the launch of a new AI research center in Beijing. Google Cloud’s Chinese-born chief scientist, Fei-Fei Li, would oversee the new center. “The science of AI has no borders,” she wrote in the announcement of the center’s launch. “Neither do its benefits.” (Li left Google in September 2018 and returned to Stanford University, where she is a professor.)

If the research center was a public symbol of Google’s continued efforts to gain a foothold in China, Google was also working quietly to accommodate Chinese government restrictions. Dragonfly, the censored-search-engine prototype, which has been demonstrated for Chinese officials, blacklists key search terms; it would be operated as part of a joint venture with an unnamed Chinese partner. The documents The Intercept obtained said the app would still tell users when results had been censored.

Other aspects of the project are particularly troubling. Prototypes of the app reportedly link users’ searches to their mobile-phone number, opening the door to greater surveillance and possibly arrest if people search for banned material.

In a speech to the Dragonfly team, later leaked by The Intercept, Ben Gomes, Google’s head of search, explained Google’s aims. China, he said, is “arguably the most interesting market in the world today.” Google was not just trying to make money by doing business in China, he said, but was after something bigger. “We need to understand what is happening there in order to inspire us,” he said. “China will teach us things that we don’t know.”

In early December, Google CEO Sundar Pichai told a Congressional committee that “right now we have no plans to launch in China,” though he would not rule out future plans. The question is, if Google wants to come back to China, does China want to let it in?

China’s calculus

To answer that question, try thinking like an advisor to President Xi Jinping.

Bringing Google search back certainly has upsides. China’s growing number of knowledge workers need access to global news and research, and Baidu is notoriously bad at turning up relevant results from outside China. Google could serve as a valuable partner to Chinese companies looking to expand internationally, as it has demonstrated in a patent-sharing partnership with Tencent and a $550 million investment in e-commerce giant JD. Google’s reentry would also help legitimize the Communist Party’s approach to internet governance, a signal that China is an indispensable market—and an open one—as long as you “play by the rules.”

But from the Chinese government’s perspective, these potential upsides are marginal. Chinese citizens who need to access the global internet can still usually do so through VPNs (though it is getting harder). Google doesn’t need to have a business in China to help Chinese internet giants gain business abroad. And the giants of Silicon Valley have already ceased their public criticism of Chinese internet censorship, and instead extol the country’s dynamism and innovation.

By contrast, the political risks of permitting Google to return loom large to Xi and his inner circle. Hostility toward both China and Silicon Valley is high and rising in American political circles. A return to China would put Google in a political pressure cooker. What if that pressure—via antitrust action or new legislation—effectively forced the company to choose between the American and Chinese markets? Google’s sudden exit in 2010 marked a major loss of face for the Chinese government in front of its own citizens. If Chinese leaders give the green light to Project Dragonfly, they run the risk of that happening again.

A savvy advisor would be likely to think that these risks—to Xi, to the Communist Party, and to his or her own career—outweighed the modest gains to be had from allowing Google’s return. The Chinese government oversees a technology sector that is profitable, innovative, and driven largely by domestic companies—an enviable position to be in. Allowing Google back in would only diminish its leverage. Better, then, to stick with the status quo: dangle the prospect of full market access while throwing Silicon Valley companies an occasional bone by permitting peripheral services like translation.

Google’s gamble

Google does have one factor in its favor. If it first entered China during the days of desktop internet, and departed at the dawn of the mobile internet, it is now trying to reenter in the era of AI. The Chinese government places high hopes on AI as an all-purpose tool for economic activity, military power, and social governance, including surveillance. And Google and its Alphabet sibling DeepMind are the global leaders in corporate AI research.

This is probably why Google has held publicity stunts like the AlphaGo match and an AI-powered “Guess the Sketch” game on WeChat, as well as taking more substantive steps like establishing the Beijing AI lab and promoting Chinese use of TensorFlow, an artificial-intelligence software library developed by the Google Brain team. Taken together, these efforts constitute a sort of artificial-intelligence lobbying strategy designed to sway the Chinese leadership.

This pitch, however, faces problems on at least three battlegrounds: Beijing; Washington, DC; and Mountain View, California.

Chinese leaders have good reason to feel they’re already getting the best of both worlds. They can take advantage of software development tools like TensorFlow and they still have a prestigious Google research lab to train Chinese AI researchers, all without granting Google market access.

In Washington, meanwhile, American security officials are annoyed that Google is actively courting a geopolitical rival while refusing to work with the Pentagon on AI projects because its employees object to having their work used for military ends.

Those employees are the key to the third battleground. They’ve demonstrated the ability to mobilize quickly and effectively, as with the protests against US defense contracts and a walkout last November over how the company has dealt with sexual harassment. In late November more than 600 Googlers signed an open letter demanding that the company drop the Dragonfly project, writing, “We object to technologies that aid the powerful in oppressing the vulnerable.” Daunting as these challenges sound—and high as the costs of pursuing the Chinese market may be—they haven’t entirely deterred Google’s top brass. Though the development of Dragonfly appears to have, at the very least, paused, the wealth and dynamism that make China so attractive to Google also mean the decision of whether or not to do business there is no longer the company’s to make.

“I know people in Silicon Valley are really smart, and they’re really successful because they can overcome any problem they face,” says Bill Bishop, a digital-media entrepreneur with experience in both markets. “I don’t think they’ve ever faced a problem like the Chinese Communist Party.”

Matt Sheehan is a fellow at MacroPolo and worked with Kai-Fu Lee on his book AI Superpowers.


All the reasons 2018 was a breakout year for DNA data

https://www.technologyreview.com/s/612688/all-the-reasons-2018-was-a-breakout-year-for-dna-data/


Genetic IQ tests. DNA detective work. Virtual drug trials. These were some of the surprising new uses of DNA information that emerged over the last 12 months as genetic studies became larger than ever before.

Think back to 2003. We had just decoded the first human genome, and scientists still spent their time searching for very specific gene errors that cause quite serious inherited problems, like muscular dystrophy. Now, though, we’re dealing with information on millions of genomes. And the gene hunts are not only bigger—they’re fundamentally different. They’re starting to unearth the genetic roots of common illnesses and personality traits, and they’re making genetic privacy all but impossible.

Here are the trends you need to know, from MIT Technology Review’s own coverage over the last year.

Consumers: It’s all about genetic data. Now it’s being collected on millions of people, in national efforts and commercial ones too.

Last February, we reported that 12 million people had already taken consumer DNA tests. Since that figure has been reliably doubling every year, it’s probably up to 25 million by now. In fact, DNA reports are now a mass-appeal item. During the Thanksgiving weekend, the gene test from AncestryDNA, which tells people where their ancestors are from, was among the top-selling items.

Big data: To understand the genome, scientists say, they need to study as many people as they can, all at once. In 2018, several gene hunts broke the million-person mark for the first time. These included searches for the genetic bases of insomnia and educational success. To do it, researchers tapped national biobanks and also got help from 23andMe, the popular gene test company, whose users can sign up to participate in research.

Polygenic scores: Some diseases are due to a single gene that goes wrong. But big killers like heart disease aren’t like that—instead, they’re influenced by hundreds of genetic factors. That’s why a new way of predicting risks from a person’s entire genome was the most important story of the year (see polygenic scores on our 10 Breakthrough Technologies list). The new scores can handicap a person’s odds of breast cancer, of getting through college, or even of being tall enough for the NBA. In 2019, keep an eye on gene-test companies like 23andMe and Color Genomics to see if they launch such gene predictions commercially.
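Under the hood, a polygenic score is just a weighted sum. Here is a Python sketch with invented numbers; real effect sizes come from genome-wide association studies, not a random number generator:

```python
import numpy as np

rng = np.random.default_rng(1)
n_variants = 500

# Per-variant weights (effect sizes) -- invented here for illustration.
effect_sizes = rng.normal(0.0, 0.05, n_variants)

# One person's genotype: 0, 1, or 2 copies of the risk allele per variant.
genotype = rng.integers(0, 3, n_variants)

# The score is the dot product; it's interpreted against a population spread.
print(f"polygenic score: {genotype @ effect_sizes:.3f}")
```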

Genetic IQ tests: Genes don’t affect just what we look like, but who we are. Now some scientists say these same DNA scores can offer a decent guess at how smart a kid will be later in life. The unanswered question: how should we use this information, if at all?

Testing embryos: Yes, it’s probably going to be exactly like that sci-fi movie Gattaca, the one about a world where parents pick their kids from a petri dish. Already, IVF centers run gene tests and let parents pick embryos to avoid certain serious disease risks. Now Genomic Prediction, a New Jersey company we exclusively covered in 2017, says it’s ready to begin testing embryos to grade their future educational potential. So forget CRISPR babies—designer kids are already here.

Racial bias: Here’s something that’s not so great: about 80% of the DNA ever analyzed is from white people of European ancestry. It means some new discoveries and commercial tests only work in white people and don’t apply to Africans, Asians, Latinos, or other ancestry groups whose genetic patterns differ. There are good scientific reasons to expand the gene hunt, says Stanford University geneticist Carlos D. Bustamante. We may be missing health breakthroughs by looking too narrowly.

Mimicking clinical trials: Did you know you’re part of a gigantic, random experiment? It’s true. Or at least some geneticists see you that way. And now they’ve come up with a very clever trick called Mendelian randomization that uses people’s medical information to predict which new drugs will work for them and which won’t.
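The core of the trick fits in a few lines. This Python sketch uses simulated data and the simplest estimator, the Wald ratio; real Mendelian randomization studies are far more careful about confounders and assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Allele counts are effectively randomized at conception -- nature's RCT.
variant = rng.integers(0, 3, n).astype(float)
exposure = 0.5 * variant + rng.normal(0, 1, n)   # variant shifts the exposure
outcome = 0.3 * exposure + rng.normal(0, 1, n)   # exposure shifts the outcome

# Wald ratio: (variant -> outcome effect) / (variant -> exposure effect).
beta_zx = np.polyfit(variant, exposure, 1)[0]
beta_zy = np.polyfit(variant, outcome, 1)[0]
print(f"estimated causal effect: {beta_zy / beta_zx:.2f}")  # ~0.30
```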

Crime fighters: The more DNA data is out there, the easier it is to find out who a drop of blood or a hair follicle belongs to. That’s what the Golden State Killer learned in April, when he was caught by sleuths employing an informal collection of DNA profiles and genealogical trees. In fact, the way the math works out, genetic anonymity is kaput—since pretty much all of us have a relative in a DNA database already. One genetic genealogist, CeCe Moore, told us that she’s identified 27 murderers and rapists since April. A very good year.


The day I tasted climate change

https://www.technologyreview.com/s/612658/the-day-i-tasted-climate-change/


In early November, gale-force winds whipped a brush fire into an inferno that nearly consumed the town of Paradise, California, and killed at least 86 people.

By the second morning, I could smell the fire from one foot outside my door in Berkeley, some 130 miles from the flames. Within a week, my eyes and throat stung even when I was indoors.

Air quality maps warned that the soot-filled air blanketing the Bay Area had reached “very unhealthy” levels. For days, nearly everyone wore masks as they walked their dogs, rode the train, and carried out errands. Most of those thin-paper respirators were of dubious value. Stores quickly ran out of the good ones—the “N-95s” that block 95% of fine particles—and sold out of air purifiers, too.

People traded tips about where they could be found, and rushed to stores rumored to have a new supply. Others packed up and drove hours away in search of a safe place to wait it out. By the time my masks arrived by mail, I was in Ohio, having decided to move up my Thanksgiving travel to escape the smoke.

Climate change doesn’t ignite wildfires, but it’s intensifying the hot, dry summer conditions that have helped fuel some of California’s deadliest and most destructive fires in recent years.

I’ve long understood that the dangers of global warming are real and rising. I’ve seen its power firsthand in the form of receding glaciers, dried lake beds, and Sierra tree stands taken down by bark beetles.

This is the first time, though, that I smelled and tasted it in my home.

Obviously, a sore throat and a flight change are trivial compared with the lives and homes lost in the Camp Fire. But after I spent a week living under a haze of smoke, it did resonate on a deeper level that we’re really going to let this happen.

Thousands if not millions of people are going to starve, drown, burn to death, or live out lives of misery because we’ve failed to pull together in the face of the ultimate tragedy of the commons. Many more will find themselves scrambling for basic survival goods and fretting over the prospect of more fires, more ferocious hurricanes, and summer days of blistering heat.

There’s no solving climate change any longer. There’s only living with it and doing everything in our power to limit the damage.

And seeing an entire community near one of the world’s richest regions all but wiped out, while retailers failed to meet critical public needs in the aftermath, left me with a dimmer view of our ability to grapple with the far greater challenges to come.

Suffering

Some observers believe that once the world endures enough climate catastrophes, we’ll finally come to our collective senses and make some last-minute push to address the problem. But for many, that will be too late.

Carbon dioxide takes years to reach its full warming effect and persists for millennia. We may well have already emitted enough to sail past a dangerous 1.5 ˚C of warming. And at the rate we’re going, it could take hundreds of years to shift to a global energy system that doesn’t pump out far more climate pollution—every ton of which only makes the problem worse.

President Barack Obama’s top science advisor, John Holdren, once said that our options for dealing with climate change are cutting emissions, adapting (building, say, higher seawalls or city cooling centers), and suffering.

Since we’re utterly failing in the first category, far more of the job will inevitably come down to the latter two. By choosing not to deal with the root cause, we’ve opted to deal with the problem in the most expensive, shortsighted, destructive, and cruel way possible.

We could have overhauled the energy system. Instead we’ll have to overhaul almost every aspect of life: expanding emergency response, building more hospitals, fortifying our shorelines, upgrading our building materials, reengineering the way we grow and distribute food, and much more.

And even if we pay the high price to do all that, we’ll still have worse outcomes than if we had tackled the core problem in the first place. We’ve decided to forever diminish our quality of life, sense of security, and collective odds of living out happy and healthy lives. And we’ve done it not just for ourselves, but for our children and foreseeable future generations.

Uneven and unfair

The devastation from climate change will manifest in different ways in different places, in highly uneven and unfair ways: severe drought and famine across much of Africa and Australia, shrinking water supplies for the billions who rely on the glaciers of the Tibetan Plateau, and the threat of forced displacement for at least tens of millions exposed to rising sea levels in South Asia.

In California, higher temperatures, declining snowpack, and shifting precipitation patterns mean more people already live under the threat of droughts and fires.

I’ve smelled or spotted four major blazes in the last two years. This July, a close friend and her pregnant sister sped down Interstate 580, through the Altamont Pass, as flames raged on both sides. Another friend raced into Paradise to evacuate her father on the morning that the Camp Fire tore through the town. Still another sifted ashes in the remnants of homes a few days later, looking for bone fragments and other human remains as part of a local search and rescue team.

Global warming has already doubled the area scorched by forest fires during the last three decades across the American West, according to an earlier study in Proceedings of the National Academy of Sciences. By midcentury, that footprint could swell again by a multiple of two to six, according to the recent US National Climate Assessment (see “Cutting emissions could prevent tens of thousands of heat deaths annually”).

Self-preservation

None of this is a defense for throwing up our hands—it’s an argument for redoubling our efforts. Even if we’re not going to “solve” climate change, we’re going to have to work feverishly to manage it, like a chronic disease. We need to learn to live with the symptoms while finding ways to keep them from getting worse.

Every additional gigaton of greenhouse gas we put into the atmosphere from this point forward only increases the economic costs, ecosystem devastation, and human suffering.

So the question is: What’s it going to take to finally bring about the public policies, accelerated innovation, and collective will needed to force rapid change?

One hopes that as climate change becomes increasingly undeniable, and its effects come to feel like real and immediate threats to our well-being, people will demand that our leaders and industries take aggressive action.

Research has found that experiencing higher temperatures and extreme weather events is correlated with greater belief in or concern about climate change. And younger people, who are staring at a much grimmer future, are considerably more likely to believe that climate change is real and action is required—even among millennial Republicans in the US.

Overwhelmed

But as I watched the death count rise from simultaneous infernos across California last month, it struck me that another possibility was just as plausible: the destruction of climate change will overwhelm society in ways that make us less likely to undertake the sacrifices necessary for a safer future.

We’re likely to face a shrinking economy, skyrocketing emergency response costs, and a staggering price tag for adaptation measures like seawalls — all while we still need to race to zero emissions as quickly as possible.

People may dig deep for certain adaptations that promise to improve their security immediately — but the perceived return on investment in cutting emissions could shrink as extreme weather becomes more common and costly. That’s because, again, carbon dioxide works on a time delay, and the problem only stops getting worse — it doesn’t disappear — once we’ve reached zero emissions (unless we figure out how to suck massive amounts of it from the atmosphere as well).

As more of our money, time, and energy gets sucked up by the immediate demands of overlapping tragedies, I fear people may become less willing to invest increasingly limited resources in the long-term common good.

Put another way, one paradoxical impact of climate change is that it could make many even more reluctant to take it on.

Worse to come

When I started writing seriously about climate change a little more than five years ago, the dangers largely seemed distant and abstract. Without realizing it, most of this time I’ve carried along an assumption that we will somehow, eventually, confront the problem in a meaningful way. We don’t have a choice. So sooner or later, we’ll do the right thing. 

But after two years closely reporting and writing on clean energy technologies here, it has slowly dawned on me that, well, maybe not. While we absolutely could accomplish much of the necessary transformation with existing or emerging technologies, the sheer scale of the overhaul required and the depth of the entrenched interests may add up to insurmountable levels of inertia.

So the Camp Fire and its aftermath didn’t singlehandedly push me from optimism to pessimism. The more I’ve come to understand the true parameters of the problem, the more I’ve tilted toward the dire side of the spectrum.

But the surreal scene of high-paid workers walking through the murky yellow air of downtown San Francisco, masks inadvertently color-coordinated with their earbuds in the capital of techno-utopianism, certainly widened my frame of the possible—and felt like a taste of things to come.
