Inside Olympic Destroyer, the Most Deceptive Hack in History

Just before 8 pm on February 9, 2018, high in the northeastern mountains of South Korea, Sang-jin Oh was sitting on a plastic chair a few dozen rows up from the floor of Pyeongchang’s vast, pentagonal Olympic Stadium. He wore a gray and red official Olympics jacket that kept him warm despite the near-freezing weather, and his seat, behind the press section, had a clear view of the raised, circular stage a few hundred feet in front of him. The 2018 Winter Olympics opening ceremony was about to start.

As the lights darkened around the roofless structure, anticipation buzzed through the 35,000-person crowd, the glow of their phone screens floating like fireflies around the stadium. Few felt that anticipation more intensely than Oh. For more than three years, the 47-year-old civil servant had been director of technology for the Pyeongchang Olympics organizing committee. He’d overseen the setup of an IT infrastructure for the games comprising more than 10,000 PCs, more than 20,000 mobile devices, 6,300 Wi-Fi routers, and 300 servers in two Seoul data centers.

That immense collection of machines seemed to be functioning perfectly—almost. Half an hour earlier, he’d gotten word about a nagging technical issue. The source of that problem was a contractor, an IT firm from which the Olympics were renting another hundred servers. The contractor’s glitches had been a long-term headache. Oh’s response had been annoyance: Even now, with the entire world watching, the company was still working out its bugs?

Andy Greenberg is a WIRED senior writer. This story is excerpted from his book Sandworm, to be published on November 5, 2019.

The data centers in Seoul, however, weren’t reporting any such problems, and Oh’s team believed the issues with the contractor were manageable. He didn’t yet know that they were already preventing some attendees from printing tickets that would let them enter the stadium. So he’d settled into his seat, ready to watch a highlight of his career unfold.

Ten seconds before 8 pm, numbers began to form, one by one, in projected light around the stage, as a choir of children’s voices counted down in Korean to the start of the event:

Sip! Gu! Pal! Chil!

In the middle of the countdown, Oh’s Samsung Galaxy Note8 phone abruptly lit up. He looked down to see a message from a subordinate on KakaoTalk, a popular Korean messaging app. The message shared perhaps the worst possible news Oh could have received at that exact moment: Something was shutting down every domain controller in the Seoul data centers, the servers that formed the backbone of the Olympics’ IT infrastructure.

As the opening ceremony got underway, thousands of fireworks exploded around the stadium on cue, and dozens of massive puppets and Korean dancers entered the stage. Oh saw none of it. He was texting furiously with his staff as they watched their entire IT setup go dark. He quickly realized that what the partner company had reported wasn’t a mere glitch. It had been the first sign of an unfolding attack. He needed to get to his technology operations center.

As Oh made his way out of the press section toward the exit, reporters around him had already begun complaining that the Wi-Fi seemed to have suddenly stopped working. Thousands of internet-linked TVs showing the ceremony around the stadium and in 12 other Olympic facilities had gone black. Every RFID-based security gate leading into every Olympic building was down. The Olympics’ official app, including its digital ticketing function, was broken too; when it reached out for data from backend servers, they suddenly had none to offer.

The Pyeongchang organizing committee had prepared for this: Its cybersecurity advisory group had met 20 times since 2015. They’d conducted drills as early as the summer of the previous year, simulating disasters like cyberattacks, fires, and earthquakes. But now that one of those nightmare scenarios was playing out in reality, the feeling, for Oh, was both infuriating and surreal. “It’s actually happened,” Oh thought, as if to shake himself out of the sense that it was all a bad dream.

Once Oh had made his way through the crowd, he ran to the stadium’s exit, out into the cold night air, and across the parking lot, now joined by two other IT staffers. They jumped into a Hyundai SUV and began the 45-minute drive east, down through the mountains to the coastal city of Gangneung, where the Olympics’ technology operations center was located.

From the car, Oh called staffers at the stadium and told them to start distributing Wi-Fi hot spots to reporters and to tell security to check badges manually, because all RFID systems were down. But that was the least of their worries. Oh knew that in just over two hours the opening ceremony would end, and tens of thousands of athletes, visiting dignitaries, and spectators would find that they had no Wi-Fi connections and no access to the Olympics app, full of schedules, hotel information, and maps. The result would be a humiliating confusion. If they couldn’t recover the servers by the next morning, the entire IT backend of the organizing committee—responsible for everything from meals to hotel reservations to event ticketing—would remain offline as the actual games got underway. And a kind of technological fiasco that had never before struck the Olympics would unfold in one of the world’s most wired countries.


Oh arrived at the technology operations center in Gangneung by 9 pm, halfway into the opening ceremony. The center consisted of a large open room with desks and computers for 150 staffers; one wall was covered with screens. When he walked in, many of those staffers were standing, clumped together, anxiously discussing how to respond to the attack—a problem compounded by the fact that they’d been locked out of many of their own basic services, like email and messaging.

All nine of the Olympic staff’s domain controllers, the powerful machines that governed which employee could access which computers in the network, had somehow been paralyzed, crippling the entire system. The staff decided on a temporary workaround: They set all the surviving servers that powered some basic services, such as Wi-Fi and the internet-linked TVs, to bypass the dead gatekeeper machines. By doing so, they managed to bring those bare-minimum systems back online just minutes before the end of the ceremony.

Over the next two hours, as they attempted to rebuild the domain controllers to re-create a more long-term, secure network, the engineers would find again and again that the servers had been crippled. Some malicious presence in their systems remained, disrupting the machines faster than they could be rebuilt.

A few minutes before midnight, Oh and his administrators reluctantly decided on a desperate measure: They would cut off their entire network from the internet in an attempt to isolate it from the saboteurs who they figured must still have maintained a presence inside. That meant taking down every service—even the Olympics’ public website—while they worked to root out whatever malware infection was tearing apart their machines from within.

For the rest of the night, Oh and his staff worked frantically to rebuild the Olympics’ digital nervous system. By 5 am, a Korean security contractor, AhnLab, had managed to create an antivirus signature that could help Oh’s staff vaccinate the network’s thousands of PCs and servers against the mysterious malware that had infected them, a malicious file that Oh says was named simply winlogon.exe.

At 6:30 am, the Olympics’ administrators reset staffers’ passwords in hopes of locking out whatever means of access the hackers might have stolen. Just before 8 that morning, almost exactly 12 hours after the cyberattack on the Olympics had begun, Oh and his sleepless staffers finished reconstructing their servers from backups and began restarting every service.

Amazingly, it worked. The day’s skating and ski jumping events went off with little more than a few Wi-Fi hiccups. R2-D2-style robots puttered around Olympic venues, vacuuming floors, delivering water bottles, and projecting weather reports. A Boston Globe reporter later called the games “impeccably organized.” One USA Today columnist wrote that “it’s possible no Olympic Games have ever had so many moving pieces all run on time.” Thousands of athletes and millions of spectators remained blissfully unaware that the Olympics’ staff had spent its first night fighting off an invisible enemy that threatened to throw the entire event into chaos.

Illustration: Joan Wong

Within hours of the attack, rumors began to trickle out into the cybersecurity community about the glitches that had marred the Olympics’ website, Wi-Fi, and apps during the opening ceremony. Two days after the ceremony, the Pyeongchang organizing committee confirmed that it had indeed been the target of a cyberattack. But it refused to comment on who might have been behind it. Oh, who led the committee’s response, has declined to discuss any possible source of the attack with WIRED.

The incident immediately became an international whodunit: Who would dare to hack the Olympics? The Pyeongchang cyberattack would turn out to be perhaps the most deceptive hacking operation in history, using the most sophisticated means ever seen to confound the forensic analysts searching for its culprit.

The difficulty of proving the source of an attack—the so-called attribution problem—has plagued cybersecurity since practically the dawn of the internet. Sophisticated hackers can route their connections through circuitous proxies and blind alleys, making it almost impossible to follow their tracks. Forensic analysts have nonetheless learned how to determine hackers’ identities by other means, tying together clues in code, infrastructure connections, and political motivations.

In the past few years, however, state-sponsored cyberspies and saboteurs have increasingly experimented with another trick: planting false flags. Those evolving acts of deception, designed to throw off both security analysts and the public, have given rise to fraudulent narratives about hackers’ identities that are difficult to dispel, even after governments announce the official findings of their intelligence agencies. It doesn’t help that those official findings often arrive weeks or months later, with the most convincing evidence redacted to preserve secret investigative techniques and sources.

When North Korean hackers breached Sony Pictures in 2014 to prevent the release of the Kim Jong-un assassination comedy The Interview, for instance, they invented a hacktivist group called Guardians of Peace and tried to throw off investigators with a vague demand for “monetary compensation.” Even after the FBI officially named North Korea as the culprit and the White House imposed new sanctions against the Kim regime as punishment, several security firms continued to argue that the attack must have been an inside job, a story picked up by numerous news outlets—including WIRED.

When state-sponsored Russian hackers stole and leaked emails from the Democratic National Committee and Hillary Clinton’s campaign in 2016, we now know that the Kremlin likewise created diversions and cover stories. It invented a lone Romanian hacker named Guccifer 2.0 to take credit for the hacks; it also spread the rumors that a murdered DNC staffer named Seth Rich had leaked the emails from inside the organization—and it distributed many of the stolen documents through a fake whistle-blowing site called DCLeaks. Those deceptions became conspiracy theories, fanned by right-wing commentators and then-presidential candidate Donald Trump.

The deceptions generated a self-perpetuating ouroboros of mistrust: Skeptics dismissed even glaring clues of the Kremlin’s guilt, like Russian-language formatting errors in the leaked documents, seeing those giveaways as planted evidence. Even a joint statement from US intelligence agencies four months later naming Russia as the perpetrator couldn’t shake the conviction of disbelievers. They persist even today: In an Economist/YouGov poll earlier this year, only about half of Americans said they believed Russia interfered in the election.

With the malware that hit the Pyeongchang Olympics, the state of the art in digital deception took several evolutionary leaps forward. Investigators would find in its code not merely a single false flag but layers of false clues pointing at multiple potential culprits. And some of those clues were hidden deeper than any cybersecurity analyst had ever seen before.

From the start, the geopolitical motivations behind the Olympics sabotage were far from clear. The usual suspect for any cyberattack in South Korea is, of course, North Korea. The hermit kingdom has tormented its capitalist neighbors with military provocations and low-grade cyberwar for years. In the run-up to the Olympics, analysts at the cybersecurity firm McAfee had warned that Korean-speaking hackers had targeted the Pyeongchang Olympic organizers with phishing emails and what appeared to be espionage malware. At the time, McAfee analysts hinted in a phone call with me that North Korea was likely behind the spying scheme.

But there were contradictory signals on the public stage. As the Olympics began, the North seemed to be experimenting with a friendlier approach to geopolitics. The North Korean dictator, Kim Jong-un, had sent his sister as a diplomatic emissary to the games and had invited South Korea’s president, Moon Jae-in, to visit the North Korean capital of Pyongyang. The two countries had even taken the surprising step of combining their Olympic women’s hockey teams in a show of friendship. Why would North Korea launch a disruptive cyberattack in the midst of that charm offensive?

Then there was Russia. The Kremlin had its own motive for an attack on Pyeongchang. Investigations into doping by Russian athletes had led to a humiliating result in advance of the 2018 Olympics: Russia was banned. Its athletes would be allowed to compete but not to wear Russian flags or accept medals on behalf of their country. For years in the lead-up to that verdict, a state-sponsored Russian hacker team known as Fancy Bear had been retaliating, stealing and leaking data from Olympics-related targets. Russia’s exile from the games was exactly the sort of slight that might inspire the Kremlin to unleash a piece of disruptive malware against the opening ceremony. If the Russian government couldn’t enjoy the Olympics, then no one would.

If Russia had been trying to send a message with an attack on the Olympics’ servers, however, it was hardly a direct one. Days before the opening ceremony, it had preemptively denied any Olympics-targeted hacking. “We know that Western media are planning pseudo-investigations on the theme of ‘Russian fingerprints’ in hacking attacks on information resources related to the hosting of the Winter Olympic Games in the Republic of Korea,” Russia’s Foreign Ministry had told Reuters. “Of course, no evidence will be presented to the world.”

In fact, there would be plenty of evidence vaguely hinting at Russia’s responsibility. The problem, it would soon become clear, was that there seemed to be just as much evidence pointing in a tangle of other directions too.


Three days after the opening ceremony, Cisco’s Talos security division revealed that it had obtained a copy of Olympics-targeted malware and dissected it. Someone from the Olympics organizing committee or perhaps the Korean security firm AhnLab had uploaded the code to VirusTotal, a common database of malware samples used by cybersecurity analysts, where Cisco’s reverse-engineers found it. The company published its findings in a blog post that would give that malware a name: Olympic Destroyer.

In broad outline, Cisco’s description of Olympic Destroyer’s anatomy called to mind two previous Russian cyberattacks, NotPetya and Bad Rabbit. As with those earlier attacks, Olympic Destroyer used a password-stealing tool, then combined those stolen passwords with remote access features in Windows that allowed it to spread among computers on a network. Finally, it used a data-destroying component to delete the boot configuration from infected machines before disabling all Windows services and shutting the computer down so that it couldn’t be rebooted. Analysts at the security firm CrowdStrike would find other apparent Russian calling cards, elements that resembled a piece of Russian ransomware known as XData.

Yet there seemed to be no clear code matches between Olympic Destroyer and the earlier NotPetya or Bad Rabbit worms. Although Olympic Destroyer contained similar features, those features had apparently been re-created from scratch or copied from elsewhere.

The deeper analysts dug, the stranger the clues became. The data-wiping portion of Olympic Destroyer shared characteristics with a sample of data-deleting code that had been used not by Russia but by the North Korean hacker group known as Lazarus. When Cisco researchers put the logical structures of the data-wiping components side by side, they seemed to roughly match. And both destroyed files with the same distinctive trick of deleting just their first 4,096 bytes. Was North Korea behind the attack after all?
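The "first 4,096 bytes" trick the analysts flagged can be illustrated with a few lines of code. This is a minimal sketch of the general technique, not the malware's actual implementation: it zeroes out only the head of a file, leaving the rest of the bytes in place but the file effectively ruined. The function name and the scratch-file demo are our own.

```python
# Sketch of the distinctive wiping trick described above: instead of
# deleting a file outright, overwrite just its first 4,096 bytes.
# Demonstrated here on a harmless temporary file.
import os
import tempfile

CHUNK = 4096  # the wipe size the analysts noted in both samples

def wipe_head(path: str) -> None:
    """Zero out the first CHUNK bytes of the file at `path`, in place."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * min(CHUNK, size))

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.close(fd)
    with open(path, "wb") as f:
        f.write(b"A" * 10000)
    wipe_head(path)
    data = open(path, "rb").read()
    # prints: True True
    print(data[:4096] == b"\x00" * 4096, data[4096:] == b"A" * 5904)
    os.remove(path)
```

The point of the trick, from an attacker's perspective, is speed: destroying a file's header is far faster than overwriting its full contents, yet leaves most files just as unrecoverable.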

There were still more signposts that led in completely different directions. The security firm Intezer noted that a chunk of the password-stealing code in Olympic Destroyer matched exactly with tools used by a hacker group known as APT3—a group that multiple cybersecurity firms have linked to the Chinese government. The company also traced a component that Olympic Destroyer used to generate encryption keys back to a third group, APT10, also reportedly linked to China. Intezer pointed out that the encryption component had never been used before by any other hacking teams, as far as the company’s analysts could tell. Russia? North Korea? China? The more that forensic analysts reverse-engineered Olympic Destroyer’s code, the further they seemed to get from arriving at a resolution.

In fact, all those contradictory clues seemed designed not to lead analysts toward any single false answer but to a collection of them, undermining any particular conclusion. The mystery became an epistemological crisis that left researchers doubting themselves. “It was psychological warfare on reverse-engineers,” says Silas Cutler, a security researcher who worked for CrowdStrike at the time. “It hooked into all those things you do as a backup check, that make you think ‘I know what this is.’ And it poisoned them.”

That self-doubt, just as much as the sabotage effects on the Olympics, seemed to have been the malware’s true aim, says Craig Williams, a researcher at Cisco. “Even as it accomplished its mission, it also sent a message to the security community,” Williams says. “You can be misled.”


The Olympics organizing committee, it turned out, wasn’t Olympic Destroyer’s only victim. According to the Russian security firm Kaspersky, the cyberattack also hit other targets with connections to the Olympics, including Atos, an IT services provider in France that had supported the event, and two ski resorts in Pyeongchang. One of those resorts had been infected seriously enough that its automated ski gates and ski lifts were temporarily paralyzed.

In the days after the opening ceremony attack, Kaspersky’s Global Research and Analysis Team obtained a copy of the Olympic Destroyer malware from one of the ski resorts and began dusting it for fingerprints. But rather than focusing on the malware’s code, as Cisco and Intezer had done, they looked at its “header,” a part of the file’s metadata that includes clues about what sorts of programming tools were used to write it. Comparing that header with others in Kaspersky’s vast database of malware samples, they found it perfectly matched the header of the North Korean Lazarus hackers’ data-wiping malware—the same one Cisco had already pointed to as sharing traits with Olympic Destroyer. The North Korean theory seemed to be confirmed.
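The idea behind header-based fingerprinting can be shown with a toy example. A compiled Windows executable's headers record facts about the tools that produced it; the sketch below pulls two easily reached fields, the target machine type and the linker's build timestamp, from a PE file's COFF header and compares them across samples. This is only the general idea: the metadata Kaspersky scrutinized was reportedly more obscure compiler-related data, and both helper names here are our own.

```python
# Toy illustration of header fingerprinting: read the COFF header of a
# Windows PE image and compare fields across samples. A real analysis
# examines far richer tool-chain metadata than these two fields.
import struct

def coff_fields(data: bytes):
    """Return (machine, timestamp) from a PE image's COFF header."""
    # Offset 0x3C of the DOS header holds the file offset of "PE\0\0".
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE image")
    # COFF header: Machine (2 bytes), NumberOfSections (2), TimeDateStamp (4).
    machine, _nsections, timestamp = struct.unpack_from("<HHI", data, e_lfanew + 4)
    return machine, timestamp

def same_toolchain_hint(a: bytes, b: bytes) -> bool:
    """Crude check: do two samples share machine type and build timestamp?"""
    return coff_fields(a) == coff_fields(b)
```

Matching fields like these across a large malware corpus is how an apparent "perfect match" with the Lazarus wiper could be found, and, as Soumenkov showed, why such a match can be forged.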

But one senior Kaspersky researcher named Igor Soumenkov decided to go a step further. Soumenkov, a hacker prodigy who’d been recruited to Kaspersky’s research team as a teenager years earlier, had a uniquely deep knowledge of file headers, and he decided to double-check his colleagues’ findings.

A tall, soft-spoken engineer, Soumenkov had a habit of arriving at work late in the morning and staying at Kaspersky’s headquarters well after dark—a partially nocturnal schedule that he kept to avoid Moscow traffic.

One night, as his coworkers headed home, he pored over the code at a cubicle overlooking the city’s jammed Leningradskoye Highway. By the end of that night, the traffic had thinned, he was virtually alone in the office, and he had determined that the header metadata didn’t actually match other clues in the Olympic Destroyer code itself; the malware hadn’t been written with the programming tools that the header implied. The metadata had been forged.

This was something different from all the other signs of misdirection that researchers had fixated on. The other red herrings in Olympic Destroyer had been so vexing in part because there was no way to tell which clues were real and which were deceptions. But now, deep in the folds of false flags wrapped around the Olympic malware, Soumenkov had found one flag that was provably false: Someone had tried to make the malware look North Korean and, thanks to a slipup, failed. Only Kaspersky's fastidious triple-checking had brought the forgery to light.

A few months later, I sat down with Soumenkov in a Kaspersky conference room in Moscow. Over an hour-long briefing, he explained in perfect English and with the clarity of a computer science professor how he’d defeated the attempted deception deep in Olympic Destroyer’s metadata. I summarized what he seemed to have laid out for me: The Olympics attack clearly wasn’t the work of North Korea. “It didn’t look like them at all,” Soumenkov agreed.

And it certainly wasn’t Chinese, I suggested, despite the more transparent false code hidden in Olympic Destroyer that fooled some researchers early on. “Chinese code is very recognizable, and this looks different,” Soumenkov agreed again.

Finally, I asked the glaring question: If not China, and not North Korea, then who? It seemed that the conclusion of that process of elimination was practically sitting there in the conference room with us and yet couldn’t be spoken aloud.

“Ah, for that question, I brought a nice game,” Soumenkov said, affecting a kind of chipper tone. He pulled out a small black cloth bag and took out of it a set of dice. On each side of the small black cubes were written words like Anonymous, Cybercriminals, Hacktivists, USA, China, Russia, Ukraine, Cyberterrorists, Iran.

Kaspersky, like many other security firms, has a strict policy of only pinning attacks on hackers using the firm’s own system of nicknames, never naming the country or government behind a hacking incident or hacker group—the safest way to avoid the murky and often political pitfalls of attribution. But the so-called attribution dice that Soumenkov held in his hand, which I’d seen before at hacker conferences, represented the most cynical exaggeration of the attribution problem: that no cyberattack can ever truly be traced to its source, and that anyone who tries is simply guessing.

Soumenkov tossed the dice on the table. “Attribution is a tricky game,” he said. “Who is behind this? It’s not our story, and it will never be.”


Michael Matonis was working from his home, a 400-square-foot basement apartment in the Washington, DC, neighborhood of Capitol Hill, when he first began to pull at the threads that would unravel Olympic Destroyer’s mystery. The 28-year-old, a former anarchist punk turned security researcher with a controlled mass of curly black hair, had only recently moved to the city from upstate New York, and he still didn’t have a desk at the Reston, Virginia, office of FireEye, the security and private intelligence firm that employed him. So on the day in February when he started to examine the malware that had struck Pyeongchang, Matonis was sitting at his makeshift workspace: a folding metal chair with his laptop propped up on a plastic table.

On a whim, Matonis decided to try a different approach from much of the rest of the perplexed security industry. He didn’t search for clues in the malware’s code. Instead, in the days after the attack, Matonis looked at a far more mundane element of the operation: a fake, malware-laced Word document that had served as the first step in the nearly disastrous opening ceremony sabotage campaign.

The document, which appeared to contain a list of VIP delegates to the games, had likely been emailed to Olympics staff as an attachment. If anyone opened that attachment, it would run a malicious macro script that planted a backdoor on their PC, offering the Olympics hackers their first foothold on the target network. When Matonis pulled the infected document from VirusTotal, the malware repository where it had been uploaded by incident responders, he saw that the bait had likely been sent to Olympics staff in late November 2017, more than two months before the games began. The hackers had laid in wait for months before triggering their logic bomb.

Matonis began combing VirusTotal and FireEye’s historical collection of malware, looking for matches to that code sample. On a first scan, he found none. But Matonis did notice that a few dozen malware-infected documents from the archives corresponded to his file’s rough characteristics: They similarly carried embedded Word macros and, like the Olympics-targeted file, had been built to launch a certain common set of hacking tools called PowerShell Empire. The malicious Word macro traps, however, looked very different from one another, with their own unique layers of obfuscation.

Over the next two days, Matonis searched for patterns in that obfuscation that might serve as a clue. When he wasn’t at his laptop, he’d turn the puzzle over in his mind, in the shower or lying on the floor of his apartment, staring up at the ceiling. Finally, he found a telling pattern in the malware specimens’ encoding. Matonis declined to share with me the details of this discovery for fear of tipping off the hackers to their tell. But he could see that, like teenage punks who all pin just the right obscure band’s buttons to their jackets and style their hair in the same shapes, the attempt to make the encoded files look unique had instead made one set of them a distinctly recognizable group. He soon deduced that the source of that signal in the noise was a common tool used to create each one of the booby-trapped documents. It was an open source program, easily found online, called Malicious Macro Generator.

Matonis speculated that the hackers had chosen the program in order to blend in with a crowd of other malware authors, but it had ultimately had the opposite effect, setting them apart as a distinct set. Beyond their shared tools, the malware group was also tied together by the author names Matonis pulled from the files’ metadata: Almost all had been written by someone named either “AV,” “BD,” or “john.” When he looked at the command and control servers that the malware connected back to—the strings that would control the puppetry of any successful infections—all but a few of the IP addresses of those machines overlapped too. The fingerprints were hardly exact. But over the next days, he assembled a loose mesh of clues that added up to a solid net, tying the fake Word documents together.
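The pivoting described above amounts to a clustering problem: treat each document as a node, link two documents whenever they share an indicator (an author string or a command-and-control IP), and read off the connected components. The sketch below shows that logic in miniature. The author names "AV," "BD," and "john" come from the story; everything else, including the IP addresses and the fourth, unrelated sample, is invented for illustration.

```python
# Simplified indicator-overlap clustering: merge any two samples that
# share an author string or a C2 address, using a basic union-find.
# All data below is invented except the three author names from the story.
from itertools import combinations

samples = {
    "doc1": {"authors": {"AV"},    "c2": {"10.0.0.1", "10.0.0.2"}},
    "doc2": {"authors": {"BD"},    "c2": {"10.0.0.2"}},
    "doc3": {"authors": {"john"},  "c2": {"10.0.0.1"}},
    "doc4": {"authors": {"alice"}, "c2": {"192.0.2.9"}},  # unrelated sample
}

def linked(a, b):
    """Two samples are linked if they share any author or C2 indicator."""
    return bool(a["authors"] & b["authors"] or a["c2"] & b["c2"])

def clusters(samples):
    """Group samples into connected components of the 'linked' relation."""
    parent = {name: name for name in samples}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for x, y in combinations(samples, 2):
        if linked(samples[x], samples[y]):
            parent[find(x)] = find(y)
    groups = {}
    for name in samples:
        groups.setdefault(find(name), set()).add(name)
    return list(groups.values())
```

Run on the toy data, `doc1`, `doc2`, and `doc3` fall into one cluster through their overlapping C2 addresses, while `doc4` stays isolated: loose individual links adding up, as Matonis put it, to a solid net.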

Only after he had established those hidden connections did Matonis go back to the Word documents that had served as the vehicles for each malware sample and begin to Google-translate their contents, some written in Cyrillic. Among the files he’d tied to the Olympic Destroyer bait, Matonis found two other bait documents from the collection that dated back to 2017 and seemed to target Ukrainian LGBT activist groups, using infected files that pretended to be a gay rights organization’s strategy document and a map of a Kiev Pride parade. Others targeted Ukrainian companies and government agencies with a tainted copy of draft legislation.

This, for Matonis, was ominously familiar territory: For more than two years, he and the rest of the security industry had watched Russia launch a series of destructive hacking operations against Ukraine, a relentless cyberwar that accompanied Russia’s invasion of the country after its pro-Western 2014 revolution.

Even as that physical war had killed 13,000 people in Ukraine and displaced millions more, a Russian hacker group known as Sandworm had waged a full-blown cyberwar against Ukraine as well: It had barraged Ukrainian companies, government agencies, railways, and airports with wave after wave of data-destroying intrusions, including two unprecedented breaches of Ukrainian power utilities in 2015 and 2016 that had caused blackouts for hundreds of thousands of people. Those attacks culminated in NotPetya, a worm that had spread rapidly beyond Ukraine’s borders and ultimately inflicted $10 billion in damage on global networks, the most costly cyberattack in history.

In Matonis’ mind, all other suspects for the Olympics attack fell away. Matonis couldn’t yet connect the attack to any particular hacker group, but only one country would have been targeting Ukraine, nearly a year before the Pyeongchang attack, using the same infrastructure it would later use to hack the Olympics organizing committee—and it wasn’t China or North Korea.

Strangely, other infected documents in the collection Matonis had unearthed seemed to target victims in the Russian business and real estate world. Had a team of Russian hackers been tasked with spying on some Russian oligarch on behalf of their intelligence taskmasters? Were they engaged in profit-focused cybercrime as a side gig?

Regardless, Matonis felt that he was on his way to finally, definitively cutting through the Olympics cyberattack’s false flags to reveal its true origin: the Kremlin.

Illustration: Joan Wong

After Matonis had made those first, thrilling connections between Olympic Destroyer and a very familiar set of Russian hacking victims, he sensed he had explored beyond the part of Olympic Destroyer that its creators had intended for researchers to see—that he was now peering behind its curtain of false flags. He wanted to find out how much further he could go toward uncovering those hackers’ full identities. So he told his boss that he wouldn’t be coming into the FireEye office for the foreseeable future. For the next three weeks, he barely left his bunker apartment. He worked on his laptop from the same folding chair, with his back to the only window in his home that allowed in sunlight, poring over every data point that might reveal the next cluster of the hackers’ targets.

A pre-internet-era detective might start a rudimentary search for a person by consulting phone books. Matonis started digging into the online equivalent, the directory of the web’s global network known as the Domain Name System. DNS servers translate human-readable domains like facebook.com into the machine-readable IP addresses that describe the location of a networked computer that runs that site or service, like 69.63.176.13.

Matonis began painstakingly checking every IP address his hackers had used as a command and control server in their campaign of malicious Word document phishing; he wanted to see what domains those IP addresses had hosted. Since those domain names can move from machine to machine, he also used a reverse-lookup tool to flip the search—checking every name to see what other IP addresses had hosted it. He created a set of treelike maps connecting dozens of IP addresses and domain names linked to the Olympics attack. And far down the branch of one tree, a string of characters lit up like neon in Matonis’ mind: account-loginserv.com.
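The pivot Matonis performed can be sketched in a few lines. This is a hypothetical illustration only: a real investigation relies on historical ("passive DNS") datasets rather than live lookups, and the `ip_to_domains` / `domain_to_ips` mappings below stand in for that data.

```python
import socket

def resolve_domain(domain):
    """Forward DNS: translate a human-readable domain into its current IPs."""
    try:
        _, _, addresses = socket.gethostbyname_ex(domain)
        return addresses
    except socket.gaierror:
        return []  # domain no longer resolves

def pivot(seed_ips, ip_to_domains, domain_to_ips):
    """Expand a set of indicators by pivoting IP -> domain -> IP.

    The two dictionaries stand in for a passive-DNS dataset recording
    which domains an address has hosted over time, and vice versa.
    """
    linked_domains, linked_ips = set(), set(seed_ips)
    for ip in seed_ips:
        for domain in ip_to_domains.get(ip, []):
            linked_domains.add(domain)
            linked_ips.update(domain_to_ips.get(domain, []))
    return linked_domains, linked_ips
```

Repeating the pivot from each newly discovered address is what grows the treelike maps described above, one branch at a time.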

A photographic memory can come in handy for an intelligence analyst. As soon as Matonis saw the account-loginserv.com domain, he instantly knew he had seen it nearly a year earlier in an FBI “flash”—a short alert sent out to US cybersecurity practitioners and potential victims. This one had offered a new detail about the hackers who, in 2016, had reportedly breached the Arizona and Illinois state boards of elections. These had been some of the most aggressive elements of Russia’s meddling in US elections: Election officials had warned in 2016 that, beyond stealing and leaking emails from Democratic Party targets, Russian hackers had broken into the two states’ voter rolls, accessing computers that held thousands of Americans’ personal data with unknown intentions. According to the FBI flash alert Matonis had seen, the same intruders had also spoofed emails from a voting technology company, later reported to be the Tallahassee, Florida-based firm VR Systems, in an attempt to trick more election-related victims into giving up their passwords.

Matonis drew up a jumbled map of the connections on a piece of paper that he slapped onto his refrigerator with an Elvis magnet, and marveled at what he’d found. Based on the FBI alert—and Matonis told me he confirmed the connection with another human source he declined to reveal—the fake VR Systems emails were part of a phishing campaign that seemed to have also used a spoofed login page at the account-loginserv.com domain he’d found in his Olympic Destroyer map. At the end of his long chain of internet-address connections, Matonis had found a fingerprint that linked the Olympics attackers back to a hacking operation that directly targeted the 2016 US election. Not only had he solved the whodunit of Olympic Destroyer’s origin, he’d gone further, showing that the culprit had been implicated in the most notorious hacking campaign ever to hit the American political system.

Matonis had, since he was a teenager, been a motorcycle fan. When he was just barely old enough to ride one legally, he had scraped together enough money to buy a 1975 Honda CB750. Then one day a friend let him try riding his 2001 Harley-Davidson with an 1100 EVO engine. In three seconds, he was flying along a country road in upstate New York at 65 miles an hour, simultaneously fearing for his life and laughing uncontrollably.

When Matonis had finally outsmarted the most deceptive malware in history, he says he felt that same feeling, a rush that he could only compare to taking off on that Harley-Davidson in first gear. He sat alone in his DC apartment, staring at his screen and laughing.


By the time Matonis had drawn those connections, the US government had already drawn its own. The NSA and CIA, after all, have access to human spies and hacking abilities that no private-sector cybersecurity firm can rival. In late February, while Matonis was still holed up in his basement apartment, two unnamed intelligence officials told The Washington Post that the Olympics cyberattack had been carried out by Russia and that it had sought to frame North Korea. The anonymous officials went further, blaming the attack specifically on Russia’s military intelligence agency, the GRU—the same agency that had masterminded the interference in the 2016 US election and the blackout attacks in Ukraine, and had unleashed NotPetya’s devastation.

But as with most public pronouncements from inside the black box of the US intelligence apparatus, there was no way to check the government’s work. Neither Matonis nor anyone else in media or cybersecurity research was privy to the trail the agencies had followed.

A set of US government findings that were far more useful and interesting to Matonis came months after his basement detective work. On July 13, 2018, special counsel Robert Mueller unsealed an indictment against 12 GRU hackers for engaging in election interference, laying out the evidence that they’d hacked the DNC and the Clinton campaign; the indictment even included details like the servers they’d used and the terms they’d typed into a search engine.


Deep in the 29-page indictment, Matonis read a description of the alleged activities of one GRU hacker named Anatoliy Sergeyevich Kovalev. Along with two other agents, Kovalev was named as a member of GRU Unit 74455, based in the northern Moscow suburb of Khimki in a 20-story building known as “the Tower.”

The indictment stated that Unit 74455 had provided backend servers for the GRU’s intrusions into the DNC and the Clinton campaign. But more surprisingly, the indictment added that the group had “assisted in” the operation to leak the emails stolen in those operations. Unit 74455, the charges stated, had helped to set up DCLeaks.com and even Guccifer 2.0, the fake Romanian hacker persona that had claimed credit for the intrusions and given the Democrats’ stolen emails to WikiLeaks.

Kovalev, listed as 26 years old, was also accused of breaching one state’s board of elections and stealing the personal information of some 500,000 voters. Later, he allegedly breached a voting systems company and then impersonated its emails in an attempt to hack voting officials in Florida with spoofed messages laced with malware. An FBI wanted poster for Kovalev showed a picture of a blue-eyed man with a slight smile and close-cropped, blond hair.

Though the indictment didn’t say it explicitly, Kovalev’s charges described exactly the activities outlined in the FBI flash alert that Matonis had linked to the Olympic Destroyer attack. Despite all of the malware’s unprecedented deceptions and misdirections, Matonis could now tie Olympic Destroyer to a specific GRU unit, working at 22 Kirova Street in Khimki, Moscow, a tower of steel and mirrored glass on the western bank of the Moscow Canal.


A few months after Matonis shared those connections with me, in late November of 2018, I stood on a snow-covered path that wound along that frozen waterway on the outskirts of Moscow, staring up at the Tower.

I had, by then, been following the hackers known as Sandworm for two full years, and I was in the final stages of writing a book that investigated the remarkable arc of their attacks. I had traveled to Ukraine to interview the utility engineers who’d twice watched their power grids’ circuit breakers be flipped open by unseen hands. I’d flown to Copenhagen to speak with sources at the shipping firm Maersk who whispered to me about the chaos that had unfolded when NotPetya paralyzed 17 of their terminals at ports around the globe, instantly shutting down the world’s largest shipping conglomerate. And I’d sat with analysts from the Slovakian cybersecurity firm ESET in their office in Bratislava as they broke down their evidence that tied all of those attacks to a single group of hackers.

Beyond the connections in Matonis’ branching chart and in the Mueller report that pinned the Olympics attack on the GRU, Matonis had shared with me other details that loosely tied those hackers directly to Sandworm’s earlier attacks. In some cases, they had placed command and control servers in data centers run by two of the same companies, Fortunix Networks and Global Layer, that had hosted servers used to trigger Ukraine’s 2015 blackout and later the 2017 NotPetya worm. Matonis argued that those thin clues, on top of the vastly stronger case that all of those attacks were carried out by the GRU, suggested that Sandworm was, in fact, GRU Unit 74455. Which would put them in the building looming over me that snowy day in Moscow.


Standing there in the shadow of that opaque, reflective tower, I didn’t know exactly what I hoped to accomplish. There was no guarantee that Sandworm’s hackers were inside—they may have just as easily been split between that Khimki building and another GRU address named in the Mueller indictment, at 20 Komsomolskiy Prospekt, a building in central Moscow that I’d walked by that morning on my way to the train.

The Tower, of course, wasn’t marked as a GRU facility. It was surrounded by an iron fence and surveillance cameras, with a sign at its gate that read GLAVNOYE UPRAVLENIYE OBUSTROYSTVA VOYSK—roughly, “General Directorate for the Arrangement of Troops.” I guessed that if I dared ask the guard at that gate if I could speak with someone from GRU Unit 74455, I was likely to end up detained in a room where I would be asked hard questions by Russian government officials, rather than the other way around.

This, I realized, might be the closest I had ever stood to Sandworm’s hackers, and yet I could get no closer. A security guard appeared on the edge of the parking lot above me, looking out from within the Tower’s fence—whether watching me or taking a smoke break, I couldn’t tell. It was time for me to leave.

I walked north along the Moscow Canal, away from the Tower, and through the hush of the neighborhood’s snow-padded parks and pathways to the nearby train station. On the train back to the city center, I glimpsed the glass building one last time, from the other side of the frozen water, before it was swallowed up in the Moscow skyline.


In early April of this year, I received an email via my Korean translator from Sang-jin Oh, the Korean official who led the response to Olympic Destroyer on the ground in Pyeongchang. He repeated what he’d said all along—that he would never discuss who might be responsible for the Olympics attack. He also noted that he and I wouldn’t speak again: He’d moved on to a position in South Korea’s Blue House, the office of the president, and wasn’t authorized to take interviews. But in our final phone conversation months earlier, Oh’s voice had still smoldered with anger when he recalled the opening ceremony and the 12 hours he’d spent desperately working to avert disaster.

“It still makes me furious that, without any clear purpose, someone hacked this event,” he’d said. “It would have been a huge black mark on these games of peace. I can only hope that the international community can figure out a way that this will never happen again.”

Even now, Russia’s attack on the Olympics still haunts cyberwar wonks. (Russia’s foreign ministry didn’t respond to multiple requests for comment from WIRED.) Yes, the US government and the cybersecurity industry eventually solved the puzzle, after some initial false starts and confusion. But the attack set a new bar for deception, one that might still prove to have disastrous consequences when its tricks are repeated or evolve further, says Jason Healey, a cyberconflict-focused researcher at Columbia University’s School of International and Public Affairs.

“Olympic Destroyer was the first time someone used false flags of that kind of sophistication in a significant, national-security-relevant attack,” Healey says. “It’s a harbinger of what the conflicts of the future might look like.”

Healey, who worked in the George W. Bush White House as director for cyber infrastructure protection, says he has no doubt that US intelligence agencies can see through deceptive clues that muddy attribution. He’s more worried about other countries where a misattributed cyberattack could have lasting consequences. “For the folks that can’t afford CrowdStrike and FireEye, for the vast bulk of nations, attribution is still an issue,” Healey says. “If you can’t imagine this with US and Russia, imagine it with India and Pakistan, or China and Taiwan, where a false flag provokes a much stronger response than even its authors intended, in a way that leaves the world looking very different afterwards.”

But false flags work here in the US, too, argues John Hultquist, the director of intelligence analysis at FireEye and Matonis’ former boss before Matonis left the firm in July. Look no further, Hultquist says, than the half of Americans—or 73 percent of registered Republicans—who refuse to accept that Russia hacked the DNC or the Clinton campaign.

As the 2020 election approaches, Olympic Destroyer shows that Russia has only advanced its deception techniques—graduating from flimsy cover stories to the most sophisticated planted digital fingerprints ever seen. And if they can fool even a few researchers or reporters, they can sow even more of the public confusion that misled the American electorate in 2016. “The question is one of audience,” Hultquist says. “The problem is that the US government may never say a thing, and within 24 hours, the damage is done. The public was the audience in the first place.”

The GRU hackers known as Sandworm, meanwhile, are still out there. And Olympic Destroyer suggests they’ve been escalating not only their wanton acts of disruption but also their deception techniques. After years of crossing one red line after another, their next move is impossible to predict. But when those hackers do strike again, they may appear in a form we don’t even recognize.

Source photos: Getty Images; Maxim Shemetov/Reuters (building)


From the book SANDWORM, by Andy Greenberg, to be published on November 5, 2019, by Doubleday, an imprint of the Knopf Doubleday Group, a division of Penguin Random House LLC. Copyright © 2019 by Andy Greenberg. Greenberg is a senior writer for WIRED.

This article appears in the November issue. Subscribe now.


Science

We’re All ‘P-Hacking’ Now


It’s got an entry in the Urban Dictionary, been discussed on Last Week Tonight with John Oliver, scored a wink from Cards Against Humanity, and now it’s been featured in a clue on the TV game show Jeopardy. Metascience nerds rejoice! The term p-hacking has gone mainstream.

Results from a study can be analyzed in a variety of ways, and p-hacking refers to a practice where researchers select the analysis that yields a pleasing result. The p refers to the p-value, a ridiculously complicated statistical entity that’s essentially a measure of how surprising the results of a study would be if the effect you’re looking for wasn’t there.

Suppose you’re testing a pill for high blood pressure, and you find that blood pressures did indeed drop among people who took the medicine. The p-value is the probability that you’d find blood pressure reductions at least as big as the ones you measured, even if the drug was a dud and didn’t work. A p-value of 0.05 means there’s only a 5 percent chance of that scenario. By convention, a p-value of less than 0.05 gives the researcher license to say that the drug produced “statistically significant” reductions in blood pressure.

Journals generally prefer to publish statistically significant results, so scientists have incentives to select ways of parsing and analyzing their data that produce a p-value under 0.05. That’s p-hacking.
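How quickly that practice inflates false positives is easy to demonstrate. The sketch below, a hypothetical illustration rather than any published study's method, simulates a dud drug with no real effect, computes a permutation-test p-value, and then "p-hacks" by running several analyses and keeping only the smallest p.

```python
import random

def dud_trial(n=30):
    """Control and treatment groups drawn from the same distribution:
    the 'drug' has no effect at all."""
    return ([random.gauss(0, 1) for _ in range(n)],
            [random.gauss(0, 1) for _ in range(n)])

def p_value(control, treated, sims=500):
    """Permutation test: how often does randomly relabeling the data
    produce a group difference at least as large as the observed one?"""
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = control + treated
    hits = 0
    for _ in range(sims):
        random.shuffle(pooled)
        a, b = pooled[:len(control)], pooled[len(control):]
        if abs(sum(b) / len(b) - sum(a) / len(a)) >= observed:
            hits += 1
    return hits / sims

def p_hacked(analyses=20):
    """Try many analyses of a dud drug and report only the best-looking p."""
    return min(p_value(*dud_trial()) for _ in range(analyses))
```

With a 0.05 threshold and 20 independent looks at pure noise, the chance that at least one analysis dips below 0.05 is 1 - 0.95**20, roughly 64 percent.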

“It’s a great name—short, sweet, memorable, and just a little funny,” says Regina Nuzzo, a freelance science writer and senior advisor for statistics communication at the American Statistical Association.

Courtesy of Cards Against Humanity 

P-hacking as a term came into use as psychology and some other fields of science were experiencing a kind of existential crisis. Seminal findings were failing to replicate. Absurd results (ESP is real!) were passing peer review at well-respected academic journals. Efforts were underway to test the literature for false positives and the results weren’t looking good. Researchers began to realize that the problem might be woven into some long-standing and basic research practices.

Psychologists Uri Simonsohn, Joseph Simmons, and Leif Nelson elegantly demonstrated the problem in what is now a classic paper. “False-Positive Psychology,” published in 2011, used well-accepted methods in the field to show that the act of listening to the Beatles song “When I’m Sixty-Four” could take a year and a half off someone’s age. It all started over dinner at a conference where a group of researchers was discussing some findings they found difficult to believe. Afterward, Simonsohn, Simmons, and Nelson decided to see how easy it would be to reverse-engineer an impossible result with a p-value of less than 0.05. “We started brainstorming—if we wanted to show an effect that isn’t true, how would you run a study to get that result without faking anything?” Simonsohn told me.

They produced their absurd conclusion by exploiting what they called “researcher degrees of freedom”: the little decisions that scientists make as they’re designing a study and collecting and analyzing data. These choices include things like which observations to measure, which variables to compare, which factors to combine, and which ones to control for. Unless researchers have committed to a methodology and analysis plan in advance by preregistering a study, they are, in practice, free to make (or even change) these calls as they go.

The problem, as the Beatles song experiment showed, is that this kind of fiddling around allows researchers to manipulate their study conditions until they get the answer that they want—the grownup equivalent of kids at a slumber party applying pressure on the Ouija board planchette until it spells out the words they’re looking for.

A year later, the team went public with its new and better name for this phenomenon. At a psychology conference in 2012, Simonsohn gave a talk in which he used the term p-hacking for the first time.

“We needed a shorter word to describe [this set of behaviors], and p-dash-something seemed to make sense,” Simmons says. “P-hacking was definitely a better term than ‘researcher degrees of freedom’ because you could use it as a noun or an adjective.”

The phrase made its formal debut in a paper the team published in 2014, where they wrote “p-hacking can allow researchers to get most studies to reveal significant relationships between truly unrelated variables.”

They weren’t the first to identify what can go wrong when scientists exploit researcher degrees of freedom, but by coining the term p-hacking, Simonsohn, Simmons, and Nelson had given researchers a language to talk about it. “Our primary goal was to make it easier for us to present our work. The ambitious goal was that it would make it easier for other people to talk to each other about the topic,” Nelson says. “The popular acceptance of the term has outstripped our original ambitions.”

“It is brilliant marketing,” says Brian Nosek, cofounder of the Center for Open Science. The term p-hacking brings together a constellation of behaviors that methodologists have long recognized as undesirable, assigns them a name, and identifies their consequence, he adds. Nosek credits the term with helping researchers “organize and think about how their behaviors impact the quality of their evidence.”

As a wider conversation about reproducibility spread through the field of psychology, rival ways of describing p-hacking and related issues gained attention too. Columbia University statistician Andrew Gelman had used the term “the garden of forking paths” to describe the array of choices that researchers can select from when they’re embarking on a study analysis. Data mining, fishing expeditions, and data dredging are other descriptors that had been applied to the act of p-hacking.

Photograph: Jeopardy Productions, Inc. 

Gelman and his colleague Eric Loken didn’t care for these alternatives. In 2013, they wrote that they “regret the spread of the terms ‘fishing’ and ‘p-hacking’ (and even ‘researcher degrees of freedom’),” because they create the “misleading implication that researchers were consciously trying out many different analyses on a single data set.” The “garden of forking paths,” on the other hand, more aptly describes how researchers can get lost in all the decisions that go into data analysis, and not even realize that they’ve gone astray.

“People say p-hacking and it sounds like someone’s cheating,” Gelman says. “The flip side is that people know they didn’t cheat, so they don’t think they did anything wrong. But even if you don’t cheat, it’s still a moral error to misanalyze data on a problem of consequence.”

Simmons is sympathetic to this criticism. “We probably didn’t think enough about the connotations of the word ‘hacking,’ which implies intentions,” he says. “It sounds worse than we wanted it to.” He and his colleagues have been very explicit that p-hacking isn’t necessarily a nefarious endeavor, but rather a human one, and one that they themselves had been guilty of. At its core, p-hacking is really about confirmation bias—the human tendency to seek and preferentially find evidence that confirms what we’d like to believe, while turning a blind eye to things that might contradict our preferred truths.

The “hacking” part makes it sound like some sort of immoral behavior, and that’s not helpful, Simmons says. “People in power don’t understand the inevitability of p-hacking in the absence of safeguards against it. They think p-hacking is something that evil people do. And since we’re not evil, we don’t have to worry about it.” But Simmons says that p-hacking is a human default: “It’s something that every single person will do, that I continue to do when I don’t preregister my studies.” Without safeguards in place, he notes, it’s almost impossible to avoid.

Still, there’s something indisputably appealing about the term p-hacking. “You can’t say that someone got their data and garden-of-forking-pathed it,” Nelson adds. “We wanted to make it into a single action term.”


The genesis of the term p-hacking made it easier to talk about this phenomenon across fields by harkening to the fact that this was a behavior—something researchers were actually doing in their work. Even though it was developed by psychologists, the term p-hacking was soon being used by people talking about medicine, nutrition, biology or genetics, Nelson says. “Each of these fields have their own version, and they were like, great. Now we have a term to describe whatever is our version of semilegitimate statistical practices.”

The fact that p-hacking has now spread out of science and into pop culture could indicate a watershed moment in the public understanding of science, and a growing awareness that studies can’t always be taken at face value. But it’s hard to know exactly how the term is being understood at large.

It’s even possible that the popularization of p-hacking has turned the scientific process into a caricature of itself, reinforcing harmful ideas about the scientific method. “I would hate for the concept of p-hacking to be boiled down to something like ‘you can make statistics say anything you want’ or, worse, that ‘scientists are liars,’” says Nuzzo, the science writer. “Because neither of those things is true.”

In a perfect world, the wider public would understand that p-hacking refers not to some lousy tendency or lazy habit particular to researchers, but one that’s present everywhere. We all p-hack, to some extent, every time we set out to understand the evidence in the world around us. If there’s a takeaway here, it’s that science is hard—and sometimes our human foibles make it even harder.



Science

How to Get Solar Power on a Rainy Day? Beam It From Space


Earlier this year, a small group of spectators gathered in David Taylor Model Basin, the Navy’s cavernous indoor wave pool in Maryland, to watch something they couldn’t see. At each end of the facility there was a 13-foot pole with a small cube perched on top. A powerful infrared laser beam shot out of one of the cubes, striking an array of photovoltaic cells inside the opposite cube. To the naked eye, however, it looked like a whole lot of nothing. The only evidence that anything was happening came from a small coffee maker nearby, which was churning out “laser lattes” using only the power generated by the system.

The laser setup managed to transmit 400 watts of power—enough for several small household appliances—through hundreds of meters of air without moving any mass. The Naval Research Lab, which ran the project, hopes to use the system to send power to drones during flight. But NRL electronics engineer Paul Jaffe has his sights set on an even more ambitious problem: beaming solar power to Earth from space. For decades the idea had been reserved for The Future, but a series of technological breakthroughs and a massive new government research program suggest that faraway day may have finally arrived.

Since the idea for space solar power first cropped up in Isaac Asimov’s science fiction in the early 1940s, scientists and engineers have floated dozens of proposals to bring the concept to life, including inflatable solar arrays and robotic self-assembly. But the basic idea is always the same: A giant satellite in orbit harvests energy from the sun and converts it to microwaves or lasers for transmission to Earth, where it is converted into electricity. The sun never sets in space, so a space solar power system could supply renewable power to anywhere on the planet, day or night, rain or shine.
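The power that finally reaches an outlet is the collected sunlight multiplied down a chain of conversion losses. As a rough sketch of that chain, the efficiency figures below are illustrative assumptions, not specifications from any proposed system.

```python
def delivered_power_kw(solar_input_kw,
                       pv_eff=0.30,        # sunlight -> DC on the satellite
                       dc_to_rf_eff=0.80,  # DC -> microwave (or laser) beam
                       capture_eff=0.90,   # fraction of beam hitting the ground array
                       rectenna_eff=0.85): # received beam -> DC on Earth
    """Toy end-to-end chain for a space solar power link.

    Every efficiency here is an assumed, illustrative figure chosen only
    to show how the stages multiply together.
    """
    return (solar_input_kw * pv_eff * dc_to_rf_eff
            * capture_eff * rectenna_eff)

# Under these assumptions, 1 megawatt of collected sunlight delivers
# about 184 kW of usable electricity on the ground.
```

The multiplication explains why every stage of the chain, from photovoltaics to beamed-energy transmission, has to improve before the economics close.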

Like fusion energy, space-based solar power seemed doomed to become a technology that was always 30 years away. Technical problems kept cropping up, cost estimates remained stratospheric, and as solar cells became cheaper and more efficient, the case for space-based solar seemed to be shrinking.

That didn’t stop government research agencies from trying. In 1975, after partnering with the Department of Energy on a series of space solar power feasibility studies, NASA beamed 30 kilowatts of power over a mile using a giant microwave dish. Beamed energy is a crucial aspect of space solar power, but this test remains the most powerful demonstration of the technology to date. “The fact that it’s been almost 45 years since NASA’s demonstration, and it remains the high-water mark, speaks for itself,” Jaffe says. “Space solar wasn’t a national imperative, and so a lot of this technology didn’t meaningfully progress.”

John Mankins, a former physicist at NASA and director of Solar Space Technologies, witnessed how government bureaucracy killed space solar power development firsthand. In the late 1990s, Mankins authored a report for NASA that concluded it was again time to take space solar power seriously and led a project to do design studies on a satellite system. Despite some promising results, the agency ended up abandoning it.

In 2005, Mankins left NASA to work as a consultant, but he couldn’t shake the idea of space solar power. He did some modest space solar power experiments himself and even got a grant from NASA’s Innovative Advanced Concepts program in 2011. The result was SPS-ALPHA, which Mankins called “the first practical solar power satellite.” The idea, says Mankins, was “to build a large solar-powered satellite out of thousands of small pieces.” His modular design brought the cost of hardware down significantly, at least in principle.

Jaffe, who was just starting to work on hardware for space solar power at the Naval Research Lab, got excited about Mankins’ concept. At the time he was developing a “sandwich module” consisting of a small solar panel on one side and a microwave transmitter on the other. His electronic sandwich demonstrated all the elements of an actual space solar power system and, perhaps most important, it was modular. It could work beautifully with something like Mankins’ concept, he figured. All they were missing was the financial support to bring the idea from the laboratory into space.

Jaffe invited Mankins to join a small team of researchers entering a Defense Department competition, in which they were planning to pitch a space solar power concept based on SPS-ALPHA. In 2016, the team presented the idea to top Defense officials and ended up winning four out of the seven award categories. Both Jaffe and Mankins described it as a crucial moment for reviving the US government’s interest in space solar power.

They might be right. In October, the Air Force Research Lab announced a $100 million program to develop hardware for a solar power satellite. It’s an important first step toward the first demonstration of space solar power in orbit, and Mankins says it could help solve what he sees as space solar power’s biggest problem: public perception. The technology has always seemed like a pie-in-the-sky idea, and the cost of setting up a solar array on Earth is plummeting. But space solar power has unique benefits, chief among them the availability of solar energy around the clock regardless of the weather or time of day.

It can also provide renewable energy to remote locations, such as forward operating bases for the military. And at a time when wildfires have forced the utility PG&E to kill power for thousands of California residents on multiple occasions, having a way to provide renewable energy through the clouds and smoke doesn’t seem like such a bad idea. (Ironically enough, PG&E entered a first-of-its-kind agreement to buy space solar power from a company called Solaren back in 2009; the system was supposed to start operating in 2016 but never came to fruition.)

“If space solar power does work, it is hard to overstate what the geopolitical implications would be,” Jaffe says. “With GPS, we sort of take it for granted that no matter where we are on this planet, we can get precise navigation information. If the same thing could be done for energy, it would be revolutionary.”

Indeed, there seems to be an emerging race to become the first to harness this technology. Earlier this year China announced its intention to become the first country to build a solar power station in space, and for more than a decade Japan has considered the development of a space solar power station to be a national priority. Now that the US military has joined in with a $100 million hardware development program, it may only be a matter of time before there’s a solar farm in the solar system.

Are Saturn’s Rings Really as Young as the Dinosaurs?


The Cassini spacecraft perished in a literal blaze of glory on September 15, 2017, when it ended its 13-year study of Saturn by intentionally plunging into the gas giant’s swirling atmosphere. The crash came after a last few months of furious study, during which Cassini performed the Grand Finale — a sensational, death-defying dance that saw the spacecraft dive between the planet and its rings 22 times.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

As new perspectives often do, this one revealed a surprise. Previously, planetary scientists had assumed that Saturn’s rings were as old as the solar system itself—about 4.5 billion years old. But cosmic clues hidden deep within the rings caused some Cassini scientists to massively revise this figure. The rings aren’t as old as the solar system, they argued in a paper published this summer in the journal Science. They emerged no more than 100 million years ago, back when dinosaurs roamed Earth.

An explosion of media coverage linking the rings to the age of dinosaurs helped to quickly solidify the new findings in the public’s eye. If you enter the search phrase “how old are Saturn’s rings,” Google returns the answer “100.1 million years.”

Aurélien Crida, a planetary scientist at the Côte d’Azur Observatory, was incredulous at this definitive declaration. “I was a bit pissed off by how it was assessed, that the rings are young and it’s over,” he said.

He and other skeptics have pointed out that there are a lot of potential problems with the argument, from the physics of the ring pollution to the origins of the rings themselves. “The rings look young, but that doesn’t mean they really are young,” said Ryuki Hyodo, a planetary scientist at the Japanese Aerospace Exploration Agency. “There are still some processes that we are not considering.”

The rings were named alphabetically in the order they were discovered. Starting from the innermost ring, the D ring is followed by the C, B, A, F, G and E rings. Video: NASA/JPL-Caltech/Space Science Institute

In response to the hypothesis, Crida coauthored a commentary for Nature Astronomy, published in September, that presented a litany of uncertainties. The dinosaurian age of the rings is an eye-catching claim, said Crida, but it circumvents an uncomfortable reality: Too many uncertainties exist to permit any firm estimate of the age of the rings. Despite Cassini’s heroics, “we’re not really far ahead of where we were almost 40 years ago,” back when the Voyager probes first took a good look at Saturn, said Luke Dones, a planetary scientist at the Southwest Research Institute in Boulder, Colorado.

Proponents of the younger age stand by their work. “Every new exciting result gets challenged,” said Burkhard Militzer, a planetary scientist at the University of California, Berkeley, and a coauthor of the Science paper. “It’s the natural way to proceed.”

The debate is about more than the narrow question of the rings’ age. The age of Saturn’s rings will influence how we understand many of Saturn’s moons, including the potentially life-supporting world Enceladus, with its frozen ocean. And it will also push us closer to answering the ultimate question about Saturn’s rings, one that humans have wondered about since Galileo first marveled at them over 400 years ago: Where did they come from in the first place?

Age From a Scale

We know the age of the Earth because we can use the decay of radioactive matter in rocks to work out how old they are. Planetary geologists have done the same for rocks from the moon and Mars.

Saturn’s rings, predominantly composed of ice fragments with trace amounts of rocky matter, don’t lend themselves to this kind of analysis, said Matthew Hedman, a planetary scientist at the University of Idaho. That means age estimates have to be based on circumstantial evidence.

Illustration: Lucy Reading-Ikkanda/Quanta Magazine, Source: NASA/JPL-Caltech/Space Science Institute; NASA’s Goddard Space Flight Center

That evidence, in part, comes from dust. Think of the icy rings as resembling a field of snow: After a pristine start, soot from afar gradually pollutes it. In order to estimate the age of the snow, scientists have to measure the rate at which soot is falling, as well as the total amount of soot already there.

Cassini did the first part with its Cosmic Dust Analyzer, which found that Saturn’s rings are being steadily polluted by darker material—a mixture of rocky dust and organic compounds. Most of this material is being delivered by micrometeoroids from the Kuiper belt, a distant source of icy objects beyond the orbit of Neptune. The spacecraft also found that the sooty material currently makes up about 1 percent of Saturn’s icy rings.

To uncover the total mass of cosmic soot in the rings, researchers then had to weigh the rings themselves. Thankfully, Cassini’s Grand Finale created just such an opportunity. As the spacecraft swooped through the rings, it precisely measured the net gravitational pull at every point. Since gravity fields are dependent on an object’s mass, this feat allowed scientists to directly weigh the entire ring system.

During Cassini’s Grand Finale, the spacecraft dove between the rings and the planet 22 times. The maneuver began and ended with close flybys of Saturn’s moon Titan, whose orbit is shown in yellow. Video: NASA

With this information—the amount of soot and the rate at which it is falling—scientists estimated that it would have taken between 10 million and 100 million years for that proverbial snowy field to find itself sullied. The findings were generally well received. “Most of the community today is convinced that the rings were formed recently,” said Luciano Iess, an expert in aerospace engineering at Sapienza University of Rome and the Science study’s lead author.
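The arithmetic behind that estimate is simple: divide the total mass of accumulated soot by the rate at which soot arrives. The sketch below illustrates the logic with assumed inputs — the ring mass is roughly the value reported from the Grand Finale gravity measurements, the 1 percent soot fraction comes from the figure above, and the influx rate is a hypothetical placeholder chosen only to show the calculation, not the Cassini team’s published number.

```python
# Back-of-the-envelope version of the pollution-based age estimate.
# All inputs are illustrative assumptions, not published Cassini values.

RING_MASS_KG = 1.5e19       # approximate total ring mass (assumption)
SOOT_FRACTION = 0.01        # ~1 percent of ring material is dark pollutant
INFLUX_KG_PER_S = 100.0     # hypothetical net soot accumulation rate

SECONDS_PER_YEAR = 3.156e7

def ring_age_years(ring_mass_kg, soot_fraction, influx_kg_per_s):
    """Age = total accumulated soot mass / rate of soot accumulation."""
    soot_mass_kg = ring_mass_kg * soot_fraction
    return soot_mass_kg / (influx_kg_per_s * SECONDS_PER_YEAR)

age = ring_age_years(RING_MASS_KG, SOOT_FRACTION, INFLUX_KG_PER_S)
print(f"Estimated ring age: {age / 1e6:.0f} million years")
```

With these placeholder numbers the estimate lands near 50 million years; nudging the assumed influx rate up or down spans the 10-million-to-100-million-year range the researchers reported, which is why the unsettled pollution rate matters so much to the debate that follows.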

Yet the pollution argument isn’t watertight. Dones points out that the Cassini team analyzing the incoming pollution has not settled on a precise rate. Various values have appeared in several conference presentations, but a final figure hasn’t yet been published. In the Science paper, the researchers chose one of these values and came up with a youthful ring age. But this ambiguity has been “causing a lot of consternation,” said Paul Estrada, a planetary scientist at NASA’s Ames Research Center who is a member of the Cassini team analyzing pollution.

The pollution rate may have also changed relatively recently. “It could just be that the bombardment rate is unusually high at the moment,” said Crida, even if we can’t say what would cause such a spike. In theory, a future mission to Saturn could dig out a rocky core from an old moon, one that preserves the pollution flux over time, said Tracy Becker, a planetary scientist at the Southwest Research Institute in San Antonio, Texas. But such a mission would be decades in the future.

We also don’t fully understand the physics behind the ring darkening. The micrometeoroids from the Kuiper belt slam into the rings’ icy chunks at such high speeds that the impacts are like little explosions, suggesting that not much of the micrometeoroids adheres. This has led to a fudge factor in the literature—guesstimates that 10 percent of the micrometeoroidal matter sticks to the ice and pollutes it.

Dones said that the Dust Accelerator Laboratory at the University of Colorado, Boulder, may be able to replicate this impact process and give us a better idea of the staying power of the pollutants. But for now, we’re in the dark.

Enceladus is an icy world that hides a subsurface ocean of salty water. Geysers on its surface, seen at the bottom of the moon in the image on the right, shoot material out hundreds of miles into space, potentially feeding Saturn’s rings. Photograph: NASA/JPL/Space Science Institute
Video: NASA/JPL-Caltech/Space Science Institute 

Crida’s commentary also suggested that an incognito planetary scrubber may be removing pollution to make the rings appear deceptively youthful. We’ve known since the Voyager days that material from the rings rains down onto the surface of Saturn. But we haven’t known what that material is made of. Cassini measured the rain using two separate instruments. Both found that it contains surprisingly little ice—as little as 24 percent. “That’s very confusing, given that the rings are measured to be over 95 percent water,” said James O’Donoghue, a planetary scientist at the Japanese Aerospace Exploration Agency. The “rain” is preferentially removing dirt, but no one knows why.

“There is something that is cleaning the rings,” said Crida. “We don’t know what it is, but it is now an observed fact, it’s not just a conjecture.”

Crida said that perhaps the ice ejected by micrometeoroid impacts tends to reattach itself to the rings, while the ejected pollutants rain out. Becker conjectures that pollution is being preferentially ejected by impacts, regardless of whether the ice is reattaching itself in this manner. And Hyodo wonders whether the geysers on Enceladus’ south pole are adding more water, diluting the rings’ pollution. But no one knows for sure.

But not everyone believes that there’s a lot of cleaning going on. “Getting the stuff dirty is easy,” said Militzer. “Cleaning is hard.”

Where They Came From

What if, said Crida, the pollution argument is correct? What if the rings have always been exposed to an unchanging influx of cosmic dust, and the rings are 100 million years old at most? Then we would have to explain how the rings formed so recently, which is a tricky prospect.

First, we have no idea what created the rings, so assigning them an origin story at any point in time is difficult. The rings may be the vestige of a comet torn asunder by Saturn’s gravitational tides, or the product of a collision between a comet and an icy moon, or the result of something that disturbed the orbit of several moons, causing them to smash into each other.

A sample-return mission to Saturn’s icy loops could find the remnants of the original bodies that were annihilated and used to forge the rings, said Militzer. But no such mission is forthcoming.

At the edge of Saturn’s B ring, vertical structures rise as high as 2.5 kilometers above the plane of the rings, casting long shadows. The typical thickness of the rings is only about 10 meters. Photograph: NASA/JPL/Space Science Institute

Second, the solar system’s first billion years or so were a pinball-like pandemonium, with protoplanetary objects constantly colliding. These days, said Crida, things are far more settled, so the likelihood of a catastrophic collision leading to Saturn’s rings is far lower. If they did form in a recent cataclysm, said Militzer, such an event would dramatically change our perspective: It would imply that our planetary neighborhood hasn’t entirely outgrown the bedlam of its primeval days just yet.

Linda Spilker, the Cassini project scientist at NASA’s Jet Propulsion Laboratory, said clues may lie in Saturn’s moons, as their development is somewhat linked to that of the rings. But their own stories are also riddled with uncertainties, from their origins to their ages.

A 2016 model, using the current positions of the moons to peer backward through time, suggests that the present system of rings and inner moons could have been created when a pair of midsize moons smashed into each other about 100 million years ago.

But the ability of such a collision to form the rings we see now, said Dones, is an active controversy; a much-debated 2017 study, for example, suggests that not enough material would have been available to make today’s rings. “It just doesn’t work,” said Crida, adding that the only way this two-moon impact could have created all those moons and rings is through “magic.”

“The question of whether the rings are old or young will one day be definitively answered,” said Becker. But right now, there is enough evidence on both sides that “there’s still plenty to argue about before we can say anything conclusively.”

While the past is unclear, the future seems more certain. The rings may look permanent, but the opposite is true. Observations from a telescope atop Hawaii’s Mauna Kea volcano found torrents of material raining out from the rings.
