Why it feels like July, not October, over a huge part of the U.S.
Red colors show temperatures well above normal on Oct. 2.

Image: Climate Reanalyzer / University of Maine

By Mark Kaufman, Oct. 2, 2019

Summer’s passed, but temperatures in the nation’s capital are expected to reach between 20 and 25 degrees above normal on Wednesday, meaning highs of around 96 degrees Fahrenheit.

And the heat trend is widespread. Well over 100 daily high temperature records will likely fall across a vast portion of the central and eastern U.S. over the coming days.

What gives?

A pattern of unusual heat in the East and chilly temperatures out West (including a major blizzard) is a well-predicted consequence of a powerful current of high-altitude winds, the jet stream, which has sliced the nation in half, temperature-wise.

“It will be a tale of two nations,” Jeff Weber, a meteorologist at the University Corporation for Atmospheric Research, told Mashable last week. “It will be a big contrast.”

Indeed it is. The jet stream — a relatively narrow band of powerful, high-altitude westerly winds traveling some four to eight miles up in the atmosphere — acts as a barrier separating colder northern air from warmer southern air. 

But, as atmospheric scientist Paul Roebber told Mashable, this band of wind can bend and become wavier. This has happened in a significant way, as the jet stream has bent down into the western U.S., allowing colder air to come down, somewhat like a freezer door being left open. 

The jet stream on Oct. 2, 2019.


Meanwhile, the bent jet stream has allowed warmer southern air to travel up and then settle over the other half of the U.S. “There will be dramatic heat,” said UCAR’s Weber. He was right. 

Record heat, reaching into the upper 90s, hit the Midwest and South on Tuesday. More daily high temperature records will fall on Wednesday and Thursday.

These days, with overall global temperatures boosted by a relentlessly warming climate, high temperature records are much more likely to break than low temperature records. Every weather event is happening on a heated planet. That’s why over the last decade, twice as many daily high records have been set as low temperature records in the U.S.

What’s more, there’s growing evidence that the jet stream is becoming more prone to wavier behavior, which produces more extreme weather and temperature events, particularly heat waves. This is an intensive, ongoing area of scientific research. 

For now, the jet stream has bent over the U.S., and a large part of the nation is experiencing a potent bout of “summertime” heat.



We’re All ‘P-Hacking’ Now

It’s got an entry in the Urban Dictionary, been discussed on Last Week Tonight with John Oliver, scored a wink from Cards Against Humanity, and now it’s been featured in a clue on the TV game show Jeopardy. Metascience nerds rejoice! The term p-hacking has gone mainstream.

Results from a study can be analyzed in a variety of ways, and p-hacking refers to a practice where researchers select the analysis that yields a pleasing result. The p refers to the p-value, a ridiculously complicated statistical entity that’s essentially a measure of how surprising the results of a study would be if the effect you’re looking for wasn’t there.

Suppose you’re testing a pill for high blood pressure, and you find that blood pressures did indeed drop among people who took the medicine. The p-value is the probability that you’d find blood pressure reductions at least as big as the ones you measured, even if the drug was a dud and didn’t work. A p-value of 0.05 means there’s only a 5 percent chance of that scenario. By convention, a p-value of less than 0.05 gives the researcher license to say that the drug produced “statistically significant” reductions in blood pressure.
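The definition above can be made concrete with a small simulation (a sketch for illustration only; real trials use formal statistical tests, and every number here is invented): simulate many trials of a dud drug, and ask how often chance alone produces an average blood-pressure drop at least as large as the one observed.

```python
import random
import statistics

def simulated_p_value(observed_drop, n_patients=50, sd=10.0,
                      n_simulations=20_000, seed=0):
    """Monte Carlo p-value: the fraction of null-world trials (the drug
    does nothing; each patient's change is pure noise) whose average
    blood-pressure drop is at least as large as the one observed."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_simulations):
        # Null world: each patient's change is noise centered on zero.
        changes = [rng.gauss(0.0, sd) for _ in range(n_patients)]
        drop = -statistics.mean(changes)  # a drop is a negative change
        if drop >= observed_drop:
            hits += 1
    return hits / n_simulations

print(simulated_p_value(observed_drop=4.0))  # large drop: rarely chance
print(simulated_p_value(observed_drop=0.5))  # small drop: often chance
```

With these made-up numbers, a 4-point average drop almost never happens by chance (p well under 0.05), while a 0.5-point drop arises by chance more than a third of the time, so it would not count as statistically significant.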

Journals generally prefer to publish statistically significant results, so scientists have incentives to select ways of parsing and analyzing their data that produce a p-value under 0.05. That’s p-hacking.
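That incentive can be sketched in code. In the toy simulation below (illustrative numbers, not drawn from any real study), a researcher measures several completely unrelated outcomes in a world where no effect exists, and reports "significance" if any one of them clears p < 0.05. Checking a single outcome keeps the false-positive rate near the advertised 5 percent; checking five pushes it past 20 percent.

```python
import math
import random

def null_study_p(rng, n=30):
    """Two-sided p-value for one outcome in a world where the true
    effect is exactly zero: n noisy observations, z-test on the mean."""
    mean = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
    z = mean / (1.0 / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2.0))

def false_positive_rate(n_outcomes, n_studies=4000, seed=1):
    """Fraction of all-null studies that still report 'significance'
    because the researcher checks n_outcomes outcomes and keeps the
    best-looking one."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_studies)
        if min(null_study_p(rng) for _ in range(n_outcomes)) < 0.05
    )
    return hits / n_studies

print(false_positive_rate(n_outcomes=1))  # honest single test: about 5%
print(false_positive_rate(n_outcomes=5))  # best of five: over 20%
```

Nothing here is faked in any individual test; the inflation comes entirely from choosing, after the fact, which of several legitimate-looking analyses to report.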

“It’s a great name—short, sweet, memorable, and just a little funny,” says Regina Nuzzo, a freelance science writer and senior advisor for statistics communication at the American Statistical Association.

Courtesy of Cards Against Humanity 

P-hacking as a term came into use as psychology and some other fields of science were experiencing a kind of existential crisis. Seminal findings were failing to replicate. Absurd results (ESP is real!) were passing peer review at well-respected academic journals. Efforts were underway to test the literature for false positives and the results weren’t looking good. Researchers began to realize that the problem might be woven into some long-standing and basic research practices.

Psychologists Uri Simonsohn, Joseph Simmons, and Leif Nelson elegantly demonstrated the problem in what is now a classic paper. “False-Positive Psychology,” published in 2011, used well-accepted methods in the field to show that the act of listening to the Beatles song “When I’m Sixty-Four” could take a year and a half off someone’s age. It all started over dinner at a conference where a group of researchers was discussing some findings they found difficult to believe. Afterward, Simonsohn, Simmons, and Nelson decided to see how easy it would be to reverse-engineer an impossible result with a p-value of less than 0.05. “We started brainstorming—if we wanted to show an effect that isn’t true, how would you run a study to get that result without faking anything?” Simonsohn told me.

They produced their absurd conclusion by exploiting what they called “researcher degrees of freedom”: the little decisions that scientists make as they’re designing a study and collecting and analyzing data. These choices include things like which observations to measure, which variables to compare, which factors to combine, and which ones to control for. Unless researchers have committed to a methodology and analysis plan in advance by preregistering a study, they are, in practice, free to make (or even change) these calls as they go.

The problem, as the Beatles song experiment showed, is that this kind of fiddling around allows researchers to manipulate their study conditions until they get the answer that they want—the grownup equivalent of kids at a slumber party applying pressure on the Ouija board planchette until it spells out the words they’re looking for.

A year later, the team went public with its new and better name for this phenomenon. At a psychology conference in 2012, Simonsohn gave a talk in which he used the term p-hacking for the first time.

“We needed a shorter word to describe [this set of behaviors], and p-dash-something seemed to make sense,” Simmons says. “P-hacking was definitely a better term than ‘researcher degrees of freedom’ because you could use it as a noun or an adjective.”

The phrase made its formal debut in a paper the team published in 2014, where they wrote “p-hacking can allow researchers to get most studies to reveal significant relationships between truly unrelated variables.”

They weren’t the first to identify what can go wrong when scientists exploit researcher degrees of freedom, but by coining the term p-hacking, Simonsohn, Simmons, and Nelson had given researchers a language to talk about it. “Our primary goal was to make it easier for us to present our work. The ambitious goal was that it would make it easier for other people to talk to each other about the topic,” Nelson says. “The popular acceptance of the term has outstripped our original ambitions.”

“It is brilliant marketing,” says Brian Nosek, cofounder of the Center for Open Science. The term p-hacking brings together a constellation of behaviors that methodologists have long recognized as undesirable, assigns them a name, and identifies their consequence, he adds. Nosek credits the term with helping researchers “organize and think about how their behaviors impact the quality of their evidence.”

As a wider conversation about reproducibility spread through the field of psychology, rival ways of describing p-hacking and related issues gained attention too. Columbia University statistician Andrew Gelman had used the term “the garden of forking paths” to describe the array of choices that researchers can select from when they’re embarking on a study analysis. Data mining, fishing expeditions, and data dredging are other descriptors that had been applied to the act of p-hacking.

Photograph: Jeopardy Productions, Inc. 

Gelman and his colleague Eric Loken didn’t care for these alternatives. In 2013, they wrote that they “regret the spread of the terms ‘fishing’ and ‘p-hacking’ (and even ‘researcher degrees of freedom’),” because they create the “misleading implication that researchers were consciously trying out many different analyses on a single data set.” The “garden of forking paths,” on the other hand, more aptly describes how researchers can get lost in all the decisions that go into data analysis, and not even realize that they’ve gone astray.

“People say p-hacking and it sounds like someone’s cheating,” Gelman says. “The flip side is that people know they didn’t cheat, so they don’t think they did anything wrong. But even if you don’t cheat, it’s still a moral error to misanalyze data on a problem of consequence.”

Simmons is sympathetic to this criticism. “We probably didn’t think enough about the connotations of the word ‘hacking,’ which implies intentions,” he says. “It sounds worse than we wanted it to.” He and his colleagues have been very explicit that p-hacking isn’t necessarily a nefarious endeavor, but rather a human one, and one that they themselves had been guilty of. At its core, p-hacking is really about confirmation bias—the human tendency to seek and preferentially find evidence that confirms what we’d like to believe, while turning a blind eye to things that might contradict our preferred truths.

The “hacking” part makes it sound like some sort of immoral behavior, and that’s not helpful, Simmons says. “People in power don’t understand the inevitability of p-hacking in the absence of safeguards against it. They think p-hacking is something that evil people do. And since we’re not evil, we don’t have to worry about it.” But Simmons says that p-hacking is a human default: “It’s something that every single person will do, that I continue to do when I don’t preregister my studies.” Without safeguards in place, he notes, it’s almost impossible to avoid.

Still, there’s something indisputably appealing about the term p-hacking. “You can’t say that someone got their data and garden-of-forking-pathed it,” Nelson adds. “We wanted to make it into a single action term.”



The genesis of the term p-hacking made it easier to talk about this phenomenon across fields by harkening to the fact that this was a behavior—something researchers were actually doing in their work. Even though it was developed by psychologists, the term p-hacking was soon being used by people talking about medicine, nutrition, biology or genetics, Nelson says. “Each of these fields have their own version, and they were like, great. Now we have a term to describe whatever is our version of semilegitimate statistical practices.”

The fact that p-hacking has now spread out of science and into pop culture could indicate a watershed moment in the public understanding of science, and a growing awareness that studies can’t always be taken at face value. But it’s hard to know exactly how the term is being understood at large.

It’s even possible that the popularization of p-hacking has turned the scientific process into a caricature of itself, reinforcing harmful ideas about the scientific method. “I would hate for the concept of p-hacking boiled down to something like ‘you can make statistics say anything you want’ or, worse, that ‘scientists are liars,’” says Nuzzo, the science writer. “Because neither of those things is true.”

In a perfect world, the wider public would understand that p-hacking refers not to some lousy tendency or lazy habit particular to researchers, but one that’s present everywhere. We all p-hack, to some extent, every time we set out to understand the evidence in the world around us. If there’s a takeaway here, it’s that science is hard—and sometimes our human foibles make it even harder.



How to Get Solar Power on a Rainy Day? Beam It From Space

Earlier this year, a small group of spectators gathered in David Taylor Model Basin, the Navy’s cavernous indoor wave pool in Maryland, to watch something they couldn’t see. At each end of the facility there was a 13-foot pole with a small cube perched on top. A powerful infrared laser beam shot out of one of the cubes, striking an array of photovoltaic cells inside the opposite cube. To the naked eye, however, it looked like a whole lot of nothing. The only evidence that anything was happening came from a small coffee maker nearby, which was churning out “laser lattes” using only the power generated by the system.

The laser setup managed to transmit 400 watts of power—enough for several small household appliances—through hundreds of meters of air without moving any mass. The Naval Research Lab, which ran the project, hopes to use the system to send power to drones during flight. But NRL electronics engineer Paul Jaffe has his sights set on an even more ambitious problem: beaming solar power to Earth from space. For decades the idea had been reserved for The Future, but a series of technological breakthroughs and a massive new government research program suggest that faraway day may have finally arrived.

Since the idea for space solar power first cropped up in Isaac Asimov’s science fiction in the early 1940s, scientists and engineers have floated dozens of proposals to bring the concept to life, including inflatable solar arrays and robotic self-assembly. But the basic idea is always the same: A giant satellite in orbit harvests energy from the sun and converts it to microwaves or lasers for transmission to Earth, where it is converted into electricity. The sun never sets in space, so a space solar power system could supply renewable power to anywhere on the planet, day or night, rain or shine.

Like fusion energy, space-based solar power seemed doomed to become a technology that was always 30 years away. Technical problems kept cropping up, cost estimates remained stratospheric, and as solar cells became cheaper and more efficient, the case for space-based solar seemed to be shrinking.

That didn’t stop government research agencies from trying. In 1975, after partnering with the Department of Energy on a series of space solar power feasibility studies, NASA beamed 30 kilowatts of power over a mile using a giant microwave dish. Beamed energy is a crucial aspect of space solar power, but this test remains the most powerful demonstration of the technology to date. “The fact that it’s been almost 45 years since NASA’s demonstration, and it remains the high-water mark, speaks for itself,” Jaffe says. “Space solar wasn’t a national imperative, and so a lot of this technology didn’t meaningfully progress.”

John Mankins, a former physicist at NASA and director of Solar Space Technologies, witnessed how government bureaucracy killed space solar power development firsthand. In the late 1990s, Mankins authored a report for NASA that concluded it was again time to take space solar power seriously and led a project to do design studies on a satellite system. Despite some promising results, the agency ended up abandoning it.

In 2005, Mankins left NASA to work as a consultant, but he couldn’t shake the idea of space solar power. He did some modest space solar power experiments himself and even got a grant from NASA’s Innovative Advanced Concepts program in 2011. The result was SPS-ALPHA, which Mankins called “the first practical solar power satellite.” The idea, says Mankins, was “to build a large solar-powered satellite out of thousands of small pieces.” His modular design brought the cost of hardware down significantly, at least in principle.

Jaffe, who was just starting to work on hardware for space solar power at the Naval Research Lab, got excited about Mankins’ concept. At the time he was developing a “sandwich module” consisting of a small solar panel on one side and a microwave transmitter on the other. His electronic sandwich demonstrated all the elements of an actual space solar power system and, perhaps most important, it was modular. It could work beautifully with something like Mankins’ concept, he figured. All they were missing was the financial support to bring the idea from the laboratory into space.

Jaffe invited Mankins to join a small team of researchers entering a Defense Department competition, in which they were planning to pitch a space solar power concept based on SPS-ALPHA. In 2016, the team presented the idea to top Defense officials and ended up winning four out of the seven award categories. Both Jaffe and Mankins described it as a crucial moment for reviving the US government’s interest in space solar power.

They might be right. In October, the Air Force Research Lab announced a $100 million program to develop hardware for a solar power satellite. It’s an important first step toward the first demonstration of space solar power in orbit, and Mankins says it could help solve what he sees as space solar power’s biggest problem: public perception. The technology has always seemed like a pie-in-the-sky idea, and the cost of setting up a solar array on Earth is plummeting. But space solar power has unique benefits, chief among them the availability of solar energy around the clock regardless of the weather or time of day.

It can also provide renewable energy to remote locations, such as forward operating bases for the military. And at a time when wildfires have forced the utility PG&E to kill power for thousands of California residents on multiple occasions, having a way to provide renewable energy through the clouds and smoke doesn’t seem like such a bad idea. (Ironically enough, PG&E entered a first-of-its-kind agreement to buy space solar power from a company called Solaren back in 2009; the system was supposed to start operating in 2016 but never came to fruition.)

“If space solar power does work, it is hard to overstate what the geopolitical implications would be,” Jaffe says. “With GPS, we sort of take it for granted that no matter where we are on this planet, we can get precise navigation information. If the same thing could be done for energy, it would be revolutionary.”

Indeed, there seems to be an emerging race to become the first to harness this technology. Earlier this year China announced its intention to become the first country to build a solar power station in space, and for more than a decade Japan has considered the development of a space solar power station to be a national priority. Now that the US military has joined in with a $100 million hardware development program, it may only be a matter of time before there’s a solar farm in the solar system.

Are Saturn’s Rings Really as Young as the Dinosaurs?

The Cassini spacecraft perished in a literal blaze of glory on September 15, 2017, when it ended its 13-year study of Saturn by intentionally plunging into the gas giant’s swirling atmosphere. The crash came after a final few months of furious study, during which Cassini performed the Grand Finale — a sensational, death-defying dance that saw the spacecraft dive between the planet and its rings 22 times.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

As new perspectives often do, this one revealed a surprise. Previously, planetary scientists had assumed that Saturn’s rings were as old as the solar system itself—about 4.5 billion years old. But cosmic clues hidden deep within the rings caused some Cassini scientists to massively revise this figure. The rings aren’t as old as the solar system, they argued in a paper published this summer in the journal Science. They emerged no more than 100 million years ago, back when dinosaurs roamed Earth.

An explosion of media coverage linking the rings to the age of dinosaurs helped to quickly solidify the new findings in the public’s eye. If you enter the search phrase “how old are Saturn’s rings,” Google returns the answer “100.1 million years.”

Aurélien Crida, a planetary scientist at the Côte d’Azur Observatory, was incredulous at this definitive declaration. “I was a bit pissed off by how it was assessed, that the rings are young and it’s over,” he said.

He and other skeptics have pointed out that there are a lot of potential problems with the argument, from the physics of the ring pollution to the origins of the rings themselves. “The rings look young, but that doesn’t mean they really are young,” said Ryuki Hyodo, a planetary scientist at the Japanese Aerospace Exploration Agency. “There are still some processes that we are not considering.”

The rings were named alphabetically in the order they were discovered. Starting from the innermost ring, the D ring is followed by the C, B, A, F, G and E rings. Video: NASA/JPL-Caltech/Space Science Institute

In response to the hypothesis, Crida coauthored a commentary for Nature Astronomy, published in September, that presented a litany of uncertainties. The dinosaurian age of the rings is an eye-catching claim, said Crida, but it circumvents an uncomfortable reality: Too many uncertainties exist to permit any firm estimate of the age of the rings. Despite Cassini’s heroics, “we’re not really far ahead of where we were almost 40 years ago,” back when the Voyager probes first took a good look at Saturn, said Luke Dones, a planetary scientist at the Southwest Research Institute in Boulder, Colorado.

Proponents of the younger age stand by their work. “Every new exciting result gets challenged,” said Burkhard Militzer, a planetary scientist at the University of California, Berkeley, and a coauthor of the Science paper. “It’s the natural way to proceed.”

The debate is about more than the narrow question of the rings’ age. The age of Saturn’s rings will influence how we understand many of Saturn’s moons, including the potentially life-supporting world Enceladus, with its frozen ocean. And it will also push us closer to answering the ultimate question about Saturn’s rings, one that humans have wondered about since Galileo first marveled at them over 400 years ago: Where did they come from in the first place?

Age From a Scale

We know the age of the Earth because we can use the decay of radioactive matter in rocks to work out how old they are. Planetary geologists have done the same for rocks from the moon and Mars.

Saturn’s rings, predominantly composed of ice fragments with trace amounts of rocky matter, don’t lend themselves to this kind of analysis, said Matthew Hedman, a planetary scientist at the University of Idaho. That means age estimates have to be based on circumstantial evidence.

Illustration: Lucy Reading-Ikkanda/Quanta Magazine, Source: NASA/JPL-Caltech/Space Science Institute; NASA’s Goddard Space Flight Center

That evidence, in part, comes from dust. Think of the icy rings as resembling a field of snow: After a pristine start, soot from afar gradually pollutes it. In order to estimate the age of the snow, scientists have to measure the rate at which soot is falling, as well as the total amount of soot already there.

Cassini did the first part with its Cosmic Dust Analyzer, which found that Saturn’s rings are being steadily polluted by darker material—a mixture of rocky dust and organic compounds. Most of this material is being delivered by micrometeoroids from the Kuiper belt, a distant source of icy objects beyond the orbit of Neptune. The spacecraft also found that the sooty material currently makes up about 1 percent of Saturn’s icy rings.

To uncover the total mass of cosmic soot in the rings, researchers then had to weigh the rings themselves. Thankfully, Cassini’s Grand Finale created just such an opportunity. As the spacecraft swooped through the rings, it precisely measured the net gravitational pull at every point. Since gravity fields are dependent on an object’s mass, this feat allowed scientists to directly weigh the entire ring system.


During Cassini’s Grand Finale, the spacecraft dove between the rings and the planet 22 times. The maneuver began and ended with close flybys of Saturn’s moon Titan, whose orbit is shown in yellow.

With this information—the amount of soot and the rate at which it is falling—scientists estimated that it would have taken between 10 million and 100 million years for that proverbial snowy field to find itself sullied. The findings were generally well received. “Most of the community today is convinced that the rings were formed recently,” said Luciano Iess, an expert in aerospace engineering at Sapienza University of Rome and the Science study’s lead author.
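The arithmetic behind that estimate is essentially a single division: the mass of dark material already in the rings over the rate at which new material arrives. The sketch below uses the Grand Finale's published ring mass of roughly 1.5 × 10^19 kg and the roughly 1 percent non-icy fraction mentioned above; the influx rate and the sticking fraction are illustrative placeholders, not the Cassini team's figures.

```python
SECONDS_PER_YEAR = 3.15e7

def ring_exposure_age(ring_mass_kg, nonicy_fraction, influx_kg_per_s,
                      sticking_fraction=1.0):
    """Years of micrometeoroid bombardment needed to build up the
    observed non-icy ('soot') mass: age = soot mass / effective influx."""
    soot_mass = ring_mass_kg * nonicy_fraction
    effective_influx = influx_kg_per_s * sticking_fraction
    return soot_mass / effective_influx / SECONDS_PER_YEAR

# Ring mass ~1.5e19 kg (Cassini gravity measurements), ~1% dark material.
# The 50 kg/s influx is an invented round number for illustration.
age = ring_exposure_age(ring_mass_kg=1.5e19, nonicy_fraction=0.01,
                        influx_kg_per_s=50.0)
print(f"{age:.1e} years")  # on the order of 1e8 years
```

Note how sensitive the answer is to the uncertain inputs: halving the assumed influx doubles the inferred age, and lowering `sticking_fraction` to 0.1, as the "fudge factor" discussed below suggests, multiplies it by ten. That sensitivity is exactly why the published estimate spans a full order of magnitude, from 10 million to 100 million years.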

Yet the pollution argument isn’t watertight. Dones points out that the Cassini team analyzing the incoming pollution has not settled on a precise rate. Various values have appeared in several conference presentations, but a final figure hasn’t yet been published. In the Science paper, the researchers chose one of these values and came up with a youthful ring age. But this ambiguity has been “causing a lot of consternation,” said Paul Estrada, a planetary scientist at NASA’s Ames Research Center who is a member of the Cassini team analyzing pollution.

The pollution rate may have also changed relatively recently. “It could just be that the bombardment rate is unusually high at the moment,” said Crida, even if we can’t say what would cause such a spike. In theory, a future mission to Saturn could dig out a rocky core from an old moon, one that preserves the pollution flux over time, said Tracy Becker, a planetary scientist at the Southwest Research Institute in San Antonio, Texas. But such a mission would be decades in the future.

We also don’t fully understand the physics behind the ring darkening. The micrometeoroids from the Kuiper belt slam into the rings’ icy chunks at such high speeds that the impacts are like little explosions, suggesting that not much of the micrometeoroids adheres. This has led to a fudge factor in the literature—guesstimates that 10 percent of the micrometeoroidal matter sticks to the ice and pollutes it.

Dones said that the Dust Accelerator Laboratory at the University of Colorado, Boulder, may be able to replicate this impact process and give us a better idea of the staying power of the pollutants. But for now, we’re in the dark.

Enceladus is an icy world that hides a subsurface ocean of salty water. Geysers on its surface, seen at the bottom of the moon in the image on the right, shoot material out hundreds of miles into space, potentially feeding Saturn’s rings. Photograph: NASA/JPL/Space Science Institute
Video: NASA/JPL-Caltech/Space Science Institute 

Crida’s commentary also suggested that an incognito planetary scrubber may be removing pollution to make the rings appear deceptively youthful. We’ve known since the Voyager days that material from the rings rains down onto the surface of Saturn. But we haven’t known what that material is made of. Cassini measured the rain using two separate instruments. Both found that it contains surprisingly little ice—as little as 24 percent. “That’s very confusing, given that the rings are measured to be over 95 percent water,” said James O’Donoghue, a planetary scientist at the Japanese Aerospace Exploration Agency. The “rain” is preferentially removing dirt, but no one knows why.

“There is something that is cleaning the rings,” said Crida. “We don’t know what it is, but it is now an observed fact, it’s not just a conjecture.”

Crida said that perhaps the ice ejected by micrometeoroid impacts tends to reattach itself to the rings, while the ejected pollutants rain out. Becker conjectures that pollution is being preferentially ejected by impacts, regardless of whether the ice is reattaching itself in this manner. And Hyodo wonders whether the geysers on Enceladus’ south pole are adding more water, diluting the rings’ pollution. But no one knows for sure.

But not everyone believes that there’s a lot of cleaning going on. “Getting the stuff dirty is easy,” said Militzer. “Cleaning is hard.”

Where They Came From

What if, said Crida, the pollution argument is correct? What if the rings have always been exposed to an unchanging influx of cosmic dust, and the rings are 100 million years old at most? Then we would have to explain how the rings formed so recently, which is a tricky prospect.

First, we have no idea what created the rings, so assigning them an origin story at any point in time is difficult. The rings may be the vestige of a comet torn asunder by Saturn’s gravitational tides, or the product of a collision between a comet and an icy moon, or the result of something that disturbed the orbit of several moons, causing them to smash into each other.

A sample-return mission to Saturn’s icy loops could find the remnants of the original bodies that were annihilated and used to forge the rings, said Militzer. But no such mission is forthcoming.

At the edge of Saturn’s B ring, vertical structures rise as high as 2.5 kilometers above the plane of the rings, casting long shadows. The typical thickness of the rings is only about 10 meters. Photograph: NASA/JPL/Space Science Institute

Second, the solar system’s first billion years or so were a pinball-like pandemonium, with protoplanetary objects constantly colliding. These days, said Crida, things are far more settled, so the likelihood of a catastrophic collision leading to Saturn’s rings is far lower. If they did form in a recent cataclysm, said Militzer, such an event would dramatically change our perspective: It would imply that our planetary neighborhood hasn’t entirely outgrown the bedlam of its primeval days just yet.

Linda Spilker, the Cassini project scientist at NASA’s Jet Propulsion Laboratory, said clues may lie in Saturn’s moons, as their development is somewhat linked to that of the rings. But their own stories are also riddled with uncertainties, from their origins to their ages.

A 2016 model, using the current positions of the moons to peer backward through time, suggests that the present system of rings and inner moons could have been created when a pair of midsize moons smashed into each other about 100 million years ago.

But the ability of such a collision to form the rings we see now, said Dones, is an active controversy; a much-debated 2017 study, for example, suggests that not enough material would have been available to make today’s rings. “It just doesn’t work,” said Crida, adding that the only way this two-moon impact could have created all those moons and rings is through “magic.”

“The question of whether the rings are old or young will one day be definitively answered,” said Becker. But right now, there is enough evidence on both sides that “there’s still plenty to argue about before we can say anything conclusively.”

While the past is unclear, the future seems more certain. The rings may look permanent, but the opposite is true. Observations from a telescope atop Hawaii’s Mauna Kea volcano found torrents of material raining out from the rings. When scientists add this to the materia
