When you’re thinking deep space, it’s essential to start planning early, at least at our current state of technology. Sedna, for example, is getting attention as a mission target because while it’s on an 11,000 year orbit around the Sun, its perihelion at 76 AU is coming up in 2075. Given travel times measured in decades, we’d like to launch as soon as possible, which realistically means sometime in the 2040s. The small body of scientific literature building up around such a mission now includes a consideration of two alternative propulsion strategies.
Because we’ve recently discussed one of these – an inflatable sail taking advantage of desorption on an Oberth maneuver around the Sun – I’ll focus on the second, a Direct Fusion Drive (DFD) rocket engine now under study at the Princeton Plasma Physics Laboratory. Here the fusion fuel would be deuterium and helium-3, yielding a thermonuclear thruster whose plasma heating system produces power in the range of 1 to 10 MW.
DFD is a considerable challenge given the time needed to overcome issues like plasma stability, heat dissipation, and operational longevity, according to authors Elena Ancona and Savino Longo (Politecnico di Bari, Italy) and Roman Ya. Kezerashvili (CUNY), the latter having offered up the sail concept mentioned above. See Inflatable Technologies for Deep Space for more on this sail. A mission to so distant a target as Sedna demands evaluation of long-term operations and the production of reliable power for onboard instruments.
Nonetheless, the Princeton work has captured the attention of many in the space community as being one of the more promising studies of a propulsion method that could have profound consequences for operations far from the Sun. And it’s also true that getting off at a later date is not a showstopper. Sedna spends about 50 years within 85 AU of the Sun and almost two centuries within 100 AU, so there is an ample window for developing such a mission. Some mission profiles for closer targets, such as Titan and various Trans-Neptunian objects, and other Solar System destinations are already found in the literature.
Image: Schematic diagram of the Direct Fusion Drive engine subsystems with its simple linear configuration and directed exhaust stream. A propellant is added to the gas box. Fusion occurs in the closed-field-line region. Cool plasma flows around the fusion region, absorbs energy from the fusion products, and is then accelerated by a magnetic nozzle. Credits [5].
The Direct Fusion Drive produces electrical power as well as propulsion from its reactor, and shows potential for all these targets as well as, obviously, more immediate destinations like Mars. What is being called a ‘radio frequency plasma heating’ method uses a magnetic field that contains the hot plasma and ensures stability that has been hard to achieve in other fusion designs. Deuterium and tritium turn out to be the most effective fuels in terms of energy produced, but the deuterium/helium-3 reaction is aneutronic, and therefore does not require the degree of shielding that would otherwise be needed.
The disadvantages are also stark, and merit a look lest we get overly optimistic about the calendar. Helium-3 and deuterium require reactor temperatures as much as six times higher than demanded with the D-T reaction. Moreover, there is the supply problem, for the amount of helium-3 available is limited:
The D-3He reaction is appealing since it has a high energy release and produces only charged particles (making it aneutronic). This allows for easier energy containment within the reactor and avoids the neutron production associated with the D-T reaction. However, the D-3He reaction faces the challenge of a higher Coulomb barrier [electrostatic repulsion between positively charged nuclei], requiring a reactor temperature approximately six times greater than that of a D-T reactor to achieve a comparable reaction rate.
So what is the outline of a Direct Fusion Drive mission if we manage to overcome these issues? The authors posit a 1.6 MW DFD, working the numbers on the Earth escape trajectory, interplanetary cruise (a coasting phase) and final maneuvers at target. In an earlier paper, Kezerashvili showed that the DFD option could reach the dwarf planet Eris in 10 years. That distance of 78 AU roughly matches Sedna’s perihelion, meaning Sedna itself could be visited in about half the time calculated for any other propulsion system considered.
Bear in mind that it has taken New Horizons 19 years to reach 61 AU. DFD is considerably faster, but notice that the mission outlined here assumes that the drive can be switched on and off for thrust generation, meaning a period of inactivity during the coasting phase. Will DFD have this capability? The authors also evaluate a constant thrust profile, with the disadvantage that it would require additional propellant, reducing payload mass.
In the thrust-coast-thrust profile, the authors’ goal is to deliver a 1500 kg payload to Sedna in less than 10 years. They calculate that approximately 4000 kg of propellant would be required for the mission, with the DFD engine itself weighing in at 2000 kg. Total launch mass thus comes to about 7500 kg, varying with instrumentation requirements. From the paper:
The total ∆V for the mission reaches 80 km/s, with half of that needed to slow down during the rendezvous phase, where the coasting velocity is 38 km/s. Each maneuver would take between 250 and 300 days, requiring about 1.5 years of thrust over the 10-year journey. However, the engine would remain active to supply power to the system. The amount of 3He required is estimated at 0.300 kg.
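It’s worth pausing to check these figures against the ideal rocket equation. Here is a minimal sketch in Python, using the masses and ∆V quoted above; the exhaust velocity and specific impulse that fall out are my own back-of-the-envelope inference, not numbers from the paper:

```python
import math

# Check the quoted mission masses against the ideal rocket equation:
# delta_v = v_e * ln(m0 / mf)
m_payload = 1500.0     # kg, payload delivered to Sedna
m_engine = 2000.0      # kg, DFD engine
m_propellant = 4000.0  # kg, propellant for the two burns
m0 = m_payload + m_engine + m_propellant  # ~7500 kg launch mass
mf = m0 - m_propellant                    # dry mass after both burns

delta_v = 80e3  # m/s, total mission delta-V quoted in the paper

v_e = delta_v / math.log(m0 / mf)  # exhaust velocity needed to close
print(f"implied exhaust velocity: {v_e / 1e3:.0f} km/s")
print(f"implied specific impulse: {v_e / 9.81:.0f} s")
# ~105 km/s and Isp ~ 10,700 s -- consistent with the specific
# impulse range usually quoted for Direct Fusion Drive concepts.
```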
The launch opportunities considered here begin in 2047, offering time for DFD technology to mature. As I mentioned above, I have not developed the solar sail alternative with desorption here since we’ve examined that option in recent posts. But by way of comparison, the DFD option, if it becomes available, would enable much larger payloads and the rich scientific reward of a mission that could orbit its target rather than perform a flyby. Payloads of up to 1500 kg may become feasible, as opposed to the solar sail mission, whose payload is 1.5 kg.
The two missions offer stark alternatives, with the authors making the case that a fast sail flyby taking advantage of advances in miniaturization still makes a rich contribution – we can refer again to New Horizons for proof of that. The solar sail analysis with reliance on sail desorption and a Jupiter gravity assist makes it to Sedna in a surprising seven years, thus beating even DFD. The velocity is achieved by coating the sail with materials that are powerfully ‘desorbed’ or emitted from it once it reaches a particular heliocentric distance.
Thus my reference to an ‘Oberth maneuver,’ a propulsive kick delivered as the spacecraft reaches perihelion. Both concepts demand extensive development. Remember that this paper is intended as a preliminary feasibility assessment:
Rather than providing a fully optimized mission design, this work explores the trade-offs and constraints associated with each approach, identifying the critical challenges and feasibility boundaries. The analysis includes trajectory considerations, propulsion system constraints, and an initial assessment of science payload accommodation. By structuring the feasibility assessment across these categories, this study provides a foundation for future, more detailed mission designs.
Image: Diagram showing the orbits of the 4 known sednoids (pink) as of April 2025. Positions are shown as of 14 April 2025, and orbits are centered on the Solar System Barycenter. The red ring surrounding the Sun represents the Kuiper belt; the orbits of sednoids are so distant they never intersect the Kuiper belt at perihelion. Credit: Nrco0e, via Wikimedia Commons CC BY-SA 4.0.
The boomlet of interest in Sedna arises from several factors, including the fact that its eccentric orbit takes it well beyond the heliopause. In fact, Sedna spends only between 200 and 300 years within 100 AU, which is less than 3% of its orbital period. Thus its surface is protected from solar radiation affecting its composition. Any organic materials found there would help us piece together information about photochemical processes in their formation as opposed to other causes, a window into early chemical reactions in the origin of life. The hypothesis that Sedna is a captured object only adds spice to the quest.
The paper is Ancona et al., “Feasibility study of a mission to Sedna – Nuclear propulsion and advanced solar sailing concepts,” available as a preprint.
We have so few exoplanets that can actually be seen rather than inferred through other data that the recent news concerning the star TWA 7 resonates. The James Webb Space Telescope provided the data on a gap in one of the rings found around this star, with the debris disk itself imaged by the European Southern Observatory’s Very Large Telescope as per the image below. The putative planet has a mass similar to Saturn’s, which would make it the lightest planet ever observed through direct imaging.
Image: Astronomers using the NASA/ESA/CSA James Webb Space Telescope have captured compelling evidence of a planet with a mass similar to Saturn orbiting the young nearby star TWA 7. If confirmed, this would represent Webb’s first direct image discovery of a planet, and the lightest planet ever seen with this technique. Credit: © JWST/ESO/Lagrange.
Adding further interest to this system is that TWA 7 is an M-dwarf, one whose pole-on dust ring was discovered in 2016, so we may have an example of a gas giant in formation around such a star, a rarity indeed. The star is a member of the TW Hydrae Association, a grouping of young, low-mass stars sharing a common motion and, at roughly ten million years old, a common age. As is common with young M-dwarfs, TWA 7 is known to produce strong X-ray flares.
We have the French-built coronagraph installed on JWST’s MIRI instrument to thank for this catch. Developed through the Centre national de la recherche scientifique (CNRS), the coronagraph masks starlight that would otherwise obscure the still unconfirmed planet. The candidate lies within a disk of debris and dust observed ‘pole-on,’ that is, as if we were looking down on the disk from above. Young planets forming in such a disk are hotter and brighter than those in mature systems, making them much easier to detect in the mid-infrared range.
In the case of TWA 7, the ring-like structure was obvious. In fact, there are three rings here, the narrowest of which is bordered by regions nearly empty of matter. Narrowing down the planet candidate took observations, but also simulations that produced the same result: a thin ring with a gap at the position where the presumed planet is found. Which is to say that the planet solution makes sense, but we can’t yet call this a confirmed exoplanet.
The paper in Nature runs through other explanations for this object, including a distant dwarf planet in our own Solar System or a background galaxy. The problem with the first is that no proper motion is observed here, as would be the case even with a very remote object like Eris or Sedna, both of which showed discernible proper motion at the time of their discovery. As to background galaxies, there is nothing reported at optical or near-infrared wavelengths, but the authors cannot rule out “an intermediate-redshift star-forming [galaxy],” although they calculate that probability at about 0.34%.
The planet option seems overwhelmingly likely, as the paper notes:
The low likelihood of a background galaxy, the successful fit of the MIRI flux and SPHERE upper limits by a 0.3-MJ planet spectrum and the fact that an approximately 0.3-MJ planet at the observed position would naturally explain the structure of the R2 ring, its underdensity at the planet’s position and the gaps provide compelling evidence supporting a planetary origin for the observed source. Like the planet β Pictoris b, which is responsible for an inner warp in a well-resolved—from optical to millimetre wavelengths—debris disk, TWA 7b is very well suited for further detailed dynamical modelling of disk–planet interactions. To do so, deep disk images at short and millimetre wavelengths are needed to constrain the disk properties (grain sizes and so on).
So we have a probable planet in formation here, a hot, bright object that is at least 10 times lighter than any exoplanet that has ever been directly imaged. Indeed, the authors point out something exciting about JWST’s capabilities. They argue that planets as light as 25 to 30 Earth masses could have been detected around this star. That’s a hopeful note as we move the ball forward on detecting smaller exoplanets down to Earth-class with future instruments.
Image: The disk around the star TWA 7 recorded using ESO’s Very Large Telescope’s SPHERE instrument. The image captured with JWST’s MIRI instrument is overlaid. The empty area around TWA 7 b is in the R2 ring (CC #1). Credit: © JWST/ESO/Lagrange.
The paper is Lagrange et al., “Evidence for a sub-Jovian planet in the young TWA 7 disk,” Nature 25 June 2025 (full text).
This morning’s post grows out of listening to John Coltrane’s album Sun Ship earlier in the week. If you’re new to jazz, Sun Ship is not where you want to begin, as Coltrane was already veering in a deeply avant garde direction when he recorded it in 1965. But over the years it has held a fascination for me. Critic Edward Mendelowitz called it “a riveting glimpse of a band traveling at warp speed, alternating shards of chaos and beauty, the white heat of virtuoso musicians in the final moments of an almost preternatural communion…” McCoy Tyner’s piano is reason enough to listen.
As music often does for me, Sun Ship inspired a dream that mixed the music of the Coltrane classic quartet (Tyner, Jimmy Garrison and Elvin Jones) with an ongoing story. The Parker Solar Probe is, after all, a real ‘sun ship,’ one that on December 24 of last year made its closest approach to the Sun. Moving inside our star’s corona is a first – the craft closed to within 6.1 million kilometers of the solar surface.
When we think of human technology in these hellish conditions, those of us with an interstellar bent naturally start musing about ‘sundiver’ trajectories, using a solar slingshot to accelerate an outbound spacecraft, perhaps with a propulsive burn at perihelion. The latter option makes this an ‘Oberth maneuver’ and gives you a maximum outbound kick. Coltrane might have found that intriguing – one of his later albums was, after all, titled Interstellar Space.
I find myself musing on speed. The fastest humans have ever moved is the 39,897 kilometers per hour that the trio of Apollo 10 astronauts – Tom Stafford, John Young and Eugene Cernan – experienced on their return to Earth in 1969. The figure translates into just over 11 kilometers per second, which isn’t half bad. Consider that Voyager 1 moves at 17.1 km/sec, and it’s the fastest object we’ve yet been able to send into deep space.
True, New Horizons has the honor of being the fastest craft immediately after launch, moving at over 16 km/sec and thus eclipsing Voyager 1’s speed before the latter’s gravity assists. But New Horizons has since slowed as it climbs out of the Sun’s gravitational well, now making on the order of 14.1 km/sec, with no gravity assists ahead. Wonderfully, operations continue deep in the Kuiper Belt.
It’s worth remembering that at the beginning of the 20th century, a man named Fred Marriott became the fastest man alive when he managed 200 kilometers per hour in a steam-powered car (and somehow survived). Until we launched the Parker Solar Probe, the two Helios missions counted as the fastest man-made objects, moving in elliptical orbits around the Sun that reached 70 kilometers per second. Parker outdoes this: At perihelion in late 2024, it managed 191.2 km/sec, so it now holds velocity as well as proximity records.
191.2 kilometers per second gets you to Proxima Centauri in something like 6,600 years. A bit long even for the best equipped generation ship, I think you’ll agree. Surely Heinlein’s ‘Vanguard,’ the starship in Orphans of the Sky was moving at a much faster clip even if its journey took many centuries to reach the same star. I don’t think Heinlein ever let us know just how many. Of course, we can’t translate the Parker spacecraft’s infalling velocity into comparable numbers on an outbound journey, but it’s fun to speculate on what these numbers imply.
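For readers who like to check the arithmetic, here is a quick constant-velocity sketch of these travel times. No acceleration or deceleration is modeled, and of course Parker’s figure is an infalling perihelion speed rather than an outbound one:

```python
# Constant-velocity travel times to Proxima Centauri (4.246 light years)
# at the speeds discussed above.
LY_KM = 9.4607e12        # kilometers per light year
SEC_PER_YEAR = 3.156e7
dist_km = 4.246 * LY_KM

speeds_km_s = {
    "Apollo 10 (reentry)": 11.08,
    "New Horizons (now)": 14.1,
    "Voyager 1": 17.1,
    "Helios (perihelion)": 70.0,
    "Parker (perihelion, 2024)": 191.2,
}

for name, v in speeds_km_s.items():
    years = dist_km / v / SEC_PER_YEAR
    print(f"{name:28s} {years:9,.0f} years")
# Parker's 191.2 km/s works out to roughly 6,600 years, as above.
```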
Image: The United Launch Alliance Delta IV Heavy rocket launches NASA’s Parker Solar Probe to touch the Sun, Sunday, Aug. 12, 2018, from Launch Complex 37 at Cape Canaveral Air Force Station, Florida. Parker Solar Probe is humanity’s first-ever mission into a part of the Sun’s atmosphere called the corona. The mission continues to explore solar processes that are key to understanding and forecasting space weather events that can impact life on Earth. It also gives a nudge to interstellar dreamers. Credit: NASA/Bill Ingalls.
Speaking of Voyager 1, another interesting tidbit relates to distance: In 2027, the perhaps still functioning spacecraft will become the first human object to reach one light-day from the Sun. That’s just a few steps in terms of an interstellar journey, but nonetheless meaningful. Currently radio signals take over 23 hours to reach the craft, with another 23 required for a response to be recorded on Earth. Notice that 2027 will also mark the 50th year since the two Voyagers were launched. January 28, 2027 is a day to mark in your calendar.
Since we’re still talking about speeds that result in interstellar journeys in the thousands of years, it’s also worth pointing out that 11,000 work-years were devoted to the Voyager project through the Neptune encounter in 1989, according to NASA. That is roughly the equivalent of a third of the effort estimated to complete the Great Pyramid at Giza during the reign of Khufu (c. 2580–2560 BCE) in the fourth dynasty of the Old Kingdom. That’s also a tidbit from NASA, telling me that someone there is taking a long-term perspective.
Coltrane’s Sun Ship has also led me to the ‘solar boat’ associated with Khufu. The vessel was found sealed in a space near the Great Pyramid and is the world’s oldest intact ship, buried around 2500 BCE. It’s a ritual vessel that, according to archaeologists, was intended to carry the resurrected Khufu across the sky to reach the Sun god the Egyptians called Ra.
Image: The ‘sun ship’ associated with the Egyptian king Khufu of the Old Kingdom’s fourth dynasty. Credit: Olaf Tausch, CC BY 3.0. Wikimedia Commons.
My solar dream reminds me that interstellar travel demands reconfiguring our normal distance and time scales as we comprehend the magnitude of the problem. While Voyager 1 will soon reach a distance of 1 light day, it takes light 4.2 years to reach Proxima Centauri. To get around thousand-year generation ships, we are examining some beamed energy solutions that could drive a small sail to Proxima in 20 years. We’re a long way from making that happen, and certainly nowhere near human crew capabilities for interstellar journeys.
But breakthroughs have to be imagined before they can be designed. Our hopes for interstellar flight exercise the mind, forcing the long view forward and back. Out of such perspectives dreams come, and one day, perhaps, engineering.
Sometimes all it takes to spawn a new idea is a tiny smudge in a telescopic image. What counts, of course, is just what that smudge implies. In the case of the object called ‘Oumuamua, the implication was interstellar, for whatever it was, this smudge was clearly on a hyperbolic orbit, meaning it was just passing through our Solar System. Jim Bickford wanted to see the departing visitor up close, and that was part of the inspiration for a novel propulsion concept.
Now moving into a Phase II study funded by NASA’s Innovative Advanced Concepts office (NIAC), the idea is dubbed Thin-Film Nuclear Engine Rocket (TFINER). Not the world’s most pronounceable acronym, but if the idea works out, that will hardly matter. Working at the Charles Stark Draper Laboratory, a non-profit research and development company in Cambridge, MA, Bickford is known to long-time Centauri Dreams readers for his work on naturally occurring antimatter capture in planetary magnetic fields. See Antimatter Acquisition: Harvesting in Space for more on this.
Image: Draper Laboratory’s Jim Bickford, taking a deep space propulsion concept to the next level. Credit: Charles Stark Draper Laboratory.
Harvesting naturally occurring antimatter in space offers some hope of easing one of the biggest problems of such propulsion strategies, namely the difficulty in producing enough antimatter to fuel an engine. With the Thin Film Nuclear Engine Rocket, Bickford again tries to change the game. The notion is to use energetic radioisotopes in thin layers, allowing their natural decay products to propel a spacecraft. The proper substrate, Bickford believes, can control the emission direction, and the sail-like system packs a punch: Velocity changes on the order of 100 kilometers per second using mere kilograms of fuel.
I began this piece talking about ‘Oumuamua, but that’s just for starters, because if we can create a reliable propulsion system capable of such tantalizing speeds, we can start thinking about mission targets as distant as the solar gravitational focus, where extreme magnifications become possible. The lensing effect for practical purposes begins at 550 AU and continues along a focal line to infinity, so we are looking at a long journey. Bear in mind that Voyager 1, our most distant working spacecraft, has taken almost half a century to reach, as of now, 167 AU. To image more than one planet at the solar lens, we’ll also need a high degree of maneuverability to shift between exoplanetary systems.
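A rough sense of what a 100 km/s capability buys: treat the cruise as constant speed and the timescale to the gravitational lens region drops from half-centuries to decades. A sketch, with the caveat that a real acceleration profile would stretch these numbers:

```python
# Cruise times to the solar gravitational lens region, treating the
# ~100 km/s TFINER delta-v as a constant cruise speed.
AU_KM = 1.496e8
SEC_PER_YEAR = 3.156e7

v_km_s = 100.0
for target_au in (167, 550, 1000):
    years = target_au * AU_KM / v_km_s / SEC_PER_YEAR
    print(f"{target_au:5d} AU: {years:5.0f} years")
# ~26 years to the 550 AU lens region -- compare Voyager 1's nearly
# fifty years to reach 167 AU.
```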
Image: This is Figure 3-1 from the Phase 1 report. Caption: Concept for the thin film thrust sheet engine. Alpha particles are selectively emitted in one direction at approximately 5% of the speed of light. Credit: NASA/James Bickford.
So we’re looking at a highly desirable technology if TFINER can be made to work, one that could offer imaging of exoplanets, outer planet probes, and encounters with future interstellar interlopers. Bickford’s Phase 1 work will be extended in the new study to refine the mission design, which will include thruster experiments as well as what the Phase II summary refers to as ‘isotope production paths’ while also considering opportunities for hybrid missions that could include the Oberth ‘solar dive’ maneuver. More on that last item soon, because Bickford will be presenting a paper on the prospect this summer.
Image: Artist concept highlighting the novel approach proposed by the 2025 NIAC awarded selection of the TFINER concept. This is the baseline TFINER configuration used in the system analysis. Credit: NASA/James Bickford.
But let’s drop back to the Phase I study. I’ve been perusing Bickford’s final report. Developing out of Wolfgang Moeckel’s work at NASA Lewis in the 1970s, the TFINER design uses thin sheets of radioactive elements. The solution leverages exhaust velocities for alpha particle decays that can exceed 5 percent of the speed of light. You’ll recall my comment on space sails in the previous post, where we looked at the advantage of inflatable components to make sails more scalable. TFINER is more scalable still, with changes to the amount of fuel and sheet area being key variables.
Let’s begin with a ~10-micron thick Thorium-228 radioisotope film, with each sheet containing three layers, integrating the active radioisotope fuel layer in the middle. Let me quote from the Phase I report on this:
It must be relatively thin to avoid substantial energy losses as the alpha particles exit the sheet. A thin retention film is placed over this to ensure that the residual atoms do not boil off from the structure. Finally, a substrate is added to selectively absorb alpha emission in the forward direction. Since decay processes have no directionality, the substrate absorber produces the differential mass flow and resulting thrust by restricting alpha particle trajectories to one direction.
The TFINER baseline uses 400 meter tethers to connect the payload module. The sheet’s power comes from Thorium-228 decay (alpha decay) — the half-life is 1.9 years. We get a ‘decay cascade chain’ that produces daughter products – four additional alpha emissions result with half-lives ranging from 300 ns to 3 days. The uni-directional thrust is produced thanks to the beryllium absorber (~35-microns thick) that coats one side of the thin film to capture emissions moving forward. Effective thrust is thus channeled out the back.
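The ‘approximately 5% of the speed of light’ in the figure caption falls straight out of the decay energetics. A minimal sketch, using an approximate alpha kinetic energy for Th-228 (the exact figures vary slightly across the decay chain):

```python
import math

# Alpha particle speed from Th-228 decay, non-relativistic approximation:
# (1/2) m v^2 = E  =>  v = c * sqrt(2E / (m c^2))
E_alpha = 5.42        # MeV, approximate alpha kinetic energy (Th-228)
m_alpha_c2 = 3727.4   # MeV, alpha particle rest energy
c = 2.998e8           # m/s

v = c * math.sqrt(2 * E_alpha / m_alpha_c2)
print(f"alpha speed: {v:.2e} m/s ({100 * v / c:.1f}% of c)")
# ~1.6e7 m/s, about 5.4% of lightspeed -- the caption's 'approximately
# 5%'. The daughter alphas in the cascade emerge at comparable energies.
```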
Note as well the tethers in the illustration, which keep the sensor array and communications module far enough from the thrust sheet to minimize the radiation dose. Manipulation of the tethers can control trajectory on single-stage missions to deep space targets. Reconfiguring the thrust sheet is what makes the design maneuverable, allowing thrust vectoring, even as longer half-life isotopes can be deployed in the ‘staged’ concept to achieve maximum velocities for extended missions.
Image: This is Figure 7-8 from the report. Caption: Example thrust sheet rotation using tether control. Credit: NASA/James Bickford.
From the Phase I report:
The payload module is connected to the thrust sheet by long tethers. A winch on the payload module can individually pull-in or let-out line to manipulate the sail angle relative to the payload. The thrust sheet angle will rotate the mean thrust vector and operate much like trimming the sail of a boat. Of course, in this case, the sail (sheet) pressure comes from the nuclear exhaust rather than the wind.
It’s hard to imagine the degree of maneuverability here being replicated in any existing propulsion strategy, a big payoff if we can learn how to control it:
This approach allows the thrust vector to be rotated relative to the center of mass and velocity vector to steer the spacecraft’s main propulsion system. However, this is likely to require very complex controls especially if the payload orientation also needs to be modified. The maneuvers would all be slow given the long line lengths and separations involved.
Spacecraft pointing and control is an area as critical as the propulsion system. The Phase I report goes into the above options as well as thrust vectoring through sheets configured as panels that could be folded and adjusted. The report also notes that thermo-electrics within the substrate may be used to generate electrical power, although a radioisotope thermal generator integrated with the payload may prove a better solution. The report offers a good roadmap for the design refinement of TFINER coming in Phase II.
One idea for deep space probes that resurfaces every few years is the inflatable sail. We’ve seen keen interest, especially since Breakthrough Starshot’s emergence in 2016, in meter-class sails, perhaps as small as four meters on a side. But if we choose to work with larger designs, sails don’t scale well. Increase sail area and problems of mass arise thanks to the necessary cables between sail and payload. An inflatable beryllium sail filled with a low-pressure gas like hydrogen avoids this problem, with payload mounted on the space-facing surface. Such sails have already been analyzed in the literature (see below).
Roman Kezerashvili (City University of New York), in fact, recently analyzed an inflatable torus-shaped sail with a twist, one that uses compounds incorporated into the sail material itself as a ‘propulsive shell’ that can take advantage of desorption produced by a microwave beam or a close pass of the Sun. Laser beaming also produces this propulsive effect but microwaves are preferable because they do not damage the sail material. Kezerashvili has analyzed carbon in the sail lattice in solar flyby scenarios. The sail is conceived as “a thin reflective membrane attached to an inflatable torus-shaped rim.”
Image: This is Figure 1 from the paper. Credit: Kezerashvili et al.
Inflatable sails go back to the late 1980s. Joerg Strobl published a paper on the concept in the Journal of the British Interplanetary Society, and it received swift follow-up in a series of studies in the 1990s examining an inflatable radio telescope called Quasat. A series of meetings involving Alenia Spazio, an Italian aerospace company based in Turin, took the idea further. In 2018, Claudio Maccone, Greg Matloff, and NASA’s Les Johnson joined Kezerashvili in analyzing inflatable technologies for missions as challenging as a probe to the Oort Cloud.
Indeed, working with Joseph Meany in 2023, Matloff would also describe an inflatable sail using aerographite and graphene, enabling higher payload mass and envisioning a ‘sundiver’ trajectory to accelerate an Alpha Centauri mission. The conception here is for a true interstellar ark carrying a crew of several hundred, using a sail with a radius of 764 kilometers on a 1300 year journey. So the examination of inflatable sails in varying materials is clearly not slowing down.
But there are uses for inflatable space structures that go beyond outer system missions. I see that they are now the subject of a NIAC Phase I study by John Mather (NASA GSFC) that puts a new wrinkle into the concept. Mather’s interest is not propulsion but an inflatable starshade that, in various configurations, could work either with the planned Habitable Worlds Observatory or an Extremely Large Telescope like the 39 m diameter European Extremely Large Telescope now being built in Chile. Starshades are an alternative to coronagraph technologies that suppress light from a star to reveal its planetary companions.
Inflatables may well be candidates for any kind of large space structure. Current planning on the Habitable Worlds Observatory, scheduled for launch no earlier than the late 2030s, includes a coronagraph, but Mather thinks the two technologies offer useful synergies. Here’s a quote from the study summary:
A starshade mission could still become necessary if: A. The HWO and its coronagraph cannot be built and tested as required; B. The HWO must observe exoplanets at UV wavelengths, or a 6 m HWO is not large enough to observe the desired targets; C. HWO does not achieve adequate performance after launch, and planned servicing and instrument replacement cannot be implemented; D. HWO observations show us that interesting exoplanets are rare, distant, or are hidden by thick dust clouds around the host star, or cannot be fully characterized by an upgraded HWO; or E. HWO observations show that the next step requires UV data, or a much larger telescope, beyond the capability of conceivable HWO coronagraph upgrades.
So Mather’s idea is also meant to be a kind of insurance policy. It’s worth pointing out that coronagraphs are well studied and compact, while starshade technologies are theoretically sound but untested in space. But as the summary mentions, work at ultraviolet wavelengths is out of the coronagraph’s range. To get into that part of the spectrum, pairing Habitable Worlds Observatory with a 35-meter starshade seems the only option. This conceivably would allow a relaxation of some HWO optical specs, and thus lower the overall cost. The NIAC study will explore these options for a 35-meter as well as a 60-meter starshade.
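Starshade size trades directly against separation distance through simple geometry: the inner working angle is roughly the shade radius divided by the shade-telescope separation. A sketch of that relation below; the separation distance is purely illustrative, as the study summary doesn’t specify one:

```python
import math

# Starshade geometry: inner working angle ~ shade radius / separation.
R_shade_m = 35.0 / 2   # radius of the 35-meter starshade
d_sep_m = 1.0e8        # 100,000 km separation -- an illustrative value

theta_mas = math.degrees(R_shade_m / d_sep_m) * 3600e3  # rad -> milliarcsec
print(f"inner working angle ~ {theta_mas:.0f} mas")
# ~36 mas. At 10 parsecs that angle corresponds to ~0.36 AU, which is
# why shade diameter and separation trade directly against which
# habitable zones can be probed.
```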
I mentioned the possibility of combining a starshade with observations through an Extremely Large Telescope, an eye-widening notion that Mather proposed in a 2022 NIAC Phase I study. The idea here is to place the starshade in an orbit that would match position and velocity with the telescope, occulting the star to render the planets more visible. This would demand an active propulsion system to maintain alignment during the observation, while also making use of the adaptive optics already built into the telescope to suppress atmospheric distortion. The mission is called Hybrid Observatory for Earth-like Exoplanets.
Image: Artist concept highlighting the novel approach proposed by the 2025 NIAC awarded selection of Inflatable Starshade for Earthlike Exoplanets concept. Credit: NASA/John Mather.
As discussed earlier, mass considerations play into inflatable designs. In the HOEE study, Mather referred to his plan “to cut the starshade mass by more than a factor of 10. There is no reason to require thousands of kg to support 400 kg of thin membranes.” His design goal is to create a starshade that can be assembled in space, thus avoiding launch and deployment issues. All this to create what he calls “the most powerful exoplanet observatory yet proposed.”
You can see how the inflatable starshade idea grows out of the hybrid observatory study. By experimenting with designs that meet the strength, stiffness, stability and thermal requirements, and by working through the issues raised by bonding large sheets of material of the requisite strength, the mass goals may be realized. So the inflatable sail option once again morphs into a related design, one optimized as an adjunct to an exoplanet observatory, provided this early work can lead to a solution that could benefit both astronomy and sails.
The paper on inflatable beryllium sails is Matloff, G. L., Kezerashvili, R. Ya., Maccone, C. and Johnson, L., “The Beryllium Hollow-Body Solar Sail: Exploration of the Sun’s Gravitational Focus and the Inner Oort Cloud”, arXiv:0809.3535v1 [physics.space-ph] 20 Sep 2008. The later Kezerashvili paper is Kezerashvili et al., “A torus-shaped solar sail accelerated via thermal desorption of coating,” Advances in Space Research Vol. 67, Issue 9 (2021), pp. 2577-2588 (abstract). The Matloff and Meany paper on an Alpha Centauri interstellar ark is ”Aerographite: A Candidate Material for Interstellar Photon Sailing,” JBIS Vol. 77 (2024) pp. 142-146. Thanks to Thomas Mazanec and Antonio Tavani for the pointer to Mather’s most recent NIAC study.
The assumption that gas giant planets are unlikely around red dwarf stars is reasonable enough. A star perhaps 20 percent the mass of the Sun should have a smaller protoplanetary disk, meaning the gas and dust needed to build a Jupiter-class world are lacking. The core accretion model (a gradual accumulation of material from the disk) is severely challenged. Moreover, these small stars are active in their extended youth, sending out frequent flares and strong stellar winds that should dissipate such a disk quickly. Gravitational instability within the disk is one possible alternative formation route.
Planet formation around such a star must be swift indeed, which accounts for estimates as low as 1 percent of such stars having a gas giant in the system. Exceptions like GJ 3512 b, discovered in 2019, do occur, and each is valuable. Here we have a giant planet, discovered through radial velocity methods, orbiting a star a scant 12 percent of the Sun’s mass. Or consider the star GJ 876, which has two gas giants, or the exoplanet TOI-5205 b, a transiting gas giant discovered in 2023. Such systems leave us hoping for more examples to begin to understand the planet forming process in such a difficult environment.
Let me drop back briefly to a provocative study from 2020 by way of putting all this in context before we look at another such system that has just been discovered. In this earlier work, the data were gathered at the Atacama Large Millimeter/submillimeter Array (ALMA), taken at a wavelength of 0.87 millimeters. The team led by Nicolas Kurtovic (Max Planck Institute for Astronomy, Heidelberg) found evidence of ring-like structures in protoplanetary disks that extend between 50 and 90 AU out.
Image: This is a portion of Figure 2 from the paper, which I’m including because I doubt most of us have seen images of a red dwarf planetary disk. Caption: Visibility modeling versus observations of our sample. From left to right: (1) Real part of the visibilities after centering and deprojecting the data versus the best fit model of the continuum data, (2) continuum emission of our sources where the scale bar represents a 10 au distance, (3) model image, (4) residual map (observations minus model), (5) and normalized, azimuthally averaged radial profile calculated from the beam convolved images in comparison with the model without convolution (purple solid line) and after convolution (red solid line). In the right most plots, the gray scale shows the beam major axis. Credit: Kurtovic et al.
Gaps in these rings, possibly caused by planetary embryos, would accommodate planets of the Saturn class, and the researchers find that gaps formed around three of the M-dwarfs in the study. The team suggests that ‘gas pressure bumps’ develop to slow the inward migration of the disk, allowing such giant worlds to coalesce. It’s an interesting possibility, but we’re still early in the process of untangling how this works. For more, see How Common Are Giant Planets around Red Dwarfs?, a 2021 entry in these pages.
Now we learn of TOI-6894 b, a transiting gas giant found as part of Edward Bryant’s search for such worlds at the University of Warwick and the University of Liège. An international team of astronomers confirmed the find using telescopes at the SPECULOOS and TRAPPIST projects. The work appears in Nature Astronomy (citation below). Here’s Bryant on the scope of the search for giant M-dwarf planets:
“I originally searched through TESS observations of more than 91,000 low-mass red-dwarf stars looking for giant planets. Then, using observations taken with one of the world’s largest telescopes, ESO’s VLT, I discovered TOI-6894 b, a giant planet transiting the lowest mass star known to date to host such a planet. We did not expect planets like TOI-6894b to be able to form around stars this low-mass. This discovery will be a cornerstone for understanding the extremes of giant planet formation.”
TOI-6894 b has a radius a little larger than Saturn’s, although it carries only about half of Saturn’s mass. What adds spice to this particular find is that the host star is the lowest-mass star yet found to have a transiting giant planet. In fact, TOI-6894 is only 60 percent the size of the next smallest red dwarf with a transiting gas giant. Given that 80 percent of stars in the Milky Way are red dwarfs, determining an accurate percentage of red dwarf gas giants is significant for assessing the total number in the galaxy.
Image: Artwork depicting the exoplanet TOI-6894 b around a red dwarf star. This planet is unusual because, given the size/mass of the planet relative to the very low mass of the star, this planet should not have been able to form. The planet is very large compared to its parent star, shown here to scale. With the known temperature of the star, the planet is expected to be only approximately 420 Kelvin at the top of its atmosphere. This means its atmosphere may contain methane and ammonia, amongst other species. This would make this planet one of the first planets outside the Solar System where we can observe nitrogen, which alongside carbon and oxygen is a key building block for life. Credit: University of Warwick / Mark Garlick.
TOI-6894 b produces deep transits and sports temperatures in the range of 420 K, according to the study. Clearly this world is not in the ‘hot Jupiter’ category. Amaury Triaud (University of Birmingham) is a co-author on this paper:
“Based on the stellar irradiation of TOI-6894 b, we expect the atmosphere is dominated by methane chemistry, which is exceedingly rare to identify. Temperatures are low enough that atmospheric observations could even show us ammonia, which would be the first time it is found in an exoplanet atmosphere. TOI-6894 b likely presents a benchmark exoplanet for the study of methane-dominated atmospheres and the best ‘laboratory’ to study a planetary atmosphere containing carbon, nitrogen, and oxygen outside the Solar System.”
Thus it’s good to know that JWST observations targeting the atmosphere of this world are already on the calendar within the next 12 months. Rare worlds that can serve as benchmarks for hitherto unexplained processes are pure gold for our investigation of where and how giant planets form.
The paper is Bryant et al., “A transiting giant planet in orbit around a 0.2-solar-mass host star,” Nature Astronomy (2025). Full text. The Kurtovic study is Kurtovic, Pinilla, et al., “Size and Structures of Disks around Very Low Mass Stars in the Taurus Star-Forming Region,” Astronomy & Astrophysics, 645, A139 (2021). Full text.
When one set of data fails to agree with another over the same phenomenon, things can get interesting. It’s in such inconsistencies that interesting new discoveries are sometimes made, and when the inconsistency involves the expansion of the universe, there are plenty of reasons to resolve the problem. Lately the speed of the expansion has been at issue given the discrepancy between measurements of the cosmic microwave background and estimates based on Type Ia supernovae. The result: The so-called Hubble Tension.
It’s worth recalling that it was a century ago that Edwin Hubble measured extragalactic distances by using Cepheid variables in the galaxy NGC 6822. The measurements were necessarily rough because they were complicated by everything from interstellar dust effects to lack of the necessary resolution, so that the Hubble constant was not known to better than a factor of 2. Refinements in instruments tightened up the constant considerably as work progressed over the decades, but the question of how well astronomers had overcome the conflict with the microwave background results remained.
Now we have new work that looks at the rate of expansion using data from the James Webb Space Telescope, doubling the sample of galaxies used to calibrate the supernovae results. The paper’s lead author, Wendy Freedman of the University of Chicago, argues that the JWST results resolve the tension. With Hubble data included in the analysis as well, Freedman calculates a Hubble value of 70.4 kilometers per second per megaparsec, plus or minus 3%. This result brings the supernovae results into statistical agreement with recent cosmic microwave background data showing 67.4, plus or minus 0.7%.
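The sense in which these numbers ‘agree’ can be made concrete with a quick error-propagation check. This is a simplification of the full statistical analysis in the paper, but it shows the scale of the remaining discrepancy:

```python
import math

# Gaussian consistency check between the two H0 measurements:
h_sne, err_sne = 70.4, 70.4 * 0.030   # CCHP supernova calibration, +/-3%
h_cmb, err_cmb = 67.4, 67.4 * 0.007   # CMB value, +/-0.7%

sigma = abs(h_sne - h_cmb) / math.hypot(err_sne, err_cmb)
print(f"difference: {abs(h_sne - h_cmb):.1f} km/s/Mpc ({sigma:.1f} sigma)")
# ~1.4 sigma: well within the conventional 2-sigma threshold, which is
# the sense in which the new calibration eases the Hubble tension.
```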
Image: The University of Chicago’s Freedman, a key player in the ongoing debate over the value of the Hubble Constant. Credit: University of Chicago.
While the cosmic microwave background tells us about conditions early in the universe’s expansion, Freedman’s work on supernovae is aimed at pinning down how fast the universe is expanding in the present, which demands accurate measurements of extragalactic distances. Knowing the maximum brightness of supernovae allows the use of their apparent brightness to calculate their distance. Type Ia supernovae are consistent in brightness at their peak, making them, like the Cepheid variables Hubble used, helpful ‘standard candles.’
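The standard candle method reduces to the distance modulus relation m − M = 5 log10(d) − 5, with d in parsecs. A minimal sketch, with the peak absolute magnitude and the example apparent magnitude as illustrative values:

```python
# Distance from a standard candle via the distance modulus:
# m - M = 5 * log10(d_parsecs) - 5
M_PEAK = -19.3  # approximate peak absolute magnitude of a Type Ia supernova

def distance_mpc(m_apparent: float, M: float = M_PEAK) -> float:
    """Distance in megaparsecs implied by an observed apparent magnitude."""
    d_pc = 10 ** ((m_apparent - M + 5) / 5)
    return d_pc / 1e6

# A Type Ia supernova observed to peak at apparent magnitude 18.5
# (an illustrative number) would lie at roughly:
print(f"{distance_mpc(18.5):.0f} Mpc")  # ~363 Mpc
```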
The same effects that plagued Hubble, such as dimming from interstellar dust and other factors that alter luminosity, have to be accounted for, but JWST has four times the resolution of the Hubble Space Telescope and is roughly 10 times as sensitive, making its measurements a new gold standard. Co-author Taylor Hoyt (Lawrence Berkeley Laboratory) notes the result:
“We’re really seeing how fantastic the James Webb Space Telescope is for accurately measuring distances to galaxies. Using its infrared detectors, we can see through dust that has historically plagued accurate measurement of distances, and we can measure with much greater accuracy the brightnesses of stars.”
Image: Scientists have made a new calculation of the speed at which the universe is expanding, using the data taken by the powerful new James Webb Space Telescope on multiple galaxies. Above, Webb’s image of one such galaxy, known as NGC 1365. Credit: NASA, ESA, CSA, Janice Lee (NOIRLab), Alyssa Pagan (STScI).
A lack of agreement between the CMB findings and the supernovae data could have been pointing to interesting new physics, but according to this work, the Standard Model of the universe holds up. In a way, that’s too bad for using the discrepancy to probe into mysterious phenomena like dark energy and dark matter, but it seems we’ll have to be looking elsewhere for answers to their origin. Ahead for Freedman and team are new measurements of the Coma cluster that Freedman suggests could fully resolve the matter within years.
As the paper notes:
While our results show consistency with ΛCDM (the Standard Model), continued improvement to the local distance scale is essential for further reducing both systematic and statistical uncertainties.
The paper is Freedman et al., “Status Report on the Chicago-Carnegie Hubble Program (CCHP): Measurement of the Hubble Constant Using the Hubble and James Webb Space Telescopes,” The Astrophysical Journal Vol. 985, No 2 (27 May 2025), 203 (full text).
Here about the beach I wander’d, nourishing a youth sublime
With the fairy tales of science, and the long result of Time…
—Tennyson
Temporal coincidence plays havoc with our ideas about other civilizations in the cosmos. If we want to detect them, their society must at least have developed to the point that it can manipulate electromagnetic waves. But its technology has to be of sufficient strength to be noticed. The kind of signals people were listening to 100 years ago on crystal sets wouldn’t remotely fit the bill, and neither would our primitive TV signals of the 1950s. So we’re looking for strong signals and cultures older than our own.
Now consider how short a time we’re talking about. We have been using radio for a bit over a century, which is on the order of one part in 100,000,000 of the lifespan of our star. You may recall the work of Brian Lacki, which I wrote about four years ago (see Alpha Centauri and the Search for Technosignatures). Lacki, now at Oxford, points out how unlikely it would be to find any two stars remotely near each other whose civilization ‘window’ corresponded to our own. In other words, even if we last a million years as a technological civilization, we’re just the blink of an eye in cosmic time.
Image: Brian Lacki, whose work for Breakthrough Listen continues to explore both the scientific and philosophical implications of SETI. Credit: University of Oxford.
Adam Frank at the University of Rochester has worked this same landscape. He thinks we might well find ourselves in a galaxy that at one time or another had flourishing civilizations that are long gone. We are separated not only in space but also in time. Maybe there are such things as civilizations that are immortal, but it seems more likely that all cultures eventually end, even if by morphing into some other form.
What would a billion year old civilization look like? Obviously we have no idea, but it’s conceivable that such a culture, surely non-biological and perhaps non-corporeal, would be able to manipulate matter and spacetime in ways that might simply mimic nature itself. Impossible to find that one. A more likely SETI catch would be a civilization that has had space technologies just long enough to have the capability of interstellar flight on a large scale. In a new paper, Lacki looks at what its technosignature might look like. If you’re thinking Dyson spheres or swarms, you’re on the right track, but as it turns out, such energy gathering structures have time problems of their own.
Lacki’s description of a megaswarm surrounding a star:
These swarms, practically by definition, need to have a large number of elements, whether their purpose is communication or exploitation. Moreover, the swarm orbital belts need to have a wide range of inclinations. This ensures that the luminosity is being collected or modulated in all directions. But this in turn implies a wide range of velocities, comparable to the circular orbital velocity. Another problem is that the number of belts that can “fit” into a swarm without crossing is limited.
Image: Artist’s impression of a Dyson swarm. Credit: Archibald Tuttle / Wikimedia Commons. CC BY-SA 4.0.
The temporal problem persists, for even a million year ‘window’ is a sliver on the cosmic scale. The L factor in the Drake equation is a great unknown, but it is conceivable that the death of a million-year old culture would be survived by its artifacts, acting to give us clues to its past just as fossils tell us about the early stages of life on Earth. So might we hope to find an ancient, abandoned Dyson swarm around a star close enough to observe?
Lacki is interested in failure modes, the problem of things that break down. Helpfully, megastructures are by definition gigantic, and it is not inconceivable that Dyson structures of one kind or another could register in our astronomical data. As the paper notes, a wide variety covering different fractions of the host star can be imagined. We can scale a Dyson swarm down or up in size, with perhaps the largest ever proposed being from none other than Nikolai Kardashev, who discusses in a 1985 paper a disk parsecs-wide built around a galactic nucleus (!).
I’m talking about Dyson swarms instead of spheres because from what we know of material science, solid structures would suffer from extreme instabilities. But swarms can be actively managed. We have a history of interest in swarms dating back to 1958, when Project Needles at MIT contemplated placing a ring of 480,000,000 copper dipole antennas in orbit to enhance military communications (the idea was also known as Project West Ford). Although two launches were carried out experimentally, the project was eventually shelved because of advances in communications satellites.
So we humans already ponder enclosing the planet in one way or another, and planetary swarms, as Lacki notes, are already with us, considering the constellations of satellites in Earth orbit, the very early stages of a mini Dyson swarm. Just yesterday, the announcement by SpinLaunch that it will launch hundreds of microsatellites into orbit using a centrifugal cannon gave us another instance. Enclosing a star in a gradually thickening swarm seems like one way to harvest energy, but if such structures were built, they would have to be continuously maintained. The civilization behind a Dyson swarm needs to survive if the swarm itself is to remain viable.
For the gist of Lacki’s paper is that on the timescales we’re talking about, an abandoned Dyson swarm would be in trouble within a surprisingly short period of time. Indeed, collisions can begin once the guidance systems in place begin to fail. What Lacki calls the ‘collisional time’ is roughly an orbital period divided by the covering fraction of the swarm. How long it takes to develop into a ‘collisional cascade’ depends upon the configuration of the swarm. Let me quote the paper, which identifies:
…a major threat to megastructure lifespans: if abandoned, the individual elements eventually start crashing into each other at high speeds (as noted in Lacki 2016; Sallmen et al. 2019; Lacki 2020). Not only do the collisions destroy the crashed swarm members, but they spray out many pieces of wreckage. Each of these pieces is itself moving at high speeds, so that even pieces much smaller than the original elements can destroy them. Thus, each collision can produce hundreds of missiles, resulting in a rapid growth of the potentially dangerous population and accelerating the rate of collisions. The result is a collisional cascade, where the swarm elements are smashed into fragments, that are in turn smashed into smaller pieces, and so on, until the entire structure has been reduced to dust. Collisional cascades are thought to have shaped the evolution of minor Solar System body objects like asteroid families and the irregular satellites of the giant planets (Kessler 1981; Nesvorný et al.).
You might think that swarm elements could be organized so that their orbits reduce or eliminate collisions or render them slow enough to be harmless. But gravitational perturbations remain a key problem because the swarm isn’t an isolated system, and in the absence of active maintenance, its degradation is relatively swift.
Image: This is Figure 2 from the paper. Caption: A sketch of a series of coplanar belts heating up with randomized velocities. In panel (a), the belt is a single orbit on which elements are placed in an orderly fashion. Very small random velocities (meters per second or less) cause small deviations in the elements’ orbits, though so small that the belt is still “sharp”, narrower than the elements themselves (b). The random velocities cause the phases to desynchronize, leading to collisions, although they are too slow to damage the elements (cyan bursts). The collision time decreases rapidly in this regime until the belt is as wide as the elements themselves and becomes “fuzzy” (c). The collision time is at its minimum, although impacts are still too small to cause damage. In panel (d), the belts are still not wide enough to overlap, but relative speeds within the belts have become fast enough to catastrophically damage elements (yellow explosions), and are much more frequent than the naive collisional time implies because of the high density within belts. Further heating causes the density to fall and collisions to become rarer until the belts start to overlap (e). Finally, the belts grow so wide that each belt overlaps several others, with collisions occurring between objects in different belts too (f), at which point the swarm is largely randomized. Credit: Brian Lacki.
Lacki’s mathematical treatment of swarm breakdown is exhaustive and well above my pay grade, so I send you to the paper if you want to track the calculations that drive his simulations. But let’s talk about the implications of his work. Far from being static technosignatures, megaswarms surrounding stars are shown to be highly vulnerable. Even the minimal occulter swarm he envisions turns out to have a collision time of less than a million years. A megaswarm needs active maintenance – in our own system, Jupiter’s gravitational effect on a megaswarm would destroy it within several hundred thousand years. These are wafer-thin time windows if scaled against stellar lifetimes.
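That scaling – collisional time roughly equal to orbital period divided by covering fraction – is easy to explore. A minimal sketch; the covering fractions below are my illustrative choices, not numbers from the paper:

```python
# Lacki's rough scaling: collisional time ~ orbital period / covering
# fraction, for a swarm around a solar-mass star.
def collisional_time_years(a_au: float, covering_fraction: float) -> float:
    period_years = a_au ** 1.5  # Kepler's third law, solar-mass star
    return period_years / covering_fraction

# Sparse occulter swarm at 1 AU covering 0.01% of the sky:
print(f"{collisional_time_years(1.0, 1e-4):,.0f} years")  # 10,000 years
# Dense power-harvesting swarm at 5 AU covering 10% of the star:
print(f"{collisional_time_years(5.0, 0.1):,.0f} years")   # ~112 years
```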
The solution is to actively maintain the megaswarm and remove perturbing objects by ejecting them from the system. An interesting science fiction scenario indeed, in which extraterrestrials might sacrifice systems planet by planet to maintain a swarm. Lacki works the simulations through gravitational perturbations from passing stars and in-system planets and points to the Lidov-Kozai effect, which turns circular orbits at high inclination into eccentric orbits at low inclination. Also considered is radiation pressure from the host star and radiative forces resulting from the Yarkovsky effect.
How else to keep a swarm going? From the paper:
For all we know, the builders are necessarily long-lived and can maintain an active watch over the elements and actively prevent collisions, or at least counter perturbations. Conceivably, they could also launch tender robots to do the job for them, or the swarm elements have automated guidance. Admittedly, their systems would have to be kept up for millions of years, vastly outlasting anything we have built, but this might be more plausible if we imagine that they are self-replicating. In this view, whenever an element is destroyed, the fragments are consumed and forged into a new element; control systems are constantly regenerated as new generations of tenders are born. Even then, self-replication, repair, and waste collection are probably not perfectly efficient.
The outer reaches of a stellar system would be a better place for a Dyson swarm than the inner system, which would be hostile to small swarm elements, even though the advantage of such a position would be more efficient energy collection. The habitable zone around a star is perhaps the least likely place to look for such a swarm given the perturbing effects of other planets. And if we take the really big picture, we can talk about where in the galaxy swarms might be likely: Low density environments where interactions with other stars are unlikely, as in the outskirts of large galaxies and in their haloes. “This suggests,” Lacki adds, “that megaswarms are more likely to be found in regions that are sometimes considered disfavorable for habitability.”
Ultimately, an abandoned Dyson swarm is ground into microscopic particles via the collision cascades Lacki describes, evolving into nothing more than dispersed ionized gas. If we hope to find an abandoned megastructure like this in our practice of galactic archaeology, what are the odds that we will find it within the window of time within which it can survive without active maintenance? We’d better hope that the swarm creators have extremely long-lived civilizations if we are to exist in the same temporal window as the swarm we want to observe. A dearth of Dyson structures thus far observed may simply be a lack of temporal coincidence, as we search for systems that are inevitably wearing down without the restoring hand of their creators.
The paper is Lacki, “Ground to Dust: Collisional Cascades and the Fate of Kardashev II Megaswarms,” accepted at The Astrophysical Journal (preprint). The Kardashev paper referenced above is “On the Inevitability and the Possible Structure of Super Civilizations,” in The Search for Extraterrestrial Life: Recent Developments, ed. M. D. Papagiannis, Vol. 112, 497–504.
It’s no surprise, human nature being what it is, that our early detections of possible life on other worlds through ‘biosignatures’ are immediately controversial. We have to separate signs of biology from processes that may operate completely outside of our conception of life, abiotic ways to produce the same results. My suspicion is that this situation will persist for decades, claim vs. counter-claim, with heated conference sessions and warring papers. But as Alex Tolley explains in today’s essay, even a null result can be valuable. Alex takes us into the realm of Bayesian statistics, where prior beliefs are gradually adjusted as new data come in. We’re still dealing with probabilities, but in a fascinating way, uncertainties are gradually being decreased though never eliminated. We’re going to be hearing a lot more about these analytical tools as the hunt continues with next generation telescopes.
by Alex Tolley
Introduction
The venerable Drake equation’s early parameters are increasingly constrained as our exoplanet observations continue. We now have a sample of thousands of exoplanets with which to estimate the fraction of planets in the habitable zone that could support life. That sample firms up the term ne, the mean number of planets that could support life per star with planets.
The focus now shifts to the next term, the fraction of habitable planets with life (fl). The first team to confirm a planet with life will likely make the history books.
However, as with the failure of SETI to receive a signal from extraterrestrial intelligence (ETI) since the 1960s, there will be disappointments in detecting extraterrestrial life. The early expectation of Martian vegetation proved incorrect, as did the controversial Martian microbes thought to have been detected by the Viking lander life detection experiments in 1976. More recently, the phosphine biosignature in the Venusian atmosphere has not been confirmed, and now the claimed dimethyl sulfide (DMS) biosignature on K2-18b is also questioned.
While we hope that an unambiguous biosignature will be detected, are null results just disappointments with no value in determining whether life is present in the cosmos, or do they help constrain the frequency of habitable planets with life?
Before diving into a recent paper that attempts to answer this question, I want to give a quick introduction to the statistics. The most common approach is Fisherian statistics, in which collected sample data are used to estimate the distribution parameters of the population from which the sample is drawn. This approach is used when the sample size is greater than 1 or 2, and is most often deployed in calculating the accuracy of a mean value and the 95% range of values as part of a test of significance. It works well when the sample contains enough examples to represent the population. For binary events, such as heads in a coin test, the Binomial distribution provides the expected frequencies of heads for unbiased and slightly biased coins.
However, a problem arises when the frequency of a binary event is so low that the sample detects no positive events, such as heads, at all. In the pharmaceutical industry, while establishing the efficacy of a new drug needs a large sample size for validity, the much larger phase 4 marketing period is used to monitor for rare side effects that are not discoverable in clinical trials. A number of well known drugs have been withdrawn from the market during this period, perhaps the most famous being thalidomide, with its effects on fetal development. In such circumstances, Fisherian statistics are unhelpful: the sample is simply too small to catch the rare events whose probability we want to determine. As we have seen with SETI, the lack of any detected signal tells us little about the probability that ETI exists, only that ETI is either rare or not signaling. All SETI scientists can do is keep searching in the hope that eventually a signal will be detected.
Bayesian statistics take a different approach, one that can help overcome the problem of determining the probability of rare events and that has gained in popularity over the last few decades. One assumes a prior belief, perhaps no more than a guess, of the probability of an event, and then adjusts it as new observations are acquired. For example, one assumes a coin toss is 50:50 heads or tails. If succeeding tosses show only tails, the evidence mounts that the coin is biased, and each new tail decreases the probability assigned to a head on the next toss. For our astrobiological example, if life is very infrequent on habitable worlds, Bayesian statistics can still provide an informative estimate of the probability of detection success.
In essence, the Bayesian method updates our belief in the probability of an event as new observations of that event come in. With a large enough number of observations, the estimate converges on the true probability, whether or not that matches the initial expectation.
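For readers who want to see the mechanics, here is a minimal sketch of that coin-toss updating using the Beta distribution, the conjugate prior for binary outcomes. This is my illustration, not code from the paper; the 50:50 starting belief is the uniform Beta(1, 1):

```python
from scipy.stats import beta

# Bayesian updating of a coin-toss belief with a conjugate Beta prior.
# Start from a uniform Beta(1, 1) prior: heads and tails equally likely.
a, b = 1.0, 1.0

# Suppose every observed toss comes up tails.
for n_tails in [1, 5, 10, 50]:
    post = beta(a, b + n_tails)   # posterior after n_tails tails, 0 heads
    print(f"after {n_tails:2d} tails: "
          f"P(heads) estimate = {post.mean():.3f}, "
          f"95% upper bound = {post.ppf(0.95):.3f}")
```

Each new tail shifts the posterior further toward “this coin never shows heads,” but the upper bound never quite reaches zero, which is the behavior that matters for null results.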
I hope it is clear that this Bayesian approach is well suited to the announcement of detecting a biosignature on a planet, where detections to date have been either absent or controversial. Each detection or non-detection in a survey will update our expectations of the frequency of life. At this time, the probability of life on a potentially habitable planet ranges from 0 (life is unique to Earth) to 1.0 (some form of life appears wherever it is possible). Beliefs that the abiogenesis of life is extremely hard due to its complexity push the probability of detecting life toward 0. Conversely, the increasing evidence that life emerges quickly on a new planet, such as within 100 million years on Earth [6], implies that the probability of a habitable planet having life is close to 1.0.
The Angerhausen et al paper I am looking at today (citation below) considers a number of probability distributions reflecting different beliefs about the probability of life, rather than a single value for each belief. These are shown in Figure 1 and explained in Box 2. I would in particular note the Kerman and Jeffreys distributions, which are bimodal with the highest likelihoods at the extremes. They reflect the “fine tuning” argument for life by Kipping et al [2] explained in the Centauri Dreams post [3]: either life is almost absent or it is ubiquitous, and the probability of its appearing on a habitable planet is unlikely to be intermediate. In other words, the probability is either very close to 0 or close to 1.0. The paper relies on the Beta distribution [Box 3], which describes the probability of positive and negative events using 2 parameters for the binary state of the event, e.g. life detected or not detected. This distribution can approximate the Binomial distribution, but can also represent the different prior probability distributions.
Figure 1. The five different prior distributions, as probability density functions (PDF), used in the paper and explained in Box 2. Note the Kerman and Jeffreys distributions, which bias the probabilities toward the extremes, compared to the “biased optimist,” which assumes 3 habitable worlds around the Sun (Venus, Earth, and Mars) with only the Earth having life.
The Beta distribution is adjusted by the number of positive and negative detections of biosignatures. At this point, with actual observations relatively few, the positive and negative counts come from the assumed prior distributions, which can take any values, from guesses to preliminary observational results. After all, we are still arguing over whether we have even detected biosignature molecules, let alone confirmed a detection. We then adjust those expectations by the new observations.
What happens when we start a survey and begin accumulating biosignature observations? Using the Jeffreys prior distribution, let us see the effect of up to 100 negative biosignature observations.
Figure 2a. The effect of increasing null observations on a skewed distribution, showing the increasing certainty of the low probability frequencies. While the high probabilities appear to rise as well, the increase in null detections implies that the relative frequency of positives declines.
Figure 2b. The increasing certainty that the frequency of life on habitable planets tends towards 0 as the number of null biosignature detections increases. The starting value of 0.5 is taken from the Jeffreys prior distribution. The implied frequency is the new frequency of positives as the null detections reduce the frequency observed and push the PDF towards the lower bound of 0 (see Figure 1).
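A sketch of the arithmetic behind Figures 2a and 2b, assuming the conjugate Beta updating described above with a Jeffreys Beta(1/2, 1/2) prior (my reconstruction of the calculation, not the authors’ code):

```python
from scipy.stats import beta

# Jeffreys prior for a binary "life / no life" outcome: Beta(1/2, 1/2).
a0, b0 = 0.5, 0.5

# Update with 0 biosignature detections out of n observed habitable worlds.
for n in [0, 10, 50, 100]:
    post = beta(a0, b0 + n)   # posterior Beta(a0 + 0 detections, b0 + n nulls)
    print(f"{n:3d} nulls: implied frequency = {post.mean():.4f}, "
          f"99.9% upper bound = {post.ppf(0.999):.3f}")
```

The implied frequency starts at 0.5 (the mean of the Jeffreys prior) and falls toward zero as nulls accumulate, which is exactly the behavior Figure 2b shows.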
So far, so good. If the biosignature detection is unambiguous, and the presence or absence of life can be inferred from it with certainty, then a sample of up to 100 habitable worlds will indicate with high confidence whether life is rare or ubiquitous. If every star system had at least 1 habitable world, this sample would include most stars within 20 ly of Earth. In reality, if we limit our stars to spectral types F, G & K, which represent 5-10% of all stars, and assume half of these have at least 1 habitable world, then we need to search 2000-4000 star systems, all well within 100 ly, a tiny fraction of the galaxy.
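Those survey volumes are easy to check with a back-of-envelope star count, assuming a local stellar density of roughly 0.004 stars per cubic light year (a commonly quoted solar-neighborhood figure):

```python
import math

# Back-of-envelope check on the survey volumes quoted above.
density = 0.004  # stars per cubic light year, approximate solar neighborhood value

def stars_within(r_ly):
    """Stars inside a sphere of radius r_ly, assuming uniform density."""
    return density * (4 / 3) * math.pi * r_ly**3

print(f"stars within 20 ly: {stars_within(20):.0f}")   # ~130, i.e. ~100 systems

# Radius needed to enclose ~3000 star systems (mid-range of the 2000-4000
# estimate in the text):
target = 3000
r = (3 * target / (4 * math.pi * density)) ** (1 / 3)
print(f"radius enclosing {target} stars: {r:.0f} ly")  # well within 100 ly
```

The sphere enclosing ~3000 systems comes out near 56 light years, supporting the claim that the whole exercise stays well inside 100 ly.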
The informed reader should now balk at the status of this analysis. Biosignatures are not unambiguous [4]. Firstly, detecting a faint trace of a presumed biosignature gas is not certain, as the phosphine detection on Venus and the DMS/DMDS detection on TOI-270d make clear. Both are controversial. In the case of Venus, we are not certain that the phosphine signal is present and correctly identified, nor that there is no abiotic mechanism to create phosphine in Venus’ very different environment. As discussed in my post on the ambiguity of biosignatures, prior assumptions about biosignatures as unambiguous have been reexamined, and astrobiologists have built a scale of certainties for assessing whether a planet is inhabited based on the contextual interpretation of biosignature data [4].
The authors of the paper account for this by modifying the formula to accommodate both false-positive and false-negative biosignature detection rates, as well as interpretation uncertainty for any detected biosignature. They also calculate the upper bound at about 3 sigma (99.9%) of the frequency of observations. Figure 3 shows the effect of these uncertainties on the location and size of the maximal probability density function for the Jeffreys Bayesian prior.
Figure 3. The effects of sample and interpretation uncertainties, best fit and 99.9%, for null detections. As both sample and interpretation uncertainty increase, the expected number of positive detections increases. The Jeffreys prior distribution is used.
Figure 3 implies that with an interpretation uncertainty of just 10%, even after 100 null observations the calculated frequency of life increases 2 orders of magnitude, from 0.1% to 10%. The upper bound increases from less than 10% to between 20 and 30%. Therefore, even if 100 new observations of habitable planets yield no detected biosignatures, the upper bound on the frequency of inhabited planets at this level of certainty lies between ⅕ and ⅓ of habitable planets. As one can see from the asymptotes, no amount of further observation will increase the certainty that life is absent in the population of stars in the galaxy. Uncertainty is the gift that allows astrobiologists to maintain hope that there are living worlds to discover.
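One way to see how imperfect observations change the likelihood is a toy grid posterior in which each null observation can be a miss (false negative) rather than a true absence. This is a deliberately simplified error model of my own, not the authors’ exact formulation:

```python
import numpy as np
from scipy.stats import beta

# Toy posterior for the frequency of life f after n null observations,
# where a world with life can still yield a null with probability fn
# (false negative) and a lifeless world a detection with probability fp.
def upper_bound(n_nulls, fn, fp, q=0.999, grid=100_000):
    f = np.linspace(1e-6, 1 - 1e-6, grid)
    prior = beta(0.5, 0.5).pdf(f)                           # Jeffreys prior
    p_null = np.clip(f * fn + (1 - f) * (1 - fp), 1e-12, 1.0)
    log_like = n_nulls * np.log(p_null)
    post = prior * np.exp(log_like - log_like.max())        # unnormalized posterior
    cdf = np.cumsum(post)
    return f[np.searchsorted(cdf / cdf[-1], q)]

# 100 nulls: perfect observations vs. a 10% false-negative rate.
print(upper_bound(100, fn=0.0, fp=0.0))   # bound sits at the few-percent level
print(upper_bound(100, fn=0.1, fp=0.0))   # bound relaxes once misses are possible
```

In this toy version the relaxation is modest; the authors’ full treatment, which separates sample uncertainty from interpretation uncertainty, shifts the upper bound considerably more, as Figure 3 shows.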
Lastly, the authors apply their methodology to 2 projects to discover habitable worlds: the Habitable Worlds Observatory [7] and the Large Interferometer for Exoplanets (LIFE) concepts. The analyses are shown in Figure 4. The vertical lines indicate the expected number of positive detections by the conceptual methods and the expected frequencies of detections with their associated upper bounds due to uncertainty.
Figure 4. Given the uncertainties, the authors calculate the 99.9% (> 3 sigma) upper limit on the null hypothesis of no life, matched against data for 2 surveys by Morgan with the Habitable Worlds Observatory (HWO) and 2 by Kammerer with the Large Interferometer for Exoplanets (LIFE) [7, 8].
The authors note that it may be incorrect to use the term “habitable” if water is detected, or “living” if a biosignature is detected. They suggest it would be better to base the calculation on the detection method rather than the implication of the detection; that is, the sample uncertainty, but not the interpretation uncertainty, is calculated. As we see in the popular press, if a planet in the habitable zone (HZ) has roughly an Earth-like mass and density, it is sometimes referred to as “Earth 2.0” with all the implications of the comparison to our planet. However, we know that our current global biosphere and climate are relatively recent in Earth’s history. The Earth has experienced very different states, from an anoxic atmosphere to extremely hot and extremely cold periods. It is even possible such a world may be a dry desert, like Venus, or conversely a hycean world with no land for terrestrial organisms to evolve on.
However, even if life and intelligence prove rare and very sparsely distributed, a single unambiguous signature, whether of a living world or a signal carrying information, would change everything. As the authors state:
Last but not least we want to remind the reader here that, even if this paper is about null results, a single positive detection would be a watershed moment in humankind’s history.
In summary, Bayesian analysis of null detections against prior expectations can provide an estimate of the upper limit on the frequency of living worlds, with accumulating null detections reducing both the frequencies and their upper limits. Using Fisherian statistics, many null detections would provide no such estimates, as all the data values would be 0 (null detections). The statistics would be uninformative, other than that as the number of null detections increased, the expected frequency of living worlds would qualitatively decrease.
While planetologists and astrobiologists hope to observationally detect habitable and inhabited exoplanets, as the uncertainties are reduced and the number of observations continues to show null results, how long before such searches become a fringe, uneconomic activity representing lost opportunity costs for other uses of expensive telescope time?
The paper is Angerhausen, D., Balbi, A., Kovačević, A. B., Garvin, E. O., & Quanz, S. P. (2025). “What if we Find Nothing? Bayesian Analysis of the Statistical Information of Null Results in Future Exoplanet Habitability and Biosignature Surveys”. The Astronomical Journal, 169(5), 238. https://doi.org/10.3847/1538-3881/adb96d
References
1. Wikipedia “Drake equation” https://en.wikipedia.org/wiki/Drake_equation. Accessed 04/12/2025
2. Kipping & Lewis, “Do SETI Optimists Have a Fine-Tuning Problem?” submitted to International Journal of Astrobiology (preprint). https://arxiv.org/abs/2407.07097
3. Gilster P. “The Odds on an Empty Cosmos“ Centauri Dreams, Aug 16, 2024 https://www.centauri-dreams.org/2024/08/16/the-odds-on-an-empty-cosmos/
4. Tolley A. “The Ambiguity of Exoplanet Biosignatures” Centauri Dreams, Jun 21, 2024 https://www.centauri-dreams.org/2024/06/21/the-ambiguity-of-exoplanet-biosignatures/
5. Foote, Searra, Walker, Sara, et al. “False Positives and the Challenge of Testing the Alien Hypothesis.” Astrobiology, vol. 23, no. 11, Nov. 2023, pp. 1189–201. https://doi.org/10.1089/ast.2023.0005.
6. Tolley, A. Our Earliest Ancestor Appeared Soon After Earth Formed. Centauri Dreams, Aug 28, 2024 https://www.centauri-dreams.org/2024/08/28/our-earliest-ancestor-appeared-soon-after-earth-formed/
7. Wikipedia “Habitable Worlds Observatory” https://en.wikipedia.org/wiki/Habitable_Worlds_Observatory. Accessed 05/02/2025
8. Kammerer, J. et al (2022) “Large Interferometer For Exoplanets (LIFE) – VI. Detecting rocky exoplanets in the habitable zones of Sun-like stars.” A&A, 668 (2022) A52. DOI: https://doi.org/10.1051/0004-6361/202243846
Finding unusual things in the sky should no longer astound us. It’s pretty much par for the course these days in astronomy, what with new instrumentation like JWST online and the Extremely Large Telescope family soon to arrive. Recently we’ve had planet-forming disks in the inner reaches of the galaxy and the discovery of a large molecular cloud (Eos by name) surprisingly close to our Sun at the edge of the Local Bubble, about 300 light years out.
So I’m intrigued to learn now of Teleios, which appears to be a remnant of a supernova. The name, I’m told, is classical Greek for ‘perfection,’ an apt description for this evidently perfect bubble. An international team led by Miroslav Filipović of Western Sydney University in Australia is behind this work and has begun to analyze what could have produced the lovely object in a paper submitted to Publications of the Astronomical Society of Australia (citation below). Fortunately for us, Teleios glows at radio wavelengths in ways that illuminate its origins.
Image: Australian Square Kilometre Array Pathfinder radio images of Teleios in Stokes I (the Stokes parameters are a set of numbers used to describe the polarization state of electromagnetic radiation; Stokes I measures total intensity). Credit: Filipović et al.
I’m not going to spend much time on Teleios, although its wonderful symmetry sets it apart from most supernova remnants without implying anything other than a chance occurrence in an unusually empty part of space. Its lack of X-ray emissions is a curiosity, to be sure, as the authors point out:
We have made an exhaustive exploration of the possible evolutionary state of the SN based on its surface brightness apparent size and possible distances. All possible scenarios have their challenges, especially considering the lack of X-ray emission that is expected to be detectable given our evolutionary modelling. While we deem the Type Ia scenario the most likely, we note that no direct evidence is available to definitively confirm any scenario and new sensitive and high-resolution observations of this object are needed.
So there you are, a celestial mystery. Another one comes from Richard Stanton, now retired but for years a fixture at JPL, where he worked on Voyager, among other missions. These days he runs Shay Meadow Observatory near Big Bear, CA, where he deploys a 30-inch telescope coupled with a photometer of his own design for the task at hand – the search for optical SETI signals. Thus far the indefatigable retiree has observed more than 1300 stars in this quest.
Several unusual things have turned up in his data, and what they mean demands further study. The star HD 89389 produced “two fast identical pulses, separated by 4.4s,” according to the paper on his work. That was interesting, but even more so is the fact that, looking back over his earlier data, Stanton realized that a pair of similar pulses had occurred in observations of the star HD 217014 taken four years before. In that earlier observation, the twin pulses were separated by 1.3 seconds, about 3.5 times less than for the HD 89389 event. But Stanton notes that while the separations differ, the pulse shapes are very similar in both events.
Stanton’s angle into optical SETI differs from the norm, as he describes it in a paper in Acta Astronautica. The work is:
…different from that employed in many other optical SETI searches. Some [3,4] look for nanosecond pulses of sufficient intensity to momentarily outshine the host star’s light, as first suggested by Schwartz and Townes [5]. Others search optical spectra of stars for unusual features [6] or emission close to a star that could have been sent from an orbiting planet [7]. The equipment used here is not capable of making any of these measurements. Instead it relies on detecting unexplained changes in a star’s light as indications of intelligent activity. Starting with time samples of 100μs, the search is capable of detecting optical pulses of this duration and longer, and also of finding optical tones in the frequency range ∼0.01–5000Hz.
HD 89389 is an F-class star about 100 light years from the Solar System. Using Stanton’s equipment, all kinds of things can present a problem: airplanes blocking out starlight, satellites (a growing problem given the increasing number of Internet access satellites), meteors and birds. Atmospheric scintillation and noise have to be accounted for as well. I’m simplifying here and send you to the paper, where all these factors are painstakingly considered. Stanton’s analysis is thorough.
Here is a photograph showing the typical star-field during an observation of HD 89389, with the target star at the center of a field roughly 15 × 20 arcmin in size. The unusual pulses from this star occurred during this exposure.
Image: The HD 89389 star-field. “A careful examination was made of each photograph to detect any streaks or transitory point images that might have been objects moving through the field. Nothing was found in any of these frames, suggesting that the source of the pulses was either invisible, such as due to some atmospheric effect, or too far away to be detected.” Credit: Richard Stanton.
A closer look at these unusual observations: Each event consisted of two identical pulses, with the star rapidly brightening, then decreasing in brightness, then increasing again, all in a fraction of a single second. The second pulse followed 4.4 seconds later in the case of HD 89389, and 1.3 seconds later at HD 217014. According to Stanton, in over 1500 hours of searching he had never seen a pulse like this, in which the star’s light is attenuated by about 25 percent.
Note this: “This is much too fast to attribute to any known phenomenon at the star’s distance. Light from a star a million kilometers across cannot be attenuated so quickly.” In other words, something on the scale of a star cannot partially disappear in a fraction of a second, meaning the cause of this effect is not as distant as the star. If the star’s light is modulated without something moving across the field of view, then what process could cause this?
The author argues that the starlight variation in each pulse itself eliminates all the common signals discussed above, from airplanes to meteors. He also notes that unlike what happens when an asteroid or airplane occultation occurs, the star never disappears during the event. The second event, in the light of the star HD 217014, was discovered later, although the data were taken four years earlier. Stanton runs through all the possibilities, including shock waves in the atmosphere, partial eclipses by orbiting bodies, and passing gravity waves.
One way of producing this kind of modulation, Stanton points out, is through diffraction of starlight by a distant body between us and the star. Keep in mind that we are dealing with two stars that have shown the same pattern, with similar pulses. Edge diffraction results when light passes a straight edge, producing ‘intensity ripples’ that correspond to the pulses. The author gives this phenomenon considerable attention, explaining how the pulses would change with distance, though the data come up short of fixing a distance to the source.
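To see why edge diffraction by a solar-system body is consistent with sub-second structure, note that the ripple spacing is set by the Fresnel scale, rF = √(λd/2), and the timescale by dividing that by the occulter’s transverse speed. A quick sketch with illustrative numbers of my own, not Stanton’s fitted values:

```python
import math

# Fresnel scale r_F = sqrt(lambda * d / 2) sets the spacing of edge-diffraction
# ripples; dividing by the occulter's transverse speed gives the ripple timescale.
# Distances and relative velocity below are illustrative assumptions.

AU = 1.496e11          # meters
wavelength = 550e-9    # visible light, meters
v_transverse = 25e3    # typical solar-system relative speed, m/s (assumed)

for d_au in [1, 40, 1000]:   # inner system, Kuiper Belt, inner Oort cloud
    r_fresnel = math.sqrt(wavelength * d_au * AU / 2)
    t_ripple = r_fresnel / v_transverse
    print(f"d = {d_au:5d} AU: Fresnel scale ~ {r_fresnel/1e3:6.2f} km, "
          f"ripple time ~ {t_ripple:6.3f} s")
```

Everything from the inner system out to the inner Oort cloud yields ripple times from milliseconds to tenths of a second, squarely in the regime of Stanton’s 100 μs-resolution photometry.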
From his conclusion:
The fact that these pulses have been detected only in pairs must surely be a clue to their origin. How can the two detected events separated by years, and from seemingly random directions in the sky, be so similar to each other? Even if the diffraction theory is correct, these data alone cannot determine the object’s distance or velocity.
He goes on to produce a model that could explain the pulses, using the figure below.
This thin opaque ring, located somewhere in the solar system, would sequentially occult the star as it moved across the field. If anything like this were found, it would immediately raise the questions of where it came from and how it could survive millions of years of collisions with other objects. Alternatively, if the measured transverse velocity proved greater than that required to escape our solar system, a different set of questions would arise. Whatever is found, those speculating that our best chance of finding evidence of extraterrestrial intelligence lies within our own solar system [15], might have much to ponder!
If there is indeed some sort of occulting object, observations with widely spaced telescopes could potentially determine its size and distance. Meanwhile, a third double pulse event has turned up in Stanton’s data from January 18, 2025, where light from the star HD 12051 is found to pulse, with the pulses separated by 1.52 seconds. This last observation doesn’t make it into the paper other than as a footnote, but it’s an indication that Stanton may be on to something that is going to continue creating ripples. As in the case of Teleios, we have an unusual phenomenon that demands continued observation.
The paper on the unusual circular object is Filipović et al., “Teleios (G305.4-2.2) — the mystery of a perfectly shaped new Galactic supernova remnant,” accepted at Publications of the Astronomical Society of Australia and available as a preprint. The paper on the pulse phenomenon is Stanton, “Unexplained starlight pulses found in optical SETI searches,” Acta Astronautica Vol. 233 (August 2025), pp. 302-314. Full text. Thanks to Centauri Dreams readers Frank Henriquez and Antonio Tavani for the pointer to this work.
Science fiction collectors may well look at the two images below and think they’re both Richard Powers’ artwork, so prominent on the covers of science fiction titles in the mid-20th Century. Powers worked often for Ballantine in the 1950s and later, always refining the style he first exhibited when doing covers for Doubleday in the 1940s. The top image here is from one of the Doubleday titles, but I think of Powers most for his Ballantine work. His paintings could turn a paperback rack into a moody, mysterious experience, a display of artistry that moved from the surreal to the purely abstract. At his best, Powers’ renderings seemed to draw out the wonder of the mind-bending fiction they encased.
What we have in the second image, though, is not abstract art but the manifestation of what is being described as “the world’s largest turbulence simulation.” The work comes from a project described in a new paper in Nature Astronomy, whose lead author James Beattie, of the Canadian Institute for Theoretical Astrophysics, is probing magnetism and turbulence as they occur in the interstellar medium. In this image, Beattie’s caption describes “…the fractal structure of the density, shown in yellow, black and red, and magnetic field, shown in white.”
And while Beattie may or may not be familiar with Richard Powers, he does have an eye for the art that this kind of turbulence can produce, saying:
“I love doing turbulence research because of its universality. It looks the same whether you’re looking at the plasma between galaxies, within galaxies, within the solar system, in a cup of coffee or in Van Gogh’s The Starry Night. There’s something very romantic about how it appears at all these different levels…”
And honestly, doesn’t this remind you of Powers?
What Beattie and team have produced, using the computing muscle of the SuperMUC-NG supercomputer at the Leibniz Supercomputing Centre in Germany, is helping us better understand the nature of the interstellar medium. In particular, it is a computer simulation that explores the interactions of magnetism and turbulence in the ISM, addressing magnetism at the galactic level as well as individual astrophysical phenomena such as star formation. Beattie’s team is international in scope, with co-authors at Princeton University; Australian National University; Universität Heidelberg; the Center for Astrophysics, Harvard & Smithsonian; Harvard University; and the Bavarian Academy of Sciences and Humanities.
So what is the turbulence Beattie is describing? The phenomenon is ubiquitous, showing up in everything from cream swirling in a black cup of coffee to ocean currents to particles moving in chaotic flows in the solar wind. We can produce ultra-high vacuums on Earth, but even these contain far more particles than the average sample of the ISM. Despite the scarcity of particles in the ISM, though, their motions do generate a magnetic field, one the researchers liken to the way the motion of Earth’s molten core generates the magnetic field that protects us.
The galactic magnetic field is weak indeed, but it can be modeled for the first time at a level of accuracy that is both scalable and high in resolution. At its highest setting, Beattie’s simulation can depict a volume of space 30 light years to a side, but can be scaled down by a factor of 5000 to explore smaller spaces. The latter has implications for how we study the solar wind, which not only produces ‘space weather’ but is also a factor in certain space sail concepts that use superconducting rings to produce a strong magnetic field that can harness the solar wind as thrust.
Always keep in mind that we have anything but a uniform interstellar medium. Some of the early writing about Robert Bussard’s ramjet concepts noted that a design that harnessed interstellar hydrogen would thrive best in dense star-forming regions, where hydrogen would be plentiful. The Bussard concept has fallen on hard times given issues with drag that seem to knock it out of contention, but magsail work remains interesting both as a way of harnessing solar wind particles or braking against the same upon entering a destination stellar system. So the more we can learn about the extreme density variations in the ISM, the better we can envision future interstellar flight.
Moreover, star formation is implicated in the same model. The better our simulations of interstellar turbulence, the more we can learn about the magnetic forces that push outward against the collapse of a nebula that will eventually produce one or more stars. And the model the team has developed stacks up well when run against actual data from the solar wind, which points to short term gains in the forecasting of space weather, the ‘rain’ of charged particles that affects both Earth and spacecraft.
The ubiquity of chaotic turbulence and its coupling with the galaxy’s ambient magnetic fields makes its study all the more provocative. Cosmic rays are strongly affected, both generating and scattering off the plasma phenomena known as Alfvén waves. From the paper:
In the cold (T ≈ 10 K) molecular phase of the ISM, [turbulence] changes the ionization state of the plasma by controlling the diffusion of cosmic rays [1–5], gives rise to the filamentary structures that shape and structure the initial conditions for star formation [6, 7], and through turbulent and magnetic support, changes the rate at which the cold plasma converts mass density into stars [8–13].
So there is plenty to work with here. And a brief return to van Gogh’s ‘The Starry Night,’ which Beattie mentioned in the quote above. It turns out that Beattie and co-author Neco Kriel (Queensland University of Technology) have produced a paper on the subject called “Is The Starry Night Turbulent?” The goal was to learn whether the night sky in this famous painting “has a power spectrum that resembles a supersonic turbulent flow.” And indeed, “‘The Starry Night’ does exhibit some similarities to turbulence, which happens to be responsible for the real, observable, starry night sky.”
Which I think only means that van Gogh was turning what he saw into art, recognizable to us precisely because it did reflect the night sky he was observing. Still, it’s fun to see these methods, which draw on deep research into turbulent interactions, applied to a cultural icon. I wonder what Beattie’s team would dig out of a deep dive into Powers’ work in the century after van Gogh?
Image: van Gogh’s ‘The Starry Night’ is Figure 1 in Beattie’s paper with Kriel. Caption: “Vincent van Gogh’s The Starry Night, accessed from WallpapersWide.com (2018). We see eddies painted through the starry night sky that resemble the structures we see in turbulent flows.”
The paper is Beattie et al., “The spectrum of magnetized turbulence in the interstellar medium,” Nature Astronomy 13 May 2025 (abstract / preprint). The paper on van Gogh is Beattie & Kriel, “Is The Starry Night Turbulent?” available as a preprint.
HD 219134, an orange K-class star in Cassiopeia, is relatively close to the Sun (21 light years) and already known to have at least five planets, two of them being rocky super-Earths that can be tracked transiting their host. We know how significant the transit method has become thanks to the planet harvests of, for example, the Kepler mission and TESS, the Transiting Exoplanet Survey Satellite. It’s interesting to realize now that an entirely different kind of measurement based on stellar vibrations can also yield useful planet information.
The work I’m looking at this morning comes out of the Keck Observatory on Mauna Kea (Hawaii), where the Keck Planet Finder (KPF) is being used to track HD 219134’s oscillations. The field of asteroseismology is a window into the interior of a star, allowing scientists to hear the frequencies at which individual stars resonate. That makes it possible to refine our readings on the mass of the star and, just as significantly, to determine its age with higher accuracy.
KPF uses radial velocity measurements to do its work, a technique often discussed in these pages to identify exoplanet candidates. But in this case measuring the motion of the stellar surface to and from the Earth is a way of collecting the star’s vibrations, which are the key to stellar structure. Says lead author Yaguang Li (University of Hawaii at Mānoa):
“The vibrations of a star are like its unique song. By listening to those oscillations, we can precisely determine how massive a star is, how large it is, and how old it is. KPF’s fast readout mode makes it perfectly suited for detecting oscillations in cool stars, and it is the only spectrograph on Mauna Kea currently capable of making this type of discovery.”
Image: Artist’s concept of the HD219134 system. Sound waves propagating through the stellar interior were used to measure its age and size, and characterize the planets orbiting the star. Credit: openAI, based on original artwork from Gabriel Perez Diaz/Instituto de Astrofísica de Canarias. The 10-second audio clip transforms the oscillations of HD219134 measured using the Keck Planet Finder into audible sound. The star pulses roughly every four minutes. When sped up by a factor of ~250,000, its internal vibrations shift into the range of human hearing. By “listening” to starlight in this way, astronomers can explore the hidden structure and dynamics beneath the star’s surface.
What we learn here is that HD 219134, at about 10.2 billion years old, is more than twice the age of the Sun. The age of a star can be difficult to determine. The most widely used method is gyrochronology, which focuses on how swiftly a star spins, the assumption being that younger stars rotate more rapidly than older ones, with the gradual loss of angular momentum traceable over time. The problem: older stars don’t necessarily follow this script, their spin-down evidently stalling at older ages. Asteroseismology allows a more accurate reading for stars like this and provides a different reference point, provided that our models of stellar evolution allow us to interpret the results correctly.
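In the standard approach, the mass and radius behind such an age estimate come from the asteroseismic scaling relations linking the frequency of maximum oscillation power (ν_max) and the large frequency separation (Δν) to a star’s surface gravity and mean density. A minimal sketch follows; the solar reference values are the standard ones, but the stellar inputs are illustrative for a K dwarf, not the paper’s measured values:

```python
# Standard asteroseismic scaling relations (solar-calibrated):
#   M/Msun = (nu_max/nu_max_sun)^3 * (dnu/dnu_sun)^-4 * (Teff/Teff_sun)^1.5
#   R/Rsun = (nu_max/nu_max_sun)   * (dnu/dnu_sun)^-2 * (Teff/Teff_sun)^0.5
# Solar reference values are standard; the stellar inputs below are
# illustrative guesses for a K dwarf, not the measured values for HD 219134.

NU_MAX_SUN = 3090.0   # microHz
DNU_SUN = 135.1       # microHz
TEFF_SUN = 5772.0     # K

def seismic_mass_radius(nu_max, dnu, teff):
    f_nu, f_dnu, f_t = nu_max / NU_MAX_SUN, dnu / DNU_SUN, teff / TEFF_SUN
    mass = f_nu**3 * f_dnu**-4 * f_t**1.5
    radius = f_nu * f_dnu**-2 * f_t**0.5
    return mass, radius

# A ~4-minute oscillation period corresponds to nu_max ~ 4200 microHz.
m, r = seismic_mass_radius(nu_max=4200.0, dnu=180.0, teff=4700.0)
print(f"M ~ {m:.2f} Msun, R ~ {r:.2f} Rsun")
```

The image caption above notes that the star pulses roughly every four minutes, i.e. ν_max in the vicinity of 4000 μHz, which is why cool dwarfs demand the fast readout mode Li mentions.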
We need to track this work because how old a star is has implications across the board. For one thing, understanding basic factors such as its temperature and luminosity requires context: are we dealing with a young, evolving system or a star nearing the transition to a red giant? From an astrobiological point of view, we’d like to know how old any planets in the system are, and whether they’ve had sufficient time to develop life. SETI takes on a new dimension as well, since stellar age lets us give older exoplanet systems higher priority.
Yaguang Li thinks the KPF work brings new levels of precision to these measurements, calling the result ‘a long-lost tuning fork for stellar clocks.’ From the exoplanet standpoint, stellar age is also quite informative. The measurements have allowed the researchers to determine that HD 219134 is smaller than previously thought by about 4% in radius – this contrasts with interferometry, which gauged the star’s size by combining the light of multiple telescopes. A more accurate reading of the size of the star affects all inferences about its planets.
That 4% difference, though, raises questions, and the authors note that it requires the models of stellar evolution they are using to be accurate. From the paper:
We were unable to easily attribute this discrepancy to any systematic uncertainties related to interferometry, variations in the canonical choices of atmospheric boundary conditions or mixing-length theory used in stellar modeling, magnetic fields, or tidal heating. Without any insight into the cause of this discrepancy, our subsequently derived quantities and treatment of rotational evolution—all of which are contingent on these model ages and radii—must necessarily be regarded as being only conditional, pending a better understanding of the physical origin for this discrepancy. Future direct constraints on stellar radii from asteroseismology (e.g., through potential breakthroughs in understanding and mitigating the surface term) may alleviate this dependence on evolutionary modeling.
So we have to be cautious in our conclusions here. If the tension between the KPF measurements and interferometry is real, we will have adjusted our calibration tools for transiting exoplanets but will still need to probe the reasons for the discrepancy. That’s important, because with tuned-up measurements of a star’s size, the radii and densities of transiting planets can be more accurately measured. The updated values KPF has given us – assembled through over 2000 velocity measurements of the star – point to a significant aspect of stellar modeling that may need further adjustment.
The paper is Yaguang Li et al., “K Dwarf Radius Inflation and a 10 Gyr Spin-down Clock Unveiled through Asteroseismology of HD 219134 from the Keck Planet Finder,” Astrophysical Journal Vol. 984, No. 2 (6 May 2025), 125 (full text).
Communicating with extraterrestrials isn’t going to be easy, as we’ve learned in science fiction all the way from John Campbell’s Who Goes There? to Ted Chiang’s Story of Your Life (and the movie Arrival). Indeed, just imagining the kinds of civilizations that might emerge from life utterly unlike what we have on Earth calls for a rare combination of insight and speculative drive. Michael Chorost has been thinking about the problem for over a decade now, and it’s good to see him back in these pages to follow up on a post he wrote in 2015. As I’ve always been interested in how science fiction writers do their worldbuilding, I’m delighted to publish his take on his own experience at the craft. Michael is also the author of the splendid World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011) and Rebuilt: My Journey Back to the Hearing World (Mariner, 2006).
by Michael Chorost
Ten years ago, Paul Gilster kindly invited me to guest-publish an entry on Centauri Dreams titled, Can Social Insects Have a Civilization? At the time I was planning to write a nonfiction book about the linguistic issues of communicating with extraterrestrials. I sold the concept to Yale University Press. The working title was HOW TO TALK TO ALIENS.
I got busy, but I soon began to feel that the project was rather empty. I realized that in an actual First Contact situation, we’ll suddenly find ourselves awash in complications with a deeply unfamiliar interlocutor that has an agenda of its own. We’ll inevitably find ourselves winging it. Theory could be irrelevant. Useless.
For a while I tried to finesse the problem by putting scenarios in between the theoretical stuff. I imagined a human and an alien talking—specific humans, specific aliens, concrete settings. I soon realized that the scenarios were the most interesting material in the book.
That’s because positing a concrete situation made the problems, and possible solutions, stand out clearly. Given a particular situation, what would scientists actually do?
It dawned on me: It made more sense to write the book as a novel. I withdrew from my contract at Yale and returned the advance. They were kind and understanding about it.
So I committed myself to a novel—but I wanted it to be as content-rich as the book I’d promised to Yale. I decided that the human characters would be scientists: an entomologist, a linguist, a neuroscientist, and a physicist. To succeed they’d have to pool their expertise, educating each other. That would imbue the novel with the scientific content.
But this risked me writing a deadly dull novel larded with exposition. I wanted the characters—both human and alien—to be vivid and unforgettable, and for their actions to drive a propulsive plot. I wanted the reader to be unable to put the book down.
I think I succeeded. I hired an editor to help me, and she made me do rewrites for three years; she was relentless. But at last she said, “You set yourself one of the hardest imaginative problems you could possibly have chosen, especially for a first novel. I think you managed it in a way that feels genuinely convincing. I want to say clearly upfront: this book is worth it. There is no story like this in the world.”
So I’m pretty confident that I now have a publishable novel—but getting there was really hard. I thought it would take about three years to write, which is how long my two nonfiction books took. I was wrong. It took eight.
When I started, I knew I wanted the aliens to be really alien: no pointy-eared, English-speaking Vulcans. I decided to make them sapient social insect colonies. That would make them aliens without a defined physical shape. Aliens without hands as we know them. Aliens without faces.
Therefore, I first had to figure out what a social insect civilization looks like. I didn’t want to take the easy way out by positing (as Orson Scott Card did) that a social insect colony would have a centralized intelligence, e.g. a Queen that gives orders. I felt that was cheating. I wanted the colonies to be genuinely distributed entities in which no individual insect has language or even much in the way of consciousness. No giant bugs, either, so no big muscles or big brains. They had to be no bigger than Earthly ants or bees.
This resulted in some very challenging questions. (From now on I’ll use the word “hive” as shorthand for “social insect colony.”)
• How does a hive pick up a hammer?
• How does a hive store and process the information needed for consciousness and language?
• What’s the physical structure of the hives?
• How does a distributed consciousness behave?
• What does this civilization’s technology look like?
• What does the language look like? What’s its morphology, grammar, vocabulary?
• What does a society of hives look like?
• What set this species on the path to language and technology?
It took me two years to answer the first one. I would imagine a bunch of ants clustering around a hammer and completely failing to get any leverage. I’d give up, decide the project was hopeless, and the next day go right back to it. Finally, I figured it out: the hives used parasitized mammals to manipulate objects.
This worldbuilding was fun, but it was the least efficient way imaginable to write a novel. I designed the aliens and their world before working out the plot. This led to a big problem.
Which was this: the aliens were so alien that I didn’t know why they would want to talk to humans, or vice versa. What do we talk about? The weather? I had no idea what the plot of the novel would be.
I didn’t want to default to science fiction’s classic reasons for interspecies communication: war and trade. They struck me as stereotypical answers that would lead to a stereotypical novel. Besides, they begged the question. Species that are trading or fighting have to be similar enough to have things to trade or fight about. That would vitiate my goal of writing really alien aliens.
So I knew what I didn’t want to do, but that didn’t tell me what I should do. I sat down every day and wrote. And wrote and wrote and wrote.
Let’s pause to consider how inefficient my process was. Why didn’t I practice by writing, and publishing, a few short stories? A short novel? Build up my cred, get my name out there? But I didn’t want to do those things. I wanted to write this novel. I grimly stuck to it, day after day.
After a while I had a bare-bones plot. When Jonah Loeb, a deaf graduate student in entomology, asks how to deal with an ant colony besieging Washington, D.C., the answer is, “Ask it to stop.” Jonah gathers a team and travels to a more advanced civilization of hives on another planet in order to learn how.
I gave the other scientists names, figured out their dissertation topics, and worked out some of their characteristics. The neuroscientist was arrogant. The linguist was prickly and defensive. The physicist was socially awkward. Jonah, the protagonist, was deaf, like me, with cochlear implants. He was smart, but neurotic.
But I didn’t know how to make the characters come alive on the page. They all talked the same. Their only motivation was scientific interest. They had scant backstories or inner lives. They were, in short, boring.
I was even more at sea with the alien characters. They had no personality. I mean, really, how do you give a social insect colony a personality?
The plot, too, remained threadbare. I fabricated encounters, goings to-and-fro, arguments. But it just didn’t hold together. Often I’d add a new element only to realize it invalidated another element.
So I had dull characters and a plot made out of cardboard and duct tape. Finally, I admitted I needed help. I hired a freelance editor, and we started fresh.
The editor had me write up descriptions of each character’s goals and motives, and a detailed plot outline. We went through the manuscript one scene at a time, and she often told me to rework it before we went on to the next.
Slowly, the characters came to life on the page. I had made the protagonist, Jonah, deaf because I thought that would underscore the theme of communication. But Jonah only came to life when I thought back to my own feelings when I was in my early twenties. I realized that Jonah was driven by resentment. Resentment at being excluded, sidelined, underestimated. He desperately wants to prove himself.
This characterization let me set up a key dynamic: a resentful entomologist trying to negotiate peace with an angry insect colony. Clarity for the characters led to clarity for the story.
I slowly got better at solving problems by framing them in terms of character and plot. I knew that Tokic, the hives’ language, would have to be exotic—but creating it overwhelmed me. I’m no grammarian, and certainly no inventor of languages.
But then I realized I only had to develop enough of the language to support the plot. I wanted the plot to turn on misunderstandings and mistranslations as the humans struggled to learn the language.
A key source of confusion, I realized, would come from how differently shaped the hives and humans are. Humans have arms and legs that are attached to them. On the other hand, a hive is essentially a giant, stationary head with dozens of “hands” roaming the landscape. Not only that, the “hands,” as parasitized mammals, have minds of their own. Hives give their hands general orders, and the hands work out the details. A hive can argue with its parts, and its parts can argue right back.
I realized that the part/whole distinction would be built deeply into Tokic, rather like how human languages build gender deeply into their grammar. (In English, consider how hard it is to talk about a person if you don’t know their gender.) When you’re addressing another entity in Tokic, you have to be very precise, on the level of grammar, about its partness or wholeness.
Now consider: To a hive, is a human being a whole or a part?
A hive would find this question really hard to answer. As a mammal, a human being looks like a “hand”—a part—but it talks like a whole. Yet in Jonah’s team, each member is legitimately a part. We literally call a participant in a team a “member.” In Latin, membrum means “limb” or “part of the body.”
Jonah, as a cochlear implant user, is even trickier for a hive to understand. A cochlear implant is a computer; it runs on code and constantly makes decisions about what’s important for the user to hear. It’s a body part that literally thinks for itself. As such, Jonah is kind of hive-like. When a hive asks what Jonah is and the team gives it an answer it doesn’t understand, the hive attacks the team and they must run for their lives.
I worked out Tokic’s parts/wholes grammar, and that made it possible for me to write the scenes where things went wrong. These were tough scenes to write, because I had to keep track of what a hive said, what the humans thought it said, the humans’ mistaken reply, and so on. I also had to be careful not to let the scenes get bogged down. But I got it done, and now I’m pitching the novel to agents.
I’ve noted how inefficient my writing process was. But I do think it was productive in one way: I spent so much time thinking about the novel that a great deal of information accreted in my mind. I think that led to more richness in the worldbuilding and the story than would have happened if I’d written it faster.
There’s so much more I haven’t mentioned, like how the alien robot caretaker reads Wallace Stevens’s poetry and names itself after him; the brutal 1.8-gee gravity of Formicaris and the unexpected solution that lets the human team function there; the superheavy stable element that facilitates interstellar travel; the electromagnetic weapon that gives humans Capgras syndrome; the octopoidal surgeon who operates on Jonah and Daphne to upgrade their cyborg parts; and the illustrations. I had those done by professional science illustrators.
So now you have a sense of what my novel’s about. It’s still titled HOW TO TALK TO ALIENS; I think its unconventionality, and slightly academic air, will help it stand out. I hope you’re now as excited about it as I am. If you know of any agents who’d be interested—please let me know.
It was Robert Browning who said “Ah, but a man’s reach should exceed his grasp, or what’s a heaven for?” A rousing thought, but we don’t always know where we should reach. In terms of space exploration, a distant but feasible target is the solar gravitational lens distance beginning around 550 AU. So far the SGL definitely exceeds our grasp, but solid work in mission design by Slava Turyshev and team is ongoing at the Jet Propulsion Laboratory.
Targets need to tantalize, and maybe a target that we hadn’t previously considered is now emerging. Planet Nine, the hypothesized world that may lurk somewhere in our Solar System’s outer reaches, would be such an extraordinary discovery that it would tempt future mission designers in the same way.
This is interesting because right now our deep space targets need to be well defined. I love the idea of Interstellar Probe, the craft designed at JHU/APL, but it’s hard to excite the public with the idea of looking back at the heliosphere from the outside (although the science return would be fabulous). Pluto was hard enough to sell to the powers that be, but Alan Stern and team got the job done because they had a whole world that had never been seen up close. Will Planet Nine, if found, turn out to be the destination that some budget-strapped team finds a way to explore?
We have a lot to learn about planetary demographics given that our planet-finding technologies work best for larger worlds closer in to their stars. But a recent microlensing study suggests that as many as one out of every three stars in our galaxy should host a super-Earth in a Jupiter-like orbit (citation below). Microlensing is helpful because it can reveal planets at large distances from their parent stars. Super-Earths appear to be abundant. If it exists, Planet Nine isn’t in a Jupiter-like orbit, but it’s probably in the same size class: larger than Earth, smaller than Neptune.
In so many ways this planet makes sense. We’ve found planets like it in innumerable stellar systems. We also know that gravitational slingshot mechanisms can push a planet either out of its system entirely or into an entirely new orbit, explaining how a world could wind up so far from its star. We’re talking about a planet on the order of a super-Earth or a mini-Neptune, the former larger than our planet but still rocky, the latter smaller than Neptune but gaseous.
Image: An illustration shows what Planet Nine might look like orbiting far from our Sun. We have found many such worlds around other stars, but finding one in our own backyard is taking time because of its distance and an orbit that is probably well off the ecliptic. Assuming a planet is there at all. Image credit: NASA/Caltech.
The evidence in the orbits of a number of outer system objects points to something perturbing their paths, and seems to implicate something big. Now we have, after years of evidence gathering and debate, at least a possible candidate for this world. In process at Publications of the Astronomical Society of Australia (PASA), although not yet published, the paper digs deep into data from the Infrared Astronomical Satellite as well as AKARI, a Japanese satellite somewhat more sensitive than IRAS and launched in 2006.
The distant Sedna and other bodies with unusual orbital characteristics tell us that something has to account for their high eccentricity and inclination, while a number of Kuiper Belt objects seem similarly affected, as shown in the orientation of their orbits (described through what is known as the argument of perihelion). Computations thus far indicate a mass equal to or greater than about 10 Earth masses, with a semi-major axis of about 700 AU.
Image: The six most distant known objects in the solar system with orbits exclusively beyond Neptune (magenta) all mysteriously line up in a single direction. Moreover, when viewed in 3-D, the orbits of all these icy little objects are tilted in the same direction, away from the plane of the solar system. Credit: JPL-Caltech/R. Hurt.
A huge amount of work has gone into this analysis, all ably summarized in the paper, which comes from Terry Long Phan (National Tsing Hua University, Taiwan) and colleagues. Optical surveys have heretofore failed to find this object, but 700 AU is fully 23 times the distance of Neptune from the Sun, far enough out that visible wavelengths need to give way to infrared to find it.
Phan’s team looked for candidate objects in the range of 500 to 700 AU using the two far-infrared surveys mentioned above, both all-sky in coverage but separated by 23 years, making for useful comparisons. The idea was to find an object that moved from an IRAS position to an AKARI position over those 23 years. Assuming the kind of mass and distance they were looking for, they had to remove stars and noisy sources toward the galactic center. Narrowing the field to 13 pairings, Phan and his doctoral advisor Tomotsugu Goto found only one pair that matched in color and brightness, indicating the two detections were plausibly of the same object.
The paper cites the result this way:
After the rigorous selection including the visual image inspection, we found one good candidate pair, in which the IRAS source was not detected at the same position in the AKARI image and vice versa, with the expected angular separation of 42′ – 69.6′.
The AKARI detection probability map indicated that the AKARI source of our candidate pair satisfied the requirements for a slow-moving object with two detections on one date and no detection on the date of 6 months before.
Image: This is Figure 5 from the paper. Caption: Comparison between IRAS (left) and AKARI (right) cutout images of our good candidate pair. The green circle indicates the location of IRAS source, while the white circle indicates the location of AKARI source. The size of each circle is 25′′. The yellow arrow with a number in arcminute shows the angular separation between IRAS and AKARI sources. The colour bar represents the pixel intensity in each image in the unit of MJy/sr. The AKARI source in the right panel is not visible as a real physical source due to the characteristics of AKARI-MUSL, which include moving sources without monthly confirmation. Credit: Phan et al.
This could be considered tantalizing but little more, because it is impossible from these two observations to determine an orbit for this object. The authors say that the 570 megapixel Dark Energy Camera (DECam) may be useful for follow-up observations. But I also noted an article in Science by Hannah Richter. The author quotes Caltech astronomer Mike Brown, who came up with the original Planet Nine hypothesis in 2016.
Brown argues that this object is not likely to be Planet Nine because its orbit would be far more tilted than what is predicted for the undiscovered world. In other words, a planet in this position would not have the observed effects on the Solar System. In fact, a planet in this orbit would make the calculated Planet Nine orbit itself unstable, which would eliminate Planet Nine altogether.
Is there an entirely different planet out there? Future observations will have to sort this out. It surprises me that it has taken so long for this kind of search to occur. Discoveries are made when seasoned observers can prompt up-and-coming scientists to consider avenues hitherto unexplored. I think we can applaud Phan’s doctoral adviser, Tomotsugu Goto, for the insight to suggest this direction of study for a young astronomer who will bear watching.
For now, returning to the thoughts with which I began this post, there are a few implications even for missions in their current planning stages. From the paper:
Several recent studies proposed and evaluated the prospects of future planetary and deep-space missions for the Planet Nine search, including a dedicated mission to measure modifications of gravity out to 100 AU (Buscaino et al. 2015), the Uranus Orbiter and Probe mission (Bucko, Soyuer, and Zwick 2023), and the Elliptical Uranian Relativity Orbiter mission (Iorio, Girija, and Durante 2023).
We’ll be going deeper into these missions in the future. For now, I find the interest in Planet Nine heartening, because even in budget-barren times, we can be doing useful science by way of exploration with existing instruments and considering designs for missions we will one day be able to send. On that score, Planet Nine – or perhaps even a different world equally distant from the Sun – is an incentive for propulsion science and a driver for the imagination.
The paper is Phan et al., “A Search for Planet Nine with IRAS and AKARI Data,” accepted for publication in Publications of the Astronomical Society of Australia (PASA). Preprint. The paper on super-Earths is Weicheng Zang et al., “Microlensing events indicate that super-Earth exoplanets are common in Jupiter-like orbits,” Science Vol 388, Issue 6745 (24 April 2025), 400-404 (abstract).
Even as I’ve been writing about the need to map out regions just outside the Solar System, I’ve learned of a new study that finds (admittedly scant) evidence for a Planet 9 candidate. I won’t get into that one today but save it for the next post, as we need to dispose of the New Horizons news first. But it’s exciting that a region extending from the Kuiper Belt to the inner Oort is increasingly under investigation, and the very ‘walls’ of the plasma bubble within which our system resides are slowly becoming defined. And if we do find Planet 9 some time soon, imagine the focus that will bring to this region.
As to New Horizons, there are reasons for building spacecraft that last. The Voyagers may be nearing the end of their lives, but given that they were designed for missions of a scant five years, I’d say their 50-year credentials are proven. And because they had the ability to hang in there, they’ve become our first interstellar mission, still reporting data, indispensable. Now we can hope that New Horizons carries on that tradition, since so far it has proven equally robust and remains productive.
Image: This is Figure 1 from the paper we’ll be discussing today (citation below). Caption: The trajectories of the five spacecraft currently leaving the solar system: Pioneer 10 and 11 (orange and light green, respectively), Voyager 1 and 2 (gray and green, respectively), and New Horizons (NH, red) are shown projected onto the plane of the ecliptic, along with several planet orbits (black) and the direction of the flow of interstellar hydrogen atoms (purple arrows). The locations where great-circle scans of interplanetary medium (IPM) Lyα [a uniquely useful spectral line of hydrogen] were made with the NH Alice UV spectrograph are indicated (red), including the all-sky Lyα map described here, which was executed during 2023 September 2–11 at a distance from the Sun of 56.9 au. Credit: Gladstone et al.
To understand the image, we have to talk about ultraviolet Lyman-alpha (Lya) emissions, which occur when an electron in hydrogen drops from the second energy level (n=2) to the ground state (n=1), producing a Lyman-alpha photon with a wavelength of about 121.6 nanometers. Since we’ve been talking about the interstellar medium lately, it’s helpful to know that photons in this far-ultraviolet part of the spectrum are absorbed and re-emitted by interstellar gas. They’re useful in the study of star-forming regions and molecular hydrogen clouds.
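For readers who like to check the numbers, that wavelength falls out of the Rydberg formula in a couple of lines of Python (the constant is the standard Rydberg value for hydrogen):

```python
# Lyman-alpha wavelength from the Rydberg formula for the n=2 -> n=1
# transition in hydrogen.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

inverse_wavelength = R_H * (1.0 / 1**2 - 1.0 / 2**2)  # 1/lambda
wavelength_nm = 1e9 / inverse_wavelength
print(f"Lyman-alpha wavelength: {wavelength_nm:.2f} nm")  # ~121.57 nm
```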
These emissions are at the heart of a new paper using data from New Horizons that has just appeared from the spacecraft’s team, under the guidance of Randy Gladstone, lead investigator and first author. Says Gladstone:
“Understanding the Lyman-alpha background helps shed light on nearby galactic structures and processes. This research suggests that hot interstellar gas bubbles like the one our solar system is embedded within may actually be regions of enhanced hydrogen gas emissions at a wavelength called Lyman alpha.”
So what we have in these Lyman-alpha emissions is a wavelength of ultraviolet light that is an outstanding tool for the study of the interstellar medium, not to mention our Solar System’s immediate surroundings within the Milky Way. The beauty of having New Horizons in the Kuiper Belt is that along the way to Pluto, the spacecraft was recording Lya emissions with its ultraviolet spectrograph, charmingly dubbed Alice. Developed at SwRI, Alice was turned to the task of surveying Lya activity as the craft continued to travel ever farther from the Sun. One set of scans mapped 83% of the sky.
Image: The SwRI-led NASA New Horizons mission’s extensive observations of Lyman-alpha emissions have resulted in the first-ever map from the galaxy in Lyman-alpha light. This SwRI-developed Alice spectrograph map (in ecliptic coordinates, centered on the direction opposite the Sun) depicts the relatively uniform brightness of the Lyman-alpha background surrounding our heliosphere. The black dots represent approximately 90,000 known UV-bright stars in our galaxy. The north and south galactic poles are indicated (NGP & SGP, respectively), along with the flow direction of the interstellar medium through the solar system (both upstream and downstream). Credit: Courtesy of SwRI.
The method is about what you would assume: The New Horizons team could subtract the Sun’s Lya activity from the rest of the spectrographic data so as to get a read on the rest of the sky at the Lyman-alpha wavelength. What the team has found is a Lya sky about 10 times stronger than was expected from earlier estimates. And now we turn back to the issue of that hot bubble of gas – the Local Bubble – we talked about last time. Some 300 light years wide, the Bubble’s hot ionized plasma was created by supernovae between 10 and 20 million years ago. The Sun resides within the Bubble along with low-density clouds of neutral hydrogen atoms, as we saw yesterday.
New Horizons has charted the emission of Lyman-alpha photons in the shell of the bubble, but the hydrogen ‘wall’ that has been theorized at the edge of the heliosphere, and the nearby cloud structures, show no correlation with the data. It is in the Local Interstellar Medium (LISM) background that the relatively bright and uniform signature of Lya is most apparent, evidently the result of hot, young stars within the Local Bubble shining on its interior walls. But as the authors note, this is currently just a conjecture. Further work from the doughty spacecraft may be in the cards. From the paper:
The NH Alice instrument has been used to obtain the first detailed all-sky map of Lyα emission observed from the outer solar system, where the Galactic and solar contributions to the observed brightness are comparable, and the solar contribution can be reasonably removed….A follow-up NH Alice all-sky Lyα map may be made in the future, if possible, and combining that map with this map could result in a considerable improvement in angular resolution. Finally, the maps presented here were obtained using the Alice spectrograph as a photometer, since its spectral resolution is too coarse to resolve the details of the Lyα line structure. However, there are instruments capable of resolving the Lyα line profile (e.g., J. T. Clarke et al. 1998; S. Hosseini & W. M. Harris 2020) which could possibly study this emission in more detail, and thus (even from Earth orbit) provide a new window on the LISM and H populations in the heliosphere.
Thus we learn something more about the boundary between our system and the interstellar medium in a map that should provide plenty of ground for new investigations at these wavelengths. And we’re reminded of the tenacity of well-built spacecraft and their continuing ability to return solid data well beyond their initial mission plan. Where and when the next interstellar craft gets built remains an open question, but for now New Horizons seems capable of a great deal more exploration.
The paper is Gladstone et al., “The Lyα Sky as Observed by New Horizons at 57 au,” The Astronomical Journal Vol. 169, No. 5 (25 April 2025), 275. Full text.
We don’t talk about the interstellar medium as much as we ought to. After all, if a central goal of our spacefaring is to probe ever further into the cosmos, we’re going to need to understand a lot more about what we’re flying through. The heliosphere is being mapped and we’ve penetrated it with still functioning spacecraft, but out there in what we can call the local interstellar medium (LISM) and beyond, the nature of the journey changes. Get a spacecraft up to a few percent of the speed of light and we have to think about encounters with dust and gas that affect mission design.
Early thinking on this was of the sort employed by the British Interplanetary Society’s Project Daedalus team, working their way through the design of a massive two-stage craft that was intended to reach Barnard’s Star. Daedalus was designed to move at 12 percent of lightspeed, carrying a 32 meter beryllium shield for its cruise phase. Designer Alan Bond opined that the craft could also deploy a cloud of dust from the main vehicle to vaporize oncoming particles before they could reach the shield.
Other concepts for dust mitigation have included using beamed energy to deflect larger objects, or in the case of micro-designs like Breakthrough Starshot, simply turning the craft to present as little surface area along the path of flight as possible, acknowledging that there is going to be an attrition rate in any swarm of outgoing probes. Send enough probes, in other words, and a workable percentage of them should get through.
Image: A dark cloud of cosmic dust snakes across this spectacular wide field image, illuminated by the brilliant light of new stars. This dense cloud is a star-forming region called Lupus 3, where dazzlingly hot stars are born from collapsing masses of gas and dust. This image was created from images taken using the VLT Survey Telescope and the MPG/ESO 2.2-metre telescope and is the most detailed image taken so far of this region. Credit: ESO/R. Colombari.
However we manage it, dust mitigation appears essential. Dana Andrews once calculated that for a starship moving at 0.3 c, a tenth-of-a-micron grain of typical carbonaceous dust would have a relative kinetic energy of 37,500,000 GeV. So understanding what’s around us, and in particular the gas and dust environment beyond but near the Solar System, is essential for future probes. For those wanting to brush up, the best overview on the interstellar medium remains Bruce Draine’s Physics of the Interstellar and Intergalactic Medium (Princeton, 2010).
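A quick back-of-envelope check of that figure is easy to run. The sketch below assumes a spherical grain of carbonaceous density around 2.2 g/cm³ (the exact parameters Andrews used aren’t given here, so the result should agree only to order of magnitude):

```python
import math

# Relativistic kinetic energy of a 0.1-micron-diameter dust grain at 0.3 c.
# Grain density is an assumption; the answer scales linearly with it.
c = 2.998e8          # speed of light, m/s
density = 2.2e3      # kg/m^3, assumed carbonaceous dust
radius = 0.05e-6     # m (0.1 micron diameter)

mass = density * (4.0 / 3.0) * math.pi * radius**3
gamma = 1.0 / math.sqrt(1.0 - 0.3**2)   # Lorentz factor at 0.3 c
ke_joules = (gamma - 1.0) * mass * c**2
ke_gev = ke_joules / 1.602e-10          # joules per GeV
print(f"grain mass ~ {mass:.2e} kg, kinetic energy ~ {ke_gev:.1e} GeV")
```

This prints roughly 3 × 10⁷ GeV, the same order as the Andrews figure; a slightly denser or larger grain closes the gap.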
Of course, most of the interstellar medium is made up of gas rather than dust, at a ratio of something like 99 to 1. The dust effect can be catastrophic, but the presence of this hydrogen and helium has been invoked by people like Robert Bussard because it could theoretically provide fuel to an interstellar ramscoop craft. Current thinking leans away from the idea, but it’s interesting to learn that we have sources of hydrogen far larger than we realized and also much closer to the Sun than, say, the Orion Nebula.
Thus my interest in a new paper out of Rutgers University. An international team working with astrophysicist Blakesley Burkhart has identified a hitherto unobserved cloud that could one day spawn new stars. Composed primarily of molecular hydrogen and dust, along with carbon monoxide, the cloud emerged in studies at far-ultraviolet wavelengths. The discovery is unusual because it is difficult to observe molecular hydrogen directly. Most observations of such clouds have to employ radio and infrared data, where the signature of CO is a clear tracer of the mass of molecular hydrogen.
Image: Blakesley Burkhart, a Rutgers University astrophysicist, has led a team that discovered the molecular hydrogen gas cloud, Eos. Credit: Rutgers University.
The data for this work were collected by FIMS-SPEAR (fluorescent imaging spectrograph), a far-ultraviolet instrument aboard the Korean satellite STSAT-1. Named Eos, the new cloud turns out to be one of the largest single structures in the sky. It is located some 300 light years away at the edge of the Local Bubble, and masses about 3400 times the mass of the Sun. The Solar System itself resides within the Local Bubble, which is several hundred parsecs in diameter and was probably formed by supernovae that created a hot interior cavity now surrounded by gas and dust. The nearby star-forming regions closest to the Sun are found along the surface of this bubble.
If we could see it with the naked eye, Eos would span about forty full Moons across the sky. The likely reason it has not been detected before is that it is unusually deficient in carbon monoxide and thus hard to find with more conventional methods. Says Burkhart:
“This is the first-ever molecular cloud discovered by looking for far ultraviolet emission of molecular hydrogen directly. The data showed glowing hydrogen molecules detected via fluorescence in the far ultraviolet. This cloud is literally glowing in the dark.”
And getting back to dust, this work usefully shows us how we’re beginning to map relatively nearby space to locate concentrated areas of such matter. From the paper:
Using Dustribution [a 3D dust mapping algorithm], we compute a 3D dust map of the solar neighbourhood out to a distance of 350 pc. Figure 3 [below] shows the reconstructed map, focusing on the region of the Eos cloud as a function of distance. We see a single, distinct cloud that corresponds to the H2 emission stretching from 94 to 130 pc; no other clouds are visible along the same lines of sight in Fig. 3. This establishes the cloud as among the closest to our Solar System, on the near side of the Sco–Cen OB association [Scorpius–Centaurus Association, young stars including the Southern Cross formed from the same molecular cloud]…
Image: This is part of Figure 3 from the paper. Caption: 3D dust density structure of the Eos cloud. Two-dimensional slices of the cloud at different distances. The colour bar indicates both the total dust extinction density and the total mass density, assuming dust properties from ref. 29 and a gas-to-dust mass ratio of 124… [T]he magenta contour shows the region with high H2/FUV ratio. Credit: Burkhart et al.
And I quite like this video the scientists have put together illustrating the cloud and its relationship to the Solar System.
Image: Scientists have discovered a potentially star-forming cloud and called it “Eos.” It is one of the largest single structures in the sky and among the closest to the Sun and Earth ever to be detected. Credit: Thomas Müller (HdA/MPIA) and Thavisha Dharmawardena (NYU).
The paper is Burkhart et al., “A nearby dark molecular cloud in the Local Bubble revealed via H2 fluorescence,” Nature Astronomy (28 April 2025). Full text.
The sublime, almost fearful nature of deep time sometimes awes me even more than the kind of distances we routinely discuss in these pages. Yes, the prospect of a 13.8 billion year old cosmos strung with stars and galaxies astonishes. But so too do the constant reminders of our place in the vast chronology of our planet. Simple rocks can bring me up short, as when I consider just how they were made, how long the process took, and what they imply about life.
Consider the shifts that have occurred in continents, which we can deduce from careful study at sites with varying histories. Move into northern Quebec, for example, and you can uncover rock that formed part of the continent we now call Laurentia, considered a relatively stable region of the continental crust (the term for such a region is craton). Move to Ukraine and you can investigate the geology of the continent called Baltica. Gondwana can be studied in Brazil, an obvious reminder of how much the surface has changed.
Image: With Earth’s surface constantly changing over time, we can only do snapshots to suggest some of these changes. Here is a view of the Pannotia supercontinent, showing components as they existed about 545 million years ago. Based on: Dalziel, I. W. (1997). “Neoproterozoic-Paleozoic geography and tectonics: Review, hypothesis, environmental speculation”. Geological Society of America Bulletin 109 (1): 16–42. DOI:10.1130/0016-7606(1997)109<0016:ONPGAT>2.3.CO;2. Fig. 12, p. 31. Credit: Wikimedia Commons.
Studying these matters has something of the effect of an earthquake on me. By which I mean that in the two minor earthquakes I have experienced, the sense of something taken for granted – the deep stability of the ground under my feet – suddenly became questionable. Anyone who has gone through such events knows that the initial human response is often mystification. What’s going on? And then suddenly the big picture emerges even as the quake passes.
So it can be with scientific realizations. Let’s talk about how scientists use various methods to study rock from different eras to infer changes to Earth’s magnetic field. Tiny crystal grains a mere 50 to a few hundreds of nanometers in size can hold a single magnetic domain, meaning a place where the magnetization exists in a uniform direction. Think of this as a locking in of magnetic field conditions that can be studied at various times in the planet’s history to see what the Earth’s magnetic field was doing then.
The subject is vast, and the work richly rewarding for anyone asking questions about how the planet has evolved. But now we’re learning that it also holds implications for the evolution of life. In a new feature in Physics Today, John Tarduno (University of Rochester) explains his team’s recent work on paleomagnetism, which is the study of the magnetic field throughout Earth’s history as captured in rock and sediment. For the magnetic poles move, and sometimes reverse, and within that history quite a story is emerging.
Image: The University of Rochester’s Tarduno, whose recent work explores the effects of a changing magnetic field on biological evolution. Credit: University of Rochester.
For Tarduno, the implications of some of these changes are striking. They grow out of the fact that the magnetic field, which is produced in our planet’s outer core (mostly liquid iron), continually varies, and the timescales involved can be short (mere years) or extensive (hundreds of millions of years). It’s been understood for a long time that the field reverses its polarity, but accompanying this change is the less-known fact that a polarity change also decreases the field strength. Sometimes these transitions are short, but sometimes lengthy indeed.
Consider that new evidence, presented by Tarduno in this deeply researched backgrounder, shows that some 575 to 565 million years ago the Earth’s magnetic field all but collapsed, and remained collapsed for tens of millions of years. If that time range piques your interest, it’s probably because it is coincident with a period known as the Avalon explosion, when macroscopic animal life, rich in complexity, begins to appear. And now we’re in the realm of evolution being spurred by magnetic field changes. The implications run to life’s own history and even as far out as SETI.
Named after the peninsula on which evidence for it was found, the Avalon explosion occurred tens of millions of years earlier than the Cambrian period, which has heretofore been considered the time when complex lifeforms began to appear on Earth. It came during the Ediacaran period that preceded the Cambrian, and seems to have produced the first complex multicellular organisms. The sudden diversification of body plans and distinct morphologies has been traced at sites in Canada as well as the UK.
Image: Archaeaspinus, a representative of phylum Proarticulata which also includes Dickinsonia, Karakhtia and numerous other organisms. They are members of the Ediacaran biota. Credit: Masahiro miyasaka / Wikimedia Commons.
So this is an ‘unexpected twist,’ as Tarduno puts it, that may relate significant evolutionary changes to the magnetic field as it reconfigured itself. Scientists studying the Ediacaran period (between 635 and 541 million years ago) have found that rocks formed then show odd magnetic directions. Some researchers concluded that the magnetic field in this period was reversing polarity, and we already knew that during a polarity reversal (the last was 800,000 years ago), the magnetic field could take on unusual shapes. Recent work shows that its strength in this period was a mere tenth of the values of today’s field.
This work was done in northern Quebec (ancient Laurentia), but later work from Ukraine (Baltica) and Brazil showed an even lower field strength. We’re talking about a long period here of ultralow magnetic activity, perhaps 26 million years or more. Events deep inside the Earth’s inner core seem to have spurred this. I won’t go into the details about all the research on the core – it’s fascinating but would take us deeply into the weeds. For now, just consider that a consensus has been building that relates core activity to the odd Ediacaran geomagnetic field, one that correlates with a profound evolutionary event.
Image: This is Figure 2 from the paper. Caption: Changes deep inside Earth have affected the behavior of the geodynamo over time. In the fluid outer core, shown at right, convection currents (orange and yellow arrows and ribbons) form into rolls because of the Coriolis effect from the planet’s rotation and generate Earth’s magnetic field (black arrows). Structures in the mantle—for example, slabs of subducted oceanic crust, mantle plumes, and regions that are anomalously hot or dense—can affect the heat flow at the core–mantle boundary and, in turn, influence the efficiency of the geodynamo. As iron freezes onto the growing solid inner core, both latent heat of crystallization and composition buoyancy from release of light elements provide power to the geodynamo. (Left: Earth layers image adapted from Rory Cottrell, Earth surface image adapted from EUMETSAT/ESA; right: image adapted from Andrew Z. Colvin/CC BY-SA 4.0.)
In the models Tarduno describes, the Cambrian explosion itself may have been driven by a greater incidence of energetic solar particles during periods of weak magnetic field strength. Thus we have the basis for a weak field increasing mutation rates and stimulating evolutionary processes. Tarduno cites his own work here:
Eric Blackman, David Sibeck, and I have considered whether the linkage might be found in changes to the paleomagnetosphere. Records of the strength of the time-averaged field can be derived from paleomagnetism, whereas solar-wind pressure can be estimated using data from solar analogues of different ages. My research group and collaborators have traced the history of solar–terrestrial interactions in the past by calculating the magnetopause standoff distance, where the solar-wind pressure is balanced by the magnetic field pressure… We know that the ultralow geomagnetic fields 590 million years ago would have been associated with extraordinarily small standoff distances, some 4.2 Earth radii (today it is 10–11 Earth radii) and perhaps as low as 1.6 Earth radii during coronal mass ejection events.
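The standoff numbers in that passage follow from Chapman-Ferraro pressure balance, in which the standoff distance scales as the cube root of the dipole field strength for a fixed solar wind. A minimal sketch: the present-day standoff comes from the 10-11 Earth-radii figure in the quote, while the field ratios below are illustrative, not values from the paper:

```python
# Magnetopause standoff scaling: r ~ B^(1/3) at fixed solar-wind pressure.
r_today = 10.5  # present-day standoff in Earth radii (text quotes 10-11)

for field_ratio in (1.0, 0.1, 0.064):
    r = r_today * field_ratio ** (1.0 / 3.0)
    print(f"field at {field_ratio:.1%} of today -> standoff ~ {r:.1f} R_E")
```

A field at a tenth of today’s strength gives a standoff near 4.9 Earth radii; the quoted 4.2 corresponds to a field closer to six percent of the present value.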
As Tarduno explains, all this leads to increased oxygenation during a period of magnetic field strength diminished to an all-time low, along with an accompanying boom in the complexity of animal life from the Ediacaran leading into the Cambrian.
Notice the profound shift we are talking about here. Classically, scientists have assumed that it was the shielding effects of the magnetic field that offered life the chance to survive. Indeed, we talk about a lack of magnetic fields in exoplanets like Proxima Centauri b as being a strong danger to life because of incoming radiation. This new work is saying something profound:
If our hypothesis is correct, we will have flipped the classic idea that magnetic shielding of atmospheric loss was most important for life, at least during the Ediacaran Period: The prolonged interlude when the field almost vanished was a critical spark that accelerated evolution.
Maybe we have been too simplistic in our views of how a magnetic field influences the development and growth of lifeforms. In recent decades, work has been showing linkages between these magnetic changes, which can last for millions of years, and spurts in evolutionary activity. It may be precisely because of low magnetic field strength, rather than in spite of it, that life suddenly explodes into new forms during these periods.
Jim Benford has often commented to me that despite having mounted the most intensive SETI search with the most powerful tools ever available, we still have not a trace of a signal from another civilization. Is it possible – and Jim was the one who pointed out this paper to me – that the reason is that the magnetic field changes that so affected life on our planet are rare elsewhere? It now looks as though a magnetic field should be considered less a binary situation than a variable, one whose mutability because of core activity can take the world it envelops through periods of low to high magnetic strength, including some eras millions of years long in which there is hardly any field at all.
I mentioned Proxima Centauri b above. Whether or not it has a magnetic field has yet to be determined, which points out how little we know about such fields around exoplanets at large. Further investigation of Earth’s magnetic past will help us understand how such fields change over time, and whether Earth’s own history has been unusually kind to evolution.
The article is Tarduno, “Earth’s magnetic dipole collapses, and life explodes,” Physics Today 78 (4) (2025), 26-33 (abstract).
Sub-Neptune planets are going to be occupying scientists for a long time. They’re the most common type of planet yet discovered, and they have no counterpart in our own Solar System. The media buzz about K2-18b that we looked at last time focused solely on the possibility of a biosignature detection. But this world, and another that I’ll discuss in just a moment, have a significance that can’t be confined to life. Because whether or not what is happening in the atmosphere of K2-18b is a true biosignature, the presence of a transiting sub-Neptune relatively close to the Sun offers immense advantages in studying the atmosphere and composition of this entire category.
Are these ‘hycean’ worlds with global oceans beneath an atmosphere largely made up of hydrogen? It’s a possibility, but it appears that not all sub-Neptunes are the same. Helpfully, we have another nearby transiting sub-Neptune, a world known as TOI-270 d, which at 73 light years is even closer than K2-18b, and has in recent work from the Southwest Research Institute provided us with perhaps the clearest data yet on the atmosphere of such a world. This work may prompt re-thinking of both planets, for rather than oceans, we may be dealing with rocky surfaces under a hot atmosphere.
TOI-270 d exists within its star’s habitable zone. The primary here is a red dwarf in the constellation Pictor, about 40 percent as massive as the Sun. Three planets are known in the system, discovered by the TESS satellite by detection of their transits.
SwRI’s Christopher Glein is lead author of the paper on this work. He and his team are working with data from the James Webb Space Telescope, collected by Björn Benneke and reported in a 2024 paper that was startling in its detail (citation below). Seeing TOI-270 d as a possible “archetype of the overall population of sub-Neptunes,” the Benneke paper describes it as a planet in which the atmosphere is not largely hydrogen but enriched throughout, blending hydrogen and helium with heavier elements (the term is ‘miscible’) rather than formed with a stratified hydrogen layer at the top.
Image: SwRI’s Christopher Glein. Credit: Ian McKinney/SwRI.
Glein acknowledges the appeal of planets with the potential for life, the search for which drives much of the energy in exoplanet research. And the new data offer much to consider:
“The JWST data on TOI-270 d collected by Björn Benneke and his team are revolutionary. I was shocked by the level of detail they extracted from such a small exoplanet’s atmosphere, which provides an incredible opportunity to learn the story of a totally alien planet. With molecules like carbon dioxide, methane and water detected, we could start doing some geochemistry to learn how this unusual world formed.”
TOI-270 d, in other words, offers up plenty of detail, with carbon dioxide, methane and water readily detected, allowing, as Glein notes, the possibility of doing a geochemical analysis to delve into not just the atmosphere’s composition, but how this world formed in the first place. We have to begin with temperature, for the gases that showed up in the JWST data were at temperatures close to 550 degrees Celsius. Hotter, in other words, than the surface of Venus, a fact that we need to reckon with if we’re holding out hope for global oceans. For at these temperatures gases do some interesting things.
Here the term is ‘equilibration process.’ At a certain level of the atmosphere, pressures and temperatures are high enough that gases reach chemical equilibrium – they become a stable mix. Going higher means both temperature and pressure drop, thus slowing reaction rates. But it is possible for gases to move upward faster than their chemical reactions can adjust to the change, which ‘freezes’ in the composition that was set at the equilibrium level. The mixture ‘quenches,’ in the terminology, and at that point the chemical ratios can no longer change. Finding out where this happens allows scientists to interpret what they see in data taken much higher in the atmosphere.
We are left with the chemical signature of the deep atmosphere where equilibrium occurs. We draw these inferences from the data taken by JWST from the upper atmosphere, offering a broader view of the atmosphere’s composition throughout.
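To make the quench idea concrete, here is a minimal sketch of the criterion involved: composition freezes at the level where the chemical relaxation time first exceeds the vertical mixing time. The Arrhenius parameters and mixing timescale below are placeholders chosen purely for illustration, not values from the Glein paper:

```python
import math

# Quench criterion sketch: find the temperature (a proxy for altitude)
# at which chemistry becomes slower than mixing. All parameters hypothetical.
def t_chem(T, A=1e10, Ea=3.0e4):
    """Chemical relaxation timescale in seconds; slows sharply as T drops."""
    return 1.0 / (A * math.exp(-Ea / T))

t_mix = 1e7  # assumed vertical mixing timescale, seconds

# Walk upward through the atmosphere (decreasing temperature)
for T in range(1500, 400, -5):
    if t_chem(T) > t_mix:
        print(f"quench near T ~ {T} K: reactions can no longer keep pace")
        break
```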
The paper analyzes the balance between methane and carbon dioxide in terms of this quenching, as the relative amounts of the two gases become ‘frozen’ as they move upward. Modeling where the balance would occur in a hydrogen-rich atmosphere allowed the team to determine that the freeze-out occurred at temperatures between 885 K and 1112 K, with pressures ranging from one to 13 times Earth sea-level pressure. All this points to a thick, hot atmosphere, and one with a persistent conundrum.
For while models suggest that we should find ammonia in the atmosphere of temperate sub-Neptunes, it fails to appear. A nitrogen-poor atmosphere, the authors believe, is possibly the result of nitrogen being sequestered in a magma ocean. The speculation points to a world that is anything but hycean – no water oceans here! We may in fact be observing a planet with a thick atmosphere rich in hydrogen and helium that is well mixed with “metals” (elements heavier than helium), all of this over a rocky surface.
Image: An SwRI-led study modeled the chemistry of TOI-270 d, a nearby exoplanet between Earth and Neptune in size, finding evidence that it is a giant rocky world (super-Earth) surrounded by a deep, hot atmosphere. NASA’s JWST detected gases emanating from a region of the atmosphere over 530 degrees Celsius — hotter than the surface of Venus. The model illustrates a potential magma ocean removing ammonia (NH3) from the atmosphere. Hot gases then undergo an equilibration process and are lofted into the planet’s photosphere where JWST can detect them. Credit: SwRI / Christopher Glein.
The paper also notes the lack of carbon monoxide, explaining this by a model showing that CO would have frozen out even deeper in the atmosphere. Both modeling and data offer an explanation for TOI-270 d but also point to alternatives for K2-18 b. The modeling of the latter as an ocean world is but one explanation. Photochemical models show how difficult it is to produce and maintain enough methane under such conditions. Furthermore, K2-18 b likely receives too much stellar energy to maintain surface liquid water, due to greenhouse heating and limited atmospheric circulation. Thus the paper’s conclusion on K2-18b:
Because a revised deep-atmosphere scenario can accommodate depleted CO and NH3 abundances, the apparent absence of these species should no longer be taken as evidence against this type of scenario for TOI-270 d and similar planets, such as K2-18 b. Our results imply that the Hycean hypothesis is currently unnecessary to explain any data, although this does not preclude the existence of Hycean worlds.
This is a deep, rich analysis drawing plausible conclusions from clearer data than we had previously acquired from transiting sub-Neptunes. The question of water worlds under hydrogen atmospheres remains open, but the galvanizing nature of this paper is that it points to forms of analysis that until now we’ve been able to do only in our own Solar System. I think the authors are connecting dots in very useful ways here, pointing to the progress in exoplanetary science as we go ever deeper into atmospheres.
From the paper. The italics are mine:
Our overall philosophy was to develop modeling approaches that are rooted as much as possible in empirical experience. This experience includes fumaroles on Earth that constrain quench temperatures between redox species in hot gases, and making planets out of meteorites and cometary material to understand how different elements can reach different levels of enrichment in planetary atmospheres. Our approaches were simple, perhaps too simple in some cases if the goal is to accurately pinpoint the composition, present conditions, and history of the planet. If, instead, the goal is to suggest new ways of thinking about geochemistry on exoplanets that maintain focus on key variables and how they can be connected to observational data, as well as large-scale links between what we observe and how the atmosphere might have originated, then a different path to progress can be taken. The latter is the point of view we pursued.
The paper is Glein et al., “Deciphering Sub-Neptune Atmospheres: New Insights from Geochemical Models of TOI-270 d,” accepted at the Astrophysical Journal (preprint). The Benneke et al. paper is “JWST Reveals CH4, CO2, and H2O in a Metal-rich Miscible Atmosphere on a Two-Earth-Radius Exoplanet,” currently available as a preprint.
As teams of researchers begin to detect molecules that could indicate the presence of life in the atmospheres of exoplanets, controversies will emerge. In the early stages, the method will be transmission spectroscopy, in which light from the star passes through the planet’s atmosphere as it transits the host. From the resulting spectra various deductions may be drawn. Thus oxygen (O₂), ozone (O₃), methane (CH₄), or nitrous oxide (N₂O) would be interesting, particularly in out-of-equilibrium situations where a particular gas would need to be replenished to continue to exist.
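The sizes involved are worth keeping in mind. With approximate literature values for K2-18b (radius about 2.6 Earth radii, mass about 8.6 Earth masses, a red dwarf host of roughly 0.44 solar radii, and an assumed hydrogen-rich atmosphere near 270 K), a short sketch gives both the transit depth and the much smaller signal contributed by one atmospheric scale height, which is what transmission spectroscopy actually reads:

```python
import math

# Transit depth and per-scale-height signal for a K2-18b-like planet.
# All planetary and stellar parameters are approximate assumptions.
R_EARTH, R_SUN, M_EARTH = 6.371e6, 6.957e8, 5.972e24  # m, m, kg
K_B, M_HYDROGEN, G = 1.381e-23, 1.673e-27, 6.674e-11

Rp = 2.6 * R_EARTH    # planet radius
Mp = 8.6 * M_EARTH    # planet mass
Rs = 0.44 * R_SUN     # stellar radius (assumed)
T, mu = 270.0, 2.3    # temperature (K), mean molecular weight (H2-rich)

depth = (Rp / Rs) ** 2                # geometric transit depth
g = G * Mp / Rp**2                    # surface gravity
H = K_B * T / (mu * M_HYDROGEN * g)   # atmospheric scale height
signal = 2.0 * Rp * H / Rs**2         # extra depth per scale height

print(f"transit depth ~ {depth * 1e6:.0f} ppm")                # ~2900 ppm
print(f"scale height ~ {H / 1e3:.0f} km, "
      f"~{signal * 1e6:.0f} ppm of signal per scale height")   # ~30 ppm
```

A molecular feature spanning a few scale heights thus modulates the depth by only some tens of parts per million, which is why instrument-level systematics dominate these debates.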
While we continue with the painstaking work of identifying potential biological markers — and there will be many — new findings will invariably become provocations to find abiotic explanations for them. Thus the recent flurry over K2-18b, a large (2.6 times Earth’s radius) sub-Neptune that, if not entirely gaseous, may be an example of what we are learning to call ‘hycean’ worlds. The term stands for ‘hydrogen-ocean.’ Think of endless ocean under an atmosphere predominantly composed of hydrogen. Now put it in the habitable zone.
Thus the interest in K2-18b, which appears to orbit within the habitable zone of its red dwarf host. Astronomers have known about water vapor here for some time, while JWST results in 2023 further indicated carbon dioxide and methane. On its 33-day transiting orbit, this is a planet made to order for spectral analysis of its atmosphere. Now we have new work that leans in the direction of a biological explanation for a possible biosignature, one that is tantalizing but clearly demands further investigation.
The biosignature, deduced by researchers at the University of Cambridge led by Nikku Madhusudhan, involves dimethyl sulfide (DMS) and/or dimethyl disulfide (DMDS). These are molecules that, at least on Earth, are produced only by life, most commonly by marine phytoplankton, photosynthetic organisms that play a large role in producing oxygen for our atmosphere. The detection, say the authors, is at the three-sigma level of statistical significance, which means a 0.3% probability that these results occurred by chance. Bear in mind that five-sigma is considered the threshold for scientific discovery (below a 0.00006% probability that the results are by chance). So as I say, we can call this intriguing but not definitive, a conclusion the authors support.
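Those percentages are just the two-sided tail probabilities of a normal distribution, easy to verify with scipy:

```python
from scipy.stats import norm

# Convert sigma levels into two-sided chance probabilities.
for sigma in (3, 5):
    p = 2 * norm.sf(sigma)  # sf() gives the upper-tail probability
    print(f"{sigma}-sigma: p ~ {p:.1e}, i.e. {p * 100:.5f}%")
# 3-sigma -> ~0.27%; 5-sigma -> ~0.00006%
```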
Image: Nikku Madhusudhan, who led the current work on K2-18b. Credit: University of Cambridge.
What excites the researchers about this work is that they first saw hints of DMS using the James Webb Space Telescope’s NIRISS (Near-Infrared Imager and Slitless Spectrograph) and NIRSpec (Near-Infrared Spectrograph) instruments, but found further evidence using the observatory’s MIRI (Mid-Infrared Instrument) in the mid-infrared (6-12 micron) range. That’s significant because it produces an independent line of evidence using different instruments and different wavelengths. And in the words of Madhusudhan, “The signal came through strong and clear.”
But Madhusudhan said something else that has excited commentators. The concentrations of DMS and DMDS in K2-18b’s atmosphere appear to be thousands of times higher than what we see on Earth, implying that K2-18b may be a specific type of living planet:
“Earlier theoretical work had predicted that high levels of sulfur-based gases like DMS and DMDS are possible on Hycean worlds. And now we’ve observed it, in line with what was predicted. Given everything we know about this planet, a Hycean world with an ocean that is teeming with life is the scenario that best fits the data we have.”
Centauri Dreams readers will want to check Dave Moore’s Super-Earths/Hycean Worlds for more on this category. A hycean world is considered to be a water world with habitable surface conditions, and in earlier work, Madhusudhan and colleagues have noted that K2-18b could well maintain a habitable ocean beneath a hydrogen atmosphere. We have no analog to planets like this in our own system, but the category may be emerging as a place where conditions of temperature and atmospheric pressure may allow at least microbial life.
Image: Artist’s conception of the surface of a hycean planet. Credit: Amanda Smith, Nikku Madhusudhan.
So what is producing these chemical signatures? There may be reason for some optimism about a life detection, but the possibility of unknown chemical processes remains alive, and will thus spawn further work both theoretical and experimental. And this is the problem for the entire landscape of remote biosignature detection. We’re going to be seeing a lot of interesting results as our instrumentation continues to improve, but at a level of uncertainty that will ensure debate and the need for more data. This is going to be a long and I suspect frustrating process. Astrophysicists are going to be knocking heads at conferences for decades.
So this is an example of how the debate is going to be playing out at many levels. The evidence for biology will be sifted against possible abiotic processes. From the paper:
…both DMS and DMDS are highly reactive and have very short lifetimes in the above experiments (i.e., a few minutes) and in the Earth’s atmosphere (i.e., between a few hours to ∼1 day), due to various photochemical loss mechanisms (e.g. Seager et al. 2013b). Thus, the resulting DMS and DMDS mixing ratios in the current terrestrial atmosphere are quite small (typically ≲1 ppb), despite continual resupply by phytoplankton and other marine organisms…. sustaining DMS and/or DMDS at over 10-1000 ppm concentrations in steady state in the atmosphere of K2-18 b would be implausible without a significant biogenic flux. Moreover, the abiotic photochemical production of DMS in the above experiments requires an even greater abundance of H2S as the ultimate source of sulfur — a molecule that we do not detect — and requires relatively low levels of CO2 to curb DMS destruction (Reed et al. 2024), contrary to the high reported abundance of CO2 on K2-18 b (Madhusudhan et al. 2023b).
Image: The graph shows the observed transmission spectrum of the habitable zone exoplanet K2-18 b using the JWST MIRI spectrograph. The vertical axis shows the fraction of star light absorbed in the planet’s atmosphere due to molecules in the planet’s atmosphere. The data are shown in the yellow circles with the 1-sigma uncertainties. The curves show the model fits to the data, with the black curve showing the median fit and the cyan curves outlining the 1-sigma intervals of the model fits. The absorption features attributed to dimethyl sulphide and dimethyl disulphide are indicated by the horizontal lines and text. The image behind the graph is an illustration of a hycean planet orbiting a red dwarf star. Credit: A. Smith, N. Madhusudhan (University of Cambridge).
So there are reasons for optimism. We’ll keep taking such results apart, motivated by the unsparing self-criticism of the Cambridge team, which goes out of its way to scrutinize its findings for alternative explanations (a good lesson in scientific methodology here). Case in point: Madhusudhan and colleagues point out evidence for the presence of DMS on comet 67P/Churyumov-Gerasimenko, which could mean an abiotic source delivered by comets into the atmosphere. Because comets contain ices and gases that could be interpreted as biosignatures if found in a planet’s atmosphere, we’re again reminded of the need for caution. Even so, we can deflect this.
For at K2-18b, the atmosphere is massive compared to the trace gases that could be induced by cometary delivery, and the authors doubt that DMS and DMDS would survive in their present form during a hypervelocity comet impact. K2-18b just has too much DMS and DMDS, per these findings, to be accounted for by comets alone.
Detecting a biosignature will require accumulating more and more evidence, first to demonstrate the actual presence of the detected molecules and second to rule out abiotic photochemical ways of producing DMS and DMDS in an atmosphere like this. Madhusudhan cites this work as an opportunity for pursuing such investigations within a community of continuing research. No one is claiming we have detected life at K2-18b, but we’re getting a nudge in that direction that will be joined by quite a few other nudges as we probe alien atmospheres.
Not all these nudges point to the same things. For among papers discussing K2-18b, another is about to appear that questions whether it and another prominent sub-Neptune (TOI-270 d) are actually hycean worlds at all. This deep dive into sub-Neptune atmospherics, led by Christopher Glein at Southwest Research Institute, will be our subject next time. For before we can make the call on any hycean biosignature, we have to be sure that oceans are possible there in the first place.
The paper is Madhusudhan et al., “New Constraints on DMS and DMDS in the Atmosphere of K2-18 b from JWST MIRI,” accepted at Astrophysical Journal Letters (preprint).
The Wow! signal, a one-off detection at the Ohio State ‘Big Ear’ observatory in 1977, continues to perplex those scientists who refuse to stop investigating it. If the signal were terrestrial in origin, we have to explain how it appeared at 1.42 GHz, while the band from 1.4 to 1.427 GHz is protected internationally – no emissions allowed. Aircraft can be ruled out because they would not remain static in the sky; moreover, the Ohio State observatory had excellent RFI rejection. Jim Benford today discusses an idea he put forward several years ago, that the Wow signal could have originated in power beaming, which would necessarily sweep past us as it moved across the sky and never reappear. And a new candidate has emerged, as Jim explains, involving an entirely natural process. Are we ever going to be able to figure this signal out? Read on for the possibilities. A familiar figure in these pages, Jim is a highly regarded plasma physicist and CEO of Microwave Sciences, as well as being the author of High Power Microwaves, widely considered the gold standard in its field.
by James Benford
The 1977 Wow! signal had the potential of being the first signal from extraterrestrial intelligence. But searches for recurrence of the signal heard nothing. Interest persists, and two lines of thought continue to ponder it.
An Astronomical Maser
A recent paper proposes that the Wow! signal could be the first recorded event of an astronomical maser flare in the hydrogen line [1]. (A maser is a laser-like coherent emission at microwave frequencies. The maser was the precursor to the laser.) The Wow frequency was at the hyperfine transition line of neutral hydrogen, about 1.4 GHz. The suggestion is that the Wow was a sudden brightening from stimulated emission of the hydrogen line in interstellar gas driven by a transient radiation source behind a hydrogen cloud. The group is now going through archival data searching for other examples of abrupt brightening of the hydrogen line.
Maser Wow concept: A transient radiative source behind a cold neutral hydrogen (HI) cloud produced population inversion in the cloud near the hydrogen line, emitting a narrowband burst toward Earth [1].
Image courtesy of Abel Mendez (Planetary Habitability Laboratory, University of Puerto Rico at Arecibo).
Could aliens use the hydrogen clouds as beacons, triggered by their advanced technology? Abel Mendez has pointed out that this was suggested by Bob Dixon in a student’s thesis in 1976 [2]! From that thesis [3]:
“If it is a beacon built by astroengineering, such as an extraterrestrial civilization that is controlling the emission of a natural hydrogen cloud and using it as a beacon, then the only way that it could be ascertained as such, is by some time variation. And we are not set up to study time variation.”
How could such a beacon be built? It would require producing a population inversion in a substantial volume of neutral hydrogen. That might perhaps be done by an array of thermonuclear explosives optimized to produce a narrowband emission into such a volume [4]. Exploded simultaneously, they could produce that inversion, creating the pulse seen on Earth as the Wow.
Why does the Wow! Signal have narrow bandwidth?
In 2021, I published a suggestion that the enigmatic Wow Signal, detected in 1977, might credibly have been leakage from an interstellar power beam, perhaps from launch of an interstellar probe [5]. This leakage accounts for all four observed features of the Wow Signal: the power density received, the signal’s duration, its frequency, and the fact that the observation has never recurred.
At the 2023 annual Breakthrough Discuss meeting, Mike Garrett of Jodrell Bank inquired “I was thinking about the Wow signal and your suggestion that it might be power beam leakage. But it’s not obvious to me why any technical civilization would limit their power beam to a narrow band of <= 10 kHz. Is there some kind of technical advantage to doing that or some kind of technical limitation that would produce such a narrow-band response?”
After thinking about it, I have concluded that there is ‘some kind of technical advantage’ to narrow bandwidth. In fact, it is required for high-power beaming systems.
Image: The Wow Signal was detected by Jerry Ehman at the Ohio State University Radio Observatory (known as the Big Ear). The signal, strong enough to elicit Ehman’s inscribed comment on the printout, was never repeated.
A Beamer Made of Amplifiers?
High power systems involving multiple sources are usually built using amplifiers, not oscillators, for several technical reasons. For example, the Breakthrough Starshot system concept has multiple laser amplifiers driven by a master oscillator, a so-called master oscillator-power amplifier (MOPA) configuration. Amplifiers are characterized by their ‘gain-bandwidth product’: the product of amplifier gain (power out divided by power in) and bandwidth, which is fixed for a given type of device. This product is limited by phase and frequency desynchronization between the beam and the electromagnetic field outside the frequency bandwidth [6].
Therefore, wide bandwidth means lower power per amplifier, and hence many more amplifiers. Conversely, to reach high power, each amplifier must have a small bandwidth. (The number of amplifiers is then determined by the power required.) For power beaming applications, getting high power on target is essential: higher power is required, so smaller bandwidth follows.
So why do you get narrow bandwidth? You use very high gain amplifiers to essentially “eat up” the gain-bandwidth product. For example, in a klystron, you have multiple high-Q cavities that result in high gain. The high-gain SLAC-type klystrons had gains of about 100,000. Fractional bandwidths for high power amplifiers on Earth are about one percent of one percent, or 10⁻⁴. The Wow! bandwidth is 10 kHz/1.42 GHz, about 10⁻⁵.
So yes, the physics of amplifiers limits bandwidth in beacons and power beams, because both would be built to provide very high power. With very high gain in the amplifiers, small bandwidth is the result.
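The arithmetic is simple enough to sketch. The fixed gain-bandwidth product below is a placeholder rather than a measured device parameter, but it shows how driving the gain up forces the fractional bandwidth down, and how the Wow! bandwidth compares:

```python
# Fractional bandwidth of the Wow! signal against the hydrogen line.
wow_bandwidth_hz = 10e3
hydrogen_line_hz = 1.42e9
print(f"Wow! fractional bandwidth ~ {wow_bandwidth_hz / hydrogen_line_hz:.0e}")

# Illustrative gain-bandwidth trade-off (product value is assumed).
gain_bandwidth_product = 10.0  # fixed for a given device type
for gain in (1e3, 1e4, 1e5):
    print(f"gain {gain:.0e} -> fractional bandwidth "
          f"{gain_bandwidth_product / gain:.0e}")
```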
This fact about amplifiers is another reason I think power beaming leakage is the explanation for the Wow. Earth could have accidentally received the beam leakage. Since stars constantly move relative to each other, later launches using the Wow! beam will not be seen from Earth.
Therefore I predicted that each failed additional search for the Wow! to repeat is more evidence for this explanation.
The Wow search goes on
These two very different explanations for the origin of the Wow! have differing future possibilities. I predicted that the signal wouldn’t be seen again; each failed additional search for it to repeat (and there have been many) is more evidence for the power beaming explanation. Mendez and coworkers, by contrast, are looking to see whether their process has occurred before, and can prove their explanation by finding such occurrences in existing data. Only the Mendez concept can be tested soon.
References
1. Abel Mendez, Kevin Ortiz Ceballos, and Jorge I. Zuluaga, “Arecibo Wow! I: An Astrophysical Explanation for the Wow! Signal,” arXiv:2408.08513v2 [astro-ph.HE], 2024.
2. Abel Mendez, private communication.
3. Cole, D. M., “A Search for Extraterrestrial Radio Beacons at the Hydrogen Line,” doctoral dissertation, Ohio State University, 1976.
4. Taylor, T., “Third generation nuclear weapons,” Sci. Am. 256 (4), 1986.
5. James Benford, “Was the Wow Signal Due to Power Beaming Leakage?” JBIS 74, 196-200, 2021.
6. James Benford, Edl Schamiloglu, John A. Swegle, Jacob Stephens and Peng Zhang, Ch. 12 in High Power Microwaves, Fourth Edition, Taylor and Francis, Boca Raton, FL, 2024.