Centauri Dreams

Imagining and Planning Interstellar Exploration

Starshot Is a Success (Part II) 10 Mar

The second part of Jim Benford’s examination of Breakthrough Starshot concludes our look at the numerous issues advanced by Phase I of the project. Largely discounted in recent press coverage, the Starshot effort in fact completed a successful Phase I and left behind numerous papers that illuminate the path forward for interstellar flight. This is solid work on everything from laser arrays to metamaterials and the engineering of data return at light-year distances. Read on.

by James Benford

“I have learned to use the word ‘impossible’ with great caution.”
— Wernher von Braun, after the lunar landing

In this second report, I will describe the major results of Starshot, beginning with the mission scenario and then treating each major technical area in terms of how solutions were found and issues retired. In Part 1, I described Phase 1 objectives.

One reason Starshot's results are not widely known is that the Breakthrough Foundation did little to publicize its events and findings for most of the project's duration. Since its completion, substantial reports have appeared, but they are not readily available to the public. A final report exists but has yet to be published. There are briefings by Harry Atwater at Breakthrough Discuss and at the IRG meeting in Montreal in 2023 [1,2].

The most detailed discussions are in the book Laser Propulsion in Space, edited by Claude Phipps, with a system overview by Pete Worden and others, a description of his system model by Kevin Parkin, and other aspects of directed energy in space by Philip Lubin, all in the one volume [3]. The Parkin article is particularly interesting because it contains fully worked-out examples of possible voyages by humans traveling to the stars in large (>100 m) sailcraft in centuries to come. Note that there are many journal publications produced by Breakthrough Starshot, and many papers have been published since Starshot was put on hold.

The Starshot mission scenario has evolved as a substantial improvement over previous beam-driven sail mission concepts. A mothership is launched which houses a fleet of membrane-like sailcraft measuring ~5 meters in diameter and less than a micron thick. The traditional laser-guide-star adaptive optics approach can't be scaled to Starshot-sized apertures to deal with the time-dependent fluctuations caused by atmospheric turbulence. Instead, the system uses a satellite-based laser called the Beacon, which at launch time is in a high orbit with an apogee of 200,000 km.

Image: Starshot system geometry. Arrows indicate that the array acquires atmospheric turbulence data from a Beacon and points the beam at the sailcraft. (Courtesy of Breakthrough Foundation.)

The sailcraft are composed of super-reflective metamaterials that stabilize the perturbations that could prevent beam-riding during the propulsion phase. The scientific instruments that are the payload are integrated into the sail. The mission begins as the mothership deploys a sailcraft into space.

Meanwhile on Earth, a phased array of 100 million small lasers turns on, generates ~100 GW of optical power and, using information from the Beacon in high orbit, digitally adjusts the phase of the emitted light to correct for atmospheric turbulence. These small lasers would be manufactured in printed sheets, following the fabrication techniques of the semiconductor industry; such mass production is the means of lowering laser prices.

The single 100 GW beam focuses on the sail and accelerates it. Almost no energy is absorbed by the sail's highly reflective surface; the reflected photons impart the force. The sail rides the beam for ten minutes, reaches relativistic speed, and leaves the solar system in less than a week. After acceleration it encounters dust and charged particles, so it can be oriented edge-on to avoid such collisions. On arriving at the Alpha Centauri system, it captures images, detects dust and particles, and measures fields. The sail then transmits its data home to an array of optical receivers on Earth; the signal begins to arrive a little over four years after transmission. Data return may take decades because of the limited data rate. Recall that complete data return from the New Horizons flyby took about a year.
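The timeline above can be checked with back-of-envelope arithmetic. A minimal sketch, assuming the widely quoted Starshot target of 0.2 c and the nominal 4.37 light-year distance to Alpha Centauri (both assumptions not stated explicitly in this passage):

```python
C = 299_792_458.0          # speed of light, m/s
AU = 1.495978707e11        # astronomical unit, m
LY = 9.4607e15             # light-year, m

v = 0.2 * C                # assumed cruise speed: 0.2 c (the oft-quoted Starshot target)
boost = 600.0              # ten-minute acceleration phase, s

accel_g = (v / boost) / 9.81                         # mean acceleration in g's
days_to_100AU = 100 * AU / v / 86_400                # time to cross ~100 AU
years_to_centauri = 4.37 * LY / v / (365.25 * 86_400)
signal_delay_years = 4.37                            # light-travel time back to Earth

print(f"mean acceleration : {accel_g:,.0f} g")
print(f"time to 100 AU    : {days_to_100AU:.1f} days")
print(f"cruise time       : {years_to_centauri:.1f} years")
print(f"first data arrives: {years_to_centauri + signal_delay_years:.1f} years after launch")
```

The numbers bear out the article's claims: roughly 10,000 g of acceleration, under three days to pass 100 AU (out of the solar system "in less than a week"), a cruise of about 22 years, and data starting to arrive a bit over four years after that.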

The figure above shows a concept for the sail, about 5 m in diameter. Some studies show that at the velocities under consideration, gas and dust will pass through the thin sail with virtually no damage if it travels face-on; only the payload would need protection. Alternatively, the sail can be oriented edge-on to avoid such collisions, giving meters of material protection to the payload, which sits near the center.

Key issues for beam-driven sail systems have been retired by Starshot research, most of them at the conceptual level. Experiments are needed to verify the solutions for the major issues discussed below.

Can phase be maintained across a large aperture composed of many sources? This is well demonstrated historically for microwaves, principally for radar. For lasers, a new concept has been quantified [4, 5]. Building the hundred million laser emitters into a large array is the driving technical challenge of the project. The principle of the design is to interferometrically link multiple arrays which are phase-locked into modular tiers of larger size. That is, multiple areas which are individually phase controlled would be linked together by interferometry. This approach of linking multiple optical phased arrays is called a hierarchical array. The array design that resulted has laser dimensions and total power levels that are about five orders of magnitude beyond present state of the art capabilities. To control the phase over such a large aperture is the most significant technical challenge to Starshot.

Can a sail material be found which can meet the many constraints on sail acceleration? Most materials effort has been for laser propulsion, where the leading candidate for sail material is silicon nitride. There are no fundamental limits preventing optimization of that material for the key parameters of mass, reflectivity, refractive index, and thermal properties. (For microwaves, various types of carbon are preferred, such as microtruss and graphene.)

Can the sail ride on the beam stably? (Feedback is impossible over long ranges.) If not, sails can veer off course on millisecond timescales. The notion of beam-riding, the stable flight of a sail propelled by a beam, places considerable emphasis on the sail shape. Even for a steady beam, the sail can wander off if its shape becomes deformed or if it does not have enough spin to keep its angular momentum aligned with the beam direction in the face of perturbations. Beam pressure will keep a concave sail in tension, and it will resist sidewise motion: if the beam moves off-center, a sidewise restoring force returns the sail to its position. Early stability experiments verified that beam-riding does occur with a conical sail [6].

Image: Experiments and simulations show that conical sails ride a microwave beam stably. The carbon-carbon sail is 5 cm in diameter and 2 cm in height, with a mass of 0.056 g.

Image: Beam-riding and structural stability are difficult. (a) Beam-riding stability, where bold upward arrows depict the accelerating beam, light upward arrows the force of radiation pressure, and downward arrows the direction of reflected light; (b) structural stability methods; (c) mechanical issues [7].

Meter-scale shaped sails of submicron (~100 atomic layer) thickness can ride stably along the axis of the accelerating beam despite the many types of deformations caused by photon pressure and thermal expansion. There is also a requirement for structural stability: the ability to survive acceleration without collapse or crumpling, as depicted in the figure above. Thermal and tensile failure, as well as rupture of sail materials, must also be avoided. Many studies of this issue have found multiple solutions.

Stable designs exist for concave shapes and for flat flexible sails with millimeter-scale photonic structures to control reflections. (Simple flat sails cannot achieve beam-riding stability because specular reflection produces forces only normal, that is, perpendicular, to the surface.) Flat sails have a considerable fabrication advantage, since curved sail shapes are difficult to fabricate at meter scales. Starshot has shown that even flat sails can beam-ride if asymmetric optical properties are tailored to produce transverse restoring forces; in effect, a flat sailcraft can be modified to scatter light as if it were curved. For example, the Swartzlander group, in a series of theoretical, computational, and experimental studies, has shown that a flat sail whose reflecting surface carries diffractive gratings is directionally stable [8,9]. Anisotropic scattering of incident light into the grating diffraction orders produces optical restoring forces transverse to the membrane, redirecting incident photon momentum to yield beam-riding.

Such metagratings or metasurfaces consist of subwavelength scatterers (disks, blocks, spheres, etc.) that shape the scattered wavefronts, redirecting incident photon momentum transversely. This provides stabilizing restoring forces and torques. However, adding metagratings makes the sail heavier than the ~0.1 gram per square meter goal, and the photonic grating patterns would have to be produced over a large area. The flat-sail geometry would significantly streamline and simplify fabrication; the open issue is whether such structures can be manufactured at meter scales with low mass.

Spin-stabilization will likely be needed to prevent the collapse of sails while acceleration is underway. A beam can carry angular momentum and communicate it to a sail to help control it in flight. Spin can be modified remotely by circularly polarized beams from the ground [10], which also allows 'hands-off' unfurling during deployment through control of the sail spin at a distance [10-12]. Spinning sails at ~100 Hz gyroscopically stabilizes them against drift, yaw and tilting, allowing numerous shapes to retain their stability. (Circularly polarized electromagnetic fields carry both linear and angular momentum; the angular momentum acts through an effective moment arm of one wavelength, so longer wavelengths are more efficient at producing spin.)

A final and crucial issue: Can the data be returned from distant space targets at sufficient data rates before the sail moves far beyond the star? For solar-system-scale missions this is possible with existing microwave communication technologies that were realized 50 years ago in the Deep Space Network. For interstellar missions it is possible using laser communications. Today's laser communication systems are far too heavy for Starshot, which instead aims to operate part of its sail as an optical phased array; there are methods of making this likely in future decades [13]. That is because we understand the fundamental limits on communication essentially completely, and our technology today can operate very close to those limits.

The mission objective is to return 100 kB of data. The power requirement on board is driven primarily by the communication needs as well as pointing, tracking and computation. The energy technology is a thin film, radioisotope thermoelectric generator.

Propulsion-oriented scientists usually assume that the mission should be done at maximum speed. But information scientists’ relation to speed is different; they focus on how it affects the data return:

* Slower is better since observations are easier and there is more time in the vicinity of the target star.

* The measure of mission performance is the volume of data returned reliably vs the ‘data latency’ (defined as time from acquisition at Centauri to return to Earth of an entire observational data set).

So from this perspective speed is a secondary parameter except as it influences the data volume and data latency, which will relate to the payload mass, and in particular the communications mass.

Messerschmitt, Lubin and Morrison have studied the minimum data latency that can be achieved for a given data volume, or equivalently the maximum data volume that can be achieved for a given data latency [13, 14]. Generally, they reduce speed and accept higher latency in exchange for larger mass, more instrumentation, and a larger returned data volume.

From this, the key insight governing the difficult problem of returning data over interstellar distances is that a cost-optimized (meaning cost-minimized) system scales as v ~ 1/m^(1/4), relating speed v to mass m. A modest reduction in speed therefore allows a much heavier communication system onboard, making the data return more credible. This leads to an optimum mass that maximizes data volume for a given data latency. Future communications research will deal with several probes downlinking concurrently from the same target star; separating these downlinks ('multiplexing' via different formats, polarization, etc.) is very challenging.
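The quarter-power scaling has striking numerical consequences. A minimal sketch of the relation (the function name is mine, for illustration):

```python
# Cost-optimized scaling from Messerschmitt, Lubin & Morrison: v ~ 1/m^(1/4),
# so payload mass can grow steeply at only modest cost in cruise speed.
def speed_ratio(mass_ratio: float) -> float:
    """Relative cruise speed when probe mass is scaled by mass_ratio."""
    return mass_ratio ** -0.25

for m in (1, 16, 256):
    print(f"mass x{m:>4} -> speed x{speed_ratio(m):.2f}")
```

Sixteen times the mass costs only half the speed, and 256 times the mass only three quarters of it, which is why a heavier communication package is such an attractive trade.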

That leads to a very significant development conclusion: we would of course develop heavier, lower-velocity probes early on as the Beamer is being built out, since the Beamer will be built by adding modules of power and aperture over time. Meanwhile technologies will advance: sail materials will improve and mass will be reduced. As faster deep-space missions within the solar system occur, mass will either drop as system performance improves or increase for faster, better data return. That is the natural development path, leading to faster, better missions.

The on-board pointing system of the sail is also a technical challenge. The downlink must point back toward our Solar System, and the beam at that distance will be wider than Earth's orbital diameter of 2 AU. That corresponds to a pointing accuracy of about 7 microradians, roughly 1.5 arcseconds.
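The pointing requirement follows directly from the geometry. A quick check, using standard values for the astronomical unit, light-year, and the 4.37 light-year distance to Alpha Centauri (constants are mine, not from the article):

```python
import math

AU = 1.495978707e11   # astronomical unit, m
LY = 9.4607e15        # light-year, m

beam_width = 2 * AU   # downlink beam spreads wider than Earth's orbital diameter
distance = 4.37 * LY  # Alpha Centauri

angle_rad = beam_width / distance
angle_urad = angle_rad * 1e6
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"required pointing: {angle_urad:.1f} microradians ({angle_arcsec:.1f} arcsec)")
```

A 2 AU spot at 4.37 light years subtends about 7 microradians, so the sail must hold its transmitter orientation to roughly an arcsecond-and-a-half without any ground feedback.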

Phase I confirmed that short wavelength optical communications can provide the required down-link capability with limited data rate. Low-cost receiver aperture concepts were developed.

System Cost

Before I joined Starshot, I developed an analysis for cost optimization of beam-driven sail systems. In it, the trade-off was between the cost of the sources powering the array versus the cost of the array itself. That was in agreement with the cost of transmitter systems that had been built for interplanetary communications. My conclusion was that the minimum capital cost is achieved when the cost is equally divided between the array antenna and the radiated power [15].

However, Starshot requires more power than can be directly supplied by the normal electrical grid. Therefore, energy storage for the system has to be included, and becomes a substantial cost element [16, 17]. That results in a considerable change in the laser aperture, laser power, and energy storage cost. The result is that the laser cost, which is ~80% of the array cost, becomes the dominant element in the total project cost. The cost trends shown below demonstrate that cost is viable for future fiber amplifiers at ~$0.10/W, and future semiconductor lasers at ~$0.01/W.

The figure below shows that current laser fiber amplifier and semiconductor laser costs are far too high to afford a Starshot system today. The hope is that economies of scale in the application of lasers to aspects of modern life, for example self-driving cars, will drive down their cost. To reach an affordable level for Starshot, prices have to fall to the order of cents per watt, not the many dollars per watt we have today. The points at 2040 and 2050 show what must occur if the cost of Starshot is to be of order 10 billion dollars: a cost reduction of two to three orders of magnitude.
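The scale of the required reduction is easy to make concrete. A sketch assuming the article's ~100 GW of optical power; the "today" price of ~$10/W is my illustrative stand-in for "many dollars per watt":

```python
beam_power_w = 100e9  # ~100 GW of optical power, from the article

# $/W scenarios; "today" is an assumed illustrative figure
prices = {
    "today (assumed ~$10/W)": 10.0,
    "future fiber amplifiers (~$0.10/W)": 0.10,
    "future semiconductor lasers (~$0.01/W)": 0.01,
}

costs = {label: beam_power_w * p for label, p in prices.items()}
for label, cost in costs.items():
    print(f"{label}: ${cost / 1e9:,.1f} billion")
```

At dollars per watt the lasers alone run to a trillion dollars; only at cents per watt does the laser cost drop toward the $1-10 billion range that makes a ~$10B project conceivable.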

Image: Cost trends for fiber amplifiers and semiconductor lasers.

The Future of Beam-driven Sails

Phase II technical demonstrations, such as laboratory beam-riding sail flights, orbital sail deployment, and sail acceleration, would provide a firm experimental basis for pilot production of the key sub-systems, leading to the beginning of array construction and, later, to precursor missions.

While the Beamer is under construction, many missions become possible at speeds lower than interstellar, as well as other applications. The laser driver can beam power to locations in space, such as Earth satellites and space stations. It can deorbit orbital debris. It can drive fast sail missions to the Moon, Mars and the outer planets. At Mars, a second laser array could decelerate the spacecraft, or a retroreflector system, such as proposed by Forward, could reflect a beam from Earth to slow the sailcraft. And it can beam power to high-performance ion engines.

Development of fast sailcraft that can travel beyond our solar system will enable us to understand the interstellar medium and then, in the fast encounter with other star systems, acquire imaging, spectroscopy, and in situ particle and field measurements.

Beam-driven sails are the only way that probes can be sent to the stars in this century. Completion of Phase II would bring much-increased credibility to the concept by demonstrating beam-riding and operation of a Beamer module in the laboratory. Then the dream of beam-driven interstellar travel could be realized.

Kevin Parkin has even envisioned beam-driven human travel to the stars, accelerating at one Earth gravity to relativistic speeds. He points out that human civilization's energy production has doubled every 40 years since 1800, so the energies needed for the simplest such missions will be attainable by the end of the century.
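Parkin's growth argument is a simple compounding calculation. A sketch, assuming the ~40-year doubling continues from 1800 through 2100:

```python
# Energy production doubling every ~40 years since 1800 (Parkin's observation)
doublings = (2100 - 1800) / 40
growth = 2 ** doublings
print(f"{doublings:.1f} doublings by 2100 -> energy production ~{growth:.0f}x the 1800 level")
```

Seven and a half doublings multiply available energy by roughly 180, which is the basis for the claim that century's-end civilization could power the simplest crewed beam-driven missions.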

Acknowledgements: Figures are by permission of Breakthrough Starshot and Michael Kelzenberg. I also want to thank Kevin Parkin, Dave Messerschmitt and Al Jackson for technical discussions about Starshot.

References

1. Atwater, H., "Starshot: from Science to Spacecraft to Missions," Interstellar Research Group, Montreal, 2023. https://www.youtube.com/watch?v=jV2sNOYzaFA

2. Atwater, H., same title, Breakthrough Discuss, 2023. https://www.youtube.com/watch?v=IrLcllx0LpQ

3. Laser Propulsion in Space: Fundamentals, Technology, and Future Missions, Claude Phipps, ed., Elsevier, Cambridge, MA, 2024.

4. Worden, S., Green, W., Schalkwyk, J., Parkin, K., and Fugate, R., "Progress on the Starshot Laser Propulsion System," Applied Optics, doi:10.1364/AO.435858, 2021.

5. Bandutunga, C., Sibley, P., Ireland, M. J., and Ward, R., "Photonic Solution to Phase Sensing and Control for Light-based Interstellar Propulsion," J. Opt. Soc. Am. B 38, 1477-1486, 2021.

6. Benford, G., Gornostaeva, O., and Benford, J., "Experimental Tests of Beam-Riding Sail Dynamics," in Beamed Energy Propulsion, AIP Conference Proceedings 664, Pakhomov, A., ed., 325, 2003.

7. Gao, R., Kelzenberg, M. D., and Atwater, H. A., "Dynamically Stable Radiation Pressure Propulsion of Flexible Lightsails for Interstellar Exploration," Nature Communications 15, 4203, https://doi.org/10.1038/s41467-024-47476-1, 2024.

8. Srivastava, P., Chu, Y., and Swartzlander, G., "Stable Diffractive Beam Rider," Opt. Lett. 44, 3082-3085, 2019.

9. Chu, Y., Tabiryan, N., and Swartzlander, G., "Experimental Verification of a Bigrating Beam Rider," Phys. Rev. Lett. 123(24), 2019.

10. Benford, G., Gornostaeva, O., and Benford, J., "Spin of Microwave Propelled Sails," in Beamed Energy Propulsion, AIP Conference Proceedings 664, Pakhomov, A., ed., 313, 2003.

11. Benford, J. and Benford, G., "Elastic, Electrostatic and Spin Deployment of Ultralight Sails," JBIS 59, 76, 2006.

12. Martin, P. et al., "Detection of a Spinning Object Using Light's Orbital Angular Momentum," Science 341, 537, 2013.

13. Messerschmitt, D., Lubin, P., and Morrison, I., "Challenges in Scientific Data Communication from Low-mass Interstellar Probes," ApJS 249, 36, 2020.

14. Messerschmitt, D., Lubin, P., and Morrison, I., "Interstellar Flyby Scientific Data Downlink Design," arXiv preprint arXiv:2306.13550, 2023.

15. Benford, J., "Starship Sails Propelled by Cost-Optimized Directed Energy," JBIS 66, 85, 2013.

16. Parkin, K., "The Breakthrough Starshot Systems Model," Acta Astronautica 152, 370-384, 2018.

17. Parkin, K., "Starshot System Model," in Laser Propulsion in Space: Fundamentals, Technology, and Future Missions, Claude Phipps, ed., Elsevier, Cambridge, MA, 2024.


Starshot Is a Success: Part I 3 Mar

The fortunes of Breakthrough Starshot have been the subject of so much discussion not only in comments in these pages but in backchannel emails that it is with relief that I turn to Jim Benford’s analysis of a project that has done significant work on interstellar travel and is still very much alive. Jim led the sail team for several of his eight years with Breakthrough Starshot and was with the project from the beginning. In this article and a second that will run in a few days, he explains how and why press coverage of the effort has been erroneous, and not always through the fault of writers working the story. Let’s now take a look at what Starshot has accomplished during its intensive Phase I.

by James Benford

“Make no mistake — interstellar travel will always be difficult and expensive, but it can no longer be considered impossible.” – Robert Forward

Breakthrough Starshot has not failed, nor has it been canceled. Phase I of the program achieved its stated objectives: to identify potential show-stoppers in beam-driven interstellar propulsion and determine whether credible solutions exist. That goal was met.

Recent media coverage, including a Scientific American cover article titled “Voyage to Nowhere,” misunderstands both the intent and the outcome of Phase I. The reality is that the project thus far has been successful. It was put “on hold, paused” in 2024 to restructure for the next phase and seek broader support. It has not been canceled, as some in the media are saying.

I contend that Starshot succeeded because the key Phase I objectives were met. Of course, extensive future effort in the later phases is needed to create a fully functional Starshot system, principally the beamer and sailcraft (referred to in the project as the "photon engine" and "lightsail"). The major issues have been found to have credible solutions. A great many Starshot-related papers have been published. Many address the crucial issues of sail materials and sail 'beam-riding', meaning staying on the beam while undergoing inevitable perturbations. There is a final report, but it has not yet been published.

The principal issues for Starshot were 1) Can a phased array of lasers be constructed that is sufficiently coherent and directive as well as being affordable? 2) Can a sail material be made that will have high reflectivity, very low absorption, high emissivity and very low mass so as to be efficiently accelerated and not overheat? 3) Can a sail ride stably on the beam because of inherent restoring forces (without feedback, which is impossible over long ranges)? 4) Can data be sent back to Earth from the probe at sufficient data rates before the sail moves far beyond the target star?

In this first of two reports on the successes of the Starshot project, I discuss the shape of later phases in the effort, and distortions in the reporting on it. In the second report I will describe the major accomplishments of Phase I.

Starshot was not initiated to fully design, build and launch the first interstellar ‘lightsail’ (as they are called, referring to both the low mass and the near-visible frequency of the laser). The program path was divided into phases, as shown below. The first phase was to invest in high-risk, high-reward research that would de-risk the technology. Phase 1 was to find if there were any ‘show-stoppers’ and pave the way forward. It accomplished that.

High levels of research by Starshot retired most of these key issues for beam-driven sail systems, at least at the conceptual level. The results are at the TRL 2 level. Experiments are needed to verify the solutions for these major issues found in Phase 1.

In Phase II, a coalition of Caltech and other institutions would lead experimental technical demonstrations, and the first experiments in orbit. Then, with the technology concepts having been proven, it’s on to near-term missions shaking out various technologies while performing precursor missions, probably to the outer solar system. Much effort would be needed in systems engineering to enable such precursor missions.

The first phases of Starshot, the R&D program, are projected to cost $120M; this includes Phase 1 and concludes with solar system science missions in the medium term. The large effort would then follow: construction of the Starshot System and, finally, operation of the System and the first interstellar probe voyages.

Many requirements of the Starshot mission come together at the sail. Principal technical issues are the design of the beamer, the material to be used, and whether the beam and sail stay together, meaning stable beam-riding by the sail:

• Stability is influenced by sail shape, beam shape and the distribution of mass, such as payload, on the sail.

• Material properties are its reflectivity, absorptivity and transmissivity, its tensile strength and its areal mass density.

• Deployment of the diaphanous sail, correctly oriented and including any initial spin, is of course a key requirement.

• The beamer interacts with the sail through its power distribution on the sail, causing differential stresses. This depends on the duration of the acceleration, the transverse width of the beam, and the pointing error of the beam as well as its pointing jitter.

• Data return to Earth, interstellar communications, is perhaps the greatest challenge of all.

What Scientific American got wrong

Journalism is only the first draft of history, so flaws occur. Assessing a system as complex as Starshot is a challenge to a journalist with limited time. It would take years to read and absorb all the relevant literature and to mentally organize it into a reconciled and coherent understanding of the system as a whole.

The biased title of the Scientific American piece, "Voyage to Nowhere" (chosen by the editors, not the author, Sarah Scoles), may have been meant to evoke the famous Bridge to Nowhere in Alaska and the Train to Nowhere in California. The reporting is already being mistaken for a primary source by others, who state that Starshot has been "canceled". This is an example of how media myths, once manufactured, propagate through journalistic copying.

The article fails to understand the Starshot project for a basic reason: The key people who did extensive work on the program were not available or not even known to the writer.

Because the principal workers from the Breakthrough Foundation and the leaders of Breakthrough Starshot, Pete Worden and Avi Loeb, were not interviewed, it seems the author did not know who the main contributors actually were. She relied instead on people she could easily reach. Few of them are major contributors to the program and most left the project early on or never actually participated in the project. A key participant who is not mentioned is Kevin Parkin [1, 2], who spent 8 years under contract, as did most of us who were in at the beginning or even before that. Others are Mason Peck (who is mentioned in the piece), Paul Mauskopf and Dave Messerschmitt. Unfortunately, the final report, which went through many iterations, has never been published publicly [3].

The recent policy of Breakthrough Starshot has been to have little contact with the media, and declining to engage with Sarah Scoles at all didn't help: it left the door open for detractors to shape the narrative of her piece. During Phase 1, communication was a priority, with public outreach from and within Starshot. In research, communication enables cross-fertilization and prevents duplication of work. The big gap now is a comprehensive publication that ties it all together; it could motivate researchers to continue, or to take up the project later if Phase II occurs.

The article also truncates the long history that led up to Starshot. Beam-driven propulsion concepts didn't start in 2016! This was documented in my Photon Beam Propulsion Timeline, which appeared here at the start of Starshot in 2016. Media are not aware of how much the propulsion community has done over the last decades. Several areas of photon beam-driven sail system development have been researched, including experiments demonstrating beam-driven sail flight [4, 5] and sail stability and dynamics, such as beam-driven spin of sails for stability [6, 7]. The major innovation that sparked Starshot was the realization that going to much smaller sails and much higher accelerations substantially reduces the cost of the overall system.

The budget estimate given in the Scientific American article is clearly wrong. That only 4.5 million dollars could fund 8 years of steady work by many people is absurd. Thirty contracts were executed over 8 years. There were years of invitational meetings, a standing staff of advisors, subcommittees for specific topics; all of them further expenditures. And I count about 50 Starshot-related papers, some of which have been published since it was put on hold. I estimate that Breakthrough Starshot Phase 1 had a cost of 25 million dollars.

The way Forward

Phase II would lead to a firm experimental basis for the later phases in Figure 1. If Breakthrough decides to move on to Phase II, it must deal with the costs of interruption: institutional knowledge about the previous work, which is never fully captured in documentation, will need to be relearned, as the people who worked on Phase 1 have dispersed to other programs.

My second piece on Breakthrough Starshot, scheduled to run here next week, will describe the present state of the concept and the many advances achieved by Starshot in Phase I.

Breakthrough Starshot was the most significant event in the history of beam propulsion, which clearly is the only way that probes can be sent to the stars in this century. And now the work goes on, the hope still lives, and the dream of beam-driven interstellar travel could be realized.

References

[1] “The Breakthrough Starshot Systems Model”, Kevin Parkin, Acta Astronautica 152, pp 370–384 (2018).

[2] “Starshot System Model” Kevin Parkin, Ch 3, in Claude Phipps, Editor, Laser Propulsion in Space: Fundamentals, Technology, and Future Missions, Elsevier (2024).

[3] Breakthrough Starshot Summary Report, September 2023, not published.

[4] "Microwave Beam-Driven Sail Flight Experiments", James Benford, Gregory Benford, Keith Goodfellow, Raul Perez, Henry Harris, and Timothy Knowles, Proc. Space Technology and Applications International Forum (STAIF), Space Exploration Technology Conf., AIP Conf. Proceedings 552, ISBN 1-56396-980-7, pg. 540 (2001).

[5] "Laser-Boosted Light Sail Experiments with the 150 kW LHMEL II CO2 Laser," Leik Myrabo, Timothy Knowles, John Bagford and H. Harris, in High-Power Laser Ablation IV, Claude Phipps, ed., Proc. 4760, pp. 774-798 (2002).

[6] "Spin of Microwave Propelled Sails", Gregory Benford, Olga Gornostaeva and James Benford, Beamed Energy Propulsion, AIP Conf. Proc. 664, pg. 313, A. Pakhomov, ed., (2003).

[7] “Experimental Tests of Beam-Riding Sail Dynamics”, James Benford, Gregory Benford, Olga Gornostaeva, Eusebio Garate, Michael Anderson, Alan Prichard, and Henry Harris, Proc. Space Technology and Applications International Forum (STAIF-2002), Space Exploration Technology Conf, AIP Conf. Proc. 608, ISBN 0-7354-0052-0, pg. 457, (2002).


The Language of Contact 25 Feb 8:58 AM (13 days ago)

How we think intersects with the language we think in. Consider the verb in classical Greek, a linguistic tool so complex that it surely allows shadings of thought that are the stuff of finely tuned philosophy. But are the thoughts in our texts genuinely capable of translation? Every now and then I get a glimpse of something integral that just can’t come across in another tongue.

Back in college (and this was a long time ago), I struggled with Greek from the age of Herodotus and then, in the following semester, moved into Homer, whose language was from maybe 300 years earlier. The Odyssey, our text for that semester, is loaded with repetitive phrases – called Homeric epithets – that are memory anchors for the performance of these epics, which were delivered before large crowds by rhapsōdoi (“song-stitchers”). I was never all that great in Homeric Greek, but I do remember getting so familiar with these ‘anchors’ that I was able now and then to read a sequence of five or six lines without a dictionary. But that was a rare event and I never got much better.

The experience convinced me that translation must always be no more than an approximation. A good translation conveys the thought, but the ineffable qualities of individual languages impose their own patina on the words. ‘Wine-dark sea’ is a lovely phrase in English, but when Homer spins it out in Greek, the phrase conjures different feelings within me, and I realize that the more we learn a language, the more we begin to think like its speakers.

My question then as now is how far can we take this? And moving into SETI realms, how much could we learn if we were actually to encounter alien speakers? Is there a possibility of so capturing their language that we could actually begin to think like them?

Let’s talk about Ted Chiang’s wonderful “Story of Your Life,” which was made (and somewhat changed) into the movie Arrival. Here linguist Louise Banks describes to her daughter her work on aliens called heptapods, seven-limbed creatures who are newly arrived on Earth, motives unknown, although they are communicating. Louise goes to work on Heptapod A and Heptapod B, the spoken and written language of the aliens respectively.

Image: A still from Denis Villeneuve’s 2016 film Arrival captures the mystery of deciphering an alien language.

Heptapod B is graphical, and it begins to become apparent that its symbols (semagrams), are put together into montages that represent complete thoughts or events. The aliens appear to experience time in a non-linear way. How can humans relate to that? Strikingly, immersion in this language has powerful effects on those learning it, as Louise explains in the story:

Before I learned how to think in Heptapod B, my memories grew like a column of cigarette ash, laid down by the infinitesimal sliver of combustion that was my consciousness, marking the sequential present. After I learned Heptapod B, new memories fell into place like gigantic blocks, each one measuring years in duration, and though they didn’t arrive in order or land continuously, they soon composed a period of five decades. It is the period during which I knew Heptapod B well enough to think in it, starting during my interviews with Flapper and Raspberry and ending with my death.

Flapper and Raspberry are the human team’s names for the two heptapods they’re dealing with, and we learn that Louise now has ‘memories’ that extend forward as well as back. Or as she goes on to explain:

Usually, Heptapod B affects just my memory; my consciousness crawls along as it did before, a glowing sliver crawling forward in time, the difference being that the ash of memory lies ahead as well as behind: there is no real combustion. But occasionally I have glimpses when Heptapod B truly reigns, and I experience past and future all at once; my consciousness becomes a half-century long ember burning outside time. I perceive – during those glimpses – that entire epoch as a simultaneity. It’s a period encompassing the rest of my life, and the entirety of yours.

The ‘yours’ refers to Louise’s daughter, and the heartbreak of the story is the vision forward. What would you do if you could indeed glimpse the future and see everything that awaited you, even the death of your only child? How would you behave where your consciousness is now, with that child merely a hoped for future being? How would such knowledge, soaked in the surety of the very language you thought in, affect the things you are going to do tomorrow?

A new paper in Proceedings of the National Academy of Sciences has been the trigger for these reflections on Chiang’s tale, which I consider among the finest short stories in science fiction history. The paper, with Christian Bentz (Saarland University) as lead author, looks at 40,000-year-old artifacts, all of them bearing sequences of geometric signs engraved by early hunter-gatherers of the Aurignacian culture, the first Homo sapiens in central Europe. It was a time of migrations and shifting populations that would have included encounters with the existing Neanderthals.

These hunter-gatherers have left many traces, among which are these fragments that include several thousand geometric signs. What struck me was that these ancient artifacts demonstrate the same complexity as proto-cuneiform script from roughly 3000 BC. Working with Ewa Dutkiewicz (Museum of Prehistory and Early History of the National Museums, Berlin), Bentz notes objects like the ‘Adorant,’ an ivory plaque showing a creature that is half man, half lion. Found in the “Geißenklösterle,” a cave in the Ach Valley (Achtal) in southern Germany, it’s marked by notches and rows of dots, in much the same way as a carved mammoth tusk from a cave in the Swabian Alb. The researchers see these markings as an early alternative to writing. Says Bentz:

“Our analyses allow us to demonstrate that the sequences of symbols have nothing in common with our modern writing system, which represents spoken languages and has a high information density. On the archaeological finds, however, we have symbols that repeat very frequently – cross, cross, cross, line, line, line – spoken languages do not exhibit these repetitive structures. But our results also show that the hunter-gatherers of the Paleolithic era developed a symbol system with a statistically comparable information density to the earliest proto-cuneiform tablets from ancient Mesopotamia – a full 40,000 years later. The sequences of symbols in proto-cuneiform are equally repetitive; the individual symbols are repeated with comparable frequency. The sequences are comparable in their complexity.”

Image: The so-called “Adorant” from the Geißenklösterle Cave is approximately 40,000 years old. It is a small ivory plaque with an anthropomorphic figure and several rows of notches and dots. The arrangement of these markings suggests a notational system, particularly the rows of dots on the back of the plaque. Credit: © Landesmuseum Württemberg / Hendrik Zwietasch, CC BY 4.0.

As the researchers comment, the result is surprising because you would think early cuneiform would be much closer in structure to modern systems of notation, but here we have evidence that such symbolic notation changed little over a span of almost 40,000 years, from the Paleolithic to proto-cuneiform. Says Bentz: “After that, around 5,000 years ago, a new system emerged relatively suddenly, representing spoken language—and there, of course, we find completely different statistical properties.”

The paper digs into the team’s computer analysis of the Paleolithic symbols, weighing the expression of information there against cuneiform and modern writing as well. It’s clear from the results that humans have been able to encode information into signs and symbols for many millennia, with writing as we know it being one growth from many earlier forms of encoding and sign systems.
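
As a toy illustration of the kind of statistical comparison involved (this is not the team’s actual pipeline, and the sequences below are invented), Shannon entropy captures how repetitive a symbol stream is: runs of identical marks carry few bits per symbol, while speech-encoding writing carries many more:

```python
from collections import Counter
from math import log2

def unigram_entropy(seq):
    """Shannon entropy (bits per symbol) of a symbol sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Highly repetitive notation: runs of identical marks (cross, cross, line, line...)
notation = list("xxx|||xxx|||xxx|||")

# An English sentence treated as a character stream, for contrast
text = list("the quick brown fox jumps over the lazy dog")

print(round(unigram_entropy(notation), 2))   # low: only two symbols, evenly split
print(round(unigram_entropy(text), 2))       # much higher: many distinct symbols
```

Real analyses work with richer statistics than unigram entropy, but the intuition is the same: repetitive mark sequences and speech-encoding scripts occupy very different regions of this statistical space.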

We have no extraterrestrials to interrogate, but even with our own species, we have to ask what the experience of people who lived in the Stone Age was like. What were they trying to convey with their complex sequences of symbols? The authors assume they were as cognitively capable as modern humans and I see no reason to doubt that, but how we extract their thought from such symbols remains a mystery to be resolved by future work in archaeology and linguistics.

And I wonder whether Ted Chiang’s story doesn’t tell us something about the experience of going beyond translation into total immersion in an unknown text. How does that change us? Acquiring a new language, even a modern one, subtly changes thought, and I’m also reminded of my mother’s Alzheimer’s, which somehow left her able to acquire Spanish phrases even as she lost the ability to speak in English. I always read to her, and when I tried to teach her some basic Spanish, the experiment was startlingly successful. That attempt left me wondering what parts of the human brain may be affected by full immersion in the language of any future extraterrestrial who may become known to us.

Ludwig Wittgenstein argued that words only map a deeper reality, saying “The limits of my language mean the limits of my world.” Beyond this map, how do we proceed? Perhaps one day SETI will succeed and we will explore that terrain.

The paper is Bentz et al., “Humans 40,000 y ago developed a system of conventional signs,” Proceedings of the National Academy of Sciences 123 (9) e2520385123 (23 February 2026). Full text. For more on this work, see Bentz and Dutkiewicz’s YouTube video: https://www.youtube.com/@StoneAgeSigns. Thanks to my ever-reliable friend Antonio Tavani for sending me information about this paper.


Propulsion Options for the Solar Gravitational Lens Mission 17 Feb 9:51 AM (21 days ago)

A mission to the Sun’s gravity focus – or more precisely, the focal ‘line’ we might begin to use at around 650 AU – is never far from my mind. Any interstellar mission we might launch within the next thirty years or so (think Breakthrough Starshot, about which more next week) will essentially be shooting blind. We have little idea what to expect at Proxima Centauri b, if that is our (logical) target. But a mission to the solar gravity focus (SGL) would give us a chance to examine any prospective target at close hand.

Indeed, so powerful are the effects if we can exploit this opportunity that we should be able to see continents, weather patterns, oceans and more if we can disentangle the Einstein Ring that the planet’s image forms as shaped by general relativity. We’ve discussed the phenomenon many a time: The Sun’s gravitational well bends light from whatever lies directly behind it as seen from the SGL, producing stupendous magnification, the image served up as a ‘ring’ around the Sun in the same way that astronomers now see some distant galaxies as rings around closer galaxies.

Image: The Einstein Ring and how we could sample it. By looking at different slices of the Einstein ring, enough information could be acquired for a computer deconvolution to reconstruct the planet. Credit: Geoffrey Landis (NASA GRC).

Within that ring there is bountiful information. Not only would we have an image we could reconstruct, but we would also have multipixel spectroscopy, allowing us to identify elements through the signature of light from the planet and to map these properties in more than one dimension. So fecund is the information in the Einstein ring that we could detect all this with a spacecraft telescope no more than a meter or so in diameter. And because the SGL focal line extends to infinity, we can keep taking observations as we move outward from 650 AU to perhaps 900 AU.

Now comes JPL scientist Slava Turyshev with a trade study – an analysis made to evaluate and select the best propulsion technique to make a flight to the SGL possible within a rational timeframe, here seen as roughly thirty years. That seems like a lot, but bear in mind that even our far-flung Voyagers have yet to reach a distance that’s even halfway to the SGL region. Remember, too, that once we find a way to propel a craft to the SGL, we have to choose a trajectory so precise that our target will be exactly opposite the Sun from the spacecraft. In this business, alignment is everything.

Each new Turyshev paper on the SGL reminds us that this work has reached Phase III status at the Jet Propulsion Laboratory, funded by NASA’s Innovative Advanced Concepts (NIAC) program. The potential showstoppers of an SGL mission are daunting, and have been examined in papers covering everything from sail design and ‘sundiver’ trajectories to deconvolution of an SGL image. Perhaps most futuristic has been the Turyshev team’s discussion of a payload launched as small packages that self-assemble en route into the completed observational equipment. Previous Centauri Dreams articles such as Solar Gravitational Lens: Sailcraft and Inflight Assembly or Good News for a Gravitational Focus Mission may be helpful, though the pace of stories on the SGL has been accelerating, and for the complete sequence I suggest a search in the archives.

All this is bringing me around to the scope of the propulsion problem. In addition to the need for precise positioning within the SGL focal line, the spacecraft must be able to move laterally within the image, which is of considerable size. One recent calculation found that an Earth-sized planet orbiting Epsilon Eridani (10 light years away) would project an image 12.5 kilometers in diameter at 630 AU from the Sun. One envisions multiple spacecraft taking pixel samples at various locations within the image plane. The image must then be produced by integrating these samples. This is ‘deconvolution,’ turning the Einstein ring into a coherent image free of ‘noise.’
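
The 12.5-kilometer figure is easy to sanity-check with simple geometry: the projected image of the planet at the focal line scales by the ratio of the image plane’s distance from the Sun to the target’s distance. A quick sketch (constants are standard values, not taken from the paper):

```python
# Geometry check on the quoted image size: the image of a planet at the
# SGL scales as (image-plane distance) / (target distance).
AU = 1.495978707e11       # meters per astronomical unit
LY = 9.4607304726e15      # meters per light year
D_EARTH = 1.2742e7        # Earth's diameter, meters

z = 630 * AU              # image-plane distance from the Sun
d = 10 * LY               # distance to Epsilon Eridani

image_diameter_km = D_EARTH * (z / d) / 1e3
print(f"{image_diameter_km:.1f} km")   # close to the ~12.5 km figure quoted above
```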

As Geoffrey Landis, who made this calculation, points out: The image is far larger than the spacecraft we send. Landis (NASA GRC) also notes that a one-meter telescope at the SGL collects the same amount of light as a telescope of 80 meters without the gravitational lens. So we definitely want to do this, but to make it happen, the spacecraft will need propulsion and power. All this has a bearing on payload, for in an environment where solar panels are not an option, we need a radioisotope or fission power source.

Back to the Turyshev paper. Propulsion emerges as perhaps the mission’s most significant challenge, although one that the author thinks can be met. Here we run into what I call the ‘generation clock,’ which is the desire to keep mission outcomes within the lifetime of researchers who launched the project. Twenty to thirty years in cruise is often mentioned in connection with the SGL mission, meaning we need the ability to reach 650 AU with our spacecraft within that timeframe. A daunting task, for it involves reaching 154 kilometers per second. On outbound trajectories we’ve yet to exceed Voyager’s 17.1 km/sec, highlighting the magnitude of the problem.

Image: JPL’s Slava Turyshev.

We can’t solve it with chemical rockets, not even with gravity assist strategies, but solar sails coupled with an Oberth maneuver loom large as a potential solution. Advances in materials science and the success of missions like the Parker Solar Probe remind us of the potential here, offering the option of deploying a sail in a tight perihelion pass to achieve a massive boost. To manage 650 AU in 20 years means we will need 32.5 AU per year. A perihelion pass at 0.05 AU (7,500,000 km) could deliver that speed, and the Parker probe has already shown that a spacecraft can survive at such distances. Finding the metamaterials to make a sail survive such a passage is an ongoing task.
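
The arithmetic behind these numbers is worth a quick check (a back-of-envelope sketch using standard unit conversions):

```python
# Back-of-envelope check on the cruise speed needed for 650 AU in 20 years.
AU_KM = 1.495978707e8     # kilometers per astronomical unit
YEAR_S = 365.25 * 86400   # seconds per year

au_per_year = 650 / 20    # 32.5 AU/yr, as in the text
v_kms = au_per_year * AU_KM / YEAR_S
print(f"{v_kms:.0f} km/s")   # ~154 km/s, roughly nine times Voyager 1's 17.1 km/s
```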

The paper sums the issue up:

Recent “extreme solar sailing” studies emphasize that very fast transits are achievable in principle only by combining ultra-low total areal density with very deep perihelia (a few solar radii), which moves the feasibility question from trajectory mechanics to coupled materials, thermal, and large-area deployment qualification. For example, [Davoyan et al., 2021] analyzed extreme-proximity solar sailing (≲ 5 R⊙) and discussed candidate metamaterial sail approaches together with the associated environmental and system challenges at these perihelia. These results reinforce the conclusion here: sub-20 yr sail-only access is not ruled out by physics, but it lives in a tightly coupled materials+structures+thermal qualification regime at mission scale.

So we have a lot to learn to make this happen. The paper notes that as we move from current sail readiness to what we will need for the SGL mission, we go from sails in the 10-meter class up to sails as much as 300 meters in diameter, while still needing to keep our sail material astonishingly thin and capable of surviving the perihelion temperatures. Operating at deep perihelia with metamaterials is a subject still very low on the TRL scale, meaning technical readiness to produce and fly such a sail is nowhere near where it needs to be if we are to launch in the 2035-2040 window hoped for by mission planners. If we can launch multiple sails, we can consider self-assembly of the larger payload in transit, also at a very low TRL.

Importantly, this maturity gap is not a physics limit: it is a program-and-demonstration limit. A focused late-2020s/early-2030s development that couples (i) large-area deployment validation, (ii) deep-perihelion optical-property stability tests, and (iii) integrated areal-density demonstrations at the 10⁴–10⁵ m² scale could credibly raise the SGL-class sail system TRL into the mission-start window, particularly for the 25–40 yr-class access regime.

Image: Sailcraft example trajectory toward the Solar Gravity Lens. Taken from an earlier report by Turyshev et al.

Nuclear electric propulsion (NEP) offers certain advantages over solar sails, starting with the fission reactor that powers its thrusters; as mentioned, solar power at these distances is not practical. Turyshev’s calculations make the needed comparison, yielding a mission that can reach 650 AU in 27 years, putting it in range of what the sail strategy can deliver. Using propellant remaining in the craft upon arrival at the SGL, our spacecraft can now manage the station-keeping and trajectory changes necessary to collect the needed pixels of our exoplanet image. In terms of operations, then, as well as payload capability, NEP stands out. Note that here again we have thermal issues, for an NEP-powered craft will need its own close perihelion pass to boost velocity. Turyshev points out that NEP will also demand large, deployable radiators to allow the escape of waste heat.

Nuclear thermal propulsion (NTP) now comes into the discussion, as the author considers potential hybrid missions. In NTP, liquid hydrogen is heated by the reactor core to produce thrust through the exhaust nozzle. Capable of high specific impulse, this method is treated here as “a high-thrust injection stage,” one that could be used during an Oberth maneuver to increase the velocity of an NEP-equipped spacecraft. The nuclear issues persist: We need safety analyses and ground testing facilities for the reactor, radiological handling protocols, and additional flight approval processes.

The three propulsion options play against each other in interesting ways. Sails avoid the problem of flight approval for nuclear materials as well as necessary infrastructure for ground testing. But materials and deployment issues still exist for these ultra-thin sails. An NEP engine that offers wider use beyond the SGL mission could lower incremental costs. And what if we tinker with mission duration? The fact remains that regardless of the choice of propulsion, we still have to operate in an environment that requires radioisotope or fission power, with all the implications for payload overhead that entails.

Programmatically, a credible 2035–2040 start requires aligning architecture choice with what can be demonstrated by the early 2030s. If minimum TOF [time of flight] is the primary requirement, solar sailing (with an explicit deep-perihelion materials and deployment qualification program) remains the most schedule-aligned approach. If delivered capability and operational robustness at the SGL dominate, NEP is uniquely attractive, but a 2035–2040 launch that depends on NEP for transportation must be preceded by an integrated stage demonstration that retires system-level coupling risks (thermal, EMI/EMC [Electromagnetic Interference / Electromagnetic Compatibility], plume, autonomy, and nuclear approval). In either case, SGL transportation should be treated as flagship-class in development complexity because the critical path runs through integrated demonstrations rather than through single-component maturity.

This is how missions get designed, and you can see how involved the process becomes long before actual hardware is even built. My belief is that the question of the generation clock is fading, for in dealing with issues like the SGL, we’re forced to contemplate scenarios in which those who plan the mission may not see its completion (although I hope Slava Turyshev is very much an exception!). In sending missions beyond the Solar System, we create gifts of data to future generations, who may well use what the SGL finds to plan missions much further afield, perhaps all the way to Proxima Centauri b.

The paper is Turyshev, “Propulsion Trades for a 2035-2040 Solar Gravitational Lens Mission,” currently available as a preprint. For more on acquisition of the lensed image, see Geoffrey Landis’ extremely useful slide presentation.


A Relativistic Explanation for the Dearth of Circumbinary Planets 11 Feb 9:33 AM (27 days ago)

Planets orbiting two stars have been found, but not all that many of them. We’re talking here about a planet that orbits both stars of a close binary system, and thus far, although we’ve confirmed over 6,000 exoplanets, we’ve only found 14 of them in this configuration. Circumbinary planets are odd enough to make us question what it is we don’t know about their formation and evolution that accounts for this. Now a paper from researchers at UC-Berkeley and the American University of Beirut probes a mechanism Einstein would love.

At play here are relativistic effects, having to do with the fact that, as Einstein explained, intense gravitational fields have detectable effects upon the stars’ orbits. This is hardly news: the anomalous precession of Mercury’s orbit was the first classic confirmation of General Relativity. The planet’s orbit precesses (shifts) by 43 arcseconds per century more than Newtonian mechanics predicts. Einstein showed in 1915 that spacetime curvature could account for this, and calculated the exact 43 arcsecond shift astronomers observed.
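
That 43 arcseconds falls straight out of the standard General Relativity formula for periastron advance, Δφ = 6πGM/[c²a(1−e²)] per orbit. A quick check with Mercury’s orbital elements (standard values, not from the paper under discussion):

```python
from math import pi

# GR perihelion advance per orbit: dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_SUN = 1.32712440018e20     # G * M_sun, m^3/s^2
C = 2.99792458e8              # speed of light, m/s
RAD_TO_ARCSEC = 180 / pi * 3600

a = 5.7909e10                 # Mercury's semi-major axis, m
e = 0.2056                    # Mercury's orbital eccentricity
period_days = 87.969          # Mercury's orbital period

dphi = 6 * pi * GM_SUN / (C**2 * a * (1 - e**2))    # radians per orbit
orbits_per_century = 36525 / period_days
precession = dphi * orbits_per_century * RAD_TO_ARCSEC
print(f"{precession:.1f} arcsec/century")           # ~43, matching observation
```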

What we see in close binary systems is that if we diagram the elliptical orbit usually found in such systems, the line connecting the closest approach (periastron) and farthest point in the orbit (apastron) gradually rotates. The term for this is apsidal precession. This precession – rotation of the orbital axis – is coupled with tidal interactions between the two stars, which make their own contribution to the effect. Close binary orbits, then, should be seen as shifting over time, partly as a consequence of General Relativity.
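
The same periastron-advance formula shows why General Relativity matters so much more for a tight binary than it does for Mercury. For an illustrative, hypothetical pair of Sun-like stars in a circular 7.45-day orbit (numbers chosen for scale, not taken from the paper), the apsidal advance comes out in tens of arcseconds per year rather than per century:

```python
from math import pi

# GR periastron-advance formula applied to a hypothetical tight binary:
# two 1-solar-mass stars in a circular 7.45-day orbit (illustrative only).
GM_SUN = 1.32712440018e20     # G * M_sun, m^3/s^2
C = 2.99792458e8              # speed of light, m/s
RAD_TO_ARCSEC = 180 / pi * 3600

period_s = 7.45 * 86400
gm_total = 2 * GM_SUN
a = (gm_total * (period_s / (2 * pi))**2) ** (1 / 3)   # Kepler's third law

dphi = 6 * pi * gm_total / (C**2 * a)      # radians per orbit (e = 0)
orbits_per_year = 365.25 * 86400 / period_s
rate = dphi * orbits_per_year * RAD_TO_ARCSEC
print(f"{rate:.0f} arcsec/yr")             # tens of arcsec per YEAR for a tight binary
```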

The researchers calculate that as the precession rate of the stars increases, that of a planet orbiting both stars slows. The planet’s perturbation can be accounted for by Newtonian mechanics, and its lessening precession is the result of tidal effects gradually shrinking the orbit of the two binary stars. But note this: When the two precession rates match, or come into resonance, the planet experiences serious consequences. Mohammad Farhat (UC Berkeley), first author of the paper, phrases the matter this way:

“Two things can happen: Either the planet gets very, very close to the binary, suffering tidal disruption or being engulfed by one of the stars, or its orbit gets significantly perturbed by the binary to be eventually ejected from the system. In both cases, you get rid of the planet.”

Image: An artist’s depiction of a planet orbiting a binary star. Here, the stars have radically different masses and as they orbit one another, they tug the planet in a way that makes the planet’s orbit slowly rotate or precess. Based on dynamic modeling, general relativistic effects make the orbit of the binary also precess. Over time, the precession rates change and, if they sync, the planet’s orbit becomes wildly eccentric. This causes the planet to either get expelled from the system or engulfed by one of the stars. Credit: NASA GSFC.

Does this mean that circumbinary planets are rare, or does it imply that most of them are probably in outer orbits and hard to find by our current methods? Ejection from the system seems the most likely outcome, but who knows? The researchers make three points about this. Quoting the paper:

(i) Systems that result in tight binaries (period ≤ 7.45 days, that of Kepler-47) via orbital decay are more likely than not deprived of a companion planet: the resonance-driven growth of the planet’s eccentricity typically drives it into the throes of its host’s driven instabilities, leading to ejection or engulfment by that host.

(ii) Planetary survivors of the sweeping resonance mostly reside far from their host and are therefore less likely to have their transits detected. Should eccentric survivors nevertheless be detected, they are expected to bear the signature of resonant capture into apse alignment with the binary.

(iii) The process appears robust to the modeling of the initial binary separation, with three out of four planets around tight binaries experiencing disruption…

What we wind up with here is that circumbinary planets are hard to find, but the greatest scarcity is going to be circumbinaries around binary systems whose orbital period is seven days or less. The researchers note that 12 of the 14 known circumbinary planets are close to but not within what they describe as the ‘instability zone,’ where these effects would be the strongest. Indeed, the combination of general relativistic effects and tidal interactions is calculated here to disrupt planets around tight binaries about 80 percent of the time. Most of the planets thus disrupted are likely destroyed in the process.

The paper is Farhat & Touma, “Capture into Apsidal Resonance and the Decimation of Planets around Inspiraling Binaries,” Astrophysical Journal Letters Vol. 995, No. 1 (8 December 2025), L23. Full text.


A New Tool for Exoplanet Detection and Characterization 6 Feb 5:19 AM (last month)

It’s been apparent for a long time that far more astronomical data exist than anyone has had time to examine thoroughly. That’s a reassuring thought, given the uses to which we can put these resources. Ponder such programs as Digital Access to a Sky Century at Harvard (DASCH), which draws on a trove of over half a million glass photographic plates dating back to 1885. The First and Second Palomar Sky Surveys (POSS-1 and POSS-2) go back to 1949 and are now part of the Digitized Sky Survey, which has digitized the original photographic plates. The Zwicky Transient Facility, incidentally, uses the same 48-inch Samuel Oschin Schmidt Telescope at Palomar that produced the original DSS data.

There is, in short, plenty of archival material to work with for whatever purposes astronomers want to pursue. You may remember our lengthy discussion of the unusual star KIC 8462852 (Boyajian’s Star), in which data from DASCH were used to explore the dimming of the star over time, the source of considerable controversy (see, for example, Bradley Schaefer: Further Thoughts on the Dimming of KIC 8462852 and the numerous posts surrounding the KIC 8462852 phenomenon in these pages). Archival data give us a window by which we can explore a celestial observation through time, or even look for evidence of technosignatures close to home (see ‘Lurker’ Probes & Disappearing Stars).

But now we have an entirely new class of archival data to mine and apply to the study of exoplanets. A just-published paper discusses how previously undetectable signals from stars and exoplanets can be found within the archives of radio astronomy surveys. The analysis method has the name Multiplexed Interferometric Radio Spectroscopy (RIMS), and it’s intriguing to learn that it may be able to detect an exoplanet’s interactions with its star, and even to run its analyses on large numbers of stars within the radio telescope’s field of view.

We are in the early stages of this work, with the first detections now needing to be further analyzed and subsequent observations made to confirm the method, so I don’t want to minimize the need for continuing study. But if things pan out, we may have added a new method to our toolkit for exoplanet detection.

The signature finding here is that the huge volumes of data accumulated by radio telescopes worldwide, so vital in the study of cosmology through the analysis of galaxies and black holes, can also track the variable activity of numerous stars within the field of view of each of these observations. What the authors are unveiling here is the ability to perform a simultaneous survey across hundreds or potentially thousands of stars. Cyril Tasse, lead author of the paper in Nature Astronomy, is an astronomer at the Paris Observatory. Tasse explains the method’s reach:

“RIMS exploits every second of observation, in hundreds of directions across the sky. What we used to do source by source, we can now do simultaneously. Without this method, it would have taken nearly 180 years of targeted observations to reach the same detection level.”

The researchers have examined 1.4 years of data collected at the European LOFAR (Low Frequency Array) radio telescope at 150 MHz. Here low frequencies from 10 to 240 MHz are probed by a huge array of small, fixed antennas spread across Europe, their data digitized and combined using a supercomputer at the University of Groningen in the Netherlands. Out of this data windfall the RIMS team has been able to generate some 200,000 spectra from stars, some of them hosting exoplanets. While a purely stellar explanation for the signals remains possible, this form of analysis, say the authors, “demonstrate[s] the potential of the method for studying stellar and star–planet interactions with the Square Kilometre Array.” LOFAR can be considered a precursor to the low-frequency component of the SKA.

Here we drill down to the planetary system level, for among the violent stellar events that RIMS can track (think coronal mass ejections, for example), the researchers have traced signals that produce what we would expect to find with magnetic interactions between planet and star. Closer to home, we’ve investigated the auroral activity on Jupiter, but now we may be tracing similar phenomena on planets we have yet to detect through any other means.

Image: Artistic illustration of the magnetic interaction between a red dwarf star such as GJ 687, and its exoplanet. Credit: Danielle Futselaar/Artsource.nl.

Let’s focus for a moment on the importance of magnetic fields when it comes to making sense of stellar systems other than our own. The interior composition of planets – their internal dynamo – can be explored with a proper understanding of their magnetosphere, which also unlocks information about the parent star. That sounds highly theoretical, but on the practical plane it points toward a signal we want to acquire from an exoplanetary system in order to understand the environments present on orbiting worlds. And don’t forget how critical a magnetic field is to habitability, for fragile atmospheres must be shielded from stellar winds if they are to survive.

At the core of the new detection method is the cyclotron maser instability (CMI), the basic process that produces the intense radio emissions we see from planets like Jupiter. CMI is a plasma instability in which electrons moving in a magnetic field produce coherent electromagnetic radiation. Juno has observed these emissions directly around Jupiter.

Detecting such emissions, RIMS can point to the presence of a planet in a stellar system. Working with radio observations, we can move beyond modeling to sample actual field strengths, which is why radio emissions (not SETI!) from exoplanets have been sought for decades now. Finding a way to produce interferometric data sufficient to paint a star-planet signature is thus a priority.
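
Since CMI radiates near the local electron cyclotron frequency, an observed burst frequency translates directly into a magnetic field strength for the emitting region, which is the sense in which radio detections sample actual field strengths. A minimal sketch of the conversion (standard physical constants; the example inputs are illustrative):

```python
# Electron cyclotron frequency: nu_c = e * B / (2 * pi * m_e).
# CMI emission emerges near nu_c, so an observed radio frequency
# maps directly to the field strength in the emitting region.
import math

E_CHARGE = 1.602176634e-19     # electron charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def cyclotron_freq_hz(b_tesla):
    """Electron cyclotron frequency (Hz) for field strength B (tesla)."""
    return E_CHARGE * b_tesla / (2 * math.pi * M_ELECTRON)

def field_from_freq_gauss(nu_hz):
    """Invert: field strength (gauss) implied by an observed frequency."""
    b_tesla = 2 * math.pi * M_ELECTRON * nu_hz / E_CHARGE
    return b_tesla * 1e4  # 1 tesla = 10^4 gauss

# A ~10 G field (roughly Jupiter's polar strength) emits in the decametric band:
print(cyclotron_freq_hz(10 / 1e4) / 1e6)   # ~28 MHz
# A burst detected at LOFAR's 150 MHz implies an emitting region of ~54 G:
print(field_from_freq_gauss(150e6))
```

The rule of thumb is roughly 2.8 MHz per gauss, so LOFAR's low-frequency band corresponds to emitting regions of tens of gauss.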

Exoplanetary aurorae would indicate the existence of magnetospheres, and that’s no small result. And we may be making such a detection around a star some 14.8 light years away, says co-author Jake Turner (Cornell University):

“Our results indicate that some of the radio bursts, most notably from the exoplanetary system GJ 687, are consistent with a close-in planet disturbing the stellar magnetic field and driving intense radio emission. Specifically, our modeling shows that these radio bursts allow us to place limits on the magnetic field of the Neptune-sized planet GJ 687 b, offering a rare indirect way to study magnetic fields on worlds beyond our Solar System.”

There are also implications for the search for life elsewhere in the cosmos. Turner adds:

“Exoplanets with and without a magnetic field form, behave and evolve very differently. Therefore, there is great need to understand whether planets possess such fields. Most importantly, magnetic fields may also be important for sustaining the habitability of exoplanets, such as is the case for Earth,”

Using low-frequency radio astronomy, then, we turn a telescope array into a magnetosphere detector. Researchers have also applied the RIMS technique to NenuFAR, the French low-frequency array at the Nançay Radio Observatory south of Paris, detecting a burst from the exoplanetary system HD 189733 that was described recently in Astronomy & Astrophysics. As with another possible burst from Tau Boötis, the team is making follow-up observations to confirm that both signals came from star-planet interactions. If the method holds up, such interactions give us a new astronomical tool.

The paper is Tasse et al., “The detection of circularly polarized radio bursts from stellar and exoplanetary systems,” Nature Astronomy 27 January 2026 (abstract). The earlier paper is Zhang et al., “A circularly polarized low-frequency radio burst from the exoplanetary system HD 189733,” Astronomy & Astrophysics Vol. 700, A140 (August 2025). Full text.


Holography: Shaping a Diffractive Sail 28 Jan 4:36 AM (last month)

One result of the Breakthrough Starshot effort has been an intense examination of sail stability under a laser beam. The issue is critical, for a small sail under a powerful beam for only a few minutes must not only survive the acceleration but follow a precise trajectory. As Greg Matloff explains in the essay below, holography used in conjunction with a diffractive sail (one that diffracts light waves through optical structures like microscopic gratings or metamaterials) can allow a flat sail to operate like a curved or even round one. I’ll soon have more on this in terms of the numerous sail papers that Starshot has spawned. For today, Greg explains how what had begun as an attempt to harness holography for messaging on a deep space probe can also become a key to flight operations. The Alpha CubeSat now in orbit is an early test of these concepts. The author of The Starflight Handbook among many other books (volumes whose pages have often been graced by the artwork of the gifted C Bangs), Greg has been inspiring this writer since 1989.

by Greg Matloff

The study of diffractive photon sails likely begins in 1999 during the first year of my tenure as a NASA Summer Faculty Fellow. I was attending an IAA symposium in Aosta, Italy, where my wife C Bangs curated a concurrent art show. The title of the show, which included work by about thirty artists, was “Messages from Earth”. At the show’s opening, C was approached by visionary physicist Robert Forward, who informed her that the best technology to affix a message plaque to an interstellar photon sail was holography. A few weeks later, back in Huntsville AL, Bob suggested to NASA manager Les Johnson that he fund her to create a prototype holographic interstellar message plaque.

It is likely that Bob encouraged this art project as an engineering demonstration. He was aware that photon sails do not last long in Low Earth Orbit because the optimum sail aspect angle to increase orbital energy is also the worst angle to increase atmospheric drag. He had experimented with the concept of a two-sail photon sail and correctly assumed that from a dynamic point of view such a sail would fail. A thin-film hologram of an appropriate optical device could redirect solar radiation pressure accurately without increasing drag.

Our efforts resulted in the creation of a prototype holographic interstellar message plaque that is currently at NASA Marshall Space Flight Center. It was displayed to NASA staff during the summer of 2001 and has been described in a NASA report and elsewhere [1].

I thought little about holography until 2016, when I was asked by Harvard’s Avi Loeb to participate in Breakthrough Starshot as a member of the Scientific Advisory Committee. This technology development project examined the possibility of inserting nano-spacecraft into the beam of a high energy laser array located on a terrestrial mountain top. The highly reflective photon sail affixed to the tiny payload could in theory be accelerated to 20% of the speed of light.

One of the major issues was sail stability during the 5-6 minutes in a laser beam moving with Earth’s rotation. Work by Greg and Jim Benford, Avi Loeb and Zac Manchester (Carnegie Mellon University) indicated that a curved sail was necessary to compensate for beam motion. But a curved thin sail would collapse immediately under the enormous acceleration load.

Some researchers realized that a diffractive sail that could simulate a curved surface might be necessary. Grover Swartzlander of Rochester Institute of Technology published on the topic [2].

Martina Mrongovius, then Creative Director of the NYC HoloCenter, suggested to C that one approach to incorporating an image of an appropriate diffractive optical device in the physically flat sail was holography; this was later confirmed by Swartzlander. Avi Loeb arranged for C to attend the 2017 Breakthrough meeting and demonstrate our version of the prototype holographic message plaque.

A Breakthrough Advisor present at the demonstration was Cornell professor and former NASA chief technologist Mason Peck. Mason invited C to create, with Martina’s aid, five holograms to be affixed to Cornell’s Alpha CubeSat, a student-coordinated project to serve as a test bed for several Starshot technologies.

Image: Fish Hologram (Sculpture by C Bangs, exposure by Martina Mrongovius). A holographic plaque could carry an interstellar message. But could holography also be used to simulate the optimal sail surface on a flat sail?

During the next eight years, about 100 Cornell aerospace engineering students participated in the project. Doctoral student Joshua Umansky-Castro, who has now earned his Ph.D., was the major coordinator.

In 2023, there was an exhibition aboard the NYC museum ship Intrepid (a World War II era aircraft carrier) presenting the scientific and artistic work of the Alpha CubeSat team. Alpha was launched in September of 2025 as part of a ferry mission to the ISS. The CubeSat was deployed in December 2025.

All goals of the effort have been successfully achieved. The tiny chipsats continue to communicate with Earth. The demonstration sail deployed as planned from the CubeSat. A post-deployment glint photographed from the ISS indicates that the holograms perform in space as expected, increasing the Technological Readiness of in-space holograms and diffraction sailing.

In May 2026 a workshop on Lagrange Sunshades to alleviate global warming is scheduled to take place in Nottingham. The best sunshade concepts suggested to date are reflective sails. Two issues with reflective sail sunshades are apparent. One is the meta-stability of L1, which requires active control to keep the sunshade on station. A related issue is that the solar radiation momentum flux moves the effective Lagrange point farther from the Earth, requiring a larger sunshade. At the Nottingham workshop, C and I will collaborate with Grover Swartzlander to demonstrate how a holographic/diffractive sunshade surface alleviates these issues.

References

1. G. L. Matloff, G. Vulpetti, C. Bangs and R. Haggerty, “The Interstellar Probe (ISP): Pre-Perihelion Trajectories and Application of Holography”, NASA/CR-2002-211730, NASA Marshall Space Flight Center, Huntsville, AL (June 2002). Also see G. L. Matloff, Deep-Space Probes: To the Outer Solar System and Beyond, 2nd ed., Springer/Praxis, Chichester, UK (2005).

2. G. A. Swartzlander, Jr., “Radiation Pressure on a Diffractive Sailcraft”, arXiv:1703.02940.


Shelter from the Storm 24 Jan 5:44 AM (last month)

The approaching storm will almost certainly cause power outages that will make it impossible to post here. If this occurs, you can be sure that I’ll get any incoming messages posted as soon as I can get back online. Please continue to post comments as usual and let’s cross our fingers that the storm is less dangerous than it appears.


Cellular Cosmic Isolation: When the Universe Seeds Life but Civilizations Stay Silent 20 Jan 7:06 AM (last month)

So many answers to the Fermi question have been offered that we have a veritable bestiary of solutions, each trying to explain why we have yet to encounter extraterrestrials. I like Leo Szilard’s answer the best: “They are among us, and we call them Hungarians.” That one has a pedigree that I’ll explore in a future post (and remember that Szilard was himself Hungarian). But given our paucity of data, what can we make of Fermi’s question in the light of the latest exoplanet findings? Eduardo Carmona today explores with admirable clarity a low-drama but plausible scenario. Eduardo teaches film and digital media at Loyola Marymount University and California State University Dominguez Hills. His work explores the intersection of scientific concepts and cinematic storytelling. This essay is adapted from a longer treatment that will form the conceptual basis for a science fiction film currently in development. Contact Information: Email: eduardo.carmona@lmu.edu

by Eduardo Carmona MFA

In September 2023, NASA’s OSIRIS-REx spacecraft delivered a precious cargo from asteroid Bennu: pristine samples containing ribose, glucose, nucleobases, and amino acids—the molecular Lego blocks of life itself. Just months later, in early 2024, the Breakthrough Listen initiative reported null results from their most comprehensive search yet: 97 nearby galaxies across 1-11 GHz, with no compelling technosignatures detected.

We live in a cosmos that generously distributes life’s ingredients while maintaining an eerie radio silence. This is the modern Fermi Paradox in stark relief: building blocks everywhere, conversations nowhere.

What if both observations are telling us the same story—just from different chapters?

The Seeding Paradox

The discovery of complex organic molecules on Bennu—a pristine carbonaceous asteroid that has barely changed in 4.5 billion years—confirms what astrobiologists have long suspected: the universe is in the business of making life’s components. Ribose, the sugar backbone of RNA. Nucleobases that encode genetic information. Amino acids that fold into proteins.

These aren’t laboratory curiosities. They’re delivered at scale across the cosmos, frozen in time capsules of rock and ice, raining down on every rocky world in every stellar system. The implications are profound: prebiotic chemistry isn’t a lottery. It’s standard operating procedure for the universe.

This abundance makes the silence more puzzling. If life’s ingredients are everywhere, why isn’t life—or at least communicative life—equally ubiquitous? The Drake Equation suggests we should be drowning in signals. Yet decade after decade of increasingly sophisticated SETI searches return the same answer: nothing.

The traditional responses—they’re too far away, they use technology we can’t detect, they’re deliberately hiding—feel increasingly like special pleading. What if the answer is simpler, more systemic, and reconcilable with both observations?

Cellular Cosmic Isolation: A Synthesis

I propose what I call Cellular Cosmic Isolation (CCI)—not a single explanation but a framework that synthesizes multiple constraints into a coherent picture. Think of it as a series of filters, each one narrowing the funnel from chemical abundance to electromagnetic chatter.

The framework rests on four interlocking observations:

1. Prebiotic abundance: Chemistry is generous. Small bodies deliver life’s building blocks widely and consistently. Biospheres may be common.

2. Geological bottlenecks: Complex, communicative life requires rare conditions—specifically, worlds with coexisting continents and oceans, sustained by long-duration plate tectonics (≥500 million years). Earth’s particular geological engine may be uncommon.

3. Fleeting windows: Technological civilizations may have extraordinarily brief outward-detectable phases—measured in decades, not millennia—before transitioning to post-biological forms, self-destruction, or simply turning their attention inward.

4. Communication constraints: Physical limits (finite speed of light, signal dispersion, beaming requirements) plus coordination problems suppress even the detection of civilizations that do exist.

The result? A universe where the chemistry of life is ubiquitous, simple biospheres may be common, but detectable technospheres remain vanishingly rare and non-overlapping in spacetime. We’re not alone because life is impossible. We’re alone because the path from ribose to radio telescopes has far more gates than we imagined.

The Geological Filter: Earth’s Unlikely Engine

This is perhaps CCI’s most counterintuitive claim, yet it’s grounded in recent research. In a 2024 paper in Scientific Reports, planetary scientists Robert Stern and Taras Gerya argue that Earth’s specific combination—plate tectonics that has operated for billions of years, creating and recycling continents alongside persistent oceans—may be geologically unusual.

Why does this matter for intelligence? Because continents enable:

• Terrestrial ecosystems with high energy density and environmental diversity

• Land-ocean boundaries that create evolutionary pressure for complex sensing and locomotion

• Fire (impossible underwater), which enables metallurgy and advanced tool use

• Seasonal and altitudinal variation that rewards cognitive flexibility

Venus has no plate tectonics. Mars lost its early tectonics. Europa and Enceladus have subsurface oceans but no continents. Earth’s geological engine—stable enough to persist for billions of years, dynamic enough to continuously create new land and recycle old—may be a rare configuration.

Mathematically, this adds two probability terms to the Drake Equation: f_oc (the fraction of habitable worlds with coexisting oceans and continents) and f_pt (the fraction with sustained plate tectonics). If each is, say, 0.1-0.2, their joint probability becomes 0.01-0.04—already a significant filter.
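
As a quick numerical check of the joint-filter arithmetic, here is a minimal sketch (the 10,000-world baseline is a hypothetical illustration, not a figure from the text):

```python
# A minimal sketch of how the two proposed geological terms, f_oc
# (coexisting oceans and continents) and f_pt (sustained plate
# tectonics), scale down an otherwise optimistic count of candidate
# worlds. The baseline of 10,000 is a hypothetical placeholder.
def geological_filter(n_baseline, f_oc, f_pt):
    """Apply the joint ocean/continent and plate-tectonics filters."""
    return n_baseline * f_oc * f_pt

for f_oc, f_pt in [(0.1, 0.1), (0.2, 0.2)]:
    joint = f_oc * f_pt
    survivors = geological_filter(10_000, f_oc, f_pt)
    print(f"f_oc={f_oc}, f_pt={f_pt} -> joint filter {joint:.2f}, "
          f"10,000 habitable worlds -> {survivors:.0f}")
```

Even before any biological or temporal terms, the two geological factors alone cut the candidate population by one to two orders of magnitude.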

The Temporal Filter: Civilization’s Brief Bloom

But the most devastating filter may be temporal. Traditional SETI assumes civilizations remain detectably technological for thousands or millions of years. CCI suggests the opposite: the phase during which a civilization broadcasts electromagnetic signals into space may be extraordinarily brief—perhaps only decades to centuries.

Consider the human trajectory. We’ve been radio-loud for roughly a century. But already:

• We’re transitioning from broadcast to narrowcast (cable, fiber, satellites)

• Our strongest signals are becoming more controlled and directional

• We’re developing AI systems that may fundamentally transform human civilization within this century

What comes after? Post-biological intelligence operating at computational speeds? A civilization that turns inward, exploring virtual realities? Self-annihilation? Deliberate silence to avoid dangerous contact?

We don’t know. But if the detectable technological phase (call it L_ext) averages 50-200 years rather than 10,000-1,000,000 years, the probability of temporal overlap collapses. In a galaxy 13 billion years old, two civilizations with century-long detection windows need to be synchronized to within a cosmic eyeblink.
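
The synchronization argument can be made concrete with a toy Monte Carlo: drop two detection windows of equal length at random into a long history and count how often they overlap. For short windows the analytic answer is about 2 × L_ext / T_gal, which for century-long windows in a 13-billion-year history is of order 10^-8. The sketch below uses illustrative parameters and a scaled-down history so the estimate converges quickly:

```python
# Toy Monte Carlo for the temporal-overlap argument: two civilizations
# with detectable windows of length l_ext, born at uniformly random
# times over an interval t_gal, overlap only if their window start
# times fall within l_ext of each other.
import random

def overlap_probability(l_ext, t_gal, trials=500_000, seed=42):
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if abs(rng.uniform(0, t_gal) - rng.uniform(0, t_gal)) < l_ext
    )
    return hits / trials

# Scaled-down history so the simulation converges in reasonable time;
# analytically 2 * l_ext / t_gal = 2e-4 here, vs ~1.5e-8 for a
# 100-year window in a 13-billion-year galaxy.
p = overlap_probability(l_ext=100, t_gal=1_000_000)
print(p)  # close to the analytic 2e-4
```

Scaling the same ratio back up to the full galactic age shows why even thousands of such civilizations could fail to ever coexist detectably.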

This isn’t speculation—it’s extrapolation from our own accelerating technological trajectory. And acceleration may be a universal property of technological intelligence.

The Mathematics of Solitude

The traditional Drake Equation multiplies probabilities: star formation rate × fraction with planets × habitable planets per system × fraction developing life × fraction developing intelligence × fraction developing communication × longevity of civilization.

CCI expands this with additional constraints:

N_detectable = R* × T_gal × [biological/technological terms] × [f_oc × f_pt] × [L_ext / T_gal] × C(I)

Where C(I) captures propagation physics—distance, dispersion, scattering, beaming geometry. Each term is a probability distribution, not a point estimate.

In 2018, Oxford researchers Anders Sandberg, Eric Drexler, and Toby Ord performed a rigorous Bayesian analysis of the Drake Equation using probability distributions for each parameter. Their conclusion? When uncertainties are properly handled, the probability that we are alone in the observable universe is substantial—not because life is impossible, but because the error bars are enormous.

CCI takes this Bayesian framework and adds the geological and temporal constraints. The result: a posterior probability distribution that is entirely consistent with both abundant prebiotic chemistry and persistent SETI nulls. No paradox required.
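
A toy version of that Bayesian exercise can be sketched in a few lines: draw each factor from a wide log-uniform distribution (reflecting genuine uncertainty rather than point estimates) and look at the fraction of draws in which the detectable count falls below one. The parameter ranges below are hypothetical illustrations, not the distributions used by Sandberg and colleagues:

```python
# Sketch of a Sandberg-style treatment with the CCI terms added: sample
# each Drake-like factor from a log-uniform distribution, multiply, and
# report how often we end up effectively alone. All ranges are
# illustrative assumptions.
import math
import random

def log_uniform(rng, lo, hi):
    """Draw from a log-uniform distribution on [lo, hi]."""
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

def sample_n_detectable(rng):
    rate = log_uniform(rng, 0.1, 10.0)    # star/planet formation proxy
    f_bio = log_uniform(rng, 1e-6, 1.0)   # chemistry -> life -> technology
    f_oc = log_uniform(rng, 0.05, 0.5)    # oceans + continents (CCI)
    f_pt = log_uniform(rng, 0.05, 0.5)    # sustained plate tectonics (CCI)
    l_ext = log_uniform(rng, 50, 1e6)     # detectable window, years
    return rate * f_bio * f_oc * f_pt * l_ext

rng = random.Random(0)
draws = [sample_n_detectable(rng) for _ in range(100_000)]
alone = sum(1 for n in draws if n < 1) / len(draws)
print(f"Fraction of draws with N < 1 (effectively alone): {alone:.2f}")
```

The point is not the particular fraction but the shape of the result: with honest error bars, "we are effectively alone" occupies a large slice of the posterior without any parameter having to be zero.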

What We Should See (And Why We Don’t)

CCI makes testable predictions. If the framework is correct:

1. Biosignatures before technosignatures

Upcoming missions like the Habitable Worlds Observatory should detect atmospheric biosignatures (oxygen-methane disequilibria, possible vegetation edges) before detecting technosignatures. Simple biospheres should be discoverable; technospheres should remain elusive.

2. Continued SETI nulls

Radio and optical SETI campaigns will continue to find nothing—not because we’re searching wrong, but because the detectable population is genuinely sparse and temporally fleeting.

3. Technosignature detection requires extreme investment

Detection of artificial spectral edges (like photovoltaic arrays reflecting at silicon’s UV-visible cutoff) or Dyson-sphere waste heat requires hundreds of hours of observation time even for nearby stars. Their absence at practical survey depths is predicted, not puzzling.

Importantly, CCI is falsifiable. A single unambiguous, repeatable interstellar signal would invalidate the short-Lext assumption. Multiple detections of artificial spectral features would refute the geological filter. The framework lives or dies by observation.

The Cosmos as Organism

There’s an almost biological elegance to this picture. The universe manufactures prebiotic molecules in stellar nurseries and delivers them via comets and asteroids—a kind of cosmic panspermia that doesn’t require directed intelligence, just chemistry and gravity. Call it the seeding phase.

Some of those seeds land on worlds with the right geological configuration—the awakening phase—where continents and oceans coexist long enough for complex cognition to emerge. This is rarer.

A tiny fraction of those awakenings reaches technological sophistication—the communicative phase—but this phase is fleeting, measured in decades to centuries before transformation or silence. This is rarest.

And even then, physical constraints—distance, timing, beaming, the sheer improbability of coordination—suppress detection. The isolation phase.

The cosmos isn’t hostile to intelligence. It’s just structured in a way that makes electromagnetic conversation between civilizations vanishingly unlikely—not impossible, just so improbable that null results after decades of searching are exactly what we’d expect.

Each civilization, then, is like a cell in a vast organism: seeded with the same chemical building blocks, developing according to local conditions, briefly active, then transforming or falling silent before contact with other cells occurs. Cellular Cosmic Isolation.

What This Means for Us

If CCI is correct, we should recalibrate our expectations without abandoning hope. SETI is not futile—it’s hunting for an extraordinarily rare phenomenon. Every null result tightens our probabilistic constraints and guides future searches. But we should also prepare for the possibility that we are, if not alone, then at least effectively alone during our detectable window.

This shifts the weight of responsibility. If technological civilizations are rare and fleeting, then ours carries unique value—not as a recipient of cosmic wisdom from older civilizations, but as a brief, precious experiment in consciousness. The burden falls on us to use our detectable phase wisely: to either extend it, transform it into something sustainable, or at least ensure we don’t waste it.

The universe seeds life generously. It’s indifferent to whether those seeds grow into forests or fade into silence. CCI suggests that the path from chemistry to conversation is longer, stranger, and more filtered than we imagined.

But the building blocks are everywhere. The recipe is universal. And somewhere, in the vast probabilistic landscape of possibility, other cells are awakening. We just may never hear them call out before they, like us, transform into something we wouldn’t recognize as a civilization at all.

That is not a paradox. That is simply the way the cosmos works.

Further Reading

Prebiotic Chemistry:

Furukawa, Y., et al. (2025). “Detection of sugars and nucleobases in asteroid Ryugu samples.” Nature Geoscience. NASA’s OSIRIS-REx mission (2023) also reported similar findings from Bennu.

Bayesian Drake Analysis:

Sandberg, A., Drexler, E., & Ord, T. (2018). “Dissolving the Fermi Paradox.” arXiv:1806.02404. Oxford Future of Humanity Institute.

Geological Filters:

Stern, R., & Gerya, T. (2024). “Plate tectonics and the evolution of continental crust: A rare Earth perspective.” Scientific Reports, 14.

SETI Null Results:

Choza, C., et al. (2024). “A 1-11 GHz Search for Radio Technosignatures from the Galactic Center.” Astronomical Journal. Breakthrough Listen campaign results.

Barrett, J., et al. (2025). “An Exoplanet Transit Search for Radio Technosignatures.” Publications of the Astronomical Society of Australia.

Technosignature Detection:

Lingam, M., & Loeb, A. (2017). “Natural and Artificial Spectral Edges in Exoplanets.” Monthly Notices of the Royal Astronomical Society Letters, 470(1), L82-L86.

Kopparapu, R., et al. (2024). “Detectability of Solar Panels as a Technosignature.” Astrophysical Journal.

Wright, J., et al. (2022). “The Case for Technosignatures: Why They May Be Abundant, Long-lived, and Unambiguous.” The Astrophysical Journal Letters 927(2), L30.

Technology Acceleration:

Garrett, M. (2025). “The longevity of radio-emitting civilizations and implications for SETI.” Journal of the British Interplanetary Society (forthcoming). See also earlier work on technological singularities and post-biological transitions.


Pandora: Exoplanets at Multiple Wavelengths 14 Jan 4:21 AM (last month)

Sometimes we forget how overloaded our observatories are, both in space and on the ground. Why not, for example, use the James Webb Space Telescope to dig even further into TRAPPIST-1’s seven planets, or examine that most tantalizing Earth-mass planet around Proxima Centauri? Myriad targets suggest themselves for an instrument like this. The problem is that priceless assets like JWST not only have other observational goals, but more tellingly, any space telescope is overbooked by scientists with approved observing programs.

Add to this the problem of potentially misleading noise in our data. Thus the significance of Pandora, lofted into orbit via a SpaceX Falcon 9 on January 11, and now successful in returning robust signals to mission controllers. One way to take the heat off overburdened instruments is to create much smaller, highly specialized spacecraft that can serve as valuable adjuncts. With Pandora we have a platform that will monitor a host star in visible light while also collecting data in the near infrared from exoplanets in orbit around it.

Image: A SpaceX Falcon 9 rocket carrying NASA’s Pandora small satellite, the Star-Planet Activity Research CubeSat (SPARCS), and Black Hole Coded Aperture Telescope (BlackCAT) CubeSat lifts off from Space Launch Complex 4 East at Vandenberg Space Force Base in California on Sunday, Jan. 11, 2026. Pandora will provide an in-depth study of at least 20 known planets orbiting distant stars to determine the composition of their atmospheres — especially the presence of hazes, clouds, and water vapor. Credit: SpaceX.

We can use transmission spectroscopy to study an exoplanet’s atmosphere, provided it transits the host star. In that case, data taken while the planet crosses the stellar disk can be compared with data taken when the planet is out of view, so that chemicals in the atmosphere become apparent. This method works and has been used to great effect with a number of transiting hot Jupiters. But contamination of the result by the star itself remains a problem as we widen our observations to ever smaller worlds.
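
To see why stellar contamination bites hardest for small planets, compare the raw transit depth, (Rp/Rs)^2, with the much smaller extra absorption contributed by an atmosphere, roughly an annulus a few scale heights thick around the planet. A back-of-envelope sketch (the radii are standard values; the scale heights and the five-scale-height annulus are illustrative assumptions):

```python
# Rough scaling of transit and atmospheric signals for a Sun-like host.
R_SUN = 6.957e8    # m
R_JUP = 7.149e7    # m
R_EARTH = 6.371e6  # m

def transit_depth(r_planet, r_star):
    """Fractional dimming when the planet crosses the stellar disk."""
    return (r_planet / r_star) ** 2

def atmosphere_signal(r_planet, r_star, scale_height, n_scale_heights=5):
    """Approximate extra depth from an atmospheric annulus."""
    return 2 * r_planet * n_scale_heights * scale_height / r_star ** 2

# Hot Jupiter (scale height ~500 km) vs. Earth analog (~8.5 km):
print(f"Hot Jupiter depth:      {transit_depth(R_JUP, R_SUN):.3%}")
print(f"Hot Jupiter atmosphere: {atmosphere_signal(R_JUP, R_SUN, 5e5):.4%}")
print(f"Earth analog depth:     {transit_depth(R_EARTH, R_SUN):.5%}")
print(f"Earth analog atmosphere:{atmosphere_signal(R_EARTH, R_SUN, 8.5e3):.7%}")
```

The hot-Jupiter atmospheric signal is hundreds of parts per million, while the Earth analog's is around one part per million, well below the level at which starspots and faculae can masquerade as atmospheric features.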

Dániel Apai (University of Arizona) and colleagues have been digging into this problem for a number of years now. Apai is co-investigator on the Pandora mission. He refers to this as “the transit light source effect,” a problem he has been working on since 2018. Apai put it this way in an article in the Tucson Sentinel:

“We built Pandora to shatter a barrier – to understand and remove a source of noise in the data – that limits our ability to study small exoplanets in detail and search for life on them.”

The multiwavelength aspect of Pandora is crucial for its mission. The goal is to separate exoplanet signatures from stellar activity that can mimic or even suppress our readings on compounds within the planetary atmosphere. Pandora will examine a minimum of 20 already identified exoplanets and their host stars (some of these were TESS discoveries). Each target system will be observed 10 times for 24 hours at a time. Starspots and other stellar activity can then be subtracted from the near-infrared readings on clouds, hazes and other atmospheric components.
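
The stated cadence implies a straightforward observing budget, consistent with the per-target and total figures quoted from the mission overview paper:

```python
# Pandora observing budget from the figures in the text: at least 20
# target systems, 10 visits each, 24 hours per visit.
targets = 20
visits_per_target = 10
hours_per_visit = 24

hours_per_target = visits_per_target * hours_per_visit
total_hours = targets * hours_per_target
total_days = total_hours / 24

print(hours_per_target)  # 240 hours per target ("over 200 hours")
print(total_days)        # 200 days of science observations
```

With a one-year prime mission, the remaining ~165 days cover spacecraft monitoring and additional science, as the overview paper notes.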

Image: The Pandora observatory shown with the solar array deployed. Pandora is designed to be launched as a ride-share attached to an ESPA Grande ring. Very little customization was carried out on the major hardware components of the mission such as the telescope and spacecraft bus. This enabled the mission to minimize non-recurring engineering costs. Credit: Barclay et al.

Pandora’s telescope is a 45-centimeter aluminum Cassegrain instrument with two detector assemblies for the visible and near-infrared channels, the latter of which was originally developed for JWST. Its observations will serve as a valuable resource against which to examine JWST data, making it possible to distinguish a signal that may be from the upper layers of a star from the signature of gases in the planet’s atmosphere. The long stare will make it possible to accumulate over 200 hours of data on each of the mission’s targets. Let me quote a paper on the mission, one written as an overview developed for the 2025 IEEE Aerospace Conference:

Pandora is designed to address stellar contamination by collecting long-duration observations, with simultaneous visible and short-wave infrared wavelengths, of exoplanets and their host stars. These data will help us understand how contaminating stellar spectral signals affect our interpretation of the chemical compositions of planetary atmospheres. Over its one-year prime mission, Pandora will observe more than 200 transits from at least 20 exoplanets that range in size from Earth-size to Jupiter-size, and provide a legacy dataset of the first long-baseline visible photometry and near-infrared (NIR) spectroscopy catalog of exoplanets and their host stars.

A part of NASA’s Astrophysics Pioneers Program, Pandora comes in at under $20 million. It also has taken advantage of the rideshare concept, being launched beside two other spacecraft. The Star-Planet Activity Research CubeSat (SPARCS) is designed to study stellar flares and UV activity that can affect atmospheres and habitable conditions on target worlds. The topic is of high interest given our growing ability to analyze exoplanets around small M-dwarf stars, whose habitable zones expose them to high levels of UV. BlackCAT is an X-ray telescope designed to delve into gamma-ray bursts and other explosions of cosmic proportion from the earliest days of the cosmos.

Pandora will now go through systems checks by its primary builder, Blue Canyon Technologies, before control transitions to the University of Arizona’s Multi-Mission Operation Center in Tucson. The overview paper summarizes its place in the constellation of space observatories:

…a number of JWST observing programs aimed at detecting and characterizing atmospheres on Earthlike worlds are finding that stellar spectral contamination is plaguing their results. Typical transmission spectroscopic observations for exoplanets from large missions like JWST focus on collecting data for one or a small number of transits for a given target, with short observing durations before and after the transit event. In contrast to large flagship missions, SmallSat platform[s] enable long-duration measurements for a given target. Pandora can thus collect an abundance of out-of-transit data that will help characterize the host star and directly address the problem of stellar contamination. The Pandora Science Team will select 20 primary science exoplanet host stars that span a range of stellar spectral types and planet sizes, and will collect a minimum of 10 transits per target, with each observation lasting about 24 hours. This results in 200 days of science observations required to meet mission requirements. With a one-year primary mission lifetime, this leaves a significant fraction of the year of science operations that can be used for spacecraft monitoring and additional science.

The paper is Barclay et al., “The Pandora SmallSat: A Low-Cost, High Impact Mission to Study Exoplanets and Their Host Stars,” prepared for the 2025 IEEE Aerospace Conference (preprint).
