Centauri Dreams — Imagining and Planning Interstellar Exploration




Can an Interstellar Generation Ship Maintain a Population on a 250-year Trip to a Habitable Exoplanet? (28 Mar)

I grew up on generation ship stories. I’m thinking of tales like Heinlein’s Orphans of the Sky, Aldiss’ Non-Stop, and (from my collection of old science fiction magazines) Don Wilcox’s “The Voyage that Lasted 600 Years.” The latter, from 1940, may have been the first generation ship story ever written. The idea grows out of the realization that travel to the stars may take centuries, even millennia, and that one way to envision it is to take large crews who live out their lives on the journey, their descendants becoming the ones who will walk on a new world. The problems are immense, and as Alex Tolley reminds us in today’s essay, we may not be fully considering some of the most obvious issues, especially closed life support systems. Project Hyperion is a game attempt to design a generation ship, zestfully tangling with the issues involved. The Initiative for Interstellar Studies is pushing the limits with this one. Read on.

by Alex Tolley

“Space,” it says, “is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s just peanuts to space.” – The Hitchhiker’s Guide to the Galaxy, Douglas Adams.

Detail of book cover of Tau Zero. Artist: Manchu.

Introduction

Science fiction stories, and most notably TV and movies, sidestep the vastness of the physical universe with various devices that reduce travel time. Hyperspace (e.g. Asimov’s galaxy novels), warp speed (Star Trek), and wormholes compress travel time to something like planetary travel in the modern era. Relativity will compress time for the travelers at the cost of external time (e.g. Poul Anderson’s Tau Zero). But the energy cost of high-speed travel favors a slow-speed alternative – the space ark. Giancarlo Genta classifies this method of human-crewed travel as H1 [1]: a space-ark type, where travel to the stars takes centuries. Either the crew is preserved in cryosleep on the journey (e.g. Star Trek TOS S01E22, “Space Seed”) or generations live and die in their ship (e.g. Aldiss’ Non-Stop) [2].

This last is now the focus of the Project Hyperion competition run by the Initiative for Interstellar Studies (i4is), in which teams are invited to design a generation ship within various constraints. It is similar to the Mars Society’s city state design competition for a self-supporting Mars city of 1 million people, with prizes awarded to the winners at the society’s conference.

Prior design work for an interstellar ship was carried out between 1973 and 1978 by the British Interplanetary Society (BIS): Project Daedalus, an uncrewed probe intended to fly by Barnard’s Star, 5.9 light years distant, within 50 years [3]. The BIS’s more recent attempt at a redesign, the ironically named Project Icarus, failed to achieve a complete design, although there was progress on some technologies [4]. Project Hyperion is far more ambitious, given project constraints that include human crews and a much longer flight duration [5].

So what are the constraints or boundary conditions for the competition design? Seven are given:

1. The mission duration is 250 years. In a generation ship that means about 10 generations. [Modern people can barely understand what it was like to live a quarter of a millennium ago, yet the ship must maintain an educated crew that will maintain the ship in working order over that time – AT].

2. The destination is a rocky planet that will have been prepared for colonization by an earlier [robotic? – AT] ship or directed panspermia. Conditions will not require any biological modifications of the crew.

3. The habitat section will provide 1g by rotation.

4. The atmosphere must be similar to Earth’s. [Presumably, gas ratios and partial pressures must be similar too. There does not appear to be any restriction on altitude, so presumably, the atmospheric pressure on the Tibetan plateau is acceptable. – AT]

5. The ship must provide radiation protection from galactic cosmic rays (GCR).

6. The ship must provide protection from impacts.

7. The crew or passengers will number 1000 +/- 500.

The entering team must have at least:

Designing such a ship is not trivial, especially as, unlike a lunar or Martian city, there is no rescue possible for a lone interstellar vehicle traveling for a quarter of a millennium at a speed of at least 1.5% of lightspeed to Proxima, faster for more distant stars. If the internal life support system fails and cannot be repaired, it is curtains for the crew. As the requirements state that at least 500 people must arrive to live on the destination planet, any fewer survivors, perhaps indulging in cannibalism (cf. Neal Stephenson’s Seveneves), would mean the design has failed.

Figure 1. Technology Readiness Levels. Source: NASA.

The design competition constraints allow for technologies at technology readiness level 2 (TRL 2), which is the early R&D stage. In other words, the technologies are unproven and may not prove workable. The designers can therefore flex their imaginations somewhat, from propulsion (onboard or external) to shielding (mass and/or magnetic) to the all-important life support system.

Obviously, the life support system is critical. After Gerry O’Neill designed his space colonies with interiors that looked rather like semi-rural places on Earth, it was apparent to some that if these environments were stable, like managed, miniature biospheres, then they could become generation ships with added propulsion [6]. The Hyperion project’s 500-1500 person constraint is a scaled-down version of such a ship. But given that life support is so critical, why are the teams not required to include members with biological expertise? Moreover, the design evaluators include no such member. Why would that be?

Firstly, perhaps such an Environmental Control and Life Support System (ECLSS) is not anticipated to be a biosphere.

The requirements matrix states:

ECLSS: The habitat shall provide environmental control and life support: How are essential physical needs of the population provided? Food, water, air, waste recycling. How far is closure ensured?

Ecosystem: The ecosystem in which humans are living shall be defined at different levels: animals, plants, microbiomes.

This implies that there is no requirement for a fully natural, self-regulating, stable biosphere, but rather an engineered solution that mixes technology with biology. The image gallery on the Hyperion website [5] suggests that the ECLSS will be more technological in nature, with plants as decorative elements, like those of a business lobby or shopping mall, meeting the anticipated need for reminders of Earth, plus possibly an agricultural facility.

Another possibility is that prior work is now considered sufficient to be grafted into the design. It seems the problem of ECLSS for interstellar worldships is almost taken for granted despite the very great number of unsolved problems.

As Soilleux [7] states:

“To date, most of the thinking about interstellar worldships has focused, not unreasonably, on the still unsolved problems of developing suitably large and powerful propulsion systems. In contrast, far less effort has been given to the equally essential life support systems which are often assumed to be relatively straightforward to construct. In reality, the seductive assumption that it is possible to build an artificial ecosystem capable of sustaining a significant population for generations is beset with formidable obstacles.”

It should be noted that no actual ECLSS has been proven to work for any length of time, even for interplanetary flight, let alone the centuries needed for interstellar flight. Gerry O’Neill did not make much effort beyond handwaving at the stability of the more biospheric ECLSS of his 1970s-era space colony. Prior work from Bios-3, Biosphere 2, MELiSSA, and other experiments has demonstrated only very short-term recycling and sustainability, geared more toward longer-duration interplanetary spaceflight and bases. Multi-year biospheres inevitably lose material to imperfect recycling and life support imbalances that must be corrected by venting. The well-known Biosphere 2 project could not maintain a stable biosphere to support an 8-person crew for 2 years, and suffered roughly a 10% per year atmosphere loss.

On a paper design basis, the British Interplanetary Society (BIS) recently worked on a more detailed design for a post-O’Neill space colony called Avalon. It supported a living interior for 10,000 people, the same as the O’Neill Island One design, providing a 1g level and separate food growing, carbon fixing, and oxygen (O2) generating areas. However, the authors did not suggest it would be suitable for interstellar flight, as it required some inputs and technological support from Earth. Therefore it remains an idea, and as with the hardware experiments to date, such ECLSSs remain at low TRL values. While the BIS articles are behind paywalls, there is a very nice review of the project in the i4is Principium magazine [7].

Table 1. Inputs to support a person in Space. Source: Second International Workshop on Closed Ecological Systems, Krasnoyarsk, Siberia, 1989 [8]

Baseline data indicate that a person requires 2 metric tonnes [MT] of consumables per year, of which food comprises 0.2 MT and oxygen (O2) 0.3 MT. For 1000 people over 250 years, that is 500,000 MT. To put this mass in context, it is about the largest cargo mass a single ship can carry today. Hence recycling, both to save mass and to manage potentially unlimited time periods, makes sense. Only very short-duration trips away from Earth allow consumables to be brought along and discarded after use, as on the Apollo missions to the Moon. Orbiting space stations are regularly resupplied with consumables, although the ISS uses water recycling to reduce the resupply of water for drinking, washing, and bathing.

If an ECLSS could manage 100% recycling, then with one year’s consumption held as a buffer to allow for annual crop growth, the ship could support the crew over the whole period with just over 2000 MT, though with the added requirement of power, infrastructure, and replacement parts to maintain the ECLSS.
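A minimal sketch (in Python, assuming only the Table 1 baseline of 2 MT per person-year and the competition’s crew size and duration) makes the two bounding cases explicit:

```python
# Back-of-the-envelope consumables check, assuming the Table 1 baseline
# of 2 MT per person-year; crew size and duration are the competition's.

RATE_MT = 2.0    # consumables per person per year (Table 1)
CREW = 1000      # people
YEARS = 250      # mission duration

carried = RATE_MT * CREW * YEARS    # no recycling: carry everything
buffer = RATE_MT * CREW             # 100% recycling: one-year buffer only

print(f"No recycling:   {carried:,.0f} MT")   # 500,000 MT
print(f"Full recycling: {buffer:,.0f} MT")    # 2,000 MT
```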

Figure 2. Conceptual ECLSS process flow diagram for the BIS Avalon project.

A review of the Avalon project’s ECLSS [7] stated:

“A fully closed ECLSS is probably impossible. At a minimum, hydrogen, oxygen, nitrogen, and carbon lost from airlocks must be replaced as well as any nutrients lost to the ecosystem as products (eg soil, wood, fibres, chemicals). This presents a resource-management challenge for any fully autonomous spacecraft and an economic challenge even if only partially autonomous…Recycling does not provide a primary source of fixed nitrogen for new plant growth and significant quantities are lost to the air as N2 because of uncontrollable denitrification processes. The total loss per day for Avalon has been estimated at 43 kg to be replaced from the atmosphere.”

[That last loss figure would translate to roughly 390 MT for the 1/10-scale crewed Hyperion starship over 250 years: 43 kg/day scaled by 1/10, times 365 days, times 250 years ≈ 392 MT.]

Hempsell evaluated the ECLSS technologies for star flight based on the Avalon project and found them unsuited to a long-duration voyage [9]. It was noted that the Avalon design still required inputs from Earth to be maintained as it was not fully recycled:

“So effective is this transport link that it is unlikely Earth orbiting colonies will have any need for the self-sufficiency assumed by the original thinking by O’Neill. It is argued that a colony’s ability to operate without support is likely to be comparable to the transport infrastructure’s journey times, which is three to four months. This is three orders of magnitude less than the requirements for a world ship. It is therefore concluded that in many critical respects the gap between colonies and world ships is much larger than previous work assumed.”

We can argue whether this will be solved eventually, but clearly, any failure is lethal for the crew.

One issue is not mentioned, despite the journey duration. Over the quarter-millennium voyage, there will be evolution as organisms adapt to the ship’s environment. Data from the ISS has shown that bacteria may mutate into more virulent pathogens. A population living in close quarters will encourage pandemics. Ionizing radiation from the sun, and secondaries from the hull of a structure, damage cells, including their DNA. 250 years of exposure to residual GCR and secondaries will damage the DNA of all life on the starship.

However, even without this direct effect on DNA, conditions aboard will drive organisms to evolve as they adapt to the starship, especially given the small populations, which increase genetic drift. Such evolution, even of complex life, can be quite fast, as the continued monitoring of the Galápagos finches first observed by Darwin attests. Of particular concern is the emergence of pathogens that impact both humans and the food supply.

In the 1970s, the concept of a microbiome in humans, animals, and some plants was unknown, although bacteria were understood to be part of nutrient cycling. Now we know much more about the need for humans, as well as some food crops, to maintain a microbiome. This could become a source of pathogens. While a space habitat could simply flush out its agricultural infrastructure and replace it, no such possibility exists for the starship. Crops would need to be kept in isolated compartments to prevent a disease outbreak from destroying all the crops in the ECLSS.

If all this wasn’t difficult enough, the competition asks that the target generation population find a ready-made terrestrial habitat/terraformed environment to slip into on arrival. This presumably was prebuilt by a robotic system that arrived ahead of the crewed starship to build the infrastructure and create the environment ready for the human crew. It is the Mars agricultural problem writ large [10], with no supervision from humans to correct mistakes. If robots could do this on an exoplanet, couldn’t they make terrestrial habitats throughout the solar system?

Heppenheimer assumed that generation ships based on O’Neill habitats would not colonize planets, but rather use the resources of distant stars to build more space habitats, the environment the populations had adapted to [6]. This would maintain the logic of the idea of space habitats being the best place for humanity as well as avoiding all the problems of trying to adapt to living on a living, alien planet. Rather than building space habitats in other star systems, the Hyperion Project relies on the older idea of settling planets, and ensuring at least part of the planet is habitable by the human crew on arrival. This requires some other means of constructing an inhabitable home, whether a limited surface city or terraforming the planet.

Perhaps the main reason why both the teams and evaluators have put more emphasis on the design of the starship as a human social environment is that little work has looked at the problems of maintaining a small population in an artificial environment over 250 years. Science fiction novels about generation ships tend to be dystopias: the population fails to maintain its technical skills and falls back to a simpler, almost pre-technological lifestyle. But this cannot be allowed on a ship that is not fully self-repairing, because hardware will fail. Apart from the simplest technologies, no machine since the Industrial Revolution has kept working without the repair or replacement of parts. Our largely solid-state space probes like the Voyagers, now over 45 years old and in interstellar space, would not work reliably for 250 years even if their failing RTGs were replaced. Mechanical moving parts fail even faster.

While it is tempting to think about how we might send crewed colony ships to the stars using projected technologies, it seems to me rather premature given the likely start date of such a mission. Our technology will be very different even within decades, obsoleting ideas used in the design. The crews may not be human 1.0, requiring very different resources for the journey. At best, the design ideas may be like Leonardo da Vinci’s sketches of flying machines. Da Vinci could not conceive of the technologies we now use to avoid moving our meat bodies through space just to communicate with others around the planet.

Why is the population constraint 1000 +/- 500?

Gerry O’Neill’s Island One space colony was designed for 10,000 people, a small town. The BIS Avalon was designed for the same number, using a different configuration that provided the full 1g with lower Coriolis forces. The Hyperion starship constrains the population to 1/10th that number. What is the reason? The figure is based on the paper by Smith [13] on the genetics and estimated safe population size for a 5-generation starship [125 years at 25 years per generation?]. It was calculated from the expected lower limit of a small population’s genetic diversity given current social structures, potential diseases, etc. This is well below the number generally considered for long-term viable animal populations. However, the calculated size of the genetic bottleneck in our human past is about that size, suggesting that in extremis such a small population can recover, just as we escaped extinction. It should therefore be sufficient for a multi-generation interstellar journey.

From Hein [14]:

While the smallest figures may work biologically, they are rather precarious for some generations before the population has been allowed to grow. We therefore currently suggest figures with Earth-departing (D1) figures on the order of 1,000 persons.

But do we need this population size? Without resorting to a full seed ship approach with the attendant issues of machine carers raising “tank babies”, can we postulate an approach without assuming future TRL9 ECLSS technology?

As James T Kirk might say: “I changed the conditions of the test competition!”

Suppose we need only a single woman who will bear a child and then age until death, perhaps before 100. The population at any time would then be a newborn baby, the 25-year-old mother, the 50-year-old grandmother, and the 75-year-old great-grandmother (the great-great-grandmother having died): 3 adults and a newborn. At 20, the child will count as an adult, with a 45-year-old mother, a 70-year-old grandmother, and possibly a 95-year-old great-grandmother: 4 adults. Genetic diversity would come from a frozen egg, sperm, and embryo bank of perhaps millions of individually selected genomes. The mother would give birth only to preselected, compatible females, using embryo implantation (or X-bearing “female” sperm) to achieve pregnancy and maintain the line of women.

This can easily be done today with minimal genetic engineering and a cell-sorting machine [11]. At the destination, population renewal can begin. The youngest mature woman could bear 10 infants, each separated by 2 years. The fertile female population would then increase 10-fold each generation, creating a million women in 6 generations, about 150 years. Each child would be selected randomly from the stored genetic bank of female sperm or embryos. At some point, males could be phased in to restore the 50:50 ratio. Because we are supporting only 4 women, almost all the recycling of food and air can be discarded, with just water recycled. Water recycling of urine and sweat has reached 98% on the ISS, so we can assume this can be extended to total water consumption. Using a conservative 98% recycle rate for water (sanitary, drinking, and metabolic), O2 replenishment from water by electrolysis (off-the-shelf kit), and an 80% recycle rate of metabolic carbon dioxide (CO2) to O2 and methane (CH4, by a Sabatier process), the total water storage need be only 5x the annual consumption. The water could form part of the hull, as described in McConnell’s “Spacecoach” concept.

Based on the data in Table 1 above, the total life support consumables would be about 125 MT. The methane could be vented or stored as lander propulsion fuel. For safety through redundancy, the total population of women might be raised 10x, to 30-40, requiring only 1250 MT of total consumables. None of this requires new ECLSS technology, just enough self-repairing, repairable, and recyclable machinery to maintain the ship’s integrity and function. The food would be stored frozen or freeze-dried, with the low ambient temperature maintained by radiators exposed to interstellar space. This low-mass approach requires no special nutrient recycling, just water recycling at the efficiency of current technology, O2 recycling using existing technology, and sufficient technical support to ensure that repairs and medical needs can be handled by the small crew.
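The 5x water storage figure follows directly from the stated recycle rate and mission length, as this sketch shows (the model is mine and deliberately simple, ignoring recovery of losses through other loops):

```python
# Sketch of the water-reserve arithmetic. The 98% recycle rate and the
# 250-year duration come from the text; everything else is illustrative.

def reserve_multiple(recycle_rate: float, years: int = 250) -> float:
    """Stored reserve, in multiples of annual consumption, needed to
    cover cumulative losses when a fraction (1 - recycle_rate) of each
    year's consumption is lost for good."""
    return (1.0 - recycle_rate) * years

print(reserve_multiple(0.98))   # 5.0 -> store 5x one year's water use
```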

Neal Stephenson must have had something like this in mind for the early period of saving and rebuilding the human population in his novel Seveneves, in which the surviving female population in space is reduced to 7. Having said this, at such a low population size, even with genetic diversity carefully managed through sperm and embryo banks, genetic failure over the 10-generation flight may still occur. Once the population has the resources to increase, the genetic issues may be overcome by the proposed approach.

Table 2. ETHNOPOP simulation showing years to demographic extinction for closed human populations. Bands of people survive longer with larger starting sizes, but these closed populations all eventually become extinct due to demographic (age and sex structure) deficiencies. Source: Hein [14].

From Table 2 above, even a tiny, natural human population may survive hundreds of years before extinction. Would the carefully managed starship population fare better, or worse?

Despite this maneuver, there is still a lot of handwavium in the creation of a terrestrial habitable environment at the destination. All the problems of building an ECLSS or biosphere, whether in one zone of the planet or across its entirety by terraforming, entirely without human supervision, are left to the imagination. As with humans, animals such as birds and mammals require parental nurturing, making any robotic seedship only partially viable for terraforming. If panspermia is the expected technology, this crewed journey may not start for many thousands of years.

But what about social stability?

I do applaud the Hyperion team for requiring a plan for maintaining a stable population for 250 years. The issue of damage to a space habitat was somewhat addressed by proponents in the O’Neill era, mainly through the scale of the habitat and repair times. Major malicious damage by groups was not addressed. We know of no period in history this long without major wars. The US alone had a devastating Civil War just 164 years ago. The post-WWII peace in Europe guaranteed by the Pax Americana is perhaps the longest period of peace in Europe, albeit its guarantor nation was involved in serious wars in SE Asia and the Middle East during this period. Humans seem disposed to violence between groups, even within families. I couldn’t say whether even a family of women would refrain from existential violence. Perhaps genetic modification or chemical inhibition via the water supply might be solutions to outbreaks of violence. This is speculation in a subject outside my field of expertise. I hope the competition discovers a viable solution.

In summary, while it is worthwhile to consider how we might tackle human star flight of such duration, there is no evidence that we are even close to solving the problems of maintaining it with the early-stage technologies and the constraints imposed on the designs by the competition. By the time we can contemplate such flights, technological change will likely have made all the constraints superfluous.

References and Further Reading

Genta, G., “Interstellar Exploration: From Science Fiction to Actual Technology,” Acta Astronautica, Vol. 222 (September 2024), pp. 655-660. https://www.sciencedirect.com/science/article/pii/S0094576524003655?via%3Dihub

Gilster, P. (2024) “Science Fiction and the Interstellar Imagination,” web: https://www.centauri-dreams.org/2024/07/17/science-fiction-and-the-interstellar-imagination/

BIS “Project Daedalus” https://en.wikipedia.org/wiki/Project_Daedalus

BIS “Project Icarus” https://en.wikipedia.org/wiki/Project_Icarus_(interstellar)

i4is “Project Hyperion Design Competition” https://www.projecthyperion.org/

Heppenheimer, T. A. (1977). Colonies in Space: A Comprehensive and Factual Account of the Prospects for Human Colonization of Space. Harrisburg, PA: Stackpole Books.

Soilleux, R. (2020) “Implications for an Interstellar World-Ship in findings from the BIS SPACE Project” – Principium #30 pp5-15 https://i4is.org/wp-content/uploads/2020/08/Principium30-print-2008311001opt.pdf

Nelson, M., Pechurkin, N. S., Allen, J. P., Somova, L. A., & Gitelson, J. I. (2010). “Closed ecological systems, space life support and biospherics.” In Humana Press eBooks (pp. 517–565). https://doi.org/10.1007/978-1-60327-140-0_11

Hempsell, M. (2020) “Colonies and World Ships,” JBIS, 73, pp. 28-36 (abstract). https://www.researchgate.net/publication/377407466_Colonies_and_World_Ships

Tolley, A. (2023) “MaRMIE: The Martian Regolith Microbiome Inoculation Experiment.” Web https://www.centauri-dreams.org/?s=Marmie

Bio-Rad “S3e Cell Sorter” https://www.bio-rad.com/en-uk/category/cell-sorters?ID=OK1GR1KSY

Allen, J., Nelson, M., & Alling, A. (2003). “The legacy of biosphere 2 for the study of biospherics and closed ecological systems.” Advances in Space Research, 31(7), 1629–1639. https://doi.org/10.1016/s0273-1177(03)00103-0

Smith, C. M. (2014). “Estimation of a genetically viable population for multigenerational interstellar voyaging: Review and data for project Hyperion.” Acta Astronautica, 97, 16–29. https://www.sciencedirect.com/science/article/abs/pii/S0094576513004669

Hein, A.M., et al. (2020) “World Ships: Feasibility and Rationale,” Acta Futura, 12, pp. 75-104. https://arxiv.org/abs/2005.04100


The Ethics of Spreading Life in the Cosmos (25 Mar)

We keep trying to extend our reach into the heavens, but the idea of panspermia is that the heavens are actually responsible for us. Which is to say that at least the precursor materials that allow life to emerge came from elsewhere and did not originate on Earth. Over a hundred years ago the Swedish scientist Svante Arrhenius suggested that the pressure of starlight could push bacterial spores between planets, and we can extend the notion to interstellar journeys of hardy microbes as well, blasted out of planetary surfaces by events such as meteor impacts and flung onto outbound trajectories.

Panspermia notions inevitably get into the question of deep time given the distances involved. The German physician Hermann Richter (1808-1876) had something interesting to say about this, evidently motivated by his irritation with Charles Darwin, who had made no speculations on the origin of the life he studied. Richter believed in a universe that was eternal, and indeed thought that life itself shared this characteristic:

“We therefore also regard the existence of organic life in the universe as eternal; it has always existed and has propagated itself in uninterrupted succession. Omne vivum ab aeternitate e cellula!” [All life comes from cells throughout eternity].

Thus Richter supplied what Darwin did not, while accepting the notion of the evolution of life in the circumstances in which it found itself. By 1908 Arrhenius could write:

“Man used to speculate on the origin of matter, but gave that up when experience taught him that matter is indestructible and can only be transformed. For similar reasons we never inquire into the origin of the energy of motion. And we may become accustomed to the idea that life is eternal, and hence that it is useless to inquire into its origin.”

The origins of panspermia thinking go all the way back to the Greeks, and the literature is surprisingly full as we get into the 19th and early 20th centuries, but I won’t linger any further on that because the paper I want to discuss today deals with a notion that came about only within the last 60 years or so. As described by Carl Sagan and Iosif Shklovskii in 1966 (in Intelligent Life in the Universe), it is the idea that panspermia is not only possible but something humans might one day attempt.

Indeed, Michael Mautner and Greg Matloff proposed this in the 1970s (citation below), while digging into the potential risks and ethical problems associated with such a project. The idea remains controversial, to judge from the continuing flow of papers on various aspects of panspermia. We now have a study from Asher Soryl (University of Otago, NZ) and Anders Sandberg (MIMIR Centre for Long Term Futures Research, Stockholm) again sizing up guided panspermia ethics and potential pitfalls. What is new here is the exploration of the philosophy of the directed panspermia idea.

Image: Can life be spread by comets? Comet 2I/Borisov is only the second interstellar object known to have passed through our Solar System, but presumably there are vast numbers of such objects moving between the stars. In this image taken by the NASA/ESA Hubble Space Telescope, the comet appears in front of a distant background spiral galaxy (2MASX J10500165-0152029, also known as PGC 32442). The galaxy’s bright central core is smeared in the image because Hubble was tracking the comet. Borisov was approximately 326 million kilometres from Earth in this exposure. Its tail of ejected dust streaks off to the upper right. Credit: ESA/Hubble.

Spreading life is perhaps more feasible than we might imagine at first glance. We have already achieved interstellar capabilities, with the two Voyagers, Pioneers 10 and 11, and New Horizons on hyperbolic trajectories that will never return to the Solar System. Remember, time is flexible here because a directed panspermia effort would be long-term, seeding numerous stars over periods of tens of thousands of years. The payload need not be large; Soryl and Sandberg consider a 1 kg container sufficient, one containing freeze-dried bacterial spores inside water-soluble UV-protective sheaths. Such spores could survive millions of years in transit:

…desiccation and freezing makes D. radiodurans able to survive radiation doses of 140 kGy, equivalent to hundreds of millions of years of background radiation on Earth. A simple opening mechanism such as thermal expansion could release them randomly in a habitable zone without requiring the use of electronic components. Moreover, normal bacteria can be artificially evolved for extreme radiation tolerance, in addition to other traits that would increase their chances of surviving the journey intact. Further genetic modifications are also possible so that upon landing on suitable exoplanets, evolutionary processes could be accelerated by a factor of ∼1000 to facilitate terraforming, eventually resulting in Earth-like ecological diversity.

If the notion seems science fictional, remember that it’s also relatively inexpensive compared to instrumented payload packages or certainly manned interstellar missions. Right now when talking about getting instrumentation of any kind to another star, we’re looking at gram-scale payloads capable of being boosted to a substantial portion of lightspeed, but directed panspermia could even employ comet nuclei inoculated with life, all moving at far slower speeds. And we know of some microorganisms fully capable of surviving hypervelocity impacts, thus enabling natural panspermia.

So should we attempt such a thing, and if so, what would be our motivation? The idea of biocentrism is that life has intrinsic merit. I’ve seen it suggested that if we discover that life is not ubiquitous, we should take that as meaning we have an obligation to seed the galaxy. Another consideration, though, is whether life invariably produces sentience over time. It’s one thing to maximize life itself, but if our actions produce it on locations outside Earth, do we then have a responsibility for the potential suffering of sentient beings given we have no control over the conditions they will inhabit?

That latter point seems abstract in the extreme to me, but the authors note that ‘welfarism,’ which assesses the intrinsic value of well-being, is an ethical position that illuminates the all but God-like perspective of some directed panspermia thinking. We are, after all, talking about the creation of living systems that, over billions of years of evolution, could produce fully aware, intelligent beings, and thus we have to become philosophers, some would argue, as well as scientists, and moral philosophers at that:

While in some cases it might be worthwhile to bring sentient beings into existence, this cannot be assumed a priori in the same way that the creation of additional life is necessarily positive for proponents of life-maximising views; the desirability of a sentient being’s existence is instead contingent upon their living a good life.

Good grief… Now ponder the even more speculative cost of waiting to do directed panspermia. Every minute we wait to develop systems for directed panspermia, we lose prospective planets. After all, the universe, the authors point out, is expanding in an accelerated way (at least for now, as some recent studies have pointed out), and for every year in which we fail to attempt directed panspermia, three galaxies slip beyond our capability of ever reaching them. By the authors’ calculations, we lose on the order of one billion potentially habitable planets each year as a result of this expansion.

These are long-term thoughts indeed. What the authors are saying is reminiscent in some ways of the SETI/METI debate. Should we do something we have the capability of doing when we have no consensus on risk? In this case, we have only begun to explore what ‘risk’ even means. Is it risk of creating “astronomical levels of suffering” in created biospheres down the road? Soryl and Sandberg use the term, thinking directed panspermia should not be attempted until we have a better understanding of the issue of sentient welfare as well as technologies that can be fine-tuned to the task:

Until then, we propose a moratorium on the development of panspermia technologies – at least, until we have a clear strategy for their implementation without risking the creation of astronomical suffering. A moratorium should be seen as an opportunity for engaging in more dialogue about the ethical permissibility of directed panspermia so that it can’t happen without widespread agreement between people interested in the long-term future value of space. By accelerating discourse about it, we hope that existing normative and empirical uncertainties surrounding its implementation (at different timescales) can be resolved. Moreover, we hope to increase awareness about the possibility of S-risks resulting from space missions – not only limited to panspermia.

By S-risks, the authors refer to those risks of astronomical suffering. They assume we need to explore further what they call ‘the ethics of organised complexity.’ These are philosophical questions that are remote from ongoing space exploration, but building up a body of thought on the implications of new technologies cannot be a bad thing.

That said, is the idea of astronomical suffering viable? Life of any kind produces suffering, does it not, yet we choose it for ourselves as opposed to the alternative. I’m reminded of an online forum I once participated in when the question of existential risks to Earth by an errant asteroid came up. In the midst of asteroid mitigation questions, someone asked whether we should attempt to save Earth from a life-killer impact in the first place. Was our species worth saving given its history?

But of course it is, because we choose to live rather than die. Extending that, if we knew that we could create life that would evolve into intelligent beings, would we be responsible for their experience of life in the remote future? It’s hard to see this staying the hand of anyone seriously attempting directed panspermia. What would definitely put the brakes on it would be the discovery that life occurs widely around other stars, in which case we should leave these ecosystems to their own destiny. My suspicion is that this is exactly what our next generation telescopes and probes will discover.

The paper is Soryl & Sandberg, “To Seed or Not to Seed: Estimating the Ethical Value of Directed Panspermia,” Acta Astronautica, 22 March 2025 (full text). The Mautner and Matloff paper is “Directed Panspermia: A Technical and Ethical Evaluation of Seeding the Universe,” JBIS, Vol. 32, pp. 419-423, 1979.


Why Do Super-Earths and Mini-Neptunes Form Where They Do? (20 Mar)

Exactly how astrophysicists model entire stellar systems through computer simulations has always struck me as something akin to magic. Of course, the same thought applies to any computational methods that involve interactions between huge numbers of objects, from molecular dynamics to plasma physics. My amazement is the result of my own inability to work within any programming language whatsoever. The work I’m looking at this morning investigates planet formation within protoplanetary disks. It reminds me again just how complex so-called N-body simulations have to be.

Two scientists from Rice University – Sho Shibata and Andre Izidoro – have been investigating how super-Earths and mini-Neptunes form. That means simulating the formation of planets by studying the gravitational interactions of a vast number of objects. N-body simulations can predict the results of such interactions in systems as complex as protoplanetary disks, and can model formation scenarios from the collisions of planetesimals to planet accretion from pebbles and small rocks. All this, of course, has to be set in motion in processes occurring over millions of years.
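At the core of such codes is direct gravitational integration. The sketch below (Python/NumPy, and illustrative only: published planet-formation codes add gas drag, migration torques, collision handling, and symplectic integrators tuned for near-Keplerian orbits) shows the basic kick-drift-kick loop:

```python
import numpy as np

# Toy direct-summation N-body integrator (kick-drift-kick leapfrog).

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass, eps=1e-3):
    """Pairwise gravitational accelerations with softening length eps."""
    diff = pos[None, :, :] - pos[:, None, :]      # diff[i, j] = r_j - r_i
    r2 = (diff ** 2).sum(-1) + eps ** 2           # softened squared distances
    np.fill_diagonal(r2, np.inf)                  # exclude self-interaction
    return G * (mass[None, :, None] * diff / r2[..., None] ** 1.5).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    """Advance positions and velocities by `steps` kick-drift-kick steps."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                     # half kick
        pos += dt * vel                           # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                     # half kick
    return pos, vel

# Two-body sanity check: a 'star' and a 'planet' on a circular orbit.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
mass = np.array([1.0, 1e-3])
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, steps=6283)  # ~one orbit
```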

Image: Super-Earths and mini-Neptunes abound. New work helps us see a possible mechanism for their formation. Credit: Stock image.

Throw in planetary migration forced by conditions within the disk and you are talking about complex scenarios indeed. The paper runs through the parameters set by the researchers for disk viscosity and metallicity, and uses hydrodynamical simulations to model the movement of gas within the disk. What is significant in this study is that the authors deduce planet formation within two rings at specific locations in the disk, instead of modeling the disk as a continuous and widespread distribution of solids. Drawing on prior work he published in Nature Astronomy in 2021, Izidoro comments:

“Our results suggest that super-Earths and mini-Neptunes do not form from a continuous distribution of solid material but rather from rings that concentrate most of the mass in solids.”

Image: This is Figure 7 from the paper. Caption: Schematic view of where and how super-Earths and mini-Neptunes form. Planetesimal formation occurs at different locations in the disk, associated with sublimation and condensation lines of silicates and water. Planetesimals and pebbles in the inner and outer rings have different compositions, as indicated by the different color coding (a). In the inner ring, planetesimal accretion dominates over pebble accretion, while in the outer ring, pebble accretion is relatively more efficient than planetesimal accretion (b). As planetesimals grow, they migrate inward, forming resonant chains anchored at the disk’s inner edge. After gas disk dispersal, resonant chains are broken, leading to giant impacts that sculpt planetary atmospheres and orbital reconfiguration (c). Credit: Shibata & Izidoro.

Thus super-Earths and mini-Neptunes, known to be common in the galaxy, form at specific locations within the protoplanetary disk. Ranging in size from 1 to 4 times the size of Earth, such worlds emerge in two bands, one of them inside about 1.5 AU from the host star, and the other beyond 5 AU, near the water snowline. We learn that super-Earths form through planetesimal accretion in the inner disk. Mini-Neptunes, on the other hand, result from the accretion of pebbles beyond the 5 AU range.

A good theory needs to make predictions, and the work at Rice University turns out to replicate a number of features of known exoplanetary systems. That includes the ‘radius valley,’ the scarcity of planets about 1.8 times the size of Earth. What we observe is that exoplanets generally cluster at roughly 1.4 or 2.4 Earth radii. This ‘valley’ implies, according to the researchers, that planets smaller than 1.8 Earth radii would be rocky super-Earths, while larger worlds become gaseous mini-Neptunes.

And what of Earth-class planets in orbits within a star’s habitable zone? Let me quote the paper on this:

Our formation model predicts that most planets with orbital periods between 100 days < P < 400 days are icy planets (>10% water content). This is because when icy planets migrate inward from the outer ring, they scatter and accrete rocky planets around 1 au. However, in a small fraction of our simulations, rocky Earthlike planets form around 1 au (A. Izidoro et al. 2014)… While the essential conditions for planetary habitability are not yet fully understood, taking our planet at face value, it may be reasonable to define Earthlike planets as those at around 1 au with rocky-dominated compositions, protracted accretion, and relatively low water content. Our formation model suggests that such planets may also exist in systems of super-Earths and mini-Neptunes, although their overall occurrence rate seems to be reasonably low, about ∼1%.

That 1 percent is a figure to linger on. If the planet formation mechanisms studied by the authors are correct in assuming two rings of distinct growth, then we can account for the high number of super-Earths and mini-Neptunes astronomers continue to find. Such planets are currently thought to orbit about 30 percent of Sun-like stars, so a 1 percent occurrence within those systems would mean no more than one Earth-like planet for every 300 or so such stars.

How seriously to take such results? Recognizing that the kind of computation in play here takes us into realms we cannot verify through experiment, it’s important to remember that we have to use tools like N-body simulations to delve deeply into emergent phenomena like planets (or stars, or galaxies) in formation. The key is always to judge computational results against actual observation, so that insights can turn into hard data. Being a part of making that happen is what I can only call the joy of astrophysics.

The paper is Shibata & Izidoro, “Formation of Super-Earths and Mini-Neptunes from Rings of Planetesimals,” Astrophysical Journal Letters Vol. 979, No. 2 (21 January 2025), L23 (full text). The earlier paper by Izidoro and team is “Planetesimal rings as the cause of the Solar System’s planetary architecture,” Nature Astronomy Vol. 6 (30 December 2021), 357-366 (abstract).


Deep Space and the Art of the Map (18 Mar)

NASA’s recently launched SPHEREx space telescope will map the entire celestial sky four times in two years, creating a 3D map of over 450 million galaxies. We can hope for new data on the composition of interstellar dust, among other things, so the mission has certain astrobiological implications. But today I’m focusing on the idea of maps of stars as created from our vantage here on Earth. How best to make maps of a 3D volume of stars? I’ve recently been in touch with Kevin Wall, who under the business name Astrocartics Lab has created the Astrocartica and Interstellar Surveyor maps he continues to refine. He explains the background of his interest in the subject in today’s essay while delving into the principles behind his work.

by Kevin Wall

The most imposing aspect of the universe we live in is that it is 3-dimensional. The star charts of the constellations we are most familiar with show the 2-dimensional positions of stars fixed on the dome of the sky. We do not, however, live under a dome. In the actual cosmography of the universe, some stars are farther from us and some are closer, and nothing about their positions is flat in any way. Only the planets, which reside in a mostly 2D plane around the Sun, can be considered that way.

Now if your artistic inclinations make you want a nice map of the Solar System, you can find many for sale on Amazon with which you can decorate your wall. But if your space interests lie much farther than Pluto or Planet 9 (if it exists) and you want something that shows true cosmography then your choices are next to zero. This has been true since I became interested in making 3D maps of the stars in the mid-90s.

A map of the stars nearest the Sun is of particular interest to me because they are the easiest to observe and will be the first humanity visits. These maps are not really that hard to make, and there are a few you can find today. There is one I once saw on Etsy which is 3D but has a fairly plain design. There are some you can get from Winchell Chung on his 3D Star Maps site, but they are not visually 3D (a number beside each star gives the Z distance). Also, there are many free star maps on the Internet, but none that really capture the imagination, the sort of thing one might want to display.

So why can’t star travel enthusiasts get a nice map of the nearest stars to Earth with which they can adorn their walls? With that question in mind, I started my interstellar mission to make a map that might be nice enough to make people more interested in star travel. Although I do not consider myself an artist, I thought a map done with more attention to detail and maybe a little artistry would be appreciated. My quest to map the sky has many antecedents, as the image of an ancient Chinese map below shows.

Image: An early star chart. This is the Suzhou Planisphere, created in the 12th Century and engraved in stone some decades later. It is an astronomical chart made as an educational tool for a Chinese prince. Its age is clear from the fact that the north celestial pole lies over 5° from Polaris’ position in the present night sky. Credit: History of Chinese Science and Culture Foundation / Ian Ridpath.

How to Make a 3D Star Chart

Before I go any farther, I need to explain that this article is about how to map objects in 3D space onto a flat 2D map surface. It is not about 3D models, which can be considered a sort of map but clearly involve different construction methods. It is also not quite about computer star map programs. I use the word “quite” because although a star map program is dynamic and seems different from a static wall map, the scene on your flat computer screen is still 2D. The projection your program uses is the same as for paper maps, but you get to pick your view angle.

The first thing you need to do in making a 3D star map is to get the 3D coordinates. This is not really hard, but it involves some trigonometry. I will not go into the details here, because this article is about how to put the stars on the map once you have their coordinates and data. I refer you to the 3D Star Maps site for trigonometry assistance.
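For a taste of that trigonometry, here is the standard conversion from a star’s catalogued right ascension, declination, and distance to Cartesian coordinates, sketched in Python (a generic textbook formula, not anything specific to the maps discussed here):

```python
import math

# Equatorial Cartesian coordinates: X toward the vernal equinox,
# Z toward the north celestial pole.

def star_xyz(ra_hours: float, dec_deg: float, dist_ly: float):
    ra = math.radians(ra_hours * 15.0)   # 1 hour of RA = 15 degrees
    dec = math.radians(dec_deg)
    x = dist_ly * math.cos(dec) * math.cos(ra)
    y = dist_ly * math.cos(dec) * math.sin(ra)
    z = dist_ly * math.sin(dec)
    return x, y, z

# Alpha Centauri: RA ~14h 39.6m, Dec ~-60.8 deg, ~4.37 light years.
print(star_xyz(14.66, -60.83, 4.37))   # roughly (-1.6, -1.4, -3.8) ly
```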

The basic 3D star map is a 2D X-Y grid with Z coordinates printed next to the stars. The biggest thing going for it is efficient use of map area and, in fact, it is the only way to make a 3D star map with scores of stars on it. This map makes for a good reference; however, visualizing the depth of objects on it is hard. Although you can use graphics to give this type of map some 3D, you really need to use the projection systems invented for illustrations.

Much has been learned about how to show 3-dimensional images on a 2-dimensional surface, and I cannot cover everything here. Therefore, I will summarize the methods I have seen used for most 3D star maps. They engage the visual cues we use to see depth in real life to make all 3 dimensions easy for map users to read. There are basically 2 families of projection for visually showing the 3 dimensions of something on a flat piece of paper: parallel and perspective.

The basic 3D star map uses a parallel projection. In parallel projection, all lines of sight from objects on the map to the viewer are parallel to each other. So, unlike in human vision, where things get smaller as they get farther away, everything in this sort of projection stays the same size out to infinity. Because such a star map has an X-Y plane parallel to the viewing plane, the Z coordinates are perpendicular to the viewing plane, so you cannot see a line connecting the X-Y plane to a star. However, if you tilt the X-Y plane so that it is no longer parallel, the Z dimension becomes visible, as it is no longer perpendicular to the viewing plane.

Rotating the axes like this is called axonometric projection. It is used for most of the 3D star maps I have seen and is easy to create. It also gives a good illusion of depth. The scale of distances on the map changes depending on the angle an object makes with the map’s origin point. Although this is not a problem for viewers who only need estimated distances between objects, if you want to calculate the real distances you will need to do some arithmetic.

Perspective projection simulates the natural way humans see depth: things that are farther from us look smaller. In perspective drawings, lines are drawn toward one or more points on the 2D surface, which are considered to be at infinity; these determine the progressively smaller proportions of objects that are farther from the viewer. Star maps that use this will have distances that scale smaller the farther they are from the reader. This makes for inconsistent distances, but it allows maps to include extremely deep objects.
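The contrast between the two families can be sketched in a few lines of Python (the conventions here, a viewer looking along the Z axis and an arbitrary tilt about the X axis, are illustrative assumptions, not anyone’s published method):

```python
import math

def axonometric(x, y, z, tilt_deg=30.0):
    """Parallel projection after tilting the X-Y plane: depth shifts a
    point on the page, but apparent size never changes with distance."""
    t = math.radians(tilt_deg)
    return x, y * math.cos(t) + z * math.sin(t)

def perspective(x, y, z, viewer_dist=10.0):
    """Perspective projection: points scale smaller as they recede, so
    on-page distances are no longer uniform with depth."""
    s = viewer_dist / (viewer_dist + z)
    return x * s, y * s

star = (3.0, 2.0, 5.0)          # (x, y, depth z) in map units
print(axonometric(*star))       # depth appears as a uniform offset
print(perspective(*star))       # same star, shrunk by its depth
```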

My first map was large and had a simple design. It had a lot of stars (20 LY radius) and used an axonometric projection but it was not much different from the free maps that were available. Looking at it, I saw the problems a map reader would have in using it. These were things my latest map tries to correct, as I elaborate on later.

The main thing I didn’t like about it, however, was that it was not good art, and not good star map art either. It just wasn’t achieving my goal of being something a lot of people would consider putting on their wall. I have already admitted that being an artist is not one of my strong points. However, there are star maps out there that look good. So I tried to see what the nicest ones had in common and tried to make something different as I moved on to my next star map iteration.

The Art in the Science

A map is like a scientific hypothesis. It attempts to model a physical aspect of the natural world and then it is tested through observation and measurement. This means that map designs have to be based on logical and testable methods. But unlike other sciences, cartography can use art to elaborate on those methods. Therefore, things like color and graphic design and the artistic beauty of the map are important factors.

The art in star maps can go beyond the intended graphical enhancements. Looking at the night sky gives people a sense of wonder. Star maps, if done right, can capture that sense of wonder. Most maps have minimalist designs that seem to do this best – circles and lines amidst white dots on a dark background. Fantasy star map art goes a bit farther in design and creativity, but such maps are always recognized as having to do with that wonder. However, I think the more minimalist ones show the mystery better. Below is a minimalist fantasy star map I made, without any 3D aspect, that has some of the aesthetics I like.

Another map that has captured this quality of mystery is called ‘Star Chart for Northern Skies.’ Published by Rand McNally in 1915, it is a sky map of the northern hemisphere, one that interior designer Thomas O’Brien referenced in the magazine Elle Décor and later discussed in one of his books. It is a real map intended for reference and perhaps to decorate classroom walls. It is one of those maps whose simple style nevertheless gives that sense of wonder. Although having a star map in an astronomy magazine is great, Elle Décor reaches a much wider audience of people who might never have given this subject a single thought. This minimalist style of star map art is what I wanted for my own map.

Image: The Rand McNally star chart as seen displayed in Thomas O’Brien’s home. Credit: Thomas O’Brien.

Making a minimalist-art 3D star map demands a projection system that has some visual way to show depth without a lot of lines drawn on it. The axonometric and perspective projections seemed to me to require too much background graphics. So I had to come up with something a bit different from what has been used before.

Eventually I came up with what became the 3D depth graphics of the map I call the Astrocartica (shown below). The graphics visually show the Z dimension of the star systems on it, though not as intuitively as the other projections I have mentioned. You can use the map to find accurate distances between star systems with a ruler, without resorting to a calculator or even doing arithmetical estimating in your head.

I have found since making the Astrocartica that more than a few people have difficulty seeing how the graphics work to show the third dimension. Possibly this is because the system I use is unfamiliar to most people and my illustrations and efforts to explain it do not seem to be sufficient. So, I decided to make another star chart that was more user friendly.

Image: The Astrocartica Interstellar Chart

The Science in the Art

A book called Envisioning Information by Edward Tufte (Graphics Press, 1990) has greatly influenced my star map designs. Tufte is an expert in his field of behavioral science, and his book covers many aspects of data presentation, such as layering and color. I read the book while doing the Astrocartica star map, and what I took away was just how much art and science need to work together for good information display. I can’t say I succeeded completely in applying every principle in the book, but Tufte did make me aware of what a good design for readability needs to accomplish.

In the Astrocartica map, I emphasized the art. But in my next chart I wanted to emphasize readability, because of my experience with my other maps. Since the depth projection was the chief issue with the Astrocartica, I considered the proven ways to illustrate 3 dimensions on a flat surface. The parallel and perspective projections were designed to work with the visual cues humans use, and they have been employed for centuries. So I decided to use one of them. I felt perspective is more appropriate for pictures and paintings, and that for a map, the scaling of objects in a perspective illustration makes it hard to measure accurate distances without doing arithmetic. I chose axonometric because its scales are not as inconsistent as perspective’s. Also, parallel projection is often used for scenes in computer games and is familiar to people as a way to illustrate 3D on a flat computer screen.

My first map had a single plane. This put some stars that were far from the plane on the end of long extenders that made reading their X-Y position awkward. Also, having all the extenders on the one plane made it crowded and hard to differentiate them. So, I tried maps with several planes and settled on a design with three.

I also put several visual cues on the new map, called the Interstellar Surveyor, to show differences in the local space cosmography. First, grid lines close to the stars are drawn in a lighter color that stands out, while lines farther away are darker; this reveals the areas of local space that are relatively empty. Also, the edges of the portions of the planes that contain stars are drawn thicker than the portions that do not.

One thing I did carry over from the Astrocartica is the star system graphics, which show distances within a star system on a scale that increases by a factor of 10 with each step away from the primary. I tilted the system plane to match the main planes for the sake of the art.

I think, overall, the Interstellar Surveyor map is more user-friendly than the Astrocartica or the first map I attempted.

Image: The Interstellar Surveyor Chart.

Astrocartics

I think the cartography of outer space is a long-neglected field of study. Cartography has a unique connection with the human understanding of the world around us, serving our rationalizing and aesthetic senses at the same time. That alone makes it worth developing and expanding. But while cartography has long been engaged in studying the Earth, it has barely explored the rest of the universe, even though it could.

In every field of study based on science there are logical principles. General cartography divides map projections of the Earth by how they transform areas and distances. Maps of outer space likewise show differences and patterns that can be organized into categories. Here is a possible system to categorize and (more importantly) understand maps of outer space. It puts the different projection systems into categories based on how the X-Y plane of a map is rotated in 3-space; a toy sketch after the list illustrates the idea.

Type I – Planes are parallel to the viewing plane – these would be flat maps with little 3D relief.

Type II – Planes are at an angle between parallel and perpendicular – these would use mostly axonometric projections for 3D relief.

Type III – Planes are perpendicular – for deep field maps and art.
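As a toy illustration of my own (nothing in the scheme beyond the three types above), the classification reduces to the tilt of the map’s X-Y plane relative to the viewing plane:

def astrocartic_type(tilt_deg):
    # tilt_deg: angle between the map's X-Y plane and the viewing plane
    t = abs(tilt_deg) % 180
    t = min(t, 180 - t)   # fold to the 0-90 degree range
    if t < 5:             # near-parallel: flat map, little 3D relief
        return "Type I"
    if t > 85:            # near-perpendicular: deep-field maps and art
        return "Type III"
    return "Type II"      # in between: axonometric-style 3D relief

print(astrocartic_type(0), astrocartic_type(35), astrocartic_type(90))
# -> Type I Type II Type III

The 5- and 85-degree cutoffs are arbitrary placeholders; the point is only that the tilt angle alone determines the type.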

There is one other thing – the name of the field. The term “stellar cartography” would be the first choice, but it is deeply rooted in sci-fi and too often pictured as the map room of a starship. That is not necessarily a bad thing, but there is a further problem: mapping outer space involves more than just stars – there are nebulas, planets with moons, and perhaps other kinds of space terrain. Outer space mapping is more than “stellar.” The word “astrocartography” is another logical choice, but it has been used as a label for a branch of astrology for many decades now, and the word is trademarked. The word “astrography” already belongs to astro-photography.

The word “astrocartics” means charts of outer space. It comes from the word “astro” which means “star” but can also mean “outer space” and the word “carte” which is an archaic form of the word “chart” or “map”. So this is my proposal for a name. Also, I think it sounds cool.

Well, I think I have said all that I can about astrocartics. I am going to be making new maps. There will be an Astrocartica 2.0, a solar system map with unique graphics and a map that might be considered astro-political in essence. I will also be adding a blog to my store in a few months. I am willing to talk to anyone who wishes about space maps.


Are Supernovae Implicated in Mass Extinctions? 14 Mar 8:00 AM (17 days ago)


As we’ve been examining the connections between nearby stars lately and the possibility of their exchanging materials like comets and asteroids with their neighbors, the effects of more distant events seem a natural segue. A new paper in Monthly Notices of the Royal Astronomical Society makes the case that at least two mass extinction events in our planet’s history were forced by nearby supernova explosions. Yet another science fiction foray turned into an astrophysical investigation.

One SF treatment of the idea is Richard Cowper’s The Twilight of Briareus, a central theme of which is the transformation of Earth through just such an explosion. Published by Gollancz in 1974, the novel is a wild tale of alien intervention in Earth’s affairs triggered by the explosion of the star Briareus Delta, some 130 light years out, and it holds up well today. Cowper is the pseudonym of John Middleton Murry Jr., an author I’ve tracked since this novel came out and whose work I occasionally reread.

Image: New research suggests at least two mass extinction events in Earth’s history were caused by a nearby supernova. Pictured is an example of one of these stellar explosions, Supernova 1987a (centre), within a neighbouring galaxy to our Milky Way called the Large Magellanic Cloud. Credit: NASA, ESA, R. Kirshner (Harvard-Smithsonian Center for Astrophysics and Gordon and Betty Moore Foundation), and M. Mutchler and R. Avila (STScI).

Nothing quite so exotic is suggested by the new paper, whose lead author, Alexis Quintana (University of Alicante, Spain) points out that supernova explosions seed the interstellar medium with heavy chemical elements – useful indeed – but can also have devastating effects on stellar systems located a bit too close to them. Co-author Nick Wright (Keele University, UK) puts the matter more bluntly: “If a massive star were to explode as a supernova close to the Earth, the results would be devastating for life on Earth. This research suggests that this may have already happened.”

The conclusion grows out of the team’s analysis of a spherical volume some thousand parsecs in radius around the Sun – about 3,260 light years, a spacious volume indeed – within which the authors catalogued 24,706 O- and B-type stars. These are massive and short-lived stars often found in what are known as OB associations, clusters of young, unbound objects that have proven useful to astronomers in tracing star formation at the galactic level. The massive O-type stars are considered supernova progenitors, while B stars range more widely in mass, though the larger B stars also end their lives as supernovae.
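For scale, a back-of-envelope calculation of my own (not from the paper) on the census volume and the mean OB-star density it implies:

import math

PC_IN_LY = 3.2616        # light years per parsec
radius_pc = 1000.0       # survey radius, parsecs
n_stars = 24_706         # catalogued O- and B-type stars

volume_pc3 = (4 / 3) * math.pi * radius_pc**3
print(f"radius ≈ {radius_pc * PC_IN_LY:,.0f} light years")
print(f"mean density ≈ {n_stars / volume_pc3:.1e} OB stars per cubic parsec")
# -> about 3,262 light years, and roughly 5.9e-06 OB stars per pc^3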

Making a census of OB objects has allowed the authors to pursue the primary reason for their paper, a calculation of the rate at which supernovae occur within the galaxy at large. That work has implications for gravitational wave studies, since supernova remnants like black holes and neutron stars and their interactions are clearly germane to such waves. The distribution of stars within 1 kiloparsec of the Sun shows stellar densities that are consistent with what we find in other associations of such stars, and thus we can extrapolate on the basis of their behavior to understand what Earth may have experienced in its own past.

Earlier work by other researchers points to supernovae that have occurred within 20 parsecs of the Sun – about 65 light years. A supernova explosion some 2-3 million years ago lines up with marine extinction at the Pliocene-Pleistocene boundary. Another may have occurred some 7 million years ago, as evidenced by the amount of interstellar iron-60 (⁶⁰Fe), a radioactive isotope detected in samples from the Apollo lunar missions. Statistically, it appears that one OB association (known as the Scorpius–Centaurus association) has produced on the order of 20 supernova explosions in the recent past (astronomically speaking), and HIPPARCOS data show that the association’s position was near the Sun’s some 5 to 7 million years ago. These events, indeed, are thought by some to have produced the so-called Local Bubble, the low density cavity in the interstellar medium within which the Solar System currently resides.

Here’s a bit from the paper on this:

An updated analysis with Gaia data from Zucker et al. (2022) supports this scenario, with the supernovae starting to form the bubble at a slightly older time of ∼14 Myr ago. Measured outflows from Sco-Cen are also consistent with a relatively recent SN explosion occurring in the solar neighbourhood (Piecka, Hutschenreuter & Alves 2024). Moreover, there is kinematic evidence of families of nearby clusters related to the Local Bubble, as well as with the GSH 238+00 + 09 supershell, suggesting that they produced over 200 SNe [supernovae] within the last 30 Myr (Swiggum et al. 2024).

But the authors of this paper focus on much earlier events. Earth has experienced five mass extinctions, and there is coincidental evidence for the effects of a supernova in the late Devonian and Ordovician extinction events (372 million and 445 million years ago respectively). Both are linked with ozone depletion and mass glaciation.

A supernova going off within a few hundred parsecs of Earth would have atmospheric effects, but little of lasting significance. The authors point out, though, that if we close the range to 20 parsecs, things get more deadly. Destruction of the Earth’s ozone layer would be likely, with mass extinctions a probable result:

Two extinction events have been specifically linked to periods of intense glaciation, potentially driven by dramatic reductions in the levels of atmospheric ozone that could have been caused by a near-Earth supernova (Fields et al. 2020), specifically the late Devonian and late Ordovician extinction events, 372 and 445 Myr ago, respectively (Bond & Grasby 2017). Our near Earth ccSN [core-collapse supernova] rate of ∼2.5 per Gyr is consistent with one or both of these extinction events being caused by a nearby SN.
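As a rough plausibility check of my own – not a calculation from the paper – treat near-Earth supernovae as a Poisson process at the quoted rate and ask how likely one or two events are over the interval back to the late Ordovician:

import math

rate = 2.5            # near-Earth core-collapse supernovae per Gyr (quoted)
t = 0.445             # Gyr elapsed since the late Ordovician extinction
lam = rate * t        # expected number of events in that window

p0 = math.exp(-lam)   # probability of no events
p1 = lam * p0         # probability of exactly one event
print(f"expected events: {lam:.2f}")
print(f"P(at least one) = {1 - p0:.2f}")        # ~0.67
print(f"P(at least two) = {1 - p0 - p1:.2f}")   # ~0.31

On these numbers, at least one deadly supernova in the window is more likely than not, which is the sense in which the quoted rate is ‘consistent’ with the extinction record.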

So this is interesting but highly speculative. The purpose of this paper is to examine a specific volume of space large enough to draw conclusions about a type of star whose fate can tell us much about supernova remnants. This information is clearly useful for gravitational wave studies. The supernova speculation in regard to extinction events on Earth is a highly publicized suggestion that grows out of this larger analysis. In other words, it’s a small part of a solid paper that is highly useful in broader galactic studies.

Supernovae get our attention, and of course such discussions force the question of what happens in the event of a future supernova near Earth. Only two stars – Antares and Betelgeuse – are likely to become supernovae within the next million years. As both are more than 500 light years away, the risk to Earth is minimal, although the visual effects should make for quite a show. And we now have a satisfyingly large distance between our system and the nearest OB association likely to produce any danger. So much for The Twilight of Briareus. Great book, though.

The paper is Van Bemmel et al., “A census of OB stars within 1 kpc and the star formation and core collapse supernova rates of the Milky Way,” accepted at Monthly Notices of the Royal Astronomical Society (preprint).


Quantifying the Centauri Stream 11 Mar 1:29 PM (20 days ago)


The timescales we talk about on Centauri Dreams always catch up with me in amusing ways. As in a new paper out of Western University (London, Ontario), in which astrophysicists Cole Gregg and Paul Wiegert discuss the movement of materials from Alpha Centauri into interstellar space (and thence to our system) in ‘the near term,’ by which they mean the last 100 million years. Well, it helps to keep our perspective, and astronomy certainly demands that. Time is deep indeed (geologists, of course, know this too).

I always note Paul Wiegert’s work because he and Matt Holman (now at the Harvard-Smithsonian Center for Astrophysics) caught my eye back in the 1990s with seminal studies of Alpha Centauri and the stable orbits that could occur there around Centauri A and B (citation below). That, in fact, was the first time that I realized that a rocky planet could actually be in the habitable zone around each of those stars, something I had previously thought impossible. And in turn, that triggered deeper research, and also led ultimately to the Centauri Dreams book and this site.

Image: (L to R) Physics and astronomy professor Paul Wiegert and PhD candidate Cole Gregg developed a computer model to study the possibility that interstellar material discovered in our solar system originates from the stellar system next door, Alpha Centauri. Credit: Jeff Renaud/Western Communications.

Let’s reflect a moment on the significance of that finding when their paper ran in 1997. Wiegert and Holman showed that stable orbits can exist within 3 AU of both Alpha Centauri A and B, and they calculated a habitable zone around Centauri A of 1.2 to 1.3 AU, with a zone around Centauri B of 0.73 to 0.74 AU. Planets at Jupiter-like distances seemed to be ruled out around Centauri because of the disruptive effects of the two primary stars; after all, Centauri A and B sometimes close to within 10 AU, roughly the distance of Saturn from the Sun. The red dwarf Proxima Centauri, meanwhile, is far enough away from both (13,000 AU) so as not to affect these calculations significantly.

But while that and subsequent work homed in on orbits in the habitable zone, the Wiegert and Gregg paper examines the gravitational effects of all three stars on possible comets and meteors in the system. The scientists ask whether the Alpha Centauri system could be ejecting material, analyze the mechanisms for its ejection, and ponder how much of it might be expected to enter our own system. I first discussed their earlier work on this concept in 2024 in An Incoming Stream from Alpha Centauri. A key factor is that this triple system is in motion towards us (it’s easy to forget this). Indeed, the system approaches Sol at 22 kilometers per second, and in about 28,000 years will be within 200,000 AU, moving in from its current 268,000 AU position.
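A quick unit check of my own on those figures:

AU_M = 1.496e11    # meters per astronomical unit
LY_M = 9.461e15    # meters per light year
YR_S = 3.156e7     # seconds per year

for au in (268_000, 200_000):
    print(f"{au:,} AU = {au * AU_M / LY_M:.2f} light years")

path_au = 22e3 * 28_000 * YR_S / AU_M
print(f"path covered at 22 km/s over 28,000 years ≈ {path_au:,.0f} AU")
# The distance to Alpha Centauri drops by only 68,000 AU while the system
# covers ~130,000 AU of path: much of its motion is transverse, not aimed
# straight at the Sun.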

This motion means the amount of material delivered into our system should be increasing over time. As the paper notes:

…any material currently leaving that system at a low speed would be heading more-or-less toward the solar system. Broadly speaking, if material is ejected at speeds relative to its source that are much lower than its source system’s galactic orbital speed, the material follows a galactic orbit much like that of its parent, but disperses along that path due to the effects of orbital shear (W. Dehnen & Hasanuddin 2018; S. Torres et al. 2019; S. Portegies Zwart 2021). This behavior is analogous to the formation of cometary meteoroid streams within our solar system, and which can produce meteor showers at the Earth.

The effect would surely be heightened by the fact that we’re dealing not with a single star but with a system consisting of multiple stars and planets (most of the latter doubtless waiting to be discovered). Thus the gravitational scattering we can expect increases, pumping a number of asteroids and comets into the interstellar badlands. The connectivity between nearby stars is something Gregg highlights:

“We know from our own solar system that giant planets bring a little bit of chaos to space. They can perturb orbits and give a little bit of extra boost to the velocities of objects, which is all they need to leave the gravitational pull of the sun. For this model, we assumed Alpha Centauri acts similarly to our solar system. We simulated various ejection velocity scenarios to estimate how many comets and asteroids might be leaving the Alpha Centauri system.”

Image: In a wide-field image obtained with a Hasselblad 2000 FC camera by Claus Madsen (ESO), Alpha Centauri appears as a single bright yellowish star at the middle left, one of the “pointers” to the star at the top of the Southern Cross. Credit: ESO, Claus Madsen.

This material is going to be difficult to detect, to be sure. But the simulations the authors used, developed by Gregg and exhaustively presented in the paper, produce interesting results. Material from Alpha Centauri should be found inside our system, with the peak intensity of arrival showing up after Alpha Centauri’s closest approach in 28,000 years. Assuming that the system ejects comets at a rate like our own system’s, something on the order of 10⁶ macroscopic Alpha Centauri particles should be currently within the Oort Cloud. But the chance of one of these being detectable within 10 AU of the Sun is, the authors calculate, no more than one in a million.

There should, however, be a meteor flux at Earth, with perhaps (at first approximation) 10 detectable meteors per year entering our atmosphere, most no more than 100 micrometers in size. That rate should increase by a factor of 10 in the next 28,000 years.

Thus far we have just two interstellar objects known to be from sources outside our own system, the odd 1I/’Oumuamua and the comet 2I/Borisov. But bear in mind that dust detectors on spacecraft (Cassini, Ulysses, and Galileo) have detected interstellar particles, even if detections of interstellar meteors are controversial. The authors note that this is because the only indicator of the interstellar nature of a particle is its hyperbolic excess velocity, which turns out to be very sensitive to measurement error.

We always think of the vast distances between stellar systems, but this work reminds us that there is a connectedness that we have only begun to investigate, an exchange of materials that should be common across the galaxy, and of course much more common in the galaxy’s inner regions. All this has implications, as the authors note:

…the details of the travel of interstellar material as well as its original sources remain unknown. Understanding the transfer of interstellar material carries significant implications as such material could seed the formation of planets in newly forming planetary systems (E. Grishin et al. 2019; A. Moro-Martín & C. Norman 2022), while serving as a medium for the exchange of chemical elements, organic molecules, and potentially life’s precursors between star systems—panspermia (E. Grishin et al. 2019; F. C. Adams & K. J. Napier 2022; Z. N. Osmanov 2024; H. B. Smith & L. Sinapayen 2024).

The paper is Gregg & Wiegert, “A Case Study of Interstellar Material Delivery: α Centauri,” Planetary Science Journal Vol. 6, No. 3 (6 March 2025), 56 (full text). The Wiegert and Holman paper, a key reference in Alpha Centauri studies, is “The Stability of Planets in the Alpha Centauri System,” Astronomical Journal 113 (1997), 1445–1450 (abstract).


Spaceline: A Design for a Lunar Space Elevator 7 Mar 1:29 AM (24 days ago)


The space elevator concept has been in the public eye since the publication of Arthur C. Clarke’s The Fountains of Paradise in 1979. Its pedigree among scientists is older still. With obvious benefits in terms of moving people and materials into space, elevators seize the imagination because of their scale and the engineering they would require. But we needn’t confine space elevators to Earth. As Alex Tolley explains in today’s essay, a new idea being discussed in the literature explores anchoring one end of the elevator to the Moon. Balanced by Earth’s gravity (and extending all the way into the domain of geosynchronous satellites), such an elevator opens the possibility of moving water and materials between Earth and a lunar colony, though the engineering proves as tricky as that needed for a system anchored on Earth. Is it even possible given the orbital characteristics of the Moon? Read on.

by Alex Tolley

Image: Space elevator connecting the moon to a space habitat. Credit: coolboy.

It is 2101, and the 22nd century has proven the skeptics wrong about space. Sitting comfortably in an Aries-1B moon shuttle on its way to Amundsen City in the Aitken basin at the lunar south pole, I am enjoying a bulb of hot Earl Grey tea. Sadly, without artificial gravity, no one has worked out how to dunk a digestive biscuit in hot drinks. The viewscreen shows the second movie of our almost 24-hour trip. Having been transferred from a spaceplane (far better than those giant VTOL rockets that reduced the cost of space access and paved the way for our multi-planetary advances), the moon-bound shuttle has crossed the Clarke orbit, past the remaining telecomm birds that haven’t been obsoleted by the swarms of comsats in low earth orbit (LEO).

A strange string of bright glints catches my eye. They are arrayed in an arrow-straight line, looking like dew on an invisible spiderweb seeking infinity. Nothing seems to move, the objects just apparently hanging in space. The flight attendant notices my apparent confusion. “What are they?” I ask as she crouches down beside me. “That is the newly operational Spaceline, a geosynchronous orbit to lunar surface space elevator. Those lights are the elevator cars carrying supplies to and from the Moon.” “But they are not moving,” I say. “They are moving, but too slowly to notice at this distance,” she replies, smiling, as this is probably a common observational mistake by passengers on the Moon run. “We should soon see the expanding set of facilities at the Earth-Moon Lagrange Point 1 (EML1), nearer the Moon. The captain usually puts a magnified image on the viewscreens as we pass.”

This post is about space elevators – not the familiar Earth-to-geosynchronous-orbit design, but the lesser known lunar space elevator (LSE), and in particular one that rises from the lunar surface and terminates somewhere between the Earth’s surface and the Earth-Moon Lagrange Point 1 (EML1).

Because the Moon is tidally locked to Earth, an LSE rising from its surface can hang toward the Earth, with the cable tension maintained by Earth’s gravitational pull. Deployment resembles that of the Earth space elevator (SE), which uses the Clarke orbit as a stable position keeping station over the same point on the Earth below; for the Moon, the equivalent deployment position is the gravitationally neutral EML1. From there the cable is unrolled towards the Moon’s surface, balanced by cable simultaneously unrolled towards Earth and pulled Earthward by gravity.

The idea of an LSE is not new and may even predate that of the Earth space elevator, with a 1910 note by Friedrich Zander. And unlike the Earth space elevator, which cannot be built because no available material can support its own mass, an LSE could in principle be built today from existing high-strength polymers like Zylon®. Pearson did early work on the LSE in 1979 and, with collaborators Levin, Oldson and Wykes, published a NASA report in 2005 covering two LSE cases: one passing through EML1, the other anchored on the lunar farside and passing through EML2 (using centrifugal force to tension the cable). These works and others demonstrated that an LSE could be built that would reduce the cost of transporting water and regolith to and from the Moon.

Prior work assumed that the cable would terminate on the Earthward side with a mass to provide the tension. The shorter the EML1-to-terminus length, the greater the required terminus mass, and with it the total mass of the LSE – a design tradeoff.

Image: The Lunar Space Elevator concept. A tapered cable from the Moon’s surface, through EML1 and terminating inside the geosynchronous orbit.

A 2019 arXiv preprint by Penoyre and Sandford adds their calculations for an LSE they call the “Spaceline”. The authors show that a cable between the Earth and the Moon can be created without any counterweight, with the end of the cable dipping inside the geosynchronous orbital height. The total length of their optimal design is about 340,000 km, with a total mass of 40,000 kg (40 tonnes); the deployment facility and payload carriers are extra mass. The total mass of the system would therefore be lower than that of a design with a shorter cable plus terminus mass. The authors also note that with the cable terminating inside geosynchronous orbit, it would be easier to reach from Earth, reducing transport costs further.
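As a back-of-envelope check of my own – the Zylon density is a standard published figure, not a number from the paper, and the real design tapers near EML1 – the quoted length and mass imply a remarkably thin cable:

import math

length = 340_000e3   # m, total cable length
mass = 40_000.0      # kg, quoted total cable mass
rho = 1_560.0        # kg/m^3, typical density of Zylon (PBO)

area = mass / (rho * length)               # implied uniform cross-section
diameter = 2 * math.sqrt(area / math.pi)   # equivalent circular diameter
print(f"area ≈ {area * 1e6:.3f} mm^2, diameter ≈ {diameter * 1e3:.2f} mm")
# -> about 0.075 mm^2: a thread roughly a third of a millimeter across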

As with prior work, this all-cable design ensures that maximum tension in the cable is at the EML1 point, and declines towards both ends. The design ensures that the cable cannot collapse onto the Moon, nor break under load and fall towards Earth.

Table 1. Materials with values for density (ρ), breaking stress (B), specific strength (S), and relative strength (α). Materials with a relative strength greater than 1.0 can be used in an LSE. Zylon has the highest relative strength among materials that can be mass-manufactured; carbon nanotubes can only be made in very short lengths at present. Source: Penoyre & Sandford.

This paper is primarily an exercise in designing an all-cable system using fixed-area cross-sections, tapered cross-sections, and a hybrid of the two to simplify manufacture, deployment, and operation. Figure 2 shows that a Zylon cable of uniform cross-section suffices for an LSE that neither collapses nor breaks. Their calculations show that the tapered design is the most mass-efficient, but the hybrid design, using mostly uniform cross-section cable, is a good compromise that simplifies manufacture and operation while still reaching geosynchronous orbit.

Image: The white area is the feasible space for a cable of uniform cross-section. With a relative strength above a critical value, a cable can be constructed that neither collapses back to the Moon nor breaks under its own mass. Zylon can achieve this, albeit not to the desired geosynchronous orbit height. Not shown are the cases for a tapered and a hybrid cable that can reach geosynchronous orbit. Source: modified from Penoyre & Sandford.

No attempt is made to account for the other masses to be supported, such as the crawlers that carry payloads, how many of them could be supported at once, or the deployment hardware that must be transported to EML1. The assumption is that scaling up the cable’s area, or the number of cables, will allow the payload masses to be carried.

The authors do not justify the construction of the cable beyond showing that the cost of delivering payloads to the Moon, as well as returning material to space or Earth, is significantly reduced using a cable compared to spacecraft requiring propellant to transport the payloads.

The cost savings are not new, and no doubt a cable would be built if there were no other issues. But as with the SE, some issues complicate the construction of an LSE. Other authors have analyzed the LSE in more detail, including survivability against space hazards [Eubanks], payload capability, crawler speed, ROI, and even the transport of materials between the lunar South Pole and the LSE base on the lunar surface [Pearson et al 2005].

So far so good. Penoyre and Sandford have shown that an unweighted cable can serve as an LSE stretching from the lunar surface to geosynchronous orbit, a mere 36,000 km from the Earth’s surface. Not quite the web between Earth and Moon in Aldiss’ Hothouse, but close. To reach the start of the cable, a spaceplane needn’t maintain geosynchronous orbit; it can fly a ballistic trajectory whose apogee reaches the cable terminus and be captured there, much as with a skyhook but with less difficulty.

But wait. Isn’t the paper skipping some important issues that could make this LSE impractical?

With the SE the geosynchronous orbit is circular. Once the orbital station is constructed and the cables reeled out to Earth and the counterweight, the system is very stable. This is not the case for the LSE.

The Moon’s orbit, with an eccentricity of 0.055, varies in distance from the Earth from 362,600 km at perigee to 405,400 km at apogee, a difference of 42,800 km. EML1 moves with the Moon, sitting at roughly 15% of the Earth-Moon distance from the Moon, so over each orbit its distance from the lunar surface varies by about 6,500 km while its distance from Earth varies by roughly the remaining 36,300 km (the sketch below makes the numbers concrete). Measured against the mean EML1-to-lunar-surface distance of about 57,000 km, these shifts mean the cable lengths on both sides of EML1 change continuously: the extra mass of cable on the Moon side at apogee must be balanced by extra cable paid out toward geosynchronous orbit, and vice versa at perigee. The base station at EML1 must therefore reel cable in and out continuously over the Moon’s orbital period. This is a dynamic process that cannot be allowed to fail, or the LSE will be destabilized and potentially break or collapse.
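Here is a minimal sketch of my own (not from any of the papers discussed) that locates EML1 by bisection on the rotating-frame force balance of the circular restricted three-body problem, evaluated at perigee, mean distance, and apogee:

GM_EARTH = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
GM_MOON = 4.9028e12      # Moon's gravitational parameter, m^3/s^2

def eml1_from_moon(d):
    # Distance of EML1 from the Moon's center for Earth-Moon separation d (m).
    mu = GM_MOON / (GM_EARTH + GM_MOON)   # Moon's mass fraction
    w2 = (GM_EARTH + GM_MOON) / d**3      # square of the orbital angular rate
    def f(r):  # net radial acceleration at distance r from Earth, rotating frame
        return GM_EARTH / r**2 - GM_MOON / (d - r)**2 - w2 * (r - mu * d)
    lo, hi = 0.5 * d, d - 1.0             # f > 0 at lo, f < 0 at hi
    for _ in range(60):                   # bisection to sub-meter precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return d - 0.5 * (lo + hi)

for label, d in [("perigee", 362.6e6), ("mean", 384.4e6), ("apogee", 405.4e6)]:
    print(f"{label}: EML1 is {eml1_from_moon(d) / 1e3:,.0f} km from the Moon's center")
# -> roughly 54,700 km, 58,000 km and 61,200 km: a swing of about 6,500 km
#    on the Moon side, leaving the remaining ~36,300 km of the orbital
#    variation on the Earth side of the cable.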

Eubanks [2016] calculated that micrometeoroid impacts would break a cable of uniform circular cross-section within hours, effectively severing the cable before it could even be fully deployed. The longer the cable, especially the long section between EML1 and geosynchronous orbit, the sooner a break would occur. A break would send the section attached to the lunar surface falling back onto the Moon, wrapping itself around the Moon as it continued its orbit.

The other section would fall towards Earth, crossing the lower orbits and probably having a perigee that would enter the Earth’s atmosphere and likely burn up on entry over some time. Eubanks calculated that making the cable with a flat cross-section would ensure a lifetime between possible breaks of 5 years. Penoyre and Sandford acknowledged the danger of such breaks and also suggested a flat cross-section, although this would be less effective where the cable tapered.

While the LSE route is relatively free of satellites and other artifacts, the question arises why the Earth terminus of the cable should sit inside the geosynchronous orbit at all. Geosynchronous satellites move at about 3 km/s relative to the end of the cable, posing a hazard to both satellites and cable. The hazard worsens if the cable length is not adjusted quickly enough, since the cable would then dip deeper into the satellite orbits, with correspondingly higher impact velocities and more possible impacts. All this for the advantage of easier access to the end of the cable.

While accepting that cables with moving payloads are a cheaper way to transport material to and from the lunar surface, the speed at which these payloads can be moved is also relevant. By analogy, walking across a country might be the cheapest form of travel, but it is far slower than powered transport, and time matters for commercial transport.

Various authors have assumed different travel speeds on the cable, up to 3,600 km/h (1.0 km/s) [Radley 2017]. A more realistic speed might be 100 km/h. At this speed, a payload would take about 20 weeks to travel from geosynchronous orbit to the lunar surface. This is of the same order as the time to reach Mars on a Hohmann transfer, and similar to transoceanic voyages in the age of square-rigged sailing ships. It is entirely unsuited to transporting humans, or goods that can perish or be damaged by radiation from the solar wind or galactic cosmic rays, but it would suit bulk materials and equipment.

If the cable were a constant-area flat ribbon, probably woven into a Hoytether [Eubanks 2016, Radley 2017], the payloads might not need to be self-powered but could simply be attached to the cable and moved like a cable car. Transport from the lunar surface to a station at EML1 would take about 3-4 weeks. People, and some food supplies, would therefore still need to be ferried by rocket to and from the Moon.
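A quick check of these transit times, under the illustrative assumptions of a ~340,000 km GEO-to-Moon cable, a ~57,000 km surface-to-EML1 leg, and a 100 km/h crawler:

for label, km in [("GEO terminus to lunar surface", 340_000),
                  ("lunar surface to EML1", 57_000)]:
    hours = km / 100.0
    print(f"{label}: {hours / 24:.0f} days (~{hours / (24 * 7):.1f} weeks)")
# -> about 142 days (~20 weeks) and 24 days (~3.4 weeks)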

While we think of the Moon as locked tightly facing the Earth, in practice it has a libration that moves the relative position of the lunar surface sideways, back and forth, over the month. This would send waves up the cable, with a velocity dependent on the cable design; the dynamics of this oscillation would need to be investigated. Similarly, while Coriolis forces do not affect a static LSE, they will be a factor for carriers moving along the cable, which spans speeds from low velocity near geosynchronous orbit up to the Moon’s orbital velocity of about 1 km/s – an order of magnitude higher than at the Earth terminus of the cable. These forces will need to be assessed for their effect on cable dynamics.

The authors also state that a base station of potentially immense size could be positioned at EML1, where the cable would be deployed. EML1 would be a convenient place to expand facilities, making use of the zero gravity at that point. But while Arthur Clarke had a manned telescope at a Lagrange point in his novel A Fall of Moondust, EML1 is not a stable attractor; it is an unstable equilibrium, and the Moon’s eccentric orbit makes matters worse by continuously shifting the point’s position. This is why the proposed Lunar Gateway space station is not placed at EML1 but rather in a Near Rectilinear Halo Orbit (NRHO) that requires far less fuel for station keeping. While the idea of a station at EML1 sounds attractive, it might be a costly facility to maintain, even with the advantage of having a cable to adjust its position against.

Despite these caveats, there are good scientific and potential commercial reasons for reducing the cost of transporting mass to and from the Moon, and for maintaining a facility close to EML1. These have been explained in more detail in [Pearson 2005, Eubanks 2016]. I would add, as a fan of solar and beamed sails, that this could be a good place to deploy and launch such sails: there are no satellite hazards to navigate, and the low-to-zero gravity would let sails reach escape velocity without the slow spiral out from Earth required if they started in LEO.

I propose that rather than having a cable reach geosynchronous orbit, it might be better to have a shorter weighted cable as proposed by Pearson even at the cost of a greater total mass of the LSE. Used in combination with a rotating tether in LEO (skyhook), transport to the tether could be achieved with an aircraft or suborbital rocket, attaching the payload to the skyhook, and having it launched into a high orbit to reach the end of the LSE. This would have to be a well-coordinated maneuver, reducing the costs even further albeit with the potential problems of skyhook and satellite impacts, especially with satellite swarms in LEO.

Our journey to the Moon has nearly ended. On the surface, I can see the long tracks of what look like monorail lines. They were designed to launch packages of regolith or basic metal components into space, as originally envisaged by Gerard O’Neill in the late 20th century, to construct space solar power satellites and habitats. China’s AE Corp’s Spaceline (太空线) proved more economical, obsoleting the mass drivers’ original purpose. They were repurposed to accelerate probes, brought up from the Earth to the Moon, into deep space. One day their larger descendants will launch crewed spaceships on their journeys to the planets.

References

Penoyre, Z., & Sandford, E. (2019). The Spaceline: A Practical Space Elevator Alternative Achievable With Current Technology. https://arxiv.org/abs/1908.09339

Pearson, J., et al. (2005). Lunar Space Elevators For Cislunar Space Development. Phase I Final Technical Report.

Eubanks, T. M. (2013). A space elevator for the far side of the Moon. Annual Meeting of the Lunar Exploration Analysis Group, 1748, 7047. http://ui.adsabs.harvard.edu/abs/2013LPICo1748.7047E/abstract

Eubanks, T. M., & Radley, C. F. (2017). Extra-Terrestrial space elevators and the NASA 2050 Strategic Vision. Planetary Science Vision 2050 Workshop, 1989, 8172. https://ui.adsabs.harvard.edu/abs/2017LPICo1989.8172E/abstract

Eubanks, T. M., & Radley, C. F. (2016). Scientific return of a lunar elevator. Space Policy, 37, 97–102. https://doi.org/10.1016/j.spacepol.2016.08.005

Radley, C. F. (2017). The Lunar Space Elevator, a near term means to reduce cost of lunar access. 2018 AIAA SPACE and Astronautics Forum and Exposition. https://doi.org/10.2514/6.2017-5372


A New Class of Interstellar Object? 5 Mar 11:09 AM (26 days ago)


Peculiar things always get our attention, calling to mind the adage that scientific discovery revolves around the person who notices something no one else has and says “That’s odd.” The thought is usually ascribed to Asimov, but there is evidently no solid attribution. Whoever said it in whatever context, “that’s odd” is a better term than “Eureka!” to describe a new insight into nature. So often we learn not all at once but by nudges and hunches.

This may be the case with the odd objects turned up by the Japanese infrared satellite AKARI in 2021. Looking toward the Scutum-Centaurus Arm along the galactic plane, the observatory found deep absorption bands of the kind produced by interstellar dust and ice. No surprise that a spectral analysis revealed water, carbon dioxide, carbon monoxide and organic molecules, given that interstellar ices in star-forming regions are rich in these chemicals. The ‘odd’ bit is that the two objects showing these bands are a long way from any such region.

Image: Molecular emission lines from mysterious icy objects captured by the ALMA telescope. The background image is an infrared composite color map, where 1.2-micron light is shown in cyan and 4.5-micron light is in red, based on infrared data from 2MASS and WISE. Credit: ALMA (ESO/NAOJ/NRAO), T. Shimonishi et al. (Niigata Univ).

Interstellar ices are produced as submicron-sized dust grains, rich in carbon, oxygen, silicon, magnesium and iron, gather materials that adhere to their surfaces in cold and dense regions of the galaxy. Such ices are thought to be efficient at producing complex organic molecules, more so than chemical reactions that form in gaseous states, so they’re of high astrobiological interest.

The team performed follow-up observations using ALMA (this is the Atacama Large Millimeter/submillimeter Array in Chile) at a wavelength of 0.9 mm, useful because radio, as opposed to infrared, can be used to analyze the motion and composition of such gases. The data showed that what is being observed doesn’t exhibit the characteristics of any previously known interstellar objects in the vicinity of such ices. Instead, the researchers found molecular emission lines of carbon monoxide and silicon monoxide distributed in a tight region of less than one arcsecond. The expected submillimeter thermal emission from interstellar dust was not detected.

So what’s going on here? Takashi Shimonishi, an astronomer at Niigata University, Japan and lead author of the paper on this work, notes that the two objects are roughly 30,000 to 40,000 light years away. Interestingly, they show different velocities, indicating that they’re distinct and moving independently. Says the scientist:

“This was an unexpected result, as these peculiar objects are separated by only about 3 arcminutes on the celestial sphere and exhibit similar colors, brightness, and interstellar ice features, but they are not linked [to] each other.”

Let’s take a closer look at the odd energy distribution here. We would expect objects surrounded by ices to be embedded in interstellar dust, which should make for a bright signal in the far-infrared to submillimeter wavelength range. But ALMA detected no submillimeter radiation from either object. No previously known icy objects correspond to this signature.

Image: Energy distribution of one of the mysterious icy objects (black) compared with those of known interstellar icy objects. Interstellar ices are detected in protostars (green), young stars with protoplanetary disks (cyan), and mass-losing evolved stars (brown), but the spectral characteristics of the mysterious icy object do not match any of these known sources. Credit: T. Shimonishi et al. (Niigata Univ).

We learn from the paper that strong shockwaves seem to have disrupted interstellar dust in these bodies, based on the ratio of silicon monoxide to carbon monoxide. And the final oddity, at least so far: The size of the gas and dust clouds associated with these objects – determined by comparison of ALMA emission data with the AKARI absorption data – shows that both range from 100 to 1000 AU, which makes them compact in relation to typical molecular clouds.

So we have objects that don’t correspond to stars in the early stages of formation or stars shielded by dense molecular clouds. We seem to be looking at a new class of interstellar object altogether. The paper concludes:

These characteristics, i.e., (i) rich ice-absorption features, (ii) large visual extinction, (iii) lack of mid-infrared and submillimeter excess emission, (iv) very compact source size, (v) SiO-dominated broad molecular line emission [silicon monoxide], and (vi) isolation, cannot easily be accounted for by any of known interstellar icy sources. They may represent a previously unknown or rare type of isolated icy objects. Future high-spatial-resolution and high-sensitivity observations as well as detailed SED modeling is required. An upcoming near-infrared spectroscopic survey with SPHEREx (M. L. N. Ashby et al. 2023) may detect more similar sources.

Let’s hope so, because insight into oddities is a key part of interstellar exploration. Clearly we haven’t heard the last of these mysterious bodies.

The paper is Takashi Shimonishi et al., “ALMA Observations of Peculiar Embedded Icy Objects,” Astrophysical Journal 981 (2025), 49 (full text).


Shaping the Sail: Metamaterials and the Manipulation of Light 3 Mar 1:37 AM (28 days ago)


Experimenting on beamed energy and sailcraft is no easy matter, as I hope the previous post made clear. Although useful laboratory experiments have been run, the challenges involved in testing for sail stability under a beam and sail deployment are hard to surmount in Earth’s gravity well. They’re also demanding in terms of equipment, for we need both the appropriate lasers and vacuum chambers that can approximate the conditions the sail will be subjected to in space. But this space is being explored now more than ever before. Jim Benford has pointed me to an excellent bibliography on lightsail studies at Caltech that I recommend to anyone interested in following this further.

When I said we were in the early days of sail experimentation, I was drawing your attention to the fact that we’re only now learning how to produce and manipulate the metamaterials – structures that have electromagnetic properties beyond those we find in naturally occurring materials – that may be our best choices for sail material. Here I’m looking at a paper I cited last time, by Jadon Lin (University of Sydney) and colleagues. Lin points out that we need to put any sail materials through a battery of tests. Let me quote the paper on this, as the authors sum it up better than I could:

To name a few, tests are needed for: more complete characterization of linear and nonlinear optical properties of candidate materials over the broad NIR-Doppler band and MIR band [Near-Infrared and Mid-Infrared] and over the full range of temperatures a sail may encounter; initially small scale, then large scale nanostructure fabrication followed by complete optical characterization of their scattering; structural tests and; direct radiation-pressure measurements. In particular, low defect, high purity synthesis and nanostructuring of materials over square meter scales will require substantial technological advancements, especially for the intricate designs found by inverse design that are often unintuitive.

Image: An artist’s conception of a lightsail during acceleration by a ground-based laser array. Here the sail appears curved, as many early studies had indicated. Now we’re learning that flat sails have a new life of their own. Read on. Image credit: Masumi Shibata/Breakthrough Initiatives.

Quiet Revolution

Let’s back out to the big picture for a moment, because materials science is moving so rapidly. Early sail analysis went from flat sails to rotating parabolic shapes, hoping to find a way to keep a sail inundated with a 50 GW beam stable under acceleration. The quiet revolution now emerging with metamaterials is that we are returning to the flat sail. Instead of a shaped reflective surface, we envision using properties of the sail material itself, including photonic metagratings. Unlike a curved, lens-like surface, these are ‘scatterers’ at the nano-scale that direct and shape incoming light, and recent research shows they can produce restoring forces and torques that keep a flat sail stable.
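To see why a restoring force matters, here is a deliberately crude toy model of my own devising: a sail nudged off the beam axis, with an assumed linear restoring force and damping standing in for the real metagrating physics. Nothing here comes from the papers; it only illustrates the behavior the metagratings are designed to produce:

k = 0.5       # restoring stiffness per unit mass, 1/s^2 (assumed)
c = 0.2       # effective damping per unit mass, 1/s (assumed)

x, v, dt = 1.0, 0.0, 0.01   # start 1 m off the beam axis, at rest
for _ in range(100_000):    # integrate 1,000 seconds of motion
    a = -k * x - c * v      # lateral acceleration from beam scattering
    v += a * dt
    x += v * dt
print(f"offset after 1000 s: {x:.4f} m")   # decays toward zero

Without the restoring term, any lateral drift would carry the sail out of the beam; with it, the sail oscillates back toward the axis and settles.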

We need to evaluate candidate materials and the methods of fabrication that will produce them, and then factor all of this into the design of the actual sail membrane. It’s telling that, according to Lin’s paper, the flexibility of the membrane and its relation to sail stability needs extensive testing, and so does the interplay of special relativity and sail dynamics, which must be considered at the velocities an interstellar sail would reach. Gratings and metastructures, the authors believe, are the best ways to cope with distorted sail shapes and relativistic effects. To some extent we can test these matters at the interplanetary level with small sails in space.

Let’s now wind this back to the Breakthrough Starshot concept, announced in 2016 and pursued through sail studies in the following years. Here the idea is to use a ground-based laser array to push tiny payloads attached to lightsails up to 20 percent of the speed of light, making the journey to the nearest stars a matter of two decades. The technology could be applied to various systems, but of course Alpha Centauri is the obvious first candidate, with Proxima Centauri b, in the habitable zone, a prime target.

Harry Atwater, a professor of applied physics and materials science at Caltech, has been at the forefront of sail research, focusing on the ultrathin membranes that will have to be developed to make such journeys possible. He and his colleagues have developed a platform for studying sail membranes that can measure the force that lasers exert on such sails, an example of the movement from theory to laboratory observation and measurement. Atwater sees the matter this way:

“There are numerous challenges involved in developing a membrane that could ultimately be used as lightsail. It needs to withstand heat, hold its shape under pressure, and ride stably along the axis of a laser beam. But before we can begin building such a sail, we need to understand how the materials respond to radiation pressure from lasers. We wanted to know if we could determine the force being exerted on a membrane just by measuring its movements. It turns out we can.”

Image: Caltech’s Harry Atwater. Credit: California Institute of Technology.

The Sail as Trampoline

The team’s paper on these early measurements of radiation pressure on lightsail materials appears in Nature Photonics (citation below). To measure these forces, the team creates a lightsail in miniature, tethered at the corners within a larger membrane. Electron beam lithography is used to craft a silicon nitride membrane a scant 50 nanometers thick, producing the result seen in the image below. As of now, silicon nitride seems to have the inside track as the leading material candidate.

The Caltech researchers note their experiment’s similarity to a tiny trampoline – the membrane is a square some 40 microns on a side and, as the Caltech materials show, it is suspended at the corners by silicon nitride springs. So we have a tiny lightsail tethered within a larger membrane as the subject of our tests.

Image: A microscope image of the Caltech team’s “miniature trampoline,” a tiny lightsail tethered at the corners for direct radiation pressure measurement. Credit: Harry Atwater/Caltech.

The method is to subject the membrane to argon laser light at a visible wavelength, measuring the radiation pressure it experiences through its effects on the motion of the ‘trampoline’ as it moves up and down. The tethering of the sail is itself a challenge, with the sail acting as a mechanical resonator that vibrates as the light hits it. Crucial to the measurement is separating the heating effects of the laser beam from the direct push of radiation pressure. Ingeniously, the researchers quantified the motion induced by these long-range optical forces, measuring both the force exerted and the power of the laser beam.

This is intriguing stuff. A lightsail in space is not going to stay perpendicular to a laser source beaming up from Earth, so the measurements have to angle the laser beam in various ways to approximate this effect. The tiny signal from the motion of the lightsail material is isolated through the use of a common-path interferometer that effectively screens out environmental noise. The interferometer was integrated into the microscope being used to examine the sail, with the whole apparatus contained within a vacuum chamber. The result: displacements down to the level of picometers could be detected, and the mechanical stiffness inferred from the motion of the springs as the sail was pushed by radiation pressure from the laser.
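For a sense of the magnitudes, here is a toy calculation of my own using the textbook photon-momentum relation F = (1 + R)P/c; the laser power and spring stiffness are assumed values, not the paper’s:

C = 2.998e8    # speed of light, m/s
P = 1e-3       # laser power intercepted by the sail, W (assumed)
R = 1.0        # reflectivity (perfect mirror assumed)

F = (1 + R) * P / C   # P/c of momentum flux absorbed, another R*P/c reflected
k = 1e-2              # spring stiffness, N/m (assumed)
dz = F / k            # static deflection of the spring-suspended membrane
print(f"force ≈ {F * 1e12:.1f} pN, deflection ≈ {dz * 1e12:.0f} pm")
# -> about 6.7 pN and a deflection of a few hundred picometers

Forces of piconewtons and displacements of picometers are exactly the regime in which separating radiation pressure from thermal effects becomes the hard part of the measurement.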

Image: This is Figure 1 from the paper. Caption: From interstellar lightsails to laboratory-based lightsail platforms. a, Concept of laser-propelled interstellar lightsail of 10 m2 in area and 100 nm or less in thickness. b, Laboratory-based lightsail platforms relying on edge-constrained silicon nitride membranes (left), linearly tethered membranes (middle) and spring-supported membranes (right). Removing the edge constraint allows to decouple the effects of optical force and membrane deformation, model lightsail dynamics, and study optical scattering from the edges. Suspending lightsails by compliant serpentine springs rather than linear tethers significantly increases its mechanical susceptibility to laser radiation pressure of the same power P, resulting in larger out-of-plane displacement Δz for more precise detection. Credit: Michaeli et al.

The crucial issue of stability takes in what happens when the laser beam arrives at an angle, with some of the beam missing the sample, or hitting the edge of the sail and scattering. It will be imperative to control any sideways motion and rotation in the sail once it is under the beam, through careful crafting of the metamaterials from which it is made. Co-author Ramon Gao, a Caltech graduate student in applied physics, summarizes it this way:

“The goal then would be to see if we can use these nanostructured surfaces to, for example, impart a restoring force or torque to a lightsail. If a lightsail were to move or rotate out of the laser beam, we would like it to move or rotate back on its own. [This work] is an important stepping stone toward observing optical forces and torques designed to let a freely accelerating lightsail ride the laser beam.”

Thus we take an early step into the complexities of sail interaction with a laser. The paper presents the significance of this step:

Our observation platform enables characterization of the mechanical, optical, and thermal properties of lightsail prototype devices, thus opening the door for further multiphysics studies of radiation pressure forces on macroscopic objects. Additionally, photonic, phononic [sound-like waves traveling through a solid] or thermal designs tailored to optimize different aspects of lightsailing can be incorporated and characterized. In particular, characterizing and shaping optical forces with nanophotonic structures for far-field mechanical manipulation is central to the emerging field of meta-optomechanics, allowing for arbitrary trajectory control of complex geometries and morphologies with light. Laser-driven lightsails require self-stabilizing forces and torques emerging from judiciously designed metasurfaces for beam-riding. We expect that their direct observation is possible using our testbed, which is an important stepping stone towards the realization of stable, beam-riding interstellar lightsails, and optomechanical manipulation of macroscopic metaobjects.

We are, in other words, doing things with light that are far beyond what the early researchers into lightsails would have known about. I think about Robert Forward and Freeman Dyson at the 1980 JPL meeting I referred to last time, working out the math on an interstellar lightsail. Imagine what they would have made of the opportunity to use metamaterials and nanostructures to craft the optimum beam-rider. It’s heartening to see how the current effort under Harry Atwater at Caltech is progressing. Laboratory experimentation on lightsails builds the knowledge base that will ultimately help us craft fast sails for missions within the Solar System and one day to another star.

The Atwater paper on sail technologies is Michaeli et al., “Direct radiation pressure measurements for lightsail membranes,” Nature Photonics 30 April 2025 (abstract). Also referred to above is Lin et al., “Photonic Lightsails: Fast and Stable Propulsion for Interstellar Travel,” available as a preprint.


Experimenting on an Interstellar Sail 27 Feb 1:37 AM (last month)


The idea of beaming a propulsive force to a sail in space is now sixty years old, if we take Robert Forward’s first publications on it into account. The gigantic mass ratios necessary to build a rocket that could reach interstellar distances were the driver of Forward’s imagination, as he realized in 1962 that the only way to make an interstellar spacecraft was to separate the energy source and the reaction mass from the vehicle.

Robert Bussard knew that as well, which is why in more or less the same timeframe we got his paper on the interstellar ramjet. It would scoop up hydrogen between the stars and subject it to fusion. But the Bussard ramjet had to light fusion onboard, whereas a sail propelled by a laser beam – a lightsail – operated without a heavy engine. The idea worked on paper but demanded a laser of sufficient size (Forward calculated over 10 kilometers) to make it a concept for the far future. His solution demanded very large lasers in close solar orbits, and thus an existing system-wide space infrastructure.

Forward’s article “Pluto – Gateway to the Stars” ran in the journal Missiles and Rockets in April of 1962 (and would later be confused with a similarly titled article that ran in Galaxy that year, though without the laser sail concept). The beauty of the laser sail was immediately apparent – one of those insights that leave other theorists asking why they hadn’t come up with it – because a beamed sail greatly eases the inverse square law problem. Solar photons alone aren’t enough, since their flux decreases with the square of our distance from the Sun; make your laser powerful enough and its narrow beam can push much harder and further.
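Some illustrative numbers of my own, using the standard inverse-square and diffraction relations rather than anything from Forward’s article:

AU = 1.496e11    # meters per astronomical unit
LY = 9.461e15    # meters per light year

solar_flux_1au = 1361.0    # W/m^2, sunlight at Earth's distance
for r in (1, 10, 40):
    print(f"sunlight at {r:>2} AU: {solar_flux_1au / r**2:7.1f} W/m^2")

wavelength = 1e-6    # m, near-infrared laser (assumed)
aperture = 10e3      # m, a Forward-class 10 km emitter
for r in (1 * AU, 4.24 * LY):   # Earth's orbit; Alpha Centauri's distance
    spot = 2.44 * wavelength * r / aperture   # diffraction-limited spot diameter
    print(f"spot at {r / AU:>9,.0f} AU: {spot:,.0f} m")
# -> a ~37 m spot at 1 AU but nearly 10,000 km at Alpha Centauri: even a
#    10 km aperture spreads enormously over interstellar range, which is
#    why Forward's lasers had to be so large.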

Image: This is the original image from the Missiles and Rockets article. Caption: Theoretical method for providing power for interstellar travel is use of a very large Laser in orbit close to sun. Laser would convert random solar energy into intense, very narrow light beams that would apply radiation pressure to solar sail carrying space cabin at distances of light years. Rearward beam from Laser would equalize light pressure. Author Forward observes, however, that the Laser would have to be over 10 kilometers in diameter. Therefore other means must be developed.

All the work that began with Forward’s initial sail insights had been theoretical, with authors exploring laser concepts of varying sizes and shapes even as Forward offered fantastic mission designs that could take human crews to places like Epsilon Eridani while obeying the laws of physics. He believed a mission to Alpha Centauri could be launched as early as 1995, triggering interest from JPL’s Bruce Murray, who convened a workshop in 1980 to quantify Forward’s notions and find ways to return a payload to Earth. To my knowledge, the papers from this workshop have never been published, doubtless because the engineering demanded by such a mission was far beyond our reach. Still, it would be interesting to read the thoughts of workshop luminaries Freeman Dyson, Forward, Bussard and others on where we stood in that timeframe.

In 1999 NASA’s Advanced Concepts Office proposed a launch to Alpha Centauri in 2028, a notion that might have been furthered by Jim Benford and Geoffrey Landis’ proposal of using a carbon micro-truss (just invented in that year) that could withstand a microwave beam without melting. Here we begin to see actual laboratory experiments: in that same year Leik Myrabo subjected carbon micro-truss material to laser beam bombardment, measuring an acceleration of 0.15 gravities. See Benford’s A Photon Beam Propulsion Timeline for more on this period of sail laboratory work.

Image: Plan for the development of sails for interstellar flight, 1999. Credit: JPL/Caltech.

So laboratory work explicitly devoted to microwave- and laser-driven sails began 25 years ago and has lately resurfaced through work on sail materials that has developed through the Breakthrough Starshot initiative. Indeed, there are numerous recent papers scattered through the literature that we will be discussing in the future, some containing experimental results from Starshot-funded scientists. It would be helpful for the entire community if this work could be codified and presented in a single report.

But let’s go back to that early lab work. It was in April of 2000 that Benford showed, in experiments at JPL, that sails driven by a microwave beam could survive accelerations up to 13 gravities, while undergoing desorption when the sail reached high temperatures (desorption could have interesting propulsive effects of its own). The effects of spinning the sail were also examined, while Myrabo’s team in that same year experimented with carbon sails coated with molybdenum. By 2002, Benford and his brother Gregory demonstrated in work at UC-Irvine that a conical sail could be stable while riding a microwave beam.

While further work by Chaouki Abdallah’s team at the University of New Mexico produced simulations confirming the stability of conical sails under a microwave beam, interest in sails primarily focused on materials in the work of scientists like Gregory Matloff and Geoffrey Landis. Landis’ work on dielectric films for highly reflective sails was particularly significant as materials science kept coming up with interesting candidates; Matloff proposed graphene as a sail material able to sustain high accelerations in 2012, and the examination of metamaterials for the task continues.

When Philip Lubin’s team at UC-Santa Barbara began their work on small wafer-sized spacecraft, it fed into the concept of the Breakthrough Starshot initiative announced in 2016 (see Breakthrough Starshot: Early Testing of ‘Wafer-craft’ Design). Lubin’s work in turn grew out of the Project Starlight and DEEP-IN beamed energy studies his team pursued there, work that has now been collected in a two-volume set called The Path to Transformational Space Exploration.

A spacecraft on a chip can itself be a micro-sail, as Mason Peck (Cornell University) and team have pointed out in their examination of chips that could use solar photon pressure to move about the Solar System (see Beaming ‘Wafer’ Probes to the Stars). So the idea of miniaturizing a payload and exploiting the potential of laser beaming grafts readily onto the microchip research already underway. It’s interesting that the idea of incorporating the payload into the sail itself goes back to Robert Forward’s Starwisp concept, a kind of ‘smartsail’ whose surface contains the circuits that acquire data. Unfortunately, the Starwisp design had serious flaws, as Geoff Landis would later point out.

We’re still in the early stages in terms of laboratory work focused on sail materials for a lightsail that could carry any kind of payload. Let me quote an interesting new paper on this matter:

Most of the work discussed so far has been theoretical and numerical. Experimental verification of many aspects of lightsails, such as deployment and stability, are difficult to achieve in laboratories subject to Earth’s gravity, and may require extremely powerful lasers and extreme vacuum chambers. Many of the proposed structures are not yet able to be fabricated on the scales required, or rely on material properties that are insufficiently characterized. Thus, before full sails can be made, let alone tested, it is imperative that experimental characterizations that can be achieved on Earth be conducted.

This is from a paper by Jadon Lin (University of Sydney) and colleagues called “Photonic lightsails: Fast and Stable Propulsion for Interstellar Travel,” a preprint available here (thanks to Michael Fidler for the reference). We need to talk about the kind of tests needed, and I’ll begin with that next time. We’re headed for the interesting work performed at JPL under Harry Atwater that grows out of a concept some consider our best chance for reaching another star in this century.


Ernst Öpik and the Interstellar Idea 25 Feb 1:36 AM (last month)

Ernst Öpik and the Interstellar Idea

Some names seem to carry a certain magic, at least when we’re young. I think back to all the hours I used to haunt St. Louis-area libraries when I was growing up. I would go to the astronomy section and start checking out books until over time I had read the bulk of what was available there. Fred Hoyle’s name was magic because he wrote The Black Cloud, one of the first science fiction novels I ever read. So naturally I followed his work on how stars produce elements and on the steady state theory with great interest.

Willy Ley’s name was magic because he worked with Chesley Bonestell (another magic name) producing The Conquest of Space in 1949, and then the fabulous Rockets, Missiles, and Space Travel in 1957, a truly energizing read. Not to mention the fact that he had a science column in what I thought at the time was the best of the science fiction magazines, the ever-engaging Galaxy. It still stuns me that Ley died less than a month before Apollo 11 landed on the Moon.

My list could go on, but this morning I’ll pick one of the more obscure names, that of Ernst Öpik. Unlike Hoyle and Ley, Öpik (1893-1985) wasn’t famous for popularizing astronomy, but I would occasionally run into his name in the library books I was reading. An Estonian who did his doctoral work at that country’s University of Tartu, Öpik also did work at the University of Moscow but wound up fleeing Estonia in 1944 out of fear of what would happen when the Red Army swept into his country. He spent the productive second half of his career at the Armagh Observatory in Northern Ireland and for a time held a position as well at the University of Maryland.

Image: Ernst Öpik. Credit: ESA.

Did I say productive? Consider that by 1970 Öpik had published almost 300 research papers, well over a hundred reviews and 345 articles for the Irish Astronomical Journal, of which he was editor from 1950 to 1981. He remained associate editor there until his death.

I found the references to Öpik in my reading rather fascinating, as I was reminded when Al Jackson mentioned him to me in a recent email. It turns out, as I had already found, that Öpik turns up in the strangest places. Recently I wrote about the so-called ‘manhole’ cover that some have argued is the fastest human object ever sent into space. The object is controversial: it was actually a heavy cover designed to contain an underground nuclear blast, a task at which it rather spectacularly failed. In short, it seems to have lifted off, a kind of mini-Orion. And no one really knows whether it just disintegrated or is still out there beyond the Solar System. See A ‘Manhole Cover’ Beyond the Solar System if this intrigues you.

Öpik’s role in the ‘manhole cover’ story grows out of his book The Physics of Meteor Flight in the Atmosphere, in which he calculated the mass loss of meteors moving through the atmosphere at various velocities. Although he knew nothing about the cover, Öpik’s work turned out to be useful to Al as he thought about what would have happened to it. Calculations on the potential speed of the explosively driven lid demonstrated that an object moving at six times escape velocity, as this one would have been, would vaporize. This seems to put the quietus on the idea that the 4-inch thick iron lid used at the test detonation of Pascal B had been ‘launched’ into hyperbolic orbit.
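
The energy budget makes the point even without Öpik’s full meteor equations. Here is a back-of-envelope sketch of my own, using standard material properties for iron and round numbers everywhere else:

```python
# Specific kinetic energy at ~6x Earth escape velocity versus the energy
# needed to vaporize iron. Rough figures, for illustration only.
v = 6 * 11.2e3                        # six times escape velocity, m/s
ke_per_kg = 0.5 * v ** 2              # specific kinetic energy, J/kg

heating = 450 * (3134 - 300)          # c_p (J/kg K) times rise to boiling point (K)
latent = 2.47e5 + 6.09e6              # heats of fusion + vaporization of iron, J/kg
vaporize_per_kg = heating + latent

print(f"kinetic energy:    {ke_per_kg:.2e} J/kg")
print(f"to vaporize iron:  {vaporize_per_kg:.2e} J/kg")
print(f"ratio:             {ke_per_kg / vaporize_per_kg:.0f}x")
```

The kinetic energy exceeds the vaporization energy by a factor of several hundred, so even if only a small fraction of the drag heating couples into the plate, it does not survive the atmosphere.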

But this was just a calculation that later became useful. In broader ways, Öpik was a figure that Al describes as much like Fritz Zwicky, meaning a man of highly original thought, often far ahead of his time. He turns out to have played a role in the development of the Oort Cloud concept. This would have utterly escaped my attention in my early library days since I had no access to the journals and wouldn’t have understood much if I did. But in a paper called “Note on Stellar Perturbations of Nearby Parabolic Orbits,” which ran in Proceedings of the American Academy of Arts and Sciences in 1932, the Estonian astronomer had this to say (after page after page of dense mathematics that are to this day far beyond my pay grade):

According to statistics by Jantzen, 395 comets (1909) showed a more or less random distribution of the inclinations, with a slight preponderance of direct motions over retrograde ones; with an age of from 10⁹ to 3×10⁹ years, this would correspond to an average aphelion distance of 1500-2000 a.u., or a period of revolution of 20000-30000 years. For greater aphelion distances the distribution of inclinations should be practically uniform, being smoothed out by perturbations.

Does this remind you of anything? Öpik was writing eighteen years before Jan Oort used cometary orbits to predict the existence of the cloud that now bears his name. Öpik believed there was a reservoir of comets around the Sun. There had to be, for a few comets were known to take on such eccentric orbits that they periodically entered the inner system and swung by our star, some close enough to throw a sizeable tail. Öpik was interested in how cometary orbits could be nudged by the influence of other stars. In other words, there must be a collection of objects at such a distance that they were barely bound to the Sun and could readily be dislodged from their orbits.

I’m told that the Oort Cloud is, at least in some quarters, referred to as the Öpik/Oort Cloud, in much the same way that the Kuiper Belt is sometimes called the Edgeworth/Kuiper Belt because of similar work done at more or less the same time. But such dual naming strategies rarely win out in the end.

Being reminded of all this, I noticed that Öpik had done major work on such topics as visual binary stars (he estimated density in some of these), the distance of the Andromeda Galaxy, the frequency of craters on Mars, and the Yarkovsky Effect, which Öpik more or less put on the map through his discussions of Yarkovsky’s work. Studying him, I have the sense of a far-seeing man whose work was sometimes overlooked, but one whose contributions have in many cases proved to be prescient.

Naturally I was interested to learn whether Öpik had anything to say about our subject on Centauri Dreams, the prospect of interstellar flight. And indeed he did, in such a way that the sometimes glowering photographs we have of him seem to reveal something of his thinking on the matter (to be fair, some of us are simply not photogenic, and I understand that he was a kind and gentle man). Indeed, Armagh Observatory director Eric Lindsay described him thus:

…a “very human person with an understanding of, and sympathy for, our many frailties and, thank goodness, with a keen sense of humour. He will take infinite patience to explain the simplest problem to a person, young or old, with enthusiasm for astronomy but lacking astronomical background and training.”

The interstellar flight paper was written in 1964 for the Irish Astronomical Journal. Here he dismissed interstellar flight out of hand. Antimatter was a problem – remember that at the time he was writing, Öpik had few papers on interstellar flight to respond to, and he doesn’t seem to have been aware of the early work on sail strategies and lasers that Robert Forward and György Marx were exploring. So he focused on two papers he did know, the first being Les Shepherd’s study of interstellar flight via antimatter, in which Öpik saw huge problems in the storage and collection of the needed fuel. Here he quotes Edward Purcell approvingly. Writing in A.G.W. Cameron’s Interstellar Communication in 1963, Purcell said:

The exhaust power of the antimatter rocket would equal the solar energy received by the earth – all in gamma rays. So the problem is not to shield the payload, the problem is to shield the earth.

Having dismissed antimatter entirely, Öpik moves on to Robert Bussard’s highly visible ramjet concept, which had been published in 1960. He describes the ramjet sucking up interstellar gas and using it for fusion and spends most of the paper shredding the concept. I won’t go into the math but his arguments reflect many of the reasons that the ramjet concept has come to be met with disfavor. Here’s his conclusion:

…the ‘ramjet’ mechanism is impossible everywhere, as well as inside the Orion Nebula – one must get there first. “Traveling around the universe in space suits – except for local exploration… belongs back where it came from, on the cereal box.” (E. Purcell, loc. cit.). It is for space fiction, for paper projects – and for ghosts. “The only means of communication between different civilizations thus seems to be electro-magnetic signals” (S. von Hoerner, “The General Limits of Space Travel”, in Interstellar Communication, pp. 144-159). Slower motion (up to 0.01 c) is a problem of longevity or hereditary succession of the crew; this we cannot reject because we do not know anything about it.

I always look back on Purcell’s comment and muse that cereal boxes used to be more interesting than they are today. I do wonder what Öpik might have made of sail strategies, and I’m aware of but have not seen a paper from him on interstellar travel by hibernation, written in 1978. So he seems to have maintained an interest in what he elsewhere referred to as “our cosmic destiny.” But like so many, he found interstellar distances too daunting to be attempted other than through excruciatingly long journey times in the kind of generation ship we’re familiar with in science fiction.

Since Öpik’s day a much broader community of scientists willing to study interstellar flight has emerged, even if for most it is a sideline rather than a dedicated project. We have begun to explore the laser lightsail as an option, but are only beginning the kind of laboratory work needed, even if a recent paper out of Harry Atwater’s team at Caltech shows progress. An unmanned flyby of a nearby star no longer seems to belong on a cereal box, but it’s a bit sobering to realize that even with sail strategies now under consideration by interstellar theorists, we’re still a long, long way from a mission.

Öpik’s paper on what would come to be known as the Oort Cloud is “Note on Stellar Perturbations of Nearby Parabolic Orbits,” Proceedings of the American Academy of Arts and Sciences, vol. 67 (1932), p. 169. The paper on interstellar travel is “Is Interstellar Travel Possible?” Irish Astronomical Journal Vol. 6(8) (1964), p. 299 (full text). The Irish Astronomical Journal put together a bibliography covering 1972 until his death in 1985, which students of Öpik can find here. The Atwater paper on sail technologies is Michaeli et al., “Direct radiation pressure measurements for lightsail membranes,” Nature Photonics 30 April 2025 (abstract). More on this one shortly.


SETI’s Hard Steps (and How to Resolve Them) 21 Feb 8:30 AM (last month)

SETI’s Hard Steps (and How to Resolve Them)

The idea of life achieving a series of plateaus, each of which is a long and perilous slog, has serious implications for SETI. It was Brandon Carter, now at the Laboratoire Univers et Théories in Meudon, France, who proposed the notion of such ‘hard steps’ back in the early 1980s. Follow-up work by a number of authors, especially Frank Tipler and John Barrow (The Anthropic Cosmological Principle), has refined the concept and added to the steps Carter conceived. Since then, the idea that life might take a substantial amount of the lifetime of a star to emerge has bedeviled those who want to see a universe filled with technological civilizations. Each ‘hard step’ is unlikely in itself, and our existence depends upon our planet’s having achieved all of them.

Carter was motivated by the timing of our emergence, which we can round off at 4.6 billion years after the formation of our planet. He reasoned that the upper limit for habitability at Earth’s surface is on the order of 5.6 billion years after Earth’s formation, a suspicious fact – why would human origins require a time that approximates the extinction of the biosphere that supports us? He deduced from this that the average time for intelligent beings to emerge on a planet exceeds the lifespan of its biosphere. We are, in other words, a lucky species that squeezed in our development early.
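
Carter’s reasoning has a clean statistical core: if several improbable steps must all occur, in sequence, inside a fixed habitability window, then on the rare worlds where they do occur, completion clusters near the end of the window. A quick Monte Carlo sketch (toy parameters of my own choosing, not Carter’s) shows the effect:

```python
import numpy as np

# Toy model: n sequential "hard" steps, each an exponential waiting time
# with a mean longer than the habitability window T. Conditioned on all
# steps finishing inside T, success arrives late in the window.
rng = np.random.default_rng(1983)

T = 1.0            # habitability window, normalized
N_STEPS = 4        # assumed number of hard steps
MEAN_WAIT = 3.0    # each step's mean wait, in units of T (the "hard" part)
TRIALS = 2_000_000

waits = rng.exponential(MEAN_WAIT, size=(TRIALS, N_STEPS))
total = waits.sum(axis=1)
lucky = total < T   # worlds where every step beat the deadline

print(f"fraction of lucky worlds: {lucky.mean():.1e}")
print(f"mean completion time on lucky worlds: {total[lucky].mean():.2f}")
print(f"hard-step limit predicts ~ n/(n+1) = {N_STEPS / (N_STEPS + 1):.2f}")
```

Make the steps harder and the clustering only tightens, which is exactly Carter’s point: observers like us should not be surprised to find themselves near the end of their biosphere’s lifetime.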

Image: Two highly influential physicists. Brandon Carter (right) sitting with Roy Kerr, who discovered the Einsteinian solution for a rotating black hole. Carter’s own early work on black holes is highly regarded, although these days he seems primarily known for the ‘hard steps’ hypothesis. Credit: University of Canterbury (NZ).

Figuring that a G-class star like the Sun has a lifetime on the order of 10 billion years, most such stars would spawn planetary systems that never saw the evolution of intelligence, and perhaps not any form of life. An obvious hard step is abiogenesis: although the universe seems stuffed with ingredients, we have no evidence yet of life anywhere else. The fact that it did happen here tells us nothing more than that, and until we dig out evidence of a ‘second genesis,’ perhaps here in our own Solar System inside an icy moon, or on Mars, we can form no firm conclusions.

There’s a readable overview of the ‘hard steps’ notion on The Conversation, and I’ll direct you both to that as well as to the paper just out from the authors of the overview, which runs in Science Advances (citation below). In both, Penn State’s Jason Wright and Jennifer Macalady collaborate with Daniel Brady Mills (Ludwig Maximilian University of Munich) and the University of Rochester’s Adam Frank to describe such ‘steps’ as the development of eukaryotic cells – i.e., cells with nuclei. We humans are eukaryotes, so this hard step had to happen for us to be reading this.

We could keep adding to the list of hard steps as the discussion has spun out over the past few decades, but it seems agreed that photosynthesis is a big one. The so-called ‘Cambrian explosion’ might be considered a hard step, since it involves sudden complexity, refinements to body parts of all kinds and specialized organs, and it happens quickly. And what of the emergence of consciousness itself? That’s a big one, especially since we are a long way from explaining just what consciousness actually is, and how and even where it develops. Robin Hanson has used the hard steps concept to discuss ‘filters’ that separate basic lifeforms from complex technological societies.

Whichever steps we choose, the idea of a series of highly improbable events leveraging each other on the road to intelligence and technology seems to make the chances of civilizations elsewhere remote. But let’s pause right there. Wright and colleagues take note of the work of evolutionary biologist Geerat Vermeij (UC-Davis), who argues that our view of innovation through evolution is inescapably affected by information loss. Here’s a bit on this from the new paper:

Vermeij concluded that information loss over geologic time could explain the apparent uniqueness of ancient evolutionary innovations when (i) small clades [a clade comprises a founding ancestor and all of its descendants] that independently evolved the innovation in question go extinct, leaving no living descendants, and (ii) an ancient innovation evolved independently in two closely related lineages, or within a short period of time, and the genetic differences between these two lineages become “saturated” to the point where the lineages become genetically indistinguishable.

In other words, as we examine life on early Earth, we have to reckon with incompleteness in our fossil record (huge gaps possible there), and with species we know nothing about going extinct despite having achieved a hard step. The authors point out that if this is the case, then we can’t really describe proposed hard steps as ‘hard.’ Other possibilities exist, including that innovations do happen only once, but are so powerful that creatures with the new trait quickly change their environment, leaving other lineages no time to develop it independently.

Image: Earth’s habitability is compromised by a Sun that will, about 5.6 billion years after its formation, become too hot to allow life. Image credit: Wikimedia Commons.

We’re still left with the question of why it has taken so much of the lifetime of the Sun to produce ourselves, a question that bothered Carter sufficiently in 1983 that it drove him to the hard steps analysis. Here the authors offer something Carter did not, an analysis of Earth’s habitability over time. It’s one that can change the outcome. For each of the hard steps sets up its own evolutionary requirements, and these could be met only as Earth’s environment changed. Consider, for example, that 50 percent of our planet’s history elapsed before modern eukaryotic cells had enough oxygen to thrive.

So maybe our planet had to pass certain environmental thresholds:

…we raise the possibility that there are no hard steps (despite the appearance of major evolutionary singularities in the universal tree of life) (51) and that the broad pace of evolution on Earth is set by global-environmental processes operating on geologic timescales (i.e., billions of years) (30). Put differently, humans originated so “late” in Earth’s history because the window of human habitability has only opened relatively recently in Earth history.

Suppose abiogenesis is not a hard step. Biosignatures, then, should be common in planetary atmospheres, at least on planets like Earth that are geologically active, in the habitable zone of their stars, and have atmospheres involving nitrogen, carbon dioxide and water. If oxygenic photosynthesis is a hard step, then we’ll find atmospheres that are low in oxygen, rich in methane and carbon dioxide and other ingredients of the atmosphere of the early Earth. If no hard steps exist at all, then we should find the full range of atmospheric types from early Earth (Archean) to present day (Phanerozoic). Our study of atmospheres will help us make the call on the very existence of hard steps.

If this no-hard-steps model is correct, then the evolution of a biosphere appears more predictable as habitats emerge and evolve. That would offer us a different way of assessing Earth’s past, but it would also imply that the same trends have emerged on other worlds like Earth. Our existence in that sense would suggest that intelligent beings in other stellar systems are more probable than Carter believed.

The paper is Mills et al., “Reassessment of the ‘hard-steps’ model for the evolution of intelligent life,” Science Advances Vol. 11, Issue 7 (14 February 2025). Full text. Brandon Carter’s famous paper on the hard steps is “The Anthropic Principle and its Implications for Biological Evolution,” Philosophical Transactions of the Royal Society of London A 310 (1983), 347–363. Abstract.


Pandora: Exoplanet Atmospheres via Smallsat 19 Feb 10:40 AM (last month)

Pandora: Exoplanet Atmospheres via Smallsat

I’ve been digging into NASA’s Small Spacecraft Strategic Plan out of continuing interest in missions that take advantage of miniaturization to do things once consigned to large-scale craft. And I was intrigued to learn about the small spacecraft deployed on Apollo 15 and 16, two units developed by TRW in a series called Particles and Fields Subsatellites. Each weighed 35 kilograms and was powered by six solar panels and rechargeable batteries. The midget satellites were deployed from the Apollo Command and Service Module via a spring-loaded container that gave each unit a velocity of four feet per second. Apollo 15’s subsatellite operated for six months before an electronics failure ended the venture. The Apollo 16 subsatellite crashed on the lunar surface 34 days into its mission after completing 424 orbits.

Here I thought I knew Apollo history backwards and forwards, and I had never run into anything about these craft. It turns out that smallsats – usually defined as spacecraft with a mass of up to 180 kilograms – have an evocative history in support of larger missions, and current planning includes support for missions with deep space applications. Consider Pandora, which is designed to complement operations of the James Webb Space Telescope, extending our knowledge of exoplanet atmospheres with a different observational strategy.

JWST puts transmission spectroscopy to work, analyzing light from the host star as a transiting planet moves across the disk. A planet’s spectral signature can thus be derived and compared to the spectrum taken when the planet is out of transit and only the star is visible. This is helpful indeed, but despite JWST’s obvious successes, detecting the atmospheres of planets as small as Earth is a challenge. The chief culprit is magnetic activity on the star itself, contaminating the spectral data. The Pandora mission, a partnership between NASA and Lawrence Livermore National Laboratory, mitigates the problem by collecting long-duration observations at simultaneous visible and infrared wavelengths.

Image: A transmission spectrum made from a single observation using Webb’s Near-Infrared Imager and Slitless Spectrograph (NIRISS) reveals atmospheric characteristics of the hot gas giant exoplanet WASP-96 b. A transmission spectrum is made by comparing starlight filtered through a planet’s atmosphere as it moves across the star, to the unfiltered starlight detected when the planet is beside the star. Each of the 141 data points (white circles) on this graph represents the amount of a specific wavelength of light that is blocked by the planet and absorbed by its atmosphere. In this observation, the wavelengths detected by NIRISS range from 0.6 microns (red) to 2.8 microns (in the near-infrared). The amount of starlight blocked ranges from about 13,600 parts per million (1.36 percent) to 14,700 parts per million (1.47 percent). Credit: European Space Agency.
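
The numbers in that caption follow from simple geometry. The transit depth is (Rp/Rs)², and an atmosphere modulates that depth by roughly 2·Rp·H/Rs² per scale height H. A short sketch with WASP-96 b-like values, all of them my own assumptions for illustration:

```python
import math

R_JUP, R_SUN = 7.149e7, 6.957e8     # radii in meters
K_B, M_H = 1.381e-23, 1.673e-27     # Boltzmann constant, hydrogen mass

R_P = 1.2 * R_JUP    # assumed planet radius
R_S = 1.05 * R_SUN   # assumed stellar radius
T_EQ = 1300.0        # assumed equilibrium temperature, K
MU = 2.3             # assumed mean molecular weight (H/He-dominated)
G_SURF = 8.3         # assumed surface gravity, m/s^2

depth = (R_P / R_S) ** 2                         # transit depth
scale_height = K_B * T_EQ / (MU * M_H * G_SURF)  # atmospheric scale height, m
signal = 2 * R_P * scale_height / R_S ** 2       # depth change per scale height

print(f"transit depth: {depth * 1e6:,.0f} ppm")
print(f"scale height:  {scale_height / 1e3:.0f} km")
print(f"signal per H:  {signal * 1e6:.0f} ppm")
```

That lands near the caption’s 13,600-14,700 ppm range, with spectral features of order a hundred ppm per scale height, which is why stellar contamination of comparable amplitude is so troublesome for smaller worlds.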

Stellar contamination produces spectral noise that mimics features in a planetary atmosphere, or else obscures them, a problem that has long frustrated scientists. Collecting data at wavelengths shorter than JWST’s 0.6-micron cutoff helps get around this problem. Pandora’s visible light channel will track the spot-covering fractions of surface stellar activity while its near-infrared channel simultaneously measures the variation in spectral features as the star rotates. A more fine-grained correction for stellar contamination thus becomes possible, and as the new paper on this work explains, the ultimate objective then becomes “…to robustly identify exoplanets with hydrogen- or water-dominated atmospheres, and determine which planets are likely covered by clouds and hazes.”

Pandora will operate concurrently with JWST, complementing JWST’s deep-dive, high-precision spectroscopy measurements with broad wavelength, long-baseline observations. Pandora’s science objectives are well-suited for a SmallSat platform and illustrate how small missions can be used to truly maximize the science from larger flagship missions.

The plan is for the mission to select 20 primary exoplanet host stars and collect data from a minimum of 10 transits per host star, with each observation lasting about 24 hours, producing 200 days of science observations. The lengthy data acquisition time for each star means an abundance of out-of-transit data can be collected to address the problem of stellar contamination. The primary mission has a lifetime of one year, which allows for a significant range of science operations in addition to the above.

Long-duration measurements like those planned for Pandora contrast with data collection on large missions like JWST, which often focus on one or a small number of transits per target. Such complementarity is a worthy goal, and a reminder of the lower cost and high adaptability of using the smallsat platform in conjunction with a primary mission. In addition, smallsats rely on standardized and commercial parts to reduce risk and avoid solutions specific to any single mission. Cost savings can be substantial.

Image: The Pandora observatory shown with the solar array deployed. Pandora is designed to be launched as a ride-share attached to an ESPA Grande ring [(Evolved Expendable Launch Vehicle) Secondary Payload Adapter ring]. Very little customization was carried out on the major hardware components of the mission such as the telescope and spacecraft bus. This enabled the mission to minimize non-recurring engineering costs. Credit: Barclay et al.

Operating at these scales has clear deep space applications. This is a fast growing, innovative part of spacecraft design that has implications for all kinds of missions, and I’m reminded of the interesting work ongoing at the Jet Propulsion Laboratory in terms of designing a mission to the Sun’s gravity lens. Smallsats and self-assembly enroute may prove to be a game-changer there.

For the technical details on Pandora, see the just released paper. The project completed its Critical Design Review in October of 2023 and is slated for launch into a Sun-synchronous orbit in the Fall of this year. Launch is another smallsat benefit, for many smallsats are being designed to fit into a secondary payload adapter ring on the launch vehicle, allowing them to be ‘rideshare’ missions that launch with other satellites.

The paper is Barclay et al., “The Pandora SmallSat: A Low-Cost, High Impact Mission to Study Exoplanets and Their Host Stars,” accepted for the IEEE Aerospace Conference 2025. Preprint.


A Three-Dimensional Look at an Exoplanet Atmosphere 18 Feb 9:46 AM (last month)

A Three-Dimensional Look at an Exoplanet Atmosphere

Some 900 light years away in the constellation Puppis, the planet WASP-121b is proving an interesting test case as we probe ever deeper into exoplanetary atmospheres. As has been the case with so many early atmosphere studies, WASP-121b, also known as Tylos, is a hot Jupiter with a year lasting about thirty Earth hours, held in a vise-like tidal lock that leaves one side always facing the star, the other turned away. What we gain in two new studies of this world is an unprecedented map of the atmosphere’s structure.

At stake here is a 3D look into what goes on as differing air flows move from one side of the planet to the other. A jet stream moves material around its equator, but there is a separate flow at lower altitudes that pumps gas from the hottest regions to the dark side. “This kind of climate has never been seen before on any planet,” says Julia Victoria Seidel (European Southern Observatory), lead author of a paper that appears today in Nature. Seidel points out that we have nothing in the Solar System to rival the speed and violence of the jet stream as it crosses the hot side of Tylos.

The astronomers used the European Southern Observatory’s Very Large Telescope, combining all four units to parse out the movement of chemical elements like iron and titanium in the weather patterns produced by these layered winds. What’s particularly significant here is the fact that we are now able to delve into an exoplanet atmosphere at three levels, analyzing variations in altitude as well as across varying regions on the world, and finally the interactions that produce weather patterns, form clouds and induce precipitation. Such 3D models take us to the greatest level of complexity yet.

“The VLT enabled us to probe three different layers of the exoplanet’s atmosphere in one fell swoop,” says study co-author Leonardo A. dos Santos, an assistant astronomer at the Space Telescope Science Institute in Baltimore. Tracking the movements of iron, sodium and hydrogen, the researchers could follow the course of winds at different layers in the planet’s atmosphere. A second paper, published in Astronomy & Astrophysics, announced the discovery of titanium in the atmosphere.

Image: Structure and motion of the atmosphere of the exoplanet Tylos. Astronomers have peered through the atmosphere of a planet beyond the Solar System, mapping its 3D structure for the first time. By combining all four telescope units of the European Southern Observatory’s Very Large Telescope (ESO’s VLT), they found powerful winds carrying chemical elements like iron and titanium, creating intricate weather patterns across the planet’s atmosphere. The discovery opens the door for detailed studies of the chemical makeup and weather of other alien worlds. Credit: ESO.

Note what we have in the image above. The paper describes it this way:

…a unilateral flow from the hot star-facing side to the cooler space-facing side of the planet sits below an equatorial super-rotational jet stream. By resolving the vertical structure of atmospheric dynamics, we move beyond integrated global snapshots of the atmosphere, enabling more accurate identification of flow patterns and allowing for a more nuanced comparison to models.

And that’s the key here – refining existing models to pave the way for future work. Digging into the 3D structure of the atmosphere required the VLT’s ESPRESSO spectrograph, which combines the light of all four telescope units, four times that of a single instrument, to reveal the planet as it transited its star, an F-class star with mass and radius close to those of the Sun. Planet Tylos is named after the ancient Greek name for Bahrain, as part of the NameExoWorlds project. The host star bears the name Dilmun, after the ancient civilization that emerged on a trade route in the region after the 3rd millennium BC.

The Seidel et al. paper notes that existing Global Circulation Models (3D) do not fully capture what is observed at WASP-121b, making scenarios like these valuable testbeds for advancing the state of the art. Extremely Large Telescopes now under development will be able to put these refined models to work as they broaden the study of exoplanet atmospheres in extreme conditions:

The discrepancy between GCMs and the provided observations highlight the impact of high signal-to-noise ratio data of extreme worlds such as ultra-hot Jupiters in benchmarking our current understanding of atmospheric dynamics. This study marks a shift in our observational understanding of planetary atmospheres beyond our solar system. By probing the atmospheric winds in unprecedented precision, we unveil the 3D structure of atmospheric flows, most importantly the vertical transitions between layers from the deep sub-to-anti-stellar-point winds to a surprisingly pronounced equatorial jet stream. These benchmark observations made possible by ESPRESSO’s 4-UT mode serve as a catalyst for the advancement of global circulation models across wider vertical pressure ranges thus significantly advancing our knowledge on atmospheric dynamics.

The papers are Seidel et al., “Vertical structure of an exoplanet’s atmospheric jet stream,” Nature 18 February 2025 (abstract) and Prinoth et al., “Titanium chemistry of WASP-121 b with ESPRESSO in 4-UT mode,” in process at Astronomy & Astrophysics (preprint).


What Would Surprise You? 14 Feb 3:33 AM (last month)

What Would Surprise You?

Someone asked me the other day what it would take to surprise me. In other words, given the deluge of data coming in from all kinds of observatories, what one bit of news would set me back on my heels? That took some reflection. Would it surprise me, my interlocutor persisted, if SETI fails to find another civilization in my lifetime?

The answer to that is no, because I approach SETI without expectations. My guess is that intelligence in the universe is rare, but it’s only a hunch. How could it be anything else? So no, continuing silence via SETI does not surprise me. And while a confirmed signal would be fascinating news, I can’t say it would truly surprise me either. I can work out scenarios where civilizations much older than ours do become known.

Some surprises, of course, are bigger than others. Volcanoes on Io were a surprise back in the Voyager days, and geysers on Enceladus were not exactly expected, but I’m talking here about an all but metaphysical surprise. And I think I found one as I pondered this over the last few days. What would genuinely shock me – absolutely knock the pins out from under me – would be if we learn through future observation and even probes that Proxima Centauri b is devoid of life.

I’m using Proxima b as a proxy for the entire question of life on other worlds. We have no idea how common abiogenesis is. Can life actually emerge out of all the ingredients so liberally provided by the universe? We’re here, so evidently so, but are we rare? I would be stunned if Proxima b and similar planets in the habitable zone around nearby red dwarfs showed no sign of life whatsoever. And of course I don’t limit this to M-class stars.

Forget intelligence – that’s an entirely different question. I realize that my core assumption, without evidence, is that abiogenesis happens just about everywhere. And I think that most of us share this assumption.

The universe is going to seem like a pretty barren place if we discover that it’s wildly unlikely for life to emerge in any form. I’ve mentioned before my hunch that when it comes to intelligent civilizations, the number of these in the galaxy is somewhere between 1 and 10. At any given time, that is. Who knows what the past has held, or what the future will bring? But if we find that life itself doesn’t take hold to run the experiment, it’s going to color this writer’s entire philosophy and darken his mood.

We want life to thrive. Notice, for example, how we keep reading about potentially habitable planets, our fixation with the habitable zone being natural because we live in one and would like to find places like ours. Out of Oxford comes a news release with the headline “Researchers confirm the existence of an exoplanet in the habitable zone.” That’s the tame version of more lively stories that grow out of such research with titles like “Humans could live here” and “A Home for ET.” I’m making those up, but you know the kind of headlines I mean, and they can get more aggressive still. We hunger for life.

Here’s one from The Times: “‘Super-Earth’ discovered – and it’s a prime candidate for alien life.” But is it?

Image: Artist’s depiction of an exoplanet like HD 20794 d in a conceivably habitable orbit. It may or may not be rocky. It may or may not be barren. How much do our expectations drive our thinking about it? Credit: University of Oxford.

That Oxford result is revealing, so let’s pause on it. HD 20794 d is about 20 light years from us, orbiting a G-class star like the Sun, which gives it that extra cachet of being near a familiar host. Three confirmed planets and a dust disk orbit this star in Eridanus, the most interesting being the super-Earth in question, which appears to be about twice Earth’s radius and 5.8 times its mass. The HARPS spectrograph (High Accuracy Radial Velocity Planet Searcher) at La Silla and the ESPRESSO spectrograph at Paranal, both in Chile, have confirmed the planet, quite a catch given that the original signal detected in radial velocity studies was at the limit of HARPS’ capabilities.
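
It’s easy to see why the signal sat at the instrument’s limit. The standard radial velocity semi-amplitude formula, with a stellar mass and eccentricity I’m assuming for illustration, gives a stellar wobble of only about half a meter per second:

```python
import math

G = 6.674e-11
M_SUN, M_EARTH = 1.989e30, 5.972e24
DAY = 86400.0

P = 647 * DAY          # orbital period
M_P = 5.8 * M_EARTH    # minimum planet mass (m sin i)
M_STAR = 0.81 * M_SUN  # assumed mass for HD 20794
ECC = 0.4              # assumed eccentricity, for illustration

# K = (2 pi G / P)^(1/3) * m_p / (M_star + m_p)^(2/3) / sqrt(1 - e^2)
K = ((2 * math.pi * G / P) ** (1 / 3)
     * M_P / (M_STAR + M_P) ** (2 / 3)
     / math.sqrt(1 - ECC ** 2))
print(f"RV semi-amplitude ~ {K * 100:.0f} cm/s")
```

Roughly fifty centimeters per second, against instrumental noise floors of comparable size, which is why it took two spectrographs and years of data to confirm.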

Habitable? Maybe, but we can’t push this too far. The paper notes that “HD 20794 d could also be a mini-Neptune with a non-negligible H/He atmosphere.” And keep an eye on that elliptical orbit, which means climate on such a world would be, shall we say, interesting as it swings between the inner and outer edges of the habitable zone during its 647-day year. I think Oxford co-author Michael Cretignier is optimistic when he refers to this planet as an ‘Earth analogue,’ given that orbit as well as the size and mass of the world, but I get his point that its proximity to Sol makes this an interesting place to concentrate future resources. Again, my instincts tell me that some kind of life ought to show up if this is a rocky world, even if it’s nothing more than simple vegetation.

Because it’s so close, HD 20794 d is going to get attention from upcoming Extremely Large Telescopes and missions like the Habitable Worlds Observatory. The level of stellar activity is low, which is what made it possible to tease this extremely challenging planetary signal out of the noise – remember the nature of the orbit, and the interactions with two other planets in this system. Probing its atmosphere for biosignatures will definitely be on the agenda for future missions.

Obviously we don’t know enough about HD 20794 d to talk meaningfully about it in terms of life, but my point is about expectation and hope. I think we’re heavily biased to expect life, to the point where we’re describing habitable zone possibilities in places where they’re still murky and poorly defined. That tells me that the biggest surprises for most of us will be if we find no life of any kind no matter which direction we look. That’s an outcome I definitely do not expect, but we can’t rule it out. At least not yet.

The paper is Nari et al., “Revisiting the multi-planetary system of the nearby star HD 20794: Confirmation of a low-mass planet in the habitable zone of a nearby G-dwarf,” Astronomy & Astrophysics Vol. 693 (28 January 2025), A297 (full text).


Pondering Life in an Alien Ocean 11 Feb 5:00 AM (last month)

Pondering Life in an Alien Ocean

No one ever said Europa Clipper would be able to detect life beneath the ice, but as we look at the first imagery from the spacecraft’s star-tracking cameras, it’s helpful to keep the scope of the mission in mind. We’re after some critical information here, such as the thickness of the ice shell, the interactions between shell and underlying ocean, the composition of that ocean. All of these should give us a better idea of whether this tiny world really can be a home for life.

Image: This mosaic of a star field was made from three images captured Dec. 4, 2024, by star tracker cameras aboard NASA’s Europa Clipper spacecraft. The pair of star trackers (formally known as the stellar reference units) captured and transmitted Europa Clipper’s first imagery of space. The picture, composed of three shots, shows tiny pinpricks of light from stars 150 to 300 light-years away. The starfield represents only about 0.1% of the full sky around the spacecraft, but by mapping the stars in just that small slice of sky, the orbiter is able to determine where it is pointed and orient itself correctly. The starfield includes the four brightest stars – Gienah, Algorab, Kraz, and Alchiba – of the constellation Corvus, which is Latin for “crow,” a bird in Greek mythology that was associated with Apollo. Besides being interesting to stargazers, the photos signal the successful checkout of the star trackers. The spacecraft checkout phase has been going on since Europa Clipper launched on a SpaceX Falcon Heavy rocket on Oct. 14, 2024. Credit: NASA/JPL-Caltech.

Seen in one light, this field of stars is utterly unexceptional. Fold in the understanding that the data are being sent from a spacecraft enroute to Jupiter, and it takes on its own aura. Naturally the images that we’ll be getting at the turn of the decade will far outdo these, but as with New Horizons, early glimpses along the route are a way of taking the mission’s pulse. It’s a long hike out to our biggest gas giant.

I bring this up, though, in relation to new work on Enceladus, that other extremely interesting ice world. You would think Enceladus would pose a much easier problem when it comes to examining an internal ocean. After all, the tiny moon regularly spews material from its ocean out through those helpful cracks around its south pole, the kind of activity that an orbiter or a flyby spacecraft can readily sample, as did Cassini.

Contrast that with Europa, which appears to throw the occasional plume as well, though to my knowledge such plumes are rare, with evidence for them emerging in Hubble data as recently as 2016. It’s possible that Europa Clipper will find more, or that reanalysis of Galileo data may point to older activity. But there’s no question that in terms of easy access to ocean material, Enceladus offers the fastest track.

Enceladus flybys by the Cassini orbiter revealed ice particles, salts, molecular hydrogen and organic compounds. But according to a new paper from Flynn Ames (University of Reading) and colleagues, such snared material isn’t likely to reveal life no matter how many times we sample it. Writing in Communications Earth and Environment, the authors make the case that the ocean inside Enceladus is layered in such a way that microbes or other organic materials would likely break down as they rose to the surface.

In other words, Enceladus might have a robust ecosystem on the seafloor and yet produce jets of material which cannot possibly yield an answer. Says Ames:

“Imagine trying to detect life at the depths of Earth’s oceans by only sampling water from the surface. That’s the challenge we face with Enceladus, except we’re also dealing with an ocean whose physics we do not fully understand. We’ve found that Enceladus’ ocean should behave like oil and water in a jar, with layers that resist vertical mixing. These natural barriers could trap particles and chemical traces of life in the depths below for hundreds to hundreds of thousands of years.”

The study relies on theoretical models run through global ocean numerical simulations, deriving a timescale for transporting material to the surface across a range of salinities and mixing strengths (mixing driven mostly by tidal effects). Remarkably, no choice of variables yields an ocean that is not stratified from top to bottom. In this environment, given the transport mechanisms at work, hydrothermal materials would take centuries to reach the plumes, with obvious consequences for their survival.

From the paper:

Stable stratification inhibits convection—an efficient mechanism for vertical transport of particulates and dissolved substances. In Earth’s predominantly stably stratified ocean this permits the marine snow phenomena, where organic matter, unable to maintain neutral buoyancy, undergoes ’detrainment’, settling down to the ocean bottom. Meanwhile, the slow ascent of hydrothermally derived, dissolved substances provides time for scavenging processes and usage by life, resulting in surface concentrations far lower than those present nearer source regions at depth.
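
The flavor of the timescales involved can be had from a back-of-envelope estimate that is mine, not the authors’ model: shut off convection and vertical transport falls back on weak diapycnal diffusion, with timescale t ~ H²/κ for ocean depth H and diffusivity κ:

```python
# Rough diffusive transport times for an Enceladus-like ocean. Depth and
# diffusivity values are assumptions for illustration, not model output.
H = 40e3                     # assumed ocean depth, m (tens of kilometers)
SECONDS_PER_YEAR = 3.156e7

for kappa in (1e-1, 1e-2, 1e-3, 1e-4):   # assumed diffusivities, m^2/s
    t_years = H ** 2 / kappa / SECONDS_PER_YEAR
    print(f"kappa = {kappa:.0e} m^2/s -> t ~ {t_years:,.0f} years")
```

Plausible mixing rates span roughly five hundred to five hundred thousand years, bracketing the ‘hundreds to hundreds of thousands of years’ the authors quote.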

Although its focus is on Enceladus, the paper offers clear implications for what may be going on at Europa. Have a look at the image below (drawn not from the body of the paper but from the supplementary materials linked after the footnotes) and you’ll see the problem. We’re looking at these findings as applied to what we know of Europa.

Image: From part of Figure S7 in the supplementary materials. Caption: “Tracer age (years) at Europa’s ocean-ice interface, computed using the theoretical model outlined in the main text. Note that age contours are logarithmic.” Credit: Ames et al.

The figure shows the tracer age at the ocean-ice interface for the same ranges of ocean salinity as were used for Enceladus. Here we have to be careful about how much we don’t know. The ice thickness, for instance, is assumed to be 10 kilometers in these calculations. Given all the factors involved, the transport timescale through the stratified layers of the modeled Europa is, as the figure shows, over 10,000 years. The same stratified layers impede delivery of oxidants from the surface to the ocean.

So there we are. The Ames paper stands as a challenge to the idea that we will be able to find evidence of life in the waters just below the ice, and likewise indicates that even if we do begin to trace more plumes from Europa’s ocean, these would be unlikely to contain any conclusive evidence about biology. Just what we needed – the erasure of evidence due to the length of the journey from the ocean depths to the ice sheet. Icy moons, it seems, are going to remain mysterious even longer than we thought.

The paper is Ames et al., “Ocean stratification impedes particulate transport to the plumes of Enceladus,” Communications Earth & Environment 6 (6 February 2025), 63 (full text).


Putting AI to Work on Technosignatures 7 Feb 9:03 AM (last month)

Putting AI to Work on Technosignatures

As a quick follow-up to yesterday’s article on quantifying technosignature data, I want to mention the SETI Institute’s invitation for applicants to the Davie Postdoctoral Fellowship in Artificial Intelligence for Astronomy. The Institute’s Vishal Gajjar and his collaborators both in the US and at IIT Tirupati in India will be working with the chosen candidate to focus on convolutional neural network (CNN) architectures, neural networks optimized for processing image data, that can uncover unusual signals in massive datasets.

“Machine learning is transforming the way we search for exoplanets, allowing us to uncover hidden patterns in vast datasets,” says Gajjar. “This fellowship will accelerate the development of advanced AI tools to detect not just conventional planets, but also exotic and unconventional transit signatures including potential technosignatures.”

As AI matures, the exploration of these datasets becomes a critical matter, for results from missions like TESS and Kepler are packed with exoplanet data as well as stellar activity and instrumental systematics that can mislead investigators. Frameworks for sifting out anomalies should help us distinguish unusual candidates, including disintegrating objects, planets with rings, exocomets and perhaps even megastructures and other technosignatures, all flagged by their deviation from our widely used transit models.
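
To give a flavor of the approach, here is a toy sketch, in no way the fellowship’s actual pipeline, of one common anomaly-detection pattern: train a small convolutional autoencoder on ordinary transit light curves, then flag curves it reconstructs poorly as candidates for a closer look:

```python
import torch
import torch.nn as nn

class LightCurveAE(nn.Module):
    """Toy 1-D convolutional autoencoder for transit light curves."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, flux):
    """flux: (batch, 1, n_points) normalized light curves. A higher score
    means less like the ordinary curves the model was trained on."""
    with torch.no_grad():
        return ((model(flux) - flux) ** 2).mean(dim=(1, 2))

# Example: score a batch of 256-point light curves (training loop omitted).
model = LightCurveAE()
curves = torch.randn(32, 1, 256)
print(anomaly_score(model, curves).shape)   # torch.Size([32])
```

Anything the network has never seen, a disintegrating body, an exocomet, something stranger still, tends to come back with a high reconstruction error.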

The data continue to accumulate even as our AI tools sharpen to look for anomalies. I can think of several Centauri Dreams readers who should find this work right up their alley. If you’re interested, you can find everything you need to apply for the fellowship here. The deadline for applications is March 15, 2025.


Quantifying the Chances of a Technosignature 5 Feb 5:37 AM (last month)

Quantifying the Chances of a Technosignature

It’s one thing to talk about technology as we humans know it, but applying it to hypothetical extraterrestrials is another matter. We have to paint with a broad brush here. Thus Jason Wright’s explanation of technosignatures as conceived by SETI scientists. The Penn State astronomer and astrophysicist defines technology in that context as “the physical manifestations of deliberate engineering.” That’s saying that a technology produces something that is in principle observable. Whether or not our current detection methods are adequate to the task is another matter.

Image: Artist’s concept of an interesting radio signal from galactic center. But the spectrum of possible technosignature detections is broad indeed, extending far beyond radio. Credit: UCLA SETI Group/Yuri Beletsky, Carnegie Las Campanas Observatory.

A technosignature need not be the sign of industrial or scientific activity. Consider: In a new paper in The Astronomical Journal, Sofia Sheikh (SETI Institute) and colleagues including such SETI notables as Wright himself, Ravi Kopparapu and Adam Frank point out that the extinction of ancient megafauna some 12,800 years ago may have contributed to changes in atmospheric methane that fed into a period of cooling known as the Younger Dryas, to be followed by growing human agricultural activity whose effects on carbon dioxide and methane in the atmosphere would be detectable.

As a technosignature, that one has a certain fascination, but it’s not likely to be definitive in ferreting out extraterrestrials, at least not at our stage of detection technology. But Sheikh and team are really after a much less ambiguous question. We know what our own transmission and detection methods are. How far away can our own technosignature be detected? By studying the range of technosignatures we now produce on Earth, the authors derive a scale covering thirteen orders of magnitude in detectability, with radio still at the top of the heap. The work establishes quantitative standards for detectability based on Earth’s current capabilities.

We might, for example, use the James Webb Space Telescope and the upcoming Habitable Worlds Observatory to provide data on atmospheric technosignatures as far out as 5.7 light years away. That takes us interstellar, with that interesting system at Proxima Centauri in range. Let’s tarry a bit longer on this one. While carbon dioxide is implicated in manmade changes to Earth’s atmosphere, the paper points to other sources, zeroing in on one in particular:

…there are other atmospheric technosignatures in Earth’s atmosphere that have very few or even no known nontechnological sources. For example, chlorofluorocarbons (CFCs), a subcategory of halocarbons, are directly produced by human technology (with only very small natural sources), e.g., refrigerants and cleaning agents, and their presence in Earth’s atmosphere constitutes a nearly unambiguous technosignature (J. Haqq-Misra et al. 2022). Nitrogen dioxide (NO2), like CO2, has abiotic, biogenic, and technological sources, but combustion in vehicles and fossil-fueled power plants is a significant contributor to the NO2 in Earth’s atmosphere (R. Kopparapu et al. 2021).

And indeed nitrogen dioxide (NO2) is what the authors plug into this study, drawing on earlier work by some of the paper’s authors. Note the fact that biosignatures and technosignatures overlap here given how much work has proceeded on characterizing exoplanet atmospheres. It turns out that the wavelength bands that the Habitable Worlds Observatory will see best in its search for biosignatures are also those that include the NO2 technosignature, a useful example of piggybacking our searches.

But of course the realm of technosignatures is wide, including everything from the lights of cities to ‘heat islands’ (from which cities can be inferred), orbiting satellites, radio transmissions and lasers. I’m aware of no other study that combines the various forms of technosignature in a single analysis. If you rank the full range of technosignatures by detection distance, objects on a planetary surface turn out to be the toughest catch, with urban heat islands resolvable only from distances comparable to the outer planetary orbits of our own Solar System. The current technology with the most punch is planetary radar, whose pulses should be detectable as much as 12,000 light years away, although such a signal would be a fleeting and non-repeating curiosity.
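To get a feel for why the ordering comes out this way, it helps to run the inverse-square law on a concrete case. What follows is a back-of-envelope sketch in Python, not the paper’s calculation: the transmitter power is roughly that of an Arecibo-class planetary radar, and the receiver threshold is an assumed value chosen purely for illustration.

import math

LY_IN_M = 9.461e15  # one light year in meters

def max_detection_distance_ly(eirp_watts, min_flux_w_m2):
    """Distance at which an isotropic-equivalent power falls to the
    receiver's minimum detectable flux (inverse-square law)."""
    d_meters = math.sqrt(eirp_watts / (4.0 * math.pi * min_flux_w_m2))
    return d_meters / LY_IN_M

# A ~1 MW transmitter behind ~73 dB of antenna gain gives an effective
# isotropic radiated power (EIRP) near 2e13 W -- roughly Arecibo-class.
radar_eirp = 2e13
# Assumed threshold for catching a brief narrowband pulse; hypothetical.
receiver_threshold = 1e-28  # W/m^2

print(f"{max_detection_distance_ly(radar_eirp, receiver_threshold):,.0f} light years")
# Prints a figure above 13,000 -- the same order as the ~12,000 light
# years quoted for planetary radar.

The point of the exercise is the scaling: detection distance grows only as the square root of transmitted power, which is why the span between city lights and planetary radar covers so many orders of magnitude.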

SETI does find signals like that now and then. But precisely because they are non-repeating, we simply don’t know what to make of them.

Image: The maximum distances that each of Earth’s modern-day technosignatures could be detected at using modern-day receiving technology, in visual form. Also marked are various astronomical objects of interest. Credit: Sheikh et al.

Think back to the early days of SETI and ponder how far we’ve come in trying to understand what other civilizations might do that could get us to notice them. SETI grew directly out of the famous work by Giuseppe Cocconi and Philip Morrison that laid out the case for artificial radio signals in 1959, followed shortly thereafter by Frank Drake’s pioneering work at Green Bank with Project Ozma. Less known is the 1961 paper by Charles Townes and R. N. Schwartz that got us into optical wavelengths.

And while ‘Dysonian SETI’, which explicitly searches for technosignatures, is usually associated with vast engineering projects like Dyson spheres, the point here is that a civilization will produce evidence for an outside observer, evidence that deepens as that observer’s tools grow in sophistication. The search for technosignatures, then, becomes a multi-wavelength effort, one spanning a vast range of detection distances. Making all this quantitative involves a ‘detectability distance scale.’ The authors choose one known as an ichnoscale. Here’s how the paper describes it:

Using Earth as a mirror in this way, we can employ the concept of the ichnoscale (ι) from H. Socas-Navarro et al. (2021): “the relative size scale of a given technosignature in units of the same technosignature produced by current Earth technology.” An ι value of 1 is defined by Earth’s current technology. This necessarily evolves over time—for this work, we set the ichnoscale to Earth-2024-level technology, including near-future technologies that are already in development.

Considering how fast our detection methods are improving as we build extremely large telescopes (ELTs) and push into ever more sophisticated space observatories, learning the nature of this scale will become increasingly relevant. While we realized in the mid-20th Century that radio was detectable at interstellar distances, we’re now able to detect not just an intentional signal but radio leakage, at least from nearby stellar systems. That’s an extension of the parameter space into levels of power we have already demonstrated on Earth. The ichnoscale framework quantifies signatures that will gradually become detectable as our methods evolve.
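One consequence of the definition is worth spelling out. If a signature’s received flux scales linearly with the power emitted, then detection distance grows only as the square root of ι. A minimal sketch, under that assumption (which won’t hold for every kind of signature):

import math

def detection_distance_ly(iota, d_earth_ly):
    """Detection distance for a power-like technosignature at ichnoscale
    iota, given the distance at which the iota = 1 (Earth-2024) version
    is detectable. Assumes received flux ~ iota / d^2."""
    return d_earth_ly * math.sqrt(iota)

# A civilization emitting 100x Earth's radio leakage (iota = 100) is
# detectable only 10x farther out, not 100x farther:
print(detection_distance_ly(iota=100, d_earth_ly=1.0))  # -> 10.0

In other words, even dramatically louder civilizations are not dramatically easier to find, which cuts against casual extrapolation to high ι.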

We see more clearly which methods are most likely to succeed. This is an important scaling because the universe we actually live in may not resemble the one we construct in our imaginations. Let me quote the paper on this important point:

…the focus on planetary-scale technosignatures provides very specific suggestions for which searches to pursue in a Universe where large-scale energy expenditures and/or travel between systems is logistically infeasible. While science fiction is, for example, replete with mechanics for rapid interstellar travel, all current physics implies it would be slow and expensive. We should take that constraint seriously.

And with this in mind we can state key results:

1. Radio remains the way that Earth is most detectable at ι = 1.
2. Investment in atmospheric biosignature searches has opened up the door for atmospheric technosignature searches.
3. Humanity’s remotely detectable impacts on Earth and the solar system span 12 orders of magnitude.
4. Our modern-day planetary-scale impacts are modest compared to what is assumed in many technosignature papers.
5. We have a multiwavelength constellation of technosignatures; the closer the observer, the more of the constellation becomes visible (a point illustrated in the sketch after this list).
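Item 5 is easy to picture as a thought experiment. Here is a toy version in Python; the detection distances attached to each signature are placeholders of the right general flavor, not values taken from the paper:

TECHNOSIGNATURES_LY = {
    "planetary radar pulse": 12_000,
    "intentional radio beacon": 100,
    "atmospheric NO2": 5.7,
    "radio leakage": 4.0,
    "city lights": 1e-3,
    "urban heat islands": 3e-4,  # roughly outer-Solar-System scales
}

def visible_at(observer_distance_ly):
    """The parts of the constellation an observer at this distance
    could in principle detect."""
    return [name for name, d_max in TECHNOSIGNATURES_LY.items()
            if d_max >= observer_distance_ly]

# The closer the observer, the richer the constellation:
for d in (5000, 5.0, 1e-4):
    print(f"{d} ly: {visible_at(d)}")

At 5,000 light years only the radar pulse survives; at five light years the atmospheric and beacon signatures join it; an observer inside the Solar System sees the whole constellation.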

Let’s pause on item 4. The point here is that most notions of technosignatures assume technologies visible on astronomical scales, and indeed it is usually assumed that our first SETI detections, when and if they come, will involve civilizations vastly older than ourselves and technologically far superior. Planets bearing technologies like those we have today are a supremely difficult catch, because the technosignatures we are producing are faint and all but trivial compared to the Dyson spheres and starship beaming networks we typically consider. And this point seems overwhelmingly obvious:

We should be careful about extrapolating current technosignatures to scales of ιTS = 10 (or even ιTS = 2) without considering the changing context in which these technologies are being developed, used, and (sometimes) mitigated or phased out (e.g., the recovery of the ozone hole; J. Kuttippurath & P. J. Nair 2017). As another example, we are becoming aware of the negative health effects of the UHI [urban heat island] (as detailed in, e.g., A. Piracha & M. T. Chaudhary 2022); thus, work may be done to mitigate the concentrated regions of high infrared flux discussed in Section 4.3.

Indeed. How many of the technosignatures we are producing are stable? Chlorofluorocarbons in the atmosphere are subject to adjustment on astronomically trivial timeframes. The chances of running into a culture that is about to realize it is polluting itself just before it takes action to mitigate the problem seem remote. So all these factors have to be taken into account as we rank technosignature detection strategies, and it’s clear that in this “multiwavelength constellation of technosignatures” the closer we are, the better we see. All the more reason to continue to pursue not just better telescopes but better ways to get ever-improving platforms into the outer Solar System and beyond. Interstellar probes, anyone?

The paper is Sheikh et al., “Earth Detecting Earth: At What Distance Could Earth’s Constellation of Technosignatures be Detected with Present-day Technology?” Astronomical Journal Vol. 169, No. 2 (3 February 2025), 118 (full text). The Cocconi and Morrison paper is “Searching for Interstellar Communications,” Nature 184 (19 September 1959), 844-846 (abstract). The 1961 paper on laser communications is Schwartz and Townes, “Interstellar and Interplanetary Communication by Optical Masers,” Nature 190 (15 April 1961), 205-208 (abstract).


A Fast Radio Burst in a Dead Elliptical Galaxy 3 Feb 2:39 AM (last month)

A Fast Radio Burst in a Dead Elliptical Galaxy

Work is healing, so let’s get back to it. I’m enthralled with what we’re discovering as we steadily build our catalog of fast radio bursts (FRBs), close to 100 of which have now been associated with a host galaxy. These are transient radio pulses of short duration (down to a fraction of a millisecond, though some last several seconds), the first being found in 2007 by Duncan Lorimer, an astronomer at West Virginia University. Sometimes FRBs repeat, although many do not, and one is known to repeat on a regular basis.

What kind of astrophysical processes might be driving such a phenomenon? The leading candidate appears to be core-collapse supernovae, which release vast amounts of energy as stars more massive than the Sun end their lives. Out of such catastrophic events a type of neutron star called a magnetar may be produced, its powerful magnetic field pumping out X-ray and gamma-ray radiation. Young, massive stars and regions of active star formation are implicated under this theory. But as we’re learning, magnetars are only one of a possible range of candidates.

The event known as FRB 20240209A, detected in 2024 by the Canadian Hydrogen Intensity Mapping Experiment (CHIME), has dealt us a wild card. Remember, a single FRB can produce more energy in a quick burst than our Sun emits in an entire year. This one has repeatedly fired up, producing 21 pulses between February and July of last year. The problem is that it has been traced to a galaxy in which star formation has ceased. That finding is verified by data from the Gemini North telescope and the Keck Observatory using its Low Resolution Imaging Spectrometer (LRIS).
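That energy comparison is easy to sanity-check with back-of-envelope arithmetic. The burst energy below is an assumed isotropic-equivalent value for an energetic FRB, since published estimates span several orders of magnitude; it is not a measured figure for FRB 20240209A:

SOLAR_LUMINOSITY_W = 3.8e26   # the Sun's power output in watts
SECONDS_PER_YEAR = 3.15e7

sun_energy_per_year = SOLAR_LUMINOSITY_W * SECONDS_PER_YEAR  # ~1.2e34 J

# Hypothetical energetic FRB, emitted in roughly a millisecond:
frb_energy_j = 1e35

print(f"Sun, one year:  {sun_energy_per_year:.1e} J")
print(f"FRB (assumed):  {frb_energy_j:.1e} J, "
      f"{frb_energy_j / sun_energy_per_year:.0f}x the Sun's annual output")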

Yuxin (Vic) Dong is an NSF Graduate Research Fellow and second author on one of two papers recently published on the event:

“For nearby galaxies, there is often archival data from surveys available that tells you the redshift — or distance — to the galaxy. However, in some cases, these redshift measurements may lack precision, and that’s where Keck Observatory and the LRIS instrument become crucial. Using a Keck/LRIS spectrum, we can extract the redshift to a very high accuracy. Spectra are like fingerprints of galaxies, and they contain special features, called spectral lines, that encode tons of information about what’s going on in the galaxy, like the stellar population age and star formation activity. What’s really fascinating in this case is that the features we saw from the Keck/LRIS spectrum revealed that this galaxy is quiescent, meaning star formation has shut down in the galaxy. This is strikingly different from most FRB galaxies we know, which are still actively making new stars.”
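For readers curious about the mechanics Dong describes: a redshift comes from comparing where a known spectral line falls in the measured spectrum against its laboratory wavelength, and at small redshifts Hubble’s law converts that to a distance. A minimal sketch, with an invented line position chosen so the answer lands near the reported ~2 billion light years; the actual analysis in the papers is far more careful:

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s/Mpc (assumed round value)
MPC_TO_GLY = 3.262e-3  # megaparsecs to billions of light years

rest_nm = 656.28       # H-alpha rest wavelength, nm
observed_nm = 748.2    # hypothetical measured line position, nm

z = (observed_nm - rest_nm) / rest_nm
distance_gly = (C_KM_S * z / H0) * MPC_TO_GLY  # low-z approximation only

print(f"z = {z:.3f}, distance ~ {distance_gly:.1f} billion light years")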

Image: The ellipse shows the location of the FRB and the crosshairs point to its host galaxy, taken with the Gemini North telescope from Maunakea. Credit: Shah et al.

It turns out that FRB 20240209A comes from a galaxy fully 11.3 billion years old, some 2 billion light years from Earth. This is painstaking work and quite productive, for the papers’ authors report that the galaxy is both extremely luminous and the most massive FRB host galaxy yet found. Moreover, while FRBs that have been associated with their host galaxies are usually located deep within the galaxy, this one occurs 130,000 light years from the galactic center, in a region with few stars nearby.
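Those two figures imply a healthy separation on the sky, which is part of what makes the offset measurable at all. A quick small-angle estimate, using only the numbers quoted above:

import math

offset_ly = 130_000    # projected offset from the host's center
distance_ly = 2e9      # approximate distance to the host galaxy

theta_rad = offset_ly / distance_ly          # small-angle approximation
theta_arcsec = math.degrees(theta_rad) * 3600

print(f"{theta_arcsec:.0f} arcseconds")      # ~13 arcsec on the sky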

When you’re dealing with a new phenomenon, finding similar events can be productive. In this case, there is one other FRB that can be placed in the outskirts of a galaxy, the spiral M81. While FRB 20240209A occurred in an ancient elliptical galaxy, it, like the M81 event, is far from areas of active star formation, again raising the possibility that FRBs have causes we have yet to pin down. From the Eftekhari et al. paper:

Since the first host associations, investigations into FRB host demographics have offered valuable insights into the origins of FRBs and their possible progenitor systems. Such studies remain in their infancy, however. With the development of interferometric capabilities for various FRB experiments and the promise of hundreds of precisely localized events, the discovery landscape for new and unforeseen hosts and environments presents considerable potential.

And as the paper notes, FRB 20240209A isn’t the first FRB that challenges our assumptions:

Indeed, the connection of a few FRBs with remarkable environments, including dwarf galaxies (S. Chatterjee et al. 2017; C. H. Niu et al. 2022), a globular cluster (F. Kirsten et al. 2022), and the elliptical host of FRB 20240209A, implicate exotic formation channels as well as older stellar populations for some FRBs and demonstrate that novel environments offer significant constraining power for FRB progenitors. A larger sample of host associations will further uncover intriguing diversity in host environments and may identify interesting subpopulations or correlations with FRB repetition, energetics, or other burst characteristics, contributing to a clearer understanding of FRB origins.

Image: CHIME detectors. Credit: CHIME, Andre Renard, Dunlap Institute for Astronomy & Astrophysics, University of Toronto.

Clearly we have a long way to go as the FRB catalog grows. Senior author Wen-fai Fong, who was involved in both papers, likes to talk about the surprises the universe has in store for us, disrupting any possibility of scientific complacency. Instead, we are often confronted with yet another reason to revise our thinking, in what Fong refers to as “a ‘dialogue’ with the universe” as we pursue time-domain astronomy, the analysis of changes in brightness and spectra over time that is so well suited to mysterious FRBs.

The papers are Shah et al., “A repeating fast radio burst source in the outskirts of a quiescent galaxy,” Astrophysical Journal Letters Vol. 979, No. 2 (21 January 2025) (full text) and Eftekhari et al., “The massive and quiescent elliptical host galaxy of the repeating fast radio burst FRB 20240209A,” Astrophysical Journal Letters Vol. 979, No. 2 (21 January 2025) (full text).


Centauri Dreams to Resume Soon 27 Jan 8:37 AM (2 months ago)

Centauri Dreams to Resume Soon

I’d like to thank all of you who wrote comments and emails about the recent pause in Centauri Dreams. My beautiful wife Eloise passed away on January 17. It was as peaceful a death as can be imagined, and I am so pleased to say that she was able to stay at home until the end. As she had battled Alzheimer’s for eleven gallant years, death was simply a bridge that now had to be crossed. As she did with everything else in her life, she did it with class.

This is to let you know that I will be getting Centauri Dreams back into action again in about three weeks. When I began the site in 2004, my primary goal was to teach myself as much as I could about the topics we address by writing about them, which is how I’ve always tended to learn things. I’ve always welcomed comments that informed me, caught my errors and extended the discussion into new realms. No one could work with a better audience than the readers I’ve been privileged to address, and for this I will always be profoundly grateful.
