The Search for Extraterrestrial Life and Post-Biological Intelligence

Papers presented at an international symposium considering the true nature of extraterrestrial intelligence.


Introduction: The True Nature of Aliens

Is it time to re-think ET?

For well over a half-century, a small number of scientists have conducted searches for artificially-produced signals that would indicate the presence of intelligence elsewhere in the cosmos. This effort, known as SETI (Search for Extraterrestrial Intelligence), has yet to find any confirmed radio transmissions or pulsing lasers from other beings. But the hunt continues, recently buoyed by the discovery of thousands of exoplanets. For many, the abundance of habitable real estate makes it difficult to believe that Earth is the only world where life and intelligence have arisen.

SETI practitioners mostly busy themselves with refining their equipment and their lists of target solar systems. They seldom consider the nature of their prey – what form extraterrestrial intelligence might take. Their premise is that any technically sophisticated species will eventually develop signaling technology, irrespective of their biology or physiognomy.

This view may not seem anthropocentric, for it makes no overt assumptions about the biochemistry of extraterrestrials; only that intelligence will arise on at least some worlds with life. However, the trajectory of our own technology now suggests that within a century or two of our development of radio transmitters and lasers, we are likely to build machines with artificial, generalized intelligence. We are engineering our successors, and the next intelligent species on Earth is not only certain to dwarf our own cognitive abilities, but will be able to engineer its own, superior descendants by design, rather than counting on uncertain Darwinian processes. Assuming that something similar happens to other technological societies, then the implications for SETI are profound.

In September 2015, the John Templeton Foundation’s Humble Approach Initiative sponsored a three-day symposium entitled “Exploring Exoplanets: The Search for Extraterrestrial Life and Post-Biological Intelligence.” The venue for the meeting was the Royal Society’s Chicheley Hall, north of London, where a dozen researchers gave informal presentations and engaged in the type of lively dinner-table conversations that such meetings inevitably spawn.

The subject matter was broad, ranging from the multi-pronged search for habitable planets and how we might detect life to the impact of both the search and an eventual discovery. However, the matter of post-biological intelligence – briefly described above – or the possibility of non-Darwinian evolutionary processes was an impetus for many of the symposium contributions.

We present here short write-ups of seven of these talks. They are more than simply interesting: they suggest a revolution in how we should think about, and search for, our intellectual peers. Indeed, they suggest that “peers” may be too generous to Homo sapiens. As these essays argue, the majority of the cognitive capability in the cosmos may be far beyond our own.

-- Seth Shostak

This symposium was chaired by Martin J. Rees, OM, Kt, FRS and Paul C.W. Davies, AM, and organized by Mary Ann Meyers, JTF’s Senior Fellow. Also present was B. Ashley Zauderer, Assistant Director of Math and Physical Sciences at the Templeton Foundation.



DISCUSSING ALIENS: CONSTRAINTS FROM CHEMISTRY AND DARWINISM

Steven A. Benner
The Foundation for Applied Molecular Evolution
Firebird Biomolecular Sciences LLC
13709 Progress Boulevard, Alachua FL 32615
sbenner@ffame.org

ABSTRACT

The definition of life as a “self-sustaining chemical system capable of Darwinian evolution” captures what we believe is the only mechanism for organic matter to organize itself to create behaviors that we value in biologic systems. However, because its mutations can neither reflect the current environment (Lamarckianism) nor anticipate future environments, Darwinism requires the death of children simply to maintain the capacity for future evolution. Their death is also required to create the positive adaptations that are required to manage changing environments. However, thanks to our intelligence, humankind is on the verge of escaping Darwinism via germline DNA modification (for example, using CRISPR). If technological advances occur in parallel as intelligent societies advance, any alien species likely to encounter us before we encounter them is likely to have itself escaped Darwinism. Anticipating this, synthetic biology is creating, in the laboratory, alternative systems that might be Lamarckian without needing to be intelligent. At least speculatively, unintelligent Lamarckian systems could evolve more rapidly than unintelligent Darwinian systems, precisely because they do not waste resources on dead offspring. Indeed, a survey of modern terran molecular biology suggests that such a system may have operated on early Earth, but then was supplanted once translation arose. Thus, a second example of life created in the laboratory might actually represent a path that natural history on Earth began to follow, but later decided not to continue.

INTRODUCTION

Two decades ago, a committee empaneled by NASA defined life as “a self-sustaining chemical system capable of Darwinian evolution” [1]. This definition makes solid contact with reality, especially in the biosphere that we see around us. Indeed, many alternative proposed definitions are actually not definitions of life (my favorite is life as “a nontrivial trajectory through phase space”). Rather, they are definitions of models for life. As such, they do not have any particular experimental touchstone. And I am not prepared at this moment to join Jack Szostak in his view that no definition-theory of life is necessary [2].

On the contrary, the “NASA definition” is quite useful in guiding “origins of life” studies. Organic materials, if left to themselves in the presence of energy, devolve to give increasingly complex tars. This is an experimental observation, repeated many times in many kitchens. However, another observation is well-validated: If that organic matter also has access to Darwinism, then it generates life essentially everywhere.

Thus, according to this definition-theory, the question that must be answered to understand how life originates asks how, during the devolution of prebiotic organic matter, a chemical system able to support Darwinism might emerge. This is a “juicy” target for experiments. It is also a good target in the search for life, as we can easily define molecular structures that are necessary for evolvable biopolymers, including a polyelectrolyte backbone [3]. We can look for those molecular structures as biosignatures.

However, we are reminded that a definition captures within it a theory [4], and this definition is no different. This particular definition communicates to the world that its drafters felt that Darwinism is the only mechanism by which matter can self-organize to give the properties that we value in biology. It is conceivable that we might encounter an entity that has all of the properties that we value in life, including the ability to converse with us, but lacks access to Darwinism, or perhaps lacks a chemical foundation. It would therefore fall outside of this definition-theory. Science fiction offers many of these concepts. But “the ability to conceive” is weak evidence for existence. In fact, the reason why we do not now change this definition is because we do not believe that such life actually can exist.

Now, unless we are missing something big, explorations of the solar system over the past half-century have failed to place in our hands anything that matches this definition-theory. The closest that we have come is the suggestion that Mars might harbor single-celled organisms. This motivates us to ask about origins on Mars. Once this question is raised, reasons can be identified to see Mars as a site preferable to Earth for the origin of life [5]. For example, the dryness of Mars, relative to Earth, helps resolve a paradox: genetic biopolymers (like RNA) are easily corroded by water, and yet water is apparently needed as the solvent in which those biopolymers function.

However, the discovery of extrasolar planetary systems greatly expands the number of places where these questions can be asked. Even ahead of their actual observation, we can expect the presence of as many as a hundred billion earthlike planets in the galaxy, one or two for each star. Could natural history on those extrasolar planets have relieved life from the constraints of chemistry or the constraints of Darwinism?

Both possibilities are actually emerging from intelligence on Earth. Darwinism involves a molecular system that is replicated with errors, where the errors are themselves replicable. According to theory, any molecular system that has these properties should have access to the behaviors that we value in life.

The DNA, RNA, and proteins that we see around us are evidence of this hypothesis. Mutations in DNA replication create imperfect replicates, daughter DNA molecules that have slight changes in their nucleotide sequence. When the daughter sets out to replicate her DNA molecules, the changed DNA is the starting point. The mechanism for DNA replication allows the information in the imperfections that distinguish the mother from the daughter to be passed to the granddaughter with the same fidelity as the information that was perfectly transmitted from the mother.

In this respect, replication of DNA is quite unlike, for example, the replication of “fire”, often considered as a counter example for popular definitions of life. Fire does reproduce, by sending sparks into unburned territory. Fire does consume free energy. And “daughter” fires are different from the parental fire. However, those differences are not passed on to their descendent fires. Fire, therefore, is not life.

Darwinism has, however, other pieces of baggage. In particular, the errors in replication must be random with respect to future value. They cannot be “prospective” with respect to fitness. Nor can they arise from direct feedback from the environment to the genetic system. To be consistent with Darwinism, the errors must be random with respect to current and future fitness needs.

This unfortunate feature of Darwinism requires that “babies must die”. To allow adaptation to changing environmental conditions, mutations must be allowed. However, since those mutations cannot reflect either current or future demands for fitness, a sizeable fraction (and, according to most textbooks, a large majority) of these must be disadvantageous to fitness, some to the point of being lethal. Even in a stationary environment where no mutations are needed (assuming that the parent has already attained genetic “perfection” for that environment), detrimental mutations must still occur so that life can maintain the capacity to evolve should the environment change.
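As a toy illustration of this mutation load (a sketch with arbitrary parameters, not a biological model), consider a parent already “perfect” for a static environment; because copying errors are blind to value, most offspring are strictly less fit:

```python
import random

random.seed(1)

GENOME_LEN = 100      # arbitrary toy genome length
MUTATION_RATE = 0.02  # per-site probability of a copying error
OFFSPRING = 10_000

# The parent has already attained genetic "perfection": fitness is the
# number of sites matching the all-ones optimal genome.
optimal = [1] * GENOME_LEN

def replicate(parent):
    """Copy a genome with random errors that are blind to fitness value."""
    return [(1 - g) if random.random() < MUTATION_RATE else g for g in parent]

worse = sum(1 for _ in range(OFFSPRING)
            if sum(replicate(optimal)) < GENOME_LEN)
print(f"{worse / OFFSPRING:.1%} of offspring are less fit than the parent")
# With these toy numbers, roughly 87% of offspring carry at least one
# (necessarily deleterious) mutation, yet the mutation rate cannot be set
# to zero without losing the capacity to evolve.
```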

Now, the standard “RNA first” model for the origin of life is that prebiotic materials spontaneously devolving gained access to Darwinism when they gave rise to an RNA molecule that was able to catalyze the template-directed polymerization of RNA. The first round of that polymerization would create a Watson-Crick copy of the template; the second would deliver a duplicate of the template, if the replication were perfect. However, that replication would almost certainly be imperfect. But those imperfections would then be replicable, and Darwinism would be off and running.

However, as life advances, especially if it becomes intelligent, the destruction of babies for the purpose of advancing the fitness of a genetic pool, on the low chance that a random mutation will have future value, would easily be seen as a waste of resources. This would go double for mutations that must occur in a static environment to maintain evolvability should the environment happen to change in the future. Would it not be better for life to be Lamarckian, allowing at least direct feedback from the environment to the genome to allow the genome to become fitter in the present, without needing to extinguish infants? And would it not be better still if those mutations could be prospective, where an intelligent evolving entity anticipated what future information would be needed in the gene pool, and arranged to get it?

This is not, after all, total science fiction. We are perhaps a scientific generation away from being able to alter, by direct and deliberate intervention, the genetic information in our germlines. Thus, our children need not have mutations that undermine current demands on fitness. If that technology were to be securely in place, we would have access to Lamarckianism. We could remove from our germlines the mutations that currently lower our fitness (like hemophilia). And if we could also predict future genomic needs for fitness (a more difficult challenge), so much the better.

Interestingly, once we had access to Lamarckianism, we could easily lose our capability to support Darwinian evolution. Now, according to the “NASA” definition-theory of life, if we lost our access to Darwinism in favor of these much better modes of evolving, we would no longer be “life”. But no problem. We would simply change that definition-theory.

What about alien biology? It seems that if we were to encounter an alien life form, we would most likely encounter it in a non-intelligent Darwinian form, absent a molecular concept that would allow Lamarckianism or prospective mutation. Our exploration of the Solar System over the past 50 years almost certainly rules out the presence of another system intelligent enough to implement germline gene editing. And we are currently unable to traverse the distances to another star where we might find such an intelligent system.

But that is not the case for any alien life form that encounters us from an extrasolar locale. If we assume that technological advances move approximately in parallel, we might argue that any alien life that has learned how to travel between stars would almost certainly have learned how to make its genome fit without needing to have babies die. At the very least, it would have altered its biochemistry to make it better suited for interstellar travel. Certainly, our terran biochemistry is not well-suited for interstellar travel; our DNA simply would not tolerate the high-energy physics that it would encounter.

But could Lamarckianism have arisen without intelligence? A consideration of the constraints of replication chemistry suggests that it might, and perhaps did on early Earth. In modern terran life, information cannot feed back from the environment directly to the genome because of the unidirectionality of ribosome-based translation. This system can use a sequence of nucleotides in RNA to encode the sequence of amino acids in proteins. However, it has no replication mechanism that allows the sequence of amino acids in a protein to encode the sequence of nucleotides in an RNA molecule. There is no “reverse translation”.

But conceive of an alternative life form that does not use proteins to create “phenotype”. Here, instead of being a three-biopolymer life form (like we are), let us consider the possibility that a two-biopolymer system might support Darwinian evolution, a form of the “RNA World” model for early life on Earth [6]. Those two biopolymers would be:

1. A catalytic biopolymer, which folds, has multiple functional groups, and has a large diversity of biophysical properties depending on its sequence, the diversity required for rich catalytic potential.

2. A genetic biopolymer, designed to not fold, with no functional groups beyond those needed for genetic transfer and little catalytic capability, but with biophysical properties that remain largely unchanged with changing sequence, all required for genetic potential.

Then, let us assume that the second biopolymer directly encodes the first by process of “transcription” through base pairing, just as DNA encodes RNA in processes catalyzed by modern RNA polymerases. Then, since base pairing is reciprocal, the RNA can also code the synthesis of a complementary DNA, just as reverse transcriptases do in modern terran biology.

This system has the potential to be Lamarckian. An RNA transcript might find itself mutated to be better able to meet a current fitness need. If so, it could be reverse transcribed back into the genome. Without the need to have any children die.
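As a purely illustrative sketch of the contrast (the genome, fitness function, and rates below are invented, and this is not a model of any real chemistry), compare a Darwinian population, which must generate and cull random variants, with a Lamarckian lineage that writes a beneficial change back into its own genome and reverts a harmful one:

```python
import random

random.seed(0)

TARGET = [1] * 50  # a toy "fitness peak"

def fitness(g):
    return sum(a == b for a, b in zip(g, TARGET))

def mutate(g, mu=0.02):
    return [(1 - x) if random.random() < mu else x for x in g]

def darwinian(generations=200, pop_size=50):
    """Random mutation plus selection: unfit offspring are discarded."""
    pop = [[0] * len(TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(g) for g in pop for _ in range(2)]  # 2 children each
        pop = sorted(offspring, key=fitness, reverse=True)[:pop_size]
    return fitness(pop[0])

def lamarckian(steps=200):
    """Direct feedback: the same individual edits its own genome, keeping
    beneficial changes and reverting harmful ones; no offspring die."""
    g = [0] * len(TARGET)
    for _ in range(steps):
        trial = mutate(g)
        if fitness(trial) >= fitness(g):
            g = trial
    return fitness(g)

print("Darwinian best fitness :", darwinian())   # cost: 100 offspring per generation
print("Lamarckian fitness     :", lamarckian())  # cost: 1 trial edit per step
```

Note the resource accounting in the comments: the Darwinian run evaluates (and mostly discards) 20,000 offspring, while the Lamarckian lineage performs a single trial edit per step.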

Today, of course, we regard proteins as intrinsically better for catalysis than RNA. This comes from many efforts to evolve RNA in the laboratory to perform catalytic function. With just four nucleotides, RNA has very little of the information density of modern terran proteins. Thus, RNA catalysts are plagued by alternative folding, where inactive forms compete with active forms [7]. Further, RNA has very little of the chemical functionality found in proteins. Protein catalysts rely on the functionality of such molecules as carboxylate, thiol, imidazole, and ammonium groups that are present on the amino acid side chains; all are missing from standard encoded RNA.

However, these constraints would be set aside if we were to expand the number of building blocks in the two biopolymers that can form Watson-Crick pairs, and add functionality to those extra building blocks to enhance their power as catalysts.

Synthetic biology suggests that this is possible. From work in the laboratory, we now know that as many as 12 different nucleotides forming 6 different nucleobase pairs are possible within the Watson-Crick nucleobase pairing “concept” [8]. These have been made in the laboratory by synthetic biologists, and a molecular biology has been developed to support many of them. Synthetic biologists have also synthesized versions that carry groups interesting for catalysis, including ammonium, carboxylate, imidazole, and nitro, the last not even found in modern terran proteins, but which serves as a universal binding entity. The system is evolvable, again in the laboratory, and appears to be a richer reservoir of catalytic functionality than standard nucleic acids [9]. NASA and the Templeton World Charity Foundation are presently supporting us as we try to get Lamarckianism out of this two-biopolymer system.
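A trivial sketch of how reciprocal base pairing supports both transcription and “reverse transcription” over an expanded alphabet. The extra letters below (X1/Y1 through X4/Y4) are hypothetical placeholders, not the actual nomenclature of the laboratory systems cited in [8]:

```python
# The four standard RNA letters pair A-U and G-C; the extra letters below
# are hypothetical stand-ins for four additional Watson-Crick pairs.
PAIRS = [("A", "U"), ("G", "C"),
         ("X1", "Y1"), ("X2", "Y2"), ("X3", "Y3"), ("X4", "Y4")]

COMPLEMENT = {}
for a, b in PAIRS:
    COMPLEMENT[a], COMPLEMENT[b] = b, a  # pairing is reciprocal

def transcribe(strand):
    """Template-directed copying: genetic strand -> catalytic strand."""
    return [COMPLEMENT[base] for base in strand]

gene = ["G", "A", "X1", "C", "Y3", "U"]
catalyst = transcribe(gene)

# Because pairing is reciprocal, "reverse transcription" is the same map,
# which is what lets an improved catalyst be written back into the genome.
assert transcribe(catalyst) == gene
print(gene, "->", catalyst)
```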

But these are the products of “intelligent design”. Could such a two-biopolymer life form have arisen without the guiding hand of an intelligent organic chemist?

Interestingly, some surviving features of modern molecular biology suggest that terran life in the RNA world tried to follow this path, and succeeded for a while. Many RNA nucleotides in the oldest RNA molecules (tRNA, rRNA) actually have ammonium groups, carboxylate groups, and thiol groups. These may be vestiges of a time when RNA was being pressed into service as the platform for a catalytic molecule in the RNA World. According to this view, DNA itself acquired the structural changes needed to make it better suited as a genetic specialist. For example, the thymine in DNA was presumably recruited from uracil, methylated to convert an RNA nucleobase into something better suited for genetics.

This raises the question of whether early biology exploited the Lamarckian potential of a two-biopolymer system in the RNA world. If it did, it is conceivable that a system capable of Lamarckian evolution adapts more rapidly than a system having access only to Darwinian evolution. It also raises the question of whether, when we finally encounter biology on Mars (as we hope to do), it will have a two-biopolymer architecture and be Lamarckian.

Whatever its advantages, terran life clearly decided to dispense with a two-biopolymer, potentially Lamarckian, molecular biology. Perhaps it found a four-letter RNA alphabet, even with added functional groups, simply too low in information density to compete with the catalytic power of proteins. Perhaps the power of proteins as a platform for phenotype was just so much larger than that power in RNA, even 12-letter RNA with abundant functionality, that proteins were preferred, notwithstanding the complexity of ribosome-based translation. We are doing experiments to see whether proteins continue to have an overall advantage over RNA once the RNA has 12 different replicable building blocks supporting a half-dozen functional groups.

While we await the outcome of these experiments, the availability in the laboratory of a functioning set of nucleotides implementing all six easily accessible patterns of hydrogen bonds, each adjusted (given current theory) for optimal performance in genetic or catalytic systems, suggests that natural history might have followed a different path after the emergence of the RNA World. Rather than inventing the ribosome and adopting DNA as its genetic molecule, natural history might simply have evolved RNA into the catalytic biopolymer. It could have done so by altering various biosynthetic routes to the nucleotides it had already acquired (G, A, C, and U) to add a few more with additional hydrogen-bonding patterns, and then attaching functional groups to these, ending with 12 different nucleotide building blocks.

This alternative natural history would have allowed terran life to avoid the kinetically slow step of inventing ribosomes, with the time required and the path leading through tRNA molecules and aminoacyl charging enzymes. Further, the two-biopolymer life form would have (forever after) spared the biosphere the continuing “thermodynamic” expense of maintaining a third biopolymer; the translation machinery consumes half of the resources of a bacterial cell. Instead, the biosphere could have supported Darwinian evolution with just two biopolymers: a DNA-like biopolymer with an expanded set of nucleobases optimized for genetics, and an RNA-like biopolymer with many functionalized nucleobases needed for catalysis.

And it would have prevented the need to kill children to maintain and expand the information in the genetic biosphere.

REFERENCES

[1] Joyce, G. F. 1994, foreword to Origins of Life: The Central Concepts, D. W. Deamer and G. R. Fleischaker, eds., Jones and Bartlett (Boston)

[2] Szostak, J. W. 2012, “Attempts to define life do not help to understand the origin of life,” J. Biomol. Struct. Dyn. 29, pp. 599 – 600

[3] Benner, S. A. and Hutter, D. 2002, “Phosphates, DNA, and the search for nonterrean life: A second generation model for genetic molecules,” Bioorg. Chem. 30, pp. 62 – 80

[4] Cleland, C. E. and Chyba, C. F. 2002, “Defining ‘life’,” Orig. Life Evol. Biosph. 32, pp. 387 – 393

[5] Benner, S. A. and Kim, H.-J. 2015, “The case for a Martian origin for Earth life,” Instruments, Methods, and Missions for Astrobiology XVII, R. B. Hoover, G. V. Levin, A. Yu. Rozanov, and N. C. Wickramasinghe (eds.), Proceedings of SPIE 9606, 96060C

[6] Benner, S. A., Allemann, R. K., Ellington, A. D., Ge, L., Glasfeld, A., Leanz, G. F., Krauch, T., Macpherson, L. J., Moroney, S. E., Piccirilli, J. A., and Weinhold, E. G. 1987, “Natural selection, protein engineering and the last riboorganism. Rational model building in biochemistry”, Cold Spring Harbor Symp. Quant. Biol. 52, pp. 53 – 63

[7] Carrigan, M., Ricardo, A., Ang, D. N., and Benner, S. A. 2004, “Quantitative analysis of a deoxyribonucleotide catalyst obtained via in vitro selection. A DNA ribonuclease,” Biochemistry 43, pp. 11446 – 11459

[8] Benner, S. A., Yang, Z., and Chen, F. 2010, “Synthetic biology, tinkering biology, and artificial biology. What are we learning?”, Comptes Rendus 14, pp. 372 – 387

[9] Zhang, L., Yang, Z., Sefah, K., Bradley, K. M., Hoshika, S., Kim, M.-J., Kim, H.-J., Zhu, G., Jimenez, E., Cansiz, S., Teng, I.-T., Champanhac, C., McLendon, C., Liu, C., Zhang, W., Gerloff, D. L., Huang, Z., Tan, W.-H., and Benner, S. A. 2015, “Evolution of functional six-nucleotide DNA”, J. Am. Chem. Soc. 137, pp. 6734 – 6737



BIO-SIGNATURES AND TECHNO-SIGNATURES BEYOND EARTH

Paul C.W. Davies
Beyond Center for Fundamental Concepts in Science
Arizona State University, P.O. Box 870506
Tempe, AZ 85287–0506 
Paul.Davies@asu.edu


ABSTRACT

Among the many uncertainties that feature in the Drake equation, the least certain quantity is fl, the fraction of earthlike planets on which life arises. Because the process that transformed non-life into life is unknown, it is meaningless to estimate this probability, which might be infinitesimally small. Arguments to the contrary are unconvincing. The best hope for determining that fl is not close to zero would be the discovery of a second sample of biology, or post-biology, either on Earth or beyond. A variety of search strategies suggests itself.

HABITABLE IS NOT THE SAME AS INHABITED

The founder of SETI, Frank Drake, summarizes the factors that determine the number of communicating civilizations in our galaxy in terms of an equation:

N = R* fp ne fl fi fc L 

where

R* = rate of formation of Sun-like stars in the galaxy

fp   =  fraction of those stars with planets

ne  =  average number of earthlike planets in each planetary system

fl   =  fraction of those planets on which life emerges

fi   =  fraction of planets with life on which intelligence evolves

fc  =   fraction of those planets on which technological civilization and the ability to communicate emerges

L  =   the average lifetime of a communicating civilization.

The number N represents how many “radio-active” civilizations exist in the galaxy. The symbols on the right are quantities we must estimate – guesstimate would be more apt – in order to arrive at the value of N.
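Because the equation is a bare product, N is linear in every factor, and a single near-zero term drags N toward zero. A two-line calculation (with placeholder values chosen purely for illustration, not estimates endorsed here) shows how the least-known factor dominates:

```python
# The Drake equation is a plain product, so N scales linearly with each
# factor. All values below are placeholders for illustration only.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

for f_l in (1.0, 1e-3, 1e-9):  # sweep the least-known factor
    N = drake(R_star=1.0, f_p=1.0, n_e=1.0, f_l=f_l, f_i=0.1, f_c=0.1, L=1e4)
    print(f"f_l = {f_l:g}  ->  N = {N:g}")
```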

In the five decades since Frank Drake formulated his eponymous equation, our understanding of astrophysics and planetary science has advanced enormously. The first three terms of the equation refer to factors that are now known with reasonable precision, due in no small part to the discovery of enough extrasolar planets for meaningful statistics to be developed.

Unfortunately this progress has not been matched by a similar leap in understanding of the remaining factors – the biological ones. In particular, the probability of life emerging on an earthlike planet, fl, remains completely unknown. In the 1960s and 1970s, most scientists assumed that the origin of life was a freak event, an accident of chemistry of such low probability that it would be unlikely to have occurred twice within the observable universe.

Today, however, many distinguished scientists express a belief that life will be almost inevitable on a rocky planet with liquid water – a “cosmic imperative” to use the evocative term of Christian de Duve [1]. But this sentiment is based on little more than fashion. Indeed, it is easy to imagine plausible constraints on the chemical pathway to life that would make the probability of its successful passage infinitesimally small. In the case of the fifth term in the Drake equation – the probability that intelligence will evolve if life gets going – at least we have a well-understood mechanism (Darwinian evolution) on which to base a probability estimate (though it still remains deeply problematic to do that). The same is true of the remaining terms. Thus the uncertainty in the number of communicating civilizations in the galaxy, N, is overwhelmingly dominated by fl.

In the important hunt for earthlike, extrasolar planets, astronomers are busy cataloguing habitable real estate across the galaxy. The qualification “earthlike” is admittedly vague.  Nevertheless it is clear that our galaxy alone contains millions if not billions of worlds that are earthlike in some respect, and thus potential abodes for life.  However, while the qualification “earthlike” may be a necessary condition for life to arise, it is far from sufficient.  “Earthlike” refers to a setting, not a process. To establish life on an earthlike planet, all the necessary physical and chemical steps have to happen, and as we don’t know what those steps are, we are ignorant as to how many habitable planets do, in fact, host some form of life.

Drake himself favors a value of fl close to unity. That is, given an earthlike planet, it is very likely that life will arise there. It is a sentiment echoed by planet hunter Geoff Marcy, who recently said he would “bet his house” on the galaxy teeming with life.  By contrast both Francis Crick [2] and Jacques Monod [3] argued that life’s origin was a freak event (“almost a miracle,” according to Crick). Unfortunately these disagreements are based almost entirely on philosophical judgments rather than scientific evidence, for the simple reason that science remains largely silent on the specifics of the pathway from non-life to life. One may estimate the odds of a process occurring only if the process is known. One cannot estimate the odds of an unknown process.

THE “UP-IT-POPS” FALLACY

Carl Sagan said: “the origin of life must be a highly probable affair; as soon as conditions permit, up it pops!” [4] While it is certainly the case that the rapid appearance of life on Earth is consistent with its genesis being probable, it is equally consistent with it being exceedingly improbable, as was pointed out by Brandon Carter over three decades ago [5]. The essence of Carter’s argument is that any given earthlike planet will have a finite “habitability window” during which life might emerge and evolve to the level of intelligence. On Earth, this window extends for about 4 billion years – from about 3.8 billion years ago to about 800 million years hence, when the Sun will be so hot that the planet will be an uninhabitable furnace. Suppose, reasoned Carter, life’s origin is so improbable that the expectation time for it to occur is many orders of magnitude longer than this habitability window. And further suppose that, in addition to the (improbable) transition from non-life to life, several other very hard steps are needed before intelligence is attained (for example, eukaryogenesis, sex, multi-cellularity, evolution of a central nervous system). If in all there are n hard steps, each of which has an expectation time much longer than the habitability window, and each of which is necessary before the next step may be taken, then a simple statistical argument leads to a relationship between n and the duration of the window.

Carter is able to conclude that there are about 5 extremely improbable steps spaced about 800 million years apart involved in attaining intelligent life on Earth. Significantly, the first step is also bracketed by an interval of 800 million years. That is, if the emergence of life was an exceedingly improbable process (but of course one that had to happen for humans to be here and ponder it) then probability theory predicts it should have happened fairly rapidly – within 800 million years. Another way of expressing it is that, unless life had got going quickly, we would not be here to discuss it three billion years later.
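Carter’s statistical argument can be checked numerically. In the Monte Carlo sketch below (all numbers arbitrary), each hard step has an exponentially distributed waiting time whose mean exceeds the habitability window; conditioned on all n steps nonetheless finishing inside the window, the completion times land near k·T/(n+1), so the first step occurs early. The mean wait is kept only modestly above the window so that brute-force rejection sampling stays feasible; the effect sharpens as the ratio grows:

```python
import random

random.seed(42)

WINDOW = 4.0       # habitability window, billions of years
MEAN_WAIT = 10.0   # expected waiting time per hard step (> WINDOW)
N_STEPS = 5
TRIALS = 2_000_000

successes = []
for _ in range(TRIALS):
    t, times = 0.0, []
    for _ in range(N_STEPS):
        t += random.expovariate(1.0 / MEAN_WAIT)  # memoryless waiting time
        times.append(t)
    if t <= WINDOW:  # condition on intelligence arising within the window
        successes.append(times)

assert successes, "increase TRIALS"
# Conditioned on overall success, the k-th step completes at roughly
# k * WINDOW / (n + 1): evenly spaced, with the first step happening early.
print(f"{len(successes)} successes in {TRIALS} trials")
for k in range(N_STEPS):
    avg = sum(s[k] for s in successes) / len(successes)
    print(f"step {k + 1}: mean ~ {avg:.2f} Gyr "
          f"(prediction {WINDOW * (k + 1) / (N_STEPS + 1):.2f})")
```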

IMPROVED UNDERSTANDING OF LIFE’S ORIGIN

Perhaps we can guess a plausible value of fl by studying the chemistry that underlies life? Attempts to re-create the chemical pathway from non-life to life have been pursued since the pioneering work of Haldane [6] and Oparin [7] in the 1920s, and were boosted by the famous experiment of Stanley Miller in 1952 [8].  However, life is so complex that the results of pre-biotic synthesis go only a tiny way down the long pathway to life and tell us little about potentially extremely hard chemical obstacles at later stages.  

There is, however, a more serious issue lurking here. Life is clearly more than complex chemistry. Chemistry deals with concepts such as molecular shapes and binding strengths, reaction rates and thermodynamics.  By contrast, when biologists describe life they use terms like signals, coded instructions, digital error correction and editing, and translation – all concepts from the realm of information processing.

While chemistry provides the substrate for life, the informational aspects (which are instantiated in the chemistry) require an origin story of their own [9]. In a nutshell, pre-biotic chemical experiments help us understand how the hardware of life might (might) have come to exist, but so far have cast little light on the origin of the software aspect.  Because life requires both specialized hardware and specialized software to come out of as-yet little-understood physical processes, we are very far from being able to quantify the likelihood of getting both in a plausible molecular soup.

A SECOND SAMPLE OF LIFE

If we were to be presented with a second sample of life, and we could be sure that it arose independently of known life, then the case would instantly be made that fl is not infinitesimally small. Various scenarios have been suggested for the discovery of a second sample of life.

SETI succeeds!

In the event that mankind obtains incontrovertible evidence of the existence of alien technology, then we could conclude that life must have arisen in at least one other location in the universe [10].  (This conclusion assumes, of course, that the pathway to technology involves biology and intelligence.  Logically there is no reason why this has to be the case but it is the default assumption.)  Note that the conclusion would follow even in the absence of an actual message or signal from an alien civilization – the “gold standard” of SETI. It would be sufficient to discover any signs of technology.

Synthetic biology

The burgeoning field of synthetic biology [11], in which new forms of life are engineered in the laboratory, might suggest that life is literally easy to make, and that it may manifest itself in a wide range of molecular forms.  Although synthetic biology currently falls far short of constructing living organisms from scratch (as opposed to re-wiring or re-programming existing organisms), one may imagine that in the future this will be possible. Would we then conclude that the transition from non-life to life is not especially difficult and therefore likely to be widespread in nature? The answer is no. Creating life in the laboratory will demand a great deal of sophisticated scientific equipment, a host of purified substances, and a particular sequence of chemical and physical steps, each of which is likely to take place under tightly controlled conditions; indeed, under different conditions for each step. (It will also require a large budget!)

But above all, creating life in the laboratory entails the attentions of an intelligent designer – the scientist – who embarks on the venture with a particular end product in mind and a well thought-out sequence of steps to attain it. So it may turn out to be relatively easy (if expensive) for scientists to make life, but that does not mean it is also easy for nature to do so.

Life on Mars

The best hope for finding life in the solar system seems, by common consent, to be on the planet Mars. The problem is that Mars and Earth are not biologically quarantined from each other.  Over the history of the solar system, these two planets have traded a prodigious amount of material. The existence of many known Mars meteorites demonstrates that rocky ejecta can arrive on another planet relatively unscathed, and the same could be assumed about any hitchhiking organisms [12].  Given this traffic of material over billions of years, it seems very likely that if life were to have arisen on Mars, it would very soon be transported to Earth to seed our planet (and vice versa).  So finding life (past or present) on Mars would not of itself demonstrate a second genesis.

Extra-solar planets

Establishing the presence of life on an extrasolar planet from spectroscopic data alone is challenging.  Instruments capable of detecting possibly biologically-associated atmospheric gases are being planned, but it may be a long time before we have that capability.

A shadow biosphere on Earth

If life does form readily in earthlike conditions, then we might expect it to have started many times on Earth itself.  All known life on Earth is interrelated, with a common genetic code and a common biochemical scheme involving the same suite of nucleotides and amino acids, the manufacture of proteins by ribosomes and several other specific universal features, suggesting a common origin. The discovery of just a single micro-organism so biochemically distinct from known life (i.e. so alien) that it could not belong to this familiar tree would be powerful evidence for an independent genesis event. Almost all known species are microbial, and at the present time scientists have only scratched the surface of this microbial realm. Thus there is plenty of room at the bottom for microbes that are biochemically weird enough to qualify for an alternative form of life [13], [14].

POST-BIOLOGICAL INTELLIGENCE

The second least understood term in the Drake equation is the last: L, the longevity of a civilization. Sagan fretted that L might be rather short if alien civilizations mirrored our own, with its known warlike tendencies.  There is a strong case that emotion-bound biological intelligence is likely to be short-lived, not only for Sagan’s reason (nuclear annihilation), but also because biological intelligence is surely but a transitory phase in the evolution of intelligence in the universe. Already on Earth much intellectual heavy-lifting is being done by computers, and we can foresee a time when designed artificial systems will outsmart humans in almost every capacity. An extraterrestrial civilization of, say, ten million years duration is most unlikely to be dominated by flesh-and-blood sentient beings; far more likely, it will be dominated by complex designed and manufactured systems of the nth iteration. Looking for techno-signatures of post-biological systems is a huge challenge given that futurists tend to extrapolate from human civilization, which is shaped by mainly biological factors.  Given the unknowns, it makes sense to be alert to the possibility (however remote) of alien techno-signatures in any observational database to which we have ready access, including of course SETI data, but also data from any astronomical, biological, geological and planetary databases [15].

REFERENCES

[1]  De Duve, C. 1995, Vital Dust, Basic Books (New York)

[2]  Crick, F. 1981, Life Itself: Its Origin and Nature, Simon and Schuster (New York)

[3]  Monod, J. 1972, Chance and Necessity, trans. by A. Wainhouse, Collins (London)

[4]  Sagan, C. 1995, Bioastronomy News, 7 (4), 1

[5]  Carter, B. 1983, “The anthropic principle and its implications for biological evolution,” Philosophical Transactions of the Royal Society of London A 310, 347

[6]  Haldane, J. B. S. 1968, “The origin of life,” in Science and Life, Pemberton Publishing (London)

[7]  Oparin, A. I. 1924, Proiskhozhdenie zhizny (The Origin of Life), trans. Ann Synge, in The Origin of Life, 1967, J. D. Bernal, ed., Weidenfeld and Nicholson (London)

[8]  Miller, S. L. 1953, “A production of amino acids under possible primitive earth conditions,” Science 117, 528

[9] Davies, P. C. W. and Walker, S. I. 2012, “The algorithmic origins of life,” J. R. Soc. Interface 10, doi: 10.1098/rsif.2012.0869

[10] Davies, P. 2010, The Eerie Silence, Penguin (New York)

[11] Benner, S. A., Chen, F., and Yang, Z. Y. 2011, “Synthetic biology, tinkering biology, and artificial biology: a perspective from chemistry,” Synthetic Biology, Pier Luigi Luisi and Cristiano Chiarabelli, eds., Wiley, 69 – 106

[12]  Davies, P. 1996, “The transfer of viable micro-organisms between planets,” Evolution of Hydrothermal Ecosystems on Earth (and Mars?): Proceedings of the CIBA Foundation Symposium No. 20, Gregory Brock and Jamie Goode, eds., Wiley (New York); see also Paul Davies 2003, The Origin of Life, Penguin (New York)

[13]  Davies, P. C. W. and Lineweaver, C. H. 2005, “Searching for a second sample of life on Earth,” Astrobiology 5, 154

[14]  Davies, P.C.W., Benner, S. A. Cleland, C. E., Lineweaver, C. H., McKay, C. P. and Wolfe-Simon, F. 2009 “Signatures of a shadow biosphere,” Astrobiology 9, 241

[15]  Davies, P. C. W. and Wagner, R. V. 2011, “Searching for alien artifacts on the moon,” Acta Astronautica 89, 261



INTELLIGENT EVOLUTION: AN APPROACH TO OPEN-ENDED EVOLUTION

Chrisantha Fernando
Google, DeepMind
chrisantha@google.com

ABSTRACT

I will argue that evolution by natural selection scores highly on a formal definition of universal intelligence, and therefore that if we produce a system capable of open-ended evolution in a computer, then we will have created a necessary condition for ‘post-biological’ digital intelligence. Natural selection satisfies several other criteria for intelligence, such as creativity; in fact it has even re-invented itself at least twice, in the immune system and as cultural evolution. The origin of digital evolution may constitute the next major transition in evolution [1], in which the human cultural system invents a new evolutionary system in software that evolves in silico. What might be done to achieve this is discussed.

WHAT IS INTELLIGENCE?

Let us define a unit of intelligence as Legg and Hutter define universal intelligence, i.e. as the time-discounted reward (value) obtained by a unit (agent) over the set of computable reward-summable environments, weighted by the simplicity (inverse exponentiated Kolmogorov complexity) of the environment [2]. The scaling is intended to make performance on simple environments count more than performance on complex environments. This is an uncomputable quantity: in practice it is not possible to do the sum over all possible environments, due to tractability and computability issues, nor to calculate simplicity perfectly. But it is agnostic to mechanisms and so makes the fewest biasing assumptions.

There are many other aspects that people wish to capture in a definition of intelligence, e.g. learning to learn, speed of adaptation, discovering low-dimensional compressed structures and regularities, allowing manipulation and prediction of the environment, representing the world internally as a model and using this to plan. I will consider these aspects later, and argue that evolution by natural selection satisfies several of these criteria even on its own.

The Legg-Hutter intelligence measure can be formalized as follows. Let µ be an environment and let π be an agent. At each interaction step t, the agent π outputs an action at, and the environment replies with an observation ot and a reward rt, each of which can depend on all previous actions, observations, and rewards. The value of the agent π in the environment µ is the expected sum of rewards the agent can gather in this environment: V(π, µ) = E[Σt rt]. (Discounting is avoided by considering reward-summable environments [2].)

Let M be a set of environments, and let wµ be the weight of environment µ within M, with Σµ wµ = 1 (summing over µ in M) and wµ = 0 if µ is not in M. Then the value of an agent π in the set of environments M is: V(π, M) = Σµ wµ V(π, µ).

Thus, for a given set of environments M, agents can be compared based on their value. Now, we want to compare agents on the largest possible set of environments, maybe all possible environments. How can we do that? Legg and Hutter’s solution is to rely on Solomonoff’s universal prior, which assigns a prior weight to all computable environments µ, i.e., all environments that can be simulated by a computer (including real-valued environments up to a possibly-increasing precision).

To do this, we need to choose a Universal Turing Machine (UTM) of reference. A UTM is equivalent to a (universal) programming language, like Java or C, which can describe all programs. Then, for a chosen reference UTM, the weight of an environment µ is wµ = 2^(−K(µ)), where K(µ) is the length in bits of the smallest program that can describe µ on the UTM (K is Kolmogorov complexity).

An important property of UTMs is that any UTM U1 can simulate another UTM U2, just as any (universal) programming language can simulate any other by first writing an interpreter. This incurs only an additive penalty in K(µ): the length in bits of the interpreter, e.g. of a C interpreter written in Java if Java is the reference language.

In summary, a unit of intelligence is one that obtains high value in a set of environments, with greater weight being given to doing well in simpler environments. 
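A toy rendering of the measure may help make it concrete. The environments, their “complexities” (standing in for the uncomputable K(µ)), and the agents’ rewards below are all invented for illustration:

```python
# Toy Legg-Hutter-style measure: value summed over a small hand-picked set
# of environments, weighted by 2^-K. The environments, their "Kolmogorov
# complexities" in bits, and the agents' rewards are all made up; the true
# quantity is uncomputable.
environments = {
    # name: (K in bits, reward earned by each agent in that environment)
    "constant":  (2,  {"memorizer": 1.0, "generalist": 1.0}),
    "alternate": (5,  {"memorizer": 0.9, "generalist": 0.9}),
    "maze":      (12, {"memorizer": 0.1, "generalist": 0.7}),
}

def universal_value(agent):
    return sum(2 ** -K * rewards[agent]
               for K, rewards in environments.values())

for agent in ("memorizer", "generalist"):
    print(agent, round(universal_value(agent), 4))
# The 2^-K weighting makes the simple environments dominate the score.
```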

WHAT IS NATURAL SELECTION?

Next I will define another concept: the unit of evolution, originally coined by John Maynard Smith and discussed by Okasha [3]. A unit of evolution is an entity that has the following three properties.

1. It is capable of autocatalytic growth, i.e. has exponential growth dynamics whereby the rate of increase of that entity is proportional to the frequency of that entity itself.

2. Entities in a population exhibit variations, i.e. it is possible to have many different types of entity, A, B, C, etc…

3. Entities must have heredity, that is “like must give rise to like”, i.e. A’s give rise to A-like things and B’s to B-like things.

During multiplication, offspring must resemble parents. If in addition there is differential fitness, e.g. A’s have a higher chance of surviving and replicating than B’s and C’s because of some property of A’s, then A’s can increase in frequency and eventually become universal, or “go to fixation”. By this process, the fittest entities will always increase in frequency, given some strongly simplifying assumptions, and this can be thought of as survival of the fittest.
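A minimal sketch of these ingredients in action (heredity is perfect here and the growth rates are arbitrary toy values) shows the fittest type going to fixation from rarity:

```python
# Replicator dynamics for three heritable types A, B, C with differential
# fitness. Heredity is perfect (A begets A), so the fittest type fixates.
freqs = {"A": 0.01, "B": 0.49, "C": 0.50}    # A starts rare but fittest
fitness = {"A": 1.10, "B": 1.00, "C": 0.95}  # arbitrary growth rates

for generation in range(300):
    # Autocatalytic growth: each type's increase is proportional to its
    # own current frequency.
    grown = {t: f * fitness[t] for t, f in freqs.items()}
    total = sum(grown.values())
    freqs = {t: f / total for t, f in grown.items()}  # renormalize

print({t: round(f, 4) for t, f in freqs.items()})
# -> A is essentially at fixation despite starting at 1%.
```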

WHAT IS LIFE?

It is also useful to define a last kind of unit, that being Tibor Ganti’s unit of life [4]. A unit of life is an entity that has a boundary, a metabolism, and an informational control system. By metabolism I mean that the system is a dissipative structure existing out of equilibrium open to mass and energy flow. For example, a cell, an ostrich, and a country may be considered units of life. Clouds and fire are metabolic, but have no informational control systems. They do not need to be capable of replication as units of evolution do. Units of life are hierarchically compositionally organized, i.e. units of life can be themselves made of many units of life. This is a design principle in macroevolution observed on Earth.

The proper relationship of units of life to units of evolution is that of partially overlapping sets. Most units of evolution currently are units of life as well. Units of life that are incapable of reproduction such as mules, sterile workers, etc., are not units of evolution. There are units of evolution that are not units of life, e.g. binary strings in the computer being evolved by genetic algorithms. In these algorithms a ‘genome’ encodes some phenotype, e.g. a wing, and the wing is tested in a simulator and given a fitness, and those genotypes that make high fitness wings have a higher chance of replicating with mutation (and perhaps crossover with other good wing producing genotypes). In this way, the wing design can get better in the computer without anyone explicitly saying what a good wing should look like. These are clearly not units of life according to the definition, as they have no metabolism.
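A genetic algorithm of the kind just described can be caricatured in a few lines; the bitstring genome and the fitness function below are toy stand-ins for a real genome and wing simulator:

```python
import random

random.seed(3)

def fitness(genome):
    """Toy stand-in for a wing simulator: reward the longest run of 1s."""
    best = run = 0
    for bit in genome:
        run = run + 1 if bit else 0
        best = max(best, run)
    return best

def evolve(pop_size=40, genome_len=32, mu=0.03, generations=100):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # selection: fitter genotypes replicate
        pop = [[(1 - b) if random.random() < mu else b for b in p]
               for p in parents for _ in range(2)]  # replication with mutation
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```

Nobody tells the algorithm what a good “wing” looks like; the design improves only because high-fitness genotypes replicate more often than low-fitness ones.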

The proper relationship of units of intelligence to units of evolution is also that of partially overlapping sets. There are many algorithms, such as temporal difference reinforcement learning, that contain no units of evolution but that are capable of learning to obtain reward in a wide variety of complex environments [5]. All units of evolution are units of intelligence, however, because value can be replaced simply by fitness, and environmental simplicity measured in the same way as before. Similarly, all units of life are units of intelligence, but not all units of intelligence are units of life.

I argue that to produce post-biological intelligence we should focus on producing units of intelligence that are units of evolution, but not worry about making them units of life, i.e. providing a boundary and a metabolism.

WHAT IS OPEN-ENDED EVOLUTION?

Open-ended evolution is a subfield of artificial life in which (mostly) computer scientists try to design the initial conditions and dynamical rules of an evolutionary system in a computer such that it will continue indefinitely to produce novel adaptations.

Notable examples are Corewars [6], Tierra [7], Avida [8], Geb [9], Polyworld [10] , and Chromaria [11]. There is no universal agreement as to what open-ended means formally. Most would agree that “you know it when you see it”, as the system continues to evolve “interesting novelty”. You don’t want to reset the system as interesting accumulated adaptations will be lost.

I propose that the notion of universal intelligence is helpful in understanding what open-ended evolution really is. Open-ended evolution can be thought of as achieving general rather than narrow artificial intelligence; e.g. Deep Blue can play chess very well but does not increase the range of environments in which it can play well. No matter how long you leave Deep Blue on, it won’t ever be good at noughts and crosses, or draughts. Open-ended evolution is where an evolutionary system continues to discover and solve novel interesting problems. OEE is the coupling of an environment, an evaluation function for individuals, evolution operators, and an initial individual, such that the environment allows for the creation of more and more complex individuals, the evolution operators are capable of producing individuals of growing complexity, and the evaluation function assigns higher value to more complex individuals.

Nobody knows for sure (because we have not constructed such a system yet) what the minimal requirements are for an in silico system to exhibit open-ended intelligence/open-ended evolution. The best example we have is human intelligence which exists in the intersection of all three sets of units. The question is, what can be thrown away?

There is a group of philosophers influenced by concepts of autopoiesis who believe that being a unit of life is a necessary prerequisite for intelligence [12]. I disagree. Units of life may be required for the origin of intelligence, e.g. perhaps it was the only way that intelligence could have arisen from scratch on a planet, but that does not mean that the algorithmic principles of intelligence cannot be extracted, distilled out, and re-embodied in a form that does not require the agent to be a unit of life.

A more subtle question is whether it is even possible to have open-ended intelligence without open-ended evolution. I will argue that open-ended evolution is a necessary requirement for open-ended intelligence. The reason is that if some algorithm X exists that has some level of intelligence, then it can only be open-ended by modifying itself into X'. It can do so through some process of gradient descent, which means that it minimizes some cost function that the designer of that algorithm gave it. However, to produce any real creativity, it must modify itself in ways that the designer could not have foreseen. To evolve cost functions themselves, some higher-order search is required [13]. It will need to make a guess, and in any difficult problem most guesses will be wrong; therefore it must at the very least keep a memory of the original X, in case X' is actually worse and the system needs to return to the better previous X.

This process is called stochastic hill climbing. Any such system will benefit from parallelization. If there is the capacity to implement, say, a billion copies of the algorithm X, one simple thing to do would be independent, parallel hill-climbing, in which each X tries out a different X', tests it, and either keeps it or reverts to its original X, etc. This would at most provide a linear speedup in search. The next best algorithm is called competitive learning, in which most of the search resources, i.e. the capacity to make many X' and test them, is given preferentially to the currently best X's. However, we have shown that in realistic problems, such as evolving controllers for robot object discrimination, a much more powerful algorithm is to allow information transfer between X's, whereby the best X's overwrite (perhaps partially) the not-so-good X's [14]. This is full natural selection. So the argument is that when there are parallel generation and evaluation resources, the most efficient algorithm for mixing solutions, amplifying good solutions, and searching over a complex fitness landscape is natural selection. Nobody really knows why, although attempts have been made to understand in which class of search landscape evolution by natural selection scales polynomially rather than exponentially [15]. The core algorithmic difference between natural selection and most ensemble methods in machine learning is that natural selection involves transfer of information between X's.
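The two extremes of this spectrum can be sketched side by side. The landscape and parameters below are arbitrary toy choices, an illustration of the algorithmic distinction rather than a reproduction of the robot experiments in [14]; with settings like these the transfer variant typically reaches a higher best fitness in the same number of steps:

```python
import random

N_BITS, POP, STEPS = 64, 20, 60

def fitness(g):
    return sum(g)  # toy landscape: more 1s is better

def mutate(g, mu=1.0 / N_BITS):
    return [(1 - b) if random.random() < mu else b for b in g]

def new_pop(seed=7):
    random.seed(seed)  # identical starting point for a fair race
    return [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]

def independent_hill_climbing():
    """Each X climbs alone: try an X', keep it if better, else revert."""
    pop = new_pop()
    for _ in range(STEPS):
        pop = [max(g, mutate(g), key=fitness) for g in pop]
    return max(map(fitness, pop))

def selection_with_transfer():
    """As above, but each step the best X's overwrite the worst X's."""
    pop = new_pop()
    for _ in range(STEPS):
        pop = [max(g, mutate(g), key=fitness) for g in pop]
        pop.sort(key=fitness)
        pop[:POP // 2] = [list(g) for g in pop[POP // 2:]]  # information transfer
    return max(map(fitness, pop))

print("independent hill climbing:", independent_hill_climbing())
print("selection with transfer  :", selection_with_transfer())
```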

Very recently, there have been approaches in machine learning in which information transfer occurs between agents in an ensemble. For example, in asynchronous reinforcement learning [16] multiple agents experience the world separately and share these experiences to update a centralized shared set of neural network parameters. Cultural evolution is a special example of this in which humans observe other humans’ behavior and modify themselves accordingly. The modification need not be random; it may be the result of the implementation of a complex algorithm of causal inference or learning in the individual unit, in which units observe one another’s behavior or past experiences. The critical element is replication/transfer/exchange of information between individuals in a population. In short, I believe that when there are parallel resources available to implement an algorithm X, then a very efficient way to explore open-ended variants of X is to turn it into a unit of evolution. There is no proof of this claim yet; there is some evidence for it, and I know of no arguments against it.

In conclusion, I believe that post-biological intelligence will be formed when we become capable of designing and running a sufficiently large evolutionary system in silico with the capacity for open-ended evolution. I will discuss how we might go about doing this in a later section. First I would like to emphasize how organismal macroevolution actually possesses some of the more specific features of intelligence that people typically want from an intelligent system.

HOW IS EVOLUTION INTELLIGENT?

In the last decade, we have learned that evolution has many of the properties of human intelligence. The main one is that it can learn to learn, or equivalently, it can evolve to evolve. Gregory Bateson called this deutero-learning [17]. In short, evolution has been getting better at “the evolution game” over the last 3.5 billion years on Earth. It has invented better methods of search, e.g. it invented sexual reproduction, which implements crossover; this does not benefit the individual, but benefits the population or lineage of solutions [18]. It has invented methods of representing the phenotype of the organism in the genotype such that random variations at the nucleotide level produce non-random variations at the phenotype level. The nicest example of this I know is shown in Figure 1.

Figure 1. The two tables on the left are phenotypically identical but have different genetic encodings. This results in homogeneous mutation in genotype space producing heterogeneous variation in phenotype space. Some directions in this space are better than others, and evolution is capable of modifying genotype to phenotype maps so that they explore phenotype space preferentially along these desirable directions [19], [20].

Imagine that there are two tables, both encoded by evolution, both identical. One table is encoded by the height and width parameters; the other table is encoded by the x,y coordinates of the blocks making up the table. Since both tables are identical they have equal fitness, e.g. let us say each produces two table children. The first table produces useful bar stools and coffee tables, but the second produces, with high probability, tables that fall over. Therefore we see that whilst the fitness of the parents is identical, the fitness of their children is different. So, in the next generation, the better genotypic encoding is likely to get passed on, whilst the worse genotypic encoding is likely to be lost, as its phenotype is not robust to mutational variation. The genotypic encoding has no fitness advantage to the individual: both parents have an equal fitness of two. It has a fitness advantage to the children. This is a slightly mind-bending idea, that there exist properties of the individual that are selected not for the benefits to the individual itself, but for the benefits that are likely to be conferred to its progeny. Toussaint has called this phenomenon, whereby a many-to-one genotype-to-phenotype map allows homogeneous variation in genotype space to produce heterogeneous variation in phenotype space, non-trivial neutrality [20].
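The table example can be simulated directly. Everything below (the leg representation, the mutation model, the “falls over” criterion) is an invented toy, but it reproduces the point: identical parental phenotypes, very different offspring fitness distributions:

```python
import random

random.seed(11)

def table_from_params(height, width):
    """Compressed encoding: four leg positions derived from two parameters."""
    return [(0, 0, height), (width, 0, height),
            (0, width, height), (width, width, height)]

def stands(legs):
    """Toy criterion: a table 'falls over' unless all legs share one height."""
    return len({round(z, 3) for _, _, z in legs}) == 1

def mutate_params(height, width, sigma=0.1):
    # One mutation perturbs one parameter (height); all legs stay consistent.
    return table_from_params(height + random.gauss(0, sigma), width)

def mutate_coords(legs, sigma=0.1):
    # One mutation perturbs one leg independently; the legs drift apart.
    legs = list(legs)
    i = random.randrange(4)
    x, y, z = legs[i]
    legs[i] = (x, y, z + random.gauss(0, sigma))
    return legs

parent = table_from_params(1.0, 0.6)  # the same phenotype in both encodings
trials = 10_000
ok_params = sum(stands(mutate_params(1.0, 0.6)) for _ in range(trials))
ok_coords = sum(stands(mutate_coords(parent)) for _ in range(trials))
print(f"standing offspring, parametric encoding : {ok_params / trials:.0%}")
print(f"standing offspring, coordinate encoding : {ok_coords / trials:.0%}")
```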

The question is then: what evolutionary forces are able to improve genotype-to-phenotype maps, such that ‘bad’ encodings like the x,y encoding evolve into compressed encodings like the height-width encoding [21]? There seem to be at least two possible explanations for the evolution of evolvability, as this phenomenon is known. The first is mutational robustness, in which there is selection for variants whose offspring remain fit. The second is lineage selection, in which it is possible to think of a lineage, rather than an individual, as the unit of evolution: consider an ‘individual’ to be a parent together with its set of children and grandchildren, and let this entire lineage have a fitness. In this case, variability properties (e.g. mutation rates, particular neutral genotype-to-phenotype maps) can be selected for, provided these variability variants can be stably inherited. In short, if there is heritable variation in variability properties, then variability properties can be acted on by selection. In larger populations, longer lineages can co-exist before going to fixation, so stronger selection at the lineage level is possible. These concepts are only vaguely understood at present, and much more work is required to model and elucidate these remarkable phenomena in which evolution improves itself over time. Evolutionary biologists still disagree about the evolution of evolvability and about its power, so this is cutting-edge research in evolution right now. Also, when we are dealing with post-biological evolution, we can engineer the system so that it is more capable of the evolution of evolvability than our own organismal genetic system is. We can design evolution as it could be, rather than evolution as we know it.
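A toy model, under assumptions entirely my own (a drifting optimum, truncation selection, and a lognormally perturbed heritable mutation rate), of how a variability property that confers no advantage on the individual can nevertheless be enriched through its lineage:

# Toy model of heritable variability: each individual carries a trait and
# its own mutation rate. Selection acts only on the trait, yet lineages
# whose mutation rate suits the (moving) optimum come to dominate.
import numpy as np

rng = np.random.default_rng(1)
POP = 200
trait = rng.normal(size=POP)
mut_rate = rng.uniform(0.01, 1.0, size=POP)   # heritable variability property

optimum = 0.0
for gen in range(300):
    optimum += 0.05                            # the environment drifts
    fitness = -np.abs(trait - optimum)         # selection sees the trait only
    parents = np.argsort(fitness)[-POP // 2:]  # truncation selection
    idx = rng.choice(parents, size=POP)
    mut_rate = np.clip(mut_rate[idx] * rng.lognormal(0, 0.1, POP), 1e-3, 2)
    trait = trait[idx] + rng.normal(scale=mut_rate)

# With a drifting optimum, lineages that retain a workable mutation rate
# persist; freeze the optimum and one would expect lower rates to win.
print("mean evolved mutation rate:", mut_rate.mean())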

I think that evolution has the kind of intelligence that the chess grandmaster Richard Réti had. When Réti was asked, “How many moves do you look ahead?”, he said, “One. The right one.” The evolution of evolvability confers a similar kind of insight (rather than foresight) on evolution. Evolution can learn that in these kinds of situations it is better to make these kinds of moves. It does not need to plan or look ahead; it trusts its experience and instinct, in a metaphorical sense. To modify Dawkins’ metaphor a little to encompass this idea: the watchmaker may be blind, but he is not stupid. Other systems have this kind of intelligence too; for example, AlphaGo’s value network can immediately assess a never-before-seen board position and approximate its value, without explicit planning [22]. But it is interesting that in an evolutionary system with nontrivial neutrality, such exploration distributions are discovered automatically.

Sure, humans have many other arrows in the quiver of their intelligent machinery, but evolution shares some of these arrows. It is remarkably creative. This led Eörs Szathmáry and me to propose, in 2008, the theory of Darwinian Neurodynamics: that evolution by natural selection takes place in the human brain and is responsible for human creativity and search in the space of ideas [23]. We have proposed mechanisms by which entities could replicate in the brain [24], and experimentalists are trying to discover whether these mechanisms could work in foetal rat neuron model systems.

We are still at a very early stage, but if it does turn out that a system of natural selection operates in the brain, then this will be quite contrary to the beliefs of most neuroscientists. Most neuroscientists do not understand the algorithmic advantages of evolution, and most evolutionary biologists do not understand neuroscience. Yet these two fields deal with the only two open-ended adaptive systems we know of on Earth: the brain and evolution. Why is there not more enthusiasm for thinking about shared algorithmic connections between these two data points?

Recently I have been attempting to discover how evolution can benefit machine learning (gradient-based methods) and vice versa. One link is that machine learning methods can be used to learn more effective mutation operators. We have demonstrated this in two publications in which deep learning is used to guide evolution [25], [26]. Symmetrically, evolution can be used to guide gradient descent, for example by evolving the topology of deep learning networks, their cost functions, and so on [27], [28], or by evolving generative indirect encodings of the weights of larger neural networks which themselves learn [29].
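The flavour of learned variation can be sketched without a deep network: fit a simple generative model to the current elites and sample children from it. [25] and [26] use a trained denoising autoencoder; the Gaussian assumed here reduces the sketch to an estimation-of-distribution algorithm, and the toy objective is illustrative.

# Sketch of model-guided variation: instead of blind mutation, fit a
# generative model to the elite genotypes and sample offspring from it.
import numpy as np

rng = np.random.default_rng(2)

def fitness(x: np.ndarray) -> np.ndarray:       # toy objective (assumed)
    return -np.sum((x - 0.5) ** 2, axis=1)

pop = rng.normal(size=(100, 10))
for gen in range(50):
    elite = pop[np.argsort(fitness(pop))[-20:]]          # keep the top 20%
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    pop = rng.normal(mu, sigma, size=(100, 10))          # sample the model

print("best fitness:", fitness(pop).max())               # approaches 0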

EVOLUTION HAS INVENTED POST-BIOLOGICAL INTELLIGENCE ON NUMEROUS OCCASIONS

What does post-biological mean? If we went back 3 billion years, to before the origin of nucleotides and of digital information in the form, presumably, of RNA molecules consisting of A, C, G, and U monomers, would the discovery of RNA template-based information by the evolutionary system on Earth have been called post-biological by an external alien observer? Would the first multicellular organism invented by the evolutionary system of Earth have been called post-biological? Would the origin of language and cultural evolution in human populations have been called post-biological, in that new memetic, cultural, and behavioral units of evolution had suddenly come into existence that had not existed before? The major transitions in evolution are all examples of the origin of post-biological units of intelligence. It is hard to predict what form the next units arising from a major transition will take. It seems possible that they will arise as increasingly autonomous cultural algorithmic units.

For example, as our communication and activity come to be mediated by increasingly complex control systems, such as, say, self-driving cars, these control systems may share information and modify themselves through experience observed from others. In this way, many conventions that were previously static and culturally defined may become flexible and self-modifying, independent of our own explicit control. These early units will initially be limited to specific kinds of narrow intelligence and narrow regions of creativity.

However, initially as an academic or purely esoteric pursuit (and indeed this has already begun in the form of automated science, e.g. for discovering new physical laws [30]), we will develop algorithms that seek to understand the world at a deep level. These are algorithms for automated science, whose raison d’être is to manipulate and predict the world as best they can, for the pure sake of manipulation and prediction. These software scientists will compete and cooperate and share information, just as human scientists do. Eventually much of science may be automated, and discoveries will be made by algorithms that have accumulated information over years of experimentation and hypothesizing. In many ways organismal evolution is already analogous to a population of scientists; our genomes contain huge amounts of information about the world [31]. Eyes tell us about light, and wings tell us about fluid dynamics.

EVOLUTION HAS REINVENTED EVOLUTION AT LEAST TWICE

Evolution has reinvented itself at a faster timescale in the adaptive immune system. In a sense, this is post-biological evolution in the armpits. When you are infected, the B-cells in your lymph nodes generate a random diversity of antibodies. Those B-cells that produce antibodies that bind the foreign antigen better replicate more rapidly, and take over. Thus, over a few days, many rounds of natural selection of B-cells result in the discovery of tightly binding antibodies that can better bind the invading molecule. Evolution has re-invented units of evolution at the somatic timescale.
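A toy version of this somatic-timescale evolution, with an assumed vector “antigen” and a distance-based stand-in for binding affinity:

# Toy affinity maturation: somatic-timescale evolution of binding.
# Cells carry a receptor vector; replication is proportional to how
# well it matches a fixed antigen, with mutation on each division.
import numpy as np

rng = np.random.default_rng(3)
ANTIGEN = rng.normal(size=8)
cells = rng.normal(size=(500, 8))

def affinity(c: np.ndarray) -> np.ndarray:     # higher = tighter binding (toy)
    return -np.linalg.norm(c - ANTIGEN, axis=1)

for day in range(30):
    a = affinity(cells)
    p = np.exp(a - a.max()); p /= p.sum()              # fitness-proportional
    cells = cells[rng.choice(len(cells), size=len(cells), p=p)]
    cells += rng.normal(scale=0.05, size=cells.shape)  # hypermutation

print("mean affinity after selection:", affinity(cells).mean())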

Evolution has also reinvented itself in cultural evolution. It is quite clear that cultural evolution happens, for example in the evolution of language, recipes, fashions, and technology. This new, faster evolution exploits learning in its inner loop: the unit of evolution is itself capable of learning. In other words, cognitive variability is much cleverer than genetic variability; the way new solutions are generated is much more sophisticated in cognition than in genetics. A more speculative theory, mentioned above, is that evolution may have reinvented evolution in the human brain. I will not dwell on this now.
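Algorithmically, “learning in the inner loop” can be sketched as a memetic scheme (my framing, not a model of cultural evolution itself): each unit performs a few steps of local learning before it is evaluated, while only the unlearned genotype is inherited. The objective and the learning rule below are assumptions for illustration.

# Sketch of evolution with learning in the inner loop (a memetic
# algorithm): each unit improves itself by local search before it is
# evaluated, so selection acts on learned rather than raw phenotypes.
import numpy as np

rng = np.random.default_rng(4)

def f(x: np.ndarray) -> float:              # toy fitness (assumed)
    return -np.sum((x - 1.0) ** 2)

def learn(x: np.ndarray, steps: int = 10, lr: float = 0.1) -> np.ndarray:
    for _ in range(steps):                  # gradient ascent on f (assumed known)
        x = x + lr * (-2.0 * (x - 1.0))
    return x

pop = [rng.normal(size=5) for _ in range(30)]
for gen in range(20):
    scored = sorted(pop, key=lambda x: f(learn(x)), reverse=True)
    pop = [p + rng.normal(scale=0.3, size=5)       # inherit the raw genotype
           for p in scored[:10] for _ in range(3)]

best = max(pop, key=lambda x: f(learn(x)))
print("best fitness after learning:", f(learn(best)))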

OPEN-ENDED EVOLUTION IN SILICO

How should we go about producing post-biological intelligence in computers? Each unit of evolution should be capable of learning from the experience of others.

The society-of-scientists model I propose here is designed to sidestep the quagmire of arbitrary decisions that must be made when designing an artificial ecosystem, as in Tierra or Avida. In those systems, the evolving agents eventually failed to produce anything qualitatively new and interesting; sometimes the simplest, fastest-replicating agents took over and killed the slower, more complex agents. This shows all too clearly that it is not sufficient to put natural selection into a system in order to produce open-ended complexity. I know that this is the approach taken by most people working in the field of open-ended evolution, but I think it is currently too great a computational challenge: the computing power required to simulate physics and chemistry to the extent that interesting, open-ended, self-organizing, life-like bodies could arise and self-replicate by gathering resources is enormous. I am proposing that we throw away the idea that units of life are required for open-ended intelligence, which is to a large extent what the community of open-ended evolution enthusiasts implicitly believes.

My proposal is as follows. Let each unit be as sophisticated a learning algorithm as one is capable of inventing. Produce a population of such units and allow them to interact simultaneously with the world. Allow each unit to replicate based on how well it makes a unique experimental prediction about the world that no other unit can produce. If multiple units make the same prediction or manipulation, they share the reward, and hence the fitness, obtained from making that prediction. In this sense, unique manipulation and prediction become the intrinsic motivation that drives this population of experimentalist agents. Those that discover new regularities in the world, and can exploit them for manipulations and predictions that other agents cannot, will gain more fitness and will replicate.
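A minimal sketch of this fitness rule, with an assumed toy “world” whose answer is a single integer: a correct prediction’s reward is divided among the k agents that made it, so a unique correct prediction pays k times more.

# Sketch of the proposed fitness rule: a correct prediction's reward is
# shared among every agent that made it, so fitness flows to agents
# that discover regularities no one else has found.
from collections import Counter
import numpy as np

rng = np.random.default_rng(5)
TRUTH = 7                                       # the world's answer (toy)

predictions = rng.integers(0, 10, size=20)      # one prediction per agent
counts = Counter(predictions.tolist())

# Alone on the right answer: fitness 1.0; k agents tied on it: 1/k each;
# wrong predictions earn nothing.
fitness = np.array([1.0 / counts[p] if p == TRUTH else 0.0
                    for p in predictions])
print(dict(zip(range(20), fitness)))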

There is a design decision at this stage about which kind of evolutionary framework to use to evolve this society of scientists, and a decision about how deeply embodied in the environment each agent is. In the classical genetic algorithm setup, each individual is evaluated in the environment independently of the other individuals, yielding an individual/environment fitness value. In a weak co-evolutionary setup, the individuals interact with the environment all together, so the fitness of an individual is relative not only to the environment but also to the rest of the population; however, individuals cannot interact directly and do not observe the other agents. In a stronger co-evolutionary version, the individuals can communicate, but they are evaluated in different “rooms” of the environment, i.e., the interaction of the other agents with their own version of the environment does not affect other individuals’ versions of the environment. Alternatively, individuals may communicate directly, and may know the “actions” of the other individuals or their fitness, and may even know their genotypes (e.g., to copy parts of them). In the extreme case, individuals (both their genotype and their phenotype) are part of the environment, as in a cellular automaton. This may be much less practical, though.
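These alternatives can be written down as a configuration space; the labels below are mine, not established terms.

# The design decisions above as a configuration space. Each level
# grants agents progressively more access to one another.
from enum import Enum, auto

class Interaction(Enum):
    INDEPENDENT = auto()       # classical GA: each agent meets the world alone
    WEAK_COEVOLUTION = auto()  # shared world sets relative fitness; no observation
    ROOMS = auto()             # agents communicate but act in private world copies
    DIRECT = auto()            # agents see actions, fitness, even genotypes
    EMBODIED = auto()          # agents are part of the environment (cellular automaton)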

The system will need to be sufficiently large that a diversity of manipulations and predictions can co-exist. An archive will be needed to prevent the Red Queen Effect, a co-evolutionary pathology in which evolution makes no progress but solutions oscillate. (This effect is unfortunately not entirely absent from human science, in which previous discoveries may be forgotten and the criteria for success may therefore oscillate or be generally non-stationary.) This archive will serve as the scientific repository of knowledge, accessible to all agents. An extra level of complexity is added when the changes one agent makes to the world affect the world of another agent. In this case there will be cooperation and competition dynamics between software scientists, as in any fully fledged ecology.
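A minimal sketch of the archive, assuming discoveries can be serialized to strings: a result pays out only on first entry into the shared repository, so rediscovering a forgotten result earns nothing and progress cannot silently unwind.

# Minimal sketch of the shared archive: a discovery is rewarded only
# when it is new to the repository, blocking the Red Queen pathology
# of endlessly rediscovering and forgetting the same results.
archive: set[str] = set()

def credit(discovery: str) -> float:
    """Reward 1.0 for a genuinely new discovery, 0.0 for a rediscovery."""
    if discovery in archive:
        return 0.0
    archive.add(discovery)
    return 1.0

assert credit("law-A") == 1.0    # the first discovery pays
assert credit("law-A") == 0.0    # a rediscovery does not
assert credit("law-B") == 1.0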

Let us focus, then, on this more esoteric, open-ended science. Within this world of software scientists, there will be some who are malicious manipulators, some groups that co-manipulate the world to suit each other’s predictive abilities, and some who parasitize the creativity of others. The whole gamut of ecological and social dynamics will be present. From a machine learning perspective we can think of this as an ensemble method for open-ended unsupervised learning: scientists try to maximally compress the world whilst maintaining experimental predictability, and the population of scientists makes novel, unique discoveries about the world which accumulate.

CONCLUSIONS

I have argued that if we wish to produce post-biological intelligence, we should build an open-ended evolutionary system in a computer or on the internet. All attempts to do so up to now have failed. For such a system to succeed, some survival advantage must always arise from being complex. This is certainly the case in our world, and I would expect it to be in most worlds, due to the tendency of physics to make hierarchical and compositional structures.

Acknowledgements: Thanks to Laurent Orseau for discussions and comments on the manuscript. Thanks to Seth Shostak and Paul Davies for help with the final versions of the manuscript.

REFERENCES

[1] Szathmáry, E., and Maynard Smith, J. 1995, “The major evolutionary transitions,” Nature, 374, 6519, pp. 227 – 232

[2] Legg, S., and Hutter, M. 2007, “Universal intelligence: A definition of machine intelligence,” Minds and Machines, 17, 4, pp. 391 – 444

[3] Okasha, S. 2005, “Multilevel selection and the major transitions in evolution,” Philosophy of Science, 72, 5, pp. 1013 – 1025

[4] Ganti, T. 2003, The Principles of Life, Oxford University Press (Oxford)

[5] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Petersen, S. et al. 2015, “Human-level control through deep reinforcement learning,” Nature, 518, 7540, pp. 529 – 533

[6] Dewdney, A. K. 1984, “In the game called Core War hostile programs engage in a battle of bits,” Scientific American, 250 (5), pp. 15 – 19

[7] Ray, T. 1992, “Evolution, ecology and optimization of digital organisms,” Santa Fe Institute working paper 92-08-042

[8] Adami, C., and Brown, C. T. 1994, “Evolutionary Learning in the 2D Artificial Life System Avida,” R. Brooks, P. Maes (eds.), Proc. Artificial Life IV, MIT Press (Cambridge), pp. 377 – 381

[9] Channon, A. D. 2003. “Improving and still passing the ALife test: Component normalised activity statistics classify evolution in Geb as unbounded”, Proceedings of Artificial Life VIII, Sydney, R. K. Standish, M. A. Bedau and H. A. Abbass, eds., MIT Press (Cambridge) pp. 173 – 181

[10] Yaeger, L. S. 1994, “Computational genetics, physiology, metabolism, neural systems, learning, vision, and behavior or PolyWorld: Life in a new context,” C. Langton ed., Proceedings of the Artificial Life III Conference, Addison-Wesley (Boston), pp. 263 – 298

[11] Soros, L. B., and Stanley, K. O. 2014, “Identifying necessary conditions for open-ended evolution through the artificial life world of Chromaria,” Proc. of Artificial Life Conference (ALife 14)

[12] Ruiz-Mirazo, K., Peretó, J., and Moreno, A. 2004, “A universal definition of life: autonomy and open-ended evolution,” Origins of Life and Evolution of the Biosphere, 34, 3, pp. 323 – 346

[13] Niekum, S., Spector, L., and Barto, A., 2011, “Evolution of reward functions for reinforcement learning,” Proceedings of the 13th annual conference companion on Genetic and evolutionary computation , ACM, pp. 177 – 178

[14] Fernando, C. T., Szathmáry, E., and Husbands, P. 2012, “Selectionist and evolutionary approaches to brain function: a critical appraisal,” Frontiers in Computational Neuroscience, 6:24

[15] Watson, R. A., and Szathmáry, E. 2016, “How Can Evolution Learn?” Trends in Ecology and Evolution, 31 (2), pp. 147 – 157

[16] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., and Kavukcuoglu, K., 2016, “Asynchronous methods for deep reinforcement learning,” arXiv preprint arXiv:1602.01783

[17] Bateson, G. 1972, Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution, and epistemology, University of Chicago Press (Chicago)

[18] Watson, R.A., Weinreich, D.M., Wakeley, J., 2011 “Genome structure and the benefit of sex,” Evolution 65 (2), pp. 523 – 536

[19] Kashtan, N, and Alon, U., 2005, “Spontaneous evolution of modularity and network motifs,” Proceedings of the National Academy of Sciences of the United States of America 102 (39), pp. 13772 – 13778

[20] Toussaint, M. 2004, “The evolution of genetic representations and modular adaptation,” PhD thesis, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany. Published by Logos Verlag Berlin. ISBN 3-8325-0579-2, 173 pages

[21] Pigliucci, M. 2008, “Is evolvability evolvable?” Nature Reviews Genetics, 9, 1, pp. 75 – 82

[22] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., and Dieleman, S., 2016, “Mastering the game of Go with deep neural networks and tree search”. Nature, 529 (7587), pp. 484 – 489

[23] Fernando, C., Karishma, K. K., and Szathmáry, E., 2008, “Copying and evolution of neuronal topology,” PloS ONE 3.11: e3775

[24] Fernando, C., Goldstein, R., and Szathmáry, E. 2010,  “The neuronal replicator hypothesis,” Neural computation 22 (11) pp. 2809 – 2857

[25] Churchill, A.W., Sigtia, S., Fernando, C. 2014, “A denoising autoencoder that guides stochastic search,” arXiv preprint arXiv:1404.1614

[26] Churchill, A. W., Sigtia, S., and Fernando, C. 2016, “Learning to generate genotypes with neural networks,” arXiv preprint arXiv:1604.04153

[27] Bayer, J., Wierstra, D., Togelius, J., and Schmidhuber, J. 2009, “Evolving memory cell structures for sequence learning,” International Conference on Artificial Neural Networks, Springer (Berlin) pp. 755 – 764

[28] Jozefowicz, R., Zaremba, W., Sutskever, I. 2015, “An empirical exploration of recurrent network architectures,” International Conference of Machine Learning (ICML)

[29] Fernando, C., Banarse, D., Reynolds, M., Besse, F., Pfau, D., Jaderberg, M., and Wierstra, D. 2016. “Convolution by evolution: Differentiable pattern producing networks”. arXiv preprint arXiv:1606.02580

[30] Schmidt, M., and Lipson, H. 2009, “Distilling free-form natural laws from experimental data,” Science, 324, 5923, pp. 81 – 85

[31] Adami, C. 1998, Introduction to artificial life (Vol. 1), Springer Science and Business Media (Berlin)

[Go to Top]


THE HUNT FOR HABITABLE PLANETS

Didier Queloz
Cavendish Laboratory
19 J J Thomson Avenue
Cambridge, CB3 0HE
UNITED KINGDOM
dq212@cam.ac.uk

ABSTRACT

Thousands of exoplanets have been discovered in the past two decades. We review the most promising “technically mature” approaches that are likely, in the next decade, to provide us with data indicating the presence of life, and describe the main difficulties that must be overcome to make such measurements.

Confined for centuries to the category of pure speculation and philosophical debate, the existence of life outside our solar system is now on the edge of becoming a testable scientific hypothesis. The first discovery of a planet orbiting another Sun-like star, in 1995, triggered a wave of interest and exoplanet search programs [1]. Twenty years later, thousands of exoplanets have been detected, and the discoveries are proceeding at an ever-increasing rate.

The current list of known exoplanets is composed not only of gas giants like our own Jupiter, but also includes a rapidly increasing fraction of smaller planets that some believe have compositions similar to Earth’s. For some of these exoplanets, basic information on their atmospheric properties has been obtained. These early results are paving the way for future atmospheric studies of habitable, terrestrial exoplanets, with the hope of obtaining solid evidence for the existence of life around another star.

It is now obvious that planets orbiting other stars are common. But, interestingly, the bulk of exoplanets detected so far have orbital distances of less than 1 AU, in stark contrast to the Solar System’s planets. This may be a selection effect, but it may also represent one of the dominant arrangements of planetary systems in the universe, making our Solar System’s configuration more special than expected. We are still far from having a comprehensive view of the full diversity of planetary systems predicted by models of planet formation, and we have not yet detected any “twin of Earth”, making it difficult to place our Solar System in context.

Part of the reason for this is an unforeseen contribution to the noise budget of stellar observations arising from magnetic and convective effects in stellar atmospheres. Stellar activity has become one of the main limiting factors in these observations, making the detection of planets like Earth difficult, whether by transit or Doppler techniques. This additional noise structure, intrinsic to the astrophysical nature of stars, slows progress and requires new strategies to circumvent the problem. For example, this was one of the main motivations for extending the Kepler mission’s lifetime and for initial efforts to obtain an intensive series of Doppler measurements on a few bright stars.

INTENSIVE RADIAL VELOCITY MEASURES

In 2003, the high-precision HARPS spectrograph was installed on the 3.6 m ESO telescope at La Silla. Ten years later, the main planet survey carried out with HARPS had used about 800 nights of observation, with an average of 40 radial velocity measurements for each of the 350 selected bright, southern G-K dwarf stars, and a total of about 150 measurements for a few dozen of them. This led to the discovery of compact systems of exoplanets with masses in the Neptune and “super-Earth” range. It also successfully demonstrated that it is possible to build an efficient spectrograph, optimized for planet searches, with long-term radial velocity precision below one meter per second on a timescale of years.

Unfortunately, we also learned that solar-type stars, on average, are Doppler-variable at the 1 – 2 m/s level, on timescales ranging from a few days to many months. Today, our progress towards detecting smaller planets on longer orbits depends more on our ability to address stellar variability than on improvements in spectrograph design or the availability of bigger telescopes.

We know that some spectroscopic indicators can be used to model components of stellar activity. Recent intensive observation campaigns on stars like CoRoT-7 or Alpha Cen B suggest that intensive series of measurements are a promising way to dig small-amplitude planetary signals out from beneath the “sea of noise” originating from the star [2], [3].

The recent results of the Kepler mission clearly indicate that the odds of a given G or K dwarf star hosting a planet with a size below two Earth diameters and an orbit shorter than 30 days are higher than 50%. Extrapolation of these results to a planet of one Earth diameter in the habitable zone (HZ) suggests that between 7% and 15% of planetary systems may be found in such a special orbital configuration. Based on this important statistical result, we may conclude that twins of Earth may be found around many nearby, naked-eye stars.

Transiting exoplanets have a special geometrical configuration relative to Earth that makes them “Rosetta stones” for studies of other worlds [4]. They are the only exoplanets for which we can accurately measure both mass and radius, yielding strong clues to their physical structure and bulk composition [5]. We can also measure their orbital obliquity and derive insightful constraints on their dynamical history [6]. But the true power of their special orbital geometry is that it offers a way to study their atmospheres without having to spatially resolve them from their host stars.

PERSPECTIVE ON OUR OWN SOLAR SYSTEM

The booming study of transiting planets allows us to start placing our own solar system in a broader perspective. While the Kepler space mission is determining the frequency of small-size planets around solar-type stars [7], ground-based surveys targeting relatively bright stars (V < 13) are detecting, at an increasing rate, short-period transiting giant planets suitable for detailed characterization. Notably, follow-up observations of these bright “hot Jupiters” performed with space- and ground-based instruments have given initial glimpses into their atmospheric properties, including chemical composition, vertical pressure-temperature profiles, albedos, and circulation patterns [8]. These first detailed studies of other worlds have laid the foundations of comparative exoplanetology [9].

The PLATO space mission (PLAnetary Transits and Oscillations of stars), scheduled for launch in 2024, is designed to detect terrestrial exoplanets in the habitable zones of solar-type stars and assess their bulk properties. To characterize the nature and composition of these planets, a massive ground-based follow-up program will be required, especially to measure the masses of the transiting planets. Examples include CoRoT-7 and Kepler-78, for which a large number of measurements (more than 100) were needed to obtain the mass of the planet and measure its density with sufficient accuracy to usefully constrain its structure. PLATO will outstrip TESS and Kepler in the detection of small planets for which characterization is possible: about 100 Earth-mass planets are expected to be found, and thousands of super-Earths.

In principle, exporting the techniques developed for the pioneering first studies of transiting gas giants to the atmospheric characterization of terrestrial planets orbiting in the habitable zone (HZ; e.g. [10]) of their star is a promising path in the search for life outside our solar system, one that does not require the huge technological developments and financial costs of direct imaging projects like TPF [11] and Darwin [12].

Still, as a practical matter, the application of these methods to an Earth twin transiting a Sun-like star seems out of reach. The main reasons are the overwhelmingly large area contrast between the solar disk and the tiny annulus of Earth’s atmosphere, and the contrast between the Sun’s and Earth’s luminosities, leading to signal-to-noise ratios (SNR) much less than one for any spectroscopic signature and any realistic program, even when considering the observation of a putative terrestrial planet transiting a nearby solar twin with the future James Webb Space Telescope [13].

Fortunately, this negative conclusion does not hold for the dominant population of the solar neighborhood, the M dwarfs. Because of their smaller sizes and luminosities, and the resulting larger planet-to-star flux and size contrasts, the expected SNRs for the detection of spectroscopic signatures are much more favorable for M dwarfs than for solar analogs. Furthermore, their HZs are much closer in than those of solar-type stars, making the transits of a habitable planet more frequent and more probable.
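The size-contrast part of the argument can be made quantitative from transit geometry alone: the transit depth is (R_planet / R_star)^2, so the same Earth-sized planet that blocks about 0.008% of a Sun-like star’s light blocks nearly 1% of the light of a Jupiter-sized ultra-cool dwarf. The radii below are round, illustrative values.

# The geometric heart of the argument: transit depth = (R_planet / R_star)^2.
R_EARTH_KM = 6_371
R_SUN_KM = 696_000
R_UCD_KM = 0.1 * R_SUN_KM     # ultra-cool dwarf, roughly Jupiter-sized

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    return (r_planet_km / r_star_km) ** 2

print(f"Earth / Sun:              {transit_depth(R_EARTH_KM, R_SUN_KM):.6%}")  # ~0.008%
print(f"Earth / ultra-cool dwarf: {transit_depth(R_EARTH_KM, R_UCD_KM):.3%}")  # ~0.8%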

Looking for planets around low-mass stars today seems the most realistic approach to detecting the first terrestrial planets amenable to atmospheric characterization, with the prospect of detecting biosignatures with giant next-generation telescopes like the E-ELT on the ground and the JWST in space. Many studies (e.g., [14], [15]) have shown that these two future major facilities have the potential to thoroughly probe the atmospheric properties of Earth-sized planets, but only if they transit a nearby (~30 pc at most) ultra-cool dwarf.

M-type stars, the most abundant stars in the Galaxy, have such low luminosities that their habitable zones lie 30 – 100 times closer in than the Sun’s. For a star similar to Jupiter in size, the transit signature of an Earth-size planet is deep enough, and short enough, to be detected from the ground. These are therefore ideal targets in the search for Earth-size planets using the transit method.

Carrying out a successful survey of a thousand ultra-cool stars spread almost uniformly over the sky requires milli-magnitude photometric precision and measurements obtained at high cadence. Both are necessary to detect the fast event produced by a short-period, Earth-size planet in transit. As a practical matter, each target has to be monitored individually and continuously for dozens of nights. Doing this requires a series of modest-size telescopes equipped with red-optimized CCDs, located at sites of outstanding quality.

REFERENCES

[1] Mayor, M., and Queloz, D. 1995, Nature 378, pp. 355 – 359

[2] Haywood, R. D., Collier Cameron, A., Queloz, D., Barros, S. C. C., Deleuil, M., Fares, R., Gillon, M., Lanza, A. F., Lovis, C., Moutou, C., Pepe, F., Pollacco, D., Santerne, A., Segransan, D., Unruh, Y. C. 2014, “Planets and Stellar Activity: Hide and Seek in the CoRoT-7 System”, arXiv:1407.1044 [astro-ph.EP]

[3] Dumusque, X., Pepe, F., Lovis, C., Segransan, D., Sahlmann, J., Benz, W., Bouchy, F., Mayor, M., Queloz, D., Santos, N., and Udry, S. 2012, Nature 491, pp. 207 – 211

[4] Winn, J. N. 2010, in Exoplanets, ed. S. Seager, University of Arizona Press (Tucson)

[5] Fortney, J. J., Marley, M. S., and Barnes, J. W. 2007, Ap. J. 659, pp. 1661 – 1672

[6] Winn, J.N. 2011, “The Rossiter-McLaughlin effect for exoplanets,” in The Astrophysics of Planetary Systems, Proc. IAU Symp. No. 276, eds. A. Sozzetti, M. Lattanzi, and A. Boss, Cambridge University Press (Cambridge)

[7] Borucki, W. J. et al. 2011, Ap. J. 736, pp. 19 – 22

[8] Seager, S., and Deming, D. 2010, “Exoplanet atmospheres,” Ann. Rev. of Astr. and Astrophys. 48, pp. 631 – 672

[9] Seager, S. 2008, “Exoplanet transit spectroscopy and photometry,” in Space Sci. Rev. 135, pp. 345 – 354

[10] Kasting, J. F., Whitmire, D. P., and Reynolds, R. T. 1993, Icarus 101, pp. 108 – 128

[11] Traub, W., Shaklan, S., and Lawson, P. 2007, In the spirit of Bernard Lyot: The direct detection of planets and circumstellar disks in the 21st century, ed. P. Kalas, A. & A. 509

[12] Cockell, C. S. et al. 2009, “Darwin—an experimental astronomy mission to search for extrasolar planets,” Exp. Astron. 23, pp. 435 – 461

[13] Seager, S. et al. 2009, “Discovery and characterization of transiting superearths using an all-sky transit survey and follow-up by the James Webb Space Telescope,” arXiv:0903.4880 [astro-ph.EP]

[14] Kaltenegger, L., and Traub, W. A. 2009, Ap. J. 698, pp. 519 - 527

[15] Snellen, I., de Kok, R., Le Poole, R., Brogi, M., and Birkby, J. 2013, Ap. J. 764, 182

[Go to Top]