
THE ANTHROPIC COSMOLOGICAL PRINCIPLE

JOHN D. BARROW
Lecturer, Astronomy Centre, University of Sussex

and

FRANK J. TIPLER
Professor of Mathematics and Physics, Tulane University, New Orleans

With a foreword by John A. Wheeler

CLARENDON PRESS • Oxford
OXFORD UNIVERSITY PRESS • New York
1986

Oxford University Press, Walton Street, Oxford OX2 6DP
Oxford New York Toronto Delhi Bombay Calcutta Madras Karachi Kuala Lumpur Singapore Hong Kong Tokyo Nairobi Dar es Salaam Cape Town Melbourne Auckland and associated companies in Beirut Berlin Ibadan Nicosia

Oxford is a trade mark of Oxford University Press

Published in the United States by Oxford University Press, New York

© John D. Barrow and Frank J. Tipler, 1986

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of Oxford University Press

British Library Cataloguing in Publication Data
Barrow, John D.
The anthropic cosmological principle.
1. Man
I. Title II. Tipler, Frank J.
128 BD450
ISBN 0-19-851949-4

Library of Congress Cataloging in Publication Data
Barrow, John D., 1952–
The anthropic cosmological principle.
Bibliography: p. Includes index.
1. Cosmology. 2. Man. 3. Teleology. 4. Intellect. 5. Life on other planets. 6. Science—Philosophy.
I. Tipler, Frank J. II. Title.
BD511.B34 1985 113 85-4824
ISBN 0-19-851949-4

Printing (last digit):

987654321

Printed in the United States of America

To Elizabeth and Jolanta

Foreword

John A. Wheeler, Center for Theoretical Physics, University of Texas at Austin

'Conceive of a universe forever empty of life?' 'Of course not', a philosopher of old might have said, contemptuously dismissing the question, and adding, over his shoulder, as he walked away, 'It has no sense to talk about a universe unless there is somebody there to talk about it'.

That quick dismissal of the idea of a universe without life was not so easy after Copernicus. He dethroned man from a central place in the scheme of things. His model of the motions of the planets and the Earth taught us to look at the world as machine. Out of that beginning has grown a science which at first sight seems to have no special platform for man, mind, or meaning. Man? Pure biochemistry! Mind? Memory modelable by electronic circuitry! Meaning? Why ask after that puzzling and intangible commodity? 'Sire', some today might rephrase Laplace's famous reply to Napoleon, 'I have no need of that concept'.

What is man that the universe should be mindful of him? Telescopes bring light from distant quasi-stellar sources that lived billions of years before life on Earth, before there even was an Earth. Creation's still warm ashes we call 'natural radioactivity'. A thermometer and the relative abundance of the lighter elements today tell us the correlation between temperature and density in the first three minutes of the universe. Conditions still earlier and still more extreme we read out of particle physics. In the perspective of these violences of matter and field, of these ranges of heat and pressure, of these reaches of space and time, is not man an unimportant bit of dust on an unimportant planet in an unimportant galaxy in an unimportant region somewhere in the vastness of space?

No! The philosopher of old was right! Meaning is important, is even central. It is not only that man is adapted to the universe. The universe is adapted to man.
Imagine a universe in which one or another of the fundamental dimensionless constants of physics is altered by a few per cent one way or the other. Man could never come into being in such a universe. That is the central point of the anthropic principle. According to this principle, a life-giving factor lies at the centre of the whole machinery and design of the world. What is the status of the anthropic principle? Is it a theorem? No. Is it a mere tautology, equivalent to the trivial statement, 'The universe has to be such as to admit life, somewhere, at some point in its history, because


we are here'? No. Is it a proposition testable by its predictions? Perhaps. Then what is the status of the anthropic principle? That is the issue on which every reader of this fascinating book will want to make his own judgement.

Nowhere better than in the present account can the reader see new thinking, new ideas, new concepts in the making. The struggles of old to sort sense from nonsense in the domain of heat, phlogiston, and energy by now have almost passed into the limbo of the unappreciated. The belief of many in the early part of this century that 'Chemical forces are chemical forces, and electrical forces are electrical forces, and never the twain shall meet' has long ago been shattered. Our own time has made enormous headway in sniffing out the sophisticated relations between entropy, information, randomness, and computability. But on a proper assessment of the anthropic principle we are still in the dark.

Here above all we see how out of date that old view is, 'First define your terms, then proceed with your reasoning'. Instead, we know, theory, concepts, and methods of measurement are born into the world, by a single creative act, in inseparable union. In advancing a new domain of investigation to the point where it can become an established part of science, it is often more difficult to ask the right questions than to find the right answers, and nowhere more so than in dealing with the anthropic principle. Good judgement, above all, is required, judgement in the sense of George Graves, 'an awareness of all the factors in the situation, and an appreciation of their relative importance'.

To the task of history, exposition, and judgement of the anthropic principle the authors of this book bring a unique combination of skills. John Barrow has to his credit a long list of distinguished contributions in the field of astrophysics generally and cosmology in particular.
Frank Tipler is widely known for important concepts and theorems in general relativity and gravitation physics. It would be difficult to discover a single aspect of the anthropic principle to which the authors do not bring a combination of the best thinking of past and present and new analysis of their own. Philosophical considerations connected with the anthropic principle? Of the considerations on this topic contained in Chapters 2 and 3 perhaps half are new contributions of the authors. Why, except in the physics of elementary particles at the very smallest scale of lengths, does nature limit itself to three dimensions of space and one of time? Considerations out of past times and present physics on this topic give Chapter 4 a special flavour. In Chapter 6 the authors provide one of the best short reviews of cosmology ever published. In Chapter 8 Barrow and Tipler not only recall the arguments of L. J. Henderson's


famous 1913 book, The fitness of the environment. They also spell out George Wald's more recent emphasis on the unique properties of water, carbon dioxide, and nitrogen. They add new arguments to Wald's rating of chlorophyll, an unparalleled agent, as the most effective photosynthetic molecule that anyone could invent.

Taking account of biological considerations and modern statistical methods, Barrow and Tipler derive with new clarity Brandon Carter's striking anthropic-principle inequality. It states that the length of time from now, on into the future, for which the earth will continue to be an inhabitable planet will be only a fraction of the time, 4.6 billion years, that it has required for evolution on earth to produce man. The Carter inequality, as thus derived, is still more quantitative, still more limiting, still more striking. It states that the fraction of these 4.6 billion years yet to come is smaller than 1/8th, 1/9th, 1/10th, ... or less, according as the number of critical or improbable or gateway steps in the past evolution of man was 7, 8, 9, ... or more. This amazing prediction looks like being some day testable and therefore would seem to count as 'falsifiable' in the sense of Karl Popper.

Chapter 9, outlining a space-travel argument against the existence of extraterrestrial intelligent life, is almost entirely new. So is the final Chapter 10. It rivals in thought-provoking power any of the others. It discusses the idea that intelligent life will some day spread itself so thoroughly throughout all space that it will 'begin to transform and continue to transform the universe on a cosmological scale', thus making it possible to transmit 'the values of humankind... to an arbitrarily distant futurity... an Omega Point... [at which] life will have gained control of all matter and forces...'.

In the mind of every thinking person there is set aside a special room, a museum of wonders.
Every time we enter that museum we find our attention gripped by marvel number one, this strange universe, in which we live and move and have our being. Like a strange botanic specimen newly arrived from a far corner of the earth, it appears at first sight so carefully cleaned of clues that we do not know which are the branches and which are the roots. Which end is up and which is down? Which part is nutrient-giving and which is nutrient-receiving? Man? Or machinery? Everyone who finds himself pondering this question from time to time will want to have Barrow and Tipler with him on his voyages of thought. They bring along with them, now and then to speak to us in their own words, a delightful company of rapscallions and wise men, of wits and discoverers. Travelling with the authors and their friends of past and present we find ourselves coming again and again upon issues that are live, current, important.
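The arithmetic of the Carter inequality quoted earlier in this Foreword can be sketched in a few lines. This is an illustrative reading only: the bound 1/(n+1) is inferred from the fractions quoted above (1/8th for 7 steps, 1/9th for 8, and so on), and the function name is invented here; the full statistical derivation appears later in the book.

```python
# Illustrative sketch (not the book's derivation): with n critical
# "gateway" steps behind us, the Carter bound quoted in the Foreword
# limits Earth's remaining habitable lifetime to less than 1/(n + 1)
# of the 4.6 billion years evolution has taken so far.

PAST_EVOLUTION_GYR = 4.6  # time taken to produce man, in billions of years

def future_habitability_bound_gyr(n_steps):
    """Upper bound on Earth's remaining habitable lifetime, in Gyr."""
    return PAST_EVOLUTION_GYR / (n_steps + 1)

for n in (7, 8, 9):
    bound = future_habitability_bound_gyr(n)
    print(f"{n} critical steps: future < 1/{n + 1} of 4.6 Gyr = {bound:.3f} Gyr")
```

Note how quickly the bound tightens: each additional postulated gateway step shaves the predicted habitable future further, which is what makes the inequality quantitative enough to be testable.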

Preface

This book was begun long ago. Over many years there had grown up a collection of largely unpublished results revealing a series of mysterious coincidences between the numerical values of the fundamental constants of Nature. The possibility of our own existence seems to hinge precariously upon these coincidences. These relationships and many other peculiar aspects of the Universe's make-up appear to be necessary to allow the evolution of carbon-based organisms like ourselves. Furthermore, the twentieth-century dogma that human observers occupy a position in the Universe that must not be privileged in any way is strongly challenged by such a line of thinking. Observers will reside only in places where conditions are conducive to their evolution and existence: such sites may well turn out to be special. Our picture of the Universe and its laws is influenced by an unavoidable selection effect—that of our own existence. It is this spectrum of ideas, its historical background and wider scientific ramifications that we set out to explore.

The authors must confess to a curious spectrum of academic interests which have been indulged to the full in this study. It seemed to us that cosmologists and lay persons were often struck by the seeming novelty of this collection of ideas called the Anthropic Principle. For this reason it is important to display the Anthropic Principle in a historical perspective, as a modern manifestation of a long and fascinating tradition in the history of ideas, one involving, at one time or another, many of the great figures of human thought and speculation. For these reasons we have attempted not only to describe the collection of results that modern cosmologists would call the 'Anthropic Principle', but to trace the history of the underlying world-view in which it has germinated, together with the diverse range of subjects where it has interesting but unnoticed ramifications.
Our discussion is of necessity therefore a medley of technical and non-technical studies but we hope it has been organized in a manner that allows those with only particular interests and uninterests to indulge them without too much distraction from the parts of the other sort. Roughly speaking, the degree of difficulty increases as the book goes on: whereas the early chapters study the historical antecedents of the Anthropic Principle, the later ones investigate modern developments which involve mathematical ideas in cosmology, astrophysics, and quantum theory. There are many people who have played some part in getting this


project started and bringing it to some sort of conclusion. In particular, we are grateful to Dennis Sciama without whose encouragement it would not have begun, and to John Wheeler without whose prodding it would never have been completed. We are also indebted to a large number of individuals for discussions and suggestions, for providing diagrams or reading drafts of particular chapters; for their help in this way we would like particularly to thank R. Alpher, M. Begelman, M. Berry, F. Birtel, S. Brenner, R. Breuer, P. Brosche, S. G. Brush, B. J. Carr, B. Carter, P. C. W. Davies, W. Dean, J. Demaret, D. Deutsch, B. DeWitt, P. Dirac, F. Drake, F. Dyson, G. F. R. Ellis, R. Fenn, A. Flew, S. Fox, M. Gardner, J. Goldstein, S. J. Gould, A. Guth, C. Hartshorne, S. W. Hawking, F. A. Hayek, J. Hedley-Brooke, P. Hefner, F. Hoyle, S. Jaki, M. Jammer, R. Jastrow, R. Juszkiewicz, J. Leslie, W. H. McCrea, C. Macleod, J. E. Marsden, E. Mascall, R. Matzner, J. Maynard Smith, E. Mayr, L. Mestel, D. Mohr, P. Morrison, J. V. Narlikar, D. M. Page, A. R. Peacocke, R. Penrose, J. Perdew, F. Quigley, M. J. Rees, H. Reeves, M. Ruderman, W. Saslaw, C. Sagan, D. W. Sciama, I. Segal, J. Silk, G. G. Simpson, S. Tangherlini, R. J. Tayler, G. Wald, J. A. Wheeler, G. Whitrow, S.-T. Yau, W. H. Zurek, and the staff of Oxford University Press. On the vital practical side we are grateful to the secretarial staff of the Astronomy Centre at Sussex and the Departments of Mathematics and Physics at Tulane University, especially Suzi Lam, for their expert typing and management of the text. We also thank Salvador Dali for allowing us to reproduce the example of his work which graces the front cover, and finally we are indebted to a succession of editors at Oxford University Press who handled a continually evolving manuscript and its authors with great skill and patience.
Perhaps in despair at the authors' modification of the manuscript they had cause to recall Dorothy Sayers' vivid description of what Harriet Vane discovered when she happened upon a former tutor in the throes of preparing a book for publication by the Press... The English tutor's room was festooned with proofs of her forthcoming work on the prosodic elements in English verse from Beowulf to Bridges. Since Miss Lydgate had perfected, or was in process of perfecting (since no work of scholarship ever attains a static perfection) an entirely new prosodic theory, demanding a novel and complicated system of notation which involved the use of twelve different varieties of type; and since Miss Lydgate's handwriting was difficult to read and her experience in dealing with printers limited, there existed at that moment five successive revises in galley form, at different stages of completion, together with two sheets in page-proof, and an appendix in typescript, while the important Introduction which afforded the key to the whole argument still remained to be written. It was only when a section had advanced to page-proof condition that Miss Lydgate became fully convinced of the necessity of


transferring large paragraphs of argument from one chapter to another, each change of this kind naturally demanding expensive over-running on the page-proof, and the elimination of the corresponding portions in the five sets of revises...'

Brighton
July, 1985

J. D. B.
F. J. T.

Acknowledgements

The authors gratefully acknowledge the following sources of illustrations and tables reproduced in this book, and thank authors and publishers who have granted their permission.

Figures: 5.1 adapted from B. Carr and M. Rees, Nature, Lond. 278, 605 (1979); 5.2 based on A. Holden, Bonds between atoms, p. 15, Oxford University Press (1977); 5.3 V. S. Weisskopf, 'Of atoms, mountains and stars: a study in qualitative physics', Science 187, 602-12, Diagram 21 (February 1975); 5.5 R. D. Evans, The atomic nucleus, p. 382, Fig. 3.5, McGraw Hill, New York (1955); 5.6 and 5.7 P. C. Davies, J. Physics A, 5, 1296 (1972); 5.9 redrawn from D. Clayton, Principles of stellar evolution and nucleosynthesis, p. 302, Fig. 4-6, University of Chicago Press (1968 and 1983); 5.11 adapted from M. Harwit, Astrophysical concepts, p. 17, Wiley, New York; 5.12 adapted from B. Carr and M. Rees, Nature, Lond. 278, 605 (1979); 5.13 reproduced, with permission, from the Annual Review of Nuclear and Particle Science 25 © 1975 by Annual Reviews Inc; 5.14 M. Begelman, R. Blandford, and M. Rees, Rev. mod. Phys. 56, 294, Fig. 15, with permission of the authors and the American Physical Society; 6.4 redrawn from D. Woody and P. Richards, Phys. Rev. Lett. 42, 925 (1979); 6.5 and 6.6 C. Frenk, M. Davis, G. Efstathiou, and S. White; 6.7 adapted from B. Carr and M. Rees, Nature, Lond. 278, 605 (1979); 6.10 based on M. Rees, Les Houches Lectures; 6.12 based on H. Kodama, 'Comments on the chaotic inflation', KEK Report 84-12, ed. K. Odaka and A. Sugamoto (1984); 7.1 B. DeWitt, Physics Today 23, 31 (1970); 8.2 J. D. Watson, Molecular biology of the gene, W. A. Benjamin Inc., 2nd edn, copyright 1970 by J. D. Watson; 8.3 M. Arbib, in Interstellar communication: scientific prospects, ed. C. Ponnamperuma and A. Cameron, Houghton Mifflin, Boston (1974); 8.4, 8.5, 8.6, 8.7, 8.8 adapted from Linus Pauling in General chemistry, W. H. Freeman, New York (1956); 8.9 adapted from Linus Pauling and R. Hayward in General chemistry, W. H. Freeman, New York (1956); 8.10 J. Edsall and J. Wyman, Biophysical chemistry, Vol. 1, p. 178, Academic Press (1958); 8.11, 8.12, 8.13 adapted from Linus Pauling in General chemistry, W. H. Freeman, New York (1956); 8.14 adapted from J. Edsall and J. Wyman, Biophysical chemistry, Vol. 1, p. 3, Academic Press (1958); 8.15 reprinted from Linus Pauling, The nature of the chemical bond, third edition, copyright © 1960 by Cornell University, used by permission of the publisher, Cornell University Press; 8.16 F. H. Stillinger, 'Water revisited', Science 209, 451-7 (1980), © 1980 by the American Association for the Advancement of Science; 8.17 Albert L. Lehninger in Biochemistry, Worth Publishers Inc., New York (1975); 8.18 adapted from G. Wald, Origins of life 5, 11 (1974) and in Conditions for life, ed. A. Gabor, Freeman, New York (1976); 8.20 J. E. Lovelock, Gaia: a new look at life on earth, Oxford University Press (1979).

Tables: 8.1-8.7 A. Needham, The uniqueness of biological materials, Pergamon Press, Oxford (1965); 8.8 J. Lovelock, Gaia: a new look at life on earth, Oxford University Press (1979); 8.9 J. Edsall and J. Wyman, Biophysical chemistry, Vol. 1, p. 24, Academic Press (1958); 8.10 Albert L. Lehninger, Biochemistry, Worth Publishers Inc., New York (1975).

Preparation for publication of the Foreword was assisted by the Center for Theoretical Physics, University of Texas at Austin and by NSF Grants PHY 8205717 and PHY 503890.

Contents

1 INTRODUCTION
1.1 Prologue 1
1.2 Anthropic Definitions 15

2 DESIGN ARGUMENTS
2.1 Historical Prologue 27
2.2 The Ancients 31
2.3 The Medieval Labyrinth 46
2.4 The Age of Discovery 49
2.5 Mechanical Worlds 55
2.6 Critical Developments 68
2.7 The Devolution of Design 83
2.8 Design in Non-Western Religion and Philosophy 92
2.9 Relationship Between the Design Argument and the Cosmological Argument 103

3 MODERN TELEOLOGY AND THE ANTHROPIC PRINCIPLES
3.1 Overview: Teleology in the Twentieth Century 123
3.2 The Status of Teleology in Modern Biology 127
3.3 Henderson and the Fitness of the Environment 143
3.4 Teleological Ideas and Action Principles 148
3.5 Teleological Ideas in Absolute Idealism 153
3.6 Biological Constraints on the Age of the Earth: The First Successful Use of an Anthropic Timescale Argument 159
3.7 Dysteleology: Entropy and the Heat Death 166
3.8 The Anthropic Principle and the Direction of Time 173
3.9 Teleology and the Modern 'Empirical' Theologians 180
3.10 Teleological Evolution: Bergson, Alexander, Whitehead, and the Philosophers of Progress 185
3.11 Teilhard de Chardin: Mystic, Paleontologist and Teleologist 195

4 THE REDISCOVERY OF THE ANTHROPIC PRINCIPLE
4.1 The Lore of Large Numbers 219
4.2 From Coincidence to Consequence 220
4.3 'Fundamentalism' 224
4.4 Dirac's Hypothesis 231
4.5 Varying Constants 238
4.6 A New Perspective 243
4.7 Are There Any Laws of Physics? 255
4.8 Dimensionality 258

5 THE WEAK ANTHROPIC PRINCIPLE IN PHYSICS AND ASTROPHYSICS
5.1 Prologue 288
5.2 Atoms and Molecules 295
5.3 Planets and Asteroids 305
5.4 Planetary Life 310
5.5 Nuclear Forces 318
5.6 The Stars 327
5.7 Star Formation 339
5.8 White Dwarfs and Neutron Stars 340
5.9 Black Holes 347
5.10 Grand Unified Gauge Theories 354

6 THE ANTHROPIC PRINCIPLES IN CLASSICAL COSMOLOGY
6.1 Introduction 367
6.2 The Hot Big Bang Cosmology 372
6.3 The Size of the Universe 384
6.4 Key Cosmic Times 385
6.5 Galaxies 387
6.6 The Origin of the Lightest Elements 398
6.7 The Value of S 401
6.8 Initial Conditions 408
6.9 The Cosmological Constant 412
6.10 Inhomogeneity 414
6.11 Isotropy 419
6.12 Inflation 430
6.13 Inflation and the Anthropic Principle 434
6.14 Creation ex nihilo 440
6.15 Boundary Conditions 444

7 QUANTUM MECHANICS AND THE ANTHROPIC PRINCIPLE
7.1 The Interpretations of Quantum Mechanics 458
7.2 The Many-Worlds Interpretation 472
7.3 The Friedman Universe from the Many-Worlds Point of View 490
7.4 Weak Anthropic Boundary Conditions in Quantum Cosmology 497
7.5 Strong Anthropic Boundary Conditions in Quantum Cosmology 503

8 THE ANTHROPIC PRINCIPLE AND BIOCHEMISTRY
8.1 Introduction 510
8.2 The Definitions of Life and Intelligent Life 511
8.3 The Anthropic Significance of Water 524
8.4 The Unique Properties of Hydrogen and Oxygen 541
8.5 The Anthropic Significance of Carbon, Carbon Dioxide and Carbonic Acid 545
8.6 Nitrogen, Its Compounds, and other Elements Essential for Life 548
8.7 Weak Anthropic Principle Constraints on the Future of the Earth 556

9 THE SPACE-TRAVEL ARGUMENT AGAINST THE EXISTENCE OF EXTRATERRESTRIAL INTELLIGENT LIFE
9.1 The Basic Idea of the Argument 576
9.2 General Theory of Space Exploration and Colonization 578
9.3 Upper Bounds on the Number of Intelligent Species in the Galaxy 586
9.4 Motivations for Interstellar Communication and Exploration 590
9.5 Anthropic Principle Arguments Against Steady-State Cosmologies 601

10 THE FUTURE OF THE UNIVERSE
10.1 Man's Place in an Evolving Cosmos 613
10.2 Early Views of the Universe's Future 615
10.3 Global Constraints on the Future of the Universe 621
10.4 The Future Evolution of Matter: Classical Timescales 641
10.5 The Future Evolution of Matter: Quantum Timescales 647
10.6 Life and the Final State of the Universe 658

INDEX 683

THE ANTHROPIC COSMOLOGICAL PRINCIPLE

Ah Mr. Gibbon, another damned, fat, square book. Always scribble, scribble, scribble, eh?

THE DUKE OF GLOUCESTER

[on being presented with volume 2 of The Decline and Fall of the Roman Empire]

1 Introduction

The Cosmos is about the smallest hole that a man can hide his head in.
G. K. Chesterton

1.1 Prologue

What is Man, that Thou art mindful of him? Psalm 8:4

The central problem of science and epistemology is deciding which postulates to take as fundamental. The perennial solution of the great idealistic philosophers has been to regard Mind as logically prior, and even materialistic philosophers consider the innate properties of matter to be such as to allow—or even require—the existence of intelligence to contemplate it; that is, these properties are necessary or sufficient for life. Thus the existence of Mind is taken as one of the basic postulates of a philosophical system. Physicists, on the other hand, are loath to admit any consideration of Mind into their theories. Even quantum mechanics, which supposedly brought the observer into physics, makes no use of intellectual properties; a photographic plate would serve equally well as an 'observer'. But, during the past fifteen years there has grown up amongst cosmologists an interest in a collection of ideas, known as the Anthropic Cosmological Principle, which offer a means of relating Mind and observership directly to the phenomena traditionally within the encompass of physical science. The expulsion of Man from his self-assumed position at the centre of Nature owes much to the Copernican principle that we do not occupy a privileged position in the Universe. This Copernican assumption would be regarded as axiomatic at the outset of most scientific investigations. However, like most generalizations it must be used with care. Although we do not regard our position in the Universe to be central or special in every way, this does not mean that it cannot be special in any way. This possibility led Brandon Carter to limit the Copernican dogma by an 'Anthropic Principle' to the effect that 'our location in the Universe is necessarily privileged to the extent of being compatible with our existence as observers'. 
The basic features of the Universe, including such properties as its shape, size, age and laws of change, must be observed to be of a type that allows the evolution of observers, for if intelligent life did not evolve in an otherwise possible universe, it is obvious that no one would
be asking the reason for the observed shape, size, age and so forth of the Universe. At first sight such an observation might appear true but trivial. However, it has far-reaching implications for physics. It is a restatement of the fact that any observed properties of the Universe that may initially appear astonishingly improbable can only be seen in their true perspective after we have accounted for the fact that certain properties of the Universe are necessary prerequisites for the evolution and existence of any observers at all. The measured values of many cosmological and physical quantities that define our Universe are circumscribed by the necessity that we observe from a site where conditions are appropriate for the occurrence of biological evolution and at a cosmic epoch exceeding the astrophysical and biological timescales required for the development of life-supporting environments and biochemistry.

What we have been describing is just a grandiose example of a type of intrinsic bias that scientists term a 'selection effect'. For example, astronomers might be interested in determining the fraction of all galaxies that lie in particular ranges of brightness. But if you simply observe as many galaxies as you can find and list the numbers found according to their brightness you will not get a reliable picture of the true brightness distribution of galaxies. Not all galaxies are bright enough to be seen or big enough to be distinguished from stars, and those that are brighter are more easily seen than those that are fainter, so our observations are biased towards finding a disproportionately large fraction of very bright galaxies compared to the true state of affairs. Again, at a more mundane level, if a ratcatcher tells you that all rats are more than six inches long because he has never caught any that are shorter, you should check the size of his traps before drawing any far-reaching conclusions about the length of rats.
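The galaxy-brightness bias just described can be demonstrated with a small Monte Carlo sketch. Every number below is invented for illustration (the luminosity distribution, distances, and flux limit are not astrophysical data); the point is only that a flux-limited catalogue is systematically brighter than the population it samples.

```python
import random

# Toy model of a flux-limited survey: galaxies with a spread of intrinsic
# luminosities are scattered over a range of distances, but only those
# whose apparent flux (luminosity / distance**2) exceeds a detection
# threshold enter the "observed" catalogue.

random.seed(1)

galaxies = []
for _ in range(100_000):
    luminosity = random.lognormvariate(0.0, 1.0)   # intrinsic brightness
    distance = random.uniform(1.0, 10.0)           # arbitrary units
    flux = luminosity / distance**2
    galaxies.append((luminosity, flux))

FLUX_LIMIT = 0.05
observed = [lum for lum, flux in galaxies if flux >= FLUX_LIMIT]

mean_all = sum(lum for lum, _ in galaxies) / len(galaxies)
mean_observed = sum(observed) / len(observed)

# The detected sample is biased towards the bright end of the population.
print(f"mean luminosity, whole population: {mean_all:.2f}")
print(f"mean luminosity, observed sample:  {mean_observed:.2f}")
```

This is the same logic as the ratcatcher's traps: the instrument's threshold, not the population, sets the apparent lower limit of what is seen.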
Even though you are most likely to see an elephant in a zoo that does not mean that all elephants are in zoos, or even that most elephants are in zoos.

In section 1.2 we shall restate these ideas in a more precise and quantitative form, but to get the flavour of how this form of the Anthropic Principle can be used we shall consider the question of the size of the Universe to illustrate how our own existence acts as a selection effect when assessing observed properties of the Universe. The fact that modern astronomical observations reveal the visible Universe to be close to fifteen billion light years in extent has provoked many vague generalizations about its structure, significance and ultimate purpose. Many a philosopher has argued against the ultimate importance of life in the Universe by pointing out how little life there appears to be compared with the enormity of space and the multitude of distant galaxies. But the Big Bang cosmological picture shows this up as too simplistic a judgement. Hubble's classic discovery that the Universe is in a dynamic state of expansion reveals that its size is inextricably bound up
with its age. The Universe is fifteen billion light years in size because it is fifteen billion years old. Although a universe the size of a single galaxy would contain enough matter to make more than one hundred billion stars the size of our Sun, it would have been expanding for less than a single year.

We have learned that the complex phenomenon we call 'life' is built upon chemical elements more complex than hydrogen and helium gases. Most biochemists believe that carbon, on which our own organic chemistry is founded, is the only possible basis for the spontaneous generation of life. In order to create the building blocks of life—carbon, nitrogen, oxygen and phosphorus—the simple elements of hydrogen and helium which were synthesized in the primordial inferno of the Big Bang must be cooked at a more moderate temperature and for a much longer time than is available in the early universe. The furnaces that are available are the interiors of stars. There, hydrogen and helium are burnt into the heavier life-supporting elements by exothermic nuclear reactions. When stars die, the resulting explosions, which we see as supernovae, can disperse these elements through space and they become incorporated into planets and, ultimately, into ourselves. This stellar alchemy takes over ten billion years to complete. Hence, for there to be enough time to construct the constituents of living beings, the Universe must be at least ten billion years old and therefore, as a consequence of its expansion, at least ten billion light years in extent. We should not be surprised to observe that the Universe is so large. No astronomer could exist in one that was significantly smaller. The Universe needs to be as big as it is in order to evolve just a single carbon-based life-form.

We should emphasize that this selection of a particular size for the universe actually does not depend on accepting most biochemists' belief that only carbon can form the basis of spontaneously generated life.
Even if their belief is false, the fact remains that we are a carbon-based intelligent life-form which spontaneously evolved on an earthlike planet around a star of G2 spectral type, and any observation we make is necessarily self-selected by this absolutely fundamental fact In particular, a life-form which evolved spontaneously in such an environment must necessarily see the Universe to be at least several billion years old and hence see it to be at least several billion light years across. This remains true even if non-carbon life-forms abound in the cosmos. Non-carbon life-forms are not necessarily restricted to seeing a minimum size to the universe, but we are. Human bodies are measuring instruments whose self-selection properties must be taken into account, just as astronomers must take into account the self-selection properties of optical telescopes. Such telescopes tell us about radiation in the visible band of the electromagnetic spectrum, but it would be completely illegitimate to conclude from purely 6
optical observations that all of the electromagnetic energy in the Universe is in the visible band. Only when one is aware of the self-selection of optical telescopes is it possible to consider the possibility that non-visible radiation exists. Similarly, it is essential to be aware of the self-selection which results from our being Homo sapiens when trying to draw conclusions about the nature of the Universe. This self-selection principle is the most basic version of the Anthropic Principle and it is usually called the Weak Anthropic Principle. In a sense, the Weak Anthropic Principle may be regarded as the culmination of the Copernican Principle, because the former shows how to separate those features of the Universe whose appearance depends on anthropocentric selection from those features which are genuinely determined by the action of physical laws. In fact, the Copernican Revolution was initiated by the application of the Weak Anthropic Principle. The outstanding problem of ancient astronomy was explaining the motion of the planets, particularly their retrograde motion. Ptolemy and his followers explained the retrograde motion by invoking an epicycle, the ancient astronomical version of a new physical law. Copernicus showed that the epicycle was unnecessary; the retrograde motion was due to an anthropocentric selection effect: we were observing the planetary motions from the vantage point of the moving Earth. At this level the Anthropic Principle deepens our scientific understanding of the link between the inorganic and organic worlds and reveals an intimate connection between the large- and small-scale structure of the Universe. It enables us to elucidate the interconnections that exist between the laws and structures of Nature and to gain new insight into the chain of universal properties required to permit life.
The realization that the possibility of biological evolution is strongly dependent upon the global structure of the Universe is truly surprising and perhaps provokes us to consider that the existence of life may be no more, but no less, remarkable than the existence of the Universe itself. The Anthropic Principle, in all of its manifestations but particularly in its Weak form, is closely analogous to the self-reference arguments of mathematics and computer science. These self-reference arguments lead us to understand the limitations of logical knowledge: Gödel's Incompleteness Theorem demonstrates that any mathematical system sufficiently complex to contain arithmetic must contain true statements which cannot be proved within it, while Turing's Halting Theorem shows that a computer cannot fully understand itself. Similarly, the Anthropic Principle shows that the observed structure of the Universe is restricted by the fact that we are observing this structure; by the fact that, so to speak, the Universe is observing itself. The size of the observable Universe is a property that is changing with
time because of the overall expansion of the system of galaxies and clusters. A selection effect enters because we are constrained by the timescales of biological evolution to observe the Universe only after billions of years of expansion have already elapsed. However, we can take this consideration a little further. One of the most important results of twentieth-century physics has been the gradual realization that there exist invariant properties of the natural world and its elementary components which render the gross size and structure of virtually all its constituents quite inevitable. The sizes of stars and planets, and even people, are neither random nor the result of any Darwinian selection process from a myriad of possibilities. These and other gross features of the Universe are the consequences of necessity; they are manifestations of the possible equilibrium states between competing forces of attraction and repulsion. The intrinsic strengths of these controlling forces of Nature are determined by a mysterious collection of pure numbers that we call the constants of Nature. The Holy Grail of modern physics is to explain why these numerical constants—quantities like the ratio of the proton and electron masses for example—have the particular numerical values they do. Although there has been significant progress towards this goal during the last few years, we still have far to go in this quest. Nevertheless, there is one interesting approach that we can take which employs an Anthropic Principle in a more adventurous and speculative manner than the examples of self-selection we have already given. It is possible to express some of the necessary or sufficient conditions for the evolution of observers as conditions on the relative sizes of different collections of constants of Nature. Then we can determine to what extent our observation of the peculiar values these constants are found to take is necessary for the existence of observers.
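The claim that the gross sizes of structures follow from the constants of Nature can be made concrete with a small computation. The following is our own illustrative sketch, not a calculation from the text: the well-known order-of-magnitude estimate that a characteristic stellar mass is roughly the proton mass divided by the 3/2 power of the gravitational "fine structure constant". The numerical values are standard approximations.

```python
# Sketch: a characteristic stellar mass emerges from pure numbers.
# The relation M_star ~ alpha_G**(-3/2) * m_p is the standard
# order-of-magnitude estimate, used here purely as illustration.
hbar = 1.0546e-34      # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
m_p = 1.6726e-27       # proton mass, kg
M_sun = 1.989e30       # solar mass, kg

# Gravitational "fine structure constant" for two protons:
alpha_G = G * m_p**2 / (hbar * c)          # a dimensionless pure number

# Characteristic stellar mass scale:
M_star = alpha_G**-1.5 * m_p

print(f"alpha_G = {alpha_G:.2e}")
print(f"M_star  = {M_star:.2e} kg (~{M_star / M_sun:.1f} solar masses)")
```

The point of the sketch is that nothing astronomical was put in: a mass within a factor of a few of the Sun's falls out of laboratory constants alone, which is the sense in which stellar sizes are "inevitable".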
For example, if the relative strengths of the nuclear and electromagnetic forces were to be slightly different, then carbon atoms could not exist in Nature and human physicists would not have evolved. Likewise, many of the global properties of the Universe, for instance the ratio of the number of photons to protons, must be found to lie within a very narrow range if cosmic conditions are to allow carbon-based life to arise. The early investigations of the constraints imposed upon the constants of Nature by the requirement that our form of life exist produced some surprising results. It was found that there exist a number of unlikely coincidences between numbers of enormous magnitude that are, superficially, completely independent; moreover, these coincidences appear essential to the existence of carbon-based observers in the Universe. So numerous and unlikely did these coincidences seem that Carter proposed a stronger version of the Anthropic Principle than the Weak form of
self-selection principle introduced earlier: that the Universe must be such 'as to admit the creation of observers within it at some stage.' This is clearly a more metaphysical and less defensible notion, for it implies that the Universe could not have been structured differently—that perhaps the constants of Nature could not have had numerical values other than what we observe. Now we encounter a considerable problem. For we are tempted to make statements of comparative reference regarding the properties of our observable Universe with respect to the alternative universes we can imagine possessing different values of their fundamental constants. But there is only one Universe; where do we find the other possible universes against which to compare our own in order to decide how fortunate it is that all these remarkable coincidences that are necessary for our own evolution actually exist? There has long been an interest in the idea that our Universe is but one of many possible worlds. Traditionally, this interest has been coupled with the naive human tendency to regard our Universe as optimal, in some sense, because it appears superficially to be tailor-made for the presence of living creatures like ourselves. We recall Leibniz' claim that ours is the 'best of all possible worlds'; a view that led him to be mercilessly caricatured by Voltaire as Pangloss, a professor of 'metaphysico-theologo-cosmolo-nigology'. Yet, Leibniz' claims also led Maupertuis to formulate the first Action Principles of physics which created new formulations of Newtonian mechanics and provided a basis for the modern approach to formulating and determining new laws of Nature. Maupertuis claimed that the dynamical paths through space possessing non-minimal values of a mathematical quantity he called the Action would be observed if we had less perfect laws of motion than exist in our World. They were identified with the other 'possible worlds'.
The fact that Newton's laws of motion were equivalent to bodies taking the path through space that minimizes the Action was cited by Maupertuis as proof that our World, with all its laws, was 'best' in a precise and rigorous mathematical sense. Maupertuis' ensemble of worlds is not the only one that physicists are familiar with. There have been many suggestions as to how an ensemble of different hypothetical, or actual, universes can arise. Far from being examples of idle scholastic speculation, many of these schemes are part and parcel of new developments in theoretical physics and cosmology. In general, there are four types of ensemble that one can appeal to in connection with various forms of the Anthropic Principle and they have rather different degrees of certitude. First, we can consider collections of different possible universes which are parametrized by different values of quantities that do not have the status of invariant constants of Nature. That is, quantities that can, in
principle, vary even in our observed Universe. For example, we might consider various cosmological models possessing different initial conditions but with the same laws and constants of Nature that we actually observe. Typical quantities of this sort that we might allow to change are the expansion rate or the levels of isotropy and spatial uniformity in the material content of the Universe. Mathematically, this amounts to choosing different sets of initial boundary conditions for Einstein's gravitational field equations of general relativity (solutions of these equations generate cosmological models). In general, arbitrarily chosen initial conditions at the Big Bang do not necessarily evolve to produce a universe looking like the one we observe after more than fifteen billion years of expansion. We would like to know if the subset of initial conditions that does produce universes like our own has a significant intersection with the subset that allows the eventual evolution of life. Another way of generating variations in quantities that are not constants of Nature is possible if the Universe is infinite, as current astronomical data suggest. If cosmological initial conditions are exhaustively random and infinite then anything that can occur with non-vanishing probability will occur somewhere; in fact, it will occur infinitely often. Since our Universe has been expanding for a finite time of only about fifteen billion years, only regions that are no farther away than fifteen billion light years can currently be seen by us. Any region farther away than this cannot causally influence us because there has been insufficient time for light to reach us from regions beyond fifteen billion light years. This extent defines what we call the 'observable' (or 'visible') Universe. But if the Universe is randomly infinite it will contain an infinite number of causally disjoint regions.
Conditions within these regions may be different from those within our observable part of the Universe; in some places they will be conducive to the evolution of observers but in others they may not. According to this type of picture, if we could show that conditions very close to those we observe today are absolutely necessary for life, then appeal could be made to an extended form of natural selection to claim that life will only evolve in regions possessing benign properties; hence our observation of such a set of properties in the finite portion of the entire infinite Universe that is observable by ourselves is not surprising. Furthermore, if one could show that the type of Universe we observe out to fifteen billion light years is necessary for observers to evolve then, because in any randomly infinite set of cosmological initial conditions there must exist an infinite number of subsets that will evolve into regions resembling the type of observable Universe we see, it could be argued that the properties of our visible portion of the infinite Universe neither have nor require any further explanation. This is an idea that it is possible to falsify by detecting a density of cosmic material sufficient to
render the Universe finite. Interestingly, some of the currently popular 'inflationary' theories of how the cosmic medium behaves very close to the Big Bang not only predict that if our Universe is infinite then it should be extremely non-uniform beyond our visible horizon, but these theories also exploit probabilistic properties of infinite initial data sets. A third class of universe ensembles that has been contemplated involves the speculative idea of introducing a change in the values of the constants of Nature, or other features of the Universe that strongly constrain the outcome of the laws of Nature—for example, the charge on the electron or the dimensionality of space. Besides simply imagining what would happen if our Universe were to possess constants with different numerical values, one can explore the consequences of allowing fundamental constants of Nature, like Newton's gravitation 'constant', to vary in space or time. Accurate experimental measurements are also available to constrain the allowed magnitude of any such variations. It has also been suggested that if the Universe is cyclic and oscillatory then it might be that the values of the fundamental constants are changed on each occasion the Universe collapses into the 'Big Crunch' before emerging into a new expanding phase. A probability distribution can also be associated with the observed values of the constants of Nature arising in our own Universe in some new particle physics theories that aim to show that a sufficiently old and cool universe must inevitably display apparent symmetries and particular laws of Nature even if none really existed in the initial high temperature environment near the Big Bang. These 'chaotic gauge theories', as they are called, allow, in principle, a calculation of the probability that after about fifteen billion years we see a particular symmetry or law of Nature in the elementary particle world. Finally, there is the fourth and last class of world ensemble. 
A much-discussed and considerably more subtle ensemble of possible worlds is one which has been introduced to provide a satisfactory resolution of paradoxes arising in the interpretation of quantum mechanics. Such an ensemble may be the only way to make sense of a quantum cosmological theory. This 'Many Worlds' interpretation of the quantum theory introduced by Everett and Wheeler requires the simultaneous existence of an infinite number of equally real worlds, all of which are more-or-less causally disjoint, in order to interpret consistently the relationship between observed phenomena and observers. As the Anthropic Principle has impressed many with its apparent novelty and has been the subject of many popular books and articles, it is important to present it in its true historical perspective in relation to the plethora of Design Arguments beloved of philosophers, scientists and theologians in past centuries and which still permeate the popular mind
today. When identified in this way, the idea of the Anthropic Principle in many of its forms can be traced from the pre-Socratics to the founding of modern evolutionary biology. In Chapter 2 we provide a detailed historical survey of this development. As is well known, Aristotle used the notion of 'final causes' in Nature in opposition to the more materialistic alternatives promoted by his contemporaries. His ideas became extremely influential centuries later following their adaptation and adoption by Thomas Aquinas to form his grand synthesis of Greek and Judaeo-Christian thought. Aquinas used these teleological ideas regarding the ordering of Nature to produce a Design Argument for the existence of God. Subsequently, the subject developed into a focal point for both expert and inept comment. The most significant impact upon teleological explanations for the structure of Nature arose not from the work of philosophers but rather from Darwin's Origin of Species, first published in 1859. Those arguments that had been used so successfully in the past to argue for the anthropocentric purpose of the natural world were suddenly turned upon their heads to demonstrate the contrary: the inevitable conditioning of organic structures by the local environment via natural selection. Undaunted, some leading scientists sought to retain purpose in Nature by subsuming evolutionary theory within a universal teleology. We study the role played by teleological reasoning in twentieth-century science and philosophy in Chapter 3. There we show also how more primitive versions of the Anthropic Principles have led in the past to new developments in the physical sciences. In this chapter we also describe in some detail the position of teleology and teleonomy in evolutionary biology and introduce the intimate connection between life and computers.
This allows us to develop the striking resemblance between some ideas of modern computer theorists, in which the entire Universe is envisaged as a program being run on an abstract computer rather than a real one, and the ontology of the absolute idealists. The traditional picture of the 'Heat Death of the Universe', together with the pictures of teleological evolution to be found in the works of Bergson, Alexander, Whitehead and the other philosophers of progress, leads us into studies of some types of melioristic world-view that have been suggested by philosophers and theologians. We should warn the professional historian that our presentation of the history of teleology and anthropic arguments will appear Whiggish. To the uninitiated, the term refers to the interpretation of history favoured by the great Whig (liberal) historians of the nineteenth century. As we shall discuss in Chapter 3, these scholars believed that the history of mankind was teleological: a record of slow but continual progress toward the political system dear to the hearts of Whigs, liberal democracy. The Whig historians thus analysed the events and ideas of the past from the
point of view of the present rather than trying to understand the people of the past on their own terms. Modern historians generally differ from the Whig historians in two ways: first, modern historians by and large discern no overall purpose in history (and we agree with this assessment). Second, modern historians try to approach history from the point of view of the actors rather than judging the validity of archaic world-views from our own Olympian heights. In the opinion of many professional historians, it is not the job of historians to pass moral judgments on the actions of those who lived in the past. A charge of Whiggery—analysing and judging the past from our point of view—has become one of the worst charges that one historian can level at another; a Whiggish approach to history is regarded as the shameful mark of an amateur. Nevertheless, it is quite impossible for any historian, amateur or professional, to avoid being Whiggish to some extent. As pointed out by the philosopher Morton White, in the very act of criticizing the long-dead Whig historians for judging the people of the past, the modern historians are themselves judging the work of some of their intellectual forebears, namely the Whig historians. Furthermore, every historian must always select a finite part of the infinitely-detailed past to write about. This selection is necessarily determined by the interests of people in the present, the modern historian if no one else. As even the arch critic of Whiggery, Herbert Butterfield, put it in his The Whig Interpretation of History:
The historian is something more than the mere external spectator. Something more is necessary if only to enable him to seize the significant detail and discern the sympathies between events and find the facts that hang together. By imaginative sympathy he makes the past intelligible to the present. He translates its conditioning circumstances into terms which we today can understand. It is in this sense that history must always be written from the point of view of the present. It is in this sense that every age will have to write its history over again.

This is one of the senses in which we shall be Whiggish: we shall try to interpret the ideas of the past in terms a modern scientist can understand. For example, we shall express the concepts of absolute idealism in computer language, and describe the cosmologies of the past in terms of the language used by modern cosmologists. But our primary purpose in this book is not to write history. It is to describe the modern Anthropic Principle. This will necessarily involve the use of some fairly sophisticated mathematics and require some familiarity with the concepts of modern physics. Not all readers who are interested in reading about the Anthropic Principle will possess all the requisite scientific background. Many of these readers—for instance, theologians
and philosophers—will actually be more familiar with the philosophical ideas of the past than with more recent scientific developments. The history sections have been written so that such readers can get a rough idea of the modern concepts by seeing the parallels with the old ideas. Such an approach will give a Whiggish flavour to our treatment of the history of teleology. There is a third reason for the Whiggish flavour of our history: we do want to pass judgments on the work of the scientists and philosophers of the past. Our purpose in doing so is not to demonstrate our superiority over our predecessors, but to learn from their mistakes and successes. It is essential to take this approach in a book on a teleological idea like the Anthropic Principle. There is a general belief that teleology is scientifically bankrupt, and that history shows it always has been. We shall show that on the contrary, teleology has on occasion led to significant scientific advances. It has admittedly also led scientists astray; we want to study the past in order to learn under what conditions we might reasonably expect teleology to be a reliable guide. The fourth and final reason for the appearance of Whiggery in our history of teleology is that there are recurring themes present in the history of teleology; we are only reporting them. We refuse to distort history to fit the current fad of historiography. We are not the only contemporary students of history to discern such patterns in intellectual history. Such patterns are particularly noticeable in the history of science: the distinguished historian of science Gerald Holton has termed such recurring patterns themata.
To cite just one example of a recurring thema from the history of teleology, the cosmologies of the eighteenth-century German idealist Schelling, the twentieth-century British philosopher Alexander, and Teilhard de Chardin are quite similar, simply because all of these men believed in an evolving, melioristic universe; and, broadly speaking, there is really only one way to construct such a cosmology. We shall discuss this form of teleology in more detail in Chapters 2 and 3. In Chapter 4 we shall describe in detail how the modern form of the Anthropic self-selection principle arose out of the study of the famous Large Number Coincidences of cosmology. Here the Anthropic Principle was first employed in its modern form to demonstrate that the observed Large Number Coincidences are necessary properties of an observable Universe. This was an important observation because the desire for an explanation of these coincidences had led Dirac to conclude that Newton's gravitation constant must decrease with cosmic time. His suggestion started an entirely new sub-culture in gravitation research. We then examine in more detail the idea that there may exist ensembles of different universes in which various coincidences between
the values of fundamental constants deviate from their observed values. One of the earliest uses of the Anthropic self-selection idea was that of Whitrow who invoked it as a means of explaining why space is found to possess three dimensions, and we develop this idea in the light of modern ideas in theoretical physics. One of the themes of this chapter is that the recognition of unusual and suggestive coincidences between the numerical values of combinations of physical constants can play an important role in framing detailed theoretical descriptions of the Universe's structure. Chapter 5 shows how one can determine the gross structure of all the principal constituents of the physical world as equilibrium states between competing fundamental forces. We can then express these characteristics solely in terms of dimensionless constants of Nature aside from inessential geometrical factors like 2π. Having achieved such a description one is in a position to determine the sensitivity of structures essential to the existence of observers with respect to small changes in the values of fundamental constants of Nature. The principal achievement of this type of approach to structures in the Universe is that it enables one to identify which fortuitous properties of the Universe are real coincidences and distinguish them from those which are inevitable consequences of the particular values that the fundamental constants take. The facts that the mass of a human is the geometric mean of a planetary and an atomic mass, and that the mass of a planet is the geometric mean of an atomic mass and the mass of the observable Universe, are two striking examples. These apparent 'coincidences' are actually consequences of the particular numerical values of the fundamental constants defining the gravitational and electromagnetic interactions of physics.
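The first of these geometric-mean relations can be checked with rough numbers. The sketch below is our own illustration with masses we have chosen for the purpose (a hydrogen atom, a Jupiter-sized planet, a 70 kg human); such a 'coincidence' is meaningful only on a logarithmic scale, where the masses involved span some fifty powers of ten.

```python
# Sketch: checking the "human mass ~ geometric mean of a planetary mass
# and an atomic mass" relation on a logarithmic scale. The input masses
# are rough illustrative values, not figures from the text.
import math

m_atom = 1.67e-27      # kg, roughly one hydrogen atom (a proton)
m_planet = 1.9e27      # kg, a Jupiter-sized planet
m_human = 70.0         # kg

geometric_mean = math.sqrt(m_planet * m_atom)
span = math.log10(m_planet / m_atom)             # orders of magnitude spanned
gap = abs(math.log10(m_human / geometric_mean))  # offset from 70 kg, in decades

print(f"geometric mean      = {geometric_mean:.2f} kg")
print(f"mass span           = {span:.0f} orders of magnitude")
print(f"offset from a human = {gap:.1f} orders of magnitude")
```

On a range of masses covering about 54 powers of ten, the geometric mean lands within a couple of powers of ten of a human mass: close agreement in the only sense such order-of-magnitude relations can have.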
By contrast, the fact that the disks of the Sun and Moon have virtually the same angular size (about half a degree) when viewed from Earth is a pure coincidence and it does not appear to be one that is necessary for the existence of observers. The ratio of the Earth's radius to its distance from the Sun is another pure coincidence, in that it is not determined by fundamental constants of Nature alone, but were this ratio slightly different from what it is observed to be, observers could not have evolved on Earth. The arguments of Chapter 5 can be used to elucidate the inevitable sizes and masses of objects spanning the range from atomic nuclei to stars. If we want to proceed further up the size-spectrum things become more complicated. It is still not known to what extent properties of the whole Universe, determined perhaps by initial conditions or events close to the Big Bang, play a role in fixing the sizes of galaxies and galaxy clusters. In Chapter 6 we show how the arguments of Chapter 5 can be extended into the cosmological realm where we find the constants of Nature joined by several dimensionless cosmological parameters to complete the description of the Universe's coarse-grained structure. We give a detailed
overview of modern cosmology together with the latest consequences of unified gauge theories for our picture of the very early Universe. This picture enables us to interrelate many aspects of the Universe once regarded as independent coincidences. It also enables us to highlight a number of extraordinarily finely tuned coincidences upon which the possible evolution of observers appears to hinge. We are also able to show that well-known Anthropic arguments regarding the observation that the Universe is isotropic to within one part in ten thousand are not actually correct. In order to trace the origin of the Universe's most unusual large-scale properties, we are driven closer and closer to events neighbouring the initial singularity, if such there was. Eventually, classical theories of gravitation become inadequate and a study of the first instants of the universal expansion requires a quantum cosmological model. The development of such a quantum gravitational theory is the greatest unsolved problem in physics at present but fruitful approaches towards effecting a marriage between quantum field theory and general relativity are beginning to be found. There have even been claims that a quantum wave function for the Universe can be written down. Quantum mechanics involves observers in a subtle and controversial manner. There are several schools of thought regarding the interpretation of quantum theory. These are described in detail in Chapter 7. After describing the 'Copenhagen' and 'Many Worlds' interpretations we show that the latter picture appears to be necessary to give meaning to any wave function of the entire Universe and we develop a simple quantum cosmological model in detail. This description allows the Anthropic Principle to make specific predictions. The Anthropic Principles seek to link aspects of the global and local structure of the Universe to those conditions necessary for the existence of living observers.
It is therefore of crucial importance to be clear about what we mean by 'life'. In Chapter 8 we give a new definition of life and discuss various alternatives that have been suggested in the past. We then consider those aspects of chemical and biochemical structures that appear necessary for life based upon atomic structures. Here we are, in effect, extending the methodology of Chapter 5 from astrophysics to biochemistry with the aim of determining how the crucial properties of molecular structures are related to the invariant aspects of Nature in the form of fundamental constants and bonding angles. To complete this chapter we extend some recent ideas of Carter regarding the evolution of intelligent life on Earth. This leads to an Anthropic Principle prediction which relates the likely future survival time of terrestrial life to the number of improbable steps in the evolution of intelligent life on Earth via a simple mathematical inequality.
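The statistical core of Carter's argument can be illustrated with a toy simulation. This is our own construction, not the book's derivation: if evolution requires n very improbable ('hard') steps, all of which must happen inside a habitable window of length T, then conditional on success the step times behave like n uniform random points in [0, T], and the expected time remaining after the last step is T/(n + 1); the more hard steps there were, the less future the successful lineage should expect.

```python
# Toy Monte Carlo for the "hard step" argument (illustration only).
# A hard step's waiting time is exponential with a timescale much longer
# than T; conditioned on occurring within [0, T], it is nearly uniform.
# So we model the n step times as uniform points in [0, T] and measure
# the average time left after the final step.
import random

random.seed(42)

def mean_leftover(n_steps, window=1.0, trials=100_000):
    """Average time remaining in the window after the last of n_steps."""
    total = 0.0
    for _ in range(trials):
        last_step = max(random.uniform(0, window) for _ in range(n_steps))
        total += window - last_step
    return total / trials

for n in (1, 2, 5, 10):
    est = mean_leftover(n)
    print(f"n={n:2d}: leftover ~ {est:.3f}  (theory {1 / (n + 1):.3f})")
```

The simulated averages track T/(n + 1), which is the shape of the inequality discussed in Chapter 8: an estimate of Earth's remaining habitable lifetime constrains the number of hard steps, and vice versa.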
In Chapter 9 we discuss the controversial subject of extraterrestrial life and provide arguments that there probably exists no other intelligent species with the capability of interstellar communication within our own Milky Way Galaxy. We place more emphasis upon the ideas of biologists regarding the likelihood of intelligent life-forms evolving than is usually done by astronomers interested in the possibility of extraterrestrial intelligence. As a postscript we show how the logic used to project the capabilities of technologically advanced life-forms can be used to frame an Anthropic Principle argument against the possibility that we live in a Steady-State Universe. This shows that Anthropic Principle arguments can be used to winnow out cosmological theories. Conversely, if the theories which contradict the Anthropic Principle are found to be correct, the Anthropic Principle is refuted; this gives another test of the Anthropic Principle. Finally, in Chapter 10, we attempt to predict the possible future histories of the Universe in the light of known physics and cosmology. We describe in detail the expected evolution of both open and closed cosmological models in the far future and also stress a number of global constraints that exist upon the structure of a universe consistent with our own observations today. In our final speculative sections we investigate the possibility of life surviving into the indefinite future of both open and closed universes. We define life using the latest ideas in information and computer theory and determine what the Universe must be like in order that information-processing continue indefinitely; in effect, we investigate the implications for physics of the requirement that 'life' never becomes extinct.
Paradoxically, this appears to be possible only in a closed universe with a very special global causal structure, and thus the requirement that life never dies out—which we define precisely by a new 'Final Anthropic Principle'—leads to definite testable predictions about the global structure of the Universe. Since indefinite survival in a closed universe means survival in a high-energy environment near the final singularity, the Final Anthropic Principle also leads to some predictions in high-energy particle physics. Before abandoning the reader to the rest of the book we should make a few comments about its contents. Our study involves detailed mathematical investigations of physics and cosmology, studies of chemistry and evolutionary biology as well as a considerable amount of historical description and analysis. We hope we have something new to say in all these areas. However, not every reader will be interested in all of this material. Our chapters have, in the main, been constructed in such a way that they can be read independently, and the notes and references are collected together accordingly. Scientists with no interest in the history of ideas can just skip the chapters in which they are discussed. Likewise,

15 Introduction

non-scientists can avoid the mathematics altogether if they wish. One last word: the authors are cosmologists, not philosophers. This has one very important consequence which the average reader should bear in mind. Whereas philosophers and theologians appear to possess an emotional attachment to their theories and ideas which requires them to believe them, scientists tend to regard their ideas differently. They are interested in formulating many logically consistent possibilities, leaving any judgement regarding their truth to observation. Scientists feel no qualms about suggesting different but mutually exclusive explanations for the same phenomenon. The authors are no exception to this rule and it would be unwise of the reader to draw any wider conclusions about the authors' views from what they may read here.

1.2 Anthropic Definitions

Definitions are like belts. The shorter they are, the more elastic they need to be. S. Toulmin

Although the Anthropic Principle is widely cited and has often been discussed in the astronomical literature (as can be seen from the bibliography to this chapter alone), there exist few attempts to frame a precise statement of the Principle; rather, astronomers seem to like to leave a little flexibility in its formulation, perhaps in the hope that its significance may thereby more readily emerge in the future. The first published discussion, by Carter, saw the introduction of a distinction between what he termed 'Weak' and 'Strong' Anthropic statements. Here, we would like to define precise versions of these two Anthropic Principles and then introduce Wheeler's Participatory Anthropic Principle together with a new Final Anthropic Principle which we shall investigate in Chapter 10. The Weak Anthropic Principle (WAP) tries to tie a precise statement to the notion that any cosmological observations made by astronomers are biased by an all-embracing selection effect: our own existence. Features of the Universe which appear to us astonishingly improbable, a priori, can only be judged in their correct perspective when due allowance has been made for the fact that certain properties of the Universe are necessary if it is to contain carbonaceous astronomers like ourselves. This approach to evaluating unusual features of our Universe first re-emerges in modern times in a paper of Whitrow who, in 1955, sought an answer to the question 'why does space have three dimensions?'. Although unable to explain why space actually has (or perhaps even why it must have) three dimensions, Whitrow argued that this feature of the World is not unrelated to our own existence as observers of it. When formulated in three dimensions, mathematical physics possesses many


unique properties that are necessary prerequisites for the existence of rational information-processing and 'observers' similar to ourselves. Whitrow concluded that only in three-dimensional spaces can the dimensionality of space be questioned. At about the same time Whitrow also pointed out that the expansion of the Universe forges an unbreakable link between its overall size and age and the ambient density of material within it. This connection reveals that only a very 'large' universe is a possible habitat for life. More detailed ideas of this sort had also been published in Russian by the Soviet astronomer Idlis. He argued that a variety of special astronomical conditions must be met if a universe is to be habitable. He also entertained the possibility that we were observers merely of a tiny fraction of a diverse and infinite universe whose unobserved regions may not meet the minimum requirements for observers that there exist hospitable temperatures and stable sources of stellar energy. Our definition of the WAP is motivated in part by these insights together with later, rather similar ideas of Dicke who, in 1957, pointed out that the number of particles in the observable extent of the Universe, and the existence of Dirac's famous Large Number Coincidences 'were not random but conditioned by biological factors'. This motivates the following definition: Weak Anthropic Principle (WAP): The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirement that the Universe be old enough for it to have already done so. Again we should stress that this statement is in no way either speculative or controversial. It expresses only the fact that those properties of the Universe we are able to discern are self-selected by the fact that they must be consistent with our own evolution and present existence. 
WAP would not necessarily restrict the observations of non-carbon-based life, but our observations are restricted by our very special nature. As a corollary, the WAP also challenges us to isolate that subset of the Universe's properties which are necessary for the evolution and continued existence of our form of life. The entire collection of the Universe's laws and properties that we now observe need be neither necessary nor sufficient for the existence of life. Some properties, for instance the large size and great age of the Universe, do appear to be necessary conditions; others, like the precise variation in the distribution of matter in the Universe from place to place, may not be necessary for the development of observers at some site. The non-teleological character of evolution by natural selection ensures that none of the observed properties of the Universe are sufficient conditions for the evolution and existence of life.


Carter, and others, have pointed out that as a self-selection principle the WAP is a statement of Bayes' theorem. The Bayesian approach to inference attributes a priori and a posteriori probabilities to any hypothesis before and after some piece of relevant evidence, E, is taken into account. In such a situation we call the before and after probabilities p_B and p_A, respectively. The fact that, for any particular outcome O, the probability of observing O after the evidence E has been accounted for equals the probability of observing O given the evidence E, evaluated before E was accounted for, is expressed by the equation

p_A(O) = p_B(O/E)    (1.1)

where / denotes a conditional probability. Bayes' formula then gives the relative plausibility of any two theories α and β in the face of a piece of evidence E as

p_A(α)/p_A(β) = [p_B(E/α) p_B(α)] / [p_B(E/β) p_B(β)]    (1.2)

Thus the relative probabilities of the truth of α or β are modified by the conditional probabilities p_B(E/α) and p_B(E/β), which account for any bias of the experiment (or experimenter) towards gathering evidence that favours α rather than β (or vice versa). The WAP as we have stated it is just an application of Bayes' theorem. The WAP is certainly not a powerless tautological statement, because cosmological models have been defended in which the gross structure of the Universe is predicted to be the same on the average whenever it is observed. The, now defunct, continuous creation theory proposed by Bondi, Gold and Hoyle is a good example. The WAP could have been used to make this steady-state cosmology appear extremely improbable even before it came into irredeemable conflict with direct observations. As Rees points out,

the fact that there is an epoch when [the Hubble time, t_H, which is essentially equal to the age of the Universe] is of order the age of a typical star is not surprising in any 'big bang' cosmology. Nor is it surprising that we should ourselves be observing the universe at this particular epoch. In a steady-state cosmology, however, there would seem no a priori reason why the timescale for stellar evolution should not be either [much less than] t_H (in which case nearly all the matter would be in dead stars or 'burnt-out' galaxies) or [much greater than] t_H (in which case only a very exceptionally old galaxy would look like our own). Such considerations could have provided suggestive arguments in favour of 'big bang' cosmologies . . .
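The Bayesian bookkeeping of (1.2) can be illustrated with a toy calculation. The two hypotheses and every numerical value below are invented purely for illustration; nothing here is taken from the text:

```python
# Toy illustration of Bayes' formula (1.2). The prior probabilities and the
# likelihoods p(E/hypothesis) are invented numbers, not values from the text.
p_prior = {"alpha": 0.5, "beta": 0.5}       # p_B: plausibility before the evidence E
likelihood = {"alpha": 1e-3, "beta": 1.0}   # p_B(E/h): chance of seeing E under each hypothesis

def relative_plausibility(h1, h2):
    """Odds of h1 against h2 once the evidence E is accounted for, following (1.2)."""
    return (likelihood[h1] * p_prior[h1]) / (likelihood[h2] * p_prior[h2])

odds = relative_plausibility("beta", "alpha")
print(f"beta is favoured over alpha by a factor of {odds:.0f}")
```

With equal priors, the posterior odds are fixed entirely by how strongly each hypothesis anticipated the evidence; this is the sense in which a selection effect that forces p(E/β) ≈ 1 can rescue an otherwise 'improbable' hypothesis.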

We can also give some examples of how the WAP leads to synthesizing insights that deepen our appreciation of the unity of Nature. Observed facts, often suspected at first sight to be unrelated, can be connected by


examining their relation to the conditions necessary for our own existence and their explicit dependence on the constants of physics. Let us reconsider, from the Bayesian point of view, the classic example mentioned in section 1.1, relating the size of the Universe to the period of time necessary to generate observers. The requirement that enough time pass for cosmic expansion to cool off sufficiently after the Big Bang to allow the existence of carbon ensures that the observable Universe must be relatively old and so, because the boundary of the observable Universe expands at the speed of light, very large. The nuclei of carbon, nitrogen, oxygen and phosphorus of which we are made are cooked from the light primordial nuclei of hydrogen and helium by nuclear reactions in stellar interiors. When a star nears the end of its life, it disperses these biological precursors throughout space. The time required for stars to produce carbon and other bioactive elements in this way is roughly the lifetime of a star on the 'main-sequence' of its evolution, given by

t* ~ (G m_N^2 / hc)^-1 h/(m_N c^2) ~ 10^10 yr    (1.3)

where G is Newton's gravitation constant, c is the velocity of light, h is Planck's constant and m_N is the proton mass. Thus, in order that the Universe contain the building-blocks of life, it must be at least as old as t* and hence, by virtue of its expansion, at least ct* (roughly ten billion light years) in extent. No one should be surprised to find the Universe to be as large as it is. We could not exist in one that was significantly smaller. Moreover, the argument that the Universe should be teeming with civilizations on account of its vastness loses much of its persuasiveness: the Universe has to be as big as it is in order to support just one lonely outpost of life. 
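The arithmetic behind this size-age link is elementary. A minimal sketch (rounded constants, order-of-magnitude only) converts a stellar timescale into a minimum extent:

```python
# A universe must be at least t* old to cook carbon, and, expanding at roughly
# the speed of light, must therefore be at least c*t* in extent. Note that an
# age measured in years converts to an extent in light years one-for-one.
SECONDS_PER_YEAR = 3.156e7
C = 2.998e8  # speed of light, m/s

t_star_yr = 1.0e10                             # rough main-sequence timescale, yr
extent_m = C * t_star_yr * SECONDS_PER_YEAR    # minimal extent ~ c * t*, in metres
extent_ly = extent_m / (C * SECONDS_PER_YEAR)  # ...which is 1e10 light years

print(f"minimum extent ~ {extent_ly:.1e} light years")
```

The cancellation in the last line is the whole point of the argument: ten billion years of stellar evolution translates directly into ten billion light years of size.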
Here, we can see the deployment of (1.2) explicitly if we let α be the hypothesis that the large size of the Universe is superfluous for life on planet Earth, and let β be the hypothesis that life on Earth is connected with the size of the Universe. If the evidence E is that the Universe is observed to be greater than ten billion light years in extent then, although p_B(E/β) ≪ 1, the hypothesis β is not necessarily then improbable, because we have argued that p_A(E/β) ≈ 1. We also observe the expansion of the Universe to be occurring at a rate which is irresolvably close to the special value which allows it the smallest deceleration compatible with indefinite future expansion. This feature of the Universe is also dependent on the epoch of observation. And again, if galaxies and clusters of galaxies grow in extent by mergers and hierarchical clustering, then the characteristic scale of galaxy clustering that we infer will be determined by the cosmic epoch at which it is observed. Ellis has stressed the existence of a spatial restriction which further circumscribes the range of observed astronomical phenomena. What


amounts to a universal application of the principle of natural selection would tell us that observers may only exist in particular regions of a spatially inhomogeneous universe. Since realistic mathematical models of inhomogeneous universes are extremely difficult to construct, various unverifiable cosmological 'Principles' are often used by theoretical cosmologists to allow simple cosmological models to be extracted from Einstein's general theory of relativity. These Principles invariably make statements about regions of the Universe which are unobservable not only in practice but also in principle (because of the finite speed of light). Principles of this sort need to be used with care. For example, Principles of Mediocrity like the Copernican Principle or the Principle of Plenitude (see Chapter 3) would imply that if the Universe did possess a preferred place, or centre, then we should not expect to find ourselves positioned there. However, general relativity allows possible cosmological models to be constructed which not only possess a centre, but which also have conditions conducive to the existence of observers only near that centre. The WAP would offer a good explanation for our central position in such circumstances, whilst the Principles of Mediocrity would force us to conclude that we do not exist at all! According to WAP, it is possible to contemplate the existence of many possible universes, each possessing different defining parameters and properties. Observers like ourselves obviously can exist only in that subset containing universes consistent with the evolution of carbon-based life. This approach necessarily introduces the idea of an ensemble of possible universes and was suggested independently by the Cambridge biologist Charles Pantin in 1965. 
Pantin had recognized that a vague principle of amazement at the fortuitous properties of natural substances like carbon or water could not yield any testable predictions about the World, but the amazement might disappear if

we could know that our Universe was only one of an indefinite number with varying properties, [so] we could perhaps invoke a solution analogous to the principle of Natural Selection; that only in certain universes which happen to include ours, are the conditions suitable for the existence of life, and unless that condition is fulfilled there will be no observers to note the fact

However, as Pantin also realized, it still remains an open question as to why any permutation of the fundamental constants of Nature allows the existence of life, albeit a question we would not be worrying about were such a fortuitous permutation not to exist. If one subscribes to this 'ensemble interpretation' of the WAP one must decide how large an ensemble of alternative worlds is to be admitted. Many ensembles can be imagined according to our willingness


to speculate—different sets of cosmological initial data, different numerical values of fundamental constants, different space-time dimensions, different laws of physics—some of these possibilities we shall discuss in later chapters. The theoretical investigations initiated by Carter reveal that in some sense the subset of the ensemble containing worlds able to evolve observers is very 'small'. Most perturbations of the fundamental constants of Nature away from their actual numerical values lead to model worlds that are still-born, unable to generate observers and become cognizable. Usually, they allow neither nuclei, atoms nor stars to exist. Whatever the size and variety of permutations allowed within a hypothetical ensemble of 'many worlds', one might introduce here an analogue of the Drake equation often employed to guess the number of extraterrestrial civilizations in our Galaxy. Instead of expressing the probability of life existing elsewhere as a product of independent probabilities for the occurrence of processes like planetary formation, protocellular evolution and so forth, one could express the probability of life existing anywhere as a product of probabilities that encode the fact that life is only possible if parameters like the fine structure constant or the strong coupling constant lie in a particular numerical range. The existence of the fundamental cosmic timescale like (1.3), fixed only by invariant constants of Nature, c, h, G, and m_N, was exploited by Dicke to produce a powerful WAP argument against Dirac's conclusion that the Newtonian gravitation constant, G, is decreasing with time. Dirac had noticed that the dimensionless measure of the strength of gravity


α_G ≡ G m_N^2 / hc ~ 10^-39    (1.4)

is roughly of order the inverse square root of the number of nucleons in the observable Universe, N(t), at the present time, t_0 ~ 10^10 yrs. At any time, t, the quantity N(t) is simply

N(t) ~ M/m_N ~ 4π ρ_u (ct)^3 / 3m_N ~ c^3 t / G m_N ~ 10^79    (1.5)

if we use the cosmological relation that the density of the Universe, ρ_u, is related to its age by ρ_u ~ (G t^2)^-1. (The present age of roughly 10^10 yrs is displayed in the last step.) Dirac argued that it is very unlikely that these two quantities should possess simply related dimensionless magnitudes which are both so vastly different from unity and yet be independent. Rather, there must exist an approximate equality between them of the form

α_G ~ N(t)^-1/2    (1.6)
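The coincidence asserted in (1.4)-(1.6) is easy to check numerically. The sketch below uses rounded SI constants and takes the reduced Planck constant in place of h (an assumption on our part; the coincidence is in any case only an order-of-magnitude statement):

```python
# Order-of-magnitude check of Dirac's coincidence alpha_G ~ N(t)^(-1/2).
# Constants are rounded; agreement to within a couple of orders of magnitude
# is all that is claimed in the text.
G = 6.674e-11      # gravitation constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s (assumed here in place of h)
m_N = 1.673e-27    # proton mass, kg
t0 = 1.0e10 * 3.156e7  # present age ~ 10^10 yr, in seconds

alpha_G = G * m_N**2 / (hbar * c)  # (1.4): dimensionless strength of gravity
N = c**3 * t0 / (G * m_N)          # (1.5): nucleon number in the observable Universe

print(f"alpha_G ~ {alpha_G:.1e}")  # ~ 6e-39
print(f"N(t0)   ~ {N:.1e}")        # ~ 8e79
print(f"alpha_G * sqrt(N) ~ {alpha_G * N**0.5:.1f}")  # within ~2 orders of unity
```

The product in the last line would be exactly 1 if (1.6) held as an equality; landing within a few powers of ten of unity, while each factor separately differs from unity by nearly forty orders of magnitude, is the coincidence Dirac seized upon.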


However, whereas α_G is a time-independent combination of constants, N(t) increases linearly with the time of observation, t, which for us is the present age of the Universe. The relation (1.6) can only hold for all times if one component of α_G is time-varying, and so Dirac suggested that we must have G ∝ t^-1, so that N(t)^1/2 ∝ t. The quantities N(t)^1/2 and α_G^-1 are now observed to be of the same magnitude because (as a result of some unfound law of Nature) they are actually equal, and furthermore, they are of such an enormous magnitude because they both increase linearly in time and the Universe is very old—although this 'oldness' can presumably only be explained by the WAP even in this scheme of 'varying' constants, for the reasons discussed above in connection with the size of the Universe. However, the WAP shows Dirac's radical conclusion of a time-varying Newtonian gravitation constant to be quite unnecessary. The coincidence that today we observe N ~ α_G^-2 is necessary for our existence. Since we would not expect to observe the Universe either before stars form or after they have burnt out, human astronomers will most probably observe the Universe close to the epoch t* given by (1.3). Hence, we will observe the time-dependent quantity N(t) to take on a value of order N(t*) and, by (1.3) and (1.4), this value is necessarily just

N(t*) ~ (G m_N^2 / hc)^-2 ~ α_G^-2    (1.7)

where the second relation is a consequence of the value of t* in (1.3). If we let δ be Dirac's hypothesis of time-varying G, while γ is the hypothesis that G is constant, and the 'evidence', E, is the coincidence (1.6), then, although the a priori probability that we live at the time when the numbers N(t)^1/2 and α_G^-1 are equal is very low (p_B(E/γ) ≪ 1), this does not render hypothesis γ (the constancy of G) implausible, because there is an anthropic selection effect which ensures p_A(E/γ) ≈ 1. This selection effect is the one pointed out by Dicke. We should notice that this argument alone explains why we must observe N(t) and α_G^-2 to be of equal magnitude, but not why that magnitude has the extraordinarily large value ~10^79. (We shall have a lot more to say about this problem in Chapters 4, 5 and 6.) As mentioned in section 1.1, Carter introduced the more speculative Strong Anthropic Principle (SAP) to provide a 'reason' for our observation of large dimensionless ratios like 10^79; we state his SAP as follows: Strong Anthropic Principle (SAP): The Universe must have those properties which allow life to develop within it at some stage in its history. An implication of the SAP is that the constants and laws of Nature must be such that life can exist. This speculative statement leads to a


number of quite distinct interpretations of a radical nature. Firstly, the most obvious is to continue in the tradition of the classical Design Arguments and claim that: (A) There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'. This view would have been supported by the natural theologians of past centuries, whose views we shall examine in Chapter 2. More recently it has been taken seriously by scientists, including the Harvard chemist Lawrence Henderson and the British astrophysicist Fred Hoyle, so impressed were they by the string of 'coincidences' that exist between particular numerical values of dimensionless constants of Nature without which life of any sort would be excluded. Hoyle points out how natural it might be to draw a teleological conclusion from the fortuitous positioning of nuclear resonance levels in carbon and oxygen:


I do not believe that any scientist who examined the evidence would fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce inside the stars. If this is so, then my apparently random quirks have become part of a deep-laid scheme. If not then we are back again at a monstrous sequence of accidents.

The interpretation (A) above does not appear to be open either to proof or to disproof and is religious in nature. Indeed it is a view either implicit or explicit in most theologies. This is all we need say about the 'teleological' version of the SAP at this stage. However, the inclusion of quantum physics into the SAP produces quite different interpretations. Wheeler has coined the title 'Participatory Anthropic Principle' (PAP) for a second possible interpretation of the SAP: (B) Observers are necessary to bring the Universe into being. This statement is somewhat reminiscent of the outlook of Bishop Berkeley and we shall see that it has physical content when considered in the light of attempts to arrive at a satisfactory interpretation of quantum mechanics. It is closely related to another possibility: (C) An ensemble of other different universes is necessary for the existence of our Universe. This statement receives support from the 'Many-Worlds' interpretation of quantum mechanics and a sum-over-histories approach to quantum gravitation because they must unavoidably recognize the existence of a whole class of real 'other worlds' from which ours is selected by an optimizing principle. We shall express this version of the SAP


mathematically in Chapter 7, and we shall see that this version of the SAP has consequences which are potentially testable. Suppose that for some unknown reason the SAP is true and that intelligent life must come into existence at some stage in the Universe's history. But if it dies out at our stage of development, long before it has had any measurable non-quantum influence on the Universe in the large, it is hard to see why it must have come into existence in the first place. This motivates the following generalization of the SAP: Final Anthropic Principle (FAP): Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, it will never die out. We shall examine the consequences of the FAP in our final chapter by using the ideas of information theory and computer science. The FAP will be made precise in that chapter. As we shall see, FAP will turn out to require the Universe and elementary particle states to possess a number of definite properties. These properties provide observational tests for this statement of the FAP. Although the FAP is a statement of physics and hence ipso facto has no ethical or moral content, it nevertheless is closely connected with moral values, for the validity of the FAP is the physical precondition for moral values to arise and to continue to exist in the Universe: no moral values of any sort can exist in a lifeless cosmology. Furthermore, the FAP seems to imply a melioristic cosmos. We should warn the reader once again that both the FAP and the SAP are quite speculative; unquestionably, neither should be regarded as well-established principles of physics. In contrast, the WAP is just a restatement, albeit a subtle restatement, of one of the most important and well-established principles of science: that it is essential to take into account the limitations of one's measuring apparatus when interpreting one's observations.

References

1. B. Carter, in Confrontation of cosmological theories with observation, ed. M. S. Longair (Reidel, Dordrecht, 1974), p. 291. 2. See for example P. J. Peebles, The large scale structure of the universe (Princeton University Press, Princeton, 1980). 3. A. Sandage and E. Hardy, Astrophys. J. 183, 743 (1973). S. Weinberg, Gravitation and cosmology (Wiley, NY, 1972). 4. We examine some of these claims in Chapters 2 and 3. 5. E. Hubble, Proc. natn. Acad. Sci. USA 15, 169 (1929). 6. J. A. Wheeler, in Foundational problems in the special sciences, ed. R. E.


Butts and J. Hintikka (Reidel, Dordrecht, 1977), p. 3; in The nature of scientific discovery, ed. O. Gingerich (Smithsonian Press, Washington, 1975), pp. 261-96 and pp. 575-87. 7. D. Clayton, Principles of stellar evolution and nucleosynthesis (McGraw-Hill, NY, 1968). R. J. Tayler, The stars: their evolution and structure (Wykeham, London, 1970). 8. For reviews of a number of examples see, in particular, B. J. Carr and M. J. Rees, Nature 278, 605 (1979), V. Weisskopf, Science 187, 605 (1975); we shall investigate this in detail in Chapters 5 and 6. 9. For an interesting overview of constants see The constants of nature, ed. W. H. McCrea and M. J. Rees (Royal Society of London, London, 1983). This book was originally published as the contents of Phil. Trans. R. Soc., Vol. A 310, in 1983. See also J. M. Levy-Leblond, Riv. nuovo Cim. 7, 187 (1977). 10. P. Candelas and S. Weinberg, Nucl. Phys. B 237, 397 (1984). 11. P. C. W. Davies, J. Phys. A 5, 1296 (1972). 12. M. J. Rees, Comm. Astrophys. Space Phys. 4, 182 (1972). 13. R. H. Dicke, Rev. Mod. Phys. 29, 355 and 363 (1957); Nature 192, 440 (1961). 14. P. L. M. Maupertuis, Essai de cosmologie (1751), in Oeuvres, Vol. 4, p. 3 (Lyon, 1768). We discuss these developments in detail in sections 2.5 and 3.4. 15. J. D. Barrow, 'Cosmology, the existence of observers and ensembles of possible universes', in Les voies de la connaissance (Tsukuba Conference Proceedings, Radio-France Culture, Paris, 1985). 16. J. D. Barrow, Quart. J. R. astron. Soc. 23, 146 (1983). 17. C. B. Collins and S. W. Hawking, Astrophys. J. 180, 317 (1973); J. D. Barrow, Quart. J. R. astron. Soc. 23, 344 (1982). For a popular discussion see Chapter 5 of J. D. Barrow and J. Silk, The left hand of creation (Basic Books, NY, 1983 and Heinemann, London, 1984). 18. This is an old argument applied to cosmology by G. F. R. Ellis and G. B. Brundrit, Q. J. R. astron. Soc. 20, 37 (1979). See F. J. Tipler, Quart. J. R. astron. Soc. 
22, 133 (1981) for a discussion of the history of this argument. 19. Notice that infinity alone is not a sufficient condition for this to occur; it must be an exhaustively random infinity in order to include all possibilities. 20. If the visible part of the Universe is accurately described by Friedman's equation without cosmological constant (as seems to be the case, see ref. 3) then a density exceeding about 2 × 10^-29 gm cm^-3 is required. 21. A. Guth, Phys. Rev. D23, 347 (1981); K. Sato, Mon. Not. R. astron. Soc. 195, 467 (1981); A. Linde, Phys. Lett. B 108, 389 (1982). For an overview see G. Gibbons, S. W. Hawking, and S. T. C. Siklos, The very early universe (Cambridge University Press, Cambridge, 1983). 22. A. Linde, Nuovo Cim. Lett. 39, 401 (1984). 23. J. D. Barrow, Phil. Trans. R. Soc. A 310, 337 (1983); E. Witten, Nucl. Phys. B 186, 412 (1981). 24. R. D. Reasonberg, Phil. Trans. R. Soc. A 310, 227 (1983). 25. H. B. Nielsen, Phil. Trans. R. Soc. A310, 261 (1983); J. Iliopoulos, D. V. Nanopoulos, and T. N. Tamvaros, Phys. Lett. B 94, 141 (1980); J. D. Barrow and A. C. Ottewill, J. Phys. A16, 2757 (1983). 26. J. A. Wheeler and W. H. Zurek, Quantum theory and measurement (Princeton University Press, Princeton, 1982); H. Everett, Rev. Mod. Phys. 29, 454 (1957); F. J. Tipler, 'Interpreting the wave function of the universe', Phys. Rep. (in press); B. d'Espagnat, Scient. Am., Nov. (1979), p. 128. 27. V. Trimble, Am. Scient. 65, 76 (1977); F. Dyson, Scient. Am. 224, No. 3, pp. 50-9 (Sept 1971); J. Leslie, Am. Phil. Quart. 19, 141 (1982), Am. Phil. Quart. 7, 286 (1970), Mind 92, 573 (1983); in Scientific explanation and understanding: essays on reasoning and rationality in science, ed. N. Rescher (University Press of America, Lanham, 1983), pp. 53-83; in Evolution and creation, ed. E. McMullin (University of Notre Dame Press, Notre Dame, 1984); P. J. Hall, Quart. J. R. astron. Soc. 24, 443 (1983); J. Demaret and C. Barbier, Revue des Questions Scientifiques 152, 181, 461 (1981); E. J. Squires, Eur. J. Phys. 2, 55 (1981); P. C. W. Davies, Accidental universe (Cambridge University Press, Cambridge, 1982); R. Breuer, Das anthropische Prinzip (Meyster, München, 1981); J. D. Barrow and J. Silk, Scient. Am., April (1980), p. 98; A. Finkbeiner, Sky & Telescope, Aug. (1984), p. 107; J. D. Barrow and F. J. Tipler, L'homme et le cosmos (Imago-Radio France, Paris, 1984); J. Eccles, The human mystery (Springer, NY, 1979); B. Lovell, In the centre of immensities (Harper & Row, NY, 1983); J. A. Wheeler, Am. Scient. 62, 683 (1974); G. Gale, Scient. Am. 245 (No. 6, Dec.), 154 (1981); M. T. Simmons, Mosaic (March-April 1982) p. 16; G. Wald, Origins of Life 5, 7 (1974); S. W. Hawking, CERN Courier 21 (1), 3 (1981); G. F. R. Ellis, S. Afr. J. Sci. 75, 529 (1979); S. J. Gould, Natural History 92, 34 (1983); J. Maddox, Nature 307, 409 (1984); P. C. W. Davies, Prog. Part. Nucl. Phys. 10, 1 (1983); F. J. Dyson, Disturbing the universe (Harper & Row, NY, 1979). 28. For a representative general bibliography of Design Arguments see: H. Baker, The image of man (Harper, NY, 1947); P. 
Bertocci, An introduction to the philosophy of religion (Prentice Hall, NY, 1951); The cosmological arguments, ed. D. R. Burnill (Doubleday, NY, 1967); E. A. Burtt, The metaphysical foundations of modern physical science (Harcourt Brace, NY, 1927); C. Hartshorne, A natural theology for our time (Open Court, NY, 1967); L. E. Hicks, A critique of Design Arguments (Scribners, NY, 1883); R. H. Hurlbutt III, Hume, Newton and the Design Argument (University Nebraska Press, Lincoln, Nebraska, 1965); P. Janet, Final causes (Clark, Edinburgh, 1878); D. L. LeMahieu, The mind of William Paley (University Nebraska Press, Lincoln, Nebraska, 1976); A. O. Lovejoy, The Great Chain of Being: a study in the history of an idea (Harvard University Press, Cambridge, 1936); J. D. McFarland, Kant's concept of teleology (University Edinburgh Press, Edinburgh, 1970); T. McPherson, The argument from design (Macmillan, Edinburgh, 1972); L. Stephen, English thought in the eighteenth century, Vol. 1 (Harcourt Brace, NY, 1962); R. G. Swinburne, Philosophy 43, 164 (1968); F. R. Tennant, Philosophical theology, 2 vols (Cambridge University Press, Cambridge, 1930); A. Woodfield, Teleology (Cambridge University Press, Cambridge, 1976); L. Wright, Teleological explanations (University of California Press, Berkeley, 1976). 29. J. D. Barrow, Quart. J. R. astron. Soc. 22, 388 (1981). 30. P. A. M. Dirac, Nature 139, 323 (1937). 31. G. Whitrow, Br. J. Phil. Sci. 6, 13 (1955). 32. B. J. Carr and M. J. Rees, Nature 278, 605 (1979). 33. M. H. Hart, Icarus 33, 23 (1978). 34. J. Hartle and S. W. Hawking, Phys. Rev. D 28, 2960 (1983); F. J. Tipler, preprint (1984).


35. B. Carter, Phil. Trans. R. Soc. A 310, 347 (1983). 36. Whitrow, cited in E. Mascall, Christian theology and natural science: 1956 Bampton lectures (Longmans Green, London, 1956). 37. G. Idlis, Izv. Astrophys. Inst. Kazakh. SSR 7, 39 (1958), in Russian. 38. See, for example, P. L. Meyer, Introductory probability and statistical applications (Addison-Wesley, NY, 1971). 39. G. F. R. Ellis, Gen. Rel. Gravn. 11, 281 (1979); G. F. R. Ellis, R. Maartens, and S. D. Nel, Mon. Not. R. astron. Soc. 184, 439 (1978). 40. C. F. A. Pantin, in Biology and personality, ed. I. T. Ramsey (Blackwell, Oxford, 1965), pp. 103-4. 41. I. S. Shklovskii and C. Sagan, Intelligent life in the universe (Dell, NY, 1966). 42. J. D. Barrow, in ref. 23. 43. T. L. Wilson, Quart. J. R. astron. Soc. 25, 435 (1984). 44. L. J. Henderson, The fitness of the environment (Smith, Gloucester, Mass., 1913; reprinted Harvard University Press, Cambridge, Mass., 1970) and The order of Nature (Harvard University Press, Cambridge, Mass., 1917). 45. F. Hoyle, in Religion and the scientists (SCM, London, 1959). 46. M. Jammer, The philosophy of quantum mechanics (Wiley, NY, 1974). 47. R. P. Feynman and A. R. Hibbs, Quantum mechanics and path integrals (McGraw-Hill, New York, 1965); L. S. Schulman, Techniques and applications of path integration (Wiley, NY, 1981). 48. C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973), Chapter 44. 49. For a lucid recent expression of this view from a professional historian see 'Whigs and professionals', by C. Russell in Nature 308, 111 (1984). 50. H. Butterfield, The Whig interpretation of history (G. Bell, London, 1951), p. 92. This book was first published in 1931. Butterfield later toned down his opposition to Whiggery. See, for example, his book The Englishman and his history (Cambridge University Press, Cambridge, 1944). 51. M. White, unpublished lecture at Harvard University (1957); quoted by W. W. 
Bartley III, in The retreat to commitment (Knopf, NY, 1962), pp. 98-100. 52. G. Holton, Thematic origins of scientific though (Harvard University Press, NY, 1973). 53. Physical assumptions and moral assumptions belong to different logical categories. Physical assumptions, like all scientific statements, are statements of matters of fact: syntactically, they are declarative sentences. Moral assumptions, on the other hand, are statements concerning moral obligation: syntactically, they are imperative sentences, which contain the word 'ought' or its equivalent. For further discussion of this point see any modern textbook on moral philosophy, for instance H. Reichenbach, The rise of scientific philosophy (University of California Press, Berkeley, 1968). 54. For further discussion of self-reference arguments in mathematics, see D. Hofstadter, Godel, Escher, Bach: an eternal golden braid (Basic Books, NY, 1979). 55. For a more recent defence of the idea that the past has to be discussed in terms the present can understand, see D. Hull, History & Theory 18, 1 (1979). We are grateful to Professor S. G. Brush for this reference.

2 Design Arguments

What had that flower to do with being white,
The wayside blue and innocent heal-all?
What brought the kindred spider to that height,
Then steered the white moth thither in the night?
What but design of darkness to appall?—
If design govern in a thing so small.
Robert Frost

2.1 Historical Prologue

Original ideas are exceedingly rare and the most that philosophers have done in the course of time is to erect a new combination of them. G. Sarton

The Anthropic Principle is a consequence of our own existence. Since the dawn of recorded history humankind has used the local and global environment to good advantage: the soil and its fruits for food, the heavenly bodies for navigation, and the winds and waves for power. Such beneficiaries might naturally be led to conclude that the world in all its richness and subtlety was contrived for their benefit alone; uniquely designed for them rather than merely fortuitously used by them. From such inclinations, and the natural attraction they appear to hold for those seeking meaning and significance in life, simple design arguments grew in a variety of cultures, each fashioned by the knowledge and sophistication of the society around it and nurtured by the religious and scientific beliefs of the day. In the Hebrew writings that form our Old Testament, we see the idea of providential design as a key feature of the Creation narratives and the epic poetry of the Wisdom and prophetic writings. The idea of a partially anthropocentric universe with teleological aspects is the warp and woof of the Judaeo-Christian world-view that underlies the growth of Western civilization. Another important aspect of our heritage is the growth of science and logic in early Greece, whose thinkers also generated a detailed teleological view of the world which was, in time, wedded by the Scholastics to the poetic view of the Judaeo-Christian tradition. Astronomers and physicists who first encounter the collection of results and observations that exist under the collective label of the Anthropic Principle are usually surprised by the novelty of such an anthropocentric approach to Nature. Yet, the Anthropic Principle is just the latest manifestation of a style of argument that can be traced back to ancient


times when philosophy and science were conjoined and 'metaphysics' was concerned with the method as well as the meaning of science. In this chapter we shall follow these arguments from ancient to modern times and attempt to display the recurrent polarization of opinion regarding the meaning of the order perceived in the overall constitution of the world and the apparent teleological relationship between living creatures and their habitats. We shall see many foreshadowings of modern 'Anthropic' arguments. The Strong Anthropic Principle of Carter has strong teleological overtones. It suggests that 'observers' must play a key role in (if not be the goal of) the evolution of the Universe. This type of notion was extensively discussed in past centuries and was bound up with the question of evidence for a Deity. The search for supporting circumstantial evidence focussed primarily upon the biological realm. Indeed, to such an extent did organic analogies permeate the ideas of most Greeks that the entire universe was viewed as an organism wherein the constituent parts were constantly adjusting for the benefit of the whole and in which the lesser members were meaningful only through their function as part of the whole. The most notable supporter of such a view, whose ideas were to dominate Western thought for nearly two thousand years, was Aristotle. He was aware that any phenomenon could be associated with various types of cause, among them an 'efficient' cause (which is what modern physicists would call a 'cause'). But Aristotle did not believe one could claim a true understanding of any natural object or artefact unless one knew also its 'final cause'—the end for which it exists. This he believed to be the pre-eminent quality of things. Rival philosophers denied the relevance of such a notion and even Aristotle's pupils occasionally urged moderation in the deployment of final causes as a mode of explanation.
It was, unfortunately, apt to produce 'laws' of Nature that tell us things are as they are because it is their natural place to be so! Aristotle's ideas emerge in Western culture through the channel of medieval scholasticism. Scholars like Aquinas realized the power of teleological reasoning as support for an a posteriori 'Design Argument' for the existence of a Deity to whom the 'guidedness of things' might be attributed. Broadly speaking, the Greeks viewed the world as an organism, a view based in part upon the analogy between the natural world and human society. The Renaissance view which superseded the Greek view was no less analogical, but the paradigm had changed from the organic to the mechanical. The new picture of the clockwork 'watch-world' displayed both the religious conviction in a created order for the world and the desire to find a Creator playing the role of the watch-maker. Whereas the teleological view accompanying the organic world-picture supported a


general 'guidedness of things', the element of design in the mechanical picture was evidenced by the God-given intrinsic properties of things and the regularity of the laws of Nature. This development leads us to draw a distinction between teleological arguments—which argue that because of the laws of causality order must have a consequent purpose, and eutaxiological arguments—which argue that order must have a cause, which is planned. Whereas teleological arguments were based upon the notion that things were constructed for either our immediate benefit or some ultimate end, the eutaxiological arguments point just to their co-present, harmonious composition. There is a clear distinction: the intricate construction of a watch can be appreciated without knowing anything of the 'end' for which it has been made. This important distinction, and the terminology, was introduced by Hicks in 1883. The growth of design arguments was, of course, accompanied by the efforts of persuasive and eloquent dissenters to discredit the notion of premeditated design in every or any facet of the natural world. Many of these expressions of scepticism have proven to be overwhelmingly compelling in the biological realm, where environmental adaptation is now seen to play a key role through the mechanism of natural selection. However, when originally proposed they fell largely upon deaf ears in the face of an impressive array of observational data marshalled in support of 'design'. Scientists rarely take philosophers seriously, and they did not often do so in these matters either. One of the strengths of the teleological argument for the layperson is its compelling simplicity; for as one nineteenth-century reviewer remarked, 'Imagine two men debating in public, one affirming and the other denying that eyes were intended to see with'. Common sense superficially appears to affirm the teleological view very convincingly.
Closer examination reveals that the argument contains all manner of hidden assumptions and associations, not least of which is a confusion between the ideas of purpose and function. The eutaxiological argument so popular with Newton and his disciples, on the other hand, is logically simpler than the teleological one and hides no linguistic subtleties; but to appreciate the existence of the mathematical beauty and harmony it exhibits, and to verify the examples cited in support of its claims, requires considerable scientific knowledge. For this reason the logically simpler, but conceptually more difficult and more interesting, eutaxiological arguments appealed less to the popular mind. The eutaxiological Design Argument is most similar to the Weak Anthropic Principle. Teleological Design Arguments are analogous to the Final Anthropic Principle, and the Strong Anthropic Principle has something in common with both forms of Design Argument. As a rule, teleological arguments go hand in hand with a holistic, synthetic and global world view whilst the eutaxiological approach is wedded to the local and analytic perspective


that typifies modern physics. To those brought up with the modern scientific method and its emphasis upon concepts like verification, experiment, falsification and so forth, it is surprising that science made as much progress as it did when inbred by teleological ideas. Yet it is clear that even the most naive Design Arguments, unlike the philosophical objections to them, were steeped in observations of the natural world. Indeed, Darwin attributes much of his initial interest in the problem of natural adaptation to William Paley's meticulous recording of design in the plant and animal kingdoms. There are other striking examples of teleological reasoning producing significant advances in experimental and theoretical science; for example, Harvey's discovery of the human circulatory system, Maupertuis' discovery of the Principle of Least Action and von Baer's discovery of the mammalian ovum. We shall see that the simpler teleological arguments concerning biological systems were supplanted by Darwin's work, but the eutaxiological arguments regarding 'coincidences' in the astronomical make-up of the Universe and in the fortuitous form of the laws of Nature were left unscathed by these developments, and it is these arguments that have evolved into the modern Anthropic Principles. But careful thinkers would not now jump so readily to the conclusions of the early seekers after Design. The modern view of Nature stresses its unfinished and changing character. This is the real sense in which our world differs from a watch. An unfinished watch does not work, and the discovery of time's role in Nature led to an abandonment of Design arguments based upon omnipresent harmony and perfection in favour of those that concentrated upon current co-present coincidences. The other modern view that we must appreciate is that we have come to realize the difference between the world as it really is ('reality') and our scientific theories about it and models of it.
In every aspect our physical theories are approximations to reality; they claim merely to be 'realistic', and so we hesitate to draw far-reaching conclusions about the ultimate nature of reality from models which must be, at some level, inaccurate descriptions of it. Scientists have not always recognized this, and some do not even today. We see good examples of the consequences of this weakness when we look back at the religious fervour with which Newton's equations of motion and gravitation were regarded by those eighteenth-century scientists intent upon demonstrating that God, like Newton, was also a mathematician. Whilst this group were claiming that the constancy and reliability of the laws of Nature witnessed to a Creator, another was citing the breakdown of their constancy, or miracles, as the prime evidence for a Deity. Our treatment of these questions regarding 'design' will be largely chronological, and our aim is to chart the history of ideas concerning design and teleology and to bring into focus the similarity between these


ancient ideas and the way modern 'Anthropic' arguments are framed. The Anthropic Principle, we shall argue, is a consequence of a certain symmetry in the history of ideas. We shall also see that many other contemporary issues that today are tangent to the Anthropic Principles were also associated with Design Arguments of the past. For example, the question of the plurality of worlds, the construction of proofs of the existence of God (or gods), the uniqueness of man in anthropocentric Christian teleology, and the logical status of our perceptions of the natural world were all subjects of continual fascination. There is also a detectable and recurrent trend revealed by our study: students of Nature build a model to describe its workings based on observations; if this description is successful the model becomes an article of faith, and some aspect of absolute truth comes to be taken as embodied within it. The descriptive model then becomes almost an idol of worship, and a proliferation of Design Arguments arises as expressions of a faith that would claim no comparable or superior descriptions could exist (the fate, perhaps, of a 'paradigm' in ancient times). Thus the modern anthropic principles can be seen partly as natural consequences of the fact that current physical theories are extremely successful. This success is itself still a mystery; after all, there is no obvious reason why we should find ourselves able to understand the fundamental structure of Nature. It is also, in part, a consequence of the fact that we have found Nature to be constructed upon certain immutable foundation stones, which we call fundamental constants of Nature. As yet, we have no explanation for the precise numerical values taken by these unchanging dimensionless numbers. They are not subject to evolution or selection by any known natural or unnatural mechanism. The fortuitous nature of many of their numerical values is a mystery that cries out for a solution.
The Anthropic Principle is but one direction of inquiry, albeit, as we shall now see, a surprisingly traditional one.

2.2 The Ancients

You all know the argument from design: everything in the world is made just so that we can manage to live in the world, and if the world was ever so little different, we could not manage to live in it. This is the argument from design. B. Russell

Our inquiry into the Western predecessors of the modern Anthropic Principle begins in the Mediterranean region of Ionia during the sixth century BC, within a culture that valued both curiosity and abstraction for their own sakes. Here, a tiny society nurtured some of the first natural philosophers to pose abstract problems completely divorced from any


technological, nautical, agricultural or authoritarian stimuli. Their primary goal was to elucidate the primary forms and functions at the root of all natural phenomena. To realize that ambition they had to understand both the nature of man and the structure of his environment. Anaxagoras of Clazomenae (500-428 BC) is a pivotal figure, a mediator between the ancient Ionian philosophical tradition and the emergence of the Greek tradition. In 480 BC he migrated to Athens, probably as a member of Xerxes' army, and there remained for thirty years as the first teacher of philosophy among the Athenians. Eventually, like Socrates, his career there was to end with charges of heresy; but unlike his famous successor he chose to leave and, fleeing to Ionia, worked there for a further twenty-five years. Unfortunately we possess only fragments of Anaxagoras' writings in their original form, and these seem to be of an introductory and general nature, but later writers provide sufficient commentary for a fragmentary 'identikit' portrait of his ideas to be composed. Both Plato and Aristotle regard him as the first to attribute the evident structural harmony and order in Nature to some form of intelligent design plan rather than the chance concourse of atoms. Since Anaxagoras appears to be the first of the known pre-Socratics to dwell upon the presence of order in Nature, it is perhaps no surprise that he was among the first to attempt to explain this observation by some primary cause. Anaxagoras sought some all-embracing dynamical influence which would provide him with an explanation for the mysterious harmony he saw about him. He believed the Universe and all matter to have always existed, at first a mindless confusion of infinitesimal particles, but destined to become ordered by the influence of a cosmic 'Mind'.
This 'Mind' (νοῦς) intervened to eradicate the state of primeval chaos by the induction of a vortical motion in space, which first led to a harmonious segregation of natural things and then slowly abated, leaving quiescence, harmony and order. The rotation of the heavenly bodies in the solar system remains as the last vestige of the action of cosmic 'Mind'. Anaxagoras aims to explain the orderly motion and arrangement of matter by some subtle and fluid entity which exercises a guiding influence upon the Universe as a man's mind controls his body. These ideas are relevant because they signal the first introduction of 'Mind' in conjunction and in competition with 'Matter' for the explanation of phenomena; a problem to be much discussed by subsequent generations of philosophers and scientists. Our interest is attracted by this simple feature of his thinking because it forges a link with later Platonic and Aristotelian ideas. Unfortunately, if the extant writings provide a fair sample, νοῦς appears to have been a rather vaguely defined entity. It is employed to order all things initially, but thereafter plays no direct role in the temporal development of things, nor is it ever used to explain the specific order and design displayed by an individual object or organism. Anaxagoras' description places its influence at the boundary of the Universe, its role cosmological and metaphysical:

And what was to be, and what was and is not now, and what is now and what will be—all these mind ordered. [4]

This initial and purposeful cause contrasts sharply with the metaphysical edifices that were constructed later by Plato and Aristotle. The latter postulated an 'end' (τέλος), neither personal nor purposefully goal-directed, to which phenomena were magnetically directed. Anaxagoras' lack of a teleological emphasis provokes criticism from Aristotle, who highlights what appears to moderns to be the plain common sense of the Anaxagorean view. The disagreement between Anaxagoras and Aristotle is interesting because it will appear again and again through the centuries, albeit suitably camouflaged by the thought-forms and categories of contemporary thinking:

Now Anaxagoras says that it is due to his possessing hands that man is of all things the most intelligent. But it may be argued that he comes into possession of hands because of his outstanding intelligence. For hands are a tool, and Nature always allots each tool, just as any sensible man would do, to whosoever is able to make use of it. [5]

The root of Aristotle's discontent with Anaxagoras is a suspicion that his predecessor was merely advocating a pre-Socratic version of the 'God-of-the-Gaps' methodology in his approach to the natural world. 'Mind' appears only as a form of metaphysical mortar to fill the gaps and cracks of ignorance in his otherwise entirely deterministic world model. For, Aristotle claims:

Anaxagoras uses mind as a theatrical device for his cosmogony; and whenever he is puzzled over the explanation of why something is from necessity, he wheels it in; but in the case of other happenings he makes anything the explanation rather than mind. [6]

This criticism had in fact been voiced in a disconsolate commentary a little earlier by Socrates, who describes how objections slowly dawned upon him as he read one of Anaxagoras' books in search of ideas on design in the Universe. He recalls the moment of anticlimax vividly:

Well, I heard someone reading once out of a book, by Anaxagoras he said, how mind is really the arranger and cause of all things; I was delighted with this cause, and it seemed to me in a certain way to be correct that mind is the cause of all, and I thought if this is true, mind arranging all things in places as is best. If, therefore, one wishes to find out the cause of anything, how it is generated or perishes or exists, what one ought to find out is how it is best for it to exist or to do or feel everything.... I was glad to think I had found a teacher of the cause of things after my own mind in Anaxagoras. For I did not believe that when he said all this was ordered by mind, he would bring in any other cause for them than that it was best that they should be as they are. I got his books eagerly.... How high I soared, how low I fell! When as I went on reading I saw the man using mind not at all; and stating no valid cause of the arrangement of all things, but giving as causes airs and ethers and waters, and many other strange things. [7]

Whilst these earliest notions concerning order and motion were being incubated, a Sicilian contemporary, Empedocles of Agrigentum (492-435 BC), was developing some radically different ideas about the origin of ordered organic structures and their survival over the course of time. Unlike many of his contemporaries, Empedocles was a keen and careful observer of Nature and, despite sporadic delusions of divinity, combined this with the general study of magic, poetry and medicine. His key insight was to intertwine the notions of change and temporal evolution with physical processes rather than conceive of them as possessing some time-invariant meaning. These evolutionary processes he imagined to be somehow connected with the presence of order and design in Nature. In modern biological parlance we would say that he proposed the mechanism of 'normalizing selection'. Initially, creatures of all possible forms and genetic permutations were imagined to exist, but over the passage of time only some were able to reproduce and multiply. Gradually the centaurs and half-human monsters eliminate themselves through sterility. He imagines that eventually only the ordered, and therefore 'normal', beings survive. This type of selection only maintains an invariant species against mutant invasion and is really quite distinct from Darwin's idea of natural selection, wherein no species is immune to change. Again we learn more of these ideas through Aristotle's condemnation of them; he quotes Empedocles' summary:

On [the earth] many heads sprung up without necks, and arms wandered bare and bereft of shoulders. Eyes strayed up and down in want of foreheads ... Shambling creatures with countless hands ... While others, again, arose as offspring of men with the heads of oxen, and creatures in whom the nature of women and men was mingled, furnished with sterile parts. [8]

Parmenides (c.480 BC) the founder of the school of Elea in Southern Italy was one of the earliest logicians. Although he seems to have written in verse, it is of a sufficiently prosaic nature to allow his principal theses to be extracted. He hoped to explain what is 'intelligible' and wanted to show it was impossible to make a negative existential judgement. Parmenides claimed that a 'many-worlds' interpretation of nature is necessary because of the non-uniqueness of the subjective element in our perception and understanding of the world. As a corollary to this he maintained that


what is inconceivable must actually be impossible—empty space cannot exist! Over two thousand years later these ideas will appear in a new guise in debates concerning the role of the observer in quantum theory and the theory of measurement. The more immediate, but no less important, consequence of these ideas was the early atomists' abandonment of trust in the senses as a certain and invariant gauge of world structure. In order to avoid this awkward perceptive subjectivity they sought objective reality in imperceptible 'atomic' microphenomena that they believed to be independent of the observer and absolute in character. Socrates (470-399 BC) and his student Plato (427-347 BC) later reacted against this trend towards purely materialistic explanations of natural phenomena and attempted to show that material order not only sprang from 'Mind' but was actively sustained by it. Plato argued that because matter cannot induce motion itself, the observed presence of motion is evidence of a mental presence and Cause underpinning the whole natural world. He also conceived of a particular hierarchical cosmological model exhibiting this doctrine. In the beginning the outer sphere of his hierarchical universe was perturbed into motion by an obliging deity and thereafter remained in ordered motion and displayed a completely invariant structure. In the 'Laws' this regular structure is cited as evidence of the gods. For, when asked how one might prove the existence of the gods, Cleinias replies with one of the most explicit early design arguments: [9]

How? In the first place, the earth and the sun, and the stars and the Universe, and the fair order of the seasons, and division of them into years and months, furnishes proofs of their existence. [10]

However, this appeal to astronomical phenomena has a slightly hollow ring to it in the light of Socrates' attitude towards all experimental philosophy and astronomy. We see that he was aware of the ability of 'physical philosophers' to provide many different but equally plausible explanations of a single observation, but he had no notion that perhaps further observations might narrow down the number of 'conflicting opinions':

With regard to astronomy Socrates considered a knowledge of it desirable to the extent of determining the day of the year or of the month and the hour of the night; but as for learning the course of the stars, [he regards] occupying oneself with the planets or inquiring about their distance from the earth or about their orbits or the causes as a waste of time. He dwelt on the contradictions and conflicting opinions of the physical philosophers . . . and, in fine, he held that speculators on the Universe and on the laws of the heavenly bodies were hardly better than madmen. [11]

Plato opposed contemporary ideas that attempted to explain the observed


structures and contrivances in Nature as a result of either chance or mechanism, and this opposition was grounded on the evidence for design in the natural world. He preferred a picture of the Universe as an organic and teleologically ordered structure. Socrates gives the first clear statement of an anthropocentric design argument with a distinctly eighteenth-century flavour to it when he is reported by Xenophon extolling the human eye as a proof of the wisdom of the gods:

But which seems to you most worthy of admiration, Aristodemus? The artist who forms images devoid of motion and intelligence, or he who had skill to produce animals that are endued, not only with activity, but understanding? . . . But it is evidently apparent that he who at the beginning made man endued him with senses because they were good for him . . . Is not that providence, Aristodemus, in a most eminent manner conspicuous, which because the eye of man is delicate in its contexture, hath therefore prepared eyelids like doors, whereby to screen it, which extend themselves whenever it is needful, and again close when sleep approaches? . . . Is it not to be admired . . . that the mouth through which the food is conveyed should be placed so near the nose and eyes as to prevent the passage unnoticed of whatever is unfit for nourishment? And canst thou still doubt, Aristodemus, whether a disposition of parts like this should be the work of chance, or of wisdom and contrivance. [12]

Another very early commentator on the beneficial and superficially purposeful contrivance of natural things toward our perennial well-being was the Cretan philosopher, Diogenes (400-325 BC). Working about a century after Anaxagoras, he appears to be one of the earliest thinkers who appealed to a teleological principle behind natural phenomena on the basis of their optimal arrangements. In particular, he was impressed by the regular cycle of the seasons:

Such a distribution would not have been possible without Intelligence, that all things should have their measure: winter and summer and night and day and rain and winds and periods of fine weather; other things also, if one will study them closely, will be found to have the best possible arrangement. [13]

He claims that 'air' must be this ordering 'Intelligence' because 'man and the other animals that breathe live by air . . .'. [14] The earliest opponents of these teleological notions were Democritus (450-? BC) and Leucippus of Elea (440-? BC). Leucippus appears as a rather obscure fifth-century figure reputed to have founded the school at Abdera in Thrace where Democritus was born. Again our knowledge of their work derives principally from secondary sources—through Aristotle, Epicurus, and others. Leucippus proposed the early 'atomic' theory which was then developed more 'scientifically' by Democritus before being tenuously extrapolated into the realm of ethics and philosophy by


Epicurus. Their development of the mechanism of causation and an atomic view of the world was entirely ateleological; the only causes admitted were atomic collisions (although later Epicurus and Lucretius were to appeal to a mysterious intrinsic atomic property, 'swerve', which enabled atoms to collide). As with Empedocles, we see inklings of some parallels with modern evolutionary biology and the 'many worlds' interpretation of quantum theory in their writings. Democritus understands the link between life and its local environment and has the notion of an ensemble of planetary systems:

There are worlds infinite in number and different in size. In some there is neither sun nor moon, in others there are more than one sun and moon. The distances between the worlds are unequal, in some directions there are more of them . . . Their destruction comes about through collision with one another. Some worlds are destitute of animal and plant life and of all moisture. [15]

The pre-eminent proponent of a teleological world view amongst the ancients was Aristotle (384-322 BC), and his commentary on the ideas of others provides a valuable source of information. The Stagirite's teleological view was to become tremendously influential, some would claim out of all proportion to its profundity, because it became amalgamated with the Judaeo-Christian revelation in the Scholastic synthesis. By this indirect route his ideas were able to shape the thought of Western Europe for nearly two thousand years. Unlike Socrates and Plato, Aristotle was not an Athenian. His father was a physician at the court of the Macedonian royal family, and his keen observation of and life-long interest in flora and fauna may have derived from early paternal influence. Whilst still a teenager he went to Athens to study as a student of Plato at the Academy. There he worked for twenty years, principally on ethics, mathematics, politics and philosophy, but then left for the coastal region of Asia Minor where he rekindled his interest in observation through studies in zoology and biology. So much did he learn during that period that on his return to Athens he was able to establish a thriving school of botanical and biological investigation which laid the foundations of modern study in these disciplines. Aristotelian science was based upon the presupposition of an 'intelligent natural world that functions according to some deliberate design'. Its supporters were therefore very critical of all those pre-Socratic thinkers who regarded the world structure as simply the inevitable residue of chance or necessity. Aristotle's own detailed observational studies in botany, biology and zoology led him to take up the organic analogy as the most fruitful description of the world, and he regarded it as superior to the mechanistic paradigm. In his Metaphysics, Aristotle works through the ideas of earlier

Design Arguments

38

philosophers and rejects them one by one. He strongly opposes a recurrent idea, held for example by the Atomists, that a thing is explained when one knows what it is made of. For, he argues, its material composition provides us with its 'Material Cause', but to explain it completely we require an understanding of three further 'Causes'. A 'Formal Cause' must be identified. This relates to the form or pattern intrinsic to the object which prevents it from behaving like another; for example, it distinguishes sculptures from lumps of unformed metal (or at least it did!). Next, the 'Efficient Cause' should be recognized as the agent which produces the object, transferring the mental notion of a statue from the sculptor's mind into solid material bronze; the 'Efficient Cause' is what moderns mean when they use the word 'cause'. Finally, there exists that 'Cause' which Aristotle regarded as the most important: the 'Final Cause'—the purpose for which the object exists. Even at this stage it is evident that this multiplicity of causes leads very quickly to metaphysical ideas of supreme initial causes or ultimate final ends. The common preoccupation with the presence of order in the Universe meant there were many similarities between the cosmologies of Aristotle and Plato. Where Aristotle differed was in his attitude towards initial conditions. He argued that knowledge of the 'beginning' is not relevant to our understanding of the present configuration—that initial conditions did not matter—and furthermore, there were reasons for supposing there never was an origin in time—the natural order should be eternal and unchanging. Aristotle's cosmology was the first 'steady-state' Universe. There, the similarity with any modern cosmological model very abruptly ends. Aristotle imagined the Universe to possess a spherical boundary with the earth resting at its centre. 
Surrounding the earth were a whole series of concentric shells; the three closest to the centre contained water, air and fire respectively. Now, the idea behind this hierarchical structure was to explain why, for example, flames 'naturally' rose whilst other objects, like stones, always fell to the earth. The outer shell of fire was encompassed by a succession of seven solid and crystalline spheres; they carried the Moon, Mercury, Venus, the Sun, Mars, Jupiter, Saturn and finally the fixed stellar background. This outer stellar sphere was endowed with a dynamical rotation which it communicated to the inner spheres and thereby to the planets themselves. Aristotle's guiding principle was that the ultimate meaning of things was to be divined from their 'end' (τέλος) rather than their present configuration—that is, by learning of their final rather than their material causes. This 'end' was the most perfect and fitting purpose, 16

. . . it belongs to physical science to consider the purpose or end for which a thing subsists. The poet was led to say 'An end it has for which it was produced'. This is absurd, for not that which is last deserves the name of end, but that which is most perfect. 17


Although, as we saw above, Aristotle credits Anaxagoras for germinating this view, he upbraids him strongly for employing it in so limited and sterile a fashion. In contrast, he energetically develops his own scheme of final causes in combination with the Platonic teleology and uses it to interpret his own detailed observations of Nature. Although he is not often credited for it, he carried through this programme with something of the modern scientific philosophy:

The actual facts are not yet sufficiently made out. Should further research ever discover them, we must yield to their guidance rather than to that of theory; for theories must be abandoned, unless their teachings tally with the indisputable results of observation. 18

He is clearly anxious to derive support for his teleological ideas from observational facts and wants to avoid the approach of those of his predecessors who have adopted the methodology of armchair natural philosophers. From the idea of a 'Final Cause' there emerged the Aristotelian idea of an internal perfecting principle or 'entelechy' which directs things toward some terminal point characterized by its unique harmony. In any individual object all its sub-components are united for its greatest benefit and are coherently organized with this 'perfect' end in view. The evidence for such an opinion, he argues, is much more readily obtained from astronomical observations than from biological ones. For, in the former system, the time-scale over which significant changes occur is so much longer:

For order and definiteness are much more plainly manifest in the celestial bodies than in our own frame; while change and chance are characteristic of the perishable things of earth. Yet there are some who, while they allow that every animal exists and was generated by nature, nevertheless hold that the heaven was constructed to be what it is by chance and spontaneity; the heaven, in which not the faintest sign of haphazard or of disorder is discernible! Again whenever there is plainly some final end to which a motion tends, should nothing stand in the way, we always say that such final end is the aim or purpose of the motion and from this it is evident that there must be a something or other really existing, corresponding to what we call by the name of Nature. 19

Aristotle also displays an objectivity and breadth of view in his discussion of the limitations and conceivable objections to his teleology that was to prove all too rare in the later work of his many followers. He realizes, for example, that development could play an important role in generating organic structures: In plants, also there is purpose, but it is less distinct; and this shows that plants were produced in the same manner as animals, not by chance, as by the union of olives upon grape-vines. Similarly, it may be argued, that there should be an accidental generation [or production] of the germs of things; but he who asserts this subverts Nature herself, for Nature produces those things which, being continually moved by a certain principle contained in themselves, arrive at a certain end. 20

and that necessity must be considered as an influence upon their development

We have . . . to inquire whether necessity may not also have a share in the matters and it must be admitted that these mutual relations could not from the very beginning have possibly been other than they are. 21

On another occasion he recapitulates the antiteleological position of the atomists in a convincing fashion: But here a doubt is raised. Why, it is said, may not nature act without having an end, and without seeking the best of things? Jupiter, for instance, does not send rain to develop and nourish the grain, but it rains by a necessary law; for in rising, the vapour must grow cool, and the cooled vapour becoming water must necessarily fall. But if, this phenomenon taking place, the wheat profits by it to germinate and grow, it is a simple accident. And so again, if the grain which someone has put into the barn is destroyed in the consequence of rain, it does not rain apparently in order to rot the grain, and it is a simple accident if it be lost. What hinders us from saying as well, that in nature the bodily organs themselves are subject to the same law and that the teeth, for instance, necessarily grow ... What hinders us from making the same remark for all the organs where there seems to be an end and a special destination. 22

Whereas Plato had been interested in order and structural design within the Universe principally as manifestations of its static, permanent and unchangeable nature, Aristotle's view was clearly more dynamic. The Aristotelian world was endowed with a process of temporal evolution acting solely for the sake of the entities finally evolved. Following the death of Aristotle, peripatetic thinking was dominated for a period of thirty-five years by Tyrtamus of Eresos (372-287 BC). Now regarded as one of the founders of systematic botanical study, Tyrtamus is better known to us by his nickname 'Theophrastus', which he received from Aristotle because of his stimulating conversation. Like others before him, Theophrastus was struck by a dichotomy in his experience. On the one hand he was conscious of the orderliness of his mental processes whilst on the other he perceived a natural world of enormous complexity. He felt that if some link could be forged between these disjoint areas of experience then light might be shed upon them both. Despite his long association with Aristotle, first as a fellow student of Plato at the Academy and then as a co-worker at the Lyceum, he was critical of his master's teleological mode of thinking and recognized the strongly subjective elements that were incorporated in its application:

As regards the view that everything has a purpose and nothing is in vain, first of all the definition of purpose is not so easy, as is often said; for where should we begin and where decide to stop? Moreover, it does not seem to be true of various things, some of which are due to chance and others to a certain necessity, as we see in the heavens and in the many phenomena on earth. 23

He then goes on to give many examples of natural phenomena, like drought, flood, and famine, which yield no discernible end, interpreting them as casting doubt upon Aristotle's perfecting principle as a useful practical guide into the nature of things. He concludes that natural science will only make sure and sound progress if it moderates its appeal to final causes, for

We must try to set a limit to the assigning of final causes. This is the prerequisite for all scientific enquiry into the universe, that is into the conditions of existence of real things, and their relations with one another. 24

The contemporary counter to the peripatetic school's teleology was the radical alternative of Epicurus of Samos (341-270 BC) and his followers. Following in the footsteps of Democritus and Leucippus, these later atomists emphasized the importance of assuming a complete state of statistical disorder at the moment of the World's creation. They claimed this chaotic initial state subsequently evolved by natural forces into an ordered system characterized by regular and steady rotations. They argued that the infinite time allowed for creation makes it inevitable that it should eventually develop into a stable configuration capable of remaining in a constantly ordered state. The Epicureans were, of course, anxious to scotch any notions of supernatural causation or the appeal to any entity who controls or ordains events. Interestingly, no useful scientific structure was erected upon this materialistic foundation because Epicurus had a very low view of mundane scientific investigation. Indeed, he excluded many of its basic tools—logic, mathematics, grammar and history—from his school's curriculum. He was particularly hostile to the study of astronomy because celestial phenomena seemed to him to admit of so many equally consistent and indistinguishable explanations: First of all then we must not suppose that any other object is to be gained from the knowledge of the phenomena of the sky, whether they are dealt with in connection with other doctrines or independently, than peace of mind and a sure confidence, just as in all other branches of study. 25

The most remarkable spokesman for the Epicurean position was the Roman poet Titus Lucretius Carus (99-55 BC). His great poem De Rerum Natura aimed to bury all superstitious speculation and philosophical dogma by outlining the vast scope of a purely materialistic doctrine. 26 It reveals an uncanny intuition regarding the future conceptual development of physics and displays such a good knowledge of flora and fauna that one is led to wonder whether Lucretius wrote other prosaic and systematic studies of these subjects which are now lost to us. Lucretius believed life to have originated at some definite moment in the past by natural processes but that the created beings included 'a host of monsters, grotesque in build and aspect' who were subsequently eliminated by their sterility:

In those days, again, many species must have died out altogether and failed to reproduce their kind. Every species that you now see drawing the breath of the world survived either by cunning or by prowess or by speed. In addition, there are many that survive under human protection because their usefulness has commended them to our care. 27

As his poem unfolds the entire materialistic methodology is eloquently restated and the logical difficulty inherent in a teleological approach is forcefully presented to his patron, Memmius: to put it bluntly, he claims that teleologists like Aristotle have simply been putting the cart before the horse:

There is one illusion that you must do your level best to escape—an error to guard against with all your foresight. You must not imagine that the bright orbs of our eyes were created purposely, so that we might be able to look before us . . . and helpful hands attached at either side, in order that we might do what is needful to sustain life. To interpret these or any other phenomena on these lines is perversely to turn the truth upside down. In fact, nothing in our bodies was born in order that we might be able to use it, but the thing born creates the use . . . The ears were created long before a sound was heard . . . They cannot, therefore, have grown for the sake of being used. 28

Yet this critical approach ground to a temporary halt with Lucretius whilst the teleological aspect of Aristotle's philosophy he criticized so strongly, being more adaptable to the theistic Islamic and Christian cultures, was to grow in influence and extent. Another group who inherited some of Aristotle's teleological ideas were the Stoics; a school which was founded by Zeno of Citium (334-262 BC) during the fourth century BC and which took its name from a painted corridor on the north side of the market place in Athens where it was the custom of the school to meet for discussion. Teleological ideas appear in Stoic physics under the guise of 'Providence'. For the Stoics this concept embodied the notion that all was the best; the idea was carefully gauged to temper the harsher Stoic dictum of 'fate' within which was enshrined the absolute rule of causality. They replaced Aristotle's infinitely old, 'steady-state' Universe with one possessing a cyclic recurrence. Their conviction regarding the innate order and rationality of Nature, which became the basis of their ethics, made the Stoics fervent supporters of the cosmological Design Argument in all its forms. Although they rejected the mechanical world-view in favour of a more Aristotelian organic analogy, they nevertheless developed their Design Arguments via the analogy between the workings of the world and familiar mechanical models. The Roman lawyer, orator and popularizer of Greek philosophy, Marcus Cicero, records that 29

30

The Stoics, however, most assuredly did consider man to be at the very apex of the hierarchy of beings and felt that the rest of the Universe was geared to his benefit.

Cicero (106-43 BC) himself devotes much of his famous work De Natura Deorum to arguments for the existence of the gods drawn from the beneficial contrivance of the world. He also signals the start of a tendency for teleological design arguments to be employed to establish not only the existence but also the character traits of a deity or deities. De Natura Deorum describes the conversations between two disciples of Plato, namely Cotta and Cicero; a Stoic, Balbus; and an Epicurean atomist, Velleius. As might be anticipated from our discussion so far, Balbus provides various teleological arguments for the gods' existence and is backed up by the Platonists in the face of Velleius' continuous opposition. For example, Balbus criticizes the Epicurean view that things could have fallen out so nicely just by chance and reveals a new type of numerical perspective on the likelihood of ordered configurations arising spontaneously:

Can I but wonder here that anyone can persuade himself that certain solid and individual bodies should move by their natural forces and gravitation in such a manner that a world so beautifully adorned should be made by their fortuitous concourse. He who believes this possible may as well believe, that if a great quantity of the one and twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them . . . Thus if we every way examine the Universe, it is apparent from the greatest reason that the whole is admirably governed by a divine providence for the safety and preservation of all beings. 31

These arguments were inspired by a lost work of Aristotle (De Philosophia) in which he reportedly argued that our familiarity with the remarkable aspects of Nature has removed our sense of wonder at them. If we had spent our lives underground and then suddenly came to the surface we would be so struck by the structure of the heavens and the beauty of the Earth that we would be inevitably and 'immediately convinced of the existence of the gods and that all these wonders were their handiwork'. Cicero couples a purely mechanical view of the world with a good anatomical knowledge and even gives the now classic design argument based upon the watch analogy that was used so persistently by Boyle, Niewentyt, Paley and others over fifteen hundred years later: 32

When we see some example of a mechanism, such as a globe or clock or some such device, do we doubt that it is the creation of a conscious intelligence? So when we see the movement of the heavenly bodies, . . . how can we doubt that these too are not only the works of reason but of a reason which is perfect and divine? 33

These and many other examples adorn an argument for the 'gods' that is eutaxiological rather than teleological in character; that is, it is based upon the presence of discernible order and mutual harmony in Nature rather than the recognition of any conscious or unconscious anthropocentric purposes. It is a type of argument that was to be repeated regularly in future centuries. Another, whose ideas were later to form the basis of many eighteenth- and nineteenth-century treatises on the 'Wisdom of God' as evidenced by anthropocentric teleology, was the Greek physician Galen (131-201). Although Galen was eclectic in his philosophical outlook he clearly favoured the Aristotelian picture as the most natural backdrop for his monotheistic views. He developed the doctrine of Final Causes in a more specific and teleological manner than Cicero, arguing that the purpose of the deity could be ascertained by detailed inspection of his assumed works in Nature. Specifically, his study of the specialized design of the human hand was a classic piece of anatomical analysis that became the basis of Bell's Bridgewater Treatise on the teleological aspects of this organ over sixteen hundred years later, so little were later workers able to add to his insights. Of the human body he writes: Let us, then, scrutinize this member of our body, and inquire, not simply whether it be in itself useful for all the purposes of life and adapted to an animal endued with the highest intelligence, but whether its entire structure be not such that it could not be improved upon by any conceivable alteration. 34

His approach was wholly teleological and maintained that all the bodily processes were divinely and optimally planned in every respect. This anthropocentric tenor also runs through the encyclopaedic natural history of the Roman, Pliny (23-79), who also usually described nature by drawing on its relation to man: Nature and earth fill us with admiration . . . as we contemplate the great variety of plants and find that they are created for the wants or enjoyments of mankind. 35


Despite their great administrative, legal and military skills the Romans produced little in the way of lasting abstract ideas. The most relevant character to our study is perhaps Boethius (470-525) who mediates the transition from Roman to Scholastic thinking. For many years a prominent Roman statesman and philosopher he was to write his influential manual The Consolation of Philosophy whilst incarcerated in Pavia gaol awaiting execution. This work is one of the few threads of contact between classical learning and the Dark Ages and is written in an unusual medley of poetry and prose (the author speaks in prose whilst philosophy replies in verse). Boethius' support of the teleological doctrine of Final Causes is clear from the outset of his work where he hails Socrates, Plato and Aristotle as the only true philosophers and sets them in opposition to the spurious Stoic and Epicurean thinkers: 36

Thinkest thou that this world is governed by haphazard and chance? Or rather doest thou believe that it is ruled by reason? 37

His answer ensured that the teleological argument was handed on safely to the emerging civilizations of Northern Europe, for Boethius' book was probably the most widely read scholarly work of the medieval period. It played a major role in shaping the philosophical vocabulary and perspective of those times—it is even fabled that Alfred the Great (849-901) had it translated into Anglo-Saxon. Although the worldview it presents is teleological and anthropocentric through and through, the world model it presumes most definitely is not. Boethius saw and stated that despite the implication of final causes, the astronomical position of man was both infinitesimal and insignificant; a view that would have become familiar to his later pre-Copernican readership: Thou hast learnt from astronomical proofs that the whole earth compared with the Universe is no greater than a point; that is, compared with the sphere of the heavens, it may be thought of as having no size at all. Then, of this tiny corner, it is only one-quarter that, according to Ptolemy, is habitable to living things. Take away from this quarter the seas, marshes, and other desert places, and the space left for man hardly even deserves the name of infinitesimal. 38

This completes the sketch of Greek and Roman origins, showing how the Design and anti-Design arguments began there. (The dates of the principal protagonists are shown in Figure 2.1.) But these seeds would have fallen on stony ground had it not been for their adoption by the inheritors of an entirely different tradition. During the next seven hundred years Greek learning was first perpetuated by the Arabic schools who translated many of the early texts. This Eastern influence reached its zenith during the tenth century and through it Aristotelian ideas slowly diffused into the European culture to be moulded into a Christian form by Aquinas as easily as they had been fitted into the Muslim perspective of the early Arabic philosophers.


[Figure 2.1 here showed a timeline chart spanning roughly 500 BC to 200 BC, marking the lifespans of Parmenides, Anaxagoras, Empedocles, Leucippus, Socrates, Democritus, Aristotle, Epicurus and Theophrastus.]

Figure 2.1. The chronology of some of the early contributors to the question of design in nature. Where precise dates of birth and death are unknown estimates have been used.

2.3 The Medieval Labyrinth

The human imagination has seldom had before it an object so sublimely ordered as the medieval cosmos ... it is perhaps ... a shade too ordered. Is there nowhere any vagueness? No undiscovered byways? C. S. Lewis

What characterizes the Medieval mind most uniquely for the modern spectator is its absolute respect for written authorities. All writers tried to base their works on ancient authority—most notably that of Aristotle. Also, in C. S. Lewis' words, 'Medieval man was not a dreamer nor a wanderer. He was an organizer, a codifier, a builder of systems. He wanted "a place for everything and everything in the right place." Distinction, definition, tabulation were his delight.' 39 These two powerful traits proved perfect, not only for the preservation of the ancient Design arguments, but for their subsequent elevation to the status of ecclesiastical dogma. The nearest one gets to a parallel of the atomists versus the teleologists is, at first, the division of opinion concerning whether science, religion and metaphysics should be conjoined with the blessing of the Design Argument. Averroes of Cordova (1126-1198) was a Mohammedan member of the early Hispano-Arabic school of philosophy and medicine who opposed such a scholastic synthesis. He wanted to separate the basis of religion from experimental science and logic because of the pseudo-conflicts he saw inherent in such a union. He still maintains a teleological view but it is only partially anthropocentric, for he feels it is unreasonable to say that all Nature exists solely for the luxury of humankind:

Why did God create more than one sort of vegetable and animal soul? The reason is that the existence of most of these species rests upon the principle of perfection (completeness). Some animals and plants can be seen to exist only for the sake of man, or of one another; but of others this cannot be granted, for example, of the wild animals which are harmful to men. 40

Looking to another culture one finds the Jewish rabbi Maimonides (1135-1204), an astronomer, philosopher and physician who, like the Arabs, sought to reconcile Aristotelian philosophy with his own religious heritage. This led to his construction of a Jewish Scholastic system that developed the 'proof' of God from contingent being following analogous earlier work by Avicenna (980-1037) and others. Maimonides wrote an apologetic work as a spiritual guide for atheistic philosophers entitled Guide for the Perplexed wherein he states an objection to anthropocentric teleology which is based on the enormous size of the Universe: 41

Consider then how immense is the size of these bodies, and how numerous they are. And if the earth is thus no bigger than a point relative to the sphere of the fixed stars, what must be the ratio of the human species to the created Universe as a whole? And how then can any of us think that these things exist for his sake, and that they are meant to serve his uses? 44

By the middle of the thirteenth century the Dominican scholars, Albert the Great and Thomas Aquinas (1225-74), had completed Aristotle's conversion to Christianity. Aquinas, the 'angelic doctor', was born after the major rediscovery and translation of many of Aristotle's works into Latin and his own unique contribution was a vast unification of Aristotle's philosophy with the Judaeo-Christian doctrine of the Catholic church. The Scholastic ideal held that the nature of ultimate things was accessible to reason alone without revelation from a divine source. Therefore Scholasticism preserved a strong belief in the intrinsic intelligibility of Nature and in the presence of an underlying rationality in an age full of astrological and magical notions. Ironically, this rationality would in the future backfire against some of the more negative aspects of Scholastic dogma. Specifically, Aquinas uses a teleological design argument for the existence of a unique God as the basis of his famous 'Fifth Way' to prove the existence of God and attributes the idea to St. John of Damascene: The fifth way begins from the guidedness of things. For we observed that some things which lack knowledge, such as natural bodies, work towards an end. This is apparent from the fact they always or most usually work in the same way and move towards what is best. From which it is clear that they reach their end not by chance but by intention. For these things which do not have knowledge do not tend to an end, except under the direction of someone who knows and understands: the arrow, for example, is shot by the archer. There is, therefore, an intelligent personal being by whom everything in nature is ordered to its end. 45

His argument does not appeal to any specific pieces of empirical evidence or detailed examples of adaption but to a single aspect of world order— the general trend of natural behaviour. Alongside Thomist philosophy there began to develop, through a number of eminent Franciscan friars, an approach to science that has a more modern flavour. Roger Bacon (1214-94) was the most far-sighted— and the most persecuted—of the advocates for this new emphasis. His foresight influenced many fields of learning that are today quite distinct. He argued, for example, that the use of original texts in historical and linguistic study was essential for scholarship whilst in the sciences he saw that useful progress could only be made through a combination of mathematical reasoning and experimental investigation. Yet, alongside this new and modernistic philosophy of the scientific method Bacon held what was, for his time, a typical view of final causation and mankind's pre-eminent position within the natural world: Man, if we look to final causes, may be regarded as the centre of the world; in so much that if man were taken away from the world, the rest would seem to be all astray, without aim or purpose . . . and leading to nothing. For the whole world works together in the services of man; and there is nothing from which he does not derive use and fruit . . . in so much that all things seem to be going about man's business and not their own. 46

The strength of his position was that he did not allow such finalistic inclinations to usurp the place of direct observations in the practice of physical science. Final causes were relegated entirely to the metaphysical domain. Conscious of the ease with which we adopt preconceived and fallacious modes of reasoning, Bacon ear-marked four explicit sources of erroneous deduction: undue regard for established doctrines and authorities, habit, prejudice and the 'false conceit of knowledge'. Uncritical adoption of Aristotelian metaphysics in the area of physical science was clearly the paradigm for the first of these pitfalls. The Scholastics, in addition to introducing the term 'final cause' (causa finalis) into philosophy, were also the first to use the appellation 'natural theology' (theologia naturalis) which was to prove so popular during the seventeenth and eighteenth centuries. It originates in the work of Raymonde of Sebonde (c. 1400), an obscure scholar who was persuaded to remain in Toulouse as the university professor of medicine, philosophy and theology whilst passing through on a journey to Paris from his home in Barcelona. His book Theologia Naturalis sive Liber Creaturarum was clearly not wholly orthodox because it was placed on the Index in 1595, 47 but the reasons for this are still not altogether clear. It later became influential following its translation by Montaigne in 1569 and was reprinted thereafter in France on several occasions. The author's guiding theme is the kinship of mankind with the natural world and is slightly reminiscent of St. Francis. This unity between man and his environment speaks to him of both design and a unique Designer:

There could not be so great an agreement and likeness between man and the trees, plants and animals, if there were two designers, rulers or artificers in nature; nor would the operations of plants and trees be carried on so regularly after the manner of human operations, nor would they all be so much in man's likeness, except that He which guided and directed the operations of these trees and plants were the same Being that gave man understanding and that ordered the operations of trees which are after the manner of works done by understanding, since in trees and plants there is no reason nor understanding. And of far more strength is the oneness of matter and sameness of life in man, animals, trees and plants an evidence of the oneness of their Maker. 47

2.4 The Age of Discovery

Inquiry into final causes is sterile, and, like a virgin consecrated to God, produces nothing. F. Bacon

The developments heralding the birth of what has become known as the Renaissance view of the world have been exhaustively discussed by scholars. With hindsight, Nicholas Copernicus (1473-1543) appears to us a pivotal figure, the last of the Aristotelians and the harbinger of a fully mechanical model of the Universe. What is now equally clear is that his classic, De revolutionibus orbium coelestium, had negligible influence until the seventeenth century. Few copies of it were sold and even fewer read in the early years after Copernicus' death; other great events, like the Portuguese voyages of discovery, completely overshadowed it. Although Copernicus' world model was new and heliocentric, his world-view was extremely anthropocentric and he appears a little reticent about relinquishing even the physical centrality of Man, but assures us that Man's displacement is really only very slight, given the immense size of the cosmos: 48

So it is also as to the place of the earth; although it is not at the centre of the world, nevertheless the distance [to that centre] is as nothing in particular when compared to that to the fixed stars. 49

It is also interesting that Copernicus uses various tenets of Aristotelian teleology concerning the necessary harmony and order of the Universe to guide him in the construction of a purely mechanical model.


Spherical configurations were appropriate for the celestial motions because 'this figure is the most perfect of all' and the coalescence of falling bodies inevitable because 'nothing is more repugnant to the order of the whole and to the form of the world than for anything to be outside of its place'. Following the heliocentric insights of Copernicus, a route was opened for philosophers to develop the notion of a 'plurality of worlds'. The Aristotelian cosmology could not have countenanced such an asymmetry and periodicity because of its hierarchical and geocentric structure. To the early Greeks the notion of 'many worlds' carried with it, not the more modern picture of additional solar systems and habitable planets, but rather reproductions of the entire Universe. This latter view was characteristic of the early Epicureans, but the possibility of its extension into the Aristotelian cosmology was vigorously opposed by Aquinas on logical and aesthetic grounds. For, he claimed, if all worlds were similar then all bar one were superfluous, whilst if they were dissimilar then a semantic and logical contradiction would arise, because a world would not then contain all that is possible. The notion of 'multiverses' in both of the above-mentioned senses was to be an enduring consideration, generating new arguments both for and against the naive anthropocentric teleologies. Copernicus' famous scientific successors, Galileo (1564-1642) and Kepler (1571-1630), held strong but diametrically opposed views on the subject of anthropocentric design. Whereas Galileo felt such ideas were simply unthinking manifestations of human presumption:

We arrogate too much to ourselves if we suppose that the care of us is the adequate work of God, an end beyond which the divine wisdom and power does not extend, 50

his contemporary, Kepler, was a thoroughgoing teleologist in outlook, holding that 'all things have been made for man'. Furthermore, Kepler appealed to the obvious presence of order in the Universe to substantiate such a belief. Paul Janet, a nineteenth-century French philosopher, records this amusing domestic exchange between Kepler and his wife, which was recounted in Bertrand's Les Fondateurs de l'Astronomie Moderne: 51

'Dost think, that if from the creation plates of tin, leaves of lettuce, grains of salt, drops of oil and vinegar, and fragments of hard-boiled eggs were floating in all directions and without order, chance could assemble them today to form a salad?' 'Certainly not so good a one', replied my fair spouse, 'nor so well seasoned as this'. 52

Kepler was convinced that God had created the Universe in accord with some perfect numerological or geometrical principle. In his astronomical work Kepler strove to use this Platonic conviction to search for the ultimate causes of the planetary motions. 53


Not surprisingly, many other sixteenth-century scholars had little sympathy for this classical Design Argument drawn from the superficial order of the World. Indeed, Kepler's contemporaries contrived some of the most cogent objections to teleology since those of the ancients. The French essayist Montaigne (1533-92) argued that most teleological arguments were too anthropocentric to be taken seriously, and he amusingly parodied Man's grand self-image with an ornithocentric teleology, arguing that we simply do not know for whom or what purpose natural contrivances are geared: Why should not a gosling say thus: All the parts of the Universe regard me; the earth serves me for walking, the sun to give me light, the stars to inspire one with their influences. I have this use of the winds, that of the waters; there is nothing which this vault so favourably regards as me; I am the darling of nature. Does not man look after, lodge, and serve me? It is for me he sows and grinds: if he eat me, so does he his fellow-man as well; and so do I the worms that kill and eat him 54

And he uses an objection to teleology that we remember was also cited by Velleius in Cicero's De Natura Deorum: Who has persuaded himself that this motion of the celestial vault, the eternal light of these lamps revolving so proudly above his head, the awful movements of this infinite sea, were established and are maintained so many ages for his convenience and service? 55

More vehement was the criticism of Francis Bacon (1561-1626), one of the patrons of the modern inductive method and a pioneer in the logical systematization of scientific procedure. He felt most strongly that philosophy and theology should remain completely disjoint rather than fall confused and conjoined within some elaborate Scholastic synthesis. This made him extremely hostile to all aspects of Aristotelian science and a strong supporter of the early atomists. Although Bacon certainly did not wish to deny that Nature may both possess and display some divine purpose, he objected to the use of this belief in generating teleological 'explanations' which then became intermingled with the empirical investigations of the physical sciences. His attitude towards the fruitlessness of teleological and finalistic explanations in natural science is summarized by his famous jibe, which serves as the epigraph for this section. For Bacon, final causes have a role to play only in metaphysics. In physics, experience guides us to exclude them. With Bacon's ideas we see the beginning of a trend that has continued to the present day, with most scientists qua scientists ignoring 'ultimate' questions and concentrating instead on more limited local problems and the interconnections between material and efficient causes. Bacon claims this is advantageous 56


because,

the handling of final causes mixed with the rest in physical inquiries, hath intercepted the severe and diligent inquiry of all real and physical causes, and given men the occasion to stay upon these satisfactory and specious causes, to the great arrest and prejudice of further discovery. For this I find done not only in Plato, who ever anchoreth upon that shore, but by Aristotle, Galen and others. For to say that... the clouds are for watering of the earth; or that the solidness of the earth is for the station and mansion of living creatures, and the like, is well enquired and collected in Metaphysic; but in Physic they are impertinent . . . the search of the Physical Cause hath been neglected and passed in silence . . . Not because those final causes are not true, and worthy to be enquired, being kept within their own province; but because their excursions into the limits of physical causes hath bred a vastness and solitude in that track. 58

In the course of his work Bacon isolated a number of 'idols' of natural or man-made origin which could cause us to stumble from the path to sure knowledge. Two are strikingly reminiscent of the snares pointed out by his medieval namesake: Idola Tribus—fallacies generically inherent in human thought, notably the proneness to perceive in Nature a greater degree of order than is actually present, and Idola Theatri—idols constructed around received and venerated systems of thought. The classical design argument has points of contact with each, and Bacon's demarcation helps us to trace some of the psychological origins of this argument. Yet despite the good sense of Bacon's advice, there was amongst his contemporaries a notable Aristotelian, and one whose contribution to science will be remembered after Bacon is long forgotten. William Harvey (1578-1657) made his monumental discovery of the human circulatory system by employing the very style of reasoning derided by Bacon. Harvey was not an atomist, and he regarded the facts uncovered by his studies of embryology as a refutation of any scientific philosophy devoid of purpose. In his final publication he claims that 'The authority of Aristotle has always had such weight with me that I never think of differing from him inconsiderately'. The way in which this respect for Aristotle was realized in Harvey's work seems to have been in the search for discernible purpose in the workings of living organisms—indeed, the expectation of purposeful activity—rather than any association in his mind with a vast labyrinth of metaphysical ideas about the structure of the World and the living organisms within it.
Harvey's discovery of the human circulatory system actually arose as a consequence of his Aristotelian approach: on the one hand he wondered if the motion of human blood might be circular—with all the significance such a geometry would have for Aristotelians—whilst on the other he tried to conceive of how a purposeful designer would have constructed a system of motion. 59 Robert Boyle records a conversation in which he asked Harvey how he had hit upon such an idea as circulation. Harvey replied that when he had noticed how carefully positioned were the valves within the veins so as to allow blood to pass towards the heart but not away from it, he was 60

... invited to imagine, that so Provident a cause as Nature had not so placed so many valves without Design: and no Design seem'd more possible than that, since the Blood could not well, because of the interposing valves, be sent by the veins to the limbs; it should be sent through the Arteries and return through the veins.

Elsewhere in Harvey's writings we find even a desire to interpret the internal structure of the body as a form of miniature solar system, with the heart at the centre, along the lines of an Aristotelian cosmology. These motivations were clearly not the sole reason for Harvey's success. He was also among the first of a new generation of physicians who did not look simply to Galen for their instruction but dissected, examined and recorded, and carried out their own experimental investigations. By his successful synthesis of teleology and experiment Harvey appears as the forerunner of a new type of teleologist, those with a special interest in the observation of the minute intricacy of Nature. Another illustrious contemporary of Bacon who was deeply concerned with the unverifiable and imprecise nature of the foundations of all types of philosophy was the founder of modern critical philosophy, René Descartes (1596-1650). Like Galileo and many other Renaissance scientists he was convinced that the primary qualities of the Universe were mathematical in nature. This led him firmly to reject final causation as a useful scientific concept, because it was associated with an anthropocentric and subjective view of the world, reflecting little more than our presumption in supposing we could unravel the purposes of God. Things have many ends, Descartes says, but most of these have no interaction with Man at all: 61


It is not at all probable that all things have been created for us in such a manner that God had no other end in creating them . . . Such a supposition would, I think, be very inept in reasoning about physical questions; for we cannot doubt that an infinitude of things exist, or did exist, though they have now ceased to do so, which have never been beheld or comprehended by man, and have never been of any use to him. 63

This view was reinforced by his belief that the Universe was infinite. Descartes's approach to natural philosophy was an attempt to deduce the essence of the world structure from self-evident primary principles solely by the methods of mathematical reasoning.

[Figure 2.2 appears here: a timeline spanning roughly 1200 to 1700, marking Aquinas, Copernicus, Montaigne, Kepler, Galileo, Gassendi, Descartes, Harvey, R. Bacon and F. Bacon.]

Figure 2.2. The chronology of the principal contributors to our discussion of the Design Argument during the thirteenth to the sixteenth centuries.

The Cartesian world-view was 'Deistic'; that is, it maintained that order was inherent in the properties of inorganic material and endowed at the moment of creation; thereafter all operates by mechanical causes alone:

God has so wondrously established these laws that even if we suppose that he creates nothing more than I have said [matter and motion], and even if he puts into this no order nor proportion, but makes of it a chaos as confused and perplexed as the poets could describe, they are sufficient to cause the parts of this chaos to unravel themselves, and arrange themselves in so good an order that they shall have the form of a very perfect world. 64

Whereas Bacon had banished final causes to the metaphysical world, Descartes wished to exorcise them from this realm as well. Following Francis Bacon's example, he made no attempt to deny that Nature may possess some ultimate end of premeditated design, but claimed that it is simply beyond our ken to identify it; for, the capacity of our mind is very mediocre, and not to presume too much on ourselves, as it seems we would do were we to persuade ourselves that it is only for our use that God has created all things, or even, indeed, if we pretended to be able to know by the force of our mind what are the ends for which he has created them. 65

The reason why the concept of teleology has arisen in our minds, Descartes claimed, is due to muddled thinking about the relationship between causes and effects rather than the reality of different types of cause, as Aristotle would have it. By contrast the Cartesian approach would 'explain effects by causes, and not causes by effects'. Yet Descartes did seem to allow just one final cause; for he believed God has provided Man with a closely correlated body and mind to evade danger—mankind's end was survival. 65


2.5 Mechanical Worlds

But of this frame, the bearing and the ties,
The strong connections, nice dependencies,
Gradations just, has thy pervading soul
Look'd thro'? Or can a part contain the whole?
A. Pope

The seventeenth century saw a gradual change from an organic to a mechanical world picture; the opinion that an entity which generates life must therefore itself be alive steadily receded in the wake of the manifest success that flowed from the mechanistic paradigm. This appears as an important metamorphosis, and one which we are apt to skip over, so familiar are we with the comings and goings of the theoretical models in modern physical science. In modern science, models and descriptions of natural phenomena are taken up and discarded solely according to their transient usefulness, whereas for early scientists they represented not just a model but the very essence of the Universe, the 'thing in itself'. Because of this attitude the new mechanical perspective brought with it a more interesting and enthusiastic form of eutaxiological argument, which found support principally amongst British physicists. Although their arguments were strongly motivated by their theistic outlook, they also grew out of careful observations and an experimental interrogation of the new clockwork world. It was Robert Boyle (1627-91) who became the most eloquent expositor and spirited supporter of the 'new' design argument. Boyle laid emphasis upon specific examples and coincidences in Nature, claiming them as 'curious and excellent tokens and effects of divine artifice'. His cosmological view required the Deity to initiate the primordial motion of atoms and thereafter remain in lawful and beneficent control to 'contrive them into the world he designed they should compose'; this establishes why the laws of nature bear the hallmark of design.
Yet Boyle's approach was consistently mechanical throughout and, like Descartes, he rejected the Aristotelian world-view, based as it was upon an organic model of the Universe, along with the concepts of the Schoolmen, which he saw were an obstacle to the progress of science because they 'do neither oblige nor conduct a man to deeper searches into the structures of things'. Despite his admiration for many aspects of Descartes's work, Boyle disagreed strongly with him regarding his blanket exclusion of final causes, for to do thus would: 67


throw away an argument, which the experience of all ages shews to have been the most successful [and in some cases the only prevalent one] to establish, among philosophers, the belief and veneration of God. 69


Whilst he agreed with Descartes that one could not hope to ascertain all the underlying purposes in Nature, he did not see why some, at least, could not be fathomed. But, unlike Descartes, Boyle felt that a major reason for the existence of the world was its service to man, though he certainly granted it could have other ends as well, for he writes,

And here it may not be amiss to take notice, in relation to the opinion, that the whole material world was made for man, that though the arguments we have used may be more probable than others hitherto proposed, against the Vulgar Opinion, especially as it relates to the celestial region of the world, yet amongst the ends designed in several of his works, especially plants, animals and metals, the usefulness of them were designed chiefly for men, yet God may design several ends in several creatures, which may find other, and more noble uses for several creatures than have yet been discovered. 70

Opponents of the Design Argument, like Montaigne, had highlighted the presumption attached to any affirmation of anthropocentric design in Nature; but as a corollary Boyle claimed that, given our fragmentary understanding, it was equally presumptuous of them to deny it. Another original aspect of Boyle's approach to final causes was his claim that the discovery of features pointing to design in Nature is promoted principally by experimental science and provides a strong motivation for these empirical investigations. It is because of lack of good experimental evidence that Boyle shows so little enthusiasm for arguing for manifest design in the astronomical world. He has serious reservations here, for

I am apt to fear that men are wont, with greater confidence than evidence, to assign the systematical ends and uses of the celestial bodies, and to conclude them to be made and moved only for the service of the earth and its inhabitants. 71

Instead, he preferred to find indications of design from the minutiae of flora and fauna, because of their more allegorical nature and the stronger possibility of deciding the purpose of their composite structures.

For there seems more admirable contrivance in the muscles of a man's body, than the celestial orbs; and the eye of a fly seems a more curious piece of work than the body of the sun 72

Such deductions were less obvious in the extraterrestrial realm:

I think that, from the ends and uses of the parts of living bodies, the naturalist may draw arguments, provided he do it with due cautions of which I shall speak. That the inanimate bodies here below that proceed not from seminal principles have a more parable texture . . . and will not easily warrant ratiocinations drawn from their supposed ends. 73

Like Aristotle before him, Boyle searched for particular examples of


micro-engineering in the structure of animals and insects; such examples had, at that time, received a lot of publicity following the publication of Hooke's Micrographia in 1665. The invention of the microscope had, for the first time, allowed people to see the intricacy of the smallest organisms. In no small way this advance gave added momentum to the Design Argument. Boyle's discussions of these matters appeared in 1688 in a work bearing a rather intimidating title: Disquisition about the Final Causes of Natural Things: wherein is inquired whether and (if at all) with what caution a naturalist should admit them. There he attempted to classify the various ends one could discern in Nature into four categories: the 'universal' (divine), the 'cosmical' (which govern the celestial motions), the 'animal' ('which are those that the peculiar parts of animals are destinated to, and for the welfare of the animal itself') and 'human' (mental and corporeal). Each category provoked Design Arguments but they differed in character and force according to the quality of the evidence available and the impact they made on the imagination. Following Cicero's employment of the horological analogy of design, Boyle replied to Descartes's claim that final causes are irresolvable, dissipated in a sea of vague possibilities: 73

Suppose that a peasant entering in broad daylight the gardens of a famous mathematician, finds there one of those curious gnomonic instruments which indicate the position of the sun in the zodiac, its declination from the equator, the day of the month, the length of the day and so on; it would, no doubt, be a great presumption on his part, ignorant alike of mathematical science and of the intentions of the artist, to believe himself capable of discovering all the ends in view of which this machine, so curiously wrought, has been constructed; but when he remarks that it is furnished with an index, with lines and horary numbers, in short, with all that constitutes a sun-dial, and sees successively the shadow of the index mark in succession the hour of the day, there would be in his part as little presumption as error in concluding that this instrument, whatever may be its other uses, is certainly a dial made to tell the time. 74

Boyle argues that in many circumstances no ambiguity arises about the object and purpose of natural contrivances. The world is like a mechanism and, like all known mechanisms, is built for a specific purpose that can almost always be elucidated by a thoughtful inspection of its inner workings. In this contention he was supported by his continental contemporary Gassendi (1592-1655), who also disagreed with Descartes: You say that it does not seem to you that you could investigate and undertake to discover, without rashness, the ends of God. But although that may be true, if you mean to speak of ends that God has willed to be hidden, still it cannot be the case with those which he has, as it were, exposed to the view of all the world, and which are discovered without much labour. 75


The specific influence of the new mechanical world model can be seen in an interesting way: Boyle is so impressed by the correspondence between the internal workings of the world and a timepiece that he believes behind the world lurks a designer of mechanisms with a measure of human intelligence:

Thus, he who would thoroughly understand the nature of a watch, and not rest satisfy'd with knowing, in general, that a man made it for such uses, but he must, particularly, know of what materials the spring, the wheels, the chain, and the balance are made, he must know the number of the wheels, their magnitude, shape, situation and connexion in the engine, and after what manner one part moves another . . . In short, the neglect of efficient causes would render philosophy useless; but the studious search after them will not prejudice the contemplation of final causes. 76

The end of his statement reveals his stance: although immediate efficient causes of phenomena were entirely mechanical in Boyle's physics, their ultimate and final causes were seen as entirely supernatural. He hoped that his crusade for such a complementarity in the scientific view of the world would not die with him. To support and perpetuate teleological studies he bequeathed a sum of fifty pounds 'forever, or at least for a considerable number of years' to support a series of public lectures on Natural Theology. At this time those Protestant scientists who, like Boyle, supported the experimental approach advocated by Bacon were rapidly becoming impatient with the methodological dogmas of the Schoolmen. The lead given by Descartes and Boyle was enthusiastically followed by others who were more colourful in their condemnations as this extract from John Webster's view of Scholastic reasoning rather vividly indicates!

What is it else, but a confused chaos of needless, frivolous, fruitless, trivial, vain, curious, impertinent, knotty, ungodly, irreligious, thorny and hell-hatch'd disputes, altercations, doubts, questions and endless janglings, multiplied and spawned forth even to monstrosity and nauseousness. 77

The development of the new mechanized physics was to carry with it a design argument based upon the observation of meticulous contrivances in Nature and the conviction of an underlying order of its universal laws. But in biology the organic approach still held sway. An exceptional scientist who remained unconvinced of the mechanical analogy in all its facets was John Ray (1628-1704), the greatest of seventeenth-century English naturalists. In his famous teleological study, The Wisdom of God manifested in the works of Creation, he amassed a wealth of observational data to argue that animals were pre-adapted to survive in special environments. His comprehensive work also reviewed both the astronomical and terrestrial sciences and stressed the manner in which Man's welfare is ensured by the special properties of water, fire, air and wind. 78 It was Ray's meticulous botanical and biological observations that led him to reject the mechanical analogy as too simplistic a view of Nature, because it gave no insight into the reasons for the enormous differences in scale between intricately constructed organisms and the Universe as a whole. He challenged Boyle's contention that Nature originally possessed all the intrinsic properties necessary for its multifarious outworkings; rather, he appealed to a vitalist force to provide for its constant orchestration, concluding: 'I therefore incline to Dr Cudworth's opinion, that God uses for these effects the subordinate ministry of some inferior plastic nature . . .' 79

The novelty of 'Dr. Cudworth's opinion' was the concept he termed 'Plastic Nature', which possessed a measure of irrational motion independent of the immediate direction of the Deity. This property enabled it to be employed as an explanation for the aberrations as well as the successes of Nature. Even the lack of design could now be attributed to design. A strong continental opponent of these attempts to introduce some finalistic design principle into physics was Benedict de Spinoza (1632-77). His antagonism toward any deployment of final causes or inferences from supposed design in the world is spelt out in an appendix to his Ethics, published in the year of his death. Such notions, he claims, have only arisen because of our ignorance of mechanical laws of Nature and our gullibility regarding the prejudices of anthropocentric philosophy. Far from being in a position to determine the causes and effects of most things, we tend to react in amazement, thinking that however these things have come out, they cannot but be for our benefit. This is why, he says, everyone who 'strives to comprehend natural things as a philosopher, in place of admiring them as a stupid man, is at once regarded as impious'. Those who employ finalistic reasoning simply confuse causes with effects because, 80


It remains to be shown that nature does not propose to itself any end in its operations, and that all final causes are nothing but pure fictions of human imagination. I shall have little trouble to demonstrate this; for it has already been firmly established . . . I will, however, add a few words in order to accomplish the total ruin of final causes. The first fallacy is that of regarding as a cause that which is by nature anterior, it makes posterior . . . 82

Also, if the doctrine of final causes is correct, he argues, then those most perfect things we are seeking as irrefutable evidences of the 'perfect principle' must, by definition, lie in the unobservable future, for

If the things which God immediately produces were made in order to attain an end, it would follow that those which God produces last would be the most perfect of all, the others having been made in order to these. 83


Spinoza claims that our deductions of final causes are probably nothing more than mere wish-fulfilment, expressing not the nature of the real world, but the nature we hope it has:

When we say that the final cause of a house is to provide a dwelling, we mean thereby nothing more than this, that man, having represented to himself the advantages of the domestic life, has had the desire to build a house. Thus, then, this final cause is nothing more than the particular desire just mentioned . . . 84

Such metaphysical and logical objections seemed to carry very little weight on the other side of the English Channel where the greatest scientific genius of his age, Isaac Newton (1642-1727), was giving his support to anthropocentric teleology:

Can it be an accident that all birds, beasts and men have their right side and left side alike-shaped (except in their bowels) and just two eyes and no more, on either side of the face; and just two ears on either side of the head . . . ? Whence arises this uniformity in all their outward shapes but from the counsel and contrivance of an Author? . . . Did blind chance know that there was light, and what was its refraction, and fit the eyes of all creatures after the most curious manner to make use of it? 85

Underlying all Newton's thinking was his deeply-held belief that order was 'created by God at first and conserved by him to this Day in the same state and condition'. Our observation of the planetary orbits, he argued, should convince us that their arrangement did not simply 'arise out of chaos by the mere laws of Nature, though being once formed it may continue by those laws for many ages'. Robert Boyle had been a critic of Cartesian metaphysics; Newton opposed Cartesian physics as well, and in particular Descartes's vortex theory of celestial motions, which he showed, by employing angular momentum conservation, to be in conflict with Kepler's observed laws of planetary motion. In his last works Newton voices his exasperation at the omission of final causes in the Cartesian explanations, which he clearly felt to be incomplete because they provided no explanation for the economy and special constitution of Nature:

Whence is it that Nature does nothing in vain; and whence arises all that Order and Beauty which we see in the world? To what end are comets . . . How come the bodies of animals to be contrived . . . For what ends are their several parts? . . . Was the eye contrived without skill in optics? .. . 86

The Newtonian theory of the world, so carefully and impressively argued in his Principia, became the foundation for a steady stream of design arguments based upon optical and gravitational phenomena. Indeed Newton remarked that in writing the treatise he had an 'eye upon arguments' for belief in a deity and in the introduction to his Opticks he claims that the main business of natural philosophy is to deduce causes from effects until we arrive at the 'First Cause'. However, one man became inextricably linked with Newton in the propagation of these teleological interpretations of Newtonian physics; that man's name was Richard Bentley.

Richard Bentley (1662-1742) was a Yorkshireman from humble beginnings who later, principally because of his successful Christian apologetics and classical scholarship, became the Master of Trinity College, Cambridge. Bentley came first into the public eye in 1691, when, while still chaplain to Edward Stillingfleet, the Bishop of Worcester, he was invited to give the inaugural Boyle Lectures on Natural Theology. They were entitled the Confutation of Atheism from the Origin and Frame of the World and in giving them he displayed an excellent knowledge and understanding of Newton's mathematical physics, a familiarity known to have been fostered by his close correspondence and dialogue on such matters with Newton himself. Bentley was to argue that design is most clearly witnessed by elegant mathematical laws of a general and invariant character rather than by the specific, but relative, adaptations we see in the animal world. He attempted to construct a eutaxiological design argument based upon our knowledge rather than, as often had been the case, a teleological argument founded upon our ignorance. The cornerstone of his argument, Newton's gravitational theory, derived, for the first time, what we still consider to be one of the fundamental constants of nature: the gravitational constant. It was this underlying universal constant that was responsible for the apparently universal nature of Newton's deductions and explanations in gravitation physics and it led to the belief that there was something absolute about the entire model of the world it gave rise to—a model that was mechanical, like the workings of a watch.
In retrospect it is perhaps predictable that outstanding success in scientific model-building and explanation should lead to an accompanying proliferation of teleological and eutaxiological design arguments. One sees it in the Aristotelian period and in the twentieth-century study of cosmology and elementary particles. Whenever absolute deductions are possible from a theoretical model, and successfully explain what is seen, then some form of absolute credence tends to be attributed to the mathematical model responsible. Newton's authority was also extensively employed by other apologists, notably Hales, Clarke, Whiston and MacClaurin, all with Newton's blessing according to David Gregory's report 87


In Mr. Newton's opinion a good design of a publick speech... may be to show that the most simple laws of nature are observed in the structure of a great part of the Universe, that the philosophy ought there to begin, and that Cosmic Qualities are as much easier as they are more Universal than particular ones, and the general contrivance simpler than that of animals, plants .. . 89


The result of this enthusiasm and its widespread influence was to make Newton and his followers the principal target of Hume's attack in the Dialogues concerning Natural Religion. In his History of England Hume describes Newton and his achievement in two-edged terms: 'the greatest and rarest genius that ever rose for the ornament and instruction of the species', but yet 'while Newton seemed to draw off the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored her ultimate secrets to that obscurity in which they ever did and ever will remain'. The statement of the Design Argument used by Hume in his work is in fact that given by Colin MacClaurin (1698-1746) in his book An Account of Sir Isaac Newton's Philosophical Discoveries, wherein he remarks

the plain argument for the existence of the Deity, obvious to all and carrying irresistable conviction with it, is from the evident contrivance and fitness of things for one another, which we find throughout the Universe. 90

At this point, it is worth pausing to mention a gradual transition that has occurred in the nature of design arguments from the Scholastics to Newton. For the Schoolmen the causa finalis of Nature was God himself; the unmoved mover was Omega as well as Alpha. The future succession of effects must come to an end just as surely as the past procession of causes must have had a beginning and Man, they argued, should use this insight to know God. For Newton and his colleagues the ordered laws of motion themselves appear to be the end of Nature. God exists to uphold and perpetuate them, defending the world system from falling into chaos and irrationality.

The second Boyle lecturer was another Newtonian, Samuel Clarke, but it is not for this that Clarke is chiefly remembered. Rather, it is for his dialogue with another scientist who was not so readily seduced by the Newtonian design arguments. Clarke's formidable opponent was Gottfried Leibniz (1646-1716) and throughout their famous correspondence Clarke was undoubtedly being coached by his compatriot, Newton. Leibniz believed that mechanistic science alone left no room for theocentric purpose. Such a purpose could only be evident through the recognition and incorporation of perfect geometrical principles into physics. In principle, Leibniz argued, there were many possible worlds that were logically self-consistent but the reason for the selection of the existing cosmos was its maximal degree of perfection; it was 'the best of all possible worlds'. He argued that the use of this principle of perfection was quite essential in physical modelling and 'So far from excluding final causes and the consideration of a Being acting with wisdom, it is from this that everything must be deduced in physics'. 91 In conjunction with mechanical explanation the use of final causation and teleology provides a parallel line of analysis and it is to everyone's benefit that they be conjoined. In order to make use of his 'perfecting principle' Leibniz gave examples of laws in Nature that he believed were not metaphysically necessary. For example, the principle of continuity in the motion of physical systems, which appears to be generic when one might have anticipated discontinuities ('leaps') to be prevalent a priori:

The hypothesis of leaps cannot be refuted except by the principle of order, by the help of supreme reason, which does everything in the most perfect way. 93

So, in the beginning God established all things harmoniously and thereafter they maintained their harmony and mutual consistencies even though they were causally disjoint. The maintenance of order in this fashion was proposed by Leibniz as 'a new proof of the existence of God, which is one of surprising clearness'; it was an a posteriori argument from an initially established ordering. He is convinced of it because there seems to exist coordination between things that have never been in causal contact with one another (a dilemma known to modern cosmologists as the 'horizon problem'). 94

This perfect harmony of so many substances which have no communication with each other can only come from a common cause. 95

Leibniz' perfect harmony does not necessarily have any anthropocentric bias and because of that it is not surprising that 'we find in the world things that are not pleasing to us', we would expect it because 'we know that it was not made for us alone'. In this contention Leibniz would have been supported by some Newtonians but the area where disagreement with Clarke, and thereby Newton himself, rested was in the manner of the maintenance of the world order. Clarke was an 'occasionalist' believing that God constantly intervenes to correct aberrations in the order of Nature just as the watchmaker occasionally finds it necessary to regulate or repair his watch. Leibniz held that such a view implied either that the laws of Nature and creation were in some way imperfect or the Deity was lacking in foresight; he could not believe the world needed repair 'otherwise we must say that God bethinks himself again'. Clarke retorted that Leibniz had turned the Deity into an absentee landlord and relegated the sphere of divine action to that of a limited initial cause but received the reply that, to the contrary, His dynamic role was the constant maintenance of the world order. Besides the two scientific giants of the age, there were several other more off-beat contributors to the Design Argument debate; not least the botanist Nehemiah Grew (1641-1712). In his study Cosmologia Sacra he gave not only many ingenious examples of design in crystallography but 96


also an argument from the large scale regularity of Nature to the existence of extraterrestrial planetary systems:

there can be no manner of symmetry in finishing so small a part of the Universal expansion with so noble an apparatus as aforesaid, and letting innumerable and far greater intervals lie waste and void. If then there are many thousands of visible and invisible fixed stars, or suns, there are also as many planetary systems belonging to them, and many more planetary worlds. 97

An unusual continental commentary is provided in the famous drama Le Festin de Pierre by Molière (1622-73). There, the Design Argument found itself on the lips of a pious valet who says to his unbelieving master:

This world that we see is not a mushroom that has come of itself in a night . . . Can you see the inventions of which the human machine is composed, without admiring the way in which it is arranged, one part within another? . . . My reasoning is that there is something wonderful in man, whatever you may say, and which all the savants cannot explain. 98

Another famous French author with interesting opinions on final causes, who was also a vehement opponent of Leibniz' entire world-view was Francois-Marie Arouet (1694-1778), better known by his nom-deplume, Voltaire. Voltaire is perhaps most succinctly categorized as an anti-Epicurean, anti-Christian, Newtonian Deist. His opinion of the order of Nature was that 'a watch proves a watch-maker, and that a Universe proves a God'. It was unthinkable to him that one could attribute the existence of the human mind to blind chance:

We are intelligent beings, and intelligent beings could not have been formed by a blind, brute, insensible thing . . .

Furthermore, he maintained, the evident presence of intelligence in Nature made it necessary to consider final causes in Nature. Although he believed that normalizing selection could explain the adaptation that animals displayed with respect to their environments, it could account neither for their mental faculties nor the intricacy of the design actually engineered within them, and, as for chance as a feasible mechanism, he claimed

The disposition of a fly's wings or of the feelers of a snail is sufficient to confound you. 99

Yet Voltaire was a scathing opponent of anthropocentric design arguments because he felt that our scanty knowledge made the objects and beneficiaries of design indeterminate and, inevitably, the subject provided excellent material for his Dictionary article on 'Ignorance'. In the same volume he argues against the synthesis of Final Causes with these anthropocentric delusions on the grounds that things could not have been set up long ago with our present specific and unpredictable day-to-day needs in view,

In order to become certain of the true end for which a cause acts, that effect must be at all times and in all places. There have not been vessels at all times and on all seas: thus it cannot be said that the ocean has been made for vessels. One feels how ridiculous it would be to allege that nature had wrought from the earliest times to adjust itself to our arbitrary inventions, which have all appeared so late; but it is very evident that if noses have not been made for spectacles, they have been made for smelling, and that there have been noses ever since there have been men. 100

We also recall the caricature of Leibniz and his 'best of all possible worlds' philosophy through Dr. Pangloss, the professor of 'metaphysico-theologo-cosmolonigology' in Candide. One of Voltaire's co-editors of the Encyclopédie and the author of its mathematical content was D'Alembert (1717-83). He was, like Voltaire, sceptical of the numerous metaphysical bases to mathematical physics. Also interesting is his distinction between the intrinsic laws of nature and the mathematical models we use to represent them. This distinction he develops when discussing the form of the laws of motion: 101

It seems to me that these thoughts can serve to make us evaluate the demonstrations given by various philosophers of the laws of motion as being in accord with the principle of final causes, that is to say with the designs of the Author of Nature in establishing these laws. Such proofs can be convincing only insofar as they are preceded and supported by direct demonstrations and have been derived from principles which are within our reach; otherwise they could often lead us into error. It is for having followed that path, for having believed that it was the Creator's wisdom to conserve always the same quantity of motion in the Universe, that Descartes was mistaken about the laws of collision. Those who imitate him run the risk of either being deceived like him, or taking for a general principle something that takes place only in special cases, or finally of regarding a purely mathematical consequence of some formula as a fundamental law of nature. 102

For modern mathematicians D'Alembert's name is linked with that of his older contemporary Moreau de Maupertuis (1698-1759) through their important contributions to the variational principles of mechanics. Such variational principles are remarkable quantitative examples of teleological reasoning being directly and predictively employed in mathematical physics and we shall discuss them in a little more detail in Chapter 3.4. Here, we just mention how they enabled Maupertuis to arrive at a quantification of the notion of 'the best of all possible worlds': the optimal configuration or state within an ensemble of logically consistent possibilities.


In general, a variational principle indicates how the actual motion or state of a system differs from all of the kinematically possible motions permitted by its constraints. This principle may be differential, giving the difference between the actual and the optimal systems at each instant of time; or, less generally, it may be integral. Integral variational principles establish the difference between the actual motion of a system and all of its kinematically possible motions during a finite time interval. Maupertuis' name is associated with the famous integral principle of variation—the Least Action Principle. Maupertuis used this idea to argue for a system of God-inspired final causes in Nature and claimed that it was a mathematically precise version of Leibniz' doctrine of 'the best of all possible worlds'. Formerly, Design Arguments had been implicitly making statements of comparative reference without any other 'worlds' being available; the novelty of Maupertuis' Design Argument is that the other worlds do exist. They are the paths with non-stationary action. Yet, Maupertuis was well aware that the growth of accurate mathematical models of nature had spawned many over-zealous metaphysical extrapolations:
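To make the distinction concrete, the integral principle can be sketched in modern notation (a notation neither Maupertuis nor his contemporaries used; the symbols L, q, S, m, v are standard modern choices, not drawn from the original sources):

```latex
% Hamilton's later, more general integral form: among all kinematically
% possible paths q(t) joining fixed endpoints, the actual motion renders
% the action S stationary under small variations of the path:
\delta S \;=\; \delta \int_{t_1}^{t_2} L\bigl(q,\dot{q},t\bigr)\,dt \;=\; 0

% Maupertuis' original statement compares paths of the same total energy
% and selects the trajectory making the 'action' (mass times speed
% integrated along the path) stationary:
\delta \int m\,v \; ds \;=\; 0
```

The 'other worlds' of Maupertuis' argument are then simply the varied comparison paths, those for which the action fails to be stationary.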

For all ages proofs of the wisdom and power of Him who governs the Universe have been formed by those who applied themselves to the study of it. The greater the progress in physics, the more numerous have these proofs become. Some, struck with amazement at the divine tokens which we behold every moment in nature, others through a zeal misnamed religious, have given certain proofs greater weight than they ought to have, and sometimes taken for proof that which was not conclusive. 103

and he believed Newton to be the originator of this uncritical approach because,

That great man believed that the movements of the celestial bodies sufficiently demonstrate the existence of Him who governs them; such uniformity must result from the Will of a Supreme Being. 104

and other less distinguished authors, Derham, Fabricius and Lesser, were chastised for their unimaginative repetition of earlier platitudes,

Almost all the modern authors in physics and natural history have done little else than expand the proofs drawn from the organization of animals and plants, and push them in the details of nature . . . A crowd of physicists since Newton have found God in stars, in insects, in plants, and in water; not to mention those who find him in the wrinkles of the rhinoceros' hide . . . leave such bagatelles to those who do not perceive their folly. 105

The only people with whom he appears to have less sympathy are those who would outlaw Final Causes at the behest of chance and mechanism. His own approach was grounded in a search for general regulatory principles and for physical laws generated by the precise formulation of a Least Action Principle. He argues that the only objective approach to evaluating the tendencies of nature is to dwell on the form of its laws—not its artefacts and organisms,

I review the proofs drawn from the contemplation of nature, and I add a reflection: it is, that those which have the greatest strength have not been sufficiently examined as regards their validity and extent. That the cosmos presents trains of agencies convergent to an end on a thousand occasions, is no proof of intelligence and design . . . skill in the extension is not sufficient . . . the purpose must be rational . . . The organization of animals, the multiplicity and minuteness of the parts of insects, the immensity of celestial bodies, their distances and revolutions are better suited to astonish the mind than to enlighten it . . . Let us search for him in the fundamental laws of the cosmos, in those universal principles of order which underlie the whole, rather than in the complicated results of those laws. 106

A number of Maupertuis' criticisms were directed specifically at William Derham (1657-1735) the Boyle lecturer for 1711-12, a minor scientist and an enthusiast for the Newtonian world-view. His Boyle Lectures consisted of sixteen sermons delivered at St. Mary-le-Bow Church which appeared in book form a year later under the title Physico-Theology. He considered all the usual good fortunes of the world, the suitability of the terrestrial environment, the diurnal and seasonal variations and so on, all from an anthropocentric perspective. Extraordinarily, he pauses to wonder if the eye might have been more efficiently situated on the hand, but upon reflection, considers it safer from injury on the head! Another unusual trend in his argument is an attempt to persuade the reader that many minor disasters, which one might at first sight have found difficult to reconcile with providential design, were actually beneficial in staving off even graver catastrophes! For instance, 107

To instance the very worst of all things named, viz., the volcanoes or ignivomous mountains: although they are some of the most terrible shocks of the globe and dreadful scourges of the sinful inhabitants thereof . . . Nay, if the hypothesis of a central fire and waters be true, these outlets seem to be of the greatest use to the peace and quiet of the terraqueous globe in venting the subterraneous heat and vapours, which, if pent up, would make dreadful and dangerous commotions of the earth and waters. 108

Later he was to abandon this anthropocentric bias, referring to it as the 'old vulgar opinion that all things were made for man'. His more sophisticated teleological outlook was written up in a later work, Astro-Theology. There, in contrast to his earlier work, he realizes the need to consider the role of the heavenly bodies whose motions appear to be of no possible relevance to ourselves. He uses their existence to support a eutaxiological argument by appeal to the manifest design of their orderly motions:

For where we have such manifest strokes of wide order and management, of the observance of mathematical proportions, can we conclude there was anything less than reason, judgement and mathematical skill in the case? Or that this could be effected by any other power but that of an intelligent Being. 108

Eighteenth-century biologists were beginning to think more carefully about the progressive development of forms but came to widely differing conclusions. The Swiss naturalist Bonnet (1720-93) introduced the term evolution to describe the ontogenetic development of an individual from fetus to adult and argued that the entire inorganic world was similarly preprogrammed. Further, this complete determinism was sufficient to explain the match of living things to their local environment. Yet, his French contemporary, the zoologist Buffon (1707-88), believed that no useful information about animal function could be gleaned from the doctrine of Final Causes so commonly employed by the physicists:

Those who believe they can answer these questions by final causes do not perceive that they take the effect for the cause. 109

2.6 Critical Developments

The believers in Cosmic Purpose make much of our supposed intelligence but their writings make one doubt it. If I were granted omnipotence, and millions of years to experiment in, I should not think Man much to boast of as the final result of all my efforts.
Bertrand Russell

Besides Maupertuis, the most original approach to the metaphysical problems at the core of the mechanical world-view issued from the pen of Giovanni Vico (1688-1744), a Neapolitan professor of Jurisprudence. In his own time his work was not widely discussed, but retrospectively he is seen, by philosophers of science, as a forerunner of Kant. Vico was interested in refuting the Cartesian dogma that all science required in order to unravel the working of the World was an axiomatic basis for reasoning and a sound mathematical methodology. His approach was to establish a clear distinction between the world as it really is and the world which we create and cognize through the use of mathematical models and physical experiments. He realized that the understanding one has of something created by oneself is of a different nature to that understanding gleaned from simple observation. This distinction means we can never be free from subjectivism. Vico saw that mathematical models appear intelligible and coherent to our minds because our minds alone have made them. 110 All our enquiry is necessarily anthropocentric because we employ man-made tools and human reason in its pursuit. Vico believed the 'real' world of nature, which obeyed knowable but inaccessible rules, differed in kind from our do-it-yourself model of intelligible but man-made laws;

Create the truth which you wish to cognize, and I, in cognizing the truth that you have proposed to one, will 'make' it in such a way that there will be no possibility of my doubting it, since I am the very one who has produced it. 111

Vico recognized four distinct types of knowledge and warned against abstracting conclusions drawn from information within one category of enquiry into others. One of his categories is Scienze: a priori knowledge of the real nature of things, which one can only possess of artefacts or models we have made. God alone possesses this type of knowledge of everything. Vico himself was a Christian teleologist who believed that we could only know the ultimate ends of Nature by revelation (which would endow us with the third of his four types of knowledge). Yet, his ideas provide a natural prologue to the more critical analyses of the Design Argument and the theory of knowledge which were to be developed by David Hume and Immanuel Kant.

In his posthumous publication, the Dialogues Concerning Natural Religion, David Hume (1711-76) mounted a sceptical attack on the logical structure of many naive design arguments and indeed also upon the rational basis of any form of scientific enquiry. In the Dialogues, and in other works, Hume calls the Design Argument 'the religious hypothesis' and proceeds to attack its foundation from a variety of directions. Hume's approach was entirely negative; whereas most of his contemporaries accepted the rationality and ordered structure of the world without question, Hume did not. A common-sense view of the world, along with the metaphysical trimmings that had been added to the Newtonian world model, Hume rejected. His Dialogues are analogous to Cicero's De Natura Deorum; the Dialogues describe a debate in which the sceptical Philo umpires and examines the argument between two supporters of different types of 'religious hypothesis'. On the one hand there is Demea, representing the school of a priori truth and revelation and on the other Cleanthes, who reasons in an a posteriori manner, employing the fashionable synthesis between Final Causes and the mechanical world-view.
The views of Newton's supporters are voiced through Cleanthes, who actually adopts MacClaurin's statement of the Newtonian Design Argument when summarizing his position:


I shall briefly explain how I conceive this matter. Look round this world: Contemplate the whole and every part of it. You will find it to be nothing but one great machine, subdivided into an infinite number of lesser machines . . . All these various machines and even their most minute parts, are adjusted to each other with an accuracy, which ravishes into admiration all men who have ever contemplated them. The curious adapting of means to ends, throughout all nature, resembles exactly, though it much exceeds, the productions of human contrivance; of human design, thought, wisdom and intelligence . . . 115

The principal objections which Hume allows to surface during the course of the discussion are threefold. Firstly, the Design Argument is unscientific; there can be no causal explanation for the order of Nature because the uniqueness of the world removes all grounds for comparative reference. Secondly, analogical reasoning is so weak and subjective that it could not even provide us with a reasonable conjecture, never mind a definite proof. And finally, all negative evidence has been conveniently neglected. Hume maintains that a dispassionate approach could argue as well for a disorderly cause if it were to concentrate upon the disorderly aspects of the world's structure. His aim is not so much to refute the Design Argument as to show it only raises questions that are undecidable from the evidence available. Hume's spokesmen question the anthropocentric bias of the Design Argument:

. . . we are guilty of the grossest, and most narrow partiality, and make ourselves the model of the Universe . . . What peculiar privilege has this little agitation of brain which we call thought, that we must thus make it the model of the whole Universe. 116

Hume also draws attention to the tautological nature of the deductions from animal structure. For if the harmonious interrelation of organs is a necessary condition for life, how could we fail to inhabit a world of harmonious appearances?

It is vain . . . to insist upon the uses of the parts in animals or vegetables and their curious adjustments to each other. I would fain know how an animal could subsist, unless its parts were so adjusted? 117

An alternative explanation of order is suggested: perhaps the development of the world is random but has had an infinite amount of time available to it so all possible configurations arise until eventually a stable self-perpetuating form is found:

. . . let us suppose it [matter] finite. A finite number of particles is only susceptible to finite transpositions. And it must happen in an eternal duration, that every possible order or position must be tried an infinite number of times . . . a chaos ensues; till finite though innumerable revolutions produce at last some forms, whose parts and organs are so adjusted as to support the forms amidst a continued succession of matter. 118


Despite these counter-arguments Cleanthes' support for the Design Argument was so carefully built up that there has even been scholarly debate as to where Hume's own sympathies really lay. Elsewhere Hume appears to display a vitalist view, believing matter to possess some intrinsic self-ordering property: 119

... that order, arrangement, or the adjustment of final causes is not, of itself, any proof of design; but only in so far as it has been experienced to proceed from that principle. For aught we can know a priori matter may contain the source or spring of order originally, within itself, as mind does . . . It is only to say, that such is the nature of material objects and that they are originally possessed by a faculty of order and proportion.

Hume's most telling remarks in the Dialogues seek to convince the reader that problems of design simply cannot be meaningfully posed. Our position in the Universe introduces natural limitations upon our powers of generalization: 120

A very small part of this great system, during a very short time is very imperfectly discovered to us: And do we thence pronounce decisively concerning the origin of the whole? . . . Let us remember the story of the Indian philosopher and his elephant. It was never more applicable than to the present subject. If the material world rests upon a similar ideal world this ideal world must rest upon some other; and so on, without end. It were better, therefore, never to look beyond the present material world. 121

At the conclusion of the dialogue the sceptical Philo admits to 'a deeper sense of religion impressed on the mind', for even though the arguments he has heard in support of design are logically unsound they still have considerable psychological impact upon him; they strike him, he says, 'with irresistible force'. History shows that the Humean tirade against the simple design arguments of the English physicists fell, for the time being, upon deaf ears. There were probably a number of reasons for this. Many English intellectuals, for instance Samuel Johnson and Joseph Priestley, felt that Hume was being merely mischievous or downright frivolous in an attempt to ensure literary fame and he was an isolated and ignored figure in literary circles even during his own lifetime. His Dialogues were published posthumously. More significant hurdles to Hume's acceptance by the scientific community were his eccentric scientific ideas. His unusual theory of causality and the serious suggestion that the Universe may be organic rather than mechanical in nature must have seemed rather naive when held up against the staggering quantitative achievements of the Newtonian system. Those, like Maupertuis, who subscribed to more sophisticated systems of final causation would not have regarded his objections as relevant 122 and some of his arguments could be falsified by detailed scientific examples. However, although his objections to the Design Argument were to lie temporarily dormant, they were to prove extremely significant for the future spirit of critical inquiry.

At least one zoologist, Erasmus Darwin (1731-1802), who was Charles Darwin's grandfather, enthusiastically took up Hume's intimations concerning the organic nature of the World. Erasmus Darwin was starting to take the early steps towards an evolutionary theory of animal biology, maintaining that the components of an animal or plant were not designed for the use to which they are currently applied, but rather, have grown to fit that use by a process of gradual improvement. However, in order to maintain his belief in theistic design Darwin had to subsume this evolutionary development within some deeper all-embracing plan—a Universal Teleology, an idea common amongst romantic philosophers of this period: 123

The late Mr. David Hume . . . concludes that the world itself might have been generated, rather than created; that is, it might have been gradually produced from very small beginnings increasing by the activity of its inherent principles, rather than by a sudden evolution of the whole by the Almighty fiat—What a magnificent idea of the infinite power to cause the causes of effects, than to cause the effects themselves. 124

Of the few other thinkers who saw deeper possibilities and challenges to the Design Argument growing from David Hume's work the most famous is Immanuel Kant (1724-1804). He read Hume's Dialogues in a translated manuscript form in 1780 and subsequently acknowledged his debt to him for awaking him 'from his dogmatic slumbers'. Kant's early work had attempted to reconcile the mechanical and teleological views of the world contained in the works of Leibniz and Newton. There he displayed a cautious respect for the Design Argument and the way in which it had been deployed to deduce the existence of a Supreme Being as, for example, in Aquinas' Fifth Way.

In our humble opinion this cosmological proof is as old as the reason of man. In this respect the endeavours of Derham, Nieuwentyt, and many others, though they sometimes betray much vanity in giving all sorts of physical insights or even chimeras a venerable semblance by the signal of religion, do human reason honour. 125

Kant's later critical works take up the claims of Hume concerning the impossibility of deriving sure and necessary principles of a universal nature from empirical data. Independently of Vico he recognizes the irreducible subjectivity of our observations and interpretations. In the Critique of Pure Reason Kant summarizes the Design Argument in detail and calls it the 'Physico-Theological Argument':

(1) In the world we everywhere find clear signs of an order in accordance with a determinate purpose, carried out with great wisdom; and this in a Universe which is indescribably varied in content and unlimited in extent. (2) This purposive order is quite alien to the things of the world and only belongs to them contingently; that is to say, the diverse things could not of themselves have co-operated, by so great a combination of diverse means, to the fulfilment of determinate final purposes, had they not been chosen and designated... (3) There exists, therefore, a sublime and wise cause. (4) The unity of this cause may be inferred... with certainty in so far as our observation suffices for its verification, and beyond these limits with probability in accordance with the principle of analogy. 126

He admits great respect for this argument because of its stimulus to scientific enquiry: he realizes that many biological investigations have been motivated by the expectation of purpose in organic structures:

It enlivens the study of nature . . . It suggests ends and purposes, where our observation would not have detected them by itself, and extends our knowledge of nature by means of the guiding concept of a special unity, the principle of which is outside Nature . . . 127

However, Kant then goes on to undermine the logical foundation of any contention that design exists in nature, arguing that we can neither prove nor disprove statements about the real world by pure reason alone. For, in reaching our conclusions we inevitably introduce facts and observations and employ our, possibly erroneous, 'practical reason'. It is only with respect to the 'practical reason' that the Design Argument can maintain its cogency:

It would therefore be not only extremely sad, but utterly vain to diminish the authority of that proof... we have nothing to say against the reasonableness and utility of this line of argument, but wish on the contrary to commend and to encourage it, yet we cannot approve of the claims which this proof advances of apodictic certainty. 128

Then he explains how this lack of 'certainty' arises by pointing out that all our empirical enquiries into the structure of Nature regard it as an entity which incorporates within itself a system of empirical laws. These laws are unified and naturally adapted to the faculties of our own cognition. The design we perceive is necessarily mind-imposed, shaped by our innate categories of thought. Although the 'things in themselves' are mind-independent, our act of understanding completely creates the categories in terms of which we order them. Inevitably we view the world through rose-coloured spectacles. These self-created categories cannot themselves be ascertained by observation; they are a priori conditions of the experience we have, like the perception of the space-time continuum. We could not through our experience hope to
ascertain the conditions of such experience. Our observation of order and structure in the Universe, he argues, arises inevitably because we have introduced such concepts into our analysis of experience. We must not then proceed to rederive them from it. We can say nothing stronger than that the world is such as to make its perception by our minds in any form but ordered, impossible. Kant claimed morality as the final end of nature, for when we consider moral beings, he writes, we have a reason for being warranted to regard the world as a system of final causes. 129

He thought that only through this ethico-teleology could the final cause of the world be discerned; but its nature is disjoint from the arena of 'physico-theological' design arguments because the latter do not concentrate on the character of final ends, only the transient ends that benefit ourselves here and now:

Now I say that no matter how far physico-theology may be pushed it can never disclose to us anything about a final end of creation; for it never even begins to look for a final end. 130

Kant's notion of teleology had an enormous influence on the work of German biologists in the first half of the nineteenth century. Like Kant, for the most part these biologists did not regard teleology and mechanism as polar opposites, but rather as explanatory modes complementary to each other. Mechanism was expected to provide a completely accurate picture of life at the chemical level, without the need to invoke 'vital forces'. Indeed, Kant and many of the German biologists were strongly committed to the idea that all objects in Nature, be they organic or inorganic, are completely controlled by mechanical physical laws. These scientists had no objection to the idea that living beings are brought into existence by the mechanical action of physical laws. What they objected to was the possibility of constructing a scientific theory, based on mechanism alone, which described that coming into being, and that could completely describe the organization of life. The impossibility of such a scientific theory was not due to non-mechanical processes in Nature, but rather lay in the inherent limitations of the human mind. In Kant's view, a mechanical explanation, which was equivalent to a causal explanation in Kant's philosophy, could be given only when there is a clear separation between cause and effect. In living beings, causes and effects are inextricably mixed. An effect in a living being cannot be completely understood without describing every reaction in the being: ultimate biological explanations require a special non-mechanical notion of causality—teleology—in which each part is simultaneously cause and effect. 188,189 Parts related to the whole in this way transcend mechanical causality. The order and arrangement of the organism is, according to Kant, a fundamental explanatory mode in biological science.

The limitation of explanation in terms of mechanical causality can perhaps be best understood by comparing a living being to a computer. As Michael Polanyi has pointed out, the internal workings of the computer can of course be completely understood in terms of physical laws. What cannot be so explained is the computer's program. To explain the program requires reference to the purpose of the program, that is, to teleology. Even the evolution of a deterministic Universe cannot be completely understood in terms of the differential equations which govern the evolution. The boundary conditions of the differential equations must also be specified. These boundary conditions are not determined by the laws of physics, which are the differential equations. The universal boundary conditions are as fundamental as the physical laws themselves; they must be included in any explanation on a par with the physical laws. In a biological organism, the analogues of the computer program are the processing and organizational plans coded in the organism's DNA.

The German biologists who followed Kant's program—the historian Lenoir has named them the teleomechanists—sought to discover the plan in the over-all organization of the organism. As the physiologist Hermann Lotze put it, 190,191

Thus all parts of the animal body in addition to the properties which they possess by virtue of their material composition also have vital properties; that is, mechanical properties which are attributable to them only so long as they are in combination with the other parts... Life belongs to the whole but it is in the strictest sense a combination of inorganic processes . . . Biological organization is, therefore, nothing other than a particular direction and combination of pure mechanical processes corresponding to a natural purpose. The study of organization can only consist therefore in the investigation of the particular ways in which nature combines those processes and how in contrast to artificial devices she unites a multiplicity of divergent series of phenomena into complex atomic events. 192

193
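The point made above about deterministic laws and boundary conditions can be illustrated numerically: the differential equation alone does not determine the evolution, for the same law with different boundary data yields different histories. A minimal sketch, in which the particular equation, integrator, and numbers are our own illustrative choices rather than anything in the text:

```python
# Illustration: one law (x'' = -x), two sets of initial conditions.
# The "law" (the differential equation) is shared by both runs; only
# the boundary data differ, yet the evolutions diverge immediately.

def evolve(x0, v0, dt=0.001, steps=1000):
    """Integrate x'' = -x to t = steps*dt by the semi-implicit Euler method."""
    x, v = x0, v0
    for _ in range(steps):
        v -= x * dt   # update velocity from the force law
        x += v * dt   # update position from the new velocity
    return x

a = evolve(1.0, 0.0)   # starts displaced, at rest  (x(1) ~ cos 1)
b = evolve(0.0, 1.0)   # starts at origin, moving   (x(1) ~ sin 1)
```

The two runs obey exactly the same 'law'; everything that distinguishes their histories is carried by the initial data, which plays the role of Polanyi's program.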

The study of biological organization by the teleomechanists led to a number of important discoveries, particularly in embryology, which they studied because the action of an organism's organizational plan is most manifest when the creature is being formed. For example, such studies led to the discovery of the mammalian ovum by the teleomechanist von Baer. In spite of such scientific feats, by the latter part of the nineteenth century the teleomechanists had been eclipsed by the reductionists, led by
Hermann Helmholtz. The great weakness of the teleomechanists was their tendency to think of teleology not only as a plan of organization but also as an actual life force, a tendency which Kant warned against. This led them to believe that it was impossible for organisms to change their fundamental plan of organization, that is, to evolve, under the action of inorganic forces. As a consequence, they later opposed Darwin's theory of evolution by natural selection, and as the evidence for such evolution became overwhelming, they ceased to exert an influence on the development of biology.

Kant's important ideas in critical philosophy and the theory of knowledge which grew out of his work were to have little or no effect upon the growing momentum of the Design Argument in England. The first books describing Kant's work began to appear in English from about 1796 onwards, but the logical difficulties they highlighted were not taken seriously by allies of William Paley (1743-1805), author of the famous Natural Theology, a work that was to become something of a minor classic in its own time and synonymous with the gospel according to anthropocentric design. Paley had a distinguished early career at Cambridge; the Senior Wrangler in 1763, he was later greatly admired by his students for a lucid and memorable lecturing style but his progressive social views prevented him from rising to high office in the Church of England. On reading his work one is struck by the clarity of his explanation, the skill with which he marshals his material and the naïvety with which he uses his biological examples. This last trait actually led some European supporters of the Design Argument to disown him in embarrassment. However, because of its lucidity and the widespread support for its conclusions, Natural Theology was for many years a set text at Cambridge and a special edition was even produced with essay questions bound into it for undergraduate study.
Charles Darwin was to recall how he 'was charmed and convinced by the long line of argumentation' on reading it during his undergraduate years. Where Kant was a model of obscurity, Paley is a paragon of literary clarity. Paley bases his case for design entirely upon the constitution rather than the development of natural things and interprets this constitution in a completely anthropocentric fashion: everywhere in Nature, he claims, we see elements of design and purpose. Design implies a Designer. Therefore Nature is the result of a Designer who is, by implication, God. Paley claims that, wielded in this manner, teleology 'has proved a powerful and perhaps indispensible organ of physical discovery' but he expresses a dislike for the notion of 'Final Causes', largely because of its Scholastic undertones: 188

132

. . . it were to be wished that the scholastic phrase 'final cause' could, without affectation, be dropped from our philosophical vocabulary and some more unexceptional mode of speaking be substituted instead of it. 133

His central argument appears dramatically in the opening lines of his book.

In crossing a heath,... suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place... For this reason, and for no other, viz. that, when we come to inspect the watch, we perceive... that its several parts are framed and put together for a purpose. 134

The analogy of the watch-world had been the watchword of many earlier workers. The advantage of the analogy, Paley claims, is that it makes his point regardless of whether one knows the origin of the watch or understands every facet of its machinery. Furthermore, he believed it evaded other well-known objections: even though the world (watch) occasionally malfunctions it would be peculiar not to attribute its mechanism to contrivance. It would be senseless, he says, to claim it was merely 'one of possible combinations of material forms, a result of the laws of metallic nature' or the inevitable consequence of there having 'existed in things a principle of order.' Nothing, he argues, is to be gained

by supposing the watch before us to have been produced from another watch, that from a former, and so on indefinitely... A designing mind is neither supplied by this supposition, nor dispensed with. 135

The idea that postulating 'laws' of Nature gave explanations of design he thought to be a form of mysticism, 'a mere substitution of words for reasons, names for causes.' The so-called 'laws' of Nature may be, even now, nothing more than a way of codifying observations that have been made. They do not guarantee anything will take place in the future. They do not provide an explanation of the sort Paley required. Paley continues consistently and obliviously to mix analogies from the organic and mechanical realms; for example, when discussing explanations of order via evolutionary development and summarizing the general nature of his methodology he admits:

The generations of the animal no more account for the contrivance of the eye or ear, than, upon the supposition stated..., the production of a watch by the motion and mechanism of a former watch, would account for the skill and intention evidenced in the watch so produced . . . Every observation which was made . . . concerning the watch, may be repeated with strict propriety concerning the eye; concerning animals; concerning plants; concerning, indeed all the organized parts of the works of Nature. 136

This complete faith in the mechanistic analogy, even in the organic
realm, convinces Paley that we can infer ultimate causes from local effects because of the string of causal and mechanical connections that will exist between them. He brushes aside the critique of Hume, Spinoza and Descartes regarding the transposition of causes for effects:

'Of a thousand other things,' say the French academicians, 'we perceive not the contrivance, because we understand them only by the effects, of which we know not the causes': but we here treat of a machine, all the parts whereof are visible; and which need only be looked upon to discover the reasons of its motion and action . . . 137

Like Galen, Boyle, Newton and many others before him, Paley concentrates upon the internal structure of the human eye as the example of design par excellence; so enamoured is he by the eye's remarkable structure that he exclaims,

Were there no example in the world of contrivance, except that of the eye, it would be alone sufficient to support the conclusion which we draw from it. 138

There is much that is humorous in his examples of design: he dwells upon the foresight displayed by the provision of the epiglottis in the human throat; the following passage has been dubbed the 'devotional hymn to the epiglottis'! 139

Reflect how frequently we swallow, how constantly we breathe. In a city feast, for example, what deglutition, what anhelation! Yet does this little cartilage, the epiglottis, so effectually interpose its office, so securely guard the entrance of the wind-pipe that... Not two guests are choked in a century. 140

More noteworthy are the passing parries he aims at two alternative explanations of order. In accordance with his whole approach, firmly grounded in observation (and we note in passing that Paley was a keen amateur naturalist), he excludes them on the basis of current observations. Concerning the argument that orderly forms were the inevitable result of normalizing selection from an array of randomly constituted organisms, he takes a blinkered approach to fossilized remains and remarks that: 141

[chance] . . . would persuade me to believe . . . every organized body which we see, are only so many out of the possible varieties and combinations of being, which the lapse of infinite ages has brought into existence; that the present world is the relic of that variety; millions of other bodily forms and other species having perished, being by the defect of their constitution incapable of preservation, of continuance by generation. Now there is no foundation whatever for this conjecture in anything which we observed in the works of nature; no such experiments are going on at present; no such energy operates... A countless variety of animals might have existed, which do not exist. 142

Paley felt that chance was not a mechanism, as many regarded it at that
time, but merely a label for 'the ignorance of the observer.' He also claimed that appeal to some inherent and universal ordering principle in Nature was in conflict with observation:

... a principle of order, acting blindly and without choice, is negatived by the observation, that order is not universal; which it would be, if it issued from a constant and necessary principle . . . where order is wanted there we find it; where order is not wanted, i.e. where, if it prevailed, it would be useless, there we do not find it . . . No useful purpose would have arisen from moulding rocks and mountains into regular solids, bounding the channel of the ocean by geometrical curves; or form a map of the ocean resembling a table of diagrams in Euclid's Elements, or Simpson's Conic Sections. 143

The second half of Paley's Natural Theology is much more interesting to post-Darwinians than the first. Here he moves from the world of zoology and anatomy to consider the laws of motion and gravitation and their role in astronomy. The first interesting remarks concern the velocity of light: because of its enormous value he infers that the mass of the photon needs to be extremely small to be compatible with our existence:

Light travels from the sun at the rate of twelve millions of miles in a minute . . . It might seem to be a force sufficient to shatter to atoms the hardest bodies. How then is this effect, the consequence of such prodigious velocity, guarded against? By a proportionable minuteness of the particles of which light is composed. 144

He continues with a discussion of astronomical phenomena, gratefully acknowledging his debt to the Rev. J. Brinkley, Professor of Astronomy at Dublin, for assistance with many details. He confesses that he feels there to be severe disadvantages as well as advantages in this new line of reasoning: 145

My opinion of astronomy has always been, that it is not the best medium through which to prove the agency of an intelligent Creator; but that, this being proved, it shows, beyond all other sciences, the magnificence of his operations . . . but it is not so well adapted as some other subjects are to the purpose of argument. We are destitute of the means of examining the constitution of the heavenly bodies. The very simplicity of their appearance is against them. 146

In this area Paley feels adrift from the practice of direct observation he so values and is also relieved of his principal dialectical device because he feels 'we are cut off from one principal ground of argumentation—analogy'. Undoubtedly, he also feels a little less confident of his assertions in an area where he must seek considerable guidance from others. Now separated from his false analogical guide he proceeds with Brinkley's help to make a number of insightful observations concerning the stability of the solar system and the form of the law of gravitation. Many of these have been subsequently re-derived in connection with the question of whether we could, from the fact of our own existence alone, 147
actually deduce that the world possesses precisely three spatial dimensions (see section 4.8). Paley also points out that the evolution of the Sun rules out the possibility of an infinite steady-state history without evolutionary change:

it follows, that the sun also himself must be in his progress towards growing cold; which puts an end to the possibility of his having existed, as he is from eternity. 148

He goes on to describe the manner in which the terrestrial oblateness and ocean content sensitively determine the local environment and shows how the present topographical circumstances are necessary for our own existence. The next observations he makes are the most intriguing from a modern perspective: he points out the unique features that are intrinsic to Newton's inverse square law of gravitation. The basis for his comparative study is an imaginary ensemble containing all possible power laws of variation for the gravitational force. The size of the subset of this collection which is consistent with our existence can then be examined in Anthropic fashion:

whilst the possible laws of variation were infinite, the admissible laws, or the laws compatible with the preservation of the system, lie within narrow limits. If the attracting force had varied according to any direct law of the distance, let it have been what it would, great destruction and confusion would have taken place. The direct simple proportion of the distance would, it is true, have produced an ellipse; but the perturbing forces would have acted with so much advantage, as to be continually changing the dimensions of the ellipse, in a manner inconsistent with our terrestrial creation. 149

This enables Paley to quantify that formerly rather vague, qualitative notion of the mechanical optimality in the World's structure and laws. Next he considers the fitness of the various possible force laws in connection with the stability of the elliptical planetary orbits which he assumes are a necessary condition of our existence:

Of the inverse laws, if the centripetal force had changed as the cube of the distance, or in any higher proportion . . . the consequence would have been, that the planets, if they once began to approach the sun, would have fallen into its body; if they once, though by ever so little, increased their distance from the centre, would forever have receded from it . . . All direct ratios of the distance are excluded, on account of the danger from perturbing forces; all reciprocal ratios, except what lie beneath the cube of the distance,... would have been fatal to the repose and order of the system . . . the permanency of our ellipse is a question of life and death to our whole sensitive world. 149
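Paley's 'narrow limits' agree with a standard result of classical mechanics (a modern reconstruction, not anything in Paley's text): for an attractive central force F(r) = -k/r^n, circular orbits are stable only when n < 3, so the inverse cube and all steeper laws are indeed fatal:

```latex
% Effective potential for mass m, angular momentum L, force F(r) = -k/r^n  (n \neq 1):
V_{\mathrm{eff}}(r) = -\frac{k}{(n-1)\,r^{n-1}} + \frac{L^{2}}{2mr^{2}}
% A circular orbit at r_0 satisfies V'_{\mathrm{eff}}(r_0) = 0, i.e.
k/r_0^{\,n} = L^{2}/(m r_0^{3}),
% and is stable when V''_{\mathrm{eff}}(r_0) > 0. Substituting the orbit condition gives
V''_{\mathrm{eff}}(r_0) = \frac{(3-n)\,k}{r_0^{\,n+1}} > 0
\quad\Longleftrightarrow\quad n < 3 .
```

On this criterion the inverse-square law (n = 2) sits comfortably inside the stable range, exactly as Paley asserts.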

Having thus narrowed down the form of the force law to an inverse power law he claims that the inverse square is uniquely selected because it allows extended bodies to behave gravitationally as point particles
with an equal mass concentrated at the centre of mass of the body (see section 4.8):

whilst this law prevails between each particle of matter, the united attraction of a sphere, composed of that matter obeys the same law... it is a property which belongs to no other law of attraction that is admissible . . . except attraction varying directly as the distance. 150

The possibility of precisely circular orbits is also excluded on the grounds of stability and Paley argues that the selection of a force law which optimally serves 'to guard against [perturbations] running to destructive lengths, is perhaps the strongest evidence of care and foresight that can be given.' His case for anthropocentric design rests upon the concurrence in our solar system of the four circumstances required for the stability of the planetary orbits against perturbations of a 'periodical or vibrating' nature:

... viz. that the force shall be inversely as the square of the distance; the masses of the revolving bodies small, compared with that of the body at the centre; the orbits not much inclined to one another; and their eccentricity little. 151

To complete this intriguing collection of mathematical arguments for anthropocentric design Paley makes some remarks similar to those of Newton in his correspondence with Bentley concerning the gravitational stability of the Universe. This provides him with a simple argument for the finite age of the Universe:

If the attraction acts at all distances, there can be only one quiescent centre of gravity in the universe: and all bodies whatever must be approaching this centre, or revolving around it . . . if the duration of the world had been long enough to allow of it, all its parts, all the great bodies of which it is composed, must have been gathered together in a heap round this point. 152

Despite the naivety of its earlier treatment of some of the human sciences, Paley's widely read work was to play an important role in summarizing and clearly placing before scientists' eyes the simple facts of adaptation in the natural world. In order to supersede his teleological thesis another theory would be required to give a convincing explanation for the vast array of detailed examples he catalogues. The lack of a viable and positive alternative may possibly explain the negligible impact that the afore-mentioned metaphysical objections to the Design Argument actually had. Hume offered no such explanations or deductions with clear observational consequences whereas the hypothesis which was to displace the Paleyean branch of teleology—natural selection—did provide a plausible alternative explanation for the very facts upon which the anthropocentric design argument was based. The relevance of Paley's organic examples of 'design' was later recognized by Huxley, who went so far as to
remark that Paley 'proleptically accepted the modern doctrine of evolution'. It is also worth noting that Paley's astronomical examples—which are so similar to modern Anthropic arguments—are clearly of a different and inorganic nature and lie entirely outside the jurisdiction of Darwinian natural selection. Strangely, they have been ignored in subsequent evaluations of his work.

Paley's work opened the floodgates for apologetic treatises on every conceivable aspect of 'design', although few of these had anything new to say. The most encyclopaedic and systematic arose from the bequest of the Rev. Francis Egerton, the Eighth Earl of Bridgewater, who died in 1829. Egerton charged his executors with the duty of selecting eight eminent scientific authors to demonstrate:

The Power, Wisdom and Goodness of God, as manifested in the Creation; illustrating such work by all reasonable arguments, as for instance, the variety and formation of God's creatures in the animal, vegetable, and mineral Kingdoms; the effect of digestion, and thereby of conversion; the construction of the hand of man, and an infinite variety of other arguments. 153

The scholars chosen to carry out this task were Charles Bell, William Buckland, Thomas Chalmers, John Kidd, William Kirby, William Prout, Peter Roget and William Whewell, with a later independent contribution by the mathematician Charles Babbage. They were all eminent scholars of their day; several held university lectureships in the sciences and some, like the chemist William Prout, are now famous for their scientific work—and almost everyone has Roget's Thesaurus on their bookshelves. Despite their varying subject matter the Bridgewater Treatises have two things in common: they were all published in London and all sold out almost at once, subsequently going through many editions. With the exception of Babbage's numerical study, the style of the contributions is reminiscent of earlier eighteenth-century works and marked by a dogmatically anthropocentric bias that may be ascertained from their fairly explicit titles. It has been suggested that the whole collection is well summed-up by a sentence in Prout's contribution, 'The argument of design is necessarily cumulative; that is to say, is made up of many similar arguments!' 154

Whereas in England this teleological spirit appears to have been firmly entrenched in the minds of many scientists, evolutionary ideas were beginning to germinate elsewhere. The biologist von Baer (1792-1876) remarked in his 1834 lectures that 'only in a very childish view of nature could organic species be regarded as permanent and unchangeable types'. 155 Another articulate critic of teleology who was considering the consequences of an evolutionary perspective was Goethe (1749-1832). A widely gifted man who was responsible for important contributions in anatomy, botany, poetry and philosophy, Goethe tried to introduce an evolutionary perspective into every one of these disciplines. As a student he studied in Leipzig and Strasbourg where his thinking was strongly influenced by the works of Bacon, Spinoza, and Kant. Like Francis Bacon, Goethe detects and rejects that systematic bias in Man's self-image which tempts him to elevate himself relative to the world at large:

Man is naturally disposed to consider himself as the centre and end of creation, and to regard all the beings that surround him as bound to subserve his personal profit . . . He cannot imagine that the least blade of grass is not there for him. 156

2.7 The Devolution of Design

The apparent uniqueness of the Universe primarily depends upon the fact that we can conceive of so many alternatives to it. C. Pantin

The seventy-fifth section of Kant's Critique of Judgement bears the title 'The conception of an objective finality of nature is a critical principle of reason for the use of the reflective judgement', and in it Kant made a confident claim:

It is . . . quite certain that we can never get a sufficient knowledge of organized beings and their inner possibility, much less get an explanation of them, by looking merely to mechanical principles of nature... we may confidently assert that it is absurd for men even to entertain any thought of so doing or to hope that maybe another Newton may some day arise, to make intelligible to us even the genesis of but a blade of grass from natural laws that no design has ordered. Such insight we must absolutely deny to mankind. 129

When the young Charles Darwin (1809-82) began his theological studies at Christ's College Cambridge, where Paley had been both a student and a fellow, he did not study Kant; but for Darwin the study of Paley's various works was compulsory. Many years later Darwin was to recall in his autobiography these early studies: 157

In order to pass the B.A. examination, it was also necessary to get up Paley's... Evidences. The logic of this book and, as I may add, of his Natural Theology gave me as much delight as did Euclid. The careful study of these works, without attempting to learn any part by rote, was the only part of the academical course which, as I then felt and as I still believe, was of the least use to me in the education of my mind. I did not at that time trouble myself about Paley's premises; and taking these on trust, I was charmed and convinced by the long line of argumentation. 158

Following his monumental development of the theory of natural selection in parallel with Wallace, Darwin remarked on its interaction with the traditional design arguments:

The old argument from design in nature, as given by Paley, which formerly seemed to me so conclusive, fails, now that the law of Natural Selection has been discovered. 159

As he grew older Darwin became more agnostic, especially with regard to the awkward problem of the evolution of intelligence. He considered:

the impossibility of conceiving this immense and wonderful universe, including man... as the result of blind chance or necessity. When thus reflecting I feel compelled to look to a First Cause having an intelligent mind in some degree analogous to that of man and I deserve to be called a Theist. But then arises the doubt, can the mind of man, which has, as I fully believe, been developed from a mind as low as that possessed by the lowest animal, be trusted when it draws such grand conclusions? 160

Many have looked to the relegation of Man's special status in relation to the animal world as the principal cause of hostility between Darwinians and those of an orthodox religious persuasion. But it appears that the possible demolition of the Design Argument may have been an equally strong motivation for opposition. Charles Hodge made this explicit at the time in his book What is Darwinism?: 161

It is, however, neither evolution nor natural selection which gives Darwinism its peculiar character and importance. It is that Darwin rejects all teleology, or the doctrine of final causes. 162

The nineteenth-century philosopher William Graham also pointed out that Darwin had, above all, launched a successful assault on the Design Argument of the natural theologians:

Now it appears that Darwin has at last enabled the extreme materialist to attempt and carry the design argument, the last and hitherto impregnable fortress behind which natural theology has entrenched herself. 163

Ideas of a general evolutionary development had of course been in the wind and were suggested by many previous workers, but it was only Darwin's introduction of the concept of natural selection, along with a vast collection of observational evidence, that finally displaced the anthropocentric design arguments drawn from biology. The stress laid upon the many precise adaptations visible in Nature by writers like Paley and the Bridgewater authors can be seen to have played an interesting role in this development. Their claims for design were usually based upon a systematic study of biological and botanical observations and, whether or not the Design Argument was regarded as true, they served to focus the attention of naturalists upon a set of remarkably adapted features. The new evolutionary world-view led predictably to a re-evaluation of the teleological interpretation, and to the conception of a universal teleology that used the process of natural selection to direct events towards a final cause. 164 Most notable amongst the supporters of this view was the American botanist and Calvinist, Asa Gray (1810-88). Gray had been appointed professor of natural science at Harvard in 1842 and, through his exchange of ideas with Darwin before the publication of the Origin of Species in 1859, had confirmed its thesis by his own independent botanical studies. His approach to teleology was to use the Darwinian hypothesis as a panacea to solve many of the problems which had formerly been brushed under the carpet by supporters of the Design Argument, for: 165

Darwinian teleology has the special advantage of accounting for the imperfections and failures as well as for successes. It not only accounts for them, but turns them to practical account... So the most puzzling things of all to the old-school teleologists are the principles of the Darwinian,... it would appear that in Darwinian evolution we may have a theory that accords with, if it does not explain, the principal facts, and a teleology that is free from the common objection . . . if [a theist] cannot recognize design in Nature because of evolution, he may be ranked with those of whom it was said 'Except ye see signs and wonders ye will not believe'. 166

In a letter to de Candolle in 1863 Gray offered his

... hearty congratulations of Darwin for his striking contributions to teleology... knowing well that he rejects the idea of design, while all the while he is bringing out the neatest illustrations of it. 167

Darwin liked Gray's interpretation of his work, but perhaps only because it helped soothe the public antagonism to his ideas; he remarked in a private letter to Gray that 'what you say about Teleology pleases me especially and I do not think anyone else has ever noticed the point'. In the later editions of the Origin he even acknowledged Gray as 'a celebrated author and divine' who had:

gradually learnt to see that it is just as noble a conception of the Deity to believe that he created a few original forms capable of self-development into other and needful forms . . .

Another American who recognized the impact of evolution on Design was the philosopher and science writer John Fiske (1842-1901), who gave a series of thirty-five lectures on Darwinian evolution at Harvard in 1871; they subsequently appeared in revised and expanded book-form as the Outlines of Cosmic Philosophy. Fiske was another to realize that it was the overthrow of the anthropocentric design arguments by the mechanism of natural selection that made Darwin's work so unpopular:

From the dawn of philosophic discussion, Pagan and Christian, Trinitarian and Deist, have appealed with equal confidence to the harmony pervading nature as the surest foundation of their faith in an intelligent and beneficent Ruler of the universe. We meet the argument in the familiar writings of Xenophon and Cicero, and it is forcibly and eloquently maintained by Voltaire as well as by Paley, and, with various modifications, by Agassiz as well as by the authors of the Bridgewater Treatises. One and all they challenge us to explain, on any other hypothesis than that of creative design, these manifold harmonies, these exquisite adaptions of means to ends, whereof the world is admitted to be full, and which are especially conspicuous among the phenomena of life . . . In natural selection there has been assigned an adequate cause for the marvellous phenomena of adaption, which had formerly been regarded as clear proofs of beneficent creative contrivance. 168

Like Gray, Fiske believed that natural selection did not necessitate the rejection of a teleology that was conceived on a large enough scale. Fiske's development of these ideas was looked upon with approval by his friend Thomas Huxley, to whose memory his subsequent work Through Nature to God was dedicated. Huxley (1825-95) had set himself up as the principal public defender of the evolutionary 'faith' in England on Darwin's behalf, but was himself surprisingly sympathetic to the teleological interpretation of evolutionary theory. Huxley foresaw the demise of natural theology but was at first taken aback by the manner in which the evolutionary hypothesis had received a teleological interpretation from some of his colleagues: 169

It is singular how one and the same book will impress different minds. That which struck the present writer most forcibly on his first perusal of the Origin of Species was the conviction that teleology, as commonly understood, had received its death-blow at Mr. Darwin's hands. 170

Huxley was the first to draw attention to the contribution which the earlier teleological ideas had made in focusing attention upon a number of remarkable organic adaptations. This common interest of teleology and evolution, he said, meant that Darwin

. . . has rendered a most remarkable service to philosophic thought by enabling the student of nature to recognize, to the fullest extent, those adaptions to purpose which are so striking in the organic world, and which teleology has done good service in keeping before our minds . . . The apparently diverging teachings of the teleologist and of the morphologist are reconciled by the Darwinian hypothesis. 171

More interesting still is Huxley's recognition of an awkward problem for the idea of natural selection—determinism. He saw that because the mechanistic view of the world must regard the later products of natural selection as a completely determined function of the initial molecular configurations, it reduces to a specification of the initial conditions. Natural selection appeared to offer an 'explanation' that things are as they are only because they were as they were:

... there is a wider teleology which is not touched by the doctrine of evolution. This proposition is that the whole world, living and not living, is the result of the mutual interaction, according to definite laws, of the forces possessed by the molecules of which the primitive nebulosity of the universe was composed . . . The teleological and mechanical views of nature are not, necessarily, mutually exclusive. On the contrary, the more purely a mechanist the speculator is, the more firmly does he assume a primordial molecular arrangement of which all the phenomena of the universe are the consequences and the more completely is he thereby at the mercy of the teleologist, who can always defy him to disprove that this primordial molecular arrangement was not intended to evolve the actual phenomena of the universe . . . Evolution has no more to do with theism than the first book of Euclid has. 172

Huxley also speculated that the evolutionary approach to Nature might have a far wider applicability. For, suppose the laws of motion and energy conservation were also just the results of natural selection acting upon a collection of possibilities:

Of simplest matter and definitely operating energy... it is possible to raise the question whether it may not be the product of evolution from a universe of such matter, in which the manifestations of energy were not definite—in which for example laws of motion held good for some units and not for others, or for some units at one time and not another. 173

However, neither Huxley nor any of his colleagues addressed the astronomical design arguments based upon the co-presence of a number of coincidental features in solar system dynamics upon which the stability of our environment so delicately hinges. The only debate that took place with physicists concentrated upon other more fundamental problems, like reconciling evolutionary development with contemporary views on the age and origin of the Earth. In that conflict the most critical opponent of Darwin's theory amongst the ranks of the physicists was Lord Kelvin, who argued that the geophysical evidence pointed towards a terrestrial age too brief for natural selection to evolve the observed spectrum of living creatures. This objection against evolution, which at the time Darwin called 'the gravest yet advanced', generated an extremely significant debate which we shall present in extended form in Chapter 3, since it led to the first modern prediction derived from an Anthropic Principle. Kelvin's deepest sympathies were with design couched in a suitable form, because of the difficulties inherent in making any observational test of the Darwinian evolutionary hypothesis:

The essence of science consists in inferring antecedent conditions and anticipating future evolutions from phenomena which have actually come under observation.


In biology the difficulties of successfully acting up to this ideal are prodigious . . . I have always felt that the hypothesis of 'the origin of species through natural selection' does not contain the true theory of evolution . . . I feel convinced that the argument of design has been greatly too much lost sight of in recent zoological speculations. 174

As we shall see, Kelvin's opposition was extremely influential because of his pre-eminent position amongst British scientists of his day and the greater respect most scientists had for arguments based upon mathematical physics rather than upon the purely qualitative hypothesis of natural selection. Another outstanding physicist who contributed to the argument concerning the place of final causes in the evolutionary view was James Clerk Maxwell. Maxwell focused his attention upon molecules, which were then regarded as invariant and fundamental structures. He argued that their time invariance and identical structure proved that they could not have developed from some natural process in a statistical fashion. These invariance properties gave them 'the stamp of the manufactured article' and signalled a cut-off in the applicability of a principle of natural selection. His address to the British Association in 1873 contains a statement of these ideas:

No theory of evolution can be formed to account for the similarity of molecules, for evolution necessarily implies continuous change, and the molecule is incapable of growth or decay, of generation or destruction. None of the processes of Nature, since the time when Nature began, have produced the slightest difference in the properties of any molecule. We are therefore unable to ascribe either the existence of the molecules or the identity of their properties to the operation of any of the causes which we call natural... the molecules out of which these systems are built—the foundation stones of the material universe—remain unbroken and unworn. 175

These are the first glimmerings of a more sophisticated twentieth-century approach to the invariant properties of crucial molecular structures and their relevance to the existence of a life-supporting environment. This approach was later to be developed in a remarkable way by the American biochemist Lawrence Henderson, whose work we shall discuss at length in Chapter 3. One of Henderson's forerunners, both in advocating such a view and as Professor of Chemistry at Harvard, was Josiah Cooke. Cooke appealed strongly to the form of the laws of Nature and the special properties of particular chemical compounds (for example, water) as evidences for order in Nature. However, he kept these eutaxiological arguments distinct from those which appeal to purposeful design: 176


We can see that each property of water has been designed for some purpose . . . [But] the strength of our argument lies . . . in the harmonious working of all the separate details. To me the laws of nature afford the strongest evidences . . . I do not, therefore, regard the constitution of water as something apart from law ... nor do I believe that this argument from general plan could supply the place of the great argument from design. The last lies at the basis of natural theology... 178

We have seen that from the very earliest times there have been strong criticisms of attempts to 'explain' the structure of inorganic and organic phenomena on the basis of teleological or eutaxiological design arguments. Most antagonistic objectors attempted to show that the principal arguments for design were confused or vacuous, whilst sceptical or agnostic commentators held that all such issues were undecidable. Very few of the treatises on natural theology or teleological science ever attempted to deal with these criticisms in a convincing or systematic fashion. One interesting exception, whose work signals the end of the pre-modern approach to the question of final causes, was the French philosopher Paul Janet. His Causes Finales was translated into English in 1878, several years after its publication in France, and it provides a careful and moderately critical summary of ideas up to and including the Darwinian 'revolution'. Janet's work is characterized by a broad and undogmatic discussion of possible objections to a rightly conceived system of final causation, which he defines at the outset in three points: 51

(I) There is no a priori principle of final causes. The final cause is an induction, a hypothesis, whose probability depends on the number and character of observed phenomena.

(II) The final cause is proved by the existence in fact of certain combinations, such that the accord of these combinations with a final phenomenon independent of them would be a mere chance, and that nature altogether must be explained by an accident.

(III) The relation of finality being once admitted as a law of the universe, the only hypothesis appropriate to our understanding that can account for this law is that it is derived from an intelligent cause. 179

In his second point we see that Janet seeks to exclude any arguments based upon development and concentrates instead upon the simultaneous realization of inorganic configurations. The system is not intended to possess the anthropocentric orientation of Paley, of whom he does not approve because:

This anthropocentric doctrine, as it has been called, appears to be connected with the geocentric doctrine that made the earth the centre of the world, and ought to disappear with it. 180

Janet then attempts to counter a number of criticisms, both ancient and modern, against the accusation that finalists have consistently confused causes for effects. He cites an example of the 'chicken and egg problem' in which the effect of reproduction is then the cause of further reproduction and acts

To perpetuate and to immortalize the species. Here, the order of causes is manifestly reversed, and whatever Lucretius and Spinoza may say, it is the causes that are the effects.

Janet then proceeds to argue against the claim, which he attributes to Maupertuis (although Maupertuis merely cites Lucretius), that normalizing selection could have ensured the inevitable survival of ordered beings from random permutations. Like Paley, Janet asserts that there is no observational evidence for such a claim, but he glosses over the significance of the recent fossil finds. The theory of progressive evolutionary development, on the other hand, he cites approvingly as an excellent manifestation of final causes:

The progressive development of forms, far from being opposed to the theory of finality, is eminently favourable to it. What more simple and more rational law could have presided over creation than that of a progressive evolution, in virtue of which the world must have seen forms, more and more finished, successively appear? 181

Janet hopes to follow Boyle and Leibniz in propounding a doctrine of complementarity where both mechanism and finalism provide different, but equally valid complementary descriptions of the same phenomena, each complete within its own sphere of reference. Janet then continues his discussion with an evaluation of what he terms certain 'contrary facts'; these include the presence of apparently useless or vestigial organs in animals. Interestingly, he discusses them in relation to the Least Action Principle, suggesting that they may be byproducts of the quest for the most economical path of development. He believes that the variational principles have some application in deciding the pathway of evolutionary development:

For that certain pieces of the organism have ceased to serve is no reason why they should entirely disappear. The law of economy is only a particular application of the metaphysical principle of the simplicity of ways, appealed to by Malebranche, or of the mathematical principle of least action, defended by Euler and Maupertuis. 182

Janet then turns to discuss the status of final causation in a completely deterministic mechanical system, using Laplace's Nebular Hypothesis as the mechanical paradigm. He points out the logical equivalence of setting initial or final data for the evolution of a completely determined physical system. He also questions the notion of 'chaos' in completely determined systems because, however random a system might appear, it should still have evolved deterministically from definite initial conditions and will likewise evolve towards a definite final state: 183

The primitive nebula was, then, already the actual world potentially . . . But let it be observed, the nebula is not a chaos; it is a definite form, whence there is to issue later, in virtue of the laws of motion, an ordered world... If you do not admit anything that guides and directs phenomena, you at the same time admit that they are absolutely undetermined, that is to say, disordered: now how are you to pass from this absolute disorder to any order whatever? 184

Janet has turned the argument against the evolutionist and the mechanist. In effect he is saying that determinism means we must suppose the Universe to have possessed very special initial conditions if human life is to result. There follows a discussion of the pros and cons of evolutionary theories of organic development and the principle of natural selection. Janet argues against the sufficiency of the latter hypothesis on two grounds: First, he claims that although such ideas work in the context of forced breeding experiments—unnatural selection—the probability of a sufficient number of advantageous selections occurring naturally in the real world is extremely small. Secondly, he argues that adaptations tend not to be propagated, but rather are diluted in the offspring, and this tends to keep a species invariant. Janet's final discussions centre around the consequences of various theories of knowledge for his doctrine of final causes. Of particular interest is his discussion of Kant's claim that our knowledge of the world is a property of the observer, not the observed. In the course of a lengthy discussion he cites a number of contemporary objections to Kant's thesis from the works of Trendelenburg and Herbart. If ordering is an inevitable selection effect created by our act of perception, why, he asks, do we find some things unintelligible and why do we not see everything as a teleological structure?

How is it . . . that the convenience of the arrangement of nature is only made evident in certain cases; that very often this convenience appears doubtful to us; in fine, that nature offers us a certain mechanical regularity, or even simple facts, of which it is impossible for us to give an account? 185

Janet closes his work with a discussion of the final end of Nature. He has already rejected the anthropocentric notion that this end is Man, and now he also rules out the possibility that the Deity might have created all for himself for this would suggest his privation—a contradiction. Janet then meanders through various lesser possibilities in a style reminiscent of the Scholastics, before concurring with Kant that ethical goals provide the only ultimate meaning for Nature:

... if there are no ends in the universe, there are none for man any more than for nature; that there is no reason why the series of causes should be mechanical up to the appearance of man, and become teleological from man upwards. If mechanism reigns in nature, it reigns everywhere, and in ethics as well as in physics... Morality is, therefore, at once the accomplishment and the ultimate proof of the law of finality. 186

[Figure 2.3. The chronology of the principal contributors to discussions of the Design Argument from the sixteenth until the end of the nineteenth centuries. The names visible in the chart include Hume, Huxley, Gray, Bentley, Newton, Paley, Maupertuis, Spinoza, Ray, and Boyle.]

Finally, we cannot resist citing our favourite Design Argument, due to Bernardin de Saint-Pierre, which is of a type that distresses Janet very greatly. Indeed, Janet feels that it is a member of a class of examples which 'one could believe . . . invented to ridicule the theory itself'. Bernardin claims that 'dogs are usually of two opposite colours, the one light and the other dark, in order that, wherever they may be in the house, they may be distinguished from the furniture, with the colour of which they might be confounded'. 187

2.8 Design in Non-Western Religion and Philosophy

There was no confidence that the code of Nature's laws could ever be unveiled and read, because there was no assurance that a divine being, even more rational than ourselves, had ever formulated such a code capable of being read. J. Needham

Recently, the palaeontologist Stephen Jay Gould characterized the Anthropic Principle as the latest manifestation of '... that age-old pitfall of Western intellectual life—the representation of raw hope gussied up as rationalized reality'. He further warned: 'Always be suspicious of conclusions that... follow comforting traditions of Western thought'. 194 Actually, the idea that humanity is important to the cosmos, and indeed the idea that the material world was created for man, both seem to be present in many cultural traditions; they may even be universal. Although no study of non-Western teleology has ever been done, a cursory search of the anthropological literature shows teleological notions defended in Mayan, Zuni (New Mexico Indian), the 'Thompson' Indian of the North Pacific coast, Iroquois, Sumerian, Bantu, ancient Egyptian, Islamic-Persian, and Chinese traditions. In the Popol Vuh, the most important surviving work of Mayan literature, it is recorded that the dry Earth and all life thereon were created by the gods for the benefit of mankind: 195

Let it be thus done. Let the waters retire and cease to obstruct, to the end that earth exist here, that it harden itself and show its surface, to the end that it be sown, and that the light of day shine in the heavens and upon the earth; for we shall receive neither glory nor honour from all that we have created and formed until human beings exist, endowed with sentience. 195

In the Zuni Indian creation myth, much of the material world, including the moon, planets, rain, and vegetation, was formed for the benefit of both Mankind and animals, who were viewed as the children of the Creator gods:

Thus, as a man and woman, spake [the Earth-mother and Sky-father], one to the other. 'Behold!' said the Earth-mother as a great terraced bowl appeared at hand and within it water, 'this is as upon me the homes of my tiny children shall be. On the rim of each world-country they wander in, terraced mountains shall stand, making in one region many, whereby country shall be known from country, and within each, place from place. Behold again!' said she as she spat on the water and rapidly smote and stirred it with her fingers. Foam formed, gathering about the terraced rim, mounting higher and higher. 'Yea', said she, 'and from my bosom they shall draw nourishment, for in such as this shall they find the substance of life whence we were ourselves sustained, for see!' Then with her warm breath she blew across the terraces; white flecks of the foam broke away, and, floating over the water, were shattered by the cold breath of the Sky-father attending, and forthwith shed downward abundantly fine mist and spray! 'Even so, shall white clouds float up from the great waters at the borders of the world, and clustering about the mountain terraces of the horizons be borne aloft and abroad by the breaths of the surpassing soul-beings, and of the children, and shall hardened and broken be by the cold, shedding downward, in rain spray, the water of life, even into the hollow places of my lap! For therein chiefly shall nestle our children mankind and creature-kind, for warmth in thy coldness'. 'Even so!' said the Sky-father; 'Yet not alone shalt thou helpful be unto our children, for behold!' and he spread his hand abroad with the palm downward and into all the wrinkles and crevices thereof he set the semblance of shining yellow corn grains; in the dark of the early world-dawn they gleamed like sparks of fire, and moved as his hand was moved over the bowl, shining up from and also moving in the depths of the water therein. 'See!' said he, pointing to the seven grains clasped by his thumb and four fingers, 'by such shall our children be guided; for behold, when the Sun-father is not nigh, and thy terraces are as the dark itself (being all hidden therein), then shall our children be guided by lights... Yea! and even as these grains gleam up from the water, so shall seed-grains like to them, yet numberless, spring up from thy bosom when touched by my waters, to nourish our children'. Thus and in other ways many devised they for their offspring. 196

The 'Thompson' Indians of the North Pacific coast believed that the parts of the world were formed from five hairs which the Creator pulled from his head. The first two hairs chose to become women, the third the Earth, and

The fourth chose to be Fire in grass, trees, and all wood, for the good of man. The fifth became Water, to 'cleanse and make wise' the people. 'I will assist all things on earth to maintain life'. 197

In the Iroquois origin myth the Earth was created primarily for the benefit of mankind by the people of the Sky World. The sky god Sapling created the first man out of red clay, and then made a compact between the Earth people and the people of Sky World:

I have made you master over the Earth and over all that it contains. It will continue to give comfort to my mind. I have planted human beings on the Earth for the purpose that they shall continue my work of creation by beautifying the Earth, by cultivating it and making it more pleasing for the habitation of man. 198

Thus the Iroquois believed they had a mandate to change the Earth, in order to make it 'more pleasing for the habitation of man'. A similar motif appears in some of the Sumerian origin legends. Human beings were created to serve the gods, primarily by offering sacrifices and homage, but also by imitating the gods in creating and preserving the cosmic order. 199 According to the Boshongo, a Bantu tribe in central Africa, the Universal Creator Bumba walked among mankind, saying unto them 'Behold [the] wonders [of the Earth]. They belong to you'. 205 The ancient Egyptian text The Instruction of King Meri-ka-Re (written c. 2000 BC) records:

Men, the cattle of God, have been well provided for. He [the sun god] made the sky and the earth for their benefit... He made the air to vivify their nostrils, for they are his images, issued from his flesh. He shines in the sky, he makes plants and animals for them, birds and fish to feed them. 200

This passage appears to represent the typical Egyptian tradition concerning the origin and purpose of mankind. Islam is closely related to Christianity, for both are rooted in Judaism and both were influenced by Greek philosophy. Thus it is not surprising to find in Islam certain teleological ideas similar to those in Judaism and Christianity. Teleological concepts are prominent in the works of one of the most outstanding Muslim scientists, the Persian al-Biruni (c. 1000 AD). This scholar held that Man's intellect made him God's vice-regent (Khalifat Allah) on earth. Because Man is God's vice-regent, the world is ordered for his benefit, and he is granted power over God's creation. The more abstract teleological ideas are also present in al-Biruni's works. In his view, everything in Nature was ordered according to God's plan. As al-Biruni put it: 'Praise therefore be unto Him who has arranged creation and created everything for the best... there is no waste or deficiency in His Work'. The ideas in these passages are strikingly similar to the view of the Christian philosopher Leibniz, who contended that God has created the best of all possible worlds. The same notion of a perfectly ordered cosmos is found in both Christianity and Islam, for both religions have an omnipotent, omniscient, and perfect god who would naturally create a perfect world, a world in which no event or thing would be outside the Divine plan. More subtle notions of teleology were evolved in Chinese civilization, a civilization which never possessed the concept of a Supreme Deity. Like other peoples, the Chinese developed the idea that the Earth was made for Man, but early in their civilized history they were aware of the arguments against this rather naive form of teleology. The following story, taken from the book Lieh Tzu attributed to the semi-legendary Taoist philosopher Lieh Yü-Khou (much of the book probably comes from the third century BC), illustrates both: 201


Mr. Thien, of the State of Chhi, was holding an ancestral banquet in his hall, to which a thousand guests had been invited. As he sat in their midst, many came up to him with presents of fish and game. Eyeing them approvingly, he exclaimed with unction: 'How generous is Heaven to man! Heaven makes the five kinds of grain to grow, and brings forth the finny and the feathered tribes, especially for our benefit'. All Mr. Thien's guests applauded this sentiment to the echo, except the twelve-year-old son of a Mr. Pao, who, regardless of seniority, came forward and said: 'It is not as my Lord says. The ten thousand creatures [in the universe] and we ourselves belong to the same category, that of living things, and in this category there is nothing noble and nothing mean. It is only by reason of size, strength or cunning, that one particular species gains mastery over another, or that one feeds upon another. None of them are produced in order to subserve the uses of others. Man catches and eats those that are fit for food, but how [could it be maintained that] Heaven produced them just for him? Mosquitoes and gnats suck [blood through] his skin; tigers and wolves devour his flesh—but we do not therefore assert that Heaven produced man for the benefit of mosquitoes and gnats, or to provide food for tigers and wolves'. 206

Needham cites this passage as an indication of the denial of general teleology by the Taoists, but we think it indicates an acceptance of naive teleology by most Chinese. Note that all except the boy agree with the teleological sentiments expressed by Mr. Thien. The criticism of teleology is probably placed in the mouth of a boy by the Taoist author in order to emphasize that the argument against naive teleology should be obvious even to a child. China had two major indigenous philosophical systems: Taoism and Confucianism. The former was concerned primarily with the order of Nature, while the latter concerned itself primarily with the proper ordering of human society. These two branches of Chinese philosophy and Buddhism were partially merged by the Neo-Confucian philosophers of the Sung dynasty (eleventh and twelfth centuries AD). Among these scholars, the most important was Chu Hsi (1131-1200). In Neo-Confucian philosophy, social order was placed in Nature, but Nature took on certain aspects of social order. In the view of Chu Hsi, the vast 'pattern' of Nature was moral because it was inevitable that moral values and moral behaviour would appear when the Universe had developed sufficiently far. Nevertheless, this natural spontaneous moral order was not the result of conscious design: 208

Someone also asked, 'When Heaven brings into being saints and sages, is it only the effect of chance, and not a matter of design?' The philosopher replied, 'How could Heaven and Earth say, "We will now proceed to produce saints and sages"? It simply comes about that the required quantities [of matter-energy] meet together in perfect mutual concordance, and thus a saint or a sage is born. And when this happens it looks as if Heaven had done it by design'. 209

Chu Hsi's spontaneous ordering principle seems strikingly similar to Leibniz' pre-established harmony. Needham himself considers the emergent moral order in Chu Hsi's work to be closely analogous to the Western idea of emergent evolution, defended in particular by Herbert Spencer, Henri Bergson, and Alfred North Whitehead (whose work we shall discuss at length in Chapter 3), wherein the moral order appears at later stages in the Universe's history. We see that a spontaneous ordering principle may or may not be teleological. It can properly be regarded as teleological only if the spontaneous order is generated as a consequence of the purposeful interaction of goal-directed organisms, or if the final state of the ordering process is emphasized over the initial and intermediate states. Otherwise, the spontaneous ordering principle is more properly regarded as eutaxiological. The concept of spontaneous order has been central in Chinese philosophy from the dawn of Chinese civilization to the twentieth century. The idea probably arose as a result of the close observation of the growth of plants and of the non-coercive social organization which develops spontaneously among human beings in primitive farming communities. The following passage, by Liu Tsung-Yuan (773-819), a T'ang dynasty naturalist, illustrates both: 204

One day a customer asked ['Camel-Back' Kuo, a famous market-gardener, how he was so successful in growing plants], to which he replied: 'Old Camel-Back cannot make trees live or thrive. He can only let them follow their natural tendencies. In planting trees, be careful to set the root straight, to smooth the earth around them, to use good mould and ram it down well. Then, don't touch them, don't think about them, don't go and look at them, but leave them alone to take care of themselves, and Nature will do the rest. I only avoid trying to make my trees grow. I have no special method of cultivation, no special means for securing luxuriance of growth. I just don't spoil the fruit. I have no way of getting it either early or in abundance. Other gardeners set with bent root, and neglect the mould, heaping up either too much earth or too little. Or else they like their trees too much and become anxious about them, and are for ever running back and forth to see how they are growing; sometimes scratching them to make sure they are still alive, or shaking them to see if they are sufficiently firm in the ground; thus constantly interfering with the natural bias of the tree, and turning their care and affection into a bane and a curse. I just don't do those things. That's all'. 'Can these principles of yours be applied to government?' asked his listener. 'Ah', replied Camel-Back, 'I only understand market-gardening; government is not my trade. Still, in the village where I live, the officials are constantly issuing all kinds of orders, apparently out of compassion for the people, but really to their injury. Morning and night the underlings come round and say, 'His Honour bids us urge on your ploughing, hasten your planting, supervise your harvest. Do not delay with spinning and weaving. Take care of your children. Rear poultry and pigs. Come together when the drum beats. Be ready when the rattle goes'. Thus we poor people are badgered from morning till night. 
We haven't a moment to ourselves. How could anyone develop naturally under such conditions? It was this that brought about my deformity. And so it is with those who carry on the gardening business'. 'Thank you', said the listener. 'I simply asked about the management of trees, but I have learnt about the management of men. I will make this known, as a warning to government officials.' 210

We have quoted at length a minor T'ang writer, but the same notion of spontaneous order appears over and over again in most Chinese philosophical writing, including the most influential works. For example, the Tao Te Ching of Lao Tzu (fourth century BC), the most important of the Taoist books, considers the Tao to be simply a spontaneous ordering principle:

The supreme Tao, how it floods in every direction!
This way and that, there is no place where it does not go.
All things look to it for life, and it refuses none of them;
Yet when its work is accomplished it possesses nothing.
Clothing and nourishing all things, it does not lord it over them.
Since it asks for nothing from them
It may be classed among things of low estate;
But since all things obey it without coercion
It may be named supreme.
It does not arrogate greatness to itself
And so it fulfils its Greatness. 211

In this passage, the action of the spontaneous ordering principle of Nature, the Tao, is contrasted with the order brought about by the conscious design of a ruler. The superiority of the order brought about spontaneously by human interaction over the order imposed from above by force is also a central motif in Confucian works. In fact, the early Confucians felt that the Tao of mankind was to be good, or rather to order naturally their relations with each other in mutually beneficial ways. They believed the ideal ruler would govern his people most effectively by his upright example rather than by force, as the following passages from the Analects of Confucius illustrate:

Chi Khang Tzu asked Confucius about the art of ruling. Confucius said, 'Ruling is straightening. If you lead along a straight way, who will dare go by a crooked one?'

Chi Khang Tzu asked Confucius about government, saying, 'Supposing we liquidated all those people who have not the Tao in order to help those who do have the Tao, what would you think about it?' Confucius replied, 'You are there to rule, not to kill. If you desire what is good, the people will be good. The natural ruler [chün-tzu] has the virtue of wind, the people the virtue of grass. The grass must needs bend when the wind blows over it'. 212

Similar remarks can be found throughout the works of the Confucians, at least through the tenth century AD (see Chapter 9 of ref. 204 for representative examples). Politically, the Confucians can be regarded as China's native liberals. They were able to prevent the continuation, though not the formation, of a totalitarian state in China: the Chin Empire (second century BC). The advocates of such a state, the Legalists, were in the end defeated with the overthrow of the Chin and its replacement by the Han dynasty. The Legalists argued that the people should be governed according to positive law, fa, which consisted of written rules expressing the arbitrary will of the supreme autocrat, while the Confucians, true to their tradition, countered that society should be ordered spontaneously according to evolved good customs, called li. It has been li rather than fa that has been the most significant force governing the day-to-day actions of the Chinese people from the formation of the Han Empire to the founding of the Republic in 1912. Needham argues that such an emphasis on li, as opposed to fa, made it impossible for the Chinese to develop the concept of natural laws, which in the West, he believes, were originally pictured as decrees from the Supreme Ruler of the Universe, God. 214 However this may be, it would be difficult for the notion of teleology to be developed in Chinese philosophy and applied to


the cosmos, since cosmic teleology involves planning in some sense by a thinking being. Nevertheless, there is a deep connection between teleology and spontaneous social order, a connection which has been pointed out by philosophers of the classical liberal tradition, of whom Friedrich Hayek is the most distinguished representative in our own time. Hayek received his original university training in Vienna in law, but spent the first thirty or so years of his career in economics (he was awarded the Nobel prize in economics in 1974) at the University of London. He has concentrated his attention on questions of social organization during the last thirty years. Like the Confucians, Hayek is primarily concerned with spontaneous order in human society. Human language is the most obvious example of such an order. It was not formed by the conscious design of any individual or group of individuals. Rather, it just grew. It is growing and changing now through the daily interactions of countless numbers of human beings. Hayek argues in scores of articles and many books (e.g. refs. 216-22) that the free market is a similar sort of order: an order created by the decentralized action of many minds, using far more information than is available or could be available to any one mind, and thus generating an order much more complex than any one mind could even imagine. The market order cannot be said to have an overall purpose in the naive sense of the word. As Hayek puts it: 215

Most important... is the relation of a spontaneous order to the conception of purpose. Since such an order has not been created by an outside agency, the order as such also can have no purpose, although its existence may be very serviceable to the individuals which move within such order. But in a different sense it may well be said that the order rests on purposive action of its elements, when 'purpose' would, of course, mean nothing more than their actions tend to secure the preservation or restoration of that order. 217

In effect, the different and often conflicting purposes of the many human beings interacting via the market are woven together into an orderly whole; the entire system evolves in a direction none can foresee, because the knowledge dispersed throughout the system, and sustaining the order, is much greater than any individual can comprehend: Certainly nobody has yet succeeded in deliberately arranging all the activities that go on in a complex society. If anyone did ever succeed in fully organizing such a society, it would no longer make use of many minds, but would be altogether dependent on one mind; it would certainly not be very complex but extremely primitive—and so would soon be the mind whose knowledge and will determined everything. The facts which could enter into the design of such an order could be only those which were known and digested by this mind; and as only he could decide on action and thus gain experience, there would be none of that interplay of many minds in which alone mind can grow. 218


Teleology is definitely present, for the human actors all have their own purposes, but it is teleology in the small, not a global teleology. The market system harmonizes these individual purposes, but it has none of its own. The image of the market and its spontaneous order developed by Hayek appears strikingly similar to the picture of spontaneously-ordering human society given by the Chinese sages in the quotations above. Hayek himself points out that his notion of spontaneous social order is closely analogous to the Greek Kosmos, which originally meant 'a right order for a community'. The precise and subtle relationship between a spontaneously ordered social system and the teleology of the beings who comprise it has recently been worked out by the political scientist Robert Axelrod. He has shown on the basis of game theory that the spontaneous formation of a cooperative social order actually requires a very strong teleology to be acting at the individual level. That is, such an order can form spontaneously only if the future expectations of the individuals in the society are dominant over their immediate expectations in determining their present actions. The barrier to the spontaneous formation of cooperation in a population of individuals without teleology is illustrated by the famous Prisoner's Dilemma. Two prisoners are in separate gaol cells and not permitted to communicate. Their gaoler urges each to confess, telling each that if he confesses to the crime and his partner does not, then the party that confesses will go free, while the other will get the maximum punishment of five years. If both confess, the confession of each will be worth less, so they both will get three years. If neither confesses, then both will be convicted of only a minor charge, and each will get only one year. What action should the prisoners take? Consider the strategy of prisoner A.
If the other prisoner, B, confesses, then A has no choice but to confess also, since otherwise he would get five years rather than three. On the other hand, if B does not confess, then it is in the interest of A to confess, since then he would go free. Thus, whatever B does, it is in A's interest to confess. Since the same analysis applies to B, we conclude that the best strategy for each to adopt is to confess. But the joint confession results in both getting three years rather than the one year they both would have received if they had cooperated. Nevertheless, it would be against the self-interest of each not to confess, even though both would be better off if neither confessed. The Prisoner's Dilemma is faced by every individual in many, if not all, interactions with other individuals, for it is always in the self-interest of an individual to get something for nothing; it is always in the self-interest of an individual to cheat another in any given interaction, even though both might be better off if neither cheated! How then is it possible for cooperation to arise spontaneously in a group of individuals, each pursuing his own interests?

Cooperation can arise because in general individuals will interact with a given individual not just once but many times. In the language of game theory, the Prisoner's Dilemma two-person positive-sum game must be replaced with a sequence of such games; the resulting game is termed an iterated Prisoner's Dilemma game. The pay-off matrix for the Prisoner's Dilemma game is:

                                       Player (prisoner) B
                                 Cooperate               Don't cooperate
  Player (prisoner) A
    Cooperate                    Both players get R      A gets S and B gets T
    Don't cooperate              A gets T and B gets S   Both players get P
To fix ideas, let us choose R = 3, P = 1, S = 0, and T = 5. Then, as in the example above, it is in the rational interest of both players not to cooperate, even though this means that they receive the pay-off P rather than the pay-off R which they would both have received if they had cooperated. In general, the Prisoner's Dilemma arises when T > R > P > S, with R > (T + S)/2, and both players must choose their strategy before they know what strategy the other chooses. In the iterated Prisoner's Dilemma, the above game is played many times and the total pay-off is accumulated over many games. However, the present value of a future pay-off is not as great as the present value of a present pay-off, because a future good is not as valuable as a present good (if one is to receive a thousand pounds, it is better to receive it now rather than ten years from now), and also because there is some chance that the game will halt after a finite number of steps (in real life, interactions eventually will cease because one of the players dies, moves away, or becomes bankrupt). Therefore, the pay-off of each game is discounted relative to the previous game by a discount parameter w, where 0 < w < 1. The expected cumulative pay-off of an infinite number of games is obtained by adding all the expected pay-offs from each game, where the expected pay-off of each game is obtained by multiplying the pay-off of the immediately preceding game by w. For example, the expected cumulative pay-off accruing to both players if they cooperate in all games would be given by R + Rw + Rw² + Rw³ + ... = R/(1 - w), when the sum is
an infinite geometrical progression. Cooperation becomes a possible rational strategy because, although a given player does not know the other's choice in the present game, he does know what the other chose in previous games. He can choose his strategy for the nth game according to what the other player has chosen in the preceding (n - 1) games with him. The discount parameter w measures the importance of the future. One can prove that only if w is sufficiently close to 1—i.e., only if the present value of future pay-offs is sufficiently high—is it possible for 'nice' strategies (those which have the player cooperate until the other player doesn't cooperate) to be collectively stable strategies. Axelrod calls a strategy 'collectively stable' if, when everyone is using such a strategy, no one can do better by adopting a different strategy. In order for a strategy to persist in Nature, it must be collectively stable, for there will always arise by mutation individuals who try different strategies. In an evolutionary context, the collective stability of some cooperative strategies shows that, if a population of individuals using such strategies ever forms, it can persist and grow. Collectively stable strategies are essentially the same as what the evolutionist John Maynard Smith has called 'evolutionarily stable strategies'. Furthermore, Axelrod shows that a population of non-cooperators can be successfully invaded by clusters of cooperators if w is high enough and if the relative frequency with which the cooperators interact with each other rather than with the non-cooperators is sufficiently high.
For example, if we have T = 5, R = 3, P = 1, S = 0, and w = 0.9, then a cluster of individuals using the 'nice' strategy of 'cooperate until the other does not, then don't cooperate for one game, then cooperate until the other does not cooperate again' can successfully invade a population of non-cooperators if only 5 per cent of their interactions are with other members of the cluster. (Individual cooperators cannot invade a population of non-cooperators because the strategy of total non-cooperation is also collectively stable. But a cluster of non-cooperators cannot invade a population of cooperators using a collectively stable strategy.) These ideas have been applied to the evolution of cooperative behaviour by Axelrod and Hamilton, and by Maynard Smith. More speculatively, these results showing the importance of teleology for the formation of order suggest that, if one wishes to model the physical cosmos after a biological or social cosmos—this is the idea underlying Wheeler's Participatory Anthropic Principle—then the state of the universe at any given time must be determined not only by the state of the universe an instant before, which is the usual physical assumption, but rather must be a function of all the preceding states and all the future states.
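The five-per-cent figure can be recovered from the discounted geometric sums. The sketch below (Python; the variable names are ours, and we model the 'nice' strategy as Axelrod's TIT FOR TAT, which cooperates on the first game and thereafter echoes the opponent's previous move, so against an unconditional defector it is exploited once and then defects forever) computes each pairing's cumulative pay-off in closed form and solves for the smallest cluster fraction p that can invade:

```python
# Discounted cumulative pay-offs in the iterated Prisoner's Dilemma,
# with the values used in the text: T = 5, R = 3, P = 1, S = 0, w = 0.9.
T, R, P, S, w = 5, 3, 1, 0, 0.9

V_nice_vs_nice = R / (1 - w)          # mutual cooperation forever: R + Rw + Rw^2 + ...
V_nice_vs_alld = S + w * P / (1 - w)  # exploited once (S), then mutual defection
V_alld_vs_alld = P / (1 - w)          # mutual defection forever

# A cluster member meets another cooperator with probability p and a native
# defector otherwise; natives effectively always meet defectors.  The cluster
# invades when a cluster member's expected score beats the natives' score.
def cluster_invades(p):
    return p * V_nice_vs_nice + (1 - p) * V_nice_vs_alld > V_alld_vs_alld

# Solving the inequality for p gives the invasion threshold.
p_min = (V_alld_vs_alld - V_nice_vs_alld) / (V_nice_vs_nice - V_nice_vs_alld)
print(f"invasion threshold p = {p_min:.3f}")  # roughly 0.048, i.e. about 5 per cent
```

Under these assumptions the threshold works out to 1/21, which is the 'only 5 per cent of their interactions' quoted above.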


2.9 Relationship Between the Design Argument and the Cosmological Argument

To someone who could grasp the Universe from a unified standpoint the entire creation would appear as a unique truth and necessity.
J. D'Alembert

The unrest which keeps the never stopping clock of metaphysics going is the thought that the non-existence of the world is just as possible as its existence.
W. James

The cosmological argument is today probably the most often used theological existence proof. It was the main argument used by Father Copleston against Bertrand Russell in their famous BBC debate on the existence of God, and there have been several books written recently defending this argument; see refs 227-9. The argument in its most common version is based on two assumptions: (1) something exists, and (2) there must be a sufficient reason for everything that exists. The argument begins with the claim that the existence of everything in the Universe is merely contingent; that is, it could be otherwise than it is. To use an example of Matson, this book could have been as it is except for one extra typo. The book as it is is contingent, since we would think that the number of typos in the book is not a logically necessary feature of the Universe; we would not expect the Universe to be logically inconsistent with the extra typo. Thus by the principle of sufficient reason, there must be a reason why that typo is not there, namely our sharp eyes. But we are also apparently contingent, which means there has to be a reason (or cause) for our own existence. And so it goes for everything in the Universe upon which we are dependent. It is now contended that these other objects—which at this stage in the argument include everything in the Universe—must have a reason (cause) for existence. Since it has been argued that everything in the Universe must be explained in terms of something else, this other reason must be outside the Universe. Furthermore, this transcendental reason must be the final reason, for otherwise the hierarchy of causes would continue without end, and this hierarchy would itself want a reason. In order to avoid the charge that the Final Cause itself needs a cause, the defenders claim it is its own cause. 
We should emphasize that the hierarchy of 'causes' which is referred to in the cosmological argument does not refer to a hierarchy of causes in the sense of a series of causes preceding effects in time. It refers rather to a hierarchy of underlying 'reasons' for events which are perceived by the human mind. Another example of such a hierarchy is as follows: (first
level) our writing of these words has as its physical cause the muscle fibres in our arms, the cells of which (second level) obey the laws of chemistry, the laws of chemistry being derived from (third level) atomic physics, and finally the laws of atomic physics are determined by (fourth level) quantum physics. At present, we are forced to accept quantum physics as a brute fact, but physicists feel in their heart of hearts that there is some reason why the Universe runs according to quantum physics rather than, say, Newtonian physics. (There is a temporal version of the cosmological argument, with 'cause' being followed by 'effect' which is another cause, but as this version is easily demolished, no major theologian in the past thousand years has defended it. For instance, Aquinas did not accept the temporal version: he did not believe it was possible to show by reason alone whether or not the Universe had a beginning.) There are many problems with the non-temporal version of the cosmological argument, which is often called the argument from contingency. We shall focus only on those difficulties which are relevant to the Anthropic Principle; the reader can consult references 231 and 232 for a more complete discussion. One obvious problem with the argument is, why should we accept its minor premise? Why should the principle of sufficient reason be true? The defenders of the cosmological argument feel that the Universe must at bottom be rational, but again, why should it? Antony Flew, who is the most profound of the contemporary critics of theism, points out that not only is the principle of sufficient reason unjustified, but it is actually demonstrably false! Any logical argument must start with some assumptions, and these assumptions must themselves be unjustified. 
We might of course be able to justify these particular assumptions in the context of another demonstration from which the particular assumptions are deduced, but this just pushes the problem to a higher level; the basic underlying assumptions in the higher-level argument are themselves unjustified. At some point we have to just accept some postulates for which we can give no reason why they should be true. Thus the nature of logic itself requires the principle of sufficient reason to be false. Nevertheless, by insisting that the Universe is rational—which really means that the Universe has a causal structure which can be ordered by the human mind, and further that the ultimate reason for the existence of the Universe can be understood by human beings—the defenders of the cosmological argument are taking an Anthropic Principle position. In its insistence that there is an actual hierarchy of causes in the Universe which is isomorphic to the pyramid of causes constructed by human beings, the cosmological argument is analogous to the teleological argument, for the latter argument asserts that the order observed in the Universe is isomorphic to the order produced by human beings when they construct


artifacts. In both arguments the mental activities of human beings are used as a model for the Universe as a whole. The major premise in the cosmological argument is that things exist, and further, that contingent things exist. But Hume in his Dialogues pointed out that contingency could be just an illusion of our ignorance:

To a superficial observer so wonderful a regularity [as a complex property of the integers] may be admired as the effect either of chance or design; but a skilled algebraist immediately concludes it to be the work of necessity, and demonstrates that it must forever result from the nature of these numbers. Is it not probable, I [Philo] ask, that the whole economy of the Universe is conducted by a like necessity, though no human algebra can furnish a key which solves the difficulty? 234

Most defenders of the cosmological argument have dismissed Hume's objection, but many modern cosmologists are coming to the conclusion that there is only one logically possible universe. These modern cosmologists have a hubris that Hume's alter-ego, Philo, would have blanched at: some of them believe they have found the key (or rather, keys) which will permit a mathematical description of this single logically possible universe! For example, Hartle and Hawking, and Hawking, have obtained an expression for the 'wave function of the Universe', using path-integral techniques. The wave function of the Universe, regarded as a function of three spatial variables and a time variable, is essentially a list of all possible histories, classical and otherwise, through which the universe could have evolved to its present quantum state, which itself includes all logically possible particles and arrangements of particles that could exist in the Universe at the present time. If we accept the Many-Worlds interpretation of quantum mechanics—as Hartle and Hawking do—then all these possibilities actually exist. In other words, the Universe, which is defined as everything that actually exists, is equal to all logically consistent possibilities. What more could there possibly be? Furthermore, there are strong indications that the mathematical structure of quantum mechanics requires that all observationally distinguishable possibilities are actually realized. More precisely, the Universal wave function can be shown to have only isolated zeros if it is an eigenstate of the energy—the Hartle-Hawking Universal wave function is such an eigenstate—and if the Universal Hamiltonian is a self-adjoint operator. This means that such a wave function is non-zero almost everywhere on the domain of possibilities. It is impossible to distinguish observationally between a function which is non-zero everywhere and one which is non-zero almost everywhere.
If it could be proved that the mathematical structure assumed for quantum mechanics were logically necessary, then we would have a proof that only one unique Universe—the one we live in—is logically possible. The above discussion sounds a bit
woolly, but it is possible to make predictions by restricting attention to a few parameters of the domain of possibilities. See Chapter 7 for a detailed analysis, from the point of view of the Many-Worlds interpretation, of a Universal wave function in which the only possibility considered is the radius of the sidereal Universe. As we shall point out in Chapter 3, the mathematician-philosopher A. N. Whitehead was the first to suggest that the problem of contingency might be solved if the actual Universe realized all possibilities: if this were the case, there would be no contingency in the large. The remainder of the cosmological argument's major premise is the assertion 'something exists'. This is a rather unobjectionable postulate, as it wins the assent of realists, idealists, and even solipsists. Nevertheless, the nerve of the cosmological argument lies in creating the suspicion that the entire Universe, even if it is necessarily unique, may want a reason for existence: that is, it suggests we should ask the question, 'why is there something rather than nothing?' This question has an answer only if there is something whose existence is logically necessary, which is to say, that the denial of its existence would be a logical contradiction. This brings us to the ontological argument for the existence of God, a proof claiming to deduce the existence of God from His definition. The ontological argument has had a rather mixed reception by theologians and philosophers since its introduction by St. Anselm in 1078. Aquinas did not accept it as valid, nor have the vast majority of modern philosophers, but both Descartes and Leibniz did believe it to be valid. 
As Kant and, more recently, Antony Flew have pointed out, the explicit rejection of the ontological argument puts those theologians who accept the cosmological argument in a difficult position, because the cosmological argument assumes that there is a Final Cause who is Its own reason for existence, and only if the existence of this Final Cause is logically necessary will it be superfluous to find a reason for its existence. At bottom, the cosmological argument presupposes the validity of the ontological argument. The reason Kant gave for the invalidity of the argument is basically the one which modern philosophers find convincing: existence is not a property, but rather a precondition for something to have properties. An example will make this clearer. It certainly makes sense to say 'some black lions exist', but the statement 'some black lions exist, and some don't' is conceptually meaningless. It is meaningless because although 'black' can be a property of lions, 'existence' cannot. Modern logicians have constructed a notation that makes it impossible to formulate existential sentences like 'some black lions exist, and some don't', which are grammatically correct in English but conceptually meaningless. In this notation, 'X exists' means 'X has an instance'. Another criticism that can be levelled against the ontological argument is that the concept of logical necessity applies to propositions, not to questions of existence. In our opinion, these criticisms of the ontological argument are correct; it is not possible to deduce the existence of any single being from its definition. But a caveat must be made. If the Universe is by definition the totality of everything that exists, it is a logical impossibility for the entity 'God', whatever He is, to be outside the Universe if in fact He exists. By definition, nothing which exists can be outside the Universe. This is a viewpoint which more and more twentieth-century theologians are coming to hold: they are beginning to adopt a notion of deity which, insofar as questions of existence are concerned, is indistinguishable from pantheism. As Paul Tillich succinctly put it, 'God is being-itself, not a being'. We do not concern ourselves with whether it is appropriate for a theologian defending such a position to call himself a theist, as most of them do. (The atheist George Smith has subjected such theologians to a very witty and scathing criticism on this point in ref. 232.) Rather, we are interested in the truly important implication of this notion of deity, which is that in the context of such a notion, the purpose of the ontological argument is to establish the existence of the Universe, or equivalently, the existence of something, as logically necessary. This is the caveat to the above-mentioned refutation of the ontological argument which we wish to consider: granted that the existence of no single being is logically necessary, could it nevertheless be true that it would be a logical contradiction for the entire Universe, which is not a being, but all being considered as a whole, not to exist?
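The modern notation alluded to above is first-order quantifier logic, in which existence is expressed by the quantifier rather than by a predicate; the following rendering of the lion example is our own illustrative sketch, not the authors' formulation:

```latex
\exists x\,\bigl(\mathrm{Lion}(x)\wedge\mathrm{Black}(x)\bigr)
\qquad \text{(`some black lions exist')}
```

To formulate 'some black lions exist, and some don't' one would need an existence predicate $E$, as in $\exists y\,(\mathrm{Lion}(y)\wedge\mathrm{Black}(y)\wedge\neg E(y))$; but the notation supplies no such predicate—the only device for asserting existence is the quantifier $\exists$ itself—so the offending sentence simply cannot be written down.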
If the Universe must necessarily exist, then modern logical notation cannot be applied to the single unique case of the ontology of the Universe, but it would be valid in every other situation. Even philosophical atheists differ as to the validity of the cosmological/ontological argument interpreted in such a way. David Hume, in the persona of Cleanthes, admitted that if the logic of the cosmological argument were sound, then


. . . why may not the material universe be the necessarily existent Being, according to this pretended explication of necessity? We dare not affirm that we know all the qualities of matter; and, for aught we can determine, it may contain some qualities which, were they known, would make its non-existence appear as great a contradiction as that twice two is five. (ref. 234)

Bertrand Russell, on the other hand, thought we had to accept the existence of the Universe as an irreducible brute fact. As he expressed it in his BBC debate with Copleston:

COPLESTON: Then you would agree with Sartre that the universe is what he calls 'gratuitous'?
RUSSELL: Well, the word 'gratuitous' suggests it might be something else; I should say that the universe is just there, and that's all. (ref. 241)

The reason 'something exists' may be necessarily true arises from a close analysis of what the word 'existence' means. An entity X is said to exist if it is possible, at least in principle, to observe it, or to infer it from observation of other entities. If it is claimed that X had, has, and will have no influence whatsoever on anything we can possibly observe, then by definition X does not exist. But again by definition, there must exist this ill-defined entity we have termed 'the observer' to act as an arbiter for the existence of everything else, which implies that something—the observer, at least—actually exists. This argument will immediately be recognized as a self-reference argument, a category of arguments to which all the Anthropic Principle arguments belong. To put the argument another way, the phrase 'nothing exists' is logically contradictory because the phrase 'X has an instance' (equivalent to 'X exists' in modern logic) means that there is an 'observer' who can, at least in principle, observe X. Thus 'nothing has an instance' would mean that an observer has not observed anything. But the observer has observed himself, or at least he himself exists, which means it is not true that 'nothing has an instance'. Cogito ergo sum. Assuming the truth of 'nothing has an instance' implies its falsity, which means that it is contradictory and hence false. We do not defend this self-reference argument: we merely note it because of its Anthropic Principle flavour. The philosopher Charles Hartshorne, who is generally recognized as the most influential defender of the ontological argument in the twentieth century, is a pantheist in the sense described above in his ontology, and he believes that 'something exists' is a logically necessary truth. For Hartshorne, the phrase 'God exists necessarily' means that the non-existence of the Universe is a logical contradiction. (His critics, e.g.
Hick, seem unaware of this, and base their refutation of his arguments on another, more traditional concept of deity.) If one does not accept the non-existence of the Universe as logically contradictory, then one is forced into Bertrand Russell's position of regarding the Universe's existence as a brute fact. But if the speculations of some modern cosmologists are correct, there may be only one unique Universe which is logically possible, and the assumption of the Universe's existence is the only assumption we have to make. In this chapter we have attempted to outline the history of Design Arguments and the philosophical debates surrounding them. In this way we have been able to introduce some of the questions touched upon by the modern Anthropic Principles. At the very least we aim to have shown that the Anthropic Principle is not the new and revolutionary idea that

many scientists take it to be. We have argued that the Anthropic Principles are but a modern manifestation of the traditional tendency to frame design arguments around successful mathematical models of Nature. Investigation reveals that there have existed quite distinct teleological and eutaxiological Design Arguments, a divide which mirrors that between the different varieties of Anthropic Principle. We found that both Western and Eastern cultures acquired an interest in the question of design. What characterizes the European interest especially is the use of these arguments to prove the existence of a deity from the apparent purpose or harmonious workings of the machinery of Nature. The surprising persuasiveness of such arguments can be traced to the dramatic success of the Newtonian approach to science to which they were wedded. This led us to consider the famous Cosmological Argument for the existence of God in some detail and to discuss its connections with the Anthropic Principles. The blow dealt by Darwin to the traditional design arguments founded upon the existence of environmental adaptation revealed two interesting features. On the one hand, the early Design Arguments played a key role in leading Darwin to develop his theory of natural selection; on the other, we must recognize that this advance still left untouched most of the design arguments of the day that were framed around non-biological phenomena. It is this class of eutaxiological design argument that has evolved into the more precise examples which motivated the modern Anthropic arguments. One of the most interesting features to emerge from the study of biological populations has been the possibility that order can develop spontaneously. Modern ideas concerning the spontaneous generation of order in social systems were discussed, together with the relevance of this for the teleological behaviour of their members.
This departure prepares the ground for a more detailed investigation of the use of teleological arguments in science and philosophy in the next chapter.

References

1. D. E. Gershenson and D. A. Greenberg, Anaxagoras and the birth of physics (Blaisdell, NY, 1964).
2. These fragments are listed in ref. 1 along with subsequent citations. They are also listed in G. S. Kirk and J. E. Raven, The presocratic philosophers: a critical history with a selection of texts (Cambridge University Press, 1957). Hereafter we shall reference fragments according to their catalogue number in Kirk and Raven, for example as KR999.
3. See Simplicius, Physics 164, 24; KR504: 'mind controlled also the whole rotation, so that it began to rotate in the beginning. And it began to rotate first from a small area... Mind arranged... this rotation in which we are now rotating, the stars, the sun and the moon'.
4. Anaxagoras, KR504.


5. Aristotle, Parts of animals, 687a7.
6. Aristotle, Metaphysics, 985a18.
7. Plato, Phaedo, 98B7 (KR522). For some further criticism, see Simplicius, Physics 327, 26.
8. Aristotle, De Caelo 2, 300b20. See also KR444151617.
9. J. A. Wheeler, 'Genesis and observership', in Foundational problems in the special sciences, ed. R. Butts and J. Hintikka (Reidel, Dordrecht, 1977).
10. Plato, The Republic, Bk 7.
11. Xenophon, Memorabilia, I(1), 10.
12. Xenophon, Memorabilia, IV(3), 5.
13. Diogenes, KR604.
14. KR505.
15. KR564.
16. For a discussion of Aristotle's cosmology see Theories of the universe, ed. M. K. Munitz (Free Press, Glencoe, 1957).
17. Aristotle, Physics, Bk 2, §2.
18. Aristotle, cited in On man in the universe (W. Black, NY, 1943), p. xiii.
19. Aristotle, Parts of animals, Bk 1, 1. There exists at least one example of how these unusual metaphysical ideas, so different from those of modern scientists, could be effective in making predictions. The Alexandrian, Hero (c. 50), was able to deduce the law of optical reflection from the Aristotelian 'axiom' that everything strives towards an optimal, perfect end. He interpreted this to mean that light rays should always traverse the shortest path available to them—this was the first use of a variational principle. See § 3.4 for further discussions of Hero's work.
20. Aristotle, Physics, cited by H. Osborn in From the Greeks to Darwin, 2nd edn (Scribners, NY, 1929), p. 85.
21. Aristotle, Parts of animals, II, 1.
22. Aristotle, Physics, II, 8.
23. Theophrastus, Metaphysics, transl. W. D. Ross and F. H. Forbes (Clarendon Press, Oxford, 1929).
24. Note that he does not wish to outlaw the study of teleology completely because he feels the study of phenomena apparently exhibiting design or remarkable contrivance to be legitimate. He merely wishes to encourage a little more scepticism in the deployment of explanations based on final causation. This is rather similar to modern approaches to 'design'. The Anthropic Principle picks on a large number of remarkable coincidences of Nature for examination but does not aim to use teleological reasoning to explain specific local phenomena.
25. Epicurus, Letter to Pythocles; Epicurus: the extant remains, transl. and notes by C. Bailey (G. Olms, Hildesheim, 1970).
26. Lucretius, On the nature of the universe, transl. R. E. Latham (Penguin, London, 1951).
27. ibid., p. 196.
28. ibid., p. 156.
29. S. Jaki, Science and creation (Scottish Academic Press, Edinburgh, 1974); F.

J. Tipler in Essays in general relativity, ed. F. J. Tipler (Academic Press, San Francisco, 1980), p. 21.
30. Cicero, The nature of the gods, transl. H. C. P. McGregor (Penguin, London, 1972), p. 89.
31. ibid., p. 161.
32. ibid., p. 162.
33. ibid., p. 163.
34. Galen, On the usefulness of the parts of the body, transl. M. T. May (Cornell University Press, NY, 1968).
35. Pliny, cited in C. Singer, A short history of scientific ideas to 1900 (Oxford University Press, Oxford, 1959), p. 106.
36. A. M. S. Boethius, The consolation of philosophy, transl. J. F. Steward and E. K. Rand (Wm. Heinemann, NY, 1918).
37. ibid., Bk I, vi.
38. ibid., Bk II, prose vii.
39. C. S. Lewis, The discarded image: an introduction to medieval and renaissance literature (Cambridge University Press, Cambridge, 1964), p. 10.
40. C. U. M. Smith, The problem of life: an essay in the origins of biological thought (Macmillan, London, 1976).
41. Averroes, Metaphysics, transl. M. Horten (M. Niemeyer, Halle, 1972), p. 200.
42. The Old Testament is, of course, full of particular Design Arguments. For the Jews, as God's chosen race, the idea of teleology would have been completely accepted and such notions as the 'Day of the Lord' evidence of ultimate Final Causation.
43. Maimonides, Guide for the perplexed, 2nd edn, transl. M. Friedlander (Routledge, London, 1956).
44. ibid., Bk III, Chapter 14.
45. Aquinas, Summa theologica, Q.2, Art 3; see also F. C. Copleston, Aquinas (Penguin, London, 1955).
46. R. Bacon, De sapientia veterum, in Works, ed. R. Ellis, J. Spedding, and D. Heath (Longmans, London, 1875-9), VI, p. 747.
47. Raymonde, 'Theologia naturalis sive liber creaturarum'; see C. C. J. Webb, Studies in the history of natural theology (Cambridge University Press, Cambridge, 1915).
48. N. Copernicus, On the revolution of the heavenly spheres, transl. C. G. Wallis, ed. R. M. Hutchins (Encyclopaedia Britannica, London, 1952). See also The nature of scientific discovery: Copernicus symposium, ed. O. Gingerich (Smithsonian, Washington, 1975).
49. ibid., Bk I, Chapter 6.
50. G. Galileo, Dialogues concerning two new sciences (Dover, NY, 1953), III, p. 400.
51. P. Janet, Final causes, transl. W. Affleck (Clark, Edinburgh, 1878). The quote is taken from p. 154.
52. For an interesting biography of Kepler by a strong admirer of his work see A. Koestler, The sleepwalkers (Grosset & Dunlap, NY, 1970). The section on Kepler in this work was previously published as The watershed: a biography of Johannes Kepler (Doubleday, NY, 1960).
54. M. Montaigne, Essays, ii, xii, transl. E. J. Trechmann (Oxford University Press, London, 1927).
55. ibid., ii, xii.
56. The earliest revival of 'atomism' may be N. Hill, Philosophia epicurea (Paris, 1601), who tried to make it theologically respectable by maintaining that the atoms and their motions were initiated by God, see G. McColly, Ann. Sci. 4, 390 (1949). The more familiar atomic revivalist is P. Gassendi, Observations on the tenth book of Diogenes Laertius (Lyon, 1649), and it appears he also took the view that a Deity must make the atoms.
57. F. Bacon, De augmentis scientiarum, Bk III, Chapter 5 (1623), and The philosophical works of Francis Bacon, ed. J. M. Robertson (Routledge, London, 1905).
58. ibid., pp. 96-7.
59. W. Harvey, Anatomical exercises in the generation of animals (London, 1653), exercise 11.
60. R. Boyle, A disquisition about the final causes of natural things (London, 1688), p. 157.
61. W. Harvey, The motion of the heart and blood, Chapter 8, in Great books of the Western World, ed. R. M. Hutchins (Encyclopaedia Britannica, London, 1952), Vol. 28.
62. The foundation of modern anatomical study began with Vesalius' De humani corporis fabrica (1543, Basileae; reprint by Culture et Civilization, Brussels, 1964), which contained the results of a large number of dissections and which corrected many accepted dogmas of Galen. It was also the first printed book to contain diagrams. Formerly these were absent from medical treatises, and illustrations were simply reproductions of classical anatomical configurations.
63. R. Descartes, Principles of philosophy, III, 3; The philosophical works of Descartes, ed. and transl. E. S. Haldane and G. R. T. Ross, 2 vols (Cambridge University Press, Cambridge, 1911-12).
64. R. Descartes, Le monde, ed. V. Cousin (F. Levrault, Paris, 1824-6), Chapter 6, p. 249.
65. R. Descartes, Principles of philosophy, III, 2, op. cit.
66. ibid., III, 4.
67. The design argument from a mechanical world model was seen also in Cicero and in many early Stoic writings.
68. R. Boyle, from The Christian virtuoso, cited in Anglicanism: the thought and practice of the Church of England, illustrated from the religious literature of the seventeenth century, ed. P. E. More and F. L. Cross (SPCK, London, 1935), p. 235.
69. R. Boyle, A disquisition about the final causes of natural things (London, 1688), p. 522.
70. R. Boulton (ed.), The theological works of the Honourable Robert Boyle (London, 1715), II, p. 235.
71. ibid., II, pp. 211-12.
72. ibid., II, pp. 221, 251.

73. R. Boyle, ref. 69, p. 528.
74. R. Boyle, Letter on final causes, a reply to Descartes cited in Janet, ref. 51, p. 481.
75. P. Gassendi, Objections to the 4th meditation, Vol. II (Amsterdam, 1642), p. 179.
76. R. Boyle, ref. 69, p. 520.
77. J. Webster, Academiarum examen (London, 1654), p. 15.
78. J. Ray, The wisdom of God manifested in the works of creation (London, 1691). This work went through twenty editions between 1691 and 1846.
79. J. Ray, cited in L. E. Hicks, A critique of Design Arguments (Scribner, NY, 1883).
80. R. Cudworth, The intellectual system of the universe, with notes and dissertations of J. L. Mosheim (Thomas Tegg, London, 1845). For further discussion of 'Plastic Nature' and Ray's interpretation of it see C. E. Raven, John Ray, naturalist: his life and works (Cambridge University Press, Cambridge, 1950) and C. E. Raven, Organic design: a study of scientific thought from Ray to Paley, 7th Dr. Williams' Library Lecture (Oxford University Press, Oxford, 1953).
81. B. Spinoza, Ethics (see The chief works of Benedict de Spinoza, transl. R. H. Elves, 2 vols, Bohn's Philosophical Library, repr. Dover, NY, 1951), appendix to Part I. The argument of Spinoza against the Design Argument is exactly the same as Cicero's for it—that men from underground would with 'stupid amazement' ascribe the regularities of Nature to a Deity.
82. B. Spinoza, ibid.
83. B. Spinoza, ibid.
84. B. Spinoza, ibid.
85. I. Newton, The reasonableness and certainty of the Christian religion (London, 1700), Bk II, 18. Elsewhere it appears that, although extremely pious, Newton preferred to allow others to make use of his ideas in support of the Design Argument rather than defend it himself directly. His views here were those of the Protestant orthodoxy of his day. The philosophical influence of his discovery of the universal law of gravity and the first constant of Nature can be seen in William Whiston's work Astronomical principles of religion (London, 1717), which was dedicated to Newton. On p. 131 he writes, 'The Universe appears thereby to be evidently One Universe; governed by One Law of Gravity through the whole; and observing the same laws of motion everywhere so that this Unity of God is now for ever established by that more certain knowledge we have of the Universe'.
86. I. Newton, Opticks, 4th edn (London, 1730), query 28.
87. R. Bentley, A confutation of atheism from the origin and frame of the world (London, 1693).
88. The Newton-Bentley letters are reprinted in I. B. Cohen, Isaac Newton's papers and letters on natural philosophy and related documents (Harvard University Press, Cambridge, Mass., 1958).
89. D. Gregory, cited in H. Guerlac and M. C. Jacob, J. Hist. Ideas 30, 307 (1969).
90. C. MacClaurin, An account of Sir Isaac Newton's philosophical discoveries (London, 1748), p. 405; see also p. 400.
91. G. Leibniz, in The philosophic works of Leibniz, ed. G. M. Duncan (London, 1890). On p. 101 he writes, 'the present state exists because it follows from the nature of God that he should prefer the most perfect'.
92. ibid., p. 36; this was a letter to Boyle.
93. G. Leibniz, in letter to de Volder; C. I. Gerhardt, Die philosophischen Schriften von G. W. Leibniz (Georg Olms, Berlin, 1960), Vol. 2, p. 193.
94. Parts of the present Universe which have apparently never been in causal contact during its entire past history exhibit the same large-scale density and temperature to within one part in ten thousand; see J. D. Barrow and J. Silk, Scient. Am. (April 1979). The reason for this synchronization is a key problem of modern cosmology; see J. D. Barrow and M. S. Turner, Nature 292, 35 (1981).
95. G. Leibniz, Refutations of Spinoza, in ref. 91, p. 176.
96. Second letter to Clarke, ref. 91, pp. 241-2.
97. N. Grew, Cosmologia sacra (London, 1701).
98. J. B. Moliere, Dom Juan, ou, Le festin de pierre, Act 3, scene 1 (1665), cited in P. Janet, Final causes, see ref. 51, p. 291; a modern edition is ed. W. Howarth (Blackwell, Oxford, 1958).
99. Voltaire, 'Atheist atheism', Philosophical dictionary (1769), transl. and ed. P. Gay, 2 vols (Basic Books, NY, 1955).
100. Article on 'Causes finales' in ref. 99, 1, 271.
101. Voltaire, Candide (Washington Square Press, NY, 1962).
102. J. D'Alembert, Traite de dynamique: discours preliminaire (1742), transl. Y. Elkane. The particular errors of Descartes he is referring to were his rules of collision: (1) if two bodies have equal mass and velocity before collision then after any collision they will have the same speeds as before, and (2) if two bodies have different mass then the lighter body is reflected and its velocity becomes equal to that of the larger one. Leibniz showed that these rules contradict the requirement of continuity on approach to the situation where the two masses are equal. This inconsistency was first pointed out by Leibniz in 1692, although it was not published until 1844.
103. P. L. M. Maupertuis, 'Essai de cosmologie' (1751), in Oeuvres, Vol. 4, p. 3 (Lyon, 1768).
104. ibid.
105. ibid.
106. ibid.
107. W. Derham, Physico-Theology (London, 1714).
108. W. Derham, Astro-theology (London, 1715). The titles of both Derham's books have a familiar ring to them. During the eighteenth century countless such works of natural theology were written. The naturalist Lesser wrote a sequence of books entitled Helio-theologie (1744), Testaceo-theologie (1757), and 'Insecto-theologie' (1757), whilst Fabricius authored a Theologie de l'eau (1741).
109. G. de Buffon, 'History of animals' (in Natural history, transl. H. D. Symonds, London, 1797), Chapter 1. Buffon also doubted that it could be argued that the entire collection of celestial bodies and motions could be conceived as contrived for the service of humans.

110. For a discussion of his work see I. Berlin, Vico and Herder: two studies in the history of ideas (Hogarth Press, London, 1976).
111. G. Vico, cited in On the study methods of our time (The Library of Liberal Arts, Bobbs-Merrill, NY, 1965).
112. The other categories were 'Conscienze'—the behavioural and everyday information common to all men—then intuition regarding ultimate things, and finally, human psychology.
113. D. Hume, Dialogues concerning natural religion, ed. N. Kemp Smith (Bobbs-Merrill, Indiana, 1977). First published in 1779, probably in Edinburgh, but this is not certain. See J. V. Price, Papers Bibliog. Soc. Am. 68, 119 (1974).
114. For some discussion of the relation between Newton's influence and Hume's writings see R. H. Hurlbutt III, Hume, Newton and the Design Argument (University of Nebraska Press, Lincoln, 1965).
115. Hume, op. cit., Part II.
116. ibid., Part II.
117. ibid., Part VIII.
118. ibid., Part VIII.
119. D. Hume, An enquiry concerning human understanding (London, 1748), § 11. Hume would probably have appeared a 'crank' to the Newtonians because of these outmoded vitalist views.
120. He also draws attention to the fallacy of composition which is latent in the classical argument that if everything in the Universe has a cause then so must the Universe. For example, although every member of a club has a mother, it certainly does not follow that every club has a mother! The original argument is based upon a simple confusion between different logical classes. See section 2.9 for further discussion.
121. Ref. 113.
122. Boswell recalls that Johnson once left the room when Hume entered; see J. Boswell, The life of Samuel Johnson (Oxford University Press, London, 1953). Johnson was, of course, an ardent admirer of Newton and his work.
123. For example, as any applied mathematician knows, an infinitesimal perturbation (cause) can often have an arbitrarily large effect when a system is unstable. For a discussion of the logical status of the Design Argument following Hume's work see R. G. Swinburne, Philosophy 43, 164 (1968).
124. E. Darwin, Zoonomia, or the laws of organic life, 2 vols (London, 1794). A discussion of his evolutionary ideas can be found in E. Krause, Erasmus Darwin (transl. W. S. Dallas, NY, 1880).
125. I. Kant, Der einzig mogliche Beweisgrund zu einer Demonstration des Daseyns Gottes (Konigsberg, J. J. Kanter, 1763), in Werke (Suhrkamp edn, Frankfurt, 1960), Vol. 1, p. 734. No full translation appears to exist. The translation we give is due to Hicks (ref. 79, p. 210), but it is wrongly attributed and referenced by him.
126. I. Kant, Kritik der reinen Vernunft (Riga, 2nd edn, 1787); Critique of pure reason, ed. N. Kemp Smith (Macmillan, London, 1968), p. 521.
127. ibid.


128. ibid.
129. I. Kant, Kritik der Urteilskraft (Berlin and Leibau, 1790), transl. J. C. Meredith, The critique of judgement, ed. R. M. Hutchins (Encyclopaedia Britannica Inc., Chicago, London, Toronto, 1952), § 85.
130. ibid.
131. W. Paley, Natural theology (1802), in The works of William Paley, 7 vols, ed. R. Lynam (London, 1825). Natural theology was so successful that an expanded edition containing notes and further illustrations was published in 1836: Paley's natural theology with illustrative notes, 2 vols, ed. H. Brougham and C. Bell (London, 1836). Paley began work on Natural theology in the 1770s when he delivered a series of sermons entitled The being of God demonstrated in the works of creation, Works, Vol. 7, pp. 405-44.
132. For example, Paul Janet in ref. 51.
133. Cited in Hicks, ref. 79, p. 232.
134. There has been considerable debate as to the source of Paley's watch story. It is probable that its source was B. Nieuwentyt's Religious philosopher (Vols 1-3), transl. J. Chamberlayne (London, 1719), where it appears in Vol. 1, p. xlvi; however, S. Leslie (History of English thought in the eighteenth century (ed. Harbinger, 1962), first publ. 1876, p. 347) thinks he abstracted it from Abraham Tucker (apparently Paley's favourite author), Light of Nature pursued (London, 1768-1778), 7 vols; i, 523, ii, 83. To confuse things further, Henry Brougham, Discourse of Natural theology (London, 1835), says Paley's work is chiefly taken from the writings of Derham! Remarkably, Encyclopaedia Britannica articles on 'Nieuwentyt' in the nineteenth century claim that Paley 'appropriated' Nieuwentyt's ideas and arrangement 'without anything like honourable acknowledgement'.
135. Natural theology, pp. 8-9.
136. ibid., pp. 40-5.
137. ibid., p. 30.
138. ibid., p. 60.
139. J. Clive, Scotch reviewers: The Edinburgh review (1802-15) (Faber & Faber, London, 1957), p. 149; cited in D. L. LeMahieu, The mind of William Paley: a philosopher and his age (University of Nebraska Press, Lincoln and London, 1976), p. 74. This book contains further interesting biographical information, as does M. L. Clarke, Paley: evidences for the man (SPCK, London, 1974).
140. Paley, op. cit., p. 152.
141. Ref. 131, Works, Vol. 1, p. 320.
142. Natural theology, p. 52.
143. ibid., p. 58.
144. ibid., p. 317. Today we would say that the fine structure constant, which gives the strength of interactions between matter and light, must be small.
145. Brinkley was probably responsible for almost all the material in Chapter 22 of Natural theology. Like Paley he had been a Senior Wrangler at Cambridge. He became professor at Dublin in 1792 and met Paley shortly afterwards through their mutual friend John Law.
146. Natural theology, pp. 318-19.

147. ibid., p. 319.
148. ibid., p. 323. This idea seems to have originated with Buffon and Newton, who estimated the age of the Earth using a law of cooling for a metal ball initially at red heat. Paley quotes Buffon's work on p. 339.
149. ibid., p. 332.
150. ibid., p. 333. The other case he mentions, with attraction varying as distance, corresponds to allowing a cosmological constant term in Newtonian gravity.
151. ibid., pp. 334-5.
152. ibid., p. 341.
153. See C. C. Gillespie, Genesis and geology (Harper, NY, 1951).
154. The original titles allocated and written under were: T. Chalmers, On the power, wisdom and goodness of God as manifested in the adaption of external nature to the moral and intellectual constitution of Man, 2 vols (London, 1833), eight editions by 1884; J. Kidd, On the adaption of external Nature to the physical condition of Man (London, 1833), seven editions by 1887; W. Whewell, Astronomy and general physics, considered with reference to natural theology (London, 1833), nine editions by 1864; C. Bell, The hand, its mechanism and vital endowments, as evincing design (London, 1833), seven editions by 1865; P. M. Roget, Animal and vegetable physiology, considered with reference to natural theology, 2 vols (London, 1834), five editions by 1870; W. E. Buckland, Geology and mineralogy, considered with reference to natural theology, 2 vols (London, 1836), nine editions by 1860; W. Kirby, On the power, wisdom and goodness of God as manifested in the creation of animals, and in their history, habits and instincts (London, 1835), six editions by 1853; W. Prout, Chemistry, meteorology and the function of digestion (London, 1834), four editions by 1855. The unusual independent addition was that of C. Babbage, The 9th Bridgewater treatise; a fragment, 2nd edn (London, 1838).
155. C. C. Gillespie, ref. 153.
156. J. W. Eckermann, Conversations of Goethe with Eckermann and Soret, transl. K. Oxenford (Bell, London, 1892), Vol. 2, p. 282. See also C. S. Sherrington, Goethe on Nature and on science (Cambridge University Press, Cambridge, 1949).
157. Darwin met Whewell, Sedgwick, and Babbage at Christ's College, Cambridge. All were strong supporters of the Design Argument.
158. C. Darwin, The autobiography of Charles Darwin, ed. N. Barlow (Dover, NY, 1958 [first published 1898]), p. 19.
159. ibid.
160. F. Darwin, The life and letters of Charles Darwin, 3 vols (Appleton, NY, 1897), Vol. 1, p. 282.
161. J. C. Greene, The death of Adam (Iowa State University Press, NY, 1961); J. R. Moore, The post-Darwinian controversies (Cambridge University Press, Cambridge, 1979).
162. C. Hodge, What is Darwinism (Princeton, NY, 1874), p. 52.
163. W. Graham, The creed of science (London, 1881), p. 319.
164. It would be nice to attribute the discovery of natural selection to this impetus from the Design Argument, but it would not be correct; Darwin

attributed his inspiration to reading T. Malthus, An essay on the principle of population (London, 1798), although his writings show the idea of natural selection was clear to him before he read Malthus; see S. Smith, Adv. Sci. 16, 391 (1960), and R. M. Young, Past & Present 43, 109 (1969).
165. C. R. Darwin, The origin of species (Murray, London, 1859).
166. A. Gray, Darwiniana (Appleton, NY, 1876).
167. C. Darwin, Autobiography and selected letters, ed. F. Darwin (Dover, NY, 1958), p. 308.
168. J. Fiske, Outlines of cosmic philosophy (Mifflin, Boston, 1874).
169. J. Fiske, Through Nature to God (Mifflin, Boston, 1899).
170. T. H. Huxley, Lay sermons, addresses and reviews (Appleton, NY, 1871), p. 301. Huxley was commenting on Kolliker's claim that 'Darwin is, in the fullest sense of the word, a teleologist'.
171. T. H. Huxley, Critiques and addresses (Macmillan, London, 1873), p. 305.
172. T. H. Huxley, 'On the reception of the Origin of Species', in Life and letters of Charles Darwin, ref. 160, Vol. 2, p. 179.
173. T. H. Huxley, 'The progress of science', in Method and results (Macmillan, London, 1894), pp. 103-4.
174. Lord Kelvin, cited by A. Ellegard in Darwin and the general reader (Goteborg, 1958), p. 562.
175. C. Maxwell, Address of the British Association (1873), in Scientific papers (Cambridge University Press, Cambridge, 1890), Vol. 2, p. 376.
176. L. J. Henderson, The fitness of the environment (Harvard University Press, Mass., 1913) discusses the fitness of various chemical elements for incorporation in living systems. In a later work, The order of Nature (Harvard University Press, Mass., 1917), he discusses the philosophical background to the apparent teleology of chemistry, but study of his unpublished papers shows that his interpretation of chemical 'fitness' changed in later life; see Chap. 3, ref. 339.
177. J. P. Cooke, Religion and chemistry (Scriven, NY, 1880), rev. edn. We were unable to locate a copy of the first edition anywhere in the United Kingdom.
178. ibid., p. 161.
Cooke's analysis of water provided the basis of several other American publications supporting the Design Argument; P. A. Chadbourne, Lectures on natural theology (Putnam, NY, 1870) has detailed discussion of the advantageous properties of carbon, nitrogen, oxygen and carbonic acid, whilst the later M. Valentine, Natural theology; or rational theism (Griggs, NY, 1885) summarizes Cooke's ideas about chemistry and also has extensive discussion of Darwinian evolution as a system of final causes. Ref. 51, p. 3. ibid., p. 415. ibid., p. 59. ibid., p. 148-9. Only recently has the connection between chaos and determinism been carefully considered by physicists. Behaviour that appears random to us— for example, fluid turbulence—is described by mathematical models that exhibit a very sensitive dependence on initial conditions. These mathematical models are deterministic in principle but not in practice: in order to

119 Design Arguments

184. 185. 186. 187. 188. 189. 190. 191. 192. 193. 194. 195. 196. 197. 198. 199. 200. 201. 202. 203. 204.

know the state of the system precisely at any future time one must know its initial state exactly. In practice, there always exists some minute error in our knowledge of the initial state, and this error is amplified exponentially in the evolution time of the system, so that very soon we have no idea where the state of the system resides. Laplacian determinism is impossible; see Chapter 3 for further discussion of the meaning of determinism. It is interesting to note that, as early as 1873, Maxwell urged natural philosophers to study 'the singularities and instabilities, rather than the continuities of things ... [which]... may tend to remove that prejudice in favour of determinism which seems to arise from assuming that the physical science of the future is a mere magnified image of the past.' [We are grateful to M. Berry for drawing our attention to this passage]. Maxwell's remarks are unusual, given the prevailing fascination of Victorians for the clockwork predictability of the world legislated by the Newtonian mechanical description. ibid., p. 202. ibid., p. 323-4. ibid., p. 424. ibid., p. 495, from Bernadin de St. Pierre, Etudes de la Nature—Bernadin also makes a memorable statement of what we might call 'natural antiselection'—'wherever fleas are, they jump on white colours. This instinct has been given them, that we may more easily catch them'. T. Lenoir, The strategy of life : teleology and mechanics in nineteenth century biology (Reidel, Dordrecht, 1982). J. D. McFarland, Kant's concept of teleology (University of Edinburgh Press, Edinburgh, 1970), pp. 69-139. M. Polanyi, Science 113, 1308 (1968). M. Polanyi, The tacit dimension (Doubleday, NY, 1966). H. Lotze, 'Lebenskraft', in Handwdrterbuch der Physiologie, Vol. 1, ed. Rudolph Wagner (Gottingen, 1842). This English translation of Lotze's remarks taken from ref. 188, 170-1. S. J. Gould, Natural History 92 (1983), 34-8. M. 
Eliade, From primitives to Zen: a thematic source book of the history of religions (Harper & Row, NY, 1967), p. 94. ibid., pp. 131-2. ibid., p. 135. D. Blanchard, unpublished notes, reported by Sol Tax, in Free Inquiry 2 (1982), No. 3, 45. M. Eliade, A history of religious ideas, Vol. 1 (University of Chicago Press, Chicago, 1978), pp. 59-60. M. Eliade, ref. 199, p. 90. S.-H. Nasr, An introduction to Islamic cosmological doctrines (Shambhala, Boulder, 1978), p. 150. Al-Biruni, Alberni India, quoted in ref. 201, p. 123. Al-Biruni, Kitab, al-jamahir, quoted in ref. 201, p. 123. J. Needham, Sciences and civilization in China, Vol. 2 (Cambridge University Press, Cambridge, 1956).

205. M. Eliade, Gods, goddesses, and myths of creation (Harper & Row, NY, 1974), p. 92.
206. Ref. 204, pp. 55-6.
207. Ref. 204, p. 36.
208. Ref. 204, p. 453.
209. Chu Hsi, Chu Tzu Chhüan Shu (Collected works of Chu Hsi), Chapter 43, quoted in ref. 204, p. 489.
210. Liu Tsung-Yuan, quoted in ref. 204, p. 577.
211. Lao Tzu, Tao Te Ching, Chapter 34, quoted in ref. 204, p. 37.
212. Confucius, Lun Yü (Analects), XII, 17, quoted in ref. 204, p. 10.
213. Confucius, Lun Yü (Analects), XII, 19, quoted in ref. 204, p. 10.
214. Ref. 204, Chapter 18.
215. N. Barry, 'The tradition of spontaneous order', Literature of Liberty 5 (1982), No. 2, pp. 7-58.
216. F. A. Hayek, The constitution of liberty (University of Chicago Press, Chicago, 1960); Individualism and economic order (University of Chicago Press, Chicago, 1948); Studies in philosophy, politics, and economics (Routledge & Kegan Paul, London, 1967); New studies in philosophy, economics, and the history of ideas (Routledge & Kegan Paul, London, 1978); The counter-revolution of science (Liberty Press, Indianapolis, 1979); The road to serfdom (University of Chicago Press, Chicago, 1944).
217. F. A. Hayek, Law, legislation, and liberty, Vol. 1: Rules and order (University of Chicago Press, Chicago, 1973), p. 37.
218. Ref. 217, p. 49.
219. Ref. 217, p. 37.
220. R. Axelrod, The evolution of cooperation (Basic Books, NY, 1984).
221. R. Axelrod, Am. Political Sci. Rev. 75, 306 (1981).
222. R. Axelrod and W. Hamilton, Science 211, 1390 (1981).
223. J. Maynard Smith, J. Theor. Biol. 47, 209 (1974).
224. J. Maynard Smith, Evolution and the theory of games (Cambridge University Press, Cambridge, 1982).
225. R. Dawkins, The selfish gene (Oxford University Press, Oxford, 1976).
226. F. C. Copleston and B. Russell, 'Debate on the existence of God', repr. in The existence of God, ed. J. Hick (Macmillan, NY, 1964).
227. W. L. Rowe, The cosmological argument (Princeton University Press, Princeton, 1975).
228. W. L. Craig, The cosmological argument from Plato to Leibniz (Macmillan, NY, 1980).
229. R. Swinburne, The existence of God (Oxford University Press, Oxford, 1979).
230. W. I. Matson, The existence of God (Cornell University Press, Ithaca, 1965).
231. A. Flew, God and philosophy (Harcourt Brace & World, NY, 1966).
232. G. H. Smith, Atheism: the case against God (Prometheus, Buffalo, 1979).
233. Not every theologian agrees with Flew; see, for example, the discussion in ref. 227.
234. D. Hume, ref. 113, part IX.

235. J. Hartle and S. W. Hawking, Phys. Rev. D 28, 2960 (1983).
236. S. W. Hawking, in Les Houches lectures 1983, ed. C. DeWitt (Addison-Wesley, NY, 1984).
237. L. Landau and E. Lifshitz, Quantum mechanics: non-relativistic theory, 2nd edn (Pergamon Press, London, 1965), p. 60.
238. R. Courant and D. Hilbert, Methods of mathematical physics, Vol. 1 (Interscience, NY, 1953), pp. 451-64.
239. J. Hick, Arguments for the existence of God (Macmillan, London, 1970).
240. P. Tillich, Systematic theology, Vol. 1 (University of Chicago Press, Chicago, 1967), p. 236; quoted in ref. 323, p. 33.
241. Ref. 226, p. 175.
242. Antony Flew used this definition of 'existence' to great effect against theism in his classic paper 'Theology and falsification', repr. in John Hick's The existence of God, ref. 226. It has also been reprinted in A. Flew and A. MacIntyre, New essays in philosophical theology (Macmillan, NY, 1964), together with commentary by theologians.
243. C. Hartshorne, A natural theology for our time (Open Court, La Salle, 1967), pp. 50, 83. Professor Hartshorne disagrees with our interpretation of his work (private communication to FJT). However, his objections seem to be due to the meaning he gives to the word 'Universe', which differs from ours. By 'Universe' he means 'a particular collection of laws and particles', whereas we mean 'all collections of laws and particles which ever did, do, or ever will exist'; see also ref. 245 below.
244. As the epigrams to this section suggest, the literature on the logical necessity (or lack of it) of the Universe is immense. Any philosopher worthy of the name discusses it. For recent guides to the literature on the question, the interested reader might consult M. K. Munitz, The mystery of existence: an essay in philosophical cosmology (Appleton-Century-Crofts, NY, 1965); Anna-Teresa Tymieniecka, Why is there something rather than nothing? (Van Gorcum, Amsterdam, 1966); and M. Gardner, Scient. Am. 232 (No. 2, Feb.), 98 (1975). The interested reader is advised to avoid R. Nozick, Philosophical explanations (Harvard University Press, Cambridge, Mass., 1981). Some philosophers are fond of arguing that nothing naturally engenders something. For a method of generating the entire real number system from the empty set, see J. H. Conway, On numbers and games (Academic Press, NY, 1976).
245. If the Universe is defined to be the totality of everything that exists, then every believer in God—however one defines God—is a pantheist by definition. The traditional distinctions between theism, deism, pantheism, etc. can be made only if it is possible to distinguish between God and the physical Universe. Most modern, philosophically minded theologians would contend that such a distinction can be made, but that the physical universe is actually a proper subset of God: the physical universe is in God, but God is more than the physical universe. This position was termed panentheism (literally, all-in-God) by the eighteenth-century German philosopher K. C. F. Krause. The philosopher R. C. Whittemore has pointed out (in an unpublished manuscript, 'The universal as spirit') that most of the philosophers traditionally regarded as pantheists (that is, it has been thought that these men identified God with the physical universe) were actually panentheists. For example, Spinoza asserted in a letter that 'I assert that all things live and move in God ... However, those who think that the Tractatus Theologico-Politicus rests on this, namely, that God and Nature (by which they mean a certain mass, or corporeal matter) are one and the same, are entirely mistaken' (p. 343 of The correspondence of Spinoza (Letter LXIII), transl. and ed. A. Wolf (Dial Press, NY, 1955)). Panentheism is distinguished from theism by saying that the latter contends that God is wholly other than the world. A problem with this distinction is that it is difficult to find a 'theistic' philosopher or theologian who really believes in theism in this sense of the word. For example, mystical Judaism is perhaps best described as panentheistic (G. G. Scholem, Major trends in Jewish mysticism (Thames & Hudson, London, 1955); H. Weiner, Nine and one-half mystics: the Kabbala today (Macmillan, NY, 1969)). Traditional Christianity also claims that not only is God everywhere in the physical universe, but—following St. Paul—that everything is also 'in' Him. The philosophical difficulty which panentheism must overcome is showing that it makes sense to talk about something which is outside (or 'transcendent to') the physical universe. One approach is to say that the organization (= information content) of the universe is distinguishable from and transcendent to the substance of which it is composed. This approach has been defended by Charles Hartshorne, whose work we discuss in Chapter 3. See also his book Omnipotence and other theological mistakes (State University of New York Press, Albany, 1984). According to J. Barr, Biblical words for time (SCM Press, London, 1962), pp. 80 and 145, the philology of the Biblical texts does not allow any distinction to be drawn between the ideas of God being everlasting in time and eternal in the sense that he is beyond time.
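The sensitive dependence on initial conditions described in the chaos discussion of note 183 is easy to exhibit numerically. The sketch below is our illustration, not from the text: it iterates the logistic map x → rx(1 − x) at r = 4, a standard chaotic system, from two starting points that differ by one part in a billion, and shows the gap between the trajectories being amplified exponentially, exactly as the note describes.

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1-x) with r = 4,
# started a tiny distance apart. The rule is perfectly deterministic, yet
# the initial error of 1e-9 grows roughly like 2**n per step (the Lyapunov
# exponent at r = 4 is ln 2), so after a few dozen steps the trajectories
# are unrelated and long-range prediction has failed in practice.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)   # initial error of 1e-9

gap_early = abs(a[5] - b[5])                           # still minute
gap_late = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # order unity
```

After five steps the separation is still microscopic; by step thirty the amplification factor 2^30 has blown the billionth-part error up to the full size of the attractor, which is the practical failure of Laplacian determinism the note points to.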

3 Modern Teleology and the Anthropic Principles

Once he has grasped this, he will no longer have to look at teleology as a lady without whom he cannot live but with whom he would not appear in public.
E. von Brücke

3.1 Overview: Teleology in the Twentieth Century

Science cannot solve the ultimate mystery of Nature. And it is because in the last analysis we ourselves are part of the mystery we are trying to solve.
M. Planck

Teleological modes of explanation, which for some two thousand years after Aristotle were regarded as vastly preferable to explanation by efficient causes, have been severely denigrated by the great majority of twentieth-century scientists. So far has the prestige of teleology fallen that the French molecular biologist Jacques Monod claimed that the 'cornerstone of biology', which he termed 'the Postulate of Objectivity', is 'the systematic or axiomatic denial that scientific knowledge can be obtained on the basis of theories that involve, explicitly or not, a teleological principle'. The rather violent hostility with which most scientists regard teleology is partly due to the failure of teleological arguments to account for adaptation in living things—evolution by natural selection is a much better explanation—but it is also due to the perceived paucity of significant scientific advances derived from teleological arguments. Most scientists would in fact claim that the attempt to introduce teleology into science has been positively harmful: not only has it led to no results, but it has seduced an enormous number of otherwise competent workers, who might have made important additions to true science, into wasting their lives exploring cul-de-sacs. We shall show in this chapter that although there is much truth in the above criticism of the use of teleology, it is not the whole truth: teleological ideas did on occasion lead to correct predictions, and in some cases these predictions were contrary to the ones obtained from Monod's 'Postulate of Objectivity'. In other cases, teleological arguments were able to obtain results—correct results—which the non-teleological methods of the time were too poorly developed to obtain. Even more significant were the broad philosophical questions which teleology led people to ask early in this century, questions which were not followed up at the time, perhaps because of the disrepute of teleology, but which bear a striking resemblance to some of the questions now being attacked on the frontiers of modern cosmology and high-energy particle physics. It will be the purpose of this chapter to discuss the many predictions made and the philosophical insights gained through the teleological approach.

We shall open our discussion of modern teleology with a summary of the use of this concept in contemporary biology. Monod notwithstanding, living creatures do exhibit purpose in their behaviour, and it is also obvious that bodily organs are most easily described in terms of the bodily purposes (functions) they serve. It is simply not possible to avoid using teleological concepts in biology, and in section 3.2 we shall describe the attempts of a number of biologists to prune teleology of the dubious features to which Monod objects. One feature of traditional teleology that modern biologists find particularly unscientific is its claim that mankind is the inevitable and foreordained outcome of the evolutionary process. One most often meets this claim in connection with the question of whether intelligent life exists on other planets. On the contrary, the consensus of modern evolutionists is that the evolution of intelligent life on Earth was not only not foreordained, it is so improbable that it is most unlikely to occur elsewhere in our Galaxy. We can understand its presence on Earth only by using the WAP: only on that unique planet on which it occurs is it possible to wonder about the likelihood of intelligent life. In section 3.2 we shall discuss briefly the reasons evolutionists have for believing intelligence to be an incredibly improbable accident.
Additional arguments against the existence of extraterrestrial intelligent life will be found in section 8.7 and Chapter 9. Intelligent life can appear only where more primitive life has evolved first and, as we shall see in Chapter 8, it is likely that primitive life of the type which can later evolve to intelligence can arise spontaneously only if it is based on certain very special properties of a few elements. This fact was first pointed out by the Harvard University chemist Lawrence Henderson early in this century, and we shall discuss his work in section 3.3 and in Chapter 8.

Monod's most serious charge against teleology is that it does not yield testable predictions, and is thus sterile and ipso facto unscientific. We shall begin a rebuttal of this claim in section 3.4, where we shall discuss action principles, a teleological formulation of physics. It is often claimed that action principles are fully equivalent to the standard non-teleological formulation of physical laws, but we shall demonstrate that this is not entirely true. Action principles have occasionally led to predictions which the standard non-teleological formulations of the day had been unable to make. Fermat was able to predict the law of light propagation through a material medium correctly using an action-principle argument, while Newton's non-teleological calculation led to an incorrect prediction. We ourselves shall point out that the very existence of a globally defined action for the universe requires it to be closed, a prediction which, as is well known, cannot be obtained by non-teleological arguments.

The Anthropic Principle, particularly in the form of SAP and FAP, suggests that mind is in some way essential to the cosmos. If this is so, it is natural to ask if mind is in fact everything. We have already seen how this question was posed and answered by Berkeley. Berkeley's empiricism had a strong influence on Kant, whose most significant German followers—Fichte, Schelling and Hegel—were led to a position vaguely analogous to Berkeley's which they called absolute idealism. We present an analysis of absolute idealism in section 3.5, using the concepts of computer theory to give a meaning to the basic undefined terms—such as 'thought' and 'mind'—of absolute idealism. We point out that there is a striking resemblance between certain speculations of modern computer theorists, in which the entire Universe is envisaged as a program being run on an abstract computer rather than a real one, and the ontology of the absolute idealists. As we shall show, the most important contribution made by Schelling was his introduction of a temporal notion of teleology into Western philosophy.

Modern Anthropic Principle arguments, particularly those which lead to testable predictions, use evolutionary timescales as a crucial step. We have briefly mentioned in Chapter 1 Wheeler's argument that the Universe must be at least as large as it is in order for it to exist long enough for life to evolve (see also § 6.3).
An analogous argument led Dicke to invent the WAP. In Chapter 7 timescale arguments will be important in obtaining SAP constraints on the wave function of the universe, and an evolutionary timescale will actually be derived as a testable WAP prediction in section 8.7. However, it is not often realized that an evolutionary timescale anthropic argument was used in the nineteenth century by the famous University of Chicago geologist Thomas Chamberlin to predict that the power source of the Sun was a force inside atoms. This prediction, which was ignored at the time, we count as the first successful Anthropic Principle prediction, and we discuss its genesis in detail in section 3.6. Chamberlin based his prediction on the Second Law of thermodynamics: in the absence of an atomic power source, the Sun could radiate for too short a period to be consistent with the evolutionary timescale.

Another implication of the Second Law was the inevitable extinction of life on Earth. Such an implication clearly conflicted with teleological contentions that life was important, indeed essential, to the cosmos; it indicated that the cosmos was not only non-teleological, it was dysteleological! The Second Law extinction was called the 'Heat Death'; we discuss and compare the opinions of nineteenth- and twentieth-century philosophers and scientists on the teleological implications of the Heat Death in section 3.7. One suggestion, made by the Austrian physicist Boltzmann, that the Heat Death is only a local phenomenon connected with a WAP selection of a local time direction, is sufficiently important to warrant discussion in a separate section, 3.8. In that section we also discuss two failures of Anthropic arguments to yield correct predictions, and we show that one failure was due to an incorrect use of the physics known at the time, while the other was based on incorrect observational data.

The dysteleology of the Heat Death and the collapse of Paleyian teleology under the impact of the Darwinian revolution forced theologians to modify drastically the traditional religious teleology. We discuss some of these new theological views of teleology in section 3.9. In general, the new views are much more abstract and less connected with the science of the day than were the older views. The primary exception was E. Barnes, an Anglican bishop and mathematician, who predicted on teleological grounds in the early twentieth century that the then currently accepted theory for the formation of the solar system had to be wrong, and he was correct. The most interesting defences of teleology in Nature were made in the post-Darwinian period by speculative philosophers rather than by theologians. We discuss the work of a number of these men—Marx, Spencer, Bergson, Alexander, Whitehead and Hartshorne—in section 3.10.
Like Schelling, these philosophers in their different ways believed in a progressive Cosmos, evolving towards a higher state. To Bergson and Hartshorne belongs the credit for using a temporal version of teleology to infer that there had to exist a uniquely defined global time-ordering, and that the lack of such a unique global temporal ordering meant that special relativity could not apply globally, though it might apply locally. This is now known to be correct: general relativity applied to cosmology allows the existence of such a unique universal time, although such a time is not permitted in special relativity.

When asked by the American philosopher Dudley Shapere for examples of teleology in biology which could be ruled out by his 'Postulate of Objectivity', Monod gave the Marxian and Spencerian theories of progress, but he singled out the teleological cosmological theory of Teilhard de Chardin as being particularly untestable and hence unscientific. We shall discuss the Teilhardian theory at some length in section 3.11. We point out that, far from being untestable, it actually makes a prediction about the nature of thought, and this prediction has been falsified! Nevertheless, the structure of Teilhard's teleological cosmos has certain features which must appear in any theory of a melioristic cosmos that is consistent with modern science. He was really the first philosopher of optimism who faced the problem of the dysteleological Heat Death head-on. Although his specific cosmological model failed to correspond to reality, it is by no means impossible to construct a testable theory of a progressive cosmos which is roughly analogous to the Teilhardian theory. For illustrative purposes, we shall construct such a theory in Chapter 10.

In general, it can be said that teleology failed, and gave either incorrect predictions or untestable nonsense, when it was applied in the small, to the details of the evolutionary history of the single species Homo sapiens, or to questions of the physical structure of living things, which is to say, when it degenerated into vitalism. This was the erroneous use of teleology which Kant warned against in the eighteenth century. When teleology was restricted to global arguments—its true domain, according to Kant and according to T. H. Huxley, as we saw in Chapter 2—its predictions have, as we described briefly above and as we shall see in detail in this chapter, been by and large correct.
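Fermat's least-time prediction mentioned in this overview is easy to check numerically. The sketch below is our illustration (the geometry and the two speeds are invented for the example, not taken from the text): it finds the point where a light ray crossing an interface between a fast and a slow medium minimizes total travel time, using a golden-section search, and confirms that the minimizing path obeys Snell's law, sin θ1 / sin θ2 = v1 / v2.

```python
import math

# Fermat's principle: of all paths from A = (0, 1) in medium 1 (speed v1)
# to B = (1, -1) in medium 2 (speed v2), light takes the one of least time.
# The free parameter is x, the crossing point on the interface y = 0.

def travel_time(x, v1=1.0, v2=0.75, a=1.0, b=1.0, d=1.0):
    """Time from A=(0,a) to the crossing point (x,0) and on to B=(d,-b)."""
    return math.hypot(x, a) / v1 + math.hypot(d - x, b) / v2

# Golden-section search for the minimizing crossing point on [0, 1]
# (travel_time is a sum of convex functions, hence unimodal here).
lo, hi = 0.0, 1.0
phi = (math.sqrt(5) - 1) / 2
for _ in range(200):
    m1 = hi - phi * (hi - lo)
    m2 = lo + phi * (hi - lo)
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

sin_t1 = x / math.hypot(x, 1.0)                 # sine of angle of incidence
sin_t2 = (1.0 - x) / math.hypot(1.0 - x, 1.0)   # sine of angle of refraction
ratio = sin_t1 / sin_t2                         # Snell: should equal v1/v2
```

The minimizer satisfies sin θ1 / sin θ2 = v1 / v2 = 4/3 to high accuracy, which is the (correct) sine law of refraction that Fermat's teleological argument yields and that Newton's corpuscular calculation got upside down.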

3.2 The Status of Teleology in Modern Biology

We are the products of editing, rather than authorship.
G. Wald

In the time of Paley and the Bridgewater Treatises, teleology was the explanation for most facts in the biological world. The marvellous adaptation of living creatures to their environment was attributed to the providential care and design of a Creator who constructed them to fit into their environment, just as a human watchmaker purposefully manufactures the components of a timepiece. The purpose of this intelligent Creator in constructing such creatures was also thought to be understood: the Universe and the creatures in it were created both for the enjoyment of the creatures and for the glory of the Creator.

The Darwinian revolution changed all this. Recall the words of T. H. Huxley, whom we quoted in Chapter 2: 'That which struck the present writer most forcibly on his first perusal of the Origin of Species was the conviction that Teleology, as commonly understood, had received its deathblow at Mr. Darwin's hands'. Adaptation of living beings was now seen to be due to natural selection acting over billions of years on modifications of organic structures created by random mutation. Some biologists, notably Asa Gray, attempted to retain the purpose of God in Nature by giving Him the credit for causing—and directing—the mutations, but this view died out in the face of enormous evidence that the variations of genotype were truly random: a chance collision of a cosmic ray with a DNA molecule could in principle give rise to a wholly new biological structure.

The nineteenth-century biologists saw teleology at work not only in the adaptation of living things, but also in the over-all relationship of living beings to each other. As the historian A. O. Lovejoy has pointed out, pre-evolutionary biology regarded the living world as organized into a 'Great Chain of Being', with single-celled organisms at the bottom of the Chain, mankind somewhere in the middle, the Angels above him, and God at the top. This picture of living creatures was static; the species were created to fit into this ordering at the beginning of time and were ordained to remain so ordered for all time. God's purpose never changed since He was unchanging. A species could never become extinct. The non-extinction of species was justified by an assumption which Lovejoy termed the Principle of Plenitude: '... that no genuine potentiality of being can remain unfulfilled, [and] that the extent and abundance of creation must be as great as the possibility of existence, and commensurate with the productive capacity of a "perfect" and inexhaustible Source'. The extinction of a species would mean that a gap in the Great Chain of Being would appear, and a possible species would not exist. The Principle of Plenitude was almost universally accepted by philosophers until well into the nineteenth century. The Darwinian revolution broke the Great Chain of Being and shattered the teleology-in-the-large of the Principle of Plenitude. Species arose in time and died out, to be replaced by other species.
Over the past hundred years a number of biologists have attempted to retain teleology-in-the-large by changing the Great Chain of Being from a static relation in space to a dynamic relation in time. The picture developed by these men—primarily vitalists such as Driesch and J. S. Haldane in the early part of this century, and du Noüy, Sinnott, Wright and Teilhard de Chardin in the post-World War II period—is of an inevitable development commencing three billion years ago from the simplest single-celled organisms then living to produce the incredible complexity of a human being today. These views, which the evolutionist George Gaylord Simpson has termed 'the new mysticism', have a certain beauty and emotional attraction, but are contradicted by a detailed examination of the evolutionary record. As Simpson and Ayala have discussed at length, there is no generally purposeful pattern evident in the collection of all lineages. Most lineages have died out, a few have regressed in the sense of becoming less complex, while some—including the branch of the evolutionary tree which has led to Man—have increased the complexity of their nervous systems dramatically.

One can adopt any of a number of criteria of progress: complexity of general structure, complexity of the nervous system, number of species in existence at a given time, number of ecological niches occupied. By any of these criteria, the collection of all lineages has at times advanced, but at other times retrogressed. As pointed out by Dobzhansky et al., the biosphere of the Earth is probably more advanced now than in Cambrian times in terms of the latter two categories, and some species—especially Man—are more advanced at present than any species in Cambrian times in terms of the former two categories. Other criteria of progress could be advanced (see refs. 12 and 13 for an extensive list), and by almost all of these criteria the biosphere sometimes progresses and sometimes retrogresses. The major problem with most of these criteria is that they involve a value element. What is progression from the point of view of one species would be retrogression from the view of another. Human beings tend to take an anthropocentric position, and regard any development which leads to human characteristics as progressive, and any other line of development as either retrogressive or neutral. Given the WAP observation that Man exists, it follows that there must exist a lineage which is progressive by one of the anthropocentric definitions of progression, but there is no guarantee that a planet which contains living things must inevitably evolve an intelligent species, and so there is no guarantee that a biosphere anywhere would be progressive in this sense, and no guarantee that an intelligent species would continue to develop in intelligence.
It is often claimed, particularly by believers in the existence of intelligent life on other planets, that because intelligence is advantageous in the struggle for life, natural selection will act to force an increase in the complexity of the nervous system at least in some lineages, and that as a consequence the intelligence of the most intelligent creature on Earth in a given epoch will increase with time. However, this is not necessarily true, because it is not intelligence alone which generates selective advantage; a sophisticated nervous system requires a huge number of support systems—such as eyes, manipulative organs, organs for transport, and so on—if it is to be effective. It is quite possible that no lineage on an earthlike planet will evolve the necessary support systems for a human-level intellect, and possible that even if they do, the genetic coding of the support systems will be such that an increase in the complexity of the nervous system will necessarily be offset by degeneration of some essential support organs in all possible lineages on the earthlike planet. That such an outcome is quite possible can be seen by reference to several lineages on Earth. No lineage in the entire plant kingdom has shown a significant increase in its ability to process information since the
metazoan ancestors of the plants first appeared some 500-1000 million years ago. Such increases as have occurred—the ability to orient towards the light, or the ability of certain plants such as the Venus fly-trap to react to tactile sensations, for instance—have developed so slowly that were the increase to be projected into the future at the rate inferred in the past, it would require many trillions of years for the information-processing ability to reach the human level. Compare this with about 10 billion years, which is the total time the Sun will remain on the Main Sequence and radiate energy at an approximately constant rate. And it is most unlikely the rate of increase of information-processing ability could continue at the present rate, for plant metabolism simply cannot supply sufficient energy to support a large nervous system. Even in Homo sapiens, the brain is difficult to supply; it requires about 20% of the energy consumed by the body when resting. This fraction is comparable to the over-all metabolic requirements of active reptiles of comparable body size, and for this reason the paleontologist D. A. Russell has concluded that 'a large brain is incompatible with a reptilian metabolism'. On the Earth, out of many millions of lineages, only birds and mammals have a sufficiently high metabolic rate to support a large brain. For reptiles, the advantages of intelligence are irrelevant; they are unable to evolve human-level intelligence no matter how advantageous it is, unless they first evolve a non-reptilian metabolism. It is nevertheless true that on the Earth there has been an increase of encephalization, which is the ratio of brain weight to body weight, in some lineages since the evolution of metazoans.
Encephalization is thought to be a better measure of intelligence, or information-processing ability, than brain weight, because much of the brain is used to control body functions, and the larger the animal, the larger the brain it must have in order to control these functions. The increase in the encephalization in the human lineage is in accord with an evolutionary trend established 200 million years ago. However, the encephalization rate was altered dramatically some 230 million years ago, at approximately the same time as the massive extinction which defines the Permo-Triassic boundary: the rate of encephalization was much faster prior to this extinction than it was afterwards. Had the older rates persisted, a human level of encephalization would have been reached 60 million years ago, while the more recent rates of encephalization would have required 20 billion years to attain a human level from the level characteristic of primitive metazoans. The higher rate of encephalization characteristic of the pre-Triassic period was essential for the evolution of humanoid intelligence on Earth; the later rate would have been quite inadequate. Much of the earlier encephalization occurred in the oceans, and it is not at all clear it could have
continued to the human level. Technology requires a terrestrial environment. There is some evidence that encephalization goes just so far in marine lineages, and then stops increasing. For example, the encephalization of the cetacean (dolphin) lineage, which is comparable to that of humans, reached its present level some 20-30 million years ago, but has undergone no significant change since. The dolphins are believed by most biologists to have intelligence comparable to that of dogs. Very little is known about the evolution of the cephalopods, such as squid and octopi, which are often cited as highly intelligent creatures with large brains, for such soft-bodied animals leave little trace in the fossil record. However, the encephalization of the cephalopods has certainly not increased as rapidly as that of the vertebrates over the past 500 million years. What is known of their evolution is consistent with a rapid early encephalization, followed by essentially no increase in encephalization, as happened with the dolphins. Even if it evolves, high encephalization by no means guarantees the survival of a species or evolution to a higher grade. The Proboscidea (elephants) have an encephalization markedly higher than most other mammals, and yet they have been in decline since the Miocene, being represented by only two living species. They are survived by equally large but less-encephalized animals in similar ecological zones. Survival requires a good many animal body systems—and a benign environment—in addition to intelligence. In fact, as the evolutionist C. O. Lovejoy points out, an increased information-processing capacity in the nervous system is actually a reproductive liability both pre-natally (since a complex nervous system requires a long gestation period) and post-natally (since it takes longer to raise and teach the young). Intelligence has no a priori advantage; rather, it is a clear and unmistakable reproductive hazard.
Thus for this reason alone we would expect such capacity to be selected for 'only in rare instances'. Primates are such an instance, but in this order of mammals encephalization is to a great extent directly related to highly unusual feeding strategies and locomotion. Furthermore, primate encephalization cannot be regarded as a typical trend of the mammals, because the primates are unusually primitive in the majority of mammalian traits. Even amongst the primates a well-defined limit on the degree of encephalization was reached in the Miocene in all primate lineages except that leading to Homo sapiens, and the other hominid primates were replaced by less encephalized, more reproductively successful cercopithecoids. In short, the evolution of 'cognition', or intelligence and self-awareness of the human type, is most unlikely even in the primate lineage. As C. O. Lovejoy puts it:

. . . man is not only a unique animal, but the end product of a completely unique evolutionary pathway, the elements of which are traceable at least to the beginnings of the Cenozoic. We find, then, that the evolution of cognition is the product of a variety of influences and preadaptive capacities, the absence of any one of which would have completely negated the process, and most of which are unique attributes of primates and/or hominids. Specific dietary shifts, bipedal locomotion, manual dexterity, control of differentiated muscles of facial expression, vocalization, intense social and parenting behaviour (of specific kinds), keen stereoscopic vision, and even specialized forms of sexual behaviour, all qualify as irreplaceable elements. It is evident that the evolution of cognition is neither the result of an evolutionary trend nor an event of even the lowest calculable probability, but rather the result of a series of highly specific evolutionary events whose ultimate cause is traceable to selection for unrelated factors such as locomotion and diet.

The believers in the existence of beings on other planets with human-level intelligence often cite the convergent evolution (which means the independent evolutionary invention of a trait in two unrelated lineages) of eyes in vertebrates and cephalopods as indicating that the convergent evolution of intelligent life on different planets is not too improbable. The response to this argument by the great evolutionist Ernst Mayr is worth quoting in full:

. . . the case of the evolution of eyes is [indeed] of decisive importance in the argument about the evolution of intelligence. The crucial point is that the evolution of eyes is not at all that improbable. In fact whenever eyes were of any selective advantage in the animal kingdom, they evolved. Salvini-Plawen and myself have shown that eyes have evolved no less than 40 times independently in the animal kingdom. Hence a highly complicated organ can evolve independently, if such evolution is at all probable. Let us apply this case to the evolution of intelligence. We know that the particular kind of life (system of macromolecules) that exists on Earth can produce intelligence. We have no way of determining whether there are any other macromolecular systems elsewhere in the universe that would have the capacity to develop intelligence. We know however, as I have said, that we do have such a system on Earth and we can now ask what was the probability of this system producing intelligence (remembering that the same system was able to produce eyes no less than 40 times). We have two large super-kingdoms of life on Earth, the prokaryotes and the eukaryotes, with thousands of evolutionary lines, each of which could lead theoretically to intelligence. In actual fact none of the thousands of lines among the prokaryotes came anywhere near it. There are 4 kingdoms among the eukaryotes, each again with thousands or ten thousands of evolutionary lineages. But in three of these kingdoms, the protists, fungi, and plants, no trace of intelligence evolved.
This leaves the kingdom of Animalia to which we belong. It consists of about 25 major branches, the so-called phyla, indeed if we include extinct phyla, more than 30 of them. Again, only one of them developed real intelligence, the chordates. There are numerous Classes in the chordates, I would guess more than 50 of them, but only one of them (the mammals) developed real intelligence, as in Man. The mammals consist of 20-odd orders, only one of them, the primates, acquiring intelligence, and among the well over 100 species of primates only one, Man, has the kind of intelligence that would permit [the development of advanced technology]. Hence, in contrast to eyes, an evolution of intelligence is not probable.

For the above reasons, and many others which we omit for reasons of space, there has developed a general consensus among evolutionists that the evolution of intelligent life, comparable in information-processing ability to that of Homo sapiens, is so improbable that it is unlikely to have occurred on any other planet in the entire visible universe. The consensus view has been defended by many of the leading evolutionists, such as Dobzhansky, Simpson, Francois, Ayala et al., and Mayr. The only evolutionist of any standing who has disagreed with the consensus is Stephen Jay Gould, and even Gould claims conscious intelligence is sufficiently unlikely to evolve that, should Mankind blow itself to bits, 'Conscious intelligence . . . has no real prospect for repetition [on the Earth]'. (We might also mention that Mayr called Gould's arguments in favour of his anti-consensus position—which in reality does not differ that much from the consensus—a 'sleight of hand', and we agree with Mayr's assessment.) In short, there is no indication in the geological record that the evolution of intelligence is at all inevitable; in fact, quite the reverse. It is true that, in the words of Simpson, 'there is in evolution a tendency for life to expand, to fill in all available spaces in environments, including those created by the expansion of life itself'. But in so far as this occurs—and 'it does seem certain that life has, on the average, expanded throughout most of the evolutionary process'—this is due to the capacity of life to expand exponentially, combined with the fact that as more species come into existence, more ecological niches are formed, so more species can come into being, and so forth. There is absolutely no evidence to show it is due to some obvious over-riding Plan which is guiding the entire development. Furthermore, there is a definite limit to the expansion of life on Earth.
The biomass is ultimately restricted by the efficiency of the basic metabolic processes which govern all living things, the mass of the Earth, and the amount of sunlight which strikes the Earth. Thus the evidence is against some of the traditional conclusions of teleological explanation in biology, and this has led a number of well-known biologists, such as Mayr, to try to eliminate the concept of teleology from biology altogether. However, this is difficult to do. Animals, especially man, do show purposeful behaviour. In fact, as Monod has argued, 'purposeful behaviour is essential to the very definition of
living things'. (This does not contradict his anti-teleological views quoted in the introduction to this chapter, for Monod is only opposed to teleology in the large, to the idea that evolution has a plan.) Mayr and the other anti-teleological biologists are of course aware of this, and Mayr proposes to use the word 'teleonomic' to describe purposeful action in living creatures. This word allows Mayr to discuss purpose in biology without implying Design in the living world, as the use of the word 'teleological' in this context would tend to do. Dobzhansky et al. and Ayala, on the other hand, feel that such a terminological innovation would introduce more confusion than clarity into the analysis of the purposeful behaviour of living beings. In their opinion, we may as well admit that individual living things do exhibit teleology. Ayala has distinguished two valid uses of the teleological concept in biology. The first, which he calls artificial teleology (external teleology in the nomenclature of Dobzhansky), is purposeful behaviour, or the teleology exhibited by objects constructed for a definite purpose. The watch—the favourite example in the Design Argument—fits into this category. The nests of birds, the hives of bees, and the burrows of certain rodents are examples of objects constructed for a definite purpose by non-human living creatures; they are also said to exhibit artificial teleology. The purposeful actions of living beings—a man making a watch or a mountain lion hunting a deer—are also examples of artificial teleology. In all cases of artificial teleology, it is possible to discover the action of some nervous system which either directs the behaviour toward some discernible end, or controls the construction of an object which will be used for some discernible purpose. The hand of a man and the wing of a bird also serve definite purposes: the former is used for manipulation and the latter for flying.
However, they were not constructed under the guidance of a complex nervous system with a view to serving these purposes. They were created by natural selection acting upon the phenotypic results of random mutations in the genotype. Nevertheless they do serve a discernible purpose, and so are said by Ayala to exhibit natural teleology (internal teleology in the nomenclature of Dobzhansky). One can subdivide natural teleology into two types: determinate natural teleology and indeterminate natural teleology. Determinate teleology occurs when the end purpose is achieved independently of small environmental fluctuations. Examples are the development of an egg into a chick, and of a human zygote into a baby. In the terminology of Ayala, indeterminate natural teleology occurs when the final state is not uniquely determined from the initial state; indeed, the final state of the system is just one of several possible final states which could have arisen from the system's initial state. We use the term 'indeterminate natural teleology' in
those cases where we are trying to discuss the evolution of a system in terms of its final stage, but where this final state is not the goal of a directing nervous system, nor the result of a deterministic developmental process. The evolution of a primate lineage into Homo sapiens is an example of indeterminate natural teleology. We are extremely interested in knowing just how the final state—mankind—came about, but this final state was not an inevitable evolutionary outcome of any of the primate species which existed ten million years ago. Had the environmental pressures or the sequence of mutations been slightly different at any point during this period, the human species never would have arisen. Nevertheless, from our WAP viewpoint we want to know the steps in the evolutionary process leading to Man, so an explanation of this process is crucially dependent upon the final stages. We sift through the complex interaction of the closely-related hominid lineages to find the unique class that leads to Homo sapiens: the development of the others is of much less interest. This explanation is thus a teleological one; in fact, one of indeterminate teleology, since the specific environmental pressures and mutations which arose along the way could not be predicted (by biological means) from the initial biological state, and also one of natural teleology, since no nervous system was guiding the evolution of the primates toward the goal of mankind. One can draw an analogy with the study of human history. In the nineteenth century a major school of British historians (and the philosopher Spencer, as we shall see in section 3.10) regarded liberal democracy as the apex of human development and viewed political history as progress toward this state.
These scholars picked out those earlier events which led to liberal democracy, and de-emphasized or ignored those occurrences which did not seem to contribute to this development, even though some of those excluded events were regarded as most important at the time. The historian Herbert Butterfield felt this 'natural teleological' interpretation of history—he called it the 'Whig interpretation of history'—was a serious distortion of cultural development. We agree; and it is a distortion which arises from not taking WAP into account. Only if liberal democracy (the Whig Utopia) arises is it possible to believe that it will inevitably develop from earlier political systems. Judging from the historical record, it is more reasonable to say that from the information available to observers at a given epoch, the structure of the succeeding political system is unpredictable. Nevertheless, the people in the succeeding civilization are interested in the events that led to them, even if that history was most improbable, just as we are extremely interested in knowing the steps that led to the evolution of Homo sapiens, even though those steps were exceedingly improbable. Political history, like biological history, can be regarded from a teleological point of view if it is remembered that the teleology in question is indeterminate. The philosopher and biologist Grene has called the natural selection process which produces teleological structure in living things 'historical teleology'. She calls the teleology of organs which act in a useful way, like the wing of a bird, 'instrumental teleology', and she calls determinate natural teleological processes, like the development of an egg into a chick, 'development teleology'. The evolutionary biologists seem to agree on the natural divisions of the teleological concept in biology, even though the terms used for these various divisions differ from one biologist to another. The question of whether teleological explanations in biology can be translated into causal explanations is a subtle one. In company with other natural scientists, evolutionary biologists have generally assumed that such a translation could in fact be made, though perhaps only with great difficulty. The natural teleological development of the egg into a chick could in principle be explained in terms of a series of complex biochemical interactions among the molecules comprising the egg. A similar description could be made of the working of the human hand or a bird's wing. It might even be possible to explain the purposeful behaviour of human beings in terms of physical interactions, with the brain regarded as merely an extremely complex chemical computer. But it seems likely that such a purely causal, non-teleological and complete explanation of purposeful biological behaviour would be so complex that no such explanation will ever be achieved. The justification for this assertion is a simple numerical estimate of the complexity of living beings. The amount of information that can be stored in a human brain is estimated to be between 10^10 and 10^15 bits, with the lower number assuming there is one bit stored on the average for each of the brain's 10^10 cells.
Now about 1% to 10% of the brain's cells are firing at any one time, at a rate of about 100 hertz. This gives a computation rate of 10 to 1000 gigaflops (a gigaflop is 10^9 floating point computations per second). The lower bound of 10 gigaflops is about the rate at which the eye processes information before it is sent to the brain. For comparison, the fastest computer in existence today, the Cray-2, has a speed of 1 gigaflop and storage capacity for 2×10^10 bits (in 64-bit words). (The IBM-AT personal computer can have up to 10^7 bits of RAM. Currently available 32-bit personal computer central processors can address about 10^10 bits of RAM. However, currently available RAM chips can store only about 10^6 bits, but 10^9-bit RAM chips should be available by the year 2000.) So the most powerful computer has a storage capacity and information processing rate between 10 and 1000 times less than that of a human being. But only the information which a human being can process consciously,
or hold in the forefront of the mind, can be used in forming a humanly acceptable explanation. We don't know exactly how much this would be, but it is comparable in order of magnitude to the information coded in a single book, which is typically 1 to 10 million bits. No explanation humans have ever dealt with has been as complex as this. The content of most science books has been concerned with justifying the explanation rather than presenting it. Furthermore, there is an enormous redundancy in books. 10^6 bits is at least 4 orders of magnitude below the amount of information required for a numerical simulation of a human brain, assuming it could be done. The amount of information required for a numerical simulation of a higher mammal is within two orders of magnitude of that of a human being. This argument assumes of course that we require at least 10^10 bits—the lower bound to the brain capacity of the human mind—in order to carry out a numerical simulation of a human being. If anything, this is a wild underestimate, because it ignores round-off errors. Even more important, in fact the essential point in estimating the difficulty of carrying out a numerical simulation of a living creature, is that the actions of living creatures are unstable from the causal (numerical simulation) point of view: a tiny change in the initial input or stored information can lead to a drastic change in the macroscopic behaviour. For this reason it is not possible to reduce the amount of data required in a simulation much below 10^10 bits. We can drastically reduce the amount of data we require to understand our fellows because we know that they will typically react in certain ways to certain stimuli. But this drastic reduction in the data set is precisely what is accomplished by teleological explanation! Using teleology, we learn that certain data, processed via teleological concepts, are sufficient for us to understand human beings and animals.
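The order-of-magnitude arithmetic behind these estimates is easy to reproduce. The following sketch uses only the figures quoted above (10^10 cells, a lower figure of 1% active, firing at about 100 hertz); treating each firing as a single floating-point operation is our own simplifying assumption, not a claim of the authors, but it recovers the quoted lower bound of 10 gigaflops:

```python
# Sketch of the text's order-of-magnitude brain estimate.
# The "one floating-point operation per firing" rule is an
# illustrative assumption used here to recover the lower bound.

neurons = 1e10           # ~10^10 brain cells (also the 1-bit-per-cell storage bound)
active_fraction = 0.01   # lower figure: ~1% of cells firing at any moment
firing_rate_hz = 100.0   # ~100 firings per second per active cell

brain_ops_per_sec = neurons * active_fraction * firing_rate_hz
gigaflops = brain_ops_per_sec / 1e9
print(gigaflops)                    # 10.0 -- the quoted lower bound

cray2_gigaflops = 1.0               # Cray-2: ~1 gigaflop
print(gigaflops / cray2_gigaflops)  # 10.0 -- low end of the 10-1000 factor
```

Raising the active fraction to 10% gives 100 gigaflops; the quoted upper figure of 1000 gigaflops evidently requires further assumptions, such as several operations per firing.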
In contrast, a purely causal explanation cannot make use of the same simplifications in the data, for by assumption such an explanation is not allowed to organize the data teleologically. It will be possible, we believe, to construct a computer that can process information at the human level; that is to say, be as intelligent as a human being. In fact, our arguments in Chapters 9 and 10 will assume such a computer to be possible. But we will never be able to completely understand such a machine at the causal level; it will sometimes act unpredictably, and we will find teleological explanations of its actions more useful than causal ones, at least in understanding its most complex behaviour. This is not to advocate vitalism in computers; we assume of course that computer elements obey the laws of physics, and that there are no 'vital' forces acting anywhere in Nature. A similar view of teleology in computers can be found in a paper by the mathematician Norbert Wiener. 6

10

10

43

Modern Teleology and the Anthropic Principles

138

A position of our sort was perhaps best defined by Ayala, who distinguishes between three types of reductionism: ontological, methodological, and epistemological. Ontological reductionism claims that the 'stuff' comprising the world can be reduced ultimately to the particles and forces studied by physics; the vast majority of biologists (and we ourselves) are ontological reductionists. Methodological reductionism holds that in the study of living phenomena, researchers should always look for explanations at the lowest level of complexity, ultimately at the level of atoms and molecules (or even the elementary particles that compose them). We partially support this form of reductionism, noting however that such methods have definite limits, and other methods will often yield better results. In fact, many advances are due to letting different levels of explanation interact, and this will be our strategy in this book. Epistemological reductionism holds that theories and experimental laws formulated in one field of science can always be shown to be special cases of laws formulated in other areas of science. It is this form of reductionism which we deny. We do not think teleological laws either in biology or physics can be fully reduced to non-teleological laws, for the reasons given above. We note that even in physics the Second Law of Thermodynamics cannot be derived from molecular mechanics without anthropic assumptions (see section 3.8). The most indefatigable modern critic of methodological and epistemological reductionism has been Michael Polanyi, whose work we discussed briefly in Chapter 2. He always emphasized that he was an ontological reductionist.
The distinction which Ayala draws between various forms of reductionism suggests the following distinctions between various forms of determinism: Ontological determinism claims that the evolution equations which govern the time development of the ultimate constituents of the world are deterministic; that is, the state of these constituents at a given time in the future is determined uniquely by the state of these constituents now. All theories of physics which have ever been proposed as fundamental—Newtonian particle physics, the electromagnetic field equations of Maxwell, Einstein's general relativity theory for gravity, and even quantum mechanics—are ontologically deterministic theories. They differ only in the nature of the entities which are claimed as fundamental. For Newtonian physics, particles were fundamental; for Maxwell and Einstein, physical fields were fundamental; and for quantum mechanics, the wave function is fundamental. Although the fundamental constituents of the world have changed with each successive scientific revolution, the fundamental evolution equations for these entities have always been
deterministic. Thus there is no evidence whatsoever that the fundamental equations are not deterministic; in fact, to the extent that we believe the fundamental equations to be true, we are forced by the evidence to be ontological determinists. Methodological determinism holds that in the study of complex phenomena, such as living beings, we should always look for deterministic laws governing the phenomena. In our opinion, this form of determinism is much too strong. It is often the case that complex phenomena are better described by statistical laws in which chance is fundamental. In fact, the laws of classical thermodynamics are statistical laws which are often more useful in describing heat engines and living things than the deterministic laws from which they are often 'derived'. Epistemological determinism holds that it is possible, using the deterministic fundamental evolution equations (which are assumed to exist), to compute and hence predict the future behaviour of complex systems, in particular the future behaviour of living organisms. This form of determinism we also deny. The theory of quantum mechanics itself tells us that it is impossible to get the necessary information to predict the future wave function, even though the future wave function is in fact determined. We have argued at length above that the behaviour of living organisms like ourselves is too complex to be predictable by beings of similar complexity. There is considerable evidence that the behaviour of living beings cannot be predicted for any significant length of time by any intelligent being, no matter how intelligent. Computer scientists term a computation problem intractable if the number of computations needed to solve the problem grows exponentially with the length of time over which the prediction is to be made. Intractable problems are effectively unsolvable by computer, no matter how powerful.
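Intractability in this sense can be made concrete with a toy cost model. In the sketch below, the assumption that each additional step of prediction doubles the required computation is purely illustrative; the point it demonstrates is that no fixed computing budget, however generous, survives exponential growth in the prediction horizon:

```python
# Toy model of an intractable prediction problem: the work grows
# exponentially with the prediction horizon. The doubling-per-step
# cost is an illustrative assumption, not a measured law.

def computations_needed(horizon_steps, growth_per_step=2):
    """Computations required to predict horizon_steps ahead."""
    return growth_per_step ** horizon_steps

# Generous budget: ~a year on a machine doing 10^18 operations per second.
budget = int(1e18 * 3.15e7)

for steps in (10, 50, 100):
    work = computations_needed(steps)
    print(steps, "feasible" if work <= budget else "infeasible")
# 2**100 is about 1.3e30, dwarfing the ~3e25 budget: a faster machine
# shifts the feasible horizon by only a few steps.
```

This is why, under exponential growth, improvements in hardware buy almost nothing: multiplying the budget by a thousand extends the feasible horizon by only about ten doubling steps.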
Wolfram has recently shown that intractable problems are quite common in simple physical models; tractable problems may be the exception rather than the rule. In fact, the instability of living systems, which we noted above, probably makes the calculation of their future behaviour an intractable problem. The difficulty of translating the teleology of living systems into the usual causal language of physical science has led the economic philosopher Ludwig von Mises to draw a fundamental distinction between these sciences and the 'science' of human action, which is basically teleological. His view of human history is similar to the biologists' view of evolution: it is an example of indeterminate natural teleology. There are no 'historical forces' in the sense of Marx. There are only the plans of individual people, who only frame their purposes in the short term. These plans and their resulting actions interact to produce a development which has no regularity after the manner of physical laws, and which is unpredictable in the long run. He asserts that 'it is ideas that make history, and not history that makes ideas', and ideas which originate amongst a small number of intellectuals can be transmitted very rapidly and begin to strongly influence the actions of senior government officials and other members of the society. The result of this amplification of ideas is on occasion to change drastically the course of social evolution. Von Mises' student, Friedrich A. Hayek (whose work on spontaneous order we discussed in section 2.8), attributed the cause of this indeterminate teleology of a human social system to the inherent organizational complexity of the system:

Organized complexity here means that the character of the structures showing it depends not only on the properties of the individual elements of which they are composed, but also on the manner in which the individual elements are connected with each other. In the explanation of the working of such structures we can for this reason not replace the information about the individual elements by statistical information, but require full information about each element if from our theory we are to derive specific predictions about individual events. Without such specific information about the individual elements we shall be confined to what on another occasion I have called mere pattern predictions—predictions of some of the general attributes of the structures that will form themselves, but not containing specific statements about the individual elements of which the structures will be made up. This is particularly true of our theories accounting for the determination of the systems of relative prices and wages that will form themselves on a well-functioning market. Into the determination of these prices and wages there will enter the effects of particular information possessed by every one of the participants in the market process—a sum of facts which in their totality cannot be known to the scientific observer, or to any other single brain. It is indeed the source of the superiority of the market order, and the reason why, when it is not suppressed by the powers of government, it regularly displaces other types of order, that in the resulting allocation of resources more of the knowledge of particular facts will be utilized which exists only dispersed among uncounted persons, than any one person can possess.

Hayek is concerned with describing the behaviour of a free-market economic system, but it is clear that the teleological behaviour of this system is exactly the same as the teleological behaviour of the entire living world: the teleology is there, but it occurs only on the level of the individual, who has purposes planned only for the short-term future. The entire system has a teleological structure only in so far as these individual teleologies interact to govern the dynamical behaviour of the entire system. The long-term evolution of a biological or economic system is unpredictable, and any trends which may be visible at a given time could be reversed in the future. This makes it impossible for evolutionists to make long-term predictions about the future of the human race.

The local, short-sighted teleology of biological and economic systems does tend to increase the complexity of the systems, however. In part this occurs as a consequence of the increased stability of complex systems. As Paul Ehrlich, one of the leaders of the ecological movement, puts it:

. . . we have both observational and theoretical reasons to believe that the general principle holds: complexity is an important factor in producing stability. Complex communities, such as the deciduous forests that cover much of the eastern United States, persist year after year if man does not interfere with them . . . A cornfield, which is a man-made stand of a single kind of grass, has little natural stability and is subject to almost instant ruin if it is not constantly managed by man.

In the same work Ehrlich points out that attempts by Man to stabilize such a simplified ecosystem artificially often increase its instability. Of course, Hayek and Milton Friedman make the same point in regard to government attempts to stabilize the economy and the money supply: the effect of attempting to stabilize a complex system artificially is often to increase the instability rather than decrease it. (The Ehrlich statement is very similar to the Hayek statement above if the words 'Man' and 'ecology' in the former are replaced by 'government' and 'economic system' respectively.) In both ecology and economics the maximum use of information—and the maximum stability—occurs when no attempt is made to simplify the system by imposing a single goal or a small number of goals upon it. Maximum stability and maximum teleological development of the entire system occur when the teleology inherent in the system—the different interacting goals of all living things in an ecology or all humans in an economy—is maximized. Yet ecologists like Ehrlich seem unable to extend their correct observations and correct biological theories into political economy, even though their descriptions of ecological systems and their moral arguments in favour of natural systems are exactly the same as the descriptions of economic systems and the arguments in favour of free markets by Mises, Hayek, and Friedman. A similar criticism of ecologists like Ehrlich was made by William Havender, a biologist at the University of California at Berkeley. Ehrlich wants government to impose a single goal upon the whole of mankind:

Perhaps the major necessary ingredient that has been missing from a solution to the problems of both the United States and the rest of the world is a goal, a vision of the kind of Spaceship Earth that ought to be and the kind of crew that should man her.

The general complexity theory of the ecologists themselves shows that this attempt to impose a goal would have the same effect on the political-economic system as Man's interference has on the ecology. A complex system like an ecology or a market economy cannot have a goal in the sense that a single individual can, and any attempt to impose one leads to disaster. Since complex systems tend to be more stable than simple ones—this improves their selective advantage amongst systems and makes a given complex system difficult to replace except by one of increased complexity—there does seem to be a long-term trend of increasing complexity in the evolutionary record, according to Stebbins. However, this trend can be reversed—indeed, it occasionally has been—and cannot be regarded as a uni-directional teleological trend. In recent years a number of philosophers of science have attempted to describe the 'progressive' teleological development of science in terms of Darwinian evolutionary concepts. However, as Stephen Toulmin has emphasized, most of these philosophers have depicted the teleology as acting in the large to cause an inevitable development of science towards ultimate truth. Both Toulmin and Thomas Kuhn have attempted to argue that the teleology is local, just as in evolutionary biology; theories compete in the sense that scientists decide between them on the basis of such things as explanatory and predictive power amongst the theories which are known to the scientists at the time the decision is made, but there is no evidence that the historical sequence of physical theories is approaching some limit which could be termed 'Ultimate Truth'. As Kuhn puts it: 'Comparison of historical theories gives no sense that their ontologies are approaching a limit: in some fundamental ways Einstein's general relativity resembles Aristotle's physics more than Newton's'. This vision of the scientific enterprise was best summed up by Kuhn in the concluding pages of his famous work The Structure of Scientific Revolutions:

The developmental process described in this essay has been a process of evolution from primitive beginnings—a process whose successive stages are characterized by an increasingly detailed and refined understanding of nature. But nothing that has been or will be said makes it a process of evolution toward anything . . . need there be any such goal? Can we not account for both science's existence and its success in terms of evolution from the community's stage of knowledge at any given time? Does it really help to imagine that there is some one full, objective, true account of nature and that the proper measure of scientific achievement is the extent to which it brings us closer to that ultimate goal? . . . the entire [scientific development] process may have occurred, as we now suppose biological evolution did, without benefit of a set goal, a permanent fixed scientific truth, of which each stage in the development of scientific knowledge is a better exemplar. Anyone who has followed the argument this far will nevertheless feel the need to ask why the evolutionary process should work. What must nature, including man, be like in order that science be possible at all? It is not only the scientific community that must be special. The world of which that community is a part must also possess quite special characteristics, and we are no closer than we were at the start to knowing what these must be. That problem—What must the world be like in order that man may know it?—was not, however, created by this essay. On the contrary, it is as old as science itself, and remains unanswered.

It is the goal of the Anthropic Principle to answer it, at least in part.

3.3 Henderson and The Fitness of The Environment

What is matter?—Never mind
What is mind?—It doesn't matter.
Anon

Lawrence J. Henderson was a professor of biological chemistry at Harvard at the turn of the century, and he published his two seminal books on teleology, The Fitness of the Environment and The Order of Nature, in 1913 and 1917, respectively, before quantum mechanics was available to provide the basis for the understanding of the physical underpinnings of chemistry. Nevertheless, his discussion of what we might term 'physical teleology' was grounded on physical principles sufficiently general that the core of his argument has withstood the buffetings of the several scientific revolutions which have occurred between his time and ours. His work, as updated by several modern biochemists, notably George Wald, still comprises the foundation of the Anthropic Principle as applied to biochemical systems. We shall discuss more modern work in Chapter 8. Henderson was led to reflect on teleology in the biochemical world through his work on the regulation of acidity and alkalinity in living organisms. He noticed that of all known substances, phosphoric acid and carbonic acid (CO2 dissolved in water) possessed the greatest power of automatic regulation of neutrality. Had these substances not existed, such regulation in living things would be much more difficult. Henderson searched the chemical literature and uncovered a large number of substances whose peculiar properties were essential to life. Water, for example, is absolutely unique in its ability to dissolve other substances, in its anomalous expansion when cooled near the freezing point, in its thermal conductivity among ordinary liquids, in its surface tension, and in numerous other properties. Henderson showed that these strange qualities of water made it essential for any sort of life. Furthermore, the properties of hydrogen, oxygen, and carbon had a number of quirks amongst all the other elements that made these elements and their properties essential for living organisms.
These quirks were discussed in detail in his book The Fitness of the Environment. These properties were so outstanding in the role they played in living things that '. . . we were obliged to regard this collocation of properties as in some intelligible sense a preparation for the process of planetary evolution. . . . Therefore the properties of the elements must for the present be regarded as possessing a teleological character.' Henderson never actually asserted
that no life would be possible in the absence of the elements hydrogen, oxygen, and carbon, just that: 'No other element or group of elements possesses properties which on any account can be compared with these. All such are deficient at many points, both qualitatively and quantitatively. . . . The unique properties of water, carbonic acid, and the three elements constitute, among the properties of matter, the fittest ensemble of characteristics for durable mechanism.' In earlier days such observations would have been cited as evidence of a Designer—indeed, Henderson himself quotes the Bridgewater Treatise of William Whewell as pointing out many of the unique properties of water—but Henderson takes a distinctly modern approach. He discusses the theories of vitalism and mechanism at length in both of his books, and strongly criticizes the former, arguing that a scientist must always assume that living things operate according to physical laws, and that there are no laws like the vital forces of Bergson and Driesch which operate only in living things. In short, in the living world evolution is controlled by efficient causes and by efficient causes only. He, in contrast to the directed-evolution philosophers discussed earlier, bases his analysis on the assumption—which all moderns accept—that the development of life was at all times the result of natural selection acting on changes in the hereditary structure. Thus ultimately there is no teleology acting in a living organism; the planning which a living creature undertakes to guide its future actions can ultimately be reduced to mechanism, to the interaction of the elements in accordance with ascertainable physical laws. Furthermore, concerning the existence of a Designer, Henderson remained an agnostic. However, from the apparent 'preparation' of the elements and their properties for the eventual evolution of life he could not escape:

[we want a term] . . . from which all implication of design or purpose is completely eliminated. By common consent that term has come to be recognized as teleology. Thus we say that adaptation is teleological, but do not say that it is the result of design or purpose. I shall therefore . . . assert that the connection between the properties of the three elements and the evolutionary process is teleological and non-mechanical.

(Henderson was unaware of the distinction, which we introduced in Chapter 2, between teleology and eutaxiology, although this distinction had been introduced in 1883 by the American philosopher L. E. Hicks. Clearly, Henderson was impressed by eutaxiological, not teleological, order.) But how can this connection be non-mechanical if all interactions in the Universe, both living and non-living, are mechanical? The answer is simple: this teleological order of the three elements which is a preparation for life was imposed in the beginning: 'For no mechanical cause whatever is conceivable of those original conditions, whatever they may be, which unequivocally determine the changeless properties of the elements and the general characteristics of systems alike'. One might think that this state of affairs would make the scientific study of the teleological order impossible, for it would seem that the work of science consists of finding efficient causes. The conditions at the beginning are, as Henderson said, presently beyond investigation. Nevertheless, Henderson argued that one can study the teleological order by the probabilistic analysis standard in other areas of physical science, and that therefore conclusions reached through this analysis have a similar force:

The chance that this unique ensemble of properties should occur by 'accident' is almost infinitely small (i.e., less than any probability which can be practically considered). The chance that each of the unit properties [heat capacity, surface tension, number of possible molecules, etc.] of the ensemble, by itself and in cooperation with the others, should 'accidentally' contribute a maximum increment is also almost infinitely small. Therefore there is a relevant causal connection between the properties of the elements and the 'freedom' of evolution.

The 'freedom of evolution' was '. . . freedom of development. This freedom is, figuratively speaking, merely the freedom of trial and error. It makes possible the occurrence of a great variety of trials and a large proportion of successes'. That is, the peculiar properties of the three elements hydrogen, oxygen, and carbon permitted a large number of molecules to be formed, and this enormous number of molecules allowed a large number of possible organisms to be based on these molecules. If the properties of the elements were slightly different, if there were no carbon atoms in the world, if for instance living things attempted to substitute silicon instead, then vastly fewer molecules would be possible, and evolution by natural selection on different genotypes would be impossible. Probably no organisms as complex as a single cell would arise, and certainly no creatures as complex as human beings would evolve. 'Hence the operations of a final cause, if such there be, can only occur through the evolution of systems. Therefore the greatest possible freedom for the evolution of systems involves the greatest possible freedom for the operation of a final cause'. Thus the theory of evolution by natural selection—i.e., evolution by trial and error and not goal-directed evolution—was essential to Henderson's argument. Note that Henderson's concept of final cause, since it operates by allowing many possible developments rather than making one particular development inevitable, allows evolution to Man, but does not require it. It thus subsumes the notion of 'indeterminate natural teleology' of the modern biologists. Although the properties of the elements allowed the maximum possible
freedom of evolutionary development, the properties themselves were not free to interact with living things and so evolve themselves. In Henderson's opinion, this precluded a mechanical explanation of the elemental properties, and required a teleological explanation:

It cannot be that the nature of this relationship [between the elements which allows life to evolve] is, like organic adaptation, mechanically conditioned. For relationships are mechanically conditioned in a significant manner only when there is opportunity for modification through interaction. But the things related are supposed to be changeless in time, or, in short, absolute properties of the Universe.

This argument for the non-mechanical determination of the elemental properties assumes these properties to be unchanging:

Nothing is more certain than that the properties of hydrogen, carbon, and oxygen are changeless throughout time and space. It is conceivable that the atoms may be formed and that they may decay. But while they exist they are uniform, or at least they possess perfect statistical uniformity which leads to absolute constancy of all their sensible characteristics, that is to say of all the properties with which we are concerned. . . . Accordingly, the properties of the elements to be regarded are fully determined from the earliest conceivable epoch and perfectly changeless in time. This we may take as a postulate.

Although we now believe that the elements have evolved in the sense of changing their numbers relative to hydrogen, we still believe their properties to be fixed by natural laws, just as Henderson did. Thus the elements cannot 'evolve' in the sense of having the freedom to take different evolutionary pathways as living creatures can. This portion of Henderson's argument must still be regarded as sound. The part of his argument which is more questionable is his contention that the various unique properties of matter, which make the Earth's environment the fittest for the evolution of life, are statistically independent. This difficulty is the bugbear of all Anthropic Principle arguments. One can never be sure that future developments in physics will not show that the supposedly independent properties of matter are in fact subtly related, and that only one very particular collection of material properties is logically possible. To his great credit, Henderson was aware of this difficulty, and he attempted to meet it in two ways. First, he contended that there was a fundamental distinction between the laws of nature properly speaking, which might be deduced a priori from the laws of thought, and the properties of matter, which are not laws of thought. Henderson said the Second Law of thermodynamics might be an example of a law of thought: 'Possibly the second law of thermodynamics . . . might have been worked out by a mathematician in perfect ignorance of how energy should be conceived.' Although the laws of thermodynamics
may be laws of thought, the types of interactions and the types of matter to which they apply are not; forces other than those which actually exist are possible. In particular, '. . . the prediction of electrical phenomena by one ignorant of all such phenomena seems to be quite impossible'; see also ref. 77. Second, Henderson argued that an application of Gibbs' Phase Rule, a general theorem of thermodynamics which is so fundamental that it is unlikely ever to be overthrown, indicated that the elemental properties were indeed independent:

... since the whole analysis is founded upon the characteristics of systems and therefore upon concepts which according to Gibbs are independent of and specify nothing about the properties of the elements, it is unnecessary to examine the possibility of the existence of other groups of properties which may otherwise be unique.

In short, Henderson presented what must still be regarded as a powerful argument that the properties of matter are, in a fundamentally teleological sense, a preparation for life:

The properties of matter and the course of cosmic evolution are now seen to be intimately related to the structure of the living being and to its activities; they become, therefore, far more important in biology than has previously been suspected. For the whole evolutionary process, both cosmic and organic, is one, and the biologist may now rightly regard the Universe in its very essence as biocentric.

Henderson's work on the Fitness of the Environment had very little impact on his scientific contemporaries. The Fitness of the Environment was reviewed in Nature, but without critical comment; only a paragraph appeared summarizing the argument. The physiologist J. S. Haldane reviewed The Order of Nature for Nature. Haldane, who was at heart a vitalist and believed in goal-directed evolution, gave the book fulsome praise, but did not appreciate Henderson's arguments. The main effect of Haldane's reading was in influencing his son, J. B. S. Haldane, who in a number of letters to Nature used Henderson's ideas to explain why the laws of Nature are seen to have the properties they do. The greatest early impact of Henderson's ideas on his contemporaries was not in science but in theology, as we shall see in section 3.9. By and large, Henderson's work did not lead to any new work on the question of the fitness of the environment by scientists, although a few biologists, for instance Joseph Needham and George Wald, occasionally mention his work with approval. Most biologists, however, either ignored his work or took the attitude of the zoologist Homer Smith:

One should not be surprised that there is a remarkable 'fitness' between life and the world it lives in, for the fitness of the living organism to its environment and
the fitness of the environment to the living organism are as the fit between a die and its mould, between the whirlpool and the river bed.

3.4 Teleological Ideas and Action Principles

The great end of life is not Knowledge but Action.
T. Huxley

Teleological ideas have played a role in mathematical physics mainly in the form of 'minimal' principles. In 'minimal' (or, more precisely, extremum) principles, one deduces the behaviour of a physical system between times t1 and t2 by requiring that the evolution of the system be such as to minimize a certain quantity. For example, in the first use of a minimum principle in physics, Hero of Alexandria showed in the first century AD that if a light ray goes from an object to a mirror, and from the mirror to an observer's eye, the path taken by the ray is shorter than any other path from the object via the mirror to the eye. Putting the observed behaviour into teleological language, we would say the light ray seems to know that its goal is the observer's eye, and it picks out among all paths from the object to the mirror to the eye the shortest one. Its behaviour is teleological, in other words, since it is determined by its final destination. Hero did not discover anything new about the behaviour of light rays through use of the minimal principle, for the path taken by light during reflection from a mirror was already known. He did, however, regard his teleological principle as an explanation for the behaviour of light. Hero's explanation fitted in well with Aristotle's dictum that final causes were to be regarded as the primary causes. Furthermore, Aristotle himself had argued that planets moved in circular orbits because, of all closed curves bounding a given area, the circle is the shortest. Both Aristotle and Hero connected these shortest paths with the maximum speed of motion; that is, the motion also attempts to minimize the time spent in motion. The principle of least time was the basis of the next use of a minimal principle, by the seventeenth-century French mathematician and lawyer Fermat.
He argued that the behaviour of a ray of light in both reflections and refractions could be understood by assuming that it always travels from one point to another so as to make the time of travel a minimum. For reflection, Fermat's Principle of Least Time reduces to Hero's law of reflection; but for refraction, Fermat was able to show that his Principle implied both Snell's law (which was known at the time), and the fact that light travels more slowly in a medium with a higher refractive index (which was not shown experimentally until two centuries after Fermat). This is the first known case in mathematical physics where
thinking about physics teleologically led to an experimentally verifiable (and correct) prediction. Fermat's work led the German philosopher Leibniz to argue, in a letter written in 1687, that inasmuch as the concept of purpose was basic to true science, the laws of physics should and could be expressed in terms of minimum principles. It is not known whether he followed up this suggestion with an explicit reformulation of the laws of mechanics in terms of a minimum principle, but if he did, it was never published. The first such formulation was given by the French scientist Maupertuis, who in 1744 presented a paper to the French Academy of Sciences showing that the behaviour of bodies in an impact could be predicted by assuming the product mvs, where m is the mass, v the velocity, and s the distance, to be a minimum. He contended that his formulation indicated the operation of final causes in Nature, and that final causes imply the existence of a Supreme Being. Maupertuis, following Leibniz and Wolff, called the quantity mvs, which has dimensions of energy times time, the action. Maupertuis' Principle of Least Action was immediately generalized by his friend, the brilliant mathematician Leonhard Euler, into an integral theorem, valid for the continuous motion of a single particle acted on by an arbitrary conservative force. Euler showed that if the mass of the particle was assumed constant, then the integral ∫ v ds, taken along the path of the particle between its initial and final positions, was an extremum along the actual path of the particle. That is, if the value of ∫ v ds along the actual path were subtracted from the value of ∫ v ds along paths infinitesimally close to the actual path (the particle energy having the same value on all paths), this difference would be an infinitesimal quantity of second order.
This vanishing of the difference to first order is a necessary (but not a sufficient) condition for the integral ∫ v ds to be an actual minimum on the real path, and in fact there are cases in which the action is not a minimum for the actual path, but a maximum. The action principle of Euler was later extended to the case of motion of a system of interacting particles by Lagrange in 1760, and given a particularly useful formulation by Hamilton in 1835. In both the Lagrangian and Hamiltonian formulations the behaviour of a general physical system is determined by the requirement that the time integral of a function of the system be an extremum. Thus Maupertuis was incorrect in calling his discovery a principle of least action, though he was quite right in interpreting the principle as a teleological formulation of physics, since the motion of a physical system is determined in the action principle formulation by both the initial and the final states of the system. This aspect was also emphasized by Euler. Physicists have disagreed on the significance of the fact that mechanics
can be formulated in teleological language. Like Maupertuis, Euler was attracted to the Action Principle formulation of mechanical laws because of its teleological aspects. Euler also believed that the Action Principle formulation could solve problems which were intractable in the usual approach to mechanics. As Euler put it:

All the greatest mathematicians have long since recognized that the [least action] method . . . is not only extremely useful in analysis, but that it also contributes greatly to the solution of physical problems. For since the fabric of the universe is most perfect, and the work of a most wise Creator, nothing whatsoever takes place in the universe in which some relation of maximum and minimum does not appear . . . there is absolutely no doubt that every effect in the universe can be explained as satisfactorily from final causes, by the aid of the method of maxima and minima, as it can from the effective causes themselves . . . since, therefore, two methods of studying effects in Nature lie open to us, one by means of effective causes, which is commonly called the direct method, the other by means of final causes, the mathematician uses each with equal success. Of course, when the effective causes are too obscure, but the final causes are more readily ascertained, the problem is commonly solved by the indirect method; on the contrary, however, the direct method is employed whenever it is possible to determine the effect from the effective causes. But one ought to make a special effort to see that both ways of approach to the solution of the problem be laid open; for thus not only is one solution greatly strengthened by the other, but, more than that, from the agreement between the two solutions we secure the very highest satisfaction.
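Euler's 'indirect method' is easy to exhibit concretely for Fermat's refraction problem discussed earlier in this section. The sketch below (the endpoint coordinates and medium speeds are our illustrative assumptions, not from the text) finds the interface crossing point that minimizes the travel time of a ray, and checks that the result satisfies Snell's law, sin θ1 / sin θ2 = v1 / v2, with no appeal to the differential 'direct method'.

```python
# Fermat's principle of least time for refraction, checked numerically.
# A ray travels from A (medium 1, speed V1) to B (medium 2, speed V2),
# crossing the flat interface y = 0 at the point (x, 0).  The coordinates
# and speeds are illustrative choices.
import math

V1, V2 = 1.0, 0.7        # propagation speeds in the two media
A = (0.0, 1.0)           # source, above the interface
B = (1.0, -1.0)          # receiver, below the interface

def travel_time(x):
    """Total travel time of the ray via the interface point (x, 0)."""
    d1 = math.hypot(x - A[0], A[1])      # path length in medium 1
    d2 = math.hypot(B[0] - x, B[1])      # path length in medium 2
    return d1 / V1 + d2 / V2

def least_time_crossing(lo=0.0, hi=1.0, iterations=200):
    """Ternary search for the minimizing x (travel_time is convex in x)."""
    for _ in range(iterations):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if travel_time(m1) < travel_time(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    x = least_time_crossing()
    sin1 = x / math.hypot(x, A[1])                   # sine of incidence angle
    sin2 = (B[0] - x) / math.hypot(B[0] - x, B[1])   # sine of refraction angle
    print(sin1 / sin2, V1 / V2)  # the two ratios agree: Snell's law
```

Because the travel time is a convex function of the crossing point, a simple ternary search suffices to locate the minimum; no equation of motion for the ray is ever written down.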

On the other hand, Poisson, Hertz, and Mach felt that such a formulation was merely a mathematical curiosity, rather than something fundamental about the world. In particular, these men emphasized that the usual approach to mechanics and the action principle approach are really mathematically equivalent, but the usual approach—which calculates the future state from initial data—is much easier to handle in practical problems. Even those who believe in the fundamental nature of action principles rarely do calculations by computing the minimum of the action integral. Instead, they use the action principle only to infer the differential equations of motion which allow one to calculate a future state from the present state. Once they have obtained the equations of motion, they proceed in the usual way. In addition, the opponents of the action principle have expressed a hostility toward introducing the concept of teleology into physics, for this notion has usually served as a wedge to infiltrate religious and metaphysical ideas into what should be a purely physical discussion. D'Abro, and, as we shall see in more detail in section 3.10, Henri Bergson, have pointed out that in a deterministic system there is no real difference between a teleological description and a 'mechanistic' description—a description which deduces the future states
from initial state information via the equations of motions. If the system is deterministic, then one could calculate the initial state from the data available in the final state; thus in this sense the initial state is determined by the final state. Finally, Yourgrau and Mandelstam have argued that for any set of evolution equations, an 'action' can be defined which is an extremum for the actual path. This would mean that action principles in general have no physical content. Nevertheless, many physicists have contended that the action principle formulation of mechanics is more fundamental than the mechanistic formulation. In the latter part of the nineteenth century Helmholtz argued that an action principle could act '... as a heuristic and guiding principle in our endeavour to formulate the laws governing new classes of phenomena.' Max Planck also felt the action formulation was a more fundamental view of natural phenomena than the mechanistic approach, primarily because he was partial to teleological explanations for religious reasons, but also because action principles expressed the laws of physics in a relativistic manner—the action was a scalar, and so its value did not depend on the choice of the coordinate system,—and because action appeared to play a fundamental role in quantum mechanics. Planck's constant has the dimensions of action. Helmholtz' assertion that action principles can suggest new physical laws has been confirmed in the twentieth century. The German mathematician Hilbert discovered the final form of the Einstein field equations independently of Einstein by combining hints coming from earlier attempts by Einstein to construct gravitational field equations with the requirement that the equations be derived from a 'simple' action integral. In this case adopting the attitude that the action—and hence by implication, a teleological process—is basic to nature led to a major discovery. 
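The equivalence of the two methods discussed above can be exhibited in a standard worked example (our notation, not the authors'): demanding that the action be an extremum reproduces exactly the differential equations of motion.

```latex
S[q] = \int_{t_0}^{t_1} L(q, \dot{q})\, dt , \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = 0 .
```

For the Lagrangian $L = \tfrac{1}{2}m\dot{q}^2 - V(q)$ the extremum condition reads $m\ddot{q} = -dV/dq$, which is Newton's second law: Euler's 'indirect' method of final causes and the 'direct' method of effective causes yield the same dynamics.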
Nevertheless, the teleological aspects of the action were really not paramount in Hilbert's thinking. The explicitly teleological aspect of the action was, however, basic to the early work of Richard Feynman. While still a graduate student at Princeton, Feynman developed with his teacher John Wheeler a theory of classical electrodynamics in which the radiation reaction of an electrically charged particle is explained in terms of an interaction of the particle with other particles in the past and in the future. Thus the motion of a particle today depends on what the other particles in the Universe will be doing in the far future. The action principle formulation of Wheeler-Feynman electrodynamics is conceptually simpler than the usual field-and-particles formulation, in that it does not need to introduce the notion of an electromagnetic field—electrodynamics is due to the direct action of the particles on themselves. In the Wheeler-Feynman picture, the electromagnetic field is not a real physical entity, but just a book-keeping device constructed to avoid having to talk about the particles teleologically. 98 In the conventional particles-and-fields electrodynamics, the future behaviour of the particles and fields is determined by information given at one instant of time. In contrast, it is not possible to determine the future behaviour of the particles alone solely by giving the initial positions and velocities of the particles. One must also specify some information about their future and past behaviour; that is, one must discuss the particles teleologically. To date, the Wheeler-Feynman formulation of electrodynamics has not led directly to any important new discoveries (see, however, refs. 105 and 106). However, this teleological way of thinking about the motion of charged particles led Feynman to develop his sum-over-histories formulation of quantum mechanics, which is a method of expressing quantum mechanics in terms of an action principle. In this formulation the wave function ψ(x₁, t₁) of a particle at the present time t₁ is determined from the wave function ψ(x₀, t₀) at an earlier time t₀ by summing a function of the classical action of the particle over all possible paths the particle could take in going from x₀ to x₁ in the time t₁ − t₀. Using this formulation of quantum mechanics, Feynman was able to derive the so-called Feynman Rules for the scattering of elementary particles. As happened in previous centuries, many physicists (for instance, S. Weinberg) felt that teleological formulations of physical theories such as the sum-over-histories method were unphysical, and these physicists soon developed alternative ways of deriving the Feynman rules. The value of the sum-over-histories method over the alternative methods was demonstrated, however, by the proof of 't Hooft and others, using the sum-over-histories method, that exact and spontaneously broken gauge symmetry theories would be renormalizable.
This proof encouraged experimenters to test the gauge theories, particularly the gauge theory for the electro-weak interaction of Weinberg and Salam, with the result that the Weinberg-Salam theory has now been confirmed. Weinberg now asserts that the sum-over-histories method is the best way to prove the renormalizability of the gauge theories, and he no longer feels that the sum-over-histories method is unphysical. Since the whole of contemporary particle physics is now formulated in terms of gauge theories, and since these theories must be analysed in some respects in terms of the sum-over-histories action principle method, it would seem that teleological thinking has become essential to modern mathematical physics. The sum-over-histories technique can be formulated in a non-teleological language, but the other formulations lack the great heuristic power of the sum-over-histories approach, as the inventors of the alternative formulations admit. We shall use the Feynman sum-over-histories method in Chapter 7 to obtain an expression for the wave function of the Universe. 107 We approach the problem of finding the Universal wave function via the action principle because the action principle can enormously simplify the problem of the boundary conditions: as we shall point out in section 7.3, the action principle formulation strongly suggests the Universe is closed, since only closed universes have finite action and no difficulties with the boundary conditions at infinity. We thus have another teleological prediction: the Universe must be closed. This prediction depends crucially on taking the action formulation as fundamental, and it cannot be obtained from a non-teleological approach employing differential equations. More discussion of the closed universe prediction can be found in ref. 111.
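In modern notation (again ours, not the authors'), the sum-over-histories rule sketched in this section determines the present wave function from an earlier one through a propagator built from the classical action:

```latex
\psi(x_1, t_1) = \int K(x_1, t_1; x_0, t_0)\, \psi(x_0, t_0)\, dx_0 , \qquad
K(x_1, t_1; x_0, t_0) = \mathcal{N} \sum_{\substack{\text{paths } x(t) \\ x(t_0) = x_0,\; x(t_1) = x_1}} \exp\!\left(\frac{i}{\hbar}\, S[x(t)]\right) ,
```

where $S[x(t)]$ is the classical action evaluated along a path and $\mathcal{N}$ is a normalization constant. Every path from $x_0$ to $x_1$ contributes, not merely the classical one; the classical trajectory dominates when $S$ is large compared with $\hbar$.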

3.5 Teleological Ideas in Absolute Idealism It is no use arguing with a prophet; you can only disbelieve him. Winston Churchill

German absolute idealism arose at the end of the eighteenth century in part as a reaction to Kant's notion of 'thing-in-itself'. Kant had argued that we could not know an object as it actually is, but rather our minds act to force our sensory experience of the object into certain patterns which may or may not resemble the actual object being experienced. There was, nevertheless, a real object underlying our experience of the object. This 'real object' was the 'thing-in-itself'. The difficulty with the notion of a thing-in-itself is of course the fact that by definition, it is absolutely unknowable. No possible experiment is capable of giving us any information at all about the thing-in-itself. As the first absolute idealist, J. G. Fichte, put it in 1797:

A finite rational being has nothing beyond experience; it is this that comprises the entire staple of his thought. The philosopher is necessarily in the same position; it seems, therefore, incomprehensible how he could raise himself above experience . . . the thing-in-itself is a pure invention and has no reality whatever. It does not occur in experience, for the system of experience is nothing other than thinking . . . 112

Fichte and the other absolute idealists proposed to eliminate the concept of the thing-in-itself altogether; thought comprises all of reality:

[an object]...is nothing else but the totality of [all] relations [of the object] unified by the imagination, [Fichte's emphasis] and that all these relations constitute the thing; the object is surely the original synthesis of all these concepts. Form and matter are not separate items; the totality of form is the matter .. . 114

Fichte's notion that a real object consists of all possible experiences it can generate in the mind of a potential observer is in all essentials the same as Niels Bohr's view of what is meant by an 'objectively real' property of a quantum mechanical object. 115 It is also similar to the economist F. Hayek's idea that the total capital in an economic system can be adequately described by listing all the possible products it could generate. 116 But if everything is thought, what is thought? Fichte pointed out that one must be careful not to think of thought as a sort of substance, for this would get us nowhere:

[the intellect] has no being proper, no subsistence, for this is the result of an interaction and there is nothing . . . with which the intellect could be set to interact. The intellect, for idealism, is an act, and absolutely nothing more; we should not even call it an active something, for this expression refers to something subsistent in which activity inheres. 117

Since the absolute idealists claim everything is thought, we shall attempt to make sense of this and other passages by translating the statements of this philosophical school into a rigorous modern language: abstract computer theory. The central concept of computer theory is the idea of a program, or procedure. A program can be regarded abstractly as a map f: N → N from the set of natural numbers, N, into itself. That is, an input data set will be specified by an integer, and the program will generate from this number an output which is another number. The whole of computer theory can be said to be concerned with deciding what constitutes an effective procedure, and with describing the attributes of an effective procedure. By the Turing Test, which we shall discuss at length in section 8.2, a human intellect can be equated with a particular type of program. But it is often pointed out (e.g. ref. 119) that we can go further. We can in fact simulate—in the computer language sense of representing the evolution of—the entire Universe with a program, for the Universe evolves deterministically from an initial state (input data set of the program) into a final state (output data set) and the Universal states are operationally denumerable. (We should mention that even a quantum mechanical Universe is deterministic; see Chapter 7. The evolution equation (7.37) for the Universe is a deterministic equation, since a state at any time is determined uniquely from its initial state.) The absolute idealists want to make the step which many computer scientists have taken (see ref. 119 for examples) and equate the Universe with its simulation. This is not as unreasonable as it sounds at first hearing. If a simulation is perfect, then those subprograms which are isomorphic to human beings in the general Universal Program act the same in the simulation as do human beings in the actual Universe. They laugh, they cry, they live, and they die.
By the underlying logic of the Turing Test, they have to be regarded as persons. 118,119
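The picture of a program as a map f: N → N, and of a deterministic universe as the repeated application of such a map, can be made concrete in a toy sketch. The state encoding and the update rule below are arbitrary illustrations of ours, not anything taken from the text:

```python
# A 'program' in the abstract sense: a total map from natural numbers
# to natural numbers. Here the integer encodes a toy universe state and
# the map is one deterministic time step (an arbitrary illustrative rule).
def step(state: int) -> int:
    """One deterministic update; any fixed total function N -> N would do."""
    return (3 * state + 1) % (2 ** 32)

def evolve(initial: int, ticks: int) -> int:
    """Running the 'universe' is just composing the step map with itself."""
    state = initial
    for _ in range(ticks):
        state = step(state)
    return state

# Determinism: the final state is fixed by the initial state alone, so two
# runs of the simulation from the same input agree exactly, and evolving
# m + n ticks equals evolving m ticks and then n more.
a = evolve(42, 1000)
b = evolve(42, 1000)
```

In this reading, 'equating the Universe with its simulation' amounts to the observation that nothing inside `evolve` distinguishes an abstract composition of maps from a physically realized run.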

Now the Universal simulation need not be run on an actual computer; it can be regarded as an abstract sequence of mappings from one input set to another. The actual Universe is a representation of the abstract Universal Program in the same sense that the written Roman numeral III is a representation of the abstract Idea of three, or as an actual physical computer is a representation of an abstract program. A rational subprogram inside the Universal Program cannot by any logically possible operation distinguish between the abstract running of the Universal Program, and a physically real, evolving Universe. Such a physically real Universe would be equivalent to the Kantian thing-in-itself. As empiricists, we are forced to dispense with such an inherently unknowable object: the Universe must be an abstract program, or Absolute Idea, which is of the same nature as the human intellect, or program. Fichte's act, the undefined basic property of the intellect, can thus be equated with the basic map (= procedure = program) that takes one state into another, or more precisely, equated with the class of basic operations of an abstract universal machine. The human mind is a very complex yet very special type of program. It is capable, in particular, of forming a model of itself as a subprogram, and studying this subprogram. This model-building and analysing process is called consciousness. The model is only a rough model, for Gödel's Theorem shows an exact model to be impossible even for infinite machines such as the universal Turing machine. The problem the absolute idealists had to deal with was explaining why the Universal Program is as complex as it is observed to be, involving many subprograms, including those which can be called rational. This difficulty can be attacked in one of two ways.
The first approach, which could be termed subjective idealism, would take the finite rational subprogram as the basic entity, and try to construct the Universal Program out of the inherent logical nature of the rational subprogram. This was Fichte's approach. The obvious problem with this approach is that it is difficult to avoid solipsism. The second approach, objective idealism, which was the one preferred by Fichte's successors, Schelling and Hegel, is to take the Universal Program as basic, and to argue that rational subprograms are produced by the very nature of the Universal Program. As Schelling put it: 120 121

Fichte could side with idealism from the point of view of reflection. I, on the other hand, took the viewpoint of production with the principle of idealism. To express this contrast most distinctly, idealism in the subjective sense had to assert, the ego is everything, while conversely idealism in the objective sense had to assert: Everything = ego and nothing exists but what = ego. These are certainly different views, although it will not be denied that both are idealism. 122,123

None of the absolute idealists would, however, accept Berkeley's solution that a God external to the universe held the non-rational order in existence when no rational ego was observing. Rather, they all in their different ways attempted to construct reality from itself. Fichte argued that a rational self-conscious mind must of logical necessity have experiences which it would interpret as a world external to itself: the subjective posits the objective. But to posit an object means that the self limits itself. Furthermore, other finite rational minds must exist in order for the freedom of a rational program to be fully realized. 124,125 Translating this into computer language, we would say that a program sufficiently complex to count as rational would have to act as if embedded in a larger program which would contain other rational subprograms and submappings which would be interpreted by the rational subprograms as an external world. But we know at least one rational program exists, because each individual knows himself to be one by self-reflection. According to Fichte, each rational being by its innate nature must have goals, which is to say it must be teleological. The ends of all rational beings in the Universal Program must impart a limited teleology to Nature itself, which as Mind must also have a Purpose Itself, but Fichte did not investigate this Purpose. Schelling, who regarded the Absolute Ego, or Universal Program, as fundamental, was concerned with its Ultimate Purpose:

Has creation a final purpose at all, and if so why is it not attained immediately, why does perfection not exist from the very beginning? There is no answer to this except the one already given: because God is a life, not a mere being. All life has a destiny and is subject to suffering and development. God freely submitted himself to this too, in the very beginning . . . in order to become personal . . . for being is only aware of itself in becoming... . All history remains incomprehensible without the concept of a humanly suffering God. Scripture, too, . . . puts that time into a distant future when God will be all in all, that is, when He will be completely realized. For this is the final purpose of creation, that which could not be in itself, shall be in itself... . Succession itself is gradual. I.e., it cannot in any single moment be given in itself entirely. But the farther succession proceeds, the more fully the universe is unfolded. Consequently, the organic world also, in proportion as succession advances, will attain to a fuller extension and represent a greater part of the universe... . 126

In the opinion of the great historian of ideas Arthur O. Lovejoy, it is this first introduction into philosophy of an evolutionary metaphysics, or more particularly, of the notion of an evolving God who at the final state of the Cosmos will be both fully realized and one with the Cosmos, that is the chief contribution of Schelling to human thought. 128 In his celebrated debate with the philosopher Jacobi, who was defending the traditional conception of a perfect, unchanging Deity, Schelling put it thus:

I posit God, as the first and the last, as the Alpha and the Omega; but as Alpha he is not what he is as Omega, and in so far as he is only the one—God 'in an eminent sense'—he cannot be the other God, in the same sense, or in strictness, be called God. For in that case, let it be expressly said, the unevolved [unentfaltete] God, Deus implicitus, would already be what, as Omega, the Deus explicitus is. 129

In Schelling's view the Universal Program would give rise to self-conscious subprograms which would, in the fullness of time, merge together into one self-knowing Mind. Nature is teleological for two reasons: the rational subprograms are presently an image of the Universal Program, and further—as a consequence of being an image of an intrinsically teleological entity—the Universal Program has the goal of universal self-consciousness. Hegel agreed with Schelling that the Absolute Idea (= Universal Program) is fundamentally teleological: the Universe, or totality, is ultimately self-thinking thought; or to put it another way, the process of Nature is the teleological movement toward the Universe becoming aware of itself. The human species is the means whereby the Universe becomes aware of itself. In fact Hegel contended the struggle of the Universe to become aware of itself was the purpose of human history:

. . . the final cause of the World at large, we allege to be the consciousness of its own freedom on the part of Mind [Geist], and ipso facto, the reality of that freedom. . . . substance is essentially subject . . . the Absolute is Mind . . . Mind alone is reality. 130

In contrast to Schelling, Hegel did not believe in a perpetually evolutionary cosmos. In the words of the English idealist John McTaggart, 'while [Hegel] did not explicitly place any limits to the development of the universe in time, he seems to have regarded its significance . . . as pretty well exhausted when it had produced the Europe of 1820', 132 which is to say, with the development of Hegelian philosophy. Absolute idealism went into a decline with the deaths of Hegel in 1831 and Schelling in 1854, but it flourished anew at the end of the nineteenth century in both the United States and Great Britain. McTaggart was one of the leaders of the British idealist school, which also included F. H. Bradley and B. Bosanquet. These men were influenced mainly by Hegel rather than by Fichte or Schelling; they were regarded as neo-Hegelians by contemporary British realists such as Russell. Nevertheless, toward the end of his career McTaggart had moved from the static absolute idealism of Hegel to the cosmic evolutionary idealism of Schelling, of which he was apparently unaware. McTaggart argued that value, or the good, in the universe is increasing with time, and that it must become infinite in finite time. This infinite good is the ultimate goal of a teleological universe. Most of McTaggart's idealist contemporaries, like Bosanquet, retained Hegel's static cosmology in which Man was the ultimate knowing subject; the Universe was teleological only through Man, and because teleological Man was the image of the Universe. McTaggart felt that 134

. . . those Idealists . . . seem generally unwilling to adopt a view which makes the selves that we know numerically insignificant in the universe . . . the conclusion that the time to be passed through before the goodness of the final state is reached may have any finite length, cannot be altogether attractive to those who feel how far our present life is from that great good. . . . Hegel is perhaps the strongest example of this unwillingness to accept the largeness of the universe. . . . But the universe is large, whether we like that largeness or not. 133

The American idealist school included the Harvard philosopher Josiah Royce, and to a certain extent Charles Sanders Peirce, considered by many to be the greatest American philosopher. Peirce held a view which he termed 'tychistic idealism', in which life, regarded as being a sort of intrinsic chance or spontaneity, is a fundamental aspect of everything. In some of his writings, Peirce argued that the Universe was too vast to have any character, teleological or otherwise. In other writings, Peirce defended a 'Cosmogonic Philosophy', in which the very development of life would cause it to gradually lose its spontaneous character, and thus life would eventually totally order an initial universal chaos: 136

[Cosmogonic Philosophy] would suppose that in the beginning—infinitely remote—there was a chaos of unpersonalized feeling, which being without connection or regularity would properly be without existence. This feeling, sporting here and there in pure arbitrariness, would have started the germ of a generalizing tendency. Its other sportings would be evanescent, but this would have a growing virtue. Thus the tendency to habit would be started; and from this, with the other principles of evolution, all the regularities of the universe would be evolved. At any time, however, an element of pure chance survives and will remain until the world becomes an absolutely perfect, rational, and symmetrical system, in which mind is at last crystallized in the infinitely distant future. 137

Royce, on the other hand, always defended a cosmic teleology; for example, he did so in his Gifford Lectures. In Royce's view, Nature arises from a sort of mutual interaction between the knower and the known: 135

Reality is not the world apart from the activity of knowing beings, it is the world of the fact and the knowledge in one organic whole. 135

Royce's most significant contribution to teleology, however, was not contained in his published work, but rather lay in his discussions with Lawrence Henderson on the subject. Royce had organized a private evening seminar, which included Henderson and a number of other scholars at Harvard, including for a time even T. S. Eliot. H. T. Costello took minutes of these meetings throughout the period 1913-1914, and they reveal that Henderson's The Fitness of the Environment, and the possible interpretations of the chemical concept of 'fitness' which Henderson proposed, was the topic of presentations and debate at the seminar for over three months. Although Henderson did not obtain his idea of the fitness of the environment from Royce or others at the seminar, Henderson acknowledged that his insight was sharpened by the debate. 339

3.6 Biological Constraints on the Age of the Earth: The First Successful Use of an Anthropic Timescale Argument Anthropic (or Anthropical): of, or relating to mankind or the period of man's existence on earth. Webster's Dictionary, 1975

The Anthropic Principle imposes constraints on the types of physical processes allowed in the Universe by requiring that these processes be of such an age that slow evolutionary processes will have had time to produce intelligent beings from non-living matter. Thus one sort of physical prediction which can be made using the Anthropic Principle would be a prediction of the types of energies and materials which can be present in the Earth and Sun, with the prediction being based on purely biological arguments about the minimum time needed for the evolution of intelligence. This is in fact the approach we shall use in later chapters to study constraints on the physical constants. We shall take from biology the estimate that a lower bound of a billion years is required for the evolution of intelligence, which implies that stars must be stable for at least that long, and so on. However, the first Anthropic prediction of this sort was actually made in the latter part of the nineteenth century in the course of a debate on the age of the Earth between biologists and physicists. This debate was initiated by Lord Kelvin, one of the most influential physicists of the nineteenth century. The first scientific attempt to measure the age of the Earth was made in the late eighteenth century by the great French scientist Buffon. Buffon adopted the point of view that the Sun's heat was insufficient to warm the Earth; heat from the Earth's interior was essential to provide enough heat for organic life. He also assumed that the Earth's internal heat was not being continuously generated, but was residual—the Earth had been initially very hot, but has been cooling down ever since its formation. Earlier, Newton had pointed out in the Principia that a globe of red-hot iron the size of the Earth would need at least 50,000 years to cool. Buffon confirmed Newton's estimate by measuring the time required for balls made of various substances to cool from red heat to the absence of glow and then to room temperature. Extrapolating to a globe the size of the Earth, Buffon estimated that an initially molten Earth would need about 36,000 years before it would be cool enough for organic life to begin, and that about 39,000 years had passed from this beginning of organic life to the present day. This attempt by Buffon to calculate the age of the Earth attracted a great deal of attention, and a desire to put Buffon's cooling calculations on a more rigorous basis, and thus to put an estimate of the age of the Earth on a more secure foundation, was what led Fourier to develop his theory of heat conduction.

Fourier's work was the basis of Lord Kelvin's well-known estimate of the age of the Earth and Sun. In his 1863 paper, 'On the Secular Cooling of the Earth', Kelvin assumed that the cooling of the Earth could be modelled by that of an infinite homogeneous solid. That is, Fourier's heat conduction equation was solved by assuming that the temperature varied in one direction only, the x-direction say. For a constant value of x the temperature was the same for any values of y and z, the two orthogonal directions. Kelvin also assumed that initially the Earth was a solid sphere of uniform temperature throughout. He justified this assumption on the basis that he felt solid rock would be denser than molten rock, and so rock cooling near the Earth's surface would sink before solidifying, thereby creating convection currents which would maintain a constant temperature throughout the entire Earth until its interior was solid throughout. The initial constant temperature would thus be the melting temperature of rock, which Kelvin estimated to be between 7000 and 10,000 degrees Fahrenheit. The centre of the Earth would still be at this temperature (i.e., T = 10,000 °F at x = 4000 miles).
A final assumption made by Kelvin was that the thermal conductivity of the Earth was constant throughout, and equal to a suitable average of the conductivities of various surface rocks. These assumptions allowed Kelvin to calculate the thermal gradient at the surface of the Earth as a function of time. It was generally accepted that a thermal gradient of one degree Fahrenheit per 50 feet of depth was a probable mean over the present surface of the Earth, so Kelvin's formula yielded the estimate of 98 million years since the solidification of the Earth. Because of the uncertainties, Kelvin extended the limits of this period to between 20 million and 400 million years. Fourier had actually given the same formula for the age of the Earth and suggested roughly the same data in 1820, 140 but had not written down the resulting age of the Earth. In the opinion of the historian Stephen Brush, 141 Fourier apparently felt that 100 million years was such an incredibly large number it was not even worth writing down! In a paper published a year earlier, Kelvin had also obtained an estimate of the Sun's age. 138,139 By assuming that the source of the Sun's heat was gravitational potential energy—the Sun was envisaged to have been formed from meteors initially very far apart and with zero kinetic energy—Kelvin was able to place a lower limit on the original supply of solar energy at 10⁷ times the present annual heat loss. Because of the uncertainties involved—the Sun's density, its specific heat, and the amount of its present-day contraction were not known—the upper limit could be up to 10 times higher. Kelvin summarized his result as follows:

It seems, therefore, on the whole most probable that the sun has not illuminated the earth for 100,000,000 years, and almost certain that he has not done so for 500,000,000 years. As for the future, we may say, with equal certainty, that inhabitants of the earth cannot continue to enjoy the light and heat essential to their life, for many million years longer, unless sources now unknown to us are prepared in the great storehouse of creation. 143
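Both of Kelvin's figures can be reproduced with a few lines of arithmetic. The Earth estimate uses the surface temperature gradient of a cooling half-space, G = T0/sqrt(pi·kappa·t), solved for t; the solar estimate compares the gravitational potential energy GM²/R with one year of radiative loss. The diffusivity and the solar constants below are representative modern values supplied by us, not numbers quoted in the text:

```python
import math

# Earth: cooling half-space. Surface gradient G = T0 / sqrt(pi*kappa*t),
# hence t = (T0 / G)**2 / (pi * kappa).
T0 = 7000.0        # initial uniform temperature, deg F (Kelvin's lower figure)
G = 1.0 / 50.0     # observed gradient: one deg F per 50 feet of depth
kappa = 400.0      # thermal diffusivity of rock, ft^2 per year (representative)
t_earth = (T0 / G) ** 2 / (math.pi * kappa)   # years; roughly 1e8

# Sun: stored gravitational energy ~ G*M^2/R, measured in units of the
# present annual heat loss L * (1 year).
G_newton = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30        # solar mass, kg
R_sun = 6.957e8         # solar radius, m
L_sun = 3.828e26        # solar luminosity, W
year = 3.156e7          # seconds per year
ratio = (G_newton * M_sun ** 2 / R_sun) / (L_sun * year)   # roughly 3e7
```

The first number comes out near Kelvin's 98 million years, and the second is of order 10⁷, matching his lower bound on the ratio of the original energy supply to the annual loss.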

Kelvin concluded in his later paper:

. . . most probably the sun was sensibly hotter a million years ago than he is now. Hence geological speculation assuming somewhat greater extremes of heat, more violent storms . . . are more probable than those of the extreme quietist, or 'uniformitarian' school . . . it is impossible that hypotheses assuming an equality of sun and storms for a million years can be wholly true. 142

These papers by Kelvin appeared some three years after the first edition of Darwin's Origin of Species, and although Kelvin pointed out in his papers the basic incompatibility of his chronology and Darwin's theory (a desire to refute Darwin was his motivation for writing the papers), biologists did not immediately respond to Kelvin's challenge. The first important reference to this incompatibility was Fleeming Jenkin's review of the Origin in 1867. The Scot Jenkin was himself a physicist, and a close personal friend of Kelvin. Jenkin pointed out that 144

. . . Darwin's theory requires countless ages, during which the earth shall have been habitable, and he claims geological evidence as showing an inconceivably great lapse of time, and as not being in contradiction with inconceivably greater periods that are even geologically indicated—periods of rest between formation, and periods anterior to our so-called first formations, during which the rudimentary organs of the early fossils became degraded from their primeval uses. 145

As to a numerical estimate for the timescale, Jenkin claimed

. . . we doubt whether a thousand times more change than we have any reason to believe has taken place in wild animals in historic times, would produce a cat from a dog, or either from a common ancestor. If this be so, how preposterously inadequate are a few hundred times this unit for the action of the Darwinian theory. 146

Modern Teleology and the Anthropic Principles


Jenkin emphasized the inconsistency between this Darwinian timescale, and Kelvin's chronology: 'From the earth we have no very safe calculation of past time, but the sun gives five hundred million years as the time separating us from a condition inconsistent with life.' The arguments of Kelvin and his minions gradually began to tell on the biologists; by 1871 both Wallace, the co-discoverer of natural selection, and Huxley, the chief fighter for evolution in the public arena, had yielded to Kelvin's arguments to the extent of admitting that evolutionary change may have occurred much more rapidly in the past than now, with the result that the entire evolution of living things occurred within Kelvin's timescale of 100 million years. In the sixth and last edition of the Origin, Darwin made a similar concession: 147

It is, however, probable, as Sir William Thomson [sic; Thomson was ennobled as Lord Kelvin] insists, that the world at a very early period was subjected to more rapid and violent changes in its physical conditions than those now occurring; and such changes would have tended to induce changes at a corresponding rate in the organisms which then existed. 148

Later in the book, however, Darwin included a hedge:

With respect to the lapse of time not having been sufficient since our planet was consolidated for the assumed amount of organic change . . . , I can only say firstly that we do not know at what rate species change as measured in years, and secondly that many philosophers are not as yet willing to admit that we know enough of the constitution of the universe and of the interior of our globe to speculate with safety on its past duration. 149

Although the biologists and geologists were willing to accept Kelvin's limit of 100 to 400 million years for the age of the Earth and Sun, several physicists and astronomers began to argue in the 1870's that Kelvin had been far too generous in assigning his upper limit, and that in fact it was much lower. Kelvin's friend and fellow Scot, the physicist Tait, contended in a series of public lectures delivered in 1874 that further calculations of the Earth's cooling indicated that the time since the Earth's solidification could be 10 to 15 million years at most, that evidence from tidal friction implies less than 10 million years, and that the Sun had heated the Earth for no more than 15 to 20 million years. 150 In 1878 the American astronomer Simon Newcomb reviewed Kelvin's arguments on the Sun's heat, and came to a conclusion similar to Tait's: that the Sun could not have supported life for more than 10 million years. 151 The American Clarence King, an unconventional field geologist who served as the first director of the United States Geological Survey, obtained a figure similar to Tait's for the age of the Earth—22 to 24 million years. 152 Kelvin himself agreed that the age of the Earth and Sun should be reduced below his original estimate, but not quite to the drastic reduction of Tait. This reduction to 20 million years was more than the geologists and biologists could accept as consistent with the observations in their own fields, and the new number provoked cries of outrage. Darwin, in particular, referred to Tait's new number as 'monstrous'. 153 The Scottish geophysicist James Croll argued in response to Tait's number that the evidence of geology alone shows 'without absolute certainty that [the Earth's age] must be far greater than 20 million years'. He went on to say


. . . it does not follow as a necessary consequence, as is generally supposed, that [the Sun's initial] store of energy must have been limited to the amount obtained from gravity in the condensation of the Sun's mass. The utmost that any physicist is warranted in affirming is simply that it is impossible for him to conceive of any other source. His inability, however, to conceive of another source cannot be accepted as a proof that there is no other source. But the physical argument that the age of our earth must be limited by the amount of heat which could have been received from gravity is in reality based upon this assumption—that, because no other source can be conceived, there is no other source. It is perfectly obvious, then, that this mere negative evidence against the possibility of the age of our habitable globe being more than 20 to 30 million years is of no weight whatever when pitted against the positive evidence [from geology] that its age must be far greater. 154

Even Archibald Geikie, the director of the Geological Survey of Scotland and a friend of Kelvin, was moved to reply to these later estimates, though he had originally accepted Kelvin's earlier estimate of 100 million years. Indeed, Geikie was a major cause of the widespread acceptance of the earlier estimate among geologists. In his 1892 Presidential Address to the British Association for the Advancement of Science, Geikie asserted:

After careful reflection on the subject, I affirm that the geological record furnishes a mass of evidence which no arguments drawn from other departments of nature can explain away, and which it seems to me, cannot be satisfactorily interpreted save with an allowance of time much beyond the narrow limits which recent physical speculation would concede. . . . that there must be some flaw in the physical argument I can, for my own part, hardly doubt, though I do not pretend to be able to say where it is to be found. Some assumption, it seems to me, has been made, or some consideration has been left out of sight, which will eventually be seen to vitiate the conclusions, and which when duly taken into account will allow time enough for any reasonable interpretation of the geological record. 155


As Geikie pointed out, the arguments of the geologists and paleontologists for a vaster timescale were based on observations of the present rate of geological and biological change, and the total absence of any evidence that these rates had changed during the history of the geological record. On the contrary, there was positive evidence that these rates had not changed over time. Edward B. Poulton, professor of zoology at Oxford,


listed some of this positive evidence in his address as president of the zoological section of the British Association in 1896. For example, many insects in the Carboniferous period had large wings, but insects in stormy areas today are wingless. Thus storms could not have been more violent then, as Kelvin's argument would require. Poulton asserted that if the rate of deposition of sediment were constant, then 400 million years must have passed since the Cambrian period, and this number must be further increased, 'perhaps doubled', to account for evolution prior to the Cambrian. Poulton contended that natural selection takes much longer to alter simple organisms than more complex ones, and that since except for the vertebrates the origin of no phylum can be found in the fossil record (since the beginning of the Cambrian), it follows that a very long period must have preceded the Cambrian—Darwin had taken a similar position in the first edition of the Origin. John G. Goodchild, curator of the Geological Survey Collections at the Edinburgh Museum of Science and Art, also made a calculation of the age based on geological and biological evidence, and concluded that 700 million years had passed since the beginning of the Cambrian, with at least an equal period for the pre-Cambrian. Even if Kelvin's arguments were wrong—and by the end of the nineteenth century most biologists and geologists who thought about the matter were convinced that he was, when Kelvin began to defend a very low age of the Earth—there remained the problem of where the error lay. Many writers took the point of view that there must be some source of energy which Kelvin had overlooked, as Darwin implied in the last edition of the Origin. (For references to these writers see ref. 144.) However, it was also possible that some of Kelvin's approximations were in error. This possibility was first discussed in detail in 1895 by John Perry, a former assistant of Kelvin. 
Perry pointed out 156 that Kelvin's age of the Earth was sensitive to his assumption that the thermal conductivity was a constant throughout the Earth, and that if, instead, it increased by a factor of ten from the Earth's surface to the centre, then Kelvin's time limit had to be increased by a factor of fifty-six. 157 Perry also contended that some degree of fluidity must exist in the Earth's interior, and so heat conduction would be augmented by convection, which would have the effect of increasing the heat flow and hence the effective conductivity. Furthermore, he gave a mechanism for increasing the amount of energy available to the Sun. 158,159 In his reply Kelvin expressed doubts that the internal conductivity of the Earth could be as high as Perry's argument would require, but he admitted that on the basis of the Earth's heat alone, an upper limit to the Earth's age could be set at 4000 million years. However, he still insisted that the Sun's heat limited the Earth's age to a few score million years. 160

As this limit was based on the available amount of gravitational energy (and Kelvin had long before pointed out that all other known forms of energy were even more inadequate) and on the assumption that the solar output had to be essentially the same as it is today—an assumption supported by the biological evidence itself—this limit had to be accepted, if one granted that all sources of energy were known. The most emphatic denial that all sources were known was made in 1899 by Thomas C. Chamberlain, professor of geology at the University of Chicago, who is best known today for his development of a solar system formation theory somewhat similar to Laplace's nebular theory. Chamberlain's argument amounted to an Anthropic Principle prediction:

Is present knowledge relative to the behavior of matter under such extraordinary conditions as obtained in the interior of the sun sufficiently exhaustive to warrant the assertion that no unrecognized sources of heat reside there? What the internal constitution of the atoms may be is yet open to question. It is not improbable that they are complex organizations and seats of enormous energies. Certainly no careful chemist would affirm either that the atoms are really elementary or that there may not be locked up in them energies of the first order of magnitude. No cautious chemist would probably venture to assert that the component atomecules, to use a convenient phrase, may not have energies of rotation, revolution, position, and be otherwise comparable in kind and proportion to those of the planetary system. Nor would they probably be prepared to affirm or deny that the extraordinary conditions which reside at the center of the sun may not set free a portion of this energy. 161

As is well-known, the extreme conditions at the Sun's centre cause thermonuclear fusion of hydrogen into helium, and this in effect sets free a portion of the energy locked up in these atoms (in the form of mass). This process was first discussed some thirty years later, long after it was realized that radioactive decay in the Earth's interior also invalidated Kelvin's argument on the cooling of the Earth. In principle, Chamberlain's arguments could have led to experiments on the behaviour of matter at very high energies which could have led to the discovery of nuclear fusion reactions much earlier. Thus Anthropic constraints—evolutionary time scales—on the behaviour of matter in effect predicted nuclear sources of energy. We shall use analogous evolutionary timescale arguments to make some other predictions in later chapters, in particular Chapters 7 and 8.


3.7 Dysteleology: Entropy and the Heat Death

A man said to the Universe: 'Sir, I exist.'
'However', replied the Universe, 'The fact has not instilled in me a sense of obligation.'
Stephen Crane

Modern science presents a critical problem for teleological arguments. The very notion of teleology, that there is some goal to which the Universe is heading, strongly suggests a steady improvement as this goal is approached. Although progress was not strictly allowed by the Newtonian physics of the day, the defenders of the teleological argument before the nineteenth century generally held this optimistic view. Meliorism even survived Darwin's destruction of traditional teleology. Darwin himself felt that his theory of evolution justified such an optimistic view. As he wrote in the closing pages of the first edition of On the Origin of Species: 162,163

As all the living forms of life are the lineal descendants of those which lived long before the Silurian epoch, we may feel certain that the ordinary succession by generation has never once been broken, and that no cataclysm has desolated the whole world. Hence we may look with some confidence to a secure future of equally inappreciable length. And as natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress towards perfection. 164

Darwin wrote these words in 1859, just after the formulation of the Second Law of Thermodynamics, but before its dysteleological implications became generally known. The great German physicist Hermann von Helmholtz was the first to point out, in an article published in 1854, that the Second Law suggested the Universe was using up all its available energy, and thus within a finite time all future changes must cease; the Universe and all living things therein must die when the Universe reaches this final state of maximum entropy. 165 This is the famous 'Heat Death' of the Universe. On this view the Universe is not progressing toward some goal; rather, it is using up the store of available energy which existed in the beginning. The Universe is moving from a higher state to a lower state. The Universe, in other words, is not teleological, but dysteleological! As the historian of science Stephen Brush has pointed out, 140 this Heat Death concept had a profoundly negative effect on the optimism of the late nineteenth and early twentieth centuries. The popular books on cosmology written in the 1930's by the British astronomers Jeans and Eddington 167,168 were particularly important in making the general public aware of the Heat Death. The new attitude this produced concerning the


relationship between Man and the Cosmos was epitomized in 1903 in a famous passage of Bertrand Russell's:

... the world which science presents for our belief is even more purposeless, more void of meaning, [than a world in which God is malevolent]. Amid such a world, if anywhere, our ideals henceforward must find a home. That man is the product of causes which had no prevision of the end they were achieving; that his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms; that no fire, no heroism, no intensity of thought and feeling, can preserve an individual life beyond the grave; that all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins—all these things, if not quite beyond dispute, are yet so nearly certain that no philosophy which rejects them can hope to stand. Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built. 169

The dysteleology of the long-term evolution of the Universe did not worry Russell. He suggested it meant we should take a short-term view of life:

I am told that that sort of view is depressing, and people will sometimes tell you that if they believed that, they would not be able to go on living. Do not believe it; it is all nonsense. Nobody really worries much about what is going to happen millions of years hence. . . . Therefore, although it is of course a gloomy view to suppose that life will die out—at least I suppose we may say so, although sometimes when I contemplate the things that people do with their lives I think it is almost a consolation—it is not such as to render life miserable. It merely makes you turn your attention to other things. 170

But some people were unable to take a short-term view. For example, by the end of his life, Charles Darwin's own optimism had been severely shaken by the prospect of the Heat Death, which he learned about in the course of the late nineteenth-century debates on the age of the Earth. As Darwin recorded in his Autobiography:

[consider] . . . the view now held by most physicists, namely that the sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress. 171

Most philosophers, especially those who defended teleology in Nature, were, like Darwin, unable to take Russell's indifferent attitude. For instance the mathematician and controversial Anglican bishop E. W.


Barnes, whose work we discuss in section 3.9, was much troubled by the Heat Death. The dilemma it creates for a value system based on science was clearly expressed by the paleontologist and mystical theologian Teilhard de Chardin: 172

. . . what disconcerts the modern world at its very roots is not being sure, and not seeing how it ever could be sure, that there is an outcome—a suitable outcome—to . . . evolution. . . . And without the assurance that this tomorrow exists, can we really go on living, we to whom has been given—perhaps for the first time in the whole story of the universe—the terrible gift of foresight? Either nature is closed to our demands for futurity, in which case thought, the fruit of millions of years of effort, is stifled, or else an opening exists—that of the super-soul above our souls . . . 173

As we shall see in section 3.11, Teilhard accepted the notion of an evolving God. 174 William R. Inge, the Dean of St. Paul's (and known as the 'gloomy Dean'!), preferred the other horn of the dilemma: he rejected the possibility of an ethics based on the scientific world-view. In the 1930's Inge wrote an entire book, God and the Astronomers, to discuss the Heat Death theory presented by Jeans and Eddington. 175 He called the Heat Death 'the new Götterdämmerung', in reference to the Norse myth which held that the world would end with the destruction of everything, including the gods. Inge was not bothered by the Heat Death; indeed, he welcomed it:

The idea of the end of the world is intolerable only to modernist philosophy, which finds in the idea of unending temporal progress a pitiful substitute for the blessed hope of everlasting life, and in an evolving God a shadowy ghost of the unchanging Creator and Sustainer of the Universe. It is this philosophy which makes Time itself an absolute value, and progress a cosmic principle. Against this philosophy my book is a sustained polemic. Modernist philosophy is, as I maintain, wrecked on the Second Law of Thermodynamics; it is no wonder that it finds the situation intolerable, and wriggles piteously to escape from its toils. 176

In other words, theologians should welcome the Heat Death, for such a future for the Universe precludes the possibility of the Universe being an emotionally acceptable home for Man. People will be forced to return to the traditional Christian static God, who is wholly outside the Universe, and hence not subject to the Heat Death. The opposing views of Teilhard and Inge are but an echo of the debate between Schelling and Jacobi in the previous century (see section 3.5). The views of Inge himself were echoed by the British mathematical physicist E. T. Whittaker, best known for his monumental history of electromagnetism, in his 1942 Riddell Lectures, which he entitled The Beginning and the End of the World: 177

The knowledge that the world has been created in time, and will ultimately die, is of primary importance for metaphysics and theology: for it implies that God is not Nature, and Nature is not God; and thus we reject every form of pantheism, the philosophy which identifies the Creator with creation, and pictures him as coming into being in the self-unfolding or evolution of the material universe. For if God were bound up with the world, it would be necessary for God to be born and to perish. . . . The certainty that the human race, and all life on this planet, must ultimately be extinguished is fatal to many widely held conceptions of the meaning and purpose of the universe, particularly those whose central idea is progress, and which place their hope in an ascent of man. 343

Whittaker nevertheless believed that there was a purpose in the Universe, and he felt the Heat Death itself indicated what that purpose was. Although Man and all his works would eventually vanish, the universe began with a sufficient amount of free energy to permit his emergence, and thus:

The goal of the entire process of evolution, the justification of creation, is the existence of human personality: of all that is in the universe, this alone is final and has abiding significance, and we believe that this has been granted, in the eternal purpose of God, in order that the individual man, born into the new creation of the Church, shall know, serve, and love Him forever. 343

However, an evolving cosmos, particularly a cosmos evolving toward a bad end like the Heat Death, poses the following problem for teleology pointed out by Bertrand Russell:

. . . why should the best things in the history of the world [such as mankind] come late rather than early? Would not the reverse order have done just as well? . . . Before the Copernican revolution, it was natural to suppose that God's purposes were specially concerned with the Earth, but now this has become an unplausible hypothesis. If the purpose of the Cosmos is to evolve mind, we must regard it as rather incompetent in having produced so little in such a long time. It is, of course possible that there will be more mind later on somewhere else, but of this we have no jot of scientific evidence. 178

This criticism will be recognized as the standard, centuries-old argument against an evolving, melioristic cosmology. We have previously seen it directed against Schelling's cosmos. It has recently been repeated by Roger Penrose as a criticism of the Anthropic Principle. 179 The only possible answer to the criticism, as pointed out by Schelling, is that the evolutionary process is logically necessary; the most advanced forms of life could not appear in the very beginning. The Heat Death is most often discussed today in terms of the ecology of the planet Earth. The leaders of the 'ecology movement', for instance the Stanford biologist Paul Ehrlich (whose work we mentioned in section 3.2), have argued that Second Law limitations on terrestrial energy-flow require humanity to switch from a steadily growing economy to a steady-state one in which the energy use is constant and comparable in order of magnitude to the current total human energy use, about 3 x 10^20 joules per year. 180 For comparison, the net amount of energy stored by all the photosynthetic plants on the Earth is about 3 x 10^21 joules per year, 182 an order of magnitude higher, and a single human being requires about 4 x 10^9 joules per year (2500 calories per day) in food energy. Ehrlich points out, quite correctly, that the exponential growth in energy use and population size which has been typical of recent times cannot continue indefinitely; in fact, an exponential growth rate of one per cent per year in either population, or energy use, or anything else would exhaust all conceivable resources in the entire solar system in the order of a thousand years. 181 Unfortunately, Ehrlich's proposed steady-state economy will also eventually run out of resources: a civilization restricted to the Earth will in the end succumb to the Heat Death. It is a simple matter to derive some upper bounds to the length of time an Earth-restricted civilization, or indeed Earth-restricted and carbon-based life, could survive. The total energy available to such a civilization is equal to the energy-equivalent of the mass of the Earth, 5.4 x 10^41 joules. A single human being, with the above-mentioned food energy requirement, could survive at most 2 x 10^32 years; 183 a billion people at most 2 x 10^23 years. The human species could continue to use energy at the rate it is currently doing for all purposes for at most 2 x 10^21 years. A single cell, with an energy requirement roughly 10^-10 that of a single human being, could survive at most 2 x 10^42 years. The entire biosphere, with energy-use approximately that of the total net energy stored by photosynthesis, could survive at most 2 x 10^20 years. If we imagine future human civilization limited to the entire solar system, then the above upper bounds to survival times are increased by a factor of about 3 x 10^5, which is the ratio of the mass of the Sun to the mass of the Earth. 184 These upper bounds to our survival time are summarized in Table 3.1.

These survival upper-bounds are of course extremely large in comparison to the timescales with which human beings normally concern themselves: even economists who try to project economic trends into the 'extreme far future' generally limit themselves to the next 100-500 years. They are nevertheless of the order of, or in many cases substantially less than, many timescales of physical processes which physicists are now measuring: for example, the expected proton lifetime on the basis of the SU(5) grand unified gauge theory is about 10^31 years (see Chapters 5 and 6 for a detailed discussion). But the essential point is the fact that the survival times are finite. No matter what we, or any other form of life based on DNA, do, we (or rather our descendants) are doomed if we restrict our operations to a single planet, or even a single solar system. We shall discuss the question of unlimited survival of life in more detail in Chapter 10; 181 it is sufficient
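The arithmetic behind these bounds is simply the available mass-energy divided by the rate of use. The following sketch uses the order-of-magnitude figures quoted in the text (the energy rates and the one per cent growth figure come from the paragraph above; the planetary masses are standard modern values):

```python
# Survival-time upper bounds: (available mass-energy) / (rate of energy use).
c = 3.0e8                          # speed of light, m/s
E_earth = 5.97e24 * c**2           # mass-energy of the Earth, ~5.4e41 J

rates = {                          # energy use, joules per year
    "one person (food only)":     4e9,
    "10^9 people (food only)":    4e18,
    "civilization (1973 rate)":   3e20,
    "biosphere (photosynthesis)": 3e21,
}
for name, rate in rates.items():
    print(f"{name}: about {E_earth / rate:.0e} years")

# By contrast, exponential growth is ruinous: at 1% per year, cumulative
# energy use exhausts even the Sun's entire mass-energy within millennia.
E_sun = 1.989e30 * c**2
use, total, years = 3e20, 0.0, 0
while total < E_sun:
    total += use
    use *= 1.01
    years += 1
print(f"1% annual growth exhausts the Sun's mass-energy in {years} years")
```

The steady-state bounds come out at the enormous values quoted above, while the growing economy runs through the Sun itself in a few thousand years, which is the force of Ehrlich's point about exponential growth.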

Table 3.1
Upper bounds to survival times for carbon-based life forms in our solar system

Type of life and its energy usage / upper bound using mass-energy of the Earth / upper bound using mass-energy of the solar system
1 living cell, using just food energy:  2 x 10^42 years / 5 x 10^47 years
1 person, using just food energy:  2 x 10^32 years / 5 x 10^37 years
10^9 people, using just food energy:  2 x 10^23 years / 5 x 10^28 years
Human civilization, using energy at the rate the whole of mankind used energy in 1973:  2 x 10^21 years / 6 x 10^26 years
Entire biosphere, using energy at rate provided by net photosynthesis on Earth today:  2 x 10^20 years / 6 x 10^25 years

Other significant timescales
Estimated proton lifetime, predicted by minimal SU(5) gauge theories: 192  10^31 years
Length of time Earth-based civilization can use energy at current rate and at current price, using uranium in Earth's crust as energy source:  1 x 10^10 years
Period the Sun will remain on main sequence:  5 x 10^9 years
Upper bound to future life of biosphere (see Chapter 8):  5 x 10^8 years
Average survival time of mammalian species:  1 x 10^6 years
Length of time modern man (Homo sapiens sapiens) has existed:  4 x 10^4 years

for now to note that such survival requires expansion beyond our solar system, and that carbon-based life is doomed in any case. As Ehrlich himself admits, 'almost all economists' disagree with him and most other ecologists on the necessity for a steady-state economy. The economists' argument has perhaps been best presented by Julian Simon in his book The Ultimate Resource. 185 The basic difference between the ecologists and the economists is the fact that the former view the ecological and economic system in terms of a flow of energy and material resources, while the latter view it in terms of a flow of information. According to the economists, the economic system is concerned with producing not specific goods, but services: as consumers we are interested in the services we can get from energy and material resources rather than in the resources themselves. To use an example of Simon's, the copper in a cooking-pot can be replaced by other materials as technology develops substitutes, for we desire a cooking service rather than a pot made of a certain metal. Thus the important cost is the cost of providing the cooking service rather than the cost of copper. As human knowledge grows, the number of materials we can use to perform a given service and our ability to obtain any given material grow also, with the result that 'The cost trends of almost every natural resource—whether measured in labour time required to produce the energy, in production costs, in the proportion of our incomes spent for energy, or even in the price relative to other consumer goods—have been downward over the course of recorded history'. 186 In fact, as Simon documents, the prices of raw materials and energy have, on the long-term average, been decreasing exponentially over the past two centuries (the period for which we have good data) with a decay-constant of about 50 years. This means that a project whose cost is dominated by raw material costs will be much cheaper to carry out in the future than it is now, if past experience is any guide. The implications of these price trends will be important when we consider the likelihood of interstellar travel in Chapter 9. The modern economists' view of the economic system as being concerned with the production and transfer of services (utilities) goes back at least to the foremost English economist of the nineteenth century, John Stuart Mill. Simon's analysis, and similar analyses by almost all economists who have considered trends in raw material costs, indicate that the costs of all services are controlled almost in their entirety by information located in human minds and elsewhere in human civilization. The prices of the services are controlled in their entirety by their subjective valuation in human minds, as shown by marginal utility theory.
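Simon's empirical trend, as summarized above, is an exponential decline in real raw-material prices with a decay constant of roughly 50 years. A minimal sketch of what that implies for a project whose cost is dominated by raw materials (the function name and the year values are illustrative, not Simon's):

```python
import math

def relative_cost(years_ahead, decay_constant=50.0):
    """Price of the same basket of raw materials relative to today,
    assuming exponential decline with the given decay constant in years."""
    return math.exp(-years_ahead / decay_constant)

for t in (50, 100, 200):
    print(f"in {t} years: {relative_cost(t):.3f} of today's cost")
```

On this extrapolation a materials-dominated project undertaken a century hence would cost roughly a seventh of what it costs today, which is why these price trends matter for the interstellar-travel estimates of Chapter 9.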
Indeed, it was thought for thousands of years that the price of a product was an objective feature like its weight, but modern economic theory demonstrates it is a purely subjective quantity generated by the collective interaction of the human race via the product's marginal utility. The price of an object is an example of an apparently objective feature of the world which actually exists only in human minds. We can regard the price structure of an economic system as a Participatory Anthropic Principle in operation. In general, we may say that all services—the entire output of the economic system—may be each equated with a form of 'information' in the sense this word is used in information theory. We can make this clearer by returning to Simon's cooking-pot example. Ultimately, we do not buy the pot to obtain even a cooking service, but rather to obtain a release from hunger and to obtain the sensation of having eaten a delicious meal. It is possible in principle to obtain the same service by direct transfer of material directly to the body cells while causing nerve pulses to be sent to the brain which fools the mind into believing it has enjoyed a real meal. One could go even further, along the lines discussed in the section on absolute idealism, and imagine the program which 187
corresponds to a human mind being run on a universal Turing machine, with the input to the mind-program chosen so as to give rise in the program to the complete sensation of eating a delicious meal. Both of these possibilities are of course far, far beyond current technology. But the Turing machine example demonstrates that, ultimately, services are a form of information input for a very complex program called a human mind. Thus the ultimate limits of economic systems and civilizations are exactly the same as the ultimate limits of minds: they are all ultimately limited by the amount of information that can be read, processed, and stored. As yet we are ignorant of how many bits of information a given economic service and a given amount of human knowledge correspond to, but for our limited purposes it is sufficient to know just that both are forms of information. We shall make use of this fact in Chapter 10 to calculate some very interesting constraints on the behaviour of life in the far future. Civilization can continue to grow in the far future only if it eventually leaves the solar system, as the economists also grant. (The ecologists seem unwilling to admit this; see, however, ref. 191.) The bare fact that the economic system is wholly concerned with generating and transferring information has an interesting ethical implication. If we assume (as intellectuals generally do) that the government should not interfere with the generation and transfer of information, then does it not follow that the government should not interfere with the operation of the economic system? Furthermore, if it is argued (as scientists often do) that the growth of knowledge is maximized when information generation and flow are unimpeded by government intervention, does it not follow that the growth of economic services would likewise be maximized if unimpeded by government intervention?
Conversely, if social utility may sometimes require governmental restrictions on the evolution of the economic system, may it not likewise require governmental restrictions on academic freedom and the growth of scientific knowledge? Both the unlimited growth of scientific knowledge and unlimited economic growth may be regarded as undesirable, but an argument for restricting one is automatically an argument for restricting the other, and an argument for not restricting one is automatically an argument for not restricting the other.

3.8 The Anthropic Principle and the Direction of Time

Time is defined so that motion looks simple.
J. A. Wheeler

The Weak Anthropic Principle was used by the Austrian physicist Ludwig Boltzmann to explain the direction of time. By the middle of the nineteenth century, it had been realized that there was only one physical
law which defined a time direction, and that was the Second Law of thermodynamics. In the latter part of the nineteenth century, Boltzmann began a research programme to deduce the Second Law of thermodynamics from classical mechanics. By applying the statistical techniques of Maxwell to atomic collisions, Boltzmann 'deduced' his so-called H-Theorem. The H-Theorem asserted that a quantity denoted by H, which was a function of the positions and velocities of the atoms of the system, must always decrease with time or remain constant. Identifying the function H with the negative of the entropy, Boltzmann claimed to have a proof of the Second Law. However, many physicists were a bit dubious about a 'proof' which deduced irreversibility—the fact that H never increased—from reversible classical mechanics. In particular, Loschmidt, Boltzmann's colleague at the University of Vienna, pointed out that for every evolution of a system of atoms in which H decreased, one could obtain an evolution in which it increased by reversing the velocities of all the particles. Therefore, it would seem impossible to prove that H never increases whatever the initial conditions. Boltzmann admitted that one could not show this, but he contended that almost all initial states which were far from a Maxwellian equilibrium state (the Heat Death state of maximum entropy) would approach this equilibrium state, in which H would be at a minimum, and that this is sufficient to account for the Second Law of thermodynamics. This law, he asserted, has only statistical validity; furthermore, its validity is due to the fact that atoms are so small relative to human beings. A being the size of a molecule would not see a continuous increase of entropy. Maxwell later gave a striking example of this.
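Boltzmann's statistical contention—that almost every far-from-equilibrium initial state drifts toward equilibrium, about which it thereafter merely fluctuates—can be illustrated with the Ehrenfest urn model, a standard toy model of later vintage (the model and all parameters below are ours, not Boltzmann's):

```python
import random

def ehrenfest(n_balls=1000, steps=20000, seed=1):
    """Ehrenfest urn model.  All n_balls start in urn A, a maximally
    improbable state; at each step one ball, chosen at random, is moved
    to the other urn.  The occupancy of A drifts toward n_balls/2 and
    thereafter only fluctuates about it, although nothing in the
    dynamics forbids a fluctuation all the way back."""
    rng = random.Random(seed)
    in_a = [True] * n_balls     # every ball begins in urn A
    count_a = n_balls
    for _ in range(steps):
        i = rng.randrange(n_balls)
        count_a += -1 if in_a[i] else 1
        in_a[i] = not in_a[i]
    return count_a

# Starting from 1000 balls in urn A, the final count is near 500:
print(ehrenfest())
```

Loschmidt's velocity-reversal has its analogue here: replaying the recorded moves backwards restores the improbable initial state exactly, yet a randomly chosen history almost never does so.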
An intelligent being—a demon—the size of a molecule could violate the Second Law by using the fact that, even in equilibrium, a gas of atoms would contain atoms with a range of velocities. This demon could station itself beside a door between two containers initially at the same temperature. The demon would allow only fast-moving atoms to pass through the door in one direction, and only slow-moving atoms to pass in the other direction. After a while one container would contain atoms with a higher average velocity than the other, and so it would attain a higher temperature, since temperature is a measure of average atomic kinetic energy. This demon would thus create a temperature difference without doing work, which would violate the Second Law. Hence, as Lord Kelvin first pointed out in 1874, this example suggested that the Second Law was not an absolute law of nature, but a human artefact resulting from the size of Man relative to the atom and from the Law of Large Numbers. Planck's student Zermelo pointed out that Poincaré had proved a theorem showing that almost any mechanical system with finite potential energy, finite kinetic energy, and bounded in space must necessarily
return arbitrarily closely to any previous state. Thus, whatever the state of the Universe now, the entropy as defined by Boltzmann would almost certainly have to decrease in the future back to its present value. Thus the observed entropy increase which the Universe is presently undergoing could occur only if it is assumed that for some mysterious reason the Universe just happens to be in one of the extremely rare low-entropy initial states. Zermelo went on to say: 'But as long as one cannot make comprehensible the physical origin [Zermelo's emphasis] of the initial state, one must merely assume what one wants to prove; instead of an explanation one has a renunciation of any explanation'. In his reply, Boltzmann acknowledged that one could prove an H-Theorem only for those initial states which are far from equilibrium. However, one is not necessarily forced thereby to assume a special Universal initial state:

One has the choice of two kinds of pictures. One can assume that the entire universe finds itself at present in a very improbable state. However, one may suppose that the eons during which this improbable state lasts, and the distance from here to Sirius, are minute compared to the age and size of the universe. There must then be in the universe, which is in thermal equilibrium as a whole and therefore dead, here and there relatively small regions of the size of our galaxy (which we call worlds), which during the relatively short time of eons deviate significantly from thermal equilibrium. Among these worlds the state probability [the H-function] increases as often as it decreases. For the universe as a whole the two directions of time are indistinguishable, just as in space there is no up or down. However, just as at a certain place on earth's surface we can call 'down' the direction toward the centre of the earth, so a living being that finds itself in such a world at a certain period of time can define the time direction as going from less probable to more probable states (the former will be the 'past' and the latter the 'future') and by virtue of this definition he will find that this small region, isolated from the rest of the universe, is 'initially' always in an improbable state. This viewpoint seems to me to be the only way in which one can understand the validity of the Second Law and the heat death of each individual world without invoking a unidirectional change of the entire universe from a definite initial state to a final state. The objection that it is uneconomical and hence senseless to imagine such a large part of the universe as being dead in order to explain why a small part is living—this objection I consider invalid. 
I remember only too well a person who absolutely refused to believe that the sun could be 20 million miles from Earth, on the grounds that it is inconceivable that there could be so much space filled only with aether and so little with life.'

Boltzmann wrote the above words in 1897, and, as is well known, within twenty years the statistical interpretation of the Second Law became universally accepted. Thus physicists were implicitly forced to choose between Boltzmann's 'two pictures' for the origin of the observed present-day improbable universal state. (We say 'implicitly' because most
physicists simply ignored the problem.) One could either adopt the 'creation' interpretation of the Second Law, which held that the Universe at some initial time was simply 'given' in an improbable initial state, or one could adopt the 'anthropic-fluctuation' interpretation, which claimed the Second Law is observed to hold because intelligent life can exist only in regions where the initial conditions allow the Second Law to hold. Both pictures have had their advocates. Boltzmann himself adroitly avoided committing himself definitely to either picture; he even gave credit to his old assistant Dr. Schuetz for the anthropic-fluctuation interpretation. The French physicist Poincaré was mildly attracted to the anthropic-fluctuation interpretation because of the promise it held for avoiding the Heat Death of the Universe, and he pointed out that intelligent life would probably be impossible in a world in which the entropy decreased with time. In such a world prediction would be impossible. For instance, friction would be a destabilizing force rather than a damping force. Two bodies initially at the same temperature would later acquire different temperatures, and it would be essentially impossible to predict in advance which one would become the warmer. Thus intelligent action would be impossible. The American mathematician Norbert Wiener also emphasized that communication between worlds with different directions of entropy increase would be impossible. The creation interpretation became dominant after the discovery that the universe is expanding, since the expansion defined a natural time—the beginning of the expansion—at which to impose initial conditions. Zermelo's problem of the origin of these initial conditions would then be solved (or avoided) by noting that the laws of motion and the universal initial conditions came into being at the same instant, and so the origin of the latter would be no more mysterious than the former.
The expansion of the Universe would cause matter to become spread out over an ever-increasing volume, and thus Poincaré recurrence would not be inevitable. Even closed universes, which do not expand forever, will avoid Poincaré recurrence because the momentum space is unbounded in this type of universe. Thus Poincaré recurrence does not hold in any cosmology governed by general relativity. The statistical interpretation of the Second Law could be combined with the idea of an irreversible Heat Death. The wide diffusion of these ideas stimulated a few non-physicists to revive and defend the anthropic-fluctuation interpretation. The British biologist J. B. S. Haldane calculated, from Jeans' estimate of the size of the Universe, that the time needed for a run-down universe (one at maximum entropy) to return to an atomic distribution as improbable as the one observed at present is 10^100 years. 'During all but a fraction of eternity of this order of magnitude, nothing definite happens. But on a
Materialistic view there is no one to be bored by it'. Haldane went on to say:

If this view is correct, we are here as the result of an inconceivably improbable event, and we have no right to postulate it if any less improbable hypothesis will explain our presence. If there are other stars on which intelligent beings are wondering about their origin and destiny, a far smaller and therefore vastly more probable fluctuation would be enough to account for the existence of the human race.

Haldane argued on the basis of solar system formation theories current in the 1920's that planets with life are very rare, and hence '...it becomes fairly likely that our planet is the only abode of intelligent life in space'. He concluded that

...if this is correct, the [anthropic-fluctuation interpretation] becomes plausible. We have not assumed a more improbable fluctuation than is necessary to account for our being there to marvel at its improbability. If the future progress of astronomy substantiates the uniqueness of our earth, the [anthropic-fluctuation interpretation] of course will gain likelihood.

Haldane's argument will be recognized as a Weak Anthropic Principle argument. It is a variant of Wheeler's argument that the Universe must be at least as big as it is in order to contain intelligent life, and it is an argument we shall be using on many occasions in this book. Actually, the fluctuations could not occur, because of gravitational instabilities, as we shall discuss in Chapter 10. Thus if Boltzmann and Haldane had used the correct physics which was known in their day, they would not have been led by the Weak Anthropic Principle to an incorrect conclusion. Furthermore, the physicist Richard Feynman, in a 1965 lecture, levelled an objection to the fluctuation theory which is sufficiently general to apply against any Anthropic size argument. He called the anthropic-fluctuation theory 'ridiculous' on the grounds that a fluctuation much smaller than the entire visible universe would account for the existence of an inhabited planet, and thus it is most unlikely that the entire visible universe would be in an improbable state, as it is observed to be. Only if intelligent life ultimately requires a space much larger than a single planet can the Anthropic size argument be defended against Feynman's objection. We shall show why a much larger space is needed in Chapter 6. In the past 40 years the anthropic-fluctuation interpretation has been defended mainly by philosophers, while physicists and astronomers have generally developed versions of the creation interpretation shaped to fit the observed fact of universal expansion. The only major exceptions to this rule were those astrophysicists who supported the steady-state theory. Since this theory explicitly and intentionally violated both the First and Second Laws of thermodynamics, these men were not forced to
choose between the two pictures. (They simply assumed that matter was created in a low-entropy state.) The discovery of the microwave background radiation in 1965 ruled out the steady-state theory, and in recent years debate has centred on what sort of initial conditions were imposed in the beginning on the initial singularity. There are two schools of thought. The 'orderly singularity' school, represented by the British mathematician Roger Penrose, contends that the initial singularity had a very regular structure, with just enough irregularity to give rise to the stars and galaxies. The other opinion has been dubbed 'chaotic cosmology' by its chief proponent, the American cosmologist Charles Misner. In this view the Universe would have its approximately regular aspect now no matter what the initial condition of the singularity, because dissipative processes in the early universe would have smoothed out major irregularities by the present epoch (when intelligent life has arisen). Since irregular initial states are much more numerous than regular initial states, one would expect the initial singularity to be very chaotic in structure. The attractiveness of the chaotic cosmology idea lies in the fact that it obviates the necessity of explaining the initial conditions, while the orderly singularity school is faced with explaining them and with Zermelo's problem. In Chapters 7 and 10 we shall give an anthropic explanation for initial conditions which give a globally defined direction of time, thus combining Boltzmann's two possible pictures into one. The chaotic initial condition model will be discussed in more detail in Chapter 6. The universal direction of time, which is determined by the conditions imposed on the initial singularity, is thus ultimately explained anthropically. It is possible in principle to test whether or not the Universe has an overall time direction, in which entropy always increases no matter how far into the future we go.
We shall assume in our arguments elsewhere in this book that entropy always increases; but suppose, on the contrary, that entropy were to rise to a maximum at the point of maximum expansion of a closed universe (see Chapter 6 for a discussion of the behaviour of the various cosmological models) and thereafter begin to decrease, with a return at the final singularity to the conditions which prevailed at the initial singularity. In order for such a return to occur, the disintegration of radioactive materials (for example) must be counterbalanced by a spontaneous regeneration even today, and this could be searched for. John A. Wheeler and W. J. Cocke have considered the experimental implications of this regeneration in some detail. Since such a reversal of entropy would make the continued increase of knowledge by intelligent life impossible, it would contradict FAP; we predict, therefore, that any experiment which looks for a spontaneous regeneration will have negative results.

There remains the question as to whether Maxwell's Demon could create, in the small, a direction of time in reverse to the large-scale universal direction of time by violating the Second Law. He cannot, for it was shown in the 1930's and 1940's that Maxwell's Demon cannot operate. The concept of Maxwell's Demon assumes that it is possible for an intelligent being to operate—to gather information and act on this information—on a scale much smaller than the atoms from which the everyday world is constructed. The Demon would be subject to a set of thermodynamic laws appropriate to his own scale, but would be oblivious to those of our scale. This was not nonsense when the idea of Maxwell's Demon was first developed, in the course of an exchange of letters between Maxwell and Tait in the 1860's. At that time there were actually some observations, namely the absorption of starlight in interstellar space, which were interpreted by Kelvin's friend, the physicist Tait, as evidence of a leakage of energy from our everyday world into another 'world' with its own laws of thermodynamics. Tait, in fact, later tried to use this concept of a hierarchy of 'worlds' to prove the existence of angels, not demons! Tait's cosmology was developed in order to allow intelligent life to escape the Heat Death by moving from one 'world' to another one of an infinite set. The egregious failure of Tait's idea is a warning example to those who would construct a cosmology wherein life can escape the Heat Death, as many have tried to do after him, from the semi-mystical approach of Teilhard de Chardin (section 3.11) to the more scientific approach of ourselves (Chapter 10). The failure of Tait's theory is a failure of the Anthropic Principle applied in the large, where we have argued at length that the AP should be valid. It counts as evidence against the AP as a methodological principle. Nevertheless, it can be said in defence of the AP that Tait's theory was based on false observations.
No scientific principle can yield correct theories if false information is used. Once it is accepted that such a hierarchical structure does not exist, and that any intelligent being would have to use the materials and physical laws of a single unique scale in his operations, it can be shown that Maxwell's Demon cannot exist. Szilard, and later Brillouin, pointed out that in order to separate the fast-moving molecules from the slow-moving ones, the Demon would first have to measure the speeds of the molecules moving toward his door. This measurement necessarily increases the entropy of the system more than the separation of the molecules would decrease it, and so the Second Law would not be violated. The arguments for the non-existence of Maxwell's Demon have suggested to Bohr's student Léon Rosenfeld that the observer whose existence gives rise to the complementarity principle in quantum
mechanics must have a size and complexity comparable to human beings. In the course of his proof that Maxwell's Demon cannot operate, Brillouin obtained a formula for the minimum amount of energy that must be expended to obtain a bit of information. This formula will be crucial in obtaining ultimate limits to the activities and indeed the existence of intelligent life, which we will do in Chapter 10. Since the statistical interpretation of the Second Law is often cited as an example of the reduction of one theory to a more fundamental theory, we might mention that in reality no one has ever been able to deduce the Second Law rigorously from either classical or quantum mechanics without using anthropic arguments or unphysical assumptions. The key problem is that in both quantum and classical mechanics the phase space occupied by the system, measured by Boltzmann's factor H, is an exact constant of the motion: it does not change with time. Only by assuming that the observer measures the factor H only roughly can it be shown that H decreases with time. The exact H cannot change with time, and this is true whatever the initial conditions.
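Brillouin's bound is the familiar kT ln 2 of energy per bit of information acquired. As a purely numerical sketch (the function name is ours; the constant is the SI value of Boltzmann's constant):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def min_energy_per_bit(temperature_kelvin):
    """Brillouin's lower bound, kT ln 2, on the energy that must be
    dissipated to acquire one bit of information at temperature T."""
    return K_BOLTZMANN * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is about 2.9e-21 joules per bit:
print(min_energy_per_bit(300.0))
```

The smallness of this number explains why the bound bites only for the cosmological information budgets considered in Chapter 10.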

3.9 Teleology and the Modern 'Empirical Theologians'

But then arises the doubt, can the mind of man, which has, as I fully believe, been developed from a mind as low as that possessed by the lowest animal, be trusted when it draws such grand conclusions?
Charles Darwin

The 'Empirical' theologians, those theologians who address the question of the purpose of the physical Universe—if any—and the place of Man in it, are a vanishing breed in the twentieth century. Having been burned by the Darwinian refutation of the Paleyan design teleology, most modern theologians try to avoid discussion of this question altogether, and the few who do consider it generally answer it by making sweeping assertions, with very few actual examples from the physical world either to back up or to illustrate those assertions. For instance, Andrew Pringle-Pattison, a Scottish theologian who is considered to have been a major figure in natural theology at the turn of the century, claimed: '...my contention is... that man is organic to the world, or... the world is not complete without him. The intelligent being is, as it were, the organ whereby the Universe beholds and enjoys itself'. He argued that philosophy would be defective if it did not indicate a purpose in the Universe, and that 'philosophy must be unflinchingly humanistic, anthropocentric'. The purpose which Pringle-Pattison found in Nature is akin to Henderson's, although independently conceived. Pringle-Pattison admitted that
the crude teleology of Paley was finished, and he held that this was a good thing; cosmic teleology is the only teleology which can now be defended:

A teleological view of the Universe means the belief that reality is a significant whole. When teleology in this sense is opposed to a purely mechanical theory, it means intelligible whole as against the idea of reality as a mere aggregate or collocation of independent facts.

This notion of 'cosmic teleology' was developed in far more detail by a British theologian, F. R. Tennant, in an influential book Philosophical Theology, first published in 1930, and still in print today. His basic argument for teleology is now familiar:

The forcibleness of Nature's suggestion that she is the outcome of intelligent design lies not in particular cases of adaptedness in the world, nor even in the multiplicity of them... [it] consists rather in the conspiration of innumerable causes to produce, by their united and reciprocal action, and to maintain, a general order of Nature. Narrower kinds of teleological arguments, based on surveys of restricted spheres of fact, are much more precarious than that for which the name of 'the wider teleology' may be appropriated, in that the comprehensive design-argument is the outcome of synopsis or conspection of the knowable world.

According to Tennant, there were three types of natural evidence in favour of teleology acting on a cosmic scale: (1) the fact that the world can be analysed in a rational manner; (2) 'the fitness of the inorganic to minister to life'; and (3) 'progressiveness in the evolutionary process culminating in the emergence of man with his rational and moral status'. Both type (1) and type (2) are essentially Anthropic Principle arguments. In defence of the first type, Tennant points out that it is logically possible to imagine a world which is nothing but a chaos, in which similar events never occurred, in which there were no laws. Since the events of the world can be ordered into what Tennant calls 'anthropic categories'—this appears to be the first use of the word 'anthropic' in this context—it follows that the world is selected out of all possible universes to allow the existence of a reasoning creature; 'anthropocentrism, in some sense, is involved in cosmic teleology'. In short, there is a relation between '...the intelligibility of the world to the specifically anthropic intelligence possessed by us, and... the connection between the conditioning of that intelligibility, on the one hand, and the constitution and process of Nature, on the other hand'. Note that it is the entire orderliness of Nature that shows teleology in Tennant's view. The Universe is a Cosmos in the Greek sense of the word. Tennant emphasizes that 'anthropic' in his use of the term did not necessarily mean that Man as a species was the ultimate purpose of creation. He meant that the Universe was anthropocentric in the sense of being consistent with
rational being: 'it is, of course, a matter of indifference to teleology and anthropocentrism whether the material heavens contain a plurality of worlds'. In defence of the evidence of type (2), Tennant cited Henderson's work with approval, and essentially repeated Henderson's arguments. The result of this approach is a teleological picture that '...no longer plants its God in the gaps between the explanatory achievements of natural science, which are apt to get scientifically closed up'. The disadvantage of this is a more abstract notion of teleology which is apt to lose all connection with Nature. Type (3) evidence sounds a bit like the directed evolution of the philosophers discussed earlier, but is really a concept intermediate between this and the no-global-teleology of modern biologists. Tennant asserted that 'the forthcoming alternative views, between which facts scarcely enable us to decide, may be briefly mentioned':

The divine purposing may be conceived as pre-ordination, in which every detail is foreseen. An analogy is presented in Mozart's (alleged) method of composition, who is said to have imagined a movement—its themes, development, embroidery, counterpoint, and orchestration—in all its detail and as a simultaneous whole, before he wrote it. If God's composition of the cosmos be regarded as similar to this, all its purposiveness will be expressed in the initial collocation, and evolution will be preformation. On the other hand, God's activity might be conceived as fluent, or even as 'increasing', rather than as wholly static, purpose. It might then be compared, in relevant respects, with the work of a dramatist or a novelist such, perhaps, as Thackeray, who seems to have moulded his characters and plot, to some extent, as he wrote.

Although Tennant granted that the question of 'what the ultimate purpose or goal of the world-process is... may admit of no complete answer by man', nevertheless in Tennant's view we can say that '...man is the culmination, up to the present stage of the knowable history of Nature, of a gradual ascent'. The type (2) evidence for teleology as given above by Tennant has, in the intervening half-century, been echoed by a number of distinguished theologians: Laird, Gibson, Bertocci, and Raven. These authors always cite Henderson's work as evidence for such cosmic teleology, but it is clear that they learned of Henderson from Tennant, and they add nothing very original to the argument. The type (1) argument did not originate with Tennant, however. For example, the psychologist James Ward asserted, in his Gifford Lectures delivered in 1896-1898 and published in two volumes with the title Naturalism and Agnosticism, that:

...we are now... entitled to say that this unity and regularity of Nature proves that Nature itself is teleological, and that in two respects: (1) it is conformable to human intelligence and (2), in consequence, it is amenable to human ends. Such is the new step in our [teleological] argument, and it contains all that is essential to complete it.

W. R. Matthews, the Dean of St. Paul's, put this somewhat differently in 1935:

The facts from which the [general teleological] arguments start are general characters of the universe as experienced by us. There is first the impression of an order which is both rational and sublime; there is secondly, the fact that the universe, when interrogated by reason, seems to be a coherent system...

Matthews gave an interesting defence of this perceived order against Hume's criticism, which we discussed in Chapter 2, that it could all be due to some anthropic selection out of chaos—it is well known that a finite number of elements would, in infinite time, go through all possible combinations, some of which would have order. Matthews responds to Hume in two ways: first, the assumption that the number of things in the Universe is finite is in itself an assumption of order. If the number of things is indefinite, then there need not be a repetition of all events. Second, Matthews points out that, according to modern science, the Universe has only existed for a finite time, and in this finite time only a finite number of events could have occurred, all of which seem to be orderly. (Henderson makes a similar appeal to observation in the visible Universe; as far as we can tell, the entire universe is orderly, which is contrary to what we would expect if we merely lived in an island of order.) This direct appeal to experimental fact in support of cosmic teleology is unfortunately rare among modern natural theologians. Both Peacocke, who is not only a theologian but also a physical biochemist, and Mascall—to list only two of the more well-known of the recent writers on the relation between religious topics and science—defend cosmic teleology by arguing that the continuing operation of physical laws needs some teleological justification. If, as we have reason to believe, chaos is much more probable than any form of order, why does the Universe not lapse into chaos the next instant? Why are our expectations of seeing the familiar types of order tomorrow always fulfilled? This sort of argument is so general that it would be consistent with any scientific result, and so, although interesting, it is completely useless. Indeed, Mascall goes out of his way to argue that both a steady-state Universe and the Big-Bang cosmology would be consistent with it.
Modern Teleology and the Anthropic Principles

Both Peacocke and Mascall mention the Anthropic Principle—Peacocke in its modern Dicke-Carter form, and Mascall in a primitive version due to Whitrow (which we shall discuss in section 4.8)—but only in passing. However, there was actually one heroic attempt in the 1930s to combine the anthropic viewpoint that intelligence is important or even essential to the Universe with the science of the day, and actually make a testable prediction. Ernest W. Barnes, the Bishop of Birmingham, was both a theologian and a mathematician with a Sc.D. from Cambridge. His Gifford lectures, delivered in the years 1927-1929, are probably unique amongst modern lectures on theology: about half of the book form of the lectures—which take up about 650 pages—consists of tensor equations together with an exposition of the quantum theory. The book could be used as a textbook in mathematical cosmology circa 1930. At the time Barnes was giving the lectures, the generally accepted theory of planetary formation was binary collisions between stars; of this idea he writes:

But, if planetary systems originate in actual collisions, there may be merely a few hundred of such systems in our Universe . . . this number seems utterly disproportionate to the size of the galactic universe, if we regard that universe as having been created with a view to the evolution of intelligent beings. [Barnes' emphasis] . . . and the suggestion forces itself upon us insistently that the cosmos was made for some end other than the evolution of life. Certainly, however, no such end is apparent to us. My own feeling that the cosmos was created as a basis for the higher forms of consciousness leads me to speculate that our theory of the formation of the solar system is incorrect.

It is well known that Barnes was correct; the theory of solar system formation held in the 1930s was incorrect. This is a correct prediction obtained by Anthropic Principle reasoning. (We shall argue in Chapters 8 and 9 that intelligent life is most unlikely to evolve on any other earthlike planet. But we shall also claim in Chapter 10 that the other solar systems could serve a purpose for intelligent life. Furthermore, if the evolution of intelligence is improbable, many solar systems must exist if there is to be a reasonable chance of intelligent life arising at least once.) Although he believed that the purpose of the Universe was to be a home for intelligent beings, Barnes did not regard mankind as the apex of intelligent life:

But as the millions of years go by, so too, if we may judge the future by the past, will humanity as we know it ultimately yield place to some other animal form? What form? Whence evolved? We cannot say. But some Cosmic Intellect, watching the mature capacities of this unknown form, will almost certainly judge it to be more highly evolved, of greater value in the scheme of things, than ourselves. On Earth man has no permanent home; and if, as I believe, absolute values are never destroyed, those which humanity carries must be preserved elsewhere than on this globe.


3.10 Teleological Evolution: Bergson, Alexander, Whitehead and the Philosophers of Progress

Why is it that you physicists always require so much expensive equipment? Now the Department of Mathematics requires nothing but money for paper, pencils and waste paper baskets and the Department of Philosophy is better still. It doesn't even ask for waste paper baskets.
Anonymous University President

By the start of the nineteenth century, evolutionary concepts had begun to seep into philosophical systems, and in some cases, like that of Schelling, they formed the basis of the system. The idea of an evolutionary cosmos came initially not from the observation of Nature, but rather from a new view of human history. The scholastics of the Middle Ages considered themselves inferior to, or at best equal to, the ancient Greek philosophers. In their opinion, there had been no significant change in basic knowledge or any other fundamental aspect of human society over the whole of human history, which in duration had been about the same as the length of time the Universe had been in existence. Thus there was no reason to believe in an evolutionary Universe. In contrast, the philosophers of the Enlightenment believed themselves vastly more knowledgeable than the Greeks, as shown by the very name of this period. Their scientific knowledge, particularly Newtonian physics and astronomy, was clearly superior to anything the ancients had developed. This indicated an evolutionary change in human knowledge, and it was a change for the better. Progress had obviously occurred in human history, and it required but a short leap of the imagination to go from a progressive humanity to a progressive Universe.

By the late nineteenth and early twentieth centuries, it had become generally accepted that any realistic picture of the Universe had to be evolutionary. The evolutionary world view is not totally dominant even in this century, however, and the idea of a static cosmos has held a strong attraction: Einstein's first cosmological model, proposed in 1917, was globally static. More recently, the steady-state model, both in its original 1950s form and in its contemporary inflationary-universe guise—which we discuss at length in Chapter 6—has been an attempt to retain a cosmos which on a sufficiently large scale does not change.
In general, however, philosophical systems and cosmological models of the present day are fundamentally evolutionary. But an evolving cosmos can be either teleological or non-teleological. There are three ways in which a philosophical system could be teleological. First, some event which the philosopher regards as supremely good—such as the eventual evolution of the human species and its progress to higher and higher levels of civilization—could be considered an inevitable eventual outcome of the evolutionary process. Second, it could be held that the entire Universe is evolving toward some goal. Third, the Universe could be pictured as an organism which by its very nature is teleological. In this section we shall discuss three of the four most influential such teleological systems to be formulated in the twentieth century: the systems of Bergson, Alexander, and Whitehead. The fourth system, the one developed by Teilhard de Chardin, is sufficiently unusual to warrant a separate treatment in section 3.11.

These philosophers did not work in a vacuum; they had an enormous number of nineteenth-century predecessors whom we might term the 'philosophers of progress'. As the distinguished historians Bury and Nisbet have demonstrated, the belief that human history is progressive reached its height in the nineteenth century, although such a view was not unknown even in classical antiquity. The two best known of the philosophers of progress were Karl Marx and Herbert Spencer. The Marxist theory of human development, in which the human social system evolves from the capitalist society of the nineteenth century into first a socialist and finally a communist society, egalitarian and anarchist, is sufficiently familiar to twentieth-century readers to make a detailed discussion unnecessary, but Spencer and his philosophy are almost unknown. The reverse would have been true in the late nineteenth century; Spencer was the most celebrated philosopher of his day: a rationalist, an anti-imperialist, and the last of the great laissez-faire liberals. Spencer's theory of an evolving cosmos is probably derived ultimately from his political philosophy, although he claimed the deduction proceeded in the opposite direction.
His first publication, The Proper Sphere of Government, which he published in 1843 at the age of twenty-three, was a defence of the individual against the power of the state: in this short book he opposed not only state interference with religion and the press, but also government schools and government support of the poor. He reluctantly granted the state the right to make war, but he wished to impose more restrictions on this power than any government has ever accepted. For Spencer, progress occurred through the voluntary cooperation of individuals. The level of advancement of human society could be measured by the amount of restriction it imposed on voluntary cooperation. According to the classical economic theory of Adam Smith, in which Spencer believed, a voluntary or free market society would inevitably develop an increasing amount of human heterogeneity due to the increasing division of labour. A more heterogeneous society would contain more net knowledge than a homogeneous society, for each individual could concentrate on being expert in one area, rather than having to know everything. If everybody possesses essentially the same information that is possessed by everyone else, then the amount of information in the entire system is no greater than the information a single individual has. Only the division of labour could permit the growth of civilization: the more heterogeneous a society is, the more advanced it is, if the differentiation arises by voluntary cooperation.

In contrast to Spencer, Marx regarded the division of labour not as an essential feature of an advanced civilization, but merely as a mark of class exploitation. Marx, and his followers to the present day, believed that in a communist industrial society, every individual could do all jobs:

. . . where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes . . . [it would be] possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without having ever becoming hunter, fisher, shepherd, or critic . . . the enslaving subjugation of individuals to the division of labour, and thereby the antithesis between intellectual and physical labour have disappeared . . . when the all-around development of individuals has also increased their productive powers.


Similar views of advanced societies can be found today as a general rule only among those socialists ignorant of economics. Those socialists who are knowledgeable about economics (e.g., refs 267, 268) recognize the necessity of the division of labour for an advanced society, as do the vast majority of economists of all political beliefs. It thus seems reasonable to assume that Spencer was correct at least in this respect about the social organization of all possible advanced societies. This will be important in the analysis of the likely behaviour of advanced extraterrestrial civilizations, which will be covered in Chapter 9.

Spencer divided social systems into two types: military and industrial. The former are characterized by rigid hierarchical social classes, like an army. Cooperation and the resulting division of labour are restricted in such societies by force, for cooperation would interfere with the privileges of the ruling classes. The industrial society is the form of free market society ushered in by the industrial revolution. Since the industrial society is both more knowledgeable and based on cooperation rather than violence, it is morally superior to the military society. Being able to use more knowledge, industrial society is competitively superior to the more primitive military society, and consequently it should eventually replace the military society. Thus human social evolution is clearly progressive; evolution has a goal, and this goal is freedom for the individual. Spencer's cosmology is teleological in the first sense defined above.


Spencer argued that the driving force behind progressive human social evolution—increasing differentiation—was also operative in nonhuman biological and inorganic realms of the Universe. The Spencerian cosmos began with a homogeneous cloud of matter, which the force of gravity differentiated into stars and planets. Inorganic matter differentiated under the action of electrical forces first into complex forms of non-living compounds, and later into life. In Spencer's opinion, the increasing complexity of living creatures seen in the fossil record is best understood by comparing it to the increasing complexity which occurs in a developing embryo: it begins as a single cell, which divides and differentiates into the various cell types required by the cell division of labour in the metazoan. The cosmic differentiation process has now progressed to the human level, and it should continue to improve the human type. Spencer never considered the possibility that the differentiation process might eventually generate a species superior to Homo sapiens. He did, however, worry about the ultimate fate of his cosmology, for there might be a limit to the heterogeneity of matter, and he was aware also of the Heat Death problem. He concluded that the Universe is fundamentally cyclic, and that eventually the Universe would re-homogenize.

As mentioned above, Spencer's ideas had an immense influence on intellectuals the world over at the beginning of the twentieth century. Amongst them was the American palaeobotanist and sociologist Lester Ward, who argued that the next stage of evolution, which Western Man was just entering, was characterized by 'telic evolution', or 'social telesis', in which government would provide more precise guidance to progress. Ward's ideas were echoed in Britain by L. T. Hobhouse.
Ward, Hobhouse, and later John Dewey, were the main philosophers of progress who changed liberalism from its classical or laissez-faire form, in which progress would result from the unregulated free market, into its modern form, in which the goal is best obtained by government oversight. In either form, liberalism claims human social development is inevitably melioristic, and hence liberalism is teleological in the first sense defined above. None of the 'inevitable' social developments predicted by any of the above-mentioned philosophers of progress actually happened. Spencer would be shocked by the increase of government control of the economy in this century, a development he would have regarded as reactionary, while Marx would be shocked by the continued existence and expansion of the free market, a development which he would have regarded as reactionary. Social philosophers such as Karl Popper and Friedrich Hayek have argued that the future evolutionary history of a complex social system is inherently unpredictable in the long run, because a prediction would have to be based on an accurate model of society, and a sufficiently accurate model would be too complex to be coded in any mind or computer in the society. The memory of a finite state machine is inadequate to describe everything, including itself. One of Hayek's arguments was actually a formal mathematical proof that a finite state machine could not predict its future evolution. A similar proof for an infinite state machine was first obtained by the famous computer scientist Alan Turing some years after Hayek. Popper has developed this argument that unpredictability in social evolution is due to the impossibility of complete self-reference.

Henri Bergson (1859-1941) is generally regarded as the foremost French philosopher of the twentieth century. His philosophy is based on 'Becoming', or the temporal aspects of reality, as the fundamental metaphysical concept. 'Being', or existence, is the basic metaphysical entity in the Cartesian philosophical tradition which was the dominant influence in French philosophy before Bergson. In philosophies of Being, time, or more generally evolution, is regarded as illusory or of no fundamental importance. Teleology, which is basically temporal, is also not regarded as primary. The most significant contribution of Bergson was to make French philosophy take evolution seriously, an effect Schelling had earlier had on German philosophy. In fact, the historian Lovejoy, whose classic work The Great Chain of Being is largely an analysis of the tension between the ideas of Being and Becoming in Western philosophy and theology, considers Bergson's philosophy to be largely a reworking of Schelling's. Bergson's influence on such French evolutionary philosophers as Teilhard de Chardin was immense. Bergson carefully distinguished his version of teleology, or finalism, from the versions which were at bottom really equivalent to mechanism:


. . . Radical mechanism implies a metaphysic in which the totality of the real is postulated complete in eternity, and in which the apparent duration of things expresses merely the infirmity of a mind that cannot know everything at once . . . we reject radical mechanism. But radical finalism is quite as unacceptable. The doctrine of teleology, in its extreme form, as we find it in Leibniz, for example, implies that things and beings merely realize a programme previously arranged. But there is nothing unforeseen, no invention or creation in the universe; time is useless again. As in the mechanistic hypothesis, here again it is supposed all is given. Finalism thus understood is only inverted mechanism. Yet finalism is not, like mechanism, a doctrine with fixed rigid outlines . . . It is so extensible, and thereby so comprehensive, that one accepts something of it as soon as one rejects pure mechanism. The theory we shall put forward . . . will therefore necessarily partake of finalism to a certain extent . . . [the doctrine of finality] realizes that if the universe as a whole is the carrying out of a plan, this cannot be demonstrated empirically . . .



Bergson's version of teleology was what he termed 'external finality', by which he meant that all living beings were ordered for each other:

In [our theory], finality is external, or it is nothing at all . . . If there is finality in the world of life, it includes the whole of life in a single indivisible embrace.


Evolution in Bergson's opinion was fundamentally creative in the sense that it always engendered something wholly new, something whose nature and whose coming-into-being could not have been foreseen by knowledge of what had come before. Only if evolution worked in this way could Becoming and not Being be regarded as metaphysically primary. Nature was an organic whole, ultimately teleological because it is driven by a non-physical Life Force, but whose future and goals are ultimately unknowable:

Never could the finalistic interpretation, as we . . . propose it, be taken for an anticipation of the future. . . . the universe is not made, but is being made continually. It is growing, perhaps indefinitely, by the addition of new worlds.


Bergson was aware of the difficulty which the Heat Death posed for his philosophy through the books of the French physicist Meyerson, but he tried to play down the problem. He could only suggest that life may be able to take a form in which the ultimate Heat Death, the final use of all free energy, was delayed indefinitely. He also suggested that 'considerations drawn from our solar system' might not apply to the Universe as a whole. These were good guesses, as we shall see in Chapter 10.

Samuel Alexander (1859-1938) was a metaphysician who was born in Australia, but who spent his adult life in England. His most noteworthy contribution was an attempt to infer on philosophical grounds the future evolutionary history of the most advanced branch of life. In 1930 he was made a member of the Order of Merit (an honour which is more highly regarded by British academics than winning a Nobel prize) for his work. He presented a fully developed version of his theory as a series of Gifford Lectures at the University of Glasgow in 1916-1918. His system had a great influence on speculative British philosophy in the early part of this century. Whitehead's metaphysical system can be regarded as an elaboration and extension of Alexander's from a somewhat different perspective.

For Alexander, the fundamental entity was Space-Time, which engenders first matter, then life, and finally mind. But there is a stage beyond mind, termed 'deity' by Alexander, which is as superior to mind as mind is to life without mind. Just as a mind exists in a living being, but most living beings (all non-human living beings, in fact) do not have mind, so deity is a property which will exist in mind, but most minds will not possess deity. The purpose of the universe is to bring deity into being:

Deity is thus the next higher empirical quality to mind, which the universe is engaged in bringing to birth. That the universe is pregnant with such a quality we are speculatively assured. There is a nisus in Space-Time which, as it has borne its creatures forward through matter and life to mind, will bear them forward to some higher level of existence. . . . our supposed angels are finite beings with the quality of deity, that quality which contemplates mind as mind contemplates life and matter . . . beings with finite deity are finite Gods.


With Alexander, the notion of an evolving God, who does not always exist but rather comes into existence, first appears in English philosophy. In the distant past there was no deity, just as there once was no mind, and even further back in time there was no life. For Alexander,

God is the whole universe as possessing the quality of deity. Of such a being the whole world is the 'body' and deity is the 'mind'. God includes the whole universe, but his deity, though infinite, belongs to, or is lodged in, only a portion of the universe.


Alexander's concept of an evolving God will be recognized as similar to that of Schelling and Teilhard de Chardin. However, Alexander did not leave behind him a school which developed his particular brand of evolutionary and teleological metaphysics.

Alfred North Whitehead (1861-1947) was trained as a mathematical physicist, and his metaphysics reflects his training, in the sense that it was far more consistent with the physical science of his day—relativity and quantum mechanics—than were the systems of Alexander and Bergson. Whitehead's cosmology received its most comprehensive expression in his Gifford Lectures delivered at the University of Edinburgh during 1927-1928, which were published under the title Process and Reality: An Essay in Cosmology. Throughout this work Whitehead constantly asserts the natural world to be an organism, by which he meant that it resembles an organism in that the essence of each object lies not in its intrinsic nature, but rather in its relation to the whole: his view was quite similar to that of the nineteenth-century German biologists and the ancient Chinese Taoists and Confucians whose philosophies we discussed in Chapter 2. Like the Chinese, Whitehead applied his philosophy of organism not only to living things, but also to the inorganic physical universe. Whitehead used the very suggestive word 'society' to refer to the order which results:

The members of the society are alike because, by reason of their common character, they impose on other members of the society the conditions which lead to that likeness.


The entities of physical science formed such a society:

Maxwell's equations of the electromagnetic field hold sway by reason of the throngs of electrons and of protons. Also each electron is a society of electronic occasions, and each proton is a society of protonic occasions. These occasions are the reasons for the electromagnetic laws; but their capacity for reproduction, whereby each electron and each proton has a long life, and whereby each electron and each proton come into being, is itself due to these same laws. . . . Thus in a society, the members can only exist by reason of the laws which dominate the society, and the laws only come into being by reason of the analogous characters of the members of the society.

In other words, the laws of physics and the elementary particles come into existence spontaneously by a sort of mutual self-consistency requirement. But a self-ordered society is not forever:

But there is not any perfect attainment of an ideal order whereby the indefinite endurance of a society is secured. A society arises from disorder, where 'disorder' is defined by reference to the ideal for that society; the favourable background of a larger environment either itself decays, or ceases to favour the persistence of the society after some stage of growth: the society ceases to reproduce its members, and finally after a stage of decay passes out of existence. Thus a system of 'laws' determining reproduction in some portion of the universe gradually rises into dominance; it has its stage of endurance, and passes out of existence.

Thus the laws governing the elementary particles which exist today, together with the elementary particles themselves, will gradually pass out of existence, and they will be replaced by other types of elementary particles governed by different laws. Whitehead explicitly lists as something bound to pass away not only the laws of electromagnetism, but also the four-dimensional nature of the space-time continuum, the axioms of geometry, and even the dimensional character of the continuum. A period of universal history in which a definite self-consistent set of physical laws holds sway was termed a cosmic epoch by Whitehead. In the fullness of time all logically possible universes will exist; our own Universe—our own cosmic epoch—is just one of many which will eventually pass away. Whitehead rejected Leibniz' theory of the best of all possible worlds (which is Leibniz' explanation of why just one of all logically possible worlds exists) as 'an audacious fudge'.

Broadly speaking, Whitehead's cosmology is the same as the globally static cosmologies generating so much interest today. His picture of cosmic epochs is similar to the 'bubble universe' model developed by Fred Hoyle and by Richard Gott, in which the visible portion of the Universe is just one of an infinite number of bubbles in an overall chaotic universe, or the bubble universes in the inflationary universe models (see Chapter 6 for a detailed discussion). In the inflationary universe, our bubble universe has certain physical properties because of the particular way in which it condensed out of a chaotic medium. As in Gott's theory, the bubble may disappear, with the material in the bubble returning once again to chaos. On a sufficiently large scale, the universe is pictured as chaotic, for assuming global chaos obviates the problem of assuming certain initial conditions for the field equations. This is the modern analogue of Whitehead's solution of why just one of all logically possible worlds exists: there is no problem if they all exist, as we mentioned in section 2.9. We shall discuss another version of this solution in Chapter 7, when we describe the Many-Worlds interpretation of quantum mechanics. In contrast to the bubble universe model, the Many-Worlds model allows evolution to occur on a global scale while simultaneously allowing all logically possible universes to exist. The difference is due to the fact that in the bubble universe model the different universes exist in physical space, whereas in the Many-Worlds model the different universes exist in a Hilbert space of realized possibilities. Whitehead can be regarded as the first philosopher who appreciated the advantages of the Many-Worlds ontology. The ontology of the Many-Worlds cosmology and Whitehead's cosmology is that which was implied by what A. O. Lovejoy called the 'Principle of Plenitude' (see section 3.2). We can regard these cosmologies as a modern expression of this principle. Whitehead's cosmology is also remarkably similar to Wheeler's earlier proposal (which he no longer believes) that a closed universe may go through an infinite number of cycles, with the physical laws being different in each cycle. Wheeler's proposal is often mentioned as a possible model for WAP: our particular type of intelligent life is consistent only with a very special set of physical laws, so we naturally exist in that cycle in which such laws hold sway.
The basic mechanism for change in Whitehead's cosmology is teleological. When an object A changes into an object B, the change is not pictured as random; rather, A is to be thought of as orienting its changes toward B. Furthermore, processes occur in a cosmic epoch because 'eternal objects'—which are somewhat analogous to the Platonic forms that exist in the realm of ideas—act as a 'lure' (Whitehead's term) for the process. This teleological process at the most fundamental level gives rise to the efficient causes which scientists investigate. Whitehead regarded efficient and final causes as complementary modes of explanation:


A satisfactory cosmology must explain the interweaving of efficient and final causation. Such a cosmology will obviously remain an explanatory arbitrariness if our doctrine of the two modes of causation takes the form of a mere limitation of the scope of one mode by the intervention of the other mode. What we seek is such an explanation of the metaphysical nature of things that everything determinable by efficient causation is thereby determined, and that everything determinable by final causation is thereby determined. The two spheres of operation should be interwoven and required, each by the other. But neither sphere should arbitrarily limit the scope of the alternative mode.

Whitehead's main physical evidence for the existence of final causation was the very existence of a 'bubble' Universe:

Our scientific formulation of physics displays a limited universe in the process of dissipation. We require a counter-agency to explain the existence of a Universe in dissipation within a finite time.

Nevertheless, in a more fundamental sense, Whitehead's cosmology is not teleological in the large, as the cosmology of Alexander is, for the Universe in the sense of the totality of everything that exists is not evolving toward some goal.

An anthropic prediction was made in the 1960s by Whitehead's follower, the philosopher Charles Hartshorne, whose work on the ontological argument we discussed in section 2.9. Hartshorne did not accept the overall lack of progress in Whitehead's cosmology, and he modified the Whiteheadian Universe so that there was net progress from one time to the next. This requires that it be possible to define globally a time coordinate, for otherwise it would not be possible in the large to define the temporal sequence of events. In special relativity it is not possible to define a unique global time coordinate because of the global properties of the Poincaré group. As a mathematical physicist, Whitehead was aware of this, and we submit that this awareness prevented him from endowing his cosmology with progress in the large. Hartshorne, as a philosopher, did not feel the constraints of physics as strongly as Whitehead. Nevertheless, he was aware that the popular view of relativity posed problems for his cosmology:


Relativity physics is a puzzling case for my thesis, the most puzzling indeed of all. If reality is ultimately a self-surpassing process, embraced in a self-surpassing divine life, there must be something like a divine past and future. According to relativity physics, there is indeed, for our localized experience, a definite cosmic past and a definite cosmic future, but not a definite cosmic present. We may have two contemporaries out in space, one of which is years in the past of the other. And there seems no way to divide the cosmic process as a whole into past and future. Yet if neoclassical theism is right, it seems there must, for God at least, be a way. What is God's 'frame of reference', if there is no objectively right frame of reference for the cut between past and future? I can only suppose that we have in this apparent conflict a subtler form of the illicit extrapolation to the absolute from observational facts. Somehow relativity as an observational truth must be compatible with divine unsurpassability.

Modern Teleology and the Anthropic Principles

As we mentioned in section 2.9, in so far as his ontology is concerned, Hartshorne is basically a pantheist. As he put it, 'Pantheism in this sense is simply theism aware of its implications... [on ontological questions] there is indeed no real issue between theism and pantheism'. Thus his deity must be subject to the rules of temporal succession implied by physics. Since special relativity does not permit the required temporal succession, Hartshorne insists that the temporal succession rules of special relativity do not apply globally to the Universe. Hartshorne is quite correct; they do not. The frame of reference in which the cosmological background radiation has the same temperature in all directions defines a unique global time coordinate in which the notions of past, present and future of a given event can be defined. Furthermore, the global time defined by this frame is essentially the same as the global time defined by the constant mean curvature foliation in a Universe which is approximately homogeneous and isotropic. This point will be discussed in more detail in Chapter 10. In special relativity no unique global time can be defined, but general relativity is Lorentz invariant only locally, not globally, and thus a global time can be defined. The existence of a global time in cosmology was actually pointed out by Sir Arthur Eddington in 1920. Hartshorne was unaware of this, so his argument counts as a correct prediction about the global temporal structure based on the Anthropic Principle, for in his view men were to be thought of as 'nerve cells' of God. Hartshorne even tried to justify the speed of light limitation anthropically: 305

There is a conceivable teleological justification for relativity. What good would it do us to be able to transmit messages with infinite velocity? It is bad enough being able to learn about troubles around the world in seconds, but to get bad news quickly from remote planets, and have to reply almost at once—that would be too much. Thank God we are isolated by the cosmically slow speed of light—we have enough complexity on our hands with this planet. Thus, once more, the heavens declare the glory of God. 309
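The unique cosmic time invoked above against the special-relativistic objection can be made concrete in standard notation (our illustrative sketch, not part of the original text): in a universe which is homogeneous and isotropic, the metric takes the Robertson-Walker form,

```latex
% Robertson-Walker line element; t is the global cosmic time coordinate.
ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2}
       + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right]
```

Observers at rest in these coordinates see the background radiation as isotropic, and the hypersurfaces $t = \mathrm{const}$ are the constant mean curvature slices mentioned in the text; it is in this sense that cosmology, unlike special relativity, supplies a preferred global time.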

3.11 Teilhard de Chardin: Mystic, Paleontologist and Teleologist

Schopenhauer was a degenerate, unthinking, unknowing, nonsense-scribbling philosopher, whose understanding consisted solely of empty, verbal trash.
Ludwig Boltzmann

As stressed by Joseph Needham, one of the reasons for the widespread and continuing popular interest in the philosophical work of Teilhard de Chardin is the man himself. Many theologians and philosophers before him had attempted to make their religious beliefs or philosophical systems consistent with or even based on the fact of an evolving cosmos. Many devout scientists before him had tried to show their evolutionary science was consistent with their religion. Previously we have given examples of both. But Teilhard combined in one person the scientist and theologian: he had acquired a world-wide reputation as a paleontologist specializing in the evolution of Man; he was also a Jesuit priest. When it came to reconciling science and religion, a scientist and theologian could speak perhaps with double authority. Our society tends simultaneously to respect authority and to distrust it. An authority who is silenced by authority is thus especially interesting. Teilhard had begun in the 1920's to lecture about his speculations on combining Catholicism with evolution. The leaders of the Jesuit order exiled him to China to prevent further discussion of these views in his native France. He was forbidden to publish any of his philosophical works in his lifetime. When a chair in paleontology became vacant at the Collège de France, he was not permitted to apply for the position. He moved to New York City, where he died in 1955. He is exiled even in death: he is buried in the cemetery of a small monastery some 50 miles from New York, far from his beloved France. When Teilhard's ideas on evolutionary Christianity were published in the year of his death, his friends (of whom there were many, for by all accounts he was an extraordinarily likeable man) spread far and wide the pathos of his life-story. Undoubtedly, this resulted in his ideas being given a vastly more sympathetic hearing than they might otherwise have received (and than they probably deserve!). Nevertheless, it would be a mistake to think that the enormous initial and continuing interest in the work of Teilhard is due entirely or even primarily to mere psychological and social factors.
His evolutionary theological cosmology has certain key features which distinguish it from the somewhat similar systems of Schelling, Alexander, and Bergson. Many of the theologians in the English-speaking world, notably Philip Hefner, Arthur R. Peacocke, and Charles Raven, have been very sympathetic to Teilhard's work. Teilhard opens what is generally regarded as his most significant philosophical work, The Phenomenon of Man, with the statement: 'If this book is to be properly understood, it must be read not as a work on metaphysics, still less as a sort of theological essay, but purely and simply as a scientific treatise. The title itself indicates that'. His critics—for instance the evolutionary biologist G. G. Simpson and the zoologist Sir Peter Medawar—have taken him to task for this assertion, but we believe a close reading of Teilhard's central work will justify his claim. The work was admittedly not written in standard scientific style; Teilhard used a more mystical language, which certainly annoyed many scientists. 311



Medawar, for example, was so put off by the language that he charged the book '... cannot be read without a feeling of suffocation, a gasping and flailing around for sense ... the greater part of it is nonsense, tricked out by a variety of tedious metaphysical conceits, and its author can be excused of dishonesty only on the grounds that before deceiving others he has taken great pains to deceive himself'. Most of The Phenomenon of Man is devoted to a poetic description of an evolving Earth, beginning with the formation of the planet, and then moving on to the development of life from its most primitive manifestation to the emergence of Man. On a phenomenological level Teilhard's picture is the standard scientific one of the late 1930's when the book was written. Some phyla of single-celled organisms eventually develop into metazoans, some phyla of which in turn develop organisms with highly developed nervous systems, and one lineage of these creatures finally acquires intelligence: the 'hominisation'—Teilhard's word—of the world has at last occurred. If the picture is standard, the physical mechanism behind the ascent of life is not. Teilhard argued that energy existed in two basic modes, 'tangential' and 'radial'. The former is essentially the energy measured by the instruments of the physicist, while the latter can be regarded as a sort of psychic or spiritual energy. Teilhard's motivation for introducing the latter variety is twofold: first, his cosmological system evolves higher and higher order in its biota as time proceeds, and this seemed to him to be forbidden by the Second Law of Thermodynamics, which he admits governs the evolution of the usual variety of energy. Furthermore, the eventual Heat Death predicted by the thermodynamicists would undermine any hope of having Ultimate Intelligence permanently immanent in the Cosmos.
He is well aware that if intelligence is at bottom completely dependent on tangential energy, it must be doomed to extinction in the end, however powerful it becomes, if in fact the Heat Death occurs. Therefore, his radial energy is subject to a universal law contrary to the Second Law of tangential energy: radial energy becomes more concentrated, more available with time, and it is this concentration that drives the evolution of life to Man, and beyond. Radial energy—psychic energy—is as ubiquitous as tangential energy. It is present in all forms of matter at least to a rudimentary extent, and so all forms of matter have a low-level sort of life. To modern scientists this vitalism seems archaic, even occult, but such a concept was held by a number of distinguished thinkers at the time Teilhard was writing. In the opinion of Teilhard, '... the idea of the direct [his emphasis] transformation of one of these two energies into the other ... has to be abandoned. As soon as we try to couple them together, their mutual independence becomes as clear as their interrelation'. 315 His reasons for this view are as follows:

'To think, we must eat'. But what a variety of thoughts we get out of one slice of bread! Like the letters of the alphabet, which can equally well be assembled into nonsense as into the most beautiful poem, the same calories seem as indifferent as they are necessary to the spiritual values they nourish. The two energies—of mind and matter—spread respectively through the two layers of the world (the within and the without) have, taken as a whole, much the same demeanour. They are constantly associated and in some way pass into each other. But it seems impossible to establish a simple correspondence between their curves. On the one hand, only a minute fraction of 'physical' energy is used up in the higher exercise of spiritual energy; on the other, this minute fraction, once absorbed, results on the internal scale in the most extraordinary oscillations. A quantitative disproportion of this kind is enough to make us reject the naive notion of 'change of form' (or direct transformation)—and hence all hope of discovering a 'mechanical equivalent' for will or thought. Between the within and the without of things, the interdependence of energy is incontestable. But it can in all probability only be expressed by a complex symbolism in which terms of a different order are employed. 320

Since this passage was written, we have discovered in effect the 'mechanical equivalent' for will or thought. These manifestations of mind are just two types of information, and the minimum amount of energy that must be dissipated in order to generate a given number of thoughts (or bits of information) can be calculated rather simply. The detailed theory will be presented in Chapter 10, but here we can remark that, to take Teilhard's example, a piece of bread can generate at most about 10^25 bytes of thought. Information theory thus removes a cornerstone of Teilhard's theory, and qua scientific theory it crashes to the ground. Medawar mentioned that 'Teilhard's radial, spiritual or psychic energy may be equated to "information" or "information content" in the sense that has been made reasonably precise by communications engineers', and he realized that information did not avoid the restrictions of the Second Law. However, the fact that it is possible to demolish Teilhard's theory by reference to physics shows it was in fact a scientific theory, as Teilhard claimed, for a general conceptual scheme which is in principle falsifiable is a scientific theory. Many modern philosophers of science would not agree with Karl Popper that falsifiability is a necessary condition for a theory to count as scientific, but most, if not all, would agree that it is a sufficient condition. Although the specific theory advanced by Teilhard has been refuted, his basic meta-theoretical notion of a melioristic cosmos, a universe which evolves God, has not been refuted and indeed cannot be. No mere experiment can destroy a general conceptual scheme. Furthermore, any evolving universe theory which rejects dysteleology must be broadly similar to Teilhard's (or Schelling's or Alexander's). In Chapter 10 we shall present a mathematical model of a different sort of melioristic cosmos. According to his biographer Claude Cuénot, Teilhard became fascinated in the 1950's with computers and the relation between information and entropy. Cuénot also claims that Teilhard's new knowledge of information theory led him to go beyond the distinction which he had drawn earlier in The Phenomenon of Man between radial and tangential energy. (We believe Teilhard would have been unable to replace radial energy with information, for reasons we discuss below.) In a letter of May 15, 1953, Teilhard wrote: 323


What really interests me in cybernetics is the transformation of materialism it suggests to us. A machine is not (or is no longer) an affair primarily of energy set in motion, but of information put together and transmitted. 324
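The 'mechanical equivalent' for thought mentioned earlier, whose detailed theory is deferred to Chapter 10, can be sketched via the thermodynamic lower bound on the energy cost of information processing; the numerical estimate below is our own illustration, not a figure taken from the text:

```latex
% Minimum energy dissipated to generate N bits at temperature T
% (the Landauer--Brillouin bound):
E_{\min} = N \, k_B T \ln 2
\quad\Longrightarrow\quad
N_{\max} = \frac{E}{k_B T \ln 2}.
% For a slice of bread, E \approx 4 \times 10^{5}\,\mathrm{J} (about 100 kcal),
% and at T \approx 300\,\mathrm{K} we have k_B T \ln 2 \approx 3 \times 10^{-21}\,\mathrm{J},
% so N_{\max} \sim 10^{26} bits.
```

Any bound of this kind suffices for the argument against Teilhard: a finite energy budget buys only finitely many bits of 'thought', so a psychic energy identified with information cannot evade the Second Law.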

Teilhard felt that the information-processing of a computer was analogous to human thought. He did not believe that computers would replace Man 'for a variety of biological reasons'. Rather, he envisaged Man and the computer in a partnership which would enormously expand human mental powers. In Teilhard's theory the radial energy generated single-celled organisms on the newly condensed Earth, then drove these organisms to cover the Earth and combine to form the metazoans. More than half of The Phenomenon of Man is devoted to describing the expansion and combination process in terms which, in rough outline, do not differ significantly from standard evolutionary textbooks. The great evolutionists Simpson and Dobzhansky differ about whether Teilhard believed evolution to be orthogenetic (orthogenesis meaning that the development of life throughout the entire past history of the Earth is nothing but a predetermined unfolding of characteristics already present at the beginning of organized life), or whether he believed, with the vast majority of contemporary evolutionists, that the evolutionary process is opportunistic, with no foresight. Teilhard certainly described evolution as 'orthogenetic'. From this and the apparent fact that in Teilhard's picture the development of Man is inevitable, Simpson dismisses Teilhard's work as 'evolutionary mysticism'. Medawar and many others are also particularly hard on Teilhard's theory because it apparently requires orthogenesis. However, we must agree with Dobzhansky that '... in spite of himself, Teilhard was not an exponent of orthogenesis'. Teilhard himself said he used orthogenesis '... for singling out and affirming the manifest property of living matter to form a system in which 'terms succeed each other experimentally, following constantly increasing degrees of centro-complexity'. [Teilhard's stress and quotes] ... Without orthogenesis life would only have spread; with it there is an ascent of life that is invincible'.


The key word in this passage is 'experimentally'. Teilhard, as Dobzhansky emphasizes, is quite aware that the success of new forms of life is not guaranteed. New species are experiments of radial energy—the life force, as it 'gropes'—Teilhard's word—its way to higher and higher complexity. Only increased organization of life is inevitable, because of the inherent centralizing properties of the radial energy. As a devout Catholic priest convinced of Man's free will, Teilhard could not possibly be advocating anything that would resemble determinism. Orthogenesis would entail determinism if Teilhard used the word in its standard sense. Most orthogenetic theories imply the inevitable evolution of intelligent life. But Teilhard believed it would be unlikely 'that if the human branch disappeared, another thinking branch would soon take its place'. He also thought the evolution of extraterrestrial intelligent life to have a '... probability too remote to be worth dwelling on'. However, there is a weak determinism acting: although any individual or species may fail, the most complex organism cannot do so until it has engendered its even more complex successor: 328


... we must not forget that since the birth of thought man has been the leading shoot of the tree of life. That being so, the hopes for the future ... (of biogenesis, which in the end is the same as cosmogenesis) is concentrated exclusively upon him as such. How then could he come to an end before his time, or stop, or deteriorate, unless the universe committed abortion upon itself, which we have already decided to be absurd? ... Man is irreplaceable. Therefore, however improbable it might seem, he must reach the goal, not necessarily, doubtless, but infallibly [Teilhard's emphasis]. 330

We could claim that Teilhard's distinction between 'necessary success' and 'infallible success' is his way of distinguishing between strong and weak determinism in the manner we discussed above. This distinction is a traditional one in Catholic theology (it has also been drawn by Leibniz): it is essentially Aquinas' distinction between absolute and hypothetical necessity. Such a distinction is mandatory if a metaphysics is to contain both free will and an omniscient and omnipotent Deity, as Catholicism does. What is the goal of mankind, according to Teilhard? Just as nonsapient life covered the Earth to form the biosphere, so mankind—thinking life—has covered the Earth to form what Teilhard terms the noosphere, or cogitative layer. At present the noosphere is only roughly organized, but its coherence will grow as human science and civilization develop, as 'planetization'—Teilhard's word—proceeds. Finally, in the far future, the radial energy will at last become totally dominant over, or rather independent of, tangential energy, and the noosphere will coalesce into a super-sapient being, the Omega Point. This is the ultimate goal of the tree of life and of its current 'leading shoot', Homo sapiens. As Teilhard poetically puts it in The Phenomenon of Man:

This will be the end and the fulfilment of the spirit of the Earth. The end of the world: the wholesale internal introversion upon itself of the noosphere, which has simultaneously reached the uttermost limit of its complexity and centrality. The end of the world: the overthrow of equilibrium [read, 'Heat Death'], detaching the mind, fulfilled at last, from its material matrix, so that it will henceforth rest with all its weight on God-Omega. 332

So speaks Teilhard the Catholic mystic, who has identified the Omega Point with the Christian God (or rather with Christ, who in the Catholic doctrine of the Trinity is regarded as the manifestation of God in the physical Universe). But Teilhard claims to be writing qua scientist in this book, and in fact some phenomenological properties of the Omega Point can be gleaned from some of the book's passages. One key property of the Omega Point is that It, in contrast to the dysteleology of the Second Law of Thermodynamics as understood by the physicists of the early twentieth century, must allow mankind to finally escape the Heat Death, the inevitable end of the forces of tangential energy:

The radical defect in all forms of belief in progress, as they are expressed in positivist credos, is that they do not definitely eliminate death. What is the use of detecting a focus of any sort in the van of evolution if that focus can and must one day disintegrate? To satisfy the ultimate requirements of our action, Omega must be independent of the collapse of the forces with which evolution is woven ... Thus something in the cosmos escapes from entropy. 333

The Omega Point must in some sense be in the future, at the end or boundary of time, after the end of matter:

... Omega itself is ... at the end of the whole process, in as much as in it the movement of synthesis culminates. Yet we must be careful to note that under this evolutive facet Omega still only reveals half of itself. While being the last term of its series, it is also outside all series. Not only does it crown, but it closes. If by its very nature it did not escape from time and space which it gathers together, it would not be Omega. 333

The details of the transition from the disorganized noosphere to the unity of the Omega Point are (not surprisingly!) few. Teilhard speaks, however, of the transition from animal existence to reflecting, thinking life in terms which make us suspect he was envisaging an analogous process for the origination of the Omega Point:

... taking a series of sections from the base towards the summit of a cone, their area decreases constantly; then suddenly, with another infinitesimal displacement, the surface vanishes leaving us with a point [Teilhard's emphasis] ... what was previously only a centered surface became a center ... Thus by these remote comparisons we are able to imagine the mechanism involved in the critical threshold of reflection. 334

In other words, the Omega Point could be compared to a conical singularity. Coincidentally, this is essentially the view of the end of time one finds in modern cosmology for closed universes, and indeed in another Omega Point theory developed in the final chapter of this book; there the Omega Point will actually be identified with a point on the c-boundary of space-time. It is essential that the Universe be closed—that is, be finite in spatial extent—if the future c-boundary is to have a point-like structure. Interestingly, Teilhard's Omega Point theory also seems to require a boundedness of the spatial structure—the Earth in his theory—in order for the Omega Point to be generated out of the coalescence of mankind:

... there intervenes a fact, commonplace at first sight, but through which in reality there transpires one of the most fundamental characteristics of the cosmic structure—the roundness of the Earth. The geometrical limitation of a star closed, like a gigantic molecule, upon itself ... What would have become of humanity, if, by some remote chance, it had been free to spread indefinitely on an unlimited surface, that is to say left only to the devices of its internal affinities? Something unimaginable, certainly something altogether different from the modern world. Perhaps even nothing at all, when we think of the extreme importance of the role played in its development by the forces of compression. 335

The 'forces of compression' about which Teilhard speaks are the social forces which arise from Man communicating with his fellows. It is the requirement of ceaseless communication in the future Universe that implies a point c-boundary structure for the future end of the Universe, as shown in Chapter 10. In the theory developed there, as well as in Teilhard's theory, an Omega Point can evolve only in a bounded world. Teilhard's bounded world was the finite Earth. He did not believe that space travel would ever be an important phenomenon in the future evolution of mankind. Indeed, as the immediately preceding passage makes clear, a mankind freed from the confines of the Earth would probably never combine into the Omega Point. Teilhard made this point explicitly in a private conversation in 1951. As recorded by J. Hyppolite, a professor of philosophy at the Sorbonne, Teilhard said: 336

Following in the steps of [J.B.S.] Haldane, the neo-Marxist tends to escape into the perspectives of a vital expansion, in other words, into a vitalization of the Totality of stellar Space. Let me stress this second point a little. From his own viewpoint, the Marxist will approach willingly and with an open mind the idea of an eschatology for a classless society in which the Omega Point is conceived as the

point of natural convergence for humanity. But suppose we remind him that our Earth, because of the implacable laws of entropy, is destined to die; suppose we ask him what will be the outcome allowed humanity in such a world. Then he replies—in terms that H. G. Wells has already used—by offering perspectives of interplanetary and intergalactic colonization. This is one way to dodge the mystical notion of a Parousia, and the gradual movement of humanity towards an ecstatic union with God. 337

The necessity of restricting mankind to the Earth in Teilhard's Omega Point theory is one major difference between his theory and the one developed in Chapter 10, which is closer to the 'neo-Marxist' theory. We believe the entropy problem and the finiteness of the Earth would have made it impossible for Teilhard to give up radial energy for information, as Cuénot suggested he might have. Teilhard did not consider the non-terrestrial part of the Universe to be very important. What was truly significant was life, and this was apparently restricted to the Earth:

... what matters the giddy plurality of the stars and their fantastic spread, if that immensity (symmetrical with the infinitesimal) has no other function but to equilibrate the intermediary layer where, and where only, in the medium range of size, life can build itself up chemically? 343

This is strikingly similar to Wheeler's idea that the Universe must be at least as large as it is in order for any intelligent life at all to exist in it. In a sense, the large amount of matter in the Universe 'equilibrates'—permits the existence over long periods of time—the planetary environment upon which life must arise. Teilhard continually uses spatial images to describe the Omega Point:

... [the noosphere] must somewhere ahead [in time] become involuted to a point which we might call Omega [Teilhard's emphasis], which fuses and consumes [it] integrally in itself. However immense the sphere of the world may be, it only exists and is finally perceptible in the directions in which its radii meet—even if this were beyond space and time. 338

In a closed universe, the radii of the Universe meet beyond space and time in the final singularity—the mathematical Omega Point defined rigorously in Chapter 10 of this book. Teilhard made only one drawing of the Omega Point (Diagram 4 in The Phenomenon of Man), and amusingly, it is quite similar to the Penrose diagram for a closed universe whose future c-boundary is a single point (see Figure 10.5)! In the Penrose diagram, the convergence of the lines into a point is a mathematical expression of unlimited communication between spatially separated regions. By the convergence of the lines in his figure, Teilhard intended to signify the integration by communication of the entire noosphere.
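The link asserted here between unlimited communication and a point-like future c-boundary can be sketched in conformal time (our notation, anticipating the definitions given in Chapter 10):

```latex
% Conformal time in a closed Friedmann universe with scale factor a(t):
\eta(t) = \int_{t_0}^{t} \frac{dt'}{a(t')} .
% Light rays travel on 45-degree lines in (\eta, \chi) coordinates, so a signal
% can circumnavigate the closed space once per conformal interval \Delta\eta = 2\pi.
% If \eta diverges as t approaches the final singularity, every region can exchange
% an unbounded number of signals with every other region, and the future c-boundary
% collapses to the single point depicted in the Penrose diagram.
```

If instead the remaining conformal time is finite, distant regions fall out of causal contact before the end, and the future c-boundary is extended rather than point-like.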


Teilhard's original theory was conceived before the advent of information theory (which made the idea of radial energy at least a possibility at the time), and of modern cosmology. His original theory has been refuted, or perhaps we should say it has become obsolete. However, the basic framework of his theory is really the only framework wherein the evolving Cosmos of modern science can be combined with an ultimate meaningfulness to reality. As the dysteleologists have argued at length, if in the end all life becomes extinct, meaning must also disappear. In the final chapter we construct a mathematical Omega Point theory and by so doing we suggest that value may be able to avoid extinction. In this chapter we have investigated what we consider to be the most influential uses of teleological reasoning in science, philosophy and theology. The way in which local teleological ideas are used in modern biology and physics was carefully distinguished from their indiscriminate global deployment in past centuries. The developments in physics during the early years of this century saw examples where essentially Anthropic arguments led to successful physical predictions. However, since that time, the study of teleology has been dominated by an interesting collection of philosophers and theologians whose work we have tried to unravel and present in a logical progression. Interesting connections with the ideas of some modern economists can also be traced. A time-chart displaying the lifespans of the principal individuals whose ideas have been discussed in this chapter is given in Figure 3.1. This completes our non-mathematical survey of teleological ideas in science and philosophy and provides a backdrop against which to view the modern form of the Anthropic Principle enunciated by cosmologists interested in the existence of a collection of surprising numerical coincidences in the make-up of the physical world.

Figure 3.1. The chronology of some of the principal scientists and philosophers whose work is discussed in this chapter. (The chart spans roughly 1750-1950 and includes Fichte, Schelling, Barnes, Teilhard, Henderson, Bergson, Kelvin, Russell, Hegel, Boltzmann, Tennant, Spencer, Alexander, and Whitehead.)

References

1. J. Monod, in Studies in the philosophy of biology, ed. F. J. Ayala and T. Dobzhansky (University of California Press, Berkeley, 1974). 2. T. Huxley, Lectures and essays (Macmillan, NY, 1904), pp. 178-9. 3. J. R. Moore, The post-Darwinian controversies (Cambridge University Press, Cambridge, 1979). 4. A. O. Lovejoy, The Great Chain of Being (Harvard University Press, Cambridge, Mass., 1936). 5. H. A. E. Driesch, The history and theory of vitalism (Macmillan, London, 1914); Man and the universe (Allen & Unwin, London, 1927); Mechanism, life and personality (J. Murray, London, 1914). 6. J. S. Haldane, The philosophy of a biologist (Clarendon Press, London, 1935); The philosophical basis of biology (Doubleday, Garden City, 1931). 7. P. Lecomte du Noüy, Human destiny (Longmans, NY, 1955). 8. E. W. Sinnott, The biology of the spirit (Viking Press, NY, 1955). 9. S. Wright, Monist 48, 265 (1964). 10. P. Teilhard de Chardin, The phenomenon of Man, rev. English transl. (Harper & Row, Colophon edn, NY, 1975), p. 29. 11. G. G. Simpson, 'Evolutionary theology: the new mysticism', in This view of life: the world of an evolutionist (Harcourt Brace & World, NY, 1964), p. 213. 12. G. G. Simpson, ref. 11, section 3. 13. G. G. Simpson, The meaning of evolution (Yale University Press, New Haven, 1967). 14. F. J. Ayala, Phil. Sci. 37, 1 (1970). 15. F. J. Ayala, 'The concept of biological progress', in Studies in the philosophy of biology, see ref. 1. 16. T. Dobzhansky, F. J. Ayala, G. L. Stebbins, and J. W. Valentine, Evolution (Freeman, San Francisco, 1977). 17. G. L. Stebbins, The basis of progressive evolution (University of North Carolina Press, Chapel Hill, 1969). 18. L. Sokoloff, in Basic neurochemistry, 2nd edn, ed. G. J. Siegel, R. W. Albers, R. Katzman, and B. W. Agranoff (Little, Brown, Boston, 1976). 19. D. A. Russell, in Life in the universe, ed. J. Billingham (MIT Press, Cambridge, Mass., 1981). 20. H. J. Jerison, The evolution of the brain and intelligence (Academic Press, NY, 1973). 21. H. J. Jerison, Current Anthropol. 16, 403 (1975). 22. E. O. Wilson, Sociobiology (Harvard University Press, Cambridge, 1974). 23. C. O. Lovejoy, in Life in the universe, see ref. 19. 24. C. O. Lovejoy, Science 211, 341 (1981).


25. Ref. 23, p. 326. 26. L. v. Salvini-Plawen and E. Mayr, in Evolutionary biology, ed. M. K. Hecht, W. C. Steere, and B. Wallace (Plenum, NY, 1977). 27. Letter from Ernst Mayr to FJT dated December 23, 1982. 28. The existence of such a consensus is attested to in ref. 27; see also ref. 23. 29. T. Dobzhansky, in Perspectives in biology and medicine, Vol. 15, p. 157 (1972); T. Dobzhansky, Genetic diversity and human equality (Basic Books, NY, 1973), pp. 99-101. 30. G. G. Simpson, This view of life (Harcourt Brace & World, NY, 1964), Chapters 12 and 13. 31. J. Francois, Science 196, 1161 (1977); see also W. D. Mathew, Science 54, 239 (1921). 32. E. Mayr, Scient. Am. 239 (Sept.), 46 (1978). 33. S. J. Gould, Discover 4 (No. 3, March), 62 (1983). Gould's argument was part of a three-way debate on extraterrestrial intelligence between himself, Carl Sagan, and FJT. 34. Ref. 13, p. 512. 35. E. Mayr, 'Teleological and teleonomic, a new analysis', in Boston studies in the philosophy of science, Vol. 14, ed. R. S. Cohen and M. W. Wartofsky (Reidel, Dordrecht, 1974), p. 91. 36. E. Mayr, 'Cause and effect in biology', in Cause and effect, ed. D. Lerner (Free Press, NY, 1965). 37. J. Monod, Chance and necessity (Knopf, NY, 1971). 38. T. Dobzhansky, The genetics of the evolutionary process (Columbia University Press, NY, 1970), p. 4. 39. H. Butterfield, The Whig interpretation of history (Bell, London, 1931). 40. M. Grene, The understanding of Nature: essays in the philosophy of biology (Reidel, Dordrecht, 1974). 41. M. M. Waldrop, Science 224, 1225 (1984). 42. I. Goodwin, Physics Today 37 (No. 5, May), 63 (1984). The estimate that 10^9-bit RAM chips will be available by the year 2000 was made by S. Chou, director of Intel's Portland Technology Development; see Computers and Electronics 22 (No. 8, Aug.), 16 (1984). Computer speeds can also be measured in MIPS (million instructions per second).
The relationship between 'flops' and MIPS is difficult to define exactly, for they depend in a complicated way upon machine architecture. Roughly speaking, a MIPS is equal to a megaflop to within an order of magnitude, and the two measures of speed become closer the faster the machine described. A typical fast mainframe computer like the IBM 3081 has a speed of about 10 MIPS. 43. A. Rosenblueth, N. Wiener, and J. Bigelow, in Purpose in nature, ed. J. Canfield (Prentice-Hall, Englewood Cliffs, NJ, 1966). 44. F. J. Ayala, 'Introduction', in Studies in the philosophy of biology, see ref. 1. 45. M. Polanyi, Personal knowledge (University of Chicago Press, Chicago, 1958); see especially pp. 140, 158, 394, and 396. 46. M. Polanyi, The study of man (University of Chicago Press, Chicago, 1959), pp. 48-51. 47. L. von Mises, Human action: a treatise on economics, 3rd edn (Henry Regnery, Chicago, 1966), p. 83.

48. F. A. Hayek, 'The pretence of knowledge' (1974 Nobel Lecture), reprinted in New studies in philosophy, politics, economics, and the history of ideas (Routledge & Kegan Paul, London, 1978), pp. 26-27. 49. T. Dobzhansky, Mankind evolving (Yale University Press, New Haven, 1962). 50. T. Dobzhansky, The biology of ultimate concern (New American Library, NY, 1967). 51. P. R. Ehrlich and A. H. Ehrlich, Population, resources, and environment (Freeman, San Francisco, 1970), p. 159. 52. F. A. Hayek, Unemployment and monetary policy (Cato Institute, San Francisco, 1979). 53. M. Friedman and R. Friedman, Free to choose (Harcourt Brace Jovanovich, NY, 1980). 54. W. Havender, Reason 14 (December), 52 (1982). We are grateful to Mr. H. Palka for this reference. 55. Ref. 51, p. 324. 56. S. Toulmin, Human understanding, Vol. I: The collective use and evolution of concepts (Princeton University Press, Princeton, 1972), Chapter 5. 57. S. Toulmin, ref. 56, pp. 324-40. 58. T. Kuhn, The structure of scientific revolutions, 2nd edn (University of Chicago Press, Chicago, 1970), pp. 170-3. 59. T. Kuhn, 'Reflections on my critics', in Criticism and the growth of knowledge, ed. I. Lakatos and A. Musgrave (Cambridge University Press, Cambridge, 1972), p. 265. 60. L. J. Henderson, The fitness of the environment, reprint with an introduction by George Wald (Peter Smith, Gloucester, Mass., 1970). The original edition was published by Harvard in 1913. 61. L. J. Henderson, The order of Nature (Harvard University Press, Cambridge, Mass., 1917). 62. G. Wald, Origins of Life 5, 7 (1974). 63. Ref. 60, p. vi. 64. Ref. 61, p. 192. 65. Ref. 61, p. 184. 66. Ref. 61, p. 185. 67. Ref. 61, p. 208. 68. Ref. 61, p. 204. 69. Ref. 61, p. 203. 70. Ref. 61, p. 191. 71. Ref. 61, p. 183. 72. Ref. 61, p. 200. 73. Ref. 61, p. 211. 74. Ref. 61, p. 198. 75. Ref. 61, p. 201. 76. Ref. 61, p. 146. 77. Ref. 61, p. 181. 78. Ref. 61, p. 184. 79. Ref. 60, p. 312.


80. Anon., Nature 91, 292 (1913). 81. J. S. Haldane, Nature 100, 262 (1917). 82. J. Needham, Order and life (Yale University Press, New Haven, 1936), p. 15. 83. H. W. Smith, Kamongo (Viking Press, NY, 1932), p. 153. We are grateful to Professor J. A. Wheeler for this reference. 84. Hero of Alexandria, Catoptics 1-5, in I. B. Cohen and I. E. Drabkin, A source book in Greek science (McGraw-Hill, NY, 1948). 85. Aristotle, De caelo II. 4, 287a. 86. P. de Fermat, Oeuvres (1679). 87. G. Leibniz, in Leibniz selections, ed. P. P. Wiener (Scribner's, NY, 1951), p. 70. 88. P. L. M. de Maupertuis, Accord de differentes lois de la Nature, in Oeuvres, Vol. 4, p. 3 (1768). 89. P. L. M. de Maupertuis, Essai de cosmologie, in Oeuvres, Vol. 1, p. 5 (1768). 90. L. Euler, Methodus inveniendi lineas curvas maximi minimive proprietate gaudentes, additamentum, in Collected works, Vol. 24 (ed. C. Caratheodory, 1952). English translation by W. A. Oldfather, C. A. Ellis, and D. M. Brown, Isis 20, 72 (1933). We are grateful to Professor S. G. Brush for this reference. 91. J. L. Lagrange, Mecanique analytique, Oeuvres, Vol. 11 (1867). 92. W. R. Hamilton, 'Second essay on a general method in dynamics', Phil. Trans. R. Soc. 1, 95 (1835). 93. L. Euler, in Maupertuis, Oeuvres, Vol. 4 (1768). 94. S. D. Poisson, quoted in ref. 97. 95. H. Hertz, quoted in ref. 97. 96. E. Mach, The science of mechanics (Dover, NY, 1953). 97. A. d'Abro, The rise of the new physics, Vol. I (Dover, NY, 1951). 98. W. Yourgrau and S. Mandelstam, Variational principles in dynamics and quantum theory (Pitman, London, 1960). 99. H. von Helmholtz, quoted in ref. 98. 100. M. Planck, A survey of physical theory (Dover, NY, 1960), pp. 69-81. 101. M. Planck, Scientific autobiography and other papers (Greenwood, Westport, 1971), pp. 176-87. 102. J. Mehra, Einstein, Hilbert, and the theory of gravitation (Reidel, Boston, 1974). 103. R. P. Feynman, Science 153, 699 (1966). 104. J. A. Wheeler and R. P. Feynman, Rev. Mod. Phys. 17, 157 (1945).
105. F. J. Tipler, Nuovo Cim. 28, 446 (1975). 106. R. B. Partridge, Nature 244, 263 (1973). 107. R. P. Feynman, Rev. Mod. Phys. 20, 267 (1948). 108. E. S. Abers and B. W. Lee, Phys. Rep. C9, 1 (1973). 109. S. Weinberg, quoted in Physics Today 32 (No. 12), 18 (1979). 110. L. S. Schulman, Techniques and applications of path integration (Wiley, NY, 1981), pp. 7, 12.

111. J. D. Barrow and F. J. Tipler, 'Action in Nature', preprint, 1985. 112. J. G. Fichte, Science of knowledge, transl. of Wissenschaftslehre by P. Heath and J. Lachs (Cambridge University Press, Cambridge, 1982), p. 8. 113. Ref. 112, p. 10. 114. Ref. 112, p. 23. 115. N. Bohr, in Albert Einstein: philosopher-scientist, Vol. 1 (Harper & Row, NY, 1959), p. 234. 116. F. A. Hayek, The pure theory of capital (University of Chicago Press, Chicago, 1934). 117. Ref. 112, p. 21. 118. A. M. Turing, Mind 59, 433 (1950); reprinted in ref. 119. 119. D. R. Hofstadter and D. C. Dennett, The mind's I (Basic Books, NY, 1981). 120. F. S. Beckman, Mathematical foundations of programming (Addison-Wesley, London, 1980). 121. M. Machtey and P. Young, An introduction to the general theory of algorithms (Elsevier North-Holland, Amsterdam, 1978). 122. F. W. J. von Schelling, Darstellung meines Systems der Philosophie, English transl. in ref. 123, p. 15. 123. F. W. J. von Schelling, The ages of the World, transl. with notes by F. Bolton (Columbia University Press, NY, 1942). 124. T. P. Hohler, Imagination and reflection: intersubjectivity—Fichte's 'Grundlage' of 1794 (Martinus Nijhoff, The Hague, 1982). This reference interprets Fichte as taking the finite ego as fundamental. Fichte has also been interpreted by other philosophers, particularly those who have approached absolute idealism through Hegel, as taking the Infinite Will, or Universal Program, as fundamental (see ref. 125 for such an interpretation). We follow our own reading, and what seems to be a general consensus of Fichte scholars, in the 'finite ego as fundamental' interpretation. 125. F. Copleston, A history of philosophy, Vol. 7: Fichte to Hegel (Doubleday, NY, 1965). 126. F. W. J. von Schelling, Of human freedom, transl. of Über das Wesen der menschlichen Freiheit by J. Gutmann (Open Court, Chicago, 1936). 127. F. W. J.
von Schelling, System of transcendental idealism, English transl. in ref. 126, p. 318. 128. Ref. 4, p. 323. 129. F. W. J. von Schelling, Denkmal der Schrift von den göttlichen Dingen (1812); English transl. of quoted passage in ref. 4, p. 323. 130. G. W. F. Hegel, The philosophy of history (Colonial Press, NY, 1899), p. 19. 131. G. W. F. Hegel, The phenomenology of mind, 2nd edn, English transl. of Die Phaenomenologie des Geistes by J. B. Baillie (Allen & Unwin, London, 1931), p. 85. 132. G. W. F. Hegel, Philosophy of nature (Oxford University Press, Oxford, 1970), pp. 20 and 284. 133. J. M. E. McTaggart, The nature of existence, Vol. II, ed. C. D. Broad (Cambridge University Press, Cambridge, 1927), pp. 478-9. 134. B. Bosanquet, Proc. Br. Acad. 2, 235 (1905-6).


135. J. Royce, The world and the individual (Macmillan, NY, 1908). We are grateful to Prof. R. Whittemore for this extract. 136. C. S. Peirce, Collected papers, Vol. VI: Scientific metaphysics, ed. C. Hartshorne and P. Weiss (Harvard University Press, Cambridge, 1935), pp. 174 and 299. 137. Ref. 136, p. 26. 138. G. L. Buffon, Oeuvres completes de Buffon, Vol. 9 (Paris, 1854). 139. F. C. Haber, The age of the world: Moses to Darwin (Johns Hopkins Press, Baltimore, 1959). 140. S. G. Brush, The temperature of history (Franklin, NY, 1978). 141. J. B. J. Fourier, Theorie analytique de la chaleur (Paris, 1822). 142. Lord Kelvin (W. Thomson), Phil. Mag. (ser. 4) 25 (1863), 1; also in Kelvin's Mathematical papers, Vol. 3, p. 295. 143. Lord Kelvin (W. Thomson), Macmillan's Mag. (5 March, 1862), 288. 144. J. O. Burchfield, Lord Kelvin and the age of the Earth (Macmillan, London, 1975), p. 73. This is the most important book on the subject, a gold-mine of information. 145. F. Jenkin, North British Review (June, 1867), 277; quote from p. 304. 146. Ref. 145, p. 301. 147. Ref. 145, p. 305. 148. Quoted in ref. 144, p. 77. 149. Quoted in ref. 144, p. 79. 150. P. G. Tait, Lectures on some recent advances in physical science (Macmillan, London, 1876). 151. S. Newcomb, Popular astronomy (Harper, NY, 1878), pp. 505-11. 152. C. King, Am. J. Sci. 145 (1893), 1. 153. C. Darwin, Letter to Reade, 9 Feb. 1877, in More letters of Charles Darwin, ed. F. Darwin and A. C. Seward (Appleton, NY, 1903), Vol. 2, pp. 211-12; see also ref. 144, p. 110. 154. J. Croll, Quart. J. Sci. 7 (1877), 307. Quotes are on pp. 307 and 317-18. 155. A. Geikie, Landscape in history and other essays (Macmillan, London, 1905). Quotes are on pp. 172 and 186 respectively. 156. E. B. Poulton, Essays in evolution (Oxford University Press, Oxford, 1908), pp. 1-45. 157. J. G. Goodchild, Proc. R. Phys. Soc. Edin. 13, 259 (1896). 158. J. Perry, Nature 51, 224 (1895). 159. J. Perry, Nature 51, 582 (1895). 160.
Kelvin, Lord (W. Thomson), Nature 51, 438 (1895). 161. T. C. Chamberlin, Science 10, 11 (1899). See S. G. Brush, J. Hist. Astron. 9, 13 (1978) for a discussion of Chamberlin's views on the thermal history of the Earth. Chamberlin's views were based in part on a belief in global teleology; see H. C. Winnik, J. Hist. Ideas 31, 441 (1970) for some discussion of Chamberlin's teleological views. 162. D. Kubrin, J. Hist. Ideas 28, 325 (1967). 163. R. H. Hurlbutt III, Hume, Newton and the Design Argument (University of Nebraska Press, Lincoln, 1965).

164. C. Darwin, On the origin of species by means of natural selection, 2nd edn (John Murray, London, 1860), p. 486. 165. H. von Helmholtz, 'On the interaction of the natural forces', repr. in Popular scientific lectures, ed. M. Kline (Dover, NY, 1961). 166. W. Thomson, Proc. R. Soc. Edin. 8, 325 (1874); repr. in ref. 195, p. 176. 167. J. Jeans, The universe around us (Cambridge University Press, Cambridge, 1929). 168. A. S. Eddington, The nature of the physical world, Gifford lectures 1927 (Cambridge University Press, Cambridge, 1928). 169. B. Russell, Why I am not a Christian (George Allen & Unwin, NY, 1957), p. 107. 170. Ref. 169, p. 11. 171. N. Barlow (ed.), The autobiography of Charles Darwin (Harcourt Brace, NY, 1959), p. 92. 172. E. W. Barnes, Scientific theory and religion, Gifford lectures 1927-1929 (Cambridge University Press, Cambridge, 1933). 173. Ref. 10, p. 229. 174. Ref. 10, p. 233. 175. W. R. Inge, God and the astronomers, Warburton lectures 1931-1933 (Longmans Green, London, 1934), p. 24. 176. Ref. 175, p. 28. We suspect the last word of this quote was a misprint of 'coils' in the original text. 177. E. Hiebert, 'Thermodynamics and religion: a historical appraisal', in Science and contemporary society, ed. F. J. Crosson (University of Notre Dame Press, London, 1967). 178. B. Russell, Religion and science (Oxford University Press, NY, 1968), p. 210. 179. Ref. 178, p. 216. 180. R. Penrose, in General relativity: an Einstein centenary survey, ed. S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979). 181. P. R. Ehrlich, A. H. Ehrlich, and J. P. Holdren, Ecoscience: population, resources, environment (Freeman, San Francisco, 1977), p. 393. 182. Ref. 181, p. 74. 183. Obviously a single human is not immortal. The calculation envisages only that a single human being is alive at any given time.
This could be the case only if, for example, a baby were born the instant the previous solitary inhabitant of the Earth died. 184. B. L. Cohen, Before it's too late: a scientist's case for nuclear energy (Plenum, London, 1981), p. 181. 185. P. and A. Ehrlich, Extinction: the causes and consequences of the disappearance of species (Random House, NY, 1981), p. xii. 186. J. L. Simon, The ultimate resource (Princeton University Press, Princeton, 1981). 187. J. L. Simon, Science 208, 1431 (1980). Anne and Paul Ehrlich do not regard Simon's work highly. They assert that this paper in particular '... would have been a fine centerpiece for an April Fools' issue of any scientific journal' (ref. 185, p. 291, note 14).


188. J. S. Mill, Principles of political economy, Vol. 1, 5th edn (Appleton, NY, 1895), p. 71. 189. Marginal utility is discussed at length in any modern economics textbook; e.g. Economics, 10th edn, by P. Samuelson (McGraw-Hill, NY, 1975). 190. P. R. Ehrlich, Coevolution Quart., Spring, 1976. 191. Ref. 181, p. 823. 192. B. L. Cohen, Am. J. Phys. 51, 75 (1983). 193. S. G. Brush, The kind of motion we call heat, 2 vols (North-Holland, Amsterdam, 1976). 194. L. Boltzmann, Sber. Akad. Wiss. Wien, part II, 66 (1872), 275; English translation in ref. 195, p. 88. 195. S. G. Brush, Kinetic theory: Vol. 2—Irreversible processes (Pergamon, Oxford, 1966). 196. L. Boltzmann, Sber. Akad. Wiss. Wien, part II, 75 (1877), 67; English translation in ref. 195, p. 188. 197. J. C. Maxwell, Theory of heat, 3rd edn (Longmans, London, 1872), p. 208. 198. K. Pearson, The grammar of science (Dent, London, 1937). 199. E. Zermelo, Ann. Physik 57, 485 (1896); English translation in ref. 195, p. 208. 200. E. Zermelo, Ann. Physik 59, 793 (1896); English translation in ref. 195, p. 229. 201. Ref. 195, p. 235. 202. L. Boltzmann, Ann. Physik 60 (1897); English translation in ref. 195, p. 238. 203. Ref. 195, p. 242. 204. L. Boltzmann, Nature 51, 413 (1895). 205. H. Poincare, Rev. de Metaphysique et de Morale 1 (1893), 534; English transl. in ref. 195, p. 203. 206. H. Poincare, The foundations of science (Science Press, Lancaster, 1946). 207. N. Wiener, Cybernetics, 2nd edn (MIT Press, Cambridge, Mass., 1961), pp. 34-5. 208. E. A. Milne, Modern cosmology and the Christian idea of God (Oxford University Press, Oxford, 1952). 209. F. J. Tipler, Nature 280, 203 (1979); see also ref. 252. 210. J. B. S. Haldane, Nature 122, 808 (1928). 211. J. B. S. Haldane, The inequality of man (Chatto & Windus, London, 1932), p. 169. 212. R. P. Feynman, The character of physical law (MIT Press, Cambridge, Mass., 1965). 213. A. Grünbaum, Philosophical problems of space and time (Knopf, NY, 1963), p. 227. 214. H.
Reichenbach, The direction of time (University of California Press, Berkeley, 1971). 215. P. J. Peebles, Comm. Astrophys. Space Phys. 4 (1972), 53; however, see J. D. Barrow and R. A. Matzner, Mon. Not. R. astron. Soc. 181, 719 (1977)

for a discussion of this paper. 216. R. Penrose, in General relativity: an Einstein centenary survey, ed. S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979). 217. C. W. Misner, Astrophys. J. 151 (1968), 431. 218. J. A. Wheeler, Frontiers of time (North-Holland, Amsterdam, 1978), pp. 54-73. 219. W. J. Cocke, Phys. Rev. 160, 1165 (1967). 220. Ref. 193, p. 589. 221. M. J. Klein, Am. Scient. 58, 84 (1970). 222. E. E. Daub, Stud. Hist. Phil. 1, 213 (1970). 223. B. Stewart and P. G. Tait, The unseen universe; or physical speculations on a future state (Macmillan, London, 1875). 224. W. K. Clifford, Fortnightly Rev. 23, 776 (1875). 225. P. M. Heimann, Br. J. Hist. Sci. 6, 73 (1972). 226. L. Szilard, Z. Physik 53, 840 (1929); English translation in Behavioral Science 9, 301 (1964). 227. L. Brillouin, Science and information theory (Academic Press, NY, 1962). 228. L. Brillouin, J. Appl. Phys. 22, 334 (1951). 229. R. Carnap and A. Shimony, Two essays on entropy (University of California Press, Berkeley, 1977). 230. L. Rosenberg, Nature 190, 384 (1961). 231. I. Prigogine, Science 201, 772 (1978). 232. P. A. Bertocci, The empirical argument for God in late British thought (Harvard University Press, Cambridge, Mass., 1938). 233. A. S. Pringle-Pattison, The idea of God in the light of recent philosophy (Oxford University Press, Oxford, 1917), p. 111. 234. A. S. Pringle-Pattison, Man's place in the cosmos, 2nd edn (William Blackwood, London, 1902), p. 42. 235. Ref. 233, p. 330. 236. F. R. Tennant, Philosophical theology, Vol. II (Cambridge University Press, Cambridge, 1930), p. 79. 237. Ref. 236, p. 80. 238. Ref. 236, p. 81. 239. Ref. 236, p. 82. 240. Ref. 236, p. 83. 241. Ref. 236, p. 113. 242. Ref. 236, p. 104. 243. Ref. 236, p. 114. 244. Ref. 236, p. 117. 245. Ref. 236, p. 101. 246. J. Laird, Theism and cosmology, Gifford lectures 1939 (Allen & Unwin, London, 1940). 247. A. B.
Gibson, Theism and empiricism (Schocken Books, NY, 1970). 248. P. A. Bertocci, Introduction to the philosophy of religion (Prentice-Hall, NY, 1951).


249. C. E. Raven, Natural and Christian theology, Gifford lectures 1952 (Cambridge University Press, Cambridge, 1953). 250. J. Ward, Naturalism and agnosticism, Gifford lectures 1896-1898 (Adam & Charles Black, London, 1906), 3rd edn, Vol. 2, p. 254. 251. W. R. Matthews, The purpose of God, Robertson lectures 1935 (Nisbet, London, 1935), p. 64. 252. F. J. Tipler, in Essays in general relativity, ed. F. J. Tipler (Academic Press, NY, 1980). 253. A. R. Peacocke, Science and the Christian experiment (Oxford University Press, Oxford, 1971). 254. A. R. Peacocke, Creation and the world of science, Bampton lectures 1978 (Oxford University Press, Oxford, 1979). 255. E. L. Mascall, Christian theology and natural science, Bampton lectures 1956 (Longmans Green, London, 1956). 256. Ref. 172, p. 402. 257. Ref. 172, p. 503. 258. F. J. Tipler, C. J. S. Clarke, and G. F. R. Ellis, in General relativity and gravitation, Vol. II, ed. A. Held (Plenum, NY, 1980). 259. R. Gott, Nature 295, 304 (1982); F. Hoyle and J. V. Narlikar, Proc. R. Soc. A 290, 162, 177 (1966). 260. J. B. Bury, Idea of progress: an inquiry into its origins and growth (Macmillan, London, 1921). 261. R. Nisbet, History of the idea of progress (Basic Books, NY, 1980). Strangely, this book has no bibliography. The bibliography was published as part of a separate article in Literature of Liberty 2, 7 (1979). 262. L. Edelstein, The idea of progress in classical antiquity (Johns Hopkins University Press, Baltimore, 1967). 263. H. Spencer, The Man versus the State, ed. E. Mack (Liberty Press, Indianapolis, 1981). 264. H. Spencer, The proper sphere of government, repr. in ref. 263; original publication in 1843. 265. K. Marx and F. Engels, The German ideology, ed. C. J. Arthur (International Publishers, NY, 1970). 266. K. Marx, The Gotha program (New York Labor News Company, NY, 1935). 267. H. D. Dickinson, Economics of socialism (Oxford University Press, Oxford, 1939). 268. O. Lange and F. M.
Taylor, On the economic theory of Socialism, ed. B. E. Lippincott (University of Minnesota Press, Minneapolis, 1948). 269. F. A. Hayek, 'Socialist calculation', in Individualism and economic order (University of Chicago Press, Chicago, 1948). 270. H. Spencer, 'Progress: its law and cause', in Essays, Vol. 1 (Appleton, NY, 1901). 271. H. Spencer, First principles (Appleton, NY, 1901). On p. 473 he shows that he appreciates the effect of gravitational instability in bringing about structure in the Universe, for 'any finite homogeneous aggregate must inevitably lose its homogeneity, through the unequal exposure of its parts to incident [gravitational] forces'.

272. H. Spencer, 'The nebular hypothesis', in Essays, Vol. 1 (Appleton, NY, 1901). 273. L. Ward, Applied sociology: a treatise on the conscious improvement of society by society (Ginn, NY, 1906). 274. L. T. Hobhouse, Development and purpose: an essay towards a philosophy of evolution (Macmillan, London, 1913). 275. K. Popper, The open society and its enemies, rev. edn (Princeton University Press, Princeton, 1950). 276. F. A. Hayek, The sensory order (University of Chicago Press, Chicago, 1952). Although this book was first published in 1952, it was written in the 1920's, before Turing's work on the Halting Problem. We are grateful to Professor Hayek for pointing out this reference to us. 277. F. A. Hayek, 'The use of knowledge in society', repr. in ref. 269. 278. K. Popper, Br. J. Phil. Sci. 1, 117 (1950); and 1, 173 (1950). This argument has various interesting consequences for the question of whether human action can be predicted and also known to be predicted; see D. M. Mackay, Freedom of action in a mechanistic universe (Cambridge University Press, Cambridge, 1967). 279. H. Bergson, Creative evolution, transl. A. Mitchell (Macmillan, London, 1964), p. 41. 280. Ref. 279, p. 42. 281. Ref. 279, p. 43. 282. Ref. 279, p. 46. 283. Ref. 279, p. 54. 284. Ref. 279, p. 255. 285. S. Alexander, Space, time, and Deity: Gifford lectures at Glasgow, 1916-1918, Vol. II (Macmillan, London, 1966). 286. J. Macquarrie, Twentieth century religious thought (Harper & Row, NY, 1963). 287. R. G. Collingwood, The idea of Nature (Oxford University Press, Oxford, 1945). 288. Ref. 285, p. 355. 289. Ref. 285, p. 347. 290. Ref. 285, p. 346. 291. Ref. 285, p. 353. 292. Ref. 285, p. 357. 293. A. N. Whitehead, Process and reality: an essay in cosmology, corrected edition, ed. D. R. Griffin and D. W. Sherburne (Free Press, NY, 1978). 294. Ref. 293, p. 89; see also p. 34. 295. Ref. 293, p. 91. 296. Ref. 293, p. 168. 297. Ref. 293, p. 47. 298. A. H.
Guth, Phys. Rev. D 23, 347 (1981). 299. J. D. Barrow and M. Turner, Nature 298, 801 (1982). 300. A. Guth, private communication to FJT. 301. C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973).


302. Ref. 287, p. 167. 303. C. Hartshorne, A natural theology for our time (Open Court, La Salle, 1967), p. 93. 304. The philosopher Milic Capek has pointed out in his book Bergson and modern physics (Reidel, Dordrecht, 1971), pp. 252-3, that Whitehead was inconsistent in his various writings on the question of whether a global time exists. In Science and the modern world (p. 172), Whitehead argues that in special relativity there is no 'unique present instant'. But in his book The concept of nature, Whitehead distinguishes between what he terms 'the creative advance of nature', which seems to be something like a global time sequence, and the 'discordant time-systems' in special relativity. At bottom, Whitehead knew, like his follower Hartshorne, that a progressive cosmos required a globally defined time. Bergson also realized this. He was very much worried about the lack of a global time in special relativity, as demonstrated by the Twin Paradox. Bergson wrote an entire book, Duration and simultaneity, transl. L. Jacobson (Bobbs-Merrill, Indianapolis, 1965), in which he tried to argue that physically a global time existed even in special relativity, and that in particular, the Twin Paradox could not occur. He was wrong. The Twin Paradox is a valid (and experimentally confirmed) prediction of both general and special relativity. We should mention that the analysis of the Twin Paradox by Capek is incorrect, due to a misunderstanding of what is meant by the terms 'special' and 'general' relativity. Special relativity is an analysis of the space-time (η, R^4), where η is the Minkowski metric and R^4 is the Euclidean four-manifold. General relativity is an analysis of the general space-time (g, M), where g is a general non-degenerate metric with signature -2, and M is a general 4-manifold. Clearly, general relativity reduces to special relativity when η = g and M = R^4.
But equally clearly, it is possible to analyse accelerated motion in special relativity, and to talk about accelerated reference frames in special relativity, just as it is possible to use non-linear coordinate systems in Euclidean space. Properly speaking, the realm of general relativity is those space-times in which the Riemann curvature tensor is not identically zero. The Twin Paradox can be completely analysed in a region of space where the curvature is essentially zero, and so it is a purely special relativity effect. See E. F. Taylor and J. A. Wheeler, Space-time physics (Freeman, San Francisco, 1966) for a very nice, clear discussion of the Twin Paradox. In general relativistic cosmologies a global time exists, and all the times of all observers advance according to this global time. But the rates of advance depend on the individual observer, and this is the import of the Twin Paradox. 305. C. Hartshorne, Anselm's discovery: a re-examination of the ontological proof for God's existence (Open Court, La Salle, 1965), p. 109. 306. J. E. Marsden and F. J. Tipler, Phys. Rep. 66, 109 (1980). 307. A. S. Eddington, Space, time, and gravitation (Cambridge University Press, Cambridge, 1920), p. 163. 308. Ref. 303, p. 98. 309. Ref. 303, p. 96. 310. J. Needham, 'Cosmologist of the future: a review of The phenomenon of man by Teilhard de Chardin', New Statesman 88 (1959), 632.
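The Twin Paradox arithmetic invoked in note 304 is elementary enough to spell out. The following sketch is ours, not the text's (the speed and trip durations are illustrative); it uses only the special-relativistic proper-time formula dτ = dt·sqrt(1 − v²/c²):

```python
import math

def leg_proper_time(coord_years, v_over_c):
    # Proper time elapsed along a uniform-velocity leg in flat space-time:
    # d(tau) = dt * sqrt(1 - v^2/c^2), a purely special-relativistic formula.
    return coord_years * math.sqrt(1.0 - v_over_c ** 2)

# Travelling twin: 5 years out and 5 years back (Earth coordinate time) at 0.8c.
traveller = leg_proper_time(5.0, 0.8) + leg_proper_time(5.0, 0.8)
stay_at_home = 10.0  # the Earth-bound twin's clock simply reads coordinate time

print(traveller, stay_at_home)  # ~6.0 versus 10.0: the reunited twins disagree
```

No curvature enters anywhere, which is the point of the note: the asymmetry comes from the traveller changing inertial frames at turn-around, not from general relativity.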


311. M. Lukas and E. Lukas, Teilhard (Doubleday, NY, 1977). 312. P. Hefner, The promise of Teilhard (Lippincott, Philadelphia, 1970). 313. Professor Hefner recently (February, 1984) remarked to FJT that 'Teilhard was wrong in many respects, but his heart was in the right place'. 314. C. Raven, Teilhard de Chardin: scientist and seer (Harper & Row, NY, 1962). 315. P. B. Medawar, 'Critical review of The phenomenon of man', in Mind 70 (1961), 99-106. 316. See in particular the passages on pp. 43, 52, and 66 of ref. 10. 317. See, for example, his remarks to this effect on p. 149 of ref. 10. 318. This opinion is expounded on p. 57 (see especially the footnote) and on pp. 71 and 301 of ref. 10. 319. Ref. 10, pp. 63-4. 320. Ref. 10, p. 64. 321. Ref. 315, p. 103. 322. K. Popper, The logic of scientific discovery (Harper & Row, NY, 1959); rev. edn (Hutchinson, London, 1968). 323. C. Cuenot, Teilhard de Chardin (Helicon, Baltimore, 1965), p. 290. 324. Ref. 323, p. 352. 325. Teilhard de Chardin, Etudes 264 (1950), 403-4. This note has no by-line, but Cuenot (ref. 323, p. 453) lists Teilhard as the author. 326. Ref. 50, p. 120. See also Dobzhansky's articles on Teilhard in Zygon 3, 242 (1968), and in Beyond chance and necessity, ed. J. Lewis (Humanities Press, Atlantic Highlands, NJ, 1974). 327. Ref. 10, pp. 108-9. 328. Ref. 10, p. 275. 329. Ref. 10, p. 286. There is some indication that Teilhard changed his view on extraterrestrial life in the 1950's. His biographer Claude Cuenot records him as saying: 'The more we expand the world and the potentialities of the biosphere, the more out of character and even unworthy of God it seems that all the energy of matter and its combinations should be dispersed over an immense universe for just one single living human kind'. (Ref. 323, p. 365.) However, as we discuss in the text, this quote is inconsistent with Teilhard's Ω-point theory as developed in the Phenomenon of man.
Even more inconsistent is an essay on the subject of extraterrestrial life which Teilhard wrote in 1953 (first published in the collection of essays entitled Christianity and evolution (Harcourt Brace Jovanovich, NY, 1971), p. 229). In this essay Teilhard argues that orthogenesis makes the evolution of intelligent life inevitable on numerous planets throughout the cosmos! Furthermore, in this essay Teilhard asserts that the noosphere on Earth is just one of many noospheres scattered throughout the universe; presumably this implies that the Ω-point on the Earth is not the ultimate goal of life, but rather a goal which will be achieved on many planets. Thus, in this essay Teilhard gives up the idea of universal evolution (because he believes the noospheres on the various planets cannot communicate, and hence cannot combine, and in any case Teilhard has identified the Ω-point achieved on Earth with Christ, who has no higher stage)! As we point out in the text, Teilhard's Ω-point theory cannot be extended consistently beyond the Earth.


330. Ref. 10, p. 276. 331. Ref. 4, p. 74. 332. Ref. 10, pp. 287-8. 333. Ref. 10, pp. 270-1. 334. Ref. 10, pp. 168-9. 335. Ref. 10, pp. 239-40. 336. Ref. 10, pp. 286-7, 307. 337. J. Hyppolite, letter of June 24, 1957 to Claude Cuenot. Quoted in ref. 323, pp. 254-5. 338. Ref. 10, p. 259. 339. G. Smith, ed., Josiah Royce's seminar, 1913-1914: as recorded in the notebooks of Harry T. Costello (Rutgers University Press, New Brunswick, 1963). 340. J. Parascandola, J. Hist. Biol. 4, 64 (1971). 341. S. Wolfram, Phys. Rev. Lett. 54, 735 (1985). 342. A. N. Whitehead, The function of reason: Louis Clark Vanuxem lectures 1929 (Princeton University Press, Princeton, 1929), pp. 22-3. 343. E. T. Whittaker, The beginning and the end of the World (Oxford University Press, Oxford, 1942), pp. 40-2.

4 The Rediscovery of the Anthropic Principle

I believe there are 15,747,724,136,275,002,577,605,653,961,181,555,468,044,717,914,527,116,709,366,231,425,076,185,631,031,296 protons in the Universe and the same number of electrons.
A. S. Eddington
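Eddington did not count this number, he constructed it: it is exactly 136 × 2^256, with the 136 coming from his theory of the fine-structure constant. The quoted digits can be checked with exact integer arithmetic (our verification, not part of the original text):

```python
# Eddington's cosmic number N = 136 * 2^256, computed exactly.
N = 136 * 2 ** 256

quoted = 15747724136275002577605653961181555468044717914527116709366231425076185631031296
assert N == quoted  # the digits in the epigraph are exactly 136 * 2^256

print(len(str(N)))  # 80 digits, i.e. N is of order 1.57 x 10^79
```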

4.1 The Lore of Large Numbers

Then feed on thoughts, that voluntary move
Harmonious numbers.
John Milton

The modern form of the Weak Anthropic Principle arose from attempts to relate the existence of invariant aspects of the Universe's structure to those conditions necessary to generate 'observers'. Our existence imposes a stringent selection effect upon the type of Universe we could ever expect to observe and document. Many observations of the natural world, although remarkable a priori, can be seen in this light as inevitable consequences of our own existence. Cosmological interest in such a perspective arose from attempts to explain the ubiquitous presence of large dimensionless ratios in combinations of micro- and macrophysical parameters. Whereas most local dimensionless physical constants lie within an order of magnitude or so of unity, there exist a number of notorious and flagrant exceptions: the ratio of the electric and gravitational forces between a proton and electron is approximately 10^40 whatever their separation; the number of nucleons in the Universe is ~10^80; the ratio of the action of the Universe to the quantum of action is ~10^120; and so forth. In this chapter we shall describe some of the background to these and other cosmological 'coincidences' and show how, in the period 1957-1961, they led to Dicke's proposal of an anthropomorphic mode of explanation. En route to this goal we shall describe a variety of numerical coincidences which have attracted the attention of physicists. We shall also give some historical examples to show how purely numerological relations, although originally viewed as coincidental, have occasionally stimulated the development of precise causal explanations for the interrelations they display. The above-mentioned 'large numbers' will be a recurrent theme in our discussion, and it is amusing to recall that such huge magnitudes first found their way into the pages of scientific papers as early as about 216 BC.

Archimedes wrote two papers on the problems of arithmetic enumeration. No copies of the first survive, but this work, entitled Principles (Ἀρχαί), was addressed to his colleague Zeuxippus and appears to have proposed a system of symbolic representation for integers of arbitrarily large magnitude. The famous follow-up to this work was addressed to Gelon, then King of Syracuse. It bears the title The Sand Reckoner (Ψαμμίτης) and, besides meeting some objections brought against the scheme outlined in his earlier paper, Archimedes devoted it to enumerating the number of sand grains in the Universe as a worked example to display the economy of his new notation. He argues that previous mystical claims to the effect that the number of grains of sand on the Sicilian sea-shore are beyond the power of man to number are completely groundless. Moreover, his system of accounting could not only perform this enumeration quite compactly but was capable of enumerating the number of sand grains in the entire Universe! Archimedes' Universe consisted of a sphere enclosing the Sun and the fixed stars with its centre at the Earth. Using a series of geometrical arguments he is able to calculate the diameter of this celestial sphere in terms of the distance from the Earth to the Sun and the terrestrial and solar diameters. The latter was estimated experimentally by the parallax method of Aristarchus. Following these steps Archimedes was led to conclude that the Universe is a sphere of diameter 10^14 stadia (~10^18 cm) and contains 10^63 sand grains; the average sand grain he assumes to extend about ~2.5×10^-6 of a finger's breadth. Assuming Archimedes' finger is about one centimetre wide, his calculation implies that the Universe contains ~10^80 nucleons! If we were to make Archimedes' (false!) assumption that the average density of the solar system is that of a sand grain, ~1 gm cm^-3, then the number of nucleons in a sphere of radius ~10^14 cm enclosing the outer planetary orbits and centred on the Sun would be ~10^66, quite close to his estimate of 10^63.
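The closing estimate is easy to reproduce. A minimal sketch, assuming a grain density of 1 g cm^-3 and the modern proton mass (values not given explicitly in the text):

```python
import math

# Nucleons in a sphere of radius ~1e14 cm (roughly the outer planetary orbits)
# filled at an assumed sand-grain density of 1 g/cm^3.
R = 1e14          # cm, radius enclosing the outer planets
density = 1.0     # g/cm^3, the 'sand grain' density assumed above
m_p = 1.67e-24    # g, mass of one nucleon

mass = (4.0 / 3.0) * math.pi * R**3 * density
nucleons = mass / m_p
# The order of magnitude comes out at ~10^66, as quoted in the text.
```

The point of the check is only the exponent; the prefactor is meaningless at this level of approximation.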

4.2 From Coincidence to Consequence

... And thus they spend
The little wick of life's poor shallow lamp
In playing tricks with nature giving laws
To distant worlds, and trifling in their own.
W. Cowper

Numerological and mystic speculation was especially rife amongst German Romantics and Naturphilosophen during the nineteenth century and grew out of ancient teleological speculations concerning the harmonious distribution of the heavenly bodies. Such speculation was by no means confined to the celestial motions; in 1818 the Kantian mineralogist Christian Weiss even argued for a link between aspects of rhombic-dodecahedral crystal structure and the musical scale of tones! However, such flights of imaginative fancy generally had little impact upon the work of serious scientists, with one notable exception. In 1766 Johann Daniel Titius von Wittenberg was preparing a German translation of Charles Bonnet's Contemplation de la Nature. To the section on planetary motions he added a now famous footnote pointing out that the radii of all the planetary orbits can be generated by the following simple algorithm (where r_n is measured in astronomical units; 1 AU = 1.496×10^13 cm):

r_n = 0.4 + 0.3×2^n;  n = -∞, 0, 1, 2, ... .    (4.1)

This formula provided a striking approximation to the distances from the Sun of the six then-known planets: Mercury, Venus, Earth, Mars, Jupiter and Saturn. Their distances from the Sun at the time of the Law's inception are indicated together with Titius' predictions as follows:

Planet     Measured r_n (AU)   'Predicted' r_n   n
Mercury    0.39                0.4               -∞
Venus      0.72                0.7               0
Earth      1.00                1.0               1
Mars       1.52                1.6               2
Jupiter    5.20                5.2               4
Saturn     9.55                10.0              5
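Formula (4.1) can be evaluated directly against the table; a quick sketch using only the numbers quoted above:

```python
# Titius' rule (4.1): r_n = 0.4 + 0.3 * 2^n, in astronomical units.
def titius(n):
    return 0.4 + 0.3 * 2**n

# n-values and measured distances (AU) from the table above.
planets = {"Venus": (0, 0.72), "Earth": (1, 1.00), "Mars": (2, 1.52),
           "Jupiter": (4, 5.20), "Saturn": (5, 9.55)}
errors = {name: abs(titius(n) - r) / r for name, (n, r) in planets.items()}
# Mercury (0.39 AU) corresponds to the limiting value r = 0.4.
# The later failures discussed below: titius(7) = 38.8 vs Neptune's 30.1 AU,
# and titius(8) = 77.2 vs Pluto's 39.5 AU.
```

For the six original planets every relative error is under about five per cent, which is exactly the seductive accuracy that made the rule famous.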

In 1772 Johann Bode came across Titius' footnote and inserted it into the new edition of his own astronomy book, but without a reference to Titius, and this led to Bode becoming erroneously associated with its discovery. Titius' purely numerical relation initially had two great successes. First, it successfully predicted the discovery of the next planetary body, Uranus, at a distance r_6 ~ 19.2 AU from the Sun. This planet was in fact named by Bode following its discovery in 1781 by Herschel. Later, an extensive search revealed that the 'gap' in the Titius sequence at r_3 ~ 2.8 AU was filled by the asteroid belt, and since it was conceivable that the bodies filling this band arose from a past planetary disintegration this was counted as another significant success for the formula. However, if we calculate r_7 ~ 38.8 AU and r_8 ~ 77.2 AU there is a dramatic disagreement with the observed orbits of Neptune (30.1 AU)


and Pluto (39.5 AU). In the final analysis, if we account for the original input in (4.1) of three parameters (0.4, 0.3, 2), which ensures at least three predictions must accord with observation, we are left with five successes and two outright failures. Whether this means that the Titius law is physically significant or compatible with any reasonably spaced sequence of purely random numbers remains a matter of some debate amongst planetary astronomers to this day.

An example of numerology with a more fruitful outcome is provided by the Balmer formula for the spectral lines of hydrogen. By the 1880s the hydrogen spectrum was seen to possess an obvious pattern and this tempted various physicists to suggest an empirical formula which would summarize its structural features. In 1885 the Swiss Johann Balmer suggested the following numerical law

λ = A m^2/(m^2 - 4);  m = 3, 4, 5, 6, ...;  A = constant    (4.2)

for the wavelengths, λ, of the H_α (6563 Å), H_β (4861 Å), H_γ (4340 Å) and H_δ (4102 Å) lines. This was generalized for all alkali spectra by Rydberg in 1890 and various similar algorithms were found to fit spectral series in other wavebands by Lyman, Paschen, Brackett and Pfund. These purely numerical formulae were later found to have a beautiful and precise explanation in Bohr's quantum theory of the hydrogen atom. According to Rosenfeld, Bohr was significantly guided by these empirical formulae. He records Bohr remarking to him about the problem of atomic structure that 'as soon as I saw Balmer's formula, the whole thing was immediately clear to me' and recalls how in 1911-12, according to Bohr's recollection, he was asked by the young Danish physicist Hans Marius Hansen how atomic theory could explain the spectra. In Bohr's view, the experimental spectra were too complicated for a simple explanation to exist, but Hansen disputed this and simply pointed to Balmer's formula.

Another closely related numerological debate began at the turn of the century when, in May 1899, Planck first stated a value for the fundamental constant that now bears his name (h = 6.62×10^-27 erg sec). Six years later he wrote in a letter to Paul Ehrenfest claiming that

it seems to me not completely impossible ... h has the same order of magnitude as e^2/c

and regarded it as plausible that there might exist some link between electrical processes and the new quantum of action. In 1909 Einstein took this suggestion a little further; he realized that e^2/c possessed the dimension of an action and was, to within a reasonable numerical factor, of


order Planck's new constant h. He remarked that

It seems to me that we can conclude from h = e^2/c that the same modification of theory that contains the elementary quantum e as a consequence, will also contain as a consequence the quantum structure of radiation.

Soon afterwards these words were read by Haas, who was motivated to equate quantities with the dimensions of potential and kinetic energy in Thomson's model of the atom, obtaining e^2/a ~ hν, where a is the atomic radius and ν some characteristic frequency. With one more dimensional estimate he gave Planck's constant in terms of the electron mass, m_e, the atomic radius a and electric charge e, as

h = 2πe(a m_e)^(1/2)    (4.3)

This, in February 1910, is actually Bohr's formula for the ground-state radius of the hydrogen atom. Few took his result seriously, although Lorentz did refer to it as a 'daring hypothesis'. Bohr emphasized on various occasions that he had no knowledge of Haas' early work, but he was clearly influenced indirectly by Sommerfeld's knowledge of it. Sommerfeld was the first to spell out clearly the physical significance of the dimensionless parameter e^2/hc. Again, we see an interesting chain of events sparked by purely dimensional and numerological speculation but culminating in rigorous quantitative developments.

In 1856 Weber and Kohlrausch made the first experimental determination of the ratio between the units of electric and magnetic charge. They obtained the value 3.107×10^10 cm s^-1, and the proximity of this number to the measured value for the velocity of light was noticed by Kirchhoff in 1857. Maxwell and Riemann were also singularly impressed by this numerical 'coincidence', and the following year Riemann presented a paper to the Gottingen Academy in which he formally deduced their equality, and so began the development of a unified theory of electricity and magnetism.

As a final, and more recent, example of such numerological serendipity it is interesting to recall the development of black hole thermodynamics. It had been known for some time prior to 1974 that the theoretical relations governing mechanical interactions between black holes bore an uncanny formal resemblance to the laws of thermodynamics. In fact, if one associated an entropy with the area of the black hole event horizon and a temperature with its surface gravity, then the zeroth, first and second laws of thermodynamics were simply known properties of black hole mechanics in disguise. For some while these analogies were treated as curiosities devoid of any real physical content, because no particles could emerge from a classical black hole to endow it with the thermal properties of an object at non-zero temperature. Eventually, intrigued by

these analogical coincidences, Hawking made a monumental discovery, namely, that black holes are black bodies. They radiate particles with thermal characteristics. Their surface area and gravity do precisely determine the entropy and temperature of the radiated particles and they obey the laws of equilibrium thermodynamics. This realization has prompted a tremendous concentration of effort by theoretical physicists to investigate the unsuspected interconnections between quantum mechanics, general relativity and thermodynamics. It could be said that this fruit has grown principally from the roots of coincidence.
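Two of the numerical claims in this section, Balmer's law (4.2) and the Weber-Kohlrausch ratio, are easy to verify. A minimal sketch; the value A = 3645.6 Å for Balmer's constant is an assumption not stated in the text:

```python
# Balmer's law (4.2): wavelength = A * m^2 / (m^2 - 4).
A = 3645.6                                     # Angstroms (assumed constant)
quoted = {3: 6563, 4: 4861, 5: 4340, 6: 4102}  # H_alpha .. H_delta, as quoted
balmer = {m: A * m**2 / (m**2 - 4) for m in quoted}
# Each computed wavelength agrees with the quoted line to within ~1 Angstrom.

# Weber-Kohlrausch (1856): ratio of electric to magnetic units vs light speed.
ratio = 3.107e10   # cm/s, their measured value as quoted
c = 2.998e10       # cm/s, velocity of light
# The two agree to within about 4 per cent, the 'coincidence' Kirchhoff noticed.
```

That a four-parameter-free fit lands within an angstrom on four lines is precisely why Balmer's formula demanded, and eventually received, a causal explanation.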

4.3 'Fundamentalism'

He thought he saw electrons swift
Their charge and mass combine.
He looked again and saw it was
The cosmic sounding line.
The population then, said he,
Must be 10^79.
H. Dingle

In modern times the first scientist to notice the presence of large dimensionless numbers in Nature appears to have been the mathematical physicist Hermann Weyl. As an aside to his early discussion of general relativity, published in 1919, he remarks on the huge difference between the electric and gravitational radii of the electron:

It is a fact that pure numbers appear with the electron, the magnitude of which is totally different from 1; so for example, the ratio of the electron radius to the gravitational radius of its mass, which is of order 10^40; the ratio of the electron radius to the world radius may be of similar proportions.

The idea of explaining such occurrences, and indeed exploiting them to pursue a programme which had as its goal a calculation of all the fundamental physical constants of Nature, was suggested by Arthur Eddington in 1923. The quest for his 'Fundamental Theory' of the physical world, in which the basic interaction strengths and elementary particle masses would be predicted entirely combinatorically by simple counting processes, was vigorously pursued until his death in 1944. Although still fragmentary even then, to our modern eyes this work appears mysterious, if not slightly eccentric. Yet despite its peculiar nature it had some interesting consequences and served to isolate many problems which still cry out for an explanation. Whittaker has described the guiding principle of Eddington's approach to the fundamental constants of Nature in the following words:

All the quantitative propositions of physics, that is, the exact values of the pure numbers that are constants of science, may be deduced by logical reasoning from qualitative assertions without making any use of quantitative data derived from observation.

This is truly a 'philosopher's dream' and Eddington, in the 1923 edition of his book The Mathematical Theory of Relativity, began to ponder the disconcerting presence of large dimensionless numbers in the local and global model of the universe he had done so much to construct:

among the constants of Nature there is one which is a very large pure number; this is typified by the ratio of the radius of an electron to its gravitational mass = 3×10^42. It is difficult to account for the occurrence of a pure number (of order greatly different from unity) in the scheme of things; but this difficulty would be removed if we could connect it with the number of particles in the world—a number presumably decided by pure accident.

Through this speculation, and the ways in which it was developed in his later work, Eddington was the first to suggest that the total number of particles in the Universe, N, might play a part in determining other fundamental constants of Nature. He evaluated this number to high precision and it is now often termed the 'Eddington number':

N = 2×136×2^256 ~ 10^79    (4.4)

One of the attractions of this quantity for Eddington was the necessity that its value be integral. This meant that it could, in principle, be calculated exactly. In these early days, when the weak and strong interactions were still unknown, Eddington set about constructing a model of the Universe from the following collection of dimensional physical constants: G, c, m_e, m_N, e, h, which denote the gravitation constant, the velocity of light, the electron and proton masses, the electron charge and Planck's constant respectively. From them he derived three independent dimensionless ratios:

m_N/m_e ~ 1840;  hc/e^2 ~ 137;  e^2/G m_N m_e ~ 10^39    (4.5)

To these he added two cosmological parameters: the Eddington number, N ~ 10^79, and Einstein's cosmological constant, Λ. From the latter he constructed a further dimensionless ratio

Λ^(-1/2) (λ_e λ_N)^(-1/2) ~ 10^39,  where λ_i = h/m_i c,    (4.6)

(where the numerical value is that used by Eddington, who believed the Hubble constant H_0 to be ~500 km s^-1 Mpc^-1). The last expression gives the ratio of the radius of curvature of the de Sitter space-time to the geometric mean of the electron and proton Compton wavelengths. Through the introduction of these two cosmological parameters he could


begin to develop a set of Machian interconnections between the micro and macro-physical worlds by exploiting the dual numerical coincidences between (4.5), (4.6) and N^(1/2). In isolating these dimensionless ratios Eddington highlighted the fact that their values are not uniformly distributed over the entire range of real numbers but reside, within a factor of a hundred or so, around 1, 10^40 and 10^80. His subsequent work sought to ascertain whether or not these quantities were reducible to simpler forms or calculable from first principles. If these numbers are necessarily fixed by the internal consistency of Nature they could, in principle, be determined by theory. However, if they are completely arbitrary then only experiment can reveal their values to us. A typical example of Eddington's methodology, which displays the manner in which he sought to employ the number of particles in the Universe as a mediator between gravitational and atomic phenomena, is given by his attempt to calculate a fundamental mass. His argument went like this:

Since most of the particles in the Universe interact very infrequently they may be represented by plane waves with a uniform probability distribution. If their positions are random, each with positional uncertainty R, then, by the law of large numbers, the centroid of this distribution also possesses a positional uncertainty, Δx, where Δx ~ R/N^(1/2).

If we employ the Uncertainty Principle of Heisenberg, a mass scale m_0 can be associated with this uncertainty, m_0 ~ hN^(1/2)/Rc. Eddington claimed that this mass uncertainty arises entirely as a consequence of the finite space in which the N particles reside. Now if R is the gravitational radius of the Universe, so that we have

R ~ GM/c^2    (4.7)

where M ~ N m_N is the mass of the Universe, and if the limit of precision measurement of each particle is taken to be the classical electron radius, r_e, where

r_e = e^2/m_e c^2

then we have the prediction that

N^(1/2) ~ e^2/G m_N m_e    (4.8)

and, (keeping account of all the numerical factors), Eddington calculated


the associated 'fundamental' mass m_0 to lie close to the proton mass:

m_0 = hN^(1/2)/Rc ~ 3×10^-25 gm    (4.9)
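Both of Eddington's headline numbers can be checked with a few lines of arithmetic. A sketch, taking R ~ c/H_0 with the Eddington-era value H_0 ~ 500 km s^-1 Mpc^-1 (an assumption; the digit string is the proton count quoted in the chapter epigraph):

```python
import math

# Eddington's integer (4.4): N = 2 * 136 * 2^256, half protons, half electrons.
protons = 136 * 2**256
N = 2 * protons            # a ~10^79 integer, exactly computable

# The 'fundamental' mass of (4.9): m0 = h * sqrt(N) / (R * c).
h = 6.63e-27               # erg s
c = 3.0e10                 # cm/s
H0 = 500 * 1.0e5 / 3.086e24   # s^-1: 500 km/s/Mpc in cgs
R = c / H0                 # cm, Hubble radius ~1.9e27 cm
m0 = h * math.sqrt(float(N)) / (R * c)
# m0 lands within a factor of a few of the quoted 3e-25 gm.
```

The exact integer arithmetic is the point: Eddington's N can be written down digit for digit, which is what made it so seductive as a 'calculable' constant.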

The conclusion drawn from relations like (4.8) and (4.9) was that the 'large' numbers ~10^40, and powers thereof, are of this huge order of magnitude because they are determined by N. Dimensionless quantities with values neighbouring unity are simply those whose values are not explicitly conditioned by N. Exact versions of the formula (4.9) initiated a later numerological excursion culminating in a 'determination' of the electron and proton masses. These were determined as the roots of a certain quadratic equation

10m^2 - 136 m_0 m + (137/10)^(5/6) m_0^2 = 0.    (4.10)

This gave the two solutions for m as:

m_e = 9.10924×10^-28 gm;  m_N = 1.67227×10^-24 gm    (4.11)

Another version of this calculation employed the roots of

10m^2 - 136m + 1 = 0    (4.12)

which lie in the ratio 1847.6. Other arguments of this ilk were arranged to display the fine structure constant as the reciprocal of the number of terms in a symmetric 16-dimensional tensor

α^-1 = (16^2 - 16)/2 + 16 = 136    (4.13)

Later, unity was added to this value to align it better with the experimental value 137.036. Such post facto changes in some of his combinatorical predictions damaged the credibility of much of this work. Despite a sceptical reaction from other scientists, Eddington worked very seriously throughout a long period of his life on arguments of this nature and generated a vast array of results that still lack a coherent basis. A fair idea of how some notable physicists viewed this work at the time can be obtained from two 'spoofs' which were specifically designed to parody the Eddington methodology. The following article, entitled 'Concerning the quantum theory of absolute zero', was written by Beck, Bethe and Riezler and appeared in the 9 January issue of Naturwissenschaften in 1931:

Let us consider a hexagonal crystal lattice. The absolute zero of this lattice is characterized by the fact that all degrees of freedom of the system are frozen out, i.e., all inner movements of the lattice have ceased, with the exception, of course, of the motion of an electron in its Bohr orbit. According to Eddington every electron has 1/α degrees of freedom, where α is the fine structure constant of Sommerfeld. Besides electrons our crystal contains only protons and for these the number of degrees of freedom is obviously the same since, according to Dirac, a proton is considered to be a hole in a gas of electrons. Therefore, to get to the absolute zero we have to remove from the substance per neutron (=1 electron plus 1 proton; our crystal is to carry no net charge) 2/α - 1 degrees of freedom, since one degree of freedom has to remain for the orbital motion. We thus obtain for the zero point temperature T_0 = -(2/α - 1) degrees. Putting T_0 = -273°, we obtain for 1/α the value 137, in perfect agreement within the limits of accuracy with the value obtained by totally independent methods. It can be seen very easily that our result is independent of the particular crystal lattice chosen.

In his 1944 lectures on 'Experiment and Theory in Physics' Max Born writes of Eddington's numerology:

Eddington connects the dimensionless physical constants with the number n of the dimensions of his E-spaces, and his theory leads to the function f(n) = n^2(n^2 + 1)/2 which, for consecutive even numbers n = 2, 4, 6, ... assumes the values 10, 136, 666.... Apocalyptic numbers, indeed. It has been proposed that certain well-known lines of St. John's Revelation ought to be written in this way: 'And I saw a beast coming up out of the sea having f(2) horns ... and his number is f(6) ...' but whether the figure x in '... and there was given to him authority to continue x months ...' is to be interpreted as 1×f(3) - 3×f(1) or as (1/3)[f(4) - f(2)] can be disputed.
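Born's parody function is at least easy to tabulate; a quick sketch of the joke's arithmetic:

```python
def f(n):
    # Born's function from the quotation above: f(n) = n^2 (n^2 + 1) / 2
    return n * n * (n * n + 1) // 2

values = [f(n) for n in (2, 4, 6)]   # the 'apocalyptic numbers' 10, 136, 666
x = 1 * f(3) - 3 * f(1)              # one reading of the disputed figure
y = (f(4) - f(2)) // 3               # the alternative reading
```

Both readings of the disputed figure give 42, which is of course the whole point of Born's jibe: with enough free combinatorics any number can be 'derived'.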

Although Eddington's 'Fundamental Theory' is very easy to criticize, it is still interesting for the vision of an underlying unity in Nature which it displays; a vision that has since materialized in an entirely different form. Through his work in this area Eddington directed the attention of many other workers to the ubiquity of large dimensionless numbers. This, in turn, stimulated other approaches to cosmological theory that have borne more fruit than their progenitor. Of the other early contributors to this style of working the most prolific appears to have been Haas who, during the period 1932-8, devoted a whole series of short papers and a large portion of a book to these matters. For example, in 1935 he derived a value for the gravitational mass of the Universe and then, by a similar argument to that of Eddington given above, gives the uncertainty in the Universe's centre of mass as R/N^(1/2). This yields a relation between N and the gravitational coupling similar to (4.8):

N^(1/2) = e^2/G m_N m_e    (4.14)


Another early example of a now familiar type of cosmological coincidence was given by Stewart in 1931. Out of the constants e, h, c, G, m_e and m_N he formed the three dimensionless quantities hc/e^2, e^2/G m_N^2 and m_N/m_e. By trial and error he found a combination roughly equal to the present Hubble radius, cH_0^-1.

Stewart suggests that this 'formula is simpler than would be expected if it is assumed to represent a relationship due merely to chance'. More recently Weinberg has pointed out that one can construct a mass close to the pion mass (m_π ~ 140 MeV/c^2) out of h, c, G and H_0:

m_π ~ (h^2 H_0/Gc)^(1/3).

If we rewrite it in this form we can see its resemblance to the Stewart coincidence, and this arises because of the additional numerical coincidence e^2/hc ~ (m_e/m_π). Clearly, one can systematize such coincidences through dimensional analysis: any mass formed from the parameters G, h, c and H_0 depends on just one free index, λ, as follows:

m(λ) = h^((1+λ)/2) G^((λ-1)/2) c^((1-5λ)/2) H_0^λ

and Weinberg's coincidence is m(1/3). These coincidences have reappeared in some later work that has many similarities with Eddington's use of (4.9). A long series of papers by the Japanese physicists Hayakawa, Tanaka, and Hokkyo have attempted to explain a relation equivalent to (4.6) which has the form

h ~ m c^2 t_0 N^(-1/2)    (4.19)

where t_0 ~ H_0^-1 is the age of the Universe. If there exists a dispersion Δm in the mass of elementary particles in the Universe then for a Gaussian distribution they expect its scatter to be of the form

Δm/m ~ N^(-1/2)    (4.20)

If the Uncertainty Principle is the origin of this dispersion then

Δm c^2 ~ h/t_0    (4.21)

and (4.20) and (4.21) yield (4.19).


If one combines (4.20) and (4.21) with the relativistic relations R ~ ct_0 ~ GM/c^2 ~ GmN/c^2 we obtain the Weinberg coincidence with m ~ m_π and

N^(1/2) ~ hc/G m^2    (4.22)
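The orders of magnitude here are simple to check numerically. A sketch in cgs units, using the reduced constant ħ (factors of 2π are immaterial at this level) and assuming H_0 ~ 500 km s^-1 Mpc^-1, the inflated value current in the period under discussion:

```python
# Weinberg's coincidence: (hbar^2 * H0 / (G * c))^(1/3) lands near the pion mass,
# and (4.22): N^(1/2) ~ hbar * c / (G * m_pi^2) is a 'large number' ~10^39-10^40.
hbar = 1.055e-27   # erg s
G = 6.67e-8        # cm^3 g^-1 s^-2
c = 3.0e10         # cm/s
H0 = 1.6e-17       # s^-1  (~500 km/s/Mpc, an assumed period value)
m_pi = 2.5e-25     # g    (~140 MeV/c^2)

m = (hbar**2 * H0 / (G * c)) ** (1.0 / 3.0)
sqrt_N = hbar * c / (G * m_pi**2)
```

The dimensional combination of a microscopic constant (ħ), gravity (G) and a cosmological datum (H_0) landing on a particle mass is exactly the kind of cross-scale coincidence the chapter is cataloguing.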

Edward Teller appears to have been the first to speculate that there may exist a logarithmic relation between the fine structure constant and the parameter G m_N^2/hc ~ 10^-39, of the form

α^-1 ~ ln(hc/G m_N^2)    (4.23)

(in fact α^-1 = ln(3.17×10^59), and the formula is too insensitive to be of very much use in predicting exact relations). Various authors have attempted to place such a relationship on a more formal footing. Salam et al. tried to remove the ultraviolet divergence in the electron self-energy by the inclusion of a gravitational self-energy term E_s. This yields

E_s ∝ ln N    (4.24)

Peebles and Dicke, and Landau, have derived relations of the form (4.23) by attempting to take into account renormalization terms in the calculation of α. There exists another whole class of purely numerical coincidences whose significance is even harder to assess than those sketched above. Some of the most striking such coincidences are the proximity of m_N/m_e (= 1836.1515) to 6π^5 (= 1836.118); the ratios of the proton, Λ, Σ and Ξ masses to a regular progression,

m_N : m_Λ : m_Σ : m_Ξ = 1 : 2^(1/4) : 2^(1/3) : 2^(1/2),    (4.25)

the mass-splitting coincidence involving the neutron mass, m_n (4.26); and the ratio of the new ψ' and J/ψ particle masses, m_ψ'(3684)/m_J/ψ(3098) = 1.1891542, which is roughly 2^(1/4) = 1.1892071. MacGregor's correlation between powers of α and the life-times of metastable states is another curious trend; many other 'coincidences' of dubious significance undoubtedly exist. Peres has suggested an instructive mathematical approach to evaluating the real significance of many of these numerical formulae. For
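A couple of these can be verified in a line or two; a sketch using only the numbers quoted above:

```python
import math

# m_N / m_e vs 6 * pi^5
quoted_mass_ratio = 1836.1515
six_pi5 = 6 * math.pi**5          # = 1836.118..., agreeing to ~2e-5 relative

# psi'(3684) / J/psi(3098) vs 2^(1/4)
psi_ratio = 3684.0 / 3098.0       # = 1.18915...
fourth_root_2 = 2 ** 0.25         # = 1.18921...
```

Agreement to four or five significant figures is what gives these relations their spurious air of depth, which is precisely the problem Peres' criterion below is designed to quantify.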


example, if we take the numerical coincidence 'calculated' by Wyler for the fine structure constant

α^-1 = 2^(19/4) 3^(-7/4) 5^(1/4) π^(11/4) = 137.036082    (4.27)

one might ask a more general question. Given the numbers 2, 3, 5 and π, how well can we approximate α by juggling with powers of these four numbers? Quantitatively, we look for integers a, b, c and d so that the relation

(1 - ε)α^-1 < (2^a 3^b 5^c π^d)^(1/4) < (1 + ε)α^-1    (4.28)

can be satisfied for very small ε (e.g., pick ε = 1.5×10^-6). Then one is confronted with examining a three-dimensional surface a log 2 + b log 3 + c log 5 + d log π in the four-dimensional lattice space spanned by the integers a, b, c and d. The distance between the two limiting surfaces is calculated to be

8ε[(log 2)^2 + (log 3)^2 + (log 5)^2 + (log π)^2]^(-1/2) = 5.4×10^-6    (4.29)

So, on average, within any three-dimensional area of size 1.85×10^5 one should find one lattice point in the slab (4.29). This corresponds to searching the interior of a sphere of radius 35, and Peres claims that (at the given level of 'surprise' of ε = 1.5×10^-6) one would only be surprised to find (4.28) satisfied if the solution set {a, b, c, d} had a distance from the origin much smaller than 35. In Wyler's example it is only 23. Such a sphere is large enough to contain a lattice point (solution to (4.28)) with good probability, and so (4.27) is likely a real 'numerical' coincidence.

Most of the early work of Eddington and others on the large number coincidences has been largely forgotten. It has little point of contact with ideas in modern physics and is now regarded as a mere curiosity in the history of ideas. Yet in 1937 Paul Dirac suggested an entirely different resolution of the large numbers dilemma which, because of its novelty and far-reaching experimental consequences, has remained an idea of recurrent fascination and fundamental significance.
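Wyler's expression (4.27) and Peres' distance criterion are easy to reproduce. A sketch, taking α^-1 = 137.035999 as the experimental value (a modern figure, assumed here):

```python
import math

# Wyler's lattice point (a, b, c, d) in the notation of (4.28):
# alpha^-1 ~ (2^a * 3^b * 5^c * pi^d)^(1/4)
a, b, c, d = 19, -7, 1, 11
approx = (2.0**a * 3.0**b * 5.0**c * math.pi**d) ** 0.25

alpha_inv = 137.035999                        # assumed experimental value
eps = abs(approx - alpha_inv) / alpha_inv     # comfortably inside 1.5e-6
distance = math.sqrt(a*a + b*b + c*c + d*d)   # ~23: inside the radius-35 sphere
```

Since a sphere of radius 35 should contain a lattice point satisfying (4.28) anyway, a hit at distance 23 carries no statistical surprise, which is Peres' point.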

4.4 Dirac's Hypothesis

You and I are exceptions to the laws of Nature; you have risen by your gravity, and I have sunk by my levity.
Sydney Smith

Dirac's explanation for the prevalence of the large numbers 10^40 and 10^80 amongst the dimensionless ratios involving atomic and cosmological quantities rests upon a radical assumption. Rather than recourse to the mysterious combinatorical juggling of Eddington, Dirac chose to abandon one of the traditional constants of the physical world. He felt this step to

be justified because of the huge gulf between the 'large numbers' and the more familiar second set of physical constants like m_N/m_e and e^2/hc, which lie within a few orders of magnitude of unity. This dissimilarity suggested that some entirely different mode of explanation might be appropriate for each of these sets of constants. Consider the following typical 'large numbers':

N_1 = t_0 (e^2/m_e c^3)^-1 ~ 6×10^39 = (age of Universe)/(atomic light-crossing time)    (4.30)

N_2 = e^2/G m_N m_e ~ 2.3×10^39 = (electric force between proton and electron)/(gravitational force between proton and electron)    (4.31)

The similarity between the magnitude of these superficially quite unrelated quantities suggested to Dirac that they might be equal (up to trivial numerical factors of order unity) due to some unfound law of Nature. To place this on a more formal basis he proposed the 'Large Numbers Hypothesis' (LNH):

Any two of the very large dimensionless numbers occurring in Nature are connected by a simple mathematical relation, in which the coefficients are of the order of magnitude unity.
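Both 'large numbers' can be evaluated directly in cgs units; a sketch (the age t_0 ~ 5.6×10^16 s, roughly 1.8 Gyr, reflects the inflated Hubble constant of the 1930s and is an assumption here):

```python
# N1 (4.30): age of the Universe over the atomic light-crossing time e^2/(m_e c^3).
# N2 (4.31): electric over gravitational force between a proton and an electron.
e = 4.80e-10     # esu
G = 6.67e-8      # cm^3 g^-1 s^-2
m_N = 1.67e-24   # g, proton mass
m_e = 9.11e-28   # g, electron mass
c = 3.0e10       # cm/s
t0 = 5.6e16      # s (assumed Dirac-era age of the Universe)

N1 = t0 / (e**2 / (m_e * c**3))
N2 = e**2 / (G * m_N * m_e)
# Both come out ~10^39, the coincidence on which the LNH is built.
```

Note that only N_1 contains the epoch t_0; that asymmetry is what drives the argument in the next paragraph.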

Now, because Dirac chose to include a time-dependent factor, the Hubble age t_0, amongst his combinations of fundamental parameters, this simple hypothesis had a dramatic consequence: any large number ~10^40 equated with N_1 must also reflect this time variation. The pay-off from this idea is that the time variation explains the enormity of the numbers: since all numbers of order (10^39)^n must now possess a time variation ∝ t^n, they are large simply because the Universe is old. There are now several routes along which to proceed. Incorporating the required time-dependence of N_2 into e^2, m_N or m_e would have overt and undesirable consequences for well-tried aspects of local quantum physics and so Dirac chose to confine the time variation within Newton's gravitational 'constant' G. For consistency with the LNH we see that gravity must weaken with the passage of cosmic time:

G ∝ t^-1    (4.32)

Before following this road any further it is worth stressing that in this argument the variation of G (or any other 'constant') with time is not a consequence of the LNH per se. It has arisen because of a particular, subjective choice in the ranks of the large numbers. If one were to assume


the Universe closed and finite in space-time then the proper time, t taken by the Universe to expand to maximum volume is a fundamental cosmic time independent of the epoch at which we observe the Universe and list our large numbers. In our Universe, observation suggests f lies within an order of magnitude or so of the present time, t , and so if t replaces t in the combination then the quantitative nature of the large number coincidence N ~ N remains. The qualitative change could not be greater: now the quantity t ^m c \e possesses no intrinsic time-variation and so in conjunction with the LNH it can precipitate no time variation in other sets of traditional constants like N . In this form the LNH merely postulates exact equivalence between, otherwise causally unrelated, collections of natural constants. The conclusion that constants must vary in time can be spirited away if we believe the Universe to be closed (bounded in space and time). A formulation along these lines appears implicit in a paper by Haas published in 1938 sandwiched in time between the two initial contributions by Dirac. Instead of having Dirac's coincidences N~N we have replaced by N[ = G(Nm )mJe ~10 . Rather than three independent large numbers N N and N we now have only two because N[N = N. Other criticisms of Dirac's approach could be imagined: in the real world the Hubble age is a local construction. It changes from place to place because of variations in the density and dynamics or because of non-simultaneity in the big bang itself. If the age of the Universe is a spatial variable then the LNH implies that this spatial variation should be carried by the constants in N just as surely as the temporal variation. To overcome this difficulty one would have to find some spatially-averaged Hubble age and employ that in the LNH as the fundamental cosmic time. If spatial variation is introduced the possibility of an observational test of the hypothesis is considerably occluded. 
All our good tests of gravitation theories focus upon the behaviour of particular systems, for example the dynamics of the binary pulsar, and it is not clear how one would disentangle the time and space variations in any particular case in order to test the theory against experiment. In 1963, when several second-generation theories incorporating varying G were popular and viable theories of gravity, a criticism of this sort was put very strongly by Zeldovich:


the local character of the general theory of relativity is not in agreement with the attempts of some authors to introduce an effect of the world as a whole on the phenomena occurring at a given point, and on the physical constants which appear in the laws of nature. From such an incorrect point of view one would have to expect ... the physical constants would change with time. If we start from the Friedman model of the world, the state of the world can be characterized by the mean radius of curvature of space. The curvature of space is a local concept. One now assumes in the framework of local theory that a length constructed from


physical constants is proportional to the radius of curvature of space. Since in the Friedman world the radius changes in the course of time, the conclusion is drawn that the physical constants also change in the course of time. This pseudological view, however, cannot withstand criticism: the Friedman solution has a constant curvature of space only when one makes the approximation of a strictly uniform distribution of the matter density! . . . a dependence of the constants on the local value of the curvature would lead to great differences in the constants at the earth's surface and near the sun, and so on, and hence is in complete contradiction with experience.

The novel course taken by Dirac leads to many unusual and testable predictions. If the Universe were finite then, because the number of particles contained within it is the square of a large number, this number must increase with time as N ∝ t². To avoid a violation of energy conservation Dirac concluded from this that the Universe must be infinite, so N is not defined. Similar reasoning led to the conclusion that the cosmological constant, Λ, must vanish. Were this not the case, Eddington's large number involving Λ given in (4.6) would have to vary with epoch. The earliest published reaction to Dirac's suggestion was that of Chandrasekhar, who pointed out that the LNH had a variety of consequences for the evolution of 'local' structures like stars and galaxies, whose sizes are governed by other large dimensionless numbers. He showed that if we form a set of masses out of the combination m_N, G, ħ and c then we can build a one-parameter family of masses:

m(a) = (ħc/Gm_N²)^a m_N    (4.33)

Ranging through the values of a, members of this family are seen to lie remarkably close to the masses we observe in large aggregations of luminous material in the Universe. For instance, the Eddington number, N, is just m(2)/m_N, and

m(3/2) = (ħc/Gm_N²)^{3/2} m_N ~ 6×10³⁴ gm ~ M(star)    (4.34)

m(7/4) = (ħc/Gm_N²)^{7/4} m_N ~ 1.7×10¹¹ M_⊙ ~ M(galaxy)    (4.35)

m(2) = (ħc/Gm_N²)² m_N ~ 10²¹ M_⊙ ~ M(visible universe)    (4.36)

These relations imply that the LNH should predict an increase in the 'number of particles in the galaxy' as t^{3/2}. Consequences of this sort were also outlined by Kothari and discussed by Zwicky, who argued that these variations might alter the apparent brightness of stars in a systematic fashion that could be observationally checked.
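Chandrasekhar's mass family can be evaluated directly. A minimal numeric sketch (not from the text): constants are rough CGS values, and the results are order-of-magnitude only, shifting by an order of magnitude or two with the precise constants chosen.

```python
import math

# Evaluating the one-parameter mass family m(a) = (hbar*c/(G*m_N**2))**a * m_N
# of eq. (4.33) for the exponents singled out in eqs. (4.34)-(4.36).
hbar, c, G, m_N = 1.05e-27, 3.0e10, 6.67e-8, 1.67e-24  # CGS
M_sun = 2.0e33  # g

def m(a):
    """Mass m(a) in grams."""
    return (hbar * c / (G * m_N**2))**a * m_N

for a, label in [(1.5, "star"), (1.75, "galaxy"), (2.0, "visible universe")]:
    print(f"m({a}) ~ 10^{math.log10(m(a)):.1f} g "
          f"~ 10^{math.log10(m(a)/M_sun):.0f} M_sun ({label})")
```

With ħc/Gm_N² ~ 10³⁸, the three exponents land in the stellar, galactic, and cosmological mass ranges respectively, which is the point of the coincidence.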


Pascual Jordan was another notable physicist attracted by the growing interest in a possible connection between large numbers and the time evolution of gravity. Like Chandrasekhar and Kothari, he noticed that a typical stellar mass is roughly ~10⁶⁰ m_N and so, according to Dirac's reasoning, should increase with time, M ∝ t^{3/2}. Using (4.32), this indicated that a relation of the form M ∝ G^{-3/2} would be anticipated to characterize the stellar mass scale. Since earlier theoretical work had provided good reasons for such a dependence of M on G, Jordan interpreted this result as a confirmation of the idea of varying constants and its extension to time-varying stellar sizes. Teller, however, pointed out that a stronger gravitational force in the past would have made the Sun burn more fiercely, producing temperatures above the boiling point of water (>100°C) on the Earth's surface in the pre-Cambrian era, with catastrophic consequences for land and water-based organisms. In the early 1960s Robert Dicke and Carl Brans developed a rigorous self-consistent theory of gravitation which allowed the consequences of a varying G to be evaluated more precisely. The Brans-Dicke theory also had the attractive feature of approaching Einstein's theory in the limiting situation where the change in G tends asymptotically to zero. This enabled arguments like Teller's to be examined more rigorously, and the largest tolerable rate of change in G to be calculated. Dicke and his colleagues had previously carried out a wide-ranging

87

The Rediscovery of the Anthropic Principle

246

series of investigations to examine the geological and astronomical evidence for any varying constants of Nature. In his 1957 review of the theoretical and observational situation Dicke made his first remarks concerning the connection between biological factors and the 'large number coincidences'. Dicke realized that the observation of Dirac's coincidence between the Eddington number N and the other quantities not possessing a time-variation is 'not random but is conditioned by biological factors'. This consideration led him to see a link between the large number coincidences and the type of Universe that could ever be expected to support observers. Seen in this light,


The problem of the large size of these numbers now has a ready explanation ... there is a single large dimensionless number which is statistical in origin. This is the number of particles in the Universe. The age of the Universe 'now' is not random but is conditioned by biological factors. The radiation rate of a star varies as an inverse power of e, and for very much larger values of e than the present value all stars would be cold. This would preclude the existence of man to consider this problem ... if [it] were presently very much larger, the very rapid production of radiation at earlier times would have converted all hydrogen into heavier elements, again precluding the existence of man.

Some years later, in 1961, Dicke presented these ideas in a more quantitative and cogent form specifically geared to explaining the large number coincidences. Life is built upon elements heavier than hydrogen and helium. These heavy elements are synthesized in the late stages of stellar evolution and are spread through the Universe by the supernova explosions which follow the main-sequence evolution of stars. Dicke argued that only universes of roughly the main-sequence stellar age could produce the heavy elements, like carbon, upon which life is based. Only those universes could evolve 'observers'. Quantitatively, the argument shows that the main-sequence stellar lifetime is set by the rate at which the radiation energy trapped within the star can leak away; it is roughly (see Chapter 5 for the proof):

t_ms ~ (ħc/Gm_N²)(e²/ħc)²(m_N/m_e) ħ/(m_e c²) ~ 10¹⁰ yr    (4.55)

'Observers' could not exist at times greatly in excess of t_ms because no hot
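The order of magnitude of the main-sequence lifetime can be checked with a short calculation. A hedged sketch, not the book's derivation: coefficients of order unity are dropped, the combination of constants is a standard dimensional estimate, and the assumed Hubble age H₀⁻¹ ≈ 4.4×10¹⁷ s is a modern value.

```python
import math

# Order-of-magnitude estimate of the main-sequence stellar lifetime,
# t_ms ~ alpha_G^-1 * alpha^2 * (m_N/m_e) * hbar/(m_e c^2), versus the Hubble age.
hbar, c, G = 1.05e-27, 3.0e10, 6.67e-8           # CGS
m_N, m_e, e2 = 1.67e-24, 9.11e-28, (4.80e-10)**2

alpha = e2 / (hbar * c)            # fine structure constant, ~1/137
alpha_G = G * m_N**2 / (hbar * c)  # gravitational analogue, ~5.9e-39

t_ms = (1 / alpha_G) * alpha**2 * (m_N / m_e) * hbar / (m_e * c**2)  # seconds
H0_inv = 4.4e17  # s, assumed Hubble age (~14 Gyr)

print(f"t_ms  ~ 10^{math.log10(t_ms):.1f} s")
print(f"H0^-1 ~ 10^{math.log10(H0_inv):.1f} s")
```

The two timescales agree to within an order of magnitude or so, which is exactly the coincidence N₁ ~ N₂ that Dicke's argument explains.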


stable stars would remain to support photochemical processes on planets; all stars would be white dwarfs, neutron stars or black holes. Living beings are therefore most likely to exist when the age of the Universe, t₀, is roughly equal to t_ms, and so must inevitably observe Dirac's coincidence N₁ ~ N₂ to hold. It is a prerequisite for their existence, and no hypothesis of varying constants is necessary to explain it. At a time t_ms after the beginning of the expansion of the Universe it is inevitable that we observe N₁ to have the value

N₁ ~ t_ms m_e c³/e² ~ 10⁴⁰    (4.56)
Two points are worth making at this stage. Although Dicke's argument explains the coincidence of N₁ and N₂ it does not explain why the coincident value is so large. Further considerations are necessary to resolve this question. Also, Dicke made his 'anthropic' suggestion at a time when the cosmic microwave background radiation was undiscovered and the steady-state universe remained a viable cosmological alternative to the Big Bang theory. However, a closer scrutiny of Dicke's argument at that time could have cast doubt upon the steady-state model. For, in the Big Bang model it is to be expected that we measure the Hubble age, H₀⁻¹, to lie close to a typical stellar lifetime, whereas in the steady-state theory this is a complete coincidence. In an infinitely old steady-state Universe manifesting 'continuous creation' there should exist no correlation between the time-scale on which the Universe is expanding and the main-sequence lifetime. We should be surrounded by stars in all possible states of maturity. There were others who had been thinking along similar lines. Whitrow had sought to explain why, on Anthropic grounds, we should expect to observe a world possessing precisely three spatial dimensions. His ideas were also extended to consider the question of the size and age of the expanding Universe. In the 1956 Bampton Lectures, Mascall elaborated upon some of Whitrow's ideas concerning the relation between the size of the Universe and local environmental conditions. In effect, they anticipate why the size of the large numbers N₁, N₂ and N (rather than just their numerical coincidence) is likely to be conditioned by biological factors:
Nevertheless, if we are inclined to be intimidated by the mere size of the Universe, it is well to remember that on certain modern cosmological theories there is a direct connection between the quantity of matter in the Universe and the conditions in any limited portion of it, so that in fact it may be necessary for the Universe to have the enormous size and complexity which modern astronomy has revealed, in order for the earth to be a possible habitation for living beings.

These contributions by Dicke and Whitrow provide the first modern


examples of a 'weak' anthropic principle: that the observation of certain a priori remarkable features of the Universe's structure is necessary for our own existence. Having gone so far, it was inevitable that some would look at the existence of these features from another angle, one reminiscent of the traditional 'Design Arguments': that the Universe either must give rise to life or is specially engineered to support it. Carter gave the name 'strong' Anthropic Principle to the idea that the Universe must be 'cognizable' and 'admit the creation of observers within it at some stage'. This approach can be employed to 'retrodict' certain features of any cognizable universe. There is one obvious defect in this type of thinking as it now stands. We appear to be making statements of comparative reference, evaluating a posteriori the likelihood of the Universe—which is by definition unique—possessing certain structural features. Various suggestions have been made as to how one might generate an entire ensemble of possible worlds, each with different characteristics; some able to support life and some not. One might then examine the ensemble for the structural features which are necessary to generate 'observers'. This scrutiny should eventually single out a cognizable subset from the metaspace of all possible worlds. We must inevitably inhabit a member of this subset in which living systems can evolve. Carter suggested that a 'prediction' made using this strong version of the Anthropic Principle could boil down to a demonstration that a particular feature of the world is common to all members of the cognizable subset. Obviously, it would be desirable to have some sort of probability measure on this ensemble of worlds. These speculations sound rather far-fetched, but there are several sources of such an ensemble of different worlds.
If the Universe is finite and bounded in space and time, it will recollapse to a second singularity having many features in common with the initial big bang singularity. Wheeler has speculated that the Universe may have a cyclic character, oscillating ad infinitum through a sequence of expanding and contracting phases. At each 'bounce' where contraction is exchanged for expansion, the singularity may introduce a permutation in the values of the physical 'constants' of Nature and in the form of the expansion dynamics. Only in those cycles in which the 'deal' is right will observers evolve. If there is a finite probability of a cognizable combination being selected then, in the course of an infinite number of random oscillatory permutations, those worlds allowing life to evolve must appear infinitely often. The problem with this idea is that it is far from being testable. At present, only the feasibility of a bounce which does not permute the physical constants (although it may permute the expansion dynamics) is under scrutiny. Also, if the permutation at each singularity extends to the constants of Nature, why not to the space-time topology and curvature as well? And if this were the case, sooner or later the geometry would be exchanged for a non-compact structure bound to expand for all future time. No future singularity would ensue and the constants of Nature would remain forever invariant. Such a scheme actually makes a testable prediction! The Universe should currently be 'open', destined to expand forever, since this state will always be reached after a finite series of oscillations. However, why should this final permutation of the constants and topology just happen to be one which allows the evolution of observers? A more attractive possibility, which employs no speculative notions regarding cyclic universes, is one suggested by Ellis. If the Universe is randomly infinite in space-time then our ensemble already exists. If there is a finite probability that a region the size of the visible Universe (~10¹⁰ light years in diameter) has a particular dynamical configuration, then this configuration must be realized infinitely often within the infinite Universe at any moment. This feature is more striking when viewed in the following fashion. In a randomly infinite Universe, any event occurring here and now with finite probability must be occurring simultaneously at an infinite number of other sites in the Universe. It is hard to evaluate this idea any further, but one thing is certain: if it is true then it is certainly not original! Finally, a completely different motivation for the 'many worlds' idea comes from quantum theory. Everett, in an attempt to overcome a number of deep paradoxes inherent in the interpretation of quantum theory and the theory of measurement, has argued that quantum mechanics requires the existence of a 'superspace' of worlds spanning the range of all possible observations. Through our acts of measurement we are imagined to trace a path through the mesh of possible outcomes.
All the 'worlds' are causally disjoint, and the uncertainty of quantum observation can be interpreted as an artefact of our access to such a limited portion of the 'superspace' of possible worlds. The evolution in the superspace as a whole is entirely deterministic. Detailed ramifications of this 'many worlds' interpretation of quantum mechanics will be explained later, in Chapter 7. One other aspect of the ensemble picture is worth pointing out. There are two levels at which it can be used. On the one hand, we can suppose the ensemble to be composed of 'theoretical' universes in which the quantities we now regard as constants of Nature, e²/ħc, m_N/m_e and so forth, together with the dynamical features of the Universe (its expansion rate, rotation rate, entropy content, etc.) take on all possible values. On the other, we can consider only the latter class of variations. There is an obvious advantage to such a restricted ensemble. The second class of alternative worlds amounts to considering only the consequences of varying

the initial boundary conditions to solutions of Einstein's equations (which we assume here to provide a reliable cosmological theory). An examination of these alternatives does not require any changes in the known laws of physics or in the status of physical parameters. Our Universe appears to be described very accurately by an extremely symmetrical solution of Einstein's cosmological equations; but there is no difficulty in finding other solutions to these equations which describe highly asymmetric universes. One can then examine these 'other worlds' to decide how large a portion of the possible initial conditions gives rise to universes capable of, say, generating stars and planets. A good example of considering this limited ensemble of universes defined by the set of solutions to Einstein's equations is given by Collins and Hawking. Remarkably, they showed that the presently observed Universe may have evolved from very special initial conditions. The present Universe possesses features which are of infinitesimal probability amongst the entire range of possibilities. However, if one restricts this range by the stipulation that observers should be able to exist, then the probability of the present dynamical configuration may become finite. The calculations that lead to these conclusions are quite extensive and are examined more critically elsewhere; we shall discuss them in detail in Chapter 6. It is also interesting to see that the idea that our Universe may be a special point in some superspace containing all possible universes is not a new one; a particularly clear statement of it was given by the British zoologist Charles Pantin in 1951, long before the above-mentioned possibilities were recognized. By reasoning similar to Henderson's, Pantin had argued that the Universe appears to combine a set of remarkable structural 'coincidences' upon which the possibility of our own existence crucially hinges.


. . . the properties of the material Universe are uniquely suitable for the evolution of living creatures. To be of scientific value any explanation must have predictable consequences. These do not seem to be attainable. If we could know that our own Universe was only one of an indefinite number with varying properties we could perhaps invoke a solution analogous to the principle of Natural Selection, that only in certain Universes, which happen to include ours, are the conditions suitable for the existence of life, and unless that condition is fulfilled there will be no observers to note the fact. But even if there were any conceivable way of testing such a hypothesis we should only have put off the problem of why, in all those Universes, our own should be possible?

Another early subscriber to an ensemble picture, this time of the variety suggested by Ellis, was Hoyle. His interest in the many possible worlds of the Anthropic Principle was provoked by his discovery of a remarkable series of coincidences concerning the nuclear resonance levels of the biological elements.

251 The Rediscovery of the Anthropic Principle

Just as the electrons of an atom can be considered to reside in a variety of states according to their energy levels, so it is with nucleons. Neutrons and protons possess an analogous spectrum of nuclear levels. If nucleons undergo a transition from a high to a low energy state then energy is emitted; conversely, the addition of radiant energy can effect an upward transition between nuclear levels. This nuclear chemistry is a crucial factor in the chain of nuclear reactions that power the stars. When two nuclei undergo fusion into a third nuclear state, energy may be emitted. One of the most striking aspects of low-energy nuclear reactions of this type is the discontinuous response of the interaction rate, or cross-section, as the energy of the participant nuclei changes; see Figure 4.1. A sequence of sharp peaks, or resonances, arises in the production efficiency of some nuclei as the interaction energy changes. They occur below some characteristic energy (typically a few ×10 MeV) which depends on the particular nuclei involved in the reaction. Consider the schematic reaction

A + B → C    (4.57)

We could make this reaction resonant by adjusting the kinetic energy of the A and B states so that, when we add to it the intrinsic energy of the nuclei A and B, we obtain a total lying just above a possible energy level of the nucleus C. The interaction (4.57) would then be resonant. Although reactions can be made resonant in this way it may not always be possible to add the right amount of kinetic energy to obtain resonance. In stellar interiors the kinetic energy will be determined by the temperature of the star.


Figure 4.1. Schematic representation of the influence of nuclear resonances upon the cross-section for a particular nuclear reaction to occur. Typically, a series of energies, E*, will exist at which the reactions are maximally efficient, or resonant.


The primary mechanism whereby stars generate gas or radiation pressures to support themselves against gravitational collapse is the exothermic fusion of hydrogen into helium-4. But eventually a star will exhaust the supply of hydrogen in its core, and its immediate source of pressure support disappears. The star possesses a built-in safety valve to resolve this temporary energy crisis: as soon as gravitational contraction begins to increase the average density at the stellar core, the temperature rises sufficiently for the initiation of helium burning (at T ~ 10⁸ K, ρ ~ 10⁵ gm cm⁻³), via

3 He⁴ → C¹² + 2γ    (4.58)

This sequence of events (fuel exhaustion → contraction → higher central temperature → new nuclear energy source) can be repeated several times, but it is known that the nucleosynthesis of all the heavier elements essential to biology rests upon the step (4.58). Prior to 1952 it was believed that the interaction (4.58) proceeded too slowly to be useful in stellar interiors. Then Salpeter pointed out that it might be an 'autocatalytic' reaction, proceeding via an intermediate beryllium step,

2 He⁴ + (99±6) keV → Be⁸
Be⁸ + He⁴ → C¹² + 2γ    (4.59)

Since the Be⁸ lifetime (~10⁻¹⁷ s) is anomalously long compared to the He⁴ + He⁴ collision time (~10⁻²¹ s), the beryllium will co-exist with the He⁴ for a significant time and allow reaction (4.59) to occur. However, in 1952 so little was known about the nuclear levels of C¹² that it was hard to evaluate the influence of the channel (4.59) on the efficiency of (4.58). Two years later Hoyle made a remarkable prediction: in the course of an extensive study of stellar nucleosynthesis he realized that unless reaction (4.58) proceeded resonantly the yield of carbon would be negligible. There would be neither carbon, nor carbon-based life, in the Universe. The evident presence of carbon and the products of carbon chemistry led Hoyle to predict that (4.58) and (4.59) must be resonant, with the vital resonance level of the C¹² nucleus lying near ~7.7 MeV.
This prediction was soon verified by experiment. Dunbar et al. discovered a state with the expected properties lying at 7.656±0.008 MeV. If we examine the level structure of C¹² in detail we find a remarkable 'coincidence' exists there. The 7.6549 MeV level in C¹² lies just above the energy of Be⁸ plus He⁴ (= 7.3667 MeV), and the acquisition of thermal energy by the C¹² nucleus within a stellar interior allows a resonance to occur. Dunbar et al.'s discovery confirmed an Anthropic Principle prediction.
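The arithmetic behind the carbon 'coincidence' is simple to check. A sketch using the level energies quoted in the text; the helium-burning temperature chosen for the kT comparison is an illustrative assumption.

```python
# The Hoyle resonance in C12 sits just above the rest energy of Be8 + He4,
# so modest thermal kinetic energy in a stellar core can bridge the gap and
# make the second step of reaction (4.59) resonant.
E_level = 7.6549    # MeV, resonance level in C12
E_BeHe  = 7.3667    # MeV, Be8 + He4 relative to the C12 ground state
gap_keV = (E_level - E_BeHe) * 1000.0
print(f"gap to be bridged thermally: {gap_keV:.1f} keV")  # ~288 keV

# For scale, kT at an assumed helium-burning temperature of 1e8 K:
k_B = 8.617e-8      # Boltzmann constant, keV per kelvin
print(f"kT at 1e8 K: {k_B * 1e8:.1f} keV")  # ~8.6 keV
```

The gap of roughly 288 keV is small on nuclear scales yet large compared with kT, which is why the reaction rate is so exquisitely sensitive to the exact position of the level.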



However, this is not the end of the story. The addition of another helium-4 nucleus to C¹² could fuse it into oxygen. If this reaction were also resonant, all the carbon would be rapidly burnt to O¹⁶. However, by a further 'coincidence' the O¹⁶ nucleus has an energy level at 7.1187 MeV that lies just below the total energy of C¹² + He⁴ at 7.1616 MeV. Since kinetic energies are always positive, resonance cannot occur in the 7.1187 MeV state. Had the O¹⁶ level lain just above that of C¹² + He⁴, carbon would have been rapidly removed via the alpha capture

C¹² + He⁴ → O¹⁶    (4.60)

Hoyle realized that this remarkable chain of coincidences—the unusual stability of beryllium, the existence of an advantageous resonance level in C¹² and the non-existence of a disadvantageous level in O¹⁶—were necessary, and remarkably fine-tuned, conditions for our own existence and indeed the existence of any carbon-based life in the Universe. These coincidences could, in principle, be traced back to their roots where they would reveal a meticulous fine-tuning between the strengths of the nuclear and electromagnetic interactions, along with the relative masses of electrons and nucleons. Unfortunately no such back-track is practical because of the overwhelming complexity of the large quantum systems involved; such resonance levels can, in practice, only be located by experiment. Hoyle's anthropic prediction is a natural successor to the examples of Henderson. It exhibits further relationships between invariants of Nature which are necessary for our own existence. Writing and lecturing in 1965, Hoyle added some speculation as to the conditions in 'other worlds' where the properties of beryllium, carbon and oxygen might not be so favourably arranged. First, 'suppose that Be⁸ ... had turned out to be moderately stable, say bound by a million electron volts. What would be the effect on astrophysics?' There would be many more explosive stars and supernovae, and stellar evolution might well come to an end at the helium-burning stage because helium would be a rather unstable nuclear fuel,


Had Be⁸ been stable, the helium-burning reaction would have been so violent that stellar evolution with its consequent nucleosynthesis would have been very limited in scope, less interesting in its effects ... if there was little carbon in the world compared to oxygen, it is likely that living creatures could never have developed.
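The oxygen-level energetics quoted earlier can be checked in the same way; a minimal sketch using the level energies given in the text.

```python
# The 7.1187 MeV level of O16 lies *below* the rest energy of C12 + He4, and
# since relative kinetic energy is always positive the level cannot be reached
# resonantly in alpha capture, sparing the carbon: reaction (4.60) stays slow.
E_O_level = 7.1187   # MeV, level in O16
E_CHe     = 7.1616   # MeV, C12 + He4 relative to the O16 ground state
deficit_keV = (E_CHe - E_O_level) * 1000.0
print(f"level lies {deficit_keV:.1f} keV below threshold: no resonant capture")
```

A shift of the O¹⁶ level upward by only a few tens of keV would have reversed the situation and burnt the carbon away.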

Hoyle chose not to regard these coincidences as absolute. Rather, he favoured the idea that the so-called 'constants' of Nature possess a spatial variation. This he believed to be suggested by the additional coincidence that the dimensionless ratio of the gravitational and electric interaction strengths (~10⁻⁴⁰) is numerically related to the total number of nucleons


(N ~ 10⁸⁰) in the observable Universe by a 1/√N relation (4.14) that is suggestive of a statistical basis if the coupling constants have some Gaussian probability distribution in space. If this were true (although there is no evidence for such a view) then the coincidences discussed above would not hold everywhere in the Universe, and life could only evolve in regions where they did,


. . . we can exist only in the portions of the universe where these levels happen to be correctly placed. In other places the level in O¹⁶ might be a little higher, so that the addition of α-particles to C¹² was highly resonant. In such a place ... creatures like ourselves could not exist.
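Hoyle's 1/√N relation is easy to check numerically. A sketch (not from the text) in Gaussian CGS units, with N ~ 10⁸⁰ nucleons an assumed round value.

```python
import math

# The ratio of the gravitational to the electric interaction strength,
# G*m_N*m_e/e**2 ~ 1e-40, compared against 1/sqrt(N) for N ~ 1e80 nucleons.
G, m_N, m_e = 6.67e-8, 1.67e-24, 9.11e-28   # Gaussian CGS
e2 = (4.80e-10)**2
N = 1e80

ratio = G * m_N * m_e / e2
print(f"G m_N m_e / e^2 ~ 10^{math.log10(ratio):.1f}")
print(f"1/sqrt(N)       ~ 10^{math.log10(1 / math.sqrt(N)):.1f}")
```

The two quantities agree to within an order of magnitude, which is the coincidence (4.14) that suggested a statistical origin to Hoyle.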


When it comes to assessing the consequences of making small changes in the dimensionless constants of Nature one is on shaky ground (even if we ignore the possibility of an all-encompassing unified theory that fixes the values of these constants uniquely). Although a small change in a dimensionless quantity, like Gm_N²/ħc or the resonance levels in C¹² and O¹⁶, might so alter the rate of cosmological or stellar evolution that life could not evolve, how do we know that compensatory changes could not be made in the values of other constants to recreate a set of favourable situations? Interestingly, one can say something quantitative and general about this difficulty. Suppose, for simplicity, we treat the laws of physics as a set of N ordinary differential equations governing various physical quantities x₁, x₂, ..., x_N (allowing them to be partial differential equations would probably only reinforce the conclusion) that contain a set of constant parameters λᵢ which we call the constants of physics:

ẋ = F(x; λᵢ);    x ∈ (x₁, x₂, ..., x_N)    (4.61)

The structure of our world is represented by the solutions of this system; let us call the particular solution that we observe x*. It will depend upon the particular set of fundamental constants we observe; call these λ*. We can ask if the solution x* is stable with respect to small changes of the parameters λ*. This is the type of question addressed recently by mathematicians. Any solution of the system (4.61) corresponds to a trajectory in an N-dimensional phase space. In two dimensions (N = 2) the qualitative behaviour of the possible trajectories is completely classified. Trajectories in the phase plane cannot intersect one another, and this property ensures that the possible stable asymptotic behaviours are simple: after large times the trajectories either approach a 'focus' (which represents an oscillatory approach towards a stationary solution) or a 'limit cycle' (which represents an oscillatory approach towards a periodic solution). However, when N ≥ 3, trajectories can behave in a far more exotic fashion. Now they are able to cross over one another and develop complicated knotted configurations without actually intersecting. All the possible



detailed behaviours are not known, but when N ≥ 3 it has been shown that the generic behaviour of trajectories is approach to a 'strange attractor'. This is a compact region of the phase space containing neither foci nor limit cycles, and in which all neighbouring solution trajectories diverge from each other exponentially whether followed forwards or backwards in time; so there is sensitive dependence on starting conditions. An infinitesimal change in the starting position of a solution trajectory will soon develop into a huge difference in subsequent position. This tells us that in our case, so long as N ≥ 3 (which it will certainly be in our model equations (4.61)), the solution x* will become unstable to changes in λ away from λ* when they exceed some critical (but small) value. If the original attractor at x* was not 'strange' then our set of laws and constants is very special in the space of all choices for the set, and a small change in one of them will bring about a catastrophic change in Nature's equilibrium solutions x*. If the attractor at x* is 'strange' then there may be many other similar sets in the λ parameter space. This might ensure that there are other permutations of the values of the constants of Nature allowing life.
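The sensitive dependence described above can be demonstrated with any three-dimensional system possessing a strange attractor. The Lorenz system, used here purely as an illustration (it is not discussed in the text), shows a perturbation of one part in 10⁹ being amplified to macroscopic size; the parameter values are the conventional chaotic choice.

```python
import math

# Two copies of the Lorenz system, started a distance 1e-9 apart in x, are
# integrated side by side with forward Euler; their separation is tracked.
def step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # differ by one part in 10^9
max_sep = 0.0
for _ in range(30000):       # 30 time units of integration
    a, b = step(a), step(b)
    max_sep = max(max_sep, abs(a[0] - b[0]))

print(f"maximum x-separation reached: {max_sep:.3g}")
```

For N = 2 no such divergence is possible: trajectories confined to the phase plane are forced into foci or limit cycles, exactly as stated above.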

4.7 Are There Any Laws of Physics?

There is no law except the law that there is no law.
J. A. Wheeler

The ensembles of worlds we have been outlining involve either hypothetical other possible universes possessing different sets of fundamental constants or different initial conditions. That is, they appeal to a potential non-uniqueness in both the laws of Nature and their associated initial conditions. A contrasting approach is to generate the ensemble of possibilities within a single Universe. One means of doing this can be found in the work of some particle physicists on so-called 'chaotic gauge theories'. Instead of assuming that Nature is described by gauge symmetries whose particular form then dictates which elementary particles can exist and how they interact, one might imagine that there are no symmetries at high energies at all: in effect, that there are no laws of physics. Human beings have a habit of perceiving in Nature more laws and symmetries than truly exist there. This is an understandable error, in that science sets out to organize our knowledge of the world as well as increase it. However, during the last twenty years we have seen a gradual erosion of 'principles' and conserved quantities as Nature has revealed a deep and previously unsuspected flexibility. Many quantities that were traditionally believed to be absolutely conserved—parity, charge conjugation, baryon and lepton number—appear to be violated in elementary particle interactions. The neutrino was always believed to be a massless particle, but recent experiments have provided evidence that it possesses a

The Rediscovery of the Anthropic Principle

256

tiny rest mass —30 eV. Likewise, the long-held myth that the proton is an absolutely stable particle may be revised by recent theoretical arguments and tentative experimental evidence for its instability. Particle physicists have now adopted an extremely revolutionary spirit and it is reasonable to question other long-standing conservation laws and assumptions—is charge conserved, is the proton massless, is the electron stable, is Newton's law of gravity exact at low energy, is the neutron neutral... ? The natural conclusion of this trend from more laws of Nature to less is to ask the overwhelming question: 'Are there any laws of Nature at all?' Perhaps complete microscopic anarchy is the only law of Nature? If this were even partially true, it would provide an interesting twist to the traditional Anthropic arguments which appeal to the fortuitous coincidence of life-supporting laws of Nature and numerical values of the dimensionless constants of physics. It is possible that the rules we now perceive governing the behaviour of matter and radiation have a purely random origin, and even gauge invariance may be an 'illusion': a selection effect of the low-energy world we necessarily inhabit. Some preliminary attempts to flesh out this idea have shown that even if the underlying symmetry principles of Nature are random—a sort of chaotic combination of all possible symmetries—then it is possible that at low energies («10 K) the appearance of local gauge invariance is inevitable under certain circ*mstances. A form of 'natural' selection may occur wherein, as the temperature of the Universe falls, fewer and fewer of the entire gamut of 'almost symmetries' have a significant impact upon the behaviour of elementary particles, and orderliness arises. Conversely, as the Planck energy (which corresponds to a temperature of 10 K) is approached, this picture would predict chaos. Our low-energy world may be necessary for physical symmetries as well as physicists. 
Before mentioning some of the detailed, preliminary calculations that have been done in pursuit of this 'chaotic gauge theory' idea, let us recall a simpler example of what might be occurring. If you went out into the street and gathered information, say, on the heights of everyone passing by over a long period of time, you would find the graph of the frequency of individuals versus height tending more and more closely towards a particular shape. This characteristic 'bell' shape is called the 'Normal' or 'Gaussian' distribution by statisticians. It is ubiquitous in Nature. The Gaussian is characteristic of the frequency distribution of all truly random processes regardless of their specific physical origin. As one goes from one random process to another, the resulting Gaussians differ only in their width and the point about which they are centred. A universality of this sort might conceivably be associated with the laws of physics if they had a random origin.[113]
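This universality, the central limit theorem, can be checked in a few lines. In the minimal sketch below the two underlying distributions and the sample sizes are arbitrary choices; whatever they are, the histograms of sample means differ only in centre and width:

```python
import random
import statistics

random.seed(0)

def sample_mean(draw, n=400):
    # Mean of n independent draws from an arbitrary random process `draw`.
    return sum(draw() for _ in range(n)) / n

# Two very different 'random processes': a coin flip and an exponential delay.
coin = lambda: random.choice([0.0, 1.0])
delay = lambda: random.expovariate(1.0)

coin_means = [sample_mean(coin) for _ in range(2000)]
delay_means = [sample_mean(delay) for _ in range(2000)]

# Both collections of means are approximately Gaussian; only their centres
# (0.5 versus 1.0) and widths differ, regardless of the parent distribution.
print(statistics.mean(coin_means), statistics.stdev(coin_means))
print(statistics.mean(delay_means), statistics.stdev(delay_means))
```

The centre of each Gaussian is the mean of the parent process and its width shrinks like 1/√n, which is all that distinguishes one case from another.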



Nielsen et al.[113,114] have shown that if the fundamental Lagrangian from which physical laws are derived is chosen at random, then the existence of local gauge invariance at low energy can be a stable, though not generic, phenomenon in the space of all Lagrangian theories. That is, the presence, say, of a massless photon is something that will emerge from an open set (but not every open set) of Lagrangians picked from the space of all possible functional forms. This will give the illusion of a local U(1) gauge symmetry at low energy, and also of a massless photon. Suppose that a programme of this sort could be substantiated and provide an explanation for the symmetries of Nature we currently observe—according to Nielsen, it is even possible to estimate the order of magnitude of the fine structure constant in lattice models of random gauge theories;[115] if so, then perhaps some of the values of the fundamental constants might have a quasi-statistical character. In that case, the Anthropic interpretation of Nature must be slightly different. If the laws of Nature manifested at low energy are statistical in origin, then again a real ensemble of different possible universes actually does exist, and our own Universe is one member of the ensemble. The question now is: are all the features of our Universe stable or generic aspects of the ensemble, or are they special? If they are unstable or non-generic, stochastic gauge theories require an Anthropic interpretation; they also allow, in principle, a precise mathematical calculation of the probabilities of seeing a particular aspect of the present world, and a means of evaluating the statistical significance of any cognizable Universe. In general, we can see that the crux of any analysis of this type, whatever its detailed character, is going to be the temperature of the Universe. Only in a relatively cool Universe, T ≪ 10^32 K, will laws or symmetries of Nature be dominant and discernible over chaos; but, likewise, only in a cool Universe can life exist.
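The kind of ensemble probability mentioned here can at least be illustrated schematically. In the toy Monte Carlo below, both the uniform prior on a dimensionless coupling g and the narrow 'life-permitting window' are invented for illustration only; nothing in the text fixes either choice:

```python
import random

random.seed(1)

# Toy ensemble: suppose a dimensionless coupling g is fixed at random in each
# member of the ensemble, here uniformly on (0, 1) -- an assumed prior.  A
# 'cognizable' universe is one whose g falls inside a narrow life-permitting
# window; the window below is invented purely for illustration.
window = (0.0070, 0.0076)
trials = 1_000_000
hits = sum(1 for _ in range(trials) if window[0] < random.random() < window[1])
print(hits / trials)   # the ensemble 'probability' of a cognizable universe
```

Under a flat prior the answer is simply the width of the window; the point of a genuine stochastic gauge theory would be to derive both the prior and the window from the dynamics rather than assume them.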
The existence of physics and physicists may be more closely linked than we suspected. Other physicists have adopted a point of view diametrically opposite to that of the stochastic gauge theorists: for instance, S. W. Hawking, B. S. DeWitt, and, in the early 1960s, J. A. Wheeler have suggested that there is only one, unique law of physics, for the reason that only one law is logically possible! The main justification for this suggestion is scientific experience: it is exceedingly difficult to construct a mathematical theory which is fully self-consistent, universal, and in agreement with our rather extensive observations. The self-consistency problem can manifest itself in many ways, but perhaps the most significant example in the last half-century is the problem of infinities in quantum field theory. Almost all quantum field theories one can write down are simply nonsensical, for they assert that most (or all) observable quantities are infinite. Only two very tiny classes of quantum field Lagrangians do not have this difficulty: finite quantum field theories and renormalizable quantum field theories. Thus, the mere requirement of mathematical consistency enormously restricts the class of acceptable field theories. S. Weinberg,[163] in particular, has stressed how exceedingly restrictive the requirement of renormalizability really is, and how important this restriction has been in finding accurate particle theories. Furthermore, most theories which scientists have written down and developed are not universal; they can apply only to a limited number of possible observations. Most theories of gravity, for example, are incapable of describing both the gravitational field on the scale of the solar system and the gravitational field on the cosmological scale. Einstein's general theory of relativity is one of the few theories of gravity that can be applied on all scales. Universality is a minimum requirement for a fundamental theory. Since, as Popper has argued,[164] we cannot prove a theory but only falsify one, we can never know whether a universal theory is in fact true. However, a universal theory may in principle be true; a non-universal theory we know to be false even before we test it experimentally. Finally, our observations are now so extensive that it is exceedingly difficult to find a universal theory which is consistent with them all. In the case of quantum gravity, these three requirements turn out to be so restrictive that Wheeler and DeWitt have suggested that the correct quantum gravity theory equation (which is itself unique) can have only one unique solution![165] We have discussed in sections 2.8 and 3.10 the philosophical attractiveness of this unique solution: it includes all logically possible physical universes (this is another reason for believing it to be unique, for what else could possibly exist?). The stochastic gauge theory also has this attractive feature of realizing all possibilities.
The unique law theory may, however, allow a global evolution, whereas the stochastic gauge theory is likely to be globally static, like Whitehead's cosmology (see section 3.10).[166]

4.8 Dimensionality

We see ... what experimental facts lead us to ascribe three dimensions to space. As a consequence of these facts, it would be more convenient to attribute three dimensions to it than four or two, but the term 'convenient' is perhaps not strong enough; a being which had attributed two or four dimensions to space would be handicapped in a world like ours in the struggle for existence. H. Poincaré

The fact that we perceive the world to have three spatial dimensions is something so familiar to our experience of its structure that we seldom pause to consider the direct influence this special property has upon the laws of physics. Yet some have done so, and there have been many intriguing attempts to deduce the expediency or inevitability of a three-dimensional world from the general structure of the physical laws themselves. The thrust of these investigations has been to search for any unique or unusual properties of three-dimensional systems which might render them naturally preferred. It transpires that the dimensionality of the World plays a key role in determining the form of the laws of physics and in fashioning the roles played by the constants of Nature. Whatever one's view of such flights of rationalistic fancy, they undeniably provide an explicit example of the use of an Anthropic Principle that pre-dates the applications of Dicke[1,91] and Carter.[93] In 1955 Whitrow[116] suggested that a new resolution of the question 'Why do we observe the Universe to possess three dimensions?' could be obtained by showing that observers could only exist in such universes:

I suggest that a possible clue to the elucidation of this problem is provided by the fact that physical conditions of the Earth have been such that the evolution of Man has been possible... this fundamental topological property of the world... could be inferred as the unique natural concomitant of certain other contingent characteristics associated with the evolution of the higher forms of terrestrial life, in particular of Man, the formulator of the problem.

This anthropic approach to the dimensionality 'problem' was also taken in a later, but apparently independent, study of atomic stability in universes possessing an arbitrary dimension by the Soviet physicists Gurevich and Mostepanenko.[117] They envisaged an ensemble of universes ('metagalaxies') containing space-times of all possible dimensionalities and enquired as to the nature of the habitable subset of worlds; as a result of their investigation of atomic stability, they concluded that

If we suppose that in the universe metagalaxies with various number of dimensions can appear it follows our postulates that atomic matter and therefore life are possible only in 3-dimensional space.

Interest in explaining why the world has three dimensions is by no means new. From the commentaries of Simplicius and Eustratius, Ptolemy is known to have written a study of the three-dimensional nature of space entitled 'On Dimensionality',[118] in which he argued that no more than three spatial dimensions are possible; unfortunately this work has not survived. What does survive is evidence that the dramatic difference between systems identical in every respect but spatial dimension was discovered and appreciated by the early Greeks. The Platonic solids, first discovered by Theaitetos,[119] brought them face-to-face with a dilemma: why are there an infinite number of regular, convex, two-dimensional polygons but only five regular three-dimensional polyhedra? This mysterious property of physical space was later to spawn many mystical and metaphysical 'interpretations'—a veritable 'music of the spheres'. In the modern period, mathematicians did not become actively involved in attempting a rigorous formulation of the concept of dimension until the early nineteenth century, although as early as 1685 Wallis had speculated about the local existence of a fourth geometrical dimension.[120] During the nineteenth century Möbius considered the problem of superimposing two enantiomorphic solids by a rotation through 4-space,[121] and later Cayley, Riemann, and others developed the systematic study of N-dimensional geometry, although the notion of dimension they employed was entirely intuitive. It sufficed for them to regard dimension as the number of independent pieces of information required for a unique specification of a point in some coordinate system. Gradually the need for something more precise was impressed upon mathematicians by a series of counter-examples and pathologies to their simple intuitive notions. For example, Cantor and Peano produced, respectively, injective and continuous mappings of ℜ into ℜ² to refute ideas that the unit square contained more points than the unit line.[122] After unsuccessful attempts by Poincaré, it was Brouwer who, in 1911, established the key result: he showed that there is no continuous injective mapping of ℜ^N into ℜ^M if N > M.[123] The modern definition of dimension, due to Menger and Urysohn, grew out of this fundamental result.[124] The question of the physical relevance of spatial dimension seems to arise first in the early work of Immanuel Kant. He realized that there was an intimate connection between the inverse square law of gravitation and the existence of precisely three spatial dimensions, although he regarded the three spatial dimensions as a consequence of Newton's inverse square law rather than vice versa.[125]
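The Greek dilemma can be checked by elementary counting. For a regular polyhedron in which q regular p-gons meet at each vertex, the corner angles q(1 - 2/p)π must total less than 2π, which reduces to 1/p + 1/q > 1/2; a short sketch enumerates the admissible pairs:

```python
# For p >= 6 and q >= 3 we have 1/p + 1/q <= 1/6 + 1/3 = 1/2, so only
# p, q in {3, 4, 5} can possibly qualify; the search window below is ample.
solids = [(p, q) for p in range(3, 6) for q in range(3, 6)
          if 1 / p + 1 / q > 1 / 2]
print(solids)   # exactly five pairs: the five Platonic solids
```

The five survivors (3,3), (3,4), (3,5), (4,3), (5,3) are the tetrahedron, octahedron, icosahedron, cube, and dodecahedron, whereas in two dimensions every p ≥ 3 yields a regular polygon, so the list there is infinite.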
As we have already described in Chapter 2, William Paley later spelt out the consequences of a change in the form of the law of gravitation for our existence. Many of the points he summarized in 1802 have been rediscovered by modern workers examining the manner in which the gravitational potential depends on spatial dimension, which we shall discuss below. In the twentieth century a number of outstanding physicists have sought to accumulate evidence for the unique character of physics in three dimensions. Ehrenfest's famous article of 1917 was entitled 'In what way does it become manifest in the fundamental laws of physics that space has three dimensions?',[126] and it explained how the existence of stable planetary orbits, the stability of atoms and molecules, the unique properties of wave operators, and axial vector quantities are all essential manifestations of the dimensionality of space. Soon afterwards, Hermann Weyl pointed out that only in (3+1)-dimensional space-times can Maxwell's theory be founded upon an invariant, integral form of the action; only in (3+1) dimensions is it conformally invariant, and this[127]

... does not only lead to a deeper understanding of Maxwell's theory but the fact that the world is four dimensional, which has hitherto always been accepted as merely 'accidental', becomes intelligible through it.

In more recent times a number of novel ideas have been added to the store of examples provided by Ehrenfest, and these form the basis of the anthropic arguments of Whitrow, Gurevich, and Mostepanenko. These arguments, like most other anthropic deductions, rely on the knowledge of our ignorance being complete, and assume a 'Principle of Similarity'—that alternative physical laws should mirror their actual form in three dimensions as closely as possible. As we have already stressed, the development of the first quantitative theory of gravity by Newton brought with it the first universal constant of Nature, and this in turn enabled scientific deductions of a very general nature to be made regarding the motions of the heavenly bodies. In his Natural Theology of 1802, William Paley considered in some detail the consequences of a more general law of gravitational attraction than the inverse square law. What, he asks, would be the result if the gravitational force between bodies varied as an arbitrary power law of their separation, say as

F ∝ r^-N    (4.62)

Since he believed 'the permanency of our ellipse is a question of life and death to our whole sensitive world', he focused his attention upon the connection between the index N and the stability of elliptical planetary orbits about the Sun. He determined that unless N < 1 or N ≥ 4 no stable orbits are possible and, furthermore, only in the cases N = 3 and N = 0 is Newton's theorem, which allows extended spherically symmetric bodies to be replaced by point masses at their centres of gravity, true. The case N = 0 he regarded as unstable and so excluded, and this provoked Paley to argue that the existence of an inverse square law in Nature was a piece of divine pre-programming with our continued existence in mind. Only in universes in which gravity abides by an inverse square law could the solar system remain in a stable state over long time-scales.
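The stability question can be explored numerically. The sketch below tests whether a circular orbit in an attractive power-law force F = -k/r^n sits at a minimum of the effective radial potential, recovering the textbook condition n < 3 for stability of circular orbits (the units and parameter values are arbitrary choices, not taken from the text):

```python
import math

def u_eff(r, n, L=1.0, k=1.0):
    # Effective radial potential for an attractive central force F = -k / r**n:
    # centrifugal barrier plus the potential V(r) satisfying V'(r) = k / r**n.
    centrifugal = L**2 / (2 * r**2)
    potential = -k / ((n - 1) * r**(n - 1)) if n != 1 else k * math.log(r)
    return centrifugal + potential

def circular_orbit_is_stable(n, L=1.0, k=1.0, h=1e-4):
    # Radius of the circular orbit: L**2 / r**3 = k / r**n, so
    # r_c = (L**2 / k)**(1 / (3 - n))  (n = 3 is the singular, excluded case).
    r_c = (L**2 / k) ** (1.0 / (3.0 - n))
    # Stable iff r_c is a local minimum of u_eff (positive second difference).
    curv = (u_eff(r_c + h, n) - 2 * u_eff(r_c, n) + u_eff(r_c - h, n)) / h**2
    return curv > 0

for n in (0, 1, 2, 4, 5):
    print(n, circular_orbit_is_stable(n))   # stable precisely when n < 3
```

At the circular radius the analytic curvature is (3 - n)L²/r_c⁴, so the numerical test simply reproduces the n < 3 criterion; the inverse square case n = 2 lies safely inside the stable range, while steeper force laws do not.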
Following up earlier qualitative remarks of Kant and others,[128] Ehrenfest[126] gave a quantitative demonstration of the connection between results of the sort publicized by Paley and the dimensionality question. He pointed out that the Poisson-Laplace equation for the gravitational field of force in an N-dimensional space has a power-law solution for the gravitational potential, of the form

φ ∝ r^-(N-2),   N ≠ 2    (4.63)


for a radial distribution of material. The inverse square law of Newton then follows as an immediate consequence of the tri-dimensionality of space. A planet can only describe a closed elliptical orbit in a space with N ≠ 3 if its path is circular but, as Paley also pointed out, such a configuration is unstable to small perturbations. In three dimensions, of course, stable elliptical orbits are possible. If hundreds of millions of years in a stable orbit around the Sun are necessary for planetary life to develop, then such life could only develop in a three-dimensional world.[129] In general, the existence of stable, periodic orbits requires of the central force field F(r) = -dφ/dr that r^3 F(r) → 0 as r → 0 and r^3 F(r) → ∞ as r → ∞.[130] Thus, by (4.62), we require N < 3, as one would expect.

One of Newton's classic results was his proof that if two spheres attract each other under an inverse square law of force then they may both be replaced by points concentrated at the centre of each sphere, each with a mass equal to that of the associated sphere. We can ask what is the general form of the gravitational potential with this property. Consider a spherical shell of radius a whose surface density is σ and whose centre, at O, lies at distance r from some arbitrary point P outside its edge. If the gravitational potential at distance r is φ(r), then the potential at P due to the shell will be the same as that due to some point mass M(a) at O if

M(a)φ(r) + 2πσa A(a) = (2πσa/r) ∫_{r-a}^{r+a} x φ(x) dx    (4.64)

where A(a) is a constant that we can always add to the potential without altering the associated force law. There are two classes of solution to (4.64):[131]

(a) the Yukawa-type potentials,

φ(r) = (A e^{-μr} + B e^{μr})/r

with an equivalent point mass M(a) fixed by a, σ, and μ;

(b) the Newtonian potential, φ(r) ∝ r^-1.

For the r^-1 potential the interior of a spherical shell is an equipotential region; in general, φ(r) will only have this property if, for r < a,[132]

2arφ(a) = ∫_{a-r}^{a+r} x φ(x) dx    (4.70)

and this has the unique solution

φ(r) = A/r + C    (4.71)

where C and A are arbitrary real constants, and C can be set equal to zero without altering the force law.
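The point-mass equivalence in (4.64) can be probed numerically. The sketch below evaluates the shell integral for φ = 1/r, recovering exactly the point-mass potential M/r at every exterior distance, and then for a contrasting 1/r² interaction law, for which no single equivalent point mass works (the shell radius, density, and test distances are arbitrary choices):

```python
import math

def shell_potential(phi, a, r, sigma=1.0, steps=20000):
    # Potential at distance r > a from the centre of a thin spherical shell of
    # radius a and surface density sigma, for a point-interaction law phi:
    #   (2*pi*sigma*a / r) * integral from r-a to r+a of x * phi(x) dx
    lo, hi = r - a, r + a
    h = (hi - lo) / steps
    total = sum((lo + (i + 0.5) * h) * phi(lo + (i + 0.5) * h)
                for i in range(steps)) * h       # midpoint rule
    return 2 * math.pi * sigma * a / r * total

a, sigma = 1.0, 1.0
mass = 4 * math.pi * sigma * a**2                # total mass of the shell

newton = lambda x: 1.0 / x                       # phi proportional to 1/r
for r in (2.0, 5.0, 10.0):
    # For the 1/r potential the shell acts exactly like a point mass at O:
    print(shell_potential(newton, a, r), mass / r)

inv_sq = lambda x: 1.0 / x**2                    # phi proportional to 1/r**2
# The ratio shell/point now depends on r, so no equivalent point mass exists:
print(shell_potential(inv_sq, a, 2.0) * 2.0**2 / mass,
      shell_potential(inv_sq, a, 10.0) * 10.0**2 / mass)
```

For φ = 1/r the integrand xφ(x) is constant, so the shell is indistinguishable from a point mass; for φ = 1/r² the apparent 'point mass' changes with the observer's distance, which is exactly the failure of Newton's theorem outside the two privileged force laws.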


These results show why gravitation physics is simplest in three spatial dimensions. The inverse square law of force that is dictated by the three dimensions of space is unique in that it allows the local gravitational field within the spherical region we considered to be evaluated independently of the structure of the entire Universe beyond its outer boundary. Without this remarkable safeguard, our local world would be at the mercy of changes in the gravitational field far away across our Galaxy and beyond.

It is widely known that matter is stable: by this we mean that the ground-state energy of an atom is finite. However, the common text-book argument which employs the Heisenberg Uncertainty Principle to demonstrate this is actually false.[133] Although the energy equation for a single electron of mass m and charge -e in circular orbit around a nuclear charge +e gives a total atomic energy of

E(r) = ħ^2/2mr^2 - e^2/r    (4.72)

and this energy apparently has a finite minimum at r_0 ~ ħ^2/me^2, where E'(r_0) = 0, it is in principle possible for the electron to be distributed in a number of widely separated wave packets. The packet close to the nucleus could then have an arbitrarily sharp momentum and position specification at the expense of huge uncertainties in the other packets. In this manner the ground-state energy might be made arbitrarily negative. A much stronger, non-linear constraint is required in addition to the Heisenberg Uncertainty Principle if one is to rule out ground-state energies becoming arbitrarily negative. The strongest result is supplied by the non-linear Sobolev inequality.[134] This supplies the required bound on the ground-state energy and shows that matter is indeed stable in quantum theory. For these technical reasons, analyses of atomic stability such as those of Ehrenfest[126] and Büchel[129] which use only the Uncertainty Principle must be regarded as only heuristic. However, their results are confirmed by an exact analysis of the Schrödinger equation in simple cases.

In 1917, Ehrenfest considered only the simple Bohr theory of an N-dimensional hydrogen atom. He found the energies and radii of the energy levels and noted that when N ≥ 5 the energy levels increase with quantum number, whereas the radii of the Bohr orbits, r_λ(N) ~ (me^2 λ^-2 ħ^-2)^{1/(N-4)}, decrease with increasing quantum number λ, and electrons just fall into the nucleus. Alternatively, if we write down the total energy for the system and use the Uncertainty Principle to estimate the kinetic energy resisting localization, we have (p is the momentum and V the potential energy)

E = p^2/2m + V ≳ ħ^2/2mr^2 - e^2/r^(N-2)    (4.73)
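The content of (4.73) is easy to check numerically; a minimal sketch in units with ħ = m = e = 1, where the sampling grid is an arbitrary choice (the marginal case N = 4 collapses in these units to E = -1/2r² and so also shows no interior minimum):

```python
def energy(r, N):
    # Uncertainty-principle estimate of the total energy, as in eq. (4.73),
    # in units with hbar = m = e = 1:  E(r) = 1/(2 r**2) - 1/r**(N - 2)
    return 1.0 / (2.0 * r**2) - 1.0 / r ** (N - 2)

def has_finite_minimum(N):
    # Sample r over many decades; a finite minimum exists when the lowest
    # sampled energy occurs at an interior radius rather than as r -> 0.
    rs = [10 ** (k / 100.0) for k in range(-500, 301)]
    es = [energy(r, N) for r in rs]
    return es.index(min(es)) > 0

for N in (3, 5, 6):
    print(N, has_finite_minimum(N))
```

For N = 3 the centrifugal-like 1/r² term wins at small r and a finite minimum survives; for N ≥ 5 the attractive term dominates as r → 0 and the energy is unbounded below, which is the collapse described in the text.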

It can be seen that for N ≥ 5 there is no energy minimum. For N = 4 the situation is ambiguous because there ceases to exist any characteristic length in the system; this also indicates that no minimum energy scale can exist. It is possible to demonstrate this more rigorously by including special relativistic effects in the energy equation (4.73). Thus, for N = 4, the relativistic energy is (where m_0 is now the rest mass of the electron)

E = (p^2 c^2 + m_0^2 c^4)^{1/2} + V    (4.74)

and using p ~ ħ/r with V = -e^2/r^2 this gives E ~ (c^2 ħ^2/r^2 + m_0^2 c^4)^{1/2} - e^2/r^2, and so, as r → 0, E → -e^2/r^2 and E can become arbitrarily negative; hence no stable minimum can exist. On the basis of these arguments it has been claimed that, if we assume the structure of the laws of physics to be independent of the dimension, stable atoms, chemistry, and life can only exist in N < 4 space dimensions.

References

... conformal transformations, defined by ds^2 → Ω^2(x) ds^2, leave the light-cone structure of space-time invariant; but because the metric is transformed to g̃ = Ω^2(x)g they do not leave the Einstein equations invariant unless Ω is constant. (Note: conformal transformations are not merely coordinate transformations, because the latter leave ds^2 invariant.) Thus, Einstein's equations are not scale invariant, whereas Maxwell's equations for free electromagnetic fields are.
70. F. Hoyle and J. V. Narlikar, Proc. R. Soc. A 277, 178, 184 (1966); A 290, 143, 162, 177 (1966); A 294, 138 (1966); Nature 233, 41 (1971); Mon. Not. R. astron. Soc. 155, 305, 323 (1972); Action at a distance in physics and cosmology (Freeman, San Francisco, 1974).
71. For a review of Jordan's theory and related work by Thiry and Fierz, see D. R. Brill, Proc. XX Course Enrico Fermi Int. School of Physics (Academic Press, NY, 1962), p. 50. See also M. Fierz, Helv. Phys. Acta 29, 128 (1956). Other earlier developments were made by C. Gilbert, Mon. Not. R. astron. Soc. 116, 678, 684 (1960); and in The application of modern physics to the earth and planetary interiors, ed. S. K. Runcorn (Wiley, NY, 1969), pp. 9-18.
72. P. S. Wesson, Cosmology and geophysics (Adam Hilger, Bristol, 1978).
73. F. J. Dyson, in Aspects of quantum theory, ed. A. Salam and E. P. Wigner (Cambridge University Press, London, 1972). The self-consistent form of all possible variations in these parameters was considered by W. Eichendorf and M. Reinhardt, Z. Naturf. 28, 529 (1973).
74. P. Pochoda and M. Schwarzschild, Astrophys. J. 139, 587 (1964).
75. G. Gamow, Proc. natn. Acad. Sci. U.S.A. 57, 187 (1967).
76. G. Gamow, Phys. Rev. Lett. 19, 757, 913 (1967).
For an account of Gamow's contribution to the 'large number' problem, see R. Alpher, Am. Scient. 61, 52 (1973).
77. F. Dyson, Phys. Rev. Lett. 19, 1291 (1967); A. Peres, Phys. Rev. Lett. 19, 1293 (1967); T. Gold, Nature 175, 526 (1967); S. M. Chitre and Y. Pal, Phys. Rev. Lett. 20, 2781 (1967); J. N. Bahcall and M. Schmidt, Phys. Rev. Lett. 19, 1294 (1967).
78. K. P. Stanyukovich, Sov. Phys. Dokl. 7, 1150 (1963); see also D. Kurdgelaidze, Sov. Phys. JETP 20, 1546 (1965).
79. J. O'Hanlon and K. H. Tam, Prog. Theor. Phys. 41, 1596 (1969); Y. M. Kramarovskii and V. P. Chechev, Sov. Phys. Usp. 13, 628 (1971); V. P. Chechev, L. E. Gurevich, and Y. M. Kramarovsky [sic], Phys. Lett. B 42, 261 (1972).
80. P. C. W. Davies, J. Phys. A 5, 1296 (1972), and references therein. E. Schrödinger had discussed 'large number' coincidences involving the strong coupling in Nature 141, 410 (1938). The best limits claimed for the constancy

of the weak, strong, and electromagnetic couplings are those of I. Shlyakhter, Nature 264, 340 (1976), and are based upon an interpretation of data from the Oklo uranium mine in Gabon (see M. Maurette, Ann. Rev. Nucl. Sci. 26, 319 (1976)). Shlyakhter bases his argument upon the abundance ratio of the two light samarium isotopes, Sm-149:Sm-147. In ordinary samarium the natural ratio of these isotopes is ~0.9, but in the Oklo sample it is ~0.02. This depletion is due to the bombardment received from thermal neutrons over a period of many millions of years during the running of the natural 'reactor'. The capture cross-section for thermal neutrons on samarium-149 can be measured in the laboratory as ~55 kb and is dominated by a strong capture resonance when the neutron source has energy ~0.1 eV. The Oklo samples imply that the cross-section could not have exceeded 63 kb two billion years ago (all other things being equal), and the capture resonance cannot have shifted by as much as 0.02 eV over the same period. The position of this resonance sensitively determines the relative binding energies of different samarium isotopes in conjunction with the weak, α_w, strong, α_s, and electromagnetic, α, couplings. The allowed time-variations are constrained by α̇_w/α_w ≲ 10^-12 yr^-1, α̇/α ≲ 10^-17 yr^-1, and α̇_s/α_s ≲ 5 × 10^-19 yr^-1.
81. J. D. Bekenstein, Comm. Astrophys. 8, 89 (1979); M. Harwit, Bull. Astron. Inst. Czech. 22, 22 (1971).
82. W. A. Baum and R. Florentin-Nielson, Astrophys. J. 209, 319 (1976); J. E. Solheim, T. G. Barnes III, and H. J. Smith, Astrophys. J. 209, 330 (1976); L. Infeld, Z. Physik 171, 34 (1963).
83. R. d'E. Atkinson, Phys. Rev. 170, 1193 (1968); H. C. Ohanian, Found. Phys. 7, 391 (1977).
84. T. C. van Flandern, in On the measurement of cosmological variations of the gravitational constant, ed. L. Halpern (University of Florida, 1978), p. 21, concludes that Ġ/G ≈ -6 × 10^-11 yr^-1. The latest Viking Lander data yield a limit |Ġ/G| < 3 × 10^-11 yr^-1; see R. D. Reasenberg, Phil. Trans. R. Soc. A 310, 227 (1983).
85. J. B. S. Haldane, Nature 139, 1002 (1937). The ideas broached in this note were developed in more detail in Nature 158, 555 (1944).
86. See New biology, No. 16, ed. M. L. Johnson, M. Abercrombie, and G. E. Fogg (Penguin, London, 1955), p. 23.
87. C. Brans and R. Dicke, Phys. Rev. 124, 924 (1961); C. Brans, Ph.D. Thesis, 1961 (Princeton University, NJ).
88. R. Dicke, The theoretical significance of experimental relativity (Gordon & Breach, NY, 1964), which contains early references; R. Dicke and P. J. E. Peebles, Space Sci. Rev. 4, 419 (1965); R. Dicke, Gravitation and the universe (American Philosophical Society, Philadelphia, 1969).
89. R. H. Dicke, Rev. Mod. Phys. 29, 355, 363 (1957).
90. Ref. 89, p. 375. Here ε is the dielectric constant, and so the fine structure constant is α = e^2/εħc. This is introduced because Maxwell's equations imply charge conservation, so a change in α is most conveniently interpreted as being due to a change in the permittivity or permeability of free space. See also K. Greer, Nature 205, 539 (1965), and Discovery 26, 34 (1965).
91. R. H. Dicke, Nature 192, 440 (1961). An accompanying reply by P. A. M. Dirac, Nature 192, 441 (1961), argues that 'On Dicke's assumption habit-


able planets could exist only for a limited period of time. With my assumption they could exist indefinitely in the future and life need never end'.
92. E. Mascall, Christian theology and natural science (Longmans, London, 1956); also private communication from G. J. Whitrow (August 1979).
93. B. Carter, in Confrontation of cosmological theories with observation, ed. M. S. Longair (Reidel, Dordrecht, 1974), p. 291; see also 'Large numbers in astrophysics and cosmology', paper presented at the Clifford Centennial Meeting, Princeton (1970).
94. J. A. Wheeler, in Foundational problems in the special sciences (Reidel, Dordrecht, 1977), pp. 3-33; see also the final chapter of C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973); also C. M. Patton and J. A. Wheeler, in Quantum gravity: an Oxford symposium, ed. C. J. Isham, R. Penrose, and D. W. Sciama (Clarendon, Oxford), pp. 538-605.
95. For the consequences of topology change, see R. P. Geroch, J. Math. Phys. 8, 782 (1967) and 11, 437 (1970); F. J. Tipler, Ann. Phys. 108, 1 (1977); C. W. Lee, Proc. R. Soc. A 364, 295 (1978); R. H. Gowdy, J. Math. Phys. 18, 1798 (1977). They show that a closed universe must undergo a singularity to admit a topology change. Since some topologies require parity violation and others do not, we can see a way in which topology change could act to alter the conservation laws of physics.
96. G. F. R. Ellis, R. Maartens, and S. D. Nel, Mon. Not. R. astron. Soc. 184, 439 (1978); Gen. Rel. Gravn. 9, 87 (1978) and 11, 281 (1979); G. F. R. Ellis and G. B. Brundrit, Quart. J. R. astron. Soc. 20, 37 (1979).
97. H. Everett, Rev. Mod. Phys. 29, 454 (1957); for other expositions see especially B. de Witt, Physics Today, Sept., p. 30 (1970); P. C. W. Davies, Other worlds (Dent, London, 1980); J. D. Barrow, The Times Higher Educ. Suppl. No. 408, p. 11 (22 Aug. 1980); and Chapter 7.
98. C. B. Collins and S. W. Hawking, Astrophys. J. 180, 317 (1973); S. W. Hawking, in Confrontation of cosmological theories with observation, ed. M. S. Longair (Reidel, Dordrecht, 1974). A discussion of the possible conclusions that may be drawn from these papers is given in J. D. Barrow and F. J. Tipler, Nature 276, 453 (1978); J. D. Barrow, in Problems of the cosmos (Einstein Centenary Volume, publ. 1981, Enciclopedia Italiana, in Italian). A detailed discussion is given in Chapter 6.
99. C. F. A. Pantin, Adv. Sci. 8, 138 (1951); and in Biology and personality, ed. I. T. Ramsey (Blackwell, Oxford, 1965), pp. 83-106; The relation between the sciences (Cambridge University Press, Cambridge, 1968).
100. L. J. Henderson, The fitness of the environment (Smith, Gloucester, Mass., 1913), repr. 1970, and The order of Nature (Harvard University, Cambridge, Mass., 1917).
101. E. Teller, Phys. Rev. 73, 801 (1948), and in Cosmology, fusion and other matters, ed. F. Reines (Hilger, Bristol, 1972), p. 60.
102. J. D. Barrow, Quart. J. R. astron. Soc. 23, 344 (1982).
103. P. C. W. Davies, Other worlds: space, superspace and the quantum universe (Dent, London, 1980).
104. A. Peres, Physics Today, Nov., 24, 9 (1971).
105. A. Wyler, C.r. Acad. Sci. Paris A 269, 743 (1969).

The Rediscovery of the Anthropic Principle

284

106. F. Hoyle, D. N. F. Dunbar, W. A. Wensel and W. Whaling (1953) Phys. Rev. 92, 1095. 107. J. P. Cox and R. T. Giuli, Principles of stellar structure, Vol. 1 (Gordon & Breach, NY, 1968). D. D. Clayton, Principles of stellar evolution and nucleosynthesis (McGraw-Hill, NY, 1968). 108. E. E. Salpeter, Astrophys. J. 115, 326 (1965); Phys. Rev. 107, 516 (1967). 109. F. Hoyle, D. N. F. Dunbar, W. A. Wensel, and W. Whaling, Phys. Rev. 92, 649 (1953). 110. F. Hoyle, Galaxies, nuclei and quasars (Heinemann, London, 1965), p. 146. 111. It is amusing to note the coincidence that Erno Rubik's 'Hungarian' cube, now available in any toyshop or mathematics department has ~10 ° distinct configurations; for details see D. R. Hofstadter, Science. Am. 244 (3), 20 (1981). Also, if neutrinos possess a small non-zero rest mass m ~ 10-30 eV, as indicated by recent experiments, then neutrino clusters surviving the radiation-dominated phase of the Universe have a characteristic scale which encompasses ~(mp/m ) ~ 10 ° neutrinos, where m is the Planck mass. 112. Ref. 110, p. 159. 113. H. Nielsen, in Particle physics 1980, ed. I. Andric, I. Dadic, and N. Zovko (North-Holland, 1981), pp. 125-42, and Phil. Trans. R. Soc. A 310, 261 (1983); J. Iliopoulos, D. V. Nanopoulos, and T. N. Tamaros, Phys. Lett. B 94, 141 (1983); J. D. Barrow, Quart. J. R. astron. Soc. 24, 146 (1983); J. D. Barrow and A. C. Ottewill, J. Phys. A 16, 2757 (1983). 114. D. Foerster, H. B. Nielsen, and M. Ninomiya, Phys. Lett. B 94, 135 (1980); H. B. Nielsen and M. Ninomiya, Nucl. Phys. B 141, 153 (1978); M. Lehto M. Ninomiya, and H. B. Nielsen, Phys. Lett. B 94, 135 (1980). Generally speaking, it is found that small symmetry groups tend to be stable attractors at low energy whilst larger groups are repellers. As an example of the type of analysis performed by Nielsen et al, consider a simple Yang-Mills gauge theory that is Lorentz invariant and has one dimensionless coupling constant. 
If Lorentz invariance is not demanded of the theory then the Lagrangian can be for more general and up to 20 independent couplings are admitted (when they are all equal the Lorentz invariant theory is obtained). One then seeks to show that the couplings evolve to equality as energy falls. A similar strategy is employed to check for the stability of gauge invariance at low energy. 115. N. Brene and H. B. Nielsen, Niels Bohr Inst, preprint NBI-HE-8242, (1983). 116. G. J. Whitrow, Br. J. Phil. Sci. 6, 13 (1955). 117. L. Gurevich and V. Mostepanenko, Phys. Lett. A 35, 201. 118. O. Neugabauer, A history of ancient mathematical astronomy, pt. 2 (Springer, NY 1975), p. 848; C. Ptolemy, Opera II, 265, ed. J. L. Heiberg (Teubner, Leipzig 1907). 119. G. Sarton, History of science, Vol. 1 (Norton, NY, 1959), pp. 438-9. 120. J. Wallis, A treatise of algebra ; both historical and practical (London, 1685), p. 126. 121. M. Jammer, Concepts of space (Harper & Row, NY, 1960). 122. L. E. J. Brouwer, Math. Annalen 70, 161 (1911); J. Math. 142, 146 (1913). 2

v

v

3

8

p

285 The Rediscovery of the Anthropic Principle 123. K. Menger, Dimensions Theorie (Leipzig, 1928). W. Hurewicz and H. Wallman, Dimension theory (Princeton University Press, NJ, 1941). 124. I. Kant, 'Thoughts on the true estimation of living forces', in J. Handyside (transl.), Kant's inaugural dissertation and early writings on space (University of Chicago Press, Chicago, 1929). 125. W. Paley, Natural theology (London, 1802). It is interesting to note that Paley was Senior Wrangler at Cambridge. 126. P. Ehrenfest, Proc. Amst. Acad. 20, 200 (1917); Ann. Physik 61, 440 (1920). 127. H. Weyl, Space, time and matter (Dover, NY, 1922), p. 284. 128. A derivation of these standard results can be found in almost any text on classical dynamics although they are not discussed as consequences of worlds of different dimension, merely as examples of different possible central force laws. See, for example, H. Lamb, Dynamics (Cambridge University Press, Cambridge 1914), pp. 256-8; J. Bertrand, Compt. rend. 77, 849 (1873). 129. For other derivations of these results and a discussion of their relevance for spatial dimension see; I. M. Freeman, Am. J. Phys. 37, 1222; W. Buchel, Physik. Blatter 19, 547 (1963); appendix I of W. Buchel, Philosophische Probleme der Physik (Herder, Freiburg, 1965); E. Stenius, Acta. phil. fennica 18, 227 (1965); K. Schafer, Studium generale 20, 1 (1967); R. Weitzenbock, Der vierdimensionale Raum (Braunschweig, 1929). 130. F. R. Tangherlini, Nuovo Cim. 27, 636 (1963). 131. For a partial solution of this problem see I. N. Sneddon and C. K. Thornhill, Proc. Camb. Phil. Soc. 45, 318 (1949). These authors do not find the solution (b). 132. A Barnes and C. K. Keogh, Math. Gaz. 68, 138 (1984). 133. 
Note, however, that the so-called gravitational paradox that J 4>d r is infinite for a r does not arise for a r exp(-juir) for real jut, but this paradox disappears in the general relativistic theory of gravitation, which is able to deal with infinite spaces consistently. 134. E. Lieb, Rev. Mod. Phys. 48, 553; F. J. Dyson, J. Math. Phys. 8, 1538 (1967), J. Lenard, J. Math. Phys. 9, 698 (1968). 135. G. Whitrow, The structure and evolution of the universe (Harper & Row, NY, 1959). 136. Recall Eddington's remark during his 1918 Royal Institution Lecture: 'In two dimensions any two lines are almost bound to meet sooner or later; but in three dimensions, and still more in four dimensions, two lines can and usually do miss one another altogether, and the observation that they do meet is a genuine addition to knowledge.' 137. E. A. Abbott, Flatland (Dover, NY, 1952). For a modern version see D. Burger, Sphereland (Thomas Y. Crowell Co., NY, 1965). 138. A. K. Dewdney, Two dimensional science and technology (1980), pre-print, Dept. of Computer Science, University of Western Ontario; J. Recreation. Math 12, 16 (1979). For a commentary see M. Gardner, Scient. Am., Dec. 1962, p. 144-52, Scient. Am., July 1980, and for a full-blooded fantasy see A. K. Dewdney's Planiverse (Pan Books, London, 1984). 139. W. McCullough and W. Pitts, Bull. Math. Biophys. 5, 115 (1943). 3

_1

_1

The Rediscovery of the Anthropic Principle

286

140. H. Poincare, Demieres pensees (Flammarion, Paris, 1917). 141. J. Hadamard, Lectures on Cauchy's problem in linear partial differential equations (Yale University Press, New Haven, 1923). 142. We are ignoring the effects of dispersion here. 143. R. Courant and D. Hilbert, Methods of mathematical physics (Interscience, NY, 1962). 144. J. D. Barrow, Quart J. R. astron. Soc. 24, 24 (1983). 145. I. Newton, Principia II, prop. 32, see ref. 55. 146. J. B. Fourier, Theoria de la chaleur, Chapter 2, § 9 (1822). 147. A. Einstein, Ann. Physik 35, 687 (1911). 148. For example, surface is a force per unit length, pressure a force per unit area and density a mass per unit volume, and so on. 149. There is a general tendency for spherical configurations to dominate because of the prevalence of spherically symmetric force laws in Nature, but our reasoning does not depend upon this; we consider spherical symmetry for simplicity of explanation only. 150. There are obviously other factors that play a role: the simple local topology of space-time avoids the introduction of dimensionless parameters describing the identification of length scales; for example, if we make identifications (x, y, z) 2

14

N

2

11

-1

2

13

2

nn

pp

14

290

The Weak Anthropic Principle in Physics and Astrophysics

and they just fail to be bound. Experiment indicates the diproton fails to be bound by a mere ~92 keV. The existence of deuterium and the non-existence of the diproton therefore hinge precariously on the precise strength of the nuclear force. If the strong interaction were a little stronger the diproton would be a stable bound state, with catastrophic consequences: all the hydrogen in the Universe would have been burnt to helium during the early stages of the Big Bang, and no hydrogen compounds or long-lived stable stars would exist today. If the diproton existed we would not! Conversely, if the nuclear force were a little weaker the deuteron would be unbound, with other adverse consequences for the nucleosynthesis of biological elements, because a key link in the chain of nucleosynthesis would be removed: elements heavier than hydrogen would not form. In our potential approximation for the deuteron the dependence on α_s is roughly linear,

V ∝ α_s (5.90)

A decrease in α_s of about 9% is sufficient to unbind the deuteron, whilst an increase in α_s of 3.4% is sufficient to bind the diproton. In the case of the dineutron, only a 0.3% increase suffices for binding. The precise dependence of the nuclear binding energy on α_s becomes very complicated when one examines large nuclei, because each nucleon moves in the average potential of all its neighbours. However, these larger nuclei are essential to living systems. Hydrogen and helium exhibit insufficient diversity to provide a basis for living organisms; heavier elements must exist for any form of life based upon chemistry to be possible. Before we can evaluate the stability of heavier elements, we must recall some basic facts about the nuclear force. Firstly, it is charge independent: removing the electromagnetic contributions, the nuclear forces between n-n, n-p and p-p are all the same. Secondly, the nuclear force saturates.
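The saturation of the nuclear force shows up directly in the radius scaling R ≈ r₀A^{1/3} of equation (5.91) below: it implies a common density for all nuclei. A short numerical sketch of this (ours, using the quoted r₀ and an approximate nucleon mass):

```python
# Numerical check of the R = r0 * A^(1/3) scaling of (5.91): if the
# radius grows as the cube root of A, the mean nuclear density is the
# same for light and heavy nuclei.
import math

R0 = 1.2e-13            # cm, from (5.91)
M_NUCLEON = 1.67e-24    # g, approximate nucleon mass

def nuclear_density(A):
    """Mean density (g/cm^3) of a nucleus of mass number A."""
    radius = R0 * A ** (1.0 / 3.0)
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return A * M_NUCLEON / volume

rho_c = nuclear_density(12)    # carbon
rho_u = nuclear_density(238)   # uranium
print(rho_c, rho_u)            # both ~2e14 g/cm^3, independent of A
```

The common density, ~2 × 10¹⁴ g cm⁻³, is the liquid-drop density referred to in the text.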
If every one of the A nucleons in a nucleus were to attract all its neighbours then there would exist A(A−1)/2 interacting nucleon pairs, and we would find the nuclear binding energy growing as A(A−1)/2 ~ A². All nuclei would then have diameters of order the range of the nuclear force and possess a constant volume. This is not what is observed; rather, the nuclear radius R scales as the cube root of the nucleon number,

R ≈ r₀A^{1/3}; r₀ = 1.2 × 10⁻¹³ cm (5.91)

The volume of the nucleus thus varies linearly with A, and so the density of all nuclei is roughly constant: they appear more reminiscent of liquids than solids. We recall that the radius of a liquid drop is also proportional to the cube root of the number of molecular constituents. The nuclear force saturates: each nucleon attracts only a small number of others. This is reminiscent of chemical bonds, where exchange forces lead
to saturation. The suggestive analogy between liquids and nuclear material led to the development of a liquid-drop model of the nucleus, which enables an analysis of its stability to be made by accounting for the various factors that contribute attractively and repulsively inside nuclei. The liquid-drop model represents the nuclear binding energy, B, as a sum of five terms,

B = a_v A − a_s A^{2/3} − a_c Z²A^{−1/3} − a_sym (Z − 0.5A)² A^{−1} + δ (5.92)

The first (volume) term is the contribution of the total number of nucleons in the nucleus. This contribution is reduced by the second (surface) term because nucleons at the surface of the nucleus have fewer bonding partners. Since the radius of the nucleus is proportional to A^{1/3}, this surface energy is proportional to the area, A^{2/3}, and is analogous to the effect of surface tension on a liquid, where the surface molecules are more loosely bound. This dependence on A^{2/3} indicates that a fraction ~4A^{−1/3} of all nucleons are at the nuclear surface, and so light nuclei have nearly all their constituents at the surface. The volume term is reduced still further by the third (Coulomb) term, which describes the repulsive electromagnetic force acting between any two protons. If we assume the protons are distributed spherically symmetrically throughout a nucleus of radius R, then the loss of binding energy is −0.6αZ²/R ≈ −a_c Z²A^{−1/3}. Clearly, this effect becomes important for large Z. The fourth (asymmetry) contribution to the binding energy arises because the Exclusion Principle makes it energetically more economical for nuclei to be built with equal numbers of neutrons and protons, as, for example, in carbon or oxygen. If Z protons and an equal number of neutrons are present in a nucleus, they will be able to occupy the lowest Z energy states; any neutrons present in excess of Z must occupy states of higher kinetic and lower potential energy.
Therefore, these excess neutrons will have less binding energy than the first Z protons and Z neutrons, and the reduction varies as (Z − A/2)². Obviously the roles of protons and neutrons can be exchanged, and the effect must therefore be independent of the sign of (Z − A/2). A detailed calculation based on the Fermi-gas model gives a contribution to B of order

−(Z − A/2)² / (m_N r₀² A) (5.93)

assuming protons and neutrons have the same mass. Large nuclei always contain more neutrons than protons because equal numbers would lead to a huge Coulomb energy; a neutron excess is necessary to prevent Coulomb disruption. This effect is entirely quantum mechanical in origin and has no analogue in classical liquids. Finally, there exists a small pairing energy

Figure 5.5. The relative contributions of the different components of the binding energy per nucleon versus mass number, according to the liquid-drop model discussed in the text.

term, δ in (5.92), arising because of the intrinsic spin of the nucleons (it is zero for nuclei with odd A and otherwise falls off as ~12A^{−1/2} MeV), and we shall neglect it. Figure 5.5 shows the contributions made by the various terms in (5.92) and illustrates how the decrease in free surface energy, along with an increase in the Coulomb repulsion, produces a maximum of the binding energy per nucleon at A ≈ 60. To extract binding energy from nuclei with A ≳ 60 they must be split (fission), but to extract it from nuclei with A ≲ 60 they must be fused. The unknown constants in (5.92) which enable Figure 5.5 to be plotted are determined from data-fitting as a_v = 16 MeV, a_sym = 50 MeV and a_c = 0.7 MeV if r₀ = 1.24 × 10⁻¹³ cm. On dimensional grounds they must all be of order the characteristic energy scale set by α_s and m_N, and in principle they can be calculated. The relation (5.92) now enables us to decide how the strong interaction strength determines which stable nuclei can exist. It is energetically favourable for a nucleus to disintegrate into two equal parts of constitution (Z/2, A/2) if the binding energy change ΔB is positive, where

ΔB = 2B(Z/2, A/2) − B(Z, A) (5.94)

ΔB is the energy released by the fission of the nucleus (Z, A); for the fission of uranium ΔB ≈ 180 MeV. Using (5.92), with the experimental values for the coefficients, the binding energy change is just

ΔB = a_s A^{2/3}(1 − 2^{1/3}) + a_c Z²A^{−1/3}(1 − 2^{−2/3}) = −4.5A^{2/3} + 0.26Z²A^{−1/3} (MeV) (5.95)

The susceptibility to fission is determined by competition between the surface forces of nuclear origin and the electromagnetic Coulomb interaction between the charged protons. The Coulomb force tends to deform the nucleus away from a spherical configuration whilst the surface tension tries to maintain it. If the Coulomb forces win then the nucleus can fission, and ΔB > 0 gives the instability criterion as Z²/A ≳ 18. However, this does not describe the inevitable change in the Coulomb and surface forces as the nucleus is gradually deformed away from sphericity; it is only a static criterion. If the nucleus possesses axial symmetry when deformed, with major axis R(1 + ε) and minor axes R(1 − 0.5ε), the surface energy deforms to

E_s = −a_s A^{2/3}(1 + 0.4ε² + ...) (5.96)

while the Coulomb energy becomes

E_c = −a_c Z²A^{−1/3}(1 − 0.2ε² + ...) (5.97)

So, when the deformations are small (ε ≪ 1), the total energy change after deformation is

ΔB = (ε²/5)(a_c Z²A^{−1/3} − 2a_s A^{2/3}) (5.98)

and this is only positive if

Z²/A > 2a_s/a_c ≈ 49 (5.99)

Any nucleus satisfying (5.99) splits into two parts. This is one reason why we do not observe very heavy elements in Nature. Uranium-238, one of the heaviest nuclei, has Z²/A ≈ 35.5 and is close to the limit. If a nucleus is very close to the fission limit, the addition of small amounts of energy can render it unstable to fission. For example, when ²³⁵U captures a slow-moving neutron the binding energy of the neutron becomes available to the nuclear system. This extra energy is ~6 MeV and ensures that the new ²³⁶U nucleus is formed in a highly excited state, from which it is much easier to deform the nucleus and fission. The criterion (5.99) shows that the dividing line between those nuclei which are stable and those which are not is drawn by the strong and
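The fission systematics of (5.92)-(5.99) can be explored numerically. The sketch below is ours and uses a standard set of textbook liquid-drop coefficients (a_v ≈ 15.8, a_s ≈ 18.3, a_c ≈ 0.71 MeV, with the asymmetry term written in the equivalent form a_a(A − 2Z)²/A and a_a ≈ 23.2 MeV) rather than the fit quoted in the text; it locates the maximum of B/A near A ≈ 60, as in Figure 5.5, and evaluates the fissility Z²/A of uranium against the thresholds of (5.95) and (5.99):

```python
# Liquid-drop binding energy, cf. (5.92), with standard textbook
# coefficients in MeV and the pairing term neglected.
A_V, A_S, A_C, A_A = 15.8, 18.3, 0.714, 23.2

def most_stable_Z(A):
    # Z maximizing B at fixed A (set dB/dZ = 0).
    return (A / 2.0) / (1.0 + (A_C / (4.0 * A_A)) * A ** (2.0 / 3.0))

def binding_per_nucleon(A):
    Z = most_stable_Z(A)
    B = (A_V * A - A_S * A ** (2.0 / 3.0)
         - A_C * Z ** 2 * A ** (-1.0 / 3.0)
         - A_A * (A - 2.0 * Z) ** 2 / A)
    return B / A

# B/A rises with A, peaks and then falls, as in Figure 5.5.
peak_A = max(range(10, 250), key=binding_per_nucleon)

# Fissility of uranium-238 against the two criteria in the text:
# Z^2/A > ~18 (fission releases energy), > ~49 (spontaneous fission).
x_U238 = 92 ** 2 / 238
print(peak_A, round(x_U238, 1), x_U238 > 18, x_U238 > 49)
```

Uranium crosses the static threshold (Z²/A ≈ 35.6 > 18), so fission releases energy, but not the deformation threshold of ~49, so it is merely metastable, in agreement with the discussion above.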

electromagnetic interactions. Their relative strengths determine the susceptibility to fission. The condition for a nucleus (Z, A) to be stable against fission is roughly that

Z²/A ≲ 2a_s/a_c (5.100)

Thus, if the electromagnetic interaction were stronger (increased α) or the strong interaction a little weaker (decreased α_s), or both, then biologically essential nuclei like carbon would not exist in Nature. For example, if the electron charge were increased by a factor ~3 no nuclei with Z > 5 would exist and no living organisms would be possible. The existence of carbon-based organisms hinges upon a 'coincidence' regarding the relative strengths of the strong and electric forces, namely that α_s be sufficiently large compared with α. (5.101) If one assumes the electromagnetic force strength is fixed, then the effect of small variations in α_s on the stability of nuclei is shown in Figure 5.6.

Figure 5.6. Nuclear stability as a function of the strong coupling, α_s, varied away from the observed value, α_s(0), with Coulomb forces held constant.

Figure 5.7. Consequences of simultaneous variations in the nuclear and electromagnetic coupling strengths.

A 50% decrease in the strength of the nuclear force (α_s ≲ 0.025) would adversely affect the stability of all the elements essential to living organisms and biological systems. Similarly, holding the strong force constant, we see that the stability of carbon requires the fine structure constant α to be less than ~0.1. In Figure 5.7 are plotted the effects of varying the nuclear and electromagnetic couplings simultaneously. We shall see later that other constraints exist to limit these interactions if Nature exhibits a grand unification of fundamental forces.
5.6 The Stars

Twinkle, twinkle little star I don't wonder what you are, For by spectroscopic ken, I know that you are hydrogen. Ian D. Bush

Any body of mass M and average radius R possesses a gravitational potential energy E_g of order

E_g ~ GM²/R (5.102)

If no other forces existed in Nature this attractive gravitational force would cause all bodies to collapse indefinitely. However, as we have

already seen, there do exist other physical forces which can support small bodies against gravitational collapse. The characteristic sizes of planets and asteroids result from a stable balance between gravity and the quantum mechanical exclusion forces. However, not all systems need appeal to pressures of quantum mechanical origin to support themselves against gravity. Whereas we have regarded planetary material as possessing zero temperature, it is obvious that matter could exist in large quantities with a finite temperature. In that case the object would possess an 'ordinary' gas pressure by virtue of the thermal motion of its constituents. If the motions are non-relativistic (root mean square gas velocities much less than the speed of light) the body could be termed 'cool'. Then, the thermal pressure is given in terms of the temperature, T, and volume of the gas by Boyle's law,

P ~ NT/R³ (5.103)

(where N is the total number of nucleons in a volume ~R³). Clearly, as the material is compressed isothermally, E_g falls and the pressure rises. If the body has an average density ρ, then the condition for an equilibrium to exist between gravity and thermal pressure is that the central pressure of the body, P_c ~ ρGMR⁻¹, equal the thermal pressure P ~ ρTm_N⁻¹. This criterion yields the relation

T ~ GMm_N/R (5.104)

that is, simply that

(total thermal energy) ~ (gravitational potential energy) (5.105)

If the average inter-nucleon separation inside the star is represented by d where, by definition,

d³ ≡ R³/N (5.106)

then (5.104) implies the temperature to be

T ~ Gm_N² N^{2/3} d⁻¹ (5.107)
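Equation (5.104) can be tested on the Sun. The following order-of-magnitude sketch (ours) uses cgs values and Boltzmann's constant to express the virial temperature T ~ GMm_N/R in kelvin:

```python
# Order-of-magnitude central temperature from (5.104): T ~ G M m_N / R,
# with Boltzmann's constant converting energy to kelvin (cgs values).
G = 6.674e-8        # cm^3 g^-1 s^-2
K_B = 1.381e-16     # erg/K
M_SUN = 1.99e33     # g
R_SUN = 6.96e10     # cm
M_N = 1.67e-24      # g, nucleon mass

T_central = G * M_SUN * M_N / (R_SUN * K_B)
print(f"{T_central:.1e} K")   # ~2e7 K
```

The result, ~2 × 10⁷ K, is remarkably close to the value given by detailed solar models for the centre of the Sun.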

This contribution to the pressure does not involve the electrons, because they are so light (their contribution to the thermal pressure is much smaller than that of the nucleons); however, if the body continues to shrink to a higher density state, it will begin to squeeze the electrons into regions small enough for their degeneracy pressure to be significant. In that event, the thermal pressure of the nucleons becomes augmented by the degeneracy pressure of the electrons. Recall that the Exclusion Principle imposes on electrons of average separation d a minimum kinetic energy ~m_e⁻¹d⁻². (The corresponding contribution from nucleon degeneracy is clearly negligible because m_N ≫ m_e.) The equation of energy balance now looks like equation (5.107) plus the electron degeneracy term:

T + m_e⁻¹d⁻² ~ Gm_N² N^{2/3} d⁻¹ (5.108)

When the body is large and the density quite low, the degeneracy term (∝ d⁻²) is the least significant term in (5.108), and the temperature will just increase according to the ideal gas law (5.103) as the body shrinks under gravity (T ∝ R⁻¹). However, this shrinkage ensures the degeneracy pressure must eventually intervene, and guarantees a temperature maximum when the combination (Gm_N²N^{2/3}d⁻¹ − m_e⁻¹d⁻²) attains its maximum value. This occurs when d equals d₊, where

d₊ = 2(α_G m_e N^{2/3})⁻¹ (5.109)

with α_G ≡ Gm_N² the gravitational fine structure constant; this corresponds to a maximum central temperature of

T₊ ~ α_G² N^{4/3} m_e (5.110)

Figure 5.8 shows the variation of temperature T with the inter-particle separation d. Incidentally, the form of the 'potential' closely resembles that in nuclei and molecules, because in these systems stable states also arise from a competition between d⁻¹ and d⁻² terms (see Figure 5.4). The defining characteristic which turns our 'warm' body into a star is that the central temperature, T₊, be high enough to initiate and sustain

Figure 5.8. Temperature versus inter-particle separation, d, for a star. We require T_max to be great enough for nuclear reactions to occur in order to produce a star. If the temperature is always too low then the system collapses, heats up and then cools down over a period of about 10⁶ years, whereas stars that initiate nuclear burning last for more than 10⁹ years.
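The temperature maximum of (5.108)-(5.110) is easy to exhibit numerically. In this sketch (ours) T(d) = A/d − B/d², with A standing for Gm_N²N^{2/3} and B for m_e⁻¹, set to illustrative values in arbitrary units:

```python
# T(d) = A/d - B/d^2, the energy balance of (5.108) in schematic form.
# The maximum lies at d+ = 2B/A with T+ = A^2/(4B), cf. (5.109)-(5.110).
A, B = 3.0, 2.0     # illustrative values, arbitrary units

def T(d):
    return A / d - B / d ** 2

d_plus = 2 * B / A
T_plus = A ** 2 / (4 * B)

# Scan a grid of separations and confirm the analytic maximum.
grid = [0.01 * i for i in range(1, 10000)]
d_best = max(grid, key=T)
print(d_best, T(d_best), T_plus)
```

The curve reproduces the shape of Figure 5.8: heating as 1/d during contraction, then a turnover once degeneracy dominates.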

thermonuclear reactions. In order to establish the size of such stars we must determine the threshold for nuclear fusion reactions. When the ambient temperature is low, two light nuclei will not have enough kinetic energy to break through the Coulomb barrier of electrostatic repulsion that exists between them. This barrier height varies as Z₁Z₂, where Z₁ and Z₂ are the atomic numbers of the interacting nuclei; so clearly, light nuclei will be able to interact more readily than heavy ones. This is also advantageous because the fusion of light nuclei is exothermic. However, nuclei can undergo nuclear burning when their mean kinetic energies are significantly lower than the Coulomb barrier, ~1 MeV ~ 10¹⁰ K. The reasons are twofold. First, the energies of nuclei participating in a nuclear interaction will possess a Maxwellian number distribution, N(E) ∝ exp(−E/T), so although the mean energy may sit below the Coulomb threshold, there will still be many nuclei in 'the tail' of the distribution with energies high enough to surmount the potential barrier. Second, there is help from quantum mechanics: nuclei with energies less than that of the Coulomb barrier can still penetrate it by quantum tunnelling. Ignoring angular momentum, the probability of tunnelling through the barrier E_c by particles with energy E is

(Tunnelling Probability) ~ exp[−∫ from R_n to r₀ (E_c − E)^{1/2} dr] (5.111)

where r₀ is the 'classical' distance of closest approach, which is given by

r₀ = Z₁Z₂αE⁻¹ (5.112)

and R_n is the nuclear radius. The reaction rate is controlled by competition between the Maxwell factor exp(−E/T), which tends to zero for large E, and the tunnelling probability, which varies as exp(−bE^{−1/2}) and goes to zero for small E; here b ~ Z₁Z₂αĀ^{1/2}m_N^{1/2}, where Ā is the reduced atomic weight of the reactants, Ā = A₁A₂/(A₁ + A₂). There exists an intermediate energy, ~15-30 keV, where the interaction probability is optimized. This 'Gamow peak' is illustrated in Figure 5.9. The energy E₀ ~ (0.5bT)^{2/3} is the most advantageous for nuclear burning and corresponds to an average thermal energy of

T_nuc ~ η α² m_N ~ η × 5.7 × 10⁸ K (5.113)

where η incorporates small factors due to atomic weights, intrinsic nuclear properties and so forth. For hydrogen burning (~1.5 × 10⁷ K) we have η(H) ~ 0.025; helium burning (~2 × 10⁸ K) has η(He) ~ 3.5, whilst η(C) ~ 14, η(Ne) ~ η(O) ~ 30 and η(Si) ~ 60. Returning to (5.110) and (5.113), we see that hydrogen ignition is

Figure 5.9. The Gamow peak: the dominant energy-dependent factors in thermonuclear reactions. Most reactions occur in the high-energy tail of the Maxwellian distribution, which introduces a thermal factor exp(−E/T). The path through the Coulomb barrier introduces a factor exp(−bE^{−1/2}). The product of these factors has a sharp (Gamow) peak at E₀.

possible if T₊ ≳ T_nuc; that is, if the body is larger than M*, where

M* ~ (α/α_G)^{3/2} (m_N/m_e)^{3/4} m_N ~ 10³³ gm (5.114)

This simple argument explains why stars contain no fewer than about M*/m_N nucleons, and shows that the largest planet in our solar system, Jupiter, is fairly close to fulfilling the condition for nuclear ignition in its interior. It was almost a star (as a consequence we expect planets to exist over a mass range of ~(m_N/m_e)^{3/4} ~ 300). The rough lower size limit corresponding to the mass constraint (5.114) is

R₊ ~ α_G^{−1/2} α^{−1/2} m_e⁻¹ ~ 10¹⁰ cm (5.115)

In order to ascertain whether there is also a maximum stellar size we must consider a third source of pressure support within the interior: radiation pressure. Equilibrium radiation will possess a pressure, P_γ, given by the black-body law, which in our units is

P_γ ~ T⁴ (5.116)

From (5.103), we see that the relative importance of gas and radiation pressure in a stellar interior is given by the ratio

P_γ/P_gas ~ T³R³/N (5.117)
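Returning for a moment to the Gamow peak of Figure 5.9: the maximum of the reaction weight exp(−E/T − bE^{−1/2}) lies at E₀ = (bT/2)^{2/3}, which is easy to confirm numerically. The values of T and b in this sketch (ours) are illustrative only:

```python
# The Gamow peak: the product of the Maxwell factor exp(-E/T) and the
# tunnelling factor exp(-b * E**-0.5) peaks at E0 = (b*T/2)**(2/3).
from math import exp

T_KEV = 1.5     # thermal energy, roughly hydrogen-burning conditions
B_COEF = 50.0   # barrier-penetration coefficient (illustrative)

def weight(E):
    return exp(-E / T_KEV - B_COEF * E ** -0.5)

E0_analytic = (B_COEF * T_KEV / 2.0) ** (2.0 / 3.0)
grid = [0.01 * i for i in range(1, 10000)]   # 0.01 .. 99.99 keV
E0_numeric = max(grid, key=weight)
print(E0_analytic, E0_numeric)
```

With these numbers the peak sits near 11 keV, far above the mean thermal energy yet far below the Coulomb barrier, exactly the intermediate window described in the text.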

If we consider large bodies, so that the electron degeneracy pressure is smaller than the gas pressure, the equilibrium condition (5.107) is now modified by the inclusion of the radiation pressure and becomes

T(1 + P_γ/P_gas) ~ α_G² N^{4/3} m_e (5.118)

or equivalently, using (5.117),

P_γ/P_gas ~ (N/N₊)² (5.119)

where N₊ is the Landau-Chandrasekhar number defined by

N₊ ≡ α_G^{−3/2} = 2.2 × 10⁵⁷ (5.120)

This relation shows that the relative importance of radiation pressure grows with the size of the star as N². However, if P_γ becomes significantly greater than P_gas, a star will become pulsationally unstable and break up. Therefore (5.119) provides an upper bound on the number of nucleons in a stable hydrogen-burning star, N ≲ 50α_G^{−3/2}, and, in combination with (5.114), we see that simple physical considerations pin down the allowed range of stellar sizes very closely as
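The value quoted in (5.120) follows directly from the measured constants; a quick check (ours), in cgs units:

```python
# The Landau-Chandrasekhar number N+ = alpha_G**(-3/2) of (5.120),
# computed from cgs values of the fundamental constants.
G = 6.674e-8          # cm^3 g^-1 s^-2
HBAR = 1.055e-27      # erg s
C = 2.998e10          # cm/s
M_N = 1.67e-24        # g, nucleon mass

alpha_G = G * M_N ** 2 / (HBAR * C)   # gravitational fine structure constant
N_plus = alpha_G ** -1.5
print(f"alpha_G ~ {alpha_G:.1e}, N+ ~ {N_plus:.1e}")
# N+ * m_N ~ 4e33 g, of order a solar mass.
```

The result, N₊ ≈ 2.2 × 10⁵⁷, corresponds to a mass of order the Sun's, which is why stellar masses cluster within a factor of ~100 of a solar mass.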

(α/α_G)^{3/2} (m_N/m_e)^{3/4} m_N ≲ M* ≲ 50 α_G^{−3/2} m_N (5.121)

A stable, non-relativistic star must inevitably contain ~α_G^{−3/2} ~ 10⁵⁷ nucleons. The most obvious outward characteristic of a star, besides its mass, is its luminosity: the rate of energy production. In the case of the Sun, it is this property that determines the ambient temperature one astronomical unit away, on the Earth's surface. Photons produced near the stellar centre do not simply leave the star after a time of flight. Rather, they undergo a whole series of quasi-random scatterings from electrons and charged ions, which results in a much slower diffusive exit from the stellar interior. This path is called a 'random walk'; see Figure 5.10. Consider first the effect of electron scattering, for which the (Thomson) cross-section σ_T is

σ_T ~ α² m_e⁻² (5.122)

The mean free path λ gives the average distance travelled by photons between scatterings off electrons and is

λ ~ (σ_T n_e)⁻¹ (5.123)

where the electron number density is n_e ~ NR⁻³. The time to traverse a

Figure 5.10. Absorption and emission processes, together with scattering, allow radiation to leak out of a star by a random-walk path, as shown, rather than to free-stream.

linear distance R from the centre to the boundary of the star by a random walk is the escape time (c = 1)

t_ex ~ (R/λ)² × λ = R²/λ (5.124)

and the luminosity, L, of the star is defined as

L ≡ (Nuclear energy available)/(Escape time from centre) (5.125)

so

L ~ f⁻¹ (α_G⁴/α²) N³ m_e² (5.126)

where the dimensionless factor f accounts for deviations from exact Thomson scattering which result at low temperature or high density. The estimate (5.126) gives a reasonably accurate value,

L ~ 5 × 10³⁴ (N/N₊)³ erg s⁻¹ (5.127)

which is independent of the stellar radius and temperature. We can also deduce the lifetime of a star burning its hydrogen at this rate. This gives the 'main sequence' lifetime, t*, as

t* ~ (Nuclear Energy available from Hydrogen Fusion)/L (5.128)

t* ~ 0.007 N m_N / L (5.129)

since hydrogen burning liberates about 0.7% of the rest mass of the fuel as energy.
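The random-walk escape time (5.124) is striking when evaluated for the Sun. This sketch (ours) restores the factor of c and adopts a mean free path of ~1 cm, an illustrative value conventionally quoted for the solar interior:

```python
# Photon escape by random walk, cf. (5.124): t_escape ~ R**2 / (mfp * c),
# compared with the straight-line crossing time R / c.
R_SUN = 6.96e10      # cm
C = 3.0e10           # cm/s
MFP = 1.0            # cm, illustrative photon mean free path in the Sun

t_straight = R_SUN / C                   # a couple of seconds
t_walk = R_SUN ** 2 / (MFP * C)          # random-walk escape time
years = t_walk / 3.15e7
print(f"{t_straight:.1f} s versus {years:.0f} yr")
```

A photon that would cross the Sun freely in about two seconds takes thousands of years to diffuse out, which is why the luminosity is controlled by the opacity and not by the rate of nuclear energy generation alone.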

Massive stars have short lifetimes because they are able to attain high

290

The Weak Anthropic Principle in Physics and Astrophysics 334

internal temperatures and luminosities. They burn their nuclear fuel very rapidly. A star of ~30 M_⊙ has a hydrogen-burning lifetime of only ten million years, whereas the Sun can continue burning hydrogen for more than ten billion years. The fact that t* can be determined by the fundamental constants of Nature has many far-reaching consequences. It means that we can understand why we observe the Universe to be so old and hence so large, and it also provides a point of contact with the timescales that biologists estimate for evolutionary change and development. To these questions we shall return in Chapter 6. Our estimates of stellar luminosities and lifetimes have assumed that the opacity controlling the transport of energy in the star's interior is entirely due to Thomson scattering. However, when matter becomes denser the nuclei can begin to affect the electrons through free-free and bound-free transitions. For hydrogen the free-free and bound-free opacities, or Kramers opacities, are roughly the same but, unlike the Thomson opacity, they are temperature dependent. Thus, whereas the Thomson opacity per nucleon is

κ_T ~ α² m_e⁻² (5.130)

the Kramers opacity is

κ_K ~ α³ m_e⁻² (m_e/T)^{1/2} (5.131)
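The contrast quoted above (ten million years for a ~30 M_⊙ star against ten billion for the Sun) follows from the scaling t ~ M/L with L ∝ M³, as in (5.127): the fuel supply scales as M, so the lifetime scales as M⁻². A one-line check (ours):

```python
# Main-sequence lifetime scaling: with L proportional to M**3 (cf. 5.127)
# and fuel proportional to M, the lifetime scales as M**-2.
T_SUN_YR = 1.0e10      # ~solar hydrogen-burning lifetime, years

def lifetime(mass_solar):
    return T_SUN_YR * mass_solar ** -2

print(f"{lifetime(30):.1e}")   # ~1e7 yr for a 30 solar-mass star
```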

When the Kramers opacity is significant, the luminosity differs slightly from the form (5.126) and is

L ~ f⁻¹ (α_G⁴/α³) (T/m_e)^{1/2} N³ m_e² (5.132)

In practice, one uses the formula which gives the lower luminosity of (5.126) and (5.132). We can simplify (5.132) further because we know the relevant central temperature to consider is T_nuc ~ ηα²m_N, and this gives

L ~ 10⁻² η^{1/2} (α_G⁴/α²) (m_N/m_e)^{1/2} N³ m_e² (5.133)

The luminosities (5.133) and (5.126) become equal when M ~ 3η^{−1/2}M*, and the Sun is thus controlled by Kramers opacity. So far, we have only discussed the central temperature of stars, T*, but we are also interested in the surface temperature of a star. In the solar case it is this parameter which determines the energy flux incident on the Earth's surface. The surface temperature T_s should be simply related to the luminosity by the black-body law applied at the stellar surface, so

L ~ 0.5 R² T_s⁴ (5.134)

290 The Weak Anthropic Principle in Physics and Astrophysics


where T_s determines the radiant energy flux from the surface. Applying this result, we obtain an expression for the surface temperature, (5.135), with Thomson opacity, and a corresponding expression, (5.136), with Kramers opacity. However, these results implicitly assume that Thomson or Kramers scattering maintains a large opacity right out to the boundary of the star. This will only be possible if material is ionized near the surface. If the temperature near the stellar surface falls below the dissociation temperature of molecules, T_I—a fixed small fraction of the atomic binding energy α^2 m_e, (5.137)—the matter will cease to be opaque there. What then happens if the values of T_s calculated in (5.135) and (5.136) fall below T_I? In order to remain in equilibrium, the star must have other means of transporting heat to its surface, and it is believed that convection is responsible for maintaining the surface temperature at T_I when the radiative transport described by (5.135) or (5.136) is inadequate. Inside the boundary of a star whose surface temperature lies close to T_I there should exist a thin convection layer associated with the atomic and molecular transitions. If the temperature at the surface falls below T_I, the convective layer will spread into the star until it increases the heat flux sufficiently for the surface temperature to attain the value T_I. Convection should therefore extend far enough into the star to maintain the surface temperature close to T_I. Thus, if the formulae (5.135) and (5.136) predict a value for T_s lower than T_I, that value should be replaced by T_I. For main-sequence stars this leads to an interesting result: the condition T_s ~ T_I picks out a special stellar mass, given by (5.138) when Thomson scattering dominates the opacity within the central regions and by (5.139) when Kramers scattering dominates. These two formulae reveal a striking 'coincidence' of Nature, first recognized by Carter (ref. 87): the surface temperature only neighbours the ionization temperature T_I for stars with

mass M ~ M_* because of the numerical 'coincidence' that

α^{12} (m_e/m_N)^4 ~ α_G,   (5.140)

which reduces numerically to the relation

2.2×10^{-39} ~ 5.9×10^{-39}.   (5.141)

The existence of this unusual numerical coincidence (5.140) ensures that the typical stellar mass M_* is a dividing line between convective and radiative stars. Carter argues that the relation (5.140) therefore has strong Anthropic implications: the fact that α_G is just bigger than α^{12}(m_e/m_N)^4 ensures that the more massive main-sequence stars are radiative, but the smaller members of the main sequence, which are controlled by Kramers opacity, are almost all convective. If α_G had been slightly greater, all stars would have been convective red dwarfs; if α_G had been slightly smaller, the main sequence would consist entirely of radiative blue stars. This, Carter claims,

suggests a conceivable world-ensemble explanation of the weakness of the gravitational constant. It may well be that the formation of planets depends on the existence of a highly convective Hayashi-track phase on the approach to the main sequence. (Such an idea is, of course, highly speculative, since planetary formation theory is not yet on a sound footing, but it may be correlated with the empirical fact that the larger stars—which leave the Hayashi track well before arriving at the main sequence—retain much more of their angular momentum than those which remain convective.) If this is correct, then a stronger gravitational constant would be incompatible with the formation of planets and hence, presumably, of observers.
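Carter's coincidence (5.140) is easy to verify with modern values of the constants; the CODATA-style numerical inputs below are our own, and the small difference from the 2.2×10⁻³⁹ quoted in the text reflects rounding of the constants:

```python
# Numerical check of Carter's 'coincidence' (5.140): alpha^12 (m_e/m_N)^4 ~ alpha_G.
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.0546e-34     # J s
C     = 2.998e8        # m s^-1
M_N   = 1.6726e-27     # proton mass, kg
ALPHA = 1 / 137.036    # fine structure constant
ME_OVER_MN = 1 / 1836.15

alpha_G = G * M_N**2 / (HBAR * C)     # gravitational 'fine structure constant'
lhs = ALPHA**12 * ME_OVER_MN**4

print(f"alpha^12 (me/mN)^4 = {lhs:.1e}")      # ~2e-39
print(f"alpha_G            = {alpha_G:.1e}")  # ~5.9e-39
```

The two dimensionless numbers agree to within a factor of a few, with α_G indeed just the larger, as the text requires.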

This argument is hard to investigate more closely for lack of evidence. It maintains that planetary formation is associated with convective stars; their low angular momentum relative to that of blue giants makes it conceivable that stellar angular momentum was lost during the process of planet formation and now resides in the orbital motion of the planetary systems around them. Finally, we note that the classic means of classifying stars and tracing their evolutionary history is via the Hertzsprung-Russell diagram, which plots the position of stars according to their surface temperature and luminosity (Figure 5.11). An extremely crude determination of its main branch is possible using (5.133) and (5.139), or (5.126) and (5.134), which give fundamental relations between L and T_s. For Thomson scattering opacity these formulae give, omitting the small numerical constants, a dependence

T_s ∝ L^{1/12},   (5.142)


Figure 5.11. Schematic Hertzsprung-Russell diagram plotting luminosity (in solar units) versus effective temperature. The lines of constant slope represent stars having identical radii (see ref. 90).

whereas for Kramers opacity

T_s ∝ L^{3/20},   (5.143)

remarkably close to the observational relation T_s ∝ L^{0.13} seen in Figure 5.11. Finally, we note that if we take the typical stellar mass as 0.1 α_G^{-3/2} m_N, then the distance at which a habitable planet will reside in orbit is found by requiring that it be in thermal equilibrium at the biological temperature (5.32) necessary for life. In this way we can calculate the 'astronomical unit' which gives the distance of a habitable planet from its parent star (assuming that its orbit is not too eccentric), (5.144) (refs 40, 41).

If we now use Kepler's laws of planetary motion, which follow from Newton's laws of motion and gravitation, we can calculate the typical orbital period of such a planet. This determines what we call a 'year', given by (5.145) (refs 40, 41).

This result, together with (5.49), may have a deeper significance than the purely astronomical. It has been argued by some historians of science that the homogeneity of the thread linking so many mythological elements in ancient human cultures can be traced to an origin in their shared experience of striking astronomical phenomena. If this were true (and it is not an issue that we wish to debate here), then the results (5.145) and (5.49) for t_year and t_day indicate that there are Weak Anthropic reasons why any life-form on a solid planet should experience basically similar heavenly phenomena. They will record seasonal variations and develop systems of time-reckoning that are closely related to our own. If astronomical experiences are a vital driving force in primitive cultural development, then we should not be surprised to find that planetary-based life-forms possess some cultural homogeneity. This homogeneity would be a consequence of the fact that the timescales t_day and t_year are strongly constrained to lie close to the values we observe because they are determined by the fundamental constants of Nature. Any biological phenomenon whose growth cycle and development is influenced by seasonal and diurnal variations will also reflect this universality.

The fact that life can develop on a planet suitably positioned in orbit about a stable, long-lived star relies on the close proximity of the spectral temperature of starlight to the molecular binding energy, ~1 Rydberg. Were it greatly to exceed this value, living organisms would be either sterilized or destroyed; were it far below it, the delicate photochemical reactions necessary for biology to flourish would proceed too slowly. A good example is the human eye: the eye is receptive only to that narrow wave-band of electromagnetic radiation between 4000 and 8000 Å which we call the 'visible' region. Outside this wave-band, electromagnetic radiation is either so energetic that the rhodopsin molecules in the retina are destroyed, or so unenergetic that these molecules are not stimulated to undergo the quantum transitions necessary to signal the reception of light to the central nervous system. Press and Lightman (ref. 41) have shown that the relation between the biological temperature, T_B, and the spectral temperature (that is, the surface temperature of the Sun),

T_B ~ T_s,   (5.146)

where T_s is given by (5.135) or (5.136), is due to a real coincidence. We can even deduce something about the weather systems on habitable planets. The typical gas velocity in an atmosphere will be set by the sound speed at the biologically habitable temperature T_B. This is just

v ~ (T_B/m_N)^{1/2}.   (5.147)
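As a sketch of the 'year' calculation, one can check Kepler's third law directly for an Earth-like orbit; the solar-system values below are our illustrative inputs:

```python
# Worked check of the 'year': Kepler's third law, T = 2*pi*sqrt(a^3 / (G*M)),
# for a habitable orbit of a = 1 AU around a star of 1 M_sun.
import math

G     = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
AU    = 1.496e11    # m

period_s  = 2 * math.pi * math.sqrt(AU**3 / (G * M_SUN))
period_yr = period_s / 3.156e7   # seconds per year
print(f"orbital period = {period_yr:.2f} yr")  # ~1.00 yr
```

The orbital period comes out at one year, as it must for the Earth's own orbit.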


5.7 Star Formation

He made the stars also. Genesis 1 : 16

Our discussion of stellar structure implicitly assumes that one begins with some spectrum of massive bodies: some with initial mass far in excess of α_G^{-3/2} m_N, perhaps, some much smaller. Only those with mass close to α_G^{-3/2} m_N will evolve into main-sequence stars, because only bodies with mass close to this value get hot enough to initiate nuclear burning and yet remain stable against disruption by radiation pressure. However, what if some prior mechanism were to ensure that no protostars could exist with masses close to α_G^{-3/2} m_N? This brings us face to face with the problem of star formation—a problem that is complicated by the possible influence of strong magnetic or rotational properties of the protostellar clouds. One clear-cut consideration has been brought to bear on the problem by Rees (ref. 91). His idea develops a previous suggestion of Hoyle (ref. 92), that stars are formed by the hierarchical fragmentation of gaseous clouds (ref. 93). A collapsing cloud will continue to fragment while it is able to cool in the time it takes to collapse gravitationally. If the fragments radiate energy at a rate per unit area close to that of a true black body, then they will be sufficiently opaque to prevent radiation leaking out from the interior, and cooling will be significantly inhibited. Once the fragments begin to be heated up by the trapped radiation, the pressure builds up sufficiently to support the cloud against gravity and a protostar can form. These simple physical considerations enable the size of protostellar fragments to be estimated: at any stage during the process of fragmentation, the smallest possible fragment size is given by the Jeans mass (the scale over which pressure forces balance gravitational attraction). If the first opaque fragments to form have temperature T then, since they must behave like black bodies, they will be cooling at a rate ~T^4/R_J per unit volume, where R_J is the Jeans length—the depth from which radiation escapes. The cooling time in the cloud is given by the ratio of the thermal energy density to the radiative cooling rate,

t_cool ~ nT/(T^4/R_J) ~ n R_J T^{-3},   (5.148)

where n is the particle number density in the cloud. In order for cooling to occur, the cooling time must be shorter than the time for gravitational collapse,

t_g ~ (G n m_N)^{-1/2}.   (5.149)

This is the case if, by (5.148) and (5.149),

n ≲ T^{5/2} m_N^{1/2},


and so the collapsing cloud must cease to fragment when the average mass of the fragments is

M_f ~ (T/m_N)^{1/4} α_G^{-3/2} m_N ~ (T/m_N)^{1/4} M_⊙.   (5.150)

The inevitable size of protostellar fragments is relatively insensitive to temperature over the range of conditions expected in such clouds, T ~ 10-10^4 K. Further fragmentation is not possible because the fragments have reached the maximum rate of energy disposal. It is interesting that the oldest stars must therefore have masses ≲ α_G^{-3/2} m_N.
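A numerical sketch of (5.150), with SI constants supplied by us and the normalization α_G^{-3/2} m_N ~ M_⊙ taken from the text, shows how weakly the fragment mass depends on the cloud temperature:

```python
# Opacity-limited fragment mass, eq. (5.150): M_f ~ (T/m_N)^(1/4) * alpha_G^(-3/2) * m_N,
# where (T/m_N) means the temperature in units of the nucleon rest energy m_N c^2.
K_B     = 1.381e-23    # J/K
M_N     = 1.6726e-27   # kg
C       = 2.998e8      # m/s
M_SUN   = 1.989e30     # kg
ALPHA_G = 5.9e-39      # gravitational fine structure constant (from the text)

def fragment_mass(T_kelvin):
    """Smallest fragment mass (kg) at cloud temperature T."""
    t_ratio = K_B * T_kelvin / (M_N * C**2)
    return t_ratio**0.25 * ALPHA_G**-1.5 * M_N

for T in (10, 1e4):
    print(f"T = {T:7.0f} K : M_f ~ {fragment_mass(T)/M_SUN:.4f} M_sun")
```

Over three decades in temperature the fragment mass changes only by the factor (10^3)^{1/4} ≈ 5.6, illustrating the insensitivity claimed in the text.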

5.8 White Dwarfs and Neutron Stars

For a body of density 10^{12} gm/cc—which must be the maximum possible density, as its particles would then be all jammed together—the radius need only be 400 kilometres. This is the size of the most consolidated body. Sir Oliver Lodge (1921)

The picture of a star we have sketched above cannot be sustained indefinitely. Eventually the sources of thermonuclear energy within the star will be exhausted, all elements will be systematically burnt to iron by nuclear fusion, and no means of pressure support remains available to the dying star. What is its fate? We have already said enough to provide a partial answer. According to the energy equation (5.108) it should evolve towards a configuration wherein the electron degeneracy pressure balances the inward attraction of gravity. This, we recall, was the criterion for the existence of a planet. However, planets are cold bodies; that is, their thermal energies are far smaller than the rest-mass energies of the electrons that contribute degeneracy pressure. If, on the other hand, a body is warm enough for the electrons to be relativistic (T ≳ m_e), then the electron degeneracy energy is no longer given by ~p^2 m_e^{-1} ~ d^{-2} m_e^{-1}, where d is the mean electron separation, but rather by the relativistic value ~d^{-1}. The equilibrium state that results is called a white dwarf (refs 94, 95) and has a mass and radius given by

M_WD ~ α_G^{-3/2} m_N,   (5.151)

R_WD ~ α_G^{-1/2} m_e^{-1}.   (5.152)

Thus, although they are of similar mass to main-sequence stars, white dwarfs have considerably smaller radii: they are roughly the size of planets but a million times heavier, (5.153). They are therefore far denser than ordinary stars, and the density of a white dwarf is roughly

ρ_WD ~ m_e^3 m_N ~ 10^6 gm cm^{-3}.   (5.154)

Figure 5.12 illustrates the details of the mass-size plane in the neighbourhood that includes stars, planets and white dwarfs.

Figure 5.12. Detailed view of the mass-size diagram in the region containing planetary and white dwarf masses (see ref. 38).
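The estimate (5.154) amounts to packing one nucleon into each cubed electron Compton wavelength. A quick numerical sketch (our constants, cgs units) lands within an order of magnitude of the value quoted:

```python
# Order-of-magnitude check of the white-dwarf density (5.154): rho ~ m_e^3 m_N in
# natural units, i.e. one nucleon mass per cubed (reduced) electron Compton wavelength.
HBAR = 1.0546e-27   # erg s (cgs)
C    = 2.998e10     # cm/s
M_E  = 9.109e-28    # g
M_N  = 1.6726e-24   # g

lambda_e = HBAR / (M_E * C)       # reduced electron Compton wavelength, cm
rho_wd   = M_N / lambda_e**3      # g/cm^3
print(f"rho_WD ~ {rho_wd:.1e} g/cm^3")  # ~1e7
```

The crude estimate gives ~10^7 g cm⁻³, within an order of magnitude of the 10^6 gm cm⁻³ of (5.154); such dimensional estimates carry undetermined numerical factors.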

Although these objects appear bizarre, they do not involve general-relativistic considerations, because their binding energy per unit mass is ~m_e/m_N and thus much less than unity. Now, as Chandrasekhar first discovered (refs 86, 94), the mass M_WD represents an upper limit to the mass which can be supported by electron degeneracy pressure. Heavier bodies will continue to collapse to densities in excess of ρ_WD ~ 10^6 gm cm^{-3}. In that situation it becomes energetically favourable for the degenerate electrons to combine with nuclear protons to form neutrons (because of the 'coincidence' that m_n − m_p ~ m_e) when the electron Fermi energy reaches ~1 MeV, so that

e^- + p → n + ν − 0.8 MeV.   (5.155)

The electron number density therefore drops and, along with it, the electron degeneracy pressure. But eventually the neutrons will become so closely packed that their degeneracy pressure becomes significant, because they are initially non-relativistic. The fluid, or perhaps solid, of degenerate neutrons will have a degeneracy energy given by the Exclusion Principle as ~r_0^{-2} m_N^{-1}, where r_0 is the mean inter-nucleon separation.


The balance between gravity and neutron degeneracy creates a new equilibrium state that is called a neutron star. For equilibrium we require that

r_0^{-2} m_N^{-1} ~ G M m_N/R,   (5.156)

where N = M/m_N is the number of nucleons in the neutron star and r_0 = N^{-1/3} R, so

r_0 ~ m_N^{-1} α_G^{-1} N^{-2/3}.   (5.157)

The radius of the neutron star is thus

R_NS = r_0 N^{1/3} ~ m_N^{-1} α_G^{-1} N^{-1/3} ~ 10 (M/M_⊙)^{-1/3} km,   (5.158)

and, until ρ reaches nuclear density, its density will be

ρ_NS ~ m_N^4 (M/M_⊙)^2,   (5.159)

and the ratio of its size to that of white dwarfs is simply

R_NS/R_WD ~ m_e/m_N.   (5.160)
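Restoring the factors of ħ and c that the natural units of (5.158) suppress, a short calculation (our constants; a 1 M_⊙ star assumed) reproduces the kilometre scale:

```python
# Order-of-magnitude check of the neutron-star radius (5.158):
# R_NS ~ (hbar / m_N c) * alpha_G^(-1) * N^(-1/3), for N nucleons.
HBAR    = 1.0546e-27   # erg s (cgs)
C       = 2.998e10     # cm/s
M_N     = 1.6726e-24   # g
M_SUN   = 1.989e33     # g
ALPHA_G = 5.9e-39      # gravitational fine structure constant

N    = M_SUN / M_N                              # nucleons in a 1 M_sun neutron star
r_ns = (HBAR / (M_N * C)) / ALPHA_G / N**(1/3)  # cm
print(f"R_NS ~ {r_ns/1e5:.1f} km")              # a few km
```

The estimate lands at a few kilometres, the right order beside the ~10 km of (5.158); the undetermined numerical coefficients in the dimensional argument account for the remaining factor.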

If N ~ α_G^{-3/2}, as it will be for typical stars, then we see that neutron stars are larger than their gravitational radii, R_g ~ GM, and so they are objects in which general relativity is unimportant. If a neutron star is only slightly more massive than M ~ 3 M_⊙, the neutrons within it become relativistic and are again unstable to gravitational collapse. When this stage is reached, no known means of pressure support is available to the star and it must collapse catastrophically. This dynamical state, inevitable for all bodies more massive than a few solar masses, leads to what is called a black hole (ref. 96). If we assume that a neutron star has evolved from a typical main-sequence star with R_* ~ R_⊙ ~ 10^{11} cm and M_* ~ M_⊙ ~ 10^{33} gm, and that both mass and angular momentum were conserved during its evolution (which is rather unlikely), then the frequency of rotation of the neutron star will be related to that of the original star, ν_*, by

ν_NS ~ ν_* (R_*/R_NS)^2.   (5.161)

The Sun rotates roughly once a month and, if it is typical of main-sequence stars, this suggests ν_* ~ 5×10^{-7} s^{-1} and ν_NS ~ 10^4 s^{-1}. The stipulation that centrifugal forces not be so large that equatorial regions become unbound places an upper bound on ν_NS of (ref. 98)

ν_NS ≲ (G M_NS/R_NS^3)^{1/2} ~ 10^4 s^{-1}.   (5.162)
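The spin-up estimate (5.161) can be checked in one line; the numerical inputs are the ones quoted in the text:

```python
# Spin-up by angular-momentum conservation, eq. (5.161): nu_NS ~ nu_* (R_*/R_NS)^2.
R_STAR  = 1.0e11   # progenitor radius, cm (~R_sun)
R_NS    = 1.0e6    # neutron star radius, cm (~10 km)
NU_STAR = 5e-7     # progenitor spin frequency, 1/s (roughly once a month)

nu_ns = NU_STAR * (R_STAR / R_NS)**2
print(f"nu_NS ~ {nu_ns:.0e} s^-1")  # ~5e3
```

The contraction factor of 10^5 in radius boosts the spin by 10^{10}, giving the ~10^4 s⁻¹ rotation frequency quoted.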

The neutron star introduces a qualitatively different type of astronomical object from those discussed up until now—an object whose average density is close to that of the atomic nucleus and in whose interior nuclear timescales determine events of physical interest. For these reasons many scientists and science-fiction writers have speculated that if living systems could be built upon the strong rather than the electromagnetic interaction, then neutron stars might play for them the role that planets play for us. Freeman Dyson and others (ref. 99) have suggested that intelligent 'systems' which rely upon the strong interaction for their organization might reside near or on the surface of neutron stars. It appears that no quantitative investigations have been made to follow up this intriguing speculation, and so we shall sketch some results that give a feel for the type of systems that are allowed by the laws of physics.

Analysing the surface conditions likely on a neutron star is a formidable problem, principally because of the huge magnetic fields anticipated there. Just as the rotation frequency spins up during the contraction of main-sequence stars into neutron stars, so the magnetic field, B, amplifies with radius, R, as B ∝ R^{-2}, and fields as large as ~10^{12} gauss could result from an initial magnetic field close to the solar value (a magnetic field of ~10^{12} gauss on the neutron star would contribute an energy ~10^{42} erg, far smaller than the gravitational energy ~10^{53} erg and possible rotational energy ~2×10^{53} erg). However, for the moment, let us ignore the magnetic field. The neutron star will possess a density and composition gradient varying from the centre to the boundary. The general form of this variation is probably like that shown in Figure 5.13 (refs 97, 100). In the outer region, where the density is less than ~10^4 gm cm^{-3}, electrons are still bound to nuclei, the majority of which are iron. A little deeper into the crust there should exist a sea of free electrons alongside the lattice of nuclei. The estimated surface temperature is ~5×10^6 K, much less than the melting temperature of the nuclei there. Above the outer crust there will exist a thin atmosphere of charged and neutral particles. This atmosphere is characterized by a scale height h, over which temperatures and pressures vary significantly, defined by

h ~ T/(m g),   (5.163)

where g is the acceleration due to gravity on the surface and m the mean particle mass (so, for example, in the Earth's atmosphere with T ~ 290 K and g ~ 980 cm s^{-2},


Figure 5.13. Schematic slice through a neutron star displaying the outer crust, the liquid interior, and the various theoretical alternatives suggested for the core (solid neutrons, or pion condensate, or hyperons). (Reproduced, with permission, from the Annual Review of Nuclear and Particle Science, Vol. 25, copyright 1975 by Annual Reviews Inc.; see ref. 97.)

one has h ~ 50-100 km). On the neutron star surface T_s ~ 10^6 K and

g_NS ~ G M_NS/R_NS^2 ~ 5×10^{13} cm s^{-2},   (5.164)

and so

h_NS ~ T_s/(m_N g_NS) ~ 1 cm,   (5.165)

with T_s ~ ε m_e and ε ~ 1.5×10^{-4}. Just as we were able to calculate the height of mountains on planetary surfaces by considering the maximum stress that can be supported by solid atomic material (ρ ~ 1 gm cm^{-3}) at their bases, so we can estimate the largest 'mountains' that could exist on a neutron star (ref. 101). The yield stress, Y, or bulk modulus, at the surface, (5.166), is a small fraction η ~ 0.01 of the binding-energy density of the lattice at the average inter-nucleon separation. The maximum height of a mountain strong enough to withstand the gravitational force at its base is therefore

h_max ~ Y/(ρ g_NS) ~ 20 cm.   (5.167)


If we assume that neutron-star 'inhabitants' are subject to constraints analogous to those on atomic systems on planetary surfaces—that is, they do not grow so tall that on falling they break their atomic bonds, or make themselves susceptible to unacceptable bending moments when slightly displaced from the vertical—then their maximum height, given by (5.168), is microscopic if the energy of their bonding is ~ε α^2 m_e. Note that on the surface of the neutron star nuclear 'life' based on the strong interaction is not likely. Only in the deep interior, where densities approach ~10^{14} gm cm^{-3}, would such a possibility be realized. The mildest conditions allowing it might be those just about 1 km from the boundary, at a radius ~0.9 R_NS, where ρ ~ 10^{14} gm cm^{-3}. Suppose, for amusement's sake, nuclear life existed there with bonding—or communication networks—that would be destroyed by stresses exceeding the nuclear binding energy. By equating the gravitational stress on a nuclear system of size Λ situated at a radius ~0.9 R_NS from the centre with its bond energy, we find its maximum size, (5.169), smaller still than that of an atomic being on the surface. If a nuclear 'civilization' formed a shell in the neutron star interior of thickness ~Λ, it would enclose a total mass M_shell ~ ρ Λ (0.9 R_NS)^2.

The effective coupling α(Q^2) defined by (5.200) increases with Q^2, with α(Q^2) ≥ α. For example, at Q^2 = (10 GeV)^2 we find α(10 GeV) = 0.0074 = 1/135.1. The perturbation analysis used to derive (5.200) breaks down when the denominator vanishes; that is, when Q^2 ~ m_e^2 exp(3π/α). This corresponds to extraordinarily high energies, at which the neglect of gravity is unwarranted and the theory used to derive (5.200) is invalid. In the case of the strong interaction, although a quark will have its bare colour charge screened by quark-antiquark pairs, this is not the only consideration. Indeed, if it were, the strong coupling α_s(Q^2) would also increase above α_s at high energy, and we would be no nearer unification with the electromagnetic force. However, whereas the photons which mediate the electromagnetic interaction do not carry the electromagnetic charge, the gluons mediating the strong force do carry the colour quantum charge. Therefore the gluons, unlike the photons, are self-interacting. This enables the gluon field to create a colour deficit near a quark, and so there can exist anti-screening of the quark's bare colour charge when that
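The figures quoted for α(10 GeV) can be reproduced from the standard one-loop running of the QED coupling; the explicit formula below is our reconstruction of (5.200), the expression whose denominator vanishes at Q^2 ~ m_e^2 exp(3π/α):

```python
# One-loop running of the QED coupling (assumed form of eq. 5.200):
#   alpha(Q^2) = alpha / (1 - (alpha / 3*pi) * ln(Q^2 / m_e^2))
import math

ALPHA   = 1 / 137.036   # fine structure constant at Q ~ m_e
M_E_GEV = 0.000511      # electron mass in GeV

def alpha_running(Q_gev):
    log_term = math.log(Q_gev**2 / M_E_GEV**2)
    return ALPHA / (1 - (ALPHA / (3 * math.pi)) * log_term)

a10 = alpha_running(10.0)
print(f"alpha(10 GeV) = {a10:.4f} = 1/{1/a10:.1f}")  # ~0.0074, i.e. ~1/135
```

The result, 0.0074 ≈ 1/135, matches the value quoted in the text to within rounding, supporting the reconstructed form of (5.200).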


charge is smeared out. A quark can emit gluons which carry colour, and so the quark colour is spread, or smeared out, over a much larger volume; this decreases the effective coupling as Q^2 increases. Incoming quarks will then see a colour field that is stronger outside than within the local smeared-out colour charge. Thus, although the production of quark-antiquark pairs strengthens the strong interaction at high Q^2, because the interaction distance is then smaller, the production of gluon pairs acts in the opposite sense, to disperse colour and weaken the effective interaction at high Q^2. The winner of these two trends is determined, not surprisingly, by the population of coloured gluons relative to that of quark flavours, f. The gluons will dominate if the number of quark flavours is less than 17. If, as we believe, this is the case, the strong coupling falls with increasing energy according to (5.210), and this leads to the relation (5.211). For example, with the actual values for α_G and m_e/m_N one obtains (5.212).

Rozenthal has pointed out that if one takes a closed universe of mass M, so that its present mass can, using Dirac's observation, be written M ~ α_G^{-2} m_N, then, in conjunction with the requirement that the total lifetime of the closed universe exceed its present age, t_0, we have (compare equation (4.23)) the coincidence (5.214); that is, a relation, (5.215), bounding α_G by a combination of α and the mass ratio m_e/m_N. Since this mass ratio is itself given in unified theories by an exponential function of α^{-1}, we can write the coincidence as a relation, (5.216), which ties α_G to α alone.

In summary, grand unified theories allow very sharp limits to be placed on the possible values of the fine structure constant in a cognizable universe. The possibility of doing physics on a background space-time at the unification energy, and the existence of stars made of protons and neutrons, enclose α in the niche

1/180 ≲ α ≲ 1/85.   (5.217)

These unified theories also show us why we observe the World to be governed by a variety of 'fundamental' forces of apparently differing strengths: inevitably, we must inhabit a low-temperature world, with T far below the critical temperature T_c at which the underlying symmetry is restored, and at these low energies that symmetry is hidden; instead we observe only its spontaneously broken forms. There are further consequences of grand unified theories for cosmology. Most notably, the simultaneous presence of baryon-number-, CP- and C-violating interactions makes it possible for us to explain the observed baryon asymmetry of the Universe—the overt propensity for matter rather than antimatter in the Universe. This leads us to consider next what we know of cosmology.

In this chapter we have shown how it is possible to construct the gross features of the natural world around us from the knowledge of a few invariant constants of Nature. The sizes of atoms, people, and planets are not accidental, nor are they the inevitable result of natural selection. Rather, they are consequences of inevitable equilibrium states between competing natural forces of attraction and repulsion. Our study has shown us, in a rough way, where natural selection stops. It has enabled us to separate those aspects of Nature which we should regard as coincidences from those which are inevitable consequences of fundamental forces and the values of the constants of Nature. We have also been able to ascertain which invariant combinations of physical constants play a key


role in making the existence of intelligence possible. This possibility appears to hinge upon a number of unrelated coincidences whose existence may or may not be inevitable. In our survey we have ranged from the scale of elementary particles to stars. We stopped there for a reason; beyond the scale of individual stars it is known that cosmological coincidences and initial conditions may also play a major role in rendering the Universe habitable by intelligent observers. In the next chapter we shall investigate these interconnections in some detail.

References

1. G. Johnstone Stoney, Phil. Mag. (ser. 5) 11, 381 (1881); Trans. R. Dublin Soc. 6 (ser. 2), Pt xiii, 305 (1900).
2. L. J. Henderson, The fitness of the environment (Harvard University Press, Mass., 1913).
3. W. Paley, Natural theology, Vol. 3 of The complete works of William Paley (Cowie, London, 1825).
4. J. D. Barrow, Quart. J. R. astron. Soc. 22, 388 (1981).
5. I. Newton, Philosophiae naturalis principia mathematica II, prop. 32 (1713), transl. A. Motte (University of California Press, Berkeley, 1946).
6. J. B. Fourier, Théorie analytique de la chaleur (1822), Chapter 2, §9. For a detailed account of modern dimensional methods see R. Kurth, Dimensional analysis and group theory in astrophysics (Pergamon, Oxford, 1972).
7. An interesting discussion of this was given by A. Einstein, Ann. Physik 35, 687 (1911). See also section 4.8 of this book for a possible anthropic explanation.
8. Adapted from J. Kleczek, The universe (Reidel, Dordrecht, 1976), p. 218.
9. G. Johnstone Stoney, Phil. Mag. (ser. 5) 11, 381 (1881). This work was presented earlier at the Belfast meeting of the British Association in 1874.
10. op. cit., p. 384.
11. M. Planck, The theory of heat radiation, transl. M. Masius (Dover, NY, 1959); based on lectures delivered in 1906-7 in Berlin, p. 174.
12. The initials of the celebrated Mr. C. G. H. Tompkins, a bank clerk with an irrepressible interest in modern science, were given by these constants. For an explanation see Mr. Tompkins in paperback by G. Gamow (Cambridge University Press, Cambridge, 1965), p. vii.
13. A. Sommerfeld, Phys. Z. 12, 1057 (1911).
14. E. Fermi, Z. Physik 88, 161 (1934); transl. in The development of weak interaction theory, ed. P. K. Kabir (Gordon & Breach, NY, 1963).
15. The factor 2^{-1/2} is purely conventional; for details see D. C. Cheng and G. K. O'Neill, Elementary particle physics: an introduction (Addison-Wesley, Mass., 1979).
16. These expressions are in rationalized units, g^2(rat) = 4π g^2(unrat).
17. P. Langacker, Phys. Rep. 72, 185 (1981).
18. M. Born, Proc. Indian Acad. Sci. A 2, 533 (1935).


19. For a good overview see S. Gasiorowicz, The structure of matter: a survey of modern physics (Addison-Wesley, Mass., 1979). For historical background to the Bohr theory see M. Jammer, The conceptual development of quantum mechanics (McGraw-Hill, NY, 1966), and Sources of quantum mechanics, ed. B. L. van der Waerden (Dover, NY, 1967).
20. W. E. Thirring, Principles of quantum electrodynamics (Academic Press, NY, 1958).
21. For a discussion of the structure of materials, see D. Tabor, Gases, liquids and solids, 2nd edn (Cambridge University Press, Cambridge, 1979).
22. A. Holden, Bonds between atoms (Oxford University Press, Oxford, 1977), p. 15.
23. F. Dyson quotes Ehrenfest: '. . . why are atoms themselves so big? . . . Answer: only the Pauli Principle, "No two electrons in the same state." That is why atoms are so unnecessarily big, and why metal and stone are so bulky.' J. Math. Phys. 8, 1538 (1967).
24. F. Kahn, in The emerging universe, ed. W. C. Saslaw and K. C. Jacobs (University of Virginia Press, Charlottesville, 1972).
25. T. Regge, in Atti del Convegno Mendeleeviano, Accad. delle Scienze di Torino (1971), p. 398.
26. V. F. Weisskopf, Science 187, 605 (1975).
27. J. M. Pasachoff and M. L. Kutner, University astronomy (Saunders, Philadelphia, 1978).
28. H. Dehnen, Umschau 23, 734 (1973); Konstanz Universitätsreden No. 45 (1972).
29. The height allowed will be slightly less than ~30 km because the rock is not initially at zero temperature and so does not require so much energy to liquefy.
30. The melting temperature of quartz is 1968 K according to D. W. Hyndman, Petrology of igneous and metamorphic rocks (McGraw-Hill, NY, 1972).
31. B. J. Carr and M. J. Rees, Nature 278, 605 (1979).
32. M. H. Hart, Icarus 33, 23 (1978).
33. F. W. Went, Am. Scient. 56, 400 (1968).
34. A. V. Hill, Science Prog. 38, 209 (1950).
35. J. B. S. Haldane, in Possible worlds (Harper & Bros., NY, 1928).
36. L. J. Henderson, Proc. natn. Acad. Sci., U.S.A. 2, 645 (1916).
37. W. D'A. Thompson, On growth and form (Cambridge University Press, London, 1917).
38. F. Moog, Scient. Am. 179, 5 (1948); C. J. v. d. Klaauw, Arch. neerl. Zool. 9, 1 (1948).
39. R. M. Alexander, Size and shape (E. Arnold, Southampton, 1975).
40. A. Lightman, Am. J. Phys. 52, 211 (1984).
41. W. H. Press and A. Lightman, Phil. Trans. R. Soc. A 310, 323 (1983).
42. G. Galileo, Two new sciences, English transl. S. Drake (University of Wisconsin Press, Madison, 1974); the first edition was published in Italian as Discorsi e dimostrazioni matematiche, intorno a due nuove scienze attenenti alla mecanica ed ai movimenti locali (1638); the quotation is from p. 127.
43. W. Press, Am. J. Phys. 48, 597 (1980). The size estimates given by Press are a better estimate of the size of a creature able to support itself against
290

The Weak Anthropic Principle in Physics and Astrophysics 362 gravity by the surface tension of water which is some fraction of the intermolecular binding energy, say ea m per unit area, and Press's size limits, ~ 1 cm, more realistically correspond to the maximum dimension of pond-skaters rather than people. A. Rauber showed that elephants are quite close to the maximum size allowed for a land-going animal in Morph. Jb. 7, 327 (1882). Notice that some ingenious organisms (sponges) have evolved means of increasing their surface areas without inflating their masses by the full factor ~(area) . The bathroom towel exploits this design feature. J. Woodhead-Galloway, Collagen: the anatomy of a protein (Arnold, Southampton, 1981). However, it appears that, in general, good resistance to crack and compression tend to be mutually exclusive features of structures. L. Euler, Acta acad. sci. imp. petropol. (1778), p. 163. W. Walton, Quart. J. Math. 9, 179 (1868). A. G. Greenhill, Proc. Camb. Phil. Soc. 4 (Pt II), 5 (1881). H. Lin, Am. J. Phys. 50, 72 (1982). A. Herschmann, Am. J. Phys. 42, 778 (1974), E. D. Yorke, Am. J. Phys. 41, 1286 (1973). T. McMahon, Science 179, 1201 (1973). H. J. Metcalf, Topics in biophysics (Prentice-Hall, NJ, 1980). Ref. 42, p. 129. E. M. Purcell, Am. J. Phys. 45, 3 (1977). Note that the resistive drag force, F <x (cross-sectional area) x (density) x (velocity) , is only independent of the viscosity of the ambient medium when the velocities are large. When they are small the familiar Stokes law holds with F oc (radius) x (velocity) x (viscosity) and this is exploited in centrifuges: since macromolecules have mass a (radius) they will sediment out at rates proportional to their size. H.-C. Berg, Nature 254, 389 (1975); Ann. Rev. Biophys. Biol. 4, 119 (1975): Scient. Am. 233, 36 (Aug. 1975). J. M. Smith, Mathematical ideas in biology (Cambridge University Press, Cambridge, 1980). C. J. Pennycuik, Col. livia. J. Exp. Biol. 49, 527 (1968). 
Actually only 1% of the work done by the human heart is deployed to accelerate blood. Most of the rest overcomes viscous resistance to the blood flow through small blood vessels. See M. Kleiber, Physiol. Rev. 27, 511 (1947); Scale effects in animal locomotion, ed. T. Pedley (Academic Press, NY, 1927); P. Altman and D. Dittmer, Biology data book 2nd edn (Federation of American Societies for Experimental Biology, Bethseda, Maryland, 1974). There is excellent agreement with the data over a mass range of ~ 1 0 - 1 0 kg encompassing mice, birds, rabbits, dogs, physicists, and elephants. Jellyfish are an obvious exception to this statement. They grow hundreds of times larger but are exceptionally constructed with all their cells superficially situated within ~ 1 0 c m of the water from which they extract oxygen. E. Schrodinger, What is life? (Cambridge University Press, Cambridge, 1944). Despite the huge range of aniamal and plant sizes these organisms all possess cells of roughly the same size. The difference in their gross size is due to variations in cell number. A typical cell has a volume of about a thousand cubic microns. 2

44.

e

3/2

45. 46. 47. 48. 49. 50. 51.

d

2

d

3

52. 53. 54. 55.

-1

56. 57. 58.

_2

5

290 The Weak Anthropic Principle in Physics and Astrophysics

363

59. N. W. Pirie, Ann. Rev. Microbiol. 27, 119 (1973); W. R. Stahl, J. Theor. Biol. 8, 371 (1965); H. J. Morowitz, Prog. Theor. Biol. 1, 35 (1967). 60. H. Yukawa, Proc. Phys. Math. Soc. Japan, 17, 48 (1935). The analogy between chemical and nuclear forces was evident to Heisenberg prior to the work of Yukawa. The Yukawa model can only explain a few aspects of the nuclear force. There exist many mesons besides the IT. Other facts must be taken into account to describe the short range ( T as the radiation era and it is in this period that the most interesting interconnections between cosmology and elementary particle physics lie. At times prior to rec


The Anthropic Principles in Classical Cosmology

If the curvature parameter k is negligible in the Friedman equation, the expansion of an isotropic, homogeneous Universe filled with radiation has the simple solution

R(t) ∝ t^{1/2};    H = 1/(2t)    (6.51)

The energy density in the radiation-dominated phase of the early universe is dominated by black-body radiation. There may exist several different equilibrium species of elementary particles (either interacting or non-interacting) and in general we write

ρ = (g/2) a T⁴ = 3p    (6.52)

where g is the number of helicity states—the effective number of degrees of freedom—so, since in general this counts bosons and fermions,

g = g_b + (7/8) g_f    (6.53)

where b = bosons and f = fermions. During the radiation era (6.52), (6.3) and (6.8) yield a solution which, when combined with

T ∝ R⁻¹    (6.54)

gives the temperature–time adiabat as

t = 2.42 g^{-1/2} (T/1 MeV)⁻² s    (6.55)

In Planck units (c = ℏ = 1, m_p = G^{-1/2} ~ 10⁻⁵ gm ~ 10¹⁹ GeV, k_B = 1) the temperature–time adiabat is

t ~ 0.3 m_p g^{-1/2} T⁻²    (6.56)

This establishes the essential quantitative features of the 'standard' hot Big Bang model. Some further pieces of observational evidence that support it will be introduced later. For the moment we stress its special character: it is homogeneous and isotropic, has an entropy per baryon close to 10⁹ and is expanding at a rate that is irresolvably close to the critical divide that separates an infinite future from a finite one. We now turn to examine some of these key properties of the Universe with a view to determining which of them are important for the process of local biological evolution. This will enable us to identify those aspects of the Universe, our discovery of which may in some sense be necessary consequences of the fact that we are observers of it.
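As an illustrative numerical check, the adiabat (6.55) can be evaluated directly. The values of g used below (photons plus electron–positron pairs and three light neutrino species around T ~ 1 MeV) are standard assumptions, not figures taken from the text:

```python
# Order-of-magnitude check of the temperature-time adiabat
# t = 2.42 g^(-1/2) (T / 1 MeV)^(-2) s, eq. (6.55).

def adiabat_time(T_MeV, g):
    """Cosmic time (seconds) when the radiation temperature is T_MeV, for g helicity states."""
    return 2.42 * g ** -0.5 * T_MeV ** -2.0

# Before e+/e- annihilation: photons (g_b = 2) plus e+/e- (g_f = 4) and
# three light neutrino species (g_f = 6), so g = 2 + (7/8) * 10 = 10.75.
g = 2 + (7.0 / 8.0) * 10
t_1MeV = adiabat_time(1.0, g)    # just under a second: the weak freeze-out epoch
t_01MeV = adiabat_time(0.1, g)   # about a minute: nucleosynthesis under way
print(t_1MeV, t_01MeV)
```

This recovers the familiar statement that the Universe is roughly one second old at T ~ 1 MeV.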


6.3 The Size of The Universe

I don't pretend to understand the Universe—it's a great deal bigger than I am. T. Carlyle

In several other places we have used the fact of the Universe's size as a striking example of how the Weak Anthropic Principle connects aspects of the Universe that appear, at first sight, totally unrelated. The meaning of the Universe's large size has provided a focus of attention for philosophers over the centuries. We find a typical discussion in Paradise Lost, where Milton evokes Adam's dilemma: why should the Universe serve the Earth with such a vast number of stars, all[27]

. . . merely to officiate light
Round this opacious earth, this punctual spot
One day and night, in all their vast array
Useless besides?

Perplexed, he tells Raphael that he cannot understand

How nature, wise and frugal, could commit
Such disproportions, with superfluous hand
So many nobler bodies to create?

The archangel replies only that 'Heaven's wide circuit' is evidence of 'The Maker's high magnificence'.[28] Adam's concern was shared by an entourage of philosophers, ancient and modern: if life and mind are important, or unique, why does their appearance on a single minor planet require a further 10²² stars as a supporting cast? In the past, as we saw in Chapter 2, this consideration provided strong circumstantial evidence against naive Design Arguments. However, the modern picture of the expanding universe that we have just introduced renders such a line of argument, at best, irrelevant to the question of Design.

Einstein's special theory of relativity unified the concepts of space and time into a single amalgam: space-time. The existence of an invariant quantity in Nature with the dimensions of a velocity (the velocity of light in vacuo, c) places space and time on an equal footing.[29] The size of the observable universe, Λ, is inextricably bound up with its age, through the simple relation

Λ = c t_u    (6.57)

The expanding Big Bang model, (6.22), allows us to calculate the total mass contained in this observable universe,

M_u ~ ρ_u Λ³ ~ c³ t_u G⁻¹    (6.58)

which yields

M_u ~ 10⁵ (t_u/1 s) M_☉    (6.59)

These relations display explicitly the connection between the size, mass and age of an expanding universe. If our Universe were to contain just a single galaxy like the Milky Way, containing 10¹¹ stars, instead of 10¹¹ such galaxies, we might regard this a sensible cosmic economy with little consequence for life. But a universe of mass 10¹² M_☉ would, according to (6.59), have expanded for only about a month. No observers could have evolved to witness such an economy-sized universe. An argument of this sort, which exploits the connection between the age of the Universe, t_u, and the global density of matter within it, was first framed by Idlis and Whitrow.[30] Later, it was stressed by Dicke and Wheeler as an explanation for Dirac's famous 'Large number coincidences'[31] (see Chapter 4). A minimum time is necessary to evolve astronomers by natural evolutionary pathways, and stars require billions of years, (~α²(m_N/m_e)² α_G⁻¹ m_N⁻¹), to transform primordial hydrogen and helium into the heavier elements of which astronomers are principally constructed. Thus, only in a universe that is sufficiently mature, and hence sufficiently large, can 'observers' evolve. In answer to Adam's question we would have to respond that the vastness of 'Heaven's wide circuit' is necessary for his existence on Earth. Later, we shall see that the use of (6.58) in this way relies upon particular properties of our Universe, like small anisotropy, close proximity to the critical density and simple space-time topology. It is also interesting to recall that even in 1930 Eddington entertained an Anthropic interpretation of cosmological models possessing long-lasting static phases due to the presence of a non-zero cosmological constant. He pointed out that if a period of ~10¹⁰ years had elapsed from the static state, astronomers would have to 'count themselves extraordinarily fortunate that they are just in time to observe this interesting but evanescent feature of the sky [the dimming of the stars]'.
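The size–mass–age relations (6.57)–(6.59) are easy to check numerically. The SI constants below are standard values, and the "economy-sized universe" of the text falls out directly:

```python
# Mass enclosed within the horizon, M_u ~ c^3 t / G, eqs. (6.58)-(6.59).
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m s^-1
M_sun = 1.989e30   # kg
yr = 3.156e7       # s

def mass_of_observable_universe(t_seconds):
    """Horizon mass in solar masses at cosmic age t (order-of-magnitude only)."""
    return c**3 / G * t_seconds / M_sun

# A present age of ~15 Gyr gives ~10^22 M_sun: ~10^11 galaxies of 10^11 stars.
M_now = mass_of_observable_universe(15e9 * yr)
# A universe containing only ~10^12 M_sun has an age of order a month:
t_small_days = 1e12 * M_sun * G / c**3 / 86400
print(f"{M_now:.1e} M_sun; {t_small_days:.0f} days")
```

The second figure is the book's point in miniature: a one-galaxy universe is far too young to contain astronomers.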

6.4 Key Cosmic Times

Since the universe is on a one-way slide towards a state of final death in which energy is maximally degraded, how does it manage, like King Charles, to take such an unconscionably long time a-dying? F. Dyson

The hot Big Bang cosmological model contains seven times whose relative sizes determine whether life can develop and continue. The first six


are all determined by microscopic interactions:

(a) t_ev: the minimum time necessary for life to evolve by random mutation and natural selection. We cannot, as yet, calculate t_ev from first principles. (See Section 8.7 for further discussion of this time-scale.)

(b) t_*: the main-sequence stellar lifetime, necessary to evolve stable, long-lived, hydrogen-burning stars like the Sun; t_* ~ α² (m_N/m_e)² α_G⁻¹ m_N⁻¹ ~ 10¹⁰ yr.

(c) t_eq: the time before which the expansion dynamics of the expanding universe are determined by the radiation, rather than the matter, content of the Universe. It depends on the observed entropy per baryon, S, and thus t_eq ~ S² α_G^{-1/2} m_N⁻¹ ~ 10¹² s.

(d) t_rec: the time after which the expanding Universe is cool enough for atoms and molecules to form; t_rec ~ S^{1/2} α⁻³ α_G^{-1/2} (m_N/m_e)^{3/2} m_N⁻¹ ~ 10¹² s.

(e) τ_N: the time for protons to decay; according to grand unified gauge theories this is ~10³¹ yr.

(f) t_pl: the Planck time, determined by the unique combination of the fundamental constants G, ℏ and c having the dimensions of a time; t_pl = (Gℏ/c⁵)^{1/2} ~ 10⁻⁴³ s.

(g) t_u: the present age of the Universe, t_u ~ (15 ± 3) × 10⁹ yr.

Of these fundamental times, only two are not expressed in terms of constants of Nature—the current age, t_u, and the biological evolution time, t_ev. From the list (a)–(g) we can deduce a variety of simple constraints that must be satisfied by any cognizable universe. If life requires nuclei and stellar energy sources then we must have

t_u, τ_N > t_ev, t_* > t_rec    (6.60)

t_eq ~ S^{3/2} α³ (m_e/m_N)^{3/2} t_rec    (6.61)

We shall see that in order for galaxies to form—and perhaps, therefore, stars—we require t_* > t_eq. We notice, incidentally, that

(t_rec/t_eq) ~ [S α² (m_e/m_N)]^{-3/2}

and the fact that t_rec ~ 10¹² s in our Universe is an immediate consequence of the fact that we have

S ~ α⁻² (m_N/m_e) ~ 10⁹    (6.62)

The condition that atoms and chemistry exist before all stars burn out requires t_* > t_rec, and leads to an upper bound on the value of S of

S ≲ α¹⁰ (m_N/m_e) α_G⁻¹    (6.63)

whilst the condition that stellar lifetimes exceed the radiation-dominated phase of the Universe, during which galaxy and star formation is suppressed, yields the requirement

S ≲ α (m_N/m_e) α_G^{-1/4}    (6.64)

The most powerful constraint, which was also derived in Chapter 5, arises if the proton is unstable with a lifetime of order that predicted by grand unified theories. In order that the proton lifetime exceed that of stars, t_*, we require

S ≲ α_G^{-1/2} (m_N/m_e)^{3/2} exp(−0.25 α⁻¹)    (6.65)

Again, we find the ubiquitous trio of dimensionless quantities, m_N/m_e, α and α_G appearing; however, on this occasion it is a property of the entire Universe that they place constraints upon, rather than the existence of local structures, as was their role in Chapter 5. So far, the parameter S, giving the number of photons per baryon in the Universe, has been treated as a free parameter that is an initial condition of the Universe and whose numerical value can only be determined by observation. Later, we shall see that grand unified gauge theories offer some hope that this quantity can be calculated explicitly in terms of other fundamental parameters like α and α_G.
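A rough numerical evaluation of the three bounds on S in (6.63)–(6.65) is instructive. The constants below (α, α_G, the nucleon-to-electron mass ratio) are standard values, and the exponents follow the order-of-magnitude relations of the text; this is a sketch, not a precise calculation:

```python
import math

# The three anthropic upper bounds on the entropy per baryon, S, eqs. (6.63)-(6.65).
alpha = 1 / 137.036   # fine structure constant
alpha_G = 5.9e-39     # gravitational fine structure constant, G m_N^2 / (hbar c)
R = 1836.15           # m_N / m_e
S_observed = 1e9      # observed photons per baryon

bound_63 = alpha**10 * R / alpha_G                           # atoms before stars burn out
bound_64 = alpha * R * alpha_G**-0.25                        # stars outlive the radiation era
bound_65 = alpha_G**-0.5 * R**1.5 * math.exp(-0.25 / alpha)  # protons outlive stars

print(f"{bound_63:.1e} {bound_64:.1e} {bound_65:.1e}")
```

The proton-decay bound comes out smallest and lies close to the observed S ~ 10⁹, which is why the text calls it the most powerful constraint.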

6.5 Galaxies

If galaxies did not exist we would have no difficulty in explaining the fact. W. Saslaw

We have already shown that the gross character of planetary and stellar bodies is neither accidental nor providential, but an inevitable consequence of the relative strengths of strong, electromagnetic and gravitational forces at low energies. It would be nice if a similar explanation could be provided for the existence and structure of galaxies and galaxy clusters. Unfortunately, this is not so easily done. Whereas the structure of celestial bodies up to the size of stars is well understood—aided by the convenient fact that we live on a planet close by a typical star—the nature of galaxies is not so clear-cut. It is still not known whether galaxies owe

The Anthropic Principles in Classical Cosmology

388

their sizes and shapes to special conditions at or near the beginning of the Universe (if such there was) or whether these features are conditioned by physical processes in the recent past. To complicate matters further, it is now suspected that the large quantities of non-luminous material in and around galaxies are probably non-baryonic in form.[21] If the electron neutrino were found to possess a non-zero rest mass ~30 eV, as claimed by recent experiments,[32] then our whole view of galaxy formation and clustering would be affected. For simplicity, let us first describe the simplest situation, wherein we assume that no significant density of non-baryonic material exists. We imagine that in the early stages of the Big Bang some spectrum of density irregularities arises, which we describe by the deviation of the density ρ from the mean ρ̄ using

δρ/ρ = (ρ − ρ̄)/ρ̄    (6.66)

In general, we would expect δρ/ρ to vary as a power-law in mass so that no mass scale is specially picked out, say as

δρ/ρ ∝ M⁻ⁿ;    n > 0    (6.67)

Cosmologists now ask whether some damping process will smooth out the smallest irregularities up to some particular mass, M_D. If this occurs, the mass scale M_D might show up observationally in the Universe as a special one, dividing large from moderate non-uniformity. If the initial irregularities involve only non-uniformities in the matter content of the universe, but not in the radiation, they are called isothermal, and isothermal irregularities will survive above a mass determined by the distance sound waves can travel whilst the Universe is dominated by radiation (t ≤ t_eq).[33] This gives a mass close to that of globular clusters, ~10⁶ M_☉:

M_Di ~ S^{1/2} α_G^{-3/2} m_N    (6.68)

Another type of density non-uniformity arises if both the matter and radiation vary from place to place isentropically. These fluctuations are called adiabatic. The survival of adiabatic inhomogeneities is determined by the mass scale which is large enough to prevent radiation diffusing away during the period up to t_eq.[34] This yields

M_Da ~ S^{5/4} α^{-21/2} α_G^{-3/4} (m_N/m_e)^{3/4} m_N    (6.69)

This can be compared with the maximum extent of the Jeans mass, M_J, which is the largest mass of a gas cloud which can avoid gravitational collapse by means of pressure support during the Universe's history.[35] This maximum arises at t_eq and, since M_J ~ G^{-3/2} p^{3/2} ρ⁻², where p is the pressure, we have

(M_J)_max ~ G⁻¹ t_eq ~ S² α_G^{-3/2} m_N    (6.70)
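As a rough numerical check, the isothermal survival mass (6.68) can be evaluated directly; the numerical inputs below, and the conversion to solar masses, are standard order-of-magnitude values rather than figures from the text:

```python
# Order-of-magnitude evaluation of the isothermal survival mass
# M_Di ~ S^(1/2) alpha_G^(-3/2) m_N, eq. (6.68).
S = 1e9            # photons per baryon
alpha_G = 5.9e-39  # gravitational fine structure constant
m_N = 1.67e-24     # nucleon mass in grams
M_sun = 1.99e33    # solar mass in grams

M_Di = S**0.5 * alpha_G**-1.5 * m_N / M_sun
print(f"M_Di ~ {M_Di:.1e} solar masses")
```

The result lands at the globular-cluster scale of ~10⁵–10⁶ M_☉, as the text asserts.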

If inhomogeneities were of the isothermal variety then the first structures to condense out of the smoothly expanding universe would have a mass ~M_Di and would have to be associated with either globular clusters or dwarf galaxies. Galaxies could, in principle, be formed by the gravitational clustering of these building-blocks; subsequent clustering of galaxies would be the source of galaxy clusters. The extent of galaxy clusters would reflect the time interval from t_eq until ~Ω₀ t_u, when gravitational clustering stops, because gravity ceases to be cosmologically significant after a time ~Ω₀ t_u in universes with Ω₀ < 1. By way of contrast, if inhomogeneities were initially adiabatic then we can argue a little further. The first structures to condense out of the expanding universe and become gravitationally bound should have a mass ~M_Da, close to the observed mass of galaxy clusters. It is then inevitable that these proto-clusters will contract asymmetrically under their own self-gravity and fragment. Some simple arguments allow us to estimate the masses and radii of typical fragments. The condition that a gravitating cloud be able to fragment is that it be able to cool and, hence, radiate away its binding energy. After the cosmic recombination time, t_rec, the dominant cooling mechanism will be bremsstrahlung, on a time-scale dictated by the Thomson cross-section, σ_T, so the cooling time is […]. For radii R > R_g the cloud contracts slowly without fragmenting, and thus the characteristic dimension R_g divides frozen-in primordial structure from well-developed fragmentation. This argument will only hold so long as the temperature within the cloud stays below the ionization temperature ~α² m_e before the cloud contracts to a radius R_g. This condition requires that the cloud mass satisfy

M ≲ M_g    (6.74)

Clouds with masses less than M_g will cool very efficiently by atomic recombination radiation and will never be pressure-supported.[37] This singles out M_g as the mass-scale dividing well-developed, fragmented cosmic structure from quasi-static, under-developed clustering. The fact that M_g and R_g are so close to the masses and sizes of real galaxies is very suggestive.[35,38] If irregularities that arise in the early universe are of adiabatic type (and the latest ideas in elementary particle physics suggest that this will be the case), and if the arguments leading to (6.73) and (6.74) hold, then the characteristic dimensions of galaxies are, like those of stars and planets, determined by the fundamental constants α, α_G and m_N/m_e, independent of cosmological parameters. The only condition of a cosmological nature that is implicit in these deductions is that the maximum Jeans mass of (6.70) exceed M_g, in order that galaxies can form from fragments of a larger surviving inhomogeneity; this implies

(M_J)_max ≳ M_g    (6.75)

In the past few years there has been growing interest in the possibility that the predominant form of matter in the Universe might be non-baryonic. There are a variety of non-baryonic candidates supplied by supersymmetric gauge theories.[39] The most attractive would be a light massive electron neutrino, since its mass can be (and may already have been[32]) measured in the laboratory. Others, like the axion, gravitino or photino,[40] do not as yet readily offer prospects for direct experimental detection. Cosmologists find the possibility that the bulk of the Universe exists in non-luminous, weakly interacting particles fascinating because it might offer a natural explanation for the large quantities of dark material inferred to reside in the outer regions of spiral galaxies and within clusters.[41] If this is indeed the case then the masses of these elementary particles will play a role in determining the scale and mass of galaxies and galaxy clusters. By way of illustration we show how, in the case of a massive neutrino, this connection arises. If a neutrino possesses a rest mass less than 1 MeV and is stable, then it will become collisionless after the Universe has expanded for about one second and will always have a number density of order the photon number density, n_γ. The mass density of light neutrinos in the present


Universe is then given by

ρ_ν = (3/22) g_ν m_ν n_γ    (6.76)

where m_ν is the neutrino mass, and g_ν is the number of neutrino spin states (for the total collection of known neutrinos ν_e, ν̄_e, ν_μ, ν̄_μ we have g_ν = 4); hence, today,

ρ_ν0 ~ 10⁻³¹ g_ν (m_ν/1 eV) g cm⁻³    (6.77)
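Equation (6.77) can be checked against standard numbers. The photon density (~400 cm⁻³ today) and the rough critical density used below are conventional values, not taken from the text:

```python
# Present-day mass density of light neutrinos, eqs. (6.76)-(6.77):
# each of the g_v spin states carries a number density ~ (3/22) n_gamma.
n_gamma = 400.0       # photons per cm^3 today
g_v = 4               # neutrino spin states (nu_e, nubar_e, nu_mu, nubar_mu)
eV_in_g = 1.78e-33    # grams per eV/c^2

def rho_nu(m_nu_eV):
    """Neutrino mass density in g/cm^3 for a given neutrino mass in eV."""
    return g_v * (3.0 / 22.0) * n_gamma * m_nu_eV * eV_in_g

rho_30 = rho_nu(30.0)   # ~1e-29 g/cm^3 for a 30 eV neutrino
rho_crit = 1e-29        # rough critical density in g/cm^3 (assuming h ~ 0.7)
print(rho_30, rho_30 / rho_crit)
```

A 30 eV neutrino thus contributes roughly the critical density, which is why such a mass would make neutrinos the dominant constituent of the Universe.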

If m_ν ≳ 3.5 eV then the neutrino density will exceed that of luminous matter. Neutrinos are also a natural candidate for galaxy or cluster halos because their distribution remains far more extended than that of baryons. Whereas baryonic material can radiate away its binding energy through the collisional excitation and de-excitation of atomic levels, the neutrinos, being collisionless, cannot. One might therefore expect luminous baryonic material to condense within extended halos of neutrinos. If neutrinos are to provide the dominant density within these systems we can derive an interesting limit on the neutrino mass.[42] Since neutrinos are fermions they must obey the Pauli Exclusion Principle. If neutrinos within a spherical region of mass M and radius r have an average speed σ and momentum p, then the volume of phase space they occupy is

∫d³p ∫d³x ~ (m_ν σ)³ r³    (6.78)

Since the occupation of this phase-space volume cannot exceed unity, the total mass of the neutrino sphere is at most[43]

M ~ m_ν V ~ m_ν⁴ σ³ r³ ~ α_G^{-3/2} (m_N/m_ν)² m_N ≳ 10¹⁵ M_☉    (6.82)

This is similar to the extent of large galaxy clusters. If the mass-scale (6.82) is associated with the large-scale structure of the Universe it illustrates how an additional dimensionless parameter, m_ν/m_N, can enter into the invariant relations determining the inevitable sizes of large-scale structures. In this picture of galaxy formation, which is 'adiabatic', galaxies must form by fragmentation of clusters of mass M_ν. The arguments leading to (6.74) should still apply and we would require M_ν to exceed M_g, hence

(m_ν/m_N)² ≲ α⁻⁵ α_G^{1/2} (m_e/m_N)^{1/2}    (6.83)

There are two further interesting coincidences in the case when m_ν ~ 30 eV, as has been claimed by one recent experiment.[32] Not only is such a neutrino mass sufficient to ensure neutrinos dominate the Universe, (6.77); it also ensures that the cosmic time, t_ν, when the radiation temperature falls to m_ν and the neutrinos become non-relativistic, is of order t_rec and t_eq. In general ρ ~ G⁻¹ t⁻², and so, as ρ ~ T⁴, we find that the time t_ν, when T ~ m_ν, is t_ν ~ α_G^{-1/2} m_N m_ν⁻², and this is only of order t_eq ~ S² α_G^{-1/2} m_N⁻¹ if

m_ν ~ m_N S⁻¹    (6.84)

In addition, we have t_ν ~ t_rec ~ S^{1/2} α⁻³ α_G^{-1/2} (m_N/m_e)^{3/2} m_N⁻¹ if

S ~ α⁶ (m_N/m_ν)⁴ (m_e/m_N)³    (6.85)

and combining (6.84) and (6.85) leads to the suggestive relation

m_ν ~ α² m_e    (6.86)

In fact, this formula may turn out to have some deeper theoretical basis as a prediction of the electron neutrino rest-mass, since we notice that α² m_e is 27 eV, within the error bars of the reported measurements by Lyubimov et al.
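Two of the numerical claims in this section are easy to verify. The bound (6.82) can be evaluated via the identity α_G^{-3/2}(m_N/m_ν)² m_N = m_pl³/m_ν², and the 'suggestive relation' (6.86) gives a definite number; the Planck mass, α and m_e values below are standard inputs, not figures from the text:

```python
# (i) Maximum mass of a neutrino-dominated system, eq. (6.82):
#     M ~ alpha_G^(-3/2) (m_N/m_nu)^2 m_N = m_pl^3 / m_nu^2.
m_pl = 2.18e-5           # Planck mass in grams
eV_in_g = 1.78e-33       # grams per eV/c^2
m_nu = 30.0 * eV_in_g    # a 30 eV neutrino, in grams
M_sun = 1.99e33          # solar mass in grams

M_max = m_pl**3 / m_nu**2 / M_sun
print(f"M_max ~ {M_max:.1e} M_sun")   # the scale of large galaxy clusters

# (ii) The predicted neutrino rest-mass of eq. (6.86): m_nu ~ alpha^2 m_e.
alpha = 1 / 137.036
m_e_eV = 511.0e3         # electron rest mass-energy in eV
m_nu_pred = alpha**2 * m_e_eV
print(f"alpha^2 m_e = {m_nu_pred:.1f} eV")
```

The first number comes out at ~10¹⁵ M_☉ and the second at ~27 eV, matching the figures quoted in the text.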


Galaxy formation in the presence of massive neutrinos is a version of the adiabatic theory outlined above, in which clusters form first and then break up into subcomponents of galactic size. It appears that if this is to be the route to galaxy formation then a high density of neutrinos must exist (exceeding that of baryons); otherwise the level of density fluctuation required in the early universe would exceed that allowed by observational limits on the fine-scale temperature fluctuations in the microwave background over minutes of arc[44]—this is the typical angular scale subtended by a galaxy cluster when the radiation was last scattered to us at high redshift. However, recent numerical simulations of galaxy clustering in the presence of massive neutrinos carried out on fast computers[45] reveal that the clustering of the ordinary luminous matter in the presence of 30 eV neutrinos has statistical properties not shared by the real universe; (see Figures 6.5(a) and 6.5(b)).

Neutrinos are not the only non-baryonic candidates for the non-luminous material that apparently dominates the present structure of the Universe. Elementary particle physicists have predicted and speculated about the existence of an entire 'zoo' of weakly interacting particles like axions, photinos and gravitinos. These particles should, if they exist, behave in many ways like massive neutrinos, for they do not have electromagnetic interactions with baryons and leptons during the early radiation era of the Universe, but respond to gravity. Yet, unlike the neutrino, these more exotic particles are predicted to possess negligible velocities relative to the overall systematic expansion of the universe today, either because of their greater mass or, in the case of the axion, because they were formed with negligible motion.[39,40] This means that only very small clouds of these particles get dispersed by free-streaming during the first few thousand years of cosmic expansion. In contrast to the neutrino model, in which no irregularities survive having mass less than ~10¹⁵ M_☉ (see equation (6.82)), non-uniform distributions of these exotic particles are only erased over dimensions smaller than ~10⁶ M_☉. In effect, the characteristic survival mass is still given by (6.82), but the mass of a gravitino or photino necessary to generate all the required missing matter is ~1 GeV; hence the analogue of M_ν is close to 10⁶ M_☉. In this picture of cosmogony, events follow those of the isothermal scenario outlined earlier, with star clusters forming first and then aggregating into galaxies, which in turn cluster in hierarchical fashion into great clusters of galaxies. Remarkably, computer simulations of these events in the presence of axions or photinos[45] predict patterns of galaxy clustering with statistical features matching those observed if the total density of the universe satisfies Ω₀ ~ 0.2, but unfortunately the velocities predicted for the luminous galaxies do not agree with observation; see Figure 6.6. This completes our attempt to extend the successes of the last chapter
394

The Anthropic Principles in Classical Cosmology

395 The Anthropic Principles in Classical Cosmology

Figure 6.6. As Figure 6.5(b), but for a model universe containing axions, one of the exotic elementary particle species that may exist in the Universe, with a total density equal to fl = 0.2. There is little evidence for filamentary structures forming and the axions and baryons are clustered in identical fashion with no segregation of mass and light. This model offers a better match to the observed clustering of galaxies shown in Figure 6.5(a) than does the neutrino-dominated model 6.5(b) but the distribution of velocities predicted for the luminous matter is at variance with observation. 45

o

into the extragalactic realm. Here we have encountered awkward uncertainties and unknown factors that prevent us ascribing the structures we observe to the values of the constants of Nature alone. Although we can think of theories of galaxy formation in which galaxy masses are determined by fundamental constants alone, (as in (6.74)), we can also think Figure 6.5. (a) The semi-volume-limited distribution of galaxies fainter than 14.5 mag. with recession velocities less than 10,000 km s observed out to a distance of about 100 Mpc found by M. Davis, J. Huchra, D. Latham and J. Tonry. (b) The clustering of galaxies predicted by a computer simulation of the Universe The computed cosmological model contains a critical density (fl = 1) of neutrinos (m = 30 eV). The circles trace the distribution of luminous (baryonic) material, whilst the dots trace the neutrino distribution. Notice the segregation of luminous from non-luminous material and the filaments and chains of luminous matter created by 'pancake' collapse. The luminous material is predicted to reside in far more concentrated form than.is observed in the sample (a). _1

141

4 5

v

The Anthropic Principles in Classical Cosmology

396

up other theories, which give equally good agreement with observation, in which fundamental constants play a minor role compared with cosmological initial conditions. The truth of the matter is simple: whereas we know how stars and planets are structured and why they must exist given the known laws of physics, we do not really have a full theory of how galaxies and larger astronomical structures form. If galaxies did not exist we would have no difficulty explaining the fact! Despite this incompleteness, which means that we cannot with any confidence as yet draw Weak Anthropic conclusions from the existence and structure of galaxies, this is a good point to take a second look at the problem posed at the beginning of the last chapter. Recall that we presented the reader with a plot of the characteristic masses and sizes for the principal components of the natural world. We saw that the points were strangely polarized in their positions and there was no trace of a purely random distribution filling the entire plane available (see Figure 5.1). As a result of our investigations we can now understand the structure of this diagram in very simple terms. The positions of physical objects within it are a manifestation of the invariant strengths of the different forces of Nature. Naturally occurring composite structures, whether they be atoms, or stars, or trees, are consequences of the existence of stable equilibrium states between natural forces of attraction and repulsion. If we review the detailed analysis of the last chapter and the present one, the structure of the diagram can be unravelled (see Figure 6.7). There are two large empty regions: one covers the area occupied by black holes: R^2GM (6.87) Nothing residing within this region would be visible to external observers like ourselves. 
The other vacant region is also a domain of unobservable phenomena, made so by the Uncertainty Principle of quantum mechanics, which in natural units reads, (6.88) A R AA*>1 All the familiar objects like atoms, molecules, solids, people, asteroids, planets and stars are atomic systems held in equilibrium by the competing pressures of quantum exclusion and either gravity or electromagnetism. They all have what we termed atomic density, p , which is roughly constant at one proton mass per atomic volume. Thus all these atomic bodies lie along a line of constant atomic density; hence for these objects M*R (6.89) Likewise the atomic nuclei, protons and neutrons all lie along a line of constant nuclear density which they share with neutron stars. As we go beyond the scale of solid, stellar bodies and enter the realm of star 38

AT

3
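The two excluded regions (6.87) and (6.88) can be illustrated numerically: every familiar object sits well above both its Schwarzschild radius and its Compton wavelength. The sample objects and sizes below are illustrative standard values, not data from the text:

```python
# The two forbidden regions of the mass-size diagram: R < 2GM/c^2 (black holes,
# eq. 6.87) and R < hbar/(M c) (quantum uncertainty, eq. 6.88).
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
hbar = 1.055e-34  # J s

objects = {                      # name: (mass in kg, characteristic size in m)
    "proton": (1.67e-27, 8.4e-16),
    "human":  (7.0e1,    1.7e0),
    "Sun":    (1.99e30,  7.0e8),
}

for name, (M, R) in objects.items():
    r_schwarz = 2 * G * M / c**2   # black-hole line
    r_compton = hbar / (M * c)     # quantum-uncertainty line
    assert r_schwarz < R and r_compton < R
    print(f"{name}: R={R:.1e} m, 2GM/c^2={r_schwarz:.1e} m, hbar/Mc={r_compton:.1e} m")
```

Objects in the diagram are squeezed between these two lines, which is the structure Figure 6.7 displays.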


Figure 6.7. A revised version of Figure 5.1 in which the particular distribution of cosmic objects in the mass-size plane is shown to be conditioned by the existence of regions excluded from direct observation by the existence of black holes and quantum mechanical uncertainty and structured by the lines of constant atomic and nuclear densities. The latter pick out ranges of possible equilibrium states for solid bodies (based on ref. 38).

systems—globular clusters, galaxies, galaxy clusters and superclusters—we stray from the line of constant density. These systems are supported by a balance between the inward attraction of gravity and the outward centrifugal forces generated by the rotation of their components about their common centres of gravity. Finally, off at the top corner of the diagram we see the point marking the entire visible universe. Its exact mass we do not yet know, because of our uncertainties regarding the extent of non-baryonic matter and dead stars in space, but if it lies a little below the black hole line, so that R_U > 2GM_U, then the Universe will continue to expand forever. However, if the final value of the cosmological density yields Ω > 1 then we will lie in the region R_U < 2GM_U […]


6.6 The Origin of The Lightest Elements

The elements were cooked in less time than it takes to cook a dish of duck and roast potatoes. G. Gamow

One of the great successes of the Big Bang theory has been its successful prediction of the abundances of the lightest elements in Nature: hydrogen, helium, deuterium and lithium. All can be fused from primordial protons and neutrons during the first few minutes of cosmic expansion in quantities that do not depend on events at earlier, more exotic moments. Nuclear reactions are only possible in the early universe during a narrow temperature niche, 0.1 m that is 5 x 10 K ^ T ^ 5 x lO K (6.90) This, according to (6.56) corresponds to a time interval between about m m ~ ^ t ^ a ~ m m ^ , that is 0.04 s s ^ 500 s (6.91) Thus, primordial nuclear reactions are only possible because of the Anthropic coincidence that a>(mJm ). At times earlier than 0.04s thermal energies are so high that any light nucleus would be immediately photodisintegrated, whilst after —500 sec the energies of nucleons are too low to allow them to surmount the Coulomb barriers and come within range of the strong nuclear force One might have thought that the final abundances of light nuclei, all of which are composed solely of neutrons and protons, would have been unpredictable, depending on the relative initial abundances of protons and neutrons at the Big Bang. Fortunately, this is not the case. When the temperature exceeds ~ ( G m ) " ( m / m ) m N ~ 1 MeV there arise weak interactions involving nucleons which proceed more rapidly than the local cosmic expansion rate. These reactions are, p + e~ D + 7, to be followed rapidly by fast nuclear chain-reactions p + D —» He + 7, n + D - > H + 7 , p + H - > H e + 7, n + H e - > H e + 7, D + D—» He + 7. Here the reactions essentially stop; helium-4 is tightly bound and there is no stable nucleus with mass number 5. Virtually all the original neutrons left at T wind-up in helium-4 nuclei hence the number of helium-4 nuclei to hydrogen nuclei will be roughly 0.5 x 0.2 = 0.1, there being two neutrons per helium-4 nucleus. 
This corresponds to a helium-4 mass fraction of ~22-25%, as observed, (6.35). If the baryon density of the present universe equals that observed, Ω_b ≈ 0.03, then this process also successfully predicts the observed cosmic abundances of helium-3, deuterium and lithium-7. The fact that the early universe gives rise to an 'interesting' abundance of helium-4, that is, neither zero nor 100%, is a consequence of a delicate coincidence between the gravitational and weak interactions. It arises because we have T* ~ Δm ~ m_e, where Δm is the neutron-proton mass difference, so the exponent in (6.93) is neither very large nor very small, and because the temperature T* is suitable for electron and neutrino production. This coincidence is equivalent to the coincidence

G_F m_e^2 ~ (G m_N^2)^(1/4)  (6.94)

Were this not the case then we would have either 100% hydrogen emerging from the Big Bang or 100% helium-4. The latter would probably preclude the possibility of life evolving. There would be no hydrogen available for key biological solvents like water and carbonic acid, and all the stars would be helium-burning and hence short-lived. Almost certainly, helium stars would not have the long-lived nuclear burning phase necessary to encourage the gradual evolution of biological life-forms in planetary systems. However, there appears to be no 'anthropic' reason why a universe containing 100% hydrogen initially would not be hospitable to life.

Carr and Rees have pointed out that the coincidence (6.94) may be associated with another one that is probably closely tied to the conditions necessary for the existence and distribution of carbon in space following its production in stellar interiors (see section 5.2). It may be that the envelope of a supernova is ejected into space by the pressure of neutrinos generated in the core of the stellar explosion. If this is indeed the way the stellar envelope is ejected, then the timescale for interactions between nuclei in the envelope and free neutrinos must be close to the dynamical timescale ~(Gρ)^(-1/2) of the stellar explosion if the debris has density ρ. This ensures that the neutrinos have enough time to reach the envelope, but dump their energy and momentum there rather than escaping beyond it; this allows the envelope to be expelled. This condition requires the delicate balance

G_F^2 n T^2 ~ (G n m_N)^(1/2)  (6.95)

where n is the nucleon number density and T the temperature. Now, in order that the supernova be hot enough to produce neutrinos by e⁺ + e⁻ → ν + ν̄ we must have T ~ m_e. The density expected when the core explodes is close to the nucleon degeneracy density found within neutron stars; this is roughly the nuclear density. Using these relations we have the Carr-Rees coincidence

G_F m_e^2 ~ (G m_N^2)^(1/4) (m_e/m_N)^(1/2)  (6.96)

which differs from the primordial nucleosynthesis coincidence (6.94) only by a factor (m_e/m_N)^(1/2) ~ 0.02 and suggests a fundamental relationship between the weak and gravitational couplings of the form α_w ~ α_G^(1/4) (m_e/m_N)^(1/2).

The other part of the nucleosynthesis coincidence (6.94) arises because the neutron-proton mass difference is Δm ~ m_e. In fact, this is only part of a very delicate coincidence that is crucial for the existence of a life-supporting environment in the present-day Universe. We find that

Δm − m_e = 1.293 MeV − 0.511 MeV = 0.782 MeV  (6.97)

Thus, since m(n) and m(p) are of order 1 GeV, the relation Δm − m_e > 0 is a one part in a thousand coincidence. If instead of (6.97) we found Δm − m_e ≤ 0 then the beta decay n → p + e⁻ + ν̄ would not occur naturally. Rather, we would find the decay p + e⁻ → n + ν. This would lead to a world in which stars and planets could not exist. These structures, if formed, would decay into neutrons by pe⁻ annihilation. Without electrostatic forces to support them, solid bodies would collapse rapidly into neutron stars (if smaller than about 3 M_⊙) or black holes. Thus, the coincidence that allows protons to partake in nuclear reactions in the early universe also prevents them decaying by weak interactions. It also, of course, prevents the 75% of the Universe which emerges from nucleosynthesis in the form of protons from simply decaying away into neutrons. If that were to happen no atoms would ever have formed and we would not be here to know it.
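The arithmetic of this one-in-a-thousand coincidence is easy to verify with modern mass values:

```python
m_n = 939.565  # neutron mass in MeV
m_p = 938.272  # proton mass in MeV
m_e = 0.511    # electron mass in MeV

delta_m = m_n - m_p        # neutron-proton mass difference
q = delta_m - m_e          # energy released in n -> p + e- + nubar
print(round(delta_m, 3))   # 1.293 MeV
print(round(q, 3))         # 0.782 MeV: positive, so free neutrons decay
print(round(q / m_n, 5))   # ~0.00083, a one-part-in-a-thousand margin
```

Had the electron been a fraction of an MeV heavier, q would be negative, and the decay direction would reverse exactly as described above.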


6.7 The Value of S

God created two acts of folly. First, He created the Universe in a Big Bang. Second, He was negligent enough to leave behind evidence for this act, in the form of the microwave radiation. P. Erdos

In our discussion of cosmology so far we have confined ourselves to events and effects that are independent of cosmological initial conditions. They have, like the structures discussed in the previous chapter, been conditioned by various unalterable coupling constants and mass ratios: α_G, α_w, α_s and m_N/m_e. But we have now seen these dimensionless parameters joined by one further parameter, introduced in equations (6.42)-(6.45): the entropy per baryon of the Universe, S. This quantity arose from the discovery of the microwave background radiation and was first discussed as a dimensionless parameter characterizing possible hot Big Bang models by Zeldovich and Novikov, and by Alpher, Gamow and Herman.49 It is interesting to note how fundamental advances in our understanding of Nature are usually accompanied by the discovery of another fundamental constant, and in this case it was Penzias and Wilson's serendipitous discovery of the 3 K background radiation which introduced the parameter S. We have seen already that the observed numerical value of S ~ 10^9 determines the key cosmic times t_eq and t_rec (see equations (6.48) and (6.49)), and hence plays a role in various coincidences that are necessary for the evolution of life, (6.60)-(6.65). Furthermore, it is possible that S controls the characteristic sizes of galaxies and clusters in our Universe, (6.68)-(6.75), (6.85). The appellation 'hot' is often applied to the Big Bang model of the Universe partially because the observed value of S ~ 10^9 is so large. Indeed, over the period since the discovery of the microwave background radiation in 1965, cosmologists have repeatedly tried to explain why the value of S is not, like many other dimensionless constants of physics, of order unity, or, like many cosmological parameters, many orders of magnitude away from unity.
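The order of magnitude of S can be sketched from present-day number densities. This is an illustrative estimate: the baryon density used is an assumed round number of the order implied by Ω_b ~ 0.03, and the photon-to-baryon number ratio is used as a proxy for the entropy per baryon.

```python
# Photon number density of a blackbody at temperature T (kelvin):
# n_gamma ≈ 20.3 * T^3 photons per cm^3 (standard result)
T_cmb = 2.73
n_gamma = 20.3 * T_cmb**3   # ~4e2 photons per cm^3

# Assumed present-day baryon number density (illustrative value):
n_baryon = 2.0e-7           # baryons per cm^3

# photon-to-baryon ratio, a proxy for the entropy per baryon S
S = n_gamma / n_baryon
print(f"S ~ {S:.1e}")
```

The result lands at a few times 10^9, which is the "large" value whose origin the rest of this section discusses.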
It is clear from (6.60)-(6.85) that if galaxies are to exist and the Universe is not to be dominated by radiation today (a situation that would prevent the growth and condensation of small irregularities into fully-fledged galaxies by the process of gravitational instability), then we must have S ≲ 10^11. One approach to explaining why S ≫ 1 is to recognize that, since the photon entropy s_γ, which defines S, (6.42), is monotonically non-decreasing with time by the Second Law of thermodynamics, so also is S if the baryon number is unchanging. Hence Ṡ ≥ 0, and if the Universe were extremely anisotropic or inhomogeneous during its early stages it might be possible for dissipation of non-uniformities to smooth the universe out into the observed state of virtual isotropy and homogeneity whilst boosting an initial entropy per baryon of S ~ 1 to the large observed value of order 10^9. Unfortunately, a detailed investigation revealed that this dissipation inevitably results in a catastrophic overproduction of photon entropy from anisotropies in the cosmological expansion.60,62 A universe dominated by anisotropy close to the Planck time in which baryon number was conserved would produce a present-day value of S vastly in excess of 10^9, and conditions would never be cool enough to allow the formation of living cells at the vital moments of cosmic history, (6.60)-(6.65). Another variation of this idea appealed not to the irregularity of the very early universe to produce a large value of S, but to the recent activity of explosive stars. Rees has argued that if a population of supermassive stars formed prior to the emergence of galaxies (and there are reasons why this might be an appealing idea) then they might naturally account for the observed value of S ~ 10^9. These objects would radiate their mass away in a Salpeter time.

The baryon number is damped exponentially by baryon non-conserving scatterings of quarks and leptons; their rate is

Γ ~ α^2 T^5 (T^2 + m_X^2)^(-2)  (6.118)

and thus becomes equal to the cosmological expansion rate, H, when

K = K_c  (6.119)

Thus, when K > K_c, the surviving baryon-to-entropy ratio is exponentially suppressed,

S^(-1) ~ (ε/g*) K^a exp(−aK)  (6.120)

for some constant a. The calculations that have been performed to determine the value of m_X from the energy at which all interactions have the same effective strength yield a value m_X ~ 5.5 × 10^14 GeV, which corresponds to a K value for XX̄ decays of

K ~ 10  (6.121)

If the explanation of grand unified theories for the value of S ~ 10^9 is correct, then we can see from (6.114) and (6.120) that everything hinges upon the magnitude (and sign) of the CP violation ε in heavy boson decays like (6.102). Since, as yet, there appears no hope of calculating ε precisely (although it is possible in principle), we seem simply to have replaced an initial condition for S by an initial condition for ε. However, ε is an invariant and some restrictions on its value are known: we must have |ε| small.

|ρ/ρ_c − 1|(t_P) ≲ 10^(-57)  (6.122)

This extraordinary relation regarding the initial conditions has been called the flatness problem by Alan Guth.67 This name arises because the cosmological models that have ρ = ρ_c are those with zero spatial curvature, (6.4), and hence possess flat, Euclidean spatial geometry. A more physical version of the coincidence (6.122) (and one which is, in fact, roughly the square root of (6.122)) involves the ratio of the present radius of curvature of the Friedman Universe relative to the scale that the Planck length would have freely expanded to after a time equal to the present age of the Universe, t_0 ~ 10^10 yr.68 Thus,

(Friedman curvature radius)/(Planck scale at t_0) ~ 10^30 |Ω_0 − 1|^(-1/2)  (6.123)

where t_eq appears because we allow for the change-over from radiation- to dust-dominated expansion after t_eq, (6.46). This relation can be expressed in terms of fundamental constants and S, (6.124). If t_0 is to exceed the time required to produce stable stars, so t_0 > t* ~ α_G^(-1) α^2 (m_N/m_e)^2 m_N^(-1), then we have a Weak Anthropic constraint on a cognizable Universe, (6.125).

Another way of stating this problem is to formulate it as an 'oldness' problem. The laws of physics create one natural timescale for cosmological models, t_P = (Gℏ/c^5)^(1/2) ~ 10^(-43) s. The fact that our Universe has existed for at least ~10^60 t_P suggests there is something very unusual and improbable about the initial conditions that gave rise to our Universe. (But see Chapter 7.) This situation was first stressed by Collins and Hawking in 1973 and it is one that has striking anthropic implications. We can see from (6.4) and (6.27) that when |ln Ω_0| ≫ 1 the expansion timescale of the Friedman models is radically altered. Models with Ω_0 ≫ 1 would have recollapsed before stars ever had a chance to form or life to evolve. Models with Ω_0 ≪ 1 would expand so rapidly that material would never be able to condense into galaxies and stars. Only for a very narrow range of Ω_0 about unity, corresponding to a narrow range about 10^(-57) in (6.122), does it appear that life can evolve (see Figure 6.8). Why did the initial conditions lie in this peculiar and special range that allows observers to exist? One approach to resolving the flatness problem, which is in accord with the Weak Anthropic Principle, is to imagine that the Universe is inhomogeneous and infinite.
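The size of the required tuning can be sketched by running today's limit |Ω_0 − 1| ≲ 1 backwards, using the fact that |Ω − 1| grows in proportion to t during the radiation era and to t^(2/3) during the matter era. The epochs below are assumed round numbers for illustration.

```python
t0  = 3.0e17   # present age of the Universe in seconds (~10^10 yr)
teq = 1.0e12   # assumed epoch of matter-radiation equality, s
tP  = 1.0e-43  # Planck time, s

# |Omega - 1| scales like t^(2/3) after t_eq and like t before it;
# extrapolate today's limit |Omega_0 - 1| <~ 1 back to the Planck time:
dev_at_eq = 1.0 * (teq / t0) ** (2.0 / 3.0)
dev_at_tP = dev_at_eq * (tP / teq)
print(f"|Omega - 1| at t_P <~ {dev_at_tP:.0e}")
```

The answer comes out within a couple of orders of magnitude of the 10^(-57) quoted in (6.122), which is the whole content of the flatness problem: a deviation of order unity today requires an absurdly small deviation at the Planck time.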

|Λ|/m_P^2 ≲ 10^(-120)  (6.128)

To get an idea of how small this limit is, consider Λ_min, the smallest value of the parameter Λ that could be measured in t_0 ~ 10^10 yr (the age of the Universe) according to the Uncertainty Principle of Heisenberg (which yields ΔE t_0 ≳ ℏ). This minimum value,

Λ_min/m_P^2 ~ 10^(-56)  (6.129)

is larger than the limit (6.128) by nearly 65 orders of magnitude!
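The gap between these two numbers is simple arithmetic:

```python
import math

lam_bound = 1e-120  # observational/anthropic bound on Lambda (Planck units)
lam_min   = 1e-56   # smallest value measurable over the age of the Universe

orders = math.log10(lam_min / lam_bound)
print(orders)  # 64.0
```

So the bound sits roughly 64 orders of magnitude below anything that could even in principle be measured in a Hubble time, the "nearly 65 orders" quoted above.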

Indeed, the limit (6.128) is the smallest dimensionless number arising naturally anywhere in physics. It has led to determined efforts to demonstrate that there is some deep underlying principle that requires Λ to be precisely zero. Some of these ideas appear promising, but as yet there is no convincing explanation for the smallness of the observational limit on the possible magnitude of Λ. If we express the gravitational lagrangian of general relativity as a constant plus a linear four-curvature term in the standard way, then

L_g = Λ + a_1 R  (6.130)

and the limit (6.128) implies Λ/a_1 ≲ 10^(-120). However, this limit and its equivalent, (6.128), have great significance for the possibility of life evolving in the Universe. If |Λ| exceeds 8πGρ today then the expansion dynamics are dominated by the Λ term. In the case of Λ < 0 and |Λ| large, the Universe will collapse to a second singularity after a time t_s, where

t_s ~ (−Λ/3)^(-1/2) ≳ α_G^(-1) α^2 (m_N/m_e)^2 m_N^(-1)  (6.131)

and so we have the Anthropic limit,

|Λ|/m_P^2 ≲ α_G^3 α^(-4) (m_e/m_N)^4  (6.132)

The same limit applies to Λ/m_P^2 in the case when Λ > 0, because in this case a violation of (6.132) creates expansion dynamics that are dominated by the positive cosmological constant term at times t ≳ t_s; hence, by (6.127) with Ṙ^2/R^2 ~ Λ/3,

R ∝ exp[t(Λ/3)^(1/2)]  (6.133)

and expansion takes place too rapidly for galaxy and subsequent star formation to occur. Gravitational instability is quenched in a medium undergoing rapid expansion like (6.133) and over-densities behave as73 δρ/ρ → constant (this is intuitively plausible since the Jeans instability amplifies δρ/ρ exponentially in a static medium, and exponential expansion of that medium will exactly cancel the growth rate of the Jeans instability). There have been various attempts to calculate the constant Λ in terms of other known constants of Nature.74 These amount to nothing more than dimensional analysis except in one case, which we shall examine in detail below. It rests upon the fact that the Λ term in general relativity appears to have a physical interpretation as the energy density, ρ_v, of a Lorentz-invariant quantum vacuum state:

Λ = 8πG⟨ρ_v⟩  (6.134)

Unfortunately, it appears that quantum effects arising in the Universe at t_P ~ 10^(-43) s should create ⟨ρ_v⟩ ~ m_P^4 and hence Λ ~ m_P^2, which violates the observational bound and the anthropic limit (6.128) by almost 120 orders of magnitude. How this conclusion is to be avoided is not yet known.

6.10 Inhomogeneity

Homogeneity is a cosmic undergarment and the frills and furbelows required to express individuality can be readily tacked onto this basic undergarment!
H. Robertson

The accuracy of the Friedman models as a description of our Universe is a consequence of the Universe's homogeneity and isotropy. Only two constants (or three if Λ ≠ 0) are necessary to determine the dynamics completely. The homogeneous and isotropic universes containing matter and radiation are uniquely defined at all times by adding the value of S. But, fortunately for us, the Universe is not perfectly homogeneous. The density distribution is non-uniform, with evident clustering of luminous matter into stars, galaxies and clusters. The statistical properties of this clustering hierarchy were outlined in (6.30)-(6.32). Roughly speaking, the level of inhomogeneity in the observable Universe is small, and the matter distribution becomes increasingly homogeneous in sample volumes encompassing more than about 10^15 M_⊙. The constant of proportionality and the spectral index n of (6.67) are two further parameters that appear to be specified by the initial data of the Universe, either directly or indirectly.

The modern theory of the development of inhomogeneity in the Universe21 rests upon the idea that the existing large-scale structure that manifests itself in the form of galaxies and clusters did not always exist. Rather, it grew by the mechanism of gravitational instability from small beginnings.75 Some (statistical?) graininess must have existed in the earliest stages of the Universe, and regions of size x would contain a density ρ(x) that exceeds the smooth average density of the universe, ρ̄. The amplitude of this inhomogeneity is measured by the density contrast

δρ/ρ = (ρ(x) − ρ̄)/ρ̄  (6.135)

As the Universe expands and ages, density inhomogeneities that were once very small (δρ/ρ ≪ 1) can amplify by gravitational instability until they become gravitationally bound (δρ/ρ ≳ 1) and then condense into discrete structures resembling galaxies and clusters. Suppose our Universe to be well described by a flat or open Friedman model. If the present age of the Universe is denoted by t_0, then all the Friedman models resemble the flat model early on, when t < Ω_0 t_0. At such times, and at all times in the flat model, the density inhomogeneities are enhanced at a rate directly proportional to the expansion scale factor when the pressure is negligible (p = 0):

δρ/ρ ∝ R(t) ∝ t^(2/3),  t ≲ Ω_0 t_0  (6.136)
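A minimal sketch of this linear growth law, assuming a flat matter-dominated model in which δρ/ρ grows like the scale factor, i.e. in proportion to (1 + z)^(-1):

```python
def grown_contrast(delta_i, z_i, z=0.0):
    """Linear growth in a flat, matter-dominated Friedman model:
    delta rho / rho grows like the scale factor R ∝ t^(2/3),
    i.e. by a factor (1 + z_i) / (1 + z)."""
    return delta_i * (1.0 + z_i) / (1.0 + z)

# a contrast of ~5e-3 at recombination (z_i ~ 1000) grows by ~10^3
# and reaches the nonlinear threshold delta ~ 5 by the present:
print(grown_contrast(5.0e-3, 1000.0))
```

This factor-of-a-thousand growth budget is why the initial graininess cannot be arbitrarily small if bound structures are to exist today.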

However, when a Friedman model with Ω_0 ≠ 1 deviates from the flat model the growth proceeds differently; the closed models possess parametric solutions for δρ/ρ, (6.141). These solutions reduce to (6.136) at early times, since t → 0 when τ → 0. However, the larger the value of Ω_0, the shorter the age of the universe at maximum expansion (τ = π), and the faster the amplification of δρ/ρ. Since the total age of the universe is twice the time to maximum expansion, and this is ~10^10 Ω_0^(-1/2) yr when Ω_0 ≫ 1, we see that main-sequence stellar evolution and biological evolution would not have time to occur if Ω_0 > 10^4. If Ω_0 ≫ 1 and the initial value of δρ/ρ were the same as in the flat model (Ω_0 = 1), then the density inhomogeneities would rapidly evolve into condensations of high density or black holes. Equation (6.141) shows that δρ/ρ grows at a faster rate than t^(2/3) when Ω_0 > 1. In order to produce gravitationally bound structures resembling galaxies and clusters, the density contrast δρ/ρ must have attained a value ~5 in the recent past. The above equations77 allow the following general conclusions to be arrived at:

(a) if the initial conditions are such that δρ/ρ exceeds a 'critical value' equal to (1 + z_i)^(-1)(1 − Ω_0)Ω_0^(-1) at a redshift z_i ~ 10^3, then density inhomogeneities will collapse and form galaxies prior to a redshift z ~ Ω_0^(-1) − 1;

(b) if initial conditions are such that δρ/ρ is roughly equal to the 'critical' value of (a) at z_i, then by the present it will have attained a fixed value ~4Ω_0/9 and galaxies and clusters will not condense out of the overall expansion;

(c) if initial conditions are such that δρ/ρ is significantly less than the 'critical' value at z_i, then the present density contrast approaches a steady asymptotic value of order 1.5 (1 + z_i) Ω_0 (1 − Ω_0)^(-1) times its initial value.
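The trichotomy (a)-(c) can be expressed as a small decision function. This is a sketch: the critical contrast δ_c = (1 + z_i)^(-1)(1 − Ω_0)/Ω_0 and the factor-of-ten threshold standing in for "significantly less" are illustrative assumptions.

```python
def perturbation_fate(delta_i, z_i, omega0):
    """Classify a perturbation's fate in an open (omega0 < 1)
    Friedman model.  The critical contrast
    delta_c = (1 + z_i)^-1 * (1 - omega0) / omega0
    and the 0.1 factor are illustrative assumptions."""
    delta_c = (1.0 - omega0) / (omega0 * (1.0 + z_i))
    if delta_i > delta_c:
        return "collapses into a bound structure"
    if delta_i < 0.1 * delta_c:
        return "freezes out at a small asymptotic contrast"
    return "growth saturates without condensing"

# for omega0 = 0.1 and z_i = 1000, delta_c ~ 9e-3:
print(perturbation_fate(1.0e-2, 1000.0, 0.1))
print(perturbation_fate(1.0e-5, 1000.0, 0.1))
```

The point of the exercise is that in a very open universe only perturbations born above a threshold amplitude ever condense; the rest have their growth quenched once curvature dominates.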

v(t) = v_∞(1 − e^(-kt)) + v_0 e^(-kt)  (6.142)

where the terminal velocity, v_∞, is derived from the acceleration due to gravity, g, and the air friction, k, as

v_∞ = g/k  (6.143)

For air k ≈ 0.1 s^(-1) and g ≈ 9.81 m s^(-2), so v_∞ is ~98 m s^(-1). The frictional resistance causes an exponential (∝ e^(-kt)) decrease in the relevance of the unknown initial condition v_0 for the determination of the stone's velocity at a later time. Chaotic cosmology is a more grandiose application of this simple idea: it envisages that, however non-uniform and chaotic the cosmological initial conditions were, as the Universe expands and ages there might arise natural frictional processes that cause dissipation of the initial non-uniformities and, after a sufficiently long time, ensure that the Universe would inevitably appear isotropic and smooth. If this scenario were true one could 'predict' the isotropy of the microwave background radiation as an inevitable consequence of gravitation alone. The appeal of this type of evolutionary explanation is obvious: it makes knowledge of the (unknowable!) initial conditions at the 'origin' of the Universe largely superfluous to our present understanding of its large-scale character. In complete contrast, the alternative 'quiescent cosmology'82 pictures the present state of regularity as a reflection of an even more meticulous order in the initial state.

Unfortunately, it transpired that Misner's programme did not possess the panacea-like properties he had hoped for. Viscous processes can only smooth out anisotropies in the initial state if these anisotropies are not too large in magnitude and spatial extent.83 If the anisotropies over-step a certain level, the overall expansion rate of the Universe proceeds too rapidly for inter-particle collisions to mediate viscous transport processes. In this rapidly expanding, non-equilibrium environment the Einstein equations possess an important property: the present structure of the Universe is a unique and continuous function of initial conditions, and a counter-example to the chaotic cosmology scheme is now easy to construct: pick any model for the present-day Universe which is in conflict with the isotropy measurements of the microwave background. Evolve it backwards and it will generate a set of initial conditions to the Einstein equations which do not tend to regularity by the present, irrespective of the level of dissipation. In the context of our example described by equations (6.142) and (6.143), if we make observations at some predetermined time T, then the measured velocity, v(T), can be made arbitrarily large by picking enormous values of v_0, and we could avoid the inevitable asymptotic result v(T) ≈ v_∞. Stones thrown with huge initial velocity could confound our prediction that v → v_∞ inevitably, because they need not have attained a speed close to v_∞ by time T. On the other hand, if we pick v_0 first, then there will always be a T such that v(T) is as close as one wishes to v_∞. In cosmology we, in effect, observe a v(T) while v_0 is given at the initial singularity.
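The stone's loss of memory of v_0, and the way a fixed observation time T can be confounded by a large enough v_0, are easy to exhibit numerically with the values quoted above (the solution of the linear-friction equation is standard):

```python
import math

def v(t, v0, g=9.81, k=0.1):
    """Velocity of a stone falling with linear air friction:
    v(t) = v_inf * (1 - e^(-k t)) + v0 * e^(-k t),  v_inf = g / k."""
    v_inf = g / k
    return v_inf * (1.0 - math.exp(-k * t)) + v0 * math.exp(-k * t)

# two wildly different initial velocities are indistinguishable
# by t = 100 s; both sit within a whisker of v_inf ≈ 98 m/s:
print(v(100.0, 0.0), v(100.0, 1000.0))
# but observed early enough, the stone still remembers v0:
print(v(1.0, 1000.0))
```

Fix T and you can always choose v_0 large enough that v(T) is nowhere near v_∞; fix v_0 and there is always a T beyond which it is. That asymmetry is exactly the objection to the chaotic cosmology programme described above.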

This type of objection to the chaotic cosmology programme might not worry us too greatly if it could be shown that the set of counter-examples is of measure zero amongst all the possible initial states for the Universe. This is where the Collins and Hawking paper enters the story. It attempts to discover just how large the set of cosmological initial conditions which do not lead to isotropic Universes really is. Collins and Hawking sought to demonstrate that the chaotic cosmological principle is false and that the generic behaviour of physically realistic solutions to Einstein's equations is to approach irregularity at late times. To establish this they singled out for investigation the set of cosmological

models that are spatially homogeneous but anisotropic. This set is finite-dimensional and is divided into ten equivalence classes according to the particular spatial geometry of the Universe. This classification into ten equivalence classes is called the Bianchi classification and it has a hierarchical structure.84 The most general members, which contain all the others as special cases, are those labelled Bianchi types VI_h, VII_h, VIII and IX. The Cauchy data for the vacuum cosmological models of these Bianchi types are specified by four arbitrary constants,85 of which the parameter h marking types VI_h and VII_h is one. Not all of these four general classes contain the isotropic Friedman models though; types VI_h and VIII do not, and therefore cannot isotropize completely (although they could, in principle, come arbitrarily close to isotropy). However, the VII_h class contains the isotropic, ever-expanding ('open') Friedman universes, and the type IX models include the 'closed' Friedman models which recollapse in the future. Collins and Hawking first investigated the properties of the universes in the VII_h class, and we can view this choice as an examination of the stability of the open, isotropic Universe with respect to spatially homogeneous distortions. Before that examination can be made, a definition of isotropization must be decided upon. The following criteria were chosen by Collins and Hawking to establish that a cosmological model tends to isotropy:

I1: The model must expand for all future time; V → ∞, where V is the comoving volume.

I2: The energy density in the Universe, ρ = T^00, must be positive, and the peculiar velocities of the material relative to the surfaces of homogeneity must tend to zero as t → ∞: T^0μ/T^00 → 0 as t → ∞, where T^μν is the energy-momentum tensor (the indices μ, ν run over the values 1, 2, 3).

I3: If σ is the shear in the expansion and V̇/3V is the volumetric expansion rate, then the distortion σ/(V̇/3V) must tend to zero as t → ∞.

I4: If the cumulative distortion in the dynamics is defined by β = ∫σ dt, then β must approach a constant86 as t → ∞.

If the conditions I1-I4 are satisfied, the cosmological model was said by Collins and Hawking to isotropize. In order to use these criteria, two further physical restrictions on the properties of matter are required:

M1: The Dominant Energy Condition requires that T^00 ≥ |T^μν| for all μ, ν, and says that negative pressures ('tensions') cannot arise to such an extent that they dominate the energy density of the fluid.

M2: The Positive Pressure Criterion stipulates that the sum of the principal pressures in the stress-energy tensor must be non-negative: Σ_k T^kk ≥ 0.
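For a diagonal stress tensor with energy density ρ and principal pressures p_k, the conditions M1 and M2 reduce to simple inequalities. The sketch below checks them for dust, radiation, and a positive cosmological constant, which behaves like p = −ρ and fails M2 (the function names are illustrative):

```python
def dominant_energy(rho, pressures):
    """M1 (Dominant Energy Condition) for a diagonal stress tensor:
    the energy density must dominate each principal pressure."""
    return all(rho >= abs(p) for p in pressures)

def positive_pressure(pressures):
    """M2 (Positive Pressure Criterion): the sum of the principal
    pressures must be non-negative."""
    return sum(pressures) >= 0.0

# dust (p = 0) and radiation (p = rho/3) satisfy both conditions;
# a positive cosmological constant acts like p = -rho and fails M2
for name, rho, p in [("dust", 1.0, [0.0, 0.0, 0.0]),
                     ("radiation", 1.0, [1 / 3, 1 / 3, 1 / 3]),
                     ("Lambda > 0", 1.0, [-1.0, -1.0, -1.0])]:
    print(name, dominant_energy(rho, p), positive_pressure(p))
```

This is why, as noted just below, a positive cosmological constant is excluded by these matter conditions while a negative one is excluded by I1.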


The conditions M1 and M2 are satisfied by all known classical materials, but might be violated microscopically in the Universe if quantum black holes evaporate via the Hawking process or if particle creation occurs in empty space in the presence of strong gravitational fields.87 However, even in this case the violations would be confined to small regions ≲10^(-33) cm, and M1 and M2 should still be valid on average over large spatial scales late in the Universe.88 Notice that these conditions on the matter tensor exclude a positive cosmological constant, Λ, while a negative cosmological constant is excluded by I1. Collins and Hawking then write down the Einstein equations of the VII_h model. They are an autonomous system of non-linear ordinary differential equations89 of the general form

ẋ = F(x);  x = (x_1, x_2, ..., x_n)  (6.144)

Suppose the isotropic Friedman Universe is the solution of (6.144) given by the null solution (this can always be arranged by a coordinate transformation of the x_i),

x = 0  (6.145)

then it is a necessary condition of the chaotic cosmology programme that this solution be stable. The usual way of deciding whether or not (6.145) is a stable solution of (6.144) is to linearize (6.144) about the solution (6.145) to obtain

ẋ = Ax;  A: R^n → R^n  (6.146)

where A is a constant matrix. Now we determine the eigenvalues of A, and if any have positive real part then the Friedman solution (6.145) is unstable; that is, neighbouring cosmological solutions that start close to isotropy continuously deviate from it with the passage of time. The situation Collins and Hawking discovered was not so clear-cut. They found that one of the eigenvalues of A was purely imaginary, and so the stability could not be decided by the linear terms alone. However, they were able to decide the stability by separating out the variable with the imaginary eigenvalue and performing a second-order stability analysis on it. The open Friedman universe was shown to be unstable, but the deviations from it grow slowly, like ln t rather than a power of t. More precisely: If M1 and M2 are satisfied, then the set of cosmological initial data giving rise to models which approach isotropy as t → ∞ is of measure zero in the space of all spatially homogeneous initial data.
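The linear test described here is mechanical, and its failure mode (a purely imaginary eigenvalue) is easy to exhibit. The sketch below uses arbitrary illustrative matrices, not the actual Collins-Hawking system:

```python
import numpy as np

def linearly_stable(A):
    """Stability of the null solution of x' = Ax at linear order:
    any eigenvalue with positive real part means instability, and
    eigenvalues on the imaginary axis leave the question open
    (requiring higher-order analysis, as in Collins and Hawking)."""
    eigs = np.linalg.eigvals(np.asarray(A, dtype=float))
    if np.any(eigs.real > 1e-12):
        return "unstable"
    if np.any(np.isclose(eigs.real, 0.0)):
        return "undecided at linear order"
    return "stable"

print(linearly_stable([[-1.0, 0.0], [0.0, -2.0]]))  # stable
print(linearly_stable([[0.0, 1.0], [-1.0, 0.0]]))   # purely imaginary pair
print(linearly_stable([[0.5, 0.0], [0.0, -1.0]]))   # unstable
```

The middle case is precisely the situation Collins and Hawking encountered: a marginal eigenvalue forces the analysis to second order before stability can be decided.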

A closer examination90 of the Bianchi VII_h universe reveals that it fails to isotropize because conditions I3 and I4 are not met. As t → ∞ the ratio of the shear to the expansion rate, σ/H, approaches a constant and β ∝ t. This result tells us that almost every ever-expanding homogeneous Universe which can isotropize will not do so, regardless of the presence of dissipative stresses (so long as they obey the conditions M1 and M2). A detailed investigation of the Bianchi VII_h universe has been made by Barrow and Siklos,91 who have shown that there exists a special solution of Bianchi type VII_h which is stable, but not asymptotically stable, in the space of VII_h initial data. This particular solution, which was found some years ago by Lukash, contains two arbitrary parameters which, when chosen appropriately, can make the expansion arbitrarily isotropic. This result considerably weakens the Collins and Hawking conclusions: it shows that isotropic open universes are stable in the same sense that our solar system is stable. As t → ∞ there exist spatially homogeneous perturbations with σ/H → constant, but there are none with σ/H → ∞. The demand for asymptotic stability is too strong a requirement. However, despite this we shall assume that the Collins and Hawking theorem retains its force, because its interpretation in connection with the Anthropic Principle will transpire to be non-trivial.

Next, Collins and Hawking focused their attention upon a special subclass of the VII_h universes: those of type VII_0. These specialize to the 'flat', Einstein-de Sitter universe when isotropic. These models have the minimum of kinetic energy necessary to undergo expansion to infinity, have Euclidean space sections, and are of measure zero amongst all the ever-expanding type VII universes. The stability properties of these universes turn out to differ radically from those in the larger VII_h class. If the matter content of the universe is dominated by fluid with zero pressure (as seems to be the case in our Universe today, since galaxies exert negligible pressure upon each other) then flat, isotropic universes are stable. More precisely: If matter has zero pressure to first order and M1 holds, then there exists an open neighbourhood of the flat (k = 0) Friedman initial data in the type VII_0 subspace of all homogeneous initial data such that all data in this neighbourhood give rise to models which isotropize.

If the Universe is close to the 'flat' state of zero energy then, regardless of its initial state, it will eventually approach isotropy when it is old enough for the pressure-free material to dominate over radiation. Finally, we should add that if this type of analysis is applied to closed homogeneous universes which can isotropize, the type IX models, then one sees that in general they will not approach isotropy. A slightly different criterion of isotropization is necessary in this case because σ/H → ∞ when the universe approaches maximum volume, since H → 0 there even if the universe is almost isotropic; as an alternative criterion, one might require the spatial three-curvature to become isotropic at the time of maximum expansion, although it is not clear that the type IX universe model can recollapse unless this occurs.92

From these results two conclusions might be drawn. Either: (A) The Universe is 'young'; it is not of zero measure amongst all the ever-expanding models and is growing increasingly anisotropic due to the influence of generic homogeneous distortions which have had, as yet, insufficient time to create a noticeable effect upon the microwave radiation isotropy. Or: (B) The Universe is a member of the zero-measure set of flat, zero binding-energy models. The most general homogeneous distortions admitted by its geometry are of Bianchi type VII_0 and all decay at late times. The Universe is isotropizing but is of zero measure in the metaspace of all possible cosmological initial data sets.

The stance taken by Collins and Hawking is to support option (B) by invoking the Weak Anthropic Principle in the following manner. We saw in section 6.8 that our astronomical observations show the Universe to be remarkably close to 'flatness', (6.122); indeed, this is one of the reasons it has proven so difficult to determine whether the Universe is expanding fast enough for infinite future expansion or whether it will recollapse to a second and final space-time singularity. Collins and Hawking conclude that the reason for not observing the Universe to be strongly anisotropic is its proximity to the particular expansion rate required to expand forever. And there is a way we can explain our proximity to this very special state of expansion [69] if

. . . there is not one universe but a whole infinite ensemble of universes with all possible initial conditions. From the existence of the unstable anisotropic mode it follows that nearly all of the universes become highly anisotropic. However, these universes would not be expected to contain galaxies, since condensations can grow only in universes in which the rate of expansion is just sufficient to avoid recollapse. The existence of galaxies would seem to be a necessary precondition for the development of any form of intelligent life.

In the last section we saw how the probability of galaxy formation is closely related to the proximity of Ω₀ to unity. In universes that are now extremely 'open', Ω₀ ≪ 1, density inhomogeneities do not condense into self-gravitating units like galaxies, whereas if Ω₀ ≫ 1 they do so very rapidly and all regions of above average density would evolve into supermassive black holes before life-supporting biochemistry could arise. Conditions for galaxy formation are optimal in universes that are flat, Ω₀ = 1. We would not have expected humans to have evolved in a universe that was not close to flatness and, because flat universes are stable against anisotropic distortions, 'the answer to the question "why is the universe isotropic?" is "because we are here"' [69]. Striking as the previous argument appears, it is open to criticism in a variety of places. We have already mentioned that Collins and Hawking could simply have concluded that the universe is relatively young, open,

and tending towards anisotropy, but they felt 'rather unhappy about believing that the universe had managed to remain nearly isotropic up to the present day but was destined to be anisotropic eventually'. However, the Anthropic Principle provides just as good a basis for this interpretation as it does for the sequential argument that observers require heavy elements, which require stars and galaxies, and these require spatial flatness which, in turn, ensures isotropy at late times. There are good reasons why we should be observing the Universe when it is relatively youthful and close to the main-sequence stellar lifetime, ~10^10 yr. All stars will have exhausted their nuclear fuel in ~10^12 yr and galaxies will collapse catastrophically to black holes after ~10^18 yr; all nuclear matter may have decayed after ~10^31 years [93]. Planet-based beings like ourselves could not expect to observe the Universe in the far future, when the effects of anisotropy had grown significant and when any deviation from flatness had become unmistakable, because life-supporting environments like ours would, in all probability, no longer exist for carbon-based life. If we scrutinize the calculations which demonstrate the isotropic open universes to be unstable, we shall see we have to take this 'young universe' option (A) more seriously than the Collins-Hawking interpretation (B). The criteria adopted for isotropization are asymptotic conditions that are concerned only with the cosmological behaviour as t → ∞, when 'matter ceases to matter' in open universes. The dynamical evolution becomes entirely dominated by the three-curvature of the space-time. Thus, the proof that open, isotropic universes are unstable is only a statement about the late vacuum stage of their evolution.
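The sense in which 'matter ceases to matter' at late times in an open universe can be made explicit with the Friedman equation. The following is a standard textbook sketch in the usual notation, not an equation taken from the text:

```latex
% Friedman equation for an open (k = -1), dust-filled universe
% (units with c = 1):
\[
  H^{2} \equiv \left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho + \frac{1}{a^{2}},
  \qquad \rho \propto a^{-3} ,
\]
% so the ratio of the matter term to the curvature term is
\[
  \frac{(8\pi G/3)\,\rho}{a^{-2}} \;\propto\; \frac{1}{a}
  \;\longrightarrow\; 0 \quad \text{as } a \to \infty .
\]
```

Once the curvature term dominates, the expansion approaches the vacuum regime, and it is only this regime that the instability proof of ref. 69 constrains.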
It would be quite consistent with Collins and Hawking's result if almost every open universe tended to isotropy up until the time t* when it became vacuum-dominated, and then tended towards anisotropy thereafter. Since we are living fairly close to t* in our World, the presence of comparative isotropy in our Universe has little or nothing to do with a proof that open universes become increasingly anisotropic in their vacuum stages. Perhaps open universes also become increasingly anisotropic during temporary radiation or matter-dominated phases, but this is not yet known (although recent analyses [94] indicate they do not). The Universe could be open, have begun in a very anisotropic state and have evolved towards the present state of high isotropy without in any way conflicting with the theorems of ref. 69. In such a situation the present level of isotropy does not require close proximity to flatness for its explanation, and the Anthropic interpretation of Collins and Hawking becomes superfluous. The proof that flat anisotropic models approach isotropy requires a

condition on the matter content of the Universe (M1 and M2). This is not surprising since flat models, by definition, contain sufficient matter to influence the expansion dynamics at all times. Their stability depends crucially upon the matter content and would not exist if the flat universe were filled with radiation (p = ρ/3) rather than dust (p = 0). Yet, the bulk of the Universe's history has seen it dominated by the effects of radiation. Only comparatively recently, after t_eq ~ 10^12 s, has the influence of pressureless matter predominated. So the theorem that flat, isotropic universes are stable tells us nothing about their behaviour during the entire period of classical evolution from the Planck time, t_pl ~ 10^-43 s, until the end of the radiation era at t_eq ~ 10^12 s. It tells us only that anisotropies must decay after t_eq up until the present, t ~ 10^17 s, if the Universe is flat. The Universe could have begun in an extremely irregular state (or even a comparatively regular one) and grown increasingly irregular throughout its evolution during the radiation era until t_eq. The anisotropy could then have fallen slightly during the short period of evolution from t_eq to t_0, yet leave the present microwave background anisotropy greatly in excess of the observed level. Again, a flat, dust-dominated universe could be highly anisotropic today without in any way contradicting the theorems of Collins and Hawking and without in any way invoking the Anthropic Principle.
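The division of cosmic history invoked here corresponds to the standard scale-factor behaviour of the flat Friedman model. The following is a textbook sketch in the usual notation, not an equation from the text:

```latex
% Flat (k = 0) Friedman model, H^2 = (8 pi G / 3) rho, with
%   radiation (p = rho/3): rho ~ a^{-4}  =>  a ~ t^{1/2}
%   dust      (p = 0):     rho ~ a^{-3}  =>  a ~ t^{2/3}
\[
  a(t) \propto
  \begin{cases}
    t^{1/2}, & t_{\mathrm{pl}} \lesssim t \lesssim t_{\mathrm{eq}}
      \quad (p = \rho/3),\\[4pt]
    t^{2/3}, & t_{\mathrm{eq}} \lesssim t \lesssim t_{0}
      \quad (p = 0),
  \end{cases}
  \qquad t_{\mathrm{eq}} \sim 10^{12}\,\mathrm{s}.
\]
```

Since t_eq ~ 10^12 s while the present age is t ~ 10^17 s, the dust era covered by the stability theorem spans only the last five powers of ten of cosmic time.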

Another weakness of the Anthropic argument for the isotropy of the Universe is that it is based upon an unconfirmed theory for the origin of protogalaxies. For, we might claim, the outstanding problem in explaining the presence of galaxies from the action of gravitational instability on small fluctuations from homogeneity in any cosmological model is the size and nature of the initial fluctuations. In existing theories, these initial amplitudes are just chosen to grow the observed structure in the time allowed. Thus the cosmological model begins with the protogalaxies embedded in it at 10^-43 s, and they are given just the right appearance of age to grow galaxies by now, ~10^17 s. This is amusingly similar to the theory of Philip Gosse [95] who, in 1857, suggested that a resolution of the conflict between fossils of enormous age and religious prejudice for a very young Earth might be provided by a scheme in which the Universe was of recent origin but was created with ready-made fossils of great apparent age already in it! So, in practice, ad hoc initial amplitudes are chosen to allow flat Friedman universes to produce galaxies by the present. If these amplitudes were chosen significantly smaller or larger in the flat model the theory would predict no galaxies, or entirely black holes, respectively. By the same token, initial amplitudes might be chosen correspondingly larger (or smaller) in very open (or closed) models to compensate for the slower (or faster) amplification up to the present. Such a procedure would be no

less ad hoc than that actually adopted for the flat models. It is therefore hard to sustain an argument that galaxies grow too quickly or too slowly to allow the evolution of observers in universes deviating greatly from flatness. It could also be argued that to establish whether or not isotropy is a stable property of cosmological models one must examine general inhomogeneous cosmologies close to the Friedman model. Strictly speaking, spatially homogeneous models are of measure zero amongst the set of all solutions to Einstein's equations. This may not be as strong an objection as it sounds, probably not as strong as those arguments given against the Anthropic Principle explanation above. The instability of open universes could only be exacerbated by the presence of inhomogeneities, but it is possible that flat universes might turn out to be unstable to inhomogeneous gravitational wave perturbations. A resolution of this more difficult question virtually requires a knowledge of the general solution to the Einstein equations, and this is not likely to be found in the very near future. A few investigations of the late-time behaviour of inhomogeneous models [96] do exist but are not helpful, since they examine very special models that are far from representative of the general case. It could be argued that the real Pandora's box opened by the inclusion of inhomogeneous universes is the possibility of an infinite inhomogeneous

universe.

Our observations of the 'Universe' are actually just observations on and inside our past light-cone, which is defined by that set of signals able to reach us over the age of the universe. The structure of our past light-cone appears homogeneous and isotropic, but the grander conclusion that the entire universe possesses this property can only be sustained by appeal to an unverifiable philosophical principle; for example, the 'Copernican' principle, which maintains that our position in the Universe is typical. As Ellis has stressed [97], it is quite consistent with all cosmological observations so far made to believe that we inhabit an infinite universe possessing bizarre large-scale properties outside our past light-cone (and so unobservable by us), but which is comparatively isotropic and homogeneous on and inside that light-cone. This reflects the fact that we can observe only a finite portion of space-time [98]. If the Universe is 'closed', bounded in space and time, this finite observable portion may comprise a significant fraction of the entire Universe if Ω₀ is not very close to unity, and will allow conclusions to be drawn from it which are representative of the whole Universe. However, if the Universe is 'open' or 'flat' and infinite in spatial extent, our observational data has sampled (and will only ever sample) an infinitesimal portion of it and will never provide an adequate basis for deductions about its overall structure unless augmented by unverifiable assumptions about uniformity. If the Universe

is infinite and significantly inhomogeneous, the Collins and Hawking analysis would not even provide an answer to the question 'why is our past light-cone isotropic?' unless one could find general inhomogeneous solutions to Einstein's equations which resembled the VII_h and VII_0 models locally. But perhaps in this infinite, inhomogeneous universe the Anthropic explanation could re-emerge. Only some places within such a universe will be conducive to the presence of life, and only in those places would we expect to find it. Perhaps observers at those places necessarily see isotropic expansion; perhaps only world-lines with isotropic past light-cones eventually trace the paths of intelligent beings through space and time.

6.12 Inflation

It is therefore clear that from the direct data of observation we can derive neither the sign nor the value of the curvature, and the question arises whether it is possible to represent the observed facts without introducing a curvature at all. A. Einstein and W. de Sitter

We have seen that our Universe possesses a collection of unusual properties: a particular, small level of inhomogeneity, a high degree of isotropy, and a close proximity to the 'critical' density required for 'flatness'. All of these properties play an important role in underwriting the cosmological conditions necessary to evolve galaxies and stars and observers. Each has, until recently, been regarded as an independent cosmic conundrum requiring a separate solution. We can always appeal to very special starting conditions at the Big Bang to explain any puzzling collection of current observations but, in the spirit of the 'chaotic cosmologists' mentioned in the last section, it is more appealing to find physical principles that require the Universe to possess its present properties or, less ambitiously, to show that some of its unusual properties are dependent upon the others. A new approach to explaining some of these fundamental cosmological problems began in 1981 with the work of Sato [99] and Guth [67]. Subsequently this package of ideas, dubbed the 'inflationary universe' by Guth, has undergone a series of revisions and extensions [100]. We shall focus upon general points of principle desired of any working model of the inflationary type. During the first 10^-35 s of cosmic expansion the sea of elementary particles and radiation that fills the Universe can reside in a variety of physical states that physicists call 'phases'. At a more elementary level,

recall that ordinary water exists in three phases of gaseous, liquid or solid type, which we call steam, water and ice respectively. These 'phases' correspond to different equilibrium states of the molecules. Steam is the most energetic state whilst ice is the least energetic. If we pass from a high to a low energy state then excess heat will be given out. This is why your hand will be scalded when steam condenses upon it. If changes of phase occur between the different elementary particle states in the early universe then dramatic events can ensue. The energy difference between the two phases can accelerate the expansion of the Universe for a finite period of time. This brief period of 'inflation' can produce a series of remarkable consequences. Alternative phases can exist for the scalar Higgs fields, associated with the supermassive X and Y bosons that we discussed earlier in connection with the baryon asymmetry of the Universe. They will possess some potential energy of interaction, V(φ).

By continuing the time coordinate to imaginary values, t → −it, it is possible to exclude singularities from the resulting Euclidean region. Path integrals have nice properties in this Euclidean region, and Hawking claims that by integrating the path integral only over compact metrics the need for any boundary condition at all disappears. Hence Hawking suggests that the quantum wave function of the Universe is defined by a path integral over compact metrics without boundary. Hawking has argued that, in the classical limit, the quantum state derived from this condition has desirable cosmological properties [140]: it must be almost isotropic and homogeneous and be very close to the Ω₀ ≈ 1 state. This quantum state can be regarded as a sort of superposition of Friedman universes with these classical properties.

The type of boundary condition proposed by Hawking, unlike that of Penrose, explicitly involves quantum gravitation and, in particular, must come to grips with the problem of what is meant by the 'wave function of the Universe' after it has been written down. In the next chapter we move on to consider this complex problem, which brings together the roles of observer and observed in Nature in an intimate and intricate fashion. In this chapter we have discussed the ideas of modern theoretical and observational cosmology in some detail. We have completed the study, begun in Chapter 5, of the size spectrum of objects in Nature and have shown how properties of the Universe as a whole, perhaps endowed at its inception, may be crucial if the existence of observers is ever to be possible within it.


References

1. For popular accounts of modern cosmology, see S. Weinberg, The first three minutes (Deutsch, London, 1977), and J. D. Barrow and J. Silk, The left hand of creation (Basic Books, NY, 1983, and Heinemann, London, 1984).
2. This prediction was not made by Einstein himself, who sought to suppress the expansion of the Universe emerging from his original formulation by introducing another, mathematically admissible, parameter into his gravitational field equations. This type of static cosmological solution with a uniform non-zero density was in line with the prevailing philosophical view. These solutions proved to be unstable, and the first expanding universe solutions without cosmological constant were found by A. Friedman, Z. Physik 10, 377 (1922). For more detailed history see A. Pais, Subtle is the Lord (Oxford University Press, Oxford, 1983), J. North, The measure of the universe (Oxford University Press, 1965), and F. J. Tipler, C. J. S. Clarke, and G. F. R. Ellis, in General relativity and gravitation: an Einstein centenary volume, edited by A. Held (Pergamon Press, NY, 1980).
3. E. Hubble, Proc. natn. Acad. Sci., U.S.A. 15, 169 (1929). Technically speaking, the Hubble redshift is not a true Doppler effect, since in a non-Euclidean geometry we cannot invariantly characterize relative velocities of recession, and the effect arises because of light propagation through a curved space-time, although it is described by identical formulae as the Doppler effect.
4. A. A. Penzias and R. W. Wilson, Astrophys. J. 142, 419 (1965).
5. R. Alpher and R. Herman, Nature 162, 774 (1948), and see also S. Weinberg, ref. 1, for some historical background.
6. R. H. Dicke, P. J. E. Peebles, P. G. Roll, and D. T. Wilkinson, Astrophys. J. 142, 414 (1965).
7. R. V. Wagoner, W. A. Fowler, and F. Hoyle, Astrophys. J. 148, 3 (1967).
8. B. Pagel, Phil. Trans. R. Soc. A 307, 19 (1982).
9. S. W. Hawking and G. F. R. Ellis, The large scale structure of space-time (Cambridge University Press, Cambridge, 1973).
10. For a detailed overview see J. D. Barrow, Fund. Cosmic Phys. 8, 83 (1983), and for popular accounts see ref. 1.
11. This assumption is sometimes called 'The Cosmological Principle', following E. A. Milne.
12. S. Weinberg, Gravitation and cosmology (Wiley, NY, 1972). We assume Λ = 0 here.
13. D. W. Sciama, Modern cosmology (Cambridge University Press, Cambridge, 1975).
14. J. D. Barrow, Mon. Not. R. astron. Soc. 175, 359 (1976).
15. Here k is Boltzmann's constant; henceforth it will be set equal to unity.
16. A. Sandage and E. Hardy, Astrophys. J. 183, 743 (1973).
17. J. Audouze, in Physical cosmology, ed. R. Balian, J. Audouze, and D. N. Schramm (North-Holland, Amsterdam, 1979); S. van den Bergh, Quart. J. R. astron. Soc. 25, 137 (1984).
18. S. M. Faber and J. S. Gallagher, Ann. Rev. Astron. Astrophys. 17, 135 (1979).

19. P. J. E. Peebles, in Physical cosmology, op. cit.; M. Davis, J. Tonry, J. Huchra, and D. W. Latham, Astrophys. J. Lett. 238, 113 (1980).
20. H. Totsuji and T. Kihara, Publ. Astron. Soc. Japan 21, 221 (1969); S. M. Fall, Rev. Mod. Phys. 51, 21 (1979).
21. P. J. E. Peebles, The large scale structure of the universe (Princeton University Press, NJ, 1980).
22. A. Webster, Mon. Not. R. astron. Soc. 175, 61; 175, 71 (1976).
23. M. Peimbert, Ann. Rev. Astron. Astrophys. 13, 113 (1975).
24. C. Laurent, A. Vidal-Madjar, and D. G. York, Astrophys. J. 229, 923 (1979).
25. F. Stecker, Nature 273, 493 (1978); G. Steigman, Ann. Rev. Astron. Astrophys. 14, 339 (1976).
26. D. P. Woody and P. L. Richards, Phys. Rev. Lett. 42, 925 (1979).
27. R. Dicke, Nature 192, 440 (1961). The fact that cosmological descriptions of the expanding Universe link local conditions and habitability with global facets like the size of the Universe was first stressed by G. Whitrow, see E. L. Mascall, Christian theology and natural science (Longmans, London, 1955), and G. M. Idlis, Izv. Astrofiz. Inst. Kazakh. SSR 7, 39 (1958), (in Russian), whose paper was entitled 'Basic features of the observed astronomical universe as characteristic properties of a habitable cosmic system'.
28. J. Milton, Paradise Lost, Book 8 (1667).
29. H. Minkowski introduced this concept in a lecture entitled 'Space and Time' delivered in Cologne, 1908.
30. J. A. Wheeler, in Essays in general relativity, ed. F. J. Tipler (Academic Press, NY, 1980), and see also L. C. Shepley, in this volume. These authors investigate the fact that anisotropic universes of Galactic mass can have expanded for 10^10 yr compared with only a few months in the isotropic case.
31. A. S. Eddington, Proc. natn. Acad. Sci., U.S.A. 16, 677 (1930).
32. V. A. Lyubimov, E. G. Novikov, V. Z. Nozik, E. F. Tret'yakov, V. S. Kozik, and N. F. Myasoedov, Sov. Phys. JETP 54, 616 (1981).
33. J. D. Barrow, Phil. Trans. R. Soc. A 296, 273 (1980).
34. J. Silk, Astrophys. J. 151, 459 (1968).
35. J. Silk, Nature 265, 710 (1977).
36. M. J. Rees and J. Ostriker, Mon. Not. R. astron. Soc. 179, 541; J. Silk, Astrophys. J. 211, 638 (1976); J. Binney, D.Phil. thesis, Oxford University (1977).
37. J. D. Barrow and M. S. Turner, Nature 291, 469 (1981).
38. B. J. Carr and M. J. Rees, Nature 278, 605 (1979).
39. M. Dine, W. Fischler, and M. Srednicki, Phys. Lett. B 104, 199 (1981); M. B. Wise, H. Georgi, and S. L. Glashow, Phys. Rev. Lett. 47, 402 (1981).
40. G. R. Blumenthal, S. M. Faber, J. R. Primack, and M. J. Rees, Nature 311, 517 (1984).
41. J. E. Gunn, B. W. Lee, I. Lerche, D. N. Schramm, and G. Steigman, Astrophys. J. 223, 1015 (1978).


42. R. Cowsik and J. McClelland, Phys. Rev. Lett. 29, 669 (1972); Astrophys. J. 180, 7 (1973); J. E. Gunn and S. Tremaine, Phys. Rev. Lett. 42, 407 (1979).
43. G. Bisnovatyi-Kogan and I. D. Novikov, Sov. Astron. Lett. 24, 516 (1981); J. Bond, G. Efstathiou, and J. Silk, Phys. Rev. Lett. 45, 1980 (1980).
44. A. G. Doroshkevich, M. Y. Khlopov, A. S. Szalay, and Y. B. Zeldovich, Ann. NY Acad. Sci. 375, 32 (1980).
45. M. Davis, G. Efstathiou, C. Frenk, and S. D. M. White, Astrophys. J. 292, 371 (1985). For a popular description, see J. D. Barrow and J. Silk, New Scient., 30 Aug. (1984).
46. C. Hayashi, Prog. Theor. Phys. 5, 224 (1950); Prog. Theor. Phys. Suppl. 49, 248 (1971).
47. F. Hoyle and R. J. Tayler, Nature 203, 1108 (1964); P. J. E. Peebles, Phys. Rev. Lett. 43, 1365; R. V. Wagoner, in Confrontation of cosmological theories with observation, ed. M. Longair (Reidel, Dordrecht, 1974).
48. B. Carter, unpublished manuscript, 'The significance of numerical coincidences in Nature' (DAMTP preprint, University of Cambridge, 1967): our belief that Carter's work should appear in print provided the original motivation for writing this book, in fact. F. Hoyle, Astronomy and cosmology: a modern course (Freeman, San Francisco, 1975).
49. Y. B. Zeldovich and I. D. Novikov, Sov. Phys. JETP Lett. 4, 117 (1966); R. Alpher and G. Gamow, Proc. natn. Acad. Sci., U.S.A. 61, 363 (1968).
50. M. J. Rees, Phys. Rev. Lett. 28, 1969 (1972); Y. B. Zeldovich, Mon. Not. R. astron. Soc. 160, 1p (1972); E. P. T. Liang, Mon. Not. R. astron. Soc. 171, 551 (1975); J. D. Barrow, Nature 267, 117 (1977); J. D. Barrow and R. A. Matzner, Mon. Not. R. astron. Soc. 181, 719 (1977); B. J. Carr, Acta cosmologica 11, 113 (1983).
51. M. Clutton-Brock, Astrophys. Space Sci. 47, 423 (1977).
52. J. D. Barrow and R. A. Matzner, Mon. Not. R. astron. Soc. 181, 719 (1977).
53. M. J. Rees, Nature 275, 35 (1978); B. J. Carr, Acta cosmologica 11, 131 (1982).
54. E. Salpeter, Astrophys. J. 121, 161 (1955).
55. B. J. Carr, Mon. Not. R. astron. Soc. 181, 293 (1977), and 189, 123 (1978).
56. H. Y. Chiu, Phys. Rev. Lett. 17, 712 (1965); Y. B. Zeldovich, Adv. Astron. Astrophys. 3, 242 (1965); G. Steigman, Ann. Rev. Nucl. Part. Sci. 29, 313 (1979). The result changes slightly if the expansion is anisotropic, but not enough to affect the general conclusions; J. D. Barrow, Nucl. Phys. B 208, 501 (1982).
57. A. D. Sakharov, Sov. Phys. JETP Lett. 5, 24 (1967); E. Kolb and S. Wolfram, Nucl. Phys. B 172, 224 (1980).
58. V. A. Kuzmin, Sov. Phys. JETP Lett. 12, 228 (1970). Note that an interesting example of a system violating baryon conservation but conserving both C and CP is provided by the gravitational interaction. This is manifested during the collapse of a cloud of material to the black hole state and its subsequent evaporation via the Hawking effect. The final state will always be baryon symmetric so long as no particles which can undergo CP-violating decays are evaporated (for this case see J. D. Barrow, Mon. Not. R. astron. Soc. 192, 427 (1980), and J. D. Barrow and G. Ross, Nucl. Phys. B 181, 461 (1981)).

59. M. Yoshimura, Phys. Rev. Lett. 41, 281 (1978); A. Y. Ignatiev, N. Krasnikov, V. Kuzmin, and A. Tavkhelidze, Phys. Lett. B 76, 436 (1978); S. Dimopoulos and L. Susskind, Phys. Rev. D 19, 1036 (1979); J. Ellis, M. K. Gaillard, D. V. Nanopoulos, and S. Rudaz, Phys. Lett. B 99, 101 (1981); S. Weinberg, Phys. Rev. Lett. 42, 850 (1979); A. D. Dolgov, Sov. J. Nucl. Phys. 32, 831 (1980).
60. J. D. Barrow, Mon. Not. R. astron. Soc. 192, 19p (1980).
61. E. Kolb and M. S. Turner, Ann. Rev. Nucl. Part. Sci. 33, 645 (1983).
62. J. N. Fry, K. A. Olive, and M. S. Turner, Phys. Rev. D 22, 2953 (1980). For other numerical results see Kolb and Wolfram, ref. 57.
63. D. V. Nanopoulos, Phys. Lett. B 91, 67 (1980).
64. J. D. Barrow and M. S. Turner, Nature 291, 469 (1981).
65. S. W. Hawking and G. F. R. Ellis, The large scale structure of space-time (Cambridge University Press, Cambridge, 1973).
66. C. B. Collins and S. W. Hawking, Mon. Not. R. astron. Soc. 162, 307 (1973); J. D. Barrow, Mon. Not. R. astron. Soc. 175, 359 (1976) and Quart. J. R. astron. Soc. 23, 344 (1982).
67. A. Guth, Phys. Rev. D 23, 347 (1981).
68. M. J. Rees, Phil. Trans. R. Soc. A 310, 311 (1983).
69. C. B. Collins and S. W. Hawking, Astrophys. J. 180, 317 (1973).
70. M. J. Rees, Quart. J. R. astron. Soc. 22, 109 (1981).
71. Observations of the deceleration parameter lead to this limit. See also D. Tytler, Nature 291, 289 (1981) and J. D. Barrow, Phys. Lett. B 107, 358.
72. S. W. Hawking, Phil. Trans. R. Soc. A 310, 303 (1983).
73. F. Hoyle and J. V. Narlikar, Proc. R. Soc. A 273, 1 (1963); also articles by J. D. Barrow and by W. Boucher and G. Gibbons, in The very early universe, ed. G. Gibbons, S. W. Hawking, and S. T. C. Siklos (Cambridge University Press, Cambridge, 1983).
74. Y. B. Zeldovich, Sov. Phys. JETP Lett. 6, 1050 (1967); Sov. Phys. Usp. 24, 216 (1982). This interpretation of the cosmological constant was pioneered by W. H. McCrea, Proc. R. Soc. A 206, 562 (1951).
75. E. M. Lifshitz, Sov. Phys. JETP 10, 116 (1946); E. M. Lifshitz and I. Khalatnikov, Adv. Phys. 12, 185 (1963).
76. S. Weinberg, Gravitation and cosmology (Wiley, NY, 1972).
77. A. A. Kurskov and L. Ozernoi, Sov. Astron. 19, 937 (1975).
78. M. J. Rees, in Physical cosmology, ed. R. Balian, J. Audouze, and D. N. Schramm (North-Holland, Amsterdam, 1979).
79. W. Rindler, Mon. Not. R. astron. Soc. 116, 662 (1955).
80. C. W. Misner, Nature 214, 30 (1967); Phys. Rev. Lett. 19, 533 (1967).
81. C. W. Misner, Astrophys. J. 151, 431 (1968).
82. J. D. Barrow, Nature 272, 211 (1978).
83. J. M. Stewart, Mon. Not. R. astron. Soc. 145, 347 (1969); C. B. Collins and J. M. Stewart, Mon. Not. R. astron. Soc. 153, 419 (1971); A. G. Doroshkevich, Y. B. Zeldovich, and I. D. Novikov, Sov. Phys. JETP 26, 408 (1968).
84. L. Bianchi, Mem. Soc. It. 11, 267 (1898), repr. in Opere IX, ed. A. Maxia


(Edizioni Cremonese, Rome, 1952); M. Ryan and L. C. Shepley, Homogeneous relativistic cosmologies (Princeton University Press, New Jersey, 1975); D. Kramer, H. Stephani, E. Herlt, and M. A. H. MacCallum, Exact solutions of Einstein's field equations (Cambridge University Press, 1980).
85. S. T. C. Siklos, in Relativistic astrophysics and cosmology, ed. E. Verdaguer and X. Fustero (World Publ., Singapore, 1984).
86. The reason for this condition is that, even though the shear σ may be decaying, it is still possible for the integrated effect of the shear down our past null cone to be large (for example, if σ ∝ t⁻¹ then the cumulative microwave anisotropy would grow logarithmically in time).
87. S. W. Hawking, Commun. Math. Phys. 43, 189 (1975).
88. F. J. Tipler, Phys. Rev. D 17, 2521 (1978).
89. See, for example, V. I. Arnold, Ordinary differential equations (MIT Press, Cambridge, 1978).
90. A. G. Doroshkevich, V. N. Lukash, and I. D. Novikov, Sov. Phys. JETP 37, 739 (1974); J. D. Barrow and F. J. Tipler, Nature 276, 453 (1978); J. D. Barrow, ref. 66.
91. J. D. Barrow and S. T. C. Siklos, in preparation (1984); for a summary of results and the techniques employed, see J. D. Barrow and D. H. Sonoda, Gen. Rel. Gravn. 17, 409 (1985) and Phys. Rep. (in press).
92. S. P. Novikov, Sov. Phys. JETP 35, 1031 (1977).
93. P. Langacker, Phys. Rep. 72, 185 (1981).
94. V. N. Lukash, Nuovo Cim. B 35, 268 (1976).
95. P. Gosse, Omphalos: an attempt to untie the geological knot (J. van Voorst, London, 1857).
96. W. B. Bonnor, Mon. Not. R. astron. Soc. 167, 55 (1974); W. B. Bonnor and N. Tomimura, Mon. Not. R. astron. Soc. 175, 85 (1976). The models considered in these papers do not contain any gravitational wave modes, and do not belong to the Bianchi classification in the spatially homogeneous limit. They possess special geometrical properties and are of measure zero in initial data space.
97. G. F. R. Ellis, Gen. Rel. Gravn. 9, 87 (1978) and 11, 281 (1979); Quart. J. R. astron. Soc. 16, 245 (1975).
98. Note, however, that closed universes can be made arbitrarily large by choosing Ω₀ infinitesimally close to unity. This seemingly artificial situation turns out to be extremely relevant if current ideas in elementary particle physics turn out to be correct; see §6.7 below. Also, 'open' universes can be made finite in volume by a suitable choice of topology.
99. K. Sato, Mon. Not. R. astron. Soc. 195, 467 (1981); Phys. Lett. B 99, 66 (1981).
100. A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. 48, 1220 (1982); A. Linde, Phys. Lett. B 108, 389 (1982) and B 114, 431 (1982); S. W. Hawking and I. G. Moss, Phys. Lett. B 110, 35 (1982); see also The very early universe, cited in ref. 73.
101. S. Coleman and E. Weinberg, Phys. Rev. D 7, 1888 (1973).
102. Y. B. Zeldovich and M. Y. Khlopov, Phys. Lett. B 79, 239 (1978).
103. J. P. Preskill, Phys. Rev. Lett. 43, 1365 (1979).

455 The Anthropic Principles in Classical Cosmology 104. A. M. Polyakov, Sov. Phys. JETP Lett. 20, 194 (1974); G. t'Hooft, Nucl. Phys. B 79, 176 (1974). 105. W. de Sitter, Mon. Not. R. astron. Soc. 90, 3 (1917). 106. S. W. Hawking, Phys. Lett. B 115, 195 (1982); A. A. Starobinskii, Phys. Lett. B 117, 175 (1982); A. H. Guth and S.-Y. Pi, Phys. Rev. Lett. 49, 1110 (1982); C. Vayonnakis, Phys. Lett. B 123, 396; J. M. Bardeen, P. J. Steinhardt, and M. S. Turner, Phys. Rev. D 28, 679 (1983). 107. E. R. Harrison, Phys. Rev. 1, 2726 (1969); Y. B. Zeldovich, Mon. Not. R. astron. Soc. 160, lp (1970). Since the metric perturbation Sg/g is related to the density perturbation Sp/p over a length scale A « M , the dependence (6.155) yields scale-independent metric fluctuations. This means that every mass scale has the same density perturbation amplitude when it enters the cosmological particle horizon, because M t « dpi p. 108. For different types of inflationary model which claim to obtain a more acceptable prediction of 8p/p, see J. Ellis, D. V. Nanopoulos, K. A. Olive, and K. Tamvakis, Nucl. Phys. B 221, 524 (1983); G. B. Gelmini, D. V. Nanopoulos, and K. A. Olive, Phys. Lett. B 131, 53 (1983). 109. Y. B. Zeldovich, Sov. Astron. Lett. 7, 323 (1981). 110. A. D. Linde, Phys. Lett. B 129, 177 (1983), Sov. Phys. JETP Lett. 38, 176 (1983). 111. In effect, the right-hand side of the Friedman equation is governed by a constant stress (up to logarithmic accuracy) and as a consequence has a solution of the exponentially expanding, de Sitter form. 112. The precise quartic form of the potential (6.159) is obviously not essential to this argument. Any symmetric V() with shallow slope near its minimum would work, (for example, V() = /uuf>), although the constraints on the coupling constant, /m, would differ accordingly. 113. 
Note that for these probabilistic arguments to work it is not sufficient merely to have an infinite number of possibilities to choose from; they must also be exhaustive of all possibilities, and hence a random infinity is necessary for the argument to work.
114. This cannot be done with any confidence yet, because there is still no working model of inflation that produces all the advantageous results simultaneously without a special ad hoc choice of the free parameters involved.
115. For example, the simple versions of Dirac's Large Numbers Hypothesis discussed in section 4.1 require that Ω = 1 and Λ = 0 identically in order that dimensionless numbers cannot be associated with the curvature radius and Λ respectively. The simple oscillating universe model studied by P. Landsberg and D. Park, Proc. R. Soc. A 346, 485 (1976), increases in size with each oscillation due to entropy production, and should therefore be infinitesimally close to Ω = 1 after a past eternity of oscillations. However, the Landsberg-Park model is unphysical because it contains no mechanism which would allow evolution through the singularity which occurs at the end of each oscillation. Some quantum gravitational theories also naturally predict Ω = 1; see J. V. Narlikar and T. Padmanabhan, Phys. Rep. 100, 151 (1983), T. Padmanabhan, Phys. Lett. A 96, 110 (1983), and Chapter 7 of this book.
116. S. W. Hawking, in Quantum structure of space and time, ed. M. Duff and C. Isham (Cambridge University Press, Cambridge, 1982), p. 423.



117. This will occur so long as there does not exist some other conserved combination of baryon and lepton numbers, as there does in the simplest GUTs, like SU(5).
118. L. Grishchuk and Y. B. Zeldovich, in ref. 116; D. Atkatz and H. Pagels, Phys. Rev. D 25, 2065 (1982); A. Vilenkin, Phys. Lett. B 117, 25 (1982).
119. E. Tryon, Nature 246, 396 (1973); see also P. I. Fomin, Dokl. Ukran. Acad. Sci. A 9, 831 (1975).
120. In the closed Friedman model the rest mass energy of the material content exactly equals its potential energy; see Y. B. Zeldovich, Adv. Astron. Astrophys. 3, 242 (1965).
121. If inflation occurs at the energy scale of grand unification, then initial conditions must be such as to allow the temperature to fall to that level.
122. It is possible to avoid the initial singularity in space-times with a cosmological constant, or where there is a self-interacting scalar field of the type described in section 6.8 (because V(φ) > 0 allows a violation of the strong energy condition; see ref. 88 and J. D. Barrow and R. A. Matzner, Phys. Rev. D 21, 336 (1980)), although it has been argued that this may not prevent a singularity in the future; see S. Bludman, Nature 308, 319 (1984).
123. J. D. Barrow, Nature 272, 211 (1978).
124. J. D. Barrow, Phys. Rep. 85, 1 (1982); D. Chernoff and J. D. Barrow, Phys. Rev. Lett. 50, 134 (1983) and Gravity Essay (1982); Y. Elskens, Phys. Rev. D 28, 1033; V. A. Belinskii, E. M. Lifshitz, and I. M. Khalatnikov, Sov. Phys. Usp. 13, 745 (1971); E. M. Lifshitz, I. M. Lifshitz, and I. M. Khalatnikov, Sov. Phys. JETP 32, 173; J. D. Barrow, in Classical general relativity, ed. W. Bonnor, J. Islam, and M. A. H. MacCallum (Cambridge University Press, Cambridge, 1984).
125. R. Penrose, in Theoretical principles in astrophysics and relativity, ed. N. Lebovitz, W. H. Reid, and P. O. Vandervoort (University of Chicago Press, 1978); in Proc. First Marcel Grossman meeting on general relativity, ed. R. Ruffini (North-Holland, Amsterdam, 1977); and in Physics and contemporary needs, ed. Riazuddin (Plenum, NY, 1977).
126. S. W. Hawking, in Astrophysical cosmology, Pont. Acad. Scient. Scripta Varia 48, 563 (Pontificia Acad. Scient., Vatican City, 1982); Nucl. Phys. B 239, 257 (1984).
127. J. Hartle and S. W. Hawking, Phys. Rev. D 28, 2960 (1983).
128. S. W. Hawking, Commun. Math. Phys. 43, 199 (1975).
129. P. Candelas and D. W. Sciama, Phys. Rev. Lett. 38, 1372 (1977).
130. R. Penrose, in Confrontation of cosmological theories with observational data, ed. M. S. Longair (Reidel, Dordrecht, 1974), p. 263.
131. E. Kasner, Am. J. Math. 43, 217 (1921).
132. J. D. Barrow, unpublished Gravity Essay (1980).
133. F. J. Tipler, Phys. Rev. D 15, 942 (1977).
134. F. J. Tipler, Gen. Rel. Grav. 10, 1005 (1979).
135. In fact, the stable late-time asymptotes of homogeneous universes investigated in ref. 91 do appear to attain the bound (6.166) as t → ∞.
136. J. Collins and M. J. Perry, Phys. Rev. Lett. 34, 1353 (1975); B. L. Hu, in Recent developments in general relativity (North-Holland, Amsterdam, 1980).

137. J. D. Barrow and F. J. Tipler, Nature 276, 453 (1978).
138. R. Penrose, in General relativity: an Einstein centenary survey, ed. S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979), and in Progress in cosmology, ed. A. Wolfendale (Reidel, Dordrecht, 1982), p. 87.
139. K. Gödel, Rev. Mod. Phys. 21, 447 (1949). For a detailed survey see F. J. Tipler, Ann. Phys. 108, 1 (1977), Phys. Rev. D 9, 2203 (1974), and Phys. Rev. Lett. 37, 879 (1976). The question of the stability of closed timelike curves is investigated in detail in S. W. Hawking, Gen. Rel. Grav. 1, 393 (1971); and in F. J. Tipler, J. Math. Phys. 18, 1568 (1977).
140. S. W. Hawking and J. C. Luttrell, Phys. Lett. B 143, 83 (1984).
141. M. Davis, J. Huchra, D. Latham, and J. Tonry, Astrophys. J. 270, 20 (1983).
142. A. D. Linde, Rep. Prog. Phys. 47, 925 (1984); Lett. Nuovo Cim. 39, 401 (1984).
143. J. B. Hartle, 'Initial conditions' (Lecture delivered at the Fermilab Inner Space/Outer Space Conference, 4 May 1984).
144. H. Kodama, 'Comments on Chaotic Inflation', KEK Report 84-12, ed. K. Odaka and A. Sugamoto (1984).

7

Quantum Mechanics and the Anthropic Principle

Nothing ever becomes real till it is experienced.
John Keats

7.1 The Interpretations of Quantum Mechanics

When I hear of Schrödinger's cat, I reach for my gun.
S. W. Hawking

In classical physics Man seemed entirely superfluous to the Universe. He was only a cog—and a rather small cog at that—in the Newtonian world-machine. However, his role in the Cosmos appears greatly enhanced in quantum mechanics. According to the so-called Copenhagen Interpretation of the quantum mechanical formalism—the interpretation most widely accepted among contemporary physicists—Man, in his capacity as the observer of an experiment, is an essential and irreducible feature of physics. The historians S. G. Brush1 and P. Forman2 have claimed that the idea of the observer playing an important role in a physical measurement can be traced back to the nineteenth century, but in quantum mechanics this idea was first put forward only in 1926 by Born3 in his 'probability interpretation' of Schrödinger's wave function. This function and the Schrödinger equation which it satisfies were very successful in solving many outstanding problems in atomic physics and spectroscopy, but in the interpretation which Schrödinger himself gave to the wave function—that of measuring the charge density of the electron in a system described by the Schrödinger equation4—there were a number of difficulties. For instance, the wave function which described a beam of electrons incident upon a photographic plate was greatly extended in space, yet each electron actually impinged upon the plate at a localized point. In the Schrödinger interpretation this was interpreted as a sudden instantaneous 'collapse' of the charge spread out over a wide area down to a point on the plate. It was hard to see how such a collapse could be consistent with the requirement of special relativity that no information be transmitted faster than the velocity of light. In the probabilistic interpretation put forward by Born,5 the wave function was a measure of the probability that the electron, viewed as always remaining a point particle, was at the point x in space. More precisely, since ψ is a complex number, and a probability must be a non-negative real number, |ψ(x)|² is the probability that the electron is at the point x. This interpretation removed the difficulty presented by causality. For before the electron hits the plate, it has a small probability of being at many points over a wide area, and hence the wave function is spread out over a wide area. When the electron actually hits the photographic plate, and hence is measured to be at a particular spot on that plate, the probability that the electron is at that particular spot suddenly becomes one at that point, and zero at all other points. This means that the wave function must suddenly collapse if it is to measure the sudden change in the probabilities. What changes is not something physical but, as Born put it, rather 'our knowledge of the system suddenly changes'. Thus with this interpretation, a property of Man in his role as observer of the physical universe enters the formalism of physics in an essential way. The Born interpretation of the Schrödinger wave function was extended by the great Danish physicist Niels Bohr, who turned it into the so-called Copenhagen interpretation, and it is this interpretation of the quantum mechanical formalism which is most widely accepted among physicists today, at least in some form. Bohr first defended the essential role played by the observer in quantum mechanics in his Como Lecture of 1927:6
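The Born rule and the associated 'collapse' can be illustrated numerically. The following sketch is our own illustration, not from the text: it discretizes a spread-out wave function on a grid, reads |ψ(x)|² as a probability distribution, and models a position measurement as the sudden replacement of that distribution by one concentrated at the observed point.

```python
import numpy as np

# Discretize a spread-out wave function on a grid of points.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 8.0).astype(complex)      # a broad wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize it

# Born rule: |psi(x)|^2 dx is the probability of finding the electron near x.
prob = np.abs(psi)**2 * dx
assert abs(prob.sum() - 1.0) < 1e-9

# A position measurement: sample one outcome from the Born distribution...
rng = np.random.default_rng(0)
hit = rng.choice(len(x), p=prob)

# ...after which the distribution 'collapses': probability one at the
# observed point, zero everywhere else ('our knowledge suddenly changes').
collapsed = np.zeros_like(prob)
collapsed[hit] = 1.0
print(f"electron found near x = {x[hit]:.2f}")
```

Nothing physical is propagated in the last step; only the probability assignment changes, which is exactly Born's point.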

On one hand, the definition of the state of a physical system, as ordinarily understood, claims the elimination of all external disturbances. But in that case, according to the quantum postulate, any observation will be impossible, and above all, the concepts of space and time lose their immediate sense. On the other hand, if in order to make observation possible we permit certain interactions with suitable agencies of measurement, not belonging to the system, an unambiguous definition of the state of the system is naturally no longer possible, and there can be no question of causality in the ordinary sense of the word. The very nature of the quantum theory thus forces us to regard the space-time coordination and the claim of causality, the union of which characterizes the classical theories, as complementary but exclusive features of the description, symbolizing the idealization of observation and definition respectively.

Both in the above passage and in his later writings on the quantum theory of measurement and the philosophical significance of quantum mechanics, Bohr goes far beyond the bare bones of a probabilistic interpretation of the wave function. For a probabilistic or a statistical interpretation of the wave function is perfectly consistent with the notion of the world as a deterministic system in which both causality and a space-time description are valid simultaneously. However, in this case one must admit that the statistical quantum theory is not the ultimate theory of the world. If the world is deterministic in the classical mechanical sense and its properties exist independently of human observation, then it must be that quantum theory is statistical only because it contains no reference to some of the classical variables which are actually governing the behaviour of atomic particles. These unknown factors not considered by quantum theory are termed 'hidden variables' by those physicists who support such a deterministic world view.7 This view is not inherently unreasonable because classical statistical mechanics was pictured in precisely this way during the nineteenth century.8 The atoms of a gas were pictured as governed by deterministic Newtonian laws of motion. However, because there is such an enormous number of atoms in a macroscopic volume of gas—10²³ atoms in a cubic centimetre being a typical number—it is a practical impossibility to take into account all of the variables—6 variables per atom—which would have to be considered in a deterministic classical description of the system of atoms comprising the gas. Therefore, the description of the system was vastly over-simplified by taking certain statistical averages of these variables, thereby reducing the number of independent variables to a tractable number. It is, of course, impossible to give an absolutely precise deterministic description of the time evolution of the gas in terms of these new and fewer variables, but the average behaviour of the new variables can be predicted, and this is sufficient for most practical purposes. Bohr denied that there could exist hidden variables which would ultimately replace the probabilistic description of the world by quantum theory with a deterministic description. He based his position upon the essential role played by the observer in quantum physics.
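The nineteenth-century picture of statistical averaging described above can be made concrete in a toy calculation (our illustration, not a model from the text): a deterministic description of N atoms needs 6N numbers, while the statistical description keeps only a handful of averages whose behaviour is nonetheless predictable.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000        # atoms in our toy gas (a real cubic centimetre holds ~10^23)

# Deterministic description: 6 variables per atom (3 position, 3 velocity).
positions = rng.uniform(0.0, 1.0, size=(N, 3))
velocities = rng.normal(0.0, 1.0, size=(N, 3))   # unit thermal speed, arbitrary units
n_microscopic_variables = positions.size + velocities.size
assert n_microscopic_variables == 6 * N

# Statistical description: replace the 6N numbers by a few averages.
mean_velocity = velocities.mean(axis=0)          # close to zero for a gas at rest
mean_kinetic_energy = 0.5 * (velocities**2).sum(axis=1).mean()
print(n_microscopic_variables, mean_kinetic_energy)
```

With unit-variance velocity components the mean kinetic energy settles near 3/2, even though no individual atom's trajectory has been tracked; this is the sense in which the averaged variables are predictable while the full 6N-variable description is intractable.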
The observer and the world were so inextricably connected that 'an independent reality in the ordinary physical sense can neither be ascribed to the phenomena nor to the agencies of observations'.9 In other words, many physical properties of atomic particles did not even exist before the act of observation; the act of observation was necessary to bring these properties into existence. Bohr argued that ascribing simultaneous independent physical reality to all properties which an electron could possess would contradict the formalism of quantum theory. For, from this formalism one could derive the Heisenberg uncertainty relations, which for the position of the electron in the x-direction and its momentum in the x-direction can be written

Δx Δp_x ≥ ħ/2   (7.1)

where ħ is Planck's constant divided by 2π, and Δx and Δp_x are to be interpreted according to the Copenhagen view as the uncertainty in the measurement of the position of the electron in the x-direction, and the uncertainty in the measurement of its momentum component in this direction, respectively. By applying this relation and the other uncertainty relations to a number of idealized experiments, Bohr showed they implied that a precise measurement of the position would so affect the electron as to make the momentum unknown. On the other hand, had we chosen to measure the momentum of the electron precisely, the experimental arrangement necessary to make this measurement would, through its interaction with the electron, make the electron's position completely unknown. Adopting the empiricist principle that what cannot be measured, even in principle, cannot be said to exist, Bohr therefore denied reality to the notions of electron position and electron momentum prior to their measurement. The electron's position and momentum would be determined by the particular experimental arrangement which the observer chose to interact with it, and quantum mechanics shows no experimental apparatus can be constructed which would determine both properties absolutely precisely in a single measurement. Thus after any measurement, the electron's position and momentum must be partially undetermined; these properties are 'real' only within the limits allowed by the uncertainty relations and the experimental apparatus chosen by the observer to measure them. To a realist like Einstein, who held that a physical reality existed independently of Man the Observer, Bohr's view was anathema. Over a period of a decade Einstein tried repeatedly to contrive an idealized experiment in which precise measurement of a system's complementary properties—those orthogonal properties of a system which, like the position and momentum of an electron, have complementary uncertainties according to Bohr's interpretation of the uncertainty relations—could be made simultaneously. He failed in this endeavour, and was forced to admit that the uncertainty relations did, as Bohr claimed, restrict the precise simultaneous measurement of complementary properties.
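Relation (7.1) can be checked numerically. The sketch below is our illustration (with ħ set to 1): it samples a Gaussian wave packet, obtains the momentum-space amplitude with a discrete Fourier transform, and confirms that the product of the two spreads sits at the bound ħ/2, which a Gaussian saturates.

```python
import numpy as np

hbar = 1.0
N, L = 4096, 80.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

# Gaussian wave packet of width sigma, centred at the origin.
sigma = 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

def spread(values, weights):
    """Root-mean-square deviation of `values` under the weights |amplitude|^2."""
    w = weights / weights.sum()
    mean = np.sum(w * values)
    return np.sqrt(np.sum(w * (values - mean) ** 2))

dx_spread = spread(x, np.abs(psi) ** 2)          # position uncertainty Δx

# Momentum-space amplitude via FFT; |FFT|^2 gives the momentum distribution.
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)
dp_spread = spread(p, np.abs(phi) ** 2)          # momentum uncertainty Δp_x

product = dx_spread * dp_spread
print(product)    # close to 0.5 = ħ/2, the Heisenberg bound
assert product >= hbar / 2 - 1e-6
```

For any non-Gaussian packet the same computation gives a product strictly greater than ħ/2; the inequality (7.1) is a property of the formalism itself, before any apparatus is mentioned.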
Nevertheless, he continued to feel that these properties possessed independent simultaneous reality even if they could not be simultaneously measured. To justify this point of view, he and two of his colleagues, Nathan Rosen and Boris Podolsky, proposed what has become known as the Einstein-Podolsky-Rosen (EPR) experiment. This experiment was presented in a paper entitled 'Can Quantum-Mechanical Description of Physical Reality be Considered Complete?'10 The paper began with the authors' definition of 'physical reality':

If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. 10

The EPR experiment is an experimental arrangement to measure the complementary variables of a physical system. In their original paper, Einstein, Podolsky and Rosen chose position and momentum, but following Bohm11 most modern discussions of the experiment have used the spin of an electron as measured in a given direction. The z-component, S_z, of the spin—that is, the value of the component of spin of the electron in the direction of the z-coordinate axis—can be shown to be complementary to the component of the spin in a direction perpendicular to the z-axis. Quantum theory tells us that a component of the electron spin can take on only one of two values: S_z can only be ±ħ/2. Similarly, S_x, which is the component of the spin in the x-direction, can only take on the values ±ħ/2. However, since the variables S_z and S_x are complementary, if it is known that S_z is equal to ħ/2, then the value of S_x is completely undefined by quantum theory; in Bohr's view S_x has no value in this situation. It takes on a value if it is measured, but if it is measured, the very process of measurement simultaneously destroys the reality of the value of S_z. The EPR experiment considers a system of two electrons, coupled so that the total spin of the system is zero; if S_z of the first electron is +ħ/2, then S_z of the second electron is -ħ/2, and similarly for a measurement of S_x. Now it can be shown11 that the uncertainty relations allow an absolutely precise simultaneous measurement of the total spin S_T of the two-electron system and either S_z or S_x. That is, the pair of variables (S_T, S_z) or the pair (S_T, S_x) can be measured. Suppose the two-electron system is constructed so that S_T = 0 and the two electrons are moving apart very rapidly. Then after a very long time, S_T will still be zero and the electrons will be far apart—one light year, say. After the electrons have become widely separated, we perform a measurement of S_z on electron #1, with the result S_z = +ħ/2, say. Then since S_T = 0, it follows that S_z of electron #2 must equal -ħ/2. So, we know with certainty the value of S_z for electron #2 even though we have performed no measurement on electron #2. Thus by the EPR definition of physical reality, S_z of electron #2 has physical reality. On the other hand, we could have decided to measure S_x at electron #1. If S_x of electron #1 were ħ/2, then (as before) the value of S_x of electron #2 must be -ħ/2, and so according to the EPR definition, the value of S_x of electron #2 must have physical reality.

According to the EPR definition, both S_z and S_x of electron #2 must possess an element of physical reality independent of any observation, since all observations were performed on electron #1, not electron #2. The electrons were a light year apart when the measurement on electron #1 was performed, so because the speed of light is finite, there is no question of a measurement on electron #1 affecting the state of electron #2. Einstein contended that since the observation affects electron #1 and not electron #2, it is impossible for the measurement on electron #1 to bring into existence the properties of electron #2, as Bohr's Copenhagen Interpretation would claim.
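The perfect anticorrelation that drives the EPR argument can be computed directly from the two-electron singlet state. This is our own numerical sketch, not from the text; outcomes ±ħ/2 are represented by projectors onto the up and down basis states.

```python
import numpy as np

# Single-electron basis states for S_z = +ħ/2 ('up') and S_z = -ħ/2 ('down').
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Singlet state (total spin S_T = 0): (|up,down> - |down,up>)/sqrt(2).
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

P_up = np.outer(up, up)        # projector onto S_z = +ħ/2
P_down = np.outer(down, down)  # projector onto S_z = -ħ/2

def joint_prob(proj1, proj2):
    """Probability of the pair of S_z outcomes on electrons #1 and #2."""
    P = np.kron(proj1, proj2)
    return float(singlet @ P @ singlet)

# Same-sign outcomes never occur; opposite signs each occur half the time,
# so an S_z result on electron #1 fixes S_z of electron #2 with certainty.
assert abs(joint_prob(P_up, P_up)) < 1e-12
assert abs(joint_prob(P_up, P_down) - 0.5) < 1e-12
```

The same calculation with x-axis projectors gives identical numbers, which is the rotational symmetry of the singlet state that lets EPR run the argument for S_x as well as S_z.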


The EPR experiment highlights the 'contrary-to-common-sense' nature of the Copenhagen Interpretation. The idea that the act of observation must have a non-negligible effect on the object being observed is certainly plausible, and this happens all the time in the social sciences. If the government announces that its economists have found the inflation rate has changed drastically, then people change their buying and saving habits accordingly. However, the EPR experiment shows the interaction of the observer with the observed in quantum mechanics can have non-local effects. If we grant, following Bohr, that the observation of electron #1 brings into existence some property of this electron—say, the z- or x-component of the spin—then this observation brings into existence the same property of electron #2 which is a light year away. Furthermore, this property of electron #2 is brought into existence at the instant the measurement is performed on electron #1, even though no information about the measurement, no forces and no influence of any kind can reach electron #2 for at least a year. There appears to be instantaneous action at a distance. This non-local effect of the measurement process in quantum mechanics makes it possible to test for a certain class of hidden variables, those hidden variables which act locally like any of the known forms of physical interaction. The physicist J. S. Bell showed12,13 that if the spin of the electrons in the EPR experiment were indeed controlled by local hidden variables, then the determination of the spin of electron #2 by a measurement on electron #1 could not take place instantaneously as it does in quantum mechanics. Thus by performing the EPR experiment one could test for the existence of local hidden variables.
In the past few years the EPR experiment has actually been performed by a number of groups, with the result that the predictions of quantum theory are confirmed and the existence of local hidden variables ruled out13 (at least, those local hidden variable theories in which the measuring process is assumed not to affect the distribution of the hidden variables4 are ruled out). This is generally (see, however, refs 14 and 15) regarded as confirmation of the Copenhagen Interpretation, in which the act of observation is responsible for bringing properties of physical systems into existence. As John Clauser and Abner Shimony graphically put it:

Physical systems cannot be said to have definite properties independent of our observations; perhaps an unheard tree falling in the forest makes no sound after all. 13
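Bell's result can be made quantitative with the CHSH form of his inequality: any local hidden-variable account must satisfy |S| ≤ 2 for the combination of spin correlations below, while the singlet state reaches 2√2. The sketch is our illustration (ħ-independent, since only the signs of the outcomes enter); the measurement angles are the standard optimal choice, not values taken from the text.

```python
import numpy as np

# Pauli matrices for the two spin components in the x-z plane.
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def spin(theta):
    """Spin observable along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state of the two electrons.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def E(a, b):
    """Correlation of spin results along angles a and b: equals -cos(a - b)."""
    return float(singlet @ np.kron(spin(a), spin(b)) @ singlet)

# CHSH combination with the angles that maximize the quantum violation.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))          # 2*sqrt(2) ≈ 2.83, above the local bound of 2
assert abs(S) > 2.0
```

It is this excess over 2, measured in the experiments cited above, that rules out the local hidden-variable theories.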

Bohr's response to the EPR experimental proposal was to emphasize even more strongly the essential role of the observer in the measurement of a quantum system.16,17 He denied the validity of the EPR criterion of reality. In his view, it was meaningless to say that a property of a quantum mechanical system existed without referring to the observer, or more precisely, to the observer's experimental arrangement which measured this property:

As a more appropriate way of expression I advocated the application of the word phenomenon exclusively to refer to the observations obtained under specified circumstances, including an account of the whole experimental arrangement [ref. 18, p. 238]. As regards the specification of the conditions for any well-defined application of the [quantum mechanical] formalism, it is moreover essential that the whole experimental arrangement be taken into account [ref. 18, p. 222]. As repeatedly stressed, the principal point here is that such measurements [of S_z or S_x] demand mutually exclusive experimental arrangements [that is, an apparatus which could measure S_z could not measure S_x, and vice versa] [ref. 18, p. 233]. Of course, there is in [the case of the EPR experiment] no question of a mechanical disturbance of the system under investigation during the last critical stage of the measuring procedure. But even at this stage there is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behaviour of the system. Since these conditions constitute an inherent element of the description of any phenomenon to which the term 'physical reality' can be properly attached, we see that the argumentation of [EPR] . . . does not justify their conclusion that the quantum-mechanical description is essentially incomplete.18

In other words, physical reality does not exist independently of the observer and his experimental apparatus. Even though there is no direct interaction between electrons # 1 and # 2 during the measurement, they are bound together by the observer's decision to obtain information about electron # 2 by measuring a property of electron # 1 . The Copenhagen Interpretation of quantum mechanics was first given a rigorous, axiomatic formulation by the mathematician John von Neumann in 1932. 19 Von Neumann's axioms represent a quantum state by a wave function which can change with time in one of two ways: first, it can evolve continuously as a solution to the Schrodinger equation; or second, it can undergo a discontinuous change as a result of a measurement. In the latter case, after a measurement the quantum state will be an eigenstate of the variable which is measured by the experimental apparatus. Since it is the observer who ultimately defines which experimental apparatus is employed, in effect the necessary presence of the observer in quantum physics is recognized by an explicit axiom. Von Neumann regarded the two processes of time evolution as mutually irreducible. He did, however, point out that there was no hard and fast dividing-line between the two. W e might choose to say that the second process, the collapse of the wave function, occurs somewhere in the experimental apparatus itself, or we might want to say that the apparatus is part of the quantum system and that the collapse of the wave function occurs in the consciousness of the human observer. The last possibility was favoured by

465

Quantum Mechanics and the Anthropic Principle

London and Bauer, 20 who published a simplified discussion of the von Neumann theory of measurement, which made this theory widely known to physicists.4 This lack of a sharp dividing-line between the two types of basic quantum processes was felt to be very unsatisfactory by a number of physicists. Schrodinger proposed a famous experiment, canonized as 'Schrodinger's Cat Paradox' to illustrate the difficulties:

A cat is penned up in a steel chamber, along with the following diabolical device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of one hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. The ψ-function of the entire system would express this by having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts.21

The situation after one hour is pictured in Figure 7.1. After one hour the wave function is a superposition of two states:

ψ = (1/√2)(ψ_dead + ψ_alive)   (7.2)

where ψ_dead is the quantum state of the cat being dead, and ψ_alive is the quantum state of the cat being alive. If the cat were dead it would be in

Figure 7.1. If a radioactive atom decays within an hour, a hammer shatters a flask of hydrocyanic acid and the cat dies. If no atom decays, the flask is not shattered and the cat lives. After one hour the cat's wave function is a superposition of two states, given by equation (7.2); the cat is both alive and dead.

the state

ψ = ψ_dead   (7.3)

while if the cat were alive it would be in the state

ψ = ψ_alive   (7.4)

Both state (7.3) and state (7.4) are quite different from the superposed state (7.2) which the cat quantum system must be in before the measurement is made according to quantum mechanics. During the measurement, wave function (7.2) of the cat quantum system must collapse into either wave function (7.3) or wave function (7.4). The question is, just where does this collapse occur? If we follow London and Bauer and say the collapse occurs when a human observer actually observes the system, then this means the cat is neither dead nor alive, but rather is a superposition (7.2) of both states, until the human observer opens the steel chamber. This seems absolutely contrary to common sense. Should then the cat be regarded as the observer who collapses the wave function? Most working physicists would probably take this view.22,4 On the other hand, perhaps the Geiger counter tube, the device which irreversibly amplifies the atomic decay to macroscopic dimensions, should be regarded as the true 'observer'. This is the view defended by Wheeler,23,24 and it has some experimental support,24,25 if it is granted that the wave function is collapsed during the measurement process by some agency. As shown by Arthur Fine from unpublished papers of Einstein,26 the objection to quantum mechanics in its Copenhagen interpretation which Einstein was trying to express in the EPR experiment was actually the same problem that led Born to introduce the probabilistic interpretation of the wave function in the first place: collapse of the wave function during a measurement is inconsistent with the principle of separation—that information cannot be sent faster than light. Thus the reality of a particle property cannot depend on the result of a measurement made on another particle far away from it. Einstein's own simplified version of the EPR experiment is strikingly similar to the Schrödinger's Cat experiment.
This simplified version is as follows: Suppose a ball is in one of two closed boxes, with equal probability; if we know that there is exactly one ball in the system, then we can determine whether the ball is in box #2 by simply looking in box #1. If the boxes are sufficiently far apart, then according to the principle of separation the ball really was (or was not) in box #2 before the observer looked in box #1. However, according to quantum mechanics the ball is, so to speak, half in one box and half in the other—just as Schrödinger's Cat is a mixture of dead and alive states before the chamber is opened—and suddenly 'materializes' in one or the other box at the instant of measurement—the instant the wave function is collapsed by opening the first box.27

This ambiguity of just where the wave function collapses leads to further difficulties. In the case of Schrödinger's Cat, we have seen how it is unclear who should be called the observer: is it the Geiger counter, the cat, or the human observer? Why should even the human observer be regarded as responsible for the wave function collapse? Indeed, if one analyses the measurement process according to the laws of quantum mechanics without the axiom of wave function collapse, one finds that the state of the cat-human system is

ψ_cat-human system = (1/√2)(ψ_dead × ψ_human sees cat dead + ψ_alive × ψ_human sees cat alive)

M|↑⟩|n⟩ = |↑⟩|u⟩   (7.12a)
M|↓⟩|n⟩ = |↓⟩|d⟩   (7.12b)

Thus if the system is in an eigenstate of the system variable to be measured by the apparatus, a von Neumann measurement does not disturb the system. Ever since Heisenberg used his gamma-ray microscope thought-experiment to demonstrate the Uncertainty Principle for the position and momentum of an electron, many have believed that a measurement on a system necessarily disturbs the system, and that this disturbance is the cause of the Uncertainty Principle. This is not true. The operator defined in (7.12) does not disturb the system (provided the system happens to be in an eigenstate of the component of spin measured by the apparatus). For any variable, measurement operators can be defined which have the effect of recording the state of the system in the memory of the measuring apparatus without disturbing the system. In our simple two-spin-state electron example, the Stern-Gerlach apparatus can be regarded as a physical realization of such a von Neumann measuring apparatus, provided the vertical component of momentum of the atom is considered to be the memory trace of the apparatus, and spin precession is ignored. (See ref. 43 for a fuller discussion.) The effect of a von Neumann measurement operator M acting on any state (7.10), with |ψ) given by (7.9) and |Φ) = |n), is then

M |Cosmos(before)) = M(a |↑) + b |↓)) |n)
= M(a |↑)|n)) + M(b |↓)|n))
= a |↑)|u) + b |↓)|d)   (7.13)
= |Cosmos(after))

We can assume that {|↑)|n), |↓)|n)} span the initial state space, for we shall assume the apparatus is always initially in the neutral position. The fundamental problem in the quantum theory of measurement is deciding what the linear superposition of universe states in the third line of equation (7.13) means. The advocates of the Many-Worlds Interpretation decide this question by arguing as follows. It is obvious that each element in the two cases (7.12) corresponds to a real physical state of

Quantum Mechanics and the Anthropic Principle

475

some actual entity either associated with the system or the apparatus. If we grant that the state (7.9) also corresponds to an actual physical state—and we can justify this by reference either to innumerable experiments or to the superposition principle of quantum mechanics—and we grant that quantum evolution of everything in existence occurs via linear operators, then we are led necessarily to the conclusion that each term in (7.13) corresponds to an actual physical state. We are forced to say that the universe 'splits' into two 'worlds'. In the first world, represented by the first term in (7.13), the electron has spin up, and its spin is measured to be spin up. In the second world, represented by the second term in (7.13), the electron has spin down, and its spin is measured to be spin down. Another way to express this is to say that all a quantum measurement does, or indeed can do, is establish a unique correlation between states of the system being measured and states of the measuring apparatus. In the above discussion, we qualified the statement that the operator (7.12) did not disturb the system with the proviso that the system be in a certain eigenstate. If the system is not in an eigenstate—as it is not in (7.13)—then the operator does affect the system. What the operator (7.12) does when the system is in a general state is establish correlations between the apparatus basis states and those system basis states which are selected by the choice of apparatus basis states. The existence of these correlations can be detected if the {system} + {apparatus} is measured by a second apparatus. 
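The linearity at the heart of this argument is easy to exhibit concretely. The following sketch is our illustration, not the book's: the 6-dimensional matrix M and the state names are our own encoding of (7.12), completed to a unitary. It checks that an eigenstate is recorded without being disturbed, while a superposition is turned into the entangled sum (7.13) rather than being collapsed:

```python
import numpy as np

# System basis |up>, |down>; apparatus memory basis |n>, |u>, |d>.
up, down = np.eye(2)
n, u, d = np.eye(3)

def ket(s, m):
    """Product state |s>|m> of system and apparatus memory."""
    return np.kron(s, m)

# The von Neumann operator (7.12), completed to a permutation of the six
# product basis states so that M is unitary:
#   |up>|n> <-> |up>|u>,  |down>|n> <-> |down>|d>,  others fixed.
M = sum(np.outer(ket(*img), ket(*src)) for src, img in [
    ((up, n), (up, u)), ((up, u), (up, n)), ((up, d), (up, d)),
    ((down, n), (down, d)), ((down, d), (down, n)), ((down, u), (down, u)),
])

# (7.12a): an eigenstate is recorded, but the system factor is undisturbed.
recorded = M @ ket(up, n)

# (7.13): a superposition is not collapsed; linearity gives the entangled sum.
a, b = 0.6, 0.8
after = M @ ket(a * up + b * down, n)
target = a * ket(up, u) + b * ket(down, d)
```

The only correlation the measurement establishes is between system and memory states; the norm of the state is preserved and no term is discarded.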
For example, a short calculation shows that a second apparatus, with basis states corresponding to a measurement of spin in the horizontal rather than the vertical direction, would obtain one result if it measured the system before the system interacts with the first apparatus, and a different result if it measured the system after the system has been measured by the first apparatus. Needless to say, the practical importance of these correlations will depend on the size of the system and the measuring apparatus relative to Planck's constant, and in the situation where the system and the apparatus are both macroscopic objects (which is the case when humans make a measurement on the Universe), the correlations can be effectively ignored. There is a misconception in popular accounts of the MWI which must be cleared up before the MWI can be applied to cosmology. The misconception arises because the word 'universe' is used in one sense in technical discussions about the MWI, and in another sense in non-technical discussions. We have said in our interpretation of (7.13), which is the state of the universe after the measurement, that the universe is split by the measurement. This is the standard terminology in the technical literature, but it is important to note that this split is to be associated more


with the measuring apparatus than with the system being measured. In the case of a von Neumann measurement, the system is not affected (again, with the exception of the correlations) by the measurement, so it is completely misleading to describe the system as splitting as a result of the measurement. On the other hand, as is obvious from (7.12), the measuring apparatus undergoes a tremendous change: it goes from |n) to either |u) or |d) (or both). Of course, in measurements which are not of the von Neumann type, the system variables and not just the system/apparatus correlations will be changed by the measurement, but for macroscopic systems the change in the system variables is very small; measurements of such systems can be regarded as essentially von Neumann measurements. In particular, a measurement of the radius of the Universe can be considered a von Neumann measurement, and it would thus be more appropriate to regard the recording apparatus rather than the Universe as splitting, although the 'universe' in the technical sense defined above does split. The 'universe' in the technical sense includes just the system and the measuring apparatus, whereas the Universe in the non-technical sense includes these two entities, plus everything else in existence. We have made a distinction between the two uses of the word 'universe' by capitalizing the word when it refers to the totality of everything in existence, and leaving it uncapitalized when it refers to just the system and the apparatus: i.e., to everything being considered in the analysis of the measurement. The other things in the Universe, those things which are not considered in the analysis of the measurement—the planets, stars, and galaxies—are coupled only very weakly to the measuring apparatus. Thus these other items do not split when the apparatus does.
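A hedged numerical sketch of this point (the matrices and the two-dimensional stand-in for 'everything else' are our own illustration): extending the measurement operator trivially to the rest of the Universe, as M ⊗ I, one can check that the non-interacting remainder never becomes entangled with the split apparatus:

```python
import numpy as np

up, down = np.eye(2)
n, u, d = np.eye(3)
ket = lambda s, m: np.kron(s, m)

# Von Neumann measurement on {system} x {memory}, as a basis permutation
# (|up>|n> <-> |up>|u>, |down>|n> <-> |down>|d>, others fixed).
M = sum(np.outer(ket(*img), ket(*src)) for src, img in [
    ((up, n), (up, u)), ((up, u), (up, n)), ((up, d), (up, d)),
    ((down, n), (down, d)), ((down, d), (down, n)), ((down, u), (down, u)),
])

# 'Everything else' is a stand-in 2-dimensional factor the measurement
# never touches: the full operator is M x I on the 12-dimensional space.
env = np.array([0.8, 0.6])
M_full = np.kron(M, np.eye(2))

a, b = 0.6, 0.8
before = np.kron(ket(a * up + b * down, n), env)
after = M_full @ before

# after = (a|up>|u> + b|down>|d>) x |env>: reshaping across the environment
# cut gives a rank-one matrix, i.e. no entanglement with 'everything else'.
env_cut_rank = np.linalg.matrix_rank(after.reshape(6, 2))
```

The apparatus factor splits into two correlated branches, while the environment factor simply comes along unchanged.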
Looking at the split from this point of view obviates one of the major objections to the MWI, which is that the MWI seems to require if not an actual infinity, then at least a large number of 'Universes' (in the popular sense) to explain a measurement of some microscopic phenomenon, and this is contrary to Ockham's Razor. In the explanation of the MWI given above, there is only one Universe, but small parts of it—measuring apparata—split into several pieces. They split—or more precisely, they undergo a drastic change—upon the act of measurement because they are designed to do so. If they were not capable of registering changes on a macroscopic level they would be quite useless as measuring devices. This fact plus the linearity of quantum mechanical operators requires them to split. Everett himself realized that it is more appropriate to think of the measuring apparatus rather than the Universe as splitting. In reply to a criticism by Einstein against quantum mechanics, to the effect that he [Einstein] '... could not believe ... a mouse could bring about drastic changes in the Universe simply by looking at it', Everett said, '... it is not
so much the system which is affected by an observation as the observer ... The mouse does not affect the Universe—only the mouse is affected'. We can see this formally by simply putting the non-interacting remainder of the universe into equation (7.13):

M |Universe(before)) = M(a |↑) + b |↓)) |n) |everything else)
= a |↑)|u) |everything else) + b |↓)|d) |everything else)   (7.14)
= (a |↑)|u) + b |↓)|d)) |everything else)

It is clear from (7.14) that 'everything else' does not split. A human being, or indeed any measuring apparatus, would be unaware of, or in the case of an inanimate apparatus, could not detect, those splits which they do undergo. To detect the split would entail introducing a second observing apparatus into the universe which is capable of recording in its memory both worlds |u) and |d) of the split first apparatus. In the case of a human being, the two apparata could in principle be two sections of the human memory, the second of which observes the first. It is impossible to construct such a second apparatus if it is reasonably required that this second apparatus definitely record the first apparatus to be in the state |u) if in fact it is, or in the state |d) if in fact it is. We may as well let the second apparatus perform a von Neumann measurement on the system simultaneously with measuring the first apparatus, as a check. We require only that the second apparatus record the system as being in the state |↑) if in fact it is in this state, and as being in the state |↓) if in fact it is in this state. The state of the second apparatus, |A₂), can thus be expanded in terms of basis states of the form |a₂, α₂), where a₂ records the value of the system variable and α₂ records the content of the first apparatus' memory. Both a₂ and α₂ can have the values n, u, or d. Before the interaction between the second apparatus and the rest of the universe, we shall require the second apparatus to be in the state |n, n).
The above restrictions on what the second apparatus must record uniquely define the second apparatus interaction operator M₂ acting on the basis states of the universe. We have

M₂ |↑)|u)|n, n) = |↑)|u)|u, u)   (7.15a)
M₂ |↓)|d)|n, n) = |↓)|d)|d, d)   (7.15b)
M₂ |↑)|n)|n, n) = |↑)|n)|u, n)   (7.15c)
M₂ |↓)|n)|n, n) = |↓)|n)|d, n)   (7.15d)

The last two entries in (7.15) are effective only if we were to interact the second apparatus with the rest of the universe before the first apparatus


has measured the state of the system. Before any measurements by any apparatus are performed, the state of the universe is

|Cosmos(before)) = |ψ) |n) |n, n)   (7.16)

A measurement of the state of the system by the first apparatus, followed by measurements of the state of the system and of the state of the first apparatus by the second apparatus, is thus represented as:

M₂M₁ |Cosmos(before)) = M₂M₁(a |↑) + b |↓)) |n) |n, n)
= M₂(a |↑)|u)|n, n) + b |↓)|d)|n, n))   (7.17a)
= a |↑)|u)|u, u) + b |↓)|d)|d, d)   (7.17b)
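The two-apparatus chain (7.16)-(7.17) can be sketched numerically. The encoding below is our own illustration: basis states |s)|m₁)|a₂, α₂) of the 2 × 3 × 3 × 3 space are indexed by integers, both operators are built as basis permutations (a unitary completion we have chosen), and the result (7.17b) is checked directly:

```python
import numpy as np

# Basis index for |s>|m1>|a2, al2>: s in {0:up, 1:down}; memory symbols
# m1, a2, al2 in {0:n, 1:u, 2:d}.
DIM = 2 * 3 * 3 * 3
def idx(s, m1, a2, al2):
    return ((s * 3 + m1) * 3 + a2) * 3 + al2

rec = {0: 1, 1: 2}      # record of the system state: up -> u, down -> d

def perm_matrix(swaps):
    """Unitary built by swapping the listed (disjoint) pairs of basis states."""
    p = np.arange(DIM)
    for i, j in swaps:
        p[i], p[j] = p[j], p[i]
    P = np.zeros((DIM, DIM))
    P[p, np.arange(DIM)] = 1.0
    return P

# First apparatus, as in (7.12): |s>|n> -> |s>|rec(s)>, second one untouched.
M1 = perm_matrix([(idx(s, 0, a2, al2), idx(s, rec[s], a2, al2))
                  for s in (0, 1) for a2 in range(3) for al2 in range(3)])

# Second apparatus, as in (7.15): from |n, n> it records both the system
# and the first apparatus' memory, |s>|m1>|n, n> -> |s>|m1>|rec(s), m1>.
M2 = perm_matrix([(idx(s, m1, 0, 0), idx(s, m1, rec[s], m1))
                  for s in (0, 1) for m1 in range(3)])

# (7.16): |Cosmos(before)> = (a|up> + b|down>)|n>|n, n>.
a, b = 0.6, 0.8
before = np.zeros(DIM)
before[idx(0, 0, 0, 0)], before[idx(1, 0, 0, 0)] = a, b

# (7.17b): a|up>|u>|u, u> + b|down>|d>|d, d>.
after = M2 @ (M1 @ before)
expected = np.zeros(DIM)
expected[idx(0, 1, 1, 1)], expected[idx(1, 2, 2, 2)] = a, b
```

In every surviving branch the second apparatus' records merely track the first apparatus and the system; no branch contains a record of the superposition itself.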


It is clear from (7.17) that the first apparatus is the apparatus responsible for the splitting of the universe. More precisely, it is the first apparatus that is responsible for splitting itself and the second apparatus. The second apparatus splits, but the split just follows the original split of the first apparatus, as is apparent in (7.17b). As a consequence, the second apparatus does not detect the splitting of the first apparatus. Again, the impossibility of split detection is a consequence of two assumptions: first, the linearity of the quantum operators M₁ and M₂; second, the requirement that M₂ measure the appropriate basis states of the system and the apparatus correctly. The second requirement is formalized by (7.15). Again, in words, this requirement says that if the system and first apparatus are in eigenstates, then the second apparatus had better record this fact correctly. It is possible, of course, to construct a machine which would not record correctly. However, it is essential for the sensory apparatus of a living organism to record appropriate eigenstates correctly if the organism is to survive. If there is definitely a tiger in place A (the tiger wave function is non-zero only in place A), then a human's senses had better record this correctly, or the results will be disastrous. Similarly for the tiger. But if the senses of both the tiger and the human correctly record approximate position eigenfunctions, then the linearity of quantum mechanical operators necessarily requires that if either of them is not in a position eigenstate, then an interaction between them will split them both into two worlds, in each of which they both act appropriately. Ultimately, it is natural selection that determines that the senses will record that an object is in an eigenstate if in fact it is. Natural selection even determines which eigenstates are the appropriate ones to measure; i.e., which measuring operators are to correspond to the senses.
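A toy illustration of operator selection (entirely our own construction): the branch structure produced by a von Neumann measurement depends on the eigenbasis the apparatus is built to record, not on the state alone. A spin-up electron does not split a z-measuring device at all, but splits an x-measuring device into two equal-weight worlds:

```python
import numpy as np

def branch_weights(psi, basis):
    """Squared amplitudes of the worlds |b_i>|record_i> produced by a von
    Neumann measurement of |psi> in the given orthonormal basis."""
    return [abs(np.vdot(b, psi)) ** 2 for b in basis]

up, down = np.eye(2)
z_basis = [up, down]
x_basis = [(up + down) / np.sqrt(2), (up - down) / np.sqrt(2)]

# The same state |up> seen by two differently designed devices:
z_worlds = branch_weights(up, z_basis)   # one world: no split
x_worlds = branch_weights(up, x_basis)   # two equal-weight worlds
```

Which decomposition is 'the' split is thus fixed by the apparatus design, which for sense organs is fixed in turn by natural selection.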
The laws of quantum mechanics cannot determine the appropriate operators; they are given. A different measuring operator will split the observed object into different worlds. But the WAP selection of operators will ensure that the


class of eigenfunctions we can measure, and hence the measuring operators, will be appropriate. The self-selection of measuring operators is the most important role WAP plays in quantum mechanics. Our ultimate goal is to develop a formalism which will tell us what we will actually observe when we measure an observable of a system while the system state is changing with time. One lesson from the above analysis of quantum mechanics from the Many-Worlds point of view is that to measure anything it is necessary to set up an apparatus which will record the result of that measurement. To have the possibility of observing a change of some observable with time requires an apparatus which can record the results of measuring that observable at sequential times. To make n sequential measurements requires an apparatus with n sequential memory slots in its state representation. At first we will just consider the simple system (7.9) that we have analysed before, so the time evolution measurement apparatus has the state |E), which can be written as a linear superposition of basis states of the form

|a₁, a₂, ..., aₙ)   (7.18)

where each entry aᵢ can have the value n, u, or d, as before. The jth measurement of the system state is represented by the operator Mⱼ, defined by

Mⱼ |↑) |a₁, a₂, ..., aⱼ, ..., aₙ) = |↑) |a₁, a₂, ..., u, ..., aₙ)   (7.19a)
Mⱼ |↓) |a₁, a₂, ..., aⱼ, ..., aₙ) = |↓) |a₁, a₂, ..., d, ..., aₙ)   (7.19b)

As before, the initial state of the apparatus will be assumed to be |n, n, ..., n). The measurement is a von Neumann measurement. Time evolution will be generated by a time evolution operator T(t). It is a crucial assumption that T(t) act only on the system, and not have any effect on the apparatus that will measure the time evolution. In other words, we shall assume the basis states (7.18) are not affected by the operator T(t). This is a standard and indeed an essential requirement imposed on instruments that measure changes in time.
If the record of the values of some observable changed on timescales comparable with the rate of change of the observable, it would be impossible to disentangle the change of the observable from the change of the record of that change. When we measure the motion of a planet, we record its positions from day to day, assuming (with justification!) that our records of its position at various times are not changing. If we write the apparatus state as |Φ), the effect of a general time evolution operator T(t) on the basis states of the system can be written as

T(t) |↑)|Φ) = (a₁₁(t) |↑) + a₁₂(t) |↓)) |Φ)   (7.20a)
T(t) |↓)|Φ) = (a₂₁(t) |↑) + a₂₂(t) |↓)) |Φ)   (7.20b)


Unitarity of T(t) imposes some restrictions on the aᵢⱼ's, but we do not have to worry about these. Interpreting the result of a measurement on the system in an initially arbitrary state after an arbitrary amount of time has passed would require knowing how to interpret the aᵢⱼ's, and as yet we have not outlined the meaning of these in the MWI. So let us for the moment analyse a very simplified type of time evolution. Suppose that we measure the state of the system every unit amount of time; that is, at t = 1, 2, 3, ..., etc. Since time operators satisfy T(t)T(t') = T(t + t'), the evolution of the system from t = 0 to t = n is given by [T(1)]ⁿ. Again for simplicity, we shall assume a₁₁(1) = a₂₂(1) = 0, a₁₂(1) = a₂₁(1) = 1. This choice will give a unitary T(t). We have

T(1) |↑)|Φ) = |↓)|Φ)   (7.21a)
T(1) |↓)|Φ) = |↑)|Φ)   (7.21b)

All that happens is that if the electron spin happens to be in an eigenstate, that spin is flipped from one unit of time to the next, with [T(1)]² = I, the identity operator. After every unit of time we shall measure the state of the system. The time evolution and measurement processes together will be represented by a multiplicative sequence of operators acting on the universe as follows:

MₙT(1)Mₙ₋₁T(1) ⋯ M₂T(1)M₁ |ψ) |n, n, ..., n)   (7.22a)
= MₙT(1)Mₙ₋₁T(1) ⋯ M₂T(1)[M₁(a |↑) + b |↓))] |n, n, ..., n)   (7.22b)
= MₙT(1)Mₙ₋₁T(1) ⋯ M₂T(1)(a |↑) |u, n, ..., n) + b |↓) |d, n, ..., n))   (7.22c)
= MₙT(1)Mₙ₋₁T(1) ⋯ M₂(a |↓) |u, n, ..., n) + b |↑) |d, n, ..., n))   (7.22d)
= MₙT(1)Mₙ₋₁T(1) ⋯ M₃T(1)(a |↓) |u, d, n, ...) + b |↑) |d, u, n, ...))   (7.22e)

and so on. The particularly interesting steps in the above algebra are (7.22c) and (7.22e). The first measurement of the state of the system splits the universe (or more precisely, the apparatus) into two worlds. In each world, the evolution proceeds as if the other world did not exist.
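The bookkeeping in (7.22) can be followed with a small symbolic sketch (our illustration; worlds are tracked as (amplitude, spin, memory-string) triples). After the first measurement there are exactly two worlds, and each subsequent flip-and-record step only extends the records, with no further splitting, since each world's spin is then in an eigenstate:

```python
# Each world is (amplitude, spin, memory string), as in the chain (7.22).
def step(worlds):
    """One unit of time: T(1) flips the spin, then M_j records it."""
    out = []
    for amp, spin, memory in worlds:
        spin = 'down' if spin == 'up' else 'up'   # T(1): |up> <-> |down>
        out.append((amp, spin, memory + ('u' if spin == 'up' else 'd')))
    return out

# M_1 splits a|up> + b|down> into two worlds, as in (7.22c); thereafter
# each world evolves as if the other did not exist.
a, b = 0.6, 0.8
worlds = [(a, 'up', 'u'), (b, 'down', 'd')]
for _ in range(4):
    worlds = step(worlds)
```

The two memory strings come out exactly complementary, u-d-u-d-u against d-u-d-u-d, and the amplitudes a, b are simply carried along unchanged.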
The first measurement, M₁, splits the apparatus into the world in which the spin is initially up and the world in which the spin is initially down. Thereafter each world evolves as if the spin of the entire system were initially up or down respectively. If we were to choose a = b, then T(1) |ψ)|Φ) = |ψ)|Φ); so the state of


the system in the absence of a measurement would not change with time. It would be a stationary state. If the system were macroscopic—for instance, if it were the Universe—then even after the measurement the Universe would be almost stationary; the very small change in the state of a macroscopic system can be ignored. Nevertheless, the worlds would change with time. An observer who was capable of distinguishing the basis states would see a considerable amount of time evolution even though the actual, total state of the macroscopic system would be essentially stationary. Whether or not time evolution will be observed depends more on the details of the interaction between the system and the observer trying to see if the change occurs in the system, than on what changes are actually occurring in the system. In order to interpret the constants a, b in (7.9), or the aᵢⱼ's in (7.20), it is necessary to use an apparatus which makes repeated measurements not merely on a single state of a system, but rather on an ensemble of identical systems. The initial ensemble state has the form:

|Cosmos(before)) = (|ψ))ᵐ |n, n, n, ..., n)   (7.23)

where there are m slots in the apparatus memory state |n, n, ..., n). The kth slot records the measured state of the kth system in (|ψ))ᵐ. The kth slot is changed by the measuring apparatus operator Mₖ, which acts as follows on the basis states of the kth |ψ):


Mₖ |ψ) ⋯ |↑) ⋯ |ψ) |n, ..., n, n, n, ..., n) = |ψ) ⋯ |↑) ⋯ |ψ) |n, ..., n, u, n, ..., n)   (7.24a)
Mₖ |ψ) ⋯ |↓) ⋯ |ψ) |n, ..., n, n, n, ..., n) = |ψ) ⋯ |↓) ⋯ |ψ) |n, ..., n, d, n, ..., n)   (7.24b)

(here |↑) or |↓) stands in the kth system position, and it is the kth memory slot that changes). The Mₖ operator affects only the kth slot of the apparatus memory. It has no other effect on either the system ensemble or the other memory slots. If we perform m state measurements on the ensemble (|ψ))ᵐ, an operation which would be carried out by the operator MₘMₘ₋₁ ⋯ M₂M₁, the result is

MₘMₘ₋₁ ⋯ M₂M₁ (|ψ))ᵐ |n, n, ..., n)
= MₘMₘ₋₁ ⋯ M₂ (|ψ))ᵐ⁻¹ (a |↑) |u, n, ..., n) + b |↓) |d, n, ..., n))
= MₘMₘ₋₁ ⋯ M₃ (|ψ))ᵐ⁻² (a² |↑)|↑) |u, u, n, ..., n) + ab |↑)|↓) |u, d, n, ..., n) + ba |↓)|↑) |d, u, n, ..., n) + b² |↓)|↓) |d, d, n, ..., n))
= ⋯ = Σ aʲ bᵐ⁻ʲ (|↑))ʲ (|↓))ᵐ⁻ʲ |s₁, s₂, ..., sₘ)   (7.24c)


where the sᵢ's represent either u or d, j is the number of u's in a given memory state, and the final sum is over all possible permutations of u's and d's in the memory basis state |s₁, s₂, ..., sₘ). All possible sequences of u's and d's are represented in the sum. The measurement operator Mₘ ⋯ M₁ splits the apparatus into 2ᵐ worlds. In this situation we have m systems rather than one, so each measurement splits the apparatus (or equivalently, the universe). Each measurement splits each previous world in two. In each world, we now calculate the relative frequency of the u's and d's. Hartle, Finkelstein, and Graham have shown that if a, b are defined by a = (ψ|↑) and b = (ψ|↓), then as m approaches infinity, the relative frequency of the u's approaches |a|²/(|a|² + |b|²), and the relative frequency of the d's approaches |b|²/(|a|² + |b|²), in the Hilbert space whose scalar product defines (ψ|↑) and (ψ|↓), except for a set of worlds of measure zero in the Hilbert space. It is only at this stage, where a and b are to be interpreted, that it is necessary to assume |ψ) is a vector in a Hilbert space. For the discussion of universe splitting, it is sufficient to regard |ψ) as a vector in a linear space, with |ψ) and c|ψ), for any complex constant c, being physically equivalent. If we impose the normalization condition |a|² + |b|² = 1, then |a|² and |b|² will be the usual probabilities of measuring the state |ψ) in the state |↑) or |↓), respectively. It is not essential to impose the normalization condition even to interpret a and b. For example, |a|²/(|a|² + |b|²) would represent the relative probability of |↑) as opposed to |↓) even if we expanded |ψ) to include other states, enough to make |ψ) itself non-normalizable. One key point should be noted: since there is only one Universe, represented by only one unique wave function, the ensemble necessary to measure |(α|ψ)|² cannot exist for any state |α). Thus, being unmeasurable, the quantities |(α|ψ)|² have no direct physical meaning.
We can at best assume |a|²/(|a|² + |b|²) measures relative probability. But there is still absolutely no reason to assume that |ψ) is normalizable. Even in laboratory experiments, where we can form a finite-sized ensemble of identically prepared states, it is not certain that |a|²/(|a|² + |b|²) will actually be the measured relative frequency of observing u. All we know from quantum theory is that as the ensemble size approaches infinity, the relative frequency approaches |a|²/(|a|² + |b|²) in all worlds except for a set of measure zero in the Hilbert space. There will always be worlds in which the square of the wave function is not the observed relative frequency, and the likelihood that we are in such a world is greater the smaller the ensemble. As is well known, we are apparently not in such a world, and the question is, why not? DeWitt suggests that perhaps a WAP selection effect is acting:


It should be stressed that no element of the superposition is, in the end, excluded. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down. The situation is similar to that which we face in ordinary statistical mechanics. If the initial conditions were right the universe-as-we-see-it could be a place in which heat sometimes flows from cold bodies to hot. We can perhaps argue that in those branches in which the universe makes a habit of misbehaving in this way, life fails to evolve, so no intelligent automata are around to be amazed by it.
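The measure-zero claim behind this argument can be illustrated with a small computation (our own, with illustrative numbers): by (7.24c), a world whose memory holds j u's out of m carries squared amplitude |a|^(2j)|b|^(2(m−j)), so the total weight of 'maverick' worlds, those whose recorded u-frequency differs from |a|² by more than a fixed amount, shrinks rapidly as the ensemble grows:

```python
from math import comb

def maverick_weight(m, p, eps=0.1):
    """Total squared amplitude of the worlds in (7.24c) whose u-frequency
    j/m differs from p = |a|^2 by more than eps (taking |a|^2 + |b|^2 = 1)."""
    return sum(comb(m, j) * p**j * (1 - p)**(m - j)
               for j in range(m + 1) if abs(j / m - p) > eps)

p = 0.36                                  # |a|^2 for a = 0.6, b = 0.8
w10, w100, w1000 = (maverick_weight(m, p) for m in (10, 100, 1000))
```

The maverick worlds never disappear; their total weight merely tends to zero, which is exactly why a selection argument, rather than an exclusion argument, is needed for the worlds where statistics fail.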

We will now consider wave-packet spreading from the Many-Worlds point of view. A simple system which will show the essential features has four degrees of freedom, labeled by the basis states |↑), |↓), |→), and |←). As before, we shall need a measuring apparatus to record the state of the system if we are to say anything about the state of the system. Since we are interested in measuring time evolution, say at m separate times (which will be assumed to be multiples of unit time, as before), we shall need an apparatus state with m slots: |n, n, ..., n), where the n denotes the initial 'no record' state. The kth measurement of the system state will be carried out by the operator Mₖ, which changes the kth slot from n to u, d, r, or l, depending on whether the state of the system is |↑), |↓), |→), or |←), respectively. The time evolution operator T(t) will not affect the apparatus state, and its effect on the system basis states is as follows:

T(1) |↑) = a(↑→) |→) + a(↑↓) |↓)   (7.25a)
T(1) |↓) = a(↓←) |←) + a(↓↑) |↑)   (7.25b)
T(1) |←) = a(←↑) |↑) + a(←→) |→)   (7.25c)
T(1) |→) = a(→↓) |↓) + a(→←) |←)   (7.25d)

The effect of the time evolution operator is easily visualized by regarding the arrow which labels the four basis states of the system as the hand of a clock. If the hand is initially at 12 o'clock (basis state |↑)), the operator T(1) carries the hand clockwise to 3 o'clock (basis state |→)), and to 6 o'clock (basis state |↓)). More generally, for any basis state, the operator T(1) carries the basis state (thought of as a clock hand at 12, 3, 6, or 9 o'clock) clockwise one quarter and one half of the way around the clock. We shall imagine that

|a(ij)| ≫ |a(ik)|   (7.26)

if j = i + 1 and k = i + 2, where i + n means carrying the arrow clockwise around n quarters from the ith clock-hand position. The condition (7.26) means roughly that 'most' of the wave packet initially at one definite clock position is carried to the immediately adjacent position in the clockwise direction, with a small amount of spreading into the position halfway around the clock. In addition to satisfying (7.26), the constants a(ij) must be chosen to preserve the unitarity of T(t). The measured time


evolution of the state |↑) through three time units is then

M₄T(1)M₃T(1)M₂T(1)M₁ |↑) |n, n, n, n)
= M₄T(1)M₃T(1)M₂T(1) |↑) |u, n, n, n)   (7.27a)
= M₄T(1)M₃T(1)M₂(a(↑→) |→) + a(↑↓) |↓)) |u, n, n, n)   (7.27b)
= M₄T(1)M₃T(1)(a(↑→) |→) |u, r, n, n) + a(↑↓) |↓) |u, d, n, n))   (7.27c)
= M₄T(1)M₃[a(↑→)(a(→↓) |↓) + a(→←) |←)) |u, r, n, n) + a(↑↓)(a(↓←) |←) + a(↓↑) |↑)) |u, d, n, n)]   (7.27d)
= M₄T(1)[a(↑→)a(→↓) |↓) |u, r, d, n) + a(↑→)a(→←) |←) |u, r, l, n) + a(↑↓)a(↓←) |←) |u, d, l, n) + a(↑↓)a(↓↑) |↑) |u, d, u, n)]   (7.27e)

Each further step splits the apparatus into more worlds, each labelled by a memory sequence such as |u, r, d, n). By (7.26), the largest amplitudes belong to those sequences in which the recorded arrow advances one quarter-turn per unit time, i.e., to the 'classical' motions; so even if


one did not observe a purely 'classical' evolution, the most likely evolution to see is one of those which are as close to 'classical' as possible. For all worlds—memory sequences—there is no overlap between the worlds, even though by the second time period the wave packets of the system have begun to overlap one another. This is a general property which is a consequence only of the linearity of the operators, the assumption that the time evolution does not affect the apparatus memory, and the assumption that the measurement is a von Neumann measurement. If we had evolved and measured the time evolution of a general system state |ψ), the results would have been, broadly speaking, the same. For example, if we had chosen |ψ) = a |↑) + b |→) + c |↓) + d |←),

S = ∫ [(dR/dτ)² − R² + C R^(4−3γ)] dτ   (7.35)

The Lagrangian will be quadratic only in the cases γ = 4/3 (radiation gas), γ = 1 (dust), and γ = 2/3 (unphysical, since it implies a negative pressure). For the radiation gas, varying with respect to the metric gives the Lagrange equation as that of the simple harmonic oscillator (SHO),

d²R/dτ² + R = 0   (7.36)


since the constant term in the Lagrangian can be omitted. The general solution to (7.36) is of course

R(τ) = R₀ sin(τ + δ)   (7.37)

The two integration constants in (7.37) can be evaluated in the following way. It is clear that all solutions (7.37) have zeros with the same period π. Since it is physically meaningless to continue a solution through a singularity, which occurs at every zero, all solutions exist only over a τ-interval of length π. Thus we can choose the phase δ so that for all solutions the zero of τ-time occurs at the beginning of the universe, at R = 0. This implies δ = 0 for all solutions, in which case the remaining constant R₀ is seen to be the radius of the universe at maximum expansion:

R(τ) = Rₘₐₓ sin τ   (7.38)

In the radiation gas case, all solutions are parameterized by a single number, Rₘₐₓ, the radius of the universe at maximum expansion. It is important to note that we have obtained the standard result (7.38) without having to refer to the Friedman constraint equation. Indeed, we obtained



the dynamical equation (7.36) by an unconstrained variation of the Lagrangian (7.35); we obtained the correct dynamical equation and the correct solution even though we ignored the constraint. The constraint equation contained no information that was not available in the dynamical equation obtained by unconstrained variation, except for the tacit assumption that ρ ≠ 0. From the point of view of the dynamical equation, the vacuum 'radiation gas' (that is, ρ = 0) is an acceptable 'solution'. For a true (ρ ≠ 0) radiation gas at least, ignoring the constraints is a legitimate procedure. It is well that this is so, for we have precluded any possibility of obtaining the Friedman constraint equation by fixing the lapse N before carrying out the variation (in effect choosing N = R(τ)). The fact that the constraint can be ignored in the radiation case is important because quantizing a constrained system is loaded with ambiguities; indeed, the problem of quantizing Einstein's equations is mainly the problem of deciding what to do with the constraint equations, and these ambiguities do not arise in the unconstrained case (see ref. 62 for a discussion of the relationship between the lapse and the Einstein constraint equations). The constraint equation in the radiation gas case actually tells us two things: the density cannot be zero, and the solutions hit the singularity. Thus, as long as these implications of the constraints are duly taken into account in some manner in the quantum theory, quantizing an unconstrained system should be a legitimate procedure, at least for a radiation gas. For simplicity, we will consider only the quantization of a radiation gas. For a radiation gas, the Hamiltonian that is generated from the Einstein Lagrangian (7.35) is just the Hamiltonian, H, for a simple harmonic oscillator (SHO), which is easy to quantize: the wave function of the Universe will be required to satisfy the equation

i ∂Ψ/∂τ = HΨ   (7.39)

There are other ways of quantizing the Einstein equations. The various quantization techniques differ mainly in the way in which the Einstein constraint equations are handled. It is an open question which way is correct. Consequently, we must attempt to pose only those questions which are independent of the quantization technique. The Friedman universe quantized via (7.39) will then illustrate the conclusions. After deriving the conclusions using our quantization technique, we shall state the corresponding results obtained using the Hartle-Hawking technique. The results obtained via these two techniques are identical. Whatever the wave function of the Universe, the MWI implies that it should represent a collection of many universes. We would expect the physical interpretation of the time evolution of the Universal wave function Ψ, coupled to some entity in the Universe which measures the


radius R of the Universe, to be essentially the same as the physical interpretation of the time-evolution of the alpha-particle wave function coupled to an atomic array. The first two measurements of the radius would split the Universe into the various branch universes—or more precisely, the observing system would split—and in each branch the evolution would be seen to be very close to the classical evolution expected from the classical analogue of the quantum Hamiltonian. Since the Hamiltonian is that of the SHO, the classical motion that will be seen by observers in a given branch universe will be sinusoidal, which is consistent with the motion predicted by the classical evolution equation (7.36). If we assume that the collection of branch universes can be grouped together so that they all begin at the singularity at R = 0 when τ = 0, then the Universe—the collection of all branch universes—will be as shown in Figure 7.3. Before the first radius measurement is performed, the Universe cannot be said to have a radius, for the Universe has not split into branches. After the first two radius measurements, the Universe has all radii consistent with the support of the Universal wave function and the support of the measurement apparatus. The MWI imposes a number of restrictions on the quantization procedure. For example, the time parameter in equation (7.38) must be such as to treat all classical universes on an equal footing, so that all the classical universes can be subsumed into a single wave function. It is for this reason that the Einstein action (7.34) has been written in terms of the conformal time τ, for this time parameter orders all the classical closed Friedman universes in the same way: the initial singularity occurs at τ = 0, the maximum radius is reached at τ = π/2, and the final singularity occurs when τ = π.
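The branch-independence of conformal time, as against the branch-dependence of proper time, can be checked numerically. The following sketch uses our own illustrative integrator and parameters: integrating (7.36) from the singularity for two values of the maximum radius, both branches reach maximum expansion at the same conformal time τ = π/2, while the elapsed proper time, t = ∫ R dτ, scales with the maximum radius:

```python
import numpy as np

def evolve(r_max, steps=20000):
    """Integrate R'' = -R (equation (7.36)) from R(0) = 0, R'(0) = r_max up
    to conformal time tau = pi/2, accumulating proper time t = integral R dtau.
    A symplectic-Euler step is adequate for this illustration."""
    h = (np.pi / 2) / steps
    R, V, t = 0.0, r_max, 0.0
    for _ in range(steps):
        V -= h * R          # dV/dtau = -R
        R += h * V          # dR/dtau = V
        t += h * R          # proper time accumulates as R dtau
    return R, t

# Two branch universes, R_max = 1 and R_max = 2: the same conformal time to
# maximum expansion, but proper time t = R_max (1 - cos tau) differs.
R1, t1 = evolve(1.0)
R2, t2 = evolve(2.0)
```

Both runs reproduce R(π/2) = R_max from (7.38), while the proper times come out in the ratio of the maximum radii, which is why τ, and not proper time, can serve as the common quantization parameter.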
Figure 7.3. The branching of a quantum universe. Before the first interaction occurs that can encode a scale measurement, the Universe, represented before this interaction as a series of wavy lines, has no radius. After the first two scaled interactions have occurred, the Universe has been split by the interactions into a large number of branches, in each of which an essentially classical evolution is seen. These branches are represented in the figure by the sine curves, each of which goes through the final singularity at T = π. The collection of all sine curves comprises all the classical radiation gas-filled Friedman models. Each curve is defined by R_max, the radius of the universe at maximum expansion. In the quantum Universe, all the classical universes are present, one classical universe defining a single branch. The classical universes are equally probable. Five such classical universes are pictured.

In contrast, a true physical time variable, which is the time an observer in one of the branch universes would measure, does of course depend on the particular branch one happens to be in. An example of such a physical time is the proper time. The proper time at which the maximum radius is reached depends on the value of the maximum radius, which is to say on the branch universe. Thus proper time is not an appropriate quantization time parameter according to the MWI. The MWI also suggests certain constraints on the boundary conditions to be imposed on the Universal wave function, constraints which are not natural in other interpretations. The other interpretations suggest that the Universe is at present a single branch which was generated far in the past by whatever forces cause wave-function reduction. Consequently, in these non-MWI theories the effect of quantum gravity, at least at present, is to generate small fluctuations around an essentially classical universe. This view of quantum cosmology has been developed at length by J. V. Narlikar and his students [66], and it leads to a cosmological model which is physically distinct from the models suggested by the MWI. A detailed analysis of what an observer would see would show a difference between the MWI models and the Narlikar models, although to a very good approximation the evolution would be the classical Friedman evolution in the present epoch. The two models would differ enormously very close to the initial singularity, and this could lead to experimentally testable differences between the MWI on the one hand, and the wave-function reduction models on the other. Other experimentally distinguishable differences between the MWI and the other interpretations have been pointed out by Deutsch [67]. These experimentally distinguishable differences obviate the most powerful argument which opponents bring against the MWI. This argument was succinctly stated by Shimony:

From the standpoint of any observer (or, more accurately, from the standpoint of any 'branch' of an observer) the branch of the world which he sees evolves stochastically. Since all other branches are observationally inaccessible to the observer, the empirical content (in the broadest sense possible) of Everett's interpretation is precisely the same as the empirical content of a modified quantum theory in which isolated systems of suitable kinds occasionally undergo 'quantum jumps' in violation of the Schrödinger equation. Thus the continuous evolution of the total quantum state is obtained by Everett at the price of an extreme violation of Ockham's principle, the entities being entire universes. [15]

But Ockham's principle is not violated by the MWI. Note that when the system being observed is small, the Universe, in the usual sense of being everything that exists, does not split. Only the measuring apparatus splits, and it splits because it is designed to split. When the system being observed is the entire Universe it is meaningful to think of the Universe as splitting, but strictly speaking even here it is the observing apparatus that splits. If we choose to regard the Universe as splitting, then we have the Universe consisting of all classical universes consistent with the support of the Universal wave function, as in Figure 7.3. This is a violation of Ockham's principle only in appearance, for one of the problems at the classical level is accounting for the apparent fact that only a single point in the initial data space of Einstein's equations has reality. Why this single point out of the aleph-one points in initial data space? Any classical theory will have this problem. It is necessary to raise the Universal initial conditions to the status of physical laws to resolve this problem on the classical level. We also have to allow additional physical laws to account for wave function reduction. No additional laws need be invoked if we adopt the MWI, for here all the points in initial data space—classical universes—actually exist. The question of why this universe rather than that universe exists is answered by saying that all logically possible universes do exist. What else could there possibly be? The MWI cosmology enlarges the ontology in order to economize on physical laws. The ontological enlargement required by the MWI is precisely analogous to the spatial enlargement of the Universe which was an implication of the Copernican theory. Indeed, philosophers in Galileo's time used Ockham's principle to support the Ptolemaic and Tychonic systems against the Copernican system. For example, the philosopher Giovanni Agucchi argued in a letter to Galileo that one of the three most powerful arguments against the Copernican system was the existence of the vast useless void which the Copernican system required [68]. In 1610 there were three interpretations of the planetary motions, the Ptolemaic, the Tychonic, and the Copernican systems, all of which were empirically equivalent and entirely viable, and two of which—the Tychonic and the Copernican—were actually mathematically equivalent if applied to circular orbits. The Ptolemaic system was merely rendered the most implausible of the three by the telescopic observations which Galileo announced in 1610, just as the Statistical Interpretation of quantum mechanics has been rendered implausible in the opinion of most physicists by the experiments to test local hidden-variables theories. What finally convinced Galileo of the truth of the Copernican system as opposed to the Tychonic system was the fact that astronomers who would not regard the Earth's motion as real were under a great handicap in understanding the motions they observed, regardless of 'mathematical equivalence'. This was also the major factor in convincing other physicists and astronomers of the truth of the Copernican system. Furthermore, the Tychonic system was dynamically ridiculous and almost impossible to apply other than to those particular problems of planetary orbits which it had been designed to analyse.
Similarly, the wave function collapse postulated by the Copenhagen Interpretation is dynamically ridiculous, and this interpretation is difficult, if not impossible, to apply in quantum cosmology. We suggest that the Many-Worlds Interpretation may well eventually replace the Statistical and Copenhagen Interpretations, just as the Copernican system replaced the Ptolemaic and Tychonic. Physicists who think in terms of the Copenhagen Interpretation may become handicapped in thinking about quantum cosmology. The different versions of the Anthropic Principle will themselves differ according to the boundary conditions that are imposed on the Universal wave function even in the MWI, and since different boundary conditions imply different physics, it is possible, at least in principle, to determine experimentally which of the different versions of the Anthropic Principle actually applies to the real Universe.


7.4 Weak Anthropic Boundary Conditions in Quantum Cosmology

Listen, there's a hell of a good universe next door: let's go!
E. E. Cummings

From the viewpoint of the Weak Anthropic Principle, the particular classical branch of the Universe we happen to live in is selected by the fact that only a few of the classical branches which were illustrated in Figure 7.2 can permit the evolution of intelligent life. The branches which have a very small R_max—a few light years, say—will not exist long enough for intelligent life to evolve in them. Nevertheless, according to WAP these other branches exist; they are merely empty of intelligent life. Therefore, if WAP is the only restriction on the Universal wave function, the spatial domain of the Universal wave function Ψ(R, T) must be (0, +∞), for any positive universal radius R is permitted by WAP. The origin must be omitted from the domain because R = 0 is the singularity, while negative values of R have no physical meaning. The key problem one faces on the domain (0, +∞) is the problem of which boundary conditions to impose at the singularity R = 0. A straightforward calculation shows that in order for the operator −d²/dR² + V(R) to be self-adjoint on (0, +∞), where the time-independent potential is regular at the origin and the operator acts on functions which are L² on (0, +∞), a boundary condition must be imposed at the origin. (H will be Hermitian if ∫ψ*(Hφ) dR = ∫(Hψ)*φ dR for all ψ and φ in its domain.) […]

Calculating necessary conditions to prevent such seepage would require knowledge of the non-gravitational matter Hamiltonian at R_min(T), and this is not known. A sufficient condition to prevent seepage is

Ψ(R_min(T), T) = 0.    (7.53)

This condition will also restrict the value of the initial wave function at T = 0. Boundary conditions (7.51)-(7.53) are somewhat indefinite because we do not know R_min(T). However, if R_min(T) has been comparable to the radius of our particular branch universe over the past few billion years, the effect of these conditions on the observed evolution of our branch would be considerable. Recall that the observed branch motion follows closely the evolution of the expectation value ⟨R⟩ for a wave packet in the potential V(R). Today ⟨R⟩ must be very close to the measured radius of our branch universe. The evolution of ⟨R⟩ will be quite different if the
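The Hermiticity condition invoked here can be illustrated numerically. The following sketch is our own construction, not from the text: it discretizes H = −d²/dR² + V(R) on a finite grid with the Dirichlet condition ψ(0) = 0, the analogue of imposing a boundary condition at the R = 0 singularity, and checks that ⟨ψ|Hφ⟩ = ⟨Hψ|φ⟩ to rounding error. The grid size, the sample potential V(R) = R², and the trial wave functions are all illustrative assumptions.

```python
import math

def hermiticity_defect(psi, phi, L=30.0, n=3000):
    """Discrete check that H = -d^2/dR^2 + V(R) is Hermitian on (0, L)
    when both (real) wave functions obey the Dirichlet condition psi(0) = 0.

    Returns |<psi|H phi> - <H psi|phi>| on a uniform grid.
    """
    h = L / n
    R = [i * h for i in range(n + 1)]
    V = lambda r: r * r          # any potential regular at the origin (illustrative)
    p = [psi(r) for r in R]
    q = [phi(r) for r in R]
    # enforce the boundary condition at R = 0 (and decay at R = L)
    p[0] = q[0] = p[n] = q[n] = 0.0

    def apply_H(f):
        # central second difference on interior points: -f'' + V f
        return [-(f[i + 1] - 2 * f[i] + f[i - 1]) / h**2 + V(R[i]) * f[i]
                for i in range(1, n)]

    Hq, Hp = apply_H(q), apply_H(p)
    s1 = sum(p[i] * Hq[i - 1] for i in range(1, n)) * h   # <psi|H phi>
    s2 = sum(Hp[i - 1] * q[i] for i in range(1, n)) * h   # <H psi|phi>
    return abs(s1 - s2)

defect = hermiticity_defect(lambda r: r * math.exp(-r),
                            lambda r: r * r * math.exp(-r))
assert defect < 1e-8
```

Dropping the boundary condition (allowing p[0] or q[0] to be nonzero) reintroduces the surface terms that spoil Hermiticity, which is why some condition at the origin must be imposed.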

boundary conditions (7.51) or (7.52) are imposed close to the present observed radius than if they were imposed at R = 0, i.e., if conditions (7.40) or (7.41) were imposed. Thus in principle the boundary conditions imposed by WAP and SAP respectively can lead to different observations. The idea that WAP and SAP are observationally distinct from the point of view of the MWI was suggested independently by Michael Berry [80] and one of the authors [81]. In the above discussion we have assumed that there are no SAP limitations on the upper bound of the Universal wave function domain. An upper bound of plus infinity on square-integrable functions requires such a function and its derivatives to vanish at infinity. If an Anthropic Principle were to require a finite upper bound, then additional boundary conditions, analogous to (7.51) or (7.52), would have to be imposed at the upper boundary. There is some suggestion that FAP may require such a boundary condition; see Chapter 10. John Wheeler's Participatory Anthropic Principle, which is often regarded as a particularly strong form of SAP, has intelligent life selecting out a single branch of the no-radius Universe that exists prior to the first 'measurement' interaction. This selection is envisaged as being due to some sort of wave function reduction, and so it cannot be analysed via the MWI formalism developed here. Until a mechanism to reduce wave functions is described by the proponents of the various wave-function-reducing theories, it is not possible to make any experimentally testable predictions. The fact that the boundary conditions on a quantum cosmology permit such predictions to be made is an advantage of analysing the Anthropic Principle from the formalism of the MWI. A more detailed analysis of the significance of boundary conditions in quantum cosmology can be found in ref. 82.
In this chapter we have seen how modern quantum physics gives the observer a status that differs radically from the passive role accorded to it by classical physics. The various interpretations of quantum mechanical measurement were discussed in detail, and they reveal a quite distinct Anthropic perspective from the quasi-teleological forms involving the enumeration of coincidences which we described in detail in the preceding two chapters. Wheeler's Participatory Anthropic Principle is motivated by unusual possibilities for wave-packet reduction by observers and can be closely associated with real experiments. The most important guide to the correct interpretation of the quantum measurement process is likely to be that which allows a sensible quantum wave function to be written down for cosmological models and consistently interpreted. This naturally leads one to prefer the Many-Worlds picture. Finally, we have tried to show that it is possible to formulate quantum cosmological models in accord with the Many-Worlds Interpretation of quantum theory so that the Weak and Strong Anthropic Principles are observationally testable.

References

1. S. G. Brush, Social Stud. Sci. 10, 393 (1980).
2. P. Forman, Hist. Stud. Physical Sci. 3, 1 (1971).
3. M. Born, Z. Physik 37, 863 (1926).
4. M. Jammer, The philosophy of quantum mechanics (Wiley, NY, 1974), pp. 24-33.
5. Ref. 4, pp. 38-44.
6. N. Bohr, in Atomic theory and the description of Nature (Cambridge University Press, Cambridge, 1934), p. 52.
7. Ref. 4, pp. 252-9; 440-67.
8. S. G. Brush, The kind of motion we call heat, Vols I and II (North-Holland, Amsterdam, 1976).
9. Ref. 6, p. 54.
10. A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
11. D. Bohm, Quantum theory (Prentice-Hall, Englewood Cliffs, NJ, 1951).
12. J. S. Bell, Physics 1, 195 (1964); Rev. Mod. Phys. 38, 447 (1966).
13. J. F. Clauser and A. Shimony, Rep. Prog. Phys. 41, 1881 (1978).
14. Ref. 1, footnote 131 on p. 445.
15. A. Shimony, Int. Phil. Quart. 18, 3 (1978).
16. N. Bohr, Phys. Rev. 48, 696 (1935).
17. N. Bohr, Nature 136, 65 (1935).
18. N. Bohr, 'Discussion with Einstein on epistemological problems in modern physics', in Albert Einstein: philosopher-scientist, ed. P. A. Schilpp (Harper & Row, NY, 1959).
19. J. von Neumann, Mathematical foundations of quantum mechanics (Princeton University Press, Princeton, 1955), transl. by R. T. Beyer from the German edition of 1932.
20. F. London and E. Bauer, La théorie de l'observation en mécanique quantique (Hermann et Cie, Paris, 1939). English transl. in ref. 25.
21. E. Schrödinger, Naturwiss. 23, pp. 807-812; 823-828; 844-849 (1935); English transl. by J. D. Trimmer, Proc. Am. Phil. Soc. 124, 323 (1980); English transl. repr. in Wheeler and Zurek, ref. 25; the Cat Paradox was stated on p. 238 of the Proc. Am. Phil. Soc. article.
22. H. Putnam, in Beyond the edge of certainty, ed. R. G. Colodny (Prentice-Hall, Englewood Cliffs, NJ, 1965).
23. J. A. Wheeler, 'Law without law', in Wheeler and Zurek, ref. 25.
24. J. A. Wheeler, in Foundational problems in the special sciences, ed. R. E. Butts and J. Hintikka (Reidel, Dordrecht, 1977); also in Quantum mechanics, a half century later, ed. J. L. Lopes and M. Paty (Reidel, Dordrecht, 1977).
25. J. A. Wheeler and W. H. Zurek, Quantum theory and measurement (Princeton University Press, Princeton, 1983).
26. A. Fine, in After Einstein: Proceedings of the Einstein Centenary, ed. P. Barker (Memphis State University, Memphis, 1982).
27. L. Rosenfeld, in Niels Bohr, ed. S. Rozental (Interscience, NY, 1967), pp. 127-8; ref. 26.
28. E. P. Wigner, in The scientist speculates—an anthology of partly-baked ideas, ed. I. J. Good (Basic Books, NY, 1962), p. 294; repr. in Wheeler and Zurek, ref. 25.
29. E. P. Wigner, Monist 48, 248 (1964).
30. E. P. Wigner, Am. J. Phys. 31, 6 (1963).
31. H. Everett III, in ref. 43, pp. 1-40. This is Everett's Ph.D. thesis, a summary of which was published in 1957, ref. 42.
32. J. A. Wheeler, Monist 47, 40 (1962).
33. C. L. Burt, 'Consciousness', in Encyclopaedia Britannica, Vol. 6, pp. 368-9 (Benton, Chicago, 1973). Burt asserts that: 'The word "consciousness" has been used in many different senses. By origin it is a Latin compound meaning "knowing things together", either because several people are privy to the knowledge, or (in later usage) because several things are known simultaneously. By a natural idiom, it was often applied, even in Latin, to knowledge a man shared with himself; i.e., self-consciousness, or attentive knowledge. The first to adopt the word in English was Francis Bacon (1601), who speaks of Augustus Caesar as "conscious to himself" of having played his part well. John Locke employs it in a philosophical argument in much the same sense: "a man, they say, is always conscious to himself of thinking". And he is the first to use the abstract noun. "Consciousness", he explains, "is the perception of what passes in a man's own mind" (1690).'
34. J. Jaynes, The origin of consciousness in the breakdown of the bicameral mind (Houghton Mifflin, NY, 1976). This author argues that consciousness did not exist in human beings until recent times, because before that period they did not possess the self-reference concept of mind.
35. G. Ryle, The concept of mind (Barnes & Noble, London, 1949).
36. A. Shimony, Am. J. Phys. 31, 755 (1963).
37. J. A. Wheeler, private conversation with F. J. T.
38. W. Heisenberg, Physics and philosophy (Harper & Row, NY, 1959), p. 160.
39. C. F. von Weizsäcker, in Quantum theory and beyond, ed. T. Bastin (Cambridge University Press, Cambridge, 1971).
40. M. Gardner, New York Review of Books, November 23, 1978, p. 12; repr. in Order and surprise, part II (Prometheus, Buffalo, 1983), Chapter 32.
41. S. W. Hawking and G. F. R. Ellis, The large scale structure of space-time (Cambridge University Press, Cambridge, 1973). The concept of future time-like infinity is discussed in more detail in Chapter 10—see, in particular, Figure 10.2.
42. H. Everett, Rev. Mod. Phys. 29, 454 (1957).
43. B. S. DeWitt and N. Graham, The Many-Worlds interpretation of quantum mechanics (Princeton University Press, Princeton, 1973).
44. W. Heisenberg, The physical principles of quantum theory (University of Chicago Press, Chicago, 1930), pp. 66-76.
45. N. F. Mott, Proc. Roy. Soc. A 126, 76 (1929); repr. in ref. 25.
46. B. S. DeWitt, in ref. 43, p. 168.
47. B. S. DeWitt, in Battelle rencontres: 1967 lectures in mathematics and physics, ed. C. DeWitt and J. A. Wheeler (W. A. Benjamin, NY, 1968).
48. Ref. 43, p. 143.
49. Ref. 43, p. 116.
50. Ref. 43, p. 117.
51. J. Hartle, Am. J. Phys. 36, 704 (1968).
52. D. Finkelstein, Trans. NY Acad. Sci. 25, 621 (1963).
53. N. Graham, in ref. 43.
54. J. S. Bell, in Quantum gravity 2: a second Oxford symposium, ed. C. J. Isham, R. Penrose, and D. W. Sciama (Oxford University Press, Oxford, 1981), p. 611.
55. S. W. Hawking, in General relativity: an Einstein centenary survey, ed. S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979), p. 746.
56. B. S. DeWitt, in Quantum gravity 2, ed. C. J. Isham, R. Penrose, and D. W. Sciama (Oxford University Press, Oxford, 1981), p. 449.
57. B. S. DeWitt, Scient. Am. 249 (No. 6), 112 (1983).
58. F. J. Tipler, Gen. Rel. Gravn 15, 1139 (1983).
59. J. Hartle and S. W. Hawking, Phys. Rev. D 28, 2960 (1983).
60. S. W. Hawking, D. N. Page, and C. N. Pope, Nucl. Phys. B 170, 283 (1980).
61. A. Einstein, in The principle of relativity, ed. A. Einstein (Dover, NY, 1923), pp. 177-83.
62. C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973).
63. B. S. DeWitt, Phys. Rev. 160, 1113 (1967).
64. W. F. Blyth and C. J. Isham, Phys. Rev. D 11, 768 (1975).
65. A. Shimony, Am. J. Phys. 31, 755 (1963).
66. J. V. Narlikar and T. Padmanabhan, Phys. Rep. 100, 151 (1983).
67. D. Deutsch, Int. J. Theor. Phys. 24, 1 (1985).
68. S. Drake, Galileo at work (University of Chicago Press, Chicago, 1978), p. 212.
69. T. S. Kuhn, The Copernican revolution (Vintage, NY, 1959).
70. S. Drake, Galileo (Hill & Wang, NY, 1980), p. 54.
71. M. J. Gotay and J. Demaret, Phys. Rev. D 28, 2402 (1983); J. D. Barrow and R. Matzner, Phys. Rev. D 21, 336 (1980).
72. M. Reed and B. Simon, Methods of modern mathematical physics, Vol. II: Fourier analysis, self-adjointness (Academic Press, NY, 1975), Chapter 10.
73. B. Simon, Quantum mechanics for Hamiltonians defined as quadratic forms (Princeton University Press, Princeton, 1971).
74. L. S. Schulman, Techniques and applications of path integration (Wiley, NY, 1981), Chapter 6.
75. A. Guth, Phys. Rev. D 23, 347 (1981).
76. Y. Hoffman and S. A. Bludman, Phys. Rev. Lett. 52, 2087 (1984).
77. M. S. Turner, G. Steigman, and L. M. Krauss, Phys. Rev. Lett. 52, 2090 (1984).
78. R. Wald, W. Unruh, and G. Mazenko, Phys. Rev. D 31, 273 (1985).
79. G. Hellwig, Differential operators of mathematical physics (Addison-Wesley, London, 1967).
80. M. Berry, Nature 300, 133 (1982).
81. F. J. Tipler, Observatory 103, 221 (1983).
82. F. J. Tipler, Phys. Rep. (In press.)
83. If the Universe contains a particular form of slight expansion anisotropy, it is possible to distinguish a 'closed' from an 'open' universe no matter how small the value of |Ω₀ − 1|; see J. D. Barrow, R. Juszkiewicz, and D. H. Sonoda, Mon. Not. R. astron. Soc. 213, 917 (1985).
84. J. A. Wheeler, in Mathematical foundations of quantum theory, ed. A. R. Marlow (Academic Press, NY, 1978), pp. 9-48.
85. Ref. 43, p. 186, and see also p. 163 for a similar remark.

8

The Anthropic Principle and Biochemistry

Of my discipline Oswald Spengler understands, of course, not the first thing, but aside from that the book is brilliant.
typical German professor's reaction to Decline of the West

8.1 Introduction

A physicist is an atom's way of knowing about atoms.
G. Wald

The Anthropic Principle in each of its various forms attempts to restrict the structure of the Universe by asserting that intelligent life, or at least life in some form, in some way selects out the actual Universe from among the different imaginable universes: the only 'real' universes are those which can contain intelligent life, or at the very least contain some form of life. Thus, ultimately, Anthropic constraints are based on the definitions of life and intelligent life. We will begin this chapter with these definitions. We will then discuss these definitions as applied to the only forms of life known to us, those which are based on carbon compounds in liquid water. As pointed out by Henderson as long ago as 1913, and by the natural theologians a century before that, carbon-based life appears to depend in a crucial way on the unique properties of the elements carbon, hydrogen, oxygen, and nitrogen. We shall summarize the unique properties of these elements which are relevant to carbon-based life, and highlight the unique properties of the most important simple compounds which these elements can form: carbon dioxide (CO₂), water (H₂O), ammonia (NH₃), and methane (CH₄). Some properties of the other major elements of importance to life as we know it will also be discussed. With this information before us we will then pose the question of whether it is possible to base life on elements other than the standard quartet of (C, H, O, N). We shall also ask if it is possible to substitute some other liquid for water—such as liquid ammonia—or perhaps dispense with a liquid base altogether. We shall argue that for any form of life which is directly descended from a simpler form of life and which came into existence spontaneously, the answer according to our present
knowledge of science is 'no'; life which comes into existence in this way must be based on water, carbon dioxide, and the basic compounds of (C, H, O, N). In particular, we shall show that many of the proposed alternative biochemistries have serious drawbacks which would prevent them from serving as a base for an evolutionary pathway to the higher forms of life. The arguments which demonstrate this yield three testable predictions: (1) there is no life with an information content greater than or equal to that possessed by terrestrial bacteria in the atmospheres of Jupiter and the other Jovian planets; (2) there is no life with the above lower bound on the information content in the atmosphere of Venus, nor on its surface; (3) there is no life with these properties on Mars. This is not to say that other forms of life are impossible, just that these other forms could not evolve to advanced levels of organization by means of natural selection. For example, we shall point out that self-reproducing robots, which could be regarded as a form of life based on silicon and metals in an anhydrous environment, might in principle be created by intelligent carbonaceous beings. Once created, such robots could evolve by competition amongst themselves, but the initial creation must be by carbon-based intelligent beings, because such robots are exceedingly unlikely to come into existence spontaneously. A key requirement for the existence of highly evolved life is ecological stability. This means that the environment in which life finds itself must allow fairly long periods of time for the circulation of the materials used in organic synthesis. It will be pointed out in sections 8.3-8.6 that the unique properties of (C, H, O, N) are probably necessary for this. However, these properties are definitely not sufficient.
In fact, there are indications that the Earth's atmosphere is only marginally stable, and that the Earth may become uninhabitable in a period short compared with the time the Sun will continue to radiate. Brandon Carter has obtained a remarkable inequality which relates the length of time the Earth may remain a habitable planet to the number of crucial steps that occurred during the evolution of human life. We discuss Carter's work in section 8.7. The important point to keep in mind is that Carter's inequality, which is based on WAP, is testable, and therefore provides a test of WAP.

8.2 The Definitions of Life and Intelligent Life

We mean by 'possessing life', that a thing can nourish itself and grow and decay.
Aristotle

Now, I realized that not infrequently books speak of books: it is as if they spoke among themselves. In the light of this reflection, the library seemed all the more disturbing to me. It was then the place of a long, centuries-old murmuring, an imperceptible dialogue between one parchment and another, a living thing, a receptacle of powers not to be ruled by a human mind, a treasure of secrets emanated by many minds, surviving the death of those who had produced them or had been their conveyors.
U. Eco
Since life is such a ubiquitous and fundamental concept, the definitions of it are legion. Rather than add to the already unmanageable list of definitions, we shall simply give what seem to us to be the sufficient conditions which a lump of matter must satisfy in order to be called 'living'. We shall abstract these sufficient conditions from the various definitions proposed over the last thirty years by biologists. We shall try to express these conditions in a form of sufficient generality that will not eliminate non-carbonaceous life a priori, but which is sufficiently particular that no natural process now existing on Earth is considered 'living' except those systems recognized as such by contemporary biologists. A consequence of giving sufficient conditions rather than necessary conditions is the elimination from consideration as 'living' of many forms of matter which most people would regard as unquestionably living. This situation seems unavoidable in biology. Any attempt to define some of the most important biological concepts results either in a definition with so many caveats that it becomes completely unusable, or else in a definition possessing occasional ambiguities. For example, Ernst Mayr has pointed out that such difficulties are inherent in any attempt to define the concept of species precisely. [1] Sufficient conditions are generally much stronger than necessary conditions, and so one might wonder whether the conditions which we shall give below could eliminate a possible cosmos which contained 'life' recognized as such by ordinary standards, but not satisfying the sufficient conditions. We do not believe that cases like this can arise. Although the conditions we give for the existence of life are only sufficient when applied to particular lumps of matter, these conditions will actually be necessary when applied to an entire biosphere.
That is, although particular individuals in a given biosphere may not satisfy our sufficient conditions, there must be some individuals, if not most individuals, in the biosphere who do satisfy the conditions. This will become clearer as we present and discuss the sufficient conditions. Virtually all authors who have considered life from the point of view of molecular biology (e.g. refs. 2, 23, 37) have regarded the property of self-reproduction as the most fundamental aspect of a living organism. Looking at life from the everyday perspective, it would seem that self-reproduction is not an absolutely essential feature of life. An individual human being cannot self-reproduce—at least two people are
required to produce a child—and a mule cannot produce another mule no matter what assistance it receives from other mules. Further, a substantial fraction of the human species never have children. These examples show that self-reproduction cannot be a necessary property of a lump of matter before we can call it 'living', for we would consider mules, childless persons, and celibate persons living beings. But such creatures are metazoans, which means that they are all composed of many single living cells, and generally each cell is itself capable of self-reproduction. Many human cells, for instance, will reproduce both in the human body and in the laboratory. In general, all known forms of living creatures contain as sub-structure cells which can self-reproduce, or the living creatures are themselves self-reproducing single cells. All organisms with which we are familiar must contain such cells in order to be able to repair damage, and some damage is bound to occur to every living thing. Thus, the ability to self-repair damage to the organism seems to be intimately connected with self-reproduction in living things, at least at the cellular level of structure. Self-repair and self-reproduction seem to involve the same level of molecular technology; indeed, the machinery needed to self-repair is approximately the same as the machinery needed to self-reproduce. Self-reproduction of metazoans always begins with a single cell; in higher animals and plants this cell is the result of a fusion of at most two cells. This single cell reproduces many times, in the process transforming itself into the differentiated cell types which together make up the metazoan—nerve cells, blood cells, and so on. The ability to self-repair is absolutely essential to a living body. If a creature were unable to self-repair, it would be most unlikely to live long enough to be regarded as living. Any creature unable to repair itself would probably be stillborn.
Since all living things are largely composed of cells which can self-reproduce, or are autonomous single cells with self-reproductive capacity, we will say that self-reproduction is a necessary property which all living things must have at least in some of their substructure. Self-reproduction to this limited extent is still not sufficient for a lump of matter to be considered living. A single crystal of salt dropped into a super-saturated salt solution would quickly reproduce itself in the sense that the basic crystal structure of NaCl would be copied many times to make up a much larger crystal than was initially present. A less prosaic example would be the 'reproduction' of mesons by high-energy bombardment. If the quarks which compose a meson are pulled sufficiently far apart, the bonds which hold them together will break. But some of the energy used to break these bonds will be converted into new quarks which did not previously exist, and these new quarks can combine together to form a number of new meson pairs (see Figure 8.1).


[Figure 8.1 schematic: (i) natural state; (ii) energy added, gluon strings stretch; (iii) gluon string breaks; (iv) new particles form at the ends of the gluon strings.]

Figure 8.1. Quark reproduction in the string model. Energy added to bound quarks stretches the bonds (strings) until they break. New quarks are formed at the break in the strings, with the net result that the original bound quark system reproduces itself.

Thus, in the appropriate environment—supersaturated solutions and high-energy accelerators—both salt crystals and mesons can self-reproduce. Yet we would be unwilling to regard either salt crystals or mesons as living creatures. The key distinction between self-reproducing living cells and self-reproducing crystals and mesons is the fact that the reproductive apparatus of the cell stores information, and the specific information stored is preserved by natural selection. The reproductive 'apparatus' of crystals and mesons can in some cases store information, but this information is not preserved by natural selection. Recall that in scientific parlance, 'information' measures the number of alternative possible statements or different individuals. For example, if a computer memory stores 10^6 bits, then this memory can store 2^(10^6) different binary numbers. If a creature has 10^6 genes, as humans do, and each gene can have one of two forms, then there are 2^(10^6) possible individuals. In humans, at least a third of all genes have two or more forms, so this number is a good estimate of the possible number of different human beings. Many of these potential individuals are nonviable in a normal environment—for many of these possible gene constellations would not correspond to workable cellular machinery—but many of the other potential individuals could survive in the same environment. Thus, in a living organism, the same reproductive apparatus allows the existence of many distinct individuals who are able to reproduce in a given environment. The decision as to which individuals actually reproduce in a given environment is made by natural selection. This decision is not made by natural selection in the case of the 'self-reproduction' by crystals and mesons. In this situation, either all the information is located in the environment, or else the various forms do not compete for environmental resources.
If the salt in question has several crystal forms, the form which reproduces in a solution is determined by the physical laws and by the particular crystal form that is placed in the solution.
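The counting in the memory and genome examples above is easy to make concrete. The sketch below is our own illustration (the function names are invented for the purpose): it shows how the number of distinct states a memory or genome can represent grows exponentially with the number of bits or two-form genes.

```python
def distinct_states(n_bits: int) -> int:
    """A memory of n_bits bits can hold 2**n_bits different binary numbers."""
    return 2 ** n_bits

def possible_genotypes(n_genes: int, forms_per_gene: int = 2) -> int:
    """With n_genes genes, each occurring in forms_per_gene variants,
    there are forms_per_gene**n_genes possible individuals."""
    return forms_per_gene ** n_genes

# Even tiny systems have enormous state spaces:
assert distinct_states(10) == 1024            # 10 bits give 1024 states
assert possible_genotypes(100) == 2 ** 100    # 100 two-form genes

# A 10^6-bit memory would distinguish 2**(10**6) states, a number with
# roughly 300,000 decimal digits, far too large to enumerate.
```

The point of the exercise is that the information capacity, not the physical bulk, is what makes so many alternative individuals possible.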



It is not possible for NaCl to change its crystal structure by mutation, resulting in a new crystal structure that begins to reproduce itself and replace the previously existing crystal structure. Similarly, the type of elementary particle one can generate in a high-energy collision depends on the details of the collision, and the particle bombarded. Elementary particles do not compete for scarce resources. To summarize, we will say that a sufficient condition for a system to be 'living' is that the system is capable of self-reproduction in some environment and the system contains information which is preserved by natural selection. By 'self-reproduction' we will mean not that an exact copy is made every time, but that there is an environment within which an exact copy would have a higher Darwinian selection coefficient than all of the most closely related copies in the same environment (relationship being measured in terms of the number of differences between the copies). Defining self-reproduction by natural selection as we have done is essential for two reasons: first, it is only the fact that natural selection occurs with living beings that allows us to distinguish living beings from crystals in terms of self-reproduction; second, for very complex living organisms, the probability that exact self-reproduction occurs is almost nil. What happens is that many copies—both approximate and exact—are made, and natural selection eliminates all but the most nearly perfect copies. If one does not allow some errors in the reproductive process, with these errors being corrected at a later stage by natural selection, then one is led to the conclusion that self-reproduction is inconsistent with quantum physics. Ultimately, it is natural selection that corrects errors and holds a self-reproductive process together, as Eigen and Schuster have shown in their investigation of the simplest possible molecular systems exhibiting self-reproduction.
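The idea that natural selection corrects copying errors, in the spirit of the Eigen and Schuster systems mentioned above, can be sketched in a toy simulation. This is our own illustration, not the authors' model: short strings replicate imperfectly, and truncation selection keeps the population near a 'master sequence' that reproduces best in this environment.

```python
import random

random.seed(0)
TARGET = "GATTACA"   # the 'master sequence' favoured by this environment

def fitness(genome: str) -> int:
    # Fitness is higher the closer a copy is to the master sequence.
    return sum(a == b for a, b in zip(genome, TARGET))

def replicate(genome: str, error_rate: float = 0.05) -> str:
    # Copying is imperfect: each site mutates with probability error_rate.
    return "".join(random.choice("ACGT") if random.random() < error_rate else c
                   for c in genome)

# Start from a random population of sequences.
population = ["".join(random.choice("ACGT") for _ in TARGET) for _ in range(50)]

for generation in range(60):
    # Every survivor leaves two imperfect copies...
    offspring = [replicate(g) for g in population for _ in range(2)]
    # ...and selection culls the population back to size by fitness.
    population = sorted(offspring, key=fitness, reverse=True)[:50]

# Despite the per-site error rate, selection has preserved the information:
best = max(population, key=fitness)
```

No individual copy is reliable, yet the population as a whole retains the master sequence: it is the selection step, not the copying step, that holds the information together.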
Thus, basically we define life to be self-reproduction with error correction. Note that a single human being does not satisfy the above sufficient condition to be considered living, but it is made up of cells some of which do satisfy it. A male-female pair would collectively be a system capable of self-reproduction, and so this system would satisfy the sufficient condition. In any biosphere we can imagine, some systems contained therein would satisfy it. Thus, in any biosphere it is necessary that some organisms satisfy the above sufficient condition. A virus satisfies the above sufficient condition, and so we consider it a living organism. A virus is the simplest known organism which does satisfy the condition, so it is instructive to review the reproductive cycle of a typical virus, the T2 virus. This cycle is pictured and discussed in Figure 8.2.

A virus consists of two main components, a nucleic acid molecule surrounded by a protein coat. This coat can have a rather complex

[Figure 8.2 diagram: an infectious T2 particle (MW about 2.5 x 10^8) contains (a) one double-stranded DNA molecule of MW about 1.2 x 10^8 and (b) a protective coat constructed from several types of different protein molecules. Stages: adsorption by the tail to an E. coli cell and injection of the DNA molecule; production of phage-specific mRNA molecules, which quickly serve as templates to make a number of phage-specific enzymes, one of which breaks down the host chromosome; duplication of T2 DNA through strand separation; continued duplication of DNA and first appearance of the coat proteins; aggregation of coat proteins about phage DNA molecules and beginning of synthesis of phage lysozyme molecules; cell lysis owing to accumulation of lysozyme, releasing 200-1000 new particles.]
Figure 8.2. Life cycle of a T2 virus. The T2 virus is a bacteriophage, which means it 'eats' bacteria. In the above figure it is shown attacking an E. coli bacterium. The enzyme lysozyme is coded by the virus DNA, and its purpose is to break the cell wall. Ribosomes are structures inside the cell that enable DNA to construct proteins (coats and enzymes) from amino acid building-blocks. The DNA produces RNA for the desired protein. The RNA acts in the ribosomes as a template on which the amino acids collect to form proteins. (From ref. 33, with permission.)

structure, as in the case of the T2 virus. The nucleic acid molecule, either RNA or DNA, is a gene which codes for the proteins required by the virus in its reproductive cycle. This cycle begins with the nucleic acid gene being injected into a living cell by the protein coat, which remains outside the cell. Once inside the cell, the gene uses the cellular machinery to make copies of itself, and to manufacture other protein coats and an


enzyme, lysozyme, that breaks down the cell wall. These genes and coats combine, and the enzyme coded by the virus nucleic acid causes the cell to burst, thereby releasing new viruses. These new viruses will be carried by forces not under the control of the virus to new cells, at which time the cycle will repeat. The environment within which this cycle occurs has a dual nature: first, there is the interior of a cell, which contains all the necessary machinery and materials to synthesize nucleic acids and the proteins which these acids code; second, there is whatever environment connects two such cells. Both parts of its environment are necessary for the cycle to complete, and natural selection is active in both environments to decide just what information coded in the nucleic acid molecule will self-reproduce. In the cellular part of the environment, the information coded in the genes must allow the gene to use the cellular machinery to make copies of itself, the protein coat and the enzymes that break cell walls. Furthermore, the particular protein coat which is coded for in the virus gene must be able to combine with the gene to form a complete virus, and it must be able to inject the nucleic acid molecule it surrounds into a cell. If a mutation occurs so that the information coded in the gene does not code for nucleic acids and proteins with these properties, natural selection will eliminate the mutants from the environment. It is the action of natural selection which creates the basic difference between viruses and salt crystals; indeed, aside from a little extra complexity, the physical distinction between the two is not marked, for viruses can be crystallized. But the reproduction cycle of the virus cannot be carried out while the virus is in crystal form; the virus must be transformed into a non-crystalline form, and when it is in this form, natural selection can act.
The structure and reproductive cycle of a virus, as outlined above, is strikingly similar to the basic theoretical structure and replication cycle of a self-reproducing machine developed theoretically by von Neumann in the 1950s (refs 9, 10) in complete ignorance of the make-up and life history of viruses. Perhaps this should not be surprising, since von Neumann was attempting to develop a theory of self-reproducing machines which would apply to any machine which could make a copy of itself, and a virus naturally falls into this category. In von Neumann's scheme (ref. 11), a self-reproducing machine is composed of two parts, a constructor and an information bank which contains instructions for the constructor. The constructor is a machine which manipulates matter to whatever extent is necessary to make the various parts of the self-reproducing machine and assemble them into final form. The complexity of the constructor will depend on both the complexity of the self-reproducing machine and on what sort of material is available in its environment. The most general type of constructor is called a universal constructor, which is a machine, a



Figure 8.3. The essential features of a self-reproducing machine, according to von Neumann. The self-reproducing machine with the information bank labelled I and the constructor divided into three parts labelled A, B and C reproduces as follows: (a) the constructor subsystem B makes a copy of the information (program) in the bank and inserts the program copy into a holder; (b) the constructor subsystem A takes over, and makes a copy of subsystems A, B, and C using the information in I; (c) the subsystem C takes the copy of the information from the holder and inserts this copy into the empty bank of the new A + B + C. The product now has all the information which the original machine had, so it is also capable of self-reproduction in the same environment. (Figure after Arbib, ref. 11, with permission.)

robot if you will, that can make anything, given instructions about the exact procedure necessary to do so. It is the function of the information bank to provide the necessary instructions to the constructor. The reproductive cycle of von Neumann's self-reproducing machine is pictured in Figure 8.3. The information bank, which is a computer memory containing detailed


instructions about how a constructor should manipulate matter, first instructs the constructor to make a copy of a constructor either without an information bank, or with a blank computer memory. The information bank is then duplicated, or the information it contains is recorded in the blank memory. In the final stage the information bank and constructor are combined, and the result is a copy of the original machine. The copy has all the information which the original machine had, so it is also capable of self-reproduction in the same environment. Von Neumann showed that a machine could reproduce by following this procedure. A virus does follow it in its reproductive cycle, for within a virus the protein coat corresponds to the constructor, and the nucleic acid corresponds to the information bank. In general, the information required to self-reproduce would be much greater than the information stored in a virus gene, because generally the environment within which a living creature must reproduce has less of the necessary reproductive machinery than does the environment of a virus. The virus invades a cell to deploy the cellular machinery for its own reproduction. For the virus to reproduce there must be some self-reproducing cells which can also reproduce the cellular machinery. The environment which these cells face contains just simple molecules like amino acids and sugars; the cells themselves must have the complex machinery of chemical synthesis to convert this material into proteins and nucleic acids. The information needed to code for the construction of this machinery and to keep it operating is vastly greater than the information coded in the single nucleic acid molecule of a virus. But in the theory of self-reproducing machines this is a matter of degree and not of kind.
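Von Neumann's constructor-plus-information-bank cycle, as described above and in Figure 8.3, can be rendered schematically in code. This is our own simplified sketch (the class and method names are invented for illustration), not von Neumann's cellular-automaton formalism: the program plays the role of the information bank (the nucleic acid, in the virus analogy), and the construct step plays the role of the constructor (the protein machinery).

```python
from dataclasses import dataclass

@dataclass
class Machine:
    program: str  # the information bank: instructions for the constructor

    def construct(self) -> "Machine":
        """One full reproduction cycle: (a) copy the program, (b) build a
        new constructor from the instructions, (c) insert the program copy
        into the new machine's empty bank."""
        program_copy = str(self.program)    # step (a): duplicate the bank
        offspring = Machine(program="")     # step (b): build A+B+C, bank empty
        offspring.program = program_copy    # step (c): insert the copy
        return offspring

parent = Machine(program="BUILD A; BUILD B; BUILD C")
child = parent.construct()

# The copy carries all the information the original had, so it can
# itself self-reproduce in the same environment:
assert child.program == parent.program
grandchild = child.construct()
assert grandchild.program == parent.program
```

The crucial design point, mirrored in the virus, is that the program is treated twice: once as data to be copied, and once as instructions to be executed by the constructor.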
For our purposes we do not need to distinguish between self-reproducing organisms on the basis of complexity, because in an ecological system which has entities that satisfy our sufficient condition, there necessarily will exist some living things which any human observer would regard as 'autonomous' and which would self-reproduce. All autonomous self-reproducing cells have a structure which can be naturally divided into the constructor part and the information bank part. This has led the French biochemist Jacques Monod (ref. 2) to define life as a system which has three properties: autonomous morphogenesis, which means that the system can operate as a self-contained system; teleonomy, which means the system is endowed with a purpose; and reproductive invariance. He points out that

The distinction between teleonomy and invariance is more than a mere logical abstraction. It is warranted on grounds of chemistry. Of the two basic classes of biological macromolecules, one, that of proteins, is responsible for almost all teleonomic structures and performances; while genetic invariance is linked exclusively to the other class, that of nucleic acids.


Thus, nucleic acids correspond to the information bank, and proteins to the constructor in our self-reproducing machine example. However, it is difficult to make the notions of autonomous morphogenesis and teleonomy precise, as Monod admits. How autonomous should a living system be? A virus cannot reproduce outside a cell. Is it 'autonomous'? Humans cannot synthesize many essential amino acids and vitamins, but many bacteria can. Are we 'autonomous'? How does one recognize an object 'endowed with a purpose'? We avoided these problems by basing our sufficient condition for a 'living' system on reproduction and natural selection. It must be autonomous to just that extent which will allow natural selection to act on the various possible sets of information stored in the system. So the degree of autonomy will depend on the environment faced by the organism. It must have structure in addition to the information bank, and this structure is 'endowed with a purpose' in the sense that this additional structure exists for the purpose of letting the living system win the struggle for survival in competition with systems that have alternative information sets. Thus, our sufficient condition includes Monod's definition for all practical purposes. Monod's definition of life is based, like our sufficient condition, on a generalization from the key structures and processes of living organisms at the molecular level. Before the molecular basis of life was understood, biologists tended to frame definitions of life in terms of macroscopic physiological processes, such as eating, metabolizing, breathing, moving, growing, and reproducing. Herbert Spencer's famous definition of life, 'The continuous adjustment of internal relations to external relations', fits into this category. However, such definitions possess rather extreme ambiguities. Mules and childless people are eliminated by a strict reproductive requirement, as we noted earlier.
But if information-preserving (or information-increasing) reproduction is removed from the list of physiological processes, then it seems that candle flames must be considered living organisms. Flames 'eat', or rather take in, fuel such as candle tallow, and they 'breathe' oxygen just as animals do. The oxygen and fuel are metabolized (or rather burned) in a reaction that is essentially the same as the underlying oxidation reaction that supplies humans with their energy. Flames can also grow, and, if the fuel is available in various nearby localities, move from place to place. They can even 'reproduce' by spreading. On the other hand, tardigrades are simple organisms that can be dehydrated into a powder, and which can be stored in this state for years. But if water is added, the tardigrades resume their living functions. When in the anhydrous state the tardigrades do not metabolize. Are they 'dead' material during this period? These difficulties led biologists in the first half of this century to attempt


to define life in terms of biochemical reactions. J. D. Bernal's definition may be taken as representative of this type of definition:

Life is a potentially self-perpetuating open system of linked organic reactions, catalysed stepwise and almost isothermally by complex and specific organic catalysts which are themselves produced by the system. (ref. 66)

The word 'potentially' was inserted to allow such creatures as the tardigrades, and also dormant seeds. Unfortunately, such biochemical definitions are too narrowly restricted to carbon chemistry. If a self-reproducing machine of the type outlined earlier were to be manufactured by Man, it would probably be regarded as living by the average person, but the above biochemical definition would not classify it as living, because the machine would not be made of organic (carbon) compounds. Also, the biochemical definition eliminates a priori the possibility that non-carbonaceous life could arise spontaneously, which no one wants to do in this age of speculation about extraterrestrial life forms. Thus, more modern definitions of life are generally framed either in terms of natural selection and information theory (Monod's definition and our sufficient condition are examples), or in terms of the non-equilibrium thermodynamics of open systems. A good example of the latter class of definitions is the definition offered by Feinberg and Shapiro:

Life is fundamentally the activity of a biosphere. A biosphere is a highly ordered system of matter and energy characterized by complex cycles that maintain or gradually increase the order of the system through an exchange of energy with its environment. (ref. 55)

We feel this definition has a number of undesirable ambiguities that make it useless. How highly ordered must a system be before it counts as a biosphere? Many astrophysical processes are highly ordered systems with complex cycles that maintain this order. The energy generation processes of stars, for example, involve many complex cycles in a non-equilibrium environment. Is a star a biosphere? Also, by concentrating attention on the biosphere as a whole, the definition becomes impossible to apply to a single creature. Indeed, the notion of 'living creature' is not a meaningful concept according to this definition. What is meant by 'maintaining order'? If the biosphere eventually dies out, does this mean it was never 'alive'? Definitions like our sufficient condition, which are based on the concept of information maintained by natural selection, also seem to have unavoidable and strange implications. Although our sufficient condition does not define as alive natural processes which intuitively are not considered alive, there are human constructs which are alive by our sufficient condition, and yet are not usually regarded as alive. Automobiles, for example, must be considered alive since they contain a great deal of information, and they can self-reproduce in the sense that there are human mechanics who can make a copy of the automobile. These mechanics are to automobiles what a living cell's biochemical machinery is to a virus. The form of automobiles in the environment is preserved by natural selection: there is a fierce struggle for existence going on between various 'races' of automobiles! In America, Japanese automobiles are competing with native American automobiles for scarce resources—money paid to the manufacturer—that will result in either more American or more Japanese automobiles being built! The British chemist A. G. Cairns-Smith has suggested (ref. 104) that the first living things—the first entities to satisfy our sufficient condition—were self-replicating metallic minerals. The necessary information was coded in a crystalline structure in these first living things, and was later transferred to nucleic acids. The ecology changed from a basis in minerals to one based on carbon. If Cairns-Smith is correct, the development and evolution of 'living' machines would represent a return to a previous ecological basis. If machines were to become completely autonomous, and able to reproduce independently of humans, then it is possible that a non-carbon ecology would eventually replace the current carbon ecology entirely, just as the present carbon ecology replaced a mineral ecology. The English zoologist Dawkins has pointed out (ref. 67) that collections of ideas in human minds can also be regarded as living beings if the information or natural selection definition of life is adopted. Ideas compete for scarce memory space in human minds. Ideas which enable people to function more successfully in their environment tend to replace ideas in the human population which do not. For example, ideas corresponding to Ptolemaic astronomy were essential to anyone who wished to obtain a professorship in astronomy in 1500.
However, possessing these ideas would make it impossible to be an astronomer today. Thus, Copernican ideas have eliminated Ptolemaic ideas in a form of struggle for existence. Dawkins calls such idea-complexes 'memes' to stress their similarity to genes and their relationship to self-reproducing machines. In computer science, an idea-complex would be thought of as a subprogram. Thus Dawkins' argument could be phrased as claiming that certain programs could be regarded as being alive. This is essentially the same claim that we have discussed in section 3.9, and that we will develop more fully in Chapters 9 and 10. Examples of computer programs which behave like living organisms in computers—they reproduce and clog computer memories with copies of themselves—have been given recently by the computer scientist Dewdney (ref. 108). Anyone whose computer disks become infected with such programs has no doubt about the remarkable similarity of such programs to disease germs.
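Such self-reproducing programs can be modelled in a few lines. The following toy is entirely our own construction, not Dewdney's actual examples: it seeds one 'replicator' in a fixed memory array and lets each existing copy write a further copy into free space until the memory is clogged.

```python
MEMORY_SIZE = 64
REPLICATOR = ["copy", "self", "jump"]     # a three-cell 'program'

# Fixed memory divided into three-cell slots; seed one replicator at slot 0.
memory = [None] * MEMORY_SIZE
memory[0:3] = REPLICATOR

def slots():
    return range(0, MEMORY_SIZE - 2, 3)   # start addresses of the slots

def step(memory):
    """Each existing copy writes one new copy into a free slot, if any."""
    free = [i for i in slots() if memory[i] is None]
    copies = sum(1 for i in slots() if memory[i:i + 3] == REPLICATOR)
    for target in free[:copies]:
        memory[target:target + 3] = REPLICATOR

for _ in range(10):
    step(memory)

# After a few steps the copies have clogged every slot in the memory:
filled = sum(1 for i in slots() if memory[i:i + 3] == REPLICATOR)
assert filled == len(list(slots()))
```

Because each copy spawns another each step, occupancy roughly doubles per generation, which is why real programs of this kind overwhelm a disk so quickly.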


Having given a definition of life in terms of self-reproduction and natural selection, we will now define intelligent life in the same way. The Weak Anthropic Principle asserts that our Universe is 'selected' from amongst all imaginable universes by the presence of creatures—ourselves—which ask why the fundamental laws and the fundamental constants have the properties and values that they are observed to have. Thus, to use the Weak Anthropic Principle, one must either use 'intelligent being' as a synonym for 'human being' or else define 'intelligent being' to be a living creature (or rather a system which is made up in part of subsystems—cells—which are living by the above sufficient condition) that is capable of asking such questions. This definition can easily be related to the usual Turing definition of human-level intelligence. In 1950 the English mathematician Alan Turing proposed (refs 12, 13) an operational test to determine if a computer possessed intelligence comparable to that of a human being. Suppose we have two sealed rooms, one of which contains a human being while the other contains the computer, but we do not know which. Imagine further that we can communicate with the two rooms by a computer keyboard and TV screen display. Now we set ourselves the problem of trying to determine which of the sealed rooms contains the person, and which the computer. The only way to do this is by typing our questions on the computer keyboard to the respective room's inhabitant and analysing the replies. Turing proposed that if, after a long period of typing out questions and receiving replies, we could still not tell which room contained the computer, then the computer would have to be regarded as having human-level intelligence. Generalizing this test of intelligence to our case, we will say that an intelligent being is a living system which can pass the Turing Test if the questions involve the fundamental laws and their structure on the levels discussed in this monograph.
Further, we would require that at least some of the computer's replies be judged as 'highly creative' by human scientific standards. Such beings will be called 'weakly intelligent'. To apply the Strong Anthropic Principle, a more rigorous criterion is needed. The Strong Anthropic Principle holds that intelligent beings play some essential role in the Cosmos. However, it is difficult to see how intelligent beings could play an essential role if all such beings are forever restricted to the planet upon which they originally evolved. On the other hand, if intelligent beings eventually develop interstellar travel, it is possible, at least in principle, for them to significantly affect the structure of galaxies and metagalaxies by their activities. We will, therefore, say a living creature is strongly intelligent if it is a member of a weakly intelligent species which at some time develops interstellar travel. Some effects which strongly intelligent species could have on the Cosmos will be discussed in Chapter 10.



8.3 The Anthropic Significance of Water

Ocean, n. A body of water occupying about two-thirds of a world made for man—who has no gills.
A. Bierce

Water is actually one of the strangest substances known to science. This may seem a rather odd thing to say about so familiar a substance, but it is surely true. Its specific heat, its surface tension, and most of its other physical properties have values anomalously higher or lower than those of any other known material. The fact that its solid phase is less dense than its liquid phase (ice floats) is a virtually unique property. These aspects of the chemical and physical structure of water have been noted before, for instance by the authors of the Bridgewater Treatises in the 1830s and by Henderson in 1913, who also pointed out that these strange properties make water a uniquely useful liquid and the basis for living things. Indeed, it is difficult to conceive of a form of life which can spontaneously evolve from non-self-replicating collections of atoms to the complexity of living cells and yet is not based in an essential way on water.

[Figure 8.19 graph: melting points indicated; atmospheric abundances of CH4 (and other reduced carbon compounds), CO2 and O2 plotted against time in billions of years (1.0 to 5.0), from the formation of the Earth, through the present, to the epoch at which the Earth becomes uninhabitable due to high oxygen content.]

Figure 8.19. Graph of the change in the composition of Earth's atmosphere over time in Hart's model. (After Hart, ref. 48.)

[Figure 8.20 graph: probability of ignition plotted against oxygen concentration (up to 35%), with one curve for each fuel moisture content from 0% (completely dry) to 50% (visibly wet).]

Figure 8.20. The probability that land vegetation will be ignited by lightning bolts or spontaneous combustion. The probability is strongly dependent on the moisture content of the fuel. Each line corresponds to a different moisture content, beginning with completely dry (0%), and going to visibly wet (50%). At the present fraction of 21%, fires will not start at more than 15% moisture content. Were the oxygen concentration to reach 25%, even damp twigs and the grass of a rain forest would ignite. (Data obtained by A. Watson of Reading University, and recorded in ref. 43, reproduced with permission.)


would survive the fires, and this concentration is reached in Hart's model in about 200 million years from now. The increasing probability of fire can be seen by comparing the steady increase of oxygen in Hart's model (Figure 8.19) with the probability of grass or forest fires at different oxygen concentrations and moisture conditions (Figure 8.20). The source of the free oxygen in Hart's model is green plants, and it is quite possible for the photosynthetic bacteria and plants in the oceans to supply the steadily increasing amount of oxygen even if plant life on the land becomes extinct, so the future evolution of his oxygen source is realistic. However, Hart's model did not take into account the present-day regulator of the oxygen concentration, which is methane supplied by anaerobic bacteria (ref. 95). It is possible that this mechanism would be sufficient to stabilize the oxygen concentration at the present 21% level. More research is needed on this question. Research on the stability of the atmosphere should focus on the question of the oxygen concentration, for it is the free oxygen that gives most of the problems in the long-term computer simulations. It is universally accepted that the atmosphere initially contained very little free oxygen, and that the free oxygen concentration gradually rose from zero in the beginning to the current level as photosynthesizing life supplied the oxygen. As the oxygen level rose, the greenhouse effect faded away, with the result that the temperature fell drastically. This sudden fall in temperature tends to force runaway glaciation in the computer models. It is also quite possible—quite likely, in fact—that Hart's model cannot be believed because he has omitted too many other factors besides the oxygen regulator (if the current one is truly sufficient to stabilize the atmosphere in the long run). The current review papers (e.g. refs 96-100) on the significance of long-term atmospheric simulations all urge caution in believing the predictions made by such models; there are simply too many unknowns at present to make accurate computer models of such long-term evolution. Nevertheless, atmospheric simulations such as Hart's (and the recent 'Nuclear Winter' simulations, refs 101, 102) suggest that the Earth's atmosphere is only marginally stable, which means it could be destabilized by relatively small perturbations, and either natural causes or human miscalculation could render the Earth uninhabitable in the near future. A more accurate calculation of the atmospheric stability, a calculation we could place confidence in, would give us a good upper bound for t0 − te. We should emphasize that Carter's formula is based on the idea that the evolution of intelligent life is most improbable, and that if the current searches for extraterrestrial intelligent life succeed in finding such creatures, his entire argument collapses. Thus one testable prediction Carter's formula makes is that we are alone in the Galaxy.


If our crucial step #1 is indeed crucial, i.e., if the evolution of life itself is unlikely to occur in the time available, then it follows that there should be no other life of any sort in the rest of the solar system. The failure of the Viking probe to detect life on Mars supports this prediction, but there are a number of planets which have not been searched for life. Sagan and Salpeter103 have presented a detailed scenario for the evolution of DNA-based life on Jupiter. If such a life-form as they suggest were indeed found on Jupiter, WAP would be in serious trouble. Furthermore, if crucial step #1 is indeed crucial in Carter's sense, it is most unlikely that experimenters will succeed in getting primitive life to form spontaneously in the laboratory. We do not mean to suggest that they will be unable to synthesize life; in fact we believe they will succeed in doing this, and in the near future. But we also believe such synthesis will require a great deal of outside help, in the form of putting together a large number of reagents under conditions which are most unlikely to have occurred on the primitive Earth 4.5 billion years ago. Recently, the biochemist Cairns-Smith has described104 in detail the biochemical improbabilities in the current models for the spontaneous formation of life; the evolutionist G. G. Simpson has also pointed out105 similar biochemical improbabilities. In the next chapter, we discuss the astronomical evidence that extraterrestrial intelligent life does not exist elsewhere in our Galaxy. The biological evidence was discussed in section 3.2.

In this chapter we have discussed the possible definitions of life and the sufficient conditions for intelligent life to be said to exist. Our definition of life was compared with previous suggestions by biologists and physicists. We developed the deep connection between living beings and self-reproducing automata in order to describe living systems using the precise language of modern computer theory. We considered the special properties of the elements used by life as we know it to argue that life which evolves spontaneously must be carbon-based. Some experiments which might falsify this claim were suggested. The key chemical properties and apparent coincidences of Nature which allow the evolution of human life based on atomic structure were discussed in detail, revealing a situation of considerable complexity. Finally, we investigated a recent Anthropic prediction due to Carter: that life on Earth may have a relatively short future. The logic of this prediction is based upon the coincidence that the timescale for biological evolution has turned out to be so close to the main-sequence stellar lifetime. Various delicate climatic and photochemical coincidences allowing life to exist on Earth were then discussed, along with the likelihood that they may be upset in the future by terrestrial events. This discussion also reveals how stringent are the conditions that must be satisfied before a planetary surface is even a possible site for the successful evolution of life.


References

1. E. Mayr, Populations, species, and evolution (Harvard University Press, Cambridge, Mass., 1970).
2. J. Monod, Chance and necessity (Vintage Books, NY, 1971), p. 13.
3. G. L. Stebbins, The basis of progressive evolution (University of North Carolina Press, Chapel Hill, 1969).
4. L. Brillouin, Science and information theory (Academic Press, NY, 1962).
5. E. Wigner, in Symmetries and reflections (University of Indiana Press, Bloomington, 1967), p. 200.
6. J. Mehra, Am. Scient. 61, 722 (1973).
7. M. Eigen and P. Schuster, The hypercycle (Springer-Verlag, Berlin, 1977).
8. J. D. Watson, Molecular biology of the gene (W. A. Benjamin, NY, 1970).
9. J. von Neumann, Theory of self-reproducing automata, ed. and completed by A. W. Burks (University of Illinois Press, Urbana, 1966).
10. M. A. Arbib, Theories of abstract automata (Prentice-Hall, Englewood Cliffs, NJ, 1969).
11. M. A. Arbib, in Interstellar communication: scientific perspectives, ed. C. Ponnamperuma and A. G. W. Cameron (Houghton Mifflin, Boston, 1974).
12. A. Turing, Mind 59, 433 (1950). Turing's article has been reprinted in many anthologies, for example The mind's I by D. R. Hofstadter and D. C. Dennett (Basic Books, NY, 1981).
13. Turing's original paper has provoked an enormous literature; a number of articles on the Turing Test and its significance are reprinted in The mind's I (ref. 12). In addition, a few articles of interest are K. Gunderson, Mind 73, 234 (1964); M. Scriven, Mind 62, 230 (1953); the Introduction to Automata studies, ed. C. E. Shannon and J. McCarthy; M. Gardner, Scient. Am. 224 (No. 6, June), 120 (1971); and the articles on machine intelligence in Dimensions of mind, ed. S. Hook (New York University Press, NY, 1960). One crucial point which these works discuss and which we have ignored is the length of time the question period lasts. Another point which must be considered is how clever and original we wish the human in the sealed room to be. We have taken these points into account to some extent in our text, by requiring that the machine passing the WAP test make original observations on WAP, where originality is judged with respect to the performance of human scientists. The nerve of the Turing Test as a criterion for mind, creativity, or intelligence is the idea that all intelligent performance is judged with reference to human performance in the corresponding categories, and if the performance of the machine is comparable to that of a human in all of the categories, the machine must be regarded as a 'person'.
14. F. Drake, Technol. Rev. 78(7), 22 (June 1976).
15. P. T. Landsberg, Nature 203, 928 (1964).
16. E. P. Wigner and P. T. Landsberg, Nature 205, 1307 (1965).
17. F. H. Stillinger, Science 209, 451 (1980).
18. L. Pauling, The nature of the chemical bond, 2nd edn (Cornell University Press, Ithaca, NY, 1948).
19. L. Pauling, General chemistry (Freeman, San Francisco, 1956).


20. L. Pauling and R. Hayward, The architecture of molecules (Freeman, San Francisco, 1964).
21. J. T. Edsall and J. Wyman, Biophysical chemistry, Vol. I (Academic Press, NY, 1958).
22. T. R. Dyke, K. M. Mack, and J. S. Muenter, J. Chem. Phys. 66, 498 (1977).
23. M. Eigen, 'The origin of biochemical information', in The physicist's conception of Nature, ed. J. Mehra (Reidel, Dordrecht, 1972).
24. F. Drake, Astronomy 1 (No. 5, Dec.), 5 (1973).
25. A. E. Needham, The uniqueness of biological materials (Pergamon Press, NY, 1965).
26. A. Geiger, F. H. Stillinger, and A. Rahman, J. Chem. Phys. 70, 4185 (1979).
27. P. Schuster, G. Zundel, and C. Sandorfy (eds), The hydrogen bond, 3 vols (North-Holland, Amsterdam, 1976).
28. M. D. Joesten and L. J. Schaad, Hydrogen bonding (Marcel Dekker, NY, 1974).
29. A. Geiger, A. Rahman, and F. H. Stillinger, J. Chem. Phys. 70, 263 (1979).
30. C. Pangali, M. Rao, and B. J. Berne, J. Chem. Phys. 71, 2982 (1979).
31. F. Franks, in Water: a comprehensive treatise, Vol. 4, ed. F. Franks (Plenum, NY, 1975), p. 1.
32. A. Ben-Naim, Hydrophobic interactions (Plenum, NY, 1980).
33. A. L. Lehninger, Biochemistry, 2nd edn (Worth, NY, 1975).
34. W. Kauzmann, Adv. Protein Chem. 14, 1 (1959).
35. C. Tanford, The hydrophobic effect: formation of micelles and biological membranes (Wiley, NY, 1980).
36. S. W. Fox and K. Dose, Molecular evolution and the origin of life, 2nd edn (Marcel Dekker, NY, 1977).
37. S. W. Fox, in The nature of life: 13th Nobel Conference, ed. W. H. Heidcamp (University Park Press, Baltimore, 1977).
38. F. G. A. Stone and W. A. G. Graham, Inorganic polymers (Academic Press, NY, 1962).
39. C. F. Fox, 'The structure of cell membranes', Scient. Am. (Feb. 1972).
40. A. M. Buswell and W. H. Rodebush, in Conditions for life, ed. A. Gibor (Freeman, San Francisco, 1976).
41. D. Eisenberg and W. Kauzmann, The structure and properties of water (Oxford University Press, Oxford, 1969).
42. M. H. Hart, Origins of Life 9, 261 (1979).
43. J. E. Lovelock, Gaia: a new look at life on Earth (Oxford University Press, Oxford, 1979).
44. G. Wald, Scient. Am. 191(2), 45 (1954).
45. G. Wald, Origins of Life 5, 7 (1974).
46. G. Wald, Introduction to Fitness of the environment, by L. J. Henderson (Peter Smith, Gloucester, 1970).
47. G. Wald, Proc. natn. Acad. Sci., U.S.A. 52, 595 (1964).
48. M. H. Hart, Icarus 33, 23 (1978).
49. N. V. Sidgwick, The chemical elements and their compounds, Vols I and II (Oxford University Press, Oxford, 1950).

50. J. B. S. Haldane, New Biology 16, 12 (1954).
51. H. C. Urey, Proc. natn. Acad. Sci., U.S.A. 38, 351 (1952).
52. L. J. Henderson, Fitness of the environment (Macmillan, NY, 1913).
53. V. A. Firsoff, Discovery 23, 36 (1962).
54. V. A. Firsoff, Life beyond the Earth (Basic Books, NY, 1963).
55. G. Feinberg and R. Shapiro, Life beyond Earth: The intelligent Earthling's guide to life in the Universe (Morrow, NY, 1980), p. 147.
56. G. C. Pimental, K. C. Atwood, H. Gaffron, H. K. Hartline, and T. H. Jukes, Biology and the exploration of Mars, ed. C. S. Pittendrigh, W. Vishniac, and J. P. T. Pearman (NASA, Washington, 1966).
57. H. H. Sisler, Chemistry in non-aqueous solvents (Reinhold, NY, 1961).
58. C. N. Matthews and R. E. Moser, Proc. natn. Acad. Sci., U.S.A. 56, 1087 (1966).
59. C. N. Matthews and R. E. Moser, Nature 215, 1230 (1967).
60. M. Nei and A. K. Roychoudhury, Science 177, 434 (1972).
61. T. Dobzhansky, F. Ayala, G. Stebbins, and J. W. Valentine, Evolution (Freeman, San Francisco, 1977).
62. G. Wald, in Horizons in biochemistry, ed. M. Kasha and B. Pullman (Academic Press, NY, 1962).
63. C. Sagan, 'Life', in Encyclopaedia Britannica, 15th edn, Vol. 10 (Macropedia, 1974), p. 893.
64. H. Spencer, Principles of biology (rev. edn 1909), p. 123.
65. J. H. V. Crowe and A. F. Cooper, Scient. Am. 225, 30 (Dec. 1971).
66. J. D. Bernal, in Theoretical and mathematical biology, ed. T. H. Waterman and H. J. Morowitz (Blaisdell, NY, 1965).
67. R. Dawkins, The selfish gene (Oxford University Press, Oxford, 1977).
68. G. Wald, 'Life and light', Scient. Am. (Oct. 1959), repr. in Conditions for life, ed. A. Gibor (Freeman, San Francisco, 1976).
69. S. Brenner, personal communication (1981).
70. We are grateful to Professors B. Carter and J. Perdew for discussions regarding the scaling of atomic properties with the change of the fundamental constants. J. Perdew pointed out the scaling law (8.3) to us.
71. B. Carter, Phil. Trans. R. Soc. A 370, 347 (1983); also in The constants of Nature, ed. W. H. McCrea and M. J. Rees (Royal Society, London, 1983).
72. Ref. 61, p. 87.
73. We are grateful to Professors D. Mohr and J. Maynard Smith for helpful discussions on the derivation of Carter's formula.
74. L. Margulis, Symbiosis in cell evolution (Freeman, San Francisco, 1981), p. 82.
75. Ref. 74, p. 92.
76. Ref. 74, p. 95.
77. Ref. 74, p. 97.
78. T. Dobzhansky, Genetics of the evolutionary process (Columbia University Press, NY, 1970).
79. D. H. Kenyon and G. Steinman, Biochemical predestination (McGraw-Hill, NY, 1969).


80. H. J. Morowitz, Prog. Theor. Biol. 1, 35 (1967).
81. M. A. Ragan and D. J. Chapman, A biochemical phylogeny of the protists (Academic Press, NY, 1978), p. 204.
82. J. A. Bassham, in Plant biochemistry, ed. J. Bonner and J. E. Verner (Academic Press, NY, 1965).
83. J. DeLey, Evol. Biol. 2, 103 (1968).
84. Ref. 81, p. 41.
85. Ref. 81, p. 26.
86. Ref. 74, p. 324.
87. K. W. Jeon and M. S. Jeon, J. Cell Physiol. 89, 337 (1976).
88. Ref. 74, p. 286.
89. L. Ornstein, Physics Today 35 (No. 3, March), 27 (1982).
90. The number of genes in the cow and fruit fly are found in ref. 72; the number of genes in E. coli and Mycoplasma gallisepticum are taken from Table 1.1 of ref. 83, where DeLey's estimate of 1000 nucleotides per gene is replaced by Dobzhansky et al.'s estimate of 1800.
91. G. F. R. Ellis and G. B. Brundrit, Quart. J. R. astron. Soc. 20, 37 (1979).
92. F. J. Tipler, Quart. J. R. astron. Soc. 22, 133 (1981).
93. G. G. Simpson, This view of life (Harcourt Brace & World, NY, 1964), p. 252.
94. S. M. Stanley, The new evolutionary timescale (Basic Books, NY, 1981).
95. Ref. 43, pp. 69-76.
96. S. H. Schneider and S. L. Thompson, Icarus 41, 456 (1980).
97. J. B. Pollack and Y. L. Yung, Ann. Rev. Earth & Planet. Sci. 8, 425 (1980).
98. S. H. Schneider and S. L. Thompson, in Life in the Universe, ed. J. Billingham (MIT Press, Cambridge, Mass., 1981).
99. S. Chang, D. DesMarais, R. Mack, S. L. Miller, and G. E. Strathearn, in Earth's earliest biosphere: its origin and evolution, ed. J. W. Schopf (Princeton University Press, Princeton, 1983).
100. J. Veizer, in Earth's earliest biosphere, ref. 99.
101. R. P. Turco, O. B. Toon, T. P. Ackerman, J. B. Pollack, and C. Sagan, Science 222, 1283 (1983).
102. P. R. Ehrlich et al., Science 222, 1293 (1983).
103. C. Sagan and E. Salpeter, Astrophys. J. Suppl. 32, 737 (1976).
104. A. G. Cairns-Smith, Genetic takeover and the mineral origins of life (Cambridge University Press, Cambridge, 1982); Seven clues to the origin of life: a scientific detective story (Cambridge University Press, Cambridge, 1985).
105. Ref. 93, p. 262.
106. L. H. Ahrens, in Physics and chemistry of the earth, Vol. 5, ed. L. H. Ahrens, F. Press, and S. K. Runcorn (Pergamon Press, NY, 1964).
107. R. F. Doolittle, in Science 214, 149 (1981), argues that some independent invention of enzymes is not as improbable as DeLey would have us believe; duplication would increase the probability. L. E. Orgel, Proc. R. Soc. B 205, 434 (1978), makes a similar argument (we are grateful to J. Maynard Smith for this reference). Nevertheless, the results of Doolittle and Orgel, if true, do not appear to alter our calculations significantly.


108. A. K. Dewdney, Scient. Am. 252, 14 (March 1985).
109. B. Charlesworth, in Observatory 102, 49 (1982), gives tD ≈ tg S⁻¹[(4Nμ)⁻¹ + ln(2N)], where N is the number of individuals in the population and μ is the probability of occurrence of the mutation in a given gene. If we set N = 7, then unless μ is very small, this expression will give the same estimate as (8.4).
110. F. B. Salisbury, Nature 224, 342 (1969), argued that the enormous improbability of a given gene, which we computed in the text, means that a gene is too unique to come into being by natural selection acting on chance mutations. WAP self-selection refutes this argument, as R. F. Doolittle, in Scientists confront creationism, ed. L. R. Godfrey (Norton, NY, 1983), has also pointed out.
111. The invariance we have calculated in (8.1)-(8.3) does not include relativistic effects. However, as can be seen from our discussion of atomic structure in Chapter 5 (see pp. 295-300), an increase in the value of the fine structure constant can induce significant relativistic effects in large atoms, because typical orbital velocities are of order αZc. Recently, H. J. Kreuzer, M. Gies, G. L. Malli, and J. Ladik, J. Phys. A 18, 1571 (1985), have examined some of these effects in detail. They find that an increase in the fine structure constant by a factor of 5 produces drastic changes in the Fe²⁺ and Fe³⁺ ions which play a key role in haemoglobin. Increases by factors of 2.5 and 1.5 produce significant changes in the chemistry of cadmium and lead, respectively.

9 The Space-Travel Argument Against the Existence of Extraterrestrial Intelligent Life

Do there exist many worlds, or is there but a single world? This is one of the most noble and exalted questions in the study of Nature.
St. Albertus Magnus

9.1 The Basic Idea of the Argument

. . . the way whereby one can learn the pure truth concerning the plurality of worlds is by aerial navigation [space-travel].
P. Borel (1657 AD)

The contemporary advocates for the existence of extraterrestrial intelligent life seem to be primarily astronomers and physicists, such as Sagan,2 Drake,3 and Morrison,4 while most leading experts in evolutionary biology, for instance Dobzhansky,5 Simpson,6 Francois,7 Ayala et al.,8 and Mayr,9 contend that the Earth is probably unique in harbouring intelligence. We presented the evolutionists' argument against the existence of extraterrestrial intelligent life (ETI) in section 3.2, and Carter's WAP argument in section 8.7. In this chapter we shall present the so-called space-travel argument against the existence of ETI, an argument which one of us has developed at length in a number of publications.1 Specifically, we shall argue in this chapter that the probability of the evolution of creatures with the technological capability of interstellar communication within five billion years after the development of life on an earthlike planet is less than 10⁻¹⁰, and thus it is very likely that we are the only intelligent species now existing in our Galaxy.

The basic idea of the space-travel argument is straightforward and indeed has led other authors, such as Fermi,10 Dyson,11 Hart,12 Simpson,6 Shklovskii,101 and Kuiper and Morris,13 to conclude that extraterrestrial intelligent beings do not exist anywhere in our Galaxy: if they did exist and possessed the technology for interstellar communication, they would also have developed interstellar travel and thus would already be present in our Solar System. Since they are not here,14,15 this implies that they do not exist. Although this argument has been expressed before—indeed, it was used


in the seventeenth century to rule out intelligent life on the Moon1—its force does not seem to have been appreciated. We shall try to demonstrate its force by arguing that an intelligent species with the technology for interstellar communication would necessarily develop the technology for interstellar travel, and this would automatically lead to the exploration and/or colonization of our Galaxy in less than 300 million years.

It seems reasonable to assume that any intelligent species which develops the technology for interstellar communication must also have (or will develop in a few centuries) technology which is at least comparable to our present-day technology in other fields, particularly rocketry. This is actually a consequence of the Principle of Mediocrity16 (that our own evolution is typical), which is usually invoked, particularly by Sagan,85 in analyses of interstellar communication. This assumption about technological development is also an essential one to make if interstellar communication via radio waves is to be regarded as likely. If we do not assume that an advanced species knows at least what we know, then we have no reason to believe an advanced species would transmit radio waves, for they may never have discovered such things. In the case of rocket technology, the human species invented rockets some six hundred years before it was even aware of the existence of radio waves, and present-day chemical rockets can be regarded as natural developments of early rocket technology.

In addition to a rocket technology comparable to our own, it seems probable that a species engaging in interstellar communication would possess a fairly sophisticated computer technology. In fact, Sagan himself has asserted17 that 'Communication with extraterrestrial intelligence . . . will require . . . , if our experience in radioastronomy is any guide, computer-actuated machines with abilities approaching what we might call intelligence'. Furthermore, the Cyclops18 and SETI19 proposals for radio telescopes to search for artificial extraterrestrial radio signals have required some quite advanced data-processing computers. We shall assume therefore that any species engaging in interstellar communication will have a computer technology which is not only comparable to our present-day technology, but which is comparable to the level of technology which we know is possible, which we are now spending billions of dollars a year to develop, and which a majority of computer experts believe we will actually possess within a century. That is, we shall assume that such a species will eventually develop a self-replicating universal constructor with intelligence comparable to the human level—such a machine should be developed within a century, according to the experts20-22 (see section 3.2 for additional information supporting this opinion)—and such a machine combined with present-day rocket technology would make it possible to explore the Galaxy in less than 300 million


years, for an initial investment less than the cost of operating a 10 MW microwave beacon for several hundred years, as proposed in SETI.19 It is a deficiency in present-day computer technology, not rocket technology, which prevents us from beginning the exploration of the Galaxy tomorrow.

The above conclusions may seem to hinge on the motivations of advanced extraterrestrial intelligent beings, a subject about which we admittedly know nothing. However, we know by definition the motivations of the most interesting class of intelligent beings: those whose technology is far in advance of ours, and who are interested in communicating with us, or otherwise interacting with us. It is this class that most SETI programs are designed to detect, and it is this class—in the terminology of Chapter 8, the class of strongly intelligent beings—whose existence is made doubtful by the arguments we present here. We shall also argue that the interstellar exploration mechanism discussed here has so many uses besides contacting other intelligent beings that any technologically advanced species would be likely to use it, and hence if they existed, they should be here.

In section 9.3 and in Chapter 10, we shall point out that the ultimate survival of a technological civilization, and indeed the survival of the biosphere in some form, requires the eventual expansion of the civilization into interstellar space. We gave upper bounds to the lifetime of a biosphere restricted to a single planet and a single solar system in Chapter 3. A civilization far in advance of ours is probably aware of this, and such awareness would provide a motivation to begin the colonization of space.

9.2 General Theory of Space Exploration and Colonization

If they existed, they would be here.
E. Fermi

In space exploration (or colonization), it is wise to adopt a strategy which maximizes the probable rate of information gained (or regions colonized) and minimizes the cost subject to the constraints imposed by the level of technology. Costs may be minimized in two ways: first, 'off-the-shelf' technology should be used as far as possible to reduce the research and development costs; second, resources which could be used for no other purpose should be utilized as far as possible. The resources available in uninhabited stellar systems cannot be utilized for any human purpose unless a space vehicle is first sent; indeed, from the economic viewpoint materials which cannot be utilized at all are valueless. Therefore, any optimal exploration strategy must utilize the material available in other stellar systems as far as possible. With present-day technology, such


utilization could not be very extensive, but with the level of computer technology assumed in the previous section, these otherwise useless resources can be made to pay for virtually the entire cost of the exploration programme.

What one needs is a self-reproducing universal constructor: a machine capable of making any device, given the construction materials and a construction program. By definition, it is capable of making a copy of itself. Von Neumann has shown23,24 that such a machine is theoretically possible; in fact, a human being is a universal constructor specialized to perform on the surface of the Earth. Thus the manned space exploration (and colonization) programme outlined in refs. 11, 12, and 13 is just a special case of an exploration strategy to be carried out by universal constructors. We discussed the theory of such machines in section 8.2.

The payload of a probe to another stellar system would be a self-reproducing universal constructor with human-level intelligence (we shall term such an interstellar probe a von Neumann probe), together with an engine for slowing down once the other stellar system is reached, and an engine for travelling from one place to another within the target stellar system—the latter could be an electric propulsion system25 or a solar sail.26 The von Neumann probe would be instructed to search out construction material with which to make several copies of itself and of the original probe rocket engines. Judging from observations of our own solar system,27 what observations we have of other stellar systems,28 and most contemporary solar-system formation theories,29 such materials should be readily available in virtually any stellar system—including binary star systems—in the form of meteors, asteroids, comets, and other debris from the formation of the stellar system. Recent observations of huge amounts of dust around Vega and other stars indicate that such materials are indeed present around many, if not all, stars. Whatever elements are necessary to reproduce the von Neumann probe, they should be available from some source in a stellar system. For instance, the material in the asteroids is highly differentiated; many asteroids are largely nickel-iron, while others contain large amounts of hydrocarbons.27

As the copies of the von Neumann probe are made, they would be launched at the stars nearest the target star. When these probes reached those stars, the process would be repeated, and repeated again, until the probes had covered all the stars of the Galaxy. Once a sufficient number of copies had been made, the von Neumann probe would be programmed to explore the stellar system in which it finds itself, and relay the information gained back to the original solar system from which the exploration began. In addition, the von Neumann probe could be programmed to use the resources of the stellar system to conduct scientific research which would be too expensive to conduct in the original solar system.
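The timescale of this replication chain can be checked with a back-of-the-envelope model. The sketch below is ours, in Python; all numerical values are illustrative assumptions (a disc roughly 10⁵ light-years across, chemical-rocket cruise speeds, and nominal hop counts), not figures from the text. It treats the probe wavefront as advancing at roughly the cruise speed, with a construction pause at each stop:

```python
def galaxy_crossing_years(diameter_ly: float, beta: float,
                          hops: float, replication_yr: float) -> float:
    """Years for a wave of self-replicating probes to cross the Galaxy.

    diameter_ly:    distance to cover, in light-years
    beta:           cruise speed as a fraction of the speed of light
    hops:           number of star-to-star legs along the way
    replication_yr: years spent building probe copies at each stop
    """
    # Travel time dominates when the replication stops are short.
    return diameter_ly / beta + hops * replication_yr

# Assumed values: a ~1e5 ly disc, cruise at 3e-4 c,
# ~1e4 hops of ~10 ly each, and ~100 years of construction per stop.
t = galaxy_crossing_years(1e5, 3e-4, 1e4, 100)
print(f"crossing time ≈ {t:.1e} yr")
```

With these assumptions the travel term dominates and the crossing time comes out at a few times 10⁸ years, the same order of magnitude as the 'less than 300 million years' figure argued for in this chapter.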

It would also be possible to use the von Neumann probe to colonize the stellar system. Even if there were no planets in the stellar system—the system could be a binary star with asteroid-like debris—the von Neumann probe could be programmed to turn some of the available material into an O'Neill colony,30 a self-sustaining human colony in space which is not located on a planet but is rather a space station. Inhabitants for the colony could be synthesized by the von Neumann probe. All the information needed to manufacture a human being is contained in the genes of a single human cell. Thus if an intelligent extraterrestrial species possessed the knowledge to synthesize a living cell—and some biologists claim31,32 the human race could develop such knowledge within 30 years—they could program a von Neumann probe to synthesize a fertilized egg-cell of their species. If they also possessed artificial womb technology—and such technology is in the beginning stages of being developed on Earth33—then they could program the von Neumann probe to synthesize members of their species in the other stellar system. As suggested by Eiseley,34 these beings could be raised to adulthood in the O'Neill colony by robots also manufactured by the von Neumann probe, after which they would be free to develop their own civilization in the other stellar system.

Suggestions have been made35 that other solar systems could be colonized by sending frozen cells via space probe to the stars. But it has not yet been shown36-39 that such cells would remain viable over the long periods required to cross interstellar distances. This difficulty does not exist in the colonization strategy outlined above; the computer memory of the von Neumann probe can be made essentially stable over long periods of time.
If it is felt that the information required to synthesize an egg cell would tax the memory storage space of the original probe, the information could be transmitted via microwave to the von Neumann probe once it has had time to construct additional storage capacity in the other solar system. The key point is that once a von Neumann probe has been sent to another solar system, the entire resources of that solar system become available to the intelligent species which controls the von Neumann probe; all sorts of otherwise prohibitively expensive projects become possible to carry out. It would even be possible to program the von Neumann probe to construct a very powerful radio beacon with which to signal other intelligent species! A number of scientists, for instance G. O'Neill96 and R. A. Freitas,97-99 have independently suggested that self-reproducing probes are the most efficient way to contact ETI. Freitas' articles contain a quite detailed analysis.

Hence the problem of interstellar travel has been reduced to the problem of transporting a von Neumann probe to another stellar system. This can be accomplished even with present-day rocket technology. For example, Hunter40,41 has pointed out that by using a Jupiter swing-by to


approach the Sun and then adding a velocity boost at perihelion, a solar-system escape velocity v_es of about 90 km/sec (≈3 × 10⁻⁴c, where c is the speed of light) is possible with present-day chemical rockets, even assuming the launch is made from the surface of the Earth. As pointed out in references 28 and 29, most other stars should have planets (or companion stars) with characteristics sufficiently close to those of the Jupiter-Sun system to use this launch strategy in reverse to slow down in the other solar system. The mass ratio μ (the ratio of the initial launch mass to the payload mass) for the initial acceleration in the swing-by would be 10³, so the total trip would require μ < 10⁶ (less than, since the 10³ figure assumed an Earth-surface launch); quite high, but still feasible. (With Jupiter swing-by only, the escape velocity would be about 1.6 × 10⁻⁴c with μ = 10³.) For comparison, we note that the Voyager spacecraft will have42 a solar escape velocity of about 0.6 × 10⁻⁴c with μ = 850. Thus it seems reasonable to assume that any intelligent species would develop at least the rocket technology capable of a one-way trip with deceleration at the other stellar system, and with a travel velocity v_es of 3 × 10⁻⁴c. At this velocity the transit time to the nearest stars would be between 10⁴ and 10⁵ years.

This very long travel time would necessitate a highly developed self-repair capacity, but this should be possible with the level of computer technology assumed for the payload.43 In addition, nuclear power sources could be developed which would supply power for that period of time. However, nuclear power is not really necessary; if power utilization during the free-fall were sufficiently low, even chemical reactions could be used to supply the power. Since v_es is of the same order as the stellar random-motion velocities, sensitive guidance would be required, but this does not seem to be an insuperable problem with the assumed level of computer technology.
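The transit-time arithmetic is easy to verify. A minimal sketch (ours; the stellar distances are illustrative assumptions, since the text only says 'the nearest stars'):

```python
C_KM_S = 299_792.458  # speed of light in km/s

v_es = 90.0            # km/s, the swing-by escape speed quoted above
beta = v_es / C_KM_S   # ≈ 3.0e-4, matching the 3 × 10⁻⁴c in the text

def transit_years(distance_ly: float, beta: float) -> float:
    """Years to cover a distance in light-years at a fraction beta of c."""
    return distance_ly / beta

# Illustrative distances: the nearest stars lie from a few to a few tens of
# light-years away (~4.4 ly for Alpha Centauri, ~30 ly for a wider sample).
for d_ly in (4.4, 30.0):
    print(f"{d_ly:5.1f} ly -> {transit_years(d_ly, beta):,.0f} yr")
```

Both values fall in the 10⁴ to 10⁵ year range claimed in the paragraph above.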
Because of the very long travel times, it is often claimed (ref. 44) that interstellar probes would be obsolete before they arrived. However, in a fundamental sense a von Neumann probe cannot become obsolete, since it is a universal constructor: it can be given instructions by radio about how to make the latest devices after it arrives at the destination star. Restricting consideration to present-day rocket technology is probably too conservative. It seems likely that an advanced intelligent species would eventually develop rocket technology at least to the limit which we regard as technically feasible today. For example, the nuclear pulse rocket of the Orion Project pictured (ref. 45) a solar escape velocity v_es of 3×10⁻²c with μ = 36 for a one-way trip with deceleration at the target star. The cost of the probe would be $4×10¹² at 1985 prices, almost all of the money being for the deuterium fuel. This is approximately the present GNP of the United States. Project Daedalus (ref. 43), the interstellar probe
Argument Against the Existence of Extraterrestrial Intelligent Life 582

study of the British Interplanetary Society, envisaged a stellar fly-by via nuclear pulse rocket (no slow-down at the target star), with v = 1.6×10⁻¹c, μ = 150, and a cost of $10¹². As before, almost all the cost is for the fuel, in this case helium-3 (at 1960 prices). With slow-down at the target star, μ = 2×10⁴ and the cost would be $2×10¹⁴, or almost 100 times the United States GNP; it would also require centuries to extract the necessary helium-3 from the helium source proposed in the Daedalus study, the Jovian atmosphere. The cost of such probes is far beyond the means of present-day civilization. However, in the above estimates almost all the cost is for the rocket fuel; building the probe itself and testing it would cost relatively little. A possible interstellar exploration strategy would be to design a probe capable of v_es = 0.1c, record the construction details in a von Neumann probe, launch the probe payload via a chemical rocket at 3×10⁻⁴c to a nearby stellar system, and program the machine to construct and fuel several high-velocity (0.1c) probes with von Neumann payloads in the other system. When the probes reach their target stars, they would be programmed to build high-velocity probes, and so on. In this way the investment in interstellar probes by the intelligent species is reduced to a minimum while maximizing the rate at which the Galaxy is explored. (The von Neumann probe could conceivably be programmed to develop the necessary technology in the other stellar system. This would reduce the initial investment even further.) The disadvantage of a 10⁴ year transit time is that for 10⁴ years no information on other stellar systems reaches the original solar system. There is a trade-off between the cost of the first probe and the time interval the intelligent species must wait before receiving any information on the other stellar systems.
But with second-generation probes with v_es = 0.1c, new solar systems would be explored at the rate of several per year by 10⁵ years after the original launch. The intelligent species launching the original probe need only be patient and launch a sufficient number of initial probes at v_es = 3×10⁻⁴c so that at least one succeeds in reproducing itself (or in making a high-velocity probe) several times. This number will of course depend on the failure rate. Project Daedalus (ref. 43) aimed at a mission failure rate of 10⁻⁴, and the designers believed that such a failure rate was feasible with on-board repair. If we adopt this failure rate and assume failures to be statistically independent, then only three probes need be launched to reduce the failure probability to 10⁻¹². Judging by contemporary rocket technology, the cost of the initial low-velocity probes would be less than $1×10¹⁰ each, since von Neumann probes could make themselves and the original research and development costs would be small—intelligent self-reproducing machines would originally be developed for other purposes (ref. 46). Thus the exploration of the Galaxy
would cost about 30 billion dollars, approximately the cost of the Apollo program. These costs—$3×10¹⁰ for a low-speed probe and $2×10¹⁴ for a high-speed one—seem quite large to us, but there is evidence that they would not seem large to a member of a civilization greatly in advance of ours. As we pointed out in section 3.7, the cost of raw materials, including fuel, relative to wages has been dropping exponentially with a time constant of 50 years for the past 150 years. If we assume this trend continues for the next 400 years (the reasons for believing that it will continue were discussed in section 3.7; Newman and Sagan (ref. 62) believe it will continue for the next 1000 years), then to an inhabitant of our own civilization at that future date, the cost of a low-velocity probe would be as difficult to raise as 10 million dollars today, and the cost of a high-velocity probe would be as difficult to raise as 70 billion dollars today. The former cost is easily within the ability of a large number of individuals today: there are at least 100,000 Americans who are worth 10 million dollars, and the Space Telescope project budget exceeds $10⁹. If the cost trend continues for the next 800 years, then the cost of a $3×10¹⁰ probe would be as difficult to raise as $4000 today; an interstellar probe would appear to cost as much then as a home computer does now. Tens of millions of people could afford one. In such a society, someone would almost certainly build and launch a probe. To maximize the speed of exploration and/or colonization, one must minimize [(d_av/v_es) + t_c], where d_av is the average distance between stars and t_c is the time needed for the von Neumann probe to reproduce itself. The time t_c will be much larger for v_es = 0.1c probes than for 10⁻⁴c probes. We would guess the minimum to be obtained for v_es = 5×10⁻²c and t_c = 100 years.
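The cost-equivalence figures and the [(d_av/v_es) + t_c] minimization can be checked numerically (a sketch; the 50-year time constant is the section 3.7 trend, while d_av = 5 lyr and a ~10⁵ lyr galactic crossing scale are assumptions consistent with the surrounding text):

```python
import math

# Exponential fall of real costs with a 50-year time constant
# (the section 3.7 trend), applied to the probe costs above.
TAU = 50.0  # years

def equivalent_cost_today(cost, years_ahead):
    # what raising `cost` after `years_ahead` years would feel like today
    return cost * math.exp(-years_ahead / TAU)

print(equivalent_cost_today(3e10, 400))  # low-velocity probe: ~1e7
print(equivalent_cost_today(2e14, 400))  # high-velocity probe: ~7e10
print(equivalent_cost_today(3e10, 800))  # after 800 yr: ~4e3

# Expansion-rate minimization: each colonization step takes
# d_av / v_es years of travel plus t_c years of probe reproduction.
def rate_lyr_per_yr(d_av_lyr, v_es_c, t_c_yr):
    return d_av_lyr / (d_av_lyr / v_es_c + t_c_yr)

fast = rate_lyr_per_yr(5.0, 5e-2, 100.0)  # ~2.5e-2 lyr/yr
slow = rate_lyr_per_yr(5.0, 3e-4, 100.0)  # ~3e-4 lyr/yr
print(1e5 / fast, 1e5 / slow)  # ~4e6 yr and ~3e8 yr to cross ~1e5 lyr
```

The two crossing times reproduce the 4 million year and 3×10⁸ year estimates quoted in the text.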
With d_av = 5 lyr, this gives a rate of expansion of 2.5×10⁻² lyr/yr, and thus the Galaxy could be explored in 4 million years. Here, we shall be conservative and assume only present-day rocket technology, which would give an expansion rate of 3×10⁻⁴ lyr/yr; such a rate would complete the exploration of the Galaxy in 3×10⁸ years. The travel time between stars will set the expansion rate provided d_av/v_es ≫ t_c, or t_c ≪ 10³ yr. This seems a reasonable condition when we compare von Neumann probes with the only highly intelligent, self-reproducing machines of our experience, namely human beings. In their natural environment humans have a t_c of 20-30 yr. If we compare a von Neumann probe to an entire technical civilization, then t_c ~ 300 yr, the time required to build up the United States into an industrial nation. Most of this time was required to develop not the hardware but rather the knowledge of which machines to build. Possessing the necessary knowledge, Germany and Japan rebuilt their industries in a decade after World War II, requiring only minor investments from outside. As for the t_c for
space industries, G. K. O'Neill estimates (ref. 30) that space colonies could be self-sufficient and able to make more colonies in less than a century. Such a rapid space colony construction rate might require a large initial investment from Earth, and this might correspond to a very large (i.e., expensive) probe payload. As before, an intelligent species can reduce the initial investment by building the initial probe small, but programming it to construct larger probes in the target systems. It seems unlikely that a Project Daedalus size payload (~10³ tons), which appears to have most of the essential equipment of a von Neumann probe, would require longer than 10⁶ yr to reach the large-scale-probe-making stage, and with this upper bound the above estimate for the time needed to explore the Galaxy is valid. For comparison, recall that modern man, Homo sapiens, has been in existence for about 4×10⁴ years (see Chapter 3). Once the exploration and/or colonization of the Galaxy has begun, it can be modelled quite closely by the mathematical theory of island colonization—a theory first developed by MacArthur and Wilson (refs 47, 48)—since islands in the ocean are closely analogous to stars in the heavens, and von Neumann probes are even more closely analogous to biological species. There are several general conclusions applicable to interstellar exploration and/or colonization which follow from the MacArthur-Wilson theory. First, there are two basic behavioural strategies, the r-strategy and the K-strategy, which could be adopted in different phases of the colonization (r is the net reproductive rate [per capita births minus deaths], and K is the carrying capacity of the environment). The r-strategy is one which emphasizes rapid reproduction.
It is used by species inhabiting a rapidly changing environment, or an environment in which it is crucial to exclude competitors by occupying niches as quickly as possible. Thus it seems likely that an r-strategy would be followed in the early stages of the colonization. The K-strategy, on the other hand, is the one followed by species inhabiting a slowly changing environment, or one in which the niches are already occupied by other members of the same species, and there is competition within this species for the occupied niches. We would therefore expect the K-strategy to be adopted after the solar system had been colonized for some time, and this strategy would result in fewer probes being sent to other stars. Second, the MacArthur-Wilson theory suggests (ref. 49) that the fraction of probes reaching a distance d from the system of launch is √(2/π) exp(−d²/2)/d. This means that even with random dispersal, probes would be expected to be sent not just to nearby solar systems, but also to far distant ones, though distant solar systems would be less likely targets than nearby ones. It is important to realize that the MacArthur-Wilson theory must be modified before it can be applied to the problem of interstellar
exploration/colonization. The MacArthur-Wilson theory assumes that the dispersal of colonizers is random, while the dispersal of von Neumann probes would be intelligently directed. The von Neumann probes can use radio waves to determine which nearby stars have already been reached by other probes, and launch descendant probes only at those stars which have not yet been reached; at least they can follow this strategy on the colonization frontier. Animal colonizers do not have an analogous ability to learn about uninhabited but habitable islands, and so they must use a random search strategy. This also means that a diffusion model (refs 50, 51) of interstellar colonization would not be completely accurate. Diffusion can be viewed as expansion against resistance, and there would be no resistance to the expansion of the volume of stars colonized by the von Neumann probes. In the case of the diffusion of gas molecules, the diffusing molecules collide with molecules of the ambient gas, and this leads (in the usual Brownian motion derivation of the one-dimensional diffusion equation) to an equally great probability of going backward as forward from a given collision site. Picture a one-dimensional array of collision points (stellar systems). The von Neumann probe at x_i would be programmed to send probes to all nearby unoccupied points (in the interval x_{i−r} to x_{i+r}, say), concentrating first on a probe to the point x_{i+1}, the nearest neighbour in the forward direction. (The probe will have a memory of having arrived from the point x_{i−1}, so the direction is defined.) If the reproductive failure rate of the probe at x_i is neglected, then with probability one the motion will be forward to x_{i+1}, x_{i+2}, etc., at a rate greater than or equal to one step per interval [(d_av/v_es) + t_c]. By adjusting r (that is, by adjusting the net probe reproduction rate), the effect of the failure rate can be cancelled out. This analysis can be immediately generalized to three dimensions.
The expansion speed in three dimensions would still be set by [(d_av/v_es) + t_c], at least in the later stages of expansion. (The earlier stages of expansion might be dominated by t_c, since there are more than two nearest neighbours. However, for t_c upper bounds like those given above, the timescale for expansion throughout the Galaxy would be dominated by the properties of its later stages.) In summary, we would expect the initial colonization of space to be much more like the free expansion of a gas into a vacuum than like the diffusion of one variety of gas through another, or the diffusion of a coloured liquid through a colourless liquid. Free expansion is much, much more rapid than diffusion. Subsequent colonization of a previously colonized region, if it occurs, could closely resemble diffusion, for there would be resistance to the colonization by the descendants of the first probes. But there is no reason to expect such interstellar imperialism. Indeed, if the probes are sent out for exploratory purposes, it would be pointless. Even if such imperialism
does occur, it would not change the fact that the colonization frontier would be expanding freely rather than diffusing. Furthermore, the existence of such imperialists would motivate the colonizers on the frontier to speed up their occupation of previously unoccupied solar systems, in order to prevent the imperialists from seizing them. The rapid conquest of central Africa in the late nineteenth century by the European powers was driven by such a motivation. Germany began occupying parts of one section of Africa, which previously no European nation cared to control. The other powers thereupon began their movement into this section in order to prevent the Germans from occupying it all. Another example would be the occupation of Oklahoma territory by settlers virtually overnight after the region was thrown open to settlement by the United States government. Since whoever first reached the land in Oklahoma owned it thereafter, there was a strong motivation to occupy it as rapidly as possible, and develop it afterwards. This is an instance of an initial r-strategy being replaced later by a K-strategy.
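The contrast drawn above between free expansion and diffusion can be illustrated with a toy one-dimensional lattice model (a Python sketch; the lattice, random seed, and step count are arbitrary illustrative choices, not taken from the text):

```python
import random

# Toy contrast between directed ('free') expansion and diffusive
# spread on a one-dimensional lattice of stellar systems.
random.seed(1)

STEPS = 10_000

# Directed expansion: the frontier probe always advances to the next
# unoccupied system, so the frontier moves one site per step.
directed_frontier = STEPS

# Diffusion: an unbiased walker is as likely to step backward as
# forward, so its net reach grows only like sqrt(number of steps).
walker, diffusive_reach = 0, 0
for _ in range(STEPS):
    walker += random.choice((-1, 1))
    diffusive_reach = max(diffusive_reach, walker)

print(directed_frontier, diffusive_reach)
```

After 10⁴ steps the directed frontier is at site 10⁴, while the diffusive reach is only of order √10⁴ = 100; this is the sense in which free expansion is much, much more rapid than diffusion.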

9.3 Upper Bounds on the Number of Intelligent Species in the Galaxy

Absence of evidence is not evidence of absence.
M. Rees

In most discussions, the probability that intelligent life which eventually attempts interstellar communication will evolve in a star system is expressed by the Drake equation (ref. 52):

P = f_p n_e f_l f_i f_c        (9.1)

where f_p is the probability that a given star system will have planets, n_e is the number of habitable planets in a solar system that has planets, f_l is the probability that life evolves on a habitable planet, f_i is the probability that intelligence evolves on a planet with life, and f_c is the probability that an intelligent species will attempt interstellar communication within 5 billion years after the formation of the planet on which it evolved. The time limit in f_c is only tacit in most discussions of extraterrestrial intelligence. However, some time period which is short compared with the age of the universe must be assumed if the Drake equation is to yield a number of existing civilizations significantly greater than one. If, for example, f_c were a Gaussian distribution with peak at 30 billion years and a standard deviation of ~

Figure 9.1. Penrose conformal diagram of the standard steady-state universe (ℐ⁺ and ℐ⁻ are future and past infinity respectively).
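Equation (9.1) is a simple product of factors, as the following sketch shows (the factor values and the 10¹¹ star count below are purely illustrative assumptions, not estimates made in the text):

```python
# The Drake product of equation (9.1): probability per star system
# that a communicating intelligent species arises. All factor values
# here are illustrative placeholders.
def drake_probability(f_p, n_e, f_l, f_i, f_c):
    return f_p * n_e * f_l * f_i * f_c

p = drake_probability(f_p=0.1, n_e=1.0, f_l=0.1, f_i=0.01, f_c=0.1)
N_STARS = 1e11  # rough number of stars in the Galaxy (assumption)
print(p, p * N_STARS)
```

With these placeholder factors P = 10⁻⁵, giving of order 10⁶ communicating species among ~10¹¹ stars; the surrounding chapter argues that several of these factors are in fact very much smaller.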

from Proposition 6.7.1 of ref. 91 that there is a timelike geodesic from any event of A to the event p. Now Ellis and Brundrit have shown that the assumption of homogeneity in a spatially infinite universe, when combined with the atomicity of matter, implies that all possible evolutionary histories must occur an infinite number of times with probability one. The argument is essentially the one used to prove recurrence in a discrete, finite Markov chain: if there are only a finite number of possibilities—and this follows from the homogeneity of the Universe and the atomicity of matter—then with an infinite amount of space (or time) each possibility has probability one of being realized an infinite number of times. A space-time satisfying the Perfect Cosmological Principle is homogeneous in both space and time, so the Ellis and Brundrit argument applies with double force: there must be an infinite number of evolutionary histories like A to the past of any point p in the space-time. Earlier in this chapter we have described a process whereby it is possible to travel from one star to another in a galaxy at a net speed comparable to that of light if the stars are far apart, while the initial investment in energy and money is quite small: construct and send out von Neumann probes. Since all possible intelligent species have evolved to the past of p, infinitely many species would have evolved which sent out von Neumann probes, or which otherwise colonized space. But we can go further. Since all possible evolutionary sequences have occurred to the past of p, one of these evolutionary sequences consists of the random
assembly, without the assistance of any intelligent species whatsoever, of a von Neumann probe out of the atoms of interstellar space. Such a random assembly would occur an infinite number of times to the past of p, by homogeneity and stationarity in an infinite universe. At least one of these randomly assembled probes would have the motivations of a living being, that is, to expand and reproduce without limit. Natural selection acting on this probe and its descendants—such descendants can be regarded as comprising a living intelligent species—would ensure that this mechanical lineage would expand to occupy all 'ecological' niches available to it. In so expanding the descendants may split into many 'species', but once the expansion begins, at least one lineage would continue to exist and expand. Since the probes are intelligent, some would realize that distant galaxies constitute available ecological niches, and since it would be possible for them to construct probes like themselves which could travel intergalactic distances of arbitrary length along a curve which is very close to any given timelike geodesic, one concludes that natural selection would impel some descendant probes to do so. Because in an infinite steady-state universe some such events would lie to the past of p, these probes should have already arrived at p, and should be using the material at p to construct more probes. In effect, the probes would have colonized the region around p. Since p is any event, we obtain a contradiction with the fact that our solar system has not been colonized. The entire above argument is just a systematic use of the Perfect Cosmological Principle, which means this Principle is self-contradictory: the assumption that intelligent beings can evolve implies, with this Principle, that they never can evolve—they must already be everywhere.
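The finite-Markov-chain recurrence invoked in this argument can be illustrated numerically (a Python sketch; the five-state uniform chain, seed, and step count are arbitrary illustrative choices):

```python
import random

# Toy illustration of recurrence in a finite Markov chain: with a
# finite state space and every state reachable at each step, every
# state keeps being revisited as the number of steps grows.
random.seed(0)

N_STATES = 5       # an arbitrary small, finite set of 'possibilities'
STEPS = 100_000
visits = [0] * N_STATES

state = 0
for _ in range(STEPS):
    state = random.randrange(N_STATES)  # uniform transition probabilities
    visits[state] += 1

print(visits)  # every state is visited roughly STEPS / N_STATES times
```

With finitely many states, each state is revisited indefinitely as the number of steps grows; this is the analogue of each possible configuration recurring in an infinite homogeneous universe.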
In fact, the paradox that extraterrestrial intelligent beings ought to have arrived in our solar system long ago, but did not, is as astounding as Olbers' paradox, which it closely resembles. Both paradoxes follow from assumptions of homogeneity in space and time. Olbers' paradox can be resolved in a steady-state universe by the redshift, but the expansion will not reduce the effective speed of probes, since the expansion-caused slowing of the intergalactic probes with respect to the fundamental frames can be cancelled by using the matter in galaxies encountered to re-accelerate the probe. The above argument can be extended to any cosmology which is stationary in the large, since the infinite past during which the Universe is locally evolutionary would give rise to the von Neumann probes used to reach a contradiction in the steady-state cosmology case. Davies has given a related argument against the Ellis et al. cosmology, as mentioned above. In addition, the above argument rules out the chronometric cosmology of Segal. The chronometric cosmos is a globally static cosmology with topology S³ × R¹. Just as in the Einstein static universe, it is possible to travel via rocket from any one spatial point to any other spatial point in
finite time as measured by a physical clock on the rocket. The above argument thus still applies. The most important steady-state cosmologies discussed at the present time are those based on inflation. We discussed the basic inflationary mechanism in Chapter 6: when the density and temperature of matter are very high in the very early universe, the expansion of the universe is driven by a non-zero vacuum energy density (equivalent to a primordial positive cosmological constant), which is later cancelled by a spontaneous symmetry-breaking phase transition occurring when the density and temperature drop sufficiently far. In most inflation models, the expansion is envisaged as beginning at an initial cosmological singularity, as in the Standard Model. However, such a beginning is not strictly required by the mathematics of the inflation model. In fact, during the inflationary phase the evolution equations are essentially the same as the equations for the steady-state universe, so it is possible to regard the phase transition which terminates the inflation as generating a 'bubble' within which the entire visible universe is located. Outside the walls of this bubble the metric is that of the steady-state universe. Thus, in this model, the Universe in the large is steady-state. In the steady-state region—that is, in the region of space-time outside the bubble—the matter density is only a few orders of magnitude less than the Planck density of 5×10⁹³ gm/cm³, and the dominant term in the Einstein equations is the vacuum energy term. The visible universe is then just a tiny bubble of evolving matter in a Universe which is changeless in the large. There may be other bubble universes in this steady-state Universe, but they comprise only an infinitesimal fraction of the volume of the whole. Narlikar has recently argued that there is no essential difference between the final version of the steady-state theory, defended by himself and Hoyle, and the inflationary steady-state model.
From the point of view of the global causal structure, there are two basic types of bubble universe which can form in the inflationary version of the steady-state Universe: 'open universe' bubbles and 'closed universe' bubbles. The open universe bubbles have been discussed extensively by Gott. Their spatial sections have negative or zero curvature, and their walls expand indefinitely at the speed of light. Although finite in spatial extent at any given time, the volume of an open bubble becomes infinite in infinite time. The causal structure of a steady-state universe with infinitely many non-intersecting open bubbles is pictured in Figure 9.2. As seen in this figure, the different bubbles are forever out of causal contact with each other; evolution proceeds in each as if the others did not exist. However, non-intersecting open bubbles are actually inconsistent with the steady-state universe, which is homogeneous in space and time. If the
Figure 9.2. Penrose conformal diagram for the global causal structure of a steady-state universe with an infinite number of open bubbles. The bubbles come into existence at the events labelled E. The walls of the bubbles are labelled W; these walls expand at the speed of light, reaching spacelike infinity at an infinite time in the future. In the future of the bubbles, ℐ⁺ becomes null, but it is timelike elsewhere, as it is in the standard steady-state model: outside the bubbles, the space-time is the same as that of the standard steady-state universe pictured in Figure 9.1. The point labelled i⁺ in each bubble is the future end-point of all timelike geodesics inside that bubble. All bubbles are to the future of the event p: the boundary of the past light-cone J⁻(p) of p is represented by a dotted line.
Universe were truly steady-state, we would expect a constant probability per unit time of bubble formation on the timelike geodesics which are normal to the spacelike hypersurfaces of global homogeneity. But as we saw in our discussion of the standard steady-state universe, all timelike curves must intersect the past light-cone of any event p on the world line of the origin of spatial coordinates. Thus, if the
inflationary steady-state universe were truly steady-state, there must be a bubble in the past light-cone of such an event p, which contradicts the causal structure pictured in Figure 9.2. (Gott was himself aware of this difficulty with open bubbles in a steady-state universe.) Closed bubbles do not suffer from this problem, for a closed bubble would evolve like a closed universe: it would be formed in a phase transition, expand to a maximum size and then re-contract to a high density. Eventually the bubble walls would intersect and the bubble universe would disappear. Thus, there could be an infinite number of bubbles in the light-cone of an event p, because these bubbles would have formed and disappeared long ago. The causal structure of a steady-state inflation model with only closed bubbles is identical to the causal structure of the standard steady-state model, which is pictured in Figure 9.1. Thus, the closed bubble model is open to the same SAP objection levelled at the standard steady-state universe. We would expect intelligent life to evolve in at least some of the bubbles. These intelligent beings would die out when their bubble disappears if they are restricted to the bubble in which they evolve. Therefore, if it is possible for an intelligent species to escape its bubble of origin—that is, if it is possible for the species to develop a means to travel in the steady-state region—we would expect at least one such species in the past of p to do so, and indeed to expand to the region containing p. This SAP objection is much weaker in the inflation steady-state universe situation than it is in the standard steady-state universe model, for it is far from clear that it is possible to develop technology which will allow intelligent life to exist and travel in the steady-state region: the density and temperature in this region are near the corresponding Planck magnitudes. 
We shall present arguments in Chapter 10 that it is actually possible for intelligent 'life' to exist in such high-density and high-temperature regimes, but this cannot be regarded as an established fact by any stretch of the imagination. However, this possibility must be taken into account in any steady-state theory based on closed bubbles: such a theory cannot be regarded as true unless it is shown that it is impossible for intelligent life, no matter how advanced, to leave its bubble of origin. If FAP holds, then it must be possible for intelligent life to leave a closed bubble, for by FAP, intelligent life cannot disappear once it comes into existence. This argument will become clearer once the FAP is defined precisely in Chapter 10. The arguments presented in this chapter complement the earlier arguments we presented in Chapters 3 and 8 regarding the improbability of other forms of local (that is, within range of communication with us) extraterrestrial life. We have developed the general theory of exploration and colonization using self-reproducing robots, the theory of whose
existence is already known to us although at present we lack the level of computer technology to implement it in practice. This theory was then used to demonstrate the ease with which advanced Galactic civilizations could reveal their presence, and the difficulty they would have in concealing it. These arguments are based upon technological considerations and an analysis of the collective features necessary to support an advanced technological civilization. Finally, we have demonstrated how the non-observation of other life-forms in our Galaxy allows one to rule out a large class of otherwise quite possible steady-state cosmologies having infinite ages.

References

1. F. J. Tipler, Quart. J. R. astron. Soc. 21, 267 (1981); Quart. J. R. astron. Soc. 22, 279 (1981); Physics Today 34 (No. 4, April), 9 (1981); Mercury 11 (No. 1, Jan.), 5 (1982); New Scient. 96 (No. 1326, 7 Oct.), 33 (1982); Physics Today 35 (No. 3, March), 26 (1982); Science 219, 110 (1983); Discovery 4 (No. 3, March), 56 (1983). A short history of the belief in extraterrestrial intelligence can be found in F. J. Tipler, Quart. J. R. astron. Soc. 22, 133 (1981). The belief in ETI is closely interwoven with the Design Arguments.
2. I. S. Shklovskii and C. Sagan, Intelligent life in the Universe (Dell, NY, 1966).
3. F. D. Drake, Intelligent life in space (Macmillan, NY, 1960).
4. P. Morrison, in Interstellar communication: scientific perspectives, ed. C. Ponnamperuma and A. G. W. Cameron (Houghton Mifflin, Boston, 1974).
5. T. Dobzhansky, Perspectives in Biology and Medicine 15, 157 (1972); Genetic diversity and human equality (Basic Books, NY, 1973), pp. 99-101.
6. G. G. Simpson, This view of life (Harcourt, Brace & World, NY, 1964), Chapters 12 and 13.
7. F. Jacob, Science 196, 1161 (1977); see also W. D. Matthew, Science 54, 239 (1921).
8. T. Dobzhansky, F. J. Ayala, G. L. Stebbins, and J. W. Valentine, Evolution (Freeman, San Francisco, 1977).
9. E. Mayr, Scient. Am. 239 (No. 3, Sept.), 46 (1978).
10. E. Fermi, quoted on p. 495 of C. Sagan, Planet. Space Sci. 11, 485 (1963).
11. F. J. Dyson, in Perspectives in modern physics: essays in honor of Hans A. Bethe, ed. R. E. Marshak (Wiley, NY, 1966).
12. M. H. Hart, Quart. J. R. astron. Soc. 16, 128 (1975).
13. T. B. H. Kuiper and M. Morris, Science 196, 616 (1977).
14. P. J. Klass, UFOs explained (Random House, NY, 1974).
15. D. H. Menzel and E. H. Taves, The UFO enigma (Doubleday, Garden City, NY, 1977).
16. Ref. 2, Chapter 25.
17. C. Sagan, The dragons of Eden (Ballantine, NY, 1977), p. 239.

18. Project Cyclops (Report CR 114445, NASA Ames Research Center, Moffett Field, California, 1971).
19. The search for extraterrestrial intelligence: SETI (NASA report SP-419, 1977).
20. D. Michie, Nature 241, 508 (1973).
21. O. Firschein, M. A. Fischler, and L. S. Coles, in Third International Joint Conference on Artificial Intelligence (Stanford University, 1973). This reference actually gives the opinions of leading computer scientists as to when computers with human-level intelligence and manipulative ability will be manufactured. This technology seems to be roughly comparable to von Neumann machine technology, so we use this number as our estimate for how long it will be before we develop von Neumann probes. No explicit mention was made of self-reproducing machines in refs 20, 21, or 22. However, G. von Tiesenhausen and W. A. Darbo have claimed in NASA Technical Memorandum TM-78304 (July, 1980) that self-reproducing space robots could be developed in only 20 years. See also Advanced automation for space missions, ed. R. A. Freitas, Jr., and W. P. Gilbreath (NASA Conference Publication 2255, Washington, 1982).
22. M. Minsky, in Communication with extraterrestrial intelligence, ed. C. Sagan (MIT Press, Cambridge, Mass., 1973), p. 160.
23. J. von Neumann, Theory of self-reproducing automata, ed. and completed by A. W. Burks (University of Illinois Press, Urbana, 1966).
24. M. A. Arbib, Theories of abstract automata (Prentice-Hall, Englewood Cliffs, NJ, 1969); see also the Arbib article in Ponnamperuma and Cameron, ref. 4.
25. E. Stuhlinger, Ion propulsion for space flight (McGraw-Hill, NY, 1964).
26. J. L. Wright and J. M. Warmke, 'Solar sail mission applications', JPL preprint 76-808, AIAA/AAS 1976 San Diego Astrodynamics Conference.
27. C. R. Chapman, Scient. Am. 232 (No. 1), 24 (Jan. 1975); B. J. Skinner, Am. Scient. 64, 258 (1976); D. W. Hughes, Nature 270, 558 (1977).
28. H. A. Abt, Scient. Am. 236 (No. 4), 96 (April 1977); A. H. Batten, Binary and multiple systems of stars (Pergamon Press, NY, 1973); S. A. Dole, Habitable planets for man (Blaisdell, NY, 1964).
29. J. W. Truran and A. G. W. Cameron, Astrophys. Space Sci. 14, 179 (1971); A. G. W. Cameron in ref. 4.
30. G. K. O'Neill, Physics Today 27, 32 (Sept. 1974); Science 190, 943 (1975); The high frontier (Morrow, NY, 1977).
31. C. C. Price, Chem. Eng. News 43, 90 (27 Sept. 1965); Synthesis of life, ed. C. C. Price (Dowden, Hutchinson & Ross, Stroudsburg, Pa., 1974), pp. 284-6.
32. J. F. Danielli, Bull. Atom. Scient. (Dec. 1972), pp. 20-4 (also in C. C. Price, ref. 31); K. W. Jeon, I. J. Lorch, and J. F. Danielli, Science 167, 1626 (1970).
33. C. Grobstein, Scient. Am. 240 (No. 6), 57 (June 1979).
34. L. Eiseley, The invisible pyramid (Scribner's, NY, 1970), pp. 78-80.
35. F. H. C. Crick and L. E. Orgel, Icarus 19, 341 (1973).
36. P. H. A. Sneath, Nature 195, 643 (1962).
37. M. Seibert, Science 191, 1178 (1976); In Vitro 13, 194 (1977).

38. E. G. Cravalho, Technol. Rev. 78 (No. 1), 30 (Oct. 1975).
39. A. S. Parkes, Sex, science, and society (Oriel Press, London, 1965).
40. M. W. Hunter, II, in AAS Science & Technology Series 17, 541 (1967).
41. G. W. Morgenthaler, Ann. NY Acad. Sci. 163, 559 (1969).
42. M. R. Helton, Jet Propulsion Laboratory Inter-office Memorandum 312/774-173 (21 June 1977).
43. A. Bond et al., Project Daedalus, special supplement of J. Br. Interplan. Soc. (1978).
44. Ref. 19, p. 108.
45. F. J. Dyson, Ann. NY Acad. Sci. 163, 347 (1969); see also D. F. Spencer and L. D. Jaffe, Jet Propulsion Laboratory preprint #32-233 (1962).
46. F. J. Dyson, quoted in A. Berry, The next thousand years (New American Library, NY, 1974), p. 125.
47. R. H. MacArthur and E. O. Wilson, The theory of island biogeography (Princeton University Press, Princeton, 1967).
48. E. O. Wilson, Sociobiology (Harvard University Press, Cambridge, Mass., 1975).
49. Ref. 48, p. 105.
50. E. M. Jones, J. Br. Interplan. Soc. 31, 103 (1978).
51. W. I. Newman and C. Sagan, Icarus 46, 293 (1981).
52. C. Sagan (ed.), Communication with extraterrestrial intelligence (MIT Press, Cambridge, Mass., 1973); T. L. Wilson, Quart. J. R. astron. Soc. 25, 435 (1984).
53. V. Trimble, Rev. Mod. Phys. 47, 877 (1975).
54. J. Audouze and B. M. Tinsley, Ann. Rev. Astron. & Astrophys. 14, 43 (1976).
55. A. A. Penzias, Comm. Astrophys. 8, 19 (1978).
56. J. C. Browne and B. L. Berman, Nature 262, 197 (1976).
57. S. van den Bergh, Quart. J. R. astron. Soc. 25, 137 (1984).
58. R. J. Talbot, Astrophys. J. 189, 209 (1974); R. J. Talbot and W. D. Arnett, Astrophys. J. 186, 51 (1973).
59. D. C. Barry, Nature 268, 509 (1977).
60. Ref. 18, p. 25.
61. J. G. Kreifeldt, Icarus 14, 419 (1971).
62. C. Sagan and W. I. Newman, Quart. J. R. astron. Soc. 24, 113 (1983).
63. The timescale t_G is 7.5 x 10^8 years if the diffusion equation is discretized, in order to take into account the fact that the stars are not next to each other, but are separated by distances of ~1 pc. If the diffusion equation is not discretized, then t_G ~ 10^10 years, a number which would invalidate our argument. Newman and Sagan stick by the 7.5 x 10^8 yr figure (see ref. 62).
64. E. A. Feigenbaum and P. McCorduck, The fifth generation (Addison-Wesley, London, 1983).
65. R. N. Bracewell, Nature 186, 670 (1960); repr. in The search for extraterrestrial life, ed. A. G. W. Cameron (Benjamin, NY, 1963).
66. R. N. Bracewell, The galactic club (Freeman, San Francisco, 1975).
67. Ref. 19, p. 108.
68. W. H. Pickering, Popular Astronomy 17, 495 (1909); reprinted in Mars (Gorham Press, Boston, 1921). F. Galton even worked out a code which is similar to those of the SETI proposals; see Fortnightly Rev. 66, 657 (Nov. 1896).
69. C. Sagan, quoted in Technol. Rev. 79 (No. 6, May), 14 (1977).
70. T. Dobzhansky, Genetics of the evolutionary process (Columbia University Press, NY, 1970), p. 278.
71. S. E. Morison, Portuguese voyages to America in the fifteenth century (Octagon, NY, 1965), pp. 11-15.
72. K. Davies, Scient. Am. 231 (No. 3, Sept.), 92 (1974).
73. E. Mayr, Populations, species, and evolution (Harvard University Press, Cambridge, Mass., 1970), p. 48.
74. R. M. May, Scient. Am. 239 (No. 3, Sept.), 160 (1978).
75. J. A. Ball, Icarus 19, 347 (1973).
76. F. J. Dyson, Science 131, 1667 (1960).
77. M. D. Papagiannis has independently suggested that the asteroid belt would be the most likely place to search for extraterrestrial industrial activities; see Quart. J. R. astron. Soc. 19, 227 (1978).
78. F. J. Low and H. J. Johnson, Astrophys. J. 139, 1130 (1964).
79. N. Calder, 1984 and beyond (Viking, NY, 1984).
80. P. A. Samuelson, Economics (McGraw-Hill, NY, 1964).
81. H. Bondi, Cosmology (Cambridge University Press, Cambridge, 1961).
82. C. Sagan, in UFOs—a scientific debate, ed. C. Sagan and T. Page (Norton, NY, 1972), p. 272. The extent to which this motivation is present among the supporters of SETI, in particular Drake and Sagan, is discussed in F. J. Tipler, Quart. J. R. astron. Soc. 22, 279 (1981).
83. F. Hoyle, Mon. Not. R. astron. Soc. 109, 365 (1949).
84. F. Hoyle, Astrophys. J. 196, 661 (1975).
85. C. Sagan, Discovery 4 (No. 3, March), 30 (1983).
86. Encyclopaedia Britannica, Vol. 2 (Benton, Chicago, 1967), p. 936.
87. J. V. Narlikar, Pramana 2, 158 (1974).
88. G. F. R. Ellis, R. Maartens, and S. Nel, Mon. Not. R. astron. Soc. 184, 439 (1978).
89. G. F. R. Ellis, Gen. Rel. Gravn 9, 87 (1978).
90. P. C. W. Davies, Nature 273, 336 (1978).
91. S. W. Hawking and G. F. R. Ellis, The large-scale structure of space-time (Cambridge University Press, Cambridge, 1973).
92. G. F. R. Ellis and G. B. Brundrit, Quart. J. R. astron. Soc. 20, 37 (1979).
93. F. J. Tipler, 'General relativity and the eternal return', in Essays in general relativity: a festschrift for Abraham H. Taub, ed. F. J. Tipler (Academic Press, NY, 1980).
94. I. E. Segal, Mathematical cosmology and extragalactic astronomy (Academic Press, NY, 1976).
95. I. E. Segal, private communication to FJT.
96. G. O'Neill, private communication to FJT.
97. R. A. Freitas, Jr., J. Brit. Interplan. Soc. 33, 95 (1980).

98. R. A. Freitas, Jr., J. Brit. Interplan. Soc. 33, 251 (1980).
99. F. Valdes and R. A. Freitas, Jr., J. Brit. Interplan. Soc. 33, 402 (1980).
100. Encyclopedia Britannica, Vol. 12 (Benton, Chicago, 1967), p. 4.
101. I. S. Shklovskii, quoted in Astronomy 5, 56 (Jan. 1977).
102. J. V. Narlikar, J. Astrophys. Astron. 5, 67 (1984).
103. J. R. Gott, Nature 295, 304 (1982); an analysis of this article appeared in Science 215, 1082 (1982).
104. E. P. Tryon, Nature 246, 396 (1973).
105. Ya. B. Zel'dovich and L. P. Grishchuk, Mon. Not. R. astron. Soc. 207, 23P (1984); J. D. Barrow and F. J. Tipler, Mon. Not. R. astron. Soc. 216, 395.
106. A. D. Linde, Rep. Prog. Phys. 47, 925 (1984).
107. The steady-state universe is often presented as a singularity-free cosmological model. However, as S. W. Hawking and G. F. R. Ellis point out in The large-scale structure of space-time, the steady-state universe is actually singular in the sense that all the null geodesics are incomplete in the past direction. A straightforward calculation shows that a necessary and sufficient condition for the past completeness of null geodesics in a Friedman universe is

$\int_{t_0} R(t)\,dt = \infty$,

where $t_0$ is the lower limit of the length of the timelike geodesics normal to the hypersurfaces of homogeneity and isotropy ($t_0 = -\infty$ if these geodesics are complete), and R(t) is the usual scale factor of the Friedman universe. Since R(t) = exp[Ht] in the steady-state universe, the above integral is finite.
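The finiteness asserted in ref. 107 can be checked in one line; the following sketch is in our own notation ($t_1$ is an arbitrary finite time, and the proportionality of affine parameter to $\int R\,dt$ is the standard result for null geodesics of a Friedman metric):

```latex
% Along a past-directed null geodesic of a Friedman universe, the affine
% parameter accumulated between t_0 and t_1 satisfies
%     \Delta\lambda \;\propto\; \int_{t_0}^{t_1} R(t)\,dt ,
% so past completeness (\Delta\lambda \to \infty) requires this integral
% to diverge as t_0 \to -\infty.  In the steady-state universe,
%     R(t) = e^{Ht}, \qquad -\infty < t < \infty ,
% and therefore
\int_{-\infty}^{t_1} e^{Ht}\,dt \;=\; \frac{e^{Ht_1}}{H} \;<\; \infty ,
% i.e. every past-directed null geodesic has finite affine length:
% the steady-state universe is null geodesically incomplete.
```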

10 The Future of the Universe

Some say the world will end in fire,
Some say in ice.
From what I've tasted of desire
I hold with those who favor fire.
Robert Frost

10.1 Man's Place in an Evolving Cosmos

We need scarcely add that the contemplation in natural science of a wider domain than the actual leads to a far better understanding of the actual.
A. S. Eddington

When we investigate the relationship between intelligent life and the Cosmos, one fact stands out at the present time: there is no evidence whatsoever of intelligent life having any significant effect upon the Universe in the large. As we have discussed at length in earlier chapters, the evidence is very strong that intelligent life is restricted to a single planet, which is but one of nine circling a star which itself is only one of about 10^11 stars in the Galaxy, and our Galaxy is but one of some 10^12 galaxies in the visible universe. Indeed, one of the seeming implications of science as it has developed over the past few centuries is that mankind is an insignificant accident lost in the immensity of the Cosmos. The evolution of the human species was an extremely fortuitous accident, one which is unlikely to have occurred elsewhere in the visible universe. It has appeared to most philosophers and scientists over the past century that mankind is forever doomed to insignificance. Both our species and all our works would disappear eventually, leaving the Universe devoid of mind once more. This world view was perhaps most eloquently stated by Bertrand Russell in the passage we quoted in Section 3.7, but the same sentiments have recently been expressed by the Nobel-prize-winning physicist Steven Weinberg in his popular book on cosmology, The First Three Minutes:

It is almost irresistible for humans to believe that we have some special relation to the universe, that human life is not just a more-or-less farcical outcome of a chain of accidents reaching back to the first three minutes [of the Universe's existence], but that we were somehow built in from the beginning... It is very hard to realize that [the entire earth] is just a tiny part of an overwhelmingly hostile universe. It is even harder to realize that this present universe has evolved from an unspeakably unfamiliar early condition, and faces a future extinction of endless cold or intolerable heat. The more the universe seems comprehensible, the more it also seems pointless. [1]

These ideas neglect to consider one extremely important possibility: although mankind—and hence life itself—is at present confined to one insignificant, doomed planet, this confinement may not be perpetual. Bertrand Russell wrote his gloomy lines at the turn of the century, and at that time space travel was viewed as an impossibility by almost all scientists. But we have landed men on the Moon. We know space travel is possible. We argued in Chapter 9 that even interstellar travel is possible. Thus once space travel begins, there are, in principle, no further physical barriers to prevent Homo sapiens (or our descendants) from eventually expanding to colonize a substantial portion, if not all, of the visible Cosmos. Once this has occurred, it becomes quite reasonable to speculate that the operations of all these intelligent beings could begin to affect the large-scale evolution of the Universe. If this is true, it would be in this era—in the far future near the Final State of the Universe—that the true significance of life and intelligence would manifest itself. Present-day life would then have cosmic significance because of what future life may someday accomplish. One can draw an analogy with the geological effect of life upon the Earth. At the dawn of life, some four billion years ago, living beings were nothing more than simple biochemical machines capable of self-reproduction. When the machines formed, they were originally restricted (as far as we can tell) to a small, insignificant portion of the Earth's surface. A being from another world who happened to observe the Earth at this time would not have noticed their presence, nor seen any effect of their presence on the geological evolution of the Earth. As time went on, however, these living creatures increased their numbers exponentially. A significant fraction of the carbon available on the surface of the Earth was incorporated into living bodies.
A photosynthetic ability evolved, and plants with this ability began to release oxygen into the atmosphere. As a consequence of this action by green plants, 21% of the present-day atmosphere is now oxygen. Had plants never supplied the atmosphere with oxygen, our planetary atmosphere would probably closely resemble the atmosphere of Venus: 95% carbon dioxide and 5% nitrogen. As we discussed in section 8.7, an oxygen atmosphere such as ours is intrinsically unstable, and the Earth's atmosphere would revert to a Venus-like atmosphere in the absence of the constant action of plants. Life has transformed the global atmosphere of the Earth on such a scale that the effect of life on the Earth (or at least on its atmosphere) could be recognized as such by an observer far outside the Solar System.

We can view the action of intelligent life on the entire Universe in a similar fashion. A species capable of rapid technological innovation has existed in the Universe for only about 40,000 years. This species has just begun to take the first, faltering steps to leave its place of origin. In the time to come, it and its descendant species could conceivably change structural features of the Universe. To say that intelligent life has some global cosmological significance is to say that intelligent life will someday begin to transform and continue to transform the Universe on a cosmological scale. What we wish to discuss now is the question of what the Universe must be like in order for this to be possible. As our discussion of dysteleology in section 3.7 and Weinberg's remarks make abundantly clear, until recently scientists did not believe the physical laws could ever permit intelligent life to act on a cosmological scale. In part this belief is based on the notion that intelligent life means human life. Weinberg points out that the ultimate future of the Universe involves great cold or great heat, [2,3] and that human life—the species Homo sapiens—cannot survive in either environment. We must agree with him. The ultimate state of the Universe appears to involve one of these environments, and thus Homo sapiens must eventually become extinct. This is the inevitable fate of any living species. As Darwin expressed it in the concluding pages of the Origin of Species: [4]

Judging from the past, we may safely infer that not one living species will transmit its unaltered likeness to a distant futurity.

But though our species is doomed, our civilization and indeed the values we care about may not be. We emphasized in Chapters 8 and 9 that from the behavioural point of view intelligent machines can be regarded as people. These machines may be our ultimate heirs, our ultimate descendants, because under certain circumstances they could survive forever in the extreme conditions near the Final State. Our civilization may be continued indefinitely by them, and the values of humankind may thus be transmitted to an arbitrarily distant futurity. But before discussing under what conditions this might be possible, it will prove instructive to review briefly the reasons which were given to justify the idea that all intelligent life must become extinct.

10.2 Early Views of the Universe's Future

Cosmology, since it is the outcome of the highest generality of speculation, is the critic of all speculation inferior to itself in generality.
A. N. Whitehead


The final state of the Universe and mankind's role in the Universe have throughout history been important topics of speculation for both philosophers and scientists. Final state scenarios seem to be based on one of three types of cosmological model. Unchanging Models claim that the Universe does not change in the large. Cyclic Models assert that the Universe undergoes a never-ending cycle of growth and decay, analogous to the human life-cycle. Evolving Models claim the Universe continuously evolves from some original state, and will never repeat a previous state. In unchanging models there is no initial or final state; one time is the same as any other. When Einstein constructed his first cosmological model in 1917, he assumed that the Universe was of this class. In the large, Einstein's model was static; that is, the galaxies did not move systematically relative to one another. Unfortunately, it was shown by Lemaitre and his mentor Eddington that this static model is unstable. A slight perturbation would cause it to expand or contract, thereby converting it into a model of the third type. The next attempt to construct an unchanging cosmology was made in 1948 by Hermann Bondi, Thomas Gold, and Fred Hoyle. This cosmology was termed the steady-state theory. In this model the galaxies were pictured as moving apart according to the usual Hubble law, but the average density of galaxies in the Universe was kept constant by the continuous creation of primordial matter in intergalactic space. This material would then condense to form galaxies. The galaxies thus formed would evolve, eventually ending their existence as a collection of burnt-out stars. Thus, although the galaxies would undergo a birth and death cycle, the cosmos as a whole would retain the same aspect. At any time, the Universe would contain the same percentage of young, middle-aged, and dead galaxies.
The steady-state theory enjoyed widespread support among cosmologists in the 1950's, but as we pointed out in Chapter 6, it is generally considered to have been ruled out by the observation of the microwave background radiation. This radiation indicates that the visible universe was at one time much hotter and much denser than it is today. It is possible to retain a belief in the steady-state theory only if one is willing to assume that the visible universe is just a very small atypical portion of the entire Universe. Just as in the original version of the steady-state theory the galaxies were pictured as evolving and changing entities in a much larger structure which does not undergo any overall net change, so to defend this steady-state picture today we must picture the entire visible universe, which is that portion of the Universe within a Hubble distance (~10^10 light years) of us, as an evolving 'bubble' within a much larger Universe. Although 'bubbles' would be born and then decay, the Universe as a whole would not undergo any net change. This idea of a 'Universe of bubbles' was first put forward by Hoyle and his student Narlikar on the basis of their philosophical belief in an
unchanging universe, but it has recently been independently invented by particle physicists who have been studying the implications of Grand Unified Theories (GUTs) for cosmology. In relativistic cosmology, a steady-state model will be the consequence of the following assumptions: (1) the universe is spatially homogeneous and isotropic on a sufficiently large scale, (2) the evolution is dominated by a positive cosmological constant term in the field equations, and (3) the universe has the spatial topology R^3. Now GUTs strongly suggest that in the visible universe spontaneous symmetry-breaking should have given rise to an effective negative cosmological constant which is some fifty-seven orders of magnitude larger than is permitted by observation. This means that there must be an enormously large positive cosmological constant which will cancel out the negative cosmological constant generated by spontaneous symmetry-breaking. If this spontaneous symmetry-breaking does not act over the entire universe, but just in localized bubbles, then the evolution of the Universe as a whole will be dominated by the positive cosmological constant, which means that in the large, the Universe will be a steady-state cosmos. In section 9.5 we have seen one way in which Anthropic arguments can rule out such a steady-state cosmos, and we shall point out other Anthropic objections to such a scenario.

Until the advent of relativistic cosmology in the twentieth century, most scientific discussions of the future of the Universe were based on evolving models incorporating the concept of a 'Heat Death' of the Universe, which we discussed in section 3.7. It was difficult, in the context of nineteenth-century physics, to criticize the prediction that the Universe would end in a Heat Death. We have mentioned a few rather weak and inconclusive criticisms in Chapter 3. The most powerful argument that could be directed against it using only classical thermodynamics was first propounded in 1914 by the French thermodynamicist and philosopher of science Pierre Duhem: [5]
The deduction [of the Heat Death from the Second Law of thermodynamics] is marred in more than one place by fallacies. First of all, it implicitly assumes the assimilation of the universe to a finite collection of bodies isolated in a space absolutely void of matter; and this assimilation exposes one to many doubts. Once this assimilation is admitted, it is true that the entropy of the universe has to increase endlessly, but it does not impose any lower or upper limit on this entropy; nothing then would stop this magnitude from varying from −∞ to +∞ while the time itself varied from −∞ to +∞; then the allegedly demonstrated impossibilities regarding an eternal life for the universe would vanish.

We shall see below that both effects discussed by Duhem operate in general relativity to prevent a Heat Death from occurring in a relativistic cosmology. First of all, we take a relativistic cosmology that is assumed at present to be roughly homogeneous and isotropic, and to be either open or closed. If it is open, then there is an infinite amount of non-gravitational energy available now. If it is closed, then the relativistic analogue of the conservation of energy equation, namely $T^{ab}{}_{;b} = 0$, implies that the total energy, which is the sum of the gravitational and non-gravitational energies and which can be written as a volume integral over the three-sphere corresponding to space at a given time, is trivially zero. This result can be interpreted either as saying that the conservation of energy law is 'transcended globally' (Wheeler prefers this interpretation [6]) or that the gravitational and non-gravitational energies in a closed universe are always equal in magnitude but opposite in sign (York prefers this interpretation, [7] and Penrose's new definition of mass supports this interpretation [8,9]). In either interpretation the law of energy conservation places no restrictions on continued entropy generation in a closed universe. In the Penrose-York interpretation, available free energy can always be increased without limit by increasing the magnitude of the gravitational energy without limit. We shall see below in our analysis of life in a closed universe that this is possible; in effect, gravitation is the ultimate source of energy. In spite of earlier cautions, the notion of a Heat Death dominated thought at the end of the nineteenth century, as we discussed in Chapter 3. The discovery of the expanding universe in the early part of the twentieth century changed the picture of the Heat Death slightly; but, as developed by the British astrophysicists Jeans and Eddington, relativistic cosmology in the form of a universe which expands forever would still end in a type of Heat Death. As Eddington asserted in 1931: [10]

It used to be thought that in the end all the matter of the Universe would collect into one rather dense ball at uniform temperature; but the doctrine of the spherical space, and more especially the recent results as to the expansion of the Universe, have changed that... It is widely thought that matter slowly changes into radiation. If so, it would seem that the Universe will ultimately become a ball of radiation growing ever larger, the radiation becoming thinner and passing into longer and longer wave lengths. About every 1,500 million years it will double its radius, and its size will go on expanding in this way in geometrical progression forever.
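Eddington's figure of a radius doubling 'about every 1,500 million years' corresponds to exponential (de Sitter) expansion, R(t) ∝ 2^(t/T), i.e. a constant expansion rate H = ln 2 / T. A quick consistency check of the rate this implies (our arithmetic, not the authors'; only standard unit conversions are assumed):

```python
import math

# Eddington's doubling time for the radius of the universe:
T_DOUBLE_YR = 1.5e9                      # 1,500 million years

# Exponential growth R(t) = R0 * 2**(t/T) means H = ln(2) / T.
H_PER_YR = math.log(2) / T_DOUBLE_YR     # expansion rate in 1/yr

# Convert to the astronomers' customary km/s/Mpc.
SEC_PER_YR = 3.156e7
KM_PER_MPC = 3.0857e19
H_KM_S_MPC = H_PER_YR / SEC_PER_YR * KM_PER_MPC

print(f"H = {H_KM_S_MPC:.0f} km/s/Mpc")
```

This evaluates to roughly 450 km/s/Mpc, in line with the value of the Hubble constant accepted around 1931 (about 500 km/s/Mpc); modern measurements near 70 km/s/Mpc imply a correspondingly longer doubling time.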

In his classic work of speculative cosmology, The World, the Flesh, and the Devil, written in 1929, the physicist J. D. Bernal tried to picture what life would be like in the far future of such a universe: [111]

Finally, consciousness itself may end or vanish in a humanity that has become completely etherialized, losing the close-knit organism, becoming masses of atoms in space communicating by radiation, and ultimately perhaps resolving itself entirely into light... these beings, nuclearly resident, so to speak, in a relatively small set of mental units, each utilizing the bare minimum of energy, connected together by a complex of etherial intercommunication, and spreading themselves over immense areas and periods of time by means of inert sense organs which, like the field of their active operations, would be, in general, at a great distance from themselves. As the scene of life would be more the cold emptiness of space than the warm, dense atmosphere of the planets, the advantage of containing no organic material at all, so as to be independent of both of these conditions, would be increasingly felt. [112]

But in the end, Bernal came to believe that his 'etherialized life' probably would be destroyed in the Heat Death:

The second law of thermodynamics which, as Jeans delights in pointing out to us, will ultimately bring this universe to an inglorious close, may perhaps always remain the final factor. But by intelligent organizations the life of the Universe could probably be prolonged to many millions of millions of times what it would be without organization. [113]

It is now generally believed that protons and other forms of matter will decay, in part to radiation. Thus Eddington's picture of the final state of ever-expanding cosmologies is quite similar in several respects to the contemporary view, as we shall discuss in more detail below. Also, we shall show in section 10.6 that if life continues to survive in the far future, it must take on a form that is roughly similar to Bernal's 'etherialized life'. There is another criticism which can be directed against the concept of the Heat Death and which is based on nineteenth-century physics: the so-called recurrence paradox of Poincare and Zermelo, which we mentioned briefly in section 3.8. The recurrence paradox arose in physics as a consequence of attempts to derive the Second Law of thermodynamics from Newton's laws of motion. In 1890, Poincare showed that for almost all initial states, any Newtonian mechanical system with a finite number of degrees of freedom, finite total and kinetic energy, which is constrained to evolve within a finite spatial region must necessarily return arbitrarily closely and infinitely often to almost every previous state of the system. Poincare emphasized that this doomed attempts to deduce the Second Law rigorously from Newton's laws, because the recurrence theorem proved that a mechanical system must be cyclic in its behaviour rather than unidirectional as implied by the Second Law. Thus if we believe in the validity of the Newtonian laws of motion, a Heat Death cannot be the final state of the Universe. Rather, the evolution of the Universe must consist of a series of cycles. This idea of a cyclic universe—the second type of cosmological model—is very old. Histories of the development of this idea in prescientific times have been written by Eliade, by Jaki, and by Tipler. Modern science contained the idea of a cyclic cosmos from the very 11

12

13

14

The Future of the Universe

620

beginning. Newton himself was worried that his solar system model was gravitationally unstable in the long run, and to compensate for this instability he suggested a cyclic process whereby the planets would be replaced as the gravitational action of the other bodies in the solar system periodically perturbed them from their orbits. By the beginning of the nineteenth century, Euler, Laplace, Lagrange, and others had shown that the solar system was in fact stable to first order, the gravitational perturbations leading merely to a cyclic oscillation of the planetary orbits. The cosmological implications of Newtonian theory were first discussed extensively by the German philosopher Immanuel Kant in 1755. In Kant's cosmology, the inhabited portion of the Universe began as a perturbation of initially static matter, distributed in a hom*ogeneous and isotropic manner throughout infinite Euclidean space. This material perturbation condenses to form the stars and planets. Eventually our particular region of space will exhaust its energy, and the inhabited portion will be another region, conspheric around the original perturbation, whose condensation has been started by the initial perturbation. Thus as time advances, the inhabited portion of the Universe is restricted to spheres of larger and larger radius around the point where the initial disturbance began. Thus from the point of view of life, Kant's cosmology is globally progressive, in the sense that in the long run, the region in which life exists increases with time as t , with t = 0 being the instant of the initial perturbation. This is a cosmological analogue of the progressive expansion of the biomass on Earth (see section 3.2). However, in Kant's scheme life is locally cyclic, for in each sphere life begins anew rather than expanding outward from the point at which it first began. 
As we mentioned in sections 3.2 and 3.10, it was impossible for Kant and the other eighteenth-century philosophers to imagine a progressive evolution of life, because the Principle of Plenitude did not allow it. In the model we shall develop below, life will be globally progressive in two senses: the amount of living material, and the amount of knowledge both grow without limit as a power of the cosmic time t. Modern discussions of the cyclic universe are generally based on the so-called 'oscillating closed universe' model found in 1922 by A. Friedman. Friedman himself was aware of the cyclic nature of time in his solution, and suggested that one could identify corresponding times in each cycle. However, in the Friedman model the radius of the Universe goes to zero at the beginning and at the end of each cycle, and thus from a strict mathematical standpoint the cycles were disjoined by a singularity. In other words, they were not actually cycles. Each 'cycle' would really be a universe complete in itself, with no possibility of transmitting information of any sort from one 'cycle' to the next. In 1931 Tolman proved 15

16

17

2

18

19

621 The Future of the Universe

that such a singularity was inevitable at the beginning and at the end of any isotropic and hom*ogeneous closed universe with a physically reasonable matter tensor. He argued that this singularity was merely an artefact of the high symmetry assumed, and that in a physically realistic universe, which naturally would not be exactly isotropic and hom*ogeneous, these singularities would disappear. Therefore he assumed that in a realistic case the singularity would be replaced by a very small but non-zero radius followed by a re-expansion, and that the entropy would be conserved on passage through this radius. This would result in the thermodynamics of a cycle being determined in part by the history of a previous cycle. Other relativists of the time by and large agreed with Tolman that an initial singularity was unlikely, and then they found his proposal of transfer of information from one cycle to the next quite reasonable (see ref. 21 for a detailed discussion of the early relativists' opinions on singularities). The Hawking-Penrose singularity theorems, which indicated that a singularity was inevitable provided certain very general hypothesis were made, changed relativists' minds on the reality of oscillating universes. It is now generally believed that either the Universe began in an initial singularity some 20 billions years ago (as measured in proper time), or else quantum effects must be the agency causing the Universe to 'bounce' at extremely high densities and temperatures or which even allows it to appear spontaneously from 'nothing'. By 'extremely high' we mean something of the order of the Planck density (5 x 10 gm/cm ) or the Planck temperature (1.4xl0 K). Wheeler, for example has until recently suggested that the physical constants themselves are cycled at such a bounce. At present, however, Wheeler believes in the reality of an initial singularity, and thus he is advocating a 'one-cycle' closed universe model. 
As we will discuss below, the large primordial cosmological constant in which many particle physicists believe could cause such a bounce if the temperature rises high enough to dissolve the spontaneous symmetry breaking, but this process cannot lead to a series of cycles; only a single bounce would be possible. We note in passing that the SAP and FAP arguments which we used in section 9.5 against the steady-state theory can also be used to eliminate the possibility that a cyclic universe results from presently unknown physical laws which cause an infinite sequence of bounces.


10.3 Global Constraints on the Future of the Universe

Absolute space is the divine sensorium.
E. A. Burtt, paraphrasing Sir Isaac Newton

In this section we shall briefly review the possible future histories of the Universe from a more mathematical point of view than in Chapter 6.


A reader wishing a more detailed discussion is referred to ref. 25. We shall consider only those universe models which satisfy the Principle of Strong Cosmic Censorship. This Principle states that the space-time manifold is globally hyperbolic, which in rough non-technical language means Laplacean determinism holds: initial data given on a special space-like slice S of the space-time manifold uniquely determine the entire global structure of space-time. The special spacelike slice is called a Cauchy hypersurface, and Geroch has shown that, in particular, the Principle of Strong Cosmic Censorship implies that the global topology of space-time is S x R, where S denotes the topology of any Cauchy hypersurface. If S is compact, then any compact spacelike 3-manifold in the globally hyperbolic space-time is in fact a Cauchy hypersurface. (This is not true if S is non-compact.) From the point of view of classical general relativity, the reason for postulating Strong Cosmic Censorship is that if this assumption is dropped, the future evolution of the universe becomes non-unique. Strong Cosmic Censorship can only be violated if space-time has singularities which lie both in the future and in the past of some observer's world-line. Since space-time itself breaks down at such a naked singularity, anything can come out of the singularity, resulting in an inability to predict the future evolution of the universe. There are indications that naked singularities would cause even worse disasters in quantized general relativity, although we cannot be sure of this, because to date there is no complete quantum theory of gravity. For instance, Hawking, Wald and Page have shown that naked singularities resulting from quantum black hole evaporation could cause pure quantum states to evolve into mixtures, which is not allowed by the fundamental postulates of quantum field theory.
Such an evolution would also undermine the theoretical basis for the Many-Worlds interpretation of quantum mechanics. The entire reason for inventing this interpretation in the first place was to avoid having to assume an interaction (the collapse of the wave function) which caused pure states to become mixtures. There are a considerable number of quite different cosmologies which do obey the Principle of Strong Cosmic Censorship. They are distinguished by the topology of their Cauchy hypersurfaces, and they have been classified into two categories. The closed universes are those whose Cauchy hypersurfaces are compact, and the open universes are those whose Cauchy hypersurfaces are non-compact. (Compactness is a topological concept; see ref. 34 for a definition of this and other topological terms.) We discussed these two classes of cosmological models in Chapter 6 from a physical point of view. In addition to classification by the topology of the Cauchy hypersurfaces, universes can also be distinguished by their long-term dynamical


behaviour. Universes whose size or radius of curvature (scale factor) grows without limit are called ever-expanding universes, while universes which reach a maximum size and recollapse to a final singularity are called recollapsing universes. This classification applies only to those cosmologies which are now expanding, as the real Universe apparently is. Friedman cosmologies (those which have Cauchy hypersurfaces that are homogeneous and isotropic) are generally considered to have one of two possible Cauchy hypersurface topologies: R^3 and S^3. Because of the high symmetry in Friedman cosmologies, identifications can be made in the Cauchy hypersurfaces to form non-simply connected topologies. For example, the open R^3 topology can be identified to form a three-torus T^3. Such identifications are generally considered unaesthetic and in any case would destroy some of the global symmetries of the Friedman universe. In our three-torus example, the global rotational symmetry which is present in the original R^3 topology is no longer present in the T^3 universe. In the case of the Friedman cosmologies, it has been known since Tolman's work in the 1930s that there is a deep connection between the topology of the Cauchy hypersurface and the long-term dynamical behaviour. Universes with topology R^3, and other universes formed from them by identification, expand forever provided the stress-energy tensor satisfies


(T_ab - (1/2) T g_ab) V^a V^b ≥ -(Λ/8πG) V_a V^a    (10.1)

for all unit time-like vectors V^a. Furthermore, all Friedman universes with topology S^3 recollapse provided the same inequality holds. It is not known whether this connection between Cauchy hypersurface topology and long-term dynamics persists when the conditions of homogeneity and isotropy are relaxed. It is, however, generally believed that this connection is valid for any globally hyperbolic cosmology which satisfies (10.1). A few partial results are known. It is known that a necessary and sufficient condition for recollapse to occur in globally hyperbolic closed universes is for the space-time to contain a maximal Cauchy hypersurface, which is a spacelike hypersurface with vanishing trace of its extrinsic curvature. Such a hypersurface is the largest hypersurface in the universe; the maximal hypersurface defines the time of maximal expansion of the Universe. The following theorem, which is a restatement and slight generalization of an earlier theorem due to R. Schoen and S.-T. Yau, places strong restrictions on the topology of recollapsing closed universes:


Theorem: If S is a spacelike compact orientable maximal hypersurface, then it must have topology

(S^3/P_1) # (S^3/P_2) # . . . # (S^3/P_n) # k(S^2 x S^1)

where each P_i is a finite subgroup of SO(3), "#" denotes connected sum, and k(S^2 x S^1) means the connected sum of k copies of S^2 x S^1, provided the following conditions hold: (1) the Einstein equations R_ab - (1/2)R g_ab + Λ g_ab = 8πG T_ab hold on the space-time; (2) [T_ab - Λ g_ab/8πG] V^a V^b ≥ 0 for all timelike vectors V^a; (3) the space-time is not suitably identified Minkowski (flat) space; (4) the differentiable structure on the space-time is not exotic.

The terms in this theorem require some explanation. Roughly speaking, a connected sum of two three-dimensional manifolds is the manifold formed by cutting a small spherical volume out of each, and then gluing the two manifolds together along the boundaries of the remaining manifolds. The quotient S^3/P of a three-sphere S^3 with a subgroup P means identifying points of the three-sphere which are carried into one another under the action of the subgroup. A non-exotic differentiable structure on the space-time manifold M = S x R, where S is the maximal spacelike hypersurface, means that the coordinate systems covering M are generated by pulling up the coordinate systems which cover S. Condition (4) is probably not necessary, but it simplifies the proof of the theorem. In any case, cosmologists never even consider space-times which violate condition (4), for there is no evidence for exotic differentiable structures. Since any physically realistic space-time contains some matter and hence is not flat space, and also satisfies condition (3) in the low density regime where a maximal hypersurface would be expected to occur (unless Λ is of the same order as the present matter density term), the theorem applies to realistic recollapsing universes. Here

H_0 = (1/R)(dR/dt), evaluated at the present day,

is the Hubble constant measured today (H_0 = 50-100 km s^-1 Mpc^-1, according to the observers) and

Ω_0 = (ρ_m0 + ρ_γ0)/(3H_0^2/8πG) = (ρ_m0 + ρ_γ0)/ρ_c    (10.5)

is the density parameter, and ρ_c is the critical density, so called because the total density of the universe must be greater than ρ_c if k > 0 and Λ = 0. Thus for a closed three-sphere universe, we must have Ω_0 > 1. The scale factor of the Universe today is denoted R_0. The Hubble distance R_H is cH_0^-1 and the Hubble time t_H is H_0^-1. If Λ = 0 and the universe is matter-dominated, the lifetime of the universe, t_U, is

t_U = π Ω_0 H_0^-1 (Ω_0 - 1)^(-3/2)    (10.6)
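The lifetime formula (10.6) is easy to evaluate numerically. The following sketch is ours; the sample values Ω_0 = 2 and H_0 = 50 km/s/Mpc are illustrative choices within the observational range quoted in the text, not values endorsed by the authors:

```python
import math

Mpc_in_km = 3.0857e19    # kilometres per megaparsec
Gyr_in_s = 3.156e16      # seconds per gigayear

def lifetime_closed_matter(omega0, H0_km_s_Mpc):
    """Lifetime (in Gyr) of a closed, matter-dominated, Lambda = 0 universe,
    t_U = pi * Omega_0 / (H_0 * (Omega_0 - 1)^(3/2)), equation (10.6)."""
    H0 = H0_km_s_Mpc / Mpc_in_km                          # s^-1
    t_U = math.pi * omega0 / (H0 * (omega0 - 1.0)**1.5)   # seconds
    return t_U / Gyr_in_s

print(lifetime_closed_matter(2.0, 50.0))   # roughly 120 Gyr
```

Note how the lifetime diverges as Ω_0 approaches 1 from above: a universe only marginally denser than critical survives far longer before recollapse.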


The lifetime t_U is just twice the time to maximum expansion; see equation (6.138). If the universe becomes radiation-dominated for most of its future history, say through the radioactive decay of matter and the Hawking evaporation of black holes, and if Λ = 0, then the lifetime of the universe is

t_U = 2 H_0^-1 Ω_0^(1/2) (Ω_0 - 1)^(-1)    (10.7)

The universal lifetime should lie between the values given by (10.6) and (10.7) if matter is converted into radiation slowly, but not slowly enough to give (10.6). We have pointed out earlier in Chapter 6 that Anthropic and other theoretical considerations imply Ω_0 is very close to one.

One can prove a number of very general theorems about the long-term time evolution of the universe. For example, we can prove that, in contrast to a finite universe governed by Newtonian mechanics, states of a closed general relativistic universe cannot recur. In other words, the universe is not oscillating. The events of the present will never be repeated in the future, and what is more, the events of the future will not even be arbitrarily close to present events. Another theorem, first obtained by Brill and Flaherty, and generalized by Tipler and Marsden, and by Gerhardt, is that in a globally hyperbolic universe which is not everywhere flat and which satisfies (10.1), there will exist a unique globally defined time coordinate, which is given by the constant mean curvature foliation.

A time coordinate in relativity is defined by any 'slicing' of four-dimensional space-time by a sequence of three-dimensional spacelike hypersurfaces. This sequence is called a foliation of space-time, and each hypersurface is called a leaf of the foliation. For a simple example of the concept of 'foliation', consider the surface of an ordinary cylinder. The surface of an ordinary cylinder is two-dimensional, and it can be foliated by a sequence of circles which are perpendicular to the axis of the cylinder. The cylinder is then just all of these circles stacked on top of one another.
Each circle is a leaf of the foliation, and the foliation is all of the circles together. Any physically realistic cosmology can be foliated uniquely by Cauchy hypersurfaces of constant mean extrinsic curvature, and it is this foliation which defines the unique global time. The extrinsic curvature of a spacelike hypersurface is its relative rate of expansion in time. This relative rate of expansion is measured by the Hubble parameter H = (1/R) dR/dt, which we have encountered earlier in our discussion of the Friedman universe. However, in a general cosmology it is possible for the


universe to expand faster in some directions than others, so the Hubble parameter must be generalized to a tensor in order to express properly this directional dependence. This tensor is the extrinsic curvature. The mean extrinsic curvature is a scalar like the Hubble parameter, and it is an average of the extrinsic curvatures in the three spatial directions. (More exactly, it is the contraction of the extrinsic curvature, which is a rank two tensor; see ref. 25 or 30 for a precise definition.) A constant mean extrinsic curvature hypersurface, or constant mean curvature hypersurface for short, is a spacelike hypersurface on which the mean extrinsic curvature is the same at every point. The hypersurfaces of homogeneity and isotropy in the Friedman universe are constant mean curvature hypersurfaces on which the mean curvature is 3H. Since the Universe is in fact closely isotropic and homogeneous, the constant mean curvature hypersurface defining the global instant 'now' over the entire universe essentially coincides with the spacelike hypersurface on which the 3 K background radiation temperature is constant. The Earth is currently moving at about 300 km/sec with respect to this globally defined rest frame of the universe.

In addition to the no-return theorem and the uniqueness of cosmological time theorem, one can obtain some constraints on the long-term behaviour of the matter and shear terms in equation (10.3), even beyond the point at which the equation breaks down. If the space-time can be assumed to remain roughly homogeneous for all future time (this should be a good approximation for ever-expanding universes), then from Chapter 6 (see also ref. 47), lim inf t^2 σ^2 = 0 as t → ∞.

When the thermodynamic limit (10.39) on the energy cost of information processing is taken into account, we obtain a constraint on the information-processing rate:

(dI/dt)/(dE/dt) ≤ 1.05 x 10^23 T^-1 bits sec^-1 watt^-1    (10.40)

where as before the temperature, T, is measured in degrees Kelvin.
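The coefficient in (10.40) is just 1/(k_B ln 2) expressed in bits per joule; the following sketch of ours verifies it and evaluates the bound at room temperature:

```python
# The bound (10.40): (dI/dt)/(dE/dt) <= 1/(k_B * T * ln 2) bits per joule.
import math

kB = 1.380649e-23   # Boltzmann constant, J/K

def max_bits_per_joule(T):
    """Thermodynamic ceiling on bits processed per joule at temperature T (K)."""
    return 1.0 / (kB * T * math.log(2))

print(max_bits_per_joule(1.0))    # coefficient of T^-1: about 1.05e23
print(max_bits_per_joule(300.0))  # room temperature: a few times 1e20
```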
At room temperature (300 K) the thermodynamic limit of computing speed per unit power is about 10^21 bits per second per watt. At present the average off-the-shelf microcomputer works at about 10^8 bits per second per watt, while state-of-the-art supercomputers work at 10^10 bits per second per watt. We have a long way to go before reaching the thermodynamic limit. We should mention that there has been a debate in recent years as to whether (10.39) really applies to information-processing inside computers. Inequality (10.39) can be derived from several quite different assumptions. Brillouin obtained (10.39) by calculating the minimum amount of energy needed to measure one bit of information; in computers, measuring would correspond to reading a bit. (If there were no minimum, Maxwell's Demon could operate, thereby contradicting the Second Law.) Von Neumann derived (10.39) by calculating the minimum amount of energy required for accurate transmission of a bit from one logical gate to the next. The IBM computer scientist Landauer arrived at (10.39) by arguing that computation is logically irreversible. Both the Brillouin and the von Neumann arguments are founded solidly on the Second Law as generalized by information theory, but Landauer's derivation is open to the objection that computation is in actuality logically reversible, and a number of idealized physical models of reversible computers have been published. However, these models directed at Landauer's argument do not touch the thermodynamic arguments of von Neumann and Brillouin, as has


been recently pointed out by Porod et al. Furthermore, it seems to us that these models are not true thermodynamic models, because they are either unstable or else do not really interact with a temperature reservoir. To work at all, the models require the information to be already in the machine. These points have also been made by Porod et al. For our purposes the existence of ideal computers which can process information already in the machine with no energy minimum is irrelevant, for in Nature it is necessary to transfer information to the machine, and all agree this transfer is subject to (10.39). It is also the case that transfers between different parts of a real machine would be subject to (10.39), so even if the claims of the critics were correct, (10.39) would restrict most of the computer operations. It would also apply to the increase of information, for the ideal computer processing in the critics' models just manipulates the information already in the computer memory. We shall therefore assume the validity of (10.39) in our subsequent discussions of information growth in the far future.

From (10.40) we have the following inequality between the total information processed in the future and the energy required to process it:

∫_{t_now}^{t_c-bound} (dI/dt) dt ≤ (k_B ln 2)^{-1} ∫_{t_now}^{t_c-bound} T^{-1} (dE/dt) dt    (10.41)

where the upper bound t_c-bound is the time the c-boundary is reached. The values of the integrals in (10.41) do not depend on which measure of time duration is used. By condition (2) in the precise definition of FAP above, the left-hand integral must diverge if FAP is to hold, which implies that the right-hand integral must also diverge. In an open or flat cosmology, it is possible for the right-hand integral to diverge even if the total energy used,

E = ∫_{t_now}^{t_c-bound} (dE/dt) dt    (10.42)

is finite. Since the temperature goes to zero as the c-boundary is approached in these cosmologies, the information processed can diverge whilst the total energy being used remains finite if the information is processed sufficiently slowly. In closed universes the integral (10.42) must diverge, and diverge very rapidly near the final singularity, since the temperature diverges as 1/R(t). We shall show that it is possible, in principle, for the right-hand integral in (10.41) to diverge in all three basic cosmologies: open, flat, and closed.

What will be the most important energy source in the far future? At present, the most important energy source is matter: mass is converted into energy in stars via thermonuclear fusion, or via radioactive decay of
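A toy model of our own makes the key point concrete: if the energy-use rate falls as t^-2 while the temperature falls as 1/t, the total energy (10.42) converges but the information integral on the right of (10.41) grows without bound:

```python
# dE/dt ~ t^-2 and T ~ 1/t, integrated from t = 1 outward (arbitrary units).
import math

def energy_used(t_end):
    # integral of t^-2 from 1 to t_end: stays below 1 for all t_end
    return 1.0 - 1.0 / t_end

def processing_integral(t_end):
    # integral of T^-1 (dE/dt) = t * t^-2 = 1/t: grows as ln(t_end)
    return math.log(t_end)

for t_end in (1e3, 1e6, 1e9):
    print(t_end, energy_used(t_end), processing_integral(t_end))
```

The energy total approaches a finite limit, while the processing integral (and hence the information that may be processed) increases without limit, exactly the behaviour claimed for open and flat cosmologies.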


heavy nuclei in bulk matter. But matter is gradually being used up, and no matter how efficient the conversion of energy into information, there are the finite upper bounds, which we calculated earlier, to the amount of information that can be generated by the matter available in any finite region over the next 20 billion years. Life has a tendency to increase its population exponentially until the limits of a given ecological niche are reached. It is characteristic of intelligent life to discover how to use all forms of matter for its own purposes, so we would expect such life to use up all the material within its home solar system on timescales which are short in comparison with the age of the Universe. It will then begin the expansion from its home system, and gain control of new material. On timescales of tens of billions of years, the total region under the control of intelligent life will be an expanding sphere, with almost all of the activity concentrated in a narrow region within a distance ΔR of the surface of the sphere. The interior of the sphere will be an essentially dead region, the matter having been converted into information during the previous eons. The sphere will be expanding on average at some fraction of the speed of light, so on the average the region under the control of life and the net information stored will be increasing as t^2. (If the interior had not been exhausted, the increase would be proportional to the volume of the sphere rather than its area, or t^3.) Thus although perpetual exponential growth of life, or of the economy, or of information, is not allowed by the laws of physics, a power-law growth is allowed. If the average expansion rate of life, as measured in the local rest-frame of the inner boundary of the expanding sphere, is always greater than the current Hubble expansion of 50 to 100 km/sec per megaparsec, then the growth can continue as t^2 for the next 10^31 years, until the decay of protons becomes important.
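The t^2 versus t^3 scaling above is simple geometry: a living shell of fixed thickness at radius R = vt has volume proportional to the surface area, while a fully live sphere scales with the enclosed volume. A sketch of ours, with illustrative values for the expansion speed v and shell thickness dR:

```python
import math

def shell_volume(t, v=0.1, dR=1.0):
    """Thin living shell of thickness dR at radius R = v*t: grows as t^2."""
    R = v * t
    return 4.0 * math.pi * R**2 * dR

def sphere_volume(t, v=0.1):
    """Fully live sphere of radius R = v*t: would grow as t^3."""
    R = v * t
    return (4.0 / 3.0) * math.pi * R**3

# Doubling t quadruples the shell volume but multiplies the sphere volume by 8
print(shell_volume(20.0) / shell_volume(10.0))    # 4.0
print(sphere_volume(20.0) / sphere_volume(10.0))  # 8.0
```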
Whether or not this t^2 growth as measured in proper time can continue indefinitely depends on whether the clumping of matter will permit the higher and higher biosphere expansion speeds needed to overcome the expansion of the universe. However, as we show below, in the appropriate timescale growth can continue as a power law indefinitely. As we mentioned in section 10.1, the growth of life predicted here is quite similar to the growth of life in Kant's cosmology. By the end of the period from 10^31 to 10^33 years, the only matter surviving will be electrons and positrons from the decays of single atoms in interstellar space (we ignore the possibility of massive neutrinos, but they would not change the argument). Frautschi has considered various possible energy sources, such as Hawking radiation from black holes, and the energy from electron-positron annihilation. He concludes that in open universes, black holes would just barely supply sufficient energy, but the electrons and positrons would not. However, it seems to us that neither of these would be the main energy source of life in the far future.



As we discussed in section 10.4, the most important form of energy available in this epoch will be the shear energy, so it is the most probable energy source for life. As also noted there, the shear energy can be extracted by making use of the directional temperature differential it generates. By Carnot's theorem, the efficiency of energy extraction should be proportional to ΔT/T, which is independent of the scale factor R by equation (10.10), so the percentage of energy extracted from the shear energy should be independent of time unless the distortion parameter (exp β_i) goes to zero asymptotically. However,

k_B T ~ 1/R(t)    (10.49)

while the requirement that the energy in N particle states, each with energy m, be less than the shear energy in a volume V can be written in the form of a restriction on energy densities:

Nm/V ≲ σ^2 ~ 1/t^2 ~ 1/R^6    (10.50)

where we have used the average growth rate of the shear energy density in the last two steps. Now V ~ R^3, so with m bounded below by the thermal energy (10.49), the left-hand side of (10.50) becomes

Nm/V ~ N(1/R)/R^3 ~ N/t^{4/3}    (10.51)

so the total number of particle states could grow as fast as 1/t^{2/3} without violating the energy upper bound. The total stored information I_TOT we would expect to grow roughly as N, so I can diverge as fast as t^{-2/3} if the growth of particle states with energy permits. But the energy in the particle states cannot grow faster than this without exhausting the energy supply. Suppose that we write N ~ t^{-ε}, where 0 < ε < 2/3. Remembering that on the average R(t) ~ t^{1/3} near the final singularity, we obtain from (10.50) and (10.51):

m ≲ V/(Nt^2) ~ 1/(Nt) ~ t^{ε-1}    (10.52)

The inequalities (10.49) and (10.52) can be combined to give a constraint on the mass-energy of the particles:

1/t^{1/3} < m ≲ t^{ε-1}
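The power-law bookkeeping in (10.49)-(10.52) can be checked mechanically by tracking exponents of t (with t the time remaining before the final singularity, and R ~ t^(1/3)). This verification sketch is ours, not the authors':

```python
# Track power laws as exact exponents of t, with R ~ t^(1/3).
from fractions import Fraction

R_exp = Fraction(1, 3)                  # R ~ t^(1/3)

# (10.50): shear energy density ~ 1/R^6 ~ 1/t^2
assert -6 * R_exp == Fraction(-2)

# (10.51): Nm/V with m ~ k_B T ~ 1/R and V ~ R^3 gives N * t^(-4/3)
assert (-1) * R_exp + (-3) * R_exp == Fraction(-4, 3)

# N ~ t^(-2/3) saturates the bound: N * t^(-4/3) ~ t^(-2), matching (10.50)
N_exp = Fraction(-2, 3)
assert N_exp + Fraction(-4, 3) == Fraction(-2)

# (10.52): m ~ V/(N t^2) ~ t^(eps - 1) for N ~ t^(-eps); check eps = 1/2
eps = Fraction(1, 2)
m_exp = 3 * R_exp - (-eps) - 2          # V exponent minus N exponent minus 2
assert m_exp == eps - 1
print("exponent bookkeeping consistent")
```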
