Two Models of the Beginning of the Universe

 There are basically two models for how the universe began. According to the first, the universe, by which we should understand the whole of physical reality, was deliberately created by a unique Being. This is the well-known Judaeo-Christian schema which until recently reigned supreme.
According to the second schema, the universe simply came about spontaneously: no one planned it and no one made it happen. It ‘need not have been’; it was essentially ‘the product of chance’. This seems to be the Eastern view, though we also come across it in some Western societies at an early stage of their development, for example in Greece (Note 1).
Although for a long time the inhabitants of the Christian West were totally uninterested in the workings of the natural world, the ‘Creationist’ model eventually led on to the development of science as we know it. For, so it was argued, if the universe was deliberately created, its creator must have had certain rules and guidelines that He imposed on his creation. These rules could conceivably be discovered, in which case many of the mysteries of the physical universe would be explained. Moreover, if the Supreme Designer or Engineer really was all-knowing, one set of rules would suffice for all time. This was basically the world-view of the men who masterminded the scientific revolution in the West, men such as Galileo, Kepler, Descartes, Newton and Leibniz, all firm believers in both God and the power of mathematics, which they viewed as the ‘language of God’ insofar as He had one.
If, on the other hand, the universe was the product of chance, one would not expect it necessarily to obey a set of rules, and if the universe was in charge of itself, as it were, things could change abruptly at any moment. In such a case, clever people might indeed notice certain regularities in the natural world, but there would be no guarantee that these regularities were binding or would continue indefinitely. The Chinese equivalent of Euclid’s Elements was the Y Ching, The Book of Changes, where the very title indicates a radically different world view: the universe is in a perpetual state of flux, while nonetheless remaining ‘in essence’ always the same. According to Needham, the main reason why the scientific and technological revolution did not happen in China rather than the West, given that China was for centuries ahead of the West technically, was that Chinese thinkers lacked the crucial notion of unchanging ‘laws of Nature’ (Note 2).
Interestingly, there is a noticeable shift in Western thought towards the second model : the consensus today is that the universe did indeed come about ‘by chance’ and the same goes for life. However, contemporary physicists still hold tenaciously onto the idea that there are nonetheless certain more or less unchanging physical laws and rational principles which are in some sense ‘outside Nature’ and independent of it.  So the laws remain even though the Lawmaker has long since died quietly in his bed.

Emergent Order and Chaos

Models of the second ‘Spontaneous Emergence’ type generally posit an initial ‘Chaos’ which eventually settles down into a semblance of Order. True Chaos (not the contemporary physical theory of the same name (Note 3)) is a disordered free-for-all: everything runs into everything else and the world, life, us, are at best an ephemeral emergent order that suddenly occurs like the ripples the wind makes on the surface of a pond ─ and may just as suddenly disappear.
Despite the general triumph of Order over Chaos in Western thinking, even in the 19th century a few discordant voices dissented from the prevailing  orthodoxy ─ but none of them were practising scientists. Nietzsche, in a remarkable passage quoted by Sheldrake, writes:

“The origin of the mechanical world would be a lawless game which would ultimately acquire such consistency as the organic laws seem to have… All our mechanical laws would not be eternal but would have survived innumerable alternative mechanical laws” (Note 4)

Note that, according to this view, even the ‘laws of Nature’ are not fixed once and for all : they are subject to a sort of natural selection process just like everything else. This is essentially the viewpoint adopted in Ultimate Event Theory i.e. the universe was self-created, it has ascertainable ‘laws’ but these regularities need not be unchanging nor binding in all eventualities.

In the Beginning…. Random Ultimate Events  

In the beginning was the Void but the Void contained within itself the potential for ‘something’. For some reason a portion of the Void became active and random fluctuations appeared across its surface. These flashes that I call ‘ultimate events’ carved out for themselves emplacements within or on the Void, spots where they could and did have occurrence. Part at least of the Void had become a place where ultimate events could happen, i.e. an Event Locality. Such emplacements or ‘event-pits’ do not, by assumption, have a fixed shape but they do have fixed ‘extent’.
Usually, ultimate events occur once and disappear for ever, having existed for the ‘space’ of a single ksana only. However, if this were all that ever happened, there would be no universe, no matter, no solar system, no us. There must, then, seemingly have been some mechanism which allowed for the eventual formation of relatively persistent event clusters and event-chains : randomness must ultimately be able to give rise to its opposite, causal order. This is reasonable enough since if a ‘system’ is truly random, and is allowed to go on long enough, it will eventually cover all possibilities, and the emergence of ‘order’ is one of them.
As William James writes:
“There must have been a far-off antiquity, one is tempted to suppose, when things were really chaotic. Little by little, out of all the haphazard possibilities of that time, a few connected things and habits arose, and the rudiments of regular performance began.”

This suggests the most likely mechanism : repetition which in time gave rise to ingrained habits. Such a simple progression requires no directing intelligence and no complicated physical laws.
Suppose an ultimate event has occurrence at a particular spot on the Locality; it then disappears for ever. However, one might imagine that the ‘empty space’ remains, at least for a certain time. (Or, more correctly, the emplacement repeats, even though its original occupant is long gone.) The Void has thus ceased to be completely homogeneous because it is no longer completely empty: there are certain mini-regions where emplacements for further ultimate events persist. These spots might attract further ultimate events since the emplacement is already there and does not have to be created.
This goes on for a certain time until a critical point is reached. Then something completely new happens: an ultimate event repeats in the ‘same’ spot at the very next ksana, and, having done this once, carries on repeating for a certain time. The original ultimate event has thus acquired the miraculous property of persistence and an event-chain is born. Nothing succeeds like success and the persistence of one  event-chain makes the surrounding region more propitious for the development of similar rudimentary event-chains which, when close enough, combine to form repeating event-clusters. This is roughly how I see the ‘creation’ of the massive repeating event-cluster we call the universe. Whether the latter emerged at one fell swoop (Big Bang Theory) or bit by bit as in Hoyle’s modified Steady State Theory is not the crucial point and will be decided by observation. However, I must admit that piecemeal manifestation seems more likely a priori. Either way, according to UET, the process of event-chain formation ‘from nothing’ is still going on. 
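As a purely illustrative sketch (not part of the theory itself), the habit-forming mechanism just described can be mimicked by a small reinforcement model: each ksana one event occurs at a randomly chosen spot, every occurrence makes that spot slightly more attractive, and sooner or later some spot repeats at consecutive ksanas, i.e. an event-chain is born. All the numbers below (the toy capacity N, the reinforcement delta, the run length counted as a ‘chain’) are arbitrary assumptions chosen so that the toy model runs quickly.

```python
import random

random.seed(1)   # fixed seed so the sketch is reproducible

N = 50           # toy event capacity (illustrative; the text's N is ~10^50)
delta = 10.0     # reinforcement added to a spot per occurrence (assumed)
CHAIN = 3        # consecutive repeats we count as an 'event-chain' (assumed)
weights = [1.0] * N   # initially every emplacement is equally likely

last, run, ksana = None, 0, 0
while run < CHAIN:
    # one ksana: a single ultimate event occurs at a weighted-random spot
    i = random.choices(range(N), weights=weights)[0]
    weights[i] += delta          # habit: occurrence favours re-occurrence
    run = run + 1 if i == last else 1
    last = i
    ksana += 1

print(f"event-chain formed at spot {last} after {ksana} ksanas")
```

Run with different seeds and a chain always forms eventually; with delta = 0 (no habit at all) one still forms, but typically only after far more ksanas, which is precisely the point of the reinforcement.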

The Occurrence Function  

This, then, is the general schema proposed ─ how to model it mathematically? We require a ‘Probability Occurrence Function’ which increases very slowly but, once it has reached a critical point, becomes unity or slightly greater than unity.
The Void or Origin, referred to in UET as K0, is ‘endless’ but we shall only be concerned with a small section of it. When empty of ultimate events, K0 is featureless but, when active, it has the capacity to provide emplacements for ultimate events ─ for otherwise they would not occur. A particular region of K0 can accommodate a maximum of, say, N ultimate events at one and the same ksana. N is a large, but not ‘infinite’ number ─ ‘infinity’ and ‘infinitesimals’ are completely excluded from UET. If there are N potential emplacements and the events appear at random, there is initially a 1/N chance of an ultimate event occurring at one particular emplacement.
However, once an ultimate event has occurred somewhere (and subsequently disappeared), the emplacement remains and the re-occurrence of an event at this spot, or within a certain radius of this spot,  becomes very slightly more likely, i.e. the probability is greater than 1/N. For no two events are ever completely independent in Ultimate Event Theory. Gradually, as more events have occurrence within this mini-region, the radius of probable re-occurrence narrows and  eventually an ultimate event acquires the miraculous property of repeating at the same spot (strictly speaking, the equivalent spot at a subsequent ksana). In other words, the probability of re-occurrence is now a certainty and the ultimate event has turned into an event-chain.
As a first very crude approximation I suggest something along the following lines. P(m) stands for the probability of the occurrence of an ultimate event at a particular spot. The Rule is : 

P(m+1) = P(m) (1/N) e^k    m = 1, 2, 3…

P(0) = 1     P(1) = (1/N)

Then,

P(2) = (1/N) (1/N) e^k = (1/N^2) e^k
P(3) = ((1/N^2) e^k) (1/N) e^k = (1/N^3) e^(2k)
P(4) = ((1/N^3) e^(2k)) (1/N) e^k = (1/N^4) e^(3k)
P(5) = ((1/N^4) e^(3k)) (1/N) e^k = (1/N^5) e^(4k)
. . . . .
P(m+1) = (1/N^(m+1)) e^(mk)

Now, to have P(m+1) ≥ 1  we require

(1/N^(m+1)) e^(mk) ≥ 1
e^(mk) ≥ N^(m+1)
mk ≥ (m+1) ln N     (taking logs base e on both sides)
k ≥ ((m+1)/m) ln N

       If we set k as the first integer > ln N, this will do the trick for all sufficiently large m, since the factor (m+1)/m sinks towards 1 as m increases.
For example, if we take N = 10^50, then ln N = 115.129…, so k = 116.
       Then e^(116m) ≥ (10^50)^(m+1) for all m ≥ 133, the first m for which 116m ≥ 115.13(m+1).
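The threshold can be checked numerically. Working in logarithms (so that numbers of the order N^m never overflow), the sketch below finds the first m at which P(m+1) = (1/N^(m+1)) e^(mk) reaches unity for N = 10^50 and k = 116:

```python
import math

N = 10**50                  # event capacity used in the example above
k = 116                     # first integer above ln N = 115.129...
ln_N = math.log(N)

# log P(m+1) = m*k - (m+1)*ln N, from P(m+1) = (1/N^(m+1)) * e^(mk)
m = 1
while m * k - (m + 1) * ln_N < 0:
    m += 1

print(m)   # -> 133: P(134) is the first occurrence probability to reach 1
```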

However, we do not wish the function to get to unity or above straight away. Rather, we want some function of N which converges very slowly to ln N, or rather to some value slightly above ln N (so that the threshold can actually be attained). Thus k = f(N) such that e^(m·f(N)) ≥ N^(m+1) for all sufficiently large m.
       I leave someone more competent than myself to provide the details of such a function.
This ‘Probability Occurrence Function’ is the most important function in Ultimate Event Theory since without it  there would be no universe, no us, indeed nothing at all except random ultimate events firing off aimlessly for all eternity. Of course, when I speak of a mathematical function providing a mechanism for the emergence of the universe,  I do not mean to imply that a mathematical formula in any way ‘controls’ reality, or is even a ‘blueprint’ for reality. From the standpoint of UET, a mathematical formula is simply a description in terms comprehensible to humans of what apparently goes on and,  given the basic premises of UET, must go on.

Note the assumptions made. They are that:

(1) There is a region of K0 which can accommodate N ultimate events within a single ksana, i.e. can become an Event Locality with event capacity N;
(2) Ultimate events occur at random and continue to occur at random except inasmuch as they are more likely to re-appear at a spot where they have previously appeared;
(3) ‘Time’ in the sense of a succession of moments of equal duration, i.e. ksanas, exists from the very beginning, but not ‘space’;
(4) ‘Space’ comes into existence in a piecemeal fashion as, or maybe just before, ultimate events have occurrence — without events there is no need for space;
(5) Causality comes into existence when the first event-chain is formed : prior to that, there is no causality, only random emergence of events from wherever events come from (Note 5).

What happens once an event-chain has been formed? Does the Occurrence Function remain ≥ 1 or does it decline again? There are two reasons why the Probability Occurrence Function probably (sic) does at some stage decline, one theoretical and one observational. Everything in UET, except K0 the Origin, is finite ─ and K0 should be viewed as being neither finite nor infinite, ‘para-finite’ perhaps. Therefore, no event can keep on repeating indefinitely : all event-chains must eventually terminate, either giving rise to different event-chains or simply disappearing back into the Void from which they emerged. This is the theoretical reason.
Now for the observational reason. As it happens, we know today that the vast majority of ‘elementary particles’ are very short-lived and, since all particles are, from the UET point of view, relatively persistent event-chains or event-clusters, we can conclude that most event-chains do not last for very long. On the other hand, certain particles like the proton and the neutrino are so long-lasting as to be virtually immortal. The cause of ‘spontaneous’ radio-active decay is, incidentally, not known; indeed the process is considered to be completely random (for a particular particle), which is tantamount to saying there is no cause. This is interesting since it shows that randomness re-emerges, and where it was least expected. I conceive of event-chains that have lost their causal bonding dwindling away in much the same way as they began, only in reverse. There is a pleasing symmetry here: randomness gives rise to order, which gives rise to randomness once more.
There is the question of how we are to conceive the ‘build up’ of probability in the occurrence function : exactly where does this occur? Since this process has observable effects, it is more than a mathematical fiction. One could imagine that this slow build-up, and eventual weakening and fading away, takes place in a sort of semi-real domain, a hinterland between K0 and K1 the physical universe. I note this as K01.
I am incidentally perfectly serious in this suggestion. Some such half-real domain is required to cope, amongst many other things, with the notorious ‘probabilities’ — more correctly ‘potentialities’ — of the Quantum Wave Function. The notion of a semi-real region where ‘semi-entities’ gradually become more and more real, i.e. closer to finalization, is a perfectly respectable idea in Hinayana Buddhism ─ many authors speak of 17 stages in all, though I am not so sure about that. Western science, and Western thought generally, has considerable difficulty coping with phenomena that are clearly neither completely actual nor completely imaginary (Note 6); this is so because of the dogmatic philosophic materialism that we inherit from the Enlightenment and Newtonian physics. Physicists generally avoid confronting the issue, taking refuge behind a smoke-screen of mathematical abstraction.
SH 8/6/14

Note 1  This tends to be the Eastern view : neither the Chinese nor the Hindus seem to have felt much need for a purposeful all-powerful creator God. For the Chinese, there were certain patterns and trends to be discerned but nothing more, a ceaseless flux with one situation engendering another like the hexagrams of the Y Ching. Consulting the Y Ching involves a chance event, the fall of the yarrow sticks that the consultant throws at random. Whereas in divination chance is essential, in science every vestige of randomness is eliminated as much as is humanly possible.
For the Hindus, the universe was not an artefact as it was for Boyle who likened it to the Strasbourg clock : it was a ‘dance’, that of Shiva. This is a very different conception since dances do not have either meaning or purpose apart from display and self-gratification. Also, although they may be largely repetitive, the (solitary) dancer is at liberty to introduce new movements at any moment.
As for the Buddhists, there was never any question of the universe being created : the emergence of the physical world was regarded as an accident with tragic consequences.

Note 2 “Needham tells of the irony with which Chinese men of letters of the eighteenth century greeted the Jesuits’ announcement of the triumphs of modern science. The idea that nature was governed by simple, knowable laws appeared to them as a perfect example of anthropomorphic foolishness. (…) If any law were involved [in the harmony and regularity of phenomena] it would be a law that no one, neither God nor man, had ever conceived of. Such a law would also have to be expressed in a language undecipherable by man and not be a law established by a creator conceived in our own image.”
Prigogine, Order out of Chaos p. 48 

Note 3  Contemporary Chaos Theory deals with systems that are deterministic in principle but unpredictable in practice. This is because of their sensitive dependence on initial conditions which can never be known exactly. True chaos cannot be modelled by Chaos Theory so-called. 
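The standard illustration of this sensitivity (a sketch, not drawn from the text) is the logistic map x → 4x(1−x): two starting values differing by one part in 10^10 soon end up macroscopically far apart, even though every step is fully deterministic.

```python
# Sensitive dependence on initial conditions in the logistic map x -> 4x(1-x)
x, y = 0.3, 0.3 + 1e-10   # two initial conditions, almost identical
max_sep = 0.0
for n in range(60):
    x, y = 4*x*(1 - x), 4*y*(1 - y)
    max_sep = max(max_sep, abs(x - y))

print(max_sep)   # the gap grows to order 1: long-range prediction is hopeless
```

Since the initial discrepancy roughly doubles at each iteration, any error in the initial conditions, however small, eventually swamps the prediction.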

Note 4 See pages 12-14 of Rupert Sheldrake’s remarkable book, The Presence of the Past, where he quotes this passage from Nietzsche, likewise that from William James. Dr Sheldrake has perhaps contributed more than any other single person to the re-emergence of the ‘randomness/order’ paradigm. In his vision, ‘eternal physical laws’ are essentially reduced to habits and the universe as a whole is viewed as in some sense a living entity. “The cosmos now seems more like a growing and developing organism than like an eternal machine. In this context, habits may be more natural than immutable laws” (Sheldrake, The Presence of the Past, Introduction).
  Stephen Wolfram also adopts a similar philosophic position, believing as he does that not only can randomness give rise to complex order, but must eventually do so. Both thinkers would probably concur with the idea that “systems with complex behaviour in nature must be driven by the same kind of essential spirit as humans” (Wolfram, A New Kind of Science, p. 845).

Note 5.  This idea that causality comes into existence when, and only when, the first event-chains are formed, may be compared to the Buddhist doctrine that ‘karma’ ceases in nirvana, or rather that nirvana is to be defined as the complete absence of karma. Karma literally means ‘activity’ and there is no activity in the Void, or K0. Ultimate events are the equivalent of the Buddhist dharma ─ actually it should be dharmas plural but I cannot bring myself to write dharmas. Reality is basically composed of three ‘entities’, nirvana, karma, dharma, whose equivalents within Ultimate Event Theory are K0 or the Void, Causality (or Dominance) and Ultimate Events. All three are required for a description of phenomenal reality because the ultimate events must come from somewhere and must cohere together if they are to form ‘objects’, the causal force providing the force of cohesion. There is no need to mention matter nor for that matter (sic) God.

Note 6   “ ‘The possible’ cannot interact with the real: non-existent entities cannot deflect real ones from their paths. If a photon is deflected, it must have been deflected by something, and I have called that thing a ‘shadow photon’. Giving it a name does not make it real, but it cannot be true that an actual event, such as the arrival and detection of a tangible photon, is caused by an imaginary event such as what that photon ‘could have done’ but did not do. It is only what really happens that can cause other things really to happen. If the complex motions of the shadow photon in an interference experiment were mere possibilities that did not in fact take place, then the interference phenomena we see would not, in fact, take place.”       David Deutsch, The Fabric of Reality, pp. 48-9

Comment by SH : This is fine but I cannot go along with Deutsch’s resolution of the problem by having an infinite number of different worlds; indeed I regard it as crazy.

 

