Archives for category: Buddhism

What is time? Time is succession. Succession of what? Of events, occurrences, states. As someone put it, time is Nature’s way of stopping everything happening at once.

In a famous thought experiment, Descartes asked himself what it was not possible to disbelieve in. He imagined himself alone in a quiet room cut off from the bustle of the world and decided he could, momentarily at least, disbelieve in the existence of France, the Earth, even other people. But one thing he absolutely could not disbelieve in was that there was a thinking person, cogito ergo sum (‘I think, therefore I am’).
Those of us who have practiced meditation, and many who have not, know that it is quite possible to momentarily disbelieve in the existence of a thinking/feeling person. But what one absolutely cannot disbelieve in is that thoughts and bodily sensations of some sort are occurring and, not only that, that these sensations (most of them anyway) occur one after the other. One outbreath follows an inbreath, one thought leads on to another and so on and so on until death or nirvana intervenes. Thus the grand conclusion: There are sensations, and there is succession.  Can anyone seriously doubt this?

 Succession and the Block Universe

That we, as humans, have a very vivid, and more often than not acutely painful, sense of the ‘passage of time’ is obvious. A considerable body of the world’s literature is devoted to bewailing the transience of life, while one of the world’s four or five major religions, Buddhism, has been well described as an extended meditation on the subject. Cathedrals, temples, marble statues and so on are attempts to defy the passage of time, ars longa vita brevis.
However, contemporary scientific doctrine, as manifested in the so-called ‘Block Universe’ theory of General Relativity, tells us that everything that occurs happens in an ‘eternal present’, the universe ‘just is’. In his latter years, Einstein took the idea seriously enough to mention it in a letter of consolation to the son of his lifelong friend, Besso, on the occasion of the latter’s death. “In quitting this strange world he [Michele Besso] has once again preceded me by a little. That doesn’t mean anything. For those of us who believe in physics, this separation between past, present and future is an illusion, however tenacious.”
Never mind the mathematics: such a theory does not make sense. For, even supposing that everything that can happen during what is left of my life has in some sense already happened, this is not how I perceive things. I live my life day to day, moment to moment, not ‘all at once’. Just possibly, I am quite mistaken about the real state of affairs, but it would seem nonetheless that there is something not covered by the ‘eternal present’ theory, namely my successive perception of, and participation in, these supposedly already existent moments (Note 1). Perhaps, in a universe completely devoid of consciousness, ‘eternalism’ might be true but not otherwise.

Barbour, the author of The End of Time, argues that we do not ever actually experience ‘time passing’. Maybe not, but this is only because the intervals between different moments, and the duration of the moments themselves, are so brief that we run everything together like movie stills. According to Barbour, there exists just a huge stack of moments, some of which are interconnected, some not, but this stack has no inherent temporal order. But even if it were true that all that can happen is already ‘out there’ in Barbour’s Platonia (his term), picking a pathway through this dense undergrowth of discrete ‘nows’ would still be a successive procedure.

I do not think time can be disposed of so easily. Our impressions of the world, and conclusions drawn by the brain, can be factually incorrect ― we see the sun moving around the Earth for example ― but to deny either that there are sense impressions or that they appear successively, not simultaneously, strikes me as going one step too far. As I see it, succession is an absolutely essential component of lived reality: either there is succession or there is just an eternal now, and I see no third possibility.

What Einstein’s Special Relativity does, however, demonstrate is that there is seemingly no absolute ‘present moment’ applicable right across the universe (because of the speed of light barrier). But in Special Relativity, at least, succession and causality still very much exist within any particular local section, i.e. inside a particular event’s light cone. One can only surmise that the universe as a whole must have a complicated, mosaic-like successiveness made up of interlocking pieces (tesserae).

Irreversibility
In various areas of physics, especially thermodynamics, there is much discussion of whether certain sequences of events are reversible or not, i.e. could take place other than in the usual observed order. This is an important issue but is a quite different question from whether time (in the sense of succession) exists. Were it possible for pieces of broken glass to spontaneously reform themselves into a wine glass, this process would still occur successively and that is the point at issue.

Time as duration

‘Duration’ is a measure of how long something lasts. If time “is what the clock says”, as Einstein is reported to have once said, duration is measured by what the clock says at two successive moments (‘times’). The trick is to have, or rather construct, a set of successive events that we take as our standard set and relate all other sets to this one. The events of the standard set need to be punctual and brief, the briefer the better, and the interval between successive events must be both noticeable and regular. The tick-tock of a pendulum clock provided such a standard set for centuries, though today we have the far more regular vibrations of quartz crystals or the radiation associated with the hyperfine transition of caesium atoms.
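To make the ‘standard set’ idea concrete, here is a minimal sketch (my own illustration, not from the original post). The only factual ingredient is the SI definition of the second as 9,192,631,770 cycles of the caesium-133 hyperfine radiation; the function name and values are just placeholders.

```python
# Duration as a count of successive standard events multiplied by their period.
CS_CYCLES_PER_SECOND = 9_192_631_770   # SI definition of one second

def duration_seconds(cycles_at_start: int, cycles_at_end: int) -> float:
    """Duration = number of elapsed standard events x the standard period."""
    elapsed_cycles = cycles_at_end - cycles_at_start
    return elapsed_cycles / CS_CYCLES_PER_SECOND

print(duration_seconds(0, 4_596_315_885))   # exactly half a second
```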

Continuous or discontinuous?

 A pendulum clock records and measures time in a discontinuous fashion: you can actually see, or hear, the minute or second hand flicking from one position to another. And if we have an oscillating mechanism such as a quartz crystal, we take the extreme positions of the cycle which comes to the same thing.
However, this schema is not so evident if we consider ‘natural’ clocks such as sundials which are based on the apparent continuous movement of the sun. Hence the familiar image of time as a river which never stops flowing. Newton viewed time in this way, which is why he analysed motion in terms of ‘fluxions’, or ‘flowings’. Because of the calculus, which Newton (along with Leibniz) invented, it is the continuous approach which has overwhelmingly prevailed in the West. But a perpetually moving object, or one perceived as such, is useless for timekeeping: we always have to home in on specific recurring configurations such as the longest or shortest shadow cast. We have to freeze time, as it were, if we wish to measure temporal intervals.

Event time

The view of time as something flowing and indivisible is at odds with our intuition that our lives consist of a succession  of moments with a unique orientation, past to future, actual to hypothetical. Science disapproves of the latter common sense schema but is powerless to erase it from our thoughts and feelings: clearly the past/present/future schema is hard-wired and will not go away.
If we dispense with continuity, we can also get rid of  ‘infinite divisibility’ and so we arrive at the notion, found in certain early Buddhist thinkers, that there is a minimum temporal interval, the ksana. It is only recently that physicists have even considered the possibility that time  is ‘grainy’, that there might be ‘atoms of time’, sometimes called chronons. Now, within a minimal temporal interval, there would be no possible change of state and, on this view, physical reality decomposes into a succession of ‘ultimate events’ occupying  minimal locations in space/time with gaps between these locations. In effect, the world becomes a large (but not infinite) collection of interconnected cinema shows proceeding at different rates.
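For scale, the figure usually cited in mainstream physics when discussing a possible ‘grain’ of time is the Planck time (given here purely for comparison; no identification with the ksana is implied): t_P = √(ħG/c⁵) ≈ 5.39 × 10⁻⁴⁴ seconds.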

Joining forces with time 

The so-called ‘arrow of time’ is simply the replacement of one localized moment by another and the procedure is one-way because, once a given event has occurred, there is no way that it can be ‘de-occurred’. Awareness of this gives rise to anxiety ― “the moving finger writes, and having writ/ Moves on, nor all thy piety nor wit/ Shall lure it back to cancel half a line….” Most religious, philosophic and even scientific systems attempt to allay this anxiety by proposing a domain that is not subject to succession, is ‘beyond time’. Thus Plato and Christianity, the West’s favoured religion. And even if we leave aside General Relativity, practically all contemporary scientists have a fervent belief in the “laws of physics” which are changeless and in effect wholly transcendent.
Eastern systems of thought tend to take a different approach. Instead of trying desperately to hold on to things such as this moment, this person, this self, Buddhism invites us to  ‘let go’ and cease to cling to anything. Taoism goes even further, encouraging us to find fulfilment and happiness by identifying completely with the flux of time-bound existence and its inherent aimlessness. The problem with this approach is, however, that it is not clear how to avoid simply becoming a helpless victim of circumstance. The essentially passive approach to life seemingly needs to be combined with close attention and discrimination ― in Taoist terms, Not-Doing must be combined with Doing.

Note 1 And if we start playing with the idea that  not only the events but my perception of them as successive is already ‘out there’, we soon get involved in infinite regress.

 

Note : Recent posts have focused on ‘macroscopic’ events and event-clusters, especially those relevant to personal ‘success’ and ‘failure’. I shall be returning to such themes eventually, but the point has now come to review the basic ‘concepts’ of ‘micro’ (‘ultimate’) events. The theory ─ or rather paradigm ─ seems to  know where it wants to go, and, after much trepidation, I have decided to give it its head, indeed I don’t seem to have any choice in the matter.  An informal ─ but nonetheless tolerably stringent ─ treatment now seems more appropriate than my original attempted semi-axiomatic presentation. SH   26/6/14

 Beginnings  

It is always necessary to start somewhere and assume certain things, otherwise you can never get going. Contemporary physics may be traced back to Democritus’ atomism, that is to the idea that ‘everything’ is composed of small ‘bodies’ that cannot be further divided and which are indestructible ─ “Nothing exists except atoms and void” as Democritus put it succinctly. What Newton did was essentially to add in the concept of a ‘force’ acting between atoms and which affects the motions of the atoms and the bodies they form. ‘Classical’, i.e. post-Renaissance but pre-twentieth-century physics, is based on the conceptual complex atom/body/force/motion.

Events instead of things  

Ultimate Event Theory (UET) starts with the concept of the ‘event’. An event is precisely located: it happens at a particular spot and at a particular time, and there is nothing ‘fuzzy’ about this place and time. In contrast to a solid object, an ‘event’ does not last long; its ‘nature’ is to appear, disappear and never come back again. Above all, an event does not ‘evolve’: either it does not occur at all or it occurs ‘in one piece’. Last but not least, an ultimate event is always absolutely still: it cannot ‘move’ or change, only appear and disappear. However, in certain rare cases it can give rise to other ultimate events, either similar or dissimilar.

Rejection of Infinity 

The spurious notion of ‘infinity’ is completely excluded from UET: this clears the air considerably and allows one to deduce at once certain basic properties about events. To start with, macroscopic events, the only ones we are directly aware of, are not (in UET) made up of an ‘infinite’ number of ‘infinitely small’ micro-events: they are composed of a particular, i.e. finite, number of ‘ultimate events’ ─ ultimate because such micro-events cannot be further broken down (Note 1).

 Size and shape of Ultimate Events

Ultimate events may well vary in size and shape and other characteristics but, as a preliminary simplifying assumption, I assume that they are of the same shape and size (supposing these terms are even meaningful at such a basic level). All ultimate events thus have exactly the same ‘spatio/temporal extent’ and this extent is an exact match for the ‘grid-spots’ or ‘event-pits’ that ultimate events occupy on the Event Locality. The occupied region may be envisaged as a cuboid of dimensions su × su × su, or maybe a sphere of radius su, or indeed any shape of fixed volume which includes three dimensions at right angles to each other.
Every ultimate event occupies such a ‘space’ or ‘place’ for the duration of a single ksana of identical ‘length’ t0. Since everything that happens is reducible to a certain number of ultimate events occupying fixed positions on the Locality, ‘nothing can happen’ within a spatial region smaller than su³ or within a ‘lapse of time’ smaller than t0. Though there may conceivably be smaller spatial and temporal intervals, they are irrelevant since Ultimate Event Theory is a theory about ‘events’ and their interactions, not about the Locality itself.

Event Kernels and Event Capsules 

The region su³ × t0 corresponds to the precise region occupied by an individual ultimate event. As soon as I started playing around with this simple model of precisely located ultimate events, I saw that it would be necessary to introduce the concept of the ‘Event Capsule’. The latter normally has a much greater spatial extent than that occupied by the ultimate event itself: it is only the small central region known as the ‘kernel’ that is of spatial extent su³, the relation between the kernel and the capsule as a whole being somewhat analogous to that between the nucleus and the enclosing atom. Although each ‘emplacement’ on the Locality can only receive a single ultimate event, the vast spatial region surrounding the ‘event-pit’ itself is, as it were, ‘flexible’. The essential point is that the Event Capsule, which completely fills the available ‘space’, is able to expand and contract when subject to external (or possibly also internal) forces.
There are, however, fixed limits to the size of an Event Capsule ─ everything except the Event Locality itself has limits in UET (because of the Anti-Infinity Axiom). The Event Capsule varies in spatial extent from the ‘default’, maximal size of s0³ to the absolute minimum size of su³, which it attains when the Event Capsule has shrunk to the dimensions of the ‘kernel’ housing a single ultimate event.
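The kernel/capsule relationship can be summarised in a short sketch (my own illustration; the numerical values of s0, su and t0 are placeholders, since the theory does not fix them here):

```python
from dataclasses import dataclass

S0 = 1.0      # default (maximal) side of an Event Capsule -- placeholder value
SU = 0.001    # side of the kernel, the absolute minimum -- placeholder value
T0 = 1.0      # duration of one ksana; fixed, never expands or contracts

@dataclass
class EventCapsule:
    side: float = S0                     # current side, always between SU and S0

    def __post_init__(self):
        if not (SU <= self.side <= S0):
            raise ValueError("capsule side must lie between su and s0")

    @property
    def spatial_extent(self) -> float:
        return self.side ** 3            # capsule volume

    @property
    def temporal_extent(self) -> float:
        return T0                        # one ksana, whatever the spatial size

print(EventCapsule().spatial_extent)         # default capsule: s0^3
print(EventCapsule(side=SU).spatial_extent)  # fully contracted to the kernel: su^3
```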

Length of a ksana 

The ‘length’ of a ksana, the duration or ‘temporal dimension’ of an ultimate event, likewise of an Event Capsule, does not expand or contract but, by hypothesis, always stays the same. Why so? One could in principle make the temporal interval flexible as well but this seems both unnecessary and, to me, unnatural. The size of the enveloping capsule should not, by rights, have anything to do with what actually occurs inside it, i.e. with the ultimate event itself, and, in particular, should not affect how long an ultimate event lasts. A gunshot is the same gunshot whether it is located within an area of a few square feet, within a square kilometre or a whole county, and it lasts the same length of time whether we record it as simply having taken place in such and such a year, or between one and one thirty p.m. of a particular day within this year.

Formation of Event-Chains and Event-clusters 

In contrast to objects, a fortiori organisms, it is in the nature of an ultimate event to appear and then disappear for ever: transience and ephemerality are of the very essence of Ultimate Event Theory. However, for reasons that we need not enquire into at present, certain ultimate events acquire the ability to repeat more or less identically during (or ‘at’) subsequent ksanas, thus forming event-chains. If this were not so, there would be no universe, no life, nothing stable or persistent, just a “blooming, buzzing confusion” of ephemeral ultimate events firing off at random and immediately subsiding into darkness once again.
Large repeating clusters of events that give the illusion of permanence are commonly known as ‘objects’ or ‘bodies’ but before examining these, it is better to start with less complex entities. The most rudimentary  type of event-chain is that composed of a single ultimate event that repeats identically at every ksana.

‘Rest Chains’

Classical physics kicks off with Galileo’s seminal concept of inertia which Newton later developed and incorporated into his Principia (Note 2). In effect, according to Galileo and Newton,  the ‘natural’ or ‘default’ state of a body is to be “at rest or in constant straight-line motion”. Any perceived deviation from this state is to be attributed to the action of an external force, whether this force be a contact force like friction or a force which acts from a distance like gravity.
As we know, Newton also laid it down as a basic assumption that all bodies in the universe attract all others. This means that, strictly speaking, there cannot be such a thing as a body that is exactly at rest (or moving exactly at a constant speed in a straight line) because the influence of other massive bodies would inevitably make such a body deviate from a state of perfect rest or constant straight-line motion. And for Newton there was only one universe and it was not empty.
However, if we  consider a body all alone in the depths of space, it is reasonable to dismiss the influence of all other bodies as entirely negligible ─ though the combined effect of all such influences is never exactly zero in Newtonian Mechanics. Our ideal isolated body will then remain at rest for ever, or if conceived as being in motion, this ‘motion’ will be constant and in a straight line. Thus Newtonian Mechanics. Einstein replaced the classical idea of an ‘inertial frame’ with the concept of a ‘free fall frame’, a region of Space/Time where no external forces could trouble an object’s state of rest ─ but also small enough for there to be no variation in the local gravitational field.
[Event Capsule image]
In a similar spirit, I imagine an isolated event-chain completely removed from any possible interference from other event-chains. In the simplest possible case, we thus have a single ultimate event which will carry on repeating indefinitely (though not for ‘ever and ever’) and each time it re-appears, this event will occupy an exactly similar spatial region on the Locality of size s0³ and exist for one ksana, that is for a ‘time-length’ of t0. Moreover, the interval between successive appearances, supposing there is one, will remain the same. The trajectory of such a repeating event, the ‘event-line’ of the chain, may, very crudely, be modelled as a series of dots within surrounding boxes all of the same size and each ‘underneath’ the other.
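Very roughly, in code (my own toy illustration; positions are just integer grid labels and the units mean nothing):

```python
# The event-line of a 'rest chain': a single ultimate event repeating at the
# equivalent grid position at every successive ksana.
def rest_chain(position: tuple[int, int, int], n_ksanas: int):
    """Return the event-line as (ksana index, grid position) pairs."""
    return [(k, position) for k in range(n_ksanas)]

for event in rest_chain((0, 0, 0), 5):
    print(event)     # the same spot, ksana after ksana -- a column of dots
```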

True rest?

Such an event-chain may be considered to be ‘truly’ at rest ─ inasmuch as a succession of events can be so considered. In such a context, ‘rest’ means a minimum of interference from other event-chains and the Locality itself.
Newton thought that there was such a thing as ‘absolute rest’ though he conceded that it was apparently not possible to distinguish a body in this state from a similar body in an apparently identical state that was ‘in steady straight-line motion’ (Note 3). He reluctantly conceded that there were no ‘preferential’ states of motion and/or rest.
But Newton dealt in bodies, that is, with collections of atoms which were eternal and never changed. In Ultimate Event Theory, ‘everything’ is at rest for the space of a single ksana but ‘everything’ is also ceaselessly being replaced by other ‘things’ (or by nothing at all) over the ‘space’ of two or more ksanas. In the next post I will investigate what meaning, if any, is to be given to ‘velocity’, ‘acceleration’ and ‘inertia’ in Ultimate Event Theory.       SH  26/6/14

 Note 1  One could envisage the rejection of infinity as a postulate, one of the two or three most important postulates of Ultimate Event Theory, but I simply regard the concept of infinity as completely meaningless, as ‘not even wrong’.
I do, however, admit the possibility of the ‘para-finite’ which is a completely different and far more reasonable concept. The ‘para-finite’ is a domain/state where all notions of measurement and quantity are meaningless and irrelevant: it is essentially a mystical concept (though none the worse for that) rather than a mathematical or physical one and so should be excluded from natural science.
The Greeks kept the idea of actual infinity firmly at arm’s length. This was both a blessing and, most people would claim, also a curse. A blessing because their cosmological and mathematical models of reality made sense, a curse because it stopped them developing the ‘sciences of motion’, kinematics and dynamics. But it is possible to have a science of dynamics without bringing in infinity and indeed this is one of the chief aims of Ultimate Event Theory.

Note 2  Galileo only introduced the concept of an ‘inertial frame’ to meet the obvious objection to the heliocentric theory, namely that we never feel the motion of the Earth around the Sun. Galileo’s reply was that neither do we necessarily detect the regular motion of a ship on a calm sea ─ the ship is presumably being rowed by well-trained galley-slaves. In his Dialogue Concerning the Two Chief World Systems (pp. 217-8, translator Drake), Galileo’s spokesman, Salviati, invites his friends to imagine themselves in a makeshift laboratory, a cabin below deck (and without windows) furnished with various homespun pieces of equipment such as a bottle hung upside down with water dripping out, a bowl of water with goldfish in it, some flies and butterflies, weighing apparatus and so on. Salviati claims that it would be impossible to know, simply by observing the behaviour of the drips from the bottle, the flight of insects, the weight of objects and so on, whether one was safely moored at a harbour or moving in a straight line at a steady pace on a calm sea.
        Galileo does not seem to have realized the colossal importance of this thought-experiment. Newton, for his part, does realize its significance but is troubled by it since he believes ─ or at least would like to believe ─ that there is such a thing as ‘absolute motion’ and thus also ’absolute rest’. The question of whether Galileo’s principle did, or did not, cover optical (as opposed to mechanical) experiments eventually gave rise to the theory of Special Relativity. The famous Michelson-Morley experiment was, to everyone’s surprise at the time, unable to detect any movement of the Earth relative to the surrounding ‘ether’. The Earth itself had in effect become Galileo’s ship moving in an approximately straight line at a steady pace through the surrounding fluid.
Einstein made it a postulate (assumption)  of his Special Theory that “the laws of physics are the same in all inertial frames”. This implied that the observed behaviour of objects, and even living things, would be essentially the same in any ‘frame’ considered to be ‘inertial’. The simple ‘mind-picture’ of a box-like container with objects inside it that are free to move, has had tremendous importance in Western science. The strange thing is that in Galileo’s time vehicles  ─ even his ship ─ were very far from being ‘inertial’, but his idea has, along with other physical ideas, made it possible to construct very tolerable ‘inertial frames’ such as high-speed trains, ocean liners, aeroplanes and space-craft.

Note 3  Newton is obviously ill at ease when discussing the possibility of ‘absolute motion’ and ‘absolute rest’. It would seem that he believed in both for philosophical (and perhaps also religious) reasons but he conceded that it would, practically speaking, be impossible to find out whether a particular state was to be classed as ‘rest’ or ‘straight-line motion’. In effect, his convictions clashed with his scientific conscience.

“Absolute motion, is the translation of a body from one absolute place into another. Thus, in a ship under sail, the relative place of a body is that part of the ship which the body possesses, or that part of its cavity which the body fills, and which therefore moves together with the ship, or its cavity. But real, absolute rest, is the continuance of the body in the same part of that immovable space in which the ship itself, its cavity and all that it contains, is moved. (…) It may be, that there is no such thing as an equable motion, whereby time may be accurately measured. (…) Instead of absolute places and motions we use relative ones; and that without any inconvenience in common affairs: but in philosophical disquisitions, we ought to abstract from our senses and consider things themselves, distinct from what are only sensible measures of them. For it may be that there is no body really at rest, to which the places and motions of others may be referred.”
Newton, Principia, I, 6 ff.

 

Two Models of the Beginning of the Universe

 There are basically two models for how the universe began. According to the first, the universe, by which we should understand the whole of physical reality, was deliberately created by a unique Being. This is the well-known Judaeo-Christian schema which until recently reigned supreme.
According to the second schema, the universe simply came about spontaneously: no one planned it and no one made it happen. It ‘need not have been’, was essentially ‘the product of chance’. This seems to be the Eastern view, though we also come across it in some Western societies at an early stage of their development, for example in Greece (Note 1).
Although for a long time the inhabitants of the Christian West were totally uninterested in the workings of the natural world, the ‘Creationist’ model eventually led on to the development of science as we know it. For, so it was argued, if the universe was deliberately created, its creator must have had certain rules and guidelines that He imposed on his creation. These rules could conceivably be discovered, in which case many of the mysteries of the physical universe would be explained. Moreover, if the Supreme Designer or Engineer really was all-knowing, one set of rules would suffice for all time. This was basically the world-view of the men who masterminded the scientific revolution in the West,  men such as Galileo, Kepler, Descartes, Newton and Leibnitz, all firm believers in both God and the power of mathematics which they viewed as the ‘language of God’ inasmuch as He had one.
If, on the other hand, the universe was the product of chance, one would not expect it to necessarily obey a set of rules, and if the universe was in charge of itself, as it were, things could change abruptly at any moment. In such a case, clever people might indeed notice certain regularities in the natural world but there would be no guarantee that these regularities were binding or would continue indefinitely. The Chinese equivalent of Euclid was the Y Ching, The Book of Changes, where the very title indicates a radically different world view. The universe is something that is in a perpetual state of flux, while nonetheless remaining ‘in essence’ always the same. According to Needham, the main reason why the scientific and technological revolution did not happen in China rather than the West, given that China was for a long time centuries ahead of the West technically, was that Chinese thinkers lacked  the crucial notion of unchanging ‘laws of Nature’ (Note 2).
Interestingly, there is a noticeable shift in Western thought towards the second model : the consensus today is that the universe did indeed come about ‘by chance’ and the same goes for life. However, contemporary physicists still hold tenaciously onto the idea that there are nonetheless certain more or less unchanging physical laws and rational principles which are in some sense ‘outside Nature’ and independent of it.  So the laws remain even though the Lawmaker has long since died quietly in his bed.

Emergent Order and Chaos

Models of the second ‘Spontaneous Emergence’ type generally posit an initial ‘Chaos’ which eventually settles down into a semblance of Order. True Chaos (not the contemporary physical theory of the same name (Note 3)) is a disordered free-for-all: everything runs into everything else and the world, life, us, are at best an ephemeral emergent order that suddenly occurs like the ripples the wind makes on the surface of a pond ─ and may just as suddenly disappear.
Despite the general triumph of Order over Chaos in Western thinking, even in the 19th century a few discordant voices dissented from the prevailing  orthodoxy ─ but none of them were practising scientists. Nietzsche, in a remarkable passage quoted by Sheldrake, writes:

“The origin of the mechanical world would be a lawless game which would ultimately acquire such consistency as the organic laws seem to have… All our mechanical laws would not be eternal but would have survived innumerable alternative mechanical laws” (Note 4)

Note that, according to this view, even the ‘laws of Nature’ are not fixed once and for all : they are subject to a sort of natural selection process just like everything else. This is essentially the viewpoint adopted in Ultimate Event Theory i.e. the universe was self-created, it has ascertainable ‘laws’ but these regularities need not be unchanging nor binding in all eventualities.

In the Beginning…. Random Ultimate Events  

In the beginning was the Void but the Void contained within itself the potential for ‘something’. For some reason a portion of the Void became active and random fluctuations appeared across its surface. These flashes that I call ‘ultimate events’ carved out for themselves emplacements within or on the Void, spots where they could and did have occurrence. Part at least of the Void had become a place where ultimate events could happen, i.e. an Event Locality. Such emplacements or ‘event-pits’ do not, by assumption, have a fixed shape but they do have fixed ‘extent’.
Usually, ultimate events occur once and disappear for ever, having existed for the ‘space’ of a single ksana only. However, if this was all that happened ever, there would be no universe, no matter, no solar system, no us. There must, then, seemingly have been some mechanism which allowed for the eventual formation of relatively persistent event clusters and event-chains : randomness must ultimately be able to give rise to its opposite, causal order. This is reasonable enough since if a ‘system’ is truly random, and is allowed to go on long enough, it will eventually cover all possibilities, and the emergence of ‘order’ is one of them.
As William James writes:
“There must have been a far-off antiquity, one is tempted to suppose, when things were really chaotic. Little by little, out of all the haphazard possibilities of that time, a few connected things and habits arose, and the rudiments of regular performance began.”

This suggests the most likely mechanism : repetition which in time gave rise to ingrained habits. Such a simple progression requires no directing intelligence and no complicated physical laws.
Suppose an ultimate event has occurrence at a particular spot on the Locality; it then disappears for ever. However, one might imagine that the ‘empty space’ remains, at least for a certain time. (Or, more correctly, the emplacement repeats, even though its original occupant is long gone). The Void has thus ceased to be completely homogeneous because it is no longer completely empty: there are certain mini-regions where emplacements for further ultimate events persist. These spots  might attract further ultimate events since the emplacement is there already, does not have to be created.
This goes on for a certain time until a critical point is reached. Then something completely new happens: an ultimate event repeats in the ‘same’ spot at the very next ksana, and, having done this once, carries on repeating for a certain time. The original ultimate event has thus acquired the miraculous property of persistence and an event-chain is born. Nothing succeeds like success and the persistence of one  event-chain makes the surrounding region more propitious for the development of similar rudimentary event-chains which, when close enough, combine to form repeating event-clusters. This is roughly how I see the ‘creation’ of the massive repeating event-cluster we call the universe. Whether the latter emerged at one fell swoop (Big Bang Theory) or bit by bit as in Hoyle’s modified Steady State Theory is not the crucial point and will be decided by observation. However, I must admit that piecemeal manifestation seems more likely a priori. Either way, according to UET, the process of event-chain formation ‘from nothing’ is still going on. 

The Occurrence Function  

This, then, is the general schema proposed ─ how to model it mathematically? We require a ‘Probability Occurrence Function’ which increases very slowly but, once it has reached a critical point, becomes unity or slightly greater than unity.
The Void or Origin, referred to in UET as K0, is ‘endless’ but we shall only be concerned with a small section of it. When empty of ultimate events, K0 is featureless but, when active, it has the capacity to provide emplacements for ultimate events ─ for otherwise they would not occur. A particular region of K0 can accommodate a maximum of, say, N ultimate events at one and the same ksana. N is a large, but not ‘infinite’ number ─ ‘infinity’ and ‘infinitesimals’ are completely excluded from UET. If there are N potential emplacements and the events appear at random, there is initially a 1/N chance of an ultimate event occurring at one particular emplacement.
However, once an ultimate event has occurred somewhere (and subsequently disappeared), the emplacement remains and the re-occurrence of an event at this spot, or within a certain radius of this spot,  becomes very slightly more likely, i.e. the probability is greater than 1/N. For no two events are ever completely independent in Ultimate Event Theory. Gradually, as more events have occurrence within this mini-region, the radius of probable re-occurrence narrows and  eventually an ultimate event acquires the miraculous property of repeating at the same spot (strictly speaking, the equivalent spot at a subsequent ksana). In other words, the probability of re-occurrence is now a certainty and the ultimate event has turned into an event-chain.
As a first very crude approximation I suggest something along the following lines. P(m) stands for the probability of the occurrence of an ultimate event at a particular spot. The Rule is : 

P(m+1) = P(m) × (1/N) × e^k     m = (–1), 0, 1, 2, 3…

P(0) = 1     P(1) = (1/N)

Then,

P(2) = (1/N) × (1/N) e^k = (1/N²) e^k
P(3) = ((1/N²) e^k) × (1/N) e^k = (1/N³) e^(2k)
P(4) = ((1/N³) e^(2k)) × (1/N) e^k = (1/N⁴) e^(3k)
P(5) = ((1/N⁴) e^(3k)) × (1/N) e^k = (1/N⁵) e^(4k)
P(m+1) = (1/N^(m+1)) e^(mk)

Now, to have P(m+1) ≥ 1 we require

(1/N^(m+1)) e^(mk) ≥ 1
e^(mk) ≥ N^(m+1)
mk ≥ (m+1) ln N     (taking logs base e on both sides)
k ≥ ((m+1)/m) ln N

       If we set k as the first integer > ln N this will do the trick.
For example, if we take N = 10⁵⁰, then ln N = 115.129….
       Then e^(116(m+1)) > (10⁵⁰)^(m+1) for any m ≥ 0.

However, we do not wish the function to get to unity or above straightaway. Rather, we wish for some function of N which converges very slowly to ln N, or rather to some value slightly above ln N (so that it can attain ln N). Thus k = f(N) such that e^(f(N)(m+1)) ≥ N^(m+1).
       I leave someone more competent than myself to provide the details of such a function.
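Purely as a numerical illustration of how such a rule behaves (my own sketch, using only the example values N = 10⁵⁰ and k = 116 quoted above; nothing here is part of the theory itself): with k just one integer above ln N, the probability first plunges far below 1 and then creeps back up, only reaching unity after a hundred-odd ksanas ─ the kind of slow build-up to a critical point described above.

```python
import math

N = 1e50
k = 116                     # first integer above ln N = 115.129...
ln_N = math.log(N)

log_P = -ln_N               # log P(1) = -ln N, since P(1) = 1/N
m = 1
while log_P < 0:            # apply the rule until P(m) climbs back to 1
    log_P += k - ln_N       # log P(m+1) = log P(m) + k - ln N
    m += 1

print(m)                    # first m with P(m) >= 1: about 134 with these values
```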
This ‘Probability Occurrence Function’ is the most important function in Ultimate Event Theory since without it  there would be no universe, no us, indeed nothing at all except random ultimate events firing off aimlessly for all eternity. Of course, when I speak of a mathematical function providing a mechanism for the emergence of the universe,  I do not mean to imply that a mathematical formula in any way ‘controls’ reality, or is even a ‘blueprint’ for reality. From the standpoint of UET, a mathematical formula is simply a description in terms comprehensible to humans of what apparently goes on and,  given the basic premises of UET, must go on.

Note the assumptions made. They are that:

(1) There is a region of K0 which can accommodate N ultimate events within a single ksana, i.e. can become an Event Locality with event capacity N;
(2) Ultimate events occur at random and continue to occur at random except inasmuch as they are more likely to re-appear at a spot where they have previously appeared;
(3) ‘Time’ in the sense of a succession of moments of equal duration, i.e. ksanas, exists from the very beginning, but not ‘space’;
(4) ‘Space’ comes into existence in a piecemeal fashion as, or maybe just before, ultimate events have occurrence — without events there is no need for space;
(5) Causality comes into existence when the first event-chain is formed : prior to that, there is no causality, only random emergence of events from wherever events come from (Note 5).

What happens once an event-chain has been formed? Does the Occurrence Function remain ≥ 1 or does it decline again? There are two reasons why the Probability Occurrence Function probably (sic) does at some stage decline, one theoretical and one observational. Everything in UET, except K0 the Origin, is finite ─ and K0 should be viewed as being neither finite nor infinite, ‘para-finite’ perhaps. Therefore, no event can keep on repeating indefinitely : all event-chains must eventually terminate, either giving rise to different event-chains or simply disappearing back into the Void from which they emerged. This is the theoretical reason.
Now for the observational reason. As it happens, we know today that the vast majority of ‘elementary particles’ are very short-lived and since all particles are, from the UET point of view, relatively persistent event-chains or event-clusters, we can conclude that most event-chains do not last for very long. On the other hand, certain particles like the proton and the neutrino are so long-lasting as to be virtually immortal. The cause of ‘spontaneous’ radioactive decay is incidentally not known, indeed the process is considered to be completely random (for a particular particle) which is tantamount to saying there is no cause. This is interesting since it shows that randomness emerges and re-emerges where it was least expected. I conceive of event-chains that have lost their causal bonding dwindling away in much the same way as they began only in reverse. There is a sort of pleasing symmetry here: randomness gives rise to order which gives rise to randomness once more.
There is the question of how we are to conceive the ‘build up’ of probability in the occurrence function: exactly where does this occur? Since this process has observable effects, it is more than a mathematical fiction. One could imagine that this slow build-up, and eventual weakening and fading away, takes place in a sort of semi-real domain, a hinterland between K0 and K1, the physical universe. I denote this as K01.
I am incidentally perfectly serious in this suggestion. Some such half-real domain is required  to cope, amongst many other things, with the notorious ‘probabilities’ — more correctly ‘potentialities’ — of the Quantum Wave Function. The notion of a semi-real region where ‘semi-entities’ gradually become more and more real, i.e. closer to finalization, is a perfectly respectable idea in Hinayana Buddhism ─ many  authors speak of 17 stages in all,  though I am not so sure about that. Western science and thought generally has considerable difficulty coping with phenomena that are clearly neither completely actual nor completely imaginary (Note 6); this is so because of the dogmatic philosophic materialism that we inherit from the Enlightenment and Newtonian physics. Physicists generally avoid confronting the issue, taking refuge behind a smoke-screen of mathematical abstraction.                                                                SH  8/6/14

Note 1  This tends to be the Eastern view: neither the Chinese nor the Hindus seem to have felt much need for a purposeful all-powerful creator God. For the Chinese, there were certain patterns and trends to be discerned but nothing more, a ceaseless flux with one situation engendering another like the hexagrams of the Y Ching. Consulting the Y Ching involves a chance event, the fall of the yarrow sticks that the consultant throws at random. Whereas in divination chance is essential, in science every vestige of randomness is eliminated as much as is humanly possible.
For the Hindus, the universe was not an artefact as it was for Boyle who likened it to the Strasbourg clock : it was a ‘dance’, that of Shiva. This is a very different conception since dances do not have either meaning or purpose apart from display and self-gratification. Also, although they may be largely repetitive, the (solitary) dancer is at liberty to introduce new movements at any moment.
As for the Buddhists, there was never any question of the universe being created : the emergence of the physical world was regarded as an accident with tragic consequences.

Note 2 “Needham tells of the irony with which Chinese men of letters of the eighteenth century greeted the Jesuits’ announcement of the triumphs of modern science. The idea that nature was governed by simple, knowable laws appeared to them as a perfect example of anthropomorphic foolishness. (…) If any law were involved [in the harmony and regularity of phenomena] it would be a law that no one, neither God nor man, had ever conceived of. Such a law would also have to be expressed in a language undecipherable by man and not be a law established by a creator conceived in our own image.”
Prigogine, Order out of Chaos p. 48 

Note 3  Contemporary Chaos Theory deals with systems that are deterministic in principle but unpredictable in practice. This is because of their sensitive dependence on initial conditions which can never be known exactly. True chaos cannot be modelled by Chaos Theory so-called. 

Note 4 See pages 12-14 of Rupert Sheldrake’s remarkable book, The Presence of the Past, where he quotes this passage, likewise the one from William James given above. Dr Sheldrake has perhaps contributed more than any other single person to the re-emergence of the ‘randomness/order’ paradigm. In his vision, ‘eternal physical laws’ are essentially reduced to habits and the universe as a whole is viewed as in some sense a living entity. “The cosmos now seems more like a growing and developing organism than like an eternal machine. In this context, habits may be more natural than immutable laws” (Sheldrake, The Presence of the Past, Introduction).
  Stephen Wolfram also adopts a similar philosophic position, believing as he does that not only can randomness give rise to complex order, but must eventually do so. Both thinkers would probably concur with the idea that “systems with complex behaviour in nature must be driven by the same kind of essential spirit as humans” (Wolfram, A New Kind of Science p. 845)

Note 5.  This idea that causality comes into existence when, and only when, the first event-chains are formed, may be compared to the Buddhist doctrine that ‘karma’ ceases in nirvana, or rather that nirvana is to be defined as the complete absence of karma. Karma literally means ‘activity’ and there is no activity in the Void, or K0. Ultimate events are the equivalent of the Buddhist dharma ─ actually it should be dharmas plural but I cannot bring myself to write dharmas. Reality is basically composed of three ‘entities’, nirvana, karma, dharma, whose equivalents within Ultimate Event Theory are K0 or the Void, Causality (or Dominance) and Ultimate Events. All three are required for a description of phenomenal reality because the ultimate events must come from somewhere and must cohere together if they are to form ‘objects’, the causal force providing the force of cohesion. There is no need to mention matter nor for that matter (sic) God.

Note 6   “ ‘The possible’ cannot interact with the real: non-existent entities cannot deflect real ones from their paths. If a photon is deflected, it must have been deflected by something, and I have called that thing a ‘shadow photon’. Giving it a name does not make it real, but it cannot be true that an actual event, such as the arrival and detection of a tangible photon, is caused by an imaginary event such as what that photon ‘could have done’ but did not do. It is only what really happens that can cause other things really to happen. If the complex motions of the shadow photon in an interference experiment were mere possibilities that did not in fact take place, then the interference phenomena we see would not, in fact, take place.”       David Deutsch, The Fabric of Reality pp.48-9

Comment by SH: This is fine but I cannot go along with Deutsch’s resolution of the problem by having an infinite number of different worlds, indeed I regard it as crazy.

 


As related in the previous post, Einstein, in his epoch-making 1905 paper, based his theory of Special Relativity on just two postulates,

  1.  The laws of physics take the same form in all inertial frames.
  2.  The speed of light in free space has the same value for all observers in inertial frames irrespective of the relative motion of the source and the observer.

I asked myself if I could derive the main results of the Special Theory, the Rule for the Addition of Velocities, Space Contraction, Time Dilation and the ‘Equivalence’ of Mass and Energy from UET postulates.
Instead of Einstein’s Postulate 2, the ‘absolute value of the speed of light’, I employ a more general but very similar principle, namely that there is a ‘limiting speed’ for the propagation of causal influences from one spot on the Locality to another. In the simplest case, that of an  event-chain consisting of a single ultimate event that repeats at every ksana, this amounts to asking ourselves ‘how far’ the causal influence can travel ‘laterally’ from one ksana to the next. I see the Locality as a sort of grid extending indefinitely in all directions where  each ‘grid-position’ or ‘lattice-point’ can receive one, and only one, ultimate event (this is one of the original Axioms, the Axiom of Exclusion). At each ksana the entire previous spatial set-up is deftly replaced by a new, more or less identical one. So, supposing we can locate the ‘same’ spot, i.e. the ‘spot’ which replaces the one where the ultimate event had occurrence at the last ksana, is there a limit to how far to the left (or right) of this spot the ultimate event can re-occur? Yes, there is. Why? Well, I simply cannot conceive of there being no limit to how far spatially an ‘effect’ ─ in this case the ‘effect’ is a repetition of the original event ─ can be from its cause. This would be a holographic nightmare where anything that happens here affects, or at least could affect, what happens somewhere billions of light years away. One or two physicists, notably Heisenberg, have suggested something of the sort but, for my part, I cannot seriously contemplate such a state of affairs.  Moreover, experience seems to confirm that there is indeed a ‘speed limit’ for all causal processes, the limit we refer to by the name of c.
However, this ‘upper speed limit’ has a somewhat different and sharper meaning in Ultimate Event Theory than it does in matter-based physics because c (actually c*) is an integer and corresponds to a specific number of adjacent ‘grid-positions’ on the Locality existing at or during a single ksana. It is a distance rather than a speed and even this is not quite right: it is a ‘distance’ estimated not in terms of ‘lengths’ but only in terms of the number of intermediary ultimate events that could conceivably be crammed into this interval.
In UET a distinction is made between an attainable limiting number of grid-positions to right (or left) denoted c* and the lowest unattainable limit, c, though this finicky distinction in many cases can be neglected. But the basic schema is this. A ‘causal influence’, to be effective, must not only be able to at least traverse the distance between one ksana and the next ‘vertically’ (otherwise nothing would happen) but must also stretch out ‘laterally’, i.e. ‘traverse’ or rather ‘leap over’ a particular number of grid-positions. There is an upper limit to the number of positions that can be ‘traversed’, namely c*, an integer. This number, which is very great but not infinite ─ actual infinity is completely banished from UET ─ defines the universe we (think we) live in since it puts a limit to the operation of causality (as Einstein clearly recognized), and without causality there can, as far as I am concerned, be nothing worth calling a universe. Quite conceivably, the value of this constant c (or c*) is very different in other universes, supposing they exist, but we are concerned only with this ‘universe’ (massive causally connected more or less identically repeating event-cluster).
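In code, the idea amounts to a simple reachability test (my own sketch; the tiny value of c* is purely illustrative, the text only says it is a very large integer):

```python
C_STAR = 3       # placeholder value for the limiting lateral displacement per ksana

def reachable_next_ksana(x0: int) -> range:
    """Grid positions an event at x0 could causally influence one ksana later."""
    return range(x0 - C_STAR, x0 + C_STAR + 1)

def causally_connectable(x0: int, x1: int, ksanas_elapsed: int) -> bool:
    """The lateral displacement may not exceed c* grid-spaces per elapsed ksana."""
    return abs(x1 - x0) <= C_STAR * ksanas_elapsed

print(list(reachable_next_ksana(0)))     # [-3, ..., 3]
print(causally_connectable(0, 10, 2))    # False: outside this event's 'light cone'
```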
So far, so good. This sounds a rather odd way of putting things, but we are still pretty close to Special Relativity as it is commonly taught. What of Einstein’s other principle? Well, firstly, I don’t much care for the mention of “laws of physics”, a concept which Einstein along with practically every other modern scientist inherited from Newton and which harks back to a theistic world-view whereby God, the supreme law-giver, formulated a collection of ‘laws’ that everything must from the moment of Creation obey ─ everything material at any rate. My concern is with what actually happens, whether or not what happens is ‘lawful’. Nonetheless, there do seem to be certain very general principles that apply across the board and which may, somewhat misleadingly, be classed as laws. So I shall leave this question aside for the moment.
The UET Principle that replaces Einstein’s First Principle (“that the laws of physics are the same in all inertial frames”) is rather tricky to formulate but, if the reader is patient and broad-minded enough, he or she should get a good idea of what I have in mind. As a first formulation, it goes something like this:

The occupied region between two or more successive causally related positions on the Locality is invariant. 

         This requires a little elucidation. To start with, what do I understand by ‘occupied region’? At least to a first approximation, I view the Locality (the ‘place’ where ultimate events can and do have occurrence) as a sort of three-dimensional lattice extending in all directions which flashes on and off rhythmically. It would seem that extremely few ‘grid-spots’ ever get occupied at all, and even fewer ever become the seats of repeating events, i.e. the location of the first event of an event-chain. The ‘Event Locality’ of UET, like the Space/Time of matter-based physics, is a very sparsely populated place.
Now, suppose that an elementary event-chain has formed but is marooned in an empty region of the Locality. In such a case, it makes no sense to speak of ‘lateral displacement’ : each event follows its predecessor and re-appears at the ‘same’ ─ i.e.  ‘equivalent’ ─ spot. Since there are no landmark events and every grid-space looks like every other, we can call such an event-chain ‘stationary’. This is the default case, the ‘inertial’ case to use the usual term.
We concentrate for the moment on just two events, one the clone of the other re-appearing at the ‘same spot’ a ksana later. These two events in effect define an ‘Event Capsule’ extending from the centre (called ‘kernel’ in UET) of the previous grid-space to the centre of the current one and span a temporal interval of one ksana. Strictly speaking, this ‘Event Capsule’ has two parts, one half belonging to the previous ksana and the other to the second ksana, but, at this stage, there is no more than a thin demarcation line separating the two extremities of the successive ksanas. Nonetheless, it would be quite wrong (from the point of view of UET) to think of this ‘Event Capsule’ and the whole underlying ‘spatial/temporal’ set-up as being ‘continuous’. There is no such thing as a ‘Space/Time Continuum’ as Minkowski understood the term.  ‘Time’ is not a dimension like ‘depth’ which can seamlessly be added on to ‘length’ or ‘width’ : there is a fundamental opposition between the spatial and temporal aspect of things that no physical theory or mathematical artifice can completely abolish. In the UET  model, the demarcations between the ‘spatial’ parts of adjacent Event Capsules do not widen, they  remain simple boundaries, but the demarcations between successive ksanas widen enormously, i.e. there are gaps in the ‘fabric’ of time. To be sure there must be ‘something’ underneath which persists and stops everything collapsing, but this underlying ‘substratum’ has no physical properties whatsoever, no ‘identity’, which is why it is often referred to, not inaccurately, both in Buddhism and sometimes even in modern physics, as ‘nothing’.
To return to the ‘Constant Region Postulate’. The elementary ‘occupied region’ may be conceived as a ‘Capsule’ having the dimensions s0 × s0 × s0 = s0³ for the spatial extent and t0 for time, i.e. a region of extent s0³ × t0. These dimensions are fixed once and for all and, in the simplest UET model, s0 is a maximum and t0 is a minimum. Restricting ourselves for simplicity to a single spatial dimension and a single temporal dimension, we thus have an ‘Event Rectangle’ of s0 by t0.
        For anything of interest to happen, we need more than one event-chain and, in particular, we need at least three ultimate events, one of which is to serve as a sort of landmark for the remaining pair. It is only by referring to this hypothetical or actual third event, occurring as it does at a particular spot independently of the event-pair, that we can meaningfully talk of the ‘movement’ to left or right of the second ultimate event in the pair with relation to the first. Alternatively, one could imagine an ultimate event giving rise to two events, one occurring ‘at the same spot’ and the other so many grid-spaces to the right (or left). In either case, we have an enormously expanded ‘Event Capsule’ spatially speaking compared to the original one. The Principle of the Constancy of the Area of the Occupied Region asserts that this ‘expanded’ Event Capsule, which we can imagine as a ‘Space/Time rectangle’ (rather than a Space/Time parallelepiped), always has the ‘same’ area.
How can this be possible? Quite simply by making the spatial and temporal ‘dimensions’ inversely proportional to each other. As I have detailed in previous posts, we have in effect a ‘Space/Time Rectangle’ of sides sv and tv (subscript v for variable) such that sv × tv = s0 × t0 = Ω = constant. Just conceivably, one could make s0 a minimum and t0 a maximum but this would result in a very strange universe indeed. In this model of UET, I take s0 as a maximum and t0 as a minimum. These dimensions are those of the archetypal ‘stationary’ or ‘inertial’ Event Capsule, one far removed from the possible influence of any other event-chains. I do not see how the ‘mixed ratio’ s0 : t0 can be determined on the basis of any fundamental physical or logical considerations, so this ratio just ‘happens to be’ what it is in the universe we (think we) live in. This ratio, along with the determination of c, which is a number (positive integer), are the most important constants in UET and different values would give rise to very different universes. In UET s0/t0 is often envisaged in geometrical terms: tan β = s0/t0 = constant. s0 and t0 also have minimum and maximum values respectively, noted as su and tu, the subscript u standing for ‘ultimate’. We thus have a hyperbola but one constrained within limits so that there is no risk of ‘infinite’ values.
[Relativity hyperbola diagram]
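The bookkeeping of the constant-area rule is easy to show in a few lines (my own sketch; the rest dimensions and the contraction factors are placeholder numbers ─ the text does not specify here how the factor depends on v):

```python
S0, T0 = 1.0, 1.0          # 'rest' dimensions of the Event Capsule (placeholders)
OMEGA = S0 * T0            # the invariant area s0 x t0

def capsule_sides(contraction: float) -> tuple[float, float]:
    """Given a spatial contraction factor (0 < factor <= 1), return (sv, tv)
    with the product sv * tv held fixed at Omega."""
    sv = S0 * contraction
    tv = OMEGA / sv        # time dilates by exactly as much as space contracts
    return sv, tv

for factor in (1.0, 0.8, 0.5):
    sv, tv = capsule_sides(factor)
    print(f"sv = {sv:.2f}   tv = {tv:.2f}   area = {sv * tv:.2f}")   # area stays 1.00
```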

 

 

What is ‘speed’?   Speed is not one of the basic SI units. The three SI mechanical units are the metre, the standard of length, the kilogram, the standard of mass, and the second, the standard of time. (The remaining four units are the ampere, kelvin, candela and mole). Speed is a secondary entity, being the ratio of space to time, metre to second. For a long time, since Galileo in fact, physicists have recognized the ‘relational’ nature of speed, or rather velocity (which is a ‘vector’ quantity, speed + direction). To talk meaningfully about a body’s speed you need to refer it to some other body, preferably a body that is, or appears to be, fixed (Note 1). This makes speed a rather insubstantial sort of entity, a will-o’-the-wisp, at any rate compared to  ‘weight’, ‘impact’, ‘position’, ‘pain’ and so forth. The difficulty is compounded by the fact that we almost always consider ourselves to be ‘at rest’ : it is the countryside we see and experience whizzing by us when seated in a train. It requires a tremendous effort of imagination to see things from ‘the other object’s point of view’. Even a sudden jolt, an acceleration, is registered as a temporary annoyance that is soon replaced by the same self-centred ‘state of rest’. Highly complex and contrived set-ups like roller-coasters and other fairground machines are required to give us the sensation of ‘acceleration’ or ‘irregular movement’, a sensation we find thrilling precisely because it is so inhabitual. Basically, we think of ourselves as more or less permanently at rest, even when we know we are moving around. In UET everything actually is at rest for the space of a single ksana, it does not just appear to be and everything that happens occurs ‘at’ or ‘within’ a ksana (the elementary temporal interval).
I propose to take things further ─ not in terms of personal experience but physical theory. As stated, there is in UET no such thing as ‘continuous motion’, only succession ─ a succession of stills. An event takes place here, then a ksana or more later, another event, its replica perhaps, takes place there. What matters is what occurs and the number and order of the events that occur, everything else is secondary. This means not only that ultimate events do not move around ─ they simply have occurrence where they do have occurrence ─  but also that the distances between the events are in a sense ‘neither here nor there’, to use the remarkably  apt everyday expression. In UET v signifies a certain number of grid-spaces to right or left of a fixed point, a shift that gets repeated every ksana (or in more complex cases with respect to more than one ksana). In the case of a truncated event-chain consisting of just two successive events, v is the same as d, the ‘lateral displacement’ of event 2 with respect to the position of event 1 on the Locality (more correctly, the ‘equivalent’ of such a position a ksana later). Now, although the actual number of ‘grid-positions’ to right or left of an identifiable spot on the Locality is fixed, and continues to be the same if we are dealing with a ‘regular’ event-chain, the distance between the centres (‘kernels’) of adjacent spots is not fixed but can take any number (sic) of permissible values ranging from 0 to c* according to the circumstances. The ‘distance’ from one spot to another can thus be reckoned in a variety of legitimate ways ─ though the choice is not ‘infinite’. The force of the Constancy of the Occupied Region Principle is that, no matter how these intra-event distances are measured or experienced, the overall ‘area’ remains the same and is equal to that of the ‘default’ case, that of a ‘stationary’ Event Capsule (or in the more extended case a succession of such capsules).
This is a very different conception from that which usually prevails within Special Relativity as it is understood and taught today. Discussing the question of the ‘true’ speed of a particular object whose speed is different according to what co-ordinate system you use, the popular writer on mathematics, Martin Gardner, famously wrote, “There is no truth of the matter”. Although I understand what he meant, this is not how I would put it. Rather, all permissible ‘speeds’, i.e. all integral values of v, are “the truth of the matter”. And this does not lead us into a hopeless morass of uncertainty where “everything is relative” because, in contrast to ‘normal’ Special Relativity, there is in UET always a fixed framework of ultimate events whose number within a certain region of the Locality and whose individual ‘size’ never changes. How we evaluate the distances between them, or more precisely between the spots where they can and do occur, is an entirely secondary matter (though often one of great interest to us humans).

Space contraction and Time dilation 

In most books on Relativity, one has hardly begun before being launched into what is pretty straightforward stuff for someone at undergraduate level but what is, for the layman, a completely indigestible mass of algebra. This is a pity because the physical principle at work, though it took the genius of Einstein to detect its presence, is actually extremely simple and can much more conveniently be presented geometrically rather than, as usual today, algebraically. As far as I am concerned, space contraction and time dilation are facts of existence that have been shown to be true in any number of experiments : we do not notice them because the effects are very small at our perceptual level. Although it is probably impossible to completely avoid talking about ‘points of view’ and ‘relative states of motion’ and so forth, I shall try to reduce such talk to a minimum. It makes a lot more sense to forget about hypothetical ‘observers’ (who most of the time do not and could not possibly exist) and instead envisage length contraction and time dilation as actual mechanisms which ‘kick in’ automatically, much as the centrifugal governor on Watt’s steam-engine kicks in to regulate the supply of steam and hence the engine’s speed. See things like this and keep at the back of your mind a skeletal framework of ultimate events and you won’t have too much trouble with the concepts of space contraction and time dilation. After all, why should the distances between events have to stay the same? It is like only being allowed to take photographs from a standing position. These distances don’t need to stay the same provided the overall area or extent of the ‘occupied region’ remains constant, since it is this, and the causally connected events within it, that really matters.
Take v to represent a certain number of grid-spaces in one direction, a shift which repeats; for our simple truncated event-chain of just two events it is d, the ‘distance’ between two spots. d is itself conceived as a multiple of the ‘intra-event distance’, that between the ‘kernels’ of any two adjacent ‘grid-positions’ in a particular direction. For any specific case, i.e. a given value of d or v, this ‘inter-possible-event’ distance does not change, and the specific extent of the kernel, where every ultimate event has occurrence if it does have occurrence, never changes at all. There is, as it were, a certain amount of ‘pulpy’, ‘squishy’ material (cf. cytoplasm in a cell) which surrounds the ‘kernel’ and which is, as it were, compressible. So much for the ‘spatial’ part of the ‘Event Capsule’. The ‘temporal’ part, however, has no pulp but is ‘stretchy’, or rather the interval between ksanas is.
If the Constant Region Postulate is to work, we have somehow to arrange things so that, for a given value of v or d, the spatial and temporal distances sort themselves out in such a way that the overall area nonetheless remains the same. How to do this? The following geometrical diagram illustrates one way of doing this by using the simple formula tan θ = v/c = sin φ. Here v is an integral number of grid-positions ─ the more complex case where v is a rational number will be considered in due course ─ and c is the lowest unattainable limit of grid-positions (in effect c* + 1).

[Diagram: relativity circle, tan θ = v/c = sin φ]
Do these contractions and dilations ‘actually exist’ or are they just mathematical toys? As far as I am concerned, the ‘universe’ or whatever else you want to call what is out there, does exist and such simultaneous contractions and expansions likewise. Put it like this. The dimensions of loci (spots where ultimate events could in principle have occurrence) in a completely empty region of the Locality do not expand and contract because there is no ‘reason’ for them to do so : the default dimensions suffice. Even when we have two spots occupied by independent, i.e. completely disconnected,  ultimate events nothing happens : the ‘distances’ remain the ordinary stationary ones. HOWEVER, as soon as there are causal links between events at different spots, or even the possibility of such links, the network tightens up, as it were, and one can imagine causal tendrils stretching out in different directions like the tentacles of an octopus. These filaments or tendrils can and do cause contractions and expansions of the lattice ─ though there are definite elastic limits. More precisely, the greater the value of v, the more grid-spaces the causal influence ‘misses out’ and the more tilted the original rectangle becomes in order to preserve the same overall area.
We are for the moment only considering a single ‘Event Capsule’ but, in the case of a ‘regular event-chain’ with constant v ─ the equivalent of ‘constant straight-line motion’ in matter-based physics ─ we have  a causally connected sequence of more or less identical ‘Event Capsules’ each tilted from the default position as much as, but no more than, the last (since v is constant for this event-chain).
This simple schema will take us quite a long way. If we compare the ‘tilted’ spatial dimension to the horizontal one, calling the latter d and the former d′ we find from the diagram that d′ cos φ = d and likewise that t′ = t/cos φ . Don’t bother about the numerical values : they can be worked out  by calculator later.
These are essentially the relations that give rise to the Lorentz Transformations but, rather than state these formulae and get involved in the whole business of convertible co-ordinate systems, it is better for the moment to stay with the basic idea and its geometrical representation. The quantity noted cos φ which depends on  v and c , and only on v and c, crops up a lot in Special Relativity. Using the Pythagorean Formula for the case of a right-angled triangle with hypotenuse of unit length, we have

(1 × cos φ)² + (1 × sin φ)² = 1²   or   cos²φ + sin²φ = 1
        Since sin φ is set at v/c we have
        cos²φ = 1 – sin²φ = 1 – (v/c)²      so      cos φ = √(1 – (v/c)²)

         More often than not, this quantity √(1 – (v²/c²)) (referred to as 1/γ in the literature) is transferred over to the other side, so we get the formula

         d′ = (1/cos φ) d  =  d/√(1 – (v²/c²))  =  γd

Viewed as an angle, or rather the reciprocal of the cosine of an angle, the ubiquitous γ of Special Relativity is considerably less frightening.
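For readers who like to see this numerically, here is a minimal sketch of γ read off as the reciprocal of a cosine. The value of c below is an arbitrary illustrative integer, not one proposed by the theory.

from math import asin, cos, sqrt

c = 100                      # illustrative unattainable limit (a positive integer)
for v in (0, 10, 50, 99):    # integral displacement rates, 0 <= v <= c* = c - 1
    phi = asin(v / c)        # the angle defined by sin(phi) = v/c
    gamma = 1 / cos(phi)     # gamma as the reciprocal of a cosine
    # same number as the textbook formula 1/sqrt(1 - v^2/c^2)
    assert abs(gamma - 1 / sqrt(1 - (v / c) ** 2)) < 1e-9
    print(v, round(gamma, 4))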

A Problem
It would appear that there is going to be a problem as d, or in the case of a repeating ‘rate’, v, approaches the limit c. Indeed, it was for this reason that I originally made a distinction between an attainable distance (attainable in one ksana), c*, and an unattainable one, c. Unfortunately, this does not eliminate all the difficulties but discussion of this important point will  be left to another post. For the moment we confine ourselves to ‘distances’ that range from 0 to c* and to integral values of d (or v).

Importance of the constant c* 

Now, it must be clearly understood that all sorts of ‘relations’ ─ perhaps correlations is an apter term ─ ‘exist’ between arbitrarily distant spots on the Locality (distant either spatially or temporally or both) but we are only concerned with spots that are either occupied by causally connected ultimate events, or could conceivably be so occupied. For event-chains with a 1/1 ‘reappearance rhythm’, i.e. one event per ksana, the relation tan θ = v/c = sin φ (v < c) applies (see diagram) and this means that grid-spots beyond the point labelled c (and indeed c itself) lie ‘outside’ the causal ‘Event Capsule’. Anything that I am about to deduce, or propose, about such an ‘Event Capsule’ in consequence does not apply to such points and the region containing them. Causality operates only within the confines of single ‘Event Capsules’ of fixed maximum size, and, by extension, connected chains of similar ‘Event Capsules’.
Within the bounds of the ‘Event Capsule’ the Principle of Constant Area applies. Any way of distinguishing or separating the spots where ultimate events can occur is acceptable, provided the setting is appropriate to the requirements of the situation. Distances are in this respect no more significant than, say, colours, because they do not affect what really matters : the number of ultimate events (or number of possible emplacements of ultimate events) between two chosen spots on the Locality, and the order of such events.
Now, suppose an ultimate event can simultaneously produce a  clone just underneath the original spot,  and  also a clone as far as possible to the right. (I doubt whether this could actually happen but it is a revealing way of making a certain point.)
What is the least shift to the right or left? Zero. In such a case we have the default case, a ‘stationary’ event-chain, or a pair belonging to such a chain. The occupied area, however, is not zero : it is the minimal s0³ t0. Setting v = 0 in the formula d′ = (1/cos φ) d makes γ = 1/√(1 – (0/c)²) = 1, so there is no difference between d′ and d. (But it is not the formula that dictates the size of the occupied region, as physicists tend to think : it is the underlying reality that validates the formula.)
For any value of d, or, in the case of repetition of the same lateral distance at each ksana, any value of v, we tilt the rectangle by the appropriate amount, or fit this value into the formula. For v = 10 grid-spaces, for example, we will have a tilted Space/Time Rectangle with one side (10 cos φ) s0 and the other side (1/(10 cos φ)) t0, where sin φ = 10/c so that cos φ = √(1 – (10/c)²). This is an equally valid space/time setting because the overall area is
         (10 cos φ) s0  ×  (1/(10 cos φ)) t0  =  s0 t0

We can legitimately apply any integral value of v < c and we will get a setting which keeps the overall area constant. However, this is done at a cost : the distances between the centres of the spatial elements of the event capsules shrink while the temporal distances expand. The default distance s0 has been shrunk to s0 cos φ, a somewhat smaller intra-event distance, and the default temporal interval t0 has been stretched to t0/cos φ, a somewhat greater distance. Note, however, that sticking to integral values of d or v means that cos φ does not, as in ‘normal’ physics, run through an ‘infinite’ gamut of values ─ and even when we consider the more complex case, taking reappearance rhythms into account, v is never, strictly never, irrational.
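As a quick sanity check of the setting just described, the following sketch simply follows the formulas as written above; s0, t0 and c are placeholder values.

from math import sqrt

s0, t0 = 1.0, 1.0                  # illustrative 'rest' dimensions
c = 100                            # illustrative limit
for v in range(1, c):              # every integral value of v below c
    cos_phi = sqrt(1 - (v / c) ** 2)
    # per-capsule distances: s0 shrinks, t0 stretches, product unchanged
    assert abs((s0 * cos_phi) * (t0 / cos_phi) - s0 * t0) < 1e-9
    # the tilted Space/Time Rectangle for this v (cf. the v = 10 example above)
    area = (v * cos_phi) * s0 * (1 / (v * cos_phi)) * t0
    assert abs(area - s0 * t0) < 1e-9
print('area preserved for all integral v from 1 to', c - 1)
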
What is the greatest possible lateral distance? Is there one? Yes, by Postulate 2 there is and this maximal number of grid-points is labelled c*. This is a large but finite number and is, in the case of integral values of v, equal to c – 1. In other words, a grid-space c spaces to the left or right is just out of causal range and everything beyond likewise (Note 2).

Dimensions of the Elementary Space Capsule

I repeat the two basic postulates of Ultimate Event Theory that are in some sense equivalent to Einstein’s two postulates. They are

1. The mixed Space/Time volume/area of the occupied parallelepiped/rectangle remains constant in all circumstances

 2. There is an upper limit to the lateral displacement of a causally connected event relative to its predecessor in the previous ksana

        Now, suppose we have an ultimate event that simultaneously produces a clone at the very next ksana in an equivalent spot AND another clone at the furthest possible grid-point c*. Even suppose, taking things to a ridiculous extreme to make a point, that a clone event is produced at every possible emplacement in between as well. Now, by the Principle of the Constancy of the Occupied Region, the entire occupied line of events in the second ksana can either have the ‘normal’ spacing between events, which is that of the ‘rest’ distance between kernels, s0, or, alternatively, we may view the entire line as being squeezed into the dimensions of a single ‘rest’ capsule, a dimension s0 in each of three spatial directions (only one of which concerns us). In the latter case, the ‘intra-event’ spacing will have shrunk to zero ─ though the precise region occupied by an ultimate event remains the same. Since intra-event distancing is really of no importance, either of these two opposed treatments is ‘valid’.
What follows is rather interesting: we have the spatial dimension of a single ‘rest’ Event Capsule in terms of su, the dimension of the kernel. Since, in this extreme case, we have c* events squashed inside a lateral dimension of s0, this means that
s0 = c* su, i.e. the relation s0 : su = c* : 1. But s0 and su are, by hypothesis, universal constants and so is c*. Furthermore, since by definition sv tv = s0 t0 = Ω = constant, t0/tv = sv/s0 and, fitting in the ‘ultimate’ s value, we have t0/tu = su/(c* su) = 1 : c*. In the case of ‘time’, the ‘ultimate’ dimension tu is a maximum since (by hypothesis) t0 is a minimum. c* is a measure of the extent of the elementary Event Capsule and this is why it is so important.
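In code the relations just derived look like this; c* and the ‘rest’ dimensions below are placeholders, and only the ratios matter.

c_star = 1000                  # hypothetical value of c*, for illustration only
s0, t0 = 1.0, 1.0              # illustrative 'rest' dimensions
OMEGA = s0 * t0

s_u = s0 / c_star              # kernel dimension, from s0 = c* x s_u
t_u = c_star * t0              # maximum temporal dimension, from t0 : t_u = 1 : c*

# the 'ultimate' pair still satisfies s_v * t_v = OMEGA
assert abs(s_u * t_u - OMEGA) < 1e-12
print(s_u, t_u, s_u * t_u)
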
In UET everything is, during the space of a single ksana, at rest and in effect problems of motion in normal matter-based physics become problems of statics in UET ─ in effect I am picking up the lead given by the ancient Greek physicists for whom statics was all and infinity non-existent. Anticipating the discussion of mass in UET, or its equivalent, this interpretation ‘explains’ the tremendously increased resistance of a body to (relative) acceleration : something that Bucherer and others have demonstrated experimentally. This resistance is not the result of some arbitrary “You mustn’t go faster than light” law : it is the resistance of a region on the Locality of fixed extent to being crammed full to bursting with ultimate events. And it does not matter if the emplacements inside a single Event Capsule are not actually filled : these emplacements, the ‘kernels’, cannot be compressed whether occupied or not. But an event occurring at the maximum number of places to the right, is going to put the ‘Occupied Region’ under extreme pressure to say the least. In another post I will also speculate as to what happens if c* is exceeded supposing this to be possible.      SH    9/3/14

Notes:

Note 1  Zeno of Elea noted the ‘relativity of speed’ about two and a half thousand years before Einstein. In his “Paradox of the Chariot”, the least known of his paradoxes, Zeno asks what is the ‘true’ speed of a chariot engaged in a chariot race. A particular chariot has one speed with respect to its nearest competitor, another compared to the slowest chariot, and a completely different one again relative to the spectators. Zeno concluded that “there was no true speed” ─ I would say, “no single true speed”.

Note 2  The observant reader will have noticed that when evaluating sin φ = v/c and thus, by implication, cos φ as well, I have used the ‘unattainable’ limit c while restricting v to the values 0 to c*, thus stopping 1/cos φ from becoming infinite. Unfortunately, this finicky distinction, which makes actual numerical calculations much more complicated,  does not entirely eliminate the problem as v goes to c, but this important issue will be left aside for the moment to be discussed in detail in a separate post.
If we allow only integral values of v ranging from 0 to c* = (c – 1), the final tilted Causal Rectangle has a ludicrously short ‘spatial side’ and a ridiculously long ‘temporal side’ (which means there is an enormous gap between ksanas). We have in effect

tan θ = (c–1)/c  (i.e. the angle is nearly 45 degrees or π/4)
and γ = 1/√(1 – (c–1)²/c²) = c/√(c² – (c–1)²) = c/√(2c – 1)
Now, 2c – 1 is very close to 2c, so γ ≈ c/√(2c) = √(c/2)

I am undecided as to whether any particular physical importance should be given to this value ─ possibly experiment will decide the issue one day.
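The approximation itself is easy to check numerically; the values of c below are, again, purely illustrative.

from math import sqrt

for c in (100, 10_000, 1_000_000):      # illustrative values of the limit c
    v = c - 1                           # v at its maximum attainable value c* = c - 1
    gamma_exact = c / sqrt(2 * c - 1)   # 1/cos(phi) evaluated at v = c - 1
    gamma_approx = sqrt(c / 2)          # the approximation gamma ~ sqrt(c/2)
    print(c, round(gamma_exact, 3), round(gamma_approx, 3))
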
In the event of v taking rational values (which requires a re-appearance rhythm other than 1/1), we get even more outrageous ‘lengths’  for sv and tv . In principle, such an enormous gap between ksanas, viewed from a vantage-point outside the speeding event-chain, should become detectable by delicate instruments and would thus, by implication, allow us to get approximate values for c and c* in terms of the ‘absolute units’ s0 and t0 . This sort of experiment, which I have no doubt will be carried out in this century, would be the equivalent in UET of the famous Millikan ‘oil-drop’ series of experiments that gave us the first good value of e, the basic unit of charge.

Although, in modern physics,  many elementary particles are extremely short-lived, others such as protons are virtually immortal. But either way, a particle, while it does exist, is assumed to be continuously existing. And solid objects such as we see all around us like rocks and trees are also assumed to carry on being rocks and trees from start to finish even though they do undergo considerable changes in physical and chemical composition. What is out there is  always there when it’s out there, so to speak.
However, in Ultimate Event Theory (UET) the ‘natural’ tendency is for everything to flash in and out of existence, and most ultimate events, the ‘atoms’ or elementary particles of Eventrics, disappear for ever leaving no trace; even with more precise instruments than we have at present, they would show up only as a sort of faint permanent background ‘noise’, a ‘flicker of existence’. Certain ultimate events, those that have acquired persistence ─ we shall not for the moment ask how and why they acquire this property ─ are able to bring about, i.e. cause, their own re-appearance and eventually to constitute a repeating event-chain or causally bonded sequence. And some event-chains also have the capacity to bond to other event-chains, eventually forming relatively persistent clusters that we know as matter. All apparently solid objects are, according to the UET paradigm, conglomerates of repeating ultimate events that are bonded together ‘laterally’, i.e. within the same ksana, and ‘vertically’, i.e. from one ksana to the next. And the cosmic glue is not gravity or any other of the four basic forces of contemporary physics but causality.

The Principle of Spatio/Temporal Continuity

Newtonian physics, like 18th and 19th century rationalism generally, assumes what I have referred to elsewhere as the Postulate of Spatio-temporal Continuity. This postulate or principle, though rarely explicitly stated in philosophic or scientific works, is actually one of the most important of the ideas associated with the Enlightenment and thus with the entire subsequent intellectual development of Western society (Note 1). In its simplest form, the principle says that an event occurring here, at a particular spot in Space-Time (to use the traditional term), cannot have an effect there, at a spot some distance away, without having effects at all (or at least most or some) intermediate spots. The original event, as it were, sets up a chain reaction and a frequent image used is that of a whole row of upright dominoes falling over one after the other once the first has been pushed over. This is essentially how Newtonian physics views the action of a force on a body or system of bodies, whether the force in question is a contact force (push/pull) or a force acting at a distance like gravity ─ though in the latter case Newton was unable to provide a mechanical model of how such a force could be transmitted across apparently empty space.
As we envisage things today, a blow affects a solid object by making the intermolecular distances of the surface atoms contract a little and they pass on this effect to neighbouring atoms which in turn affect nearby objects they are in contact with or exert an increased pressure on the atmosphere, and so on. Moreover, although this aspect of the question is glossed over in Newtonian (and even modern) physics, each transmission of the original impulse ‘takes time’ : the re-action is never instantaneous (except possibly in the case of gravity) but comes ‘a moment later’, more precisely at least one ksana later. This whole issue will be discussed in more detail later, but, within the context of the present discussion, the point to bear in mind is that, according to Newtonian physics and rationalistic thought generally, there can be no leap-frogging with space and time. Indeed, it was because of the Principle of Spatio-temporal Continuity that most European scientists rejected out of hand Newton’s theory of universal attraction since, as Newton admitted, there seemed to be no way that a solid body such as the Earth could affect another solid body such as the Moon thousands of kilometres away without affecting the empty space between. Even as late as the mid 19th century, Maxwell valiantly attempted to give a mechanical explanation of his own theory of electro-magnetism, and he did this essentially because of the widespread rock-hard belief in the principle of spatio-temporal continuity.

So, do I propose to take the principle over into UET? No, except possibly in special situations. If I did take over the principle, it would mean that certain regions of the Locality would soon get hopelessly clogged up with colliding event-chains. Indeed, if all the possible positions in between two spots where ultimate events belonging to the same chain had occurrence were occupied, event-chains would behave as if they were solid objects and one might as well just stick to normal physics. A further, and more serious, problem is that, if all event-chains were composed of events that repeated at every successive ksana, one would expect event-chains with the same ‘speed’ (space/time ratio with respect to some ‘stationary’ event-chain) to behave in the same way when confronted with an obstacle. Manifestly, this does not happen since, for example, photon event-chains behave very differently from neutrino event-chains even though both propagate at the same, or very similar, speeds.
One of the main reasons for elaborating a theory of events in the first place was my deep-rooted conviction ─ intuition if you like ─ that physical reality is discontinuous. I do not believe there is, or can be, such a thing as continuous motion, though there is and probably always will be succession and thus change since, even if nothing else is happening, one ksana is perpetually being replaced by another, different, one ─ “the moving finger writes, and, having writ, moves on” (Rubaiyat of Omar Khayyam). Moreover, this movement is far from smooth : ‘time’ is not a river that flows at a steady rate (as Newton envisaged it) but a succession of ‘moments’, beads of different sizes threaded together to make a chain and with minute gaps between the beads which allow the thread that holds them together to become momentarily visible.
If, then, one abandons the postulate of Spatio-temporal Continuity, it becomes perfectly feasible for members of an event-chain to ‘miss out’ intermediate positions and so there most definitely can be ‘leap-frogging’ with space and time. Not only are apparently continuous phenomena discontinuous but one suspects that they have very different staccato rhythms.

‘Atomic’ Event Capsule model

 At this point it is appropriate to review the basic model.
I envisage each ultimate event as having occurrence at a particular spot on the Locality, a spot of negligible but not zero extent. Such spots, which receive (or can receive) ultimate events are the ‘kernels’ of much larger ‘event-capsules’ which are themselves stacked together in a three-dimensional lattice. I do not conceive of there being any appreciable gaps between neighbouring co-existing event-capsules : at any rate, if there are gaps they would seem to be very small and of no significance, essentially just demarcation lines. According to the present theory these spatial ‘event-capsules’ within which all ultimate events have occurrence cannot be extended or enlarged  ─ but they can be compressed. There is, nonetheless,  a limit to how far they can be squeezed because the kernels, the spots where ultimate events can and do occur, are incompressible.
I believe that time, that is to say succession, definitely exists; in consequence, not only ultimate events but the space capsules themselves, or rather the spots on the Locality where there could be ultimate events, appear and disappear just like everything else. The lattice framework, as it were, flicks on and off and it is ‘on’ for the duration of a ksana, the ultimate time interval (Note 2). When we have a ‘rest event-chain’ ─ and every event-chain is ‘at rest’ with respect to itself and an imaginary observer moving on or with it ─ the ksanas follow each other in close succession, i.e. are as nearly continuous as an intrinsically  discontinuous process can be.
According to the theory, the ‘size’ or ‘extent’ of a ksana cannot be reduced ─ otherwise there would be little point in introducing the concept of a minimal temporal interval and we would be involved in infinite regress, the very thing which I intend to avoid at all costs. However, the distance between ksanas can, so it is suggested, be extended, or, more precisely, the distance between the successive kernels of the event capsules, where the ultimate events occur, can be extended. That is, there are gaps between events. As is explained in other posts, in UET the ‘Space/Time region’ occupied by the successive members of an event-chain remains the same irrespective of ‘states of motion’ or other distinguishing features. But the dimensions themselves can and do change. If the space-capsules contract, the time dimension must expand and this can only mean that the gaps between ksanas widen (since the extent of an ‘occupied’ ksana is constant). The more the space capsules contract, the more the gaps must increase (Note 3). But, as with everything else in UET, there is a limiting value since the space capsules cannot contract beyond the spatial limits of the incompressible kernels. Note that this ‘Constant Region Principle’ only applies to causally related regions of space ─ roughly what students of SR view as ‘light cones’.

The third parameter of motion

 In traditional physics, when considering an object or body ‘in motion’, we essentially only need to specify two variables : spatial position and time. Considerations of momentum and so forth are only required because they affect future positions at future moments, and aid prediction. To specify an object’s ‘position in space’, it is customary in scientific work to relate the object’s position to an imaginary spot called the Origin where three mutually perpendicular axes meet. To specify the object’s position ‘in time’ we must show or deduce how many ‘units of time’ have elapsed since a chosen start position when t = 0. Essentially, there are only two parameters required, ‘space’ and ‘time’ : the fact that the first parameter requires (at least) three values is not, in the present context, significant.
Now, in UET we likewise need to specify an event’s position with regard to ‘space’ and ‘time’. I envisage the Event Locality at any ‘given moment’ as being composed of an indefinitely extendable set of ‘grid-positions’. Each ‘moment’ has the same duration and, if we label a particular ksana 0 (or 1), we can attach a (whole) number to an event subsequent to what happened when t = 0 (or rather k = 0). As anyone who has a little familiarity with the ideas of Special Relativity knows, the concept of an ‘absolute present’ valid right across the universe is problematical to say the least. Nonetheless, we can talk of events occurring ‘at the same time’ locally, i.e. during or at the same ksana. (The question of how these different ‘time zones’ interlock will be left aside for the moment.)
As in normal physics, we can represent the trajectory of an ‘object’ by using three axes with the y axis representing time; due to lack of space and dimension, we often squash the three spatial dimensions down to two or, more simply still, use a single ‘space’ axis, x (Note 4). In normal physics the trajectory of an object at rest (zero speed) will be represented by a continuous vertical straight line and an object moving at constant non-zero speed relative to an object considered to be stationary will be represented by a slanting but nonetheless still straight line. Accelerated motion produces a ‘curve’ that is not straight. All this essentially carries over into UET except that, strictly, there should be no continuous lines at all but only dots that, if joined up, would form lines. Nonetheless, because the size of a ksana is so small relative to our very crude senses, it is usually acceptable to represent an ‘object’s’ trajectory as a continuous line. What is straight in normal physics will be straight in UET. But there is a third variable of motion in UET which has no equivalent in normal physics, namely an event’s re-appearance rhythm.
        Fairly early on, I came up against what seemed to be an insuperable difficulty with my nascent model of physical reality. In UET I make a distinction between an attainable ‘speed limit’ for an event-chain and an upper unattainable limit, noting the first c* and the second c. This allows me to attribute a small mass ─ mass has not yet been defined in UET but this will come ─ to such ‘objects’ as photons. However, this distinction is not significant in the context of the present discussion and I shall use the usual symbol c for either case. Now, it is notorious that different elementary particles (ultimate event chains) which apparently have the same (or very nearly identical) speeds do not behave in the same way when confronted with obstacles (large dense event clusters) that lie on their path. Whereas it is comparatively easy to block visible light and not all that difficult to block or at least muffle much more energetic gamma rays, it is almost impossible to stop a neutrino in its path, so much so that they are virtually undetectable. Incredible though it sounds, “about 400 billion neutrinos from the Sun pass through us every second” (Close, Particle Physics) but even state of the art detectors deep in the earth have a hard job detecting a single passing neutrino. Yet neutrinos travel at or close to the speed of light. So how is it that photons are so easy to block and neutrinos almost impossible to detect?
The answer, according to matter-based physics, is that the neutrino is not only very small and very fast moving but “does not feel any of the four physical forces except to some extent the weak force”. But I want to see if I can derive an explanation without departing from the basic principles and concepts of Ultimate Event Theory. The problem in UET is not why the repeating event-pattern we label a neutrino passes through matter so easily ─ this is exactly what I would expect ─ but rather how and why it behaves so differently from certain other elementary event-chains. Any ‘particle’, provided it is small enough and moves rapidly, is likely, according to the basic ideas of UET, to ‘pass through’ an obstacle just so long as the obstacle is not too large and not too dense. In UET, intervening spatial positions are simply skipped and anything that happens to be occupying these intermediate spatial positions will not in any way ‘notice’ the passing of the more rapidly moving ‘object’. On this count, however, two ‘particles’ moving at roughly the same speed (relative to the obstacle) should either both pass through an obstacle or both collide with it.
But, as I eventually realized, this argument is only valid if the re-appearance rates of the two ‘particles’ are assumed to be the same. ‘Speed’ is nothing but a space/time ratio, so many spatial positions against so many ksanas. A particular event-chain has, say, a ‘space/time ratio’ of 8 grid-points per ksana. This means that the next event in the chain will have occurrence at the very next ksana exactly eight grid-spaces along relative to some regularly repeating event-chain considered to be stationary. On this count, it would seem impossible to have fractional rates and every ‘re-appearance rate’ would be a whole number : there would be no equivalent in UET of a speed of, say, 4/7 metres per second since grid-spaces are indivisible.
However, I eventually realized that it was not one of my original assumptions that an event in a chain must repeat (or give rise to a different event) at each and every ksana. This at once made fractional rates possible even though the basic units of space and time are, in UET, indivisible. A ‘particle’ with a rate of 4/7 s0 /t0 could, for example, make a re-appearance four times out of every seven ksanas ─ and there are any number of ways that a ‘particle’ could have the same flat rate while not having the same re-appearance rhythm. 

Limit to unitary re-appearance rate

It is by no means obvious that it is legitimate to treat ‘space’ and ‘time’ equivalently as dimensions of a single entity known as ‘Space/Time’. A ‘distance’ in time is not just a distance in space transferred to a different axis and much of the confusion in contemporary physics comes from a failure to accept, or at the very least confront, this fact. One reason why the dimensions are not equivalent is that, although a spatial dimension such as length remains the same if we now add on width, the entire spatial complex must disappear if it is to give rise to a similar one at the succeeding moment in time ─ you cannot simply ‘add’ on another dimension to what is already there.
However, for the time being I will follow accepted wisdom in treating a time distance on the same footing as a space distance. If this is so, it would seem that, in the case of an event-chain held together by causality, the causal influence emanating from the ‘kernel’ of one event capsule, and which brings about the selfsame event (or a different one) a ksana later in an equivalent spatial position, must traverse at least the ‘width’ or diameter of a space capsule, noted s0 (if the capsule is at rest). Why? Because if it does not at least get to the extremity of the first spatial capsule, a distance of ½ s0, and then get to the ‘kernel’ of the following one, nothing at all will happen and the event-chain will terminate abruptly.
This means that the ‘reappearance rate’ of an event in an event-chain must at least be 1/1 in absolute units, i.e. 1 s0/t0, one grid-space per ksana. Can it be greater than this? Could it, for example, be 2, 3 or 5 grid-spaces per ksana? Seemingly not. For if and when the ultimate event re-appears, say 5 ksanas later, the original causal impulse will have covered a distance of 5 s0 (s0 being the diameter or spatial dimension of each capsule) and would have taken 5 ksanas to do this. And so the space/time displacement rate would be the same (but not in this case the actual inter-event distances).
It is only the unitary rate, the distance/time ratio taken over a single ksana, that cannot be less (or more) than one grid-space per ksana : any fractional (but not irrational) re-appearance rate is perfectly conceivable provided it is spread out over several ksanas. A re-appearance rate of m/n s0/t0 simply means that the ultimate event in question re-appears in an equivalent spatial position on the Locality m times every n ksanas, where m/n ≤ 1. And there are all sorts of different ways in which this rate can be achieved. For example, a re-appearance rate of 3/5 s0/t0 could be a repeating pattern such as

[Diagram: two example 3/5 re-appearance patterns]
and one pattern could change over into the other either randomly or, alternatively, according to a particular rule.
As one increases the difference between the numerator and the denominator, there are obviously going to be many more possible variations : all this could easily be worked out mathematically using combinatorial analysis. But note that it is the distribution of appearances (X) and gaps (.) that matters since, once a re-appearance rhythm has begun, there is no real difference between a ‘vertical’ pattern such as X.X. and .X.X ─ it all depends on where you start counting. Patterns only count as different if this difference is recognizable no matter where you start examining the sequence.
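The combinatorial count alluded to here is easy to carry out by brute force. The sketch below lists the distinct 3-out-of-5 rhythms, treating two patterns as identical when one is simply a rotation (a different starting point) of the other; ‘X’ stands for an appearance and ‘.’ for a missed ksana.

from itertools import combinations

def distinct_rhythms(m, n):
    """Patterns of m appearances in n ksanas, counted up to choice of starting point."""
    seen, reps = set(), []
    for occupied in combinations(range(n), m):
        pattern = tuple(1 if k in occupied else 0 for k in range(n))
        rotations = {pattern[i:] + pattern[:i] for i in range(n)}
        canonical = min(rotations)           # one representative per rotation class
        if canonical not in seen:
            seen.add(canonical)
            reps.append(canonical)
    return reps

for r in distinct_rhythms(3, 5):
    print(''.join('X' if bit else '.' for bit in r))
# for 3/5 there are just two distinct rhythms; larger denominators give many more
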
Why does all this matter? Because each gap means that the ultimate event in question does not make an appearance at all during that ksana and, if we are dealing with large denominators, this could mean very large gaps indeed in an event-chain. Suppose, for example, an event-chain had a re-appearance rate of 4/786. There would only be four appearances in a period of 786 ksanas, and there would inevitably be very large blank sections of the Locality where the ultimate event made no appearance.

Lower Limit of re-creation rate 

Since, by definition, everything in UET is finite, there must be a maximum number of possible consecutive non-reappearances. For example, if we set the limit at, say, 20 consecutive missed ksanas, or 200 or 2,000, this would mean that, each time this was observed, we could conclude that the event-chain had terminated. This is the UET equivalent of the Principle of Spatio-Temporal Continuity and effectively excludes phenomena such as an ultimate event in an event-chain making its re-appearance a century later than its first appearance. This limit would have to be estimated on the basis of experiments since I do not see how a specific value can be derived from theoretical considerations alone. It is tempting to estimate that this value would involve c* or a multiple of c* but this is only a wild guess ─ Nature does not always favour elegance and simplicity.
Such a rule would limit how ‘stretched out’ an event-chain can be temporally and, in reality , there may not after all be a hard and fast general rule  : the maximal extent of the gap could decline exponentially or in accordance with some other function. That is, an abnormally long gap followed by the re-appearance of an event, would decrease the possible upper limit slightly in much the same way as chance associations increase the likelihood of an event-chain forming in the first place. If, say, there was an original limit of a  gap of 20 ksanas, whenever the re-appearance rate had a gap of 19, the limit would be reduced to 19 and so on.
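Read literally, the suggested rule is a simple ratchet: the permitted gap can only shrink as near-maximal gaps are actually observed. Here is a toy sketch of that reading; the initial limit of 20 and the sample gap sequences are invented purely for illustration.

def surviving_limit(gaps, initial_limit=20):
    """Toy ratchet rule: a gap longer than the current limit ends the chain;
    a gap equal to the limit, or one short of it, pulls the limit down to that length."""
    limit = initial_limit
    for g in gaps:
        if g > limit:
            return None            # chain counts as terminated
        if g >= limit - 1:
            limit = g              # ratchet the permitted maximum downwards
    return limit

print(surviving_limit([3, 19, 7, 18]))   # -> 18 : limit fell from 20 to 19, then to 18
print(surviving_limit([3, 25]))          # -> None : the gap exceeded the limit
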
It is important to be clear that we are not talking about the phenomenon of ‘time dilation’ which concerns only the interval between one ksana and the next according to a particular viewpoint. Here, we simply have an event-chain ‘at rest’ and which is not displacing itself laterally at all, at any rate not from the viewpoint we have adopted.

Re-appearance Rate as an intrinsic property of an event-chain  

Since Galileo, and subsequently Einstein, it has become customary in physics to distinguish, not between rest and motion, but rather between unaccelerated motion and  accelerated motion. And the category of ‘unaccelerated motion’ includes all possible constant straight-line speeds including zero (rest). It seems, then,  that there is no true distinction to be made between ‘rest’ and motion just so long as the latter is motion in a straight line at a constant displacement rate. This ‘relativisation’ of  motion in effect means that an ‘inertial system’ or a particle at rest within an inertial system does not really have a specific velocity at all, since any estimated velocity is as ‘true’ as any other. So, seemingly, ‘velocity’ is not a property of a single body but only of a system of at least two bodies. This is, in a sense, rather odd since there can be no doubt that a ‘change of velocity’, an acceleration, really is a feature of a single body (or is it?).
So what to conclude? One could say that ‘acceleration’ has ‘higher reality status’ than simple velocity since it does not depend on a reference point outside the system. ‘Velocity’ is a ‘reality of second order’ whereas acceleration is a ‘reality of first order’. But once again there is a difference between normal physics and UET physics in this respect. Although the distinction between unaccelerated and accelerated motion is taken over into UET (re-baptised ‘regular’ and ‘irregular’ motion), there is in Ultimate Event Theory a new kind of ‘velocity’ that has nothing to do with any other body whatsoever, namely the event-chain’s re-appearance rate.
When one has spent some time studying Relativity one ends up wondering whether after all “everything is relative” and the universe is evaporating away even as we look at it, leaving nothing but a trail of unintelligible mathematical formulae. In Quantum Mechanics (as Heisenberg envisaged it anyway) the properties of a particular ‘body’ involve the properties of all the other bodies in the universe, so that there remain very few, if any, intrinsic properties that a body or system can possess. However, in UET, there is a reality safety net. For there are at least two things that are not relative, since they pertain to the event-chain or event-conglomerate itself whether it is alone in the universe or embedded in the dense network of intersecting event-chains we view as matter. These two things are (1) the number of ultimate events in a given portion of an event-chain and (2) the re-appearance rate of events in the chain. These two features are intrinsic to every chain and have nothing to do with velocity or varying viewpoints or anything else.  To be continued SH

Note 1   This principle (Spatio-temporal Continuity) innocuous  though it may sound, has also had  extremely important social and political implications since, amongst other things, it led to the repeal of laws against witchcraft in the ‘advanced’ countries. For example, the new Legislative Assembly in France shortly after the revolution specifically abolished all penalties for ‘imaginary’ crimes and that included witchcraft. Why was witchcraft considered to be an ‘imaginary crime’? Essentially because it  violated the Principle of Spatio-Temporal Continuity. The French revolutionaries who drew the statue of Reason through the streets of Paris and made Her their goddess, considered it impossible to cause someone’s death miles away simply by thinking ill of them or saying Abracadabra. Whether the accused ‘confessed’ to having brought about someone’s death in this way, or even sincerely believed it, was irrelevant : no one had the power to disobey the Principle of Spatio-Temporal Continuity. The Principle got somewhat muddied  when science had to deal with electro-magnetism ─ Does an impulse travel through all possible intermediary positions in an electro-magnetic field? ─ but it was still very much in force in 1905 when Einstein formulated the Theory of Special Relativity. For Einstein deduced from his basic assumptions that one could not ‘send a message’ faster than the speed of light and that, in consequence,  this limited the speed of propagation of causality. If I am too far away from someone else I simply cannot cause this person’s death at that particular time and that is that. The Principle ran into trouble, of course,  with the advent of Quantum Mechanics but it remains deeply entrenched in our way of thinking about the world which is why alibis are so important in law, to take but one example. And it is precisely because Quantum Mechanics appears to violate the principle that QM is so worrisome and the chief reason why some of the scientists who helped to develop the theory such as Einstein himself, and even Schrodinger, were never happy with  it. As Einstein put it, Quantum Mechanics involved “spooky action at a distance” ─ exactly the same objection that the Cartesians had made to Newton. 

Note 2  Ideally, we would have a lighted three-dimensional framework flashing on and off and mark the successive appearances of the ‘object’ as, say, a red point of light comes on periodically when the lighted framework comes on.

Note 3 In principle, in the case of extremely high speed event-chains, these gaps should be detectable even today though the fact that such high speeds are involved makes direct observation difficult. 

Note 4 This is not how we specify an object’s position in ordinary conversation. As Bohm pertinently pointed out, we in effect speak in the language of topology rather than the language of co-ordinate geometry. We say such and such an object is ‘under’, ‘over’, ‘near’, ‘to the right of’ &c. some other well-known  prominent object, a Church or mountain when outside, a bookcase or fireplace when in a room.
Not only do coordinates not exist in Nature, they do not come at all naturally to us, even today. Why is this? Chiefly, I suspect because they are not only cumbersome but practically useless to a nomadic, hunting/food gathering life style and we humans spent at least 96% of our existence as hunter/gatherers. Exact measurement only becomes essential when human beings start to manufacture complicated objects and even then many craftsmen and engineers used ‘rules of thumb’ and ‘rough estimates’ well into the 19th century.

In its present state, Ultimate Event Theory falls squarely between two stools : too vague and ‘intuitive’ to even get a hearing from professional scientists, let alone be  taken seriously, it is too technical and mathematical to appeal to the ‘ordinary reader’. Hopefully, this double negative can be eventually turned into a double positive, i.e. a rigorous mathematical theory capable of making testable predictions that nonetheless is comprehensible and has strong intuitive appeal. I will personally not be able to take the theory to the desired state because of my insufficient mathematical and above all computing expertise : this will be the work of others. What I can do is, on the one hand, to strengthen the mathematical, logical side as much as I can while putting the theory in a form the non-mathematical reader can at least comprehend. One friend in particular who got put off by the mathematics asked me whether I could not write something that gives the gist of the theory without any mathematics at all. Thus this post which recounts the story of how and why I came to develop Ultimate Event Theory in the first place some thirty-five years ago.

 Conflicting  beliefs

Although scientists and rationalists are loath to admit it, personal temperament and cultural factors play a considerable part in the development of theories of the universe. There are always individual and environmental factors at work although the accumulation of unwelcome but undeniable facts may eventually overpower them. Most people today are, intellectually speaking, opportunists with few if any deep personal convictions, and there are good reasons for this. As sociological and biological entities we are strongly impelled to accept what is ‘official doctrine’ (in whatever domain) simply because, as a French psycho-analyst whose name escapes me famously wrote, “It is always dangerous to think differently from the majority”.
At the same time, one is inclined, and in some cases compelled, to accept only those ideas about the world that make sense in terms of our own experience. The result is that most people spend their lives doing an intellectual balancing act between what they ‘believe’ because this is what they are told is the case, and what they ‘believe’ because this is what their experience tells them is (likely to be) the case. Such a predicament is perhaps inevitable if we decide to live in society and most of the time the compromise ‘works’; there are, however, moments in the history of nations and in the history of a single individual when the conflict becomes intolerable and something has to give.

The Belief Crisis : What is the basis of reality?

Human existence is a succession of crises interspersed with periods of relative stability (or boredom). First, there is the birth crisis (the most traumatic of all), the ‘toddler crisis’ when the infant starts to try to make sense of the world around him or her, the adolescent crisis, the ‘mid-life’ crisis which kicks in at about forty and the age/death crisis when one realizes the end is nigh. All these crises are sparked off by physical changes which are too obvious and powerful to be ignored with the possible exception of the mid-life crisis which is not so much biological as  social (‘Where am I going with my life?’ ‘Will I achieve what I wanted?’).
Apart from all these crises ─ as if that were not enough already ─  there is the ‘belief crisis’. By ‘crisis of belief’ I mean pondering the answer to the question ‘What is real?’ ‘What do I absolutely have to believe in?’. Such a crisis can, on the individual level, come at any moment, though it usually seems to hit one between the eyes midway between the adolescent ‘growing up’ crisis and the full-scale mid-life crisis. As a young person one couldn’t really care less what reality ‘really’ is, one simply wants to live as intensely as possible and ‘philosophic’ questions can just go hang. And in middle age, people usually find they want to find some ‘meaning’ in life before it’s all over. Now, although the ‘belief crisis’ may lead on to the ‘middle age meaning crisis’ it is essentially quite different. For the ‘belief crisis’ is not a search for fulfilment but simply a deep questioning about the very nature of reality, meaningful or not. It is not essentially an emotional crisis nor is it inevitable ─ many people and even entire societies by-pass it altogether without being any the worse off, rather the reverse (Note 1).
Various influential thinkers in history went through such a  ‘belief crisis’ and answered it in memorable ways : one thinks at once of the Buddha or Socrates. Of all peoples, the Greeks during the Vth and VIth centuries BC seem to have experienced a veritable epidemic of successive ‘belief crises’ which is what  makes them so important in the history of civilization  ─ and also what made the actual individuals and city-states so unstable and so quarrelsome. Several of the most celebrated answers to the ‘riddle of reality’ date back to this brilliant era. Democritus of Abdera answered the question, “What is really real?” with the staggering statement, “Nothing exists except atoms and void”. The Pythagoreans, for their part, concluded that the principle on which the universe was based was not so much physical as numerical, “All is Number”. Our entire contemporary scientific and technological ‘world-view’ (‘paradigm’) can  be traced back to the  two giant thinkers, Pythagoras and Democritus, even if we have ultimately ‘got beyond’  them since we have ‘split the atom’ and replaced numbers as such by mathematical formulae. In an equally turbulent era, Descartes, another major ‘intellectual crisis’ thinker, famously decided that he could disbelieve in just about everything but not that there was a ‘thinking being’ doing the disbelieving, cogito ergo sum (Note 2).
In due course, in my mid to late thirties, at about the time of life when Descartes decided to question the totality of received wisdom, I found myself with quite a lot of time on my hands and a certain amount of experience of the vicissitudes of life behind me to ponder upon. I too became afflicted by the ‘belief crisis’ and spent the greater part of my spare time (and working time as well) pondering what was ‘really real’ and discussing the issue interminably with the same person practically every evening (Note 3). 

Temperamental Inclinations or Prejudices

 My temperament (genes?) combined with my experience of life pushed me in certain well-defined philosophic directions. Although I only  started formulating Eventrics and Ultimate Event Theory (the ‘microscopic’ part of Eventrics) in the early nineteen-eighties and by then had long since retired from the ‘hippie scene’, the heady years of the late Sixties and early Seventies provided me with my  ‘field notes’ on the nature of reality (and unreality), especially the human part of it. The cultural climate of this era, at any rate in America and the West, may be summed up by saying that, during this time “a substantial number of people between the ages of fifteen and thirty decided that sensations were far more important than possessions and arranged their lives in consequence”. In practice this meant forsaking steady jobs, marriage, further education and so on and spending one’s time looking for physical thrills such as doing a ton up on the M1, hitch-hiking aimlessly around the world, blowing your mind with drugs, having casual but intense sexual encounters and so on. Not much philosophy here but when I and other shipwrecked survivors of the inevitable débâcle took stock of the situation, we retained a strong preference for a ‘philosophy’  that gave primary importance to sensation and personal experience.
The physical requirement ruled out traditional religion since most religions, at any rate Christianity in its later public  form, downgraded the body and the physical world altogether in favour of the ‘soul’ and a supposed future life beyond the grave. The only aspect of religion that deserved to be taken seriously, so I felt, was mysticism since mysticism is based not on hearsay or holy writ but on actual personal experience. The mystic’s claim that there was a domain ‘beyond the physical’ and that this deeper reality can to some degree actually be experienced within this life struck me as not only inspiring but even credible ─ “We are more than what we think we are and know more than what we think we know” as someone (myself) once put it.
At the same time, my somewhat precarious hand-to-mouth existence had given me a healthy respect for the ‘basic physical necessities’ and thus inclined me to reject all theories which dismissed physical reality as ‘illusory’, tempting though this sometimes is (Note 4). So ‘Idealism’ as such was out. In effect I wanted a belief system that gave validity and significance to the impressions of the senses, preferring sentio ergo sum to Descartes’ cogito ergo sum or, better, sentio ergo est : ‘I feel, therefore there is something’.

Why not physical science ?

 Why not indeed. The main reason that I didn't decide, like most people around me, that "science has all the answers" was that, at the time, I knew practically no science. Incredible though this seems today, I had managed to get through school and university without going to a single chemistry or physics class, and my knowledge of biology was limited to one period a week for one year, with no exam at the end of it.
But ignorance was not the only reason for my disqualifying science as a viable 'theory of everything'. Apart from being vaguely threatening ─ this was the era of the Cold War and CND ─ science simply seemed monumentally irrelevant to every aspect of one's personal daily life. Did knowing about neutrons and neurons make you any better at taking effective decisions on a day-to-day basis? Seemingly not. Scientists and mathematicians often seemed to be less (not more) astute in running their lives than ordinary practical people.
Apart from this, science was going through a difficult period when even the physicists themselves were bewildered by their own discoveries. Newton's billiard ball universe had collapsed into a tangled mess of probabilities and uncertainty principles : when even Einstein, the most famous modern scientist, could not manage to swallow Quantum Theory, there seemed little hope for Joe Bloggs. The solid observable atom was out and unobservable quarks were in, but Murray Gell-Mann, the co-originator of the quark theory, stated on several occasions that he did not 'really believe in quarks' but merely used them as 'mathematical aids to sorting out the data'. Well, if even he didn't believe in them, why the hell should anyone else? Newton's clockwork universe was bleak and soulless but was at least credible and tactile : modern science seemed nothing more than a farrago of abstruse nonsense that for some reason 'worked', often to the amazement of the scientists themselves.
There was another, deeper, reason why physical science appeared antipathetic to me at the time : science totally devalues personal experience. Only repeatable observations in laboratory conditions count as fact : everything else is dismissed as 'anecdotal'. But the whole point of personal experience is that (1) it is essentially unrepeatable and (2) it must be spontaneous if it is to be worthwhile. The famous 'scientific method' might have a certain value if we are studying lifeless atoms but seemed unlikely to uncover anything of interest in the human domain ─ the best 'psychologists', such as conmen and dictators, are sublimely ignorant of psychology. Science essentially treats everything as if it were dead, which is why it struggles to come up with any strong predictions in the social, economic and political spheres. Rather than treat living things as essentially dead, I was more inclined to treat 'dead things' (including the universe itself) as if they were in some sense alive.

Descartes’ Thought Experiment 

Although I don’t think I had actually read Descartes’ Discours sur la méthode at the time, I had heard about it and the general idea was presumably lurking at the back of my mind. Supposedly, Descartes who, incredibly, was an Army officer at the time, spent a day in what is described in history books as a poêle (‘stove’) pondering the nature of reality. (The ‘stove’ must have been a small chamber close to a source of heat.) Descartes came to the conclusion that it was possible to disbelieve in just about everything except that there was a ‘thinking  being’, cogito ergo sum. To anyone who has done meditation, even in a casual way, Descartes’ conclusion appears by no means self-evident. The notion of individuality drops away quite rapidly when one is meditating and all one is left with is a flux of mental/physical impressions. It is not only possible but even ‘natural’ to temporarily disbelieve in the reality of the ‘I’ (Note 5)─ but one cannot and does not disbelieve in the reality of the various sensations/impressions that are succeeding each other as ‘one’ sits (or stands).

Descartes’ thought experiment nonetheless seemed  suggestive and required, I thought, more precise evaluation. Whether the ‘impressions/sensations’ are considered to be mental, physical or a mixture of the two, they are nonetheless always events and as such have the following features:

(1) they are, or appear to be, ‘entire’, ‘all of a piece’, there is no such thing as a ‘partial’ event/impression;

(2) they follow each other very rapidly;

(3) the events do not constitute a continuous stream, on the contrary there are palpable gaps between the events (Note 6);

(4) there is usually a connection between successive events, one thought ‘leads on’ to another and we can, if we are alert enough, work backwards from one ‘thought/impression’ to its predecessor and so on back to the start of the sequence;

(5) occasionally 'thought-events' crop up that seem to be completely disconnected from all previous 'thought-events', arriving as it were 'out of the blue'.

Now, with these five qualities, I already have a number of features which I believe must be part of reality, at any rate of individual 'thought/sensation' reality. Firstly, whether my thoughts/sensations are 'wrong', misguided, deluded or what have you, they happen, they take place, and cannot be waved away. Secondly, there is always sequence : thought 'moves from one thing to another' by specific stages. Thirdly, there are noticeable gaps between the thought-events. Fourthly, there is causality : one thought/sensation gives rise to another in a broadly predictable and comprehensible manner. Finally, there is an irreducible random element in the unfolding of thought-events ─ so, apparently, not everything is deterministic.
These are properties I repeatedly observe and feel I have to believe in. There are also a number of conclusions to be drawn from the above; like all deductions these ‘derived truths’ are somewhat less certain than the direct impressions, are ‘second-order’ truths as it were, but they are nonetheless compelling, at least to me. What conclusions? (1) Since there are events, there  must seemingly be a ‘place’ where these events can and do occur, an Event Locality. (2) Since there are, and continue to be, events, there  must be an ultimate source of events, an Origin, something distinct from the events themselves and also (perhaps) distinct from the Locality.
A further and more radical conclusion is that this broad schema can legitimately be generalized to ‘everything’, at any rate to everything in the entire known and knowable universe. Why make any hard and fast distinction between mental events and their features and ‘objective’ physical events and their features? Succession, discontinuity and causality are properties of the ‘outside’ world as well, not just that of the private world of an isolated thinking individual.
What about other things we normally assume exist such as trees and tables and ourselves? According to the event model, all these things must either be (1) illusory or irrelevant (same thing essentially) (2) composite and secondary and/or (3) ‘emergent’.
Objects are bundles of events that keep repeating more or less in the same form. And though I do indeed believe that ‘I’ am in some sense a distinct entity and thus ‘exist’, this entity is not fundamental, not basic, not entirely reducible to a collection of events. If the personality exists at all ─ some persons  have doubts on this score ─ it is a complex, emergent entity. This is an example of a ‘valid’ but not  fundamental item of reality.
Ideas, if they take place in the 'mind', are events whether true, false or meaningless. They are 'true' to the extent that they can ultimately be grounded in occurrences of actual events and their interactions, or interpretations thereof. I suppose this is my version of the 'Verification Principle' : whatever is not grounded in actual sensations is to be regarded with suspicion. This does not necessarily invalidate abstract or metaphysical entities but it does draw a line in the sand. For example, contrary to most contemporary rationalists and scientists, I do not entirely reject the notion of a reality beyond the physical because the feeling that there is something 'immeasurable' and 'transcendent' from which we and the world emerge is a matter of experience to many people; it is a part of the world of sensation, though somewhat at the limits of it. This reality, if it exists, is 'beyond name and form' (as Buddhism puts it), is 'non-computable', 'transfinite'. But I entirely reject the notion of the 'infinitely large' and the 'infinitely small' which has bedevilled science and mathematics, since these (pseudo)entities are completely outside personal experience and always will be. With the exception of the Origin (which is a source of events but not itself an event), my standpoint is that everything, absolutely everything, is made up of a finite number of ultimate events, and an ultimate event is an event that cannot be further decomposed. This principle is not, perhaps, quite so obvious as some of the other principles. Nonetheless, when considering 'macro' events ─ events which clearly can be decomposed into smaller events ─ we have two and only two choices : either the process of decomposition comes to an end with an 'ultimate' event, or it carries on interminably and never terminates at all. I believe the first option is by far the more reasonable one.
With this, I feel I have the bare bones of not just a philosophy but a 'view of the world', a schema into which pretty well everything can be fitted ─ the contemporary buzzword is 'paradigm'. Like Descartes emerging from his 'stove', I considered I had a blueprint for reality, or at least that part of it amenable to direct experience. To sum up, I could disbelieve, at least momentarily, in just about everything but not (1) that there were events; (2) that events occurred successively; (3) that events were subject to some sort of omnipresent causal force with occasional lapses into lawlessness. Also, (4) that these events happened somewhere, (5) that they emerged from something or somewhere, and (6) that they were decomposable into 'ultimate' events that could not be further decomposed. This would do for a beginning; other essential features would be added to the mix as and when required.     SH

Note 1  Many extremely successful societies seem to have been perfectly happy in  avoiding the ‘intellectual crisis’ altogether : Rome did not produce a single original thinker and the official Chinese Confucian world-view changed little over a period of more than two thousand years. This was doubtless  one of the main reasons why these societies lasted so long while extremely volatile societies such as VIth century Athens or the city states of Renaissance Italy blazed with the light of a thousand suns for a few moments and then were seen and heard no more.

Note 2 "Je pris garde que, pendant que je voulais ainsi penser que tout était faux, il fallait nécessairement que moi, qui le pensais, fusse quelque chose. Et remarquant que cette vérité : je pense, donc je suis, était si ferme et si assurée, que toutes les plus extravagantes suppositions des sceptiques n'étaient pas capables de l'ébranler, je jugeai que je pouvais la recevoir, sans scrupule, pour le premier principe de la philosophie que je cherchais."
      René Descartes, Discours de la méthode, Quatrième Partie
“I noted, however, that even while engaged in thinking that everything was false, it was nonetheless a fact that I, who was engaged in thought, was ‘something’. And observing that this truth, I think, therefore I am, was so strong and so incontrovertible, that the most extravagant proposals of sceptics could not shake it, I concluded that I could justifiably take it on  board, without misgiving, as the basic proposition of philosophy that I was looking for.”  [loose translation]

Note 3  The person in question was, for the record, a primary school teacher by the name of Marion Rowse, unfortunately now long deceased. She was the only person to whom I spoke about the ideas that eventually became Eventrics and Ultimate Event Theory and deserves to be remembered for this reason.

Note 4   As someone at the other end of the social spectrum, but who seemingly also went through a crisis of belief at around the same time, put it, “I have gained a healthy respect for the objective aspect of reality by having lived under Nazi and Communist regimes and by speculating in the financial markets” (Soros, The Crash of 2008 p. 40).
According to Boswell, Dr. Johnson refuted Bishop Berkeley, who argued that matter was essentially unreal, by kicking a wall. In a sense this was a good answer but perhaps not entirely in the way Dr. Johnson intended. Why do I believe in the reality of the wall? Because if I kick it hard enough I feel pain and there is no doubt in my mind that pain is real — it is a sensation. The wall must be accorded some degree of reality because, seemingly, it was the cause of the pain. But the reality of the wall is, as it were, a 'derived' or 'secondary' reality : the primary reality is the sensation, in this case the pain in my foot. I could, I argued to myself, at a pinch, disbelieve in the existence of the wall, or at any rate accept that it is not perhaps so 'real' as we like to think it is, but I could not disbelieve in the reality of my sensation. And it was not even important whether my sensations were, or were not, corroborated by other people, were entirely 'subjective' if you like, since, subjective or not, they remained sensations and thus real.

Note 5 In the Chuang-tzu Book, Yen Ch'eng, a disciple of the philosopher Ch'i, is alarmed because his master, when meditating, appeared to be "like a log of wood, quite unlike the person who was sitting there before". Ch'i replies, "You have put it very well; when you saw me just now my 'I' had lost its 'me'" (Chuang-tzu Book II.1)

Note 6 The practitioner of meditation is encouraged to ‘widen’ these gaps as much as possible (without falling asleep) since it is by way of the gaps that we can eventually become familiar with the ‘Emptiness’ that is the origin and end of everything.

 

 Although, in modern physics, many elementary particles are extremely short-lived, others, such as protons, are virtually immortal. But either way, a particle, while it does exist, is assumed to be continuously existing. And solid objects such as we see all around us, like rocks and hills, are also assumed to be 'continuously existing' even though they may undergo gradual changes in internal composition. Since solid objects and even elementary particles don't appear, disappear and re-appear, they don't have a 're-appearance rate' ─ they're always there when they are there, so to speak.
However, in UET the ‘natural’ tendency is for everything to flash in and out of existence and virtually all  ultimate events disappear for ever after a single appearance leaving a trace that would, at best, show up as a sort of faint background ‘noise’ or ‘flicker of existence’. All apparently solid objects are, according to the UET paradigm, conglomerates of repeating ultimate events that are bonded together ‘laterally’, i.e. within  the same ksana, and also ‘vertically’, i.e. from one ksana to the next (since otherwise they would not show up again ever). A few ultimate events, those that have acquired persistence ─ we shall not for the moment ask how and why they acquire this property ─ are able to bring about, i.e. cause, their own re-appearance : in such a case we have an event-chain which is, by definition,  a causally bonded sequence of ultimate events.
But how often do the constituent events of an event-chain re-appear?  Taking the simplest case of an event-chain composed of a single repeating ultimate event, are we to suppose that this event repeats at every single ksana (‘moment’ if you like)? There is on the face of it no particular reason why this should be so and many reasons why this would seem to be very unlikely.    

The Principle of Spatio-Temporal Continuity 

Newtonian physics, likewise 18th and 19th century rationalism generally, assumes what I have referred to elsewhere as the Postulate of Spatio-temporal Continuity. This postulate or principle, though rarely explicitly stated in philosophic or scientific works, is actually one of the most important of the ideas associated with the Enlightenment and thus with the entire subsequent intellectual development of Western society. In its simplest form, the principle says that an event occurring here, at a particular spot in Space-Time (to use the current term), cannot have an effect there, at a spot some distance away, without having effects at all (or at least most or some) intermediate spots. The original event sets up a chain reaction and a frequent image used is that of a whole row of upright dominoes falling over one by one once the first has been pushed over. This is essentially how Newtonian physics views the action of a force on a body or system of bodies, whether the force in question is a contact force (push/pull) or a force acting at a distance like gravity.
As we envisage things today, a blow affects a solid object by making the intermolecular distances of the surface atoms contract a little and they pass on this effect to neighbouring molecules which in turn affect nearby objects they are in contact with or exert an increased pressure on the atmosphere, and so on. Moreover, although this aspect of the question is glossed over in Newtonian (and even modern) physics, each transmission of the original impulse 'takes time' : the re-action is never instantaneous (except possibly in the case of gravity) but comes 'a moment later', more precisely at least one ksana later. This whole issue will be discussed in more detail later but, within the context of the present discussion, the point to bear in mind is that, according to Newtonian physics and rationalistic thought generally, there can be no leap-frogging with space and time. Indeed, it was because of the Principle of Spatio-temporal Continuity that most European scientists rejected out of hand Newton's theory of universal attraction since, as Newton admitted, there seemed to be no way that a solid body such as the Earth could affect another solid body such as the Moon thousands of kilometres away with nothing in between except 'empty space'. Even as late as the mid 19th century, Maxwell valiantly attempted to give a mechanical explanation of his own theory of electro-magnetism, and he did this essentially because of the widespread rock-hard belief in the principle of spatio-temporal continuity.
The principle, innocuous  though it may sound, has also had  extremely important social and political implications since, amongst other things, it led to the repeal of laws against witchcraft in the ‘advanced’ countries ─ the new Legislative Assembly in France shortly after the revolution specifically abolished all penalties for ‘imaginary’ crimes and that included witchcraft. Why was witchcraft considered to be an ‘imaginary crime’? Essentially because it  offended against the Principle of Spatio-Temporal Continuity. The French revolutionaries who drew the statue of Reason through the streets of Paris and made Her their goddess, considered it impossible to cause someone’s death miles away simply by thinking ill of them or saying Abracadabra. Whether the accused ‘confessed’ to having brought about someone’s death in this way, or even sincerely believed it, was irrelevant : no one had the power to disobey the Principle of Spatio-Temporal Continuity.
The Principle got somewhat muddied when science had to deal with electro-magnetism ─ Does an impulse travel through all possible intermediary positions in an electro-magnetic field? ─ but it was still very much in force in 1905 when Einstein formulated the Theory of Special Relativity. For Einstein deduced from his basic assumptions that one could not 'send a message' faster than the speed of light and that, in consequence, this limited the speed of propagation of causality. If I am too far away from someone else I simply cannot cause this person's death at that particular time and that is that. The Principle ran into trouble, of course, with the advent of Quantum Mechanics but it remains deeply entrenched in our way of thinking about the world, which is why alibis are so important in law, to take but one example. And it is precisely because Quantum Mechanics appears to violate the principle that QM is so worrisome and the chief reason why some of the scientists who helped to develop the theory, such as Einstein himself and even Schrödinger, were never happy with it. As Einstein put it, Quantum Mechanics involved "spooky action at a distance" ─ exactly the same objection that the Cartesians had made to Newton.
So, do I propose to take the principle over into UET? The short answer is, no. If I did take over the principle, it would mean that, in every bona fide event-chain, an ultimate event would make an appearance at every single ‘moment’ (ksana), and I could see in advance that there were serious problems ahead if I assumed this : certain regions of the Locality would soon get hopelessly clogged up with colliding event-chains. Also, if all the possible positions in all ‘normal’ event-sequences were occupied, there would be little point in having a theory of events at all, since, to all intents and purposes, all event-chains would behave as if they were solid objects and one might as well just stick to normal physics. One of the main  reasons for elaborating a theory of events in the first place was my deep-rooted conviction ─ intuition if you like ─ that physical reality is discontinuous and that there are gaps between ksanas ─ or at least that there could be gaps given certain conditions. In the theory I eventually roughed out, or am in the process of roughing out, both spatio-temporal continuity and infinity are absent and will remain prohibited.
But how does all this square with my deduction (from UET hypotheses) that the maximum propagation rate of causality is a single grid-position per ksana, s0/t0, where s0 is the spatial dimension of an event capsule 'at rest' and t0 the 'rest' temporal dimension? In UET, what replaces the 'object-based' image of a tiny nucleus inside an atom is the vision of a tiny kernel of fixed extent, where every ultimate event occurs, embedded in a relatively enormous four-dimensional event capsule. Any causal influence emanates from the kernel and, if it is to 'recreate' the original ultimate event a ksana later, it must traverse at least half the 'length' (spatial dimension) of one capsule plus half of the next one, i.e. ½ s0 + ½ s0 = 1 s0, where s0 is the spatial dimension of an event-capsule 'at rest' (its normal state). For if the causal influence did not 'get that far', it would not be able to bring anything about at all; it would be like a messenger who could not reach a destination receding faster than he could run flat out. The runner's 'message', in this case the recreation of a clone of the original ultimate event, would never get delivered and nothing would ever come about at all.
This problem does not occur in normal physics since objects are not conceived as requiring a causal force to stop them disappearing and, on top of that, 'space/time' is assumed to be continuous and infinitely divisible. In UET there are minimal spatial and temporal units (those of the grid-space and the ksana) and 'time', in the UET sense of an endless succession of ksanas, stops for no man or god, not even physicists, who are born, live and die successively like everything else. I believe that succession, like causality, is built into the very fabric of physical reality and, though there is no such thing as continuous motion, there is and always will be change since, even if nothing else is happening, one ksana is being replaced by another, different, one ─ "the moving finger writes, and, having writ, moves on" (Rubaiyat of Omar Khayyam). Heraclitus said that "No man ever steps into the same river twice", but a more extreme follower of his disagreed, saying that it was impossible to step into the same river once, which is the Hinayana Buddhist view. For 'time' is not a river that flows at a steady rate (as Newton envisaged it) but a succession of 'moments' threaded like beads on an invisible chain, with minute gaps between the beads.

Limit to unitary re-appearance rate

So, returning to my repeating ultimate event, could the 're-creation rate' of an ultimate event be greater than the minimal rate of 1 s0/t0? Could it, for example, be 2, 3 or 5 spaces per ksana? No. For if and when the ultimate event re-appeared, say 5 ksanas later, the original causal impulse would have covered a distance of 5 s0 (s0 being the spatial dimension of each capsule) and would have taken 5 ksanas to do this. Consequently the space/time displacement rate would be the same (though not, in this case, the individual distances). I denote this rate as c* in 'absolute units', the UET equivalent of c, since it denotes an upper limit to the propagation of the causal influence (Note 1). For the very continuing existence of anything depends on causality : each 'object' that does persist in isolation does so because it is perpetually re-creating itself (Note 2).

But note that it is only the unitary rate, the distance/time ratio taken over a single ksana, that cannot be less (or more) than one grid-space per ksana or 1 s0/t0 : any fractional (but not irrational) re-appearance rate is perfectly conceivable provided it is spread out over several ksanas. A re-appearance rate of m/n s0/t0 simply means that the ultimate event in question re-appears in an equivalent spatial position on the Locality m times every n ksanas, where m/n ≤ 1. And there are all sorts of different ways in which this rate can be achieved. For example, a re-appearance rate of 3/5 s0/t0 could be a repeating pattern such as

[Diagram: Reappearance rates 1 ─ possible repeating patterns with a 3/5 rate]
and one pattern could change over into the other either randomly or, alternatively, according to a particular rule.
As one increases the difference between the numerator and the denominator, there are obviously going to be many more possible variations : all this could easily be worked out mathematically using combinatorial analysis. But note that it is the distribution of the black (occupied) and white (blank) ksanas that matters since, once a re-appearance rhythm has begun, there is no real difference between a 'vertical' pattern such as ●0●0● and the same pattern begun one ksana later, 0●0●● ─ it all depends on where you start counting. Patterns with the same repetition rate only count as different if the difference is recognizable no matter where you start examining the sequence.
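To make the combinatorial point concrete, here is a minimal sketch in Python (purely illustrative: the language and the function names are my own and no part of the theory) that lists every possible appearance pattern for a given rate m/n, that is, a block of n ksanas containing m appearances, and then lumps together patterns that are mere cyclic shifts of one another, i.e. patterns that differ only in where one starts counting.

from itertools import combinations

def patterns(m, n):
    """Every way of placing m appearances (1) in a block of n ksanas (0 = blank)."""
    for occupied in combinations(range(n), m):
        yield tuple(1 if k in occupied else 0 for k in range(n))

def canonical(p):
    """Smallest cyclic rotation of a pattern; two patterns with the same
    canonical form differ only in where one starts counting."""
    return min(p[i:] + p[:i] for i in range(len(p)))

def distinct_rhythms(m, n):
    """Patterns with rate m/n that are genuinely different, i.e. not mere shifts of one another."""
    return sorted({canonical(p) for p in patterns(m, n)})

# For a 3/5 rate there are 10 raw patterns but only 2 genuinely distinct rhythms:
for rhythm in distinct_rhythms(3, 5):
    print(rhythm)

For a 3/5 rate the two surviving rhythms are, in canonical form, (0, 0, 1, 1, 1) and (0, 1, 0, 1, 1); as the numbers grow, the count of genuinely distinct rhythms rises very rapidly.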
Why does all this matter? Because, each time there is a blank line, this means that the ultimate event in question does not make an appearance at all during this ksana, and, if we are dealing with large denominators, this could mean very large gaps indeed in an event chain. Suppose, for example, an event-chain had a re-appearance rate of 4/786. There would only be four appearances (black dots) in a period of 786 ksanas, and there would inevitably be very large blank sections of the Locality when the ultimate event made no appearance.

Lower Limit of re-creation rate 

Since, by definition, everything in UET is finite, there must be a maximum number of possible consecutive gaps or non-reappearances. If we set the limit at, say, 20 blank ksanas, or 200, this would mean that, each time a longer blank period than this was observed, we could conclude that the event-chain in question had terminated. This is the UET equivalent of the Principle of Spatio-Temporal Continuity and effectively excludes phenomena such as an ultimate event in an event-chain making its re-appearance a century later than its first appearance. This limit would have to be estimated on the basis of experiments since I do not see how a specific value can be derived from theoretical considerations alone. It is tempting to suppose that this value would involve c* or a multiple of c*, but this is only a wild guess ─ Nature does not always favour elegance and simplicity.
Such a rule would limit how 'stretched out' an event-chain can be temporally and, in reality, there may not after all be a hard and fast general rule : the maximal extent of the gap could decline exponentially or in accordance with some other function. That is, an abnormally long gap followed by the re-appearance of an event would slightly decrease the possible upper limit, in much the same way as chance associations increase the likelihood of an event-chain forming in the first place. If, say, there was an original limit of a gap of 20 ksanas, then whenever the re-appearance rhythm actually produced a gap of 19, the limit would be reduced to 19, and so on.
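A rough sketch may make the proposed rule clearer. Nothing below is derived from the theory: the starting figure of 20 and the one-step ratchet are simply the values used in the paragraph above, read in the most literal way, and the function name is my own.

def chain_survives(appearance_ksanas, initial_limit=20):
    """Walk through the ksanas at which an event-chain actually re-appears.
    A gap longer than the current limit terminates the chain; a gap that comes
    within one ksana of the limit ratchets the limit down to that gap."""
    limit = initial_limit
    previous = appearance_ksanas[0]
    for ksana in appearance_ksanas[1:]:
        gap = ksana - previous - 1        # blank ksanas between two appearances
        if gap > limit:
            return False, limit           # chain counts as terminated
        if gap == limit - 1:
            limit = gap                   # an abnormally long gap lowers the ceiling slightly
        previous = ksana
    return True, limit

# Modest gaps leave the limit at 20; a single gap of 19 blank ksanas tightens it to 19:
print(chain_survives([0, 3, 7, 12, 20]))   # (True, 20)
print(chain_survives([0, 20, 25]))         # (True, 19)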
It is important to be clear that we are not talking about the phenomenon of ‘time dilation’ which concerns only the interval between one ksana and the next according to a particular viewpoint. Here, we simply have an event-chain where an ultimate event is repeating at the same spot on the spatial part of the Locality : it is ‘at rest’ and not displacing itself laterally at all. The consequences for other viewpoints would have to be investigated.

Re-appearance Rate as an intrinsic property of an event-chain  

Since Galileo, and subsequently Einstein, it has become customary in physics to distinguish, not between rest and motion, but rather between unaccelerated motion and accelerated motion. And the category of 'unaccelerated motion' includes all possible constant straight-line speeds including zero (rest). It seems, then, that there is no true distinction to be made between 'rest' and motion just so long as the latter is motion in a straight line at a constant displacement rate. This 'relativisation' of motion in effect means that an 'inertial system', or a particle at rest within an inertial system, does not really have a specific velocity at all, since any estimated velocity is as 'true' as any other. So, seemingly, 'velocity' is not a property of a single body but only of a system of at least two bodies. This is, in a sense, rather odd since there can be no doubt that a 'change of velocity', an acceleration, really is a feature of a single body (or is it?).
Consider a spaceship which is either completely alone in the universe or sufficiently remote from all massive bodies that it can be considered in isolation. What is its speed? It has none since there is no reference system or body to which its speed can be referred. It is, then, at rest ─ or this is what we must assume if there are no internal signs of acceleration such as plates falling around or rattling doors and so on. If the spaceship is propelling itself forward (or in some direction we call 'forward') intermittently by jet propulsion, the acceleration will be noted by the voyagers inside the ship, supposing there are any. Suppose there is no further discharge of chemicals for a while. Is the spaceship now moving at a different and greater velocity than before? Not really. One could, I suppose, refer the vessel's new state of motion to the centre of mass of the ejected chemicals but this seems rather artificial, especially as they are going to be dispersed. No matter how many times this happens, the ship will not be gaining speed, or so it would appear. On the other hand, the changes in velocity, or accelerations, are undoubtedly real since their effects can be observed within the reference frame.
So what to conclude? One could say that ‘acceleration’ has ‘higher reality status’ than simple velocity since it does not depend on a reference point outside the system. ‘Velocity’ is a ‘reality of second order’ whereas acceleration is a ‘reality of first order’. But once again there is a difference between normal physics and UET physics in this respect. Although the distinction between unaccelerated and accelerated motion is taken over into UET (re-baptised ‘regular’ and ‘irregular’ motion), there is in Ultimate Event Theory, but not in contemporary physics, a kind of ‘velocity’ that has nothing to do with any other body whatsoever, namely the event-chain’s re-appearance rate.
When one has spent some time studying Relativity one ends up wondering whether after all "everything is relative", and quite a lot of physicists and philosophers seem actually to believe something not far from this : the universe is evaporating away as we look at it, leaving nothing but a trail of unintelligible mathematical formulae. In Quantum Mechanics (as Heisenberg envisaged it anyway) the properties of a particular 'body' involve the properties of all the other bodies in the universe, so that there remain very few, if any, intrinsic properties that a body or system can possess. However, in UET, there is a reality safety net. For there are at least two things that are not relative, since they pertain to the event-chain or event-conglomerate itself, whether it is alone in the universe or embedded in a dense network of intersecting event-chains we view as matter. These two things are (1) occurrence and (2) rate of occurrence, and both of them are straight numbers, or ratios of integers.
An ultimate event either has occurrence or it does not : there is no such thing as the 'demi-occurrence' of an event (though there might be such a thing as a potential event). Every macro event is (by the preliminary postulates of UET) made up of a finite number of ultimate events and every trajectory of every event-conglomerate has an event number associated with it. But this is not all. Every event-chain ─ or at any rate every normal or 'well-behaved' event-chain ─ has a 're-appearance rate'. This 're-appearance rate' may well change considerably during the life span of a particular event-chain, either randomly or following a particular rule, and, more significantly, the 're-appearance rates' of event-conglomerates (particles, solid bodies and so on) can, and almost certainly do, differ considerably from each other. One 'particle' might have a re-appearance rate of 4 (i.e. re-appear every fourth ksana), another, with the same displacement rate with respect to the first, a rate of 167, and so on. And this would have great implications for collisions between event-chains and event-conglomerates.

Re-appearance rates and collisions 

What happens during a collision? One or more solid bodies are disputing the occupation of territory that lies on their  trajectories. If the two objects miss each other, even narrowly, there is no problem : the objects occupy ‘free’ territory. In UET event conglomerates have two kinds of ‘velocity’, firstly their intrinsic re-appearance rates which may differ considerably, and, secondly, their displacement rate relative to each other. Every event-chain may be considered to be ‘at rest’ with respect to itself, indeed it is hard to see how it could be anything at all if this were not the case. But the relative speed of even unaccelerated event-chains will not usually be zero and is perfectly real since it has observable and often dramatic consequences.
Now, in normal physics, space, time and existence itself is regarded as continuous, so two objects will collide if their trajectories intersect and they will miss each other if their trajectories do not intersect. All this is absolutely clearcut, at least in principle. However, in UET there are two quite different ways in which ‘particles’ (small event conglomerates) can miss each other.
First of all, there is the case when both objects (repeating event-conglomerates) have a 1/1 re-appearance rate, i.e. there is an ultimate event at every ksana in both cases. If object B is both dense and occupies a relatively large region of the Locality at each re-appearance, and the relative speed is low, the chances are that the two objects will collide. For, suppose a relative displacement rate of 2 spaces to the right (or left)  at each ksana and take B to be stationary and A, marked in red, displacing itself two spaces at every ksana.

[Diagram: Reappearance rates 2 ─ A (red) advancing two grid-spaces per ksana towards the stationary conglomerate B]

Clearly, there is going to be trouble at the  very next ksana.
However, since space/time and existence and everything else (except possibly the Event Locality) is not continuous in UET, if the relative speed of the two objects were a good deal greater, say 7 spaces per 7 ksanas (a rate of 7/7), the red event-chain might manage to just miss the black object.

This could not happen in a system that assumes the Principle of Spatio-Temporal Continuity : in UET there is  leap-frogging with space and time if you like. For the red event-chain has missed out certain positions on the Locality which, in principle could have been occupied.

But this is not all. A collision could also have been avoided if the red chain had possessed a different re-appearance rate even though it remained a 'slow' chain compared to the black one. For consider a 7/7 re-appearance rate, i.e. one appearance every seven ksanas, and a displacement rate of two spaces per ksana relative to the black conglomerate, taken as being stationary. This would work out to an effective jump of 14 spaces to the right at each appearance ─ more than enough to miss the black event-conglomerate.
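The two situations can be played out in a few lines of code. In the sketch below (again only an illustration: the grid positions, the block width and the starting point are arbitrary choices of mine, not values given by the theory), the black conglomerate B permanently occupies grid positions 0 to 4, and the red chain A drifts towards it at 2 grid-spaces per ksana. Appearing at every ksana, A inevitably lands inside B; appearing only every seventh ksana, it jumps 14 spaces at a time and clears B altogether.

def first_hit(start, drift_per_ksana, appear_every, block, max_ksana=1000):
    """Return the ksana at which the moving chain first appears inside the
    stationary block, or None if it leapfrogs the block entirely."""
    lo, hi = block
    position = start
    for ksana in range(1, max_ksana + 1):
        position += drift_per_ksana       # the potential position shifts every ksana...
        if ksana % appear_every == 0:     # ...but an ultimate event only occurs now and then
            if lo <= position <= hi:
                return ksana              # collision: an appearance inside occupied territory
            if position > hi:
                return None               # already past the block: a clean miss
    return None

block_B = (0, 4)   # B occupies grid positions 0..4 at every ksana
print(first_hit(start=-9, drift_per_ksana=2, appear_every=1, block=block_B))   # 5: A appears inside B
print(first_hit(start=-9, drift_per_ksana=2, appear_every=7, block=block_B))   # None: A jumps clean over B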

Moreover, if we have a repeating event-conglomerate that is very compact, i.e. occupies very few neighbouring grid-spaces at each appearance (at the limit just one), and is also extremely rapid compared to the much larger conglomerates it is likely to come across, this ‘event-particle’ will miss almost everything all the time. In UET it is much more of a problem how a small and ‘rapid’ event-particle can ever collide with anything at all (and thus be perceived) than for a particle to apparently disappear into thin air. When I first came to this rather improbable conclusion I was somewhat startled. But I did not know at the time that neutrinos, which are thought to have a very small mass and to travel nearly at the speed of light, are by far the commonest particles in the universe and, even though millions are passing through my fingers as I write this sentence, they are incredibly difficult to detect because they interact with ordinary ‘matter’ so rarely (Note 3). This, of course, is exactly what I would expect ─ though, on the other hand, it is a mystery why it is so easy to intercept photons and other particles. It is possible that the question of re-appearance rates has something to do with this : clearly neutrinos are not only extremely compact, have very high speed compared to most material objects, but also have an abnormally high re-appearance rate, near to the maximum.
[Diagram: Relativity ─ Reappearance Rates] In the adjacent diagram we have the same angle sin θ = v/c but progressively more extended reappearance rates 1/1; 2/2; 3/3; and so on. The total area taken over n ksanas will be the same but the behaviour of the event-chains will be very different.
I suspect that the question of different re-appearance rates has vast importance in all branches of physics. For it could well be that it is a similarity of re-appearance rates ─ a sort of ‘event resonance’ ─ that draws disparate event chains together and indeed is instrumental in the formation of the very earliest event-chains to emerge from the initial randomness that preceded the Big Bang or similar macro events.
Also, one suspects that collisions of event conglomerates disturb not only the spread and compactness of the constituent event-chains, likewise their 'momentums', but also, and more significantly, their re-appearance rates. All this is, of course, highly speculative, but so was atomic theory prior to the 20th century, even though atomism as a physical theory and cultural paradigm goes back to the 4th century BC at least.        SH  29/11/13

 

 

Note 1  Compared to the usual 3 × 10^8 metres/second, the unitary value of s0/t0 seems absurdly small. But one must understand that s0/t0 is a ratio and that we are dealing with very small units of distance and time. We only perceive large multiples of these units and it is important to bear in mind that s0 is a maximum while t0 is a minimum. The actual kernel, where each ultimate event has occurrence, turns out to have spatial extent s0/c* = su, so in 'ultimate units' the upper limit is c* su/t0. It is nonetheless a surprising and somewhat inexplicable physiological fact that we, as human beings, have a pretty good sense of distance but an incredibly crude sense of time. It is only necessary to pass images at a rate of about eight per second for the brain to interpret the successive images as a continuum, and the film industry is based on this circumstance. Physicists, however, gaily talk of all sorts of important changes happening in millionths or billionths of a second, and in an ordinary digital watch the quartz crystal is vibrating tens of thousands of times a second (32,768 times, in fact).

 

Note 2  Only Descartes amongst Western thinkers realized there was a problem here and ascribed the power of apparent self-perpetuation to the repeated intervention of God; today, in a secular world, we perforce ascribe it to 'natural forces'.

In effect, in UET, everything is pushed one stage back. For Newton and Galileo the 'natural' state of objects was to continue existing in constant straight-line motion, whereas in UET the 'natural' state of ultimate events is to disappear for ever. If anything does persist, this shows there is a force at work. The Buddhists call this all-powerful causal force 'karma', but unfortunately they were only interested in the moral, as opposed to the physical, implications of karmic force; otherwise we would probably have had a modern theory of physics centuries earlier than we actually did.

Note 3  "Neutrinos are the commonest particles of all. There are even more of them flying around the cosmos than there are photons (…) About 400 billion neutrinos from the Sun pass through each one of us every second."  Frank Close, Particle Physics: A Very Short Introduction (OUP) pp. 41-2