
Bertrand Russell bewails the passing of the scientific spirit with the Greeks and notes that from Plotinus (A.D. 204-70) onwards “men were encouraged to look within rather than to look without”. But there is much to be gained from looking within: the only thing is that the insights to be gained have not yet been turned into science and technology. Maybe their time has come or is coming.
India is a strange civilization since its leading thinkers seem not only to have considered what I call the Unmanifest as more important than the everyday physical world (the Manifest) but to have actually been more at home there. Nonetheless, lost within the dense thickets of abstruse Hindu and Buddhist speculation, there are ideas which may yet find their application, in particular the concept of dharma.
We think of Buddhism today as a philosophical religion that recommends non-violence and compassion but, admirable though such aims may be, they do not appear to have been the Buddha’s main concern at all, to judge by the development of the religion he founded during the six or seven centuries after his supposed life.

“The formula of the Buddhist Credo — which professedly contains the shortest statement of the essence and spirit of Buddhism — declares that Buddha discovered the elements of existence (dharmas), their causal connection, and a method to suppress their efficiency for ever. Vasubandhu makes a similar statement about the essence of the doctrine : it is a method of converting the elements of existence into a condition of rest, out of which they never will arise again.” Stcherbatsky, The Central Conception of Buddhism

The (Hinayana) Buddhist equivalent of Democritus’ terse statement “Nothing exists except atoms and void” would thus be something like “Nothing exists except Nirvana, Karma and Dharma”.

       Nirvana is the state of absolute quiescence which is the end and origin of everything.
     Karma (literally ‘activity’, ‘action’) almost always has a strong moral sense in Buddhism — “[It is] that kind of activity which has an ethical charge and which must give rise to a ‘retributionary’ reverberation at a later time” (Anacker, Works of Vasubandhu). To be karmic an act must first of all be deliberate and, secondly, must be the result of an intent to harm another sentient being, or the result of an intent to relieve suffering. Although the Buddha categorically affirmed freedom of will, Buddhist psychology, known as Abhidharma, naturally accepted that most of our daily actions such as eating, sleeping and so on are ‘quasi-automatic’ and do not bring about either reward or punishment in this or a future life. But this concentration on moral acts and their consequences merely underlines the whole aim and approach of Buddhism as a religion/philosophy, which is to bring to an end the suffering that is an inevitable part of human (and animal) existence. It would thus seem perfectly legitimate to extend the sense of karma to causal processes in general — “the law of karma … is only a special case of causality” (Stcherbatsky, BL). Had the Buddhist thinkers wished to develop a physical as opposed to a spiritual/psychological belief system, they would most likely have made karma (in this extended sense) a prominent feature of such a system. Not only did they abstain from doing so, but they would have regarded excessive interest in the physical world as altogether undesirable since it did not further enlightenment but on the contrary tended to obstruct it.
     For (Hinayana) Buddhist thinkers of the time dharma(s) are the ephemeral but constantly reappearing ‘elements’ which make up absolutely everything we think of as real, material or immaterial. All alleged ‘entities’ such as Matter, Soul, the universe, individual objects, persons  &c. &c. are not true entities but merely bundles (skandhas) or sequences (santanas) of dharmas. Hence the first line of my poem (see Note)

                      “Just elements exist, there is no world”

     Although the subsequent Mahayana (‘Greater Vehicle’) Buddhist thinkers denied the ‘absolute reality’ of the ‘dharma(s)’, the Hinayana thinkers of this era (Vasubandhu, Dignaga, Dharmottara &c.) emphatically affirmed their reality — but with the proviso that our ‘normal’ perceptions are hopelessly distorted by irrelevant intellectual additions that are delusory. The dharma(s) have sometimes been compared to the noumena or ‘Things-in-themselves’ of Kant, but they are in fact what Kant would have called phenomena — phenomena purified by a (usually) long and painful process of demystification and deintoxication. The Hinayana philosophical approach is unique in claiming that knowledge of what is ‘really real’ does not at all entail fleeing from the physical world into a transcendent Neverneverland but on the contrary recovering the pristine world of ‘direct sensation’ — “in pure reality there is not the slightest bit of imaginative construction” (Stcherbatsky, BL).

   All this is all very well, but what exactly are the dharma(s) and to what extent can they be made to form the basis of a physical theory? (Although the plural of dharma is made by adding an ‘s’ I cannot quite accustom myself to doing this.) Being irreducible, the dharma(s) cannot be defined in terms of anything more elementary. However, what can be said, summarizing the conclusion of Stcherbatsky’s excellent book and other sources, is that dharma(s):

1. are entirely separate one from another;
2. have no duration;
3. tend to congregate in bundles;
4. are subject to a causal force which makes them ‘co-operate’ with one another;
5. are in a perpetual state of commotion.

    I draw certain far-reaching, possibly fanciful, conclusions from (1-5) above — or, if you like, I interpret them in accord with my own independent thought-experience.
    (1) to my mind implies that there are gaps between dharma(s) and thus that there are no continuous entities whatsoever — with the exception of nirvana which one could (perhaps) just conceivably equate with the quantum vacuum.
    (1) in combination with (2) means that there is incessant change (replacement of one dharma by another) but strictly speaking no motion, no continuous motion that is. What we call motion is nothing but consecutive dharma(s) which are so close to each other that the mind merges them together just as it does the separate images on a cinema screen. “Momentary things” writes Kamalasila, “cannot displace themselves because they disappear at the very place at which they appeared”.
    (3) explains, or rather describes, the appearance of what we consider to be matter : it is the result of the ‘combining’ — the Indian sources say ‘co-operating’ — tendencies of the dharma(s).
    (4) recognizes that what has occurrence is subject to certain formal ‘laws’, i.e. events do not usually occur at random and certain events are invariably followed by specific other events with which they are regularly associated (‘This being, that arises’).
    It is difficult to know what to make of (5), the claim that the dharma are ‘turbulent’, ‘agitated’ — though this is perhaps the most important characteristic of the dharma(s). (The Buddha was doubtless thinking of the great difficulty of ‘quietening’ the mind during meditation and, for that matter, during all conscious states.) Now, air or water can be turbulent — what does this mean? Physically, if we are to believe the current scientific world view, it means that the microscopic molecules that make up what we loosely call ‘air’ or ‘water’ are rushing about in a random manner, colliding violently with each other. This state of commotion is to be contrasted with the state of affairs when everything is ‘still’ — though, according to contemporary science, the molecules of a fluid are still moving about randomly even when the fluid is ‘in equilibrium’ (albeit less violently). There is, interestingly, no mention in Buddhist literature of the dharma(s) actually colliding with one another even when they are collected into bundles (‘skandhas’). So the ‘turbulence’ should perhaps be interpreted as the tendency of these ‘elements’ to reform, or rather to bring into momentary existence other similar dharma(s) until, eventually, when finally pacified, they cease altogether to conglomerate in space or to persist in time — Vasubandhu’s “condition of rest from which they never shall arise again”.
    Although the following conception is much more Hindu than Buddhist in spirit, and would have been strenuously rejected by the Buddhists who developed the dharma theory, I personally envisage the ‘turbulence’ as pertaining to an invisible, all-pervasive substratum: the dharma(s) are specks of turbulence on the surface of a sort of cosmic fluid, foam on an invisible ocean. When the turbulence dies away, the ocean returns to its original state of quiet — until the next cycle commences. Where have the dharma(s) gone to? Nowhere. What we call ‘matter’ and ‘life’ are nothing more (nor less) than a temporary surface film on this enduring ‘sub-stance’. The universe is a knot tied in a (non-material) string : it is pointless to ask where the knot has gone to when the knot is finally untied.

S.H. 4/10/19

Abbreviations:     BL  refers to Buddhist Logic Vol. I by Stcherbatsky (Dover Publications 1962, an “unabridged republication of the work first published by the Academy of Sciences of the U.S.S.R., Leningrad, circa 1930”).

Note  The full version is:

Just elements exist, there is no world,
Events emerge from nowhere, blossom, fall,
Just elements exist, there is no world,

Events emerge from nowhere, blossom, fall,
Like hail upon the earth or glistening froth,
Just elements exist, there is no world.

 Like hail upon the earth or glistening froth,
The dharma form and open, scatter, burst,
Each moment brings forth others, vanishes.

The dharma form and open, scatter, burst,
Glistening the froth appears and thunderous the hail,
Just elements exist, there is no world.

Glistening the froth appears and thunderous the hail,
As ceaselessly the living dharma form,
Each moment brings forth others and then vanishes.

from Origins by Sebastian Hayes

A completely axiomatic theory purports to make no appeal to experience whatsoever though one doubts whether any such expositions are quite as ‘pure’ as their authors claim. Even Hilbert’s 20th century formalist version of Euclid, his Grundlagen der Geometrie, has been found wanting in this respect ─ “A 2003 effort by Meikle and Fleuriot to formalize the Grundlagen with a computer found that some of Hilbert’s proofs appear to rely on diagrams and geometric intuition” (Wikipedia).

What exactly is the axiomatic method anyway? It seems to have been invented by the Greeks and in essence it is simply a scheme that divides a subject into :
(1) that part which has to be taken for granted in order to get started at all ─ in Euclid the Axioms, Definitions and Postulates (Note 1); and
(2) that part which is derived by valid chains of reasoning from the first, namely the Theorems ─ Heath calls them ‘Propositions’.
A strictly deductive, axiomatic presentation of a scientific subject made perfect sense in the days when Western scientists believed that an all-powerful God had made the entire universe with the aid of a handful of mathematical formulae but one wonders whether it is really appropriate today when biology has become the leading science. Evolution proceeds via random mutation plus ruthless selection and human societies and/or individuals often seem to owe more to happenstance and experience than reasoning and logic. Few, if any, important discoveries in mathematics have been strictly deductive: I doubt if anyone ever sat down of an evening with the Axioms of von Neumann Set Theory in order to deduce something interesting and original, and certainly no one ever learned mathematics that way (except possibly a robot). For all that, the structural simplicity and elegance of the axiomatic method remains extremely appealing and is one of the reasons why Euclid’s Elements and Newton’s Principia are among the half dozen best-selling books of all time ─ though few people read them today.
Apart from the axioms which are an integral part of a science or branch of mathematics, there exist also certain methodological principles (or prejudices) which, properly speaking, don’t belong to the subject, but nonetheless determine the general approach and overshadow the whole work. These principles should, ideally, be stated at the outset though they rarely are.

There are two principles that I find I have used implicitly or explicitly throughout my attempts to kick-start Ultimate Event Theory. The first is Occam’s Razor, or the Principle of Parsimony, which in practice means preferring the simplest and most succinct explanation ‘other things being equal’. According to Russell, Occam, a mediaeval logician, never actually wrote that “Entities are not to be multiplied without necessity” (as he is usually quoted as stating), but he did write “It is pointless to do with more what can be done with less” which comes to much the same thing. The Principle of Parsimony is uncontroversial and very little needs to be said about it except that it is a principle that is, as it were, imposed on us by necessity rather than being in any way ‘self-evident’. We do not really have any right to assume that Nature always chooses the simplest solution: indeed it sometimes looks as if Nature enjoys complication just for the sake of it. Aristotle’s Physics is a good deal simpler than Newton’s and the latter’s much easier to visualize than Einstein’s: but the evidence so far seems to favour the more complicated theory.
The second most important principle that I employ may be called the Principle of Parmenides, since he first stated it in its most extreme form,
       “If there were no limits, there would be nothing”.
In the context of Ultimate Event Theory this often becomes:
        “If there were no limits, nothing would exist, except (possibly) the Locality itself”
and the slightly different “If there were no limits, nothing would persist”.

      This may sound unexceptionable, but what I deduce from this principle is highly controversial, namely the necessity to expel the notion of actual infinity from science altogether, and likewise from mathematics (Note 2). The ‘infinite’ is by definition ‘limitless’ and so falls under the ban of this very sensible principle. Infinity has no basis in our sense experience since no one, with the exception of certain mystics, has ever claimed to have ‘known’ the infinite. And mystical experience, though perfectly valid and apparently extremely enjoyable, obviously requires careful assessment before it can be introduced into a theory, scientific or otherwise. In the majority of cases, it is clear that what mystics (think they) experience is not at all what mathematicians mean by the sign ∞ but is rather an alleged reality which is ‘non-finite’ in the sense that any form of measurement would be totally inappropriate and irrelevant. (It is closer to what Bohm calls the Implicate Order as opposed to the Explicate Order ─ unhappy names for a very useful dichotomy.) In present-day science, ‘infinity’ simply functions as a sort of deus ex machina (Note 3) to get one out of a tight spot, and even then only temporarily. As far as I know, there is not a scrap of evidence to suggest that any known process or observable entity actually is either ‘infinitely large’ or ‘infinitely small’. All energy exchanges are subject to quantum restrictions (i.e. come in finite packages) and all sorts of entities which were once regarded as ‘infinitely small’, such as atoms and molecules, can now actually be ‘seen’, if only via an electron tunnelling microscope. Even the universe we live in, which for Newton and everyone else alive in his time was ‘infinite’ in size, is sometimes thought today to have a finite current extent and is certainly thought to have a specific age (around 13.8 billion years). All that is left as a final bastion of the infinity delusion is space and time, and even here one or two noted contemporary physicists (e.g. Lee Smolin and Fay Dowker) dare to suggest that the fabric of Space-Time may be ‘grainy’. But enough on this subject which, in my case, tends to become obsessive.
What can an axiomatic theory be expected to do? One thing it cannot be expected to do is to give specific quantitative results. Newton showed that the law of gravitation had to be an inverse square distance law but it was some time before a value could be attributed to the indispensable gravitational constant, G. And Eddington quite properly said that we could conclude simply by reasoning that in any physical universe there would have to be an upper bound for the speed of a particle or the transmission of information, but that we would not be able to deduce by reasoning alone the actual value of this upper bound (namely c ≈ 3 × 10⁸ metres/second).
It is also legitimate, even in a broadly axiomatic presentation, to appeal to common experience from time to time, provided one does not abuse this facility. For example, de Sitter’s solution of Einstein’s field equations could not possibly apply to the universe we (think we) live in, since his solution required that such a ‘universe’ would be entirely empty of matter ─ which we believe not to be the case.
One would, however, require a broadly axiomatic theory to lead, by reasoning alone, to some results which, as it happens, we know to be correct, and also, if possible, to make certain other predictions that no rival theory had made. And a theory which embodies a very different ‘take’ on life and the world might still prove worthwhile stating even if it is destined to be promptly discarded: it might prepare the ground for other, more mature, theories by pointing in a certain unexpected direction. Predictive power is not the only goal and raison d’être of a scientific theory: the old Ptolemaic astronomy was for a long time perfectly satisfactory as a predictive system and, according to Koestler, Copernicus’s original heliocentric system was no simpler. As a piece of kinematics, the Ptolemaic Earth-centred system was adequate and, with the addition of more epicycles, could probably ‘give the right answer’ even today. However, Copernicus’s revolution paved the way for Galileo’s and Newton’s dynamical world-view in which the movements of planets were viewed in terms of applied forces, and so proved far more fruitful.
It is also worth saying that a different world-view from the current established one may remain more satisfactory with respect to certain specific areas, while being utterly inadequate for other purposes. If one is completely honest, one would, I think, have to admit that the now completely discredited magical animistic world-view has a certain cogency and persuasiveness when applied to aberrant human behaviour:   this is why we still talk meaningfully of charm, charisma, inspiration, luck, jinxes, fascination, fate ─ concepts that belong firmly to another era.
Finally, the world-views of other cultures and societies are not just historical curiosities: people in these societies had different priorities and may well have noticed, and subsequently sought to explain, things that modern man is unaware of. Ultimate Event Theory has its roots in the world-views of societies long dead and gone: in particular the world-view of certain Hinayana Buddhist monks in Northern India during the first few centuries of our era, and that of certain Native Amerindian tribes like the Hopi as reflected in the structure of their languages (according to the Whorf-Sapir theory).

                                                                                                                                SH  26/09/19

Notes :
Note 1  The status of the fourth and last Euclidean subsection, the Definitions, is not entirely clear: they were supposed to be ‘informative’ only in the manner of an entry in a dictionary and “to have no existential import”. On the other hand, Russell concedes that “definitions are often nothing more than disguised axioms”.

Note 2 This is in line with Poincaré’s categorical statement, “There is, and can be, no actual infinity”. Gauss, often considered the greatest mathematician of all time, said something similar.

Note 3 A deus ex machina was, in Greek tragedy, a supernatural being who was lowered onto the stage by a sort of crane and whose purpose was to ‘save’ the hero or heroine when no one else could.
Larry Constantine, in an insightful letter to the New Scientist (13 Aug 2011 p. 30), wrote : “Accounting for our universe by postulating infinite parallel universes or explaining the Big Bang as the collision of “branes” are not accounts at all, but merely ignorance swept under a cosmic rug — a rug which itself demands explanation but is in turn buried under still more rugs.”


Events rather than Things

Descartes kicked off modern Western philosophy with his thought experiment of deciding what he absolutely couldn’t disbelieve in. He concluded that he could, momentarily at any rate, disbelieve in all sorts of things, even the existence of other people, but that he couldn’t disbelieve in the existence of himself, the ‘thinking being’. Now for anyone who has done meditation (and for some who haven’t, likewise), Descartes is way off. It really is possible to disbelieve in one’s own existence if by this we mean the ‘person’ who was born at such and such a date and place, went to such and such a school, and so on (Note 1). This ‘entity’ simply drifts away once you are alone, reduce the input from the outside world and confine yourself strictly to your present sensations. Indeed, it is often more difficult to believe that such a ‘being’ ever did exist, than to doubt its existence!
However, what you can’t dismiss even when meditating in isolation in a dark quiet room is the idea that there are some sort of events continually occurring, call them mental or physical or psychic (at this level such distinctions have little meaning). My version of the cogito ergo sum is thus, “There are thought/sensations, therefore there is something”. Sensations and thoughts are not physical objects but events of a particular kind, so why not take the concept of the event as primary and see where one gets to from there?
Moreover, one can at once draw certain conclusions. There must seemingly be a ‘somewhere’ for these sensations/thoughts to occur just as there must be a location for extended bodies. We require a ‘place’: let us call it the Locality. There is, however, no obligation to identify this ‘place where mental/physical events are occurring’ as the head (or brain) of René Descartes or of Sebastian Hayes (the author of the present pamphlet) and to rashly conclude, as Descartes does, that such a person necessarily exists. Nor is there any need just yet to identify the Locality with modern Einsteinian ‘Space-Time’ (though clearly for some people there is an irresistible temptation to do so). A second deduction, or rather observation, is that these mental/physical events do not occur ‘all at once’, they come ‘one after the other’, i.e. they are successive events.
A further question that requires settling is whether these fleeting thought/sensations are connected up in some way. This is not so easy to answer. In some cases quite clearly a certain thought does give rise to another in much the same way as a certain physical impulse triggers an action. But there also seem to be cases when thought/sensations simply emerge from nowhere and drift away into nowhere, i.e. appear to be entirely disconnected from neighbouring events. The first category, the thoughts that follow each other according to a recognizable pattern, naturally leads us to believe in some form of Causality, but there is reason to believe that it is not always operative.
All this seemed enough to make a start. I had a primary entity, the Event — primary because I couldn’t disbelieve in it — and, following closely after it in the sequence of ideas, the notions of an Event-Locality and of an Ordering of Events or Event-Succession. Finally, some causal principle linking one thought/event to another was needed, which I eventually baptised Dominance, partly to emphasize the (usually) one-sided rapport between two or more events but also to stress that a force is at work. Today, the notion of a binding causal connection between disparate events has been largely replaced by the much weaker statistical concept of correlation; indeed there is a strong tendency in contemporary scientific thought to expel both causality and force from physics altogether.

What is an Event?

Modern  axiomatic systems usually leave the basic notions, such as ‘lines’, ‘points’ &c. undefined for the good reason that, if they really are fundamental, there is nothing more basic in terms of which they can be described. At first glance this seems reasonable enough but the practice has always struck me as being rather deceitful. The authors of new-fangled geometries such as Hilbert know perfectly well that they could take for granted the reader’s prior knowledge of what a line or a point is — so why not say so?  My basic concept, the event, cannot be defined using other concepts that are more fundamental but what I can do is to openly appeal to the ‘intuitive’, or rather experiential, knowledge that people have of ‘events’ while striving to make this ‘prior knowledge’ more precise.
So what is an event? ‘Something that happens’, an ‘occurrence’… It is easier to say what it is not. An event is not a thing. Why not? Because things are long-lasting, relatively permanent. An event is punctual; it is not lasting, not permanent; it is here and it is gone, never to be experienced again. And it seems to have more to do with time (in the sense of succession) than space (in the sense of extension). An event is usually pinpointed by referring to events of the same type that happened before or after it, rather than by referring to events that happened alongside it. The Battle of Stalingrad came after the fall of France and before the Normandy invasion: the fighting that was going on in parts of Russia other than Stalingrad is not usually mentioned. An event is ‘entire’, ‘all of a piece’, ‘has no parts’; it is not a process and has no inner development since there is no ‘time’ (duration) for it to develop: it is here and then gone. The decline-and-fall of the Roman Empire is not an event.
An important consequence is that events cannot be tampered with — once they have happened, they have happened.

    “The moving finger writes and having writ,
Moves on, nor all thy piety nor wit
Can lure it back to cancel half a line
    Nor all thy tears wash out a word of it”

But objects, since they are more spread out in time, are alterable, can be expanded, diminished, painted over, vandalized, restored, bought and sold and likewise individuals can change for the better or worse otherwise life would not be worth living.
Events also seem to be more intimately involved with causality than things. The question, “Why is that tree there?”, though by no means nonsensical, sounds somewhat peculiar. But “Why did that branch break?” is a natural question to ask. Why indeed. And, as stated, events usually appear to be causally connected: we feel them to be very strongly bonded to specific other events, which is why we look for ‘cause-and-effect’ pairs of events.
To sum up:  An event is punctual, sequential,  entire, evanescent, irrevocable, and usually dependent on earlier events.

Ultimate Events

But here we come across a problem.
Although an event such as a battle, an evening out, a chance meeting with a friend, even a fall, is perceived as a ‘single item’, as being entire — otherwise we would not call it an event — it is obvious that any event you like to mention can be subdivided into a myriad of smaller events. Even a blow with a hammer, though treated in mechanics as an impulsive force and thus as having no duration to speak of, is not in point of fact a single event since, thanks to modern technology, we can take snapshots showing the progressive deformation of the object struck.
So, are we to conclude that all events are in fact composite? This is, I suppose, a permissible assumption but it does not appeal to me since it leads at once to infinite regress. It is already bad enough having to treat ‘space’ as being ‘infinitely divisible’ as the traditional mathematical treatment of motion  assumes it to be. But it is much worse to suppose that any little action we make is in reality made up of an ‘infinite’ number of smaller events. I certainly don’t want to go down this path and so I find myself obliged at the very outset to introduce an axiom which states that there are certain events which cannot be further decomposed. I name these ultimate events and they play much the same role in Eventrics as atoms once did in physical theory.
Ultimate events, if they exist at all (and I am convinced they do), must be very short-lived indeed since there are many physical processes which are known to take only a few nanoseconds and any such process must contain more than one ultimate event. Perhaps ultimate events will remain forever unobservable and unrecordable in any way, though I doubt this since the same was until recently said of atoms prior to the invention of the electron tunnelling microscope. Today it is possible to count the atoms on the surface of a piece of metal and sheets of graphene a single atom thick have either already been manufactured, or very soon will be. I can easily foresee that one day we will have the equivalent of Avogadro’s number for standard bundles of ultimate events. Whether or not this will come to pass, what we can do right now is to assume that all the features that we attribute to ordinary events, but which are only approximately true, are strictly true of ultimate events. Thus ultimate events really are punctual, all of a piece, have no parts and so on.
Assuming that a macroscopic event is made up of a large number of ultimate events, there must seemingly be something that keeps the ultimate events separate, i.e. stops them fusing. There is here a further choice. Are the ultimate events stacked up tightly against each other so that their extremities touch, as it were, or are they separated by gaps? Almost all thinkers who have taken the concept of ‘atoms of time’ seriously have opted for the first possibility but it does not appeal to me, indeed  I find it implausible. If ultimate events (or chronons) have a sort of skin as cells apparently have, this would imply that there is at least a rudimentary structure, an ‘inside’ and an ‘outside’ to an ultimate event. This seems an unnecessary and, to me, rather artificial assumption; also there are advantages in opting for the second alternative that will only become apparent later. At any rate, I decided from the very beginning to assume that there are gaps between ultimate events which means that bundles and streams of macroscopic events are not just made up of discrete micro-entities (ultimate events) but are discontinuous in a very strong sense. This is an extremely important assumption and it applies right across the board. Since everything is (by hypothesis) made up of ultimate events, it means that there are no truly physical continuous entities whatsoever with the single exception of ultimate events themselves (since they are entire by definition) and (possibly) the Locality itself. As the philosopher Heidegger put it in a memorable phrase, “Being is shot through with nothingness”.

A (very) Rough Visual Image

Many of the early Western scientists had a clear mental picture of solid bodies knocking into each other like billiard balls and, reputedly, Newton had a Eureka moment on seeing an apple falling to the ground in the orchard of the family farm (Note 2). Such mental pictures, though they do not always stand up to close scrutiny, have nonetheless been extremely helpful (as well as misleading) to scientists and philosophers in the past. Today abstraction is the name of the game but I suspect that many a pure mathematician employs crude images on the sly when no one is looking — some brave spirits even admit to doing so. I think it is better to declare one’s mental imagery from the outset. I picture to myself a sort of grid extending in all possible directions, or, better, a featureless expanse which reveals itself to be such a grid as soon as, or just before, an event ‘has occurrence’. Moreover, I imagine an ultimate event completely filling a grid-cube or grid-disc so that there is no room for any other ultimate events at this spot. This is the image that comes to mind when I say to myself, “This particular event has occurrence there and nowhere else”.
I now stipulate that a ‘spot’ of this grid is either occupied or empty but not both at once. This might seem obvious but it is nonetheless worth stating: it is the equivalent of the logical Law of Non-Contradiction but applied to events. No kind of prediction system would be much use to anyone if, say, it predicted that there would be an eclipse of the moon at a particular place and time and that there would simultaneously not be an eclipse at the same spot. One might reasonably object that Quantum Mechanics with its superposition of states does not respect this principle, but that is precisely why Quantum Mechanics is so worrisome (Note 3).
Thirdly, I assume that once a square of the grid is occupied it remains occupied ‘forever’. This is merely another way of saying, “What has happened has happened”, and I doubt if many people would quarrel with that. It is not possible to rewrite the (supposed) past because such events are not accessible to us and, even if they were, they could not be tampered with: there is no way to un-occur an event, or so I at any rate believe.
Finally, for the sake of simplicity, I assume to begin with that all ultimate events are the ‘same size’, i.e. occupy a spot of equivalent size on the Locality.

Axioms of Ultimate Event Theory

Putting these last assumptions together, along with my requirement that every occurrence can be decomposed into so many ultimate events, also my requirement that there must be some sort of interconnectedness between certain events, we have a set of axioms, i.e. assertions which it is not necessary or possible to ‘prove’ — you either take them or leave them. The whole art of finding the right axioms is to choose those that seem the most ‘reasonable’ (least far-fetched) but which readily give rise to non-obvious deductions. Ultimately the validity of the axioms depends on what one can make them do (Note 4).
Ultimate Event Theory, or my contemporary version of it, thus seems to require the following set of Definitions and Axioms :

 FUNDAMENTAL ITEMS:    Events, the Locality, Succession, Dominance.

DEFINITIONS:
    An ultimate event is an event that cannot be further decomposed.
    The Locality is the connected totality of all spots where ultimate events may have occurrence.
   Dominance is an influence that certain ultimate events exert on other events and on (repetitions of) themselves.

AXIOM OF OCCURRENCE

Everything that has occurrence is made up of a finite number of ultimate events.

AXIOM OF ULTIMATE EVENTS

All ultimate events have the same extent, i.e. occupy spots on the Locality of equivalent size.

AXIOM OF LOCALIZATION

A  spot on the Locality may receive at most one ultimate event, and every ultimate event that has occurrence occupies one, and only one, spot on the Locality.

AXIOM OF EXCLUSION

 A spot on the Locality is either  full, i.e. occupied by an ultimate event, or it is empty, but not both at once.

AXIOM OF IRREVERSIBILITY

 If an ultimate event has occurrence,  there is no way in which it  can be altered or prevented from having occurrence.

AXIOM OF DOMINANCE

Only events that have occurrence on the Locality may exercise dominance over  other events.

AXIOM OF GAPS

There are gaps between successive ultimate events.
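
For readers who think more easily in code, here is a short Python sketch of the grid picture behind these axioms — my own illustration, with invented names, not part of the theory’s formal apparatus. It shows how the Axioms of Localization, Exclusion and Irreversibility translate into rules for occupying spots:

class Locality:
    """Toy model of the Locality: spots are indexed by (position, ksana) pairs."""

    def __init__(self):
        self._spots = {}  # (position, ksana) -> label of the occupying ultimate event

    def occupy(self, position, ksana, event):
        spot = (position, ksana)
        # Irreversibility: an occupied spot stays occupied for ever,
        # so a second occupation of the same spot is simply an error.
        if spot in self._spots:
            raise ValueError("spot already occupied: occurrence is irrevocable")
        # Localization: one ultimate event occupies one, and only one, spot.
        self._spots[spot] = event

    def is_occupied(self, position, ksana):
        # Exclusion: a spot is either full or empty, never both at once.
        return (position, ksana) in self._spots

The Axiom of Gaps would then appear as a further constraint on which ksanas successive members of an event-chain may occupy; nothing in the model allows an entry, once made, to be deleted or overwritten.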

                                                                                                                                            SH  25/09/19

Note 1  “Introspective experience [according to Buddhists] shows us no ‘ego’ at all and no ‘world’ but only a stream of all sorts of sensations, strivings, and representations which together constitute ‘reality'”  (Max Weber, The Religions of India)

Note 2 Historians are often embarrassed by anecdotes about Newton seeing an apple fall and wondering whether there might be a universal ‘force of attraction’. But there is plenty of good evidence that the story is based on fact though Newton gave slightly different accounts of it in later life as we all tend to do about really important events. It is notable that (arguably) the three greatest scientists of all time, Archimedes, Newton and Einstein all had Eureka moments.

Note 3   If one accepts the original Schrödinger schema of Quantum Mechanics, the wave function itself does not model ‘events’ since whatever is going on prior to the collapse of the wave function entirely lacks the specificity and decisiveness of events. So there are apparently ‘real entities’ that are not composed of ultimate events. But we lack appropriate terms to deal with such semi-realities: ‘probability’ is far too weak a term, ‘potentiality’ is in every way preferable. Contemporary scientific parlance studiously avoids the concept of ‘potentiality’, so important for Aristotle, because of the dead weight of positivism — but the concept is due for a revival.

Note 4   Since sketching out the barebones of this theory some thirty years ago, I have somewhat lost faith in the appropriateness of the axiomatic method but, until something better is available, one continues to use it. We enter the drama of life in medias res, as it were, and I am inclined to think that, like human societies or animal species, the universe itself ‘makes things up as it goes along’, subject to some very general fundamental constraints of a logical nature. Such an Experimental Universe Theory is not yet the accepted contemporary scientific paradigm by a long shot but we seem to be moving steadily towards it.


The Rise and Fall of Atomism

So-called ‘primitive’ societies by and large split the world into two, what one might call the Manifest (what we see, hear &c.) and the Unmanifest (what we don’t perceive directly but intuit or are subliminally aware of). For the ‘primitives’ everything originates in the Unmanifest, especially drastic and inexplicable changes like earthquakes, sudden storms, avalanches and so on, but also more everyday but nonetheless mysterious occurrences like giving birth, changing a substance by heating it (i.e. cooking), growing up, aging, dying. The Unmanifest is understandably considered to be much more important than the Manifest — since the latter originates in the first but not vice-versa — and so the shaman, or his various successors, the ‘sage’, ‘prophet’, ‘initiate’ &c. claims to have special knowledge because he or she has ready access to the Unmanifest which normal people do not. The shaman and more recently the priest is, or claims to be, an intermediary between the two realms, a sort of spiritual marriage broker. Ultimately, a single principle or ‘hidden force’ drives everything, what has been variously termed in different cultures mana, wakanda, ch’i… Ch’i is ‘what makes things go’ as Chuang-tzu puts it, in particular what makes things more, or less, successful. If the cheetah can run faster than all other animals, it is because the cheetah has more mana and the same goes for the racing car; a warrior wins a contest of strength because he has more mana, a young woman has more suitors because of mana and so on.
Charm and charisma are watered down modern versions of mana and, like mana, are felt to originate in the beyond, in the non here and now, in the Unmanifest. This ancient dualistic scheme is far from dead and is likely to re-appear in the most unexpected places despite the endless tut-tutting of rationalists and sceptics; as a belief system it is both plausible and comprehensible, even conceivably contains a kernel of truth. As William James put it, “The darker, blinder strata of character are the only places in the world in which we catch real fact in the making”.
Our own Western civilization, however, is founded on that of Ancient Greece (much more so than on ancient Palestine). The Greeks, the ones we take notice of at any rate, seem to have been the first people to have disregarded the Unmanifest entirely and to have considered that supernatural beings, whether they existed or not, were not required if one wanted to understand the physical universe: basic natural processes properly understood sufficed (Note 1). Democritus of Abdera, whose works have unfortunately been lost, kicked off a vast movement which has ultimately led to the building of the Large Hadron Collider, with his amazing statement, reductionist if ever there was one, “Nothing exists except atoms and void”.

Atoms and void, however, proved to be not quite enough to describe the universe: Democritus’s whirling atoms and the solids they composed when they settled themselves down were seemingly subject to certain ‘laws’ or ‘general principles’ such as the Law of the Lever or the Principle of Flotation, both clearly stated in quantitative form by Archimedes. But a new symbolic language, that of higher mathematics, was required to talk about such things since the “Book of Nature is written in the language of mathematics” as Galileo, a Renaissance successor and great admirer of the Greeks, put it. Geometry stipulated the basic shapes and forms to which the groups of atoms were confined when they combined together to form regular solids — and so successfully that, since the invention of the high definition microscope, ‘Platonic solids’ and other fantastical shapes studied by the Greek geometers can actually be ‘seen’ today embodied in the arrangement of molecules in rock crystals and in the fossils of minute creatures known as radiolarians.
To all this Newton added the important notion of Force and gave it a precise meaning, namely the capacity to alter a body’s state of rest or constant straight-line motion, either by way of contact (pushes and pulls) or, more mysteriously, by ‘gravitational attraction’ which could operate at a distance through a vacuum. Nothing succeeds like success and by the beginning of the nineteenth century Laplace had famously declared that he had “no need of that hypothesis” — the existence of God — to explain the movements of heavenly bodies, while Helmholtz would later declare that “all physical problems are reducible to mechanical problems” and thus, in principle, solvable by applying Newton’s Laws. Why stop there? The dreadful implication, spelled out by maverick thinkers such as Hobbes and La Mettrie, was that human thoughts and emotions, maybe life itself, were also ultimately reducible to “matter and motion” and that it was only a question of time before everything would be completely explained scientifically.
The twentieth century has at once affirmed and destroyed the atomic hypothesis. Affirmed it because molecules and atoms, at one time considered by most physicists simply as useful fictions, can actually be ‘seen’ (i.e. mapped precisely) with an electron tunnelling microscope, and substances ‘one atom thick’ like graphene are actually being manufactured, or soon will be. However, atoms have turned out not to be indestructible or even indivisible as Newton and the early scientists supposed. Atomism and materialism have, by a curious circuitous route, led us back to a place not so very far from our original point of departure since the 20th century scientific buzzword, ‘energy’, has disquieting similarities to mana. No one has ever seen or touched ‘Energy’ any more than they have ever seen or touched mana. And, strictly speaking, energy in physics is ‘Potential Work’, i.e. Work which could be done but is not actually being done, while ‘Work’ in physics has the precise meaning Force × distance moved in the direction of the applied force. Energy is not something actual at all, certainly not something perceptible by the senses or their extensions; it is “strictly speaking a definition rather than a physical entity, merely being the first integral of the equations of motion” (Heading, Mathematical Methods in Science and Engineering p. 546). It is questionable whether statements in popular science books such as “the universe is essentially radiant energy” have any real meaning — taken literally they imply that the universe is ‘pure potentiality’, which it clearly isn’t.
The present era thus exhibits the contradictory tendencies of being on the one hand militantly secular and ‘materialistic’ both in the acquisitive and the philosophic senses of the word, while the foundations of this entire Tower of Babel, good old solid ‘matter’ composed of  “hard, massy particles” (Newton)  and “extended bodies” (Descartes) has all but evaporated. When he wished to refute the idealist philosopher, Bishop Berkeley, Samuel Johnson famously kicked a stone, but it would seem that the Bishop  has had the last laugh.

A New Starting Point?

Since the wheel of thought concerning the physical universe has more or less turned full circle, a few brave 20th century souls have wondered whether, after all, ‘atoms’ and ‘extended bodies’ really were the best starting point, and whether one might do better starting with something else. What though? There was in the early 20th century a resurgence of ‘animism’ on the fringes of science and philosophy, witness Bergson’s élan vital (‘Life-force’), Driesch’s ‘entelechy’ and similar concepts. The problem with such theories is not that they are implausible — on the contrary they have strong intuitive appeal — but that they seem to be scientifically and technologically sterile. In particular, it is not clear how such notions can be represented symbolically by mathematical (or other) symbols, let alone tested in laboratory conditions.
Einstein, for his part, pinned his faith on ‘fields’ and went so far as to state that “matter is merely a region where the field is particularly intense”. However, his attempt to unify physics via a ‘Unified Field’ was unsuccessful: unsuccessful for the layman because the ‘field’ is an elusive concept at best, and unsuccessful for the physicist because Einstein never did succeed in combining mathematically the four basic physical forces, gravity, electro-magnetism and the strong and weak nuclear forces.
More recently, there have been one or two valiant attempts to present and elucidate the universe in terms of ‘information’, even to the extent of viewing it as a vast computer or cellular automaton (Chris Langton, Stephen Wolfram et al.). But such attempts may well one day appear just as crudely anthropomorphic as Boyle’s vision of the universe as a sort of glorified town clock. Apart from that, one hopes that the universe, or whatever is behind it, has better things to do than simply pile up endless stacks of data like the odious Super Brains of Olaf Stapledon’s prescient SF fantasy Last and First Men, whose only ‘emotion’ is curiosity.

The Event

During the Sixties and Seventies, at any rate within the booming counter-culture, there was a feeling that the West had somehow ‘got it wrong’ and was leading everyone towards disaster with its obsessive emphasis on material goods and material explanations. The principal doctrine of the hippie movement, inasmuch as it had one, was that “Experiences are more important than possessions” — and the more outlandish the experiences the better. Zen-style ‘Enlightenment’ suddenly seemed much more appealing than the Eighteenth-century movement of the same name which spearheaded Europe into the secular, industrial era. A few physicists, such as Fritjof Capra, argued that, although classical physics was admittedly very materialistic in the bad sense, modern physics “wasn’t like that” and had strong similarities with the key ideas of eastern mysticism. However, though initially attracted, I found modern physics (wave/particle duality, quantum entanglement, Block Universe, &c. &c.) a shade too weird, and what followed soon after, String Theory, completely opaque to all but a small band of elite advanced mathematicians.
But the trouble didn’t start in the 20th century. Newtonian mechanics was clearly a good deal more sensible, but Calculus, when I started learning mathematics towards middle age, proved to be a major stumbling block, not so much because it was difficult to learn as because its basic principles and procedures were so completely unreasonable. D’Alembert is supposed to have said to a student who expressed some misgivings about manipulating infinitesimals, “Allez en avant, la foi vous viendra” (“Keep going, conviction will follow”), but in my case it never did. Typically, the acceleration (change of velocity) of a moving body is computed by supposing the velocity of the body to be constant during a certain ‘short’ interval of time; we then reduce this interval ‘to the limit’ and, hey presto! we have the derivative appearing like the rabbit out of the magician’s hat. But if the particle is always accelerating its speed is never constant, and if the particle is always moving, it is never at a fixed location. The concept of ‘instantaneous velocity’ is mere gobbledygook, as Bishop Berkeley pointed out to Newton centuries ago. In effect, ‘classical’ Calculus has its cake and eats it too — something we all like doing if we can get away with it — since it merrily sets δx to non-zero and zero simultaneously on opposite sides of the same equation. ‘Modern’, i.e. post mid-nineteenth-century, Calculus ‘solved’ the problem by the ingenious concept of a ‘limit’, the key idea in the whole of Analysis. Mathematically speaking, it turns out to be irrelevant whether or not a particular function actually attains a given limit (assuming it exists) just so long as it approaches closer than any desired finite quantity. But what anyone with an enquiring mind wants to know is whether in reality the moving arrow actually attains its goal or whether the closing door ever actually slams shut (to use two examples mentioned by Zeno of Elea). As a matter of fact in neither case do they attain their objectives according to Calculus, modern or classical, since, except in the most trivial case of a constant function, ‘taking the derivative’ involves throwing away non-zero terms on the Right Hand Side which, however puny, we have no right to get rid of just because they are inconvenient. As Zeno of Elea pointed out over two thousand years ago, if the body is in motion it is not at a specific point, and if situated exactly at a specific point, it is not in motion.
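
To make the complaint concrete, here is the standard textbook computation of the derivative of y = x², written out so that the double assignment is visible (my own reconstruction of the usual manipulation, not a quotation from any particular text):

\[
\frac{\delta y}{\delta x} \;=\; \frac{(x+\delta x)^2 - x^2}{\delta x} \;=\; \frac{2x\,\delta x + (\delta x)^2}{\delta x} \;=\; 2x + \delta x \qquad (\text{division legitimate only if } \delta x \neq 0)
\]
\[
\frac{dy}{dx} \;=\; 2x \qquad (\text{the left-over } \delta x \text{ now treated as if it were } 0)
\]

The division in the first line presupposes δx ≠ 0, while the passage to the second line discards the surviving δx as if it were exactly zero — the two incompatible assignments made on opposite sides of the same equation.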
     This whole issue can, however, be easily resolved by the very natural supposition (natural to me at any rate) that intervals of time cannot be indefinitely diminished and that motion consists of a succession of stills, in much the same way as a film we see in the cinema gives the illusion of movement. Calculus only works, inasmuch as it does work, if the increment in the independent variable is ‘very small’ compared to the level of measurement we are interested in, and the more careful textbooks warn the student against relying on Calculus in cases where the minimum size of the independent variable is actually known — for example in molecular thermodynamics, where the increment dn cannot be smaller than a single molecule.
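
On this supposition a ‘trajectory’ is simply a finite list of positions, one per minimal interval, and ‘velocity’ is a ratio of finite differences rather than a limit. A minimal sketch in Python (my own illustration; the names and the unit value of the ksana are placeholders, not measured constants):

KSANA = 1.0  # one minimal temporal interval, in arbitrary illustrative units

def trajectory(x0, displacement_per_ksana, n_ksanas):
    """Motion as a succession of stills: one definite position per ksana,
    with nothing 'in between' successive positions."""
    return [x0 + k * displacement_per_ksana for k in range(n_ksanas + 1)]

def velocity(stills, k):
    """Velocity as a finite difference over one ksana -- no limit is taken."""
    return (stills[k + 1] - stills[k]) / KSANA

stills = trajectory(0.0, 3.0, 5)  # positions 0, 3, 6, 9, 12, 15
print(velocity(stills, 2))        # 3.0 -- change from still to still, but no continuous motion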
In any case, on reflection, I realized that I had always felt ‘time’ to be discontinuous, and life to be made up of a succession of discrete moments. This implies — taking things to the limit — that there must be a minimal ‘interval of time’ which, moreover, is absolute and does not depend on the position or motion of an imaginary observer. I was thus heartened when, in my casual reading, I learned that nearly two thousand years ago certain Indian Buddhist thinkers had advanced the same supposition and even apparently attempted to give an estimate of the size of such an ‘atom of time’, which they referred to as a ksana. More recently, Whitrow, Stephen Wolfram and one or two others have given estimates of the size of a chronon based on the Planck limit — but it is not the actual size that is important so much as the necessary existence of such a limiting value (Note 2).
Moreover, taking seriously the Sixties mantra that “experiences are more important than things” I wondered whether one could, and should, apply this to the physical world and take as a starting point not the ‘fundamental thing’, the atom, but the fundamental event, the ultimate event, one that could not be further decomposed. The resulting general theory would be not so much physics as Eventrics, a theory of events which naturally separates out into the study of the equivalent of the microscopic and macroscopic realms in physics. Ultimate Event Theory, as the name suggests, deals with the supposed ultimate constituents of physical (and mental) reality – what Hinayana Buddhists referred to as dharma(s) — while large-scale Eventrics deals with ‘historical events’ which are massive bundles of ultimate events and which have their own ‘laws’.
        The essential point, as far as I was concerned, was that I suddenly had the barebones of a physical schema: ‘reality’ was composed of events, not of objects (Note 3), or “The world is the totality of events and not of things”, to adapt Wittgenstein’s aphorism. Ultimate Event Theory was born, though it has taken me decades to pluck up the courage to put such an intuitively reasonable theory into the public domain, so enormous is the paradigm shift involved in these few innocuous-sounding assumptions.       S.H. (3/11/2019)

Note 1 There exists, however, an extremely scholarly (but nonetheless very readable) book, The Greeks and the Irrational by E.R. Dodds, which traces the history of an ‘irrational’ counter-current in Greek civilisation from Homer to Hellenistic times. The author, a professor of Greek and a one time President of the Psychical Research Society, asked himself the question, “Were the Greeks in fact quite so blind to the importance of non-rational factors in man’s experience and behaviour as is commonly assumed both by their apologists and by their critics?” The book in question is the result of his erudite ponderings on the issue.

Note 2 Caldirola suggests 6.97 × 10⁻²⁴ seconds for the minimal temporal interval, the chronon ─ what I refer to by the Sanskrit term ksana. Other estimates exist, such as 5.39 × 10⁻⁴⁴ seconds (the Planck time). Causal Set Theory and some other contemporary relativistic theories assume minimal values for spatial and temporal intervals, though I did not know this at the time (sic).

Note 3 Bertrand Russell, of all people, clearly anticipated the approach taken in UET, but made not the slightest attempt to lay out the conceptual foundations of the subject.  “Common sense thinks of the physical world as composed of ‘things’ which persist through a certain period of time and move in space. Philosophy and physics developed the notion of ‘thing’ into that of ‘material substance’, and thought of material substance as consisting of particles, each very small, and each persisting throughout all time. Einstein substituted events for particles; each event had to each other a relation called ‘interval’, which could be analyzed in various ways into a time-element and a space-element. (…) From all this it seems to follow that events, and not particles, must be the ‘stuff’ of physics. What has been thought of as a particle will have to be thought of as a series of events. (…) ‘Matter’ is not part of the ultimate material of the world, but merely a convenient way of collecting events into bundles.”  Russell, History of Western Philosophy p. 786 (Counterpoint, 1979)


Every event or event cluster is in Ultimate Event Theory (UET) attributed a recurrence rate (r/r) given in absolute units stralda/ksana, where the stralda is the minimal spatial interval and the ksana the minimal temporal interval. r/r can in principle take the value of any rational number m/n or zero ─ but no irrational value. The r/r of an event is roughly the equivalent of its speed in traditional physics, i.e. it is a distance/time ratio.

If r/r = 0, this means that the event in question does not repeat.
If r/r = m/n this signifies that the event repeats m positions to the right every n ksanas and if r/r = −m/n it repeats m positions to the left.

But right or left relative to what? It is necessary to assume a landmark event-chain where successive ultimate events lie exactly above (or underneath) each other when one space-time ‘slice’ is replaced by the next. Such an event-chain is roughly the equivalent of an inertial system in normal physics. We generally assume that we ourselves constitute a standard inertial system relative to which all other inertial systems can be compared ─ we ‘are where we are’ at all instants and so, in a certain sense, are always at rest. In a similar way we constitute a sort of standard landmark event-chain to which all other event-chains can be related. But we cannot see ourselves, so we choose instead as standard landmark event-chain some object (= repeating event-cluster) that remains at a constant distance from us as far as we can tell. Such a choice is clearly relative, but we have to choose some repeating event-chain as standard in order to get going at all. The crucial difference is, of course, not between ‘vertical’ event-paths and ‘slanting’ event-paths but between ‘straight’ paths, whether vertical or not, and ones that are jagged or curved, i.e. not straight (assuming these terms are appropriate in this context). As we know, dynamics only really took off when Galileo, unlike Aristotle, realized that the fundamental distinction was between accelerated and non-accelerated motion, not between rest and motion.

So, the positive or negative (right or left) m variable in m/n assumes some convenient ‘vertical’ landmark sequence.

The denominator n of the stralda/ksana ratio can never be zero ─ not so much because ‘division by zero is not allowed’ as because “the moving finger writes and, having writ, moves on”, as the Rubáiyát puts it, i.e. time only stands still for the space of a single ksana. So an r/r where an event repeats but ‘stays where it is’ at each appearance takes the value 0/n, which we need to distinguish from 0.
Thus 0/n ≠ 0

m/n is a ratio but, since the numerator is in the absolute unit of distance, the stralda, m : n is not the same as (m/n) : 1 unless n = 1.  To say a particle’s speed is 4/5ths of a metre per second is meaningful, but if r/r = 4/5 stralda per ksana we cannot conclude that the event in question shifts 4/5ths of a stralda to the right every ksana (the stralda is indivisible). All we can conclude is that the event in question repeats every fifth ksana at a position four spaces to the right relative to its original position.
We thus need to distinguish between recurrence rates which appear to be the same because of cancelling. The denominator will thus, unless stipulated otherwise, always refer to the next appearance of an event. 187/187 s/k is, for example, very different from 1/1 s/k since in the first case the event only repeats every 187th ksana while in the second case it repeats every ksana. This distinction is important when we consider collisions. If there is any likelihood of confusion the denominator will be marked in bold, thus 187/187.
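Since cancelling is forbidden, a recurrence rate is best thought of as an ordered pair of integers rather than as a fraction. For readers who like to see such conventions pinned down concretely, here is a minimal sketch in Python (the names RecurrenceRate and displacement_at are my own, invented purely for illustration; UET itself fixes no such vocabulary):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RecurrenceRate:
        """Recurrence rate m/n in stralda per ksana, kept unreduced.
        m: signed displacement in stralda (negative = leftward).
        n: ksanas until the next reappearance (n >= 1).
        An event that never repeats is represented by None, never by a
        RecurrenceRate, so that 0/n ('repeats in place every nth ksana')
        is never confused with 0 ('does not repeat')."""
        m: int
        n: int

        def __post_init__(self):
            if self.n < 1:
                raise ValueError("denominator n must be at least 1")

        def displacement_at(self, k: int):
            """Position in stralda (relative to the landmark chain) at ksana k,
            or None if the event does not reappear at that ksana."""
            if k % self.n != 0:
                return None              # no reappearance between occurrences
            return (k // self.n) * self.m

    # 187/187 and 1/1 'cancel' to the same fraction but behave differently:
    slow = RecurrenceRate(187, 187)      # reappears only every 187th ksana
    fast = RecurrenceRate(1, 1)          # reappears at every ksana
    assert slow.displacement_at(1) is None and fast.displacement_at(1) == 1
    assert slow.displacement_at(187) == 187 == fast.displacement_at(187)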

Also, the stralda/ksana ratio for event-chains always has an upper limit. That is, it is not possible for a given ultimate event to reappear more than M stralda to the right or left of its original position at the next ksana ─ this is more or less equivalent to setting c ≈ 3 × 10⁸ metres/second as the upper limit for causal processes in Special Relativity. There is also an absolute limit N for the denominator irrespective of the value of the numerator, i.e. the event-chain with r/r = m/n terminates after n = (N−1) — or at the Nth ksana if it is allowed to attain the limit.
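Continuing the same illustrative sketch: the limits M and N are given no numerical values above, so the figures below are mere placeholders, and reading the displacement bound as a bound per elapsed ksana is an assumption of my own:

    M = 10**8    # placeholder: maximal lateral displacement in stralda per ksana
    N = 10**20   # placeholder: absolute limit on the denominator n

    def admissible(rr: RecurrenceRate) -> bool:
        """True if neither limit is violated: at most M stralda of displacement
        per elapsed ksana, and the chain terminates after n = N - 1 at the latest."""
        return abs(rr.m) <= M * rr.n and rr.n <= N - 1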

These restrictions mean that the Locality, even when completely void of events, has certain inbuilt constraints. Given any two positions A and B occupied by ultimate events at ksana k, there is an upper limit to the number of ultimate events that can be fitted into the interval AB at the next or any subsequent ksana. This means that, although the Locality is certainly not metrical in the way ordinary spatial expanses are, it is not true in UET that “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” (Note 1).       SH  11/09/19

Note 1 The statement “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” is the equivalent in UET of the axiom “Between any two points there is always another point” which underlies both classical Calculus and modern number theory. Coxeter (Introduction to Geometry p. 178) introduces “Between any two points….” as a theorem derived from the axioms of ‘Ordered Geometry’, an extremely basic form of geometry that takes as ‘primitive concepts’ only points and betweenness. The proof only works because the geometrical space in question entirely lacks the concept of distance whereas in UET the Locality, although in general non-metrical and thus distance-less, does have the concept of a minimum separation between positions where ultimate events can have occurrence. This follows from the general principle of UET based on a maxim of the great ancient Greek philosopher Parmenides:
“If there were no limits, nothing would persist except the limitless itself”.

The genesis of Ultimate Event Theory can be traced back to a stray remark made by the author of popular books on mathematics, W.W. Sawyer. In the course of an exchange of views on contradiction in mathematics, Sawyer threw off the casual remark that “a scientific theory would be useless if it predicted that an event such as an eclipse of the sun would happen at a given place and time, and also that it would not happen at the same time and place”. Such a ‘Law of Non-Contradiction for Events’ was assumed by all the classical physicists and seems to be a necessary (though never stated) assumption for doing science at all. Arguably, Quantum Mechanics does not respect this principle, but this is precisely why QM is so worrisome (Note 1).
Sawyer’s chance remark sounds innocuous enough, but the principle involved turns out to be extremely far-reaching. We have in effect a non-contradiction law for events (not statements), a building block of a physical rather than a logical theory. Now, it seems of the essence of an ‘event’ that it either happens ‘at a particular time and place’ or it does not: there is no middle ground. There would be little point in announcing that a certain musical or theatrical event was scheduled to take place in such and such a Town Hall on, say, Monday, the 25th of December in the year 20**, but also scheduled not to take place at the given time and date. And certainly, once the time and date have passed, the ‘event’ either has occurred or it has not. Moreover, it seems to be of the essence of an ‘event’ to be ‘punctual’, ‘precise’ as to place and time.
An ‘event’, however, is clearly itself made up of smaller events, there are, as it were, macro- and micro-events.  Narrowing everything down and ‘taking the limit’, we end up with the eminently reasonable supposition that there are ‘ultimate events’, i.e. events that cannot be further decomposed. Secondly, since like macro-events they are ‘precise as to time and place’, we may presume that they, as it were, occupy a single ‘grid position’ on the Event Locality. This at any rate is the schema I proposed to work with.
There are two philosophic assumptions, one negative and one positive, built into this schema, namely 1. that there is no such thing as infinite regress and 2. that an ‘event’ has to happen ‘somewhere’. Calculus and much of traditional physics have ‘infinite regress’ (or ‘infinite divisibility’, which comes to the same thing) built into them, i.e. they reject (1). Some contemporary systems such as Loop Quantum Gravity (LQG) are prepared to consider that space-time is perhaps ‘grainy’, but they do not see the need for an ‘event locality’, i.e. they reject (2). In LQG what we call time and space are simply ‘relations’ between basic entities (nodes) and have no real existence. And one could, of course, dispense both with actual infinity and an Event Locality, i.e. reject both (1) and (2) — but such a course does not appeal to me. I opted to exclude infinity from my proposed system of the world but, on the other hand, to accept that there is indeed a ‘Locality’, i.e. a ‘place’ where ultimate events can and do have occurrence.

Dispensing with actual infinity gets rid in one fell swoop of the ingenious paradoxes of Zeno and of Cantor’s transfinite sets, in whose reality no one except Cantor himself really believes. Instead of ‘infinite sets’, we have ‘indefinitely extendable sets’ which, as far as I can see, do all the work required of them without us having to (pretend to) believe in ‘actual infinity’. It is tedious to have to explain to mathematicians that so-called infinite sequences can indeed (and very often do) have a finite limit, but that this limit is, in the vast majority of cases, manifestly not attained. The terms ‘sum’ and ‘limit’ are not interchangeable and so-called ‘infinite’ series only ever have partial sums, indeed are indefinitely extendable sequences of partial sums. For example, the well-known series 1 + 1/2 + 1/4 + 1/8 + …. has limit 2 but can never attain it. Most (all?) non-trivial so-called ‘infinite’ series are strictly speaking interminable.
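The point about partial sums is easy to exhibit numerically. A small sketch using exact rational arithmetic (Python’s standard fractions module): every partial sum of 1 + 1/2 + 1/4 + …. falls short of the limit 2 by exactly 1/2^n, and no member of the sequence ever attains it:

    from fractions import Fraction

    partial = Fraction(0)
    for n in range(12):                  # extend as far as patience allows
        partial += Fraction(1, 2**n)     # add the nth term, 1/2^n
        # each partial sum misses the limit 2 by exactly 1/2^n ...
        assert 2 - partial == Fraction(1, 2**n)
        assert partial < 2               # ... so the limit is never attained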
As to (2), the positive requirement, it is to me inconceivable that ‘something that happens’, i.e. an event, does not happen somewhere, i.e. does not have a precise position on some underlying substratum without which it simply could not occur. The idea of space and time being ‘relations’ between things that exist rather than things that exist in their own right goes back to Leibniz and is one of the features that distinguishes his mathematics and science from that of Newton, who was a great believer in absolute time and space and thus in absolute position. I do not think there is any experiment that can determine the issue one way or the other, and doubtless temperament comes into play here, but for what it is worth I believe that Newton’s approach makes much more sense and has been more fruitful. As far as I am concerned, I am convinced that an event, if it occurs at all, occurs somewhere, though there is no reason at this stage to attribute any property to this ‘somewhere’ except that it allows events to ‘have occurrence’. It does, however, make the ‘Event Locality’ a primary actor, since this Locality seemingly existed prior to any particular events taking place. One could alternatively consider that an event, when and as it has occurrence, as it were carves out a place for itself to happen. In this schema the Locality is an essentially negative entity which does nothing to obstruct occurrence, and that is all. This is a perfectly reasonable approach but again one that does not appeal to me for aesthetic or temperamental reasons. However, once I accepted ultimate events and an Event Locality, I realized that I had two ‘primary entities’ that henceforth could be taken for granted. A third primary entity was some ‘force of causality’ providing order and coherence to events as they occurred, or rather re-occurred, and so we have the three primary entities of Ultimate Event Theory: ultimate events, the Locality, and a kind of causality that I call Dominance.       SH

Note 1  It is not, I think, at this stage worth getting involved in interminable discussions about Schrödinger’s dead-and-alive cats, though the issue will have to be faced at some stage. Suffice it to say, for the moment, that the wave function, prior to an intervention by a human or other conscious agent, does not respect the Law of Non-Contradiction for Events — and one way out is simply to accept that the wave function does not describe ‘events’ at all, though it does deal in ‘potential’ physical entities that are capable of producing bona fide events.

What is random? That which cannot be predicted with any confidence. But there is a weak and a strong sense to ‘unpredictable’. We might say that the motion of a leaf blown about by the wind is ‘random’ ― but then that may simply be because we don’t know the exact speed and direction of the wind or the aerodynamic properties of this particular leaf. In classical mechanics, there is no room for randomness since all physical phenomena are fully determined and so could in principle be predicted if one had sufficient data. Indeed, the French astronomer Laplace claimed that a super-mind, aware of the current positions and momenta of all particles in existence, could predict the entire future of the universe from Newtonian principles.

In practice, of course, one never does know the initial conditions of any physical system perfectly. Whether this is going to make a substantial difference to the outcome hinges on how sensitively dependent on the initial conditions the system happens to be. Whether or not the flap of a butterfly’s wings in the bay of Tokyo could give rise to a hurricane in Barbados as chaos theory claims, systems that are acutely sensitive to initial conditions undoubtedly exist, and this is, of course, what makes accurate weather forecasting so difficult. Gaming houses retire dice after a few hundred throws because of inevitable imperfections creeping in, and a certain Jagger made a good deal of money because he noted that certain numbers seemed to come up slightly more often than others on a particular roulette wheel and bet on them. Later on, he guessed that the cause was a slight scratch on this particular wheel, and there seems to have been something in this, for eventually the management thwarted him by changing the roulette wheels every night (Note 1). All sorts of other seemingly ‘random’ phenomena turn out, on close examination, to exhibit a definite bias or trend: for example, certain digits turn up in miscellaneous lists of data more often than others (Benford’s Law) and this bias, or rather its absence, has been successfully used to detect tax fraud.
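Sensitive dependence on initial conditions is easy to demonstrate numerically. The logistic map is the standard textbook example (my own illustration, not one the text above relies on): two starting values differing by one part in ten billion end up macroscopically far apart within a few dozen iterations:

    def logistic(x: float, r: float = 4.0) -> float:
        """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
        return r * x * (1 - x)

    a, b = 0.4, 0.4 + 1e-10     # two starting points differing by 10^-10
    for step in range(60):
        a, b = logistic(a), logistic(b)
    print(abs(a - b))           # typically of order 1, not of order 10^-10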

There is, however, something very unsatisfactory about the ‘unpredictable because of insufficient data’ definition of randomness: it certainly does not follow that there is an inherent randomness in Nature, nor does chaos theory imply that this is the case either. Curiously, quantum mechanics, that monstrous but hugely successful creation of modern science, does maintain that there is an underlying randomness at the quantum level. The radioactive decay of a particular nucleus is held to be not only unforeseeable but actually ‘random’ in the strong sense of the word ― though the bulk behaviour of a collection of atoms can be predicted with confidence. Likewise, genetic mutation, the pace setter of evolution, is regarded today as not just being unpredictable but, in certain cases at least, truly ‘random’. Randomness seems to have made a strong and unexpected come-back since it is now a key player in the game or business of living ― a bizarre volte-face given that science had previously been completely deterministic.

The ‘common sense’ meaning of randomness is the lack of any perceived regularity or repeating pattern in a sequence of events, and this will do for our present purposes (Note 2). Now, it is extremely difficult to generate a random sequence of events in the above sense, and in the recent past there was big money involved in inventing a really good random number generator. Strangely, most random number generators are not based on the behaviour of actual physical systems but depend on algorithms deliberately concocted by mathematicians. Why is this? Because, to slightly misquote Moshe, “complete randomness is a kind of perfection” (Note 3).
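As an illustration of what ‘algorithms deliberately concocted by mathematicians’ look like, here is the classic linear congruential generator in a few lines of Python. The constants are the well-known ‘minimal standard’ ones of Park and Miller; the stream is entirely deterministic, yet it passes many statistical tests of randomness:

    def lcg(seed: int):
        """Park-Miller 'minimal standard' generator:
        x -> 16807 * x mod (2**31 - 1), yielded as floats in (0, 1)."""
        m, a = 2**31 - 1, 16807
        x = seed
        while True:
            x = (a * x) % m
            yield x / m

    gen = lcg(seed=42)
    sample = [next(gen) for _ in range(5)]   # looks 'random', is anything but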

The more one thinks about the idea of randomness, the weirder the concept appears since a truly ‘random’ event does not have a causal precursor (though it usually does have a consequence). So, how on earth can it occur at all and where does it come from? It arrives, as common language puts it very well, ‘out of the blue’.

Broadly speaking there are two large-scale tendencies in the observable universe: firstly, the dissipation of order and decline towards thermal equilibrium and mediocrity because of the ‘random’ collision of molecules; secondly, the spontaneous emergence of complex order from processes that appear to be, at least in part, ‘random’. The first principle is enshrined in the Second Law of Thermodynamics : the entropy (roughly, the extent of disorder) of a closed system always increases, or (just possibly) stays the same. Contemporary biologists have a big problem with the emergence of order and complexity in the universe since it smacks of creationism. But at this very moment the molecules of tenuous dispersed gases are clumping together to form stars, and the trend of life forms on earth is, and has been for some time, a movement from relative structural simplicity (bacteria, archaea &c.) to the unbelievable complexity of plants and mammals. Textbooks invariably trot out the caveat that any local ‘reversal of entropy’ must always be paid for by increased entropy elsewhere. This is, however, not a claim that has been, or ever could be, comprehensively tested on a large scale, nor is it at all ‘self-evident’ (Note 4). What we do know for sure is that highly organized structures can and do emerge from very unpromising beginnings, and this trend seems to be locally on the increase ― though it is conceivable that it might be reversed.

For all that, it seems that there really are such things as truly random events and they keep on occurring. What can one conclude from this? That, seemingly, there is a powerful mechanism for spewing forth random, uncaused events, and that this procedure is, as it were, ‘hard-wired’ into the universe at a very deep level. But this makes the continued production of randomness just as mysterious, or perhaps even more so, than the capacity of whatever was out there in the beginning to give rise to complex life!

The generation of random micro-events may in fact turn out to be just about the most basic and important physical process there is. For what do we need to actually produce a ‘world’? As far as I am concerned, there must be something going on, in other words we need ‘events’ and these events require a source of some sort. But this source is remote and we don’t need to attribute to it any properties except that of being a permanent store of proto-events. The existence of a source is not enough though. Nothing would happen without a mechanism to translate the potential into actuality, and the simplest and, in the long run, most efficient mechanism is to have streams of proto-events projected outwards from the source at random. Such a mechanism will, however, by itself not produce anything of much interest. To get order emerging from the primeval turmoil we require a second mechanism, contained within the first, which enables ephemeral random events to, at least occasionally, clump together, and eventually build up, simply by spatial proximity and repetition, coherent and quasi-permanent event structures (Note 5). One could argue that this possibility, namely the emergence of ‘order from chaos’, however remote, will eventually come up ― precisely because randomness in principle covers all realizable possibilities. A complex persistent event conglomeration may be termed a ‘universe’, and even though an incoherent or contradictory would-be ‘universe’ will presumably rapidly collapse into disorder, others may persist and maybe even spawn progeny.

So, which tendency is going to win out, the tendency towards increasing order or reversion to primeval chaos? It certainly looks as if a recurrent injection of randomness is necessary for the ‘health’ of the universe and especially for ourselves ― this is one of the messages of natural selection and it explains, up to a point, the extraordinarily tortuous process of meiosis (roughly, sexual reproduction) as against mitosis, in which a cell simply duplicates its DNA and splits in two (Note 6). But there is also the “nothing succeeds like success” syndrome. And, interestingly, the evolutionary biologist John Bonner argues that microorganisms “are more affected by randomness than large complex organisms” (Note 7). This and related phenomena might tip the balance in favour of order and complexity ― though specialization also makes the larger organisms more vulnerable to sudden environmental changes.                                                                 SH

 

Note 1 This anecdote is recounted and carefully analysed in The Drunkard’s Walk by Mlodinow.

Note 2 Alternative definitions of randomness abound. There is the frequency definition whereby, “If a procedure is repeated over and over again indefinitely and one particular outcome crops up as many times as any other possible outcome, the sequence is considered to be random” (adapted from Peirce). And Stephen Wolfram writes: “Define randomness so that something is considered random only if no short description whatsoever exists of it”.

 Note 3 Moshe actually wrote “Complete chaos is a kind of perfection”.

Note 4 “The vast majority of current physics textbooks imply that the Second Law is well established, though with surprising regularity they say that detailed arguments for it are beyond their scope. More specialized articles tend to admit that the origins of the Second Law remain mysterious” (Stephen Wolfram, A New Kind of Science p. 1020).

Note 5 This is essentially the principle of ‘morphic resonance’ advanced by Rupert Sheldrake. Very roughly, the idea is that if a certain event, or cluster of events, has occurred once, it is slightly more likely to occur again, and so on and so on. Habit thus eventually becomes physical law, or can do. At bottom the ‘Gambler’s Fallacy’ contains a grain of truth: I suspect that current events are never completely independent of previous similar occurrences, despite what statisticians say. Clearly, for the theory to work, there must be a very slow build-up and a tipping point after which a trend really takes off. We require in effect the equivalent of the Schrödinger equation to show how initial randomness evolves inexorably towards regularity and order.
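The bare bones of such a reinforcement process can be caricatured in a dozen lines. The sketch below is a Pólya-urn scheme, offered purely as my own toy illustration (it is not anything Sheldrake himself proposes): each time an outcome occurs it becomes slightly more likely to occur again, and initially random draws harden into a stable, self-perpetuating bias ─ a different ‘habit’ on each run:

    import random

    def polya_urn(steps: int, reinforcement: int = 1) -> float:
        """Start with one 'A' ball and one 'B' ball; each draw is put back
        together with `reinforcement` extra copies of the colour drawn.
        Early chance thus hardens into a lasting bias."""
        a, b = 1, 1
        for _ in range(steps):
            if random.random() < a / (a + b):
                a += reinforcement
            else:
                b += reinforcement
        return a / (a + b)       # long-run proportion of 'A'

    # each run settles on a stable proportion, but a different one each time
    proportions = [polya_urn(10_000) for _ in range(5)]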

Note 6. In meiosis not only does the offspring get genes from two individuals rather than one, but there is a great deal of ‘crossing over’ of segments of chromosomes and this reinforces the mixing process.

Note 7 The reason given for this claim is that there are many more developmental steps in the construction of a complex organism and so “if an earlier step fails through a deleterious mutation, the result is simple: the death of the embryo”. On the other hand “being small means very few developmental steps, with little or no internal selection” and hence a far greater number of species, witness radiolaria (50,000) and diatoms (10,000). See article Evolution, by chance? in the New Scientist 20 July 2013 and Randomness in Evolution by John Bonner.

                 “There is a tide in the affairs of men
Which, taken at the flood, leads on to fortune”

Shakespeare, Julius Caesar

In a previous post I suggested that the three most successful non-hereditary ‘power figures’ in Western history were Cromwell, Napoleon and Hitler. Since none of the three had advantages that came by birth, as, for example, Alexander the Great or Louis XIV did, the meteoric rise of these three persons suggests either very unusual abilities or very remarkable ‘luck’.
From the viewpoint of Eventrics, success depends on how well a particular person fits the situation, and there is no inherent conflict between ‘luck’ and ability. Quite the reverse: the most important ‘ability’ that a successful politician, military commander or businessman can have is precisely the capacity to handle events, especially unforeseen ones. In other words, success to a considerable extent depends on how well a person handles his or her ‘good luck’ if and when it occurs, or how well a person can transform ‘bad luck’ into ‘good luck’. One doubts whether everyone gets brilliant opportunities that they fail to seize; but certainly most of us are blind to the opportunities that do arise and, when not blind, lack the self-confidence to seize such an offered ‘chance’ and turn it to our advantage.
The above is hardly controversial though it does rule out the view that everything is determined in advance, or, alternatively, the exact opposite, that ‘more or less anything can happen at any time anywhere’. I take the commonsense view that there are certain tendencies that really exist in a given situation. It is, however, up to the individual to reinforce or make use of such ‘event-currents’ or, alternatively, to ignore them and, as it were, pass by on the other side like the Levite in the Parable of the Good Samaritan. The driving forces of history are not people but events and ‘event dynamics’; however, this does not reduce individuals to the status of puppets, far from it. Either through instinct or correct analysis (or a judicious mixture of the two) the successful person identifies a ‘rising’ event current, gets with it if it suits him or her, and abandons it abruptly when it ceases to be advantageous. This is easy enough to state, but supremely difficult to put into practice. Everyone who speculates on the Stock Exchange knows that the secret of success is no secret at all : it consists in buying  when the price of stock is low but just about to rise and selling when the price is high but just about to fall. For one Soros, there are a hundred thousand or maybe a hundred million ‘ordinary investors’ who either fail entirely or make very modest gains.
But why, one might ask, is it advantageous to identify and go with an ‘event trend’ rather than simply decide what you want to do and pursue your objective off your own bat? Because the trend will do a good deal of the work for you : the momentum of a rising trend is colossal, indeed for a while, seems to be unstoppable. Pit yourself against a rising trend and it will overwhelm you, identify yourself with it and it will take you along with a force equivalent to that of a million individuals. If you can spot coming trends accurately and go with them, you can succeed with only moderate intelligence, knowledge, looks, connections, what have you.

Is charisma essential for success?

It is certainly possible to succeed spectacularly without charisma, since Cardinal Richelieu, the most powerful man in the France and Europe of his day, had none, whereas Joan of Arc, who had plenty, had a pitifully short career. Colbert, finance minister of Louis XIV, is another example; indeed, in the case of ministers it is probably better not to stick out too much from the mass, even to the extent of appearing a mediocrity.
Nonetheless, Richelieu and Colbert lived during an era when it was only necessary to obtain the support of one or two big players such as kings or popes, whereas, in a democratic era, it is necessary to inspire and fascinate millions of ‘ordinary people’. No successful modern dictator lacked charisma : Stalin, Mao Tse-tung and Hitler all had plenty, and this made up for much else. Charisma, however, is not enough, or not enough if one wishes to remain in power : to do this, an intuitive or pragmatic grasp of the behaviour of event patterns is a sine qua non, and this is something quite different from charisma.

Hitler as failure and mediocrity

Many historians, especially British, are not just shocked but puzzled by Hitler ─ though less now than they were fifty years ago. For how could such an unprepossessing individual, with neither looks, polish, connections nor higher education, succeed so spectacularly? One British newspaper writer described Hitler, on the occasion of his first big meeting with Mussolini, as looking like “someone who wanted to seduce the cook”.
Although he had participated in World War I and shown himself to be a dedicated and brave ‘common soldier’, Hitler never had any experience as a commander on the battlefield even at the level of a platoon ─ he was a despatch runner who was told what to do (deliver messages) and did it. Yet this was the man who eventually got control of the greatest military machine in history and blithely disregarded the opinions of seasoned military experts, initially with complete success. Hitler also proved to be a vastly successful public speaker, but he never took elocution lessons and, when he started, even lacked the experience of handling an audience that an amateur  actor or stand-up comedian possesses.
Actually, Hitler’s apparent disadvantages proved to be more of a help than a hindrance once he had begun to make his mark, since they gave his adversaries and rivals the erroneous impression that he would be easy to manipulate and outwit. Hitler learned about human psychology, not by reading learned tomes written by Freud and Adler, but by eking out a precarious living in Vienna as a seller of picture postcards and sleeping in workingmen’s hostels. This was learning the hard way which, as long as you last the course (which the majority don’t), is generally the best way.
It is often said that Hitler was successful because he was ruthless. But ruthlessness is, unfortunately, not a particularly rare human trait, at any rate in the lower levels of a not very rich society. Places like Southern Italy or Colombia by all accounts have produced and continue to produce thousands or tens of thousands of exceedingly ruthless individuals, but how many ever get anywhere? At the other end of the spectrum, one could argue that it is impossible to be a successful politician without a certain degree of ruthlessness ─ though admittedly Hitler took it to virtually unheard of extremes. Even ‘good’ successful political figures such as Churchill were ruthless enough to happily envisage dragging neutral Norway into the war (before the Germans invaded), to authorise the deliberate bombing of civilian centres and even to approve in theory the use of chemical weapons. Nor did de Gaulle bother unduly about the bloody repercussions for the rural population that the activities of partisans would inevitably bring  about. Arguably, if people like Churchill and de Gaulle had not had a substantial dose of ‘ruthlessness’ (aka ‘commitment’), we would have lost the war long before the Americans ever got involved  ─ which is not, of course, to put such persons on a level with Hitler and Stalin.
To return to Hitler. Prior to the outbreak of WWI, Hitler, though by all accounts already quite as ruthless and opinionated as he subsequently proved himself to be in a larger arena, was a complete failure. He had a certain, rather conventional, talent for pencil drawing and some vague architectural notions, but that is about it. Whether Hitler would or could have made a successful architect we shall never know, since he was refused entry twice by the Viennese School of Architecture. He certainly retained a deep interest in the subject and did succeed in spotting and subsequently promoting an architect of talent, Speer. But there is no reason to think we would have heard of Hitler if he had been accepted as an architectural student and subsequently articled to a Viennese firm of Surveyors and Architects.
As for public speaking, Hitler didn’t do any in his Vienna pre-war days, only discovering his flair in Munich in the early twenties. And although Hitler enlisted voluntarily for service at the outbreak of  WWI, he was for many years actually a draft-dodger wanted for national service by Austria, his country of birth. Hardly a promising start for a future grand military strategist.

Hitler’s Decisive Moment : the Beer Hall Putsch

Hitler did, according to the few accounts we have by people who knew him at the time, have boyhood dreams of one day becoming a ‘famous artist’ — but what adolescent has not? Certainly, Hitler did not, in  his youth and early manhood, see himself as a future famous political or military figure, far from it. Even when Hitler started his fiery speeches about Germany’s revival and the need for strong government, he did not at first cast himself in the role of ‘Leader’. On the contrary, it would seem that awareness of his own mission as saviour of the German nation came to him gradually and spasmodically. Indeed, one could argue that it was only after the abortive Munich Beer-Hall putsch that Hitler decisively took on this role : it was in a sense thrust on him.
The total failure of this rather amateurish plot to take over the government of Bavaria by holding a gun to the governor’s face and suchlike antics turned out to be the turning-point of his thinking, and of his life. In Quattrocento Italy it was possible to seize power in such a way ─ though only the Medici with big finance behind them really succeeded on a grand scale ─ and similar coups have succeeded in modern Latin American countries. But in an advanced industrial country like Germany where everyone had the vote, such methods were clearly anachronistic. Even if Hitler and his supporters had temporarily got control of Munich, they would easily have been put down by central authority : they would have been seven-day wonders and no more. It was this fiasco that decided Hitler to obtain power via the despised ballot box rather than by the more glamorous but outmoded methods of an Italian condottiere.
The failed Beer-hall putsch landed Hitler in court and, subsequently in prison; and most people at the time thought this would be the end of him. However, Hitler, like Napoleon before him in Egypt after the destruction of his fleet, was a strong enough character not to be brought  down by the disaster but, on the contrary, to view it as a golden opportunity. This is an example of the ‘law’ of Eventrics that “a disadvantage, once turned into an advantage, is a greater advantage than a straightforward advantage”.
What were the advantages of the situation? Three at least. Firstly, Hitler now had a regional and soon a national audience for his views, and he lost no time in making the court-room a speaker’s platform, with striking success. His ability as a speaker was approaching its zenith : he had the natural flair and already some years of experience. Hitler was given an incredibly lenient sentence and was even at one point thanked by the judge for his informative replies concerning Germany’s recent history! Secondly, while in prison, Hitler had the time to write Mein Kampf which, given his lax, bohemian life-style, he would probably never have got round to writing otherwise. And his temporary court-room celebrity meant the book was sure to sell if written and published rapidly.
Thirdly, and perhaps most important of all, the various nascent extreme Right groups made little or no headway with the ‘leader’ in prison which confirmed them in the view that  Hitler was indispensable. Once out of prison, he found himself without serious competitors on the Right and his position stronger than ever.
But the most important outcome was simply the realization that the forces of the State were far too strong to be overthrown by strong-arm tactics. The eventual break with Röhm and the SA was an inevitable consequence of Hitler’s fateful decision to gain power within the system rather than by openly opposing it.

Combination of opposite abilities

As a practitioner of Eventrics or ‘handler of events’, Hitler held two trump cards that are rarely dealt to the same individual. Firstly, even though his sense of calling seems to have come relatively late, by the early nineteen-thirties he was entirely convinced that he was a man of destiny. He is credited with the remarkable statement, very similar to one made by Cromwell, “I follow the path set by Providence with the precision and assurance of a sleepwalker”. It was this messianic side that appealed to the masses of ordinary people, and it was something that he retained right up to the end. Even when the Russian armies were at the gates of Berlin, Hitler could still inspire people who visited him in the Bunker. And Speer recounts how, even  at Germany’s lowest ebb, he overheard (without being recognized) German working people in a factory repeating like a mantra that “only Hitler can save us now”.
However, individuals who see themselves as chosen by the gods usually fail because they do not pay sufficient attention to ordinary, mundane technicalities. Richelieu said that someone who aims at high power should not be ashamed to concern himself with trivial details ─ an excellent remark. Napoleon has been called a ‘map-reader of genius’ and, to prepare for the Battles of Ulm and Austerlitz, he instructed Berthier “to prepare a card-index showing every unit of the Austrian army, with its latest identified location, so that the Emperor could check the Austrian order of battle from day to day” (Note 1). Hitler had a similar capacity for attention to detail, supported by a remarkable memory for facts and figures — there are many records of him reeling off correct data about the range of guns and the populations of certain regions to his amazed generals.
This ‘combination of contraries’ also applies to Hitler as a statesman. Opponents and many subsequent historians could never quite decide whether Hitler, from the beginning, aimed for world domination, or whether he simply drifted along, waiting to see where events would take him. In reality, as Bullock rightly points out, these contradictions are only apparent : “Hitler was at once fanatical and cynical, unyielding in his assertion of will power and cunning in calculation” (Bullock, Hitler and the Origins of the Second World War). This highly unusual combination of two opposing tendencies is the key to Hitler’s success. As Bullock again states, “Hitler’s foreign policy… combined consistency of aim with complete opportunism in method and tactics. (…) Hitler frequently improvised, kept his options open to the last possible moment and was never sure until he got there which of several courses of action he would choose. But this does not alter the fact that his moves followed a logical (though not a predetermined) course ─ in contrast to Mussolini, an opportunist who snatched eagerly at any chance that was going, but never succeeded in combining even his successes into a coherent policy” (Bullock, p. 139).
Certainly, sureness of ultimate aim combined with flexibility in day to day management is a near infallible recipe for conspicuous success. Someone who merely drifts along may occasionally obtain a surprise victory but will be unable to build on it; someone who is completely rigid in aim and means will not  be able to adapt to, and take advantage of, what is unforeseen and unforeseeable. Clarity of goal and unshakeable conviction is the strategic part of Practical Eventrics while the capacity to respond rapidly to the unforeseen belongs to the tactical side.

Why did Hitler ultimately fail?

Given the favourable political circumstances and Hitler’s unusual abilities, the wonder is, not that he lasted as long as he did, but that he eventually failed. On a personal level, there are two reasons for this. Firstly, Hitler’s racial theories, while they originally helped him to power, eventually proved much more of a drawback than an advantage. For one thing, since Hitler regarded ‘Slavs’ as inferior, this conviction unnecessarily alienated large populations in Eastern Europe, many of whom were originally favourable to German intervention since they had had enough of Stalin. Moreover, Hitler allowed ideological and personal prejudices to influence his choice of subordinates : rightly suspicious of the older Army generals but jealous of brilliant commanders like von Manstein and Guderian, he ended up with a General Staff of supine mediocrities.
Secondly, Hitler, though he had an excellent intuitive grasp of overall strategy, was a poor tactician. Not only did he have no actual experience of command on the battlefield but, contrary to popular belief, he was easily rattled and unable to keep a clear head in emergencies.
Jomini considered that “the art of war consists of six distinct parts:

  1. Statesmanship in relation to war
  2. Strategy, or the art of properly directing masses upon the theatre of war, either for defence or invasion.
  3. Grand Tactics.
  4. Logistics, or the art of moving armies.
  5. Engineering ─ the attack and defence of fortifications.
  6. Minor tactics.”
    Jomini, The Art of War p. 2

Hitler certainly ticks the first three boxes. But not (4), Logistics. Hitler tended to override his highly efficient Chief of General Staff, Halder, whereas Napoleon always listened carefully to what Halder’s equivalent, Berthier, had to say. According to Liddell Hart, the invasion of Russia failed, despite the high quality of the commanders and fighting men, because of an error in logistics.
“Hitler lost his chance of victory because the mobility of his army was based on wheels instead of on tracks. On Russia’s mud-roads its wheeled transport was bogged when the tanks could move on. If the panzer forces had been provided with tracked transport they could have reached Russia’s vital centres by the autumn in spite of the mud” (Liddell Hart, History of the Second World War). On such mundane details does the fate of empires and even of the world often depend.
As for (5), the attack on fortifications, it had little importance in World War II though the long-drawn out siege of Leningrad exhausted resources and troops and should probably have been abandoned. Finally, on (6), what Jomini calls ‘minor tactics’, Hitler was so poor as to be virtually incompetent. By ‘minor tactics’, we should understand everything relating to the actual movement of troops on the battlefield (or battle zone) ─ the area in which Napoleon and Alexander the Great were both supreme.  Hitler was frequently indecisive and vacillating as well as nervy, all fatal qualities for a military commander.
On two occasions, Hitler made monumental blunders that cost him the war. The first was the astonishing decision to hold back the victorious tank units just as they were about to sweep into Dunkirk and cut off the British forces. And the second was Hitler’s rejection of  Guderian’s plan for a headlong drive towards Moscow before winter set in; instead, following conventional Clausewitzian principles,  Hitler opted for a policy of encirclement and head-on battle. Given the enormous man-power of the Russians and their scorched earth policy, this was a fatal decision.
Jomini, as opposed to Clausewitz, recognized the importance of statesmanship in the conduct of a war, something that professional army officers and even commanders are prone to ignore. Whereas Lincoln often saw things that his generals could not, and on occasion successfully overrode them because he had a sounder long-term view, Hitler, a political rather than a military man, introduced far too much statesmanship into the conduct of war.
It has been plausibly argued, especially by Liddell Hart, that the decision to halt the tank units before Dunkirk was a political rather than a military decision. Blumentritt, operational planner for General Rundstedt, said, at a later date, that “the ‘halt’ had been called for more than military reasons, it was part of a political scheme to make peace easier to reach. If the British Expeditionary Force had been captured at Dunkirk, the British might have felt that their honour had suffered a stain which they must wipe out. By letting it escape, Hitler hoped to conciliate them” (Liddell Hart, History of the Second World War, pp. 89-90). This did make some kind of sense : a rapid peace settlement with Britain would have wound up the Western campaign and freed Hitler’s hands to advance eastwards, which had seemingly always been his intention. However, if this interpretation is correct, Hitler made a serious miscalculation, underestimating Britain’s fighting spirit and inventiveness.

Hitler’s abilities and disabilities

It would take us too far from the subject of Eventrics proper to go into the details of Hitler’s political, economic and military policies. My overall feeling is that Hitler was a master in the political domain, time and again outwitting his internal and external rivals and enemies, and that he had an extremely good perception of Germany’s economic situation and what needed to be done about it. But he was an erratic and often incapable military commander ─ for we should not forget that, following the resignation of von Brauchitsch, Hitler personally supervised the entire conduct of the war in the East (and everywhere else eventually). This is something like the reverse of the conventional assessment of Hitler, so it is perhaps worth explaining.
Hitler is credited with the invention of Blitzkrieg, a new way of waging war and, in particular, with one of the most successful campaigns in military history, the invasion of France, when the tank units moved in through the Ardennes, thought to be impassable. The original idea was in reality not Hitler’s but von Manstein’s (who got little credit for it), though Hitler did have the perspicacity to see the merits of this risky and unorthodox plan of attack, which the German High Command unanimously rejected. It is also true that Hitler took a special interest in the tank and does seem to have had some good ideas regarding tank design.
However, Hitler never seems to have rid himself completely of the conventional Clausewitzian idea that wars are won by large-scale confrontations of armed men, i.e. by modern ‘pitched battles’. Practically all (if not all) the German successes depended on surprise, rapidity of execution and artful manoeuvre ─ that is, on precisely the avoidance of direct confrontation. Thus the invasion of France, the early stages of the invasion of Russia, Rommel in North Africa and so on. When the Germans fought it out on a level playing field, they either lost, as at El Alamein, or achieved ‘victories’ that were so costly as to be more damaging than defeats, as in the latter part of the Russian campaign.
        Hitler was in fact only a halfway-modernist in military strategy. “The school of Fuller and Basil Liddell Hart [likewise Guderian and Rommel] moved away from using manoeuvre to bring the enemy’s army to battle and destroy it. Instead, it [the tank] should be used in such a way as to numb the enemy’s command, control, and communications and bring about victory through disintegration rather than destruction” (Messenger, Introduction to Jomini’s Art of War).

As to the principle of Blitzkrieg (Lightning War) itself, though it doubtless appealed to Hitler’s imagination, it was in point of fact forced on him by economic necessity : Germany simply did not have the resources to sustain a long war. It was make or break. And much the same went for Japan.
Hitler’s duplicity and accurate reading of his opponents’ minds in the realm of politics need no comment. But what is less readily recognized is how well he understood the general economic situation. Hitler had doubtless never read Keynes ─ though his highly capable Economics Minister, Schacht, doubtless had. But with his talent for simplification, Hitler realized early on that Germany laboured under two crippling economic disadvantages : she did not produce enough food for her growing population and, as an industrial power, lacked indispensable natural resources, especially oil and quality iron-ore. So where to obtain these and a lot more essential items? By moving eastwards, absorbing the cereal-producing areas of the Ukraine and getting hold of the oilfields of the Caucasus. This was the policy set out in the so-called ‘Hossbach Memorandum’ to justify the invasion of Russia to an unenthusiastic German High Command.
The policy of finding Lebensraum in the East was based on a ruthless but shrewd and essentially correct analysis of the economic situation in Europe at the time. But precisely because Germany would need even more resources in a wartime situation, victory had to be rapid, very rapid. The gamble nearly succeeded : as a taster, Hitler’s armies  overwhelmed Greece and Yugoslavia in a mere six weeks and at first looked set to do much the same in Russia in three months. Perhaps if Hitler had followed Guderian’s plan of an immediate all-out tank attack on Moscow, instead of getting bogged down in Southern Russia and failing to take Stalingrad, the gamble would actually have paid off.

Hitler: Summary from the point of view of Eventrics

The main points to recall from this study of Hitler as a ‘handler of events’ are the following.

  1. The methods chosen must fit the circumstances (witness Hitler’s switch to a strategy based on the ballot box rather than the revolver after the Beer-Hall putsch);
  2. An apparent defeat can be turned into an opportunity, a disadvantage into an advantage (e.g. Hitler’s trial after the Beer-Hall putsch);
  3. Combining inflexibility of ultimate aim with extreme flexibility on a day-to-day basis is a near invincible combination (Hitler’s conduct of foreign affairs during the Thirties);
  4. It is disastrous to allow ideological and personal prejudices to interfere with the conduct of a military campaign, and worse still to become obsessed with a specific objective (e.g. Hitler’s racial views, his obsession with taking Stalingrad).

 

As related in the previous post, Einstein, in his epoch-making 1905 paper, based his theory of Special Relativity on just two postulates,

  1. The laws of physics take the same form in all inertial frames.
  2. The speed of light in free space has the same value for all observers in inertial frames, irrespective of the relative motion of the source and the observer.

I asked myself if I could derive the main results of the Special Theory, the Rule for the Addition of Velocities, Space Contraction, Time Dilation and the ‘Equivalence’ of Mass and Energy from UET postulates.
Instead of Einstein’s Postulate 2, the ‘absolute value of the speed of light’, I employ a more general but very similar principle, namely that there is a ‘limiting speed’ for the propagation of causal influences from one spot on the Locality to another. In the simplest case, that of an  event-chain consisting of a single ultimate event that repeats at every ksana, this amounts to asking ourselves ‘how far’ the causal influence can travel ‘laterally’ from one ksana to the next. I see the Locality as a sort of grid extending indefinitely in all directions where  each ‘grid-position’ or ‘lattice-point’ can receive one, and only one, ultimate event (this is one of the original Axioms, the Axiom of Exclusion). At each ksana the entire previous spatial set-up is deftly replaced by a new, more or less identical one. So, supposing we can locate the ‘same’ spot, i.e. the ‘spot’ which replaces the one where the ultimate event had occurrence at the last ksana, is there a limit to how far to the left (or right) of this spot the ultimate event can re-occur? Yes, there is. Why? Well, I simply cannot conceive of there being no limit to how far spatially an ‘effect’ ─ in this case the ‘effect’ is a repetition of the original event ─ can be from its cause. This would be a holographic nightmare where anything that happens here affects, or at least could affect, what happens somewhere billions of light years away. One or two physicists, notably Heisenberg, have suggested something of the sort but, for my part, I cannot seriously contemplate such a state of affairs.  Moreover, experience seems to confirm that there is indeed a ‘speed limit’ for all causal processes, the limit we refer to by the name of c.
However, this ‘upper speed limit’ has a somewhat different and sharper meaning in Ultimate Event Theory than it does in matter-based physics because c (actually c*) is an integer and corresponds to a specific number of adjacent ‘grid-positions’ on the Locality existing at or during a single ksana. It is a distance rather than a speed, and even this is not quite right : it is a ‘distance’ estimated not in terms of ‘lengths’ but only in terms of the number of intermediary ultimate events that could conceivably be crammed into the interval.
In UET a distinction is made between an attainable limiting number of grid-positions to right (or left), denoted c*, and the lowest unattainable limit, c, though this finicky distinction can in many cases be neglected. But the basic schema is this. A ‘causal influence’, to be effective, must not only be able to at least traverse the distance between one ksana and the next ‘vertically’ (otherwise nothing would happen) but must also stretch out ‘laterally’, i.e. ‘traverse’ or rather ‘leap over’ a particular number of grid-positions. There is an upper limit to the number of positions that can be ‘traversed’, namely c*, an integer. This number, which is very great but not infinite ─ actual infinity is completely banished from UET ─ defines the universe we (think we) live in since it puts a limit to the operation of causality (as Einstein clearly recognized), and without causality there can, as far as I am concerned, be nothing worth calling a universe. Quite conceivably, the value of this constant c (or c*) is very different in other universes, supposing they exist, but we are concerned only with this ‘universe’ (massive causally connected more or less identically repeating event-cluster).
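In this discrete setting the ‘speed limit’ is just a counting condition, and it can be put in three lines. (The value given to c* below is a placeholder ─ no number is fixed for it here ─ and the function name is my own invention:)

    C_STAR = 10**8        # placeholder: attainable limit of grid-positions per ksana
    C = C_STAR + 1        # the lowest unattainable limit

    def reachable(lateral_positions: int, ksanas: int) -> bool:
        """Can a causal influence span `lateral_positions` grid-positions
        in `ksanas` ksanas? We count positions; we do not measure lengths."""
        return abs(lateral_positions) <= C_STAR * ksanas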
So far, so good. This sounds a rather odd way of putting things, but we are still pretty close to Special Relativity as it is commonly taught. What of Einstein’s other principle? Well, firstly, I don’t much care for the mention of “laws of physics”, a concept which Einstein, along with practically every other modern scientist, inherited from Newton and which harks back to a theistic world-view whereby God, the supreme law-giver, formulated a collection of ‘laws’ that everything material must obey from the moment of Creation. My concern is with what actually happens, whether what happens is ‘lawful’ or not. Nonetheless, there do seem to be certain very general principles that apply across the board and which may, somewhat misleadingly, be classed as laws. So I shall leave this question aside for the moment.
The UET Principle that replaces Einstein’s First Principle (“that the laws of physics are the same in all inertial frames”) is rather tricky to formulate but, if the reader is patient and broad-minded enough, he or she should get a good idea of what I have in mind. As a first formulation, it goes something like this:

The occupied region between two or more successive causally related positions on the Locality is invariant. 

         This requires a little elucidation. To start with, what do I understand by ‘occupied region’? At least to a first approximation, I view the Locality (the ‘place’ where ultimate events can and do have occurrence) as a sort of three-dimensional lattice extending in all directions which flashes on and off rhythmically. It would seem that extremely few ‘grid-spots’ ever get occupied at all, and even fewer spots ever become the seats of repeating events, i.e. the location of the first event of an event-chain. The ‘Event Locality’ of UET, like the Space/Time of matter-based physics, is a very sparsely populated place.
Now, suppose that an elementary event-chain has formed but is marooned in an empty region of the Locality. In such a case, it makes no sense to speak of ‘lateral displacement’ : each event follows its predecessor and re-appears at the ‘same’ ─ i.e.  ‘equivalent’ ─ spot. Since there are no landmark events and every grid-space looks like every other, we can call such an event-chain ‘stationary’. This is the default case, the ‘inertial’ case to use the usual term.
We concentrate for the moment on just two events, one the clone of the other re-appearing at the ‘same spot’ a ksana later. These two events in effect define an ‘Event Capsule’ extending from the centre (called ‘kernel’ in UET) of the previous grid-space to the centre of the current one and span a temporal interval of one ksana. Strictly speaking, this ‘Event Capsule’ has two parts, one half belonging to the previous ksana and the other to the second ksana, but, at this stage, there is no more than a thin demarcation line separating the two extremities of the successive ksanas. Nonetheless, it would be quite wrong (from the point of view of UET) to think of this ‘Event Capsule’ and the whole underlying ‘spatial/temporal’ set-up as being ‘continuous’. There is no such thing as a ‘Space/Time Continuum’ as Minkowski understood the term.  ‘Time’ is not a dimension like ‘depth’ which can seamlessly be added on to ‘length’ or ‘width’ : there is a fundamental opposition between the spatial and temporal aspect of things that no physical theory or mathematical artifice can completely abolish. In the UET  model, the demarcations between the ‘spatial’ parts of adjacent Event Capsules do not widen, they  remain simple boundaries, but the demarcations between successive ksanas widen enormously, i.e. there are gaps in the ‘fabric’ of time. To be sure there must be ‘something’ underneath which persists and stops everything collapsing, but this underlying ‘substratum’ has no physical properties whatsoever, no ‘identity’, which is why it is often referred to, not inaccurately, both in Buddhism and sometimes even in modern physics, as ‘nothing’.
To return to the ‘Constant Region Postulate’. The elementary ‘occupied region’ may be conceived as a ‘Capsule’ having the dimensions s0 × s0 × s0 = s0³ for the spatial extent and t0 for time, i.e. a region of extent s0³ × t0. These dimensions are fixed once and for all and, in the simplest UET model, s0 is a maximum and t0 is a minimum. Restricting ourselves for simplicity to a single spatial dimension and a single temporal dimension, we thus have an ‘Event Rectangle’ of s0 by t0.
For anything of interest to happen, we need more than one event-chain and, in particular, we need at least three ultimate events, one of which is to serve as a sort of landmark for the remaining pair. It is only by referring to this hypothetical or actual third event, occurring as it does at a particular spot independently of the event-pair, that we can meaningfully talk of the ‘movement’ to left or right of the second ultimate event in the pair with relation to the first. Alternatively, one could imagine an ultimate event giving rise to two events, one occurring ‘at the same spot’ and the other so many grid-spaces to the right (or left). In either case, we have an enormously expanded ‘Event Capsule’, spatially speaking, compared to the original one. The Principle of the Constancy of the Area of the Occupied Region asserts that this ‘expanded’ Event Capsule, which we can imagine as a ‘Space/Time rectangle’ (rather than a Space/Time parallelepiped), always has the ‘same’ area.
How can this be possible? Quite simply by making the spatial and temporal ‘dimensions’ inversely proportional to each other. As I have detailed in previous posts, we have in effect a ‘Space/Time Rectangle’ of sides sv and tv (subscript v for variable) such that sv × tv = s0 × t0 = Ω = constant. Just conceivably, one could make s0 a minimum and t0 a maximum but this would result in a very strange universe indeed. In this model of UET, I take s0 as a maximum and t0 as a minimum. These dimensions are those of the archetypal ‘stationary’ or ‘inertial’ Event Capsule, one far removed from the possible influence of any other event-chains. I do not see how the ‘mixed ratio’ s0 : t0 can be determined on the basis of any fundamental physical or logical considerations, so this ratio just ‘happens to be’ what it is in the universe we (think we) live in. This ratio, along with the value of c, which is a number (positive integer), is among the most important constants in UET, and different values would give rise to very different universes. In UET s0/t0 is often envisaged in geometrical terms: tan β = s0/t0 = constant. s0 and t0 also have a minimum and a maximum value respectively, noted su and tu, the subscript u standing for ‘ultimate’. We thus have a hyperbola (see the relativity hyperbola diagram) but one constrained within limits so that there is no risk of ‘infinite’ values.
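For readers who prefer to check such claims mechanically, here is a minimal Python sketch of the constrained hyperbola. All numerical values are toy stand-ins, and the relation su = s0/c* is anticipated from a derivation further down this post; nothing here is part of UET proper:

    # Toy illustration of the constrained hyperbola sv x tv = s0 x t0 = Omega.
    # All numeric values are arbitrary stand-ins, not claims about real constants.
    s0, t0 = 1.0, 1.0          # maximal spatial / minimal temporal dimensions
    c_star = 100               # toy value for the limit c*
    OMEGA = s0 * t0            # the invariant 'area' of the Event Rectangle
    su = s0 / c_star           # minimal spatial dimension (kernel diameter)
    tu = OMEGA / su            # maximal temporal dimension, fixed by the hyperbola

    for sv in (s0, s0 / 2, s0 / 10, su):   # any permissible spatial side...
        tv = OMEGA / sv                    # ...fixes the temporal side, and vice versa
        assert su <= sv <= s0 and t0 <= tv <= tu   # bounded: no 'infinite' values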

What is ‘speed’? Speed is not one of the basic SI units. The three SI mechanical units are the metre, the standard of length, the kilogram, the standard of mass, and the second, the standard of time. (The remaining four units are the ampere, kelvin, candela and mole.) Speed is a secondary entity, being the ratio of space to time, metre to second. For a long time ─ since Galileo in fact ─ physicists have recognized the ‘relational’ nature of speed, or rather velocity (which is a ‘vector’ quantity: speed plus direction). To talk meaningfully about a body’s speed you need to refer it to some other body, preferably a body that is, or appears to be, fixed (Note 1). This makes speed a rather insubstantial sort of entity, a will-o’-the-wisp, at any rate compared to ‘weight’, ‘impact’, ‘position’, ‘pain’ and so forth. The difficulty is compounded by the fact that we almost always consider ourselves to be ‘at rest’: it is the countryside we see and experience whizzing by when we are seated in a train. It requires a tremendous effort of imagination to see things from ‘the other object’s point of view’. Even a sudden jolt, an acceleration, is registered as a temporary annoyance that is soon replaced by the same self-centred ‘state of rest’. Highly complex and contrived set-ups like roller-coasters and other fairground machines are required to give us the sensation of ‘acceleration’ or ‘irregular movement’, a sensation we find thrilling precisely because it is so unfamiliar. Basically, we think of ourselves as more or less permanently at rest, even when we know we are moving around. In UET everything actually is at rest for the space of a single ksana ─ it does not just appear to be ─ and everything that happens occurs ‘at’ or ‘within’ a ksana (the elementary temporal interval).
I propose to take things further ─ not in terms of personal experience but of physical theory. As stated, there is in UET no such thing as ‘continuous motion’, only succession ─ a succession of stills. An event takes place here; then, a ksana or more later, another event, its replica perhaps, takes place there. What matters is what occurs, and the number and order of the events that occur; everything else is secondary. This means not only that ultimate events do not move around ─ they simply have occurrence where they do have occurrence ─ but also that the distances between the events are in a sense ‘neither here nor there’, to use the remarkably apt everyday expression. In UET v signifies a certain number of grid-spaces to right or left of a fixed point, a shift that gets repeated every ksana (or, in more complex cases, with respect to more than one ksana). In the case of a truncated event-chain consisting of just two successive events, v is the same as d, the ‘lateral displacement’ of event 2 with respect to the position of event 1 on the Locality (more correctly, the ‘equivalent’ of such a position a ksana later). Now, although the actual number of ‘grid-positions’ to right or left of an identifiable spot on the Locality is fixed, and continues to be the same if we are dealing with a ‘regular’ event-chain, the distance between the centres (‘kernels’) of adjacent spots is not fixed but can take any number (sic) of permissible values ranging from 0 to c* according to the circumstances. The ‘distance’ from one spot to another can thus be reckoned in a variety of legitimate ways ─ though the choice is not ‘infinite’. The force of the Constancy of the Occupied Region Principle is that, no matter how these intra-event distances are measured or experienced, the overall ‘area’ remains the same and is equal to that of the ‘default’ case, that of a ‘stationary’ Event Capsule (or, in the more extended case, a succession of such capsules).
This is a very different conception from the one which usually prevails within Special Relativity as it is understood and taught today. Discussing the question of the ‘true’ speed of a particular object, a speed which differs according to what co-ordinate system you use, the popular writer on mathematics Martin Gardner famously wrote, “There is no truth of the matter”. Although I understand what he meant, this is not how I would put it. Rather, all permissible ‘speeds’, i.e. all integral values of v, are “the truth of the matter”. And this does not lead us into a hopeless morass of uncertainty where “everything is relative” because, in contrast to ‘normal’ Special Relativity, there is in UET always a fixed framework of ultimate events whose number within a certain region of the Locality, and whose individual ‘size’, never changes. How we evaluate the distances between them, or more precisely between the spots where they can and do occur, is an entirely secondary matter (though often one of great interest to us humans).

Space contraction and Time dilation 

In most books on Relativity, one has hardly begun before being launched into what is pretty straightforward stuff for someone at undergraduate level but what is, for the layman, a completely indigestible mass of algebra. This is a pity because the physical principle at work, though it took the genius of Einstein to detect its presence, is extremely simple and can much more conveniently be presented geometrically than, as is usual today, algebraically. As far as I am concerned, space contraction and time dilation are facts of existence that have been shown to be true in any number of experiments: we do not notice them because the effects are very small at our perceptual level. Although it is probably impossible to avoid talking about ‘points of view’ and ‘relative states of motion’ and so forth altogether, I shall try to reduce such talk to a minimum. It makes a lot more sense to forget about hypothetical ‘observers’ (who most of the time do not and could not possibly exist) and instead envisage length contraction and time dilation as actual mechanisms which ‘kick in’ automatically, much as the centrifugal governor on Watt’s steam-engine kicks in to regulate the supply of steam and thus the engine’s speed. See things like this, keeping at the back of your mind a skeletal framework of ultimate events, and you won’t have too much trouble with the concepts of space contraction and time dilation. After all, why should the distances between events have to stay the same? Insisting that they must is like only being allowed to take photographs from a standing position. These distances don’t need to stay the same provided the overall area or extent of the ‘occupied region’ remains constant, since it is this, and the causally connected events within it, that really matters.
Take v to represent a certain number of grid-spaces in one direction which repeats; for our simple truncated event-chain of just two events it is d, the ‘distance’ between two spots. d is itself conceived as a multiple of the ‘intra-event distance’, that between the ‘kernels’ of any two adjacent ‘grid-positions’ in a particular direction. For any specific case, i.e. a given value of d or v, this ‘inter-possible-event’ distance does not change, and the specific extent of the kernel, where every ultimate event has occurrence if it does have occurrence, never changes at all. There is, as it were, a certain amount of ‘pulpy’, ‘squishy’ material (cf. cytoplasm in a cell) which surrounds the ‘kernel’ and which is compressible. So much for the ‘spatial’ part of the ‘Event Capsule’. The ‘temporal’ part, however, has no pulp but is ‘stretchy’ ─ or rather the interval between ksanas is.
If the Constant Region Postulate is to work, we have somehow to arrange things so that, for a given value of v or d, the spatial and temporal distances sort themselves out in such a way that the overall area nonetheless remains the same. How to do this? The geometrical diagram (the circle diagram relating tan θ and sin φ, reproduced further down) illustrates one way, using the simple formula tan θ = v/c = sin φ. Here v is an integral number of grid-positions ─ the more complex case where v is a rational number will be considered in due course ─ and c is the lowest unattainable limit of grid-positions (in effect (c* + 1)).
Do these contractions and dilations ‘actually exist’ or are they just mathematical toys? As far as I am concerned, the ‘universe’ or whatever else you want to call what is out there, does exist and such simultaneous contractions and expansions likewise. Put it like this. The dimensions of loci (spots where ultimate events could in principle have occurrence) in a completely empty region of the Locality do not expand and contract because there is no ‘reason’ for them to do so : the default dimensions suffice. Even when we have two spots occupied by independent, i.e. completely disconnected,  ultimate events nothing happens : the ‘distances’ remain the ordinary stationary ones. HOWEVER, as soon as there are causal links between events at different spots, or even the possibility of such links, the network tightens up, as it were, and one can imagine causal tendrils stretching out in different directions like the tentacles of an octopus. These filaments or tendrils can and do cause contractions and expansions of the lattice ─ though there are definite elastic limits. More precisely, the greater the value of v, the more grid-spaces the causal influence ‘misses out’ and the more tilted the original rectangle becomes in order to preserve the same overall area.
We are for the moment only considering a single ‘Event Capsule’ but, in the case of a ‘regular event-chain’ with constant v ─ the equivalent of ‘constant straight-line motion’ in matter-based physics ─ we have  a causally connected sequence of more or less identical ‘Event Capsules’ each tilted from the default position as much as, but no more than, the last (since v is constant for this event-chain).
This simple schema will take us quite a long way. If we compare the ‘tilted’ spatial dimension to the horizontal one, calling the latter d and the former d′ we find from the diagram that d′ cos φ = d and likewise that t′ = t/cos φ . Don’t bother about the numerical values : they can be worked out  by calculator later.
These are essentially the relations that give rise to the Lorentz Transformations but, rather than state these formulae and get involved in the whole business of convertible co-ordinate systems, it is better for the moment to stay with the basic idea and its geometrical representation. The quantity noted cos φ, which depends on v and c, and only on v and c, crops up a lot in Special Relativity. Using the Pythagorean Formula for the case of a right-angled triangle with hypotenuse of unit length, we have

(1 · cos φ)² + (1 · sin φ)² = 1²   or   cos²φ + sin²φ = 1

Since sin φ is set at v/c we have

cos²φ = 1 − sin²φ = 1 − (v/c)²   so   cos φ = √(1 − (v/c)²)

More often than not, this quantity √(1 − (v²/c²)) (referred to as 1/γ in the literature) is transferred over to the other side, so we get the formula

d′ = (1/cos φ) d = d/√(1 − (v²/c²)) = γ d

Viewed as an angle, or rather the reciprocal of the cosine of an angle, the ubiquitous γ of Special Relativity is considerably less frightening.
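Since γ is just the reciprocal of a cosine, it can be computed directly from integral values of v. A minimal Python sketch, assuming a toy value for c:

    import math

    def gamma(v, c):
        # 1/cos(phi) where sin(phi) = v/c -- the usual relativistic factor
        return 1 / math.sqrt(1 - (v / c) ** 2)

    c = 100                          # toy value for the unattainable limit c
    for v in (0, 10, 50, 99):        # integral values only, 0 <= v <= c* = c - 1
        phi = math.asin(v / c)       # the tilt angle of the Event Capsule
        print(v, round(math.cos(phi), 4), round(gamma(v, c), 4))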

A Problem
It would appear that there is going to be a problem as d, or in the case of a repeating ‘rate’, v, approaches the limit c. Indeed, it was for this reason that I originally made a distinction between an attainable distance (attainable in one ksana), c*, and an unattainable one, c. Unfortunately, this does not eliminate all the difficulties but discussion of this important point will  be left to another post. For the moment we confine ourselves to ‘distances’ that range from 0 to c* and to integral values of d (or v).

Importance of the constant c* 

Now, it must be clearly understood that all sorts of ‘relations’ ─ perhaps correlations is an apter term ─ ‘exist’ between arbitrarily distant spots on the Locality (distant either spatially or temporally or both), but we are only concerned with spots that are either occupied by causally connected ultimate events, or could conceivably be so occupied. For event-chains with a 1/1 ‘reappearance rhythm’, i.e. one event per ksana, the relation tan θ = v/c = sin φ (v < c) applies (see diagram) and this means that grid-spots beyond the point labelled c (and indeed c itself) lie ‘outside’ the causal ‘Event Capsule’. Anything that I am about to deduce, or propose, about such an ‘Event Capsule’ consequently does not apply to such points and the region containing them. Causality operates only within the confines of single ‘Event Capsules’ of fixed maximum size and, by extension, connected chains of similar ‘Event Capsules’.
Within the bounds of the ‘Event Capsule’ the Principle of Constant Area applies. Any way of distinguishing or separating the spots where ultimate events can occur is acceptable, provided the setting is appropriate to the requirements of the situation. Distances are in this respect no more significant than, say, colours, because they do not affect what really matters : the number of ultimate events (or number of possible emplacements of ultimate events) between two chosen spots on the Locality, and the order of such events.
Now, suppose an ultimate event can simultaneously produce a  clone just underneath the original spot,  and  also a clone as far as possible to the right. (I doubt whether this could actually happen but it is a revealing way of making a certain point.)
What is the least shift to the right or left? Zero. In such a case we have the default case, a ‘stationary’ event-chain, or a pair belonging to such a chain. The occupied area, however, is not zero: it is the minimal s0³ t0. Setting v = 0 in the formula d′ = (1/cos φ) d makes γ = 1/√(1 − (0/c)²) = 1, so there is no difference between d′ and d. (But it is not the formula that dictates the size of the occupied region, as physicists tend to think: it is the underlying reality that validates the formula.)
For any value of d, or, in the case of repetition of the same lateral distance at each ksana, any value of v, we tilt the rectangle by the appropriate amount, or fit this value into the formula. For v = 10 grid-spaces, for example, we will have a tilted Space/Time Rectangle with one side (10 cos φ) s0 and the other side (1/(10 cos φ)) t0, where sin φ = 10/c so cos φ = √(1 − (10/c)²). This is an equally valid space/time setting because the overall area is

(10 cos φ) s0 × (1/(10 cos φ)) t0 = s0 t0

We can legitimately apply any integral value of v < c and we will get a setting which keeps the overall area constant. However, this is done at a cost: the distances between the centres of the spatial elements of the event capsules shrink while the temporal distances expand. The default distance s0 has been shrunk to s0 cos φ, a somewhat smaller intra-event distance, and the default temporal interval t0 has been stretched to t0/cos φ, a somewhat greater one. Remark, however, that sticking to integral values of d or v means that cos φ does not, as in ‘normal’ physics, run through an ‘infinite’ gamut of values ─ and even when we consider the more complex case, taking reappearance rhythms into account, v is never, strictly never, irrational.
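This invariance is easy to verify mechanically. The following sketch (toy units, toy c) checks that the tilted rectangle of sides (v cos φ) s0 and (1/(v cos φ)) t0, as in the v = 10 example above, has the same area for every non-zero integral v:

    import math

    s0, t0, c = 1.0, 1.0, 100            # toy absolute units and toy limit
    for v in range(1, c):                # every non-zero integral displacement v <= c* = c - 1
        cos_phi = math.sqrt(1 - (v / c) ** 2)        # sin(phi) = v/c
        width = (v * cos_phi) * s0                   # shrunk spatial side
        height = t0 / (v * cos_phi)                  # stretched temporal side
        assert abs(width * height - s0 * t0) < 1e-9  # occupied 'area' is constant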
What is the greatest possible lateral distance? Is there one? Yes, by Postulate 2 there is and this maximal number of grid-points is labelled c*. This is a large but finite number and is, in the case of integral values of v, equal to c – 1. In other words, a grid-space c spaces to the left or right is just out of causal range and everything beyond likewise (Note 2).

Dimensions of the Elementary Space Capsule

I repeat the two basic postulates of Ultimate Event Theory that are in some sense equivalent to Einstein’s two postulates. They are

1. The mixed Space/Time volume/area of the occupied parallelepiped/rectangle remains constant in all circumstances

 2. There is an upper limit to the lateral displacement of a causally connected event relative to its predecessor in the previous ksana

Now, suppose we have an ultimate event that simultaneously produces a clone at the very next ksana in an equivalent spot AND another clone at the furthest possible grid-point c*. Suppose even ─ taking things to a ridiculous extreme to make a point ─ that a clone event is produced at every possible emplacement in between as well. Now, by the Principle of the Constancy of the Occupied Region, the entire occupied line of events in the second ksana can either have the ‘normal’ spacing between events, which is the ‘rest’ distance between kernels, s0, or, alternatively, we may view the entire line as being squeezed into the dimensions of a single ‘rest’ capsule, a dimension s0 in each of three spatial directions (only one of which concerns us). In the latter case, the ‘intra-event’ spacing will have shrunk to zero ─ though the precise region occupied by an ultimate event remains the same. Since intra-event distancing is really of no importance, either of these two opposed treatments is ‘valid’.
What follows is rather interesting: we have the spatial dimension of a single ‘rest’ Event Capsule in terms of su, the dimension of the kernel. Since, in this extreme case, we have c* events squashed inside a lateral dimension of s0, this means that
s0 = c* su, i.e. the relation s0 : su = c* : 1. But s0 and su are, by hypothesis, universal constants and so is c*. Furthermore, since by definition sv tv = s0 t0 = Ω = constant, t0/tv = sv/s0 and, fitting in the ‘ultimate’ s value, we have t0/tu = su/(c* su) = 1/c*, i.e. t0 : tu = 1 : c*. In the case of ‘time’, the ‘ultimate’ dimension tu is a maximum since (by hypothesis) t0 is a minimum. c* is a measure of the extent of the elementary Event Capsule and this is why it is so important.
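As a quick numerical check (with an arbitrary toy value for c*, not a claim about the real constant), the two ratios fall out of the invariant Ω automatically:

    c_star = 100                    # toy value
    su, t0 = 1.0, 1.0               # kernel diameter and minimal ksana (toy units)
    s0 = c_star * su                # s0 : su = c* : 1, as derived above
    OMEGA = s0 * t0
    tu = OMEGA / su                 # temporal side when the spatial side is minimal
    assert abs(tu - c_star * t0) < 1e-12   # hence t0 : tu = 1 : c*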
In UET everything is, during the space of a single ksana, at rest, and in effect problems of motion in normal matter-based physics become problems of statics in UET ─ here I am picking up the lead given by the ancient Greek physicists, for whom statics was all and infinity non-existent. Anticipating the discussion of mass (or its equivalent) in UET, this interpretation ‘explains’ the tremendously increased resistance of a body to (relative) acceleration: something that Bucherer and others have demonstrated experimentally. This resistance is not the result of some arbitrary “You mustn’t go faster than light” law: it is the resistance of a region of fixed extent on the Locality to being crammed full to bursting with ultimate events. And it does not matter if the emplacements inside a single Event Capsule are not actually filled: these emplacements, the ‘kernels’, cannot be compressed whether occupied or not. But an event occurring at the maximum number of places to the right is going to put the ‘Occupied Region’ under extreme pressure, to say the least. In another post I will also speculate as to what happens if c* is exceeded, supposing this to be possible.      SH    9/3/14

Notes:

Note 1  Zeno of Elea noted the ‘relativity of speed’ about two and a half thousand years before Einstein. In his “Paradox of the Chariot”, the least known of his paradoxes, Zeno asks what is the ‘true’ speed of a chariot engaged in a chariot race. A particular chariot has one speed with respect to its nearest competitor, another compared to the slowest chariot, and a completely different one again relative to the spectators. Zeno concluded that “there was no true speed” ─ I would say, “no single true speed”.

Note 2  The observant reader will have noticed that when evaluating sin φ = v/c and thus, by implication, cos φ as well, I have used the ‘unattainable’ limit c while restricting v to the values 0 to c*, thus stopping 1/cos φ from becoming infinite. Unfortunately, this finicky distinction, which makes actual numerical calculations much more complicated,  does not entirely eliminate the problem as v goes to c, but this important issue will be left aside for the moment to be discussed in detail in a separate post.
If we allow only integral values of v ranging from 0 to c* = (c − 1), the final tilted Causal Rectangle has a ludicrously short ‘spatial side’ and a ridiculously long ‘temporal side’ (which means there is an enormous gap between ksanas). We have in effect

tan θ = (c − 1)/c  (i.e. the angle is nearly 45 degrees or π/4)
and γ = 1/√(1 − (c − 1)²/c²) = c/√(c² − (c − 1)²) = c/√(2c − 1)

Now, 2c − 1 is very close to 2c, so γ ≈ c/√(2c) = √(c/2)

I am undecided as to whether any particular physical importance should be given to this value ─ possibly experiment will decide the issue one day.
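For what it is worth, the approximation can be tested numerically; the values of c below are arbitrary, since the true value is unknown:

    import math

    def gamma_at_limit(c):
        v = c - 1                                   # v = c* = c - 1, the attainable maximum
        return 1 / math.sqrt(1 - (v / c) ** 2)      # exactly c / sqrt(2c - 1)

    for c in (10, 100, 10 ** 6):
        print(c, round(gamma_at_limit(c), 3), round(math.sqrt(c / 2), 3))
    # the two columns converge as c grows, confirming gamma ~ sqrt(c/2)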
In the event of v taking rational values (which requires a re-appearance rhythm other than 1/1), we get even more outrageous ‘lengths’  for sv and tv . In principle, such an enormous gap between ksanas, viewed from a vantage-point outside the speeding event-chain, should become detectable by delicate instruments and would thus, by implication, allow us to get approximate values for c and c* in terms of the ‘absolute units’ s0 and t0 . This sort of experiment, which I have no doubt will be carried out in this century, would be the equivalent in UET of the famous Millikan ‘oil-drop’ series of experiments that gave us the first good value of e, the basic unit of charge.

In Ultimate Event Theory (UET) the basic building-block of physical reality is not the atom or elementary particle (or the string, whatever that is) but the ultimate event enclosed by a four-dimensional ‘Space/Time Event-capsule’. This capsule has fixed extent s³t = s0³t0 where s0 and t0 are universal constants, s0 being the maximum ‘length’ of s, the ‘spatial’ dimension, and t0 being the minimal ‘length’ of t, the basic temporal interval or ksana. Although s³t = s0³t0 = Ω (a constant), s and t can and do vary, though they have maximum and minimum values (as does everything in UET).
All ultimate events are, by hypothesis, of the same dimensions, or better, they occupy a particular spot on the Event Locality, K0, whose dimensions do not change (Note 1). The spatial region occupied by an ultimate event is very small compared to the dimensions of the ‘Event capsule’ that contains it and, as was demonstrated in the previous post (Causality and the Upper Limit), the ratio of ‘ultimate event volume’ to ‘capsule volume’, or su³/s0³, is 1 : (c*)³, and of single dimension to single dimension 1 : c* (where c* is the space/time displacement rate of a causal impulse (Note 2)). Thus, s³ varies from a minimum value su³, the exact region occupied by an ultimate event, to a maximum value of s0³ where s0 = c* su. In practice, when the direction of a force or velocity is known, we need only bother about the ‘Space/Time Event Rectangle’ st = constant, but we should not forget that this is only a matter of convenience: the ‘event capsule’ cannot be decomposed and always exists in four dimensions (possibly more).

Movement and ‘speed’ in UET

If by ‘movement’ we mean change, then obviously there is movement on the physical level unless all our senses are in error. If, however, by ‘movement’ we are to understand ‘continuous motion of an otherwise unchanging entity’, then, in UET, there is no movement. Instead there is succession: one event-capsule is succeeded by another with the same dimensions. The idea of ‘continuous motion’ is thus thrown into the trash-can along with the notion of ‘infinity’ with which it has been fatally associated because of the conceptual underpinning of the Calculus. It is admittedly difficult to avoid reverting to traditional science-speak from time to time but I repeat that, strictly speaking, in UET there is no ‘velocity’ in the usual sense: instead there is a ‘space/time ratio’ which may remain constant, as in a ‘regular’ event-chain, or may change, as in an ‘irregular’ (accelerated) event-chain. For the moment we will restrict ourselves to considering only regular event-chains and, amongst regular event-chains, only those with a 1/1 reappearance rate, i.e. where one or more constituent ultimate events recur at each ksana.
An event-chain is a bonded sequence of events which in its simplest form is simply a single repeating ultimate event. We associate with every event-chain an ‘occupied region’ of the Locality constituted by the successive ‘event-capsules’. This region is always increasing since, at any ksana,  any ‘previous spots’ occupied by members of the chain remain occupied (cannot be emptied). This is an important feature of UET and depends on the Axiom of Irreversibility which says that once an event has occurrence on the Locality there is no way it can be removed from the Locality or altered in any way. This property of ‘irreversible occurrence’ is, if you like, the equivalent of entropy in UET since it is a quantity that can only increase ‘with time’.
So, if we have two regular event-chains, a and d, each with the same 1/1 re-appearance rhythm, and which emanate from the same spot (or from two adjacent spots), they define a ‘Causal Four-Dimensional Parallelepiped’ which increases in volume at each ksana. The two event-chains can be represented as two columns, one strictly vertical, the other slanting, since we only need to concern ourselves with the growing Space-Time Rectangle.

•   

•        

•    •    •    

•    •    •    •    

The two bold dotted lines (black and red) thus define the limits of the ‘occupied region’ of the Locality, although these ‘guard-lines’ of ultimate events standing there like sentinels are not capable of preventing other events from occurring within the region whose extreme limits they define. Possible emplacements for ultimate events not belonging to these two chains are marked by grey points. The red dotted line may be viewed as displacing itself by so many spaces to the right at each ksana (relative to the vertical column). If we consider the vertical distance from bold black dot to dot to represent t0, the ‘length’ of a single ksana (the smallest possible temporal interval), and the distance between neighbouring dots in a single row to be s0, then, if there are v spaces in a row (numbered 0, 1, 2, ….. v) we have a Space/Time Event Rectangle of v s0 × 1 t0, the ‘Space/time ratio’ being v grid-spaces per ksana.

It is important to realize what v signifies. ‘Speed’ (undirected velocity) is not a fundamental unit in the Système Internationale but a ratio of the fundamental SI units of spatial distance and time. For all that, v is normally conceived today as an independent  ‘continuous’ variable which can take any possible value, rational or irrational, between specified limits (which are usually 0 and c). In UET v is, in the first instance, simply a positive integer  which indicates “the number of simultaneously existing neighbouring spots on the Event Locality where ultimate events can have occurrence between two specified spots”. Often, the first spot where an ultimate event does or can occur is taken as the ‘Origin’ and the vth spot in one direction (usually to the right) is where another event has occurrence (or could have). The spots are thus numbered from 0 to v where v is a positive integer. Thus

0      1      2       3      4       5                v
•       •       •       •       •       • ………….•     

There are thus v intervals, the same number as the label applied to the final event ─ though, if we include the very first spot, there are (v + 1) spots in all where ultimate events could have (had) occurrence. This number, since it applies to ultimate events and not to distances or forces or anything else, is absolute.
A secondary meaning of v is: the ratio of ‘lateral spatial displacement’ to ‘vertical’ temporal displacement. In the simplest case this ratio will be v : 1, where the ‘rest’ values s0 and t0 are understood. This is the nearest equivalent to ‘speed’ as most of you have come across it in physics books (or in everyday conversation). But, at the risk of seeming pedantic, I repeat that there are (at least) two significant differences between the meaning of v in UET and that of v in traditional physics. In UET, v is (1) strictly a static space/time ratio (when it is not just a number) and (2) it cannot ever take irrational values (except in one’s imagination). If we are dealing with event-chains with a 1/1 reappearance rate, v is a positive integer, but the meaning can be intelligibly extended to include m/n where m, n are integers. Thus v = m/n spaces/ksana would mean that successive events in an event-chain are displaced laterally by m spaces every nth ksana. But, in contrast to ‘normal’ usage, there is no such thing as a displacement of m/n spaces per (single) ksana. For both the ‘elementary’ spatial interval, the ‘grid-space’, and the ksana are irreducible.
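The double meaning can be made concrete in code. In this sketch the class name and methods are my own illustrative inventions, not UET terminology:

    from fractions import Fraction

    class SpaceTimeRatio:
        # m grid-spaces every n ksanas; both m and n are irreducibly integral
        def __init__(self, m, n=1):
            self.m, self.n = m, n

        def ideal_rate(self):
            # the 'ideal' unitary rate, e.g. 4/7 sp/ks -- intelligible, never physical
            return Fraction(self.m, self.n)

        def displacement_after(self, ksanas):
            # the actual displacement is always a whole number of grid-spaces:
            # m spaces are gained only once every nth ksana
            return self.m * (ksanas // self.n)

    r = SpaceTimeRatio(4, 7)
    print(r.ideal_rate(), r.displacement_after(14))   # -> 4/7  8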
One might suppose that the ‘distance’ from the 0th  to the vth spot does not change ─ ‘v is v’ as it were. However, in UET, ‘distance’ is not an absolute but a variable quantity that depends on ‘speed’ ─ somewhat the reverse of how we see things in our everyday lives since we usually think of distances between spots as fixed but the time it takes to get from one spot to the other as variable.

The basic ‘Space-Time Rectangle’ st can be represented thus

[Diagram: Relativity cos diagram]

Rectangle  s0 × t0 = (s0 cos φ) × (t0/cos φ)
where PR cos φ = t0

sv = s0 cos φ        tv = t0/cos φ
sv/s0 = cos φ        tv/t0 = 1/cos φ
s0/t0 = tan β = constant
tv² = t0² + v²s0²
v²s0² = t0²((1/cos²φ) − 1)
s0²/t0² = tan²β = (1/v²)((1/cos²φ) − 1) = (1/v²) tan²φ

tan β = s0/t0 = (tan θ)/(v cos φ)     since sin φ = tan θ = (v/c)

v = (tan θ)/(tan β cos φ)

 

So we have s = s0 cos φ, where φ ranges from 0 up to the highest value that still gives a non-zero length, in effect that value of φ for which s0 cos φ = s0/c* = su. What is the relation of s to v? If sv is the spacing associated with the ratio v, and dependent on it, we have sv = s0 cos φ and so sv/s0 = cos φ. So cos φ is the ‘shrink factor’ which, when applied to any distance reckoned in terms of s0, converts it by changing the spacing. The ‘distance’ between two spots on the Locality is composed of two parts. Firstly, there is the number of intermediate spots where ultimate events can/could have/had occurrence, and this ‘Event-number’ does not change, ever. Secondly, there is the spacing between these spots, which has a minimum value su ─ just the diameter of the exact spot where an ultimate event can occur ─ and a maximum value s0, the diameter of the Event capsule and thus the maximum distance between one spot where an ultimate event can have occurrence and the nearest neighbouring spot. The spacing varies according to v and it is now incumbent on us to determine the ‘shrink factor’ in terms of v.
The spacing s is dependent on v, so s = g(v). It is inversely related to v since the spacing is reduced as v increases, while it is at a maximum when v = 0. One might make a first guess that the shrink factor will be of the form cos φ = 1 − f(v)/h(c), where f(v) ranges from 0 to h(c). The simplest such function is just cos φ = (1 − v/c).
As stated, v in UET can only take rational values since it denotes a specific integral number of spots on the Locality. There is a maximum number of such spots between any two ultimate events or their emplacements, namely c − 1 such spots if we label spot A as 0 and spot B as c. (If we make the second spot c + 1 we have c intermediate positions.) Thus v = c/m where m is a rational number. If we are concerned only with event-chains that have a 1/1 reappearance ratio, i.e. one event per ksana, then m is an integer. So v = c/n.

We thus have tan θ = n/c  where n  varies from 0 to c* =  (c – 1) (since in UET a distinction is made between the highest attainable space/time displacement ratio and the lowest unattainable ratio c) .
So 0 ≤ θ < π/4  ─ since tan π/4 = 1. These are the only permissible values for tan θ .

[Diagram: Relativity tangent diagram ─ the ‘v/c’ triangle with tan θ = n/c]

If we now superimpose the ‘v/c’ triangle above on the st Rectangle of the previous diagram, we obtain

[Diagram: Relativity circle diagram combining tan θ and sin φ]

 

Thus tan θ = sin φ, which gives

cos φ = √(1 − sin²φ) = √(1 − (v/c)²)

This is more complicated than our first guess, cos φ = 1 − (v/c), but it has the same desired features: it goes to cos φ = 1 as v goes to zero, and the shrinkage is greatest as v approaches c.
(The extreme value, reached at v = c* = c − 1, is cos φ = (1/c)√(2c − 1) ≈ √2/√c.)
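Tabulating the first guess against the factor the geometry actually gives (toy value of c again) shows how differently the two behave:

    import math

    c = 100
    for v in (0, 10, 50, 99):
        first_guess = 1 - v / c                    # the naive linear factor
        actual = math.sqrt(1 - (v / c) ** 2)       # what the tan/sin construction gives
        print(v, round(first_guess, 4), round(actual, 4))
    # the true factor stays close to 1 at low v and only collapses near the limit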

cos φ = √(1 − (v/c)²) is thus a ‘shrinkage factor’ which is to be applied to all lengths within event-chains that are in lateral motion with respect to a ‘stationary’ event-chain. Students of physics will, of course, recognize this factor as the famous ‘FitzGerald contraction’ of all lengths and distances along the direction of motion within an ‘inertial system’ that is moving at constant speed relative to a stationary one (Note 3).

Parable of the Two Friends and Railway Stations

It is important to understand what exactly is happening. As all books on Relativity emphasize, the situation is exactly symmetrical. An observer in system A would judge all distances within system B to be ‘contracted’, but an observer situated within system B would think exactly the same about the distances in system A. This symmetricality is a consequence of Einstein’s original assumption that  ‘the laws of physics take the same form in all inertial frames’. In effect, this means  that one inertial frame is as good as any other because if we could distinguish between two frames, for example by carrying out identical  mechanical or optical experiments, the two frames would not be equivalent with respect to  their physical behaviour. (In UET, ‘relativity’ is a consequence of the constancy of the area on the Locality occupied by the Event-capsule, whereas Minkowski deduced an equivalent principle from Einstein’s assumption of relativity.)
As an illustration of what is at stake, consider two friends undertaking train journeys from a station which spans the frontier between two countries. The train will stop at exactly the same number of stations, say 10, and both friends are assured that the stations are ‘equally spaced’ along each line. The friends start at the same time in Grand Central Station but go to platforms which take passengers to places in different countries.
As an illustration of what is at stake, consider two friends undertaking train journeys from a station which spans the frontier between two countries. Each train will stop at exactly the same number of stations, say 10, and both friends are assured that the stations are ‘equally spaced’ along each line. The friends start at the same time from Grand Central Station but go to platforms which take passengers to places in different countries. In each journey there are to be exactly ten stops (including the final destination) of the same duration, and the friends are assured that the two trains will run at the ‘same’ constant speed. The two friends agree to get off at the tenth stations along their respective lines and then compare positions. The tracks are straight and close to the border, so it is easy to compare the location of one station with another.
Friend B will thus be surprised if he finds that friend A has travelled a lot further when they both get off at the tenth station. He might conclude that the tracks were not straight, that the trains were dissimilar or that they didn’t keep to the ‘same’ speed. He might even conclude that, even though the distances between stations as marked on a map were the same for both countries, say 20 kilometres, the map-makers had got it wrong. However, the problem would be cleared up if the two friends eventually learned that, although both countries assessed distances in metres, the standard metre in the two countries was not the same. (This could not happen today, but in the not so distant past measurements of distance, often employing the same terms, did differ not only from one country to another but, at any rate within the Holy Roman Empire, from one ‘free town’ to another. A Leipzig ‘metre’ (or other basic unit of length) was thus not necessarily the same as a Basle one. It was only with the advent of the French Revolution and the Napoleonic system that weights and measures were standardized throughout most of Europe.)

This analogy is far from exact but makes the following point. On each journey there are exactly the same number of stops, in this case 10, and both friends would agree on this. There is no question of a train in one country stopping at a station which did not exist for the friend in the other country. The trouble comes from the spacing between stations, which is not the same in the two countries, though at first sight it would appear to be, because the same term is used.
The stops correspond to ultimate events: their number and the precise region they occupy on the Locality is not relative but absolute. The ‘distance’ between one event and the next is, however, not absolute but varies according to the way you ‘slice’ the Event capsules and the region they occupy, though there is a minimum distance which is that of a ‘rest chain’. As Rosser puts it, “It is often asked whether the length contraction is ‘real’? What the principle of relativity says is that the laws of physics are the same in all inertial frames, but the actual measures of particular quantities may be different in different systems” (Note 4)

Is the contraction real?  And, if so,  why is the situation symmetrical? 

   What is not covered in the train journey analogy is the symmetricality of the situation. But if the situation is symmetrical, how can there be any observed discrepancy?
This is a question frequently asked by students and quite rightly so. The normal way of introducing Special Relativity does not, to my mind, satisfactorily answer the question. First of all, rest assured that the discrepancy really does exist : it is not a mathematical fiction invented by Einstein and passed off on the public by the powers that be.
μ mesons (muons) produced when cosmic rays hit the atmosphere get much farther than they ought to ─ some even get close to sea level before decaying. Distance contraction explains this and, as far as I know, no other theory does. From the point of view of UET, the μ meson is an event-chain and, from its inception to its ‘decay’, there is a finite number of constituent ultimate events. This number is absolute and has nothing to do with inertial frames or relative velocities or anything else you like to mention. We, however, do not see these occurrences and cannot count the number of ultimate events ─ if we could, there would be no need for Special Relativity or UET. What we do register, albeit somewhat imprecisely, is the first and last members of this (finite) chain: we consider that the μ meson ‘comes into existence’ at one spot and goes out of existence at another spot on the Locality (‘Space/Time’ if you like). These events are recognizable to us even though we are not moving in sync with the μ meson (or at rest compared to it).
But as for the distance between the first and last event, that is another matter. For the μ meson (and for us if we were in sync with it) there would be a ‘rest distance’ counted in multiples of s0 (or su). But since we are not in sync with the meson, these distances are (from our point of view) contracted ─ though not from the meson’s ‘point of view’. We have thus to convert ‘his’ distances back into ours. Now, for the falling μ meson, the Earth is moving upwards at a speed close to that of light, and so the Earth distances are contracted. If the μ meson covers n units of distance in its own terms, this corresponds to rather more in our terms. The situation is somewhat like holding n strong dollars as against n debased dollars: although the number of dollars remains the same, what you can get with them is not, since the strong dollars buy more essential goods and services. Thus, when converting back to our values we must increase the number. We find, then, that the meson has fallen much farther than expected though the number of ultimate events in its ‘life’ is exactly the same. We reckon, and must reckon, in our own distances, which are debased compared to those of a rest event-chain. So the meson falls much farther than it would travel (longitudinally) in a laboratory. (If the meson were projected downwards in a laboratory there would be a comparable contraction.) This prediction of Special Relativity has been confirmed innumerable times and constitutes the main argument in favour of its validity.
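As a back-of-envelope check, using standard textbook figures for the muon (these numbers are assumptions imported from the physics literature, not values given in this post):

    import math

    c_light = 3.0e8              # m/s
    tau = 2.2e-6                 # s : muon 'rest' lifetime (textbook value)
    v = 0.98 * c_light           # a typical cosmic-ray muon speed (assumed)
    g = 1 / math.sqrt(1 - (v / c_light) ** 2)   # ~ 5

    print(round(v * tau))        # ~ 650 m  : the naive range, no dilation
    print(round(v * g * tau))    # ~ 3250 m : the range with the factor applied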
From the point of view of UET, what has been lost (or gained) in distance is gained (or lost) in ‘time’, since the area occupied by the event capsule or event capsules remains constant (by hypothesis).  The next post will deal with the time aspect.        SH  1 September 2013

 

Note 1  An ultimate event is, by definition, an event that cannot be further decomposed. To me, if something has occurrence, it must have occurrence somewhere; hence the necessity of an Event Locality, K0, whose function is, in the first instance, simply to provide a ‘place’ where ultimate events can have occurrence and, moreover, to stop them from merging. However, as time went on I found it more natural and plausible to consider an ultimate event not as an entity in its own right but rather as a sort of localized ‘distortion’ or ‘condensation’ of the Event Locality. Thus attention shifts from the ultimate event as primary entity to the Locality itself. There has been a similar shift in Relativity from concentration on isolated events and inertial systems (Special Relativity) to concentration on Space-Time itself. Einstein, though he pioneered the ‘particle/finitist’ approach, ended up believing that ‘matter’ was an illusion, simply being “that part of the [Space/Time] field where it is particularly intense”. Despite the failure of Einstein’s ‘Unified Field Theory’, this has, on the whole, been the dominant trend in cosmological thinking up to the present time.
But today, Lee Smolin and others, reject the whole idea of ‘Space/Time’ as a bona fide entity and regard both Space and Time as no more than “relations between causal nodes”. This is a perfectly reasonable point of view which in its essentials goes back to Leibnitz, but I don’t personally find it either plausible or attractive. Newton differed from Leibnitz in that he emphatically believed in ‘absolute’ Space and Time and ‘absolute’ motion ─ although he accepted that we could not determine what was absolute and what was relative with current means (and perhaps never would be able to). Although I don’t use this terminology I am also convinced that there is a ‘backdrop’ or ‘event arena’ which ‘really exists’ and which does in effect provide ‘ultimate’ space/time units. 

Note 2. Does m have to be an integer? Since all ‘speeds’ are integral grid-space/ksana ratios, it would seem at first sight that m must be integral, since c (or c*) is an exact number of grid-spaces per ksana and v = (c*/m). However, this is to neglect the matter of reappearance ratios. In a regular event-chain with a 1/1 reappearance ratio, m would have to be integral ─ and this is the simplified event-chain we are considering here. However, if a certain event-chain has a space/time ratio of 4/7, i.e. there is a lateral displacement of 4 grid-spaces every 7 ksanas, this can be converted to an ‘ideal’ unitary rate of 4/7 sp/ks.
In contemporary physics space and time are assumed to be continuous, so any sort of ‘speed’ is possible. However, in UET there is no such thing as a fractional unitary rate, e.g. 4/7th of a grid-space per ksana, since grid-spaces cannot be chopped up into anything smaller. An ‘ideal’ fractional rate per ksana is intelligible but it does not correspond to anything that actually takes place. Also, although a rate of m/n is intelligible, all rates, whether simple or ideal, must be rational numbers ─ irrational numbers are man-made conventions that do not represent anything that can actually occur in the real world.

Note 3  Rosser continues :
     “For example, in the example of the game of tennis on board a ship going out to sea, it was reasonable to find that the velocity of the tennis ball was different relative to the ship than relative to the beach. Is this change of velocity ‘real’? According to the theory of special relativity, not only the measures of the velocity of the ball relative to the ship and relative to the seashore will be different, but the measures of the dimensions of the tennis court parallel to the direction of relative motion and the measures of the times of events will also be different. Both the reference frames at rest relative to the beach and to the ship can be used to describe the course of the game and the laws of physics will be the same in both systems, but the measures of certain quantities will differ.”                          W.G.V. Rosser, Introductory Relativity