Despite having already invented the Calculus (which he called the Theory of Fluxions), Newton did not use it in his magnum opus, the Principia Mathematica, probably because he felt uneasy about its logical basis. Instead he employed cumbersome strictly geometrical arguments without even employing co-ordinates ─ which makes the Principia almost unreadable for the modern student. Feynman, one of the greatest mathematical physicists of all time, confessed that he could not follow Newton’s proof that planets must follow elliptical orbits and instead offered his own geometrical proof (see Feynman’s Lost Lecture by Goodstein and Goodstein).
However, it is not true, as is often said, that Newton had no concept of limits. The very first section of Book I is entirely given over to eleven ‘Lemmas’ about limits, which he needs in order to show, amongst other things, that planets and other heavenly bodies obey an inverse-square distance law. The key limit is the first:
“Quantities, and the ratios of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer to each other than by any given difference, become ultimately equal.

If you deny it, suppose them to be ultimately unequal, and let D be their ultimate difference. Therefore they cannot approach nearer to equality than by that given difference D; which is contrary to the supposition.”
Newton, Principia (Motte/Cajori translation, p. 29)

In particular, this Lemma leads on to the all-important Lemma VII which states that “the ultimate ratio of the arc, chord and tangent, any one to any other, is the ratio of equality”.

So, what are we to make of Lemma I? On the face of it, it sounds foolproof. Either diminishing ratios that converge to unity attain their goal or they do not ─ ‘or’ in the exclusive sense. In practice, of course, this will not do; essentially Calculus wishes to have it both ways, to make such ratios attain equality when this is convenient and have them not attain equality when this is embarrassing. At least Newton grasps the nettle: by this all-round Lemma he affirms that the limit is attained.
Or, does he? In the Scholium (Commentary) which concludes the section, Newton admits that there is a conceptual problem, at any rate when we consider speed. Why so? Because speed is not an independent entity but rather a ratio of distance to time, and, in dynamics, we desire to know a body’s speed at a particular moment of time. In such a case, is there, or is there not, such a thing as an ‘ultimate ratio’ of distance/time? Newton writes:

“Perhaps it may be objected, that there is no ultimate proportion of evanescent quantities; because the proportion, before the quantities have vanished, is not the ultimate, and when they are vanished, is none”.

Newton’s reply to this objection is interesting:

“By the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities not before they vanish, nor afterwards, but with which they vanish… There is a limit which the velocity at the end of the motion may attain, but not exceed.”

This is all very well but contradicts Lemma I, since Newton says in the above passage that this ‘ultimate ratio’ ‘may be attained’ ─ in which case it would constitute a difference D that is not supposed to exist according to Lemma I.
And, a little further on, Newton even contradicts what he has just said since he now denies that this ‘ultimate ratio’ is in fact attained:
“Those ultimate ratios with which quantities vanish are not truly the ratios of ultimate quantities, but limits towards which the ratios of quantities decreasing without limit do always converge; and to which they approach nearer than by any given difference, but never go beyond, nor in effect attain to, till the quantities are diminished ad infinitum” (p. 39 Motte/Cajori).

The contradiction remained like a worm in the apple of Calculus until the radical reworking the latter underwent at the end of the 19th century. The definition of a ‘limit’ that every mathematics student encounters today neatly sidesteps the problem ─ without resolving it. Mathematically speaking, it is immaterial whether a sequence or series actually attains the proposed limit; the only issue is whether the absolute value of the difference between all terms after a given point and the proposed limit can be made “smaller than any positive quantity” (Note 1).
The mathematics student of today is discouraged, sometimes even specifically prohibited, from asking the question that every enquiring person wants to pose: does the function or sequence actually attain the limit? In most cases of any interest in Calculus and Analysis the answer is that it does not. (The sequence 1, 1/2, 1/4, 1/8, … 1/2ⁿ, for example, never attains the obvious limiting value of zero.) The adroit way in which the limit is defined, originally due to the 19th-century mathematician Heine, means that, mathematically speaking, we get what we want, namely a clear-cut test of whether or not a function ‘tends to a limit’, while avoiding altogether situations where, for example, we might find ourselves tempted to ‘divide by zero’.
However, Newton, despite being the greatest pure mathematician this country has produced, was a physicist first and a mathematician second, which is why the modern ‘solution’ to the problem of limits, even had he thought of it, would probably not have appealed to him. I am afraid that I, as a philosophic empiricist, at any rate with regard to applied mathematics, am not at all satisfied by the sleight of hand; in cases of obvious physical importance I want to know whether a function, or a mode of behaviour generally, actually does attain the proposed limit or not. And Newton’s Lemma VII, which makes the “ultimate ratio of the arc, chord and tangent… the ratio of equality”, does not convince me any more than it convinces any contemporary mathematics student.
So, what to do? The solution is quite simple and, I contend, perfectly valid mathematically ─ even though it will arouse howls of protest and derision from the aficionados of modern Analysis. We simply excise Newton’s lemma I and replace it by a positive statement:


Quantities, and the ratios of quantities, which in any finite time converge continually to equality, do not in general become ultimately equal but differ from strict equality by a small but finite amount. 

Now, it is true that in general we do not know what this ‘small amount’ is ─ although in most applications it either is, or could conceivably be, ascertained. We now know that all energy interactions are quantised and that the inevitable inefficiency (because of friction and similar considerations) of an actual machine can be, and often is, estimated. Not only that: calculus is already used in situations where we know the value of the independent variable cannot be arbitrarily diminished. For example, dn in molecular thermodynamics cannot be made smaller than a single molecule, and dx in population studies cannot be smaller than a single living person. This does not matter too much since we are dealing with millions of entities, although it must be said that in more accurate work the tendency these days is not to bother with calculus but to slog it out numerically with computers to the degree of precision required.
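The replacement lemma can be given a concrete numerical illustration. The sketch below is my own, not from the text: the slope of a chord over a finite, non-zero interval dx differs from the classical ‘ultimate ratio’ by a small but finite amount, and for f(x) = x² at x = 1 that difference is exactly dx.

```python
# Compare the analytic derivative of f(x) = x**2 at x = 1 (exactly 2)
# with chord slopes whose interval dx is kept small but finite,
# as the replacement lemma proposes.

def finite_difference(f, x, dx):
    """Slope of the chord over a finite, non-zero interval dx."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x
analytic = 2.0  # the 'ultimate ratio' of classical calculus

for dx in (0.1, 0.01, 0.001):
    estimate = finite_difference(f, 1.0, dx)
    # (x+dx)**2 - x**2 = 2*x*dx + dx**2, so the chord slope is 2*x + dx:
    # the estimate differs from the limit by exactly dx (up to rounding).
    print(dx, estimate, estimate - analytic)
```

The chord slope never equals the limit for any finite dx, yet for most practical purposes the residual dx can simply be discarded, which is exactly the point made above.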
This drastic pruning of calculus does not make Analysis, and all that depends on it, altogether redundant, since there is often no great difference in practice between assuming that dx has an ‘ultimate’ final value and letting it go as near to zero as we wish ─ the dx terms, and a fortiori second- and third-order terms, will most likely end up being discarded anyway. Nonetheless, one can and should question whether the assumptions of Analysis, especially infinite divisibility, are realistic. I believe they are not. There is a growing movement amongst physicists (e.g. Causal Set Theory, Loop Quantum Gravity &c.) holding that even spacetime, the last refuge of the devotees of the continuous, might be ‘grainy’.             SH  25/02/20

Note 1 The technical definition for a function is:
f(x) tends to a limit l as x tends to a if, given any positive number ε (however small), there is a positive number δ (which depends on ε) such that, for all x, except possibly a itself, lying between a − δ and a + δ, f(x) lies between l − ε and l + ε. The definition of the limit of a sequence is similar.
Such a definition will probably not mean much to the non-mathematical reader, but the idea behind it is a sort of guessing game. I claim that some sequence or function tends to a limit l and my opponent challenges me to show that I can produce terms of my sequence or function that get me closer to this limit than some arbitrarily small quantity such as 10⁻⁶ = 0.000001. If I succeed, my opponent chooses an even smaller difference, and so the contest goes on. The point is that this difference, though it can be reduced to zero in some cases, need not necessarily go to zero. For example, I might claim that the diminishing sequence 1; 0.1; 0.01; 0.001; 0.0001; and so on, has zero as a limit. My opponent asks me to get within 1/1000 of my limit, i.e. to make the difference d smaller than 1/1000. I do this easily enough by presenting him with 0.0001, which is a term in the sequence but is smaller than 1/1000 (since 0.0001 = 1/10000). Moreover, since this is a strictly diminishing sequence, all terms further down the line will also have a smaller difference than the one I have to better. If my opponent ups his challenge, I can easily meet it, since if he comes up with 1/10ᴺ (for some positive integer N) I can get closer simply by adding more zeroes to the denominator. Yet, in such a case, if actually asked to produce a term in my sequence that makes the difference zero exactly, I cannot do so ─ since any term 1/10ᴺ, however large N is, is still a positive quantity, albeit a small one. But this does not matter: the limit still holds since I can get as close to it as I am required to.
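The guessing game just described can be played mechanically. A minimal sketch (the function name and the particular sequence 1/10ⁿ are my own choices for illustration):

```python
# The epsilon 'guessing game' for the sequence a_n = 1/10**n with
# proposed limit 0: given any challenge epsilon, produce an index
# past which every term is within epsilon of the limit.

def meet_challenge(epsilon, limit=0.0):
    """Smallest n with |1/10**n - limit| < epsilon.  Since the
    sequence is strictly diminishing, all later terms qualify too."""
    n = 0
    while abs(1 / 10**n - limit) >= epsilon:
        n += 1
    return n

for epsilon in (1e-3, 1e-6, 1e-9):
    n = meet_challenge(epsilon)
    term = 1 / 10**n
    assert term < epsilon    # the challenge is met...
    assert term != 0.0       # ...yet no term ever equals the limit
    print(epsilon, n, term)
```

Every challenge is met, and yet no term of the sequence ever equals the limit itself, which is precisely the distinction the definition sidesteps.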






[Brief summary: For those who are new to this website, a brief recapitulation. Ultimate Event Theory aims to be a description of the physical world where the event, as opposed to the object (or field), is the primary item. The axioms of UET are given in an earlier post but the most important is the Axiom of Finitude, which stipulates that every event is composed of a finite number of ultimate events which are not further decomposable. Ultimate events ‘have occurrence’ on an Event Locality which exists only so as to enable ultimate events to take place somewhere and to remain discrete. Spots on the Locality where events may and do have occurrence have three ‘spatial’ dimensions, each of unit size, 1 stralda, and one temporal dimension of 1 ksana. Both the stralda and the ksana are minimal and cannot be meaningfully subdivided. The physical world, or what we apprehend of it, is made up of event-chains and event-clusters which are bonded together and appear as relatively persistent objects. All so-called objects are thus, by hypothesis, discontinuous at a certain level and there are distinct gaps between the successive appearances of recurring ultimate events. These gaps, as opposed to the ‘grid-spots’, have ‘flexible’ extent, though with a minimum and a maximum.]

Every repeating event, or event-cluster, is in UET attributed a recurrence rate (r/r) given in absolute units stralda/ksana, where the stralda is the minimal spatial interval and the ksana the minimal temporal interval. r/r can in principle take the value 0 or any rational value m/n ─ but no irrational value. The r/r is quite distinct from the space/time displacement rate, the equivalent of ‘speed’, since it concerns the number of times an ultimate event repeats in successive ksanas, quite apart from how far the repeat event is displaced ‘laterally’ from its previous position.
If r/r = 0, this means that the event in question does not repeat.
But this value is to be distinguished from r/r = 0/1 which signifies that the ultimate event reappears at every ksana but does not displace itself ‘laterally’ ― it is, if you like, a ‘rest’ event-chain.
If r/r = 1/1 the ultimate event reappears at every ksana and displaces itself one stralda at every ksana, the minimal spatial displacement. (Both the stralda and the ksana, being absolute minimums, are indivisible.)
If r/r = m/n (with m, n positive whole numbers) this signifies that the ultimate event repeats m positions to the right every n ksanas, and if r/r = −m/n it repeats m positions to the left.

But right or left relative to what? It is necessary to assume a landmark event-chain whose successive ultimate events lie exactly above (or underneath) each other, as it were, when one space-time slice is replaced by the next. We generally assume that we ourselves constitute a standard inertial system relative to which all others can be compared ─ we ‘are where we are’ at all instants and feel ourselves to be at rest except when our ‘natural state’ is manifestly disrupted, i.e. when we are accelerated by an outside force. In a similar way, in UET we conceive of ourselves as constituting a rest event-chain to which all others can be related. But we cannot see ourselves, so we generally choose instead as a standard landmark event-chain some (apparent) object that remains fixed at a constant distance as far as we can tell.

Such a choice is clearly relative, but we have to choose some repeating event-chain as standard in order to get going at all — ‘normal’ physics has the same problem. The crucial difference is, of course, not between ‘vertical’ event-paths (‘stationary’ event-chains) and ‘slanting’ event-paths (the equivalent of straight-line constant motion), but rather between ‘straight’ paths (whether vertical or slanting) and ones that are not straight, i.e. curved. As we know, dynamics only really took off when Galileo, unlike Aristotle, realized that it was the distinction between accelerated and non-accelerated motion that was fundamental, not that between rest and constant straight-line motion.
So, the positive or negative (right or left) m variable in m/n assumes some convenient ‘vertical’ landmark sequence. The denominator n of the stralda/ksana ratio can never be zero ─ not so much because ‘division by zero is not allowed’ as because time only stands still for the space of a single ksana — ‘the moving finger writes and having writ, moves on’, as the Rubáiyát puts it. So an r/r where an event repeats but ‘stays where it is’ at each appearance takes the value 0/n, which we need to distinguish from 0.
m/n is a ratio but, since the numerator is in the absolute unit of distance, the stralda, m:n is not the same as (m/n):1 unless m = n. To say a particle’s speed is 4/5ths of a metre per second is meaningful, but if the r/r of an event is 4/5 stralda per ksana we cannot conclude that the event in question shifts 4/5ths of a stralda to the right at every ksana (because there is no such thing as a fifth of a stralda). All we can conclude is that the event in question repeats every fifth ksana at a position four spaces to the right relative to its original position.

We thus need to distinguish between recurrence rates which appear to be the same because of cancelling. The denominator will, unless stipulated otherwise, always refer to the next appearance of an event. 187/187 s/k, for example, is very different from 1/1 s/k, since in the first case the event only repeats every 187th ksana while in the second case it repeats at every ksana. This distinction is important, particularly when we consider collisions. If there is any likelihood of confusion the denominator, which is the ksana value, will be marked in bold, thus 187/187.
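The distinction between 187/187 and 1/1 is, in effect, the distinction between an unreduced and a reduced fraction. A minimal sketch of how such a rate might be represented (the class and method names are my own, purely illustrative):

```python
class RecurrenceRate:
    """An unreduced ratio m/n: the event reappears every n ksanas,
    displaced m stralda from its previous position.  Deliberately
    NOT reduced to lowest terms: 187/187 is not the same rate as 1/1."""

    def __init__(self, m, n):
        if n < 1:
            raise ValueError("denominator must be at least one ksana")
        self.m, self.n = m, n

    def __eq__(self, other):
        # compare components, never the quotient: no cancelling
        return (self.m, self.n) == (other.m, other.n)

    def occurrences(self, ksanas):
        """(ksana, lateral position) pairs up to the given ksana."""
        return [(t, (t // self.n) * self.m)
                for t in range(0, ksanas + 1, self.n)]

# 4/5 s/k: the event repeats every fifth ksana, four stralda along
assert RecurrenceRate(4, 5).occurrences(10) == [(0, 0), (5, 4), (10, 8)]
# cancelling would destroy the physical distinction:
assert RecurrenceRate(187, 187) != RecurrenceRate(1, 1)
```

Storing the numerator and denominator separately, rather than their quotient, is the whole point: an ordinary fraction type would silently cancel 187/187 down to 1/1.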
Also, the stralda/ksana ratio for event-chains has an upper limit. That is, it is not possible for a given ultimate event to reappear more than M stralda to the right or left of its original position at the next ksana ─ this is more or less equivalent to setting c ≈ 3 × 10⁸ metres/second as the upper limit for causal processes. There is also an absolute limit N for the denominator irrespective of the value of the numerator, i.e. the event-chain with r/r = m/n terminates after n = (N−1). Since N is such an enormous number, this constraint can usually be ignored. An event or event-chain simply ceases to repeat when it reaches the limit.
These restrictions imply that the Locality, even when completely void of events, has certain inbuilt constraints. Given any two positions A and B occupied by ultimate events at ksana k, there is an upper limit to the number of ultimate events that can be fitted into the interval AB at the next or any subsequent ksana. This means that, although the Locality is not metrical in the way ordinary spatial expanses are, it is not true in UET that “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” (Note 1). (And this in turn means that many of the mathematical assumptions of Analysis and other areas of mathematics are unrealistic.)
Why is all this important or even worth stating? Because, unlike traditional physical theories, UET not only makes a distinction between constant and accelerated ‘motion’ (or rather their equivalents) but also between event-chains which have the same displacement rate but very different ‘reappearance rates’ — some repeating event-chains ‘miss out’ more ksanas than others.
A continuous function in Calculus is modelled as an unbroken line and, if we are dealing with a ‘moving object’, this object is assumed to ‘exist’ at every instant. In UET even a solid object is always discontinuous in that there is always a minute gap between consecutive appearances even in the case of the ‘densest’ event-chains. But, over and above this discontinuity which, since it is general and so minute, can usually be neglected, there remains the possibility of far more substantial discontinuities when a regularly repeating event may ‘miss out’ a large number of intermediate ksanas while nonetheless maintaining a regular rhythm. Giving the overall ‘speed’ and direction of an event-chain is not sufficient to identify it: a third property, the re-appearance rate, is required. There is all the difference in the world between an event-chain whose members (constituent ultimate events) appear at every consecutive ksana and an event-chain which only repeats at, say, every seventh or twentieth or hundredth ksana.
An important consequence is that a ‘particle’ (dense event-chain) can ‘pass through’ a solid barrier if the latter has a ‘tight’ reappearance rate while the ‘particle’ has one that is much more ‘spaced out’. Moreover, two ‘particles’ that have the same ‘speed’ (lateral displacement rate) but very different reappearance rates will behave very differently especially if their speeds are high relative to a barrier in front of them.
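A toy calculation makes the point concrete. In the sketch below (entirely my own construction; the widths and rates are arbitrary illustrative numbers), two chains with the same displacement rate of one stralda per ksana cross a barrier eleven stralda wide: the dense chain appears inside it eleven times, while the gapped chain jumps clear over it without a single appearance inside.

```python
def crossing_positions(m, n, total_ksanas):
    """Lateral positions occupied by a chain with r/r = m/n
    (m stralda every n ksanas) over the given stretch of ksanas."""
    return [(t // n) * m for t in range(0, total_ksanas + 1, n)]

barrier = range(100, 111)                   # barrier occupies stralda 100..110

dense = crossing_positions(1, 1, 300)       # 'photon-like': appears every ksana
gapped = crossing_positions(150, 150, 300)  # same average speed, every 150th ksana

hits_dense = [p for p in dense if p in barrier]
hits_gapped = [p for p in gapped if p in barrier]

print(len(hits_dense))   # appearances inside the barrier region
print(len(hits_gapped))  # the gapped chain never appears inside it
```

The dense chain occupies every position from 0 to 300 and so cannot avoid the barrier; the gapped chain appears only at positions 0, 150 and 300, landing beyond the barrier each time.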
This feature of UET enables me to make a prediction even at this early stage. Both photons and neutrinos have speeds that are close to c, but their behaviour is remarkably different. Whilst it is very easy to block light rays, neutrinos are incredibly difficult to detect because they have no difficulty ‘passing through’ barriers as substantial as the Earth itself without leaving a trace. It has been said that a neutrino can pass through miles of solid lead without interacting with anything, and indeed at this moment thousands are believed to be passing through my body and yours. On the basis of UET principles, this can only be so if the two event-chains known as ‘photon’ and ‘neutrino’ have wildly different reappearance rates, the neutrino having the most ‘spaced out’ r/r currently known. Thus, if it should ever become possible to detect the ultimate event patterns of these event-chains, the ‘neutrino’ event-chain would be extremely ‘gapped’ while the photon would be extremely dense, i.e. apparently ‘continuous’ (Note 2). The accompanying diagram will give some idea of what I have in mind.

The existence and importance of reappearance rates is one of the two principal innovations of Ultimate Event Theory, and it may well have a bearing on the vexed question of so-called wave-particle duality. From the UET perspective, neither waves nor particles are truly fundamental entities, since both are bonded collections of ultimate events. A ‘wave’ is a ‘spaced-out’ collection of ultimate events, a ‘particle’ a dense conglomeration (although both wave and particle at sufficiently high magnification would reveal themselves to be discontinuous). Nonetheless, the perspective of UET is clearly much closer to the ‘particle’ approach to electro-magnetism (favoured by Newton) and gives rise to the following prediction. Since high-frequency, short-wave phenomena are clearly more ‘bunched up’ than low-frequency, long-wave phenomena, it should one day, perhaps soon, be possible to detect discontinuities in very long wave radio transmissions while short-wavelength phenomena would still appear to be continuous. The discontinuity would manifest itself as an irreducible ‘flicker’, like that of a light rapidly turned on and off — and may well have already been observed as a strangely persistent annoyance. Moreover, one can only suppose that there is some mechanism at work which shifts wave-particle phenomena from one mode to the other; such a mechanism simply ‘spaces out’ the constituent ultimate events of an apparent particle, or forcefully combines wave-like ultimate events into a dense bundle.
SH   16/03/20

Note 1 The statement “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” would be the equivalent in UET terms of the axiom “Between any two points there is always another point” which underlies both classical Calculus and modern number theory. Coxeter (Introduction to Geometry, p. 178) introduces “Between any two points there is always another point” as a theorem derived from the axioms of ‘Ordered Geometry’, an extremely basic form of geometry that takes as ‘primitive concepts’ only points and betweenness. The proof only works because the geometrical space in question entirely lacks the concept of distance, whereas in UET the Locality, although in general non-metrical and thus distance-less, does have the concept of a minimum separation between positions where ultimate events can have occurrence. This follows from the general principle of UET, the so-called Principle of Parmenides (who first enunciated it), slightly adapted: “If there were no limits, nothing would persist”.
As against the above axiom of continuity of ‘Ordered Geometry’, which underlies Calculus and much else, one could, if need be, introduce into UET the axiom, “It is not always possible to introduce a further ultimate event between two distinct ultimate events”.

Note 2. It is possible that this facility of passing through apparently impenetrable barriers is the explanation of ‘electron tunnelling’ which undoubtedly exists because a microscope has been manufactured that relies on the principle.


“In the last analysis it is the ultimate picture which an age forms of the nature of its world that is its most fundamental possession”
   Burtt, The Metaphysical Foundations of Modern Science

Today, since the cultural environment is so violently anti-metaphysical, it has become fashionable for physical theories to be almost entirely mathematical. Not so long ago, people we now refer to as scientists regarded themselves as ‘natural philosophers’ which is why Newton’s great work is entitled Philosophiae Naturalis Principia Mathematica. When developing a radically new ‘world-view’, the ‘reality-schema’ must come first and the mathematics second, since new symbolic systems may well be required to flesh out the new vision ─ in Newton’s case Calculus (though he makes very sparing use of it in the Principia).
Newton set out his philosophic assumptions very clearly at the beginning, in particular his belief in ‘absolute positioning’ and ‘absolute time’ ─ “All things are placed in time as to order of succession; and in space as to order of situation” (Scholium to Definition VIII). And, as it happened, the decisive break with the Newtonian world-view did not come about because of any new mathematics, nor even primarily because of new data, but simply because Einstein denied what everyone had so far taken for granted, namely that “all things are placed in time as to order of succession” ─ in Special Relativity ‘space-like separated’ pairs of events do not have an unambiguous temporal ordering. The case of QM is more nuanced since the mathematics did come first but it was the apparent violation of causal process that made the theory so revolutionary (and which incidentally outraged Einstein).
The trouble with the current emphasis on mathematics is that, from an ‘eventric’ point of view, the tail is wagging the dog. What is real is what actually happens, not what is supposed to happen.
Moreover, mathematics is very far from being as free from metaphysical and ‘intuitive’ assumptions as is generally assumed. Arithmetic and number theory go right back to Pythagoras, who seems to have believed that, at bottom, everything could be explained in terms of whole-number relations, hence the watchword “All is Number” (where number meant ‘ratio between positive integers’). And this ancient paradigm received unexpected support from the discovery that chemistry depends on whole-number ratios between the elements (Note 1).
The rival theory of continuous quantity goes back to Plato who, essentially for philosophic reasons, skewed higher mathematics decisively towards the geometrical, which is why even those parts of Euclid that deal with (whole) Number Theory (Books VII─X) present numbers as continuous line segments rather than as arrays of dots. And Newton invented his Fluxions (what is now known as the calculus) because he believed reality was ‘continuous’ ─ “I consider mathematical Quantities in this place not as consisting of very small parts but as described by a continued Motion… These Geneses really exist, really take place in the Nature of Things” (Newton, De Quadratura).
The hold of the continuous, as opposed to the discrete, over mathematicians and physicists alike has been extraordinarily strong and delayed the eventual triumph of the atomic hypothesis. Planck, the man who introduced ‘quanta’ into physics, wrote: “Despite the great success that the atomic theory has so far enjoyed, ultimately it will have to be abandoned in favour of the assumption of continuous matter”.
        Even contemporary branches of mathematics are far from being so ‘abstract’ as their authors claim, witness the excessive importance of the misleading image of the ‘Number Line’ and the general prejudice in favour of the continuous. Only logic is ‘reality-schema free’ and even here, there are systems of ‘deviant logic’ that attempt to make sense of the quantum world. The wholesale mathematisation of physics has itself been given philosophic support by authors such as Tegmark who claim that “at bottom reality is mathematical, not physical”.
All this to say that I make no apology for presenting a broad-brushed reality-schema or ‘world-view’ before attempting to develop a symbolic model and make predictions. It seems  we need some sort of general ‘metaphysical’ schema if only as a form of intellectual housekeeping, and it is much better to lay one’s cards on the table from the very beginning (as Newton does).
So, what is the schema of Eventrics and Ultimate Event Theory? The fundamental notion is that of the ultimate event (an event that cannot be further decomposed). I believe there are such things as events and that they are (at least for the purposes of this theory) more fundamental than ‘things’. I also claim that events must ‘have occurrence’ somewhere ─ hence the need for an Event Locality which either precedes all events or is brought into existence as and when events ‘have occurrence’. Secondly, since most of the events that I and other humans are familiar with seem to be caused by other, usually preceding, events, I feel that this datum needs to be introduced into the theory at the very start. There is thus, by hypothesis, a Causal Force operating on and between (most) events. This force I term Dominance in order to emphasize its usually one-sided operation, and perhaps also to be able to extend the sense a little (Note 2).
I have thus already found it necessary in a theory of events to introduce two entities that are not events, namely the Locality and Dominance. Nonetheless, they are very closely tied up with the production of events, since without the first nothing at all could happen (as I see it), and, without the second, all events would be disconnected from each other and reality would be a permanent vast blooming confusion which, reputedly, it is for the new-born infant.
Are all events caused by other events? This is the deterministic view which was for a long time favoured by the scientific community. The French mathematician Laplace went so far as to claim that if the positions and momenta of all bodies at the present moment were known, the entire future evolution of the universe could be deduced using only Newton’s laws. But, as we know, Quantum Mechanics and the Uncertainty Principle have put paid to such naïve assumptions; the notion of a strictly random event has now become entirely respectable. It can be loosely defined as “an event without causal predecessors” or, in the jargon of UET, “an event that has no passive relation of dominance to any other event that has occurrence on the Locality”. Because of QM and other considerations, I thus found it essential from the very outset to leave some room for ‘undominated’ or ‘random’ events in the general schema. (Not only that, I shall argue that, at one time, random events greatly outnumbered ordinary caused events.)
This naturally leads on to the question of origins and whether we need any. Most ‘origin-schemas’ require the prior existence either of ‘beings of another order’ (Brahman, God, Allah, Norse gods &c.) or of states that are barely conceivable to us mere mortals (Nirvana, the original Tao, the Quantum Vacuum &c.). All such beings/states/entities are other, fundamentally different from the world we (think we) know and the beings within it.
A few ‘origin-schemas’ envisage the universe as remaining basically the same at all times, or at most evolving from something not fundamentally different from the world we now (think we) inhabit. The Stoic cosmology of Eternal Recurrence, Fred Hoyle’s Steady State and perhaps the Hartle-Hawking ‘No Boundary’ proposal fall into this class. For the partisans of these schemas, the present universe is ‘self-explanatory’ and self-sufficient, requiring nothing outside itself for its existence or explication (Note 3).
For a long time modern science did indeed adhere to the ‘self-explanatory’ point of view, but current physical orthodoxy is a strange mixture of ‘other’ and ‘no-other’ origin-schemas. After dismissing for decades the question “What was there before the Big Bang?” as meaningless, most current cosmological theories involve pre-Big-Bang uni- or multi-verses very different from our own but still ‘obeying the laws of physics’ which, though distilled uniquely from observations of this world, have somehow become timeless and transcendent, in effect replacing the role of God Himself.
Partly for rational and partly for non-rational, i.e. temperamental, reasons I subscribe firmly to the first class of ‘origin theories’. I do not believe the physical universe is ‘self-explanatory’, notwithstanding the amazing success of the natural sciences, and it is significant that present cosmological theorists have themselves found it necessary to push back into uncharted and inaccessible territory in their search for ultimate origins. The quasi-universality of religious belief throughout history, which, pared down to its essentials, means belief in a Beyond (Note 4), is today explained away as an ingrained habit of wishful thinking, useful perhaps when times are bad but which humanity will eventually outgrow. However, I don’t find this explanation entirely convincing. There is perhaps more to it than that; this feeling that there is a reality beyond the physical sounds more like a faint but strangely persistent memory that the world of matter and its enforcers have never been able to completely obliterate. (This was precisely the view of the Gnostics.)
Be that as it may, I do assume an ultimate origin for events, a source which is definitely not itself composed of events and is largely independent even of the Locality. This source ejects events randomly from itself, as it were, or ─ to change the metaphor ─ events keep ‘leaking out’ of it. The source is familiar to anyone conversant with mysticism: it is the Brahman of Hinduism, the original Tao of Lao Tse, Ain Soph, the Boundless, of the Kabbalah, and what Bohm calls the ‘Implicate Order’. It is unfashionable today to think in terms of ‘potentiality’ and contrast it with ‘actuality’, but it could be said that this source is “nothing in actuality but everything in potentiality”. Ain Soph is, as Bohm emphasizes, immeasurable in the strong sense ─ measurement is completely irrelevant to it. Since science and mathematics deal only with the measurable and the formal, Ain Soph does not fall within their remit ─ but equally well one can maintain, as all mystics do, that such a thing/place/entity is beyond our comprehension (though perhaps not entirely beyond our apprehension).
What, however, above all one must not do is to mix the measurable and the immeasurable ─ which is exactly what Cantor did, to the great detriment of modern mathematics. Inasmuch as the Unknowable can be known, science and mathematics are definitely not suitable means: ritual, ecstatic dance or directed meditation are traditionally regarded as more suitable ─ and part of their purpose is precisely to quieten or sideline the rational faculty which is, in this context, a hindrance rather than a help.
     Ain Soph, or whatever one wants to call the source, should not have any role to play in a physical or mathematical theory except, at most, to function as the ultimate origin of uncaused events. We can, in practice, forget about it. This means, however, that ‘infinity’, ‘eternity’ and suchlike (pseudo)concepts should have no place in science or in mathematics since they belong entirely to the immeasurable (Note 5).
‘Reality’ thus splits up into two ‘regions’, which I name the Unmanifest and the Manifest. The former is the ultimate source of all events but does not itself consist of events, whilst the latter is ‘manifest’ (to us or other conscious beings) precisely because it is composed of events that we can observe.
These two regions  themselves divide into two giving the schema:
        (1) The Unmanifest Non-Occurrent
        (2) The Unmanifest Pre-Occurrent
        (3) The Manifest Occurrent
        (4) The Manifest Post-Occurrent.

Why do we need (2) and (4)?
We need (2) largely because of Quantum Mechanics ─ more precisely because of the ‘orthodox’ Copenhagen interpretation of QM. This interpretation in effect splits the physical world into two layers, one of which is described by the wave function in its ‘independent state’ while the other arises when a human intervention causes the wave function to ‘collapse’ — an interesting metaphor. In the former (pure) state, whatever ‘goes on’ (and something apparently does) lacks entirely the specificity and discreteness of an ultimate event. We are, for example, invited to believe that a ‘photon’ (or rather a photo-wavicle) has no specific location prior to an intervention on our part ─ rather misleadingly termed a ‘measurement’. There is thus a layer of reality, and ‘physical reality’ at that, which does not consist of events but which seemingly does in some sense exist, and is all around and even in us. There is thus the need for an intermediary level between the remoteness of the true Unmanifest and the immediacy of the world of actual events we are familiar with (Note 6).
What of (4), the Manifest Post-Occurrent? It would seem that there are ‘entities’ of some sort which are not observable, not composed of bona fide observable events, but which are nonetheless capable of giving rise to observable phenomena. I am thinking of such things as archetypes, myths, belief systems, generalized abstractions such as Nation, State, Humanity, perhaps even the self, Dawkins’s memes and so on. Logic and rational discourse tend to dismiss such things as pseudo-entities: there is the well-known anecdote of the tourist being shown around the Oxford colleges and asking where the university is. But the ‘university’ does have a reality of a sort, something in between the clear-cut reality of a blow to the head and the unreality of a meaningless squiggle.
Moreover, it is in (4) that I place such things as mathematical and physical theories. As far as I am concerned it is not the Oxford tourist but people like Tegmark (and Plato) who are guilty of a ‘category mistake’: in my terms, they situate mathematics in (1), the Unmanifest Non-Occurrent, rather than in (4), the Manifest Post-Occurrent. (1) is a wholly transcendent level of reality, while (4) is a manufactured realm which, though giving an appearance of solidity, would not exist, and would never have existed, if there had never been any human mathematicians (or other conscious beings). The Platonic view of mathematics, though tempting, is, I believe, a delusion: mathematics was made by man(kind) and was, originally at any rate, an extrapolation from human sense-impressions, though admittedly it is a very successful one.

SH 20/12/19


Note 1. See the chapter on the ‘New Pythagoreanism’ in Shanks’s excellent book, Number Theory, or, for a more accessible treatment, in Valens’s The Number of Things.
Note 2 Dominance is roughly the equivalent of the Buddhist/Hindu concept of karma ─ but applied to all categories of events, not just morally significant ones.
Note 3. Newton granted a small role to God in the evolution of the universe, for example stopping heavenly bodies converging together, but Leibnitz argued that it was blasphemous to suppose any such intervention was needed since this implied that the Creator had not been a good enough designer in the first place. “No need for miracles” became a principal tenet of the Enlightenment though most thinkers found it necessary to introduce a Prime Mover to ‘get the ball rolling’, so to speak. Even this shadowy deus ex cathedra faded away into nothingness by the time of Laplace who famously informed Napoleon, “I had no need of that hypothesis” ─ the hypothesis in question being the existence of God.

Note 4 The Koran, for example, addresses itself specifically to “those who believe in the unseen” (Koran sutra 2 ‘The Heifer’ v. 3).

Note 5. This is precisely the point made by Lao Tse in the very first line of the Tao Te Ching, which may be translated, “The Tao that can be named is not the original Tao”. Lao Tse was writing at a time when language, not mathematics or physics, was the most advanced intellectual achievement, and, were he alive today, would doubtless have written “The Tao that can be mathematized is not the original Tao”.

Note 6. QM is, incidentally, not the only system that posits an intermediary realm between the Limitless and the Limited. Hinayana Buddhism has a curious theory about ‘events’ passing through various stages of progressive ‘realization’ before becoming actual ─ most Indian authors for some reason cite 17.




Events rather than things

The West has, from the Greeks onwards, been ‘object-based’ as opposed to ‘event-based’ at any rate with regard to natural philosophy. The only prominent Western thinker to have seriously supposed that matter, and thus by implication the entire physical universe, was inherently unstable and might conceivably disappear into thin air, i.e. revert to the nothingness from which it came, was Descartes. But, being a believer ─ a deist at any rate ─ like practically everyone else of his time, Descartes was able to bring God into the picture  to save not just appearances but (physical) reality itself. Descartes would have had a much larger audience in India than in Europe and Stcherbatsky states that a remark of Bergson’s summarizing Descartes’ theory, once translated into Sanskrit, “sounds just like a quotation from a Sanscrit text” (Note 1).
But the resounding success of the Newtonian paradigm, firmly based on the concepts of matter, force and motion, silenced such mystical-sounding speculations. It is only in the 20th century that we find natural philosophers, or ‘scientists’ as they now consider themselves, talking about ‘events’ as such at all. Einstein’s theory of Special Relativity (SR) concerns ‘events’ ─ “occurrences that every observer would agree took place, such as an explosion”, as the author of a textbook on Relativity defines them ─ rather than things, and a good deal of SR is taken up with the (ultimately insoluble) problem of ordering events so as to plot the operational range of causality. Bertrand Russell remarks: “From all this [‘all this’ being a discussion of Relativity] it seems to follow that events, not particles, must be the ‘stuff’ of physics. What has been thought of as a particle will have to be thought of as a series of events. (…) Thus ‘matter’ is not part of the ultimate material of the world, but merely a convenient way of collecting events into bundles.” (Note 2)
But Russell does not follow up this particular line of thought, mainly because of his misguided belief that mathematics was essentially an extension of logic. A sceptic and a rebel with respect to so many leading dogmas of his time, Russell was not the man to question the dogma of continuity which is so deeply embedded in Western mathematics.

As for Einstein, his basic philosophic position is not easy to determine but seems to have been, at least during his middle period, that ‘fields’ were the primary reality. Ultimately everything was part of a single ‘Unified Field’ which was continuous, and ‘matter’ was merely “that portion of the field which is particularly intense”. This is, of course, incompatible with the basic assumption of Ultimate Event Theory, namely that reality is made up of discrete bundles of ultimate events. However, these ‘observables’ can be viewed as disturbances of an underlying, invisible, all-pervading substratum which is continuous, somewhat in the manner that ripples or foam are discrete disturbances of a fluid that is continuous (or at any rate appears so to us). So there is, conceivably, an underlying ‘continuous’ entity after all (as David Bohm for one believed), but such an entity, source and origin of All That Is, is not directly observable and thus does not, properly speaking, fall within the remit of science.

Space-time ultimates
Whitrow, in one of his numerous books on time, advances the idea of a minimal unit of time, the chronon, and suggests a plausible value based on the diameter of an elementary particle divided by c, the speed of light. This was the first time I came across the idea in a Western writer. Whitrow also has some useful comments to make on the illogicality of calculus, which always treats time and motion as continuous variables, but, like Russell, he does not pursue this line of thought.
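Whitrow’s suggested value is easy to reproduce in rough orders of magnitude. The sketch below uses an assumed, proton-scale particle diameter, which may not be Whitrow’s exact figure:

```python
# Rough order-of-magnitude estimate of Whitrow's 'chronon':
# the diameter of an elementary particle divided by c, the speed of light.
# The proton-scale diameter below is an assumed illustrative value.

PARTICLE_DIAMETER_M = 1.0e-15   # ~ proton diameter in metres (assumption)
SPEED_OF_LIGHT_M_S = 3.0e8      # speed of light in metres per second

chronon_s = PARTICLE_DIAMETER_M / SPEED_OF_LIGHT_M_S
print(f"chronon = {chronon_s:.1e} seconds")   # prints "chronon = 3.3e-24 seconds"
```

Whatever diameter one chooses for an ‘elementary’ particle, the resulting chronon comes out at around 10^-24 to 10^-23 seconds ─ far larger, it may be noted, than the Planck time discussed below.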
More recently, in his very remarkable book, A New Kind of Science, Stephen Wolfram writes:
“The only thing that ultimately makes sense is to measure space and time taking each connection in the causal network to correspond to an identical elementary distance in space and elementary distance in time. One may guess that this elementary distance is around 10^(-35) meters, and that the elementary time interval is around 10^(-43) seconds.” (p. 520)
        He draws the conclusion :
“Whatever these values are, a crucial point is that their ratio must be a fixed speed, and we can identify this with the speed of light. So this means that in a sense every connection in a causal network can be viewed as representing the propagation of an effect at the speed of light.”
        This certainly is a crucial point but I would prefer to see this fixed space-to-time ratio as simply defining the operation of causality, i.e. it is a speed barrier which no effect propagated from one ultimate event to another can exceed, or even attain (Note 2).
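Wolfram’s ‘crucial point’ can be checked with one division: his guessed elementary distance over his guessed elementary time gives a fixed speed of the same order of magnitude as c. A quick sketch using his round figures:

```python
# Wolfram's guessed elementary units (A New Kind of Science, p. 520)
elementary_length = 1e-35   # metres
elementary_time = 1e-43     # seconds
speed_of_light = 3.0e8      # metres per second

# The fixed space-to-time ratio Wolfram identifies with the speed of light
ratio = elementary_length / elementary_time
print(f"ratio = {ratio:.0e} m/s")   # prints "ratio = 1e+08 m/s"

# Same order of magnitude as c (the figures are only order-of-magnitude guesses)
assert 0.01 < ratio / speed_of_light < 100
```

The two guesses are, of course, the Planck length and Planck time, so the agreement with c is built in; the point is that *any* such pair of elementary units fixes a maximum ratio of spatial to temporal separation.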
More recently still, Lee Smolin writes:
“If space and time consist of events, and the events are discrete entities that can be counted, then space and time themselves are not continuous. If this is true, one cannot divide time indefinitely. Eventually we shall come to the elementary events, ones which cannot be further divided and are thus the simplest possible things that can happen. Just as matter is composed of atoms, which can be counted, the history of the universe is constructed from a huge number of elementary events.” Lee Smolin, Three Roads to Quantum Gravity, pp. 41-2, Phoenix paperback edition.
      However, Lee Smolin  writes in another place:
“A causal universe is not a series of stills following on, one after the other. [Why not?] There is time, but there is not really any notion of a moment of time. There are only process (sic.) that follow one another by causal necessity.”  (Ib. p. 55)


Lee Smolin thus, seemingly, pins his faith on ‘processes’ rather than ‘ultimate events’, whereas, for me, a process is simply a tightly connected chain of events: in UET, it is the constituent ultimate events that are fundamental, not the ensemble.
Also, Lee Smolin, reverting to a conception of Leibnitz, dispenses with the independent existence of what I call the Locality: “There is no meaning to space that is independent of the relationships among real things in the world. Space is not a stage which might be empty or full [Why not?], onto which things come and go. Space is nothing apart from the things that exist; it is only an aspect of the relationships that hold between things.” (Ib. p. 18)
I don’t see this. Rather, if anything at all happens (and seemingly ‘something’ does), then it must happen somewhere and this ‘somewhere‘ must seemingly already in some sense exist, even pre-exist, otherwise nothing could happen because there would be nowhere where it could happen. Smolin even goes so far as to attack the very idea of Space-Time possessing a ‘structure’ and declares, quite incorrectly as far as I can make out, that this never was Einstein’s conception. From my point of view, reducing Space-Time to ‘relations’ is throwing out the baby and keeping the bathwater. What Smolin views as the fictitious entity, ’empty space’, I see as the underlying reality while ‘relationships among real things in the world‘ are not even a secondary reality : in my book, they are still farther down the actuality scale, coming well after the ‘real things’ Smolin refers to (i.e. ultimate events).

Causal Set Theory
Causal Set Theory, a contemporary version (or extension?) of General Relativity that is, in this country, chiefly promoted by Professor Fay Dowker, is  the most appealing of contemporary cosmological theories because it maintains that ‘Space-Time’ is fundamentally discrete ─ “This reasoning [concerning the physics of Black Holes] leads us to the conclusion that every region of spacetime (and not only the horizon of a black hole) should be fundamentally discrete”. The quotation comes from Causal Set Theory as a Discrete Model for Classical Spacetime by F. Soss Rodriguez of Imperial College, London. This very professional article is available free on the Internet, or was when I downloaded it. It is not, however, for the general reader since it requires extensive knowledge of topology, logical theory and the mathematics of GR. Much more approachable is Introduction to causal sets: an alternative view of spacetime structure by David D. Reid, also available on the Internet.
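For readers who want the bare idea without the topology: a causal set is simply a locally finite, partially ordered set of events, where x ≺ y means ‘x can causally influence y’ and the ‘interval’ between any two events contains only finitely many others ─ discreteness in exactly this sense. The toy sketch below illustrates the structure; the events and links are invented for illustration and come from neither of the papers cited:

```python
# Toy causal set: events with a partial order x -> y ("x causally precedes y").
# The direct links below form a Hasse diagram (a 'diamond' of four events).
links = {
    "a": ["b", "c"],   # a immediately precedes b and c
    "b": ["d"],
    "c": ["d"],
    "d": [],           # d precedes nothing
}

def precedes(x, y):
    """True if x causally precedes y (transitive closure of the links)."""
    stack = list(links[x])
    seen = set()
    while stack:
        e = stack.pop()
        if e == y:
            return True
        if e not in seen:
            seen.add(e)
            stack.extend(links[e])
    return False

def order_interval(x, y):
    """All events strictly between x and y in the causal order.
    In a causal set this is always finite ('local finiteness')."""
    return {e for e in links if precedes(x, e) and precedes(e, y)}

print(precedes("a", "d"))                # True: a -> b -> d
print(sorted(order_interval("a", "d")))  # ['b', 'c']
```

Note that b and c are causally unrelated in either direction ─ the analogue of the ‘space-like’ separated events of SR discussed later in this piece.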

A Contemporary Theory of Events?
Although Causal Set Theory is committed to discreteness, it is not essentially a theory of events and their interactions. On the other hand, A Formal Ontological Theory Based on Timeless Events by Gustavo E. Romero from the Instituto Argentino de Radioastronomia of Buenos Aires really is a genuine event-based physical theory, the first that I have come across from a contemporary thinker.
Although the author says at the beginning “I assume as background knowledge the predicate calculus, set theory, semantics, and real analysis”, the text is just about approachable by the general reader, at least in parts. The author covers much of the ground that I have laboriously been exploring since I first conceived of Ultimate Event Theory after reading Stcherbatsky’s great book, Buddhist Logic. Romero specifically mentions Buddhist thinkers as the leading promoters of the ‘event-based’ paradigm, admirably summarizing their position as
“The whole world [for them] is an inter-dependent storm of events that, here and there, cluster giving the illusion of stability and delivering the illusion of being”.
Although the confident use of symbolic logic gives this paper a style and concision that I can at present only envy, there is a danger that the crucial philosophic issues ─ and by implication, physical issues as well ─ are not sufficiently highlighted. My main disagreement, as far as I can see, is as follows. The author quite rightly distinguishes between the Universe, U, which is “the composition of all things”, and the World, W, which is “the composition of all actual events and processes”. Also, he writes, quite properly, “Events do not change, they simply are”.
        However, he goes on to declare, “The totality of events is changeless, otherwise there would be an event not included in the totality, which is absurd”. But this is not in the least absurd! Unless, of course, one believes, as I suspect Romero does, that, as I put it, “Everything that can have occurrence already has occurrence”. This is indeed the view of Barbour, the author of The End of Time, and many others and is implicit in the Block Universe version of General Relativity (which is the current orthodox theory of Space/Time inasmuch as there is one). That Romero adheres to this view is shown by his defining the World as “the composition of all actual events and processes” ─ the weasel word being ‘actual’. Hopefully, not all possible events are also actual; for if they were/are there is “nothing new under the sun” and as far as I am concerned there would be no point in living.

Einstein and Time

Einstein, towards the end of his life, did indeed come to believe that ‘past, present and future’ were “a stubbornly persistent illusion”, as he put it, and he was serious enough about this to mention it in a letter of soi-disant sympathy to the widow of one of his oldest friends, Besso, on the occasion of the latter’s death. Nonetheless, there is evidence that Einstein was much troubled by the implications.
Einstein said that the problem of the Now worried him seriously. He explained that the experience of the Now means something essentially different from the past and future, but that this important difference does not and cannot occur within physics. That this experience cannot be grasped by science seemed to him a matter of painful but inevitable resignation” (Note 3)
It is ironic that the Western thinker who first placed events rather than things under the spotlight, namely Einstein, was also the man who dealt a devastating, possibly lethal, body blow to the renascent event-paradigm. For Einstein initiated a re-examination of the concept of simultaneity and his ponderings ended up by establishing that the term has little or no meaning on a universal scale. That there are events that are not unambiguously ‘ordered in time’ ─ the so-called ‘space-like’ events of SR ─ led on eventually (sic) to the idea that “everything is simultaneous”, for that is what the Block Universe theory implies. For there is ‘no before and after’, only a sickly ‘eternal present’. Such a conception is, just possibly, ‘correct’ physically speaking but is utterly unacceptable psychologically: it would make nonsense of all our social institutions (especially laws) and inherited ways of thinking. It is far worse than the ancients’ blind belief in fate, for the latter only implied that certain events were predestined and unalterable, not that all of them were.
S.H.  4/12/19

Note 1. More specifically, Stcherbatsky quotes Bergson (Creative Evolution p. 23-24) as writing, “the world of the mathematician deals with a world that dies and is reborn at every instant, the world which Descartes was thinking of when he spoke of continuous creation”. Stcherbatsky comments, “This idea is quite Buddhistic and…put into Sanscrit… sounds like a quotation from an Indian text” (Buddhist Logic, footnote p. 109).
Quite why Bergson should have thought that the mathematician’s world was instantaneous is unclear; certainly the world of Euclidean geometry is not in the least ephemeral ─ on the contrary, it views shapes sub specie aeternitatis, which is why Plato endorsed it so emphatically. Bergson was perhaps thinking of differential equations, which model physical changes over increasingly smaller intervals of time, but, even here, continuity rather than discontinuity is the name of the game.

Note 2. It is traditional, but by no means obligatory, to identify the actual speed of light with this ‘maximum transmission speed’ for all physical or informational processes. Quite possibly light, like other speedy particles such as the neutrino, approaches but does not actually attain this speed, which allows us to attribute to it a small mass. Today, the consensus seems to be that the neutrino does possess a small mass. To my mind, nothing material can have strictly zero mass: this is a contradiction in terms. A strictly massless particle is certainly impossible in Newtonian physics since it would have absolutely no capacity to resist any attempt to change its state of rest or constant rectilinear motion ─ it would be the ultimate puff-ball.

Note 3. From Carnap, Intellectual Autobiography (quoted in Smolin). “Moreover,” Smolin adds, “Einstein was not satisfied by Carnap’s reply and repeated that ‘such scientific descriptions cannot possibly satisfy our human needs; that there is something about the Now which is just outside the realm of science’.” Smolin, Time Reborn, pp. 91-2.




Bertrand Russell bewails the passing of the scientific spirit with the Greeks and notes that from Plotinus (A.D. 204-70) onwards “men were encouraged to look within rather than to look without”. But there is much to be gained from looking within: the only thing is that the insights to be gained have not yet been turned into science and technology. Maybe their time has come or is coming.
India is a strange civilization since its leading thinkers seem not only to have considered what I call the Unmanifest as more important than the everyday physical world (the Manifest) but to have actually been more at home there. Nonetheless, lost within the dense thickets of abstruse Hindu and Buddhist speculation, there are ideas which may yet find their application, in particular the concept of dharma.
We think of Buddhism today as a philosophical religion that recommends non-violence and compassion but, admirable though such aims may be, they do not appear to have been at all the Buddha’s main concern, to judge by the development of the religion he founded during the six or seven centuries after his supposed life.

“The formula of the Buddhist Credo — which professedly contains the shortest statement of the essence and spirit of Buddhism — declares that Buddha discovered the elements of existence (dharmas), their causal connection, and a method to suppress their efficiency for ever. Vasubandhu makes a similar statement about the essence of the doctrine : it is a method of converting the elements of existence into a condition of rest, out of which they never will arise again.” Stcherbatsky, The Central Conception of Buddhism

The (Hinayana) Buddhist equivalent of Democritus’ terse statement “Nothing exists except atoms and void” would thus be something like
“Nothing exists except Nirvana, Karma and Dharma”

       Nirvana is the state of absolute quiescence which is the end and origin of everything.
     Karma (literally ‘activity’, ‘action’) almost always has a strong moral sense in Buddhism — “[It is] that kind of activity which has an ethical charge and which must give rise to a ‘retributionary’ reverberation at a later time” (Anacker, Works of Vasubandhu). To be karmic an act must first of all be deliberate and, secondly, must be the result of an intent to harm another sentient being, or the result of an intent to relieve suffering. Although the Buddha categorically affirmed freedom of will, Buddhist psychology, known as Abhidharma, naturally accepted that most of our daily actions such as eating, sleeping and so on are ‘quasi-automatic’ and do not bring about either reward or punishment in this or a future life. But this concentration on moral acts and their consequences merely underlines the whole aim and approach of Buddhism as a religion/philosophy, which is to bring to an end the suffering that is an inevitable part of human (and animal) existence. It would thus seem perfectly legitimate to extend the sense of karma to causal processes in general — “the law of karma … is only a special case of causality” (Stcherbatsky, BL). Had the Buddhist thinkers wished to develop a physical as opposed to a spiritual/psychological belief system, they would most likely have made karma (in this extended sense) a prominent feature of such a system. But not only did they abstain from doing so but they would have regarded excessive interest in the physical world as altogether undesirable since it did not further enlightenment but on the contrary tended to obstruct it.
     For (Hinayana) Buddhist thinkers of the time dharma(s) are the ephemeral but constantly reappearing ‘elements’ which make up absolutely everything we think of as real, material or immaterial. All alleged ‘entities’ such as Matter, Soul, the universe, individual objects, persons  &c. &c. are not true entities but merely bundles (skandhas) or sequences (santanas) of dharmas. Hence the first line of my poem (see Note)

                      “Just elements exist, there is no world”

     Although the subsequent Mahayana (‘Greater Vehicle’) Buddhist thinkers denied the ‘absolute reality’ of the ‘dharma(s)’, the Hinayana thinkers of this era (Vasubandhu, Dignaga, Dharmottara &c.) emphatically affirmed their reality — but with the proviso that our ‘normal’ perceptions are hopelessly distorted by irrelevant intellectual additions that are delusory. The dharma(s) have sometimes been compared to the noumena or ‘Things-in-themselves’ of Kant, but they are in fact what Kant would have called phenomena ─ phenomena purified by a (usually) long and painful process of demystification and deintoxication. The Hinayana philosophical approach stands alone in claiming that knowledge of what is ‘really real’ does not at all entail fleeing from the physical world into a transcendent Neverneverland but on the contrary recovering the pristine world of ‘direct sensation’ — “in pure reality there is not the slightest bit of imaginative construction” (Stcherbatsky, BL).

   This is all very well, but what exactly are the dharma(s), and to what extent can they be made to form the basis of a physical theory? (Although the plural of dharma is made by adding an ‘s’ I cannot quite accustom myself to doing this.) Being irreducibles, there is nothing more elementary in terms of which the dharma(s) can be defined. However, what can be said, summarizing the conclusion of Stcherbatsky’s excellent book and other sources, is that dharma(s) are:

1. entirely separate one from another;
2. without duration;
3. prone to congregate in bundles;
4. subject to a causal force which makes them ‘co-operate’ with one another;
5. in a perpetual state of commotion.

    I draw certain far-reaching, possibly fanciful, conclusions from (1-5) above — or, if you like, I interpret them in accord with my own independent thought-experience.
    (1) to my mind implies that there are gaps between dharma(s) and thus that there are no continuous entities whatsoever — with the exception of nirvana which one could (perhaps) just conceivably equate with the quantum vacuum.
    (1) in combination with (2) means that there is incessant change (replacement of one dharma by another) but strictly speaking no motion, no continuous motion that is. What we call motion is nothing but consecutive dharma(s) which are so close to each other that the mind merges them together just as it does the separate images on a cinema screen. “Momentary things” writes Kamalasila, “cannot displace themselves because they disappear at the very place at which they appeared”.
    (3) explains, or rather describes, the appearance of what we consider to be matter : it is the result of the ‘combining’ — the Indian sources say ‘co-operating’ — tendencies of the dharma(s).
    (4) recognizes that what has occurrence is subject to certain formal ‘laws’, i.e. events do not usually occur at random, and certain events are invariably followed by other events with which they are regularly associated (‘This being, that arises’).
    It is difficult to know what to make of (5), the claim that the dharma(s) are ‘turbulent’, ‘agitated’ ─ though this is perhaps the most important characteristic of the dharma(s). (The Buddha was doubtless thinking of the great difficulty of ‘quietening’ the mind during meditation and, for that matter, during all conscious states.) Now, air or water can be turbulent — what does this mean? Physically, if we are to believe the current scientific world view, it means that the microscopic molecules that make up what we loosely call ‘air’ or ‘water’ are rushing about in a random manner, colliding violently with each other. This state of commotion is to be contrasted with the state of affairs when everything is ‘still’ — though, according to contemporary science, the molecules of a fluid are still moving about randomly even when the fluid is ‘in equilibrium’ (albeit less violently). There is, interestingly, no mention in Buddhist literature of the dharma(s) actually colliding with one another even when they are collected into bundles (‘skandhas’). So the ‘turbulence’ should perhaps be interpreted as the tendency of these ‘elements’ to reform, or rather to bring into momentary existence other similar dharma(s), until, eventually, when finally pacified, they cease altogether to conglomerate in space or to persist in time ─ Vasubandhu’s “condition of rest from which they never shall arise again”.
    Although the following conception is much more Hindu than Buddhist in spirit, and would have been strenuously rejected by the Buddhists who developed the dharma theory, I personally envisage the ‘turbulence’ as pertaining to an invisible, all-pervasive substratum: the dharma(s) are specks of turbulence on the surface of a sort of cosmic fluid, foam on an invisible ocean. When the turbulence dies away, the ocean returns to its original state of quiet — until the next cycle commences. Where have the dharma(s) gone to? Nowhere. What we call ‘matter’ and ‘life’ are nothing more (nor less) than a temporary surface film on this enduring ‘sub-stance’. The universe is a knot tied in a (non-material) string: it is pointless to ask where the knot has gone to when it is finally untied.

S.H. 4/10/19

Abbreviations:     BL  refers to Buddhist Logic Vol. I by Stcherbatsky (Dover Publications 1962, an “unabridged republication of the work first published by the Academy of Sciences of the U.S.S.R., Leningrad, circa 1930”).

Note  The full version is:

Just elements exist, there is no world,
Events emerge from nowhere, blossom, fall,
just elements exist, there is no world,

Events emerge from nowhere, blossom, fall,
Like hail upon the earth or glistening froth,
Just elements exist, there is no world.

 Like hail upon the earth or glistening froth,
The dharma form and open, scatter, burst,
Each moment brings forth others, vanishes.

The dharma form and open, scatter, burst,
Glistening the froth appears and thunderous the hail,
Just elements exist, there is no world.

Glistening the froth appears and thunderous the hail,
As ceaselessly the living dharma form,
Each moment brings forth others and then vanishes.

from Origins by Sebastian Hayes

A completely axiomatic theory purports to make no appeal to experience whatsoever though one doubts whether any such expositions are quite as ‘pure’ as their authors claim. Even Hilbert’s 20th century formalist version of Euclid, his Grundlagen der Geometrie, has been found wanting in this respect ─ “A 2003 effort by Meikle and Fleuriot to formalize the Grundlagen with a computer found that some of Hilbert’s proofs appear to rely on diagrams and geometric intuition” (Wikipedia).

What exactly is the axiomatic method anyway? It seems to have been invented by the Greeks and in essence it is simply a scheme that divides a subject into :
(1) that part which has to be taken for granted in order to get started at all ─ in Euclid the Axioms, Definitions and Postulates (Note 1); and
(2) that part which is derived by valid chains of reasoning from the first, namely the Theorems ─ Heath calls them ‘Propositions’.
A strictly deductive, axiomatic presentation of a scientific subject made perfect sense in the days when Western scientists believed that an all-powerful God had made the entire universe with the aid of a handful of mathematical formulae but one wonders whether it is really appropriate today when biology has become the leading science. Evolution proceeds via random mutation plus ruthless selection and human societies and/or individuals often seem to owe more to happenstance and experience than reasoning and logic. Few, if any, important discoveries in mathematics have been strictly deductive: I doubt if anyone ever sat down of an evening with the Axioms of von Neumann Set Theory in order to deduce something interesting and original, and certainly no one ever learned mathematics that way (except possibly a robot). For all that, the structural simplicity and elegance of the axiomatic method remains extremely appealing and is one of the reasons why Euclid’s Elements and Newton’s Principia are among the half dozen best-selling books of all time ─ though few people read them today.
Apart from the axioms which are an integral part of a science or branch of mathematics, there exist also certain methodological principles (or prejudices) which, properly speaking, don’t belong to the subject, but nonetheless determine the general approach and overshadow the whole work. These principles should, ideally, be stated at the outset though they rarely are.

There are two principles that I find I have used implicitly or explicitly throughout my attempts to kick-start Ultimate Event Theory. The first is Occam’s Razor, or the Principle of Parsimony, which in practice means preferring the simplest and most succinct explanation ‘other things being equal’. According to Russell, Occam, a mediaeval logician, never actually wrote that “Entities are not to be multiplied without necessity” (as he is usually quoted as stating), but he did write “It is pointless to do with more what can be done with less” which comes to much the same thing. The Principle of Parsimony is uncontroversial and very little needs to be said about it except that it is a principle that is, as it were, imposed on us by necessity rather than being in any way ‘self-evident’. We do not really have any right to assume that Nature always chooses the simplest solution: indeed it sometimes looks as if Nature enjoys complication just for the sake of it. Aristotle’s Physics is a good deal simpler than Newton’s and the latter’s much easier to visualize than Einstein’s: but the evidence so far seems to favour the more complicated theory.
The second most important principle that I employ may be called the Principle of Parmenides, since he first stated it in its most extreme form,
       “If there were no limits, there would be nothing”.
In the context of Ultimate Event Theory this often becomes:
        “If there were no limits, nothing would exist, except (possibly) the Locality itself”
and the slightly different “If there were no limits, nothing would persist”.

      This may sound unexceptional but what I deduce from this principle is highly controversial, namely the necessity to expel the notion of actual infinity from science altogether, and likewise in mathematics (Note 2). The ‘infinite’ is by definition ‘limitless’ and so falls under the ban of this very sensible principle. Infinity has no basis in our sense experience since no one, with the exception of certain mystics, has ever claimed to have ‘known’ the infinite. And mystical experience, though perfectly valid and apparently extremely enjoyable, obviously requires careful assessment before it can be introduced into a theory, scientific or otherwise. In the majority of cases, it is clear that what mystics (think they) experience is not at all what mathematicians mean by the sign ∞ but is rather an alleged reality which is ‘non-finite’ in the sense that any form of measurement would be totally inappropriate and irrelevant. (It is closer to what Bohm calls the Implicate Order as opposed to the Explicate Order ─ unhappy names for a  very useful dichotomy). In present-day science, ‘infinity’ simply functions as a sort of deus ex machina (Note 3) to get one out of a tight spot, and even then only temporarily. As far as I know, there is not a scrap of evidence to suggest that any known process or observable entity actually is either ‘infinitely large‘ or ‘infinitely small’. All energy exchanges are subject to quantum restrictions (i.e. come in finite packages) and all sorts of entities which were once regarded as ‘infinitely small’ such as atoms and molecules can now actually be ‘seen’, if only via an electron tunnelling microscope. Even the universe we live in, which for Newton and everyone else alive in his time, was ‘infinite’ in size, is sometimes thought today to have a finite current extent and is certainly thought to have a specific age (around 13.8 billion years). 
All that is left as a final bastion of the infinity delusion is space and time and even here one or two noted contemporary physicists (e.g. Lee Smolin and Fay Dowker) dare to suggest that the fabric of Space-Time may be ‘grainy’. But enough on this subject which, in my case,  tends to become obsessive.
What can an axiomatic theory be expected to do? One thing it cannot be expected to do is to give specific quantitative results. Newton showed that the law of gravitation had to be an inverse square distance law but it was some time before a value could be attributed to the indispensable gravitational constant, G. And Eddington quite properly said that we could conclude simply by reasoning that in any physical universe there would have to be an upper bound for the speed of a particle or the transmission of information, but that we would not be able to deduce by reasoning alone the actual value of this upper bound (namely c ≈ 3 × 10⁸ metres/second).
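This division of labour between reasoning and measurement can be made concrete with a small computation: the inverse-square form of the law is what theory dictates, while the numerical values of G and c (the standard modern measured values are used below, purely for illustration) have to be supplied by experiment:

```python
# The *form* F = G*m1*m2/r^2 follows from theory; the constant G
# (and likewise c) must be measured, not deduced by reasoning alone.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (measured)
c = 2.998e8     # speed of light, m/s (measured)

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Magnitude of the attraction between two point masses, in newtons."""
    return G * m1 * m2 / r ** 2

# Earth-Moon attraction: the formula is fixed by theory, but the
# resulting number depends entirely on the measured value of G.
earth, moon, distance = 5.972e24, 7.348e22, 3.844e8
force = gravitational_force(earth, moon, distance)
print(f"{force:.3e} N")  # on the order of 2e20 newtons
```

The same point holds for c: dimensional reasoning can demand that some finite upper bound exists, but only measurement tells us which number it is.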
It is also legitimate, even in a broadly axiomatic presentation, to appeal to common experience from time to time, provided one does not abuse this facility. For example, de Sitter’s solution of Einstein’s field equations could not possibly apply to the universe we (think we) live in, since his solution required that such a ‘universe’ would be entirely empty of matter ─ which we believe not to be the case.
One would, however, require a broadly axiomatic theory to lead, by reasoning alone, to some results which, as it happens, we know to be correct, and also, if possible, to make certain other predictions that no rival theory had made. And a theory which embodies a very different ‘take’ on life and the world might still prove worthwhile stating even if it is destined to be promptly discarded: it might prepare the ground for other, more mature, theories by pointing in a certain unexpected direction. Predictive power is not the only goal and raison d’être of a scientific theory: the old Ptolemaic astronomy was for a long time perfectly satisfactory as a predictive system and, according to Koestler, Copernicus’s original heliocentric system was no simpler. As a piece of kinematics, the Ptolemaic Earth-centred system was adequate and, with the addition of more epicycles, could probably ‘give the right answer’ even today. However, Copernicus’s revolution paved the way for Galileo’s and Newton’s dynamical world-view, in which the movements of planets were viewed in terms of applied forces, and so proved far more fruitful.
It is also worth saying that a different world-view from the current established one may remain more satisfactory with respect to certain specific areas, while being utterly inadequate for other purposes. If one is completely honest, one would, I think, have to admit that the now completely discredited magical animistic world-view has a certain cogency and persuasiveness when applied to aberrant human behaviour:   this is why we still talk meaningfully of charm, charisma, inspiration, luck, jinxes, fascination, fate ─ concepts that belong firmly to another era.
Finally, the world-views of other cultures and societies are not just historical curiosities: people in these societies had different priorities and may well have noticed, and subsequently sought to explain, things that modern man is unaware of. Ultimate Event Theory has its roots in the world-views of societies long dead and gone: in particular the world-view of certain Hinayana Buddhist monks in Northern India during the first few centuries of our era, and that of certain Native Amerindian tribes like the Hopi as reflected in the structure of their languages (according to the Whorf-Sapir theory).

                                                                                                                                SH  26/09/19

Notes :
Note 1  The status of the fourth and last Euclidian subsection, the Definitions, is not entirely clear: they were supposed to be ‘informative’ only in the manner of an entry in a dictionary and “to have no existential import”. On the other hand, Russell concedes that “definitions are often nothing more than disguised axioms”.

Note 2 This is in line with Poincare’s categorical statement, “There is, and can be, no actual infinity”. Gauss, often considered the greatest mathematician of all time, said something similar.

Note 3 A deus ex machina was, in Greek tragedy, a supernatural being who was lowered onto the stage by a sort of crane and whose purpose was to ‘save’ the hero or heroine when no one else could.
Larry Constantine, in an insightful letter to the New Scientist (13 Aug 2011 p. 30), wrote : “Accounting for our universe by postulating infinite parallel universes or explaining the Big Bang as the collision of “branes” are not accounts at all, but merely ignorance swept under a cosmic rug — a rug which itself demands explanation but is in turn buried under still more rugs.”



Events rather than Things

Descartes kicked off modern Western philosophy with his thought experiment of deciding what he absolutely couldn’t disbelieve in. He concluded that he could, momentarily at any rate, disbelieve in all sorts of things, even the existence of other people, but that he couldn’t disbelieve in the existence of himself, the ‘thinking being’. Now for anyone who has done meditation (and for some who haven’t, likewise), Descartes is way off. It really is possible to disbelieve in one’s own existence if by this we mean the ‘person’ who was born at such and such a date and place, went to such and such a school, and so on (Note 1). This ‘entity’ simply drifts away once you are alone, reduce the input from the outside world and confine yourself strictly to your present sensations. Indeed, it is often more difficult to believe that such a ‘being’ ever did exist than to doubt its existence!
However, what you can’t dismiss even when meditating in isolation in a dark quiet room is the idea that there are some sort of events continually occurring, call them mental or  physical or psychic (at this level such distinctions have little meaning). My version of the cogito ergo sum is thus, “There are thought/sensations, therefore there is something”. Sensations and thoughts are not physical objects but events of a particular kind, so why not take the concept of the event as primary and see where one gets to from there.
Moreover, one can at once draw certain conclusions. There must seemingly be a ‘somewhere’ for these sensations/thoughts to occur just as there must be a location for extendable bodies. We require a ‘place’: let us call it the Locality. There is, however, no obligation to identify this ‘place where mental/physical events are occurring’ as the head (or brain) of René Descartes or of Sebastian Hayes (the author of the present pamphlet) and to rashly conclude, as Descartes does, that such a person necessarily exists. Nor is there any need just yet to identify the Locality with modern Einsteinian ‘Space-Time’ (though clearly for some people there is an irresistible temptation to do so). A second deduction, or rather observation, is that these mental/physical events do not occur ‘all at once’; they come ‘one after the other’, i.e. they are successive events.
A further question that requires settling is whether these fleeting thought/sensations are connected up in some way. This is not so easy to answer. In some cases quite clearly a certain thought does give rise to another in much the same way as a certain physical impulse triggers an action. But there also seem to be cases when thought/sensations simply emerge from nowhere and drift away into nowhere, i.e. appear to be entirely disconnected from neighbouring events. The first category, the thoughts that follow each other according to a recognizable pattern, naturally leads us to believe in some form of Causality, but there is reason to believe that it is not always operative.
All this seemed enough to make a start. I had a primary entity, the Event — primary because I couldn’t disbelieve in it — and, following closely after it in the sequence of ideas, the notions of an Event-Locality and of an Ordering of Events or Event-Succession. Finally, some causal principle linking one thought/event to another was needed which I eventually baptised Dominance, partly to emphasize the (usually) one-sided rapport between two or more events but also to stress that a force is at work. Today, the notion of a binding causal connection between disparate events has been largely replaced by the much weaker statistical concept of correlation ; indeed there is a strong tendency in contemporary scientific thought to  expel both causality and force from physics altogether.      

What is an Event?

Modern axiomatic systems usually leave the basic notions, such as ‘lines’, ‘points’ &c., undefined for the good reason that, if they really are fundamental, there is nothing more basic in terms of which they can be described. At first glance this seems reasonable enough but the practice has always struck me as being rather deceitful. The authors of new-fangled geometries, such as Hilbert, knew perfectly well that they could take for granted the reader’s prior knowledge of what a line or a point is — so why not say so? My basic concept, the event, cannot be defined using other concepts that are more fundamental, but what I can do is to openly appeal to the ‘intuitive’, or rather experiential, knowledge that people have of ‘events’ while striving to make this ‘prior knowledge’ more precise.
So what is an event? ‘Something that happens’, an ‘occurrence’… It is easier to say what it is not. An event is not a thing. Why not? Because things are long-lasting, relatively permanent. An event is punctual; it is not lasting, not permanent; it is here and it is gone, never to be experienced again. And it seems to have more to do with time (in the sense of succession) than space (in the sense of extension). An event is usually pinpointed by referring to events of the same type that happened before or after it, rather than by referring to events that happened alongside it. The Battle of Stalingrad came after the fall of France and before the Normandy invasion: the fighting that was going on in parts of Russia other than Stalingrad is not usually mentioned. An event is ‘entire’, ‘all of a piece’, ‘has no parts’; it is not a process and has no inner development since there is no ‘time’ (duration) for it to develop; it is here and then gone. The decline-and-fall of the Roman Empire is not an event.
An important consequence is that events cannot be tampered with –once they have happened, they have happened.

    “The moving finger writes and having writ,
Moves on, nor all thy piety nor wit
Can lure it back to cancel half a line
    Nor all thy tears wash out a word of it”

But objects, since they are more spread out in time, are alterable, can be expanded, diminished, painted over, vandalized, restored, bought and sold and likewise individuals can change for the better or worse otherwise life would not be worth living.
Events also seem to be more intimately involved with causality than things. The question, “Why is that tree there?”, though by no means nonsensical, sounds somewhat peculiar. But “Why did that branch break?” is a natural question to ask. Why indeed. And, as stated, events usually appear to be causally connected; we feel them to be very strongly bonded to specific other events, which is why we look for ‘cause-and-effect’ pairs of events.
To sum up:  An event is punctual, sequential,  entire, evanescent, irrevocable, and usually dependent on earlier events.

Ultimate Events

But here we come across a problem.
Although an event such as a battle, an evening out, a chance meeting with a friend, even a fall, is perceived as a ‘single item’, as being entire — otherwise we would not call it an event — it is obvious that any event you like to mention can be subdivided into a myriad of smaller events. Even a blow with a hammer, though treated in mechanics as an impulsive force and thus as having no duration to speak of, is not in point of fact a single event since, thanks to modern technology, we can take snapshots showing the progressive deformation of the object struck.
So, are we to conclude that all events are in fact composite? This is, I suppose, a permissible assumption but it does not appeal to me since it leads at once to infinite regress. It is already bad enough having to treat ‘space’ as being ‘infinitely divisible’ as the traditional mathematical treatment of motion  assumes it to be. But it is much worse to suppose that any little action we make is in reality made up of an ‘infinite’ number of smaller events. I certainly don’t want to go down this path and so I find myself obliged at the very outset to introduce an axiom which states that there are certain events which cannot be further decomposed. I name these ultimate events and they play much the same role in Eventrics as atoms once did in physical theory.
Ultimate events, if they exist at all (and I am convinced they do), must be very short-lived indeed since there are many physical processes which are known to take only a few nanoseconds and any such process must contain more than one ultimate event. Perhaps ultimate events will remain forever unobservable and unrecordable in any way, though I doubt this since the same was until recently said of atoms prior to the invention of the electron tunnelling microscope. Today it is possible to count the atoms on the surface of a piece of metal, and sheets of graphene a single atom thick have either already been manufactured, or very soon will be. I can easily foresee that one day we will have the equivalent of Avogadro’s number for standard bundles of ultimate events. Whether or not this will come to pass, what we can do right now is to assume that all the features that we attribute to ordinary events, but which are only approximately true, are strictly true of ultimate events. Thus ultimate events really are punctual, all of a piece, have no parts and so on.
Assuming that a macroscopic event is made up of a large number of ultimate events, there must seemingly be something that keeps the ultimate events separate, i.e. stops them fusing. There is here a further choice. Are the ultimate events stacked up tightly against each other so that their extremities touch, as it were, or are they separated by gaps? Almost all thinkers who have taken the concept of ‘atoms of time’ seriously have opted for the first possibility but it does not appeal to me, indeed  I find it implausible. If ultimate events (or chronons) have a sort of skin as cells apparently have, this would imply that there is at least a rudimentary structure, an ‘inside’ and an ‘outside’ to an ultimate event. This seems an unnecessary and, to me, rather artificial assumption; also there are advantages in opting for the second alternative that will only become apparent later. At any rate, I decided from the very beginning to assume that there are gaps between ultimate events which means that bundles and streams of macroscopic events are not just made up of discrete micro-entities (ultimate events) but are discontinuous in a very strong sense. This is an extremely important assumption and it applies right across the board. Since everything is (by hypothesis) made up of ultimate events, it means that there are no truly physical continuous entities whatsoever with the single exception of ultimate events themselves (since they are entire by definition) and (possibly) the Locality itself. As the philosopher Heidegger put it in a memorable phrase, “Being is shot through with nothingness”.

A  (very) Rough Visual Image

Many of the early Western scientists had a clear mental picture of solid bodies knocking into each other like billiard balls and, reputedly, Newton had a Eureka moment on seeing an apple falling to the ground in the orchard of the family farm (Note 2). Such mental pictures, though they do not always stand up to close scrutiny, have nonetheless been extremely helpful (as well as misleading) to scientists and philosophers in the past. Today abstraction is the name of the game but I suspect that many a pure mathematician employs crude images on the sly when no one is looking — some brave spirits even admit to doing so. I think it is better to declare one’s mental imagery from the outset. I picture to myself a sort of grid extending in all possible directions, or, better, a featureless expanse which reveals itself to be such a grid as soon as, or just before, an event ‘has occurrence’. Moreover, I imagine an ultimate event completely filling a grid-cube or grid-disc so that there is no room for any other ultimate events at this spot. This is the image that comes to mind when I say to myself, “This particular event has occurrence there and nowhere else”.
I now stipulate that a ‘spot’ of this grid is either occupied or empty but not both at once. This might seem obvious but it is nonetheless worth stating: it is the equivalent of the logical Law of Non-Contradiction but applied to events. No kind of prediction system would be much use to anyone if, say, it predicted that there would be an eclipse of the moon at a particular place and time and that there would simultaneously not be an eclipse at the same spot. One might reasonably object that Quantum Mechanics with its superposition of states does not verify this principle, but that is precisely why Quantum Mechanics is so worrisome (Note 3).
Thirdly, I assume that once a square of the grid is occupied it remains occupied ‘forever’. This is merely another way of saying, “What has happened has happened”, and I doubt if many people would quarrel with that. It is not possible to rewrite the (supposed) past because such events are not accessible to us and, even if they were, they could not be tampered with: there is no way to un-occur an event, or so I at any rate believe.
Finally, for the sake of simplicity, I assume to begin with that all ultimate events are the ‘same size’, i.e. occupy a spot of equivalent size on the Locality.
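Purely as an illustration, these stipulations can be captured in a toy model; the names (Locality, occupy and so on) are my own, chosen for this sketch only and carrying no theoretical weight:

```python
class Locality:
    """Toy model of the event grid: each spot is either empty or holds
    exactly one ultimate event, never both at once, and once occupied
    a spot stays occupied 'forever'."""

    def __init__(self):
        self._spots = {}  # maps a spot label to the event occupying it

    def occupy(self, spot, event):
        # A spot may receive at most one ultimate event, and what has
        # happened cannot be un-occurred or overwritten.
        if spot in self._spots:
            raise ValueError(f"spot {spot!r} is already occupied")
        self._spots[spot] = event

    def is_occupied(self, spot) -> bool:
        # A spot is occupied or empty, but not both at once.
        return spot in self._spots
```

A second event aimed at an already-occupied spot simply raises an error, mirroring both the Law of Non-Contradiction for events and the irrevocability of what has occurred.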

Axioms of Ultimate Event Theory

Putting these last assumptions together, along with my requirement that every occurrence can be decomposed into so many ultimate events, and my requirement that there must be some sort of interconnectedness between certain events, we have a set of axioms, i.e. assertions which it is not necessary or possible to ‘prove’ — you either take them or leave them. The whole art of finding the right axioms is to choose those that seem the most ‘reasonable’ (least far-fetched) but which readily give rise to non-obvious deductions. Ultimately the validity of the axioms depends on what one can make them do (Note 4).
Ultimate Event Theory, or my contemporary version of it, thus seems to require the following set of Definitions and Axioms :

FUNDAMENTAL ITEMS:    Events, the Locality, Succession, Dominance.

Definitions:
    An ultimate event is an event that cannot be further decomposed.
    The Locality is the connected totality of all spots where ultimate events may have occurrence.
    Dominance is an influence that certain ultimate events exert on other events and on (repetitions of) themselves.

Axioms:
(1) Everything that has occurrence is made up of a finite number of ultimate events.
(2) All ultimate events have the same extent, i.e. occupy spots on the Locality of equivalent size.
(3) A spot on the Locality may receive at most one ultimate event, and every ultimate event that has occurrence occupies one, and only one, spot on the Locality.
(4) A spot on the Locality is either full, i.e. occupied by an ultimate event, or it is empty, but not both at once.
(5) If an ultimate event has occurrence, there is no way in which it can be altered or prevented from having occurrence.
(6) Only events that have occurrence on the Locality may exercise dominance over other events.
(7) There are gaps between successive ultimate events.
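Again purely as an illustration, some of these axioms can be restated as checkable constraints on a toy integer ‘time axis’; all identifiers below are my own invention, a sketch and nothing more:

```python
def event_chain(n_events: int, gap: int = 1):
    """Positions of n successive ultimate events on an integer axis,
    each separated from the next by `gap` empty spots (the axiom that
    there are gaps between successive ultimate events)."""
    return [i * (gap + 1) for i in range(n_events)]

def satisfies_axioms(positions) -> bool:
    """Check two of the axioms on a finite chain of event positions."""
    # At most one ultimate event per spot: no position repeats.
    one_event_per_spot = len(set(positions)) == len(positions)
    # Gaps between successive events: no two events sit on adjacent spots.
    gapped = all(b - a >= 2 for a, b in zip(positions, positions[1:]))
    return one_event_per_spot and gapped

print(event_chain(5))                    # [0, 2, 4, 6, 8]
print(satisfies_axioms(event_chain(5)))  # True
print(satisfies_axioms([0, 1, 2]))       # False: events with no gaps
```

The finiteness axiom is built in trivially (the chain has a definite length); a chain without gaps, like [0, 1, 2], fails the check, which is the strong discontinuity the text insists on.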

                                                                                                                                            SH  25/09/19

Note 1  “Introspective experience [according to Buddhists] shows us no ‘ego’ at all and no ‘world’ but only a stream of all sorts of sensations, strivings, and representations which together constitute ‘reality’” (Max Weber, The Religions of India).

Note 2 Historians are often embarrassed by anecdotes about Newton seeing an apple fall and wondering whether there might be a universal ‘force of attraction’. But there is plenty of good evidence that the story is based on fact though Newton gave slightly different accounts of it in later life as we all tend to do about really important events. It is notable that (arguably) the three greatest scientists of all time, Archimedes, Newton and Einstein all had Eureka moments.

Note 3   If one accepts the original Schrödinger schema of Quantum Mechanics, the wave function itself does not model ‘events’ since whatever is going on prior to the collapse of the wave function entirely lacks the specificity and decisiveness of events. So there are apparently ‘real entities’ that are not composed of ultimate events. But we lack appropriate terms to deal with such semi-realities: ‘probability’ is far too weak a term, ‘potentiality’ is in every way preferable. Contemporary scientific parlance studiously avoids the concept of ‘potentiality’, so important for Aristotle, because of the dead weight of positivism — but the concept is due for a revival.

Note 4   Since sketching out the bare bones of this theory some thirty years ago, I have somewhat lost faith in the appropriateness of the axiomatic method but, until something better is available, one continues to use it. We enter the drama of life in medias res and I am inclined to think that, like human societies or animal species, the universe itself ‘makes things up as it goes along’, subject to some very general fundamental constraints of a logical nature. Such an Experimental Universe Theory is not yet the accepted contemporary scientific paradigm by a long shot but we seem to be moving steadily towards it.




The Rise and Fall of Atomism

So-called ‘primitive’ societies by and large split the world into two, what one might call the Manifest (what we see, hear &c.) and the Unmanifest (what we don’t perceive directly but intuit or are subliminally aware of). For the ‘primitives’ everything originates in the Unmanifest, especially drastic and inexplicable changes like earthquakes, sudden storms, avalanches and so on,  but also more everyday but nonetheless mysterious occurrences like giving birth, changing a substance by heating it (i.e. cooking), growing up, aging, dying. The Unmanifest is understandably considered to be much more important than the Manifest — since the latter originates in the first but not vice-versa — and so the shaman, or his various successors, the ‘sage’, ‘prophet’, ‘initiate’ &c. claims to have special knowledge because he or she has ready access to the Unmanifest which normal people do not.  The shaman and more recently the priest is, or claims to be, an intermediary between the two realms, a sort of spiritual marriage broker. Ultimately, a single principle or ‘hidden force’ drives everything, what has been variously termed in different cultures mana, wakanda, ch’i ….  Ch’i is ‘what makes things go’ as Chuang-tzu puts it, in particular what makes things more, or less, successful. If the cheetah can run faster than all other animals, it is because the cheetah has more mana and the same goes for the racing car; a warrior wins a contest of strength because he has more mana, a young woman has more suitors because of mana and so on.
Charm and charisma are watered down modern versions of mana and, like mana, are felt to originate in the beyond, in the non here and now, in the Unmanifest. This ancient dualistic scheme is far from dead and is likely to re-appear in the most unexpected places despite the endless tut-tutting of rationalists and sceptics; as a belief system it is both plausible and comprehensible, even conceivably contains a kernel of truth. As William James put it, “The darker, blinder strata of character are the only places in the world in which we catch real fact in the making”.
Our own Western civilization, however, is founded on that of Ancient Greece (much more so than on ancient Palestine). The Greeks, the ones we take notice of at any rate, seem to have been the first people to have disregarded the Unmanifest entirely and to have considered that supernatural beings, whether they existed or not, were not required if one wanted to understand the physical universe: basic natural processes properly understood sufficed (Note 1). Democritus of Abdera, whose works have unfortunately been lost, kicked off a vast movement which has ultimately led to the building of the Large Hadron Collider, with his amazing statement, reductionist if ever there was one: “Nothing exists except atoms and void”.

Atoms and void, however, proved to be not quite enough to describe the universe: Democritus’s whirling atoms and the solids they composed when they settled themselves down were seemingly subject to certain ‘laws’ or ‘general principles’ such as the Law of the Lever or the Principle of Flotation, both clearly stated in quantitative form by Archimedes. But a new symbolic language, that of higher mathematics, was required to talk about such things since the “Book of Nature is written in the language of mathematics”, as Galileo, a Renaissance successor and great admirer of the Greeks, put it. Geometry stipulated the basic shapes and forms to which the groups of atoms were confined when they combined together to form regular solids — and so successfully that, since the invention of the high definition microscope, ‘Platonic solids’ and other fantastical shapes studied by the Greek geometers can actually be ‘seen’ today, embodied in the arrangement of molecules in rock crystals and in the fossils of minute creatures known as radiolarians.
To all this Newton added the important notion of Force and gave it a precise meaning, namely the capacity to alter a body’s state of rest or constant straight-line motion, either by way of contact (pushes and pulls) or, more mysteriously, by ‘gravitational attraction’ which could operate at a distance through a vacuum. Nothing succeeds like success: Laplace famously declared that he had “no need of that hypothesis” — the existence of God — to explain the movements of heavenly bodies, while by the middle of the nineteenth century Helmholtz was declaring that “all physical problems are reducible to mechanical problems” and thus, in principle, solvable by applying Newton’s Laws. Why stop there? The dreadful implication, spelled out by maverick thinkers such as Hobbes and La Mettrie, was that human thoughts and emotions, maybe life itself, were also ultimately reducible to “matter and motion” and that it was only a question of time before everything would be completely explained scientifically.
The twentieth century has at once affirmed and destroyed the atomic hypothesis. Affirmed it because molecules and atoms, at one time considered by most physicists simply as useful fictions, can actually be ‘seen’ (i.e. mapped precisely) with an electron tunnelling microscope, and substances ‘one atom thick’ like graphene are actually being manufactured, or soon will be. However, atoms have turned out not to be indestructible or even indivisible as Newton and the early scientists supposed. Atomism and materialism have, by a curious circuitous route, led us back to a place not so very far from our original point of departure, since the 20th century scientific buzzword, ‘energy’, has disquieting similarities to mana. No one has ever seen or touched ‘Energy’ any more than they have ever seen or touched mana. And, strictly speaking, energy in physics is ‘Potential Work’, i.e. Work which could be done but is not actually being done, while ‘Work’ in physics has the precise meaning Force × distance moved in the direction of the applied force. Energy is not something actual at all, certainly not something perceptible by the senses or their extensions; it is “strictly speaking a definition rather than a physical entity, merely being the first integral of the equations of motion” (Heading, Mathematical Methods in Science and Engineering p. 546). It is questionable whether statements in popular science books such as “the universe is essentially radiant energy” have any real meaning — taken literally they imply that the universe is ‘pure potentiality’, which it clearly isn’t.
The present era thus exhibits the contradictory tendencies of being on the one hand militantly secular and ‘materialistic’, both in the acquisitive and the philosophic senses of the word, while the foundations of this entire Tower of Babel, good old solid ‘matter’ composed of “hard, massy particles” (Newton) and “extended bodies” (Descartes), have all but evaporated. When he wished to refute the idealist philosopher Bishop Berkeley, Samuel Johnson famously kicked a stone, but it would seem that the Bishop has had the last laugh.

A New Starting Point?

Since the wheel of thought concerning the physical universe has more or less turned full circle, a few brave 20th century souls have wondered whether ‘atoms’ and ‘extended bodies’ really were the best starting point after all, and whether one might not do better starting with something else. What though? There was in the early 20th century a resurgence of ‘animism’ on the fringes of science and philosophy, witness Bergson’s élan vital (‘Life-force’), Driesch’s ‘entelechy’ and similar concepts. The problem with such theories is not that they are implausible — on the contrary, they have strong intuitive appeal — but that they seem to be scientifically and technologically sterile. In particular, it is not clear how such notions can be represented symbolically by mathematical (or other) symbols, let alone tested in laboratory conditions.
Einstein, for his part, pinned his faith on ‘fields‘ and went so far as to state that “matter is merely a region where the field is particularly intense”. However, his attempt to unify physics via a ‘Unified Field’ was unsuccessful: unsuccessful for the layman because the ‘field‘ is an elusive concept at best, and unsuccessful for the physicist because Einstein never did succeed in combining mathematically the four basic physical forces, gravity, electro-magnetism and the strong and weak nuclear forces.
More recently, there have been one or two valiant attempts to elucidate the universe in terms of ‘information’, even to the extent of viewing it as a vast computer or cellular automaton (Chris Langton, Stephen Wolfram et al.). But such attempts may well one day appear just as crudely anthropomorphic as Boyle’s vision of the universe as a sort of glorified town clock. Apart from that, one hopes that the universe, or whatever is behind it, has better things to do than simply pile up endless stacks of data like the odious Super Brains of Olaf Stapledon’s prescient SF fantasy Last and First Men, whose only ’emotion’ is curiosity.

The Event

During the Sixties and Seventies, at any rate within the booming counter-culture, there was a feeling that the West had somehow ‘got it wrong’ and was leading everyone towards disaster with its obsessive emphasis on material goods and material explanations. The principal doctrine of the hippie movement, inasmuch as it had one, was that “Experiences are more important than possessions” — and the more outlandish the experiences the better. Zen-style ‘Enlightenment’ suddenly seemed much more appealing than the Eighteenth-century movement of the same name which spearheaded Europe into the secular, industrial era. A few physicists, such as Fritjof Capra, argued that, although classical physics was admittedly very materialistic in the bad sense, modern physics “wasn’t like that” and had strong similarities with the key ideas of eastern mysticism. However, though initially attracted, I found modern physics (wave/particle duality, quantum entanglement, the Block Universe, &c. &c.) a shade too weird, and what followed soon after, String Theory, completely opaque to all but a small band of elite advanced mathematicians.
But the trouble didn’t start in the 20th century. Newtonian mechanics was clearly a good deal more sensible, but Calculus, when I started learning mathematics towards middle age, proved to be a major stumbling block, not so much because it was difficult to learn as because its basic principles and procedures were so completely unreasonable. D’Alembert is supposed to have said to a student who expressed some misgivings about manipulating infinitesimals, “Allez en avant; la foi vous viendra” (“Keep going, conviction will follow”), but in my case it never did. Typically, the acceleration (change of velocity) of a moving body is computed by supposing the velocity of the body to be constant during a certain ‘short’ interval of time; we then reduce this interval ‘to the limit’ and, hey presto! we have the derivative appearing like the rabbit out of the magician’s hat. But if the particle is always accelerating its speed is never constant, and if the particle is always moving, it is never at a fixed location. The concept of ‘instantaneous velocity’ is mere gobbledegook, as Bishop Berkeley pointed out nearly three centuries ago. In effect, ‘classical’ Calculus has its cake and eats it too — something we all like doing if we can get away with it — since it merrily sets δx to non-zero and zero simultaneously on opposite sides of the same equation. ‘Modern’, i.e. post mid-nineteenth-century, Calculus ‘solved’ the problem by the ingenious concept of a ‘limit’, the key idea in the whole of Analysis. Mathematically speaking, it turns out to be irrelevant whether or not a particular function actually attains a given limit (assuming it exists) just so long as it approaches closer than any desired finite quantity. But what anyone with an enquiring mind wants to know is whether in reality the moving arrow actually attains its goal or whether the closing door ever actually slams shut (to use two examples mentioned by Zeno of Elea).
As a matter of fact, in neither case do they attain their objectives according to Calculus, modern or classical, since, except in the most trivial case of a constant function, ‘taking the derivative’ involves throwing away non-zero terms on the right-hand side which, however puny, we have no right to get rid of just because they are inconvenient. As Zeno of Elea pointed out over two thousand years ago, if the body is in motion it is not at a specific point, and if situated exactly at a specific point, it is not in motion.
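The complaint can be made concrete with a little exact arithmetic. The following is only a sketch in Python; the function f(x) = x² and the step sizes are my own illustrative choices, not anything stated in the text:

```python
from fractions import Fraction

def difference_quotient(x, dx):
    """Exact difference quotient of f(x) = x^2 over the interval [x, x + dx]."""
    # Algebraically ((x + dx)^2 - x^2) / dx = 2x + dx: the quotient always
    # carries the 'inconvenient' extra term dx, however small dx is made.
    return ((x + dx) ** 2 - x ** 2) / dx

x = Fraction(1)
for dx in (Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10**6)):
    q = difference_quotient(x, dx)
    print(dx, q, q - 2 * x)   # the leftover is exactly dx every time
```

Worked in exact rationals, the quotient is 2x + dx at every stage; reading off the derivative 2x amounts to discarding the non-zero dx, which is precisely the step objected to above.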
This whole issue can, however, be easily resolved by the very natural supposition (natural to me at any rate) that intervals of time cannot be indefinitely diminished and that motion consists of a succession of stills, in much the same way as a film we see in the cinema gives the illusion of movement. Calculus only works, inasmuch as it does work, if the increment in the independent variable is ‘very small’ compared to the level of measurement we are interested in, and the more careful textbooks warn the student against relying on Calculus in cases where the minimum size of the independent variable is actually known — for example in molecular thermodynamics, where dn cannot be smaller than a single molecule.
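The ‘succession of stills’ picture can be sketched in a few lines. This is an illustrative assumption of mine (a trajectory x = t² sampled at unit ticks), not part of the text’s argument:

```python
# Position is only defined at whole multiples of a minimal tick (the
# 'ksana' of the text); velocity can then only ever be a ratio of finite
# differences between successive stills, never an 'instantaneous' quantity.
def positions(n_frames):
    """Hypothetical sampled trajectory x = t^2 at discrete ticks."""
    return [t * t for t in range(n_frames)]

xs = positions(6)
# Change in position per tick, defined only *between* successive stills:
velocities = [xs[i + 1] - xs[i] for i in range(len(xs) - 1)]
print(xs)          # [0, 1, 4, 9, 16, 25]
print(velocities)  # [1, 3, 5, 7, 9]
```

There is nothing to differentiate ‘at an instant’ here: the finest velocity available is a ratio over one whole tick, which is exactly the situation the more careful textbooks warn about when the minimum increment is known.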
In any case, on reflection, I realized that I had always felt ‘time’ to be discontinuous, and life to be made up of a succession of discrete moments. This implies — taking things to the limit — that there must be a minimal ‘interval of time’ which, moreover, is absolute and does not depend on the position or motion of an imaginary observer. I was thus heartened when, in my casual reading, I learned that nearly two thousand years ago certain Indian Buddhist thinkers had advanced the same supposition and even apparently attempted to give an estimate of the size of such an ‘atom of time’, which they referred to as a ksana. More recently, Whitrow, Stephen Wolfram and one or two others have given estimates of the size of a chronon based on the Planck limit — but it is not so much the actual size that is important as the necessary existence of such a limiting value (Note 2).
Moreover, taking seriously the Sixties mantra that “experiences are more important than things” I wondered whether one could, and should, apply this to the physical world and take as a starting point not the ‘fundamental thing’, the atom, but the fundamental event, the ultimate event, one that could not be further decomposed. The resulting general theory would be not so much physics as Eventrics, a theory of events which naturally separates out into the study of the equivalent of the microscopic and macroscopic realms in physics. Ultimate Event Theory, as the name suggests, deals with the supposed ultimate constituents of physical (and mental) reality – what Hinayana Buddhists referred to as dharma(s) — while large-scale Eventrics deals with ‘historical events’ which are massive bundles of ultimate events and which have their own ‘laws’.
The essential point, as far as I was concerned, was that I suddenly had the bare bones of a physical schema: ‘reality’ was composed of events, not of objects (Note 3), or “The world is the totality of events and not of things”, to adapt Wittgenstein’s aphorism. Ultimate Event Theory was born, though it has taken me decades to pluck up the courage to put such an intuitively reasonable theory into the public domain, so enormous is the paradigm shift involved in these few innocuous-sounding assumptions.       S.H. (3/11/2019)

Note 1 There exists, however, an extremely scholarly (but nonetheless very readable) book, The Greeks and the Irrational by E.R. Dodds, which traces the history of an ‘irrational’ counter-current in Greek civilisation from Homer to Hellenistic times. The author, a professor of Greek and a one-time President of the Society for Psychical Research, asked himself the question, “Were the Greeks in fact quite so blind to the importance of non-rational factors in man’s experience and behaviour as is commonly assumed both by their apologists and by their critics?” The book in question is the result of his erudite ponderings on the issue.

Note 2 Caldirola suggests 6.97 × 10⁻²⁴ seconds for the minimal temporal interval, the chronon ─ what I refer to by the Sanskrit term ksana. Other estimates exist, such as the Planck time, 5.39 × 10⁻⁴⁴ seconds. Causal Set Theory and some other contemporary relativistic theories assume minimal values for spatial and temporal intervals, though I did not know this at the time (sic).

Note 3 Bertrand Russell, of all people, clearly anticipated the approach taken in UET, but made not the slightest attempt to lay out the conceptual foundations of the subject. “Common sense thinks of the physical world as composed of ‘things’ which persist through a certain period of time and move in space. Philosophy and physics developed the notion of ‘thing’ into that of ‘material substance’, and thought of material substance as consisting of particles, each very small, and each persisting throughout all time. Einstein substituted events for particles; each event had to each other a relation called ‘interval’, which could be analyzed in various ways into a time-element and a space-element. (…) From all this it seems to follow that events, and not particles, must be the ‘stuff’ of physics. What has been thought of as a particle will have to be thought of as a series of events. (…) ‘Matter’ is not part of the ultimate material of the world, but merely a convenient way of collecting events into bundles.”  Russell, History of Western Philosophy p. 786 (Counterpoint, 1979)


Every event or event cluster is in Ultimate Event Theory (UET) attributed a recurrence rate (r/r) given in absolute units stralda/ksana, where the stralda is the minimal spatial interval and the ksana the minimal temporal interval. The r/r can in principle take the value of any rational number m/n, or zero ─ but no irrational value. The r/r of an event is roughly the equivalent of its speed in traditional physics, i.e. it is a distance/time ratio.

If r/r = 0, this means that the event in question does not repeat.
If r/r = m/n this signifies that the event repeats m positions to the right every n ksanas and if r/r = −m/n it repeats m positions to the left.
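As a sketch only (the class and method names below are mine, not part of the theory), an r/r can be modelled as a signed pair rather than as a single number:

```python
class RecurrenceRate:
    """A recurrence rate m/n: m stralda per n ksanas, the sign of m
    giving the direction of displacement relative to a landmark chain."""
    def __init__(self, m, n):
        if n < 1:
            raise ValueError("n must be at least one ksana")
        self.m = m   # signed displacement in stralda; 0 = repeats in place
        self.n = n   # ksanas until the next appearance

    def direction(self):
        return "right" if self.m > 0 else "left" if self.m < 0 else "in place"

print(RecurrenceRate(3, 7).direction())    # right
print(RecurrenceRate(-3, 7).direction())   # left
print(RecurrenceRate(0, 7).direction())    # in place
```

Storing m and n as a pair, rather than collapsing them into a quotient, matters for what follows.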

But right or left relative to what? It is necessary to assume a landmark event-chain where successive ultimate events lie exactly above (or underneath) each other when one space-time ‘slice’ is replaced by the next. Such an event-chain is roughly the equivalent of an inertial system in normal physics. We generally assume that we ourselves constitute a standard inertial system relative to which all other inertial systems can be compared ─ we ‘are where we are’ at all instants and so, in a certain sense, are always at rest. In a similar way we constitute a sort of standard landmark event-chain to which all other event-chains can be related. But we cannot see ourselves, so we choose instead as standard landmark event-chain some object (= repeating event-cluster) that remains at a constant distance from us as far as we can tell. Such a choice is clearly relative, but we have to choose some repeating event-chain as standard in order to get going at all. The crucial difference is, of course, not between ‘vertical’ event-paths and ‘slanting’ event-paths but between ‘straight’ paths, whether vertical or not, and ones that are jagged or curved, i.e. not straight (assuming these terms are appropriate in this context). As we know, dynamics only really took off when Galileo, unlike Aristotle, realized that it was the distinction between accelerated and non-accelerated motion that was fundamental, not that between rest and motion.

So, the positive or negative (right or left) m variable in m/n assumes some convenient ‘vertical’ landmark sequence.

The denominator n of the stralda/ksana ratio can never be zero ─ not so much because ‘division by zero is not allowed’ as because “the moving finger writes and, having writ, moves on”, as the Rubáiyát puts it, i.e. time only stands still for the space of a single ksana. So an r/r where an event repeats but ‘stays where it is’ at each appearance takes the value 0/n, which we need to distinguish from 0.
Thus 0/n ≠ 0

m/n is a ratio but, since the numerator is in the absolute unit of distance, the stralda, m : n is not the same as (m/n) : 1 unless n = 1. To say a particle’s speed is 4/5ths of a metre per second is meaningful, but if r/r = 4/5 stralda per ksana we cannot conclude that the event in question shifts 4/5ths of a stralda to the right every ksana (because the stralda is indivisible). All we can conclude is that the event in question repeats every fifth ksana at a position four spaces to the right relative to its original position.
We thus need to distinguish between recurrence rates which appear to be the same because of cancelling. The denominator will thus, unless stipulated otherwise, always refer to the next appearance of an event. 187/187 s/k is for example very different from 1/1 s/k since in the first case the event only repeats every 187th ksana while in the second case it repeats every ksana. This distinction is important when we consider collisions. If there is any likelihood of confusion the denominator will be marked in bold, thus 187/187.
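The point about cancelling can be made sharply by contrast with ordinary rational arithmetic. A Python sketch; the class `RR` is my own illustrative shorthand for an unreduced recurrence rate:

```python
from fractions import Fraction

class RR:
    """Unreduced stralda/ksana pair: equality is componentwise, so
    187/187 and 1/1 count as distinct recurrence rates."""
    def __init__(self, m, n):
        self.m, self.n = m, n
    def __eq__(self, other):
        return (self.m, self.n) == (other.m, other.n)

# Ordinary arithmetic cancels: 187/187 and 1/1 are the same number...
print(Fraction(187, 187) == Fraction(1, 1))   # True
# ...but as recurrence rates they describe very different event-chains:
# one event repeats only every 187th ksana, the other repeats every ksana.
print(RR(187, 187) == RR(1, 1))               # False
```

The design choice is simply that the denominator always refers to the next appearance, so no reduction to lowest terms is ever performed.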

Also, the stralda/ksana ratio for event-chains always has an upper limit. That is, it is not possible for a given ultimate event to reappear more than M stralda to the right or left of its original position at the next ksana ─ this is more or less equivalent to setting c ≈ 3 × 10⁸ metres/second as the upper limit for causal processes according to Special Relativity. There is also an absolute limit N for the denominator irrespective of the value of the numerator, i.e. the event-chain with r/r = m/n terminates after n = (N−1) — or at the Nth ksana if it is allowed to attain the limit.
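The two bounds can be sketched as a validity check. The numerical values of M and N below are placeholders of mine — the text fixes only their existence, not their size:

```python
M = 100      # hypothetical cap on stralda shifted per reappearance
N = 10_000   # hypothetical absolute cap on the denominator n

def valid_rr(m, n):
    """A recurrence rate m/n is admissible only within the limits M and N
    (taking the stricter reading: the chain terminates after n = N - 1)."""
    return abs(m) <= M and 1 <= n <= N - 1

print(valid_rr(100, 1))      # True: right at the 'causal speed limit'
print(valid_rr(101, 1))      # False: would exceed the limit M
print(valid_rr(5, 10_000))   # False: the chain has already terminated
```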

These restrictions mean that the Locality, even when completely void of events, has certain inbuilt constraints. Given any two positions A and B occupied by ultimate events at ksana k, there is an upper limit to the number of ultimate events that can be fitted into the interval AB at the next or any subsequent ksana. This means that, although the Locality is certainly not metrical in the way ordinary spatial expanses are, it is not true in UET that “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” (Note 1).       SH  11/09/19

Note 1 The statement “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” is the equivalent in UET of the axiom “Between any two points there is always another point” which underlies both classical Calculus and modern number theory. Coxeter (Introduction to Geometry p. 178) introduces “Between any two points….” as a theorem derived from the axioms of ‘Ordered Geometry’, an extremely basic form of geometry that takes as ‘primitive concepts’ only points and betweenness. The proof only works because the geometrical space in question entirely lacks the concept of distance whereas in UET the Locality, although in general non-metrical and thus distance-less, does have the concept of a minimum separation between positions where ultimate events can have occurrence. This follows from the general principle of UET based on a maxim of the great ancient Greek philosopher Parmenides:
“If there were no limits, nothing would persist except the limitless itself”.
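The counting consequence of this minimum separation can be put in one line. A Python sketch; the function and its name are mine, purely for illustration:

```python
# If occupiable positions are at least one stralda apart, an interval AB
# spanning k stralda can host at most k + 1 ultimate events at any
# subsequent ksana -- unlike a continuum, where a further event could
# always be interposed between any two.
def max_events(k_stralda):
    """Upper bound on ultimate events fitting into an interval of k stralda."""
    return k_stralda + 1

print(max_events(10))   # 11: the two endpoints plus nine interior positions
```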

The genesis of Ultimate Event Theory can be traced back to a stray remark made by the author of popular books on mathematics, W.W. Sawyer. In the course of an exchange of views on contradiction in mathematics, Sawyer threw off the casual remark that “a scientific theory would be useless if it predicted that an event such as an eclipse of the sun would happen at a given place and time, and also that it would not happen at the same time and place”. Such a ‘Law of Non-Contradiction for Events’ was assumed by all the classical physicists and seems to be a necessary (though never stated) assumption for doing science at all. Arguably, Quantum Mechanics does not verify this principle, but this is precisely why QM is so worrisome (Note 1).
Sawyer’s chance remark sounds innocuous enough, but the principle involved turns out to be extremely far-reaching. We have in effect a non-contradiction law for events (not statements), a building block of a physical, not a logical, theory. Now, it seems of the essence of an ‘event’ that it either happens ‘at a particular time and place’ or it does not: there is no middle ground. There would be little point in announcing that a certain musical or theatrical event was scheduled to take place in such and such a Town Hall on, say, Monday, the 25th of December in the year 20**, but also scheduled not to take place at the given time and date. And certainly, once the time and date have passed, the ‘event’ either has occurred or it has not. Moreover, it seems to be of the essence of an ‘event’ to be ‘punctual’, ‘precise’ as to place and time.
An ‘event’, however, is clearly itself made up of smaller events, there are, as it were, macro- and micro-events.  Narrowing everything down and ‘taking the limit’, we end up with the eminently reasonable supposition that there are ‘ultimate events’, i.e. events that cannot be further decomposed. Secondly, since like macro-events they are ‘precise as to time and place’, we may presume that they, as it were, occupy a single ‘grid position’ on the Event Locality. This at any rate is the schema I proposed to work with.
There are two philosophic assumptions, one negative and one positive, built into this schema, namely (1) that there is no such thing as infinite regress and (2) that an ‘event’ has to happen ‘somewhere’. Calculus and much of traditional physics has ‘infinite regress’ (or ‘infinite divisibility’, which comes to the same thing) built into it, i.e. it rejects (1). Some contemporary systems such as Loop Quantum Gravity (LQG) are prepared to consider that space-time is perhaps ‘grainy’, but they do not see the need for an ‘event locality’, i.e. they reject (2). In LQG what we call time and space are simply ‘relations’ between basic entities (nodes) and have no real existence. And one could, of course, dispense both with actual infinity and with an Event Locality, i.e. reject both (1) and (2) — but such a course does not appeal to me. I opted to exclude infinity from my proposed system of the world but, on the other hand, to accept that there is indeed a ‘Locality’, i.e. a ‘place’ where ultimate events can and do have occurrence.

Dispensing with actual infinity gets rid in one fell swoop both of the ingenious paradoxes of Zeno and of Cantor’s transfinite sets, in which no one except Cantor himself really believes. Instead of ‘infinite sets’, we have ‘indefinitely extendable sets’ which, as far as I can see, do all the work required of them without us having to (pretend to) believe in ‘actual infinity’. It is tedious to have to explain to mathematicians that so-called infinite sequences can indeed (and very often do) have a finite limit but that this limit is, in the vast majority of cases, manifestly not attained. The terms ‘sum’ and ‘limit’ are not interchangeable, and so-called ‘infinite’ series only ever have partial sums, indeed are indefinitely extendable sequences of partial sums. For example, the well-known series 1 + 1/2 + 1/4 + 1/8 + … has limit 2 but cannot ever attain it. Most (all?) non-trivial so-called ‘infinite’ series are strictly speaking interminable.
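The claim about 1 + 1/2 + 1/4 + 1/8 + … is easily checked in exact arithmetic. A Python sketch; the number of terms computed is arbitrary:

```python
from fractions import Fraction

def partial_sums(n_terms):
    """Partial sums of 1 + 1/2 + 1/4 + ...; the k-th equals 2 - 1/2^(k-1)."""
    s, term, sums = Fraction(0), Fraction(1), []
    for _ in range(n_terms):
        s += term
        sums.append(s)
        term /= 2
    return sums

sums = partial_sums(6)
print(sums[-1])                    # 63/32, i.e. 2 - 1/32
print(all(s < 2 for s in sums))    # True: the limit 2 is never attained
```

However many terms are taken, every partial sum falls short of 2 by exactly the last term added, which is non-zero: the sequence of partial sums is indefinitely extendable but the limit is never a member of it.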
As to (2), the positive requirement, it is to me inconceivable that ‘something that happens’, i.e. an event, does not happen somewhere, i.e. has a precise position on some underlying substratum without which it simply could not occur. The idea of space and time being ‘relations’ between things that exist rather than things that exist in their own right goes back to Leibniz and is one of the features that distinguishes his mathematics and science from that of Newton, who was a great believer in absolute time and space and thus in absolute position. I do not think there is any experiment that can determine the issue one way or the other and doubtless temperament comes into play here, but for what it is worth I believe that Newton’s approach makes much more sense and has been more fruitful. As far as I am concerned, I am convinced that an event, if it occurs at all, occurs somewhere, though there is no reason at this stage to attribute any property to this ‘somewhere’ except that it allows events to ‘have occurrence’. It does, however, make the ‘Event Locality’ a primary actor since this Locality seemingly existed prior to any particular events taking place. One could alternatively consider that an event, when and as it has occurrence, as it were carves out a place for itself to happen. In this schema the Locality is an essentially negative entity which does nothing to obstruct occurrence, and that is all. This is a perfectly reasonable approach but again one that does not appeal to me for aesthetic or temperamental reasons. However, once I accepted ultimate events and an Event Locality I realized that I had two ‘primary entities’ that henceforth could be taken for granted. A third primary entity was some ‘force of causality’ providing order and coherence to events as they occurred, or rather re-occurred, and so we have the three primary entities of Ultimate Event Theory: ultimate events, the Locality and a kind of causality that I call Dominance.       SH

Note 1  It is not, I think, at this stage worth getting involved in interminable discussions about Schrödinger’s dead-and-alive cats though the issue will have to be faced at some stage. Suffice it to say, for the moment, that the wave function, prior to an intervention by a human or other conscious agent does not verify the Law of Non-Contradiction for Events — and one way out is to simply accept that the wave function does not describe ‘events’ at all, though it does deal in ‘potential’ physical entities that are capable of producing bona fide events.