Archives for posts with tag: Parmenides

[Brief summary: For those who are new to this website, a brief recapitulation. Ultimate Event Theory aims to be a description of the physical world in which the event, as opposed to the object (or field), is the primary item. The axioms of UET are given in an earlier post but the most important is the Axiom of Finitude which stipulates that Every event is composed of a finite number of ultimate events which are not further decomposable. Ultimate events ‘have occurrence’ on an Event Locality which exists only so as to enable ultimate events to take place somewhere and to remain discrete. Spots on the Locality where events may and do have occurrence have three ‘spatial’ dimensions, each of unit size, 1 stralda, and one temporal dimension of 1 ksana. Both the stralda and ksana are minimal and cannot be meaningfully subdivided. The physical world, or what we apprehend of it, is made up of event-chains and event-clusters which are bonded together and appear as relatively persistent objects. All so-called objects are thus, by hypothesis, discontinuous at a certain level and there are distinct gaps between the successive appearances of recurring ultimate events. These gaps, as opposed to the ‘grid-spots’, have ‘flexible’ extent, though with a minimum and a maximum.]

Every repeating event, or event cluster, is in UET attributed a recurrence rate (r/r) given in absolute units stralda/ksana where the stralda is the minimal spatial interval and the ksana the minimal temporal interval. r/r can in principle take the value 0 or any rational value m/n ─ but no irrational value. The r/r is quite distinct from the space/time displacement rate, the equivalent of ‘speed’, since it concerns the number of times an ultimate event repeats in successive ksanas, quite apart from how far the repeat event is displaced ‘laterally’ from its previous position.
If r/r = 0, this means that the event in question does not repeat.
But this value is to be distinguished from r/r = 0/1 which signifies that the ultimate event reappears at every ksana but does not displace itself ‘laterally’ ― it is, if you like, a ‘rest’ event-chain.
If r/r = 1/1 the ultimate event reappears at every ksana and displaces itself one stralda at every ksana, the minimal spatial displacement. (Both the stralda and the ksana, being absolute minimums, are indivisible.)
        If r/r = m/n (with m, n positive whole numbers) this signifies that the ultimate event repeats m positions to the right every n ksanas and if r/r = −m/n it repeats m positions to the left.
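To make the notation concrete, here is a minimal sketch in Python (the choice of language and all names are mine, purely for illustration, not part of UET itself). It simply stores a recurrence rate as an unreduced signed pair (m, n) and spells out the cases listed above; treating n = 0 as ‘does not repeat’ is a modelling convenience.

```python
# A minimal illustrative sketch (my own notation, not part of UET itself).
# A recurrence rate is stored as an UNREDUCED signed pair (m, n): m stralda of
# lateral displacement (+ right, - left) per reappearance, one reappearance
# every n ksanas.  Using n = 0 for 'never repeats' is a modelling convenience.

from dataclasses import dataclass

@dataclass(frozen=True)
class RecurrenceRate:
    m: int   # displacement in stralda per reappearance (signed)
    n: int   # ksanas between successive appearances (0 = does not repeat)

    def describe(self) -> str:
        if self.n == 0:
            return "does not repeat (r/r = 0)"
        if self.m == 0:
            return f"'rest' event-chain: reappears every {self.n} ksana(s) with no lateral shift"
        side = "right" if self.m > 0 else "left"
        return f"reappears every {self.n} ksana(s), {abs(self.m)} stralda to the {side}"

print(RecurrenceRate(0, 0).describe())   # r/r = 0    : no repetition
print(RecurrenceRate(0, 1).describe())   # r/r = 0/1  : rest event-chain
print(RecurrenceRate(1, 1).describe())   # r/r = 1/1  : one stralda per ksana
print(RecurrenceRate(-3, 2).describe())  # r/r = -3/2 : 3 stralda left every 2 ksanas
```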

But right or left relative to what? It is necessary to assume a landmark event-chain where successive ultimate events lie exactly above (or underneath) each other, as it were, when one space-time slice is replaced by the next. We generally assume that we ourselves constitute a standard  inertial system relative to which all others can be compared ─ we ‘are where we are’ at all instants and feel ourselves to be at rest except when our ‘natural state’ is manifestly disrupted, i.e. when we are accelerated by an outside force. In a similar way, in UET we conceive of ourselves as constituting a rest event-chain to which all others can be related. But we cannot see ourselves so we generally choose instead as a standard landmark event chain some (apparent) object that remains fixed at a constant distance as far as we can tell.

Such a choice is clearly relative, but we have to choose some repeating event chain as standard in order to get going at all — ‘normal’ physics has the same problem. The crucial difference is, of course, not between ‘vertical’ event-paths (‘stationary’ event-chains) and ‘slanting’ event-paths (the equivalent of straight-line constant motion), but rather between ‘straight’ paths (whether vertical or slanting) and ones that are not straight, i.e. curved. As we know, dynamics only really took off when Galileo, unlike Aristotle, realized that it was the distinction between accelerated and non-accelerated motion that was fundamental, not that between rest and constant straight-line motion.
So, the positive or negative (right or left) m variable in m/n assumes some convenient ‘vertical’ landmark sequence. The denominator n of the stralda/ksana ratio cannot ever be zero ─ not so much because ‘division by zero is not allowed’ as because time only stands still for the space of a single ksana — “the moving finger writes and, having writ, moves on”, as the Rubaiyàt puts it. So, an r/r where an event repeats but ‘stays where it is’ at each appearance takes the value 0/n, which we need to distinguish from 0.
m/n is a ratio but, since the numerator is in the absolute unit of distance, the stralda, m:n is not the same as (m/n) : 1 unless m = n.  To say a particle’s speed is 4/5ths of a metre per second is meaningful, but if the r/r of an event is 4/5 stralda per ksana we cannot conclude that the event in question shifts 4/5ths of a stralda to the right at every ksana (because there is no such thing as a fifth of a stralda). All we can conclude is that the event in question repeats every fifth ksana at  a position four spaces to the right relative to its original position.
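As a small worked illustration of this point (the numbers are simply those of the example above, the Python notation is mine):

```python
# Illustrative arithmetic only: r/r = 4/5 stralda/ksana does NOT mean a shift
# of 4/5 of a stralda per ksana; it means one reappearance every 5th ksana,
# four whole stralda to the right of the previous position.
m, n = 4, 5
appearances = [(k * n, k * m) for k in range(4)]   # (ksana, position) pairs
print(appearances)   # [(0, 0), (5, 4), (10, 8), (15, 12)]
```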

We thus need to distinguish between recurrence rates which appear to be the same because of cancelling. The denominator will, unless stipulated otherwise, always refer to the next appearance of an event. 187/187 s/k is for example very different from 1/1 s/k since in the first case the event only repeats every 187th ksana while in the second case it repeats every ksana. This distinction is important particularly when we consider collisions. If there is any likelihood of confusion the denominator, which is the ksana value,  will be marked in bold, thus 187/187.
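A brief sketch of why the cancelled and uncancelled forms must be kept apart (again purely illustrative): the two chains occupy quite different sets of ksanas even though the reduced fractions are equal.

```python
def appearance_ksanas(n: int, how_many: int) -> list[int]:
    """Ksanas (counted from 0) at which a chain whose denominator is n reappears."""
    return [k * n for k in range(how_many)]

print(appearance_ksanas(1, 5))    # r/r = 1/1     -> [0, 1, 2, 3, 4]
print(appearance_ksanas(187, 5))  # r/r = 187/187 -> [0, 187, 374, 561, 748]
```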
Also, the stralda/ksana ratio for event-chains has an upper limit. That is, it is not possible for a given ultimate event to reappear more than M stralda to the right or left of its original position at the next ksana ─ this is more or less equivalent to setting c ≈ 10⁸ metres/second as the upper limit for causal processes. There is also an absolute limit N for the denominator irrespective of the value of the numerator, i.e. the event-chain with r/r = m/n terminates after n = (N−1). Since N is such an enormous number, this constraint can usually be ignored. An event or event-chain simply ceases to repeat when it reaches the limit.
These restrictions imply that the Locality, even when completely void of events, has certain inbuilt constraints. Given any two positions A and B occupied by ultimate events at ksana k, there is an upper limit to the number of ultimate events that can be fitted into the interval AB at the next or any subsequent ksana. This means that, although the Locality is not metrical in the way ordinary spatial expanses are, it is not true in UET that “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” (Note 1). (And this in turn means that many of the mathematical assumptions of Analysis and other areas of mathematics are unrealistic.)
Why is all this important or even worth stating? Because, unlike traditional physical systems, UET not only makes a distinction between constant and accelerated ‘motion’ (or rather their equivalents) but also between event-chains which have the same displacement rate but very different ‘reappearance rates’ — some repeating event-chains ‘miss out’ more ksanas than others.
A continuous function in Calculus is modelled as an unbroken line and, if we are dealing with a ‘moving object’, this object is assumed to ‘exist’ at every instant. In UET even a solid object is always discontinuous in that there is always a minute gap between consecutive appearances even in the case of the ‘densest’ event-chains. But, over and above this discontinuity which, since it is general and so minute, can usually be neglected, there remains the possibility of far more substantial discontinuities when a regularly repeating event may ‘miss out’ a large number of intermediate ksanas while nonetheless maintaining a regular rhythm. Giving the overall ‘speed’ and direction of an event-chain is not sufficient to identify it: a third property, the re-appearance rate, is required. There is all the difference in the world between an event-chain whose members (constituent ultimate events) appear at every consecutive ksana and an event-chain which only repeats at, say, every seventh or twentieth or hundredth ksana.
An important consequence is that a ‘particle’ (dense event-chain) can ‘pass through’ a solid barrier if the latter has a ‘tight’ reappearance rate while the ‘particle’ has one that is much more ‘spaced out’. Moreover, two ‘particles’ that have the same ‘speed’ (lateral displacement rate) but very different reappearance rates will behave very differently especially if their speeds are high relative to a barrier in front of them.
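The following toy simulation (my own construction, with arbitrary numbers, and making the simplifying assumption that an interaction requires an exact coincidence of ksana and grid position) illustrates how two chains with the same average displacement rate can behave quite differently at a barrier.

```python
# A toy illustration (my own construction, arbitrary numbers): a chain only
# 'interacts' with a barrier if one of its appearances coincides, in both
# ksana and position, with an occupied barrier spot.  Both chains below have
# the same average displacement rate of one stralda per ksana.

def schedule(m, n, lifetime=1000):
    """(ksana, position) pairs for a chain with r/r = m/n starting at (0, 0)."""
    return {(k * n, k * m) for k in range(lifetime // n)}

barrier_positions = set(range(50, 60))   # a barrier ten stralda 'thick'...
barrier_ksanas = set(range(1000))        # ...whose events recur at every ksana

def interacts(chain):
    return any(ks in barrier_ksanas and pos in barrier_positions
               for ks, pos in chain)

dense  = schedule(1, 1)     # r/r = 1/1  : appears at every position on its path
gapped = schedule(20, 20)   # r/r = 20/20: same 'speed', but 20-stralda leaps

print(interacts(dense))    # True  -- it cannot avoid appearing inside the barrier
print(interacts(gapped))   # False -- its appearances leap clean over the barrier
```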
This feature of UET enables me to make a prediction even at this early stage. Both photons and neutrinos have speeds that are close to c, but their behaviour is remarkably different. Whilst it is very easy to block light rays, neutrinos are incredibly difficult to detect because they have no difficulty ‘passing through’ barriers as substantial as the Earth itself without leaving a trace. It has been said that a neutrino can pass through miles of solid lead without interacting with anything, and indeed at this moment thousands are believed to be passing through my body and yours. On the basis of UET principles, this can only be so if the two event-chains known as ‘photon’ and ‘neutrino’ have wildly different reappearance rates, the neutrino having the most ‘spaced out’ r/r currently known. Thus, if it should ever become possible to detect the ultimate event patterns of these event-chains, the ‘neutrino’ event-chain would be extremely ‘gapped’ while the photon would be extremely dense, i.e. apparently ‘continuous’ (Note 2). The accompanying diagram will give some idea of what I have in mind.


The existence and importance of reappearance-rates is one of the two principal innovations of Ultimate Event Theory and it may well have a bearing on the vexed question of so-called wave-particle duality. From the UET perspective, neither waves nor particles are truly fundamental  entities since both are bonded collections of ultimate events. A ‘wave’ is a ‘spaced-out’ collection of ultimate events, a ‘particle’ a dense conglomeration (although both wave and particle at sufficiently high magnification would reveal themselves to be discontinuous). Nonetheless, the perspective of UET is clearly much closer to the ‘particle’ approach to electro-magnetism (favoured by Newton) and gives rise to the following prediction. Since high frequency, short wave phenomena are clearly more ‘bunched up’ than low frequency, long-wave phenomena, it should one day, perhaps soon, be possible to detect discontinuities in very long wave radio transmissions while short wavelength phenomena would still appear to be continuous. The discontinuity would manifest itself as an irreducible ‘flicker’ like that of a light rapidly turned on and off — and may well have been already observed as a strangely persistent annoyance. Moreover, one can only suppose that there is some mechanism at work which shifts wave-particle phenomena from one mode to the other; such a mechanism simply ‘spaces out’ the constituent ultimate events of an apparent particle, or forcefully combines wave-like ultimate events into a dense bundle.
SH   16/03/20

Note 1 The statement “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” would be the equivalent in UET terms of the axiom “Between any two points there is always another point” which underlies both classical Calculus and modern number theory. Coxeter (Introduction to Geometry p. 178) introduces “Between any two points there is always another point” as a theorem derived from the axioms of ‘Ordered Geometry’, an extremely basic form of geometry that takes as ‘primitive concepts’ only points and betweenness. The proof only works because the geometrical space in question entirely lacks the concept of distance whereas in UET the Locality, although in general non-metrical and thus distance-less, does have the concept of a minimum separation between positions where ultimate events can have occurrence. This follows from the general principle of UET, the  so-called Principle of Parmenides (who first enunciated it) slightly adapted,  “If there were no limits, nothing would persist”.
As against the above axiom of continuity of ‘Ordered Geometry’ which underlies Calculus and much else, one could, if need be,  introduce into UET the axiom, “It is not always possible to introduce a further ultimate event between two distinct ultimate events”.

Note 2. It is possible that this facility of passing through apparently impenetrable barriers is the explanation of ‘electron tunnelling’ which undoubtedly exists because a microscope has been manufactured that relies on the principle.

 


The Rise and Fall of Atomism

So-called ‘primitive’ societies by and large split the world into two, what one might call the Manifest (what we see, hear &c.) and the Unmanifest (what we don’t perceive directly but intuit or are subliminally aware of). For the ‘primitives’ everything originates in the Unmanifest, especially drastic and inexplicable changes like earthquakes, sudden storms, avalanches and so on,  but also more everyday but nonetheless mysterious occurrences like giving birth, changing a substance by heating it (i.e. cooking), growing up, aging, dying. The Unmanifest is understandably considered to be much more important than the Manifest — since the latter originates in the first but not vice-versa — and so the shaman, or his various successors, the ‘sage’, ‘prophet’, ‘initiate’ &c. claims to have special knowledge because he or she has ready access to the Unmanifest which normal people do not.  The shaman and more recently the priest is, or claims to be, an intermediary between the two realms, a sort of spiritual marriage broker. Ultimately, a single principle or ‘hidden force’ drives everything, what has been variously termed in different cultures mana, wakanda, ch’i ….  Ch’i is ‘what makes things go’ as Chuang-tzu puts it, in particular what makes things more, or less, successful. If the cheetah can run faster than all other animals, it is because the cheetah has more mana and the same goes for the racing car; a warrior wins a contest of strength because he has more mana, a young woman has more suitors because of mana and so on.
Charm and charisma are watered down modern versions of mana and, like mana, are felt to originate in the beyond, in the non here and now, in the Unmanifest. This ancient dualistic scheme is far from dead and is likely to re-appear in the most unexpected places despite the endless tut-tutting of rationalists and sceptics; as a belief system it is both plausible and comprehensible, even conceivably contains a kernel of truth. As William James put it, “The darker, blinder strata of character are the only places in the world in which we catch real fact in the making”.
Our own Western civilization, however, is founded on that of Ancient Greece (much more so than on ancient Palestine). The Greeks, the ones we take notice of at any rate, seem to have been the first people to have disregarded the Unmanifest entirely and to have considered that supernatural beings, whether they existed or not, were not required if one wanted to understand the physical universe: basic natural processes properly understood sufficed (Note 1). Democritus of Abdera, whose works have unfortunately been lost, kicked off a vast movement which has ultimately led to the building of the Large Hadron Collider, with his amazing statement, reductionist if ever there was one, Nothing exists except atoms and void.

Atoms and void, however, proved to be not quite enough to describe the universe: Democritus’s whirling atoms and the solids they composed when they settled themselves down were seemingly subject to certain ‘laws’ or ‘general principles’ such as the Law of the Lever or the Principle of Flotation, both clearly stated in quantitative form by Archimedes. But a new symbolic language, that of higher mathematics, was required to talk about such things since the “Book of Nature is written in the language of mathematics”, as Galileo, a Renaissance successor and great admirer of the Greeks, put it. Geometry stipulated the basic shapes and forms to which the groups of atoms were confined when they combined together to form regular solids — and so successfully that, since the invention of the high-definition microscope, ‘Platonic solids’ and other fantastical shapes studied by the Greek geometers can actually be ‘seen’ today, embodied in the arrangement of molecules in rock crystals and in the fossils of minute creatures known as radiolarians.
To all this Newton added the important notion of Force and gave it a precise meaning, namely the capacity to alter a body’s state of rest or constant straight-line motion, either by way of contact (pushes and pulls) or, more mysteriously, by ‘gravitational attraction’ which could operate at a distance through a vacuum. Nothing succeeds like success, and by the beginning of the nineteenth century Laplace had famously declared that he had “no need of that hypothesis” — the existence of God — to explain the movements of heavenly bodies, while Helmholtz later declared that “all physical problems are reducible to mechanical problems” and thus, in principle, solvable by applying Newton’s Laws. Why stop there? The dreadful implication, spelled out by maverick thinkers such as Hobbes and La Mettrie, was that human thoughts and emotions, maybe life itself, were also ultimately reducible to “matter and motion” and that it was only a question of time before everything would be completely explained scientifically.
The twentieth century has at once affirmed and destroyed the atomic hypothesis. Affirmed it because molecules and atoms, at one time considered by most physicists simply as useful fictions, can actually be ‘seen’ (i.e. mapped precisely) with a scanning tunnelling microscope, and substances ‘one atom thick’ like graphene are actually being manufactured, or soon will be. However, atoms have turned out not to be indestructible or even indivisible as Newton and the early scientists supposed. Atomism and materialism have, by a curious circuitous route, led us back to a place not so very far from our original point of departure since the 20th-century scientific buzzword, ‘energy’, has disquieting similarities to mana. No one has ever seen or touched ‘Energy’ any more than they have ever seen or touched mana. And, strictly speaking, energy in physics is ‘Potential Work’, i.e. Work which could be done but is not actually being done, while ‘Work’ in physics has the precise meaning Force × distance moved in the direction of the applied force. Energy is not something actual at all, certainly not something perceptible by the senses or their extensions; it is “strictly speaking a definition rather than a physical entity, merely being the first integral of the equations of motion” (Heading, Mathematical Methods in Science and Engineering p. 546). It is questionable whether statements in popular science books such as “the universe is essentially radiant energy” have any real meaning — taken literally they imply that the universe is ‘pure potentiality’, which it clearly isn’t.
The present era thus exhibits the contradictory tendencies of being on the one hand militantly secular and ‘materialistic’, both in the acquisitive and the philosophic senses of the word, while the foundations of this entire Tower of Babel, good old solid ‘matter’ composed of “hard, massy particles” (Newton) and “extended bodies” (Descartes), have all but evaporated. When he wished to refute the idealist philosopher, Bishop Berkeley, Samuel Johnson famously kicked a stone, but it would seem that the Bishop has had the last laugh.

A New Starting Point?

Since the wheel of thought concerning the physical universe has more or less turned full circle, a few brave 20th-century souls have wondered whether, after all, ‘atoms’ and ‘extended bodies’ were not the best starting point, and whether one might do better starting with something else. What though? There was in the early 20th century a resurgence of ‘animism’ on the fringes of science and philosophy, witness Bergson’s élan vital (‘Life-force’), Driesch’s ‘entelechy’ and similar concepts. The problem with such theories is not that they are implausible — on the contrary, they have strong intuitive appeal — but that they seem to be scientifically and technologically sterile. In particular, it is not clear how such notions can be represented symbolically by mathematical (or other) symbols, let alone tested in laboratory conditions.
Einstein, for his part, pinned his faith on ‘fields‘ and went so far as to state that “matter is merely a region where the field is particularly intense”. However, his attempt to unify physics via a ‘Unified Field’ was unsuccessful: unsuccessful for the layman because the ‘field‘ is an elusive concept at best, and unsuccessful for the physicist because Einstein never did succeed in combining mathematically the four basic physical forces, gravity, electro-magnetism and the strong and weak nuclear forces.
More recently, there have been one or two valiant attempts to present and elucidate the universe in terms of ‘information’, even to the extent of viewing it as a vast computer or cellular automaton (Chris Langton, Stephen Wolfram et al.). But such attempts may well one day appear just as crudely anthropomorphic as Boyle’s vision of the universe as a sort of glorified town clock. Apart from that, one hopes that the universe, or whatever is behind it, has better things to do than simply pile up endless stacks of data like the odious Super Brains of Olaf Stapledon’s prescient SF fantasy Last and First Men, whose only ‘emotion’ is curiosity.

The Event

During the Sixties and Seventies, at any rate within the booming counter-culture, there was a feeling that the West had somehow ‘got it wrong’ and was leading everyone towards disaster with its obsessive emphasis on material goods and material explanations. The principal doctrine of the hippie movement, inasmuch as it had one, was that “Experiences are more important than possessions” — and the more outlandish the experiences the better. Zen-style ‘Enlightenment’ suddenly seemed much more appealing than the Eighteenth-century movement of the same name which spearheaded Europe into the secular, industrial era. A few physicists, such as Fritjof Capra, argued that, although classical physics was admittedly very materialistic in the bad sense, modern physics “wasn’t like that” and had strong similarities with the key ideas of eastern mysticism. However, though initially attracted, I found modern physics (wave/particle duality, quantum entanglement, the Block Universe, &c. &c.) a shade too weird, and what followed soon after, String Theory, completely opaque to all but a small band of elite advanced mathematicians.
But the trouble didn’t start in the 20th century. Newtonian mechanics was clearly a good deal more sensible, but Calculus, when I started learning mathematics towards middle age, proved to be a major stumbling block, not so much because it was difficult to learn as because its basic principles and procedures were so completely unreasonable. D’Alembert is supposed to have said to a student who expressed some misgivings about manipulating infinitesimals, “Allez en avant; la foi vous viendra” (“Keep going, conviction will follow”), but in my case it never did. Typically, the acceleration (change of velocity) of a moving body is computed by supposing the velocity of the body to be constant during a certain ‘short’ interval in time; we then reduce this interval ‘to the limit’ and, hey presto! we have the derivative appearing like the rabbit out of the magician’s hat. But if the particle is always accelerating its speed is never constant, and if the particle is always moving, it is never at a fixed location. The concept of ‘instantaneous velocity’ is mere gobbledegook, as Bishop Berkeley pointed out to Newton centuries ago. In effect, ‘classical’ Calculus has its cake and eats it too — something we all like doing if we can get away with it — since it merrily sets δx to non-zero and zero simultaneously on opposite sides of the same equation. ‘Modern’, i.e. post mid-nineteenth-century, Calculus ‘solved’ the problem by the ingenious concept of a ‘limit’, the key idea in the whole of Analysis. Mathematically speaking, it turns out to be irrelevant whether or not a particular function actually attains a given limit (assuming it exists), just so long as it approaches closer than any desired finite quantity. But what anyone with an enquiring mind wants to know is whether in reality the moving arrow actually attains its goal or whether the closing door ever actually slams shut (to use two examples mentioned by Zeno of Elea). As a matter of fact, in neither case do they attain their objectives according to Calculus, modern or classical, since, except in the most trivial case of a constant function, ‘taking the derivative’ involves throwing away non-zero terms on the Right Hand Side which, however puny, we have no right to get rid of just because they are inconvenient. As Zeno of Elea pointed out over two thousand years ago, if the body is in motion it is not at a specific point, and if situated exactly at a specific point, it is not in motion.
This whole issue can, however, be easily resolved by the very natural supposition (natural to me at any rate) that intervals of time cannot be indefinitely diminished and that motion consists of a succession of stills, in much the same way as a film we see in the cinema gives the illusion of movement. Calculus only works, inasmuch as it does work, if the increment in the independent variable is ‘very small’ compared to the level of measurement we are interested in, and the more careful textbooks warn the student against relying on Calculus in cases where the minimum size of the independent variable is actually known — for example in molecular thermodynamics, where dn cannot be smaller than a single molecule.
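A tiny numerical illustration of this point (the function and the size of the interval below are arbitrary stand-ins of my own choosing): in a ‘succession of stills’ picture there is no instantaneous velocity, only a finite difference of positions over one minimal interval, and this difference agrees closely with the calculus derivative whenever that interval is small compared with the scale of interest.

```python
# Illustrative only: velocity as a finite difference over one minimal interval,
# compared with the 'instantaneous velocity' that calculus asserts.
dt = 0.001                          # stand-in for one indivisible interval
position = lambda t: 4.9 * t * t    # an arbitrary uniformly accelerated body

t = 2.0
finite_velocity = (position(t + dt) - position(t)) / dt   # difference of stills
derivative = 9.8 * t                                      # calculus result
print(finite_velocity, derivative)  # 19.6049 vs 19.6 -- close, but not equal
```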
In any case, on reflection, I realized that I had always felt ‘time’ to be discontinuous, and life to be made up of a succession of discrete moments. This implies — taking things to the limit — that there must be a minimal ‘interval of time’ which, moreover, is absolute and does not depend on the position or motion of an imaginary observer. I was thus heartened when, in my casual reading, I learned that nearly two thousand years ago certain Indian Buddhist thinkers had advanced the same supposition and even apparently attempted to give an estimate of the size of such an ‘atom of time’, which they referred to as a ksana. More recently, Whitrow, Stephen Wolfram and one or two others have given estimates of the size of a chronon based on the Planck limit — but it is not the actual size that is important so much as the necessary existence of such a limiting value (Note 2).
Moreover, taking seriously the Sixties mantra that “experiences are more important than things” I wondered whether one could, and should, apply this to the physical world and take as a starting point not the ‘fundamental thing’, the atom, but the fundamental event, the ultimate event, one that could not be further decomposed. The resulting general theory would be not so much physics as Eventrics, a theory of events which naturally separates out into the study of the equivalent of the microscopic and macroscopic realms in physics. Ultimate Event Theory, as the name suggests, deals with the supposed ultimate constituents of physical (and mental) reality – what Hinayana Buddhists referred to as dharma(s) — while large-scale Eventrics deals with ‘historical events’ which are massive bundles of ultimate events and which have their own ‘laws’.
The essential as far as I was concerned was that I suddenly had the bare bones of a physical schema: ‘reality’ was composed of events, not of objects (Note 3), or “The world is the totality of events and not of things”, to adapt Wittgenstein’s aphorism. Ultimate Event Theory was born, though it has taken me decades to pluck up the courage to put such an intuitively reasonable theory into the public domain, so enormous is the paradigm shift involved in these few innocuous-sounding assumptions.       S.H. (3/11/2019)

Note 1 There exists, however, an extremely scholarly (but nonetheless very readable) book, The Greeks and the Irrational by E.R. Dodds, which traces the history of an ‘irrational’ counter-current in Greek civilisation from Homer to Hellenistic times. The author, a professor of Greek and a one time President of the Psychical Research Society, asked himself the question, “Were the Greeks in fact quite so blind to the importance of non-rational factors in man’s experience and behaviour as is commonly assumed both by their apologists and by their critics?” The book in question is the result of his erudite ponderings on the issue.

Note 2 Caldirola suggests 6.97 × 10−24 seconds for the minimal temporal interval, the chronon ─ what I refer to by the Sanscrit term ksana. Other estimates exist such as 5.39 ×10–44  seconds. Causal Set Theory and some other contemporary relativistic theories assume minimal values for spatial and temporal intervals, though I did not know this at the time (sic).

Note 3 Bertrand Russell, of all people, clearly anticipated the approach taken in UET, but made not the slightest attempt to lay out the conceptual foundations of the subject. “Common sense thinks of the physical world as composed of ‘things’ which persist through a certain period of time and move in space. Philosophy and physics developed the notion of ‘thing’ into that of ‘material substance’, and thought of material substance as consisting of particles, each very small, and each persisting throughout all time. Einstein substituted events for particles; each event had to each other a relation called ‘interval’, which could be analyzed in various ways into a time-element and a space-element. (…) From all this it seems to follow that events, and not particles, must be the ‘stuff’ of physics. What has been thought of as a particle will have to be thought of as a series of events. (…) ‘Matter’ is not part of the ultimate material of the world, but merely a convenient way of collecting events into bundles.” Russell, History of Western Philosophy p. 786 (Counterpoint, 1979).

 

Every event or event cluster is in Ultimate Event Theory (UET) attributed a recurrence rate (r/r) given in absolute units stralda/ksana where the stralda is the minimal spatial interval and the ksana the minimal temporal interval. r/r can in principle take the value of any rational number m/n or zero ─ but no irrational value. The r/r of an event is roughly the equivalent of its speed in traditional physics, i.e. it is a distance/time ratio.

If r/r = 0, this means that the event in question does not repeat.
If r/r = m/n this signifies that the event repeats m positions to the right every n ksanas and if r/r = −m/n it repeats m positions to the left.

But right or left relative to what? It is necessary to assume a landmark event-chain where successive ultimate events lie exactly above (or underneath) each other when one space-time ‘slice’ is replaced by the next. Such an event-chain is roughly the equivalent of an inertial system in normal physics. We generally assume that we ourselves constitute a standard  inertial system relative to which all other inertial systems can be compared ─ we ‘are where we are’ at all instants and so, in a certain sense, are always at rest. In a similar way we constitute a sort of standard landmark event-chain to which all other event-chains can be related. But we cannot see ourselves so we choose instead as standard landmark event chain some  object (=repeating event-cluster) that remains at a constant distance from us as far as we can tell.  Such a choice is clearly relative, but we have to choose some repeating event chain as standard in order to get going at all. The crucial difference is, of course, not between ‘vertical’ event-paths and ‘slanting’ event-paths but between ‘straight’ paths, whether vertical or not, and ones that are jagged or curved, i.e. not straight (assuming these terms are appropriate in this context). As we know, dynamics only really took off when Galileo, as compared to Aristotle, realized that it was the distinction between accelerated and non-accelerated motion that was fundamental, not that between rest and motion.

So, the positive or negative (right or left) m variable in m/n assumes some convenient ‘vertical’ landmark sequence.

The denominator n of the stralda/ksana ratio cannot ever be zero ─ not so much because ‘division by zero is not allowed’ as because “the moving finger writes and, having writ, moves on”, as the Rubaiyàt puts it, i.e. time only stands still for the space of a single ksana. So, an r/r where an event repeats but ‘stays where it is’ at each appearance takes the value 0/n, which we need to distinguish from 0.
Thus 0/n ≠ 0

m/n is a ratio but, since the numerator is in the absolute unit of distance, the stralda, m:n is not the same as (m/n) : 1 unless m = n.  To say a particle’s speed is 4/5ths of a metre per second is meaningful, but if r/r = 4/5 stralda per ksana we cannot conclude that the event in question shifts 4/5ths of a stralda to the right every ksana (because the stralda is indivisible). All we can conclude is that the event in question repeats every fifth ksana at  a position four spaces to the right relative to its original position.
We thus need to distinguish between recurrence rates which appear to be the same because of cancelling. The denominator will thus, unless stipulated otherwise, always refer to the next appearance of an event. 187/187 s/k is for example very different from 1/1 s/k since in the first case the event only repeats every 187th ksana while in the second case it repeats every ksana. This distinction is important when we consider collisions. If there is any likelihood of confusion the denominator will be marked in bold, thus 187/187.

Also, the stralda/ksana ratio for event-chains always has an upper limit. That is, it is not possible for a given ultimate event to reappear more than M stralda to the right or left of its original position at the next ksana ─ this is more or less equivalent to setting c ≈ 10⁸ metres/second as the upper limit for causal processes according to Special Relativity. There is also an absolute limit N for the denominator irrespective of the value of the numerator, i.e. the event-chain with r/r = m/n terminates after n = (N−1) — or at the Nth ksana if it is allowed to attain the limit.

These restrictions mean that the Locality, even when completely void of events, has certain inbuilt constraints. Given any two positions A and B occupied by ultimate events at ksana k, there is an upper limit to the number of ultimate events that can be fitted into the interval AB at the next or any subsequent ksana. This means that, although the Locality is certainly not metrical in the way ordinary spatial expanses are, it is not true in UET that “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” (Note 1).       SH  11/09/19

Note 1 The statement “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” is the equivalent in UET of the axiom “Between any two points there is always another point” which underlies both classical Calculus and modern number theory. Coxeter (Introduction to Geometry p. 178) introduces “Between any two points….” as a theorem derived from the axioms of ‘Ordered Geometry’, an extremely basic form of geometry that takes as ‘primitive concepts’ only points and betweenness. The proof only works because the geometrical space in question entirely lacks the concept of distance whereas in UET the Locality, although in general non-metrical and thus distance-less, does have the concept of a minimum separation between positions where ultimate events can have occurrence. This follows from the general principle of UET based on a maxim of the great ancient Greek philosopher Parmenides:
“If there were no limits, nothing would persist except the limitless itself”.