As related in the previous post, Einstein, in his epoch-making 1905 paper, based his theory of Special Relativity on just two postulates,

  1. The laws of physics take the same form in all inertial frames.
  2. The speed of light in free space has the same value for all observers in inertial frames irrespective of the relative motion of the source and the observer.

I asked myself if I could derive the main results of the Special Theory, the Rule for the Addition of Velocities, Space Contraction, Time Dilation and the ‘Equivalence’ of Mass and Energy from UET postulates.
Instead of Einstein’s Postulate 2, the ‘absolute value of the speed of light’, I employ a more general but very similar principle, namely that there is a ‘limiting speed’ for the propagation of causal influences from one spot on the Locality to another. In the simplest case, that of an  event-chain consisting of a single ultimate event that repeats at every ksana, this amounts to asking ourselves ‘how far’ the causal influence can travel ‘laterally’ from one ksana to the next. I see the Locality as a sort of grid extending indefinitely in all directions where  each ‘grid-position’ or ‘lattice-point’ can receive one, and only one, ultimate event (this is one of the original Axioms, the Axiom of Exclusion). At each ksana the entire previous spatial set-up is deftly replaced by a new, more or less identical one. So, supposing we can locate the ‘same’ spot, i.e. the ‘spot’ which replaces the one where the ultimate event had occurrence at the last ksana, is there a limit to how far to the left (or right) of this spot the ultimate event can re-occur? Yes, there is. Why? Well, I simply cannot conceive of there being no limit to how far spatially an ‘effect’ ─ in this case the ‘effect’ is a repetition of the original event ─ can be from its cause. This would be a holographic nightmare where anything that happens here affects, or at least could affect, what happens somewhere billions of light years away. One or two physicists, notably Heisenberg, have suggested something of the sort but, for my part, I cannot seriously contemplate such a state of affairs.  Moreover, experience seems to confirm that there is indeed a ‘speed limit’ for all causal processes, the limit we refer to by the name of c.
However, this ‘upper speed limit’ has a somewhat different and sharper meaning in Ultimate Event Theory than it does in matter-based physics because c (actually c*) is an integer and corresponds to a specific number of adjacent ‘grid-positions’ on the Locality existing at or during a single ksana. It is a distance rather than a speed and even this is not quite right : it is a ‘distance’ estimated not in terms of ‘lengths’ but only in terms of the number of intermediary ultimate events that could conceivably be crammed into this interval.
In UET a distinction is made between an attainable limiting number of grid-positions to right (or left), denoted c*, and the lowest unattainable limit, c, though this finicky distinction can in many cases be neglected. But the basic schema is this. A ‘causal influence’, to be effective, must not only be able to at least traverse the distance between one ksana and the next ‘vertically’ (otherwise nothing would happen) but must also stretch out ‘laterally’, i.e. ‘traverse’ or rather ‘leap over’ a particular number of grid-positions. There is an upper limit to the number of positions that can be ‘traversed’, namely c*, an integer. This number, which is very great but not infinite ─ actual infinity is completely banished from UET ─ defines the universe we (think we) live in since it puts a limit to the operation of causality (as Einstein clearly recognized), and without causality there can, as far as I am concerned, be nothing worth calling a universe. Quite conceivably, the value of this constant c (or c*) is very different in other universes, supposing they exist, but we are concerned only with this ‘universe’ (massive causally connected more or less identically repeating event-cluster).
So far, so good. This sounds a rather odd way of putting things, but we are still pretty close to Special Relativity as it is commonly taught. What of Einstein’s other principle? Well, firstly, I don’t much care for the mention of “laws of physics”, a concept which Einstein, along with practically every other modern scientist, inherited from Newton and which harks back to a theistic world-view whereby God, the supreme law-giver, formulated a collection of ‘laws’ that everything must, from the moment of Creation, obey ─ everything material at any rate. My concern is with what actually happens, whether or not what happens is ‘lawful’. Nonetheless, there do seem to be certain very general principles that apply across the board and which may, somewhat misleadingly, be classed as laws. So I shall leave this question aside for the moment.
The UET Principle that replaces Einstein’s First Principle (“that the laws of physics are the same in all inertial frames”) is rather tricky to formulate but, if the reader is patient and broad-minded enough, he or she should get a good idea of what I have in mind. As a first formulation, it goes something like this:

The occupied region between two or more successive causally related positions on the Locality is invariant. 

         This requires a little elucidation. To start with, what do I understand by ‘occupied region’? At least to a first approximation, I view the Locality (the ‘place’ where ultimate events can and do have occurrence) as a sort of three-dimensional lattice extending in all directions which flashes on and off rhythmically. It would seem that extremely few ‘grid-spots’ ever get occupied at all, and even fewer spots ever become the seats of repeating events, i.e. the location of the first event of an event-chain. The ‘Event Locality’ of UET, like the Space/Time of matter-based physics, is a very sparsely populated place.
Now, suppose that an elementary event-chain has formed but is marooned in an empty region of the Locality. In such a case, it makes no sense to speak of ‘lateral displacement’ : each event follows its predecessor and re-appears at the ‘same’ ─ i.e.  ‘equivalent’ ─ spot. Since there are no landmark events and every grid-space looks like every other, we can call such an event-chain ‘stationary’. This is the default case, the ‘inertial’ case to use the usual term.
We concentrate for the moment on just two events, one the clone of the other re-appearing at the ‘same spot’ a ksana later. These two events in effect define an ‘Event Capsule’ extending from the centre (called ‘kernel’ in UET) of the previous grid-space to the centre of the current one and span a temporal interval of one ksana. Strictly speaking, this ‘Event Capsule’ has two parts, one half belonging to the previous ksana and the other to the second ksana, but, at this stage, there is no more than a thin demarcation line separating the two extremities of the successive ksanas. Nonetheless, it would be quite wrong (from the point of view of UET) to think of this ‘Event Capsule’ and the whole underlying ‘spatial/temporal’ set-up as being ‘continuous’. There is no such thing as a ‘Space/Time Continuum’ as Minkowski understood the term.  ‘Time’ is not a dimension like ‘depth’ which can seamlessly be added on to ‘length’ or ‘width’ : there is a fundamental opposition between the spatial and temporal aspect of things that no physical theory or mathematical artifice can completely abolish. In the UET  model, the demarcations between the ‘spatial’ parts of adjacent Event Capsules do not widen, they  remain simple boundaries, but the demarcations between successive ksanas widen enormously, i.e. there are gaps in the ‘fabric’ of time. To be sure there must be ‘something’ underneath which persists and stops everything collapsing, but this underlying ‘substratum’ has no physical properties whatsoever, no ‘identity’, which is why it is often referred to, not inaccurately, both in Buddhism and sometimes even in modern physics, as ‘nothing’.
To return to the ‘Constant Region Postulate’. The elementary ‘occupied region’ may be conceived as a ‘Capsule’ having the dimensions s0 × s0 × s0 = s0³ for the spatial extent and t0 for time, i.e. a region of extent s0³ × t0. These dimensions are fixed once and for all and, in the simplest UET model, s0 is a maximum and t0 is a minimum. Restricting ourselves for simplicity to a single spatial dimension and a single temporal dimension, we thus have an ‘Event Rectangle’ of s0 by t0.
        For anything of interest to happen, we need more than one event-chain and, in particular, we need at least three ultimate events, one of which is to serve as a sort of landmark for the remaining pair. It is only by referring to this hypothetical or actual third event, occurring as it does at a particular spot independently of the event-pair, that we can meaningfully talk of the ‘movement’ to left or right of the second ultimate event in the pair with relation to the first. Alternatively, one could imagine an ultimate event giving rise to two events, one occurring ‘at the same spot’ and the other so many grid-spaces to the right (or left). In either case, we have an enormously expanded ‘Event Capsule’, spatially speaking, compared to the original one. The Principle of the Constancy of the Area of the Occupied Region asserts that this ‘expanded’ Event Capsule, which we can imagine as a ‘Space/Time rectangle’ (rather than a Space/Time parallelepiped), always has the ‘same’ area.
How can this be possible? Quite simply by making the spatial and temporal ‘dimensions’ inversely proportional to each other. As I have detailed in previous posts, we have in effect a ‘Space/Time Rectangle’ of sides sv and tv (subscript v for variable) such that sv × tv = s0 × t0 = Ω = constant. Just conceivably, one could make s0 a minimum and t0 a maximum but this would result in a very strange universe indeed. In this model of UET, I take s0 as a maximum and t0 as a minimum. These dimensions are those of the archetypal ‘stationary’ or ‘inertial’ Event Capsule, one far removed from the possible influence of any other event-chains. I do not see how the ‘mixed ratio’ s0 : t0 can be determined on the basis of any fundamental physical or logical considerations, so this ratio just ‘happens to be’ what it is in the universe we (think we) live in. This ratio, along with the determination of c, which is a number (a positive integer), are the most important constants in UET and different values would give rise to very different universes. In UET s0/t0 is often envisaged in geometrical terms : tan β = s0/t0 = constant. s0 and t0 also have minimum and maximum values respectively, noted as su and tu, the subscript u standing for ‘ultimate’. We thus have a hyperbola but one constrained within limits so that there is no risk of ‘infinite’ values.

(Relativity Hyperbola Diagram: the Space/Time hyperbola sv × tv = Ω, constrained between the limiting values su and tu.)

What is ‘speed’?   Speed is not one of the basic SI units. The three SI mechanical units are the metre, the standard of length, the kilogram, the standard of mass, and the second, the standard of time. (The remaining four units are the ampere, kelvin, candela and mole). Speed is a secondary entity, being the ratio of space to time, metre to second. For a long time, since Galileo in fact, physicists have recognized the ‘relational’ nature of speed, or rather velocity (which is a ‘vector’ quantity, speed + direction). To talk meaningfully about a body’s speed you need to refer it to some other body, preferably a body that is, or appears to be, fixed (Note 1). This makes speed a rather insubstantial sort of entity, a will-o’-the-wisp, at any rate compared to  ‘weight’, ‘impact’, ‘position’, ‘pain’ and so forth. The difficulty is compounded by the fact that we almost always consider ourselves to be ‘at rest’ : it is the countryside we see and experience whizzing by us when seated in a train. It requires a tremendous effort of imagination to see things from ‘the other object’s point of view’. Even a sudden jolt, an acceleration, is registered as a temporary annoyance that is soon replaced by the same self-centred ‘state of rest’. Highly complex and contrived set-ups like roller-coasters and other fairground machines are required to give us the sensation of ‘acceleration’ or ‘irregular movement’, a sensation we find thrilling precisely because it is so inhabitual. Basically, we think of ourselves as more or less permanently at rest, even when we know we are moving around. In UET everything actually is at rest for the space of a single ksana, it does not just appear to be and everything that happens occurs ‘at’ or ‘within’ a ksana (the elementary temporal interval).
I propose to take things further ─ not in terms of personal experience but physical theory. As stated, there is in UET no such thing as ‘continuous motion’, only succession ─ a succession of stills. An event takes place here, then a ksana or more later, another event, its replica perhaps, takes place there. What matters is what occurs and the number and order of the events that occur, everything else is secondary. This means not only that ultimate events do not move around ─ they simply have occurrence where they do have occurrence ─  but also that the distances between the events are in a sense ‘neither here nor there’, to use the remarkably  apt everyday expression. In UET v signifies a certain number of grid-spaces to right or left of a fixed point, a shift that gets repeated every ksana (or in more complex cases with respect to more than one ksana). In the case of a truncated event-chain consisting of just two successive events, v is the same as d, the ‘lateral displacement’ of event 2 with respect to the position of event 1 on the Locality (more correctly, the ‘equivalent’ of such a position a ksana later). Now, although the actual number of ‘grid-positions’ to right or left of an identifiable spot on the Locality is fixed, and continues to be the same if we are dealing with a ‘regular’ event-chain, the distance between the centres (‘kernels’) of adjacent spots is not fixed but can take any number (sic) of permissible values ranging from 0 to c* according to the circumstances. The ‘distance’ from one spot to another can thus be reckoned in a variety of legitimate ways ─ though the choice is not ‘infinite’. The force of the Constancy of the Occupied Region Principle is that, no matter how these intra-event distances are measured or experienced, the overall ‘area’ remains the same and is equal to that of the ‘default’ case, that of a ‘stationary’ Event Capsule (or in the more extended case a succession of such capsules).
This is a very different conception from that which usually prevails within Special Relativity as it is understood and taught today. Discussing the question of the ‘true’ speed of a particular object whose speed is different according to what co-ordinate system you use, the popular writer on mathematics, Martin Gardner, famously wrote, “There is no truth of the matter”. Although I understand what he meant, this is not how I would put it. Rather, all permissible ‘speeds’, i.e. all integral values of v, are “the truth of the matter”. And this does not lead us into a hopeless morass of uncertainty where “everything is relative” because, in contrast to ‘normal’ Special Relativity, there is in UET always a fixed framework of ultimate events whose number within a certain region of the Locality and whose individual ‘size’ never changes. How we evaluate the distances between them, or more precisely between the spots where they can and do occur, is an entirely secondary matter (though often one of great interest to us humans).

Space contraction and Time dilation 

In most books on Relativity, one has hardly begun before being launched into what is pretty straightforward stuff for someone at undergraduate level but what is, for the layman, a completely indigestible mass of algebra. This is a pity because the actual physical principle at work, though it took the genius of Einstein to detect its presence, is extremely simple and can much more conveniently be presented geometrically rather than, as usual today, algebraically. As far as I am concerned, space contraction and time dilation are facts of existence that have been shown to be true in any number of experiments : we do not notice them because the effects are very small at our perceptual level. Although it is probably impossible to completely avoid talking about ‘points of view’ and ‘relative states of motion’ and so forth, I shall try to reduce such talk to a minimum. It makes a lot more sense to forget about hypothetical ‘observers’ (who most of the time do not and could not possibly exist) and instead envisage length contraction and time dilation as actual mechanisms which ‘kick in’ automatically, much as the centrifugal governor on Watt’s steam-engine kicks in to regulate the supply of steam and hence the speed of the engine. See things like this and keep at the back of your mind a skeletal framework of ultimate events and you won’t have too much trouble with the concepts of space contraction and time dilation. After all, why should the distances between events have to stay the same? It is like only being allowed to take photographs from a standing position. These distances don’t need to stay the same provided the overall area or extent of the ‘occupied region’ remains constant, since it is this, and the causally connected events within it, that really matters.
Take v to represent a certain number of grid-spaces in one direction which repeats; for our simple truncated event-chain of just two events it is d , the ‘distance’ between two spots. d is itself conceived as a multiple of the ‘intra-event distance’, that  between the ‘kernels’ of any two adjacent ‘grid-positions’ in a particular direction. For any specific case, i.e. a given value of d or v, this ‘inter-possible-event’ distance does not change, and the specific extent of the kernel, where every ultimate event has occurrence if it does have occurrence, never changes ever. There is, as it were, a certain amount of ‘pulpy’, ‘squishy’ material (cf. cytoplasm in a cell) which surrounds the ‘kernel’ and which is, as it were, compressible. This for the ‘spatial’ part of the ‘Event Capsule’. The ‘temporal’ part, however, has no pulp but is ‘stretchy’, or rather the interval between ksanas is.
If the Constant Region Postulate is to work, we have somehow to arrange things so that, for a given value of v or d, the spatial and temporal distances sort themselves out in such a way that the overall area nonetheless remains the same. How to do this? The following geometrical diagram illustrates one way of doing this by using the simple formula tan θ = v/c = sin φ. Here v is an integral number of grid-positions ─ the more complex case where v is a rational number will be considered in due course ─ and c is the lowest unattainable limit of grid-positions (in effect c* + 1).
Do these contractions and dilations ‘actually exist’ or are they just mathematical toys? As far as I am concerned, the ‘universe’ or whatever else you want to call what is out there, does exist and such simultaneous contractions and expansions likewise. Put it like this. The dimensions of loci (spots where ultimate events could in principle have occurrence) in a completely empty region of the Locality do not expand and contract because there is no ‘reason’ for them to do so : the default dimensions suffice. Even when we have two spots occupied by independent, i.e. completely disconnected,  ultimate events nothing happens : the ‘distances’ remain the ordinary stationary ones. HOWEVER, as soon as there are causal links between events at different spots, or even the possibility of such links, the network tightens up, as it were, and one can imagine causal tendrils stretching out in different directions like the tentacles of an octopus. These filaments or tendrils can and do cause contractions and expansions of the lattice ─ though there are definite elastic limits. More precisely, the greater the value of v, the more grid-spaces the causal influence ‘misses out’ and the more tilted the original rectangle becomes in order to preserve the same overall area.
We are for the moment only considering a single ‘Event Capsule’ but, in the case of a ‘regular event-chain’ with constant v ─ the equivalent of ‘constant straight-line motion’ in matter-based physics ─ we have  a causally connected sequence of more or less identical ‘Event Capsules’ each tilted from the default position as much as, but no more than, the last (since v is constant for this event-chain).
This simple schema will take us quite a long way. If we compare the ‘tilted’ spatial dimension to the horizontal one, calling the latter d and the former d′ we find from the diagram that d′ cos φ = d and likewise that t′ = t/cos φ . Don’t bother about the numerical values : they can be worked out  by calculator later.
These are essentially the relations that give rise to the Lorentz Transformations but, rather than state these formulae and get involved in the whole business of convertible co-ordinate systems, it is better for the moment to stay with the basic idea and its geometrical representation. The quantity noted cos φ which depends on  v and c , and only on v and c, crops up a lot in Special Relativity. Using the Pythagorean Formula for the case of a right-angled triangle with hypotenuse of unit length, we have

(1 · cos φ)² + (1 · sin φ)²  =  1²     or     cos² φ + sin² φ = 1
        Since sin φ is set at v/c we have
        cos² φ  =  1 – sin² φ  =  1 – (v/c)²       cos φ = √(1 – (v/c)²)

         More often than not, this quantity √(1 – v²/c²) (referred to as 1/γ in the literature) is transferred over to the other side so we get the formula

         d′ = (1/cos φ) d  =  d/√(1 – v²/c²)  =  γ d

Viewed as an angle, or rather the reciprocal of the cosine of an angle, the ubiquitous γ of Special Relativity is considerably less frightening.
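For readers who like to check things numerically, here is a minimal sketch of my own (the value used for c is merely illustrative) showing that setting sin φ = v/c and taking γ = 1/cos φ gives exactly the familiar 1/√(1 – v²/c²):

```python
import math

def gamma_from_angle(v, c):
    """gamma viewed as the reciprocal of a cosine: sin(phi) = v/c, gamma = 1/cos(phi)."""
    phi = math.asin(v / c)
    return 1.0 / math.cos(phi)

def gamma_standard(v, c):
    """The familiar form 1/sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 299_792_458.0                       # illustrative value of c in metres per second
for v in (0.0, 0.5 * c, 0.9 * c, 0.99 * c):
    print(v / c, gamma_from_angle(v, c), gamma_standard(v, c))   # the two columns agree
```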

A Problem
It would appear that there is going to be a problem as d, or in the case of a repeating ‘rate’, v, approaches the limit c. Indeed, it was for this reason that I originally made a distinction between an attainable distance (attainable in one ksana), c*, and an unattainable one, c. Unfortunately, this does not eliminate all the difficulties but discussion of this important point will  be left to another post. For the moment we confine ourselves to ‘distances’ that range from 0 to c* and to integral values of d (or v).

Importance of the constant c* 

Now, it must be clearly understood that all sorts of ‘relations’ ─ perhaps correlations is an apter term ─ ‘exist’ between arbitrarily distant spots on the Locality (distant either spatially or temporally or both) but we are only concerned with spots that are either occupied by causally connected ultimate events, or could conceivably be so occupied. For event-chains with a 1/1 ‘reappearance rhythm’, i.e. one event per ksana, the relation tan θ = v/c = sin φ (v < c) applies (see diagram) and this means that grid-spots beyond the point labelled c (and indeed c itself) lie ‘outside’ the causal ‘Event Capsule’. Anything that I am about to deduce, or propose, about such an ‘Event Capsule’ in consequence does not apply to such points and the region containing them. Causality operates only within the confines of single ‘Event Capsules’ of fixed maximum size, and, by extension, connected chains of similar ‘Event Capsules’.
Within the bounds of the ‘Event Capsule’ the Principle of Constant Area applies. Any way of distinguishing or separating the spots where ultimate events can occur is acceptable, provided the setting is appropriate to the requirements of the situation. Distances are in this respect no more significant than, say, colours, because they do not affect what really matters : the number of ultimate events (or number of possible emplacements of ultimate events) between two chosen spots on the Locality, and the order of such events.
Now, suppose an ultimate event can simultaneously produce a  clone just underneath the original spot,  and  also a clone as far as possible to the right. (I doubt whether this could actually happen but it is a revealing way of making a certain point.)
What is the least shift to the right or left? Zero. In such a case we have the default case, a ‘stationary’ event-chain, or a pair belonging to such a chain. The occupied area, however, is not zero : it is the minimal s0³ t0. Setting v = 0 in the formula d′ = (1/cos φ) d makes γ = 1/√(1 – 0²/c²) = 1, so there is no difference between d′ and d. (But it is not the formula that dictates the size of the occupied region, as physicists tend to think : it is the underlying reality that validates the formula.)
For any value of d, or, in the case of repetition of the same lateral distance at each ksana, any value of v, we tilt the rectangle by the appropriate amount, or fit this value into the formula. For v = 10 grid-spaces, for example, we will have a tilted Space/Time Rectangle with one side (10 cos φ) s0 and the other side (1/(10 cos φ)) t0, where sin φ = 10/c so cos φ = √(1 – (10/c)²). This is an equally valid space/time setting because the overall area is
         (10 cos φ) s0  ×  (1/(10 cos φ)) t0  =  s0 t0

We can legitimately apply any integral value of v < c and we will get a setting which keeps the overall area constant. However, this is done at a cost : the distance between the centres of the spatial element of the event capsules shrink while the temporal distances expand. The default distance s0 has been shrunk to s0 cos φ, a somewhat smaller intra-event distance, and the default temporal interval t0 has been stretched to t0 /cos φ , a somewhat greater distance. Remark, however, that sticking to integral values of d or v means that cos φ does not, as in ‘normal’ physics, run through an ‘infinite’ gamut of values ─ and even when we consider the more complex case, taking reappearance rhythms into account, v is never, strictly never, irrational.
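The Constant Area principle as used here is easy to check with toy numbers. This is only an illustrative sketch (the values chosen for s0, t0 and c are arbitrary assumptions, and only integral values of v are taken, as above):

```python
import math

s0, t0 = 1.0, 1.0          # illustrative 'rest' dimensions of the Event Capsule
c = 100                    # illustrative (unrealistically small) limiting number

omega = s0 * t0            # the invariant 'area' of the occupied region
for v in range(0, c):      # integral values of v from 0 up to c* = c - 1
    cos_phi = math.sqrt(1.0 - (v / c) ** 2)    # from sin(phi) = v/c
    s_v = s0 * cos_phi                         # contracted spatial side
    t_v = t0 / cos_phi                         # dilated temporal side
    assert math.isclose(s_v * t_v, omega)      # the area stays equal to s0 * t0
```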
What is the greatest possible lateral distance? Is there one? Yes, by Postulate 2 there is and this maximal number of grid-points is labelled c*. This is a large but finite number and is, in the case of integral values of v, equal to c – 1. In other words, a grid-space c spaces to the left or right is just out of causal range and everything beyond likewise (Note 2).

Dimensions of the Elementary Space Capsule

I repeat the two basic postulates of Ultimate Event Theory that are in some sense equivalent to Einstein’s two postulates. They are

1. The mixed Space/Time volume/area of the occupied parallelepiped/rectangle remains constant in all circumstances

 2. There is an upper limit to the lateral displacement of a causally connected event relative to its predecessor in the previous ksana

        Now, suppose we have an ultimate event that simultaneously produces a clone at the very next ksana in an equivalent spot AND another clone at the furthest possible grid-point c*. Even, taking things to a ridiculous extreme to make a point, suppose that a clone event is produced at every possible emplacement in between as well. Now, by the Principle of the Constancy of the Occupied Region, the entire occupied line of events in the second ksana can either have the ‘normal’ spacing between events which is that of the ‘rest’ distance between kernels, s0, or, alternatively, we may view the entire line as being squeezed into the dimensions of a single ‘rest’ capsule, a dimension s0 in each of three spatial directions (only one of which concerns us). In the latter case, the ‘intra-event’ spacing will have shrunk to zero ─ though the precise region occupied by an ultimate event remains the same. Since intra-event distancing is really of no importance, either of these two opposed treatments is ‘valid’.
What follows is rather interesting: we have the spatial dimension of a single ‘rest’ Event Capsule in terms of su, the dimension of the kernel. Since, in this extreme case, we have c* events squashed inside a lateral dimension of s0, this means that
s0 = c* su, i.e. the relation s0 : su = c* : 1. But s0 and su are, by hypothesis, universal constants and so is c*. Furthermore, since by definition sv tv = s0 t0 = Ω = constant, t0/tv = sv/s0 and, fitting in the ‘ultimate’ s value, we have t0/tu = su/(c* su) = 1/c*, i.e. tu = c* t0. In the case of ‘time’, the ‘ultimate’ dimension tu is a maximum since (by hypothesis) t0 is a minimum. c* is a measure of the extent of the elementary Event Capsule and this is why it is so important.
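The chain of ratios above can be checked with toy numbers; the value chosen for c* below is purely hypothetical and only the proportions matter:

```python
c_star = 1000              # hypothetical value of c* (the real value is far larger)
s_u = 1.0                  # 'ultimate' (kernel) spatial dimension
s_0 = c_star * s_u         # rest spatial dimension of the capsule: s0 = c* su

t_0 = 1.0                  # rest (minimum) temporal dimension
t_u = c_star * t_0         # 'ultimate' (maximum) temporal dimension: tu = c* t0

# The extreme setting (su, tu) occupies the same 'area' as the rest setting (s0, t0):
assert s_u * t_u == s_0 * t_0
```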
In UET everything is, during the space of a single ksana, at rest and in effect problems of motion in normal matter-based physics become problems of statics in UET ─ in effect I am picking up the lead given by the ancient Greek physicists for whom statics was all and infinity non-existent. Anticipating the discussion of mass in UET, or its equivalent, this interpretation ‘explains’ the tremendously increased resistance of a body to (relative) acceleration : something that Bucherer and others have demonstrated experimentally. This resistance is not the result of some arbitrary “You mustn’t go faster than light” law : it is the resistance of a region on the Locality of fixed extent to being crammed full to bursting with ultimate events. And it does not matter if the emplacements inside a single Event Capsule are not actually filled : these emplacements, the ‘kernels’, cannot be compressed whether occupied or not. But an event occurring at the maximum number of places to the right, is going to put the ‘Occupied Region’ under extreme pressure to say the least. In another post I will also speculate as to what happens if c* is exceeded supposing this to be possible.      SH    9/3/14

Notes:

Note 1  Zeno of Elea noted the ‘relativity of speed’ about two and a half thousand years before Einstein. In his “Paradox of the Chariot”, the least known of his paradoxes, Zeno asks what is the ‘true’ speed of a chariot engaged in a chariot race. A particular chariot has one speed with respect to its nearest competitor, another compared to the slowest chariot, and a completely different one again relative to the spectators. Zeno concluded that “there was no true speed” ─ I would say, “no single true speed”.

Note 2  The observant reader will have noticed that when evaluating sin φ = v/c and thus, by implication, cos φ as well, I have used the ‘unattainable’ limit c while restricting v to the values 0 to c*, thus stopping 1/cos φ from becoming infinite. Unfortunately, this finicky distinction, which makes actual numerical calculations much more complicated,  does not entirely eliminate the problem as v goes to c, but this important issue will be left aside for the moment to be discussed in detail in a separate post.
If we allow only integral values of v ranging from 0 to c* = (c – 1), the final tilted Causal Rectangle has a ludicrously short ‘spatial side’ and a ridiculously long ‘temporal side’ (which means there is an enormous gap between ksanas). We have in effect

tan θ = (c–1)/c  (i.e. the angle is nearly 45 degrees or π/4)
and γ = 1/√(1 – (c–1)²/c²)  =  c/√(c² – (c–1)²)  =  c/√(2c – 1)
Now, 2c – 1 is very close to 2c, so γ ≈ c/√(2c) = √(c/2)
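A rough numerical check of this limiting value (the figure used for c below is purely illustrative):

```python
import math

c = 10**8                                  # illustrative value for the integer c
gamma_exact  = c / math.sqrt(2 * c - 1)    # gamma = c / sqrt(c^2 - (c-1)^2)
gamma_approx = math.sqrt(c / 2)            # using 2c - 1 ≈ 2c
print(gamma_exact, gamma_approx)           # the two agree very closely for large c
```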

I am undecided as to whether any particular physical importance should be given to this value ─ possibly experiment will decide the issue one day.
In the event of v taking rational values (which requires a re-appearance rhythm other than 1/1), we get even more outrageous ‘lengths’  for sv and tv . In principle, such an enormous gap between ksanas, viewed from a vantage-point outside the speeding event-chain, should become detectable by delicate instruments and would thus, by implication, allow us to get approximate values for c and c* in terms of the ‘absolute units’ s0 and t0 . This sort of experiment, which I have no doubt will be carried out in this century, would be the equivalent in UET of the famous Millikan ‘oil-drop’ series of experiments that gave us the first good value of e, the basic unit of charge.

Most schoolchildren these days who have got beyond GCSE, certainly those who study science, have heard of the Lorentz transformations which, in the theory of Special Relativity, replace the Galilean transformations. These ‘transformations’ ─ ‘adaptations’ would perhaps be a better term ─   enable us to plot the motion of a body using two different co-ordinate systems  (basically three lines at right angles to each other), and to convert the specifications of a body’s position within one system to their equivalent specifications in the other. Why do we need these transformations/adaptations? Because in everyday life, especially modern life,  we are forever switching from one event environment to another and often need to communicate our changing position to someone whose instantaneous position and general state of motion is different to ours. If I take a couple of steps in an aeroplane I have changed my location within the aeroplane by a few metres at most. But, from the point of view of a land-based control system at an airport, my position has changed very substantially indeed because the controller will have ‘added in’ the speed of the aeroplane. If we did not continually convert positions within one system to equivalent positions in another, there would be serious accidents all the time and modern life would be impossible. Even at the level of a band of hunters stalking a mammoth, quite sophisticated calculations of the prey’s movements relative to, say, certain prominent rocks or trees visible to all the hunters would have been necessary. Indeed, some ethnologists have even conjectured that it was this need to communicate vital information to colleagues while on a hunting expedition that gave rise to spoken language in the first place ─ “Go right, he’s behind that tree”, “He’s turning round, watch out!!” and so forth (Note 1.)

    Although they have a fancy name, ‘Galileian’ transformations (named after Galileo) are entirely commonsensical. If an object is moving steadily in a set direction while we remain at the same spot, we obviously have to take the object’s speed into account when, for example,  we take aim with a rifle or bow and arrow. Distances from a fixed point ─ a particular rock or tree, say ─  will be different  depending on whether we are using the animal’s changing ‘position system’ or our own static one. The object’s distance from a fixed point along the direction of travel is going to increase if the object is moving relative to it, but, if the object is moving at a constant speed this distance will increase in a completely  predictable manner. Our distance d from the fixed point, however, is going to stay the same if we are stationary. To work out the moving object’s changing distance from the fixed point we have to factor in the object’s speed v in order to predict where it will be in so many seconds or minutes and aim accordingly. What we do is to multiply the object’s speed, which we assume to be constant, by the anticipated lapse of time, not forgetting to add the original distance from the fixed point.  Mathematically d′ (d prime) = d + vt where v is (by hypothesis) constant, say 5 metres a second. If the original distance between the object and a certain landmark is 10 metres and the object is moving at a rate of 5 metres per second, the distance at the end of the fifth second will be 10 + 25 = 35 metres.

The situation becomes more complicated if we ourselves are moving relative to a fixed point, stalking our prey in a canoe for example, since we have to factor in our speed as well. Nonetheless, predators and even experienced human hunters are incredibly good at making these sort of complicated predictions, even taking into account a prey’s variable  speed and zig-zag path. Big cats and hunters with bow and arrow  became good at this sort of thing because their survival depended on such calculations : sheer speed and brute strength are, by themselves, not enough. Just as the science of geometry (literally ‘land-measurement’) had its unromantic origin in the accurate surveying of land for taxation purposes, kinematics (the study of objects in motion) almost certainly had its origins in hunting and warfare, especially  archery.

But if we ourselves are stationary while the distance of our moving object changes along the direction of travel, i.e. we have to modify the x coordinate, everything else remains the same. If there is a wall exactly parallel to the direction of travel, the object’s distance from the wall, provided he or it keeps on track, remains the same, also the object’s height above sea level if there are no bumps or hills on its path. As for time, ‘obviously’ a second is a second wherever you are or whatever you are doing. Newton himself stated his firm belief in ‘absolute time’ in his Principia though he conceded that actual methods or devices for measuring time might vary quite a lot for technical reasons ─ precise time-keeping on a ship, for example, is notoriously difficult because of the erratic motion of the ship in bad weather and designing a timepiece that ‘kept time’ accurately on board ship was a huge challenge (see the book Longitude).

To sum up, in the simple case of an object moving in one direction at constant speed, the Galilean transformations leave everything unchanged except the coordinate along the direction of motion. Mathematically, we have x′ = x ± vt,  y′ = y, z′ = z, t′ = t.
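For completeness, here is the hunter's calculation from above written out as a tiny sketch (the numbers are the same illustrative ones used in the text):

```python
def galilean(x, y, z, t, v):
    """Galilean transformation along x: x' = x + v*t, y' = y, z' = z, t' = t."""
    return x + v * t, y, z, t

# An object 10 metres from a landmark, moving away at 5 m/s, after 5 seconds:
x_prime, _, _, _ = galilean(10, 0, 0, 5, 5)
print(x_prime)     # 35 metres, as in the worked example above
```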

All this sounds so obvious that scarcely anyone gave much thought to the matter until the null result of the Michelson-Morley experiment designed to detect the Earth’s movement through the surrounding ‘aether’. Lorentz seems to have developed his transformations in an essentially ad hoc manner in order to cope with the Michelson-Morley experiment and other puzzling experimental results. The Lorentz transformations differ from the ‘normal’ Galilean ones in two respects. Firstly, the adaptation required for distances along the presumed path of travel when changing from one coordinate system to the other, is somewhat more complicated. Secondly, the ‘time’ dimension needs tinkering with. In effect the Lorentz Transformations imply that ‘time’ does not run at the same rate in a stationary system as it does in a moving one ─ though Lorentz himself does not seem to have drawn this particular conclusion. The Irish physicist FitzGerald did seriously suggest that “lengths were contracted in the direction of the Earth’s motion through the ether” but no one before Einstein seems to have thought that time ‘runs slower’ or ‘faster’ depending on one’s ‘state of motion’ and reference system. Since any regular sequence of events can be used as a time-keeping system, this in effect means that physical processes ‘speed up’ or ‘slow down’ depending on where you are standing and how you and the observed object are moving relative to each other. (Note 2).

Einstein, in his 1905 paper, developed the exact same Lorentz formulae from first principles and always maintained that he did not at the time (sic) know of Lorentz’s work. What were Einstein’s assumptions? Only two.

 

  1. The laws of physics take the same form in all inertial frames.
  2. The speed of light in free space has the same value for all observers in inertial frames irrespective of the relative motion of the source and the observer.

It has since been pointed out that Einstein did, in fact, assume rather more than this. For one thing, he assumed that ‘free space’ is homogeneous and isotropic (the same in all directions) (Note 3). Einstein also seems to have envisaged ‘space’ and ‘time’ as being ‘continuous’ ─ certainly all physicists at the time assumed this without question and the wave theory of electro-magnetism required it, as its inventor, Maxwell, was well aware. However, the continuity postulate does not seem to have played much of a part in the derivation of the equations of Special Relativity though it did become more prominent when the mathematician Minkowski (one of Einstein’s teachers) got to work on the problem and coined the well-known phrase ‘Space/Time continuum’. It is only quite recently that one or two physicists have dared to suggest that space and time might be ‘grainy’ and, even then, very few have seriously thought about the consequences of such an assumption. Unless specifically told otherwise, physics students still tend to assume that physical processes are continuous, proceed ‘without a break’ as it were. Despite the photo-electric effect, quantum wave/particle duality, the complete victory of the digital computer over the analogue and all sorts of other phenomena that point in the direction of discreteness and discontinuity, most physicists still think of ‘space’, ‘time’ and electro-magnetism as being ‘continuous’. And training in calculus methods, of course, only reinforces this essentially misguided model of physical reality (Note 4).

Coordinate Systems ─ who needs them?

Although inertia itself is undoubtedly a force to be reckoned with (and thus ‘real’) inertial ‘frames’, which play such a big role in Special Relativity, do not exist in Nature. Migrating birds navigate expertly across the surface of the globe without knowing anything at all about co-ordinate systems  ─ though they must have some sort of an inherited ‘neural positioning system’ of their own. Co-ordinate systems and similar mathematical devices are man-made systems, constructed for our own interest and convenience, and it was precisely this realisation that, along with other considerations, motivated Einstein himself at a slightly later date to try to formulate the ‘laws of physics’ in a way that did not depend on any particular reference frame, inertial or not. Practically speaking, at any rate in an advanced industrial society, we need the complicated and often tiresome paraphernalia of co-ordinate systems, inertial frames and transformation formulae, but this does not make any of them independently real. And this is worth saying because the tendency today is to give the student the impression that there is only one way to Special Relativity and other branches of physics, the co-ordinate way : the student is taught to distrust his or her ‘intuition’ and ‘common sense’ in favour of familiarity with complicated mathematical constructs which are in effect treated as being  ‘more real than reality’.
The problem is compounded by warnings about “sticking strictly to observables”. Heisenberg and Bohr are the main people responsible for this positivistic approach and it may be some consolation to those of us who find this kind of talk irritating and counter-productive to know that Einstein was of the same opinion. He in effect told Heisenberg that it made no sense to ‘keep strictly to observables’ since “It is the theory that decides what we can observe” ─ a pretty devastating retort. Certainly, it is theory that points the experimenter in a direction that otherwise he or she would most likely never have taken. It is ironical that the experimentalist Millikan was so outraged by Einstein’s ‘particles of light’ theory that he at once embarked on a series of experiments to prove that the hare-brained notions  of an obscure employee at the Swiss Patents Office were a lot of rubbish (Note 5) ─ and ended up by providing Einstein with valuable evidence in favour of  his (Einstein’s) theory.
Though I certainly hope that one day someone will undertake experiments that test some of my own guesses and predictions, I make no apology for the ‘qualitative’ approach I am currently taking. Formulae we already have, enough to sink a battleship, but, to make further progress, we may well need to ‘return to basics’ and think differently about well-known physical phenomena. I also believe it is desirable where possible to obtain a clear visual or tactile picture of what is most likely going on beneath the surface of phenomena. Most people, even pure mathematicians, work with a semi-conscious model of reality at the back of their minds anyway, so one might as well lay one’s hands on the table and admit the fact. Several of these mental picture-maps have proved to be extremely helpful and we would not be where we are today without them, even if some had to be abandoned in the end. Incredible though it sounds today, the majority of the scientific establishment at the beginning of the 20th century was deeply sceptical about the ‘reality’ of atoms and elementary particles because their existence could, at the time, only be inferred, not observed directly. The Austrian physicist Mach (of Mach numbers fame) remained sceptical practically to the end of his life.

A Limiting Speed for all causal processes

During the years following Einstein’s two 1905 papers that laid out the basic ideas of the theory of Special Relativity ─ ‘special’ because it only applied to inertial systems and, in particular,  ignored gravity altogether ─ Einstein was criticised, and in a sense rightly, for the excessive importance he gave to light and electro-magnetism. Einstein himself conceded that “it is immaterial what physical process one chooses for a definition of time…..[provided] the process enables relations to be established between different places.” Nonetheless, the emphasis given to electro-magnetism (and for that matter to clocks and measuring rods) masks the true nature of the Special Theory : it is basically a theory about the propagation not of light but of causality. Time and time keeping come into the picture only because causality is sequential ─ the effect follows the cause and there is always a time lag between the two.
There is, in the literature of Relativity, a great deal of talk about ‘sending messages across Space/Time’ and how one cannot do this at a speed greater than that of light (though now and again some reputable scientists raise their heads above the parapet and claim to have done just this). But why is the sending of a message important? To many people today, it is important because that is basically what everything is about, i.e. the transfer of ‘information’ ─ as if the universe had nothing better to do than to amass data and send it around at high speed over its own Internet. But this is no answer at all. The rate of transmission of ‘information’ is important because if I want something done at a place far away from where I am standing, I have in some way or other to ‘send a message’ so that someone or something will carry out my wishes. It is the resulting event and not the transfer of information that matters, not data and its dissemination but what actually happens. “The world is everything that has occurrence”, not “everything that can be computer-programmed” ─ to paraphrase Wittgenstein (Note 6).
It remains regrettable that Einstein did not emphasize more strongly the causal aspect of his theory. Minkowski upstaged Einstein by transferring the basic issue around which SR revolved away from the realm of physics to the realm of (semi-pure) mathematics. This development ─ of which Einstein originally disapproved ─ was in one sense fruitful since it led on to the General Theory of Relativity, but it also closed off other avenues along which Einstein’s thinking had been moving. Einstein in effect transferred his interest from particles (photo-electric effect, Brownian motion &c.) to fields and the field is, by definition, a continuous concept. There are no ‘holes’ or ‘gaps’ in the magnetic field surrounding a horseshoe or bar magnet, nor any ‘safe places to hide’ in the gravitational field near the centre of the Earth. Indeed, Einstein ended up by believing so much in  fields that he decided they were the only reality and attempted to develop a theory that there was, at bottom, only one all embracing, ‘Unified Field’.

Relevance to Ultimate Event Theory

 Why is all this significant in the context of the theory of ‘Ultimate Events’ I am trying to develop? Because, at one stage in his career, Einstein focussed his attention on precisely localized events in Space/Time and their causal interconnections and made the most important contribution to date to a scientific theory of causality by pointing out that the speed of light, if it was an upper limit to the propagation of causal influences of any nature whatsoever, divided the universe into distinct ‘causal zones’. The possible ‘causal range’ of an event was thus limited in a precise way and one thus had, in principle, a foolproof way of deciding whether events occurring at distant places could or could not be causally connected (Note 7).
        In a sense, I suppose what I am attempting to do, without at first realizing it and naturally without Einstein’s abilities,  is to turn the clock back to 1905 before Minkowski and Quantum Mechanics muddied the event waters. What we had then for a brief moment was  a world-view centred on pointlike events that had occurrence at precise positions on the Locality (Space/Time if you like) and a network of possible causal relations stretching out in every direction (except backwards in time). Light and electro-magnetic phenomena are not essential to this overall picture  which covers all physical phenomena and the idea that there is an ‘upper speed limit’ for all physical processes would apply even in a world where there was no such thing as electro-magnetic radiation, supposing this to be possible.
So, I asked myself what assumptions, within the general schema of Ultimate Event Theory, are required to derive the basic results of Special Relativity or their equivalents? Only two as far as I can see ─ apart from certain very general assumptions about the nature of ‘events’ and their localization, the equivalent of a crude molecular/atomic theory such as existed around 1900. The second postulate is tricky to formulate and will be dealt with separately but the first is straightforward enough : it is the Limiting Speed Principle. This states that there is a limiting ‘speed’ to the propagation of all causal influences ─ a fairly reasonable assumption I think. What is its basis? Well, I simply cannot imagine a ‘world’ where there was not such a principle : effects would be instantaneous with causes and everything would be happening at once. Such a state of affairs does apparently exist, at least exceptionally, in sorcery but not usually in science (Note 8). Eddington once observed that one could decide simply from a priori assumptions that there must, in every possible universe, be an upper speed limit to the transfer of energy or information ─ though he added that the actual value of such a constant would have to be decided by experiment. I entirely go along with this; so what effect does the Principle have on the ‘normal’ method of compounding velocities?

Addition of Velocities 

If I roll a marble  down the floor of a carriage in a train moving at a constant rate, it will have a certain speed that I naturally calculate relative to the ‘co-ordinate system’ of the carriage which for me is stationary ─ it might as well be held up at a station for all the difference it makes. However, for someone standing outside the train, say in a field as the train whizzes by, the marble has a much greater velocity. Calling the marble’s velocity relative to me and the train v1 and the velocity of the train v2, the marble’s velocity, viewed by a farmer or shepherd in a field is simply v3 = v1 + v2 . But the Earth itself is moving on an approximately straight path during a short interval of time at nearly constant speed, so if we add on the Earth’s orbital speed and then the speed of the solar system as it orbits the centre of the Milky Way galaxy, we soon arrive at quite stupendous speeds even for a slow-moving  marble.
The big question is : can a ‘linear’ combination of speeds end up by exceeding the upper limit, c, and, if so, does this matter? The answer to the first question is, “Yes” and to the second question, “That depends”.
In UET, a distinction is made between an unattainable upper limit c, and the highest attainable speed, c*. But this complication need not bother us for the moment and I shall follow established usage and treat c as being an all-round upper limit, whether attained or not, which is around 3 × 10⁸ m/sec. So v, the speed of any object or process (excluding light, X-rays, microwaves and so on) is by definition less than c, i.e. v < c. But suppose we ‘compound’ speeds as in the case of the marble in the train which is itself fixed to the moving earth &c. &c., can a combination of speeds, each less than c, exceed c ?  Obviously, yes, since, for example, c/2 + c/2 + c/2 = (3/2)c > c.
        What about if we restrict ourselves to just two speeds ? The most instructive way to deal with this is to express v in terms of c, i.e. make it a ‘fraction’ of c, conceivably an improper fraction. So we have c/m + c/n where both m and n are > 1 . The ‘compounded’ speed is thus  c/m + c/n  =  c(1/m + 1/n)  m, n > 1
The smallest possible integral choice for m and n is 2. In such a case the sum of the two individual speeds, though both less than c, equals c since c(1/2 + 1/2) = c. However, even this speed is not actually greater than c and for any other choice of integers we will not even be able to equal c. For example, c/2 + c/3 = (5/6) c < c. More generally, c(1/m + 1/n) = c (m + n)/(mn), and the denominator mn > (m + n) unless m = n = 2. This is actually an interesting result in UET though not in normal physics.
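This little claim about integral m and n is easy to verify by brute force; here is a minimal sketch of my own (the range checked is of course arbitrary, and velocities are expressed as fractions of c):

```python
from fractions import Fraction

# For integers m, n >= 2, c(1/m + 1/n) never exceeds c, and equals c only when m = n = 2.
for m in range(2, 50):
    for n in range(2, 50):
        total = Fraction(1, m) + Fraction(1, n)      # naive combined speed, in units of c
        assert total <= 1                            # never exceeds c
        assert (total == 1) == (m == 2 and n == 2)   # equals c only for m = n = 2
```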
What if we let m and n be ‘improper fractions’, i.e. rational numbers > 1? We will still have individual speeds < c since in each case the denominator > 1. All we need is to find a case where both m and n > 1 (and so c/m and c/n are each < c) but where (m + n)/(mn) > 1. This is not too difficult. For example, take v1 = c/(3/2) and v2 = c/(9/8):

v1 + v2  =  c · ((3/2) + (9/8)) / ((3/2) × (9/8))  =  c · (21/8)/(27/16)  =  c · (42/27)  =  c (14/9)  >  c

        There are in fact any number of cases where we can find rational numbers (but not integers) that fit the case.

So the ‘obvious’ rule for adding velocities doesn’t work if we restrict all velocities to be less than c ─ or in the sole case of light equal to c. So how can we manipulate the rule for adding velocities to always keep under the upper limit of c ? Examining the expression c (m + n)/mn it is clear that we must somehow negate the fateful influence of the denominator mn since it is this factor, increasing as it does so much faster than (m + n), that tips the balance. So what we need is a new factor mn/f(mn) which, when applied, will get rid of the mn in c (m + n)/mn and stop the whole thing from exceeding c. The simplest such function is mn/(1 + mn) since, for all m, n > 1, the new denominator (1 + mn) is greater than mn while at the same time ridding us of the mn in c (m + n)/mn. Also, and crucially, the final result

((m + n)/mn)  ×  (mn/(1 + mn))  =  (m + n)/(1 + mn)  <  1,   since the denominator is larger than the numerator for all m, n > 1. (If you don’t believe me try out a few values.)

So, if we make the rule for combining two velocities, each less than c,

combined velocity  =  (v1 + v2)/(1 + v1 v2)     (with v1 and v2 expressed as fractions of c)
we shall always get a result less than c.
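As a check, here is the rule applied to the rational counter-example used earlier (v1 = c/(3/2), v2 = c/(9/8)); the naive sum exceeds c but the combined speed does not. A sketch only, working in units of c:

```python
from fractions import Fraction

def combine(v1, v2):
    """Combined velocity, in units of c, under the rule (v1 + v2)/(1 + v1*v2)."""
    return (v1 + v2) / (1 + v1 * v2)

v1 = 1 / Fraction(3, 2)      # = 2/3 of c
v2 = 1 / Fraction(9, 8)      # = 8/9 of c

print(v1 + v2)               # naive sum: 14/9, i.e. greater than c
print(combine(v1, v2))       # 42/43, safely less than c
assert combine(v1, v2) < 1
```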
Whether this is the only rule that ‘does the trick’ I do not know ─ I would guess that it is not but it does seem to be the simplest such rule and the one that follows most naturally from the situation. Nonetheless, I am not altogether happy with this derivation (given in more detail in the post 48) because it is too mathematical. I would like to see this rule emerging from some inevitable physical, visualizable situation but currently I don’t see how to manage this (Note 9).
Anyway, we have now the first of Einstein’s new formulae of SR, the rule for the addition or compounding of velocities so that the resulting velocity never exceeds a certain limit. What is required now is a way of deriving, using the basic concepts of UET, the Einstein formulae for space contraction, time dilation and, most important of all, the celebrated E = mc² equivalence of mass and energy equation.    SH
Note 1. It is worth pointing out that, although the use of some sort of reference system to locate a moving object must go back very far in time, the most natural way of doing this is not numerical/geometrical but topological. We do not say an object in a room is so many feet from a particular corner at ground level, so many feet above the ground and so on. We say the object is ‘on’ the table, ‘underneath’ the chair, ‘above the bookcase’, ‘alongside’ the fireplace and so on. These directions are ‘topological’ since topology is the branch of mathematics which deals in ‘nearness’ and ‘connectedness’ to the exclusion of metrical distance. Bohm is the first person to have pointed this out as far as I know (in a recorded debate with, I think, Price).

Note 2   From a Newtonian, or even ‘classical scientific’, point of view, it was unthinkable that the rate of physical processes should ‘speed up’ or ‘slow down’ depending on where you were standing and how you and the observed object were moving relative to each other. But it is not actually such a shocking idea from a non-scientific, ‘subjective’ standpoint. We are all familiar with how ‘experienced time’ speeds up or slows down according to mood, “A watched kettle never boils” and so on. It is only necessary to extend the range and validity of the basic principle. Explaining, or perhaps simply describing, the Mössbauer effect to someone the other day, I said, “The basic idea is that all processes at the top of a high building proceed at a faster rate than at the bottom”, and she did not find this particularly startling.

Note 3  According to General Relativity ‘Space-Time’ is not homogeneous and isotropic in all directions but ‘blotchy’ and warped even when there are no massive objects in the immediate vicinity; also the velocity of light in free space is not strictly constant since a light ray deviates from a straight line (accelerates) when passing near a massive object such as the Sun.

Note 4   Even the few physicists who do entertain the idea that space and time might be ‘grainy’, do not usually go so far as to suggest that there are ‘gaps’ or ‘holes’ in the apparently dense fabric of Space/Time. However, see the excellent article Follow the Bouncing Universe by Martin Bojowald in the Scientific American booklet “Our Universe and Beyond”.
A few philosophers such as Plato and Heidegger have indeed suggested something along these lines, but without making much of the notion. Hinayana Buddhism, of course, takes it for granted that physical reality is ‘gapped’, since there are no ‘continuous’  entities whatsoever ─ except perhaps nirvana which is not a ‘normal’ entity to say the least.

Note 5  “The only person who took much notice of it [Einstein’s 1905 paper] was an American experimental physicist, Robert Millikan, who was so infuriated when he heard about it that he promptly set out to try to prove Einstein was wrong.” John & Mary Gribbin, Annus Mirabilis p. 85.   

Note 6 At one point in his astonishing career, Alexander the Great became convinced that Philotas, the son of Alexander’s most trusted general, Parmenio, was plotting to kill him. After a hurried ‘court-martial’ Macedonian-style Philotas was found guilty and ‘pierced with javelins’. What of Parmenio? It seemed to Alexander unwise to leave the old general alive, whether guilty or not, since he would automatically become a focus for further rebellions. But he was far away, in Media, so a message was sent for him to be put to death. The point is that it was not the sending of the message that mattered, it was Parmenio’s death, an event, that mattered.
The Romans set up an elaborate system of beacons over much of the Empire, thus using the speed of light for the transmission of information, and the English did the same at the time of the Spanish Armada. But even this system involved some delay, partly because of the finite speed of action and reaction within the human nervous system but also because of the nature of light itself. Most people at the time of the Armada still believed that the speed of light was ‘infinite’ but Galileo, for one, thought otherwise and tried to ‘time’ the delay involved in the transmission of light signals. He was not successful in this but he paved the way for more accurate estimates using advanced astronomical techniques. Einstein, in so many ways a man treading in the very footprints of his illustrious predecessor, realized how important this issue was since, if light had a fixed speed and light was the fastest ‘thing’ there was, this put an upper limit to the speed of propagation of all causal processes. And Einstein, like the man in the street, was a fervent believer in causality ─ so much so that this put him off Quantum Theory since the latter (arguably) violates the ‘laws’ of causality.
The question of whether or not a message could have been sent from one point to another ‘in Space/Time’ remains an incredibly important issue, and we have not heard the last of it,  since it not only concerns theoretical physics but, for example, criminal law : it is on this basis that we decide whether an alibi is valid, whether a Mafia chief  ‘had the time’ and the means to send orders by mobile phone and so on and so forth.

Note 7    ????

Note 8 One might reasonably wonder why no such principle had been formulated before. The answer is that no one seems to have envisaged that any physical process could happen at anything remotely approaching, even less exceeding, the speed of light. Newton was embarrassed by the fact that the operation of gravity was, in his theory, ‘instantaneous’ and it was precisely for this and related reasons that almost all continental scientists totally rejected the Law of Universal Attraction as being much too far-fetched. Contemporary physicists seem to think that gravitational effects, ripples in Space/Time and the like, propagate at the speed of light.

Note 9 How exactly Einstein himself arrived at this simple but absolutely crucial formula is not clear. His 1905 paper makes difficult reading today while subsequent ‘popular’ accounts by the great man are a little too clearcut. One suspects that Einstein knew exactly where he wanted to get to and fished around for a likely formula that would take him to the desired conclusion. This is the normal way in which physicists and mathematicians discover things, i.e. they work backwards from a conjectured result, not forwards, step by step, following a straight deductive path. Humans rarely discover anything important by applying painstaking logical procedures : we make ‘inspired guesses’, some of which eventually turn out to be the case and others not.

 

The phenomenon of time dilation, though not noticeable in everyday circumstances, is not a mathematical trick but really exists. Corrections for time dilation are made regularly to keep the Global Positioning System from getting out of sync. The phenomenon becomes a good deal more comprehensible if we consider a network of ultimate events which does not change and spaces between them which can and do change. We are familiar with the notion that ‘time speeds up’ or ‘slows down’ when we are elated or anxious : the same, or very similar, occurrences play out differently according to our moods. Of course, it will be pointed out that the distances between events do not ‘actually change’ in such cases, only our perceptions. But essentially the same applies, or would apply, to ‘objective reality’ : if our senses or instruments were accurate enough, the selfsame events would be seen to slow down or speed up according to our viewpoint and relative state of motion.
(Diagram: relativity time dilation)
This could easily be demonstrated by making a hinged ‘easel’ or double ladder which can be extended at will in one direction without altering the spacing in the other, lateral, direction. The ‘time dimension’ is down the page. The stars represent ultimate events, light flashes perhaps, which are reflected back and forth in a mirror (though light flashes are made of trillions of ultimate events packed together). The slanting zig-zag line connects the ultimate events : they constitute an event-chain. By pulling the red right-hand side of the double ladder outwards and extending it at the same time, we increase the distance between the ultimate events on this part of the ladder but do not increase the ‘lateral’ distance. These events would appear to an observer on the black upright plane as ‘stretched out’, and the angle we use represents the relative speed. As the angle approaches 90 degrees, i.e. as the red section nearly becomes horizontal, the red part of the ladder becomes enormously long. Setting different angles shows the extent of the time dilation for different relative speeds.
Note that in this diagram the corresponding space contraction is not shown since it is not this spatial dimension that is being contracted (though there will still be a contraction in the presumed direction of motion). We are to imagine someone flashing a torch across a spaceship and the light being reflected back. Any regular repeating event can be considered to be a ‘clock’.
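For readers who want numbers to attach to the diagram: if the slant of the red side away from the vertical time axis is chosen so that sin(angle) = v/c, the stretch of the slanted side comes out as the familiar dilation factor. This is a minimal sketch of the standard light-clock calculation (a torch flashed across a spaceship, as above), ordinary SR arithmetic rather than anything specific to UET; the function name dilation_factor is mine.

```python
import math

def dilation_factor(v_over_c):
    """Stretch of the slanted ('moving') side of the ladder relative to the
    vertical one, via the standard light-clock argument: in one tick the light
    covers the lateral width L = c*t across the ship, but seen from outside it
    also drifts sideways by v*t', so (c*t')**2 = L**2 + (v*t')**2 and hence
    t'/t = 1/sqrt(1 - (v/c)**2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in (0.1, 0.5, 0.9, 0.99):
    slant = math.degrees(math.asin(v))   # angle of the red side from the vertical
    print(f"v = {v:4.2f} c   slant = {slant:5.1f} deg   dilation = {dilation_factor(v):6.3f}")
```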
What such a diagram does not show, however, is that, from the point of view of the red ladder, it is the other event-chain that will be stretched : the situation is reversible.

The idea for this diagram is not original : Stephen Wolfram has a similar, more complicated, set of diagrams on p. 524 of his great work A New Kind of Science. However, he makes the ‘event-lines’ continuous and does not use stars to mark ultimate events. More elaborate models could actually be made and shown in science museums to demonstrate time dilation. There is, I think, nothing outrageous in the idea that the ‘distance between two events’ is variable : as stated, we experience this ourselves all the time. What is shocking is the idea of the whole of Space/Time contracting and dilating. Ultimate events provide as it were the skeleton which shows up in an X-ray : distances between events are flesh that does not show up. There is ‘nothing’ between these events, nothing physical at any rate.        SH


Although, in modern physics, many elementary particles are extremely short-lived, others such as protons are virtually immortal. But either way, a particle, while it does exist, is assumed to exist continuously. And solid objects such as we see all around us, like rocks and trees, are also assumed to carry on being rocks and trees from start to finish even though they do undergo considerable changes in physical and chemical composition. What is out there is always there when it’s out there, so to speak.
However, in Ultimate Event Theory (UET) the ‘natural’ tendency is for everything to flash in and out of existence, and most ultimate events, the ‘atoms’ or elementary particles of Eventrics, disappear for ever leaving no trace; even with more precise instruments than we have at present, they would show up only as a sort of faint permanent background ‘noise’, a ‘flicker of existence’. Certain ultimate events, those that have acquired persistence ─ we shall not for the moment ask how and why they acquire this property ─ are able to bring about, i.e. cause, their own re-appearance and eventually to constitute a repeating event-chain or causally bonded sequence. And some event-chains also have the capacity to bond to other event-chains, eventually forming relatively persistent clusters that we know as matter. All apparently solid objects are, according to the UET paradigm, conglomerates of repeating ultimate events that are bonded together ‘laterally’, i.e. within the same ksana, and ‘vertically’, i.e. from one ksana to the next. And the cosmic glue is not gravity or any other of the four basic forces of contemporary physics but causality.

The Principle of Spatio/Temporal Continuity

Newtonian physics, likewise 18th and 19th century rationalism generally, assumes what I have referred to elsewhere as the Postulate of Spatio-temporal Continuity. This postulate or principle, though rarely explicitly  stated in philosophic or scientific works,  is actually one of the most important of the ideas associated with the Enlightenment and thus with the entire subsequent intellectual development of Western society (Note 1). In its simplest form, the principle says that an event occurring here, at a particular spot in Space-Time (to use the traditional term), cannot have an effect there, at a spot some distance away without having effects at all (or at least most or some) intermediate spots. The original event, as it were, sets up a chain reaction and a frequent image used is that of a whole row of upright dominoes falling over one after the other once the first has been pushed over. This is essentially how Newtonian physics views the action of a force on a body or system of bodies, whether the force in question is a contact force (push/pull) or a force acting at a distance like gravity ─ though in the latter case Newton was unable to provide a mechanical model of how such a force could be transmitted across apparently empty space.
As we envisage things today, a blow affects a solid object by making the intermolecular distances of the surface atoms contract a little and they pass on this effect to neighbouring atoms which in turn affect nearby objects they are in contact with or exert an increased pressure on the atmosphere, and so on. Moreover, although this aspect of the question is glossed over in Newtonian (and even modern) physics, each transmission of the original impulse ‘takes time’ : the re-action is never instantaneous (except possibly in the case of gravity) but comes ‘a moment later’, more precisely at least one ksana later. This whole issue will be discussed in more detail later, but, within the context of the present discussion, the point to bear in mind is that, according to Newtonian physics and rationalistic thought generally, there can be no leap-frogging with space and time. Indeed, it was because of the Principle of Spatio-temporal Continuity that most European scientists rejected out of hand Newton’s theory of universal attraction since, as Newton admitted, there seemed to be no way that a solid body such as the Earth could affect another solid body such as the Moon thousands of kilometres away without affecting the empty space between. Even as late as the mid 19th century, Maxwell valiantly attempted to give a mechanical explanation of his own theory of electro-magnetism, and he did this essentially because of the widespread rock-hard belief in the principle of spatio-temporal continuity.

So, do I propose to take the principle over into UET? No, except possibly in special situations. If I did take over the principle, it would mean that certain regions of the Locality would soon get hopelessly clogged up with colliding event-chains. Indeed, if all the possible positions in between two spots where ultimate events belonging to the same chain had occurrence were occupied, event-chains would behave as if they were solid objects and one might as well just stick to normal physics. A further, and more serious, problem is that, if all event-chains were composed of events that repeated at every successive ksana, one would expect event-chains with the same ‘speed’ (space/time ratio with respect to some ‘stationary’ event-chain) to behave in the same way when confronted with an obstacle. Manifestly, this does not happen since, for example, photon event-chains behave very differently from neutrino event-chains even though both propagate at the same, or very similar, speeds.
One of the main reasons for elaborating a theory of events in the first place was my deep-rooted conviction ─ intuition if you like ─ that physical reality is discontinuous. I do not believe there is, or can be, such a thing as continuous motion, though there is and probably always will be succession and thus change since, even if nothing else is happening, one ksana is perpetually being replaced by another, different, one ─ “the moving finger writes, and, having writ, moves on” (Rubaiyat of Omar Khayyam). Moreover, this movement is far from smooth : ‘time’ is not a river that flows at a steady rate (as Newton envisaged it) but a succession of ‘moments’, beads of different sizes threaded together to make a chain and with minute gaps between the beads which allow the thread that holds them together to become momentarily visible.
If, then, one abandons the postulate of Spatio-temporal Continuity, it becomes perfectly feasible for members of an event-chain to ‘miss out’ intermediate positions and so there most definitely can be ‘leap-frogging’ with space and time. Not only are apparently continuous phenomena discontinuous but one suspects that they have very different staccato rhythms.

‘Atomic’ Event Capsule model

 At this point it is appropriate to review the basic model.
I envisage each ultimate event as having occurrence at a particular spot on the Locality, a spot of negligible but not zero extent. Such spots, which receive (or can receive) ultimate events are the ‘kernels’ of much larger ‘event-capsules’ which are themselves stacked together in a three-dimensional lattice. I do not conceive of there being any appreciable gaps between neighbouring co-existing event-capsules : at any rate, if there are gaps they would seem to be very small and of no significance, essentially just demarcation lines. According to the present theory these spatial ‘event-capsules’ within which all ultimate events have occurrence cannot be extended or enlarged  ─ but they can be compressed. There is, nonetheless,  a limit to how far they can be squeezed because the kernels, the spots where ultimate events can and do occur, are incompressible.
I believe that time, that is to say succession, definitely exists; in consequence, not only ultimate events but the space capsules themselves, or rather the spots on the Locality where there could be ultimate events, appear and disappear just like everything else. The lattice framework, as it were, flicks on and off and it is ‘on’ for the duration of a ksana, the ultimate time interval (Note 2). When we have a ‘rest event-chain’ ─ and every event-chain is ‘at rest’ with respect to itself and an imaginary observer moving on or with it ─ the ksanas follow each other in close succession, i.e. are as nearly continuous as an intrinsically  discontinuous process can be.
According to the theory, the ‘size’ or ‘extent’ of a ksana cannot be reduced ─ otherwise there would be little point in introducing the concept of a minimal temporal interval and we would be involved in infinite regress, the very thing which I intend to avoid at all costs. However, the distance between ksanas can, so it is suggested, be extended, or, more precisely, the distance between the successive kernels of the event capsules, where the ultimate events occur, can be extended. That is, there are gaps between events. As is explained in other posts, in UET the ‘Space/Time region’ occupied by the successive members of an event-chain remains the same irrespective of ‘states of motion’ or other distinguishing features. But the dimensions themselves can and do change. If the space-capsules contract, the time dimension must expand, and this can only mean that the gaps between ksanas widen (since the extent of an ‘occupied’ ksana is constant). The more the space capsules contract, the more the gaps must increase (Note 3). But, as with everything else in UET, there is a limiting value since the space capsules cannot contract beyond the spatial limits of the incompressible kernels. Note that this ‘Constant Region Principle’ only applies to causally related regions of space ─ roughly what students of SR view as ‘light cones’.
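The ‘Constant Region Principle’ can be illustrated with a toy calculation. This is only a sketch of one possible reading of the principle: I am assuming here that ‘the region stays the same’ means the product of the spatial and temporal extents per event is constant; nothing above fixes the exact functional form, and the symbols and factors below are purely illustrative.

```python
# Toy illustration: assume 'the region stays the same' means
# (spatial extent) x (temporal extent) per event is constant, s * t = s0 * t0.

s0, t0 = 1.0, 1.0                 # rest dimensions of an event-capsule (arbitrary units)

for contraction in (1.0, 0.8, 0.5, 0.25):
    s = s0 * contraction          # the space capsule is squeezed...
    t = (s0 * t0) / s             # ...so the gap between ksanas must widen
    print(f"s = {s:5.2f}   t = {t:5.2f}   region s*t = {s * t:4.2f}")
```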

The third parameter of motion

In traditional physics, when considering an object or body ‘in motion’, we essentially only need to specify two variables : spatial position and time. Considerations of momentum and so forth are only required because they affect future positions at future moments, and aid prediction. To specify an object’s ‘position in space’, it is customary in scientific work to relate the object’s position to an imaginary spot called the Origin where three mutually perpendicular axes meet. To specify the object’s position ‘in time’ we must show or deduce how many ‘units of time’ have elapsed since a chosen start position when t = 0. Essentially, there are only two parameters required, ‘space’ and ‘time’ : the fact that the first parameter requires (at least) three values is not, in the present context, significant.
Now, in UET we likewise need to specify an event’s position with regard to ‘space’ and ‘time’. I envisage the Event Locality at any ‘given moment’ as being composed of an indefinitely extendable set of ‘grid-positions’. Each ‘moment’ has the same duration and, if we label a particular ksana 0 (or 1), we can attach a (whole) number to an event subsequent to what happened when t = 0 (or rather k = 0). As anyone who has a little familiarity with the ideas of Special Relativity knows, the concept of an ‘absolute present’ valid right across the universe is problematical to say the least. Nonetheless, we can talk of events occurring ‘at the same time’ locally, i.e. during or at the same ksana. (The question of how these different ‘time zones’ interlock will be left aside for the moment.)
Just as in normal physics, we can represent the trajectory of an ‘object’ by using three axes with the y axis representing time; due to lack of space and dimension, we often squash the three spatial dimensions down to two, or, more simply still, use a single ‘space’ axis, x (Note 4). In normal physics the trajectory of a stationary object will be represented by a continuous vertical straight line, and an object moving at constant non-zero speed relative to an object considered to be stationary will be represented by a slanting but nonetheless still straight line. Accelerated motion produces a ‘curve’ that is not straight. All this essentially carries over into UET except that, strictly, there should be no continuous lines at all but only dots that, if joined up, would form lines. Nonetheless, because the size of a ksana is so small relative to our very crude senses, it is usually acceptable to represent an ‘object’s’ trajectory as a continuous line. What is straight in normal physics will be straight in UET. But there is a third variable of motion in UET which has no equivalent in normal physics, namely an event’s re-appearance rhythm.
Fairly early on, I came up against what seemed to be an insuperable difficulty with my nascent model of physical reality. In UET I make a distinction between an attainable ‘speed limit’ for an event-chain and an upper unattainable limit, noting the first c* and the second c. This allows me to attribute a small mass ─ mass has not yet been defined in UET but this will come ─ to such ‘objects’ as photons. However, this distinction is not significant in the context of the present discussion and I shall use the usual symbol c for either case. Now, it is notorious that different elementary particles (ultimate event chains) which apparently have the same (or very nearly identical) speeds do not behave in the same way when confronted with obstacles (large dense event clusters) that lie on their path. Whereas it is comparatively easy to block visible light and not all that difficult to block or at least muffle much more energetic gamma rays, it is almost impossible to stop a neutrino in its path, so much so that they are virtually undetectable. Incredible though it sounds, “about 400 billion neutrinos from the Sun pass through us every second” (Close, Particle Physics) but even state of the art detectors deep in the earth have a hard job detecting a single passing neutrino. Yet neutrinos travel at or close to the speed of light. So how is it that photons are so easy to block and neutrinos almost impossible to detect?
The answer, according to matter-based physics, is that the neutrino is not only very small and very fast moving but “does not feel any of the four physical forces except to some extent the weak force”. But I want to see if I can derive an explanation without departing from the basic principles and concepts of Ultimate Event Theory. The problem in UET is not why the repeating event-pattern we label a neutrino passes through matter so easily ─ this is exactly what I would expect ─ but rather how and why it behaves so differently from certain other elementary event-chains. Any ‘particle’, provided it is small enough and moves rapidly, is likely, according to the basic ideas of UET, to ‘pass through’ an obstacle just so long as the obstacle is not too large and not too dense. In UET, intervening spatial positions are simply skipped and anything that happens to be occupying these intermediate spatial positions will not in any way ‘notice’ the passing of the more rapidly moving ‘object’. On this count, however, two ‘particles’ moving at roughly the same speed (relative to the obstacle) should either both pass through an obstacle or both collide with it.
But, as I eventually realized, this argument is only valid if the re-appearance rates of the two ‘particles’ are assumed to be the same. ‘Speed’ is nothing but a space/time ratio, so many spatial positions against so many ksanas. A particular event-chain has, say, a ‘space/time ratio’ of 8 grid-points per ksana. This means that the next event in the chain will have occurrence at the very next ksana exactly eight grid-spaces along relative to some regularly repeating event-chain considered to be stationary. On this count, it would seem impossible to have fractional rates and every ‘re-appearance rate’ would be a whole number : there would be no equivalent in UET of a speed of, say, 4/7 metres per second since grid-spaces are indivisible.
However, I eventually realized that nothing in my original assumptions required an event in a chain to repeat (or give rise to a different event) at each and every ksana. This at once made fractional rates possible even though the basic units of space and time are, in UET, indivisible. A ‘particle’ with a rate of 4/7 s0/t0 could, for example, make a re-appearance four times out of every seven ksanas ─ and there are any number of ways that a ‘particle’ could have the same flat rate while not having the same re-appearance rhythm.
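To make the distinction between a flat re-appearance rate and a re-appearance rhythm concrete, here is a minimal sketch (the function name rhythms is my label, not UET terminology) that simply lists every way of placing m appearances within a repeating block of n ksanas.

```python
from itertools import combinations

def rhythms(m, n):
    """All ways an event can appear m times in a repeating block of n ksanas.
    Each rhythm is a tuple of 1s (the event has occurrence) and 0s (a gap)."""
    for occupied in combinations(range(n), m):
        yield tuple(1 if k in occupied else 0 for k in range(n))

# A flat re-appearance rate of 4/7: four appearances every seven ksanas,
# realisable by C(7,4) = 35 different rhythms sharing the same rate.
patterns = list(rhythms(4, 7))
print(len(patterns))      # 35
print(patterns[0])        # (1, 1, 1, 1, 0, 0, 0)
```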

Limit to unitary re-appearance rate

It is by no means obvious that it is legitimate to treat ‘space’ and ‘time’ equivalently as dimensions of a single entity known as ‘Space/Time’. A ‘distance’ in time is not just a distance in space transferred to a different axis and much of the confusion in contemporary physics comes from a failure to accept, or at the very least confront, this fact. One reason why the dimensions are not equivalent is that, although a spatial dimension such as length remains the same if we now add on width, the entire spatial complex must disappear if it is to give rise to a similar one at the succeeding moment in time ─ you cannot simply ‘add’ on another dimension to what is already there.
However, for the time being I will follow accepted wisdom in treating a time distance on the same footing as a space distance. If this is so, it would seem that, in the case of an event-chain held together by causality, the causal influence emanating from the ‘kernel’ of one event capsule, and which brings about the selfsame event (or a different one) a ksana later in an equivalent spatial position, must traverse at least the ‘width’ or diameter of a space capsule, noted s0 (if the capsule is at rest). Why? Because if it does not at least get to the extremity of the first spatial capsule, a distance of ½ s0, and then get to the ‘kernel’ of the following one, nothing at all will happen and the event-chain will terminate abruptly.
This means that the ‘reappearance rate’ of an event in an event-chain must at least be 1/1 in absolute units, i.e. 1 s0/t0, one grid-space per ksana. Can it be greater than this? Could it, for example, be 2, 3 or 5 grid-spaces per ksana? Seemingly not. For if and when the ultimate event re-appears, say 5 ksanas later, the original causal impulse will have covered a distance of 5 s0 (s0 being the diameter or spatial dimension of each capsule) and would have taken 5 ksanas to do this. And so the space/time displacement rate would be the same (but not in this case the actual inter-event distances).
It is only the unitary rate, the distance/time ratio taken over a single ksana, that cannot be less (or more) than one grid-space per ksana : any fractional (but not irrational) re-appearance rate is perfectly conceivable provided it is spread out over several ksanas. A re-appearance rate of m/n s0/t0 simply means that the ultimate event in question re-appears in an equivalent spatial position on the Locality m times every n ksanas where m/n ≤ 1. And there are all sorts of different ways in which this rate can be achieved. For example, a re-appearance rate of 3/5 s0/t0 could be a repeating pattern such as

 

(Diagram: ‘Reappearance rates 1’ ─ two possible repeating rhythms for a re-appearance rate of 3/5 s0/t0, appearances shown as dots and non-appearances left blank)
and one pattern could change over into the other either randomly or, alternatively, according to a particular rule.
As one increases the difference between the numerator and the denominator, there are obviously going to be many more possible variations : all this could easily be worked out mathematically using combinatorial analysis. But note that it is the distribution of appearances (●) and gaps (○) that matters since, once a re-appearance rhythm has begun, there is no real difference between a ‘vertical’ rate of ●○●○ and ○●○● ─ it all depends on where you start counting. Patterns only count as different if this difference is recognizable no matter where you start examining the sequence.
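The combinatorial count mentioned above is easy to carry out by brute force. A minimal sketch, treating two rhythms as identical when one is a cyclic shift of the other (which is how I read ‘no matter where you start examining the sequence’); the function name distinct_rhythms is mine:

```python
from itertools import combinations

def distinct_rhythms(m, n):
    """Count rhythms of m appearances per n ksanas, treating two rhythms as
    the same if one is a cyclic shift of the other."""
    seen = set()
    for occupied in combinations(range(n), m):
        pattern = tuple(1 if k in occupied else 0 for k in range(n))
        seen.add(min(pattern[i:] + pattern[:i] for i in range(n)))  # canonical rotation
    return len(seen)

print(distinct_rhythms(3, 5))   # 2 -- the two patterns of the 3/5 example
print(distinct_rhythms(4, 7))   # 5
```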
Why does all this matter? Because, each time there is a blank line, this means that the ultimate event in question does not make an appearance at all during this ksana, and, if we are dealing with large denominators, this could mean very large gaps indeed in an event chain. Suppose, for example, an event-chain had a re-appearance rate of 4/786. There would only be four appearances (black dots) in a period of 786 ksanas, and there would inevitably be very large blank sections of the Locality when the ultimate event made no appearance.

Lower Limit of re-creation rate 

Since, by definition, everything in UET is finite, there must be a maximum number of possible consecutive non-reappearances. For example, if we set the limit at, say, 20 blank lines, or 200, or 2,000, this would mean that, each time this was observed, we could conclude that the event-chain had terminated. This is the UET equivalent of the Principle of Spatio-Temporal Continuity and effectively excludes phenomena such as an ultimate event in an event-chain making its re-appearance a century later than its first appearance. This limit would have to be estimated on the basis of experiments since I do not see how a specific value can be derived from theoretical considerations alone. It is tempting to estimate that this value would involve c* or a multiple of c* but this is only a wild guess ─ Nature does not always favour elegance and simplicity.
Such a rule would limit how ‘stretched out’ an event-chain can be temporally and, in reality, there may not after all be a hard and fast general rule : the maximal extent of the gap could decline exponentially or in accordance with some other function. That is, an abnormally long gap followed by the re-appearance of an event would decrease the possible upper limit slightly, in much the same way as chance associations increase the likelihood of an event-chain forming in the first place. If, say, there was an original limit of a gap of 20 ksanas, whenever the re-appearance rate had a gap of 19, the limit would be reduced to 19 and so on.
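Purely to make this last, tentative suggestion concrete, here is a toy simulation of the ‘shrinking limit’ rule. The probabilistic re-appearance model, the function name run_chain and all the parameter values are mine, chosen only for illustration; nothing in UET proper fixes them.

```python
import random

def run_chain(initial_limit=20, p_appear=0.3, max_ksanas=10_000, seed=1):
    """Toy simulation of the 'shrinking limit' rule: surviving a gap of g
    consecutive non-appearances lowers the allowed limit to g, and a gap that
    reaches the current limit terminates the chain."""
    random.seed(seed)
    limit, gap = initial_limit, 0
    for k in range(max_ksanas):
        if random.random() < p_appear:        # the event re-appears at this ksana
            if gap > 0:
                limit = min(limit, gap)       # a long gap, once survived, tightens the limit
            gap = 0
        else:
            gap += 1
            if gap >= limit:
                return f"chain terminated at ksana {k} (limit had shrunk to {limit})"
    return f"chain still alive after {max_ksanas} ksanas (limit now {limit})"

print(run_chain())
```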
It is important to be clear that we are not talking about the phenomenon of ‘time dilation’ which concerns only the interval between one ksana and the next according to a particular viewpoint. Here, we simply have an event-chain ‘at rest’ and which is not displacing itself laterally at all, at any rate not from the viewpoint we have adopted.

Re-appearance Rate as an intrinsic property of an event-chain  

Since Galileo, and subsequently Einstein, it has become customary in physics to distinguish, not between rest and motion, but rather between unaccelerated motion and  accelerated motion. And the category of ‘unaccelerated motion’ includes all possible constant straight-line speeds including zero (rest). It seems, then,  that there is no true distinction to be made between ‘rest’ and motion just so long as the latter is motion in a straight line at a constant displacement rate. This ‘relativisation’ of  motion in effect means that an ‘inertial system’ or a particle at rest within an inertial system does not really have a specific velocity at all, since any estimated velocity is as ‘true’ as any other. So, seemingly, ‘velocity’ is not a property of a single body but only of a system of at least two bodies. This is, in a sense, rather odd since there can be no doubt that a ‘change of velocity’, an acceleration, really is a feature of a single body (or is it?).
So what to conclude? One could say that ‘acceleration’ has ‘higher reality status’ than simple velocity since it does not depend on a reference point outside the system. ‘Velocity’ is a ‘reality of second order’ whereas acceleration is a ‘reality of first order’. But once again there is a difference between normal physics and UET physics in this respect. Although the distinction between unaccelerated and accelerated motion is taken over into UET (re-baptised ‘regular’ and ‘irregular’ motion), there is in Ultimate Event Theory a new kind of ‘velocity’ that has nothing to do with any other body whatsoever, namely the event-chain’s re-appearance rate.
When one has spent some time studying Relativity one ends up wondering whether after all “everything is relative” and whether the universe is evaporating away even as we look at it, leaving nothing but a trail of unintelligible mathematical formulae. In Quantum Mechanics (as Heisenberg envisaged it anyway) the properties of a particular ‘body’ involve the properties of all the other bodies in the universe, so that there remain very few, if any, intrinsic properties that a body or system can possess. However, in UET, there is a reality safety net. For there are at least two things that are not relative, since they pertain to the event-chain or event-conglomerate itself whether it is alone in the universe or embedded in the dense network of intersecting event-chains we view as matter. These two things are (1) the number of ultimate events in a given portion of an event-chain and (2) the re-appearance rate of events in the chain. These two features are intrinsic to every chain and have nothing to do with velocity or varying viewpoints or anything else.  To be continued SH

Note 1   This principle (Spatio-temporal Continuity) innocuous  though it may sound, has also had  extremely important social and political implications since, amongst other things, it led to the repeal of laws against witchcraft in the ‘advanced’ countries. For example, the new Legislative Assembly in France shortly after the revolution specifically abolished all penalties for ‘imaginary’ crimes and that included witchcraft. Why was witchcraft considered to be an ‘imaginary crime’? Essentially because it  violated the Principle of Spatio-Temporal Continuity. The French revolutionaries who drew the statue of Reason through the streets of Paris and made Her their goddess, considered it impossible to cause someone’s death miles away simply by thinking ill of them or saying Abracadabra. Whether the accused ‘confessed’ to having brought about someone’s death in this way, or even sincerely believed it, was irrelevant : no one had the power to disobey the Principle of Spatio-Temporal Continuity. The Principle got somewhat muddied  when science had to deal with electro-magnetism ─ Does an impulse travel through all possible intermediary positions in an electro-magnetic field? ─ but it was still very much in force in 1905 when Einstein formulated the Theory of Special Relativity. For Einstein deduced from his basic assumptions that one could not ‘send a message’ faster than the speed of light and that, in consequence,  this limited the speed of propagation of causality. If I am too far away from someone else I simply cannot cause this person’s death at that particular time and that is that. The Principle ran into trouble, of course,  with the advent of Quantum Mechanics but it remains deeply entrenched in our way of thinking about the world which is why alibis are so important in law, to take but one example. And it is precisely because Quantum Mechanics appears to violate the principle that QM is so worrisome and the chief reason why some of the scientists who helped to develop the theory such as Einstein himself, and even Schrodinger, were never happy with  it. As Einstein put it, Quantum Mechanics involved “spooky action at a distance” ─ exactly the same objection that the Cartesians had made to Newton. 

Note 2  Ideally, we would have a lighted three-dimensional framework flashing on and off and mark the successive appearances of the ‘object’ as, say, a red point of light comes on periodically when the lighted framework comes on.

Note 3 In principle, in the case of extremely high speed event-chains, these gaps should be detectable even today though the fact that such high speeds are involved makes direct observation difficult. 

Note 4 This is not how we specify an object’s position in ordinary conversation. As Bohm pertinently pointed out, we in effect speak in the language of topology rather than the language of co-ordinate geometry. We say such and such an object is ‘under’, ‘over’, ‘near’, ‘to the right of’ &c. some other well-known  prominent object, a Church or mountain when outside, a bookcase or fireplace when in a room.
Not only do coordinates not exist in Nature, they do not come at all naturally to us, even today. Why is this? Chiefly, I suspect because they are not only cumbersome but practically useless to a nomadic, hunting/food gathering life style and we humans spent at least 96% of our existence as hunter/gatherers. Exact measurement only becomes essential when human beings start to manufacture complicated objects and even then many craftsmen and engineers used ‘rules of thumb’ and ‘rough estimates’ well into the 19th century.

In its present state, Ultimate Event Theory falls squarely between two stools : too vague and ‘intuitive’ to even get a hearing from professional scientists, let alone be  taken seriously, it is too technical and mathematical to appeal to the ‘ordinary reader’. Hopefully, this double negative can be eventually turned into a double positive, i.e. a rigorous mathematical theory capable of making testable predictions that nonetheless is comprehensible and has strong intuitive appeal. I will personally not be able to take the theory to the desired state because of my insufficient mathematical and above all computing expertise : this will be the work of others. What I can do is, on the one hand, to strengthen the mathematical, logical side as much as I can while putting the theory in a form the non-mathematical reader can at least comprehend. One friend in particular who got put off by the mathematics asked me whether I could not write something that gives the gist of the theory without any mathematics at all. Thus this post which recounts the story of how and why I came to develop Ultimate Event Theory in the first place some thirty-five years ago.

 Conflicting  beliefs

Although scientists and rationalists are loath to admit it, personal temperament and cultural factors play a considerable part in the development of theories of the universe. There are always individual and environmental factors at work although the accumulation of unwelcome but undeniable facts may eventually overpower them. Most people today are, intellectually speaking, opportunists with few if any deep personal convictions, and there are good reasons for this. As sociological and biological entities we are strongly impelled to accept what is ‘official doctrine’ (in whatever domain) simply because, as a French psycho-analyst whose name escapes me famously wrote, “It is always dangerous to think differently from the majority”.
At the same time, one is inclined, and in some cases compelled, to accept only those ideas about the world that make sense in terms of our own experience. The result is that most people spend their lives doing an intellectual balancing act between what they ‘believe’ because this is what they are told is the case, and what they ‘believe’ because this is what their experience tells them is (likely to be) the case. Such a predicament is perhaps inevitable if we decide to live in society and most of the time the compromise ‘works’; there are, however, moments in the history of nations and in the history of a single individual when the conflict becomes intolerable and something has to give.

The Belief Crisis : What is the basis of reality?

Human existence is a succession of crises interspersed with periods of relative stability (or boredom). First, there is the birth crisis (the most traumatic of all), the ‘toddler crisis’ when the infant starts to try to make sense of the world around him or her, the adolescent crisis, the ‘mid-life’ crisis which kicks in at about forty and the age/death crisis when one realizes the end is nigh. All these crises are sparked off by physical changes which are too obvious and powerful to be ignored with the possible exception of the mid-life crisis which is not so much biological as  social (‘Where am I going with my life?’ ‘Will I achieve what I wanted?’).
Apart from all these crises ─ as if that were not enough already ─  there is the ‘belief crisis’. By ‘crisis of belief’ I mean pondering the answer to the question ‘What is real?’ ‘What do I absolutely have to believe in?’. Such a crisis can, on the individual level, come at any moment, though it usually seems to hit one between the eyes midway between the adolescent ‘growing up’ crisis and the full-scale mid-life crisis. As a young person one couldn’t really care less what reality ‘really’ is, one simply wants to live as intensely as possible and ‘philosophic’ questions can just go hang. And in middle age, people usually find they want to find some ‘meaning’ in life before it’s all over. Now, although the ‘belief crisis’ may lead on to the ‘middle age meaning crisis’ it is essentially quite different. For the ‘belief crisis’ is not a search for fulfilment but simply a deep questioning about the very nature of reality, meaningful or not. It is not essentially an emotional crisis nor is it inevitable ─ many people and even entire societies by-pass it altogether without being any the worse off, rather the reverse (Note 1).
Various influential thinkers in history went through such a  ‘belief crisis’ and answered it in memorable ways : one thinks at once of the Buddha or Socrates. Of all peoples, the Greeks during the Vth and VIth centuries BC seem to have experienced a veritable epidemic of successive ‘belief crises’ which is what  makes them so important in the history of civilization  ─ and also what made the actual individuals and city-states so unstable and so quarrelsome. Several of the most celebrated answers to the ‘riddle of reality’ date back to this brilliant era. Democritus of Abdera answered the question, “What is really real?” with the staggering statement, “Nothing exists except atoms and void”. The Pythagoreans, for their part, concluded that the principle on which the universe was based was not so much physical as numerical, “All is Number”. Our entire contemporary scientific and technological ‘world-view’ (‘paradigm’) can  be traced back to the  two giant thinkers, Pythagoras and Democritus, even if we have ultimately ‘got beyond’  them since we have ‘split the atom’ and replaced numbers as such by mathematical formulae. In an equally turbulent era, Descartes, another major ‘intellectual crisis’ thinker, famously decided that he could disbelieve in just about everything but not that there was a ‘thinking being’ doing the disbelieving, cogito ergo sum (Note 2).
In due course, in my mid to late thirties, at about the time of life when Descartes decided to question the totality of received wisdom, I found myself with quite a lot of time on my hands and a certain amount of experience of the vicissitudes of life behind me to ponder upon. I too became afflicted by the ‘belief crisis’ and spent the greater part of my spare time (and working time as well) pondering what was ‘really real’ and discussing the issue interminably with the same person practically every evening (Note 3). 

Temperamental Inclinations or Prejudices

 My temperament (genes?) combined with my experience of life pushed me in certain well-defined philosophic directions. Although I only  started formulating Eventrics and Ultimate Event Theory (the ‘microscopic’ part of Eventrics) in the early nineteen-eighties and by then had long since retired from the ‘hippie scene’, the heady years of the late Sixties and early Seventies provided me with my  ‘field notes’ on the nature of reality (and unreality), especially the human part of it. The cultural climate of this era, at any rate in America and the West, may be summed up by saying that, during this time “a substantial number of people between the ages of fifteen and thirty decided that sensations were far more important than possessions and arranged their lives in consequence”. In practice this meant forsaking steady jobs, marriage, further education and so on and spending one’s time looking for physical thrills such as doing a ton up on the M1, hitch-hiking aimlessly around the world, blowing your mind with drugs, having casual but intense sexual encounters and so on. Not much philosophy here but when I and other shipwrecked survivors of the inevitable débâcle took stock of the situation, we retained a strong preference for a ‘philosophy’  that gave primary importance to sensation and personal experience.
The physical requirement ruled out traditional religion since most religions, at any rate Christianity in its later public  form, downgraded the body and the physical world altogether in favour of the ‘soul’ and a supposed future life beyond the grave. The only aspect of religion that deserved to be taken seriously, so I felt, was mysticism since mysticism is based not on hearsay or holy writ but on actual personal experience. The mystic’s claim that there was a domain ‘beyond the physical’ and that this deeper reality can to some degree actually be experienced within this life struck me as not only inspiring but even credible ─ “We are more than what we think we are and know more than what we think we know” as someone (myself) once put it.
At the same time, my somewhat precarious hand-to-mouth existence had given me a healthy respect for the ‘basic physical necessities’ and thus inclined me to reject all theories which dismissed physical reality as ‘illusory’, tempting though this sometimes is (Note 4). So ‘Idealism’ as such was out. In effect I wanted a belief system that gave validity and significance to the impressions of the senses, sentio ergo sum as opposed to Descartes’ cogito ergo sum or, better, sentio ergo est : ‘I feel therefore there is something’.

Why not physical science ?

 Why not indeed. The main reason that I didn’t decide, like most people around me,  that “science has all the answers” was that, at the time, I knew practically no science. Incredible though this seems today, I had managed to get through school and university without going to a single chemistry or physics class and my knowledge of biology was limited to one period a week for one year and with no exam at the end of it.
But ignorance was not the only reason for my disqualifying science as a viable ‘theory of everything’. Apart from being vaguely threatening ─ this was the era of the Cold War and CND ─ science simply seemed monumentally irrelevant to every aspect of one’s personal daily life. Did knowing about neutrons and neurons make you  more capable of making more effective decisions on a day to day basis? Seemingly not. Scientists and mathematicians often seemed to be less (not more) astute in running their lives than ordinary practical people.
Apart from this, science was going through a difficult period when even the physicists themselves were bewildered by their own discoveries. Newton’s billiard ball universe had collapsed into a tangled mess of probabilities and  uncertainty principles : when even Einstein, the most famous modern scientist, could not manage to swallow Quantum Theory, there seemed little hope for Joe Bloggs. The solid observable atom was out and unobservable quarks were in, but Murray Gell-Mann, the co-originator of the quark theory, stated on several occasions that he did not ‘really  believe in quarks’ but merely used them as ‘mathematical aids to sorting out the data’. Well, if even he didn’t believe in them, why the hell should anyone else? Newton’s clockwork universe was bleak and soulless but was at least credible and tactile : modern science seemed nothing more than a farrago of  abstruse nonsense that for some reason ‘worked’ often to the amazement of the scientists themselves.
There was another, deeper, reason why physical science appeared antipathetic to me at the time : science totally devalues personal experience. Only repeatable observations in laboratory conditions count as fact : everything else is dismissed as ‘anecdotal’. But the whole point of personal experience is that (1) it is essentially unrepeatable and (2) it must be spontaneous if it is to be worthwhile. The famous ‘scientific method’ might have a certain value if we are studying lifeless atoms but seemed unlikely to uncover anything of interest in the human domain : the best ‘psychologists’, such as conmen and dictators, are sublimely ignorant of psychology. Science essentially treats everything as if it were dead, which is why it struggles to come up with any strong predictions in the social, economic and political spheres. Rather than treat living things as essentially dead, I was more inclined to treat ‘dead things’ (including the universe itself) as if they were in some sense alive.

Descartes’ Thought Experiment 

Although I don’t think I had actually read Descartes’ Discours sur la méthode at the time, I had heard about it and the general idea was presumably lurking at the back of my mind. Supposedly, Descartes who, incredibly, was an Army officer at the time, spent a day in what is described in history books as a poêle (‘stove’) pondering the nature of reality. (The ‘stove’ must have been a small chamber close to a source of heat.) Descartes came to the conclusion that it was possible to disbelieve in just about everything except that there was a ‘thinking  being’, cogito ergo sum. To anyone who has done meditation, even in a casual way, Descartes’ conclusion appears by no means self-evident. The notion of individuality drops away quite rapidly when one is meditating and all one is left with is a flux of mental/physical impressions. It is not only possible but even ‘natural’ to temporarily disbelieve in the reality of the ‘I’ (Note 5)─ but one cannot and does not disbelieve in the reality of the various sensations/impressions that are succeeding each other as ‘one’ sits (or stands).

Descartes’ thought experiment nonetheless seemed  suggestive and required, I thought, more precise evaluation. Whether the ‘impressions/sensations’ are considered to be mental, physical or a mixture of the two, they are nonetheless always events and as such have the following features:

(1) they are, or appear to be, ‘entire’, ‘all of a piece’, there is no such thing as a ‘partial’ event/impression;

(2) they follow each other very rapidly;

(3) the events do not constitute a continuous stream, on the contrary there are palpable gaps between the events (Note 6);

(4) there is usually a connection between successive events, one thought ‘leads on’ to another and we can, if we are alert enough, work backwards from one ‘thought/impression’ to its predecessor and so on back to the start of the sequence;

(5) occasionally ‘thought-events’ crop up that seem to be completely disconnected from all previous ‘thought-events’, arriving as it were ‘out of the blue’.

Now, with these five qualities, I already have a number of features which I believe must be part of reality, at any rate individual ‘thought/sensation’  reality. Firstly, whether my thoughts/sensations are ‘wrong’, misguided, deluded or what have you, they happen, they take place, cannot be waved away. Secondly, there is always sequence : thought ‘moves from one thing to another’ by specific stages. Thirdly, there are noticeable gaps between the thought-events. Fourthly, there is  causality : one thought/sensation gives rise to another in a broadly predictable and comprehensible manner. Finally, there is an irreducible random element in the unfolding of thought-events — so not everything is deterministic apparently.
These are properties I repeatedly observe and feel I have to believe in. There are also a number of conclusions to be drawn from the above; like all deductions these ‘derived truths’ are somewhat less certain than the direct impressions, are ‘second-order’ truths as it were, but they are nonetheless compelling, at least to me. What conclusions? (1) Since there are events, there  must seemingly be a ‘place’ where these events can and do occur, an Event Locality. (2) Since there are, and continue to be, events, there  must be an ultimate source of events, an Origin, something distinct from the events themselves and also (perhaps) distinct from the Locality.
A further and more radical conclusion is that this broad schema can legitimately be generalized to ‘everything’, at any rate to everything in the entire known and knowable universe. Why make any hard and fast distinction between mental events and their features and ‘objective’ physical events and their features? Succession, discontinuity and causality are properties of the ‘outside’ world as well, not just that of the private world of an isolated thinking individual.
What about other things we normally assume exist such as trees and tables and ourselves? According to the event model, all these things must either be (1) illusory or irrelevant (same thing essentially) (2) composite and secondary and/or (3) ‘emergent’.
Objects are bundles of events that keep repeating more or less in the same form. And though I do indeed believe that ‘I’ am in some sense a distinct entity and thus ‘exist’, this entity is not fundamental, not basic, not entirely reducible to a collection of events. If the personality exists at all ─ some persons  have doubts on this score ─ it is a complex, emergent entity. This is an example of a ‘valid’ but not  fundamental item of reality.
Ideas, if they take place in the ‘mind’, are events whether true, false or meaningless. They are ‘true’ to the extent that they can ultimately be grounded in occurrences of actual events and their interactions, or interpretations thereof. I suppose this is my version of the ‘Verification Principle’ : whatever is not grounded in actual sensations is to be regarded with suspicion. This does not necessarily invalidate abstract or metaphysical entities but it does draw a line in the sand. For example, contrary to most contemporary rationalists and scientists, I do not entirely reject the notion of a reality beyond the physical, because the feeling that there is something ‘immeasurable’ and ‘transcendent’ from which we and the world emerge is a matter of experience to many people : it is a part of the world of sensation, though somewhat at the limits of it. This reality, if it exists, is ‘beyond name and form’ (as Buddhism puts it); it is ‘non-computable’, ‘transfinite’. But I entirely reject the notion of the ‘infinitely large’ and the ‘infinitely small’ which has bedevilled science and mathematics, since these (pseudo)entities are completely outside personal experience and always will be. With the exception of the Origin (which is a source of events but not itself an event), my standpoint is that everything, absolutely everything, is made up of a finite number of ultimate events, and an ultimate event is an event that cannot be further decomposed. This principle is not, perhaps, quite so obvious as some of the other principles. Nonetheless, when considering ‘macro’ events ─ events which clearly can be decomposed into smaller events ─ we have two and only two choices : either the process comes to an end with an ‘ultimate’ event or it carries on interminably, never coming to an end. I believe the first option is by far the more reasonable one.
With this, I feel I have the bare bones of not just a philosophy but a ‘view of the world’, a schema into which pretty well everything can be fitted ─ the contemporary buzzword is ‘paradigm’. Like Descartes emerging from his ‘stove’, I considered  I had a blueprint for reality or at least that part of it amenable to direct experience. To sum up, I could disbelieve, at least momentarily,  in just about everything but not that (1) there were events ; (2) that events occurred successively; (3) were subject to some sort of omnipresent causal force with  occasional lapses into lawlessness. Also, (4) these events happened somewhere (5) emerged from something or somewhere and (6) were decomposable into ‘ultimate’ events that could not be further decomposed.  This would do for a beginning, other essential features would be added to the mix as and when required.                                                                             SH

Note 1  Many extremely successful societies seem to have been perfectly happy in  avoiding the ‘intellectual crisis’ altogether : Rome did not produce a single original thinker and the official Chinese Confucian world-view changed little over a period of more than two thousand years. This was doubtless  one of the main reasons why these societies lasted so long while extremely volatile societies such as VIth century Athens or the city states of Renaissance Italy blazed with the light of a thousand suns for a few moments and then were seen and heard no more.

Note 2 “Je pris garde que, pendant que je voulais ainsi penser que tout était faux, il fallait nécessairement que moi, qui le pensais, fusse quelque chose. Et remarquant que cette vérité : je pense, donc je suis, était si ferme et si assurée, que toutes les plus extravagantes suppositions des sceptiques n’étaient pas capables de l’ébranler, je jugeai que je pouvais la recevoir, sans scrupule, pour le premier principe de la philosophie que je cherchais.”
      René Descartes, Discours sur la Méthode Quatrième Partie
“I noted, however, that even while engaged in thinking that everything was false, it was nonetheless a fact that I, who was engaged in thought, was ‘something’. And observing that this truth, I think, therefore I am, was so strong and so incontrovertible, that the most extravagant proposals of sceptics could not shake it, I concluded that I could justifiably take it on  board, without misgiving, as the basic proposition of philosophy that I was looking for.”  [loose translation]

Note 3  The person in question was, for the record, a primary school teacher by the name of Marion Rowse, unfortunately now long deceased. She was the only person to whom I spoke about the ideas that eventually became Eventrics and Ultimate Event Theory and deserves to be remembered for this reason.

Note 4   As someone at the other end of the social spectrum, but who seemingly also went through a crisis of belief at around the same time, put it, “I have gained a healthy respect for the objective aspect of reality by having lived under Nazi and Communist regimes and by speculating in the financial markets” (Soros, The Crash of 2008 p. 40).
According to Boswell, Dr. Johnson refuted Bishop Berkeley, who argued that matter was essentially unreal, by kicking a wall. In a sense this was a good answer but perhaps not entirely in the way Dr. Johnson intended.  Why do I believe in the reality of the wall? Because if I kick it hard enough I feel pain and there is no doubt in my mind that pain is real — it is a sensation. The wall must be accorded some degree of reality because, seemingly, it was the cause of the pain. But the reality of the wall, is, as it were, a ‘derived’ or ‘secondary’  reality : the primary reality is the  sensation, in this case the pain in my foot. I could, I argued to myself, at a pinch, disbelieve in the existence of the wall, or at any rate accept that it is not perhaps so ‘real’ as we like to think it is, but I could not disbelieve in the reality of my sensation. And it was not even important whether my sensations were, or were not, corroborated by other people, were entirely ‘subjective’ if you like, since, subjective or not, they remained sensations and thus real.

Note 5 In the Chuang-tzu Book, Yen Ch’eng, a disciple of the philosopher Ch’i, is alarmed because his master, when meditating, appeared to be “like a log of wood, quite unlike the person who was sitting there before”. Ch’i replies, “You have put it very well; when you saw me just now my ‘I’ had lost its ‘me’” (Chuang-tzu Book II. 1)

Note 6 The practitioner of meditation is encouraged to ‘widen’ these gaps as much as possible (without falling asleep) since it is by way of the gaps that we can eventually become familiar with the ‘Emptiness’ that is the origin and end of everything.

 

What is time? It is succession. Succession of what? Of events ─ events that take place ‘one after the other’. If there is no succession of events, either nothing happens at all or everything that can happen occurs in an endless, eternal present.
We measure time by referring to two (or more) events which are easily recognizable, themselves having  negligible extension ‘in time’  and which repeat in a fashion we have reason to believe does not change appreciably. Tick-tock. The sounds are sharp, easily recognizable and the old-fashioned pendulum swings fairly regularly ─ though, of course, certain crystals vibrate a good deal more regularly. If we do  not have two  easily recognizable, repeating,  ‘marker events’ which signify the beginning and end of an ‘interval of time’, ‘duration’ is vague and subjective. This shows that duration is a secondary notion compared to succession. Without succession, no duration ─ or at least no accurately measurable duration. One can argue interminably about how long the interval between two events ‘really’ is, or was, but the events themselves either happen/happened, or they don’t/didn’t happen. Occurrence and succession are primary features of physical reality, duration a secondary property.
As I see it, physics ought by rights to be based chiefly on notions of occurrence, succession and causality since all these ‘things’ are primary ─ causality perhaps a shade less fundamental than the other two (since there can be occurrence without causality as in the case of ‘random’ events). In Ultimate Event Theory, the occurrence of an event is absolute in the sense that its occurrence has nothing whatsoever to do with anyone’s location, state of mind, state of motion in relation to other objects, and so on. Also,  since all events are (by hypothesis) made up of a finite number of ultimate events, every event-chain A-B whose first event is A and last event B has an event-number which is a positive integer. But the distances between the successive ultimate events composing the chain are secondary features : they have minimum and maximum values but otherwise are flexible and are legitimately evaluated quite differently according to one’s viewpoint and ‘state of motion’ (Note 1).

Space/Time Event-Capsules and ‘Objects’ 

In the preceding posts, it was hypothesized that the region occupied by the ‘Space-Time Rectangle’ of a single event-chain or event-complex is constant. In the simplest case of a single repeating ultimate event, we have a repeating unitary four-dimensional ‘Event Capsule’ successively occupying a region s0 × s0 × s0 × t0 = s0³ t0. We, of course, do not ‘see’, or otherwise register, each individual event capsule but run several of them together much as the eye/brain runs together the separate stills that compose a ‘motion’ picture. We eventually become aware of an ‘event block’ which is nonetheless (according to UET) composed of a set of identical unitary event capsules, each of them occupying a region s0³ t0. This ‘occupied region’ is thus d³ s0³ × t t0 where d and t are integers. If all the available positions within this region are occupied by repeating ultimate events, we have a repeating Event Conglomerate of volume d³ × t in ‘absolute’ units. For simplicity, we shall only consider one of the spatial dimensions and confine ourselves to the Space/Time Event Rectangle of d × t with d and t in ‘absolute’ units. This is the equivalent in UET of a ‘solid object’.
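As a purely illustrative sketch (the function names and the toy numbers below are mine, not part of UET), this ‘absolute’ book-keeping can be written out in a few lines of Python: the occupied region is specified by the two integers d and t, and its ‘volume’ in absolute units is simply d³ × t, or d × t for the simplified rectangle.

```python
# Illustrative sketch only: 'absolute' (integer) units for a UET occupied region.
# d = grid-positions along each spatial side, t = number of ksanas.

def conglomerate_volume(d: int, t: int) -> int:
    """Volume of a fully occupied Event Conglomerate in absolute units: d**3 * t."""
    return d ** 3 * t

def rectangle_area(d: int, t: int) -> int:
    """Area of the simplified Space/Time Event Rectangle (one spatial dimension kept)."""
    return d * t

if __name__ == "__main__":
    print(conglomerate_volume(4, 10))   # 640 unitary capsules
    print(rectangle_area(4, 10))        # 40 unitary rectangles
```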
Of course, it is most unlikely that all neighbouring spatial positions would be occupied by ultimate events which, taken together, constitute the equivalent of an ‘object’ or body : there will, almost certainly, be sizeable gaps just as there are ‘holes’ in an apparently solid body. Nonetheless, at least in imagination, we can connect up any two positions on the Locality and construct a ‘Space/Time’ Event Region which, in the simplified case, reduces to a Space/Time Event Rectangle composed of unitary Space/Time Event Rectangles.

 

 

In the following simplified diagram each line represents a section of the Event Locality at one particular ksana (‘moment in time’) and each square a grid-position.

ksana 0  ……..□□□□□□□□□□□□□□□□□□□□□………
ksana 1  ……..□□□□□□□□□□□□□□□□□□□□□………
ksana 2  ……..□□□□□□□□□□□□□□□□□□□□□………
ksana 3  ……..□□□□□□□□□□□□□□□□□□□□□………
ksana 4  ……..□□□□□□□□□□□□□□□□□□□□□………

Any of these little squares symbolizing two-dimensional grid positions could, in theory, receive an ultimate event and any two arbitrarily selected grid-positions could be connected up, for example those marked in black :

……..□∎□□□□□□□□□□□□□□□□□□□□………
……..□□□□□□□□□□□□□□□□□□□□□………
……..□□□□□□□□∎□□□□□□□□□□□□□………
……..□□□□□□□□□□□□□□□□□□□□□………

If we treat these grid-positions as extremities of an ‘occupied region’ equivalent to a perfectly dense ‘object’ occupying the entire rectangle, we have

……..□∎∎∎∎∎∎∎∎∎□□□□□□□□□□□□□□□□□………
……..□∎∎∎∎∎∎∎∎∎□□□□□□□□□□□□□□□□□………
……..□∎∎∎∎∎∎∎∎∎□□□□□□□□□□□□□□□□□………

An object in UET is, then,  an identically repeating event conglomerate that only persists because the individual ultimate events are powerfully bonded both ‘laterally’, within a single ksana, and, more significantly, ‘vertically’, from one ksana to the next. If ultimate events are not bonded ‘laterally’, they do not constitute an ‘object’ : their proximity is entirely coincidental and if one or more of these ultimate events for some reason ceased to reappear, this would have no consequences on the others. And if the ultimate events are not bonded ‘vertically’, i.e. lack the property called persistence, the entire conglomerate does not repeat : it simply disappears without a trace.
Now, since everything in UET is finite (except the extent of the Locality itself), there must be a limit to the possible extent of ‘lateral bonding’ : that is, an ‘object’ cannot exceed certain dimensions at any one ksana. There is presumably also a limit to the number of times any ultimate event can repeat though, to judge by the length of time that the present universe, and certain elementary particles such as protons, have existed, this limit must be inconceivably great. Much more important practically is the ‘lateral displacement limit’. For example, consider two occupied grid positions at successive ksanas

……..□∎□□□□□□□□□□□□□□□□□□□□………
……..□□□□□□□□□□□□□□□□□∎□□□□………
……..□□□□□□□□□□□□□□□□□□□□□………
……..□□□□□□□□□□□□□□□□□□□□□………

Could these two ultimate events be causally connected, i.e. part of an event-chain? It is quite irrelevant that there are a large number of blank spaces between the two occupied positions since, in UET, it is not necessary for an event chain to fill the intervening space ─ a completely dense repeating event conglomerate is almost certainly a rarity and even perhaps  impossible.
The only problem is thus the extent of the lateral displacement from one ksana to the next since this has a maximum possible value, traditionally noted as c but in UET noted as c*. This gives us a test as to whether the two events occupying two squares are, or could conceivably be, causally connected. If the lateral distance covered in one ksana exceeds a certain number of spaces, namely c*, the events are not causally connected. Of course, it is essentially a matter of convenience, or of viewpoint, which of two regular event chains is considered to be ‘vertical’ and which ‘slanting’, but this does not mean that lateral displacement of events from one ksana to another does not occur. We can imagine the original ultimate event repeating in an exactly equivalent spatial position at the very next ksana, and simultaneously producing a ‘clone’ of itself so many spaces to the right or left.

……..□∎□□□□□□□□□□□□□□□□□□□□………
……..□∎□□□□□□□□□□□□□□□□□□□□………

If the distance is too great, we can confidently conclude that the red ultimate event has not been produced by the black one since the lateral distance exceeds the reach of a causal impulse emanating from the point of its occurrence (Note 3). All this is, of course, well known to students of Relativity, but it is important to recognize how naturally and inevitably this result (and all that follows from it) arises once we have accepted once and for all that there must, for a priori reasons, be a spatial limit to the transmission of a causal impulse. Someone in another place and time could have (and possibly did) hit upon such a conclusion purely from first principles centuries before Einstein was even born (Note 4 Galileo…..) and when there would have been no way of carrying out appropriate experimental tests, just as there was, in Newton’s own day, no way of showing that two small objects suspended in a room and free to move actually attract each other.
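The test itself reduces to a single inequality in integers. Below is a minimal sketch in Python (the function name and the toy value of c* are my own, not part of UET): two occupied grid-positions are given as (position, ksana) pairs, and the later one can only belong to an event-chain issuing from the earlier one if the required lateral displacement never exceeds c* grid-positions per ksana.

```python
# Illustrative sketch of the UET causal-connection test (notation is mine).
# An occupied grid-position is given as (x, k): x = lateral position (integer),
# k = ksana index (integer). c_star is the maximum lateral displacement per ksana.

def possibly_causally_connected(event_a, event_b, c_star: int) -> bool:
    """True if event_b could lie on an event-chain emanating from event_a."""
    x_a, k_a = event_a
    x_b, k_b = event_b
    if k_b <= k_a:                      # causality only runs from earlier to later ksanas
        return False
    ksanas = k_b - k_a
    lateral = abs(x_b - x_a)
    return lateral <= c_star * ksanas   # at most c* grid-positions per ksana

if __name__ == "__main__":
    C_STAR = 5                           # toy value; the real c* is assumed to be enormous
    print(possibly_causally_connected((1, 0), (8, 2), C_STAR))   # True : 7 <= 5 * 2
    print(possibly_causally_connected((1, 0), (20, 2), C_STAR))  # False: 19 > 5 * 2
```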

Causality and the unitary Space/Time Event Capsule

In normal physics, it is assumed that the ‘natural’ state of a body is to carry on existing more or less in the same state from one moment to the next, and if the body does change, a fortiori disappear, we conclude that an external (or sometimes internal) force is at work. An ‘object’ does not ‘cause itself to happen’ as it were : the very idea sounds absurd. However, in UET, this is precisely what does prevail since the ‘natural state’ of everything is to appear once and then disappear for ever. The reappearance of an ultimate event more or less in the same position on the Locality is just as much the result of causality as the sudden appearance of a completely new, and, in general, different, event regarded as the ‘effect’. The continued existence of anything is ‘self-caused’ (Note 5 Sheldrake).
So, before examining repeating batches and conglomerates of ultimate events, we should examine exactly what is involved in the reappearance of a single ultimate event since without this happening there will be no repeating conglomerates, no apparently solid objects, no universe, nothing at all except a sort of “blooming, buzzing confusion” (William James) brought about by ephemeral random events emerging from the Event Origin and at once disappearing back into it.
For anything to last at all, certain conditions must be met. The first and most important is that the causal influence which brings about the repetition of the ultimate event must at least be able to traverse the distance from the centre of one Event Capsule to another. For, just as the  nucleus of an atom does not extend to the outer reaches of the atom but on the contrary is marooned in a comparatively vast empty area, an ultimate event in UET is conceived as occupying a minute ‘kernel’ at the centre of a Space/Time Event Capsule. When such an ultimate event is isolated or part of an event-chain ‘at rest’, the dimensions of all such capsules are fixed and are always the same. These ‘rest’ dimensions are the ‘true’ dimensions of the capsule and are s0 for the spatial dimension and t0 for the ‘time’ dimension.
‘Distance’, both spatial and temporal, is not absolute in UET but can be legitimately ‘measured’ (or, better, experienced) in different ways — though there are nonetheless, as for everything in UET, minimal and maximal values. What is ‘absolute’ is firstly, the number of ultimate events in an event-chain (or portion thereof) and, secondly, the total extent of the ‘occupied region’ on the Locality. In the simplest possible case which is what we are considering here, we have just a single ultimate event which repeats in the ‘same’ position a ksana later. To accurately model what I believe goes on in reality, it would be necessary not only to have a three-dimensional grid or lattice-frame extending as far as the eye can see in all three directions but also for it to consist of lines of lights which are switched on and off regularly. An ultimate event would then be represented by, say, a red flash inside a rectangular ‘box’ consisting of lines of little lights arranged in series. The whole framework of ‘fairy lights’ representing the Event Locality is then switched on and off, and each time the framework is switched on, a red flash appears inside a lower ‘box’. In the case of two appearances, the ‘vertical displacement’ of the red light would scarcely be noticeable, but if we keep on switching the whole framework on and off and having the red light appear slightly below where it was previously, we obtain a rough  idea of a ‘stationary’ event-chain. To represent ‘lateral displacement’, which is generally what is understood by ‘movement’, we would need to have a second flashing coloured light displacing itself regularly in a slanting straight line relative to the red one. By speeding up the flashings, we would get an impression of ‘continuous movement’ even though nothing is moving at all, merely flashing on and off.
We have then, as it were, arrays of 3-dimensional boxes representing the Event Capsules at a particular ksana, and a red light, symbolizing an ultimate event, which appears inside first one box and then another below it at the very next ksana.

[Figure: rows of event capsules]

For a ‘rest’ event-chain, which is what we are assuming here, the event capsules have fixed ‘rest’ values for both spatial and temporal distances. If we neglect two of the three spatial dimensions we thus have a Space-Time Rectangle of extent s0 by t0.

 

Minimum and Maximum Displacement Rates 

Imagine then an ultimate event occurring at a certain spot on the Locality, or rather at the kernel (centre) of an empty grid-space. It remains there for the space of one ksana, disappears, and reappears (or not) at an equivalent spot at the next ksana. How far has the causal influence travelled? The ultimate event is conceived as occurring at the very centre of a spatial cube of side s0, as in the figure below.

[Figure: a single event capsule]

So the causal impulse must cover very nearly ½ s0 within the first grid-space and another ½ s0 within the next grid-space at the following ksana in order to be able to ‘recreate’ a clone of the original ultimate event. The total spatial distance covered is thus (very nearly) ½ s0 + ½ s0 = 1 s0 and this distance has been covered within the ‘space’ of a single ksana. The ultimate event’s ‘vertical displacement rate’ is thus 1 s0/t0, one grid-space per ksana. And if it keeps re-appearing regularly in the same way, it will keep the same ‘vertical’ displacement rate — ‘vertical’ because it is not displacing itself either to the left or the right at each ksana relative to where it was previously.
So what happens if the causal influence is not strong enough to traverse such a distance? In such a case, the ultimate event does not re-appear and that is the end of the matter. (I am assuming that the occurrence has taken place at a sparsely populated region of the Locality so that the ultimate event is effectively isolated and not subject to any influence from other event-chains.) Whether or not a causal influence that fails to ‘go the distance’ subsequently completes its work in subsequent ksanas need not concern us for the moment : the point is that during (or ‘at’) the following ksana the ultimate event either does or does not reappear, an open/shut case.
A displacement rate of 1 s0/t0 is thus the minimum (vertical) displacement rate possible. Moreover, the dimensions s0 and t0 are fixed and thus their product, the rectangular ‘area’ s0 × t0, also. It is a postulate of UET that this rectangular area (and the equivalent 4-dimensional region) remains constant though the ‘length’ of the ‘sides’ can change, i.e. for variable ‘sides’ sv and tv, sv tv = s0 t0 = Ω, a constant, and so tv/t0 = s0/sv. It would seem that, in our universe at any rate, s0 is a maximum which makes t0 a minimum. So the vertical displacement rate is maximum spatial unit distance/minimum temporal unit distance. Note that, since t0 is a minimum, there is no possible change that can occur anywhere within a smaller interval of time ─ ‘time’ is not infinitely divisible. Also, since there are limits to everything, the minimum spatial distance, which can be noted su (for s ultimate), will be paired off with the maximum ‘temporal length’ of a ksana, tmax, and the other extreme ratio will be smin/tmax = su/tmax where su × tmax = s0 × t0 since the area of the rectangle stays the same.
But we do not need to pay attention to this for the moment since the sides are not at present going to change. Given that the rate 1 s0/t0 is the least possible, what about the maximum possible rate? Rather surprisingly, this also turns out to be 1 s0/t0 ! For suppose a more powerful causal impulse is able to carry itself over two or more event capsules and recreate the selfsame ultimate event two, four, or a hundred ksanas later. Even if it could do this, the rate would still not exceed 1 s0/t0 since, if the ultimate event were re-created four ksanas later, the causal impulse would have traversed a spatial distance of 4 s0 and taken exactly four ksanas to do it.

Relation of vertical displacement rate to ‘speed’ or lateral displacement rate  

In ‘normal’ physics ─ and normal conversation ─ we completely neglect what I term ‘vertical displacement rate’ since we do not conceive of ‘bodies’ appearing and re-appearing. What we are passionately concerned about is what we refer to as ‘speed’, and this in UET is what I call ‘relative lateral displacement rate’ ─ ‘lateral’ because the ‘time’ axis is usually imagined as being ‘vertical’. Now, it is today (almost) universally recognized that there is a maximum ‘speed’ for all bodies, namely c which is roughly 3 × 10⁸ metres/sec in macroscopic units. In UET, since everything has a limit, there is a maximum lateral displacement rate for the transmission of causality whether or not light and electro-magnetic radiation actually do travel at exactly this rate. This ‘relative lateral displacement rate’ covers not only cases of ‘cause and effect’ in the normal sense, but the special kind of causality involved when an ultimate event reinvents itself so many spaces to the right or left of its spatial position at the previous ksana. We can thus imagine an ultimate event being re-created (1) in the same position, i.e. zero spaces to the right or left; (2) one space to the right (or left) of the original position, (3) two spaces to the right, and so on up to, but not exceeding, c* spaces to the right where c* is a positive integer.

……..□∎□□□□□□□□□□□□□□□□□□□□………
……..□□□□□□□□∎□□□□□□□□□□□□□………

←  d spaces  →

……..□∎□□□□□□□□□□□□□□□□□□□□………
……..□□□□□□□□□□□□□□□∎□□□□□□………

←                     c* spaces           →

We have, in each case, a ‘Causal’ Space/Time Event Rectangle of dimensions d spaces × 1 ksana where d can attain but not exceed c*. Each of these positions could in principle receive an ultimate event and if all are filled (an unlikely occurrence) we will have at each ksana exactly d ultimate events not counting the original one (or d + 1 if we do count the first one). This number of possible ultimate events, brought about by a single ultimate event a ksana earlier, does not and cannot change but, in accordance with the principle of constant area, the distances between events (or rather their positions on the Locality) can and do change. We need not concern ourselves at the moment with the question of whether this change is ‘real’ or essentially subjective : the important point is to get a clear visual image of the ‘sides’ of the Space/Time Event Rectangle contracting and expanding but nonetheless maintaining the same overall area. Some idea of this is given by the following diagrams where the blobs represent ultimate events.
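As a small illustrative sketch (toy value of c*, and the function name is my own), the d + 1 positions that could in principle receive an ultimate event one ksana later can simply be enumerated:

```python
# Illustrative sketch: positions reachable one ksana later from a single event at x0.
# c_star caps the lateral displacement; direction: +1 = right, -1 = left.

def reachable_positions(x0: int, c_star: int, direction: int = +1) -> list[int]:
    """Grid-positions an event at x0 could 're-create' itself at during the next ksana."""
    return [x0 + direction * d for d in range(c_star + 1)]   # 0, 1, ..., c* spaces

if __name__ == "__main__":
    cells = reachable_positions(0, c_star=5)
    print(cells)         # [0, 1, 2, 3, 4, 5]
    print(len(cells))    # 6 possible positions, i.e. d + 1 with d = c*
```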

Limit of lengths of spatial and temporal ‘sides’ 

Can this process of contraction/expansion be continued indefinitely? In normal physics it can since, with some exceptions and occasional trepidation, contemporary physics still retains a key feature of Leibnitz’s and Newton’s Calculus, namely the usually unstated assumption that ‘space and time’ are ‘infinitely divisible’, incredible though this sounds (Note 2).  In UET on the other hand the spatial contraction has a clearcut limit which is the dimension of the ‘kernel’ of a Space/Time capsule. The precise region where an ultimate event can have occurrence, the equivalent of the nucleus of an atom, does not (by assumption) itself change in size and thus constitutes the ‘ultimate’ unit of distance in UET since everything that we see and hear or otherwise apprehend is made up of ultimate events and each of them (by hypothesis) occupies an equivalent 3-dimensional spot on the Locality whose ‘rest’ dimensions do not change.
Noting the ‘ultimate’ dimension as su, we deduce that there must be a numerical ratio between su and s0, the maximum dimension of a grid-space, which is also that of a ‘rest’ capsule, since everything in UET is basically related by whole number ratios to the complete exclusion of irrationals. So s0 = n su where n is a positive integer ─ i.e. one could in principle fit n ‘kernels’ lengthways and in the other two directions, if we assume a cubic shape, or, if the capsule is conceived as being spherical, we would have a diameter of n kernels.
Now, if the Event Rectangle keeps the same area, it must stay equal to its rest size of 1 s0 × 1 t0 and the spatial distances between ultimate events, or positions where ultimate events could occur, must contract while the temporal distances expand. At the limiting value of c* positions the spatial dimension has shrunk from s0 to its minimum size of su, i.e. each grid-position has shrunk to the size of the kernel and it cannot shrink any more since no smaller length exists or can exist. But the c* contracted positions must together still span the original rest length s0, so c* × su = s0; and since there are precisely n multiples of su in s0, this can only mean that c* = n. And so the ratio of the ‘kernel’, the smallest unit of length that can possibly exist, to the ‘rest’ size of the capsule (1 : n) is the same as the ratio of a single grid-position to the maximum number of grid-positions that a causal impulse can traverse within a single ksana (1 : c*). This unexpected result, which flows from the basic assumptions of Ultimate Event Theory, shows a pleasing symmetry since the constant c*, the limiting displacement rate (‘speed’), also shows up within the structure of the basic Event Capsule, the equivalent in UET of an ‘atom’.
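The little calculation behind c* = n can be spelled out mechanically; the sketch below (toy numbers and names of my own choosing) simply encodes the two facts used above, namely s0 = n·su and the requirement that c* kernel-sized positions span one rest length s0.

```python
# Illustrative check (toy numbers) of the UET relation c* = n, where s0 = n * su.
from fractions import Fraction

def c_star_from_kernel_ratio(n: int) -> int:
    """If s0 = n*su and c* contracted positions of size su must span s0, then c* = s0/su = n."""
    su = Fraction(1)          # arbitrary kernel length
    s0 = n * su               # rest length of a grid-position
    c_star = s0 / su          # number of kernel-sized positions fitting into s0
    assert c_star == n
    return int(c_star)

if __name__ == "__main__":
    print(c_star_from_kernel_ratio(12))   # 12: the ratio su : s0 = 1 : n fixes c* = n
```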

Time Dilation 

When sv, the variable length of the side of the Event Capsule, reaches its minimum value of su, tv is a maximum since, if s0 is a maximum, t0 must be a minimum. Since s0 × t0 = sv × tv, we have tmax/t0 = s0/su = c*/1, so tmax = c* × t0. As sv gets shorter, tv increases to keep the ‘occupied area’ constant. When the spatial distances d × s0 are large enough to be noticeable (though the same thing applies if they are small), this gives rise to the well-known phenomenon of ‘length contraction’ ─ well-known to students of physics, I mean. And this in turn means that the ‘time dimension’ gets extended to keep the overall area constant.

[Figure: expanding and contracting Space/Time Rectangles of constant area]

This is the same as the situation in Special Relativity. But, in UET, the Space/Time Rectangle does not end up with one side becoming ‘infinitely long’ and the other ‘infinitesimally short’ since v can essentially only take integral (or at most rational)  values and peaks at v = c* = (c – 1), i.e. one grid-space short of the ‘unattainable’ value c  ─ unattainable if we are considering a  Causal Space/Time Event Rectangle (Note 3).
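A hedged numerical sketch of the constant-area rule and its limits may help (function and variable names are mine and the numbers are toy values): sv·tv = s0·t0 always, sv can shrink no further than su = s0/c*, and tv can accordingly grow no further than tmax = c*·t0.

```python
# Illustrative sketch of the UET constant-area (Omega) rule with hard limits.
from fractions import Fraction

def dilated_ksana(sv: Fraction, s0: Fraction, t0: Fraction, c_star: int) -> Fraction:
    """Return tv such that sv * tv = s0 * t0, enforcing the UET limits su <= sv <= s0."""
    su = s0 / c_star                       # minimum spatial length (the kernel)
    if not (su <= sv <= s0):
        raise ValueError("sv must lie between su = s0/c* and s0")
    return (s0 * t0) / sv                  # area stays constant

if __name__ == "__main__":
    s0, t0, c_star = Fraction(1), Fraction(1), 10
    for sv in [Fraction(1), Fraction(1, 2), Fraction(1, 10)]:
        print(sv, dilated_ksana(sv, s0, t0, c_star))
    # sv = s0     -> tv = t0               (rest case)
    # sv = s0/2   -> tv = 2 * t0
    # sv = s0/c*  -> tv = c* * t0 = tmax   (limiting case; no further dilation possible)
```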

Summary    The chief points arising from this discussion are :

(1) Every ultimate event has occurrence at the ‘kernel’ of a Space/Time Capsule of fixed ‘rest’ dimensions
s0 × s0 × s0 × t0  = s0³ t0

(2) The ratio of the spatial dimension of the kernel su to the spatial dimension s0  is 1 : n ; 

(3) The least ‘re-appearance rate’ of an ultimate event is 1 s0 per ksana which also turns out to be the maximum rate;

(4) The limiting ‘lateral displacement rate’ of a causal impulse is set at c* s0 per ksana ;

(5) The volume of a causal ‘Space/Time Parallelepiped’, reduced for simplicity to the area of a causal Space/Time Event Rectangle, is
constant and = s0³ t0, or s0 t0 for a rectangle ;

(6) The number of possible or actual ultimate events within a Causal Space/Time Parallelepiped or Rectangle is constant and always a positive integer;

(7) The spatial and temporal distances between possible or actual ultimate events on or inside the Space/Time Parallelepiped or Rectangle contract and expand in order to keep the overall volume/area constant;

(8) Spatial distances are submultiples (proper fractions)  of the basic ‘rest’ dimension s0  ;

(9) Temporal distances, i.e. the ‘duration’ of a ksana, the smallest interval of time, are multiples of the basic ‘rest’ temporal unit t0 ;

(10)  The maximum ‘lateral displacement rate’ is c* s0 per ksana (c* a positive integer) where c* = n

(11) There is a limiting value of the contracted spatial unit distance, namely su  which is the dimension of the kernel ; 

(12) There is a limiting value of the expanded unit temporal  distance, namely tmax =  c* t0   ;

All these features are derived from the basic postulates of Ultimate Event Theory. They differ from the usual features of Special Relativity in the following respects:

(1) Lengths cannot be indefinitely contracted, nor will they ever appear to be, and the same goes for time dilation. Popular books referring to someone falling into a Black Hole suggest that he or she will fall an ‘infinite distance’ in a single mini-second and his or her cry of despair will last for all eternity ─ this does not and cannot happen in UET. It should be possible one day soon to test these predictions (though not with respect to Black Holes): UET says that ‘space contraction’ and ‘time dilation’ will approach but never exceed limiting values, and indeed such observed limits would give us some idea of the dimensions of the Event Capsule.

(2) The actual speed (lateral displacement rate) of event-chains may attain, but not exceed, c* , which means they can be attributed a small but finite ‘mass’ ─ though ‘mass’ has not yet been properly defined in UET (see coming post). A rough definition would be something like the following.

“mass : capacity of an event-chain to resist any attempt to change its re-appearance rate, relative direction of ‘motion’, and a fortiori its very continuing re-appearance.”

(3)  It is in principle possible in UET for an event-chain to eventually exceed the maximum ‘causal displacement limit’ (roughly ‘speed’) but such an event-chain would immediately cease to repeat (having lost persistence or self-causation) and to all intents and purposes would ‘disappear into thin air’ leaving no trace. This would explain the sudden disappearance of ‘particles’, should this be observed. There would be no appreciable release of energy to compensate for such a disappearance, which conflicts with the doctrine of the Constancy of Mass/Energy (energy is as yet undefined in UET).

(To be continued in next Post)

NOTES

Note 1.  One should speak of ‘state of succession’ rather than ‘state of motion’ since continuous motion does not exist in UET (or in reality). But the phrase ‘state of succession’ seems strange even to me and ‘relative state of succession’ even stranger. All this goes to show how strongly we have been marked by the fallacious idea of continuous motion.

Note 2  Hume, the arch sceptic, memorably wrote “No priestly dogma invented on purpose to tame and subdue the rebellious reason of mankind ever shocked common sense more than the doctrine of infinite divisibility with its consequences” (An Enquiry Concerning Human Understanding).

Note 3 We do not get the fantastic picture of someone falling into a Black Hole and being contracted down to nothing while his or her cry of despair lasts for all eternity. That there are definite limits to possible length contraction and time dilation is a proposition that is in principle verifiable — and I believe it will be verified during this century. And once we have approximate values for these limits, we may, by extrapolating backwards, obtain an idea of the dimensions of the basic Space/Time capsule, i.e.  s0 and  t0   .

 Although, in modern physics,  many elementary particles are extremely short-lived, others such as protons are virtually immortal. But either way, a particle, while it does exist, is assumed to be continuously existing. And solid objects such as we see all around us like rocks and hills, are also assumed to be ‘continuously existing’ even though they may undergo gradual changes in internal composition. Since solid objects and even elementary particles don’t appear, disappear and re-appear, they don’t have a ‘re-appearance rate ’ ─ they’re always there when they are there, so to speak.
However, in UET the ‘natural’ tendency is for everything to flash in and out of existence and virtually all  ultimate events disappear for ever after a single appearance leaving a trace that would, at best, show up as a sort of faint background ‘noise’ or ‘flicker of existence’. All apparently solid objects are, according to the UET paradigm, conglomerates of repeating ultimate events that are bonded together ‘laterally’, i.e. within  the same ksana, and also ‘vertically’, i.e. from one ksana to the next (since otherwise they would not show up again ever). A few ultimate events, those that have acquired persistence ─ we shall not for the moment ask how and why they acquire this property ─ are able to bring about, i.e. cause, their own re-appearance : in such a case we have an event-chain which is, by definition,  a causally bonded sequence of ultimate events.
But how often do the constituent events of an event-chain re-appear?  Taking the simplest case of an event-chain composed of a single repeating ultimate event, are we to suppose that this event repeats at every single ksana (‘moment’ if you like)? There is on the face of it no particular reason why this should be so and many reasons why this would seem to be very unlikely.    

The Principle of Spatio-Temporal Continuity 

Newtonian physics, likewise 18th and 19th century rationalism generally, assumes what I have referred to elsewhere as the Postulate of Spatio-temporal Continuity. This postulate or principle, though rarely explicitly stated in philosophic or scientific works, is actually one of the most important of the ideas associated with the Enlightenment and thus with the entire subsequent intellectual development of Western society. In its simplest form, the principle says that an event occurring here, at a particular spot in Space-Time (to use the current term), cannot have an effect there, at a spot some distance away, without having effects at all (or at least most, or some) intermediate spots. The original event sets up a chain reaction and a frequent image used is that of a whole row of upright dominoes falling over one by one once the first has been pushed over. This is essentially how Newtonian physics views the action of a force on a body or system of bodies, whether the force in question is a contact force (push/pull) or a force acting at a distance like gravity.
As we envisage things today, a blow affects a solid object by making the intermolecular distances of the surface atoms contract a little and they pass on this effect to neighbouring molecules which in turn affect nearby objects they are in contact with or exert an increased pressure on the atmosphere, and so on. Moreover, although this aspect of the question is glossed over in Newtonian (and even modern) physics, each transmission of the original impulse ‘takes time’ : the re-action is never instantaneous (except possibly in the case of gravity) but comes ‘a moment later’, more precisely at least one ksana later. This whole issue will be discussed in more detail later, but, within the context of the present discussion, the point to bear in mind is that, according to Newtonian physics and rationalistic thought generally, there can be no leap-frogging with space and time. Indeed, it was because of the Principle of Spatio-temporal Continuity that most European scientists rejected out of hand Newton’s theory of universal attraction since, as Newton admitted, there seemed to be no way that a solid body such as the Earth could affect another solid body such as the Moon thousands of kilometres away with nothing in between except ‘empty space’. Even as late as the mid 19th century, Maxwell valiantly attempted to give a mechanical explanation of his own theory of electro-magnetism, and he did this essentially because of the widespread rock-hard belief in the principle of spatio-temporal continuity.
The principle, innocuous  though it may sound, has also had  extremely important social and political implications since, amongst other things, it led to the repeal of laws against witchcraft in the ‘advanced’ countries ─ the new Legislative Assembly in France shortly after the revolution specifically abolished all penalties for ‘imaginary’ crimes and that included witchcraft. Why was witchcraft considered to be an ‘imaginary crime’? Essentially because it  offended against the Principle of Spatio-Temporal Continuity. The French revolutionaries who drew the statue of Reason through the streets of Paris and made Her their goddess, considered it impossible to cause someone’s death miles away simply by thinking ill of them or saying Abracadabra. Whether the accused ‘confessed’ to having brought about someone’s death in this way, or even sincerely believed it, was irrelevant : no one had the power to disobey the Principle of Spatio-Temporal Continuity.
The Principle got somewhat muddied  when science had to deal with electro-magnetism ─ Does an impulse travel through all possible intermediary positions in an electro-magnetic field? ─ but it was still very much in force in 1905 when Einstein formulated the Theory of Special Relativity. For Einstein deduced from his basic assumptions that one could not ‘send a message’ faster than the speed of light and that, in consequence,  this limited the speed of propagation of causality. If I am too far away from someone else I simply cannot cause this person’s death at that particular time and that is that. The Principle ran into trouble, of course,  with the advent of Quantum Mechanics but it remains deeply entrenched in our way of thinking about the world which is why alibis are so important in law, to take but one example. And it is precisely because Quantum Mechanics appears to violate the principle that QM is so worrisome and the chief reason why some of the scientists who helped to develop the theory such as Einstein himself, and even Schrodinger, were never happy with  it. As Einstein put it, Quantum Mechanics involved “spooky action at a distance” ─ exactly the same objection that the Cartesians had made to Newton.
So, do I propose to take the principle over into UET? The short answer is, no. If I did take over the principle, it would mean that, in every bona fide event-chain, an ultimate event would make an appearance at every single ‘moment’ (ksana), and I could see in advance that there were serious problems ahead if I assumed this : certain regions of the Locality would soon get hopelessly clogged up with colliding event-chains. Also, if all the possible positions in all ‘normal’ event-sequences were occupied, there would be little point in having a theory of events at all, since, to all intents and purposes, all event-chains would behave as if they were solid objects and one might as well just stick to normal physics. One of the main  reasons for elaborating a theory of events in the first place was my deep-rooted conviction ─ intuition if you like ─ that physical reality is discontinuous and that there are gaps between ksanas ─ or at least that there could be gaps given certain conditions. In the theory I eventually roughed out, or am in the process of roughing out, both spatio-temporal continuity and infinity are absent and will remain prohibited.
But how does all this square with my deduction (from UET hypotheses) that the maximum propagation rate of causality is a single grid-position per ksana, s0/t0, where s0 is the spatial dimension of an event capsule ‘at rest’ and t0 the ‘rest’ temporal dimension? In UET, what replaces the ‘object-based’ image of a tiny nucleus inside an atom, is the vision of a tiny kernel of fixed extent where every ultimate event occurs embedded in a relatively enormous four-dimensional event capsule. Any causal influence emanates from the kernel and, if it is to ‘recreate’ the original ultimate event a ksana later, it must traverse at least half the ‘length’ (spatial dimension) of one capsule plus half of the next one, i.e. ½ s0 + ½ s0 = 1 s0 where s0 is the spatial dimension of an event-capsule ‘at rest’ (its normal state). For if the causal influence did not ‘get that far’, it would not be able to bring anything about at all, would be like a messenger who could not reach a destination receding faster than he could run flat out. The runner’s ‘message’, in this case the recreation of a clone of the original ultimate event, would never get delivered and nothing would ever come about at all.
This problem does not occur in normal physics since objects are not conceived as requiring a causal force to stop them disappearing, and, on top of that, ‘space/time’ is assumed to be continuous and infinitely divisible. In UET there are minimal spatial and temporal units (the grid-space and the ksana) and ‘time’, in the UET sense of an endless succession of ksanas, stops for no man or god, not even physicists, who are born, live and die successively like everything else. I believe that succession, like causality, is built into the very fabric of physical reality and though there is no such thing as continuous motion, there is and always will be change since, even if nothing else is happening, one ksana is being replaced by another, different, one ─ “the moving finger writes, and, having writ, moves on” (Rubaiyat of Omar Khayyam). Heraclitus said that “No man ever steps into the same river twice”, but a more extreme follower of his disagreed, saying that it was impossible to step into the same river once, which is the Hinayana Buddhist view. For ‘time’ is not a river that flows at a steady rate (as Newton envisaged it) but a succession of ‘moments’ threaded like beads on an invisible chain and with minute gaps between the beads.

Limit to unitary re-appearance rate

So, returning to my repeating ultimate event, could the ‘re-creation rate’ of an ultimate event be greater than the minimal rate of 1 s0/t0 ? Could it, for example, be 2, 3 or 5 spaces per ksana? No. For if and when the ultimate event re-appeared, say 5 ksanas later, the original causal impulse would have covered a distance of 5 s0 (s0 being the spatial dimension of each capsule) and would have taken 5 ksanas to do this. Consequently the space/time displacement rate would be the same (but not in this case the individual distances). I note this rate as c* in ‘absolute units’, the UET equivalent of c, since it denotes an upper limit to the propagation of the causal influence (Note 1). For the very continuing existence of anything depends on causality : each ‘object’ that does persist in isolation does so because it is perpetually re-creating itself (Note 2).

But note that it is only the unitary rate, the distance/time ratio taken over a single ksana, that cannot be less (or more) than one grid-space per ksana or 1 s0/t0 : any fractional (but not irrational) re-appearance rate is perfectly conceivable provided it is spread out over several ksanas. A re-appearance rate of m/n s0/t0 simply means that the ultimate event in question re-appears in an equivalent spatial position on the Locality m times every n ksanas where m/n ≤ 1. And there are all sorts of different ways in which this rate can be achieved. For example, a re-appearance rate of 3/5 s0/t0 could be a repeating pattern such as

[Figure: repeating patterns realizing a re-appearance rate of 3/5 s0/t0]

and one pattern could change over into the other either randomly or, alternatively, according to a particular rule.
As one increases the difference between the numerator and the denominator, there are obviously going to be many more possible variations : all this could easily be worked out mathematically using combinatorial analysis. But note that it is the distribution of the black (occupied) and white (blank) positions that matters since, once a re-appearance rhythm has begun, there is no real difference between, say, the repeating patterns ●0●0● and 0●0●● ─ it all depends on where you start counting. Patterns with the same repetition rate only count as different if this difference is recognizable no matter where you start examining the sequence.
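The counting alluded to above can be done by brute force rather than by proper combinatorial formulae. The sketch below is my own (with ‘#’ standing for an appearance and ‘.’ for a blank ksana); it lists the re-appearance patterns of rate m/n and treats two patterns as identical when one is merely a cyclic shift of the other, i.e. when the only difference is where you start counting.

```python
# Illustrative sketch: distinct re-appearance patterns of rate m/n, where two patterns
# count as the same if one is merely a cyclic shift of the other.
from itertools import combinations

def distinct_patterns(m: int, n: int) -> list[str]:
    """All length-n patterns with m appearances ('#') and n-m gaps ('.'), up to rotation."""
    seen, representatives = set(), []
    for positions in combinations(range(n), m):
        pattern = "".join("#" if i in positions else "." for i in range(n))
        rotations = {pattern[i:] + pattern[:i] for i in range(n)}
        canonical = min(rotations)
        if canonical not in seen:
            seen.add(canonical)
            representatives.append(canonical)
    return representatives

if __name__ == "__main__":
    for p in distinct_patterns(3, 5):    # re-appearance rate 3/5
        print(p)
    # Two essentially different 3/5 rhythms: '###..' and '##.#.'
```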
Why does all this matter? Because, each time there is a blank line, this means that the ultimate event in question does not make an appearance at all during this ksana, and, if we are dealing with large denominators, this could mean very large gaps indeed in an event chain. Suppose, for example, an event-chain had a re-appearance rate of 4/786. There would only be four appearances (black dots) in a period of 786 ksanas, and there would inevitably be very large blank sections of the Locality when the ultimate event made no appearance.

Lower Limit of re-creation rate 

Since, by definition, everything in UET is finite, there must be a maximum number of possible consecutive gaps  or non-reappearances. For example, if we set the limit at, say, 20 blank lines, or 200, this would mean that, each time this blank period was observed, we could conclude that the event-chain had terminated. This is the UET equivalent  of the Principle of Spatio-Temporal Continuity and effectively excludes phenomena such as an ultimate event in an event-chain making its re-appearance a century later than its first appearance. This limit would have to be estimated on the  basis of experiments since I do not see how a specific value can be derived from theoretical considerations alone. It is tempting to estimate that this value would involve c* or a multiple of c* but this is only a wild guess ─ Nature does not always favour elegance and simplicity.
Such a rule would limit how ‘stretched out’ an event-chain can be temporally and, in reality , there may not after all be a hard and fast general rule  : the maximal extent of the gap could decline exponentially or in accordance with some other function. That is, an abnormally long gap followed by the re-appearance of an event, would decrease the possible upper limit slightly in much the same way as chance associations increase the likelihood of an event-chain forming in the first place. If, say, there was an original limit of a  gap of 20 ksanas, whenever the re-appearance rate had a gap of 19, the limit would be reduced to 19 and so on.
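To make the proposed rule concrete, here is one possible reading of it in code (entirely illustrative: the figure of 20 and the ‘shrinking limit’ behaviour are simply the examples given above, not established values). The idea is to scan the ksanas at which an event-chain actually has occurrence and declare the chain terminated as soon as a gap of blank ksanas reaches the current limit, the limit itself being lowered whenever the longest currently-permitted gap is survived.

```python
# Illustrative sketch of the conjectured maximum-gap rule (toy values from the text).

def chain_survives(appearance_ksanas: list[int], gap_limit: int = 20) -> bool:
    """Scan the ksanas at which an event-chain actually appears. The chain is deemed
    terminated if a gap of blank ksanas reaches the current limit; surviving the
    longest currently-permitted gap reduces the limit by one ('shrinking limit')."""
    limit = gap_limit
    for earlier, later in zip(appearance_ksanas, appearance_ksanas[1:]):
        gap = later - earlier - 1            # blank ksanas between two appearances
        if gap >= limit:
            return False                     # gap too long: the chain has ended
        if gap == limit - 1:
            limit = gap                      # e.g. surviving a gap of 19 lowers the limit to 19
    return True

if __name__ == "__main__":
    print(chain_survives([0, 1, 2, 10]))     # True : longest gap is 7 < 20
    print(chain_survives([0, 1, 40]))        # False: gap of 38 >= 20
    print(chain_survives([0, 20, 40, 60]))   # False: a gap of 19 is survived once, then fatal
```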
It is important to be clear that we are not talking about the phenomenon of ‘time dilation’ which concerns only the interval between one ksana and the next according to a particular viewpoint. Here, we simply have an event-chain where an ultimate event is repeating at the same spot on the spatial part of the Locality : it is ‘at rest’ and not displacing itself laterally at all. The consequences for other viewpoints would have to be investigated.

Re-appearance Rate as an intrinsic property of an event-chain  

Since Galileo, and subsequently Einstein, it has become customary in physics to distinguish, not between rest and motion, but rather between unaccelerated motion and accelerated motion. And the category of ‘unaccelerated motion’ includes all possible constant straight-line speeds including zero (rest). It seems, then, that there is no true distinction to be made between ‘rest’ and motion just so long as the latter is motion in a straight line at a constant displacement rate. This ‘relativisation’ of motion in effect means that an ‘inertial system’ or a particle at rest within an inertial system does not really have a specific velocity at all, since any estimated velocity is as ‘true’ as any other. So, seemingly, ‘velocity’ is not a property of a single body but only of a system of at least two bodies. This is, in a sense, rather odd since there can be no doubt that a ‘change of velocity’, an acceleration, really is a feature of a single body (or is it?).
Consider a spaceship which is either completely alone in the universe or sufficiently remote from all massive bodies that it can be considered in isolation. What is its speed? It has none since there is no reference system or body to which its speed can be referred. It is, then, at rest ─ or this is what we must assume if there are no internal signs of acceleration such as plates falling around or rattling doors and so on. If the spaceship is propelling itself forward (or in some direction we call ‘forward’) intermittently by jet propulsion, the acceleration will be noted by the voyagers inside the ship, supposing there are some. Suppose there is no further discharge of chemicals for a while. Is the spaceship now moving at a different and greater velocity than before? Not really. One could, I suppose, refer the vessel’s new state of motion to the centre of mass of the ejected chemicals but this seems rather artificial, especially as they are going to be dispersed. No matter how many times this happens, the ship will not be gaining speed, or so it would appear. On the other hand, the changes in velocity, or accelerations, are undoubtedly real since their effects can be observed within the reference frame.
So what to conclude? One could say that ‘acceleration’ has ‘higher reality status’ than simple velocity since it does not depend on a reference point outside the system. ‘Velocity’ is a ‘reality of second order’ whereas acceleration is a ‘reality of first order’. But once again there is a difference between normal physics and UET physics in this respect. Although the distinction between unaccelerated and accelerated motion is taken over into UET (re-baptised ‘regular’ and ‘irregular’ motion), there is in Ultimate Event Theory, but not in contemporary physics, a kind of ‘velocity’ that has nothing to do with any other body whatsoever, namely the event-chain’s re-appearance rate.
When one has spent some time studying Relativity one ends up wondering whether after all “everything is relative” and quite a lot of physicists and philosophers seem to actually believe something not far from this : the universe is evaporating away as we look at it and leaving nothing but a trail of unintelligible mathematical formulae. In Quantum Mechanics (as Heisenberg envisaged it anyway) the properties of a particular ‘body’ involve the properties of all the other bodies in the universe, so that there remain very few, if any, intrinsic properties that a body or system can possess. However, in UET, there is a reality safety net. For there are at least two things that are not relative, since they pertain to the event-chain or event-conglomerate itself whether it is alone in the universe or embedded in a dense network of intersecting event-chains we view as matter. These two things are (1) occurrence and (2) rate of occurrence, and both of them are straight numbers, or ratios of integers.
An ultimate event either has occurrence or it does not : there is no such thing as the ‘demi-occurrence’ of an event (though there might be such a thing as a potential event). Every macro event is (by the preliminary postulates of UET) made up of a finite number of ultimate events and every trajectory of every event-conglomerate has an event number associated with it. But this is not all. Every event-chain ─ or at any rate every normal or ‘well-behaved’ event-chain ─ has a ‘re-appearance rate’. This ‘re-appearance rate’ may well change considerably during the life span of a particular event-chain, either randomly or following a particular rule, and, more significantly, the ‘re-appearance rates’ of event-conglomerates (particles, solid bodies and so on) can, and almost certainly do, differ considerably from each other. One ‘particle’ might have a re-appearance rate of 4 (i.e. re-appear every fourth ksana), another, with the same displacement rate with respect to the first, a rate of 167, and so on. And this would have great implications for collisions between event-chains and event-conglomerates.

Re-appearance rates and collisions 

What happens during a collision? One or more solid bodies are disputing the occupation of territory that lies on their  trajectories. If the two objects miss each other, even narrowly, there is no problem : the objects occupy ‘free’ territory. In UET event conglomerates have two kinds of ‘velocity’, firstly their intrinsic re-appearance rates which may differ considerably, and, secondly, their displacement rate relative to each other. Every event-chain may be considered to be ‘at rest’ with respect to itself, indeed it is hard to see how it could be anything at all if this were not the case. But the relative speed of even unaccelerated event-chains will not usually be zero and is perfectly real since it has observable and often dramatic consequences.
Now, in normal physics, space, time and existence itself is regarded as continuous, so two objects will collide if their trajectories intersect and they will miss each other if their trajectories do not intersect. All this is absolutely clearcut, at least in principle. However, in UET there are two quite different ways in which ‘particles’ (small event conglomerates) can miss each other.
First of all, there is the case when both objects (repeating event-conglomerates) have a 1/1 re-appearance rate, i.e. there is an ultimate event at every ksana in both cases. If object B is both dense and occupies a relatively large region of the Locality at each re-appearance, and the relative speed is low, the chances are that the two objects will collide. For, suppose a relative displacement rate of 2 spaces to the right (or left)  at each ksana and take B to be stationary and A, marked in red, displacing itself two spaces at every ksana.

[Figure: re-appearance rates – a stationary conglomerate B and a chain A displacing itself two spaces per ksana]

Clearly, there is going to be trouble at the  very next ksana.
However, since space/time and existence and everything else (except possibly the Event Locality) is not continuous in UET, if the relative speed of the two objects were a good deal greater, say 7 spaces per 7 ksanas (a rate of 7/7)  the red event-chain might manage to just miss the black object.

This could not happen in a system that assumes the Principle of Spatio-Temporal Continuity : in UET there is  leap-frogging with space and time if you like. For the red event-chain has missed out certain positions on the Locality which, in principle could have been occupied.

But this is not all. A collision could also have been avoided if the red chain had possessed a different re-appearance rate even though it remained a ‘slow’ chain compared to the  black one. For consider a 7/7 re-appearance rate i.e. one appearance every seven ksanas and a displacement rate of two spaces per ksana relative to the black conglomerate taken as being stationary. This would work out to an effective rate of 14 spaces to the right at each appearance ─ more than enough to miss the black event-conglomerate.
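A toy simulation makes the point concrete (the grid, the speeds and the rates below are invented purely for illustration): a ‘black’ conglomerate occupies a fixed block of grid-positions at every ksana, while a ‘red’ chain drifts across it at two spaces per ksana; whether the two ever occupy the same cell at the same ksana depends not just on the drift rate but on how often the red chain actually appears.

```python
# Toy illustration: whether a drifting event-chain ever 'collides' with a stationary
# block depends on its re-appearance rate as well as its displacement rate.

def collides(block: range, drift_per_ksana: int, appears_every: int,
             start: int = 0, ksanas: int = 100) -> bool:
    """Red chain starts at `start`, drifts `drift_per_ksana` cells per ksana, but only
    has occurrence every `appears_every` ksanas. Collision = occupying a cell of
    `block` at a ksana when it actually appears."""
    for k in range(0, ksanas, appears_every):
        if start + k * drift_per_ksana in block:
            return True
    return False

if __name__ == "__main__":
    block = range(1, 11)                                          # stationary 10-cell conglomerate
    print(collides(block, drift_per_ksana=2, appears_every=1))    # True : appears at every ksana
    print(collides(block, drift_per_ksana=2, appears_every=7))    # False: leaps clean over the block
```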

Moreover, if we have a repeating event-conglomerate that is very compact, i.e. occupies very few neighbouring grid-spaces at each appearance (at the limit just one), and is also extremely rapid compared to the much larger conglomerates it is likely to come across, this ‘event-particle’ will miss almost everything all the time. In UET it is much more of a problem how a small and ‘rapid’ event-particle can ever collide with anything at all (and thus be perceived) than for a particle to apparently disappear into thin air. When I first came to this rather improbable conclusion I was somewhat startled. But I did not know at the time that neutrinos, which are thought to have a very small mass and to travel nearly at the speed of light, are by far the commonest particles in the universe and, even though millions are passing through my fingers as I write this sentence, they are incredibly difficult to detect because they interact with ordinary ‘matter’ so rarely (Note 3). This, of course, is exactly what I would expect ─ though, on the other hand, it is a mystery why it is so easy to intercept photons and other particles. It is possible that the question of re-appearance rates has something to do with this : clearly neutrinos are not only extremely compact, have very high speed compared to most material objects, but also have an abnormally high re-appearance rate, near to the maximum.
[Figure: re-appearance rates diagram]
In the adjacent diagram we have the same angle sin θ = v/c but progressively more extended re-appearance rates 1/1; 2/2; 3/3; and so on. The total area taken over n ksanas will be the same but the behaviour of the event-chains will be very different.
I suspect that the question of different re-appearance rates has vast importance in all branches of physics. For it could well be that it is a similarity of re-appearance rates ─ a sort of ‘event resonance’ ─ that draws disparate event chains together and indeed is instrumental in the formation of the very earliest event-chains to emerge from the initial randomness that preceded the Big Bang or similar macro events.
Also, one suspects that collisions of event-conglomerates disturb not only the spread and compactness of the constituent event-chains, and likewise their ‘momenta’, but also and more significantly their re-appearance rates. All this is, of course, highly speculative but so was atomic theory prior to the 20th century, even though atomism as a physical theory and cultural paradigm goes back to the 4th century BC at least.        SH  29/11/13

 

 

Note 1  Compared to the usual 3 × 10⁸ metres/second, the unitary value of s0/t0 seems absurdly small. But one must understand that s0/t0 is a ratio and that we are dealing with very small units of distance and time. We only perceive large multiples of these units and it is important to bear in mind that s0 is a maximum while t0 is a minimum. The actual kernel, where each ultimate event has occurrence, turns out to be s0/c* = su, so in ‘ultimate units’ the upper limit is c* su/t0.  It is nonetheless a surprising and somewhat inexplicable physiological fact that we, as human beings, have a pretty good sense of distance but an incredibly crude sense of time. It is only necessary to pass images at a rate of about eight per second for the brain to interpret the successive images as a continuum, and the film industry is based on this circumstance. Physicists, however, gaily talk of all sorts of important changes happening in millionths or billionths of a second, and in an ordinary digital watch the quartz crystal is vibrating tens of thousands of times a second (32,768 times, to be precise).

 

Note 2  Only Descartes amongst Western thinkers realized there was a problem here and ascribed the power of apparent self-perpetuation to the repeated intervention of God; today, in a secular world, we perforce ascribe it to ‘ natural forces’.

In effect, in UET, everything is pushed one stage back. For Newton and Galileo the ‘natural’ state of objects was to continue existing in constant straight-line motion, whereas in UET the ‘natural’ state of ultimate events is to disappear for ever. If anything does persist, this shows there is a force at work. The Buddhists call this all-powerful causal force ‘karma’ but unfortunately they were only interested in the moral, as opposed to physical, implications of karmic force; otherwise we would probably have had a modern theory of physics centuries earlier than we actually did.

Note 3  “Neutrinos are the commonest particles of all. There are even more of them flying around the cosmos than there are photons (…) About 400 billion neutrinos from the Sun pass through each one of us every second.”  Frank Close, Particle Physics A Very Short Introduction (OUP) p. 41-2 

In Ultimate Event Theory (UET) the basic building-block of physical reality is not the atom or elementary particle (or the string, whatever that is) but the ultimate event enclosed by a four-dimensional ‘Space/Time Event-capsule’. This capsule has fixed extent s³t = s0³t0 where s0 and t0 are universal constants, s0 being the maximum ‘length’ of s, the ‘spatial’ dimension, and t0 being the minimal ‘length’ of t, the basic temporal interval or ksana. Although s³t = s0³t0 = Ω (a constant), s and t can and do vary, though they have maximum and minimum values (as does everything in UET).
All ultimate events are, by hypothesis, of the same dimensions, or better they occupy a particular spot on the Event Locality, K0, whose dimensions do not change (Note 1). The spatial region occupied by an ultimate event is very small compared to the dimensions of the ‘Event capsule’ that contains it and, as is demonstrated in the previous post (Causality and the Upper Limit), the ratio of ‘ultimate event volume’ to ‘capsule volume’, or su³/s0³, is 1 : (c*)³, and of single dimension to single dimension 1 : c* (where c* is the space/time displacement rate of a causal impulse (Note 2)). Thus, s³ varies from a minimum value su³, the exact region occupied by an ultimate event, to a maximum value of s0³ where s0 = c* su. In practice, when the direction of a force or velocity is known, we only need bother about the ‘Space/Time Event Rectangle’ st = constant, but we should not forget that this is only a matter of convenience : the ‘event capsule’ cannot be decomposed and always exists in four dimensions (possibly more).

Movement and ‘speed’ in UET     If by ‘movement’ we mean change, then obviously there is movement on the physical level unless all our senses are in error. If, however, by ‘movement’ we are to understand ‘continuous motion of an otherwise unchanging entity’, then, in UET, there is no movement. Instead there is succession : one event-capsule is succeeded by another with the same dimensions. The idea of ‘continuous motion’ is thus thrown into the trash-can along with the notion of ‘infinity’ with which it has been fatally associated because of the conceptual underpinning of the Calculus. It is admittedly difficult to avoid reverting to traditional science-speak from time to time but I repeat that, strictly speaking, in UET there is no ‘velocity’ in the usual sense : instead there is a ‘space/time ratio’ which may remain constant, as in a ‘regular’ event-chain, or may change, as in the case of an ‘irregular’ (accelerated) event-chain. For the moment we will restrict ourselves to considering only regular event-chains and, amongst regular event-chains, only those with a 1/1 reappearance rate, i.e. when one or more constituent ultimate events recur at each ksana.
An event-chain is a bonded sequence of events which in its simplest form is simply a single repeating ultimate event. We associate with every event-chain an ‘occupied region’ of the Locality constituted by the successive ‘event-capsules’. This region is always increasing since, at any ksana,  any ‘previous spots’ occupied by members of the chain remain occupied (cannot be emptied). This is an important feature of UET and depends on the Axiom of Irreversibility which says that once an event has occurrence on the Locality there is no way it can be removed from the Locality or altered in any way. This property of ‘irreversible occurrence’ is, if you like, the equivalent of entropy in UET since it is a quantity that can only increase ‘with time’.
So, if we have two regular event-chains, a and d , each with the same 1/1 re-appearance rhythm, and which emanate from the same spot (or from two adjacent spots), they define a ‘Causal Four-Dimensional Parallelipod’ which increases in volume at each ksana. The two event-chains can be represented as two columns, one  strictly vertical, the other slanting, since we only need to concern ourselves with the growing Space-Time Rectangle.


[Diagram: the two bounding event-chains shown as columns of dots, one vertical (black) and one slanting (red); possible emplacements for other ultimate events are marked in grey, and the occupied Space-Time Rectangle grows by one row at each ksana]

The two bold dotted lines (black and red) thus define the limits of the ‘occupied region’ of the Locality, although these ‘guard-lines’ of ultimate events standing there like sentinels are not capable of preventing other events from occurring within the region whose extreme limits they define. Possible emplacements for ultimate events not belonging to these two chains are marked by grey points. The red dotted line may be viewed as displacing itself by so many spaces to the right at each ksana (relative to the vertical column). If we consider the vertical distance from bold black dot to dot to represent t0, the ‘length’ of a single ksana (the smallest possible temporal interval), and the distance between neighbouring dots in a single row to be s0, then, if there are v spaces in a row (numbered 0, 1, 2…..v), we have a Space/Time Event Rectangle of v s0 × 1 t0, the ‘Space/time ratio’ being v grid-spaces per ksana.

It is important to realize what v signifies. ‘Speed’ (undirected velocity) is not a fundamental unit in the Système Internationale but a ratio of the fundamental SI units of spatial distance and time. For all that, v is normally conceived today as an independent  ‘continuous’ variable which can take any possible value, rational or irrational, between specified limits (which are usually 0 and c). In UET v is, in the first instance, simply a positive integer  which indicates “the number of simultaneously existing neighbouring spots on the Event Locality where ultimate events can have occurrence between two specified spots”. Often, the first spot where an ultimate event does or can occur is taken as the ‘Origin’ and the vth spot in one direction (usually to the right) is where another event has occurrence (or could have). The spots are thus numbered from 0 to v where v is a positive integer. Thus

0      1      2       3      4       5                v
•       •       •       •       •       • ………….•     

There are thus v intervals, the same number as the label applied to the final event ─ though, if we include the very first spot, there are (v + 1) spots in all where ultimate events could have (had) occurrence. This number, since it applies to ultimate events and not to distances or forces or anything else, is absolute.
      A secondary meaning of v is : the ratio of ‘lateral’ spatial displacement to ‘vertical’ temporal displacement. In the simplest case this ratio will be v : 1 where the ‘rest’ values s0 and t0 are understood. This is the nearest equivalent to ‘speed’ as most of you have come across it in physics books (or in everyday conversation). But, at the risk of seeming pedantic, I repeat that there are (at least) two significant differences between the meaning of v in UET and that of v in traditional physics. In UET, v is (1) strictly a static space/time ratio (when it is not just a number) and (2) it cannot ever take irrational values (except in one’s imagination). If we are dealing with event-chains with a 1/1 reappearance rate, v is a positive integer, but the meaning can be intelligibly extended to include m/n where m, n are integers. Thus v = m/n spaces/ksana would mean that successive events in an event-chain are displaced laterally by m spaces every nth ksana. But, in contrast to ‘normal’ usage, there is no such thing as a displacement of m/n spaces per (single) ksana. For both the ‘elementary’ spatial interval, the ‘grid-space’, and the ksana are irreducible.
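As a small illustration (mine, not the author’s) of v as an exact integer ratio, the following Python sketch stores a displacement rate as the pair (m spaces, n ksanas); the class name DisplacementRate and the sample values 4 and 7 are assumptions introduced purely for the example.

    from fractions import Fraction

    class DisplacementRate:
        def __init__(self, spaces: int, ksanas: int):
            self.spaces = spaces                      # lateral displacement per appearance cycle
            self.ksanas = ksanas                      # ksanas per appearance cycle
            self.ratio = Fraction(spaces, ksanas)     # the 'ideal' unitary rate, e.g. 4/7 sp/ks

        def position_after(self, k: int) -> int:
            """Grid-position after k ksanas; displacement only occurs at whole cycles."""
            return self.spaces * (k // self.ksanas)

    r = DisplacementRate(4, 7)                        # 4 grid-spaces every 7 ksanas
    print(r.ratio)                                    # 4/7  (an 'ideal' rate only)
    print([r.position_after(k) for k in range(15)])   # positions stay integral: 0,...,0,4,...,4,8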
One might suppose that the ‘distance’ from the 0th  to the vth spot does not change ─ ‘v is v’ as it were. However, in UET, ‘distance’ is not an absolute but a variable quantity that depends on ‘speed’ ─ somewhat the reverse of how we see things in our everyday lives since we usually think of distances between spots as fixed but the time it takes to get from one spot to the other as variable.

The basic ‘Space-Time Rectangle’ st can be represented thus

[Diagram: the Space-Time Rectangle s0 × t0 with angle φ]

Rectangle:   s0 × t0 = (s0 cos φ) × (t0/cos φ)
where PR cos φ = t0

sv = s0 cos φ        tv = t0/cos φ
sv/s0 = cos φ        tv/t0 = 1/cos φ
s0/t0 = tan β = constant
tv² = t0² + v²s0²
v²s0² = t0²((1/cos²φ) – 1)
tan²β = s0²/t0² = (1/v²)((1/cos²φ) – 1) = (1/v²) tan²φ

tan β = s0/t0 = (tan θ)/(v cos φ)      since sin φ = tan θ = v/c

v = (tan θ)/(tan β cos φ)

 

So we have s = s0 cos φ, where φ ranges from 0 up to the highest value that still gives a non-zero length, in effect that value of φ for which s0 cos φ = s0/c* = su. What is the relation of s to v? If sv is the spacing associated with the ratio v, and dependent on it, we have sv = s0 cos φ and so sv/s0 = cos φ. So cos φ is the ‘shrink factor’ which, when applied to any distance reckoned in terms of s0, converts it by changing the spacing. The ‘distance’ between two spots on the Locality is composed of two parts. Firstly, there is the number of intermediate spots where ultimate events can/could have/had occurrence, and this ‘Event-number’ does not change, ever. Secondly, there is the spacing between these spots, which has a minimum value su (just the diameter of the exact spot where an ultimate event can occur) and a maximum value s0, the diameter of the Event capsule and thus the maximum distance between one spot where an ultimate event can have occurrence and the nearest neighbouring spot. The spacing varies according to v and it is now incumbent on us to determine the ‘shrink factor’ in terms of v.
The spacing s is thus dependent on v, say s = g(v). It is inversely related to v: as v increases the spacing is reduced, while it is at a maximum when v = 0. One might make a first guess that the ‘shrink factor’ will be of the form 1 – f(v)/h(c), where f(v) ranges from 0 to h(c). The simplest such function is just (1 – v/c).
As stated, v in UET can only take rational values since it denotes a specific integral number of spots on the Locality. There is a maximum number of such spots between any two ultimate events or their emplacements, namely (c – 1) such spots if we label spot A as 0 and spot B as c. (If we make the second spot c + 1, we have c intermediate positions.) Thus v = c/m where m is a rational number. If we are concerned only with event-chains that have a 1/1 reappearance ratio, i.e. one event per ksana, then m is an integer, so v = c/n with n an integer.

We thus have tan θ = n/c, where n, the number of grid-spaces traversed per ksana, varies from 0 to c* = (c – 1) (since in UET a distinction is made between the highest attainable space/time displacement ratio and the lowest unattainable ratio c).
So 0 ≤ θ < π/4  ─ since tan π/4 = 1. These are the only permissible values for tan θ .
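A quick numerical check (my own, not from the post) that the permissible angles really do fall in this range; the value c = 100 is an arbitrary stand-in for the true, very large constant.

    import math

    c = 100                                              # assumed illustrative value only
    angles = [math.atan(n / c) for n in range(0, c)]     # n = 0, 1, ..., c* = c - 1
    print(min(angles), max(angles), math.pi / 4)
    assert all(0 <= t < math.pi / 4 for t in angles)     # every permissible θ lies in [0, π/4)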

[Diagram: right-angled triangle with angle θ, where tan θ = v/c]

If now we superimpose the ‘v/c’ triangle above onto the st Rectangle of the previous diagram we obtain

[Diagram: the v/c triangle superimposed on the Space-Time Rectangle, showing tan θ = sin φ]

 

Thus tan θ = sin φ, which gives
                cos φ = (1 – sin²φ)^(1/2) = (1 – (v/c)²)^(1/2)

This is more complicated than our first guess, cos φ = (1 – v/c), but it has the desired features that cos φ goes to 1 as v goes to zero and falls to its least value as v approaches c.
(This least value, reached at v = c – 1, is (1/c)√(2c – 1) ≈ √2/√c.)
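The following small Python check (mine) confirms the two claims just made: the factor equals 1 at v = 0 and takes its least value (1/c)√(2c – 1) ≈ √2/√c at v = c – 1; the value c = 10**6 is an arbitrary illustrative choice.

    import math

    c = 10**6
    shrink = lambda v: math.sqrt(1 - (v / c)**2)

    print(shrink(0))                          # 1.0 : no shrinkage for a 'rest' chain
    print(shrink(c - 1))                      # least value, reached at v = c* = c - 1
    print(math.sqrt(2 * c - 1) / c)           # (1/c)·√(2c – 1) : the same number
    print(math.sqrt(2) / math.sqrt(c))        # √2/√c : a close approximation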

         cos φ = (1 – (v/c)²)^(1/2) is thus a ‘shrinkage factor’ which is to be applied to all lengths within event-chains that are in lateral motion with respect to a ‘stationary’ event-chain. Students of physics will, of course, recognize this factor as the famous ‘Fitzgerald contraction’ of all lengths and distances along the direction of motion within an ‘inertial system’ that is moving at constant speed relative to a stationary one (Note 3).

Parable of the Two Friends and Railway Stations

It is important to understand what exactly is happening. As all books on Relativity emphasize, the situation is exactly symmetrical. An observer in system A would judge all distances within system B to be ‘contracted’, but an observer situated within system B would think exactly the same about the distances in system A. This symmetricality is a consequence of Einstein’s original assumption that  ‘the laws of physics take the same form in all inertial frames’. In effect, this means  that one inertial frame is as good as any other because if we could distinguish between two frames, for example by carrying out identical  mechanical or optical experiments, the two frames would not be equivalent with respect to  their physical behaviour. (In UET, ‘relativity’ is a consequence of the constancy of the area on the Locality occupied by the Event-capsule, whereas Minkowski deduced an equivalent principle from Einstein’s assumption of relativity.)
As an illustration of what is at stake, consider two friends undertaking train journeys from a station which spans the frontier between two countries. The train will stop at exactly the same number of stations, say 10, and both friends are assured that the stations are ‘equally spaced’ along each line. The friends start at the same time in Grand Central Station but go to platforms which take passengers to places in different countries.
In each journey there are to be exactly ten stops (including the final destination), each of the same duration, and the friends are assured that the two trains will be running at the ‘same’ constant speed. The two friends agree to get off at the tenth station along their respective lines and then compare their positions relative to each other. The tracks are straight and close to the border, so it is easy to compare the location of one station with another.
Friend B will thus be surprised if he finds that friend A has travelled a lot further when they both get off at the tenth station. He might conclude that the tracks were not straight, that the trains were dissimilar or that they did not keep to the ‘same’ speed. He might even conclude that, even though the distances between stations as marked on a map were the same for both countries, say 20 kilometres, the map-makers had got it wrong. However, the problem would be cleared up if the two friends eventually learned that, although the two countries both assessed distances in metres, the standard metre in the two countries was not the same. (This could not happen today, but in the not so distant past measurements of distance, often employing the same terms, did differ not only from one country to another but, at any rate within the Holy Roman Empire, from one ‘free town’ to another. A Leipzig ‘metre’ (or other basic unit of length) was thus not necessarily the same as a Basle one. It was only with the advent of the French Revolution and the Napoleonic system that weights and measures were standardized throughout most of Europe.)

    This analogy is far from exact but makes the following point. On each journey, there are exactly the same number of stops, in this case 10, and both friends would agree on this. There is no question of a train in one country stopping at a station which did not exist for the friend in the other country. The trouble comes because of the spacing between stations which is not the same in the two countries, though at first sight it would appear to be because the same term is used.
    The stops correspond to ultimate events : their number and the precise region they occupy on the Locality is not relative but absolute. The ‘distance’ between one event and the next is, however, not absolute but varies according to the way you ‘slice’ the Event capsules and the region they occupy, though there is a minimum distance which is that of a ‘rest chain’. As Rosser puts it, “It is often asked whether the length contraction is ‘real’? What the principle of relativity says is that the laws of physics are the same in all inertial frames, but the actual measures of particular quantities may be different in different systems” (Note 4).

Is the contraction real?  And, if so,  why is the situation symmetrical? 

   What is not covered in the train journey analogy is the symmetricality of the situation. But if the situation is symmetrical, how can there be any observed discrepancy?
This is a question frequently asked by students and quite rightly so. The normal way of introducing Special Relativity does not, to my mind, satisfactorily answer the question. First of all, rest assured that the discrepancy really does exist : it is not a mathematical fiction invented by Einstein and passed off on the public by the powers that be.
μ mesons (muons) produced by cosmic rays hitting the atmosphere get much farther than they ought to — some even get close to sea level before decaying. Distance contraction explains this and, as far as I know, no other theory does. From the point of view of UET, the μ meson is an event-chain and, from its inception to its ‘decay’, there is a finite number of constituent ultimate events. This number is absolute and has nothing to do with inertial frames or relative velocities or anything else you like to mention. We, however, do not see these occurrences and cannot count the number of ultimate events — if we could, there would be no need for Special Relativity or UET. What we do register, albeit somewhat imprecisely, is the first and last members of this (finite) chain : we consider that the μ meson ‘comes into existence’ at one spot and goes out of existence at another spot on the Locality (‘Space/Time’ if you like). These events are recognizable to us even though we are not moving in sync with the μ meson (or at rest compared to it). But as for the distance between the first and last event, that is another matter. For the μ meson (and for us if we were in sync with it) there would be a ‘rest distance’ counted in multiples of s0 (or su). But since we are not in sync with the meson, these distances are (from our point of view) contracted — but not from the meson’s ‘point of view’. We have thus to convert ‘his’ distances back into ours. Now, for the falling μ meson, the Earth is moving upwards at a speed close to that of light and so the Earth distances are contracted. If then the μ meson covers n units of distance in its own terms, this corresponds to rather more in our terms. The situation is somewhat like holding n strong dollars as against n debased dollars : although the number of dollars remains the same, what you can get with them is not, since the strong dollars buy more essential goods and services. Thus, when converting back to our values we must increase the number. We find, then, that the meson has fallen much farther than expected even though the number of ultimate events in its ‘life’ is exactly the same. We reckon, and must reckon, in our own distances, which are debased compared to those of a rest event-chain. So the meson falls much farther than it would travel (longitudinally) in a laboratory. (If the meson were projected downwards in a laboratory there would be a comparable contraction.) This prediction of Special Relativity has been confirmed innumerable times and constitutes the main argument in favour of its validity.
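For readers who want the standard numbers, here is a small Python sketch (my own, using textbook values rather than anything from this post): a muon’s mean lifetime of about 2.2 microseconds and an assumed speed of 0.98c give a ‘naive’ range of roughly 650 metres, but several kilometres once the contraction/dilation factor is applied.

    import math

    c   = 3.0e8          # m/s
    tau = 2.2e-6         # mean muon lifetime in its own frame, seconds
    v   = 0.98 * c       # assumed, typical speed for a cosmic-ray muon

    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

    print(v * tau)           # ~ 650 m  : naive range, no relativistic correction
    print(gamma * v * tau)   # ~ 3250 m : range once dilation/contraction is taken into account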
From the point of view of UET, what has been lost (or gained) in distance is gained (or lost) in ‘time’, since the area occupied by the event capsule or event capsules remains constant (by hypothesis).  The next post will deal with the time aspect.        SH  1 September 2013

 

Note 1  An ultimate event is, by definition, an event that cannot be further decomposed. To me, if something has occurrence, it must have occurrence somewhere, hence the necessity of an Event Locality, K0, whose function is, in the first instance, simply to provide a ‘place’ where ultimate events can have occurrence and, moreover, to stop them from merging. However, as time went on I found it more natural and plausible to consider an ultimate event, not as an entity in its own right, but rather as a sort of localized ‘distortion’ or ‘condensation’ of the Event Locality. Thus attention shifts from the ultimate event as primary entity to the Locality itself. There has been a similar shift in Relativity from concentration on isolated events and inertial systems (Special Relativity) to concentration on Space-Time itself. Einstein, though he pioneered the ‘particle/finitist’ approach, ended up believing that ‘matter’ was an illusion, simply being “that part of the [Space/Time] field where it is particularly intense”. Despite the failure of Einstein’s ‘Unified Field Theory’, this has, on the whole, been the dominant trend in cosmological thinking up to the present time.
But today, Lee Smolin and others, reject the whole idea of ‘Space/Time’ as a bona fide entity and regard both Space and Time as no more than “relations between causal nodes”. This is a perfectly reasonable point of view which in its essentials goes back to Leibnitz, but I don’t personally find it either plausible or attractive. Newton differed from Leibnitz in that he emphatically believed in ‘absolute’ Space and Time and ‘absolute’ motion ─ although he accepted that we could not determine what was absolute and what was relative with current means (and perhaps never would be able to). Although I don’t use this terminology I am also convinced that there is a ‘backdrop’ or ‘event arena’ which ‘really exists’ and which does in effect provide ‘ultimate’ space/time units. 

Note 2. Does m have to be an integer? Since all ‘speeds’ are integral grid-space/ksana ratios, it would seem at first sight that m must be integral since c  (or c*) is an exact number of grid-spaces per ksana and v = (c*/m). However, this is to neglect the matter of reappearance ratios. In a regular event-chain with a 1/1 reappearance ratio, m would have to be integral ─ and this is the simplified event-chain we are considering here. However, if a certain event-chain has a space/time ratio of 4/7 , i.e. there is a lateral displacement of 4 grid-spaces every 7  ksanas, this can be converted to an ‘ideal’ unitary rate of 4/7 sp/ks.
In contemporary physics space and time are assumed to be continuous, so any sort of ‘speed’ is possible. However, in UET there is no such thing as a fractional unitary rate, e.g. 4/7th of a grid-space per ksana, since grid-spaces cannot be chopped up into anything smaller. An ‘ideal’ fractional rate per ksana is intelligible but it does not correspond to anything that actually takes place. Also, although a rate of m/n is intelligible, all rates, whether simple or ideal, must be rational numbers ─ irrational numbers are man-made conventions that do not represent anything that can actually occur in the real world.

Note 3  Rosser continues :
     “For example, in the example of the game of tennis on board a ship going out to sea, it was reasonable to find that the velocity of the tennis ball was different relative to the ship than relative to the beach. Is this change of velocity ‘real’? According to the theory of special relativity, not only the measures of the velocity of the ball relative to the ship and relative to the seashore will be different, but the measures of the dimensions of the tennis court parallel to the direction of relative motion and the measures of the times of events will also be different. Both the reference frames at rest relative to the beach and to the ship can be used to describe the course of the game and the laws of physics will be the same in both systems, but the measures of certain quantities will differ.”                          W.G.V. Rosser, Introductory Relativity

 

 

1. Anomalous nature of causality

Causality has a peculiar status in science. Without it, there could hardly be science at all and Claude Bernard, the 19th century French biologist, went so far as to define science as determinism. Quantum Mechanics has, of course, somewhat dented the privileged position of causality in scientific thinking ─ but much less than is commonly thought. We still have ‘statistical determinism’ and, for most practical purposes the difference between absolute and statistical determinism is academic. Moreover, contrary to what many people think, chaos theory does not dispense with determinism : in principle chaotic behaviour, though unpredictable, is held to be nonetheless deterministic (Note 1).
I consider that we just have to take Causality on board.  Apart from the idea that there is some sort of objective reality ‘out there’, causality tops the list of concepts I would be the most reluctant to do without. The trouble is that belief in causality is essentially an act of faith since there seems no way to demonstrate whether it is operative or not. If there were actually some sort of test whereby we could show that causality was at work, in the sort of way we can test whether an electric current is flowing through a circuit, questions of causality could be resolved rapidly and decisively. But no such test exists and, seemingly, none can exist. Scientists and engineers generally believe in causality because they can’t do without it, but several philosophers have questioned whether there really is such a thing, notably Hume and Wittgenstein. The fact that event A has up to now always been succeeded by event B does not constitute proof that it always will be, as Hume correctly observed. Scientists, engineers and practical people need to believe in causality and so they simply ignored Hume’s attack, though his arguments have never been refuted. Hume himself said that he abandoned his scepticism concerning the reality of causality when playing billiards ─ as well he might.

Relativity and the Upper Limit to Causal Processes

What we can do, since the advent of Relativity, is to decide when causality is not operative by appealing to the well-known test of whether two events lie within the ‘light cone’. But all this talk of light and observers and sending messages is misleading : it is putting the cart before the horse. What we should concentrate on is causality.
Eddington once said that one could deduce from a priori reasons (i.e. without carrying out any experiments) that there must be an upper limit to the speed of light in any universe, though one could not deduce a priori what the value of this limit had to be. Replace ‘speed of light’ by ‘speed of propagation of a causal influence’ and I agree with him. Certainly I can’t conceive of any ‘world’ where the operation of causality was absolutely simultaneous.
I thus propose introducing as an axiom  of Ultimate Event Theory  that

        There exists an upper limit to the ‘speed’ (event-space/ksana ratio) for the propagation of all causal influences

  In traditional physics, since Relativity, this upper limit is noted c ≈ 3 × 10⁸ metres per second. Without wishing to be pedantic, I think it is worthwhile at the outset distinguishing between an attainable upper limit and the lowest unattainable upper limit. The latter will be noted c* while the attainable limit will be noted c in accordance with tradition. In Ultimate Event Theory the units are not metres and seconds : the standard units are s0, the distance between two adjacent spots on the Event Locality K0, and t0, the ‘length’ of a ksana, the temporal distance between two successive ultimate events in an event-chain. Thus c*, an integer, corresponds to a displacement ratio of c* s0/t0, and c, the greatest actually attainable ‘speed’ ─ better, displacement ratio ─ is (c* – 1) s0/t0.
        In modern physics, since Einstein’s 1905 paper, c, the maximum rate of propagation of causality is equated with the actual speed of light. I argued in an article some years ago that there was no need to exactly identify c, the upper limit of the propagation of causality with the actual speed of light, but only to conclude that the speed of light was very close to this limit (Note 2)

Revised Rule for Addition of Velocities

 Einstein realized that his assumption  (introduced as an Axiom) that “the speed of light in vacuo is the same for all observers irrespective of the movement of the source” invalidated the usual rule for adding velocities. Normally, when considering ‘motion within motion’ as when, for example, I walk down a moving train, we just add my own speed relative to the train to the current speed of the train relative to the Earth. But, if  V = v + w  we can easily push V over the limit simply by making v and w very close to c. For example, if v = (3/4)c and w = (2/3)c  the combined speed will be greater than c.

Since c is a universal constant, the variables v and w may  be defined in terms of c. So, let  v = c/m   w = c/n  where m, n > 1   (though not necessarily integral).

Thus, using the normal (Galileian) Rule for adding velocities

V = c/m + c/n = c(1/m + 1/n) = c(m + n)/mn

The factor (m + n)/mn can exceed 1, and so take V beyond c, even when m > 1 and n > 1.

For, let m = (1 + d), n = (1 + e) with d, e > 0. Then

      mn = (1 + d)(1 + e) = 1 + (d + e) + de = (2 + (d + e)) – (1 – de) = (m + n) – (1 – de)

      So (m + n) – mn = (1 – de) < 1

So, in order to take V beyond c, all we have to do is make  de < 1 and this will be true whenever 0 < d < 1  and 0 < e < 1. For example, if we have m = 3/2 = 1 + ½    and n = 9/8 = 1 + 1/8   we obtain a difference of (1 – 1/16) = 15/16. And in fact

If v = c/(3/2)   w = c/(9/8)  we have

V = (2/3)c + (8/9) c = (14/9) c >  c

The usual formula for ‘adding’ velocities is thus no good since it allows the Upper Limit to be exceeded and this is impossible (by assumption). So we must look for another formula, involving m and n only, which will stop V from exceeding c. We need a factor which, for all possible non-zero quantities m and n (excluding m = 1, n = 1) will make the product < 1.
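As a quick arithmetical check (mine) of the example above, exact fractions confirm that the Galilean sum of (2/3)c and (8/9)c overshoots the limit:

    from fractions import Fraction

    c = 1                         # c left symbolic by choosing units in which c = 1
    v = Fraction(2, 3) * c        # = c/m with m = 3/2
    w = Fraction(8, 9) * c        # = c/n with n = 9/8

    V_galilean = v + w
    print(V_galilean)             # 14/9  -> greater than c
    print(V_galilean > c)         # True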

Determining the New Rule for Adding Velocities

 The first step would seem to be to cancel the mn in the expression (m + n)/mn . So for the multiplying factor  we want mn divided by some expression involving m and n only but which (1) is larger than mn and (2) has the effect of making

           (m + n)/mn × mn/f(m, n)  <  1   for all possible m, n > 1

The simplest function is just (1 + mn) since this is > mn and also has the desired result that

                (m + n)/mn × mn/(1 + mn)  <  1   for all possible m, n > 1

This is so because (m + n) < (1 + mn) for all m, n > 1, as the following shows.

        Again, we set m = (1 + d), n = (1 + e), so

(m + n) = 2 + (d + e)   and   1 + mn = 1 + (1 + (d + e) + de) = (2 + (d + e)) + de = (m + n) + de

So (m + n)/(1 + mn) < 1 (since the denominator is larger than the numerator).

Moreover, this function (1 + mn) is the smallest such function that fits the bill for all legitimate values of m and n. For, if we set f(m, n) = (k + mn), we must have k > (1 – de) for all values d, e > 0, and the smallest such k is just 1 itself.
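The rule arrived at is therefore V = c(m + n)/(1 + mn) with v = c/m and w = c/n which, on substitution, is algebraically identical to the familiar relativistic composition law (v + w)/(1 + vw/c²). The short Python sketch below (my own check, not the author’s) verifies the identity on the earlier example; setting c = 1 is just a choice of units.

    from fractions import Fraction

    c = Fraction(1)

    def combine_uet(m, n):
        # UET form of the rule: V = c(m + n)/(1 + mn), where v = c/m and w = c/n
        return c * (m + n) / (1 + m * n)

    def combine_standard(v, w):
        # familiar relativistic composition law: (v + w)/(1 + vw/c^2)
        return (v + w) / (1 + v * w / c**2)

    m, n = Fraction(3, 2), Fraction(9, 8)
    v, w = c / m, c / n

    print(combine_uet(m, n))        # 42/43 of c, safely below the limit
    print(combine_standard(v, w))   # 42/43 again: the two forms agree
    assert combine_uet(m, n) == combine_standard(v, w)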

I start by assuming that there is an unattainable Upper Limit to the propagation of causal influences, call it c*. This being the case, the ‘most extended’ regular event-chain can have at most a spatial distance/temporal distance ratio of c . Anything beyond this is not possible (Note 3).
This value c is a universal constant and any ‘ordinary’ speeds (space/time ratios) within event-chains ought, if possible to be defined with reference to c, i.e. as c/m, c/n and so on. What are the units of c? The ‘true’ units are s0 and t0, the inter-ultimate event spatial and temporal distances, i.e. the ‘distance’ from the emplacement of one ultimate event to its nearest simultaneously existing neighbour (in one spatial dimension) and the distance from one ultimate event to its immediate successor in a chain. These distances are those of an event-chain at rest with regard to the Locality K0. These values are, by hypothesis, constants.
A successor event in an event-chain can only displace itself by integral units since every event must occupy a spot on the Locality. The smallest displacement would just be that of a single grid-space, 1 s0. Using the c/m notation, this is a displacement ratio of c/c = 1 s0/t0. And the smallest ‘combined speed’ V would be V = c/c + c/c = 2 s0/t0 using the ‘traditional’ method of combining velocities. But, using the new formula, we have

V = c (c + c)/(1 + c²) = 2c²/(1 + c²)  s0/t0

        This is very slightly less than c(2c)/c² = 2. We may consider the second fraction, c²/(1 + c²) = 1/(1 + 1/c²), as a ‘shrinkage factor’. Since c is so large, 1/c² is minute and the shrinkage is correspondingly slight.
More generally, for  u = c/m   and  w = c/n  we have a ‘shrinkage factor’ of  1/(1 + 1/mn)
      This should be interpreted as follows. By the Space/Time capsule Axiom, the region s³t is constant and = s0³t0 where s0 and t0 are constants. We neglect two of the spatial dimensions and concentrate only on the ‘rectangle’ st, which is s0 by t0 for a ‘stationary’ event-chain. Since the sides of the ‘rest’ rectangle are fixed, so is the mixed space/time ratio s0/t0. This in principle gives the ratio of width to height of the region occupied by a single ultimate event in a rest chain ─ but, of course, we do not at present know the values of s0 and t0.
Associated with a single event-chain is the region of the Locality it occupies. If an ultimate event conglomerate repeats at every ksana (has a reappearance rate of 1/1), the event-chain effectively monopolises the available space, i.e. stops any other ultimate events from having occurrence within this region. If there are N events in the chain, the total extent of the occupied region is N (s0)³ t0. Note that if the event-chain contains N events, there are (N – 1) intervals, whereas if we number the ultimate events 0, 1, 2, 3….N there are N temporal intervals, i.e. N ksanas in all. Also, it is important to note that, in this model, each ultimate event itself only occupies a small part of this ‘Space/Time capsule’ of size (s0)³ t0 ─ but its occupancy is enough to exclude all other possible ultimate events.
As stated before, when dealing with simple event-chains with a fixed ‘direction’, we can neglect two of the three ‘spatial’ dimensions (the y and z dimensions), so we only need to bother about the ‘Space-Time rectangle’ of fixed size s0 t0. Thus, when dealing with a simple regular event-chain we only need to bother about the region occupied by N such rectangles. Although the area of the rectangle s0 t0 is constant (= R), the ratio of the sides need not be. However, for all permissible s and t, st = s0 t0. The lengths s0 and t0 of this mixed ‘Space-Time’ rectangle are the ‘rest’ lengths, the dimensions of each capsule when considered in isolation ─ ‘rest’ lengths because, by the Rest Axiom (or definition), every event-chain is at rest relative to the Event Locality K0 (Note 4). Although there is no such thing as absolute movement relative to the Locality, there certainly is relative movement (displacement ksana by ksana) of one event-chain with respect to another, which may be considered to be stationary. And this relative movement changes the dimensions of the event capsules and so of the entire chain. The changed dimensions are noted sv and tv and, since the product sv tv is constant and equal to the ‘rest area’ s0 t0, the sides of the rectangle, sv and tv, change inversely, i.e. s0/sv = tv/t0 : if the ‘spatial’ dimension of the rectangle decreases, the ‘time dimension’, the length of a ksana in absolute terms, increases. I had in a previous post tentatively introduced as an axiom that the rest length of a ksana, t0, was a minimum. But, in fact, as I hoped, this is a consequence of the behaviour of the s dimension. The Upper Limit Assumption, and the consequent discussion of the rule for adding velocities, shows that s0 is a maximum, which in turn makes t0 a minimum as required. And, practically speaking, in ‘normal conditions’ s and t also have respectively a minimum and a maximum, namely the values sc and tc they take when the displacement ratio s/t reaches the upper limit c. Thus s0 > sv ≥ sc and t0 < tv ≤ tc.
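By way of illustration (my own sketch, not the author’s), the inverse variation of the two sides can be checked numerically using the cos φ = (1 – (v/c)²)^(1/2) factor derived in the earlier post; s0 = t0 = 1 and c = 10**6 are arbitrary illustrative values.

    import math

    s0, t0, c = 1.0, 1.0, 10**6

    def capsule_sides(v):
        # cos φ = sqrt(1 - (v/c)^2), the shrink factor derived earlier
        shrink = math.sqrt(1 - (v / c)**2)
        return s0 * shrink, t0 / shrink          # (s_v, t_v)

    for v in (0, c // 2, c - 1):
        sv, tv = capsule_sides(v)
        print(v, sv, tv, sv * tv)                # the product stays equal to s0 * t0 (up to rounding)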

The factor mn/(1 + mn) = 1/(1 + 1/mn) should be regarded as a ‘shrinkage factor’ which gets applied automatically when velocities are combined. It is not a mathematical fiction but something that is really operative in the physical world and which excludes ‘runaway’ speeds which otherwise would wreck the system ─ much as a thermostat stops a radiator from overheating. It is not today helpful to view such procedures as ‘physical laws’ ─ though this is how Newton and possibly even Einstein viewed them. Rather, they are automatic procedures that ‘kick in’ when appropriate.
Mathematics is a tool for getting a handle on reality, no more, no less, and it is essential to distinguish between mathematical procedures which are simply aids to calculation or invention and those which correspond to actual physical mechanisms. I believe that the factor mn/(1 + mn) ─ and likewise γ = 1/(1 – v²/c²)^(1/2), which we shall come to later ─ fall into the latter category. How and why such mechanisms got developed in the first place we do not know and perhaps will never know, though it is quite conceivable that they developed, like so much else, by ‘trial and error’ over a vast expanse of ‘time’, in much the same way as biological mechanisms developed without the users of these mechanisms knowing what they were doing or where they were heading.

The Space/Time Capsule and the units of c. 

This value c is a universal constant and any ‘ordinary’ speeds (space/time ratios) within event-chains ought, if possible, to be defined with reference to c, i.e. as c/m, c/n and so on. But what are the units of c in Ultimate Event Theory?
          As stated in the previous post, I take it as axiomatic that the region of the Space/Time capsule, s³t, is constant and equal to the ‘rest’ value s0³t0. But, although the product is fixed, s and t can and do vary. When dealing with a (resolved) force or motion, which thus has a unique spatial direction, we only need bother about the rectangle of area s × t, which can be plotted as a hyperbola of the form st = constant.
         However, unlike most rectangular hyperbolae, the graph does not extend to infinity along both axes ─ nothing in UET extends to infinity. So s and t must have minimal and maximal values. I have assumed so far that s0 is a maximum and this is in accord with Special Relativity. So this makes t0 a minimum since st = s0 t0  = Ω . Actually, while writing this post, it has occurred to me that one could do things the other way round and have s0 as a minimum and t0 a maximum since there does not seem any reason a priori why this should not be possible. But I shall not pursue this line of thought at the moment.
So, if we wish to convert to ‘ultimate’ units of distance and time, we can use t0, the minimal length of a ksana, which it attains in a ‘rest chain’, as the appropriate temporal unit. But what about spatial distance? Since s0 is the maximum value for the spatial dimension of the mixed ‘Space-Time capsule’ of fixed volume s³t = s0³t0, we must ask whether s has a minimum. The answer is yes. In UET there is no infinity and everything has a minimum and a maximum, with the single exception of the Event Locality itself, K0, which has neither because it is intrinsically ‘non-metrical’. s0³t0 represents the volume of the ‘Space/Time capsule’ enclosing an ultimate event, and this capsule is in UET the basic building-block of physical reality. But an ultimate event itself occupies a non-zero, albeit minuscule, region. Since there is nothing smaller than an ultimate event, we can take the dimensions of the region it occupies as the ultimate volume and each of its spatial lengths as the ultimate unit of distance. So su will serve as the ‘ultimate’ unit of distance, where the subscript u means ‘ultimate’. And, since st = s0 t0 = Ω for all permissible values of s and t, we have su tu = s0 t0 and thus su/s0 = t0/tu. So the ratios of the extreme values of the spatial and temporal units are inversely related. Now s0 = M su where M is an integer, since every permissible length must be a whole-number multiple of the base unit. The limiting displacement ratio s0/t0 is therefore, in ultimate units, (M su)/t0, so M = c.
This was, to me, an unexpected and rather satisfying result. Instead of c appearing, as it were, from nowhere, the UET treatment gives a clear physical meaning to the idea that light travels at (or near to) the limiting value for all causal processes. We can argue the matter in this way.
I view an ultimate event as something akin to a hard seed in a pulpy fruit, or the nucleus in an atom, where the fruit as a whole, or the atom, is the Space/Time capsule. Suppose a causal influence emanates from the kernel of a Space/Time capsule where there is an ultimate event. Then, if it is going to have any effect on the outside world, it must at least travel a distance of ½ s0 to get outside its own capsule and another ½ s0 to get to the centre of the next capsule, where it repeats the original event (or produces a different ultimate event). And in the case of a regular event-chain with a 1/1 reappearance ratio (i.e. one ultimate event at each and every ksana), the causal influence must traverse this distance within the space of a single ksana. If the chain is considered in isolation, and thus at rest, the length of every ksana will be the minimal temporal distance t0. The causal influence must thus have a space/time ratio of (½ s0 + ½ s0)/t0 = s0/t0 = c.
Thus, c is not only the limiting speed for causal processes, but turns out to be the only possible speed within a rest chain, since a causal influence must get outside its Space/Time capsule if it is going to have any effects.  And, since every event-chain is itself held together by causal forces, it makes sense that the electro-magnetic event-chain commonly known as ‘light’ cannot exceed this limit ─ an event-chain which exceeded the limit, supposing this to be conceivable, would immediately terminate since any subsequent events would be completely dissociated from prior events. What this means is that if, say, two light rays were sent out at right angles to each other, each event in the ‘moving’ chain would be displaced a distance of s0 at each ksana relative to the ‘stationary’ chain, while the causal influence in the stationary chain would have traversed exactly the same distance in absolute units. In General Relativity, the constant c is often set equal to 1 to make calculations easier : this interpretation justifies the practice. For in the ‘capsule’ units s0 and t0 the ratio is c su/t0 = 1 s0/t0.

Asymmetry of Space/Time

It would seem that there is a serious asymmetry between the ‘spatial’ dimension(s) and the temporal. Since s/s0 = t0/t , spatial  and temporal distances ─ ‘space’ and ‘time’ ─ are inversely proportional (Note 3). We are a species with a very acute sense of spatial distance and a very crude sense of time ─ the film industry is based on the fact that the eye, or rather the eye + brain, ‘runs together’ images that are flashed across the screen at a rate of more than a few frames per second (8, I believe). And we do not have too much difficulty imagining huge spatial distances such as the diameter of the galaxy, while we find it difficult to conceive of anything at all of any interest happening within a hundredth, let alone a hundred billionth, of a second. Yet cosmologists happily talk about all sorts of complicated processes going on within minute fractions of a second after the Big Bang, so we know that there can be quite a lot of activity in what is, to us, a very small temporal interval.
For whatever reason, one feels that the smallest temporal interval (supposing there is one), which in UET is t0, must be extremely small compared to the maximum unitary ‘spatial’ distance s0. This may be an illusion based on our physiology but I suspect that there is something in it : ‘time’ would seem to be much more finely tuned than space. This goes some way to explaining why we are unaware of there being any ‘gaps’ between ksanas, as I believe there must be, while there are perhaps no gaps between contemporaneous spatial capsules. I believe there must be gaps between ksanas because the whole of the previous spatial set-up has to vanish to give way to the succeeding set-up, whereas ‘length’ does not have to vanish to pass on to width or depth. These gaps, if they exist, are probably extremely small and so do not show up even with our contemporary precision instruments. However, at extremely high (relative) speeds, gaps between ksanas should be observable and one day, perhaps quite soon, will be observed and their minimal and maximal extent calculated, at least approximately.

 Strange Consequence of the Addition Rule

Curiously, the expression

        (m + n)/mn × mn/(1 + mn) = (m + n)/(1 + mn)

is less than 1 not only when m, n > 1 but also when m, n < 1; it equals 1 when m = 1 or n = 1 (or both). I have already shown that

        (m + n)/mn × mn/(1 + mn)  <  1   when m, n > 1

But the inequality holds even when we are dealing with (possibly imaginary) ‘speeds’ greater than the limit. For consider c/m + c/n where m < 1, n < 1, i.e. when c/m and c/n are each greater than c.

Let m = (1 – d), n = (1 – e) with 0 < d, e < 1 (so that m and n remain positive)

        Then (m + n) = 2 – (d + e)

                  1 + mn = 1 + (1 – d)(1 – e)
                         = 1 + (1 – (d + e) + de)
                         = 2 – (d + e) + de
                         = (m + n) + de  >  (m + n)   since d, e > 0

This means that even if both velocities exceed c, their combination (according to the new addition rule) is less than c !

For take c/(1/2) + c/(1/5) = 2c + 5c = 7c by the ‘normal’ addition rule. But, according to the new rule, we have

        V = c((1/2) + (1/5))/(1 + (1/2)(1/5)) = c(0.7)/(1.1) = (7/11)c  <  c
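A quick exact check (mine) of this example with fractions:

    from fractions import Fraction

    c = 1                                      # units chosen so that c = 1
    m, n = Fraction(1, 2), Fraction(1, 5)      # 'speeds' of 2c and 5c

    V_galilean = c / m + c / n                 # 7c
    V_new      = c * (m + n) / (1 + m * n)     # 7/11 of c

    print(V_galilean, V_new, V_new < c)        # 7  7/11  True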

         I am not sure how to interpret this in the context of Ultimate Event Theory and causality. It would seem to imply that ‘event sequences’ ─ one cannot call them ‘chains’ because there is no bonding between the constituent events ─ which have displacement rates above  the causal limit, when combined, are somehow dragged back below the upper limit and become a bona fide event-chain. So, independently, such loose sequences can exist by themselves ‘above the limit’, but if they get entangled with each other, they get pulled back into line. In effect, this makes a sort of sense. Either causality is not operative at all, or, if it is operative, it functions at or below the limit.
This curious result has, of course, been noted many times in discussions about Special Relativity and given rise to all sorts of fantasies such as particles propagating backwards in time, effects preceding causes and so on. Although Ultimate Event Theory may itself appear far-fetched to a lot of people, it does not accommodate such notions : sequence and causality, though little else, are normally conserved and the ‘arrow of time’ only points one way. There must, however, be some good physical reason for this ‘over the speed limit’ anomaly and it will one day be of use technologically.

Random Events   

A random event by definition does not have a causal predecessor, although it can have a causal successor event. Random events are thus causal anomalies,  spanners in the works : they should not exist but apparently they do ─ indeed I suspect that they heavily outnumber well-behaved events which belong to recognizable event-chains (but individual random ultimate events are so short-lived they are practically speaking unobservable at present).
One explanation for the occurrence of random events ─ and they certainly seem to exist ─ is that they are events that have got dissociated from the event-chains to which they ‘by right’ belonged because “they arrived too fast”. If this is so, these stray events could pop up more or less anywhere on the Locality where there was an unoccupied space and they would appear completely out of place (i.e. ‘random’) because they would have no connection at all  with neighbouring events. This is indeed how many so-called ‘random’ events do appear to us : they look for all the world as if they have been wrongly assembled by an absent-minded worker. One might  draw a parallel with ‘jumping genes’ where sections of DNA get fitted into sections where they have no business to be (as far as we can tell).                                                                        S. H.     7/8/13

Note 1 Whether considering that a chaotic system is both unpredictable and yet deterministic is ‘having your cake and eating it’ I leave to others to decide. There is a generic difference between “being unable to make exact predictions because we can never know exactly the original situation” and “being unable to make predictions because the situation evolves in a radically unpredictable, i.e. random, manner”. No one disputes that in cases of non-linear dynamics the situation is inclined to ‘blow up’ because small variations in the original conditions can have vast consequences. Nor does anyone dispute that we will most likely never be able to know the initial conditions with the degree of certainty we would like to have. Therefore, ‘chaotic systems’ are unpredictable in practice ─ though they follow certain contours because of the existence of ‘strange attractors’.
But are the events that make up chaotic systems unpredictable in principle? Positivists sweep the whole discussion under the carpet with the retort, “If we’ll never be able to establish complete predictability, there’s no point in discussing the issue”. But for people of a ‘realistic’ bent, amongst whom I include myself, there is all the reason in the world to discuss the issue and come to the most ‘reasonable’ conclusion. I believe there is a certain degree of randomness ‘hard-wired’ into the workings of the universe anyway, even in ‘well-behaved’ linear systems. Nassim Taleb is, in my view, completely right to insist that there is a very real and important difference between the two cases. He believes there really is an inherent randomness in the workings of the universe and so nothing will ever be absolutely predictable. In consequence, he argues that, instead of bothering about how close we can get to complete predictability, it makes more sense to ‘prepare for the worst’ and allow in advance for the unforeseen.

Note 2. If you don’t identify the upper limit with the observed speed of an actual process, this allows you, even in ‘well-behaved’ linear systems. Nessim Taleb is, in my view, completely right to insist that there is a very real and important difference between the two cases. He believes there really is an inherent randomness in the workings of the universe and so nothing will ever be absolutely predictable. In consequence, he argues that, instead of bothering about how close we can get to complete predictability, it makes more sense to ‘prepare for the worst’ and allow in advance for the unforeseen.

Note 2. If you don’t identify the two exactly, this allows you to attribute a small mass to the ‘object under consideration and, as a matter of fact, at the time it was thought that the neutrino, which travels at around the same speed as light, was massless whereas we now have good reason to believe that the neutrino does have a small mass. But this issue is not germane to the present discussion and, for the purposes of this article, it is not necessary to make too much distinction between the two. When there is possible confusion, I shall use c* to signify the strictly unattainable limit and c to signify the upper limit of what can be attained. Thus v ≤ c  but v < c.

Note 3 Although hardly anyone seems to have been bothered by the issue, it is questionable whether it is legitimate to have mixed space/time values, since this presupposes that there is a shared basic unit.

Note 4   If ‘random’ events greatly outnumber well-behaved causal events, why do we not record the fact with our senses and conclude that ‘reality’ is completely unpredictable? The ancients, of course, did believe this to a large extent and, seemingly, this was one reason why the Chinese did not forge ahead further than they actually did : they lacked the Western notion of ‘physical law’ (according to Needham). There may have been some subliminal perception of underlying disorder that surfaced in such ancient beliefs. But the main reason why the horde of ‘random’ ultimate events passes unnoticed is that these events flash in and out of existence without leaving much of a trace : only very few form recognizable event-chains and our senses are only responsive to relatively large conglomerates of events.

 

Almost every schoolboy these days has heard of the Lorentz transformations which replace the Galilean transformations in Special Relativity. They are basically a means of dealing with the relative motion of two bodies with respect to two orthogonal co-ordinate systems. Lorentz first developed them in an ad hoc manner, somewhat out of desperation, in order to ‘explain’ the null result of the Michelson-Morley experiment and other puzzling experimental results. Einstein, in his 1905 paper, developed them from first principles and always maintained that he did not at the time even know of Lorentz’s work. What were Einstein’s assumptions?

  1. 1.  The laws of physics take the same form in all inertial frames.
  2. 2.  The speed of light in free space has the same value for all observers in inertial frames irrespective of the relative motion of the source and the observer.

As has since been pointed out, Einstein did, in fact, assume rather more than this. For one thing, he assumed that ‘free space’ is homogeneous and isotropic (the same in all directions) (Note 1). A further assumption that Einstein seems to have made is that ‘space’ and ‘time’ are continuous ─ certainly all physicists at the time assumed this without question, and the wave theory of electro-magnetism required it, as Maxwell was aware. However, the continuity postulate does not seem to have played much of a part in the derivation of the equations of Special Relativity, though it did stop Einstein’s successors from thinking in rather different ways about ‘Space/Time’. Despite everything that has happened, the success of Quantum Mechanics, the photo-electric effect and all the rest of it, practically all students of physics think of ‘space’, ‘time’ and electro-magnetism as being ‘continuous’ rather than made up of discrete bits, especially since Calculus is primarily concerned with ‘continuous functions’. Since nothing in the physical world is continuous, Calculus is in the main a false model of reality.

Inertial frames, which play such a big role in Special Relativity, as it is currently taught, do not exist in Nature : they are entirely man-made. It was essentially this realisation that motivated Einstein’s decision to try to formulate physics in a way that did not depend on any particular co-ordinate system whatsoever. Einstein assumed relativity and the constancy of the speed of light and independently deduced the Lorentz  transformations. This post would be far too long if I went into the details of Special Relativity (I have done this elsewhere) but, for the sake of the general reader, a brief summary can and should be given. Those who are familiar with Special Relativity can skip this section.

The Lorentz/Einstein Transformations

Ordinary people sometimes find it useful, and physicists find it indispensable, to locate an object inside a real or imaginary three-dimensional box. Then, if one corner of the imaginary box (e.g. room of house, railway carriage &c.) is taken as the Origin, the spot to which everything else is related, we can pinpoint an object by giving its distance from the corner/Origin, either directly or by giving the distance in terms of three directions. That is, we say the object is so many spaces to the right on the ground, so many spaces on the ground at right angles to this, and so many spaces upwards. These are the three co-ordinate axes x, y and z. (They do not actually need to be at right angles but usually they are and we will assume this.)

Also, if we are locating an event rather than an object, we will need a fourth specification, a ‘time’ co-ordinate telling us when such and such an event happened. For example, if a balloon is floating around the room, then to pinpoint an event in its history it would not be sufficient to give its three spatial co-ordinates; we would need to give the precise time as well. Despite all the hoo-ha, there is nothing in the least strange or paradoxical about us living in a ‘four-dimensional universe’. Of course we do: the only slight problem is that the so-called fourth dimension, time, is rather different from the other three. For one thing, it seems to have only one possible direction instead of two; also the three ‘spatial’ directions are much more intimately connected to each other than they are to the ‘time’ dimension. A single unit serves for the first three, say the metre, but for the fourth we need a completely different unit, the second, and we cannot ‘see’ or ‘touch’ a second whereas we can see and touch a metre rod or ruler.
Now, suppose we have a second ‘box’ moving within the original box and moving in a single direction at a constant speed. We select the x axis for the direction of motion. Now, an event inside the smaller box, say a pistol shot, also takes place within the larger box: one could imagine a man firing from inside the carriage of a train while it has not yet left the station. If we take the corner of the railway carriage to be the origin, clearly the distance from where the shot was fired to the railway-carriage origin will be different from the distance to where the buffers of the train are. In other words, relative to the railway-carriage origin, the distance is less than the distance to the buffers. How much less? Well, that depends on the speed of the train as it pulls out. The difference will be the distance the train has covered since it pulled out. If the train pulls out at a constant speed of 20 metres/second and there has been a lapse of, say, 4 seconds, the distance covered will be 80 metres. More generally, the difference will be vt where t starts at 0 and is counted in seconds. So, supposing that relative to the buffers the distance is x, relative to the railway carriage the distance is x – vt, a rather lesser distance.
Everything else, however, remains the same. The time is the same in the railway carriage as what is marked on the station clock. And, if there is only displacement in one dimension, the other co-ordinates don’t change : the shot is fired from a metre above ground level for example in both systems and so many spaces in from the near side in both systems. This all seems commonsensical and, putting this in formal mathematical language, we have the Galilean Transformations so-called

x′ = x – vt     y′ = y     z′ = z     t′ = t
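For readers who like to see the arithmetic spelt out, here is a minimal sketch of the Galilean rule in Python, using the train example above (the 20 m/s and 4 seconds come from the text; the 100 metres from the buffers is an assumed illustrative figure):

```python
def galilean(x, y, z, t, v):
    """Galilean transformation to a frame moving at constant speed v along the x axis."""
    return x - v * t, y, z, t

# Illustrative values: train speed v = 20 m/s, elapsed time t = 4 s.
# A shot fired 100 m from the buffers is 100 - (20 * 4) = 20 m
# from the corner of the carriage chosen as the moving origin.
x_p, y_p, z_p, t_p = galilean(x=100.0, y=1.0, z=2.0, t=4.0, v=20.0)
print(x_p, t_p)   # 20.0 4.0 -- the time co-ordinate is unchanged
```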

All well and good and nobody before the dawn of the 20th century gave much more thought to the matter. Newton was somewhat puzzled as to whether there was such a thing as ‘absolute distance’ and ‘absolute time’, hence ‘absolute motion’, and though he believed in all three, he accepted that, in practice, we always had to deal with relative quantities, including speed.
If we consider sound in a fluid medium such as air or water, the ‘speed’ at which the disturbance propagates differs markedly depending on whether you are yourself stationary with respect to the medium or in motion, in a motor-boat for example. Even if you are blind, or close your eyes, you can tell whether a police car is moving towards or away from you by the pitch of the siren, the well-known Doppler effect. The speed of sound is not absolute but depends on the relative motion of the source and the observer. There is something a little unsettling in the idea that an object does not have a single ‘speed’ full stop, but rather a variety of speeds depending on where you are and how you are yourself moving. However, this is not too troublesome.
What about light? In the latter 19th century it was viewed as a disturbance rather like sound that was propagated in an invisible medium, and so it also should have a variable speed depending on one’s own state of motion with respect to this background, the ether. However, no differences could be detected. Various methods were suggested, essentially to make the figures come right, but Einstein cut the Gordian knot once and for all and introduced as an axiom (basic assumption) that the speed of light in a vacuum (‘free empty space’) was fixed and completely independent of the observer’s state of motion. In other words, c, the speed of light, was the same in all co-ordinate systems (provided they were moving at a relative constant speed to each other). This sounded crazy and brought about a completely different set of ‘transformations’, known as the Lorentz Transformations  although Einstein derived them independently from his own first principles. This derivation is given by Einstein himself in the Appendix to his ‘popular’ book “Relativity : The Special and General Theory”, a book which I heartily recommend. Whereas physicists today look down on books which are intelligible to the general reader, Einstein himself who was not a brilliant student at university (he got the lowest physics pass of his year) and was, unlike Newton, not a particularly gifted pure mathematician, took the writing of accessible ‘popular’ books extremely seriously. Einstein is the author of the staggering put-down, “If you cannot state an issue clearly and simply, you probably don’t understand it”.
If we use the Galilean Transformations and consider a light signal, i.e. set x = ct where c is the speed of light (or any form of electro-magnetism) in a vacuum, then, with x in metres and t in seconds, x = 3 × 10⁸ metres (approximately) when t = 1 second. Transferring to the other co-ordinate system, which is moving at v metres/sec relative to the first, we have x′ = x – vt and, since t′ is the same as t, when dividing we obtain for x′/t′, (x – vt)/t = ((x/t) – v) = (c – v), a somewhat smaller speed than c. This is exactly what we would expect if dealing with a phenomenon such as sound in a fluid medium. However, Einstein’s postulate is that, when dealing with light, the ratio distance/time is constant in all inertial frames, i.e. in all real or imaginary ‘boxes’ moving in a single direction with a constant difference in their speeds.
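Numerically, the failure of the Galilean rule for light can be seen at a glance; in this throwaway sketch the relative speed v is chosen arbitrarily:

```python
c = 3.0e8            # approximate speed of light in m/s
v = 1.0e8            # relative speed of the second frame (arbitrary illustration)

t = 1.0              # one second after emission
x = c * t            # position of the light signal in the first frame

x_p = x - v * t      # Galilean rule, with t' = t
print(x_p / t)       # 200000000.0, i.e. c - v rather than c
```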

One might doubt whether it is possible to produce ‘transformations’ that do keep c the same for different frames. But it is. We need not bother about the y and z co-ordinates because they are most likely going to stay the same ─ we can arrange to set both at zero if we are just considering an object moving along in one direction. However, the x and t equations are radically changed. In particular, it is not possible to set t′ = t, meaning that ‘time’ itself (whatever that means) will be modified when we switch to the other frame. The equations are

         x′ = γ(x – vt)     t′ = γ(t – vx/c²)     where γ = 1/√(1 – v²/c²)

The reader unused to mathematics will find them forbidding, and they are indeed rather tiresome to handle, though one gets used to them. If you take the ratio x′/t′ you will find ─ unless you make a slip ─ that, using the Lorentz Transformations, you eventually obtain c as desired.

We have x = ct  or t = x/c  and the Lorentz Transformations

         x′ = γ(x – vt)     t′ = γ(t – vx/c²)     where γ = 1/√(1 – v²/c²)

Then  x′/t′ = γ(x – vt) / [γ(t – vx/c²)]

            = (x – vt) / (t – vx/c²)

            = c²(x – vt) / (c²t – vx)

            = (c²x – cv(ct)) / (c(ct) – vx)        [using x = ct]

            = (c²x – cvx) / (cx – vx)

            = cx(c – v) / x(c – v)

            = c

The amazing thing is that this is true for any value of v ─ provided it is less than c ─ so it applies to any sort of system moving relative to the original ‘box’, as long as the relative motion is constant and in a straight line. It is even true for v = 0, i.e. when the two boxes are not moving relative to each other: in such a case the complicated Lorentz Transformations reduce to x′ = x, t′ = t and so on.
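This can be confirmed numerically: apply the Lorentz formulae to a light signal (x = ct) and the ratio x′/t′ comes out as c whatever v is chosen below the limit. A minimal sketch, with arbitrarily chosen speeds:

```python
import math

c = 3.0e8

def lorentz(x, t, v):
    """Lorentz transformation along the x axis to a frame moving at constant speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

t = 1.0
x = c * t                        # a light signal: x = ct in the first frame
for v in (0.0, 0.3 * c, 0.9 * c, 0.999 * c):
    x_p, t_p = lorentz(x, t, v)
    print(v / c, x_p / t_p)      # the ratio is always c, up to rounding error
```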
The Lorentz/Einstein Transformations have several interesting and revealing properties. Though complicated, they do not contain terms in x² or t² or higher powers: they are, in mathematical parlance, ‘linear’. This is what we want for systems moving at a steady pace relative to each other: squares and higher powers rapidly produce erratic increases and a curved trajectory on a space/time graph. Secondly, if v is very small compared to c, the ratio v/c which appears throughout the formulae is negligible since c is so enormous. For normal speeds we do not need to bother about these terms and the Galilean formulae give satisfactory results.
Finally, and this is possibly the most important feature: the Lorentz/Einstein Transformations are ‘symmetric’. That is, if you work backwards, starting with the ‘primed’ frame and x′ and t′, and convert to the original frame, you end up with a mirror image of the formulae with a single difference, a change of sign in the velocity term, denoting motion in the opposite direction (since this time it is the original frame that is moving away). Poincaré was the first to notice this and could have beaten Einstein to the finishing line by enunciating the Principle of Relativity several years earlier ─ but for some reason he didn’t, or couldn’t, make the conceptual leap that Einstein made. The point is that each way of looking at the motion is equally valid, or so Einstein believed, whether we envisage the countryside as moving towards us when we are in the train, or the train moving relative to the static countryside.
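The symmetry is easy to check numerically: transform with +v into the ‘primed’ frame and then back with –v, and the original coordinates reappear. A minimal self-contained sketch (the event and the relative speed are arbitrary):

```python
import math

c = 3.0e8

def lorentz(x, t, v):
    """Lorentz transformation along the x axis to a frame moving at constant speed v."""
    g = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return g * (x - v * t), g * (t - v * x / c**2)

x, t, v = 1000.0, 1.0e-5, 0.8 * c          # an arbitrary event and relative speed
x_p, t_p = lorentz(x, t, v)                # forward: into the 'primed' frame
x_back, t_back = lorentz(x_p, t_p, -v)     # back again: only the sign of v changes
print(abs(x_back - x) < 1e-6, abs(t_back - t) < 1e-15)   # True True
```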

Relativity from Ultimate Event Theory?

    Einstein assumed relativity and the constancy of the speed of light and deduced the Lorentz Transformations: I shall proceed somewhat in the opposite direction and attempt to derive certain well-known features of Special Relativity from basic assumptions of Ultimate Event Theory (UET). What assumptions?

To start with, the Event Number Postulate  which says that
  Between any two  events in an event-chain there are a fixed number of ultimate events. 
And (to recap basic definitions) an ultimate event is an event that cannot be further decomposed — this is why it is called ultimate.
Thus, if the ultimate events in a chain, or subsection of a chain, are numbered 0, 1, 2, 3…….n, there are n intervals. And if the event-chain is ‘regular’, the rough equivalent of an inertial system, the ‘distance’ between any two successive events stays the same. By convention, we can treat the ‘time’ dimension as vertical ─ though, of course, this is no more than a useful convention. The ‘vertical’ distance between the first and last ultimate events of a regular event-chain thus has the value n × ‘vertical’ spacing, or n × t. Note that, whereas the number indicating the quantity of ultimate events and intervals is fixed in a particular case, t turns out to be a parameter which, however, has a minimum ‘rest’ value noted t0. This minimal ‘rest’ value is (by hypothesis) the same for all regular event-chains.
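As a minimal sketch of this postulate (the numbers and the unit chosen for t0 are purely illustrative), a regular event-chain might be modelled like this:

```python
from dataclasses import dataclass

@dataclass
class RegularEventChain:
    n_events: int       # ultimate events numbered 0 .. n_events - 1
    t_spacing: float    # 'vertical' spacing between successive events (a parameter, >= t0)

    @property
    def intervals(self) -> int:
        return self.n_events - 1                   # fixed and absolute for a given chain

    @property
    def vertical_extent(self) -> float:
        return self.intervals * self.t_spacing     # n x t: depends on the parameter t

T0 = 1.0                                           # minimal 'rest' spacing, arbitrary unit
rest_chain = RegularEventChain(n_events=11, t_spacing=T0)
print(rest_chain.intervals, rest_chain.vertical_extent)   # 10 intervals, extent 10 * t0
```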

Likewise, between any two ‘contemporary’, i.e. non-successive, ultimate events there are a fixed number of spots where ultimate events could have (had) occurrence. If there are two or more neighbouring contemporary ultimate events bonded together we speak of an event-conglomerate and, if this conglomerate repeats or gives rise to another conglomerate of the same size, we have a ‘conglomerate event-chain’. (But normally we will just speak of an event-chain).
A conglomerate is termed ‘tight’, and the region it occupies within a single ksana (the minimal temporal interval) is ‘full’ if one could not fit in any more ultimate events (because there are no available spots). And, if all the contemporary ultimate events are aligned, i.e. have a single ‘direction’, and are labelled   0, 1, 2, 3…….n  , then, there are likewise n ‘lateral’ intervals along a single line.

♦        ♦       ♦       ♦       ♦    ………

If the event-conglomerate is ‘regular’, the distance between any two neighbouring events will be the same and, for events labelled 0 to n, the total extent has the value n × ‘lateral’ inter-event spacing, or n × s. Although s, the spacing between contemporary ultimate events, must obviously always be greater than the extent of the spot occupied by an ultimate event, in normal circumstances it does not have a minimum. It has, however, a maximum value s0.

The ‘Space-Time’ Capsule

Each ultimate event is thus enclosed in a four-dimensional ‘space-time capsule’ much, much larger than itself ─ but not so large that it can accommodate another ultimate event. This ‘space-time capsule’ has the mixed dimension s³t.
In practice, when dealing with ‘straight-line’ motion, it is only necessary to consider a single spatial dimension, which can be set as the x axis. The other two dimensions remain unaffected by the motion and retain the ‘rest’ value s0. Thus we only need to be concerned with the ‘space-time’ rectangle st.
We now introduce the Constant Size Postulate

      The extent, or size, of the ‘space-time capsule’ within which an ultimate event can have occurrence (and within which only one can have occurrence) is absolute. This size is completely unaffected by the nature of the ultimate events and their interactions with each other.

           We are talking about the dimensions of the ‘container’ of an ultimate event. The actual region occupied by an ultimate event, while being non-zero, is extremely small compared to the dimensions of the container and may for most purposes be considered negligible, much as we generally do not count the mass of an electron when calculating an atom’s mass. Just as an atom is mainly empty space, a space time capsule is mainly empty ‘space-time’, if the expression is allowed.
Note that the postulate does not state that the ‘shape’ of the container remains constant, or that the two ‘spatial’ and ‘temporal’ dimensions should individually remain constant. It is the extent of the space-time parallelepiped s³t which remains the same or, in the case of the rectangle, the product st that is fixed, not s and t individually. All quantities have minimum and maximum values, so let the minimum temporal interval be named t0 and, conversely, let s0 be the maximum value of s. Thus the quantity s0 t0, the ‘area’ of the space-time rectangle, is fixed once and for all even though the temporal and spatial lengths can, and do, vary enormously. We have, in effect, a hyperbola where xy = constant, but with the difference that the hyperbola is traced out by a series of dots (is not continuous) and does not extend indefinitely in any direction (Note 3).
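A toy numerical sketch of the constant-product relation (units are arbitrary; s0 and t0 are simply set to 1):

```python
S0, T0 = 1.0, 1.0          # maximal spatial and minimal temporal spacing (arbitrary units)
AREA = S0 * T0             # the invariant 'area' of the space-time rectangle

# As t is stretched step by step, s must shrink so that the product stays fixed:
for k in range(1, 6):
    t = k * T0
    s = AREA / t           # s * t = s0 * t0 at every step
    print(k, t, s, s * t)  # the last column never changes
```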
         This quantity s0 t0 is an extremely important constant, perhaps the most important of all. I would guess that different values of s0 t0 would lead to very different universes. The quantity is mixed, so it is tacitly assumed that there is a common unit. What this common unit is, is not clear: it can only be based on the dimensions of an ultimate event itself, or its precise emplacement (not its container capsule), since K0, the backdrop or Event Locality, does not have a metric and is elastic, indeterminate in extent.
         Although one can, in imagination, associate or combine all sorts of events with each other, only events that are bonded sequentially constitute an event-chain, and only bonded contemporary events remain contemporary in successive ksanas. This ‘bonding’ is not a mathematical fiction but a very real force, indeed the most basic and most important force in the physical universe without which the latter would collapse at any and every moment — or rather at every ksana.
         Now, within a single ksana one and only one ultimate event can have occurrence. However, the ‘length’ of a ksana varies from one event-chain to another since, although the size of the emplacements where the ultimate events occur is (by hypothesis) fixed, the spacing is not fixed, is indeterminate though the same in similar conditions (Note 5). The length of a ksana has a minimum and this minimal length is attained only when an event-chain is at rest, i.e. when it is considered without reference to any other event-chain. This is the equivalent of a ‘proper interval’ in Relativity. So t is a parameter with minimal value t0. It is not clear what the maximum value is though there must be one.
         The inter-space distance s does not have a minimum, or not one that is, in normal conditions, ever attained ─ this minimum would be the exact ‘width’ of the emplacement of an ultimate event, an extremely small distance. It transpires that the inter-space distance s is at a maximum in a rest-chain, taking the value s0. (I am not absolutely sure whether this needs to be stated as an assumption or whether it can be derived later from the assumptions already made.)

         Thus, the ‘space-time’ parallelepiped s³t has the value (s0)³t0, an absolute value.

The Rest Postulate

This says that

          Every event-chain is at rest with respect to the Event Locality K0 and may be considered to be ‘stationary’.

          Why this postulate and what does it mean? We all have experience of objects immersed in a fluid medium and there can also be events, such as sounds, located in this medium. Now, from experience, it is possible to distinguish between an object ‘at rest’ in a fluid medium such as the ocean and one ‘in motion’ relative to this medium. And similarly there will be a clear difference between a series of siren calls or other sounds emitted from a ship in a calm sea, and the same sequence of sounds when the ship is in motion. Essentially, I envisage ultimate events as, in some sense, immersed in an invisible omnipresent ‘medium’, K0 ─ indeed I envisage ultimate events as being localized disturbances of K0. (But if you don’t like this analogy, you can simply conceive of ultimate events having occurrence on an ‘Event Locality’ whose purpose is simply to allow ultimate events to have occurrence and to keep them separate from one another.) The Rest Postulate simply means that, on the analogy with objects in a fluid medium, there is no friction or drag associated with chains of ultimate events and the medium in or on which they have occurrence. This is basically what Einstein meant when he said that “the ether does not have a physical existence but it does have a geometric existence”.

What’s the point of this constant if no one knows what it is? Firstly, it by no means follows that this constant s0 t0 is unknowable since we can work backwards from experiments using more usual units such as metres and seconds, giving at least an approximate value. I am convinced that the value of s0 t0  will be determined experimentally within the next twenty years, though probably not in my lifetime unfortunately. But even if it cannot be accurately determined, it can still function as a reference point. Galileo was not able to determine the speed of light even approximately with the apparatus at his disposal (though he tried) but this did not stop him stating that this speed was finite and using the limit in his theories without knowing what it was.

Diverging Regular Event-chains

Imagine a whole series of event-chains with the same reappearance rate which diverge from neighbouring spots ─ ideally which fork off from a single spot. Now, if all of them are regular with the same reappearance rate, the nth member of event-chain E0 will be ‘contemporaneous’ with the nth members of all the other chains, i.e. they will have occurrence within the same ksana. Imagine them spaced out so that each nth ultimate event of each chain is as close as possible to the neighbouring chains. Thus, we imagine E0 as a vertical column of dots (not a continuous vertical line), E1 as a slanting line next to it, then E2 and so on. The first event of each of these chains (not counting the original event common to all) will thus be displaced by a single ‘grid-space’ and there will be no room for any events to have occurrence in between. The ‘speed’ or displacement distance of each event-chain relative to the first (vertical) one is thus lateral distance in successive ksanas/vertical distance in successive ksanas. For a ‘regular’ event-chain the ‘slant’ or speed remains the same and is 1 s/t0, 2 s/t0 and so on; that is, if θr is the slant angle of the rth chain,

tan θr = vr = r s/t0,     r = 1, 2, 3, 4……
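To make the discreteness explicit, here is a toy enumeration of the admissible ‘speeds’; the value given to c* is of course absurdly small, since in reality it is an enormous integer:

```python
T0 = 1.0         # 'rest' temporal spacing (arbitrary unit)
S = 1.0          # one lateral 'grid-space' per ksana (arbitrary unit)
C_STAR = 8       # toy stand-in for the attainable limit c*

# Each admissible regular event-chain is displaced by a whole number of
# grid-spaces per ksana; there are no intermediate 'speeds'.
for r in range(C_STAR + 1):
    slant = r * S / T0           # tan(theta_r) = v_r = r, in units of s/t0
    print(r, slant)
```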

“What,” asked Zeno of Elea, “is the speed of a particular chariot in a chariot race?” Clearly, this depends on what your reference body is. We usually take the stadium as the reference body, but the charioteer himself perceives the spectators as moving towards or away from him and he is much more concerned about his speed relative to that of his nearest competitor than about his speed relative to the arena. We have grown used to the idea that ‘speed’ is relative, counter-intuitive though it appears at first.
But ‘distance’ is a man-made convenience as well : it is not an ‘absolute’ feature of reality. People were extremely put out by the idea that lengths and time intervals could be ‘relative’ when the concept was first proposed but scientists have ‘relatively’ got used to the idea. But everything seems to be slipping away — is there anything at all that is absolute, anything at all that is real? Ultimate Event Theory evolved from my attempts to ponder this question.
The answer is, as far as I am concerned, yes. To start with, there are such things as events and there is a Locality where events occur. Most people would go along with that. But it is also one of the postulates of UET that every macroscopic ‘event’ is composed of a specific number of ultimate events which cannot be further decomposed. Also, it is postulated that certain ultimate events are strongly bonded together into event-chains temporally and event-conglomerates laterally. There is a bonding force, causality.
Also, associated with every event chain is its Event Number, the number of ultimate events between the first event A and the last Z. This number is not relative but absolute. Unlike speed, it does not change as the event-chain is perceived in relation to different bodies or frames of reference. Every ultimate event is precisely localised and there are only a certain number of ultimate events that can be interposed between two events both ‘laterally’ (spatially) and ‘vertically’ (temporally). Finally, the size of the ‘space-time capsule’ is fixed once and for all. And there is also a maximum ‘space/time displacement ratio’ for all event-chains.
This is quite a lot of absolutes. But the distance between ultimate events is a variable since, although the dimensions of each ultimate event are fixed, the spacing is not fixed though it will remain the same within a so-called ‘regular’ event-chain.
It is important to realize that the ‘time’ dimension, the temporal interval measured in ksanas, is not connected up to any of the three spatial dimensions whereas each of the three spatial dimensions is connected directly to the other two. It is customary to take the time dimension as vertical and there is a temptation to think of t, time, being ‘in the same direction’ as the z axis in a normal co-ordinate system. But this is not so : the time dimension is not in any spatial direction but is conceived as being orthogonal (at right angles) to the whole lot. To be closer to reality, instead of having a printed diagram on the page, i.e. in two dimensions, we should have a three dimensional optical set-up which flashes on and off at rhythmic intervals and the trajectory of a ‘particle’ (repeating event-chain) would be presented as a repeating pinpoint of light in a different colour.
Supposing we have a repeating regular event-chain consisting, for simplicity, of just one ultimate event. We represent it as a column of dots, i.e. conceive of it as vertical though it is not. The dots are numbered 0, 1, 2…. and the vertical spacing does not change (since this is a regular event-chain) and is set at t0 since this is a ‘rest chain’. Similar regular event-chains can then be presented as slanting lines to the right (or left) regularly spaced along the x axis. The slant of the line represents the ‘speed’. Unlike the treatment in Calculus and conventional physics, increasing v does not ‘pass through a continuous set of values’: it can only move up by a single ‘lateral’ space each time. The speeds of the different event-chains are thus 0 s/t0 (= 0); 1 s/t0; 2 s/t0; 3 s/t0; 4 s/t0;…… n s/t0 and so on right up to c s/t0. But to what do we relate the spacing s? To the ‘vertical’ event-chain or to the slanting one? We must relate s to the event-chain under consideration, so that its value depends on v: write it sv. The ratio s/t0 is thus a mixed ratio sv/t0. Similarly, tv gives the interval between successive events in the ‘moving’ event-chain. The number of these intervals does not increase, because there are only a fixed number of events in any event-chain however it is evaluated, but the intervals themselves undoubtedly increase because the hypotenuse gets larger. What about the spacing along the horizontal? Does it also increase? Stay the same? If we now introduce the Constant Size Postulate, which says that the product sv tv = s0 t0, we find that sv decreases with increasing v since tv certainly increases. There is thus an inverse ratio and one consequence of this is that the mixed ratio sv/t0 = s0/tv and we get symmetry. This leads to relativity, whereas any other relation does not and we would have to specify which regular event-chain ‘really’ is the vertical one. One can legitimately ask which is the ‘real’ spatial distance between neighbouring events. The answer is that every distance is real, and not just real for a particular observer. Most phenomena are not observed at all but they still occur, and the distances between these events are real: we as it were take our pick, or more usually one distance is imposed on us.
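The symmetry claimed here follows from the Constant Size Postulate alone: however tv grows with the slant, setting sv = s0 t0/tv makes the two mixed ratios coincide. A small sketch in which the particular growth law for tv (the ‘hypotenuse’) is only an illustrative assumption:

```python
import math

S0, T0 = 1.0, 1.0                     # maximal s, minimal t (arbitrary units)

for r in range(8):                    # r = slant, i.e. grid-spaces traversed per ksana
    t_v = T0 * math.hypot(1, r)       # illustrative: the hypotenuse grows with the slant
    s_v = (S0 * T0) / t_v             # Constant Size Postulate: s_v * t_v = s0 * t0
    print(r, s_v / T0, S0 / t_v)      # the two mixed ratios are identical
```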

Now the real pay-off is that each of these regular event-chains with different speeds v is an equally valid description of the same event-chain. Each of these varying descriptions is true even though the time intervals and distances vary. This is possible because the important thing, what really matters, does not change: in any event-chain the number and order of the individual events is fixed once and for all although the distances and times are not. Rosser, in his excellent book Introductory Relativity, when discussing such issues gives the useful analogy of a game of tennis being played on a cruise liner in calm weather. The game would proceed much as on land and, if in a covered court, exactly as on land. And yet the ‘speed’ of the ball varies depending on whether you are a traveller on the boat or someone watching with a telescope from another boat or from land. The ‘real’ speed doesn’t actually matter, or, as I prefer to put it, is indeterminate though fixed within a particular inertial frame (event system). Taking this one step further, not just the relative speed but the spacing between the events of a regular event-chain ‘doesn’t matter’ because the constituent events are the same and appear in the same order. It is interesting that, on this interpretation, a certain indeterminacy with regard to distance is already making its appearance before Quantum Theory has even been mentioned.

Which distance or time interval to choose?

Since, apparently, the situation between regular event-chains is symmetric (or between inertial systems if you like) one might legitimately wonder how there ever could be any observed discrepancy since any set of measurements a hypothetical observer can make within his own frame (repeating event system) will be entirely consistent and unambiguous. In Ultimate Event Theory, the situation is, in a sense, worse since I am saying that, whether or not there is or can be an observer present, the time-distance set-up is ‘indeterminate’ — though the number and order of events in the chain is not. Any old ‘speed’ will do provided it is less than the limiting value c. So this would seem to make the issue quite academic and there would be no need to learn about Relativity. The answer is that this would indeed be the case if we as observers and measurers or simply inhabitants of an event-environment could move from one ‘frame’ to another effortlessly and make our observations how and where we choose. But we can’t : we are stuck in our repeating event-environment constituted by the Earth and are at rest within it, at least when making our observations. We are stuck with the distance and time units of the laboratory/Earth event-chain and cannot make observations using the units of the electron event-chain (except in imagination). Our set of observations is fully a part of our system and the units are imposed on us. And this does make a difference, a discernible, observable difference when dealing with certain fast-moving objects.
Take the µ-meson. µ-mesons are produced by cosmic rays in the upper reaches of the atmosphere and are normally extremely short-lived, about 2.2 × 10⁻⁶ sec. This is the (average) ‘proper’ time, i.e. when the µ-meson is at rest ─ in my terms it would be N × t0 ksanas. Now, these mesons would, using this t value, hardly go more than 660 metres even if they were falling with the speed of light (Note 4). But a substantial portion actually reach sea level, which seems impossible. Now, we have two systems. The first is the meson event-chain, which flashes on and off N times, whatever N is, before terminating, i.e. not reappearing. Its own ‘units’ are t0 and s0 since it is certainly at rest with itself. For the meson, the Earth and the lower atmosphere are rushing up towards it with something approaching the limiting speed. We are inside the Earth system and use Earth units: we cannot make observations within the meson. The time intervals of the meson’s existence are, from our rest perspective, distended: there are exactly the same number of ksanas for us as for the meson but, from our point of view, the meson is in motion and each ‘motion’ ksana is longer, in this case much, much longer. It thus ‘lives’ longer, if by living longer we mean having a longer time span in a vague absolute way, rather than having more ‘moments of existence’. The meson’s ksana is worth, say, eight of our ksanas. But the first and last ultimate events of the meson’s existence are events in both ‘frames’, in ours as well as its. And if we suppose that each time it flashed into existence there was a (slightly delayed) flash in our event-chain, the flashes would be much more spaced out and so would be the effects. So we would ‘observe’, say, a duration of eight of ‘our’ ksanas between consecutive flashings instead of one. And the spatial distance between flashes would also be evaluated in our system of metres and kilometres: this is imposed on us since we cannot measure what is going on within the meson event-chain. The meson actually would travel a good deal further in our system ─ not ‘would appear to travel farther’. Calculations show that it is well within the meson’s capacity to reach sea level (see the full discussion in Rosser, Introductory Relativity pp. 71-3).
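Rough numbers make the point. Only the rest lifetime of 2.2 × 10⁻⁶ sec comes from the text; the speed of 0.998c is assumed here purely for illustration:

```python
import math

c = 3.0e8                    # m/s
tau0 = 2.2e-6                # proper (rest) lifetime of the mu-meson, in seconds
v = 0.998 * c                # illustrative speed, close to the limit

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
tau_earth = gamma * tau0     # the lifetime as evaluated in Earth units

print(round(c * tau0))       # 660 m   : the naive range without dilation
print(round(gamma, 1))       # 15.8    : each 'moving' ksana is worth nearly 16 of ours
print(round(v * tau_earth))  # ~10400 m: ample to reach sea level from the upper atmosphere
```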
What about if we envisaged things from the perspective of the meson? Supposing, just supposing, we could transfer to the meson event-chain or its immediate environment and could remember what things were like in the world outside, the familiar Earth event-frame. We would notice nothing strange about ‘time’, the intervals between ultimate events, or the brain’s merging of them, would not surprise us at all. We would consider ourselves to be at rest. What about if we looked out of the window at the Earth’s atmosphere speeding by? Not only would we recognize that there was relative motion but, supposing there were clear enough landmarks (skymarks rather), the distances between these marks would appear to be far closer than expected — in effect there would be a double or triple sense of motion since our perception of motion is itself based on estimates of distance. As the books say, the Earth and its atmosphere would be ‘Lorentz contracted’. There would be exactly the same number of ultimate events in the meson’s trajectory, temporarily our trajectory also. The first and last event of the meson’s lifetime would be separated by the same number of temporal intervals and if these first and last events left marks on the outside system, these marks would also be separated by exactly the same number of spatial intervals. Only these spatial intervals — distances — would be smaller. This would very definitely be observed : it is as if we were looking out at the countryside on a familiar journey in a train fantastically speeded up. We would still consider ourselves at rest but what we saw out of the window would be ludicrously foreshortened and for this reason we would conclude that we were travelling a good deal faster than on our habitual journey. I do not think there would be any obvious way to recognize the time dilation of the outside system.

One is often tempted to think that the time dilation and the spatial contraction cancel each other out so all this talk of relativity is purely academic since any discrepancies should cancel out. This would indeed be the case if we were able to make our observations inside the event-chain we are observing, but we make the measurements (or perceptions) in a single frame. Although it is the meson event-chain that is dictating what is happening, both the time and spatial distance observations are made in our system. It is indeed only because of this that there is so much talk about ‘observers’ in Special Relativity. The point is not that some intelligent being observes something because usually he or she doesn’t : the point is that the fact of observation, i.e. the interaction with another system seriously confuses the issue. The ‘rest-motion’ situation is symmetrical but the ‘observing’ situation is not symmetrical, nor can it be in such circumstances.

This raises an important point.  In Ultimate Event Theory, as in Relativity, the situation is ‘kinematically’ symmetrical. But is it causally symmetrical? Although Einstein stressed that c was a limit to the “transfer of causality”  he was more concerned with light and electro-magnetism than causality. UET is concerned above all with causality — I have not mentioned the speed of light yet and don’t need to. In situations of this type, it is essential to clearly identify the primary causal chain. This is obviously the meson : we observe it, or rather we pick up indications of its flashings into and out of existence. The observations we make, or simply perceptions,  are dependent on the meson, they do not by themselves constitute a causal chain. So it looks at first sight as if we have a fundamental asymmetry : the meson event-chain is the controlling one and the Earth/observer event chain  is essentially passive. This is how things first appeared to me. But on reflection I am not so sure. In many (all?) cases of ‘observation’ there is interaction with the system being observed and it is inevitably going to be affected by this even if it has no senses or observing apparatus of its own. One could thus argue that there is causal symmetry after all, at least in some cases. There is thus a kind of ‘uncertainty principle’ due to the  interaction of two systems latent in Relativity before even Quantum Mechanics had been formulated. This issue and the related one of the limiting speed of transmission of causality will be dealt with in the subsequent post.

Sebastian Hayes  26 July
Note 1. And in point of fact, if General Relativity is to be believed, ‘free space’ is not strictly homogeneous even when empty of matter and neither is the speed of light strictly constant since light rays deviate from a straight path in the neighbourhood of massive bodies.

Note 2  For those people like me who wish to believe in the reality of K0 ─ rather than seeing it as a mere mathematical convenience like a co-ordinate system ─ the lack of any ‘friction’ between the medium or backdrop and the events or foreground would, I think, be quite unobjectionable, even ‘obvious’, likewise the entire lack of any ‘normal’ metrical properties such as distance. The ‘backdrop’, that which lies ‘behind’ material reality though in some sense permeating it, is not physical and hence is not obliged to possess familiar properties such as a shape, a metric, a fixed distance between two points and so on. Nevertheless, this backdrop is not completely devoid of properties: it does have the capacity to receive (or produce) ultimate events and to keep them separate, which involves a rudimentary type of ‘geometry’ (or topology). Later, as we shall see, it would seem that it is affected by the material points on it, so that this ‘geometry’, or ‘topology’, is changed, and so, in turn, affects the subsequent patterning of events. And so it goes on in a vicious or creative circle, or rather spiral.
            The relation between K0, the underlying substratum or omnipresent medium, and the network of ultimate events we call the physical universe, K1  is somewhat analogous to the distinction between nirvana and samsara in Hinayana Buddhism. Nirvana  is completely still and is totally non-metrical, indeed non-everything (except non-real), whereas samsara is turbulence and is the world of measure and distancing. It is alma de-peruda, the ‘domain of separation’, as the Zohar puts it.  The physical world is ruled by causality, or karma, whereas nirvana is precisely the extinction of karma, the end of causality and the end of measurement.

Note 3   The ‘Space-time hyperbola’, as stated, does not extend indefinitely either along the ‘space’ axis s (equivalent of x) or indefinitely upwards along the ‘time’ axis (equivalent of y) ─ at any rate for the purposes of the present discussion. The variable t has a minimum t0 and the variable s a maximum s0, which one suspects is very much greater than sc. Since there is an upper limit to the speed of propagation of a causal influence, c, there will in practice be no values of t greater than tc and no s values smaller than sc. It thus seems appropriate to start marking off the s axis at the smallest value sc = s0/c, which can function as the basic unit of distance. Then s0 is equal to c of these ‘units’. We thus have a hyperbola something like this ─ except that the curve should consist of a string of separate dots which, for convenience, I have run together.
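In the same spirit, a toy listing of the discrete ‘dots’ of the hyperbola, marking the s axis in units of sc = s0/c (the value given to c is again absurdly small, purely for illustration):

```python
C = 8                          # toy stand-in for c (in reality an enormous integer)
S0, T0 = 1.0, 1.0              # maximal s, minimal t (arbitrary units)

s_c = S0 / C                   # smallest s value reached in practice: the basic unit
t_c = C * T0                   # largest t value reached in practice

print(S0 / s_c)                # 8.0 -- s0 measures exactly c of these basic units
for k in range(1, C + 1):      # the 'string of separate dots' on the hyperbola
    t = k * T0
    print(t, (S0 * T0) / t)    # each pair satisfies s * t = s0 * t0
```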

Note 4  See Rosser, Introductory Relativity pp. 70-73. Incidentally, I cannot recommend too highly this book.

Note 5   I have not completely decided whether it is the ‘containers’ of ultimate events that are elastic, indeterminate, or the ‘space’ between the containers (which have the ultimate events inside them). I am inclined to think that there really are temporal gaps not just between ultimate events themselves but even between their containers, whereas this is probably not so in the case of spatial proximity. This may be one of the reasons, perhaps even the principal reason, why ‘time’ is felt to be a very different ‘dimension’. Intuitively, or rather experientially, we ‘feel’ time to be different from space and all the talk about the ‘Space/Time continuum’ ─ a very misleading phrase ─ is not enough to dispel this feeling.

To be continued  SH  18 July 2013