1. Anomalous nature of causality

Causality has a peculiar status in science. Without it, there could hardly be science at all, and Claude Bernard, the 19th-century French physiologist, went so far as to define science as determinism. Quantum Mechanics has, of course, somewhat dented the privileged position of causality in scientific thinking ─ but much less than is commonly thought. We still have ‘statistical determinism’ and, for most practical purposes, the difference between absolute and statistical determinism is academic. Moreover, contrary to what many people think, chaos theory does not dispense with determinism: in principle, chaotic behaviour, though unpredictable, is held to be nonetheless deterministic (Note 1).
I consider that we just have to take Causality on board. Apart from the idea that there is some sort of objective reality ‘out there’, causality tops the list of concepts I would be most reluctant to do without. The trouble is that belief in causality is essentially an act of faith, since there seems to be no way to demonstrate whether it is operative or not. If there were actually some sort of test whereby we could show that causality was at work, in the sort of way we can test whether an electric current is flowing through a circuit, questions of causality could be resolved rapidly and decisively. But no such test exists and, seemingly, none can exist. Scientists and engineers generally believe in causality because they can’t do without it, but several philosophers have questioned whether there really is such a thing, notably Hume and Wittgenstein. The fact that event A has up to now always been succeeded by event B does not constitute proof, as Hume correctly observed. Scientists, engineers and practical people need to believe in causality, and so they have simply ignored Hume’s attack, though his arguments have never been refuted. Hume himself said that he abandoned his scepticism concerning the reality of causality when playing billiards ─ as well he might.

Relativity and the Upper Limit to Causal Processes

What we can do, since the advent of Relativity, is to decide when causality is not operative by appealing to the well-known test of whether two events lie within the ‘light cone’. But all this talk of light and observers and sending messages is misleading: it is putting the cart before the horse. What we should concentrate on is causality.
Eddington once said that one could deduce on a priori grounds (i.e. without carrying out any experiments) that there must be an upper limit to the speed of light in any universe, though one could not deduce a priori what the value of this limit had to be. Replace ‘speed of light’ by ‘speed of propagation of a causal influence’ and I agree with him. Certainly I can’t conceive of any ‘world’ where the operation of causality was absolutely instantaneous.
I thus propose introducing as an axiom  of Ultimate Event Theory  that

        There exists an upper limit to the ‘speed’ (event-space/ksana ratio) for the propagation of all causal influences

In traditional physics, since Relativity, this upper limit is noted c ≈ 3 × 10⁸ metres per second. Without wishing to be pedantic, I think it is worthwhile at the outset distinguishing between an attainable upper limit and the lowest unattainable upper limit. The latter will be noted c* while the attainable limit will be noted c, in accordance with tradition. In Ultimate Event Theory the units are not metres and seconds: the standard unit of length is s0, the distance between two adjacent spots on the Event Locality K0, and the standard unit of time is t0, the ‘length’ of a ksana, i.e. the distance between two successive ultimate events in an event-chain. Thus c* is a whole number of grid-spaces per ksana, c* s0/t0, and c, the greatest actually attainable ‘speed’ ─ better, displacement ratio ─ is (c* – 1) s0/t0.
In modern physics, since Einstein’s 1905 paper, c, the maximum rate of propagation of causality, is equated with the actual speed of light. I argued in an article some years ago that there was no need to identify c, the upper limit of the propagation of causality, exactly with the actual speed of light, but only to conclude that the speed of light is very close to this limit (Note 2).
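To make the distinction between the two limits concrete, here is a minimal numerical sketch in Python; the figure chosen for c* is an arbitrary stand-in, since nothing in the theory fixes its value a priori:

# Sketch of the attainable limit c versus the lowest unattainable limit c*.
# The numerical value of C_STAR is hypothetical: UET fixes no value a priori.
C_STAR = 3_000_000_000          # lowest unattainable limit, an integer (grid-spaces per ksana)
C = C_STAR - 1                  # greatest attainable displacement ratio, in units of s0/t0

def is_possible(displacement_ratio: int) -> bool:
    """A causal event-chain can only display an integral displacement ratio below c*."""
    return 0 <= displacement_ratio <= C

print(is_possible(C))           # True  - the attainable limit
print(is_possible(C_STAR))      # False - the unattainable limit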

Revised Rule for Addition of Velocities

 Einstein realized that his assumption  (introduced as an Axiom) that “the speed of light in vacuo is the same for all observers irrespective of the movement of the source” invalidated the usual rule for adding velocities. Normally, when considering ‘motion within motion’ as when, for example, I walk down a moving train, we just add my own speed relative to the train to the current speed of the train relative to the Earth. But, if  V = v + w  we can easily push V over the limit simply by making v and w very close to c. For example, if v = (3/4)c and w = (2/3)c  the combined speed will be greater than c.

Since c is a universal constant, the variables v and w may (and should) be defined in terms of c. So let v = c/m and w = c/n, where m, n > 1 (though not necessarily integral).

Thus, using the normal (Galilean) rule for adding velocities,

        V = c/m + c/n = c(1/m + 1/n) = c(m + n)/mn

The factor (m + n)/mn can exceed 1 even when m > 1, n > 1, and whenever it does, V exceeds c.

For, let m = (1 + d), n = (1 + e) with d, e > 0. Then

        mn = (1 + d)(1 + e) = 1 + (d + e) + de = (2 + (d + e)) – (1 – de) = (m + n) – (1 – de)

        So (m + n) – mn = (1 – de) < 1

So, in order to take V beyond c, all we have to do is make de < 1, and this will be true whenever 0 < d < 1 and 0 < e < 1. For example, if we take m = 3/2 = 1 + 1/2 and n = 9/8 = 1 + 1/8, we obtain a difference (m + n) – mn of (1 – 1/16) = 15/16. And in fact, if v = c/(3/2) and w = c/(9/8), we have

        V = (2/3)c + (8/9)c = (14/9)c > c
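A quick numerical check of this example, as a Python sketch with c normalised to 1:

# Galilean addition pushes the combined 'speed' over the limit c (here c = 1).
c = 1.0
m, n = 3/2, 9/8                 # v = c/m, w = c/n, each below c
v, w = c/m, c/n                 # v = (2/3)c, w = (8/9)c
V = v + w                       # naive (Galilean) addition
print(V)                        # 1.5555... i.e. (14/9)c > c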

The usual formula for ‘adding’ velocities is thus no good since it allows the Upper Limit to be exceeded, and this is impossible (by assumption). So we must look for another formula, involving m and n only, which will stop V from exceeding c. We need a factor which, for all possible quantities m and n (excluding m = 1, n = 1), will make the product < 1.

Determining the New Rule for Adding Velocities

The first step would seem to be to cancel the mn in the expression (m + n)/mn. So, for the multiplying factor, we want mn divided by some expression f(m, n) involving m and n only which (1) is larger than mn and (2) has the effect of making

        (m + n)/mn × mn/f(m, n) < 1   for all possible m, n > 1

The simplest such function is just f(m, n) = (1 + mn), since this is > mn and also has the desired result that

        (m + n)/mn × mn/(1 + mn) = (m + n)/(1 + mn) < 1   for all possible m, n > 1

This is so because (m + n) < (1 + mn) for all m, n > 1.

        Again, we set m = (1 + d), n = (1 + e)  so

        (m + n) = 2 + (d + e)   and   1 + mn = 1 + (1 + (d + e) + de)

                                              = (2 + (d + e) + de)

                                              = (m + n) + de

So (m + n)/(1 + mn) < 1 (since the denominator is larger than the numerator).

Moreover, this function (1 + mn) is the smallest that fits the bill for all legitimate values of m and n. For, if we set f(m, n) = (k + mn), we must have k ≥ (m + n) – mn = (1 – de) for all values d, e > 0; and since de can be made as small as we please, the smallest such constant k is just 1 itself.
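The rule just derived, V = c(m + n)/(1 + mn), is easy to check numerically. A minimal Python sketch (c normalised to 1) confirming that no combination of ratios with m, n > 1 ever reaches c:

# The new addition rule: combine v = c/m and w = c/n as V = c(m + n)/(1 + mn).
import itertools

c = 1.0

def combine(m: float, n: float) -> float:
    """Combine v = c/m and w = c/n by the new rule."""
    return c * (m + n) / (1 + m * n)

# For any m, n > 1 the combined value stays strictly below c.
samples = [1.0001, 9/8, 3/2, 2.0, 10.0, 1e6]
assert all(combine(m, n) < c for m, n in itertools.product(samples, repeat=2))
print(combine(3/2, 9/8))        # 0.9767... c, instead of the impossible (14/9)c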

I start by assuming that there is an unattainable Upper Limit to the speed of propagation of causal influences, call it c*. This being the case, the ‘most extended’ regular event-chain can have at most a spatial distance/temporal distance ratio of c. Anything beyond this is not possible (Note 3).
This value c is a universal constant, and any ‘ordinary’ speeds (space/time ratios) within event-chains ought, if possible, to be defined with reference to c, i.e. as c/m, c/n and so on. What are the units of c? The ‘true’ units are s0 and t0, the inter-ultimate-event spatial and temporal distances, i.e. the ‘distance’ from the emplacement of one ultimate event to its nearest simultaneously existing neighbour (in one spatial dimension), and the distance from one ultimate event to its immediate successor in a chain. These distances are those of an event-chain at rest with regard to the Locality K0. These values are, by hypothesis, constants.
A successor event in an event-chain can only displace itself by integral units since every event must occupy a spot on the Locality. The smallest displacement would just be that of a single grid-space, 1 s0. Using the c/m notation, this is a displacement ratio of c/c = 1 s0/t0. And the smallest ‘combined speed’ V would be V = c/c + c/c = 2, using the ‘traditional’ method of combining velocities. But, using the new formula, we have

        V = c(c + c)/(1 + c²) = 2c²/(1 + c²)  s0/t0

This is very slightly less than c(2c)/c² = 2. We may consider the second fraction, c²/(1 + c²) = 1/(1 + 1/c²), as a ‘shrinkage factor’. Since c is so large, 1/c² is minute and the shrinkage is correspondingly slight.
More generally, for v = c/m and w = c/n we have a ‘shrinkage factor’ of 1/(1 + 1/mn).
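A small numerical sketch of this shrinkage factor in Python; the figure used for c is purely hypothetical, since UET does not supply a value:

# Shrinkage factor 1/(1 + 1/(m*n)) applied to the Galilean sum c/m + c/n.
c = 3e8                          # hypothetical stand-in value for c (grid-spaces per ksana)

def shrinkage(m: float, n: float) -> float:
    return 1 / (1 + 1 / (m * n))

# Minimal displacements: v = w = c/c = 1 grid-space per ksana, i.e. m = n = c.
# The factor differs from 1 by roughly 1/c^2, far below double precision,
# so it simply prints as 1.0:
print(shrinkage(c, c))           # 1.0 (true deviation ~ 1.1e-17, invisible in floats)
# Near the limit, however, the shrinkage is severe:
print(shrinkage(3/2, 9/8))       # 0.6279...
print(shrinkage(3/2, 9/8) * 14/9)  # 0.9767...: (14/9)c is pulled back below c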
This should be interpreted as follows. By the Space/Time Capsule Axiom, the region s³t is constant and = s0³ t0, where s0 and t0 are constants. We neglect two of the spatial dimensions and concentrate only on the ‘rectangle’ st, which is s0 by t0 for a ‘stationary’ event-chain. Since the sides of the ‘rest’ rectangle are fixed, so is the mixed space/time ratio s0/t0. This in principle gives the ratio of width to height of the region occupied by a single ultimate event in a rest chain ─ but, of course, we do not at present know the values of s0 and t0.
Associated with a single event-chain is the region of the Locality it occupies. If an ultimate event conglomerate repeats at every ksana (has a reappearance rate of 1/1), the event-chain effectively monopolises the available space, i.e. stops any other ultimate events from having occurrence within this region. If there are N events in the chain, the total extent of the occupied region is N s0³ t0. Note that if the event-chain contains N events, there are (N – 1) intervals, whereas if we number the ultimate events 0, 1, 2, 3 … N there are N temporal intervals, i.e. N ksanas in all. Also, it is important to note that, in this model, each ultimate event itself occupies only a small part of this ‘Space/Time capsule’ of size s0³ t0 ─ but its occupancy is enough to exclude all other possible ultimate events.
As stated before, when dealing with simple event-chains with a fixed ‘direction’, we can neglect two of the three ‘spatial’ dimensions (the y and z dimensions), so we only need to bother about the ‘Space-Time rectangle’ of fixed area s0 t0. Thus, when dealing with a simple regular event-chain, we only need to bother about the region occupied by N such rectangles. Although the area of the rectangle is constant (st = s0 t0 = R), the ratio of the sides need not be. The lengths s0 and t0 of this mixed ‘Space-Time’ rectangle are the ‘rest’ lengths, the dimensions of each capsule when considered in isolation ─ ‘rest’ lengths because, by the Rest Axiom (or definition), every event-chain is at rest relative to the Event Locality K0 (Note 4). Although there is no such thing as absolute movement relative to the Locality, there certainly is relative movement (displacement ksana by ksana) of one event-chain with respect to another which may be considered stationary. And this relative movement changes the dimensions of the event capsules and so of the entire chain. The changed dimensions are noted sv and tv and, since the product sv tv is constant and equal to the ‘rest area’ s0 t0, the sides of the rectangle change inversely, i.e. s0/sv = tv/t0; so if the ‘spatial’ dimension of the rectangle decreases, the ‘time dimension’, the length of a ksana in absolute terms, increases. I had in a previous post tentatively introduced as an axiom that the rest length of a ksana, t0, was a minimum. But in fact, as I hoped, this is a consequence of the behaviour of the s dimension: the Upper Limit Assumption, and the consequent discussion of the rule for adding velocities, shows that s0 is a maximum, which in turn makes t0 a minimum as required. And, practically speaking, in ‘normal conditions’ s will also have a minimum and t a maximum, namely the values sc and tc they take when the displacement ratio s/t equals the upper limit c. Thus s0 > sv ≥ sc and t0 < tv ≤ tc.
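The text above fixes only the inverse relation sv tv = s0 t0, not the explicit dependence of sv on the relative ‘speed’; the γ factor mentioned further on suggests a Lorentz-type dependence, so the following Python sketch assumes sv = s0/γ and tv = γ t0 purely for illustration (the values of s0, t0 and v/c are likewise illustrative):

# Sketch of the constant-area Space/Time rectangle, assuming (as the later
# mention of gamma suggests) a Lorentz-type dependence: sv = s0/gamma, tv = gamma*t0.
import math

s0, t0 = 1.0, 1.0               # 'rest' dimensions of the capsule (arbitrary units)

def capsule(v_over_c: float):
    g = 1 / math.sqrt(1 - v_over_c**2)   # gamma factor
    sv, tv = s0 / g, g * t0              # space shrinks, the ksana lengthens
    return sv, tv

for beta in (0.0, 0.5, 0.9, 0.99):
    sv, tv = capsule(beta)
    print(f"v = {beta}c: sv = {sv:.3f}, tv = {tv:.3f}, sv*tv = {sv*tv:.3f}")
# The product sv*tv stays equal to s0*t0, and s0/sv = tv/t0 as stated above.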

The factor mn/(1 + mn) = 1/(1 + 1/mn) should be regarded as a ‘shrinkage factor’ which gets applied automatically when velocities are combined. It is not a mathematical fiction but something that is really operative in the physical world and which excludes ‘runaway’ speeds that would otherwise wreck the system ─ much as a thermostat stops a radiator from overheating. It is not helpful today to view such procedures as ‘physical laws’ ─ though this is how Newton and possibly even Einstein viewed them. Rather, they are automatic procedures that ‘kick in’ when appropriate.
Mathematics is a tool for getting a handle on reality, no more, no less, and it is essential to distinguish between mathematical procedures which are simply aids to calculation or invention and those which correspond to actual physical mechanisms. I believe that the factor mn/(1 + mn) ─ and likewise γ = 1/√(1 – v²/c²), which we shall come to later ─ fall into the latter category. How and why such mechanisms got developed in the first place we do not know and perhaps never will, though it is quite conceivable that they developed, like so much else, by ‘trial and error’ over a vast expanse of ‘time’, in much the same way as biological mechanisms developed without the users of these mechanisms knowing what they were doing or where they were heading.

The Space/Time Capsule and the Units of c

As noted above, c is a universal constant, and any ‘ordinary’ speeds (space/time ratios) within event-chains ought, if possible, to be defined with reference to it, i.e. as c/m, c/n and so on. But what exactly are the units of c in Ultimate Event Theory?
As stated in the previous post, I take it as axiomatic that the region of the Space/Time capsule, s³t, is constant and equal to the ‘rest’ value s0³ t0. But, although the product is fixed, s and t can and do vary. When dealing with a (resolved) force or motion, which thus has a unique spatial direction, we only need bother about the rectangle of area s × t, which can be plotted as a rectangular hyperbola of the form st = constant.
         However, unlike most rectangular hyperbolae, the graph does not extend to infinity along both axes ─ nothing in UET extends to infinity. So s and t must have minimal and maximal values. I have assumed so far that s0 is a maximum and this is in accord with Special Relativity. So this makes t0 a minimum since st = s0 t0  = Ω . Actually, while writing this post, it has occurred to me that one could do things the other way round and have s0 as a minimum and t0 a maximum since there does not seem any reason a priori why this should not be possible. But I shall not pursue this line of thought at the moment.
So, if we wish to convert to ‘ultimate’ units of distance and time, we can use t0, the minimal length of a ksana, which it attains in a ‘rest chain’, as the appropriate temporal unit. But what about spatial distance? Since s0 is the maximum value for the spatial dimension of the mixed ‘Space-Time capsule’ of fixed volume s³t = s0³ t0, we must ask whether s has a minimum. The answer is yes. In UET there is no infinity and everything has a minimum and a maximum, with the single exception of the Event Locality itself, K0, which has neither because it is intrinsically ‘non-metrical’. s0³ t0 represents the volume of the ‘Space/Time capsule’ enclosing an ultimate event, which in UET is the ultimate basic building-block of physical reality. But an ultimate event itself occupies a non-zero, albeit minuscule, region, and since there is nothing smaller than an ultimate event, we can take the dimensions of the region it occupies as the ultimate volume and each of its spatial lengths as the ultimate unit of distance. So su will serve as the ‘ultimate’ unit of distance, where the subscript u means ‘ultimate’. And, since st = s0 t0 = Ω for all permissible values of s and t, we have su tu = s0 t0 and thus su/s0 = t0/tu. So the ratios of the extreme values of spatial and temporal units are inversely related. Now, s0 = M su where M is an integer, since every permissible length must be a whole-number multiple of the base unit. The ratio s0/t0 is thus, in ultimate units, (M su)/t0, so M = c.
This was, to me, an unexpected and rather satisfying result. Instead of c appearing, as it were, from nowhere, the UET treatment gives a clear physical meaning to the idea that light travels at (or near) the limiting value for all causal processes. We can argue the matter in this way.
I view an ultimate event as something akin to a hard seed in a pulpy fruit, or the nucleus in an atom, where the fruit as a whole, or the atom, is the Space/Time capsule. Suppose a causal influence emanates from the kernel of a Space/Time capsule where there is an ultimate event. Then, if it is going to have any effect on the outside world, it must at least travel a distance of ½ s0 to get outside its own capsule and another ½ s0 to get to the centre of the next capsule, where it repeats (or produces a different ultimate event). And in the case of a regular event-chain with a 1/1 reappearance ratio (i.e. one ultimate event at each and every ksana), the causal influence must traverse this distance within the space of a single ksana. If the chain is considered in isolation, and thus at rest, the length of every ksana will be the minimal temporal distance t0. The causal influence must thus have a space/time ratio of (½ s0 + ½ s0)/t0 = s0/t0 = c.
Thus c is not only the limiting speed for causal processes: it turns out to be the only possible speed in a rest chain, since a causal influence must get outside a Space/Time capsule if it is going to have any effects. And, since every event-chain is itself held together by causal forces, it makes sense that the electro-magnetic event-chain commonly known as ‘light’ cannot exceed this limit ─ an event-chain which exceeded the limit, supposing this to be conceivable, would immediately terminate, since any subsequent events would be completely dissociated from prior events. What this means is that if, say, two light rays were sent out at right angles to each other, each event in the ‘moving’ chain would be displaced a distance of s0 at each ksana relative to the ‘stationary’ chain, while the causal influence in the stationary chain would have traversed exactly the same distance in absolute units. In Relativity theory, the constant c is often replaced by 1 to make calculations easier: this interpretation justifies the practice, for in ‘capsule’ units su and t0 the ratio is c su/t0 = 1 s0/t0.
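The arithmetic of the last two paragraphs can be put in code; a Python sketch in which the integer M (= c in ultimate units) is given an arbitrary illustrative value:

# The causal influence must cover half its own capsule plus half the next one
# within a single ksana: (s0/2 + s0/2)/t0 = s0/t0 = c.
# M (= c) is given an arbitrary illustrative value; UET does not fix it.
M = 100                          # number of ultimate units su making up s0
s_u, t0 = 1, 1                   # ultimate units of distance and time
s0 = M * s_u                     # s0 = M su
traversal = (s0 / 2 + s0 / 2) / t0
print(traversal == s0 / t0 == M) # True: the only possible 'rest' speed is c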

Asymmetry of Space/Time

It would seem that there is a serious asymmetry between the ‘spatial’ dimension(s) and the temporal. Since s/s0 = t0/t , spatial  and temporal distances ─ ‘space’ and ‘time’ ─ are inversely proportional (Note 3). We are a species with a very acute sense of spatial distance and a very crude sense of time ─ the film industry is based on the fact that the eye, or rather the eye + brain, ‘runs together’ images that are flashed across the screen at a rate of more than a few frames per second (8, I believe). And we do not have too much difficulty imagining huge spatial distances such as the diameter of the galaxy, while we find it difficult to conceive of anything at all of any interest happening within a hundredth, let alone a hundred billionth, of a second. Yet cosmologists happily talk about all sorts of complicated processes going on within minute fractions of a second after the Big Bang, so we know that there can be quite a lot of activity in what is, to us, a very small temporal interval.
For whatever reason, one feels that the smallest temporal interval (supposing there is one), which in UET is t0, must be extremely small compared to the maximum unitary ‘spatial’ distance s0. This may be an illusion based on our physiology, but I suspect that there is something in it: ‘time’ would seem to be much more finely grained than space. This goes some way to explaining why we are unaware of there being any ‘gaps’ between ksanas, as I believe there must be, while there are perhaps no gaps between contemporaneous spatial capsules. I believe there must be gaps between ksanas because the whole of the previous spatial set-up has to vanish to give way to the succeeding set-up, whereas ‘length’ does not have to vanish to pass on to width or depth. These gaps, if they exist, are probably extremely small and so do not show up even with our contemporary precision instruments. However, at extremely high (relative) speeds, gaps between ksanas should be observable and one day, perhaps quite soon, will be observed and their minimal and maximal extent calculated, at least approximately.

 Strange Consequence of the Addition Rule

Curiously, the expression

        (m + n)/mn × mn/(1 + mn) = (m + n)/(1 + mn)

is < 1 not only when m, n > 1 but also when m and n are both less than 1 (when m = 1 or n = 1, or both, it equals 1 exactly). I have already shown that

        (m + n)/(1 + mn) < 1   when m, n > 1

But the inequality holds even when we are dealing with (possibly imaginary) speeds greater than the limit. For consider c/m + c/n where m < 1, n < 1, i.e. where c/m and c/n are each > c.

Let m = (1 – d), n = (1 – e) with 0 < d, e < 1. Then

        (m + n) = 2 – (d + e)

        1 + mn = 1 + (1 – d)(1 – e)
                    = 1 + (1 – (d + e) + de)
                    = 2 – (d + e) + de
                    = (m + n) + de  >  (m + n)   since d, e > 0

This means that even if both velocities exceed c, their combination (according to the new addition rule) is less than c!

For take c/(1/2) + c/(1/5) = 2c + 5c = 7c by the ‘normal’ addition rule. But, according to the new rule, we have

        V = c((1/2) + (1/5))/(1 + (1/2)(1/5)) = c(0.5 + 0.2)/(1.1) = (0.7/1.1)c = (7/11)c < c
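Checking this arithmetic with a short Python sketch (c normalised to 1):

# Combining two super-causal rates (m, n < 1, so c/m, c/n > c) with the new rule.
c = 1.0
m, n = 1/2, 1/5                 # v = c/m = 2c, w = c/n = 5c
V_naive = c/m + c/n             # 7c by ordinary addition
V_new = c * (m + n) / (1 + m * n)
print(V_naive)                  # 7.0
print(V_new)                    # 0.6363... = (7/11)c < c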

I am not sure how to interpret this in the context of Ultimate Event Theory and causality. It would seem to imply that ‘event sequences’ ─ one cannot call them ‘chains’ because there is no bonding between the constituent events ─ which have displacement rates above the causal limit are, when combined, somehow dragged back below the upper limit and become a bona fide event-chain. So, independently, such loose sequences can exist by themselves ‘above the limit’, but if they get entangled with each other, they get pulled back into line. In a way, this makes sense: either causality is not operative at all or, if it is operative, it functions at or below the limit.
This curious result has, of course, been noted many times in discussions of Special Relativity and has given rise to all sorts of fantasies such as particles propagating backwards in time, effects preceding causes and so on. Although Ultimate Event Theory may itself appear far-fetched to a lot of people, it does not accommodate such notions: sequence and causality, though little else, are normally conserved and the ‘arrow of time’ only points one way. There must, however, be some good physical reason for this ‘over the speed limit’ anomaly, and it may one day be of use technologically.

Random Events   

A random event by definition does not have a causal predecessor, although it can have a causal successor. Random events are thus causal anomalies, spanners in the works: they should not exist but apparently they do ─ indeed I suspect that they heavily outnumber the well-behaved events which belong to recognizable event-chains (though individual random ultimate events are so short-lived that they are, practically speaking, unobservable at present).
One explanation for the occurrence of random events ─ and they certainly seem to exist ─ is that they are events that have got dissociated from the event-chains to which they ‘by right’ belonged because “they arrived too fast”. If this is so, these stray events could pop up more or less anywhere on the Locality where there was an unoccupied space, and they would appear completely out of place (i.e. ‘random’) because they would have no connection at all with neighbouring events. This is indeed how many so-called ‘random’ events do appear to us: they look for all the world as if they had been wrongly assembled by an absent-minded worker. One might draw a parallel with ‘jumping genes’, where sections of DNA get fitted into places where they have no business to be (as far as we can tell).

S. H.     7/8/13

Note 1  Whether holding that a chaotic system is both unpredictable and yet deterministic amounts to ‘having your cake and eating it’ I leave to others to decide. There is a generic difference between “being unable to make exact predictions because we can never know the original situation exactly” and “being unable to make predictions because the situation evolves in a radically unpredictable, i.e. random, manner”. No one disputes that in cases of non-linear dynamics the situation is inclined to ‘blow up’, because small variations in the original conditions can have vast consequences. Nor does anyone dispute that we will most likely never be able to know the initial conditions with the degree of certainty we would like to have. Therefore ‘chaotic systems’ are unpredictable in practice ─ though they follow certain contours because of the existence of ‘strange attractors’.
But are the events that make up chaotic systems unpredictable in principle? Positivists sweep the whole discussion under the carpet with the retort, “If we’ll never be able to establish complete predictability, there’s no point in discussing the issue”. But for people of a ‘realistic’ bent, amongst whom I include myself, there is all the reason in the world to discuss the issue and come to the most ‘reasonable’ conclusion. I believe there is a certain degree of randomness ‘hard-wired’ into the workings of the universe anyway, even in ‘well-behaved’ linear systems. Nassim Taleb is, in my view, completely right to insist that there is a very real and important difference between the two cases. He believes there really is an inherent randomness in the workings of the universe and so nothing will ever be absolutely predictable. In consequence, he argues that, instead of bothering about how close we can get to complete predictability, it makes more sense to ‘prepare for the worst’ and allow in advance for the unforeseen.

Note 2.  If you don’t identify the two exactly, this allows you to attribute a small mass to the ‘object’ under consideration. As a matter of fact, at the time it was thought that the neutrino, which travels at around the same speed as light, was massless, whereas we now have good reason to believe that the neutrino does have a small mass. But this issue is not germane to the present discussion and, for the purposes of this article, it is not necessary to make too much of the distinction between the two. When there is possible confusion, I shall use c* to signify the strictly unattainable limit and c to signify the upper limit of what can be attained. Thus v ≤ c but v < c*.

Note 3  Although hardly anyone seems to have been bothered by the issue, it is questionable whether it is legitimate to form mixed space/time ratios at all, since this presupposes that there is a shared basic unit.

Note 4   If ‘random’ events greatly outnumber well-behaved causal events, why do we not record the fact with our senses and conclude that ‘reality’ is completely unpredictable? The ancients, of course, did believe this to a large extent and, seemingly, this was one reason why the Chinese did not forge ahead further than they actually did : they lacked the Western notion of ‘physical law’ (according to Needham). There may have been some subliminal perception of underlying disorder that surfaced in such ancient beliefs. But the main reason why the horde of ‘random’ ultimate events passes unnoticed is that these events flash in and out of existence without leaving much of a trace : only very few form recognizable event-chains and our senses are only responsive to relatively large conglomerates of events.

 
