
Almost every schoolboy these days has heard of the Lorentz transformations which replace the Galileian transformations in Special Relativity. They are basically a means of dealing with the relative motion of two bodies with respect to two orthogonal co-ordinate systems. Lorentz first developed them in an ad hoc manner, somewhat out of desperation, in order to ‘explain’ the null result of the Michelson-Morley experiment and other puzzling experimental results. Einstein, in his 1905 paper, developed them from first principles and always maintained that he did not at the time even know of Lorentz’s work. What were Einstein’s assumptions?

  1.  The laws of physics take the same form in all inertial frames.
  2.  The speed of light in free space has the same value for all observers in inertial frames irrespective of the relative motion of the source and the observer.

As has since been pointed out, Einstein did, in fact, assume rather more than this. For one thing, he assumed that ‘free space’ is homogeneous and isotropic (the same in all directions) (Note 1). A further assumption that Einstein seems to have made is that ‘space’ and ‘time’ are continuous ─ certainly all physicists at the time assumed this without question, and the wave theory of electro-magnetism required it, as Maxwell was aware. However, the continuity postulate does not seem to have played much of a part in the derivation of the equations of Special Relativity, though it did stop Einstein’s successors from thinking in rather different ways about ‘Space/Time’. Despite everything that has happened and the success of Quantum Mechanics and the photo-electric effect and all the rest of it, practically all students of physics think of ‘space’, ‘time’ and electro-magnetism as being ‘continuous’, rather than made up of discrete bits, especially since Calculus is primarily concerned with ‘continuous functions’. Since nothing in the physical world is continuous, Calculus is in the main a false model of reality.

Inertial frames, which play such a big role in Special Relativity, as it is currently taught, do not exist in Nature : they are entirely man-made. It was essentially this realisation that motivated Einstein’s decision to try to formulate physics in a way that did not depend on any particular co-ordinate system whatsoever. Einstein assumed relativity and the constancy of the speed of light and independently deduced the Lorentz  transformations. This post would be far too long if I went into the details of Special Relativity (I have done this elsewhere) but, for the sake of the general reader, a brief summary can and should be given. Those who are familiar with Special Relativity can skip this section.

The Lorentz/Einstein Transformations

Ordinary people sometimes find it useful, and physicists find it indispensable, to locate an object inside a real or imaginary three dimensional box. Then, if one corner of the imaginary box (e.g. room of house, railway carriage &c.) is taken as the Origin, the spot to which everything else is related, we can pinpoint an object by giving its distance from the corner/Origin, either directly or by giving the distance in terms of three directions. That is, we say the object is so many spaces to the right on the ground, so many spaces on the ground at right angles to this, and so many spaces upwards. These are the three co-ordinate axes x, y and z. (They do not actually need to be at right angles but usually they are and we will assume this.)

Also, if we are locating an event rather than an object, we will need a fourth specification, a ‘time’ co-ordinate telling us when such and such an event happened. For example, to pinpoint the event of a balloon floating around the room at a particular time, it would not be sufficient to give its three spatial co-ordinates, we would need to give the precise time as well. Despite all the hoo-ha, there is nothing in the least strange or paradoxical about us living in a ‘four-dimensional universe’. Of course we do : the only slight problem is that the so-called fourth dimension, time, is rather different from the other three. For one thing, it seems to have only one possible direction instead of two; also the three ‘spatial’ directions are much more intimately connected to each other than they are to the ‘time’ dimension. A single unit serves for the first three, say the metre, but for the fourth we need a completely different unit, the second, and we cannot ‘see’ or ‘touch’ a second whereas we can see and touch a metre rod or ruler.
Now, suppose we have a second ‘box’ moving within the original box and moving in a single direction at a constant speed. We select the x axis for the direction of motion. Now, an event inside the smaller box, say a pistol shot, also takes place within the larger box : one could imagine a man firing from inside the carriage of a train while it has not yet left the station. If we take the corner of the railway carriage to be the origin, clearly the distance from where the shot was fired to the railway-carriage origin will be different from the distance from where the shot was fired to the buffers, the station origin. In other words, relative to the railway-carriage origin, the distance is less than the distance to the buffers. How much less? Well, that depends on the speed of the train as it pulls out. The difference will be the distance the train has covered since it pulled out. If the train pulls out at a constant speed of 20 metres/second and there has been a lapse of, say, 4 seconds, the distance covered will be 80 metres. More generally, the difference will be vt where t starts at 0 and is counted in seconds. So, supposing that relative to the buffers the distance is x, relative to the railway carriage the distance is x – vt, a rather lesser distance.
Everything else, however, remains the same. The time in the railway carriage is the same as that marked on the station clock. And, if there is only displacement in one dimension, the other co-ordinates don’t change : the shot is fired from a metre above ground level, for example, in both systems and so many spaces in from the near side in both systems. This all seems commonsensical and, putting it in formal mathematical language, we have the so-called Galilean Transformations

x’ = x – vt     y’ = y     z’ = z     t’ = t
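To make the arithmetic concrete, here is a minimal sketch in Python of the Galilean transformation applied to the pistol-shot example above. The train speed (20 m/s) and the 4-second lapse come from the text; the distance of 100 metres from the buffers is my own illustrative figure.

```python
# Minimal sketch: Galilean transformation from station to carriage co-ordinates.
# x is measured from the buffers (station origin), x' from the carriage corner.

def galilean(x, t, v):
    """Return (x', t') given station co-ordinates (x, t) and train speed v."""
    return x - v * t, t        # x' = x - vt,  t' = t

v = 20.0        # train speed in metres/second (figure from the text)
t = 4.0         # seconds since the train pulled out (figure from the text)
x = 100.0       # distance of the shot from the buffers, in metres (illustrative)

x_prime, t_prime = galilean(x, t, v)
print(x_prime, t_prime)   # 20.0 4.0 : the shot is 80 metres nearer the carriage origin
```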

All well and good and nobody before the dawn of the 20th century gave much more thought to the matter. Newton was somewhat puzzled as to whether there was such a thing as ‘absolute distance’ and ‘absolute time’, hence ‘absolute motion’, and though he believed in all three, he accepted that, in practice, we always had to deal with relative quantities, including speed.
If we consider sound in a fluid medium such as air or water, the ‘speed’ at which the disturbance propagates differs markedly depending on whether you are yourself stationary with respect to the medium or in motion, in a motor-boat for example. Even if you are blind, or close your eyes, you can tell whether a police car is moving towards or away from you by the pitch of the siren, the well-known Doppler effect. The speed of sound is not absolute but depends on the relative motion of the source and the observer. There is something a little unsettling in the idea that an object does not have a single ‘speed’ full stop, but rather a variety of speeds depending on where you are and how you are yourself moving. However, this is not too troublesome.
What about light? In the late 19th century it was viewed as a disturbance, rather like sound, propagated in an invisible medium, and so it too should have had a variable speed depending on one’s own state of motion with respect to this background, the ether. However, no differences could be detected. Various methods were suggested, essentially to make the figures come right, but Einstein cut the Gordian knot once and for all and introduced as an axiom (basic assumption) that the speed of light in a vacuum (‘free empty space’) was fixed and completely independent of the observer’s state of motion. In other words, c, the speed of light, was the same in all co-ordinate systems (provided they were moving at a constant speed relative to each other). This sounded crazy and brought about a completely different set of ‘transformations’, known as the Lorentz Transformations although Einstein derived them independently from his own first principles. This derivation is given by Einstein himself in the Appendix to his ‘popular’ book “Relativity : The Special and General Theory”, a book which I heartily recommend. Whereas physicists today look down on books which are intelligible to the general reader, Einstein himself, who was not a brilliant student at university (he got the lowest physics pass of his year) and was, unlike Newton, not a particularly gifted pure mathematician, took the writing of accessible ‘popular’ books extremely seriously. Einstein is the author of the staggering put-down, “If you cannot state an issue clearly and simply, you probably don’t understand it”.
If we use the Galileian Transformations and set v = c, the speed of light (or any form of electro-magnetism) in a vacuum, we have x = ct or, with x in metres and t in seconds, x = 3 × 10⁸ metres (approximately) when t = 1 second. Transferring to the other co-ordinate system, which is moving at v metres/sec relative to the first, we have x’ = x – vt and, since t’ is the same as t, when dividing we obtain for x’/t’, (x – vt)/t = ((x/t) – v) = (c – v), a somewhat smaller speed than c. This is exactly what we would expect if dealing with a phenomenon such as sound in a fluid medium. However, Einstein’s postulate is that, when dealing with light, the ratio distance/time is constant in all inertial frames, i.e. in all real or imaginary ‘boxes’ moving in a single direction with a constant difference in their speeds.

One might doubt whether it is possible to produce ‘transformations’ that do keep c the same for different frames. But it is. We need not bother about the y and z co-ordinates because they stay the same ─ we can arrange to set both at zero if we are just considering an object moving along in one direction. However, the x and t equations are radically changed. In particular, it is not possible to set t’ = t, meaning that ‘time’ itself (whatever that means) will be modified when we switch to the other frame. The equations are

         x’ = γ(x – vt)     t’ = γ(t – vx/c²)     where γ = 1/√(1 – v²/c²)

The reader unused to mathematics will find them forbidding and they are indeed rather tiresome to handle, though one gets used to them. If you take the ratio x’/t’ you will find ─ unless you make a slip ─ that, using the Lorentz Transformations, you eventually obtain c as desired.

We have x = ct  or t = x/c  and the Lorentz Transformations

                    x’ = γ(x – vt)     t’ = γ(t – vx/c²)     where γ = 1/√(1 – v²/c²)

Then   x’/t’  =  γ(x – vt) / γ(t – vx/c²)

              =  (x – vt) / (t – vx/c²)

              =  c²(x – vt) / (c²t – vx)

              =  (c²x – cv(ct)) / (c(ct) – vx)

              =  (c²x – cvx) / (cx – vx)          [using x = ct]

              =  cx(c – v) / x(c – v)

              =  c

The amazing thing is that this is true for any value of v ─ provided it is less than c ─ so it applies to any sort of system moving relative to the original ‘box’, as long as the relative motion is constant and in a straight line. It is true for v = 0, i.e. when the two boxes are not moving relative to each other : in such a case the complicated Lorentz Transformations reduce to x’ = x, t’ = t and so on.
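As a quick numerical check of the algebra above, the following Python sketch (my own, not from the text) applies the Lorentz transformation to the event x = ct, t = 1 second for several values of v and confirms that x’/t’ always comes out as c, whereas the Galilean rule would give c – v.

```python
import math

C = 3.0e8                      # approximate speed of light in metres/second

def lorentz(x, t, v):
    """Lorentz transformation: x' = gamma(x - vt), t' = gamma(t - vx/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

t = 1.0                        # one second in the original frame
x = C * t                      # position of the light signal: x = ct

for v in (0.0, 20.0, 3.0e7, 2.9e8):          # various relative speeds, all below c
    x_p, t_p = lorentz(x, t, v)
    print(f"v = {v:9.2e}   x'/t' = {x_p / t_p:.6e}   Galilean: {C - v:.6e}")
# x'/t' is 3.0e8 for every v; at v = 0 the transformation reduces to x' = x, t' = t.
```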
The Lorentz/Einstein Transformations have several interesting and revealing properties. Though complicated, they do not contain terms in x² or t² or higher powers : they are, in mathematical parlance, ‘linear’. This is what we want for systems moving at a steady pace relatively to each other : squares and higher powers rapidly produce erratic increases and a curved trajectory on a space/time graph. Secondly, if v is very small compared to c, the ratio v/c which appears throughout the formulae is negligible since c is so enormous. For normal speeds we do not need to bother about these terms and the Galileian formulae give satisfactory results.
Finally, and this is possibly the most important feature : the Lorentz/Einstein Transformations are ‘symmetric’. That is, if you work backwards, starting with the ‘primed’ frame and x’ and t’, and convert to the original frame, you end up with a mirror image of the formulae with a single difference, a change of sign (v becomes –v) denoting motion in the opposite direction (since this time it is the original frame that is moving away). Poincaré was the first to notice this and could have beaten Einstein to the finishing line by enunciating the Principle of Relativity several years earlier ─ but for some reason he didn’t, or couldn’t, make the conceptual leap that Einstein made. The point is that each way of looking at the motion is equally valid, or so Einstein believed, whether we envisage the countryside as moving towards us when we are in the train, or the train moving relative to the static countryside.
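The symmetry can also be checked numerically: a minimal sketch (the event co-ordinates and speed below are my own illustrative figures) showing that transforming into the ‘moving’ frame and then applying the same formulae with the sign of v reversed recovers the original co-ordinates.

```python
import math

C = 3.0e8

def lorentz(x, t, v):
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return g * (x - v * t), g * (t - v * x / C ** 2)

v = 2.0e8                                  # illustrative relative speed, below c
x, t = 1.0e9, 2.5                          # an arbitrary event in the original frame
x_p, t_p = lorentz(x, t, v)                # co-ordinates in the 'moving' frame
x_back, t_back = lorentz(x_p, t_p, -v)     # same formulae, sign of v reversed
print(x_back, t_back)                      # recovers (1.0e9, 2.5) up to rounding error
```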

Relativity from Ultimate Event Theory?

    Einstein assumed relativity and the constancy of the speed of light and deduced the Lorentz Transformations : I shall proceed somewhat in the opposite direction and attempt to derive certain well-known features of Special Relativity from basic assumptions of Ultimate Event Theory (UET). What assumptions?

To start with, the Event Number Postulate  which says that
  Between any two  events in an event-chain there are a fixed number of ultimate events. 
And (to recap basic definitions) an ultimate event is an event that cannot be further decomposed — this is why it is called ultimate.
Thus, if the ultimate events in a chain, or subsection of a chain, are numbered 0, 1, 2, 3…….n, there are n intervals. And if the event-chain is ‘regular’, the sort of equivalent of an inertial system, the ‘distance’ between any two successive events stays the same. By convention, we can treat the ‘time’ dimension as vertical — though, of course, this is no more than a useful convention. The ‘vertical’ distance between the first and last ultimate events of a regular event-chain thus has the value n × ‘vertical’ spacing, or n × t. Note that whereas the number indicating the quantity of ultimate events and intervals is fixed in a particular case, t turns out to be a parameter which, however, has a minimum ‘rest’ value denoted t0. This minimal ‘rest’ value is (by hypothesis) the same for all regular event-chains.

….        Likewise, between any two ‘contemporary’ i.e. non-successive, ultimate events there are a fixed number of spots where ultimate events could have (had) occurrence. If there are two or more neighbouring contemporary ultimate events bonded together we speak of an event-conglomerate and, if this conglomerate repeats or gives rise to another conglomerate of the same size, we have a ‘conglomerate event-chain’. (But normally we will just speak of an event-chain).
A conglomerate is termed ‘tight’, and the region it occupies within a single ksana (the minimal temporal interval) is ‘full’ if one could not fit in any more ultimate events (because there are no available spots). And, if all the contemporary ultimate events are aligned, i.e. have a single ‘direction’, and are labelled   0, 1, 2, 3…….n  , then, there are likewise n ‘lateral’ intervals along a single line.

♦        ♦       ♦       ♦       ♦    ………

If the event-conglomerate is ‘regular’, the distance between any two neighbouring events will be the same and, for events labelled 0 to n, the total has the value n × ‘lateral’ inter-event spacing, or n × s. Although s, the spacing between contemporary ultimate events, must obviously always be greater than the extent of the spot occupied by an ultimate event, in all normal circumstances it does not have a minimum. It has, however, a maximum value s0.

The ‘Space-Time’ Capsule

Each ultimate event is thus enclosed in a four-dimensional ‘space-time capsule’ much, much larger than itself — but not so large that it can accommodate another ultimate event. This ‘space-time capsule’ has the mixed dimension s³t.
In practice, when dealing with ‘straight-line’ motion, it is only necessary to consider a single spatial dimension which can be set as the x axis. The other two dimensions remain unaffected by the motion and retain the ‘rest’ value, s0. Thus we only need to be concerned with the ‘space-time’ rectangle st.
We now introduce the Constant Size Postulate

      The extent, or size, of the ‘space-time capsule’ within which an ultimate event can have occurrence (and within which only one can have occurrence) is absolute. This size is completely unaffected by the nature of the ultimate events and their interactions with each other.

           We are talking about the dimensions of the ‘container’ of an ultimate event. The actual region occupied by an ultimate event, while being non-zero, is extremely small compared to the dimensions of the container and may for most purposes be considered negligible, much as we generally do not count the mass of an electron when calculating an atom’s mass. Just as an atom is mainly empty space, a space time capsule is mainly empty ‘space-time’, if the expression is allowed.
Note that the postulate does not state that the ‘shape’ of the container remains constant, or that the two ‘spatial’ and ‘temporal’ dimensions should individually remain constant. It is the extent of the space-time parallelepiped s³t which remains the same or, in the case of the rectangle, it is the product st that is fixed, not s and t individually. All quantities have minimum and maximum values, so let the minimum temporal interval be named t0 and, conversely, let s0 be the maximum value of s. Thus the quantity s0 t0, the ‘area’ of the space-time rectangle, is fixed once and for all even though the temporal and spatial lengths can, and do, vary enormously. We have, in effect, a hyperbola where xy = constant, but with the difference that the hyperbola is traced out by a series of dots (is not continuous) and does not extend indefinitely in any direction (Note 3).
         This quantity s0 t0 is an extremely important constant, perhaps the most important of all. I would guess that different values of s0 t0 would lead to very different universes. The quantity is mixed, so it is tacitly assumed that there is a common unit. What this common unit is, is not clear : it can only be based on the dimensions of an ultimate event itself, or its precise emplacement (not its container capsule), since K0, the backdrop or Event Locality, does not have a metric and is elastic and indeterminate in extent.
         Although one can, in imagination, associate or combine all sorts of events with each other, only events that are bonded sequentially constitute an event-chain, and only bonded contemporary events remain contemporary in successive ksanas. This ‘bonding’ is not a mathematical fiction but a very real force, indeed the most basic and most important force in the physical universe without which the latter would collapse at any and every moment — or rather at every ksana.
         Now, within a single ksana one and only one ultimate event can have occurrence. However, the ‘length’ of a ksana varies from one event-chain to another since, although the size of the emplacements where the ultimate events occur is (by hypothesis) fixed, the spacing is not fixed, is indeterminate though the same in similar conditions (Note 5). The length of a ksana has a minimum and this minimal length is attained only when an event-chain is at rest, i.e. when it is considered without reference to any other event-chain. This is the equivalent of a ‘proper interval’ in Relativity. So t is a parameter with minimal value t0. It is not clear what the maximum value is though there must be one.
         The inter-space distance s does not have a minimum, or not one that is, in normal conditions, ever attained — this minimum would be the exact ‘width’ of the emplacement of an ultimate event, an extremely small distance. It transpires that the inter-space distance s is at a maximum in a rest-chain, taking the value s0. (I am not absolutely sure whether this needs to be stated as an assumption or whether it can be derived later from the assumptions already made.)

         Thus, the ‘space-time’ parallelepiped s³t has the value (s0)³ t0, an absolute value.

The Rest Postulate

This says that

          Every event-chain is at rest with respect to the Event Locality K0 and may be considered to be ‘stationary’.

          Why this postulate and what does it mean? We all have experience of objects immersed in a fluid medium and there can also be events, such as sounds, located in this medium. Now, from experience, it is possible to distinguish between an object ‘at rest’ in a fluid medium such as the ocean and one ‘in motion’ relative to this medium. And similarly there will be a clear difference between a series of siren calls or other sounds emitted from a ship in a calm sea, and the same sequence of sounds when the ship is in motion. Essentially, I envisage ultimate events as, in some sense, immersed in an invisible omnipresent ‘medium’, K0 — indeed I envisage ultimate events as being localized disturbances of K0. (But if you don’t like this analogy, you can simply conceive of ultimate events having occurrence on an ‘Event Locality’ whose purpose is simply to allow ultimate events to have occurrence and to keep them separate from one another.) The Rest Postulate simply means that, on the analogy with objects in a fluid medium, there is no friction or drag associated with chains of ultimate events and the medium in or on which they have occurrence. This is basically what Einstein meant when he said that “the ether does not have a physical existence but it does have a geometric existence”.

What’s the point of this constant if no one knows what it is? Firstly, it by no means follows that this constant s0 t0 is unknowable since we can work backwards from experiments using more usual units such as metres and seconds, giving at least an approximate value. I am convinced that the value of s0 t0  will be determined experimentally within the next twenty years, though probably not in my lifetime unfortunately. But even if it cannot be accurately determined, it can still function as a reference point. Galileo was not able to determine the speed of light even approximately with the apparatus at his disposal (though he tried) but this did not stop him stating that this speed was finite and using the limit in his theories without knowing what it was.

Diverging Regular Event-chains

Imagine a whole series of event-chains with the same reappearance rate which diverge from neighbouring spots — ideally which fork off from a single spot. Now, if all of them are regular with the same reappearance rate, the nth member of Event-chain E0 will be ‘contemporaneous’ with the nth members of all the other chains, i.e. they will have occurrence within the same ksana. Imagine them spaced out so that each nth ultimate event of each chain is as close as possible to the neighbouring chains. Thus, we imagine E0 as a vertical column of dots (not a continuous vertical line) and E1 a slanting line next to it, then E2 and so on. The first event of each of these chains (not counting the original event common to all) will thus be displaced by a single ‘grid-space’ and there will be no room for any events to have occurrence in between. The ‘speed’ or displacement distance of each event-chain relative to the first (vertical one) is thus lateral distance in successive ksanas/vertical distance in successive ksanas.  For a ‘regular’ event-chain the ‘slant’ or speed remains the same and is tan θ   =  1 s/t0 , 2 s/t0  and so on where, if θ is the slant angle,

tan θr  = vr  = 1, 2, 3, 4 …… (in units of s/t0)
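The quantization of possible ‘speeds’ can be spelt out in a toy sketch; s and t0 below are placeholder units of my own choosing, not measured values.

```python
# Toy sketch: in UET a regular event-chain can shift sideways by 0, 1, 2, ...
# whole grid-spaces per ksana, never by a fraction of a space, so its 'speed'
# (the slant tan(theta_r)) takes only the discrete values r * s / t0.

s, t0 = 1.0, 1.0                                    # placeholder units
allowed_speeds = [r * s / t0 for r in range(11)]    # r = 0, 1, 2, ... 10
print(allowed_speeds)                               # 0.0, 1.0, 2.0, ... 10.0
```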

“What,” asked Zeno of Elea, “is the speed of a particular chariot in a chariot race?” Clearly, this depends on what your reference body is. We usually take the stadium as the reference body, but the charioteer himself perceives the spectators as moving towards or away from him, and he is much more concerned about his speed relative to that of his nearest competitor than about his speed relative to the arena. We have grown used to the idea that ‘speed’ is relative, counter-intuitive though it appears at first.
But ‘distance’ is a man-made convenience as well : it is not an ‘absolute’ feature of reality. People were extremely put out by the idea that lengths and time intervals could be ‘relative’ when the concept was first proposed but scientists have ‘relatively’ got used to the idea. But everything seems to be slipping away — is there anything at all that is absolute, anything at all that is real? Ultimate Event Theory evolved from my attempts to ponder this question.
The answer is, as far as I am concerned, yes. To start with, there are such things as events and there is a Locality where events occur. Most people would go along with that. But it is also one of the postulates of UET that every macroscopic ‘event’ is composed of a specific number of ultimate events which cannot be further decomposed. Also, it is postulated that certain ultimate events are strongly bonded together into event-chains temporally and event-conglomerates laterally. There is a bonding force, causality.
Also, associated with every event chain is its Event Number, the number of ultimate events between the first event A and the last Z. This number is not relative but absolute. Unlike speed, it does not change as the event-chain is perceived in relation to different bodies or frames of reference. Every ultimate event is precisely localised and there are only a certain number of ultimate events that can be interposed between two events both ‘laterally’ (spatially) and ‘vertically’ (temporally). Finally, the size of the ‘space-time capsule’ is fixed once and for all. And there is also a maximum ‘space/time displacement ratio’ for all event-chains.
This is quite a lot of absolutes. But the distance between ultimate events is a variable since, although the dimensions of each ultimate event are fixed, the spacing is not fixed though it will remain the same within a so-called ‘regular’ event-chain.
It is important to realize that the ‘time’ dimension, the temporal interval measured in ksanas, is not connected up to any of the three spatial dimensions whereas each of the three spatial dimensions is connected directly to the other two. It is customary to take the time dimension as vertical and there is a temptation to think of t, time, being ‘in the same direction’ as the z axis in a normal co-ordinate system. But this is not so : the time dimension is not in any spatial direction but is conceived as being orthogonal (at right angles) to the whole lot. To be closer to reality, instead of having a printed diagram on the page, i.e. in two dimensions, we should have a three dimensional optical set-up which flashes on and off at rhythmic intervals and the trajectory of a ‘particle’ (repeating event-chain) would be presented as a repeating pinpoint of light in a different colour.
Suppose we have a repeating regular event-chain consisting, for simplicity, of just one ultimate event per ksana. We present it as a column of dots, i.e. conceive of it as vertical though it is not. The dots are numbered 0, 1, 2…. and the vertical spacing does not change (since this is a regular event-chain) and is set at t0 since this is a ‘rest chain’. Similar regular event-chains can then be presented as slanting lines to the right (or left) regularly spaced along the x axis. The slant of the line represents the ‘speed’. Unlike the treatment in Calculus and conventional physics, increasing v does not ‘pass through a continuous set of values’ : it can only move up by a single ‘lateral’ space each time. The speeds of the different event-chains are thus 0 s/t0 (= 0); 1 s/t0; 2 s/t0; 3 s/t0; 4 s/t0; …… n s/t0 and so on right up to c s/t0. But to what do we relate the spacing s? To the ‘vertical’ event-chain or to the slanting one? We must relate s to the event-chain under consideration, so that its value depends on v : call it sv. The ratio s/t0 is thus a mixed ratio sv/t0. tv gives the interval between successive events in the ‘moving’ event-chains, and the number of these intervals does not increase because there are only a fixed number of events in any event-chain however it is evaluated. These temporal intervals undoubtedly increase because the hypotenuse gets larger. What about the spacing along the horizontal? Does it also increase? Stay the same? If we now introduce the Constant Size Postulate, which says that the product sv tv = s0 t0, we find that sv decreases with increasing v since tv certainly increases. There is thus an inverse ratio, and one consequence of this is that the mixed ratio sv/t0 = s0/tv and we get symmetry. This leads to relativity, whereas any other relation does not : we would have to specify which regular event-chain ‘really’ is the vertical one. One can legitimately ask which is the ‘real’ spatial distance between neighbouring events. The answer is that every distance is real, and not just real for a particular observer. Most phenomena are not observed at all but they still occur, and the distances between these events are real : we as it were take our pick, or, more usually, one distance is imposed on us.
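Since the symmetry claimed above follows purely from the constancy of the product sv tv, a short sketch can make it visible; the particular tv values below are arbitrary illustrations, not derived quantities.

```python
# Sketch of the Constant Size Postulate: s_v * t_v = s0 * t0 for every regular
# event-chain, so as the temporal spacing t_v grows the lateral spacing s_v shrinks
# in inverse proportion, and the 'mixed ratios' s_v/t0 and s0/t_v coincide.

s0, t0 = 1.0, 1.0                  # rest values, placeholder units
capsule_area = s0 * t0             # the fixed 'space-time' rectangle

for t_v in (1.0, 1.25, 2.0, 8.0):  # temporal spacings of progressively 'faster' chains
    s_v = capsule_area / t_v       # forced by the postulate
    print(f"t_v = {t_v:5.2f}   s_v = {s_v:5.3f}   s_v/t0 = {s_v / t0:5.3f}   s0/t_v = {s0 / t_v:5.3f}")
```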

Now the real pay-off is that each of these regular event-chains with different speeds v is an equally valid description of the same event-chain. Each of these varying descriptions is true even though the time intervals and distances vary. This is possible because the important thing, what really matters, does not change : in any event-chain the number and order of the individual events is fixed once and for all although the distances and times are not. Rosser, in his excellent book Introductory Relativity, when discussing such issues gives the useful analogy of a game of tennis being played on a cruise liner in calm weather. The game would proceed much as on land, and if in a covered court, exactly as on land. And yet the ‘speed’ of the ball varies depending on whether you are a traveller on the boat or someone watching with a telescope from another boat or from land. The ‘real’ speed doesn’t actually matter, or, as I prefer to put it, is indeterminate though fixed within a particular inertial frame (event system). Taking this one step further, not just the relative speed but the spacing between the events of a regular event-chain ‘doesn’t matter’ because the constituent events are the same and appear in the same order. It is interesting that, on this interpretation, a certain indeterminacy with regard to distance is already making its appearance before Quantum Theory has even been mentioned.

Which distance or time interval to choose?

Since, apparently, the situation between regular event-chains is symmetric (or between inertial systems if you like) one might legitimately wonder how there ever could be any observed discrepancy since any set of measurements a hypothetical observer can make within his own frame (repeating event system) will be entirely consistent and unambiguous. In Ultimate Event Theory, the situation is, in a sense, worse since I am saying that, whether or not there is or can be an observer present, the time-distance set-up is ‘indeterminate’ — though the number and order of events in the chain is not. Any old ‘speed’ will do provided it is less than the limiting value c. So this would seem to make the issue quite academic and there would be no need to learn about Relativity. The answer is that this would indeed be the case if we as observers and measurers or simply inhabitants of an event-environment could move from one ‘frame’ to another effortlessly and make our observations how and where we choose. But we can’t : we are stuck in our repeating event-environment constituted by the Earth and are at rest within it, at least when making our observations. We are stuck with the distance and time units of the laboratory/Earth event-chain and cannot make observations using the units of the electron event-chain (except in imagination). Our set of observations is fully a part of our system and the units are imposed on us. And this does make a difference, a discernible, observable difference when dealing with certain fast-moving objects.
Take the µ-meson. µ-mesons are produced by cosmic rays in the upper reaches of the atmosphere and are normally extremely short-lived, lasting about 2.2 × 10⁻⁶ sec. This is the (average) ‘proper’ time, i.e. when the µ-meson is at rest — in my terms it would be N ksanas of rest value t0. Now, these mesons would, using this t value, hardly go more than 660 metres even if they were falling with the speed of light (Note 4). But a substantial portion actually reach sea level, which seems impossible. Now, we have two systems: the meson event-chain flashes on and off N times, whatever N is, before terminating, i.e. not reappearing. Its own ‘units’ are t0 and s0 since it is certainly at rest with itself. For the meson, the Earth and the lower atmosphere are rushing up towards it with something approaching the limiting speed. We are inside the Earth system and use Earth units : we cannot make observations from within the meson. The time intervals of the meson’s existence are, from our rest perspective, distended : there are exactly the same number of ksanas for us as for the meson but, from our point of view, the meson is in motion and each ‘motion’ ksana is longer, in this case much, much longer. It thus ‘lives’ longer, if by living longer we mean having a longer time span in a vague absolute way, rather than having more ‘moments of existence’. The meson’s ksana is worth, say, eight of our ksanas. But the first and last ultimate events of the meson’s existence are events in both ‘frames’, in ours as well as its. And if we suppose that each time it flashed into existence there was a (slightly delayed) flash in our event-chain, the flashes would be much more spaced out and so would be the effects. So we would ‘observe’, say, a duration of eight of ‘our’ ksanas between consecutive flashings instead of one. And the spatial distance between flashes would also be evaluated in our system of metres and kilometres : this is imposed on us since we cannot measure what is going on within the meson event-chain. The meson actually would travel a good deal further in our system — not ‘would appear to travel farther’. Calculations show that it is well within the meson’s capacity to reach sea level (see full discussion in Rosser, Introductory Relativity pp. 71-3).
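A rough numerical version of the µ-meson figures discussed above, using the standard relativistic dilation factor; the speed of 0.999 c is my own illustrative choice, not a value given in the text.

```python
import math

C = 3.0e8                     # approximate speed of light, metres/second
proper_lifetime = 2.2e-6      # seconds: the 'rest' lifetime quoted in the text

# Without any dilation the meson could cover at most about:
print(C * proper_lifetime)    # ~660 metres, as stated above

# At, say, 0.999 c (illustrative), each 'rest' interval is stretched by gamma
# when evaluated in the Earth frame:
v = 0.999 * C
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
print(gamma)                              # ~22.4
print(gamma * proper_lifetime * v)        # roughly 14,700 metres: ample to reach sea level
```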
What about if we envisaged things from the perspective of the meson? Supposing, just supposing, we could transfer to the meson event-chain or its immediate environment and could remember what things were like in the world outside, the familiar Earth event-frame. We would notice nothing strange about ‘time’, the intervals between ultimate events, or the brain’s merging of them, would not surprise us at all. We would consider ourselves to be at rest. What about if we looked out of the window at the Earth’s atmosphere speeding by? Not only would we recognize that there was relative motion but, supposing there were clear enough landmarks (skymarks rather), the distances between these marks would appear to be far closer than expected — in effect there would be a double or triple sense of motion since our perception of motion is itself based on estimates of distance. As the books say, the Earth and its atmosphere would be ‘Lorentz contracted’. There would be exactly the same number of ultimate events in the meson’s trajectory, temporarily our trajectory also. The first and last event of the meson’s lifetime would be separated by the same number of temporal intervals and if these first and last events left marks on the outside system, these marks would also be separated by exactly the same number of spatial intervals. Only these spatial intervals — distances — would be smaller. This would very definitely be observed : it is as if we were looking out at the countryside on a familiar journey in a train fantastically speeded up. We would still consider ourselves at rest but what we saw out of the window would be ludicrously foreshortened and for this reason we would conclude that we were travelling a good deal faster than on our habitual journey. I do not think there would be any obvious way to recognize the time dilation of the outside system.

One is often tempted to think that the time dilation and the spatial contraction cancel each other out so all this talk of relativity is purely academic since any discrepancies should cancel out. This would indeed be the case if we were able to make our observations inside the event-chain we are observing, but we make the measurements (or perceptions) in a single frame. Although it is the meson event-chain that is dictating what is happening, both the time and spatial distance observations are made in our system. It is indeed only because of this that there is so much talk about ‘observers’ in Special Relativity. The point is not that some intelligent being observes something because usually he or she doesn’t : the point is that the fact of observation, i.e. the interaction with another system seriously confuses the issue. The ‘rest-motion’ situation is symmetrical but the ‘observing’ situation is not symmetrical, nor can it be in such circumstances.

This raises an important point.  In Ultimate Event Theory, as in Relativity, the situation is ‘kinematically’ symmetrical. But is it causally symmetrical? Although Einstein stressed that c was a limit to the “transfer of causality”  he was more concerned with light and electro-magnetism than causality. UET is concerned above all with causality — I have not mentioned the speed of light yet and don’t need to. In situations of this type, it is essential to clearly identify the primary causal chain. This is obviously the meson : we observe it, or rather we pick up indications of its flashings into and out of existence. The observations we make, or simply perceptions,  are dependent on the meson, they do not by themselves constitute a causal chain. So it looks at first sight as if we have a fundamental asymmetry : the meson event-chain is the controlling one and the Earth/observer event chain  is essentially passive. This is how things first appeared to me. But on reflection I am not so sure. In many (all?) cases of ‘observation’ there is interaction with the system being observed and it is inevitably going to be affected by this even if it has no senses or observing apparatus of its own. One could thus argue that there is causal symmetry after all, at least in some cases. There is thus a kind of ‘uncertainty principle’ due to the  interaction of two systems latent in Relativity before even Quantum Mechanics had been formulated. This issue and the related one of the limiting speed of transmission of causality will be dealt with in the subsequent post.

Sebastian Hayes  26 July
Note 1. And in point of fact, if General Relativity is to be believed, ‘free space’ is not strictly homogeneous even when empty of matter and neither is the speed of light strictly constant since light rays deviate from a straight path in the neighbourhood of massive bodies.

Note 2  For those people like me who wish to believe in the reality of K0 — rather than seeing it as a mere mathematical convenience like a co-ordinate system — the lack of any ‘friction’ between the medium or backdrop and the events or foreground would, I think, be quite unobjectionable, even ‘obvious’, likewise the entire lack of any ‘normal’ metrical properties such as distance. The ‘backdrop’, that which lies ‘behind’ material reality though in some sense permeating it, is not physical and hence is not obliged to possess familiar properties such as a shape, a metric, a fixed distance between two points and so on. Nevertheless, this backdrop is not completely devoid of properties : it does have the capacity to receive (or produce) ultimate events and to keep them separate, which involves a rudimentary type of ‘geometry’ (or topology). Later, as we shall see, it would seem that it is affected by the material points on it, so that this ‘geometry’, or ‘topology’, is changed, and so, in turn, affects the subsequent patterning of events. And so it goes on in a vicious or creative circle, or rather spiral.
            The relation between K0, the underlying substratum or omnipresent medium, and the network of ultimate events we call the physical universe, K1  is somewhat analogous to the distinction between nirvana and samsara in Hinayana Buddhism. Nirvana  is completely still and is totally non-metrical, indeed non-everything (except non-real), whereas samsara is turbulence and is the world of measure and distancing. It is alma de-peruda, the ‘domain of separation’, as the Zohar puts it.  The physical world is ruled by causality, or karma, whereas nirvana is precisely the extinction of karma, the end of causality and the end of measurement.

Note 3   The ‘Space-time hyperbola’, as stated, does not extend indefinitely either along the ‘space’ axis s (equivalent of x) or indefinitely upwards along the ‘time’ axis (equivalent of y) — at any rate for the purposes of the present discussion. The variable t has a minimum t0 and the variable s a maximum s0, which one suspects is very much greater than tc. Since there is an upper limit to the speed of propagation of a causal influence, c, there will in practice be no values of t greater than tc and no s values smaller than sc. It thus seems appropriate to start marking off the s axis at the smallest value sc = s0/c, which can function as the basic unit of distance. Then s0 is equal to c of these ‘units’. We thus have a hyperbola something like this — except that the curve should consist of a string of separate dots which, for convenience, I have run together.

Note 4  See Rosser, Introductory Relativity pp. 70-73. Incidentally, I cannot recommend too highly this book.

Note 5   I have not completely decided whether it is the ‘containers’ of ultimate events that are elastic, indeterminate, or the ‘space’ between the containers (which have the ultimate events inside them)’. I am inclined to think that there really are temporal gaps not just between ultimate events themselves but even between their containers, whereas this is probably not so in the case of spatial proximity. This may be one of the reasons, perhaps even the principal reason, why ‘time’ is felt to be a very different ‘dimension’. Intuitively, or rather experientially, we ‘feel’ time to be different from space and all the talk about the ‘Space/Time continuum’ — a very misleading phrase — is not enough to dispel this feeling.

To be continued  SH  18 July 2013

 


A ksana is the minimal temporal interval : within the space of a ksana one and only one ultimate event can have occurrence. There can thus be no change whatsoever within the space of a ksana — everything is at rest.
In Ultimate Event Theory every ultimate event is conceived to fill a single spot on the Locality (K0) and every such spot has the same extent, a ‘spatial’ extent which includes (at least) three dimensions and a single temporal dimension. A ksana is  the temporal interval between the ‘end’ of one ultimate event and the ‘end’ of the next one. Since there can be nothing smaller than an ultimate event, it does not make too much sense to speak of ‘ends’, or ‘beginnings’ or ‘middles’ of ultimate events, or their emplacements, but, practically speaking, it is impossible to avoid using such words. Certainly the extent of the spot occupied by an ultimate event is not zero.
The ksana is, however, considerably more extensive than the ‘vertical’ dimension of the spot occupied by an ultimate event. Physical reality is, in Ultimate Event Theory, a ‘gapped’ reality and, just as an atom is apparently mainly empty space, a ksana is mainly empty time (if the term is allowed). Thus, when evaluating temporal intervals, the ‘temporal extent’ of the ultimate events that have occurrence within this interval can, to a first approximation, be neglected. As to the actual value of a ksana in terms of seconds or nanoseconds, this remains to be determined by experiment, but certainly the extent of a ksana must be at least as small as the Planck time, roughly 5.4 × 10⁻⁴⁴ seconds.
A ‘full’ event-chain is a succession of bonded ultimate events within which it would not be possible to fit in any more ultimate events. So if we label the successive ultimate events of a ‘full’ event-chain 0, 1, 2, 3……N, there will be as many ksanas in this temporal interval as there are ultimate events.
Suppose we have a full event-chain which, in its simplest form, may be just a single ultimate event repeated identically at or during each successive ksana. Such an event-chain can be imaged as a column of dots where each dot represents an ultimate event and the space in between the dots represents the gap between successive ultimate events of the chain. Thus, using the standard spacing of 2.5 on this computer, we have

Now, although the ‘space’ occupied by all ultimate events is fixed and an absolute quantity (true for ‘all inertial and non-inertial frames’ if you like), the spacing between the spots where ultimate events can occur, both ‘laterally’ — laterally is to be understood as including all three normal spatial dimensions — and vertically, i.e. in the temporal direction, is not constant but variable. So, although the spots where ultimate events can occur have fixed (minuscule) dimensions, the ‘grid-distance’, the distance between the closest spots which have occurrence within the same ksana, varies, and so does the temporal distance between successive ultimate events of a full event-chain. So the ksana varies in extent. However, there is, by hypothesis, a minimum value for both the grid-distance and the ksana. The minimal value of both is attained whenever we have a completely isolated event-chain. In practice, there is no such event-chain, any more than, in traditional physics, there is a body that is completely isolated from all other bodies in the universe. However, these minimal values can be considered to be attained for event-chains that are sufficiently ‘far away’ from all other chains. And, more significantly, these minimal values apply whenever we have a full regular event-chain considered in isolation from its event environment.
The most important point, that cannot be too strongly emphasized, is that although the number of ultimate events in an event-chain, or any continuous section of an event-chain, is absolute, the interval between successive events varies from one chain to another, though remaining constant within a single event-chain (providing it is regular). Unless stated otherwise, by ‘ksana’ I mean the interval between successive ultimate events in a ‘static’ or isolated regular event-chain. This need not cause any more trouble than the concept of intervals of time in Special Relativity where ‘time’ is understood to mean ‘proper time’, the ‘time’ of a system at rest, unless a contrary indication is given.
Thus, the ‘vertical’  spacing of events in different chains can and does differ and the minimal value will be represented by the smallest spacing available on the computer I am using. I could, for example, increase the spacing from the standard spacing to

•           or to                     •

•                                     •


S.H. 11/7/13

General Laws :  I suspect that there are no absolutely general ‘laws of Nature’, no timeless laws such as those given by a mathematical formula : such a formula at best only indicates norms or physical constraints. Of all so-called laws, however, the most general and the most solidly established are arithmetic (not physical) laws, rules based on the properties of the natural numbers. To this extent Pythagoras was in the right.

Platonic Forms  Plato was also essentially right in proclaiming the need for ‘ideal’ forms : patterns which are not themselves physical but which dictate the shape and behaviour of physical things. But he was wrong to see these patterns as geometrical, and thus both static and timeless (the two terms are equivalent). With one or two exceptions contemporary science has done away with Platonic Forms though it still puts mathematics in the supreme position.
In practice, I do not see how one can avoid bringing in a secondary ‘ideal’ domain which has a powerful effect on actual behaviour. In Ultimate Event Theory, associations of events and event-chains, once they have attained a critical point, bring into existence ‘event schemas’ which from then on dictate the behaviour of similar collections of events. From this point onwards they are ‘laws’ to all intents and purposes but there was a time when they did not exist and there will perhaps be a future time when they will cease to be operative.
Random Generation   Take the well-known example of interference patterns produced by photons or electrons on a blank screen. It is possible to fire off these ‘particles’ one at a time so that the pattern takes shape point by point, or pixel by pixel if you like. At first the dots are distributed randomly and in different experiments the pattern builds up differently. But the final pattern, i.e. distribution of dots, is identical ─ or as nearly identical as experiment allows. This makes no kind of sense in terms of traditional physics with its assumption of strict causality. The occurrence of a particular event, a dot in a particular place, has no effect whatsoever (as far as we can tell) on the position of the next dot. So the order of events is not fixed even though the final pattern is completely determinate. So what dictates which event comes next? ‘Chance’ it would seem. But nonetheless the eventual configuration is absolutely fixed. This only makes sense if the final configuration follows an ‘event schema’ which does, in some sense, ‘exist’ though it has no place in the physical universe. This is a thoroughly Platonic conception.
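The point can be mimicked with a toy simulation (entirely my own construction, not the author’s model): each dot lands at an unpredictable position, yet the accumulated histogram converges to the same fixed fringe-like distribution on every run. The cos² profile and bin count are arbitrary illustrative choices.

```python
import random, math

def sample_position():
    """Draw one dot position in [0, 1) from a cos^2, interference-like profile
    by simple rejection sampling."""
    while True:
        x = random.random()
        if random.random() < math.cos(6 * math.pi * x) ** 2:
            return x

bins = [0] * 20
for _ in range(50_000):                     # fire the 'particles' one at a time
    bins[int(sample_position() * 20)] += 1

print(bins)     # each individual dot is random; the overall fringe pattern is not
```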

 Ultimate Reality   Relatively persistent patterns on an  underlying invisible ‘substance’ ─ that is all there is in the last resort. Hinduism was quite right to see all this as an essentially purposeless, i.e. gratuitous, display ─ the dance of Shiva. Far from being disheartening, this perspective is inspiring. It is at the opposite extreme both to the goal-directed ethos of traditional Christianity ─ the goal being to ‘save’ your soul ─ and to the drearily functional universe of contemporary biology where everything is perpetually seeking a fleeting  physical advantage over competitors.
What, then, is the difference between the organic and the inorganic?  Both are persistent, the inorganic more so than the organic. Without a basic ‘something’, nothing visible or tactile could or does exist. Without persistence there would be no recognizable patterns, merely noise, random flashes of light emerging from the darkness and subsiding into darkness after existing for a moment only. ‘Matter’ is an illusion, a mental construct : patterns of light (radiation) emerging and disappearing, that is all there is.

Dominance  The ‘universe’ must be maintained by some sort of force, otherwise it would collapse into nothingness at any moment. For Descartes this force came from God, Schopenhauer views it as something inherent in Nature, as what he calls ‘Will’ and which he views as being entirely negative, indeed monstrous. This ‘force’ is what I term dominance, the constraining effect one event or event-chain has on another (including on itself), and without it everything would slow down and very soon disappear without leaving a trace. Take away Schopenhauer’s Will, the force of karma, and this is what would happen ─ and in the Buddhist world schema will eventually happen. For Buddhism, the natural state of everything is rest, inaction, and the universe came about because of some unexplained disturbance of the initial state of rest, indeed is this disturbance. Subsequently, it is as if the ‘universe’ were frantically trying to get back to its original state of complete rest but  by its ceaseless striving is precisely making this goal more and more unattainable.

Disappearance  In both traditional and contemporary physics, it is impossible for an object to simply disappear without leaving a trace. The dogma of the conservation of mass/energy says that nothing ever really disappears, merely changes its form. However, according to Ultimate Event Theory, ultimate events are appearing and disappearing all the time and they need no ‘energy’ to do this. Certain of these ultimate events produced at random eventually coalesce into repeating event-chains we perceive as solids or liquids because they acquire ‘persistence’ or self-dominance, but it is conceivable that they can, in certain exceptional circumstances, lose this property and in such a case they will simply stop reappearing.
Are there any genuine cases where objects have completely disappeared in this way? The only evidence for them would seem to be anecdotal : one hears of certain Hindu magic-men who are able to make small objects disappear and appear in a different place, but it is, of course, difficult to distinguish genuine magic from the stage variety. And any such alleged cases rarely if ever get investigated by scientists since the latter are terrified of being accused of credulity or worse. Professor Taylor, who investigated Uri Geller, was told by colleagues that no reputable scientist would do such a thing. Clearly, if one is not allowed to investigate a phenomenon it has no chance of ever being verified, which is what the rationalist/scientific lobby desires.
Contemporary science and rationalist thinking implicitly assume that ‘real’ entities, while they actually exist, exist continuously ─ in fact the previous statement would be regarded as so obvious as to be hardly worth stating. But in UET nothing exists for more than an instant (ksana) and entities that seem to exist for a ‘long time’ are in reality composed of repeating ultimate events strongly bonded together. If reality is ‘gapped’, as UET affirms, all so-called objects alternately appear and disappear (though so rapidly that we do not notice the change), so there is much less of a problem involved in making something disappear. Instead of actually destroying the object in some way (and in the destructive process transferring the object’s mass into different mass or pure energy) it would simply be sufficient to prevent an event cluster reappearing, which is not quite so hard to imagine. In UET, an apparent object reappears regularly because it possesses ‘self-dominance’; if it could be made to lose this property, it would not reappear, i.e. would disappear, and it would not necessarily leave any trace. Moreover, to make something disappear in this manner, it would not be necessary to use any kind of physical force, high temperature, pressure and so on. To say that the theoretical possibility is there is not, of course, the same thing as saying that a supposed occurrence actually takes place : that is a matter of experiment and observation. In my unfinished SF novel The Web of Aoullnnia, devotees of a mystical sect called the Yther are not only convinced that the entire universe is going to disappear into the nothingness from which it emerged, but believe that they should hasten this progressive movement, which they call Aoullnnia-yther, where yther means ‘ebbing’, ‘withdrawal’, hence the name of the sect. Although contemporary Buddhists do not usually put it quite so starkly, essentially the aim of Buddhism is to return the entire universe to an entirely quiescent state “from which it never will arise again”.

On the other hand, deliberately bringing something into existence from nothing is just as inconceivable in Ultimate Event Theory as in contemporary physics, maybe more so.                  SH  22/5/13


 

It is said that certain Gnostic sects which flourished in North Africa during the first few centuries of our era not only encouraged but actually required candidates to give a written or verbal account of how they thought the universe began (Note 1). It would be interesting to know what these people came up with and, most likely, amongst a great deal of chaff there were occasional anticipations of current scientific theories. It is a mistake to imagine that great ideas go hand in hand with experimentation and mathematical formulation : on the contrary, important ideas often predate their confirmation by centuries or even millennia. Democritus’ atomic theory (Vth century BC) could not have been ‘proved’ prior to modern times and he certainly could not have put it in quantum or even Newtonian mathematical form. Similarly, one or two brave people put forward the germ theory of disease while the ‘miasmic’ theory was still orthodoxy ─ and were usually dismissed as cranks.
As a body of beliefs, ‘science’ is currently entering a period of consolidation comparable to that experienced by the early Church after its final victory over paganism. Materialism has decisively vanquished idealism and religion is no longer a force to be reckoned with, at least in the West. Along with increasing potency and accuracy goes a certain narrowing of focus and a growing intolerance : science is now a university phenomenon with all that this implies and no longer a ‘pastime of leisured persons’. To some extent, this tendency towards orthodoxy is inevitable, even beneficial : as someone said, it doesn’t matter too much if a poet departs from the prescribed form of a sonnet, but it may matter a great deal if a bridge builder uses the wrong equations. Nonetheless, there are warning signs : ‘scientific correctness’ has replaced not only free enquiry but the very idea of scientific validity. Professional scientists worry not so much about whether their results are flawed or their theories tentative as about whether they are going to get in trouble with the establishment, and offending the latter can have grave career and financial consequences.

        It is true that free, indeed often extremely erratic, speculation is still allowed in certain areas, especially cosmology and particle physics. But it is subject to certain serious constraints. Firstly, it is only permitted to persons who already hold more than one degree and who are able to couch their theories in such abstruse mathematics that journals find it difficult to find anyone to peer review the work. Is not this how it should be? Maybe not. Certainly, you are likely to need some knowledge of a subject before cobbling together a theory, but there is such a thing as knowing too much. Once someone has been through the mill and spent years doing things in the prescribed manner, it is well nigh impossible to break out of the mental mould ─ and this is most likely the reason why really new ideas in science come from people in their twenties (Einstein, Heisenberg, Dirac, Gamow et al.), not because of any miraculous effect of youth as such.

        So. Where’s all this leading?  I didn’t do science at university or even at school, which puts me in many respects at an enormous disadvantage, but this has certain good aspects as well. I have no vested interest in orthodoxy and only accept something because I am convinced that it really is true, or is at least the best theory going for the time being. Almost all current would-be innovators in science, however maverick they may appear at first sight, take on board certain key doctrines of modern science such as the conservation of energy or the laws of thermo-dynamics. But one might as well be hanged for a sheep as a lamb, and I have finally decided to take the plunge and, instead of trying to fit my ideas into an existing official framework, to swim out into the open sea, starting as far back as possible and assuming only what seems to be essential. I originally envisaged ‘Ultimate Event Theory’ as a sort of ‘new science’ but now realize that what I have really been trying to do is give birth to a new ‘paradigm’ ─ a ‘paradigm’ being a systematic way of viewing the world or reality. Should this paradigm ever come to fruition, it will engender new sciences and new technologies, but the first step is to start thinking within a different framework and draw conclusions. In other words, one is obliged to start with theory ─ not experiment or mathematics, though certainly I hope eventually experiments will give support to the key concepts and that a new symbolic system will be forthcoming (Note 2).

       Four Paradigms

 To date there have been basically four ways of viewing the world, four all-embracing ‘paradigms’ : (1) the Animistic paradigm; (2) the Mechanistic paradigm; (3) the Information paradigm; and (4) the Event paradigm.
According to (1) the universe is full of life, replete with ‘beings’ in many respects like ourselves inasmuch as ‘they’ have emotions and wills and cause things deliberately to happen. This conception goes far beyond mere belief in a pantheon of gods and goddesses : as Thales is supposed to have said, if a lodestone draws a piece of iron it is exercising ‘will’ and “All things are full of gods”. This world-view lasted a very long time and, even though it is largely discredited today, it still has plenty of life  left in it which is why we still speak of ‘charm’, ’charisma’, ‘fate’, and so on and why, despite two centuries of rationalistic propaganda, most of the population still believes in ‘jinxes’ and in ‘spirits’ (as I myself do at least part of the time).
The countless deities and “thrones, principalities and powers” against whom Saint Paul warns the budding Christian eventually gave way to a single all-powerful Creator God who made the world by a deliberate act of will. In its crudest form, Mechanism views the universe as a vast and complicated piece of clockwork entirely controlled by physical and mathematical laws, some of which we already know. There are no living things of any sort here unless we make an exception for humanity and, even if we do make such an exception, it is hard to see how free will can enter the picture. Modern science has dispensed with the Creator but retained the mechanistic vision, somewhat updated by quantum uncertainty and other exotic side effects.
The invention of the computer and its resounding success sometimes seems to be ushering in a new paradigm: the universe is an enormous integrated circuit endowed with intelligence of a sort and we are the humble bits. Seductive though this vision is in certain respects, it is not without serious dangers for the faithful since it looks disturbingly like a sort of reversion to the most ancient paradigm of all, the animistic one ─ the universe is alive and capable of creating itself and everything else out of itself.
The paradigm that I am working with harks back to certain Indian Buddhist thinkers of the early centuries AD though I originally discovered it for myself when I knew nothing about Buddhism and Taoism. No Creator God, no matter or mind as such, only evanescent point-like entities (‘dharmas’, ‘ultimate events’) forming relatively persistent patterns on a featureless backdrop which will eventually be returned to the original emptiness (‘sunyata’) from which the “thousand things” emerged.

Broad schema of Eventrics 

Following my own instincts and the larger cosmology of Taoism and other mystical belief systems, I divide reality into two broad categories, what I call the Manifest and the Unmanifest, each of which is further divided into two, the Non-Occurrent and the Occurrent. If one feels more comfortable with a symbolic notation, we can speak of K0 and K1 with further regions K00 and K01, K10 and K11.  Of the Unmanifest Non-Occurrent, K00, little can or need be said. It is the ultimate origin of everything, the original Tao, the Ain Soph (‘the Boundless’) of Jewish mysticism, the Emptiness of nirvana, the vacuum of certain contemporary physical theories (perhaps).

(To be continued)

Note 1  As soon as Christianity, or a particular version of it, became the official religion of the declining Roman Empire, all such cosmological speculation was actively discouraged and penalized.

Are there/can there be events that are truly random?
First of all we need to ask ourselves what  we understand by randomness. As with many other properties, it is much easier to say what randomness is not than to say what it is.

Definitions of Randomness

“If a series of events or other assortment exhibits a definite pattern, then it is not random” ─ I think practically everyone would agree to this.
This may be called the lack of pattern definition of randomness. It is the broadest and also the vaguest definition but at the end of the day it is what we always seem to come back to. Stephen Wolfram, the inventor of the software programme Mathematica and a life-long ‘randomness student’  uses the ‘lack of pattern’ definition. He writes, “When one says that something seems random, what one usually means is that one cannot see any regularities in it” (Wolfram, A New Kind of Science p. 316). 
        The weakness of this definition, of course, is that it offers no guidance on how to distinguish between ephemeral patterns and lasting ones (except to keep on looking) and some people have questioned whether the very concept of ‘pattern’ has any clear meaning. For this reason, the ‘lack of pattern’ definition is little used in science and mathematics, at least explicitly.

The second definition of randomness is the unpredictable definition and it follows on from the first since if a sequence exhibits patterning we can usually tell how it is going to continue, at least in principle. The trouble with this definition is that it has nothing to say about why such and such an event is unpredictable, whether it is unpredictable simply because we don’t have the necessary  information or for some more basic reason. Practically speaking, this may not make a lot of difference in the short run but, as far as I am concerned, the difference is not at all academic since it raises profound issues about the nature of physical reality and where we stand on this issue can lead to very different life-styles and life choices.

The third definition of randomness, the frequency definition, goes something like this. If, given a well-known and well-defined set-up, a particular outcome, or set of outcomes, in the long run crops up just as frequently (or just as infrequently for that matter) as any other feasible outcome, we class this outcome as ‘random’ (Note 1). A six coming up when I throw a dice is a typical example of a ‘random event’ in the frequency sense. Even though any particular throw is perfectly determinate physically, over thousands or millions of throws a six would come up no more and no less often than any of the other possible outcomes, or would deviate from this ‘expected value’ by a very small amount indeed. So at any rate it is claimed and, as far as I know, experiment fully supports this claim.
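For the reader who likes to see this concretely, here is a minimal sketch (in Python; the choice of language is mine and entirely incidental) of the frequency idea: throw a simulated die a large number of times and check that each face turns up roughly one sixth of the time. Strictly speaking the computer’s own generator is only pseudo-random, so this illustrates the criterion rather than proving anything about real dice.

    import random

    throws = 600_000
    counts = {face: 0 for face in range(1, 7)}
    for _ in range(throws):
        counts[random.randint(1, 6)] += 1      # each face assumed equally likely

    for face in range(1, 7):
        # by the frequency criterion every ratio should settle near 1/6 = 0.1667
        print(face, counts[face] / throws)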
It is the frequency definition that is usually employed in mathematics and mathematicians are typically always on the look-out for persistent deviations from what might be expected in terms of frequency. The presence or absence of some numerical or geometrical feature without any obvious reason suggests that there is, or at any rate might be, some hidden principle at work (Note 2).
The trouble with the frequency definition is that it is pretty well useless in the real world since a vast number of instances is required to ‘prove’ that an event is random or not ─ in principle an ‘infinite’ number ─ and when confronted with messy real life situations we have neither the time nor the capability to carry out extensive trials. What generally happens is that, if we have no information to the contrary, we assume that a particular outcome is ‘just as likely’ as another one and proceed from there. The justification for such an assumption is post hoc : it may or may not ‘prove’ to be a sound assumption and the ‘proof’ involved has nothing to do with logic, only with the facts of the matter, facts that originally we do not and usually cannot know.

The fourth and least popular definition of randomness is the causality definition. For me, ‘randomness’ has to do with causality ─ or rather the lack of it. If an event is brought about by another event, it may be unexpected but it is not random. Not being a snooker player I wouldn’t bet too much money on exactly what is going to happen when one ball slams full pelt into the pack. But, at least according to Newtonian Mechanics, once the ball has left the cue, whatever does happen “was bound to happen” and that is that. The fact that the outcome is almost certainly unpredictable in all its finest details even for a powerful computer is irrelevant.
The weakness of this definition is that there is no foolproof way to test the presence or absence of causality: we can at best only infer it and we might be mistaken. A good deal of practical science is taken up with distinguishing between spurious and genuine cases of causality and, to make matters worse, philosophers such as Hume and Wittgenstein go so far as to question whether this intangible ‘something’ we call causality is a feature of the real world at all. Ultimately, all that can be said in answer to such systematic sceptics is that belief in causality is a psychological necessity and that it is hard to see how we could have any science or reliable knowledge at all without bringing causality into the picture either explicitly or implicitly. I am temperamentally so much a believer in causality that I view it as a force, indeed as the most basic force of all, since if it stopped operating in the way we expect, life as we know it would be well-nigh impossible. For we could not be sure of the consequences of even the most ordinary actions; indeed, if we could in some way voluntarily disturb the way in which causes and effects get associated, we could reduce an enemy state to helplessness much more rapidly and effectively than by unleashing a nuclear bomb. I did actually, only half-facetiously, suggest that the Pentagon would be well advised to do some research into the matter ─ and quite possibly it already has. Science has not paid enough attention to causality; it tends either to take its ‘normal’ operation for granted or to dispense with it altogether by invoking the ‘Uncertainty Principle’ when this is convenient. No one as far as I know has suggested that there may be degrees of causality or that there could be an unequal distribution of causality amongst events.

Determinism and indeterminism

Is randomness in the ‘absence of causality’ sense in fact possible?  Not so long ago it was ‘scientifically correct’ to believe in total determinism, and Laplace, the French mathematician and astronomer, famously claimed that if we knew the current state of the universe with enough precision we could predict its entire future evolution (Note 3). There is clearly no place for inherent randomness in this perspective, only inadequate information.
Laplace’s view is no longer de rigueur in science, largely because of Quantum Mechanics and Chaos Theory. But the difference between the two world-views has been greatly exaggerated. What we get in Quantum Mechanics (and other branches of science not necessarily limited to the world of the very small) is generally the replacement of individual determinism by so-called statistical determinism. It is, for example, said to be the case that a certain proportion of the atoms in a radio-active substance will decay within a specified time, but which particular atom out of the (usually very large) number in the sample actually will decay is classed as ‘random’. And in saying this, physics textbooks do not usually mean that such an event is merely unpredictable in practice: they mean it is genuinely unknowable, and thus indeterminate.
But what exactly is it that is ‘random’? Not the micro-events themselves (the radio-active decay of particular atoms) but only their order of occurrence. Within a specified time limit, half, or three quarters, or some other proportion of the atoms in the sample will have decayed, and if you are prepared to wait long enough the entire sample will decay. Thus, even though the next event in the sequence is not only unpredictable for practical reasons but actually indeterminate, the eventual outcome for the entire sample is completely determined and, not only that, completely predictable!
Normally, if one event follows another we assume, usually but not always with good reason, that this prior event ‘caused’ the subsequent event, or at least had something to do with its occurrence. And even if we cannot specify the particular event that causes such and such an outcome, we generally assume that there is such an event. But in the case of this particular class of events, the decay of radio-active atoms, no single event has, as I prefer to put it, any ‘dominance’ over any other. Nonetheless, every atom will eventually decay : they have no more choice in the matter than Newton’s billiard balls.
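A toy model (my own illustration, not anything lifted from a physics textbook) makes this ‘overall determinism without individual determinism’ vivid: give every atom the same fixed chance of decaying in each interval, so that no atom has dominance over any other, and the aggregate curve comes out essentially the same on every run even though the order of the individual decays differs every time.

    import random

    atoms = 100_000          # size of the sample
    p = 0.05                 # chance that any one atom decays in a given interval
    remaining = atoms

    for step in range(1, 57):
        # each surviving atom decays, or not, independently of every other atom
        remaining = sum(1 for _ in range(remaining) if random.random() > p)
        if step % 14 == 0:   # 0.95**14 is roughly one half, so ~ one 'half-life'
            print("after", step, "intervals:", remaining, "atoms remain")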
Random Generation       To me, the only way the notion of ‘overall determinism without individual determinism’ makes any sense at all is by supposing that there is some sort of a schema which dictates the ultimate outcome but which leaves the exact order of events unspecified. This is an entirely Platonic conception since it presupposes an eventual configuration that has, during the time decay is going on, no physical existence whatsoever and can even be prevented from manifesting itself by my forcibly intervening and disrupting the entire procedure! Yet the supposed schema must be considered in some sense ‘real’ for the very good reason that it has bona fide observable physical effects which the vast majority of imaginary shapes and forms certainly do not have (Note 4).

An example of something similar can be seen in the development of an old-fashioned (non-digital) photograph taken in such faint light that the lens only allows one photon to get through at a time.  “The development process is a chemical amplification of an initial atomic event…. If a photograph is taken with exceedingly feeble light, one can verify that the image is built up by individual photons arriving independently and, it would seem at first, almost randomly distributed in position” (French & Taylor, An Introduction to Quantum Physics pp. 88-9). This case is slightly different from that of radio-active decay since the photograph has already been taken. But the order of events leading up to the final pattern is arbitrary and, as I understand it, will be different on different occasions. It is almost as if, because the final configuration is fixed, the order of events is ‘allowed’ to be random.

Uncertainty or Indeterminacy ?

 Almost everyone who addresses the subject of randomness somehow manages to dodge the central question, the only question that really matters as far as I am concerned, which is : Are unpredictable events merely unpredictable because we lack the necessary information  or are they inherently indeterminate?
        Taleb is the contemporary thinker responsible more than anyone else for opening up Pandora’s Box of Randomness, so I looked back at his books to see what his stance on the uncertainty/indeterminacy issue was. His deep-rooted conviction that the future is unpredictable and his obstinacy in sticking to his guns against the experts would seem to be driving him in the indeterminate direction but at the last minute he backs off and retreats to the safer sceptical position of “we just don’t know”.

“A true random system is in fact random and does not have predictable properties. A chaotic system [in the scientific sense] has entirely predictable properties, but they are hard to know.” (The Black Swan p. 198 )

This is excellent and I couldn’t agree more. But he proceeds: “…in theory randomness is an intrinsic property, in practice, randomness is incomplete information, what I called opacity in Chapter 1. (…)  Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.” (The Black Swan p. 198)
As far as I am concerned, randomness either is or is not an intrinsic property, and the difference between theory and practice doesn’t come into it. No doubt, from the viewpoint of an options trader, it doesn’t really matter whether market prices are ‘inherently unpredictable’ or ‘indeterminate’ since one still has to decide whether to buy or not. However, even from a strictly practical point of view, there is a difference, and a big one, between intrinsic and ‘effective’ randomness.
Psychologically, human beings feel much easier working with positives than negatives, as all the self-help books will tell you, and it is even claimed that “the unconscious mind does not understand negatives”. At first sight ‘uncertainty’ and ‘indeterminacy’ appear to be equally negative but I would argue that they are not. If you decide that some outcome is ‘uncertain’ because we will never have the requisite information, you will most likely not think any more about the matter but instead work out a strategy for coping with uncertainty ─ which is exactly what Taleb advocates and claims to have put into practice successfully in his career on the stock market.
On the other hand, if one ends up by becoming convinced that certain events really are indeterminate, then this raises a lot of very serious questions. The concept of a truly random event, even more so a stream of them, is very odd indeed. One is at once reminded of the quip about random numbers being so “difficult to generate that we can’t afford to leave it to chance”. This is rather more than a weak joke. There is a market for ‘random numbers’ and very sophisticated methods are employed to generate them. The first ‘random number generators’ in computer software were based on negative feedback loops, the irritating ‘noise’ that modern digital systems are precisely designed to eliminate. Other lists are extracted from the expansion of π (which has been taken to over a billion digits) since mathematicians are convinced this expansion will never show any periodicity and indeed none has been found. Other lists are based on so-called linear congruences.  But all this is in the highest degree paradoxical since these two last methods are based on specific procedures or algorithms and so the numbers that actually turn up are not in the least random by my definition. These numbers are random only by the frequency and lack of pattern definitions and as for predictability the situation is ambivalent. The next number in an arbitrary  section of the expansion of π  is completely unpredictable if all you have in front of you is a list of numbers but it is not only perfectly determinate but perfectly predictable if you happen to know the underlying algorithm.
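For the record, a ‘linear congruence’ generator is nothing more than the repeated application of a single deterministic formula, x → (a·x + c) mod m. The sketch below (the particular constants are a commonly used pair, but nothing hangs on them) produces output that passes the usual frequency tests, yet every term is completely fixed by the seed ─ random by the frequency and lack-of-pattern definitions, not in the least random by the causality definition.

    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Linear congruential generator: x -> (a*x + c) mod m."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m                 # scaled to the interval [0, 1)

    gen = lcg(seed=42)
    print([round(next(gen), 4) for _ in range(8)])
    # the same seed always reproduces exactly the same 'random' sequence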

Three types of Randomness

 Stephen Wolfram makes a useful distinction between three basic kinds of randomness. Firstly, we have randomness which relies on the connection of a series of events to its environment. The example he gives is the rocking of a boat on a rough sea. Since the boat’s movements depend on the overall state of the ocean, its motions are certainly unpredictable for us because there are so many variables involved ─ but perhaps not for Laplace’s Supermind.
Wolfram’s second type of ‘randomness’ arises, not because a series of events is continuously interacting with its environment, but because it is sensitively dependent on the initial conditions. Changing these conditions even very slightly can dramatically alter the entire future of the system, and one consequence is that it is quite impossible to trace the current state of a system back to its original state. This is the sort of case studied in chaos theory. However, such a system, though it behaves in ways we don’t and can’t anticipate, is strictly determinate in the sense that every single event in a ‘run’ is completely fixed in advance (Note 5).
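The stock illustration of this second type ─ a sketch of the general phenomenon, not of any particular physical system ─ is the logistic map x → r·x·(1−x) with r = 4: one completely determinate rule, yet two starting values differing by one part in a billion soon follow utterly different trajectories.

    r = 4.0                                 # fully chaotic regime of the logistic map
    x, y = 0.300000000, 0.300000001         # starting points differing by 1e-9

    for step in range(1, 41):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            # the two runs drift apart despite obeying the identical rule
            print(step, round(x, 6), round(y, 6), "gap:", round(abs(x - y), 6))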
Both these methods of generating randomness depend on something or someone from outside the sequence of events : in the first case the randomness is imported from the far larger and more complex system that is the ocean, and in the second case the randomness lies in the starting conditions, which themselves derive from the environment or are deliberately set by the experimenter. In neither case is the randomness inherent in the system itself and so, for this reason, we can generally reduce the amount of randomness by, for example, securely tethering the boat to a larger vessel or by only allowing a small number of possible starting conditions.
Wolfram’s third and final class of generators of randomness is, however, quite different since they are inherent random generators. The examples he gives are special types of cellular automaton. A cellular automaton consists essentially of a ‘seed’, which can be a single cell, and a ‘rule’ which stipulates how a cell of a certain colour or shape is to evolve. In the simplest cases we just have two colours, black and white, and start with a single black or white cell. Most of the rules produce simple repetitive patterns as one would expect, others produce what looks like a mixture of ‘order’ and ‘chaos’, while a few show no semblance of repetitiveness or periodicity whatsoever. One of these, which Wolfram classes as Rule 30, has actually been employed in Random[Integer], which is part of Mathematica, and so has proved its worth by contributing to the financial success of the programme; it has also, according to its inventor, passed all the tests for randomness to which it has been subjected.
Why is this so remarkable? Because in this case there is absolutely no dependence on anything external to the particular sequence which is entirely defined by the (non-random) start point and by an extremely simple rule. The randomness, if such it is to be called, is thus ‘entirely self-generated’ : this is not production of randomness by interaction with other sets of events  but is, if you like, randomness  by parthenogenesis. Also, and more significantly, the author claims that it is this type of randomness that we find above all in nature (though the other two types are also present).
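For anyone who wants to see what an ‘inherent randomness generator’ looks like in practice, here is a bare-bones rendering of Rule 30 (my own quick sketch, not the Mathematica implementation): a single black cell evolves under one fixed local rule, and the centre column of the resulting pattern is the kind of sequence Wolfram says has passed every randomness test applied to it.

    width, steps = 101, 60
    row = [0] * width
    row[width // 2] = 1                     # a single black cell as the 'seed'
    centre_column = []

    for _ in range(steps):
        centre_column.append(row[width // 2])
        # Rule 30: new cell = left XOR (centre OR right)
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width]) for i in range(width)]

    print(''.join(str(bit) for bit in centre_column))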

Causal Classification of types of randomness

This prompts me to introduce a classification of my own with respect to causality, or dominance as I prefer to call it. In a causal chain there is a forward flow of ‘dominance’ from one event to the next and, if one connection is missing, the event-chain terminates (though perhaps giving rise to a different causal chain by ricochet). An obvious example is a line of dominoes where each knocks over the next, but one domino is spaced out a bit more and so does not get toppled. A computer programme acts essentially in the same way : an initial act activates a sequential chain of events and the chain terminates if the connection between two successive states is interrupted.
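A toy version of the domino picture (nothing more than an illustration of what I mean by forwards internal dominance): each domino topples the next only if the gap is small enough, and the chain simply stops at the first over-wide gap.

    # positions of the dominoes along a line; one gap is too wide to bridge
    positions = [0.0, 1.0, 2.0, 3.0, 5.5, 6.5, 7.5]
    reach = 1.2                             # how far a falling domino can reach

    toppled = [True] + [False] * (len(positions) - 1)   # push the first domino
    for i in range(1, len(positions)):
        gap = positions[i] - positions[i - 1]
        # forwards dominance: a domino falls only if its predecessor fell
        # and was close enough to pass the influence on
        toppled[i] = toppled[i - 1] and gap <= reach

    print(toppled)      # the chain terminates at the over-wide gap (3.0 to 5.5)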
In the environmental case of the bobbing boat, we have a sequence of events, the movements of the boat, which do not  by themselves form an independent causal chain since each bob depends, not on the previous movement of the boat, but on the next incoming wave, i.e. depends on something outside itself. (In reality, of course, what actually goes on is more complex since, after each buffeting, the boat will be subject to a restoring force tending to bring it  back to equilibrium before it is once more thrown off in another direction, but I think the main point I am making still stands.)
In the statistical or Platonic case such as the decay of a radio-active substance or the development of the photographic image, we have a sequence of events which is neither causally linked within itself nor linked to any actual set of events in the exterior like the state of the ocean. What dictates the behaviour of the atoms is seemingly the eventual configuration (the decay of half, a quarter or all of the atoms) or rather the image or anticipation of this eventual configuration (Note 6).

So we have what might be called (1) forwards internal dominance; (2) sideways dominance; and (3) downwards dominance (from a Platonic event-schema).

Where does the chaotic case fit in? It is an anomaly since, although there is clear forwards internal dominance, it seems also to have a Platonic element and thus to be a mixture of (1) and (3).

Randomness within the basic schema of Ultimate Event Theory

Although the atomic theory goes back to the Greeks, Western science during the ‘classical’ era (16th to mid 19th century) took over certain key elements from Judaeo-Christianity, notably the idea of there being unalterable ‘laws of Nature’, and this notion has been retained even though modern science has dispensed with the lawgiver. An older theory, of which we find echoes in Genesis, views the ‘universe’ as passing from an original state of complete incoherence to the more or less ordered structure we experience today. In Greek and other mythologies the orderly cosmos emerges from an original kaos (from which our word ‘gas’ is derived) and the untamed forces of Nature are symbolized by the Titans and other monstrous creatures. These eventually give way to the Olympians who, significantly, control the world from above and do not participate in terrestrial existence. But the Titans, the ancient enemies of the gods, are not destroyed since they are immortal, only held in check, and there is the fear that at any moment they may break free. And there is perhaps also a hint that these forces of disruption (of randomness in effect) are necessary for the successful functioning of the universe.
Ultimate Event Theory reverts to this earlier schema (though this was not my intention) since there are broadly three phases (1) a period of total randomness (2) a period of determinism and (3) a period when a certain degree of randomness is re-introduced.
In Eventrics, the basic constituents of everything ─ everything physical at any rate ─  are what I call ‘ultimate events’ which are extremely brief and occupy a very small ‘space’ on the event Locality. I assume that originally all ultimate events are entirely random in the sense that they are disconnected from all other ultimate events and, partly for this reason, they disappear as soon as they happen and never recur. This is randomness in the causality sense but it implies the other senses as well. If all events are disconnected from each other, there can be no recognizable pattern and thus no means of predicting which event comes next.
So where do these events come from and how is it they manage to come into being at all? They emerge from an ‘Event Source’ which we may call ‘the Origin’ and which I sometimes refer to as K0 (as opposed to the physical universe which is K1).  It is an important facet of the theory that there is only one source for everything that can and does occur. If one wants to employ the computer analogy, the Origin either is itself, or contains within itself, a random event generator and, since there is nothing else with which the Origin can interact and it does  not itself have any starting conditions  (since it has always existed), this  generator can only be what Wolfram calls an inherent randomness generator. It is not, then, order and coherence that is the ‘natural’ state but rather the reverse : incoherence and discontinuity is the ‘default position’ as it were (Note 7).
Nonetheless, a few ultimate events eventually acquire ‘self-dominance’ which enables them to repeat indefinitely more or less identically and, in a few even rarer cases, some events manage to associate with other repeating events to form conglomerates.
This process is permanent and is still going on everywhere in the universe and will continue to go on at least for some time (though eventually all event-chains will terminate and return the ‘universe’ to the nothingness from which it originally came). Thus, if you like, ‘matter’ is being created all the time though at an incredibly slow rate just as it is in Hoyle’s Steady State model (Note 7).
Once ultimate events form conglomerates they cease to be random and are subject to ‘dominance’ from other sets of events and from previous occurrences of themselves. There will still, at this stage, be a certain unpredictability in the outcomes of these associations because determinism has not yet ousted randomness completely. Later still, certain particular associations of events become stabilized and give rise to ‘event-schemas’. These ‘event-schemas’ are not themselves made up of ultimate events and are not situated in the normal event Locality I call K1 (roughly what we understand by the physical universe). They are situated in a concocted secondary ‘universe’ which did not exist previously and which can be called K2. The reader may baulk at this, but the procedure is really no different from the distinction that is currently made between the actual physical behaviour of bodies which exemplify physical laws (whether deterministic or statistical) and the laws themselves, which are not in any sense part of the physical world. Theoretical physicists routinely speculate about other possible universes where the ‘laws’, or more usually the constants, “are different”, thus implying that these laws, or principles, are in some sense independent of what actually goes on. The distinction is somewhat similar to the distinction between genotype and phenotype and, in the last resort, it is the genotype that matters, not the phenotype.
Once certain event-schemas have been established, they are very difficult to modify : from now on they ‘dictate’ the behaviour of actual systems of events. There are thus already three quite different categories of events : (1) those that emerge directly from the Origin and are strictly random; (2) those that are brought about by previously occurring physical events; and (3) events that are dependent on event-schemas rather than on other individual events.
So far, then, everything has become progressively more determined, though evolving from an original state of randomness somewhat akin to the Greek kaos or the Hebrew tohu va-vohu, the original state when the Earth was “without form and void and darkness was upon the face of the deep”.
The advent of intelligent beings introduces a new element since such  beings can, or believe they can, impose their own will on events, but this issue will not be discussed here. Whether an outcome is the result of a deliberate act or the mere product of circumstances is an issue that vitally concerns juries but has no real bearing on the determinist/indeterminist dilemma.
Macroscopic events are conglomerates of ultimate events and one might suppose that, if the constituent events are completely determined, it follows that so are they. This is what contemporary reductionists actually believe, or at least preach, and, within a materialist world-view, it is difficult to avoid some such conclusion. But, according to the Event Paradigm, physical reality is not a continuum but a complicated mosaic where in general blocks of events fit together neatly into interlocking causal chains and clusters. The engineering is, however, perhaps not quite faultless, and there are occasional mismatches and irregularities, much as there are ‘errors’ in the transcription of DNA ─ indeed, genetic mutations are the most obvious example of the more general phenomenon of random ‘connecting errors’. And it is this possibility that allows for the reintroduction of randomness into an increasingly deterministic universe.
Despite the subatomic indeterminacy due to Quantum Mechanics, contemporary science nonetheless in practice gives us a world that is very nearly as predictable as the Newtonian one, and in certain respects more so. But human experience keeps turning up events that do not fit our rational expectations at all : people act ‘completely out of character’, ‘as if they were someone else’, regimes collapse for no apparent reason, wars break out where they are least expected, and so on. This is currently attributed to the complexity of the systems involved but there may be a deeper reason. There remains an obstinate tendency for events not to ‘keep to the book’, and one suspects that Taleb’s profound conviction that the future is unpredictable, and the tremendous welcome this idea has received from the public, is based on an intuitive awareness that a certain type of randomness is hard-wired into the normal functioning of the universe. Why is it there, supposing that it really is there? For the same sort of reason that there are persistent random errors in the transcription of the genetic code : it is a productive procedure that works in the long run by turning up possibilities that no one could possibly have planned or worked for. One hesitates to say that this randomness is deliberately put there, but it is not a wholly accidental feature either : it is perhaps best conceived as a self-generated controlling mechanism, reintroducing randomness as a means of propelling the system forward into a higher level of order, though quite what this will be is anyone’s guess.      SH  28/2/13

Note 1  Charles Sanders Peirce, who inaugurated this particular definition, did not speak of ‘random events’ but restricted himself to discussing the much more tractable (but also much more academic) issue of taking a random sample. He defined this as one “taken according to a precept or method which, being applied over and over again indefinitely, would in the long run result in the drawing of any one of a set of instances as often as any other set of the same number”.

Note 2  Take a simple example. One might at first sight think that a square number could end with any digit whatsoever just as a throw of a dice could produce any one of the possible six outcomes. But glancing casually through a list of smallish square numbers one notes that every one seems to be either a multiple of 5 like 25, one less than a multiple of 5 like 49 or one more than a multiple of 5 like 81. We could (1) dismiss this as a fluke, (2) simply take it as a fact of life and leave it at that or (3) suspect there is  a hidden principle at work which is worth bringing into the light of day.
In this particular case, it is not difficult to establish that the pattern is no accident and will repeat indefinitely. This is so because, in the language of congruences, the square of a number that is 0 (mod 5) is itself 0 (mod 5), the square of a number that is ±1 (mod 5) is +1 (mod 5), and the square of a number that is ±2 (mod 5) is 4, i.e. –1 (mod 5). This covers all possibilities, so we never get squares that are two units less or two units more than a multiple of five.
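A two-line check (Python, purely for illustration) confirms that no square differs from a multiple of five by two units:

    residues = {(n * n) % 5 for n in range(1, 100_001)}
    print(residues)     # {0, 1, 4}: every square is 0, +1 or -1 (mod 5), never ±2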

Note 3  Laplace, a born survivor who lived through the French Revolution, the Napoleonic era and the Bourbon Restoration, was careful to restrict his professed belief in total determinism to physical (non-human) events. But clearly there was no compelling reason to do this except the pragmatic one of keeping out of trouble with the authorities. More audacious thinkers such as Hobbes and La Mettrie, the author of the pamphlet L’Homme Machine, both found themselves obliged to go into exile during their lives and were vilified far and wide as ‘atheists’. Nineteenth century scientists and rationalists either avoided the topic as too contentious or, following Descartes, made a hard and fast distinction between human beings, who possessed free will, and the rest of Nature, whose behaviour was entirely reducible to the ‘laws of physics’ and thus entirely predictable, at any rate in theory.

Note 4 The current  notion of the ‘laws of physics’ is also, of course, an entirely  Platonic conception since these laws are not  in any sense physical entities and are only deducible by their presumed effects.
Plato definitely struck gold with his notion of a transcendent reality of which the physical world is an imperfect copy, since this is still the overriding paradigm in the mathematical sciences. If we did not have the yardstick of, for example, the behaviour of an ‘ideal gas’ (one that obeys Boyle’s Law exactly) we could hardly do any chemistry at all ─ but, in reality, as everyone knows, no gas actually behaves exactly like this, hence the eminently Platonic term ‘ideal gas’.
Where Plato went wrong, as far as I am concerned, was in visualizing his ‘Forms’ strictly in terms of the higher mathematics of his day, which was Euclidean geometry. I view them as ‘event-schemas’ since events, and not geometric shapes, are the building-blocks of reality in my theory. Plato was also mistaken in thinking these ‘Ideas’ were fixed once and for all. I believe that the majority ─ though perhaps not all ─ of the basic event-schemas which are operative in the physical universe were built up piecemeal, evolve over time and are periodically displaced by somewhat different event-schemas, much as species are.

Note 5. Because of the interest in chaos theory and the famous ‘butterfly effect’, some people seem to conclude that any slight perturbation is likely to have enormous consequences. If this really were the case, life would be intolerable. In ‘normal’ systems tinkering around with the starting conditions makes virtually no difference at all and every ‘run’, apart from maybe the first few events, ends up more or less the same. Each time you start your car in the morning it is in a different physical state from yesterday if only because of the ambient temperature. But, after perhaps some variation, provided the weather is not too extreme, the car’s behaviour settles down into the usual routine. If a working machine behaved  ‘chaotically’ it would be useless since it could not be relied on to perform in the same way from one day to the next, even from one hour to the next.

Note 6  Some people seem to be prepared to accept ‘backwards causation ‘, i.e. that a future event can somehow dictate what leads up to it,  but I find this totally unacceptable. I deliberately exclude this possibility in the basic Axioms of Ultimate Event Theory by stating that “only an event that has occurrence on the Locality can have dominance over other events”. And the final configuration certainly does not have occurrence on the Locality ─ or at any rate the same part of the Locality as actual events ─ until it actually occurs!

Note 7   Readers may be shocked at there being no mention of the Big Bang. But although I certainly believe in the reality of the Big Bang, it does not at all follow from any of the fundamental assumptions of Ultimate Event Theory and it would be dishonest of me to pretend it did. When I first started thinking along these lines, Hoyle’s conceptually attractive Steady State Theory was not entirely discredited, though even then very much on the way out. The only way I can envisage the Big Bang is as a kind of cataclysmic ‘phase-transition’, presumably preceded by a long slow build-up. If we accept the possibility of there being multiple universes, the Big Bang is not quite such a rare or shocking event after all : maybe, when all is said and done, it is a cosmic ‘storm in a teacup’.

Pagoda

To all whom it might concern:   I am speaking to the London Futurists (plus anyone else who cares to come along) on “Does Infinity Exist?” at the Peace Pagoda, Battersea Park, London  2 p.m. Saturday 8th December

This incidentally will be the first time that I will be talking about Ultimate Event Theory in public (and it is only last year that I started putting posts up about it). (It has taken me all of thirty-five years to reach this point of no return.) It seems that the Pagoda is entirely the right location for such a discussion though it was not deliberately chosen by me, indeed not chosen at all. I had originally aimed to hold the meeting (the first I have ever called on such a subject) indoors somewhere in a venue in central London but could find nowhere available for this date chosen entirely at random. Then a few Sundays ago, my partner, the painter Jane Maitland, suddenly said “Why don’t we visit Battersea Park today?”, something we never do — the last time I was there was at least eight years ago. We passed by the Pagoda but didn’t go into it. That night it suddenly came to me that the best place to meet up was the Pagoda. Why the best place? Because the origins of Ultimate Event Theory go back all of two thousand and five hundred years to the ponderings of an Indian ascetic about the nature of the physical world and the misery of human existence.

English, like all Indo-European languages, is an ‘object-orientated’ language.  It presents us with an object, he, she, it, then tells us something about it in the so-called predicate, She is dark-haired, intelligent, European, whatever. Alternatively, we are presented with two ‘things’ (organic or inorganic) and an action linking them together, I hit him. Never, except in the case of imperatives, do we have a verb standing alone, and even imperatives do not express any actual state of affairs but only a hypothetical or desired state of affairs (desired by the speaker) as in Come here. Whorf is one of the very few linguists to have noticed this :
“We are constantly reading into nature fictional acting entities, simply because our verbs must have substantives in front of them. We have to say ‘It flashed’ or ‘A light flashed’, setting up an actor, ‘it’ or ‘light’, to perform what we call an action, ‘to flash’. Yet the flashing and the light are one and the same!  The Hopi language reports the flash with a single verb, rehpi : ‘flash (occurred)’. There is no division into subject and predicate, not even a suffix like –t of Latin tonat, ‘it thunders’. Hopi can and does have verbs without subjects…”  (Whorf, Language, Thought and Reality p. 243)

Nouns and names are inert : they do not do anything, which is why a sentence which is just a list of names strikes us as being incomplete. But, although we can’t say it in contemporary English, ‘hit’ is perfectly adequate on its own; it pinpoints the essential, the action. Even more adequate on its own  would be ‘killing’  (it is ridiculous that we cannot say ‘birthing’ but only ‘giving birth’ as if we were giving something away to someone). Think of all the films you have seen which start with a shot ringing out and a dead body lying on the ground e.g. The Letter, Mildred Pierce. In both these cases, the entire rest of the film is taken up with retracing the series of events leading up to this all-important event and putting names to bodies. But the persons revolve around the central event, not the reverse, they interest us in the context of this event, not otherwise. In a more recent film, The Descendants, the entire film revolves around a water-skiing accident and it is extremely clever that the victim is only shown in a coma : she is of no interest in herself and had she not been put into a coma, there would have been nothing to make a film about. Such a dramatic event as a murder or violent death carries a tremendous weight of accessory events which otherwise would remain unknown and equally such events leave a long ‘trail’ of future events.
An ‘event language’ would have a (macroscopic) event as the central feature and the sentence structure would of necessity be different.  I have conjectured that the basic structure would be :

1. Presentation of a block of ultimate events
2. Its/their localisation
3. Flow of causality (dominance), i.e. which event causes what.

 One or more of these elements may be absent : in baby-talk (3) is often lacking.

     Instead of the bland “He was hit by a car” we would have something more like “Crash/he/car”

Event/dharma: the hitting, the collision
Localisation: ‘he’ (whoever he is)
Cause (origin of dominance): car
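Purely as a thought experiment, the three-part structure could even be written down as a little data structure; the field names below are of course my own invention and carry no linguistic authority.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EventSentence:
        event: str                      # the dharma: the block of ultimate events
        localisation: Optional[str]     # where, or to whom, it has occurrence
        dominance: Optional[str]        # origin of the causal flow, if stated

    # "He was hit by a car" rendered event-first: Crash / he / car
    crash = EventSentence(event="crash", localisation="he", dominance="car")
    # baby-talk often omits element (3), the flow of dominance
    babble = EventSentence(event="fall", localisation="teddy", dominance=None)
    print(crash, babble, sep="\n")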

Implicit in the subject/predicate syntax is an underlying ‘world-view’ or paradigm.

Whorf, remarkably, conjectures that a Hopi ‘physics’ would be very different from our Western traditional physics but ‘equally valid’. It is foolish to assume that an alien civilisation would have essentially the same mathematics and physics that we do, though certainly there would be a certain overlap. Whorf thinks the main difference would be between a concentration on ‘events’ rather than ‘things’ on the one hand and, on the other, a deep concern with the interaction between the ‘subjective’ and the ‘objective’.

To be continued     

 

 

 

“It seems to me that there is nothing for it but to take as fundamental the relation of one event causing another” (Keith Devlin, Logic and Information).

Eventrics is the general study of events and their interactions while Ultimate Event Theory is, if you like, the nuclear physics of Eventrics. In these Posts I shall hop about more or less at random from the macro to the micro domains, while concentrating nonetheless on the latter. Eventually, when enough material has accumulated, I may siphon off certain portions of the theory, but at this stage it is more instructive for the reader to see the theory taking shape piecemeal, which is how event-clusters and event-chains themselves form, rather than attempt to systematize.  In any case, the person reading this who will take the theory further than I can hope to will not only need to have a clear understanding of the behaviour of events at their most elemental level but, above all, will need to become adept in navigating (or rather surfing) the enormous event currents of the present society if he or she is to give the theory the audience it deserves.
The macro-events we are concerned with on a day to day basis are huge event-clusters, as large as galaxies in comparison to their constituent ultimate events, and the collective behaviour of events may well differ from the behaviour of individual ultimate events as much as the behaviour of human crowds or gases seems to differ from that of their constituent individuals or molecules in object-based physics (Note 1). Certainly, large-scale bulk event-chains, what we call ‘historical movements’, seem to have their own momentum and evolve in their own manner, sweeping people along as if the latter were torn pieces of paper. Those persons who obtain positions of power are those who, by luck or good judgment, align themselves with forces they do not control but can up to a point use to their advantage (Note 2). An analysis by way of events rather than by way of persons or by way of electrons and molecules may well prove to be more appropriate in the macro domain and is indeed already followed by various writers.
Events, some of them at any rate, do not occur at random : they form themselves into recognizable event-chains and so require something to stop them falling apart. This something we call ‘causality’.  So-called primitive man, if anything, believed more firmly in causality than people today do : rather than meekly accept that certain events came about ‘by chance’, primitive societies considered there must always be some agency at work, malign or benevolent, natural or supernatural. Causality is not so much a law — for a law requires a lawgiver — as a force, perhaps the most basic and essential kind of force imaginable since without some form of causality everything would be a bewildering confusion where any event could follow any other event and there would be no persistent patterns of any kind whatsoever anywhere.  Though I am prepared to dispense with quite a number of things I am not prepared to dispense with causality, or its equivalent. In Ultimate Event Theory it appears under the name of ‘dominance’. As one of the half dozen basic concepts of the theory, it cannot really be described in terms of anything more elementary and I define it as “a coercive influence which certain event clusters and event-chains have over others”.  This is not much of a definition but will do for the moment. Before saying more about ‘dominance’ and how it differs, if at all, from causality, it may be as well to examine the ‘classic’ theory of causality as it appears in Western science up to the twentieth century and in rationalistic thinking generally.
Causality — what is causality?  The basis of all theories of causality is the notion, more precisely intuition, that certain pairs of events are connected up in a  necessary fashion whilst others (the majority) are not. It doesn’t “just happen to be the case” that someone falls over if I give him a sudden hard push : if  he did fall, we would say that I caused him to stumble. On the other hand, if I am shaking his hand and he happens to trip over a stone at this precise moment, I didn’t cause him to fall over — though it might look as if I did to an observer  some distance away.
According to Piaget, the newborn child lives in a ‘world’ without space and time, without permanent objects and without causality. The universe “consists of shifting and insubstantial tableaux which appear and are then totally reabsorbed” (Piaget). However, the notion of causality arises very early on, perhaps even as early as a few months if we are to believe certain modern researchers (Note 1).  Certainly, the baby very soon realises that by making certain movements or noises it can successfully attract the attention of its mother, though whether this quite constitutes an ‘understanding’ of causality is debatable. Event A, such as gurgling or screaming, becomes regularly associated in the baby’s mind with a quite dissimilar Event B, the physical proximity of its mother or another grown-up. The scream ’causes’ the prompt arrival of a grown-up, never mind how or why.
The notion that certain occurrences can just arise ‘out of the blue’ without antecedents is repugnant to most adult human beings and any sort of an explanation, however fanciful,  is felt to be better than none at all. Belief in causality, whether well-founded or not, certainly seems to be a psychological necessity. The main motive for populating the universe in times past with so many unseen entities was to provide causal agents for observed phenomena. By the time we reach the 18th century, largely because of the astonishing success of Newtonian mechanics, most of these supernatural agencies became redundant, at any rate in the eyes of educated people. The “thrones, principalities and powers” against whom Saint Paul warns us had disappeared into thin air by the mid eighteenth century, leaving only an omniscient Creator God who had done such a good job in fashioning the universe that it could run on its own steam without the need of further intervention. The philosophers of the Enlightenment rejected ‘miraculous’ explanations of physical events : in theory at any rate mechanical explanations sufficed. Newton himself was puzzled that he was unable to provide a mechanical explanation of gravity and, later on, electrical phenomena caused problems : but most physicists prior to the last quarter of the nineteenth century assumed with Helmholtz  that “all physical problems can be reduced to mechanical problems” and that Calculus and Newton’s Laws were the key to the universe.
Through all this, belief in causality continued unimpaired. In principle there were no chance events, and the French astronomer Laplace went so far as to say that, if a Supermind knew in full detail the current state of the universe, it would be able to predict everything that was going to happen in the future. This view is no longer de rigueur, of course, mainly because of the discoveries of Quantum Theory which has uncertainty built right into it. But, for the moment, I propose to leave such complications aside in order to concentrate on what might be called the ‘Classic Theory of Causality’ — ‘classic’ in the sense that it was the theory upheld, or more often assumed, by the great majority of scientists and rational thinkers between the 16th and 20th centuries.

This Classic Theory would seem to be based on the four following assumptions:
1.    There exists a necessary connection between certain pairs of events, and by extension, longer sequences;
2.    The status of the two events in a causal pair is not equivalent, one of the two is, as it were, active and the other, as it were, passive or acted upon;
3.    The ‘causal force’ always operates forwards in time, it is transmitted from the earlier event to the later;
4.    All physical occurrences, and perhaps mental occurrences as well, are brought about by the prior occurrence of one or more previous events.

       These assumptions are so ‘commonsensical’ that almost everyone took them for granted for a long time, witness popular phrases such as “Every event has a cause” , “Nothing can arise from nothing” and so on. But then the 18th century British philosopher Hume threw a spanner into the works. He pointed out that these assumptions, and others like them, were, when all was said and done, simply assumptions — they could not be proved to be the case, and were not ‘self-evident’. We do not, Hume pointed out, ever see or hear this mysterious causal link : indeed it is notoriously difficult, even for trained observers, to distinguish between events which are (allegedly) causally related and those that are not — if this were not the case, the natural sciences would have developed much more rapidly than they actually did.
Nor do these assumptions appear to be ‘necessary truths’, though this is undoubtedly how Leibnitz and Kant and other rationalist thinkers viewed them. As Hume says, the fact that event A has up to now always and in all circumstances been followed by event B, does not mean that this will automatically be the case in the future. (Indeed, though Hume could not know this, the assumption is false if Quantum Mechanics is to be believed since in QM identical circumstances do not necessarily produce identical results.)
In brief, belief in causality is, so Hume argues, an act of faith. This was a very serious charge since most scientists regarded themselves as having left behind such modes of thought. The nineteenth century, as it happened, took very little notice of Hume’s devastating critique : science needed a cast iron belief in causality and Claude Bernard even went so far as to define science as determinism. And since science was clearly working, most educated people were happy to go along with its implicit assumptions — perhaps making an exception for mankind itself to whom God had given the capacity for free choice which the rest of Nature did not possess.
Actually, the four assumptions listed, necessary though they are, do not suffice to distinguish the post-Renaissance Western theory of causality from earlier beliefs and theories. Further restrictions were required to eradicate the remaining vestiges of magical pre-scientific thinking. The most important of these principles are :
1.    The No Miracle Principle;
2.    The Principle of Spatio-Temporal Continuity;
3.    The Principle of Energy;
4.    The Principle of Localization;
5.    The Mind/Body Principle;
6.    The Principle of Parsimony.

The first three principles are ‘scientific’ in the sense that they have had enormous importance in the progress of scientific thinking. The first gets rid of all deus ex machina explanations and thus stimulates a search for ‘natural principles’; the second prohibits ‘action at a distance’ (though ironically gravity and certain aspects of quantum mechanics require it); the third, roughly that “all change requires an energy source”, is very much an issue today in this era of depleted stocks of fossil fuels; the fourth, that “everything must be somewhere”, is, or seems to be, commonsensical; the fifth, roughly that the “mind cannot by itself bring about changes in the outside world”, is a corner-stone of materialist philosophy, while the last, that “Entities are not to be multiplied without necessity”, is more a matter of method and necessity than anything else. These principles will be discussed in detail in the following Post.   S.H.

________________________________________________________

Note 1 The researchers Alan Leslie and Stephanie Keeble [“Do Six-Month-Old Infants Perceive Causality?”, Cognition 25 (1987), pp. 265–288] claim that, when babies are shown ‘acausal’ sequences of events mixed up with similar causal sequences, they [the babies] show unmistakeable symptoms of surprise such as a more rapid heart beat.

Note 2  This is indeed what may have prevailed ‘in the beginning’, but the world we find ourselves in today is very different from the Greek kaos from which everything came, and the main difference is that certain sequences of physical events have kept on repeating themselves with minor variations for millions of years. Such patterns have indeed become so firmly established that they are viewed as ‘laws of Nature’, though they are perhaps more fittingly described as ‘schemas’ or ‘event-moulds’ into which physical events have fallen.

 

Velocity has a different meaning in Ultimate Event Theory from that which it has in object-based physics. In the latter a particle traverses a multitude ─ usually an infinite number ─ of positions during a given time interval and the speed is the distance traversed divided by the time. One might, for example, note that it was 1 p.m. when driving through a certain village and 3 p.m. when driving through a different one known to be 120 kilometres distant from the first. Supposing the speed was constant throughout this interval, it would be 120 kilometres per 2 hours. However, speed is practically never cited in this fashion: it is always reduced to a certain number of kilometres, or metres, with respect to a unitary interval of time, an hour, minute or second. Thus my speed on this particular journey would be quoted in a physics textbook as 60 kilometres per hour or, more likely, as (60 × 10³)/(60²) ≈ 16.67 metres per second = 16.67 m s–1 (to two decimal places).
By doing this, different speeds can be compared at a glance, whereas if we quoted speeds as 356 metres per 7 seconds and 720 metres per 8 seconds it would not be immediately obvious which speed is the greater. When dealing with such measures as metres and seconds there would normally be no difference between object-based physics and event-based physics. However, even when dealing with minute distances and tiny intervals of time such as nanometres and nanoseconds, ‘speed’ is still stated in so many units of length per interval of time. This automatic conversion to standard unitary measures presupposes that space and time are ‘infinitely divisible’ in the sense that, no matter how small the interval of time, it is always possible for a particle to change its position, i.e. ‘move’. This assumption is, to say the least, hardly plausible and Hume went so far as to write, “No priestly dogma invented on purpose to tame and subdue the rebellious reason of mankind ever shocked common sense more than the doctrine of the infinite divisibility with its consequences” (Hume, Enquiry Concerning Human Understanding).
In Ultimate Event Theory, which includes as an axiom that time and space are not infinitely divisible, this automatic conversion is not always feasible. Lengths eventually reduce to so many ‘grid-spaces’ on the Locality, and intervals of time to so many ksanas (‘instants’), and there is no such thing as a half or a third of a ‘grid-space’ or a quarter of a ksana. The ‘speed’, or ‘displacement rate’, of an ultimate event or event-cluster is defined in terms of the distance on the Locality between two spots where the event has had occurrence and the number of ksanas separating the two occurrences. This distance is always a positive integer corresponding to the number of intermediary positions (+1) where an ultimate event could have had occurrence. If the position of the earlier occurrence is not the original position, we relate both positions to that of a repeating landmark event-sequence, the equivalent of the origin. So if the occurrences take place at consecutive ksanas, the current reappearance rate (‘speed’) is simply the ‘distance’ between the two spots divided by unity, i.e. a positive integer. But what if an event reoccurs 7 spaces to the left every 4 ksanas? The ‘actual’ reappearance rate is 7 spaces per 4 ksanas which, when converted to the ‘standard’ measure, comes out as 7/4 spaces per ksana or 7/4 sp ks–1. However, since there is no such thing as seven-fourths of a position on the Locality, displacement rates like 7/4 sp ks–1 are simply a convenient but somewhat misleading way of tracking a recurring event.
The ‘Finite Space/Time Axiom’ has curious consequences. It means that except when the space/ksana ratio is an integer, all event-chains are ‘gapped’ : that is, there are intermediary ksanas between successive occurrences when the event or event-cluster does not make an appearance at all. Thus, the reappearance pattern ksana by ksana for an ultimate event displacing itself along a line at the ‘standardized’ rate of 7/4 sp ks–1  will be

……..o■oooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooo■ooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..ooooooooooooooo■oooo……………
……..oooooooooooooooooooo……………

And this in turn means that when s/n is a ratio of relatively prime numbers, there will be gaps n–1 ksanas long, and the ‘particle’ (repeating ultimate event) will completely disappear during this time interval !   The importance of the distribution of primes and factorisation generally, which has been so intensively studied over the last two centuries, may thus have practical applications after all since it relates to the important question of whether there can be ‘full’ reappearance rates for certain processes (Note 1).
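To make the reappearance pattern concrete, here is a minimal sketch in Python (illustrative numbers only; the true scale of grid-spaces and ksanas is of course unknown) that lists the ksanas at which an event-chain with a standardized rate of s spaces per n ksanas actually has occurrence, together with its position, so that the gaps of n–1 ‘empty’ ksanas in between become visible.

    from math import gcd

    def appearances(s, n, total_ksanas):
        """List (ksana, position) pairs for an event-chain that reappears
        s grid-spaces further on every n ksanas (illustrative model only)."""
        # Reduce s/n so that the 'true' reappearance interval is explicit.
        g = gcd(s, n)
        s, n = s // g, n // g
        return [(k, (k // n) * s) for k in range(total_ksanas + 1) if k % n == 0]

    # An event-chain with the 'standardized' rate 7/4 sp ks-1:
    for ksana, position in appearances(7, 4, 16):
        print(f"ksana {ksana:2d}: grid-position {position}")
    # Between successive appearances there are n - 1 = 3 ksanas
    # at which the event does not appear at all.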
In consequence, when specifying in full detail the re-appearance rate of an event, or, what comes to the same thing, the re-occurrence speed of members of an event-chain, we not only need to give the magnitude of the displacement and the change of direction, but also the ‘gap number’ or ‘true’ reappearance rate of the event-chain.

Constants of Ultimate Event Theory 

n*, the number of ksanas to a second, and s*, the number of grid-spaces to a metre, are basic constants in Ultimate Event Theory that remain to be determined but which I have no doubt will be determined during this century.  (s*/n*) is thus the conversion factor required to reduce speeds given in metres/second to spaces/ksana.  Thus c(s*/n*) = 3 × 10⁸ (s*/n*) sp ks–1 is seemingly a displacement rate that cannot be exceeded. c(s*/n*) is not, as I view things, the actual speed of light but merely the limiting speed for all ‘particles’, or, more precisely, the limiting value of the possible ‘lateral’ displacement of members of a single event-chain. Any actual event-chain would have at most a lateral displacement rate that approaches but does not attain this limit. While there is good reason to believe that there must be a limiting value for all event-chains (particles) — since there is a limit to everything — there is no need to believe that anything actually attains such a limit. In object-based physics, the neutrino was until recently thought to travel at the speed of light and thus to be massless, but it is now known that it has a small mass. The idea of a ‘massless’ particle is ridiculous (Note 2), for if there really were such a thing it would have absolutely no resistance to any attempt to change its state of rest or straight-line motion, and so it is hard to see how it could be anything at all even for a single instant; maybe it would just be a perpetually changing erratic ‘noise’. Mass, of course, does not have the same meaning in Ultimate Event Theory and will be defined in a subsequent post, but the idea that there is a ‘displacement limit’ to an event-chain passes over into UET. This ‘speed’ in reality shows the absolute limit of the lateral ‘bonding’ between events in an event-chain and in this sense is a measure of ‘event-energy’. Any greater lateral displacement than c(s*/n*) would result in the proto-event aborting, as it were, since it would no longer be tied to the same event-chain.
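Purely as an illustration of the conversion factor (the numerical values of s* and n* below are invented placeholders, not estimates), reducing a speed quoted in metres per second to grid-spaces per ksana just means multiplying by s*/n*:

    # Hypothetical values only: nobody currently knows s* or n*.
    s_star = 1e35   # assumed number of grid-spaces to a metre (placeholder)
    n_star = 1e43   # assumed number of ksanas to a second (placeholder)

    def to_spaces_per_ksana(v_metres_per_second):
        """Convert a speed in m/s to grid-spaces per ksana using the factor s*/n*."""
        return v_metres_per_second * s_star / n_star

    c = 3e8  # approximate speed of light in m/s
    print(to_spaces_per_ksana(c))       # = c(s*/n*), the limiting displacement rate
    print(to_spaces_per_ksana(16.67))   # the 60 km/h example from earlier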

 Reappearance rates

So far I have assumed that an event in an event-chain reappears as soon as it is able to do so. This may well not be the case; indeed I think it very unlikely that it is the case. For example, an event in an event-chain with a standardized ‘speed’ of 1/2 sp ks–1 might not in reality re-appear every second ksana: it could reappear two spaces to the left (or right) every fourth ksana, or three spaces every sixth ksana and so on. In this respect the ‘gap number’ would be more informative than the reappearance rate as such, and it may be that slight interference with other event-chains would shift the gap number without actually changing the overall displacement ratio. Thus, an event shifting one space to the right every second ksana might only appear every fourth ksana shifted two spaces in the same direction, and so on. It is tempting to see these shifts as in some way analogous to the orbital shifts of electrons, while more serious interference would completely disrupt the displacement ratio. Once we evolve instruments sensitive enough to register the ‘flicker’ of ultimate events, we may find that there are all sorts of different event patterns, as intricate as the close packing of molecules.
It has also occurred to me that different re-appearance rates for event-chains that have the same standard displacement rate might explain why certain event-chains behave in very different ways despite having, as far as we can judge, the same ‘speed’. In our macroscopic world, the effect of skipping a large number of grid-spaces and ksanas (which might well be occupied by other event-clusters) would give the impression that a particularly dense event-cluster (‘object’) had literally gone right through some other cluster if the latter were thinly extended spatio-temporally. Far from being impossible or incredible, something like this actually happens all the time since, according to object-based physics, neutrinos are passing through us in their millions every time we blink. Why, then, is it so easy to block the passage of light, which travels at roughly the same speed, certainly no less? I found this a serious conceptual problem, but a difference of reappearance rates might explain it: maybe a stream of photons has the same ‘speed’ but a much tighter re-appearance rate than a stream of neutrinos. This is only a conjecture, of course, and there may be other factors at work, but there may be some way to test whether there really is such a discrepancy between the two ‘particles’, one which would result in the neutrino having far better penetrating power with regard to obstructions.
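The following toy comparison (Python, invented numbers) shows two event-chains with the same standardized displacement rate of 1/2 sp ks–1 but different ‘gap numbers’: one appears every second ksana shifted one space, the other only every fourth ksana shifted two spaces. Their average ‘speed’ is identical, but the second is far more thinly spread out, which is the kind of difference conjectured above between, say, photons and neutrinos.

    def chain(spaces_per_jump, ksanas_per_jump, total_ksanas):
        """Ksanas at which the chain has occurrence, with positions (toy model)."""
        return {k: (k // ksanas_per_jump) * spaces_per_jump
                for k in range(total_ksanas + 1) if k % ksanas_per_jump == 0}

    tight = chain(1, 2, 12)   # 1 space every 2 ksanas  -> 1/2 sp per ksana
    loose = chain(2, 4, 12)   # 2 spaces every 4 ksanas -> same 1/2 sp per ksana
    print(tight)  # appears at ksanas 0, 2, 4, ... : 7 appearances up to ksana 12
    print(loose)  # appears at ksanas 0, 4, 8, 12  : only 4 appearances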

Extended and combined reappearance rates

  Einstein wondered what would happen if an object exceeded the speed of light, i.e. in UET terms when an event-chain got too extended laterally. One  might also wonder what would happen if an event-chain got too extended temporally, i.e. if its re-appearance rate was 1/N where N was an absolutely huge number of ksanas. In such a case, the re-appearance of an event would not be recognized as being a re-appearance : it would simply be interpreted as an event (or event-cluster) that was entirely unrelated to anything in its immediate vicinity. Certain macroscopic events we consider to be random are perhaps not really so : the event-chains they belong to are so extended temporally that we just don’t recognize them as being event-chains (the previous appearance might have been years or centuries ago). Likewise the interaction of different event-chains in the form of ‘cause and effect’ might be so spread out in time that a ‘result’ would appear to come completely out of the blue (Note 3).
There must, however, be a limit to vertical extension (since everything has a limit). This would be an important number for it would show the maximum temporal extension of the ‘bonding’ between events of a single chain. We may also conjecture that there is a combined limit to lateral and vertical extension taken together, i.e. the product grid-positions × ksanas has a maximum which again would be a basic constant of nature.     S.H. 8/10/12

_________________________

Note 1  A ‘full’ re-appearance rate is one where an ultimate event makes an appearance at every ksana from its first appearance to its last.

Note 2. De Broglie, who first derived the famous relation p = h/λ linking a particle’s momentum p with Planck’s constant divided by the wave-length, believed that photons, like all particles, had a small mass. There is no particular reason why the observed speed of light should be strictly equated with c, the limiting speed for all particles, except that this makes the equations easier to handle, and no experiment will ever be able to determine whether the two are strictly identical.

Note 3 This, of course, is exactly what Buddhists maintain with regard to the consequences of bad or good actions ─ they can follow you around in endless reincarnations. Note that it is only certain events that have this temporal extension in the karma theory: those involving the will, deliberate acts of malice or benevolence. If we take all this out of the moral context, the idea that effects can be widely separated temporally from their causes and that these effects come up  repeatedly is quite possibly a useful insight into what actually goes on in the case of certain abnormal event-chains that are over-extended vertically.

 

“The acceleration  of  straight motion in heavy bodies proceeds according to the odd numbers beginning with one. That is, marking off whatever equal times you wish, and as many of them, then if the moving body leaving a state of rest shall have passed during the first time such a space as, say, an ell, then in the second time it will go three ells, in the third, five; in the fourth, seven, and it will continue thus according to the succession of odd numbers.”      Galileo, Dialogue Concerning the Two World Systems   p. 257  Drake’s translation

I had originally supposed that Galileo based this conjecture, one of the most important in the whole of physics, on actual observations but, if so, Galileo kept very quiet about it. For the man who is hailed as the ‘first empiricist’, Galileo seems to have been singularly uninterested in checking out this remarkable relation which would have delighted the Pythagoreans, believing as they did that all physical phenomena were reducible to  simple ratios between whole numbers. Admittedly, Galileo was blind when the Dialogue Concerning Two World Systems was published but he surely had ample time to investigate the matter during his long life ― perhaps he was reluctant to test his beautiful theory because he was afraid it was not entirely correct (Note 1). Or, more likely, he wanted to give the impression that he had deduced the ‘Law of Falling Bodies’ entirely from first principles without recourse to experiment. We must remember that a Christianised Platonism provided the philosophical framework for the thinking  of all the early classical scientists right up to and including Newton. Galileo himself wrote that “As to the truth of which mathematical demonstrations give us the knowledge, it is the same which the Divine Wisdom knoweth” (Galileo, Dialogue).  He does claim (via his spokesman Salviati in the Dialogue) that there exists a proof and “one most purely mathematical” (to be given in outline in a moment) but one wonders how on earth he got hold of the relation in the first place since it does not seem to be based on general physical principles as Newton’s formula for gravitation was. As with so many other important discoveries, Galileo most likely just hit on this striking relation  by a combination of observation and inspired guesswork.
Note that Galileo still speaks the language of continued ratio just as Euclid did (and Newton continued to do in the Principia): the ‘Law of Falling Bodies’ does not constitute an equation of motion in the modern sense (Note 2). There are various reasons for this apart from Galileo’s respect for the ancient Greek geometers. For there is safety in ratios: they do not tie you down to actual quantities, nor do they implicitly assume that these quantities really exist beyond what one can ever hope to examine in practice. Above all, the language of proportion or continued ratio sidesteps the question of whether space and time are ‘infinitely divisible’, since the ratio holds either way.
Galileo’s continued ratio concerns distances and not speeds. The statement that the successive distances are in the ratio of the odd numbers would still be true whether, during a given interval of time, the particle moved at a steadily accelerating pace, stayed motionless for most of the time only to surge forward suddenly at the close, or moved around erratically (Note 3). Provided the recorded distances in successive intervals of time are in the ratio 1:3:5 (or 7:9:11), the rule holds. Moreover, supposing the rule is correct, we only need to determine a value for a single interval (and not necessarily the first one) to work out all the others.
The difficulty with problems concerning speed is that speed, unlike distance, is not something we can actually observe (Note 4). We have perforce to start with distances, at least one, that we can determine by observation and check against repeating signals like the ticking of a clock or the beating of a pulse (Galileo used the latter to time the swinging of a censer when at mass). However, once we have at least one clear-cut distance/time ratio, in order to work out further distances ― which is basically what we are interested in ― we have to deal with speeds (rates of change of distance with respect to time) and then backtrack again to distances. That is, we have to move from the observable (distances, intervals of time) to the unknown and unobservable (speeds, accelerations) and finally back to the observable. The very idea of a ‘formula of motion’, i.e. a statement which, in algebraic shorthand, gives all distances at all times, is a thoroughly modern invention that required a gigantic leap of thought: not only did the Greeks not have a clear conception of such a thing, but Galileo himself seems to have hesitated on the threshold of the Promised Land without really entering it.

Derivation and Justification of Galileo’s ‘Law of Falling Bodies’

Suppose we start off with Galileo’s rule d1 : d2 : d3 …  =  1 : 3 : 5 … and generalize. We have observed, or think we have observed, that a body starting from rest falls, say, 1 metre in the first second. So it will fall 3 metres during the second interval, 5 during the third, and by the end of the third second will have fallen altogether 1 + 3 + 5 = 9 metres = 3² metres. Since the odd numbers 1 + 3 + … + (2n–1), n = 1, 2, 3…, add up conveniently to n², we have a formula already. And, since there is nothing sacrosanct about a second as an interval of time, we might try going backwards and consider halves, thirds and fourths of seconds and so on. So, when dealing in half-seconds, since the particle falls 1 metre in the first second, it must fall ½ metre in the first half-second and 3/2 metres in the second half-second, i.e. 2 metres already (even though it has only fallen 1 metre in the first second). And if we deal in thirds of seconds, the particle falls (1/3 + 3/3 + 5/3) = 9/3 = 3 metres, and if we deal in fourths of seconds it has fallen (1/4 + 3/4 + 5/4 + 7/4) = 16/4 = 4 metres, so it looks as if the particle’s speed increases the briefer the intervals of time we consider! What has gone wrong?
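A short numerical check (Python; the ‘1 metre in the first second’ figure is just the illustrative value used above) makes both points: the cumulative distances under the odd-number rule are the perfect squares, and naively re-applying the rule to half- or quarter-seconds gives a different, larger total after one second — the sign that something has gone wrong.

    def naive_total(unit_distance, intervals):
        """Total distance after `intervals` equal time-steps if the distances
        fallen in successive steps are as the odd numbers 1, 3, 5, ...,
        with `unit_distance` covered in the first step."""
        return sum(unit_distance * (2 * k - 1) for k in range(1, intervals + 1))

    print(naive_total(1, 3))     # 1 + 3 + 5 = 9 = 3**2 metres after 3 seconds
    print(naive_total(1.0, 1))   # 1 metre after 1 second, reckoning in whole seconds
    print(naive_total(0.5, 2))   # 2 metres after 1 second, reckoning in half-seconds
    print(naive_total(0.25, 4))  # 4 metres after 1 second, reckoning in quarter-seconds
    # The finer we slice the second, the bigger the 'distance' -- the rule
    # cannot simply be projected back onto smaller intervals of time.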

The answer is that Galileo’s ratio does not tell us anything about speeds as such, only distances, and we cannot just project back the observed ‘speed’ evaluated over a given interval because this speed has maybe been changing during the interval considered. As a matter of common observation a falling body falls faster and faster as time goes on, so it does not have a fixed speed over a given interval. When Galileo spoke of the distances fallen, he was referring to the distance the body had fallen by or at the end of the interval. And like practically everyone else then and since, Galileo assumed that, in such a case,  the speed changed ‘continually’ (or continuously) rising from an initial speed, which the particle has at the very beginning of the interval, to a final speed which the particle attains only at the very close of the interval. So how do you work out the speed, and thus the distance traversed, during any interval of time? To get out a formula, it is seemingly no good just knowing the distances, and thus the speeds, of falling bodies over macroscopic intervals of time like seconds: we need to know the distances they fall in intervals of time too small for us to measure directly. Moreover, any error we make in an observed value over a relatively large interval like a second will most likely get magnified when we extrapolate forward to immense stretches of time, or backward to ‘microscopic’ time.
The ‘Law of Falling Bodies’ (that, during equal intervals of time, the distances fallen are as the odd numbers) takes for granted the following: (1) that, during free fall, a particle’s speed is always increasing; and (2) that the speed increases steadily ― does not, for example, increase and then decrease a little, or halt for a moment. We can take (1) as being based on observation. (2) is also based on observation in the sense that we do not notice any fluctuations (though there might well be some too small for us to pick up), and Salviati, Galileo’s spokesman in the Dialogue, after discussing the point, concludes that it is “more reasonable” to conclude that the increase is regular.
But this is not all. Not only does Galileo (and practically all physicists since his day) assume that there is a ‘steady’ increase (in the sense that the particle does not backtrack or even stay motionless for a moment) but that the accelerating particle takes on all possible intermediate speeds.  “The acceleration is made continuously from moment to moment, and not discretely (intercisamente) from one time to another “ (Galileo, op. cit. p. 266)  This implies, as Galileo well realized, that space and time are ‘infinitely divisible’ since speed is the ratio of distance to time ― “Thus, we may understand that whenever space is traversed by the moving body with a motion which began from rest and continues uniformly accelerating, it has consumed and made use of infinite degrees of increasing speed….”  (Galileo, op. cit. p. 266).
Galileo and one or two of his medieval predecessors resolved the ‘continuous acceleration problem’ by geometricizing it, even though it belonged to dynamics, the science of movement, while geometry (Euclidean geometry anyway) is the science of the changeless. As we all learned at school, the area of a triangle is half the height times the base. But most people have forgotten, if they ever knew, why the formula works. It works because, taking the simplest case, that of a right-angled triangle, you can cut off the upper ‘half-triangle’ and bring it round to form a rectangle. And the area of this rectangle is half the original height times the base. (Subsequently, we deal with non-right-angled triangles by showing that the area of a parallelogram between the same two parallels is that of the equivalent rectangle.)
What is more, we can (in imagination if not in practice) change a triangle into an equivalent rectangle no matter how tiny the triangle and resulting rectangle are. Distance is speed × time, so we can present distance as the ‘area under the curve’ of a speed/time graph, plotting speed vertically and time horizontally. Now, in the case under consideration, the speed is ‘always’ changing and so has a different value at every single instant no matter how brief (does anyone really believe this can be the case?). Also, provided the time interval is ‘brief’, we can, without too much exaggeration, treat the speed as constant and equal to the average height of the two uprights, (h1 + h2)/2.

We can now measure this narrow rectangle: its area is ((h1 + h2)/2) × (t2 – t1) and, doing this again and again, we can get out a total for the whole area. I shall not give the details of the derivation, which can be seen in any book on Mechanics or Calculus (Note 5), because I shall in a moment derive a formula without any appeal to geometry. Suffice it to know that the final result is the well-known equation of motion for the case of constant acceleration from rest, s = ½gt², where s is the distance and g, the acceleration due to the gravitational attraction of the Earth, is a constant that has to be determined by observation.
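For what it is worth, the rectangle-summing procedure can be imitated numerically; the little sketch below (Python, with g ≈ 9.8 m s–2 purely for illustration) chops one second of supposedly ‘continuous’ acceleration into narrow strips, treats the speed as constant across each strip, and recovers ½gt².

    g = 9.8          # illustrative value of the acceleration due to gravity (m/s^2)
    t_final = 1.0    # fall from rest for one second

    def distance_by_strips(strips):
        """Approximate the distance fallen by summing narrow rectangles,
        taking the speed across each strip as the average of its two edges."""
        dt = t_final / strips
        total = 0.0
        for i in range(strips):
            v_start = g * (i * dt)          # speed at the start of the strip
            v_end   = g * ((i + 1) * dt)    # speed at the end of the strip
            total += (v_start + v_end) / 2 * dt
        return total

    for strips in (1, 10, 1000):
        print(strips, distance_by_strips(strips))   # 4.9 in every case
    print(0.5 * g * t_final**2)                      # (1/2) g t^2 = 4.9

Because the speed here grows linearly with time, the average-of-edges rectangle already gives the exact area, which is precisely why the triangle argument works in this case.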
There can be no doubt that the formula works, so it must be true, or very nearly true. But is the normal manner of deriving it acceptable? To me it is not because, as always in calculus, there is the peculiar procedure of first of all stating that some quantity (speed in this case) is ‘always’ changing, and then, a moment later, turning it into a constant while claiming that this does not matter because it is only a constant for such a short period of time! It is like being told that so-and-so is really a very nice guy and, when you object that on every occasion you meet him he is despicable, being told that this doesn’t matter because, every time you meet him, it’s only for a ‘very short time’!
Galileo, of course, did not have the modern concept of a ‘limit’ in the mathematical sense since it was only evolved, and very painfully at that, during the late nineteenth century.  But, contrary to what most people assume, the modern mathematical treatment using limits does not so much resolve the conceptual problem as make it technically irrelevant. But there is a cost to pay : the whole discussion has been removed from the domain of physical reality where it originated and where it belongs.
Is there any other way of tackling the problem?  Yes, I believe there is. We can simply suppose that when we continually reduce the intervals of time, we eventually arrive at  an interval of time so small that the particle does not accelerate (or even move) at all ― since there is not enough ‘time’ for it to do so. This is the ‘finitist’ hypothesis which is the kicking-off point for Ultimate Event Theory.
However, before passing on to the UET treatment, we ought to see what Newton made of the problem. He is over generous to Galileo when he states right at the beginning of his Principia that “By the first two laws [of  motion ] and the first two Corollaries, Galileo discovered that the descent of bodies varied as the square of the time and that the motion of projectiles was in the curve of a parabola” (Newton, Principia, Motte’s translation p. 21). If this really is the case, Galileo did not express himself as clearly and succinctly as Newton and, above all, did not explain why this is so. Galileo does speak of the ‘heaviness of bodies’ but does not quite manage to conceive of gravity in the  Newtonian sense.
By Newton’s first two Laws, only a force can change a body’s state of rest or uniform straight-line motion, and since a falling body accelerates there must be a force at work. Since the force (that of gravity) is permanent and does not get ‘used up’ like most forces we are familiar with, a body is repeatedly accelerated and, by Newton’s First Law, repeatedly retains the extra velocity it acquires. This means the acceleration cannot be other than constant :
“When a body is falling, the uniform force of its gravity acting equally, impresses, in equal intervals of time, equal forces upon the body, and therefore generates equal velocities (Note 6) … And the spaces described in proportional times are as the product of the velocities and the times; that is, as the square of the times.” ( Newton, Principia, p. 21)
In effect, a falling body is on the receiving end of a repeating force which moves it from one state of uniform velocity (which it would keep for ever if subject to no further outside impulse) to the next. Newton thus, as no one had done before him, derived the observed phenomenon of constant acceleration in a falling body from just two assumptions, his first two Laws of Motion.

Treatment of Free Fall in Ultimate Event Theory

When deriving an equation, especially a well-known one, one must beware of circular reasoning and assuming what we wish to prove. So, let us lay our hands on the table and declare what basic assumptions we require.
First of all, we need to assume that, in cases of free fall, the body ‘keeps on accelerating’, i.e. goes faster and faster as time goes on. This assumption is, in Newton, an Axiom, or an immediate deduction from an Axiom, but it was originally  the fruit of observation. For there might conceivably be worlds where falling bodies fall at a constant rate, i.e. , don’t speed up as they fall. However, it seems to be generally true.
Secondly, we need to assume that a falling body, while its speed is variable, has a constant or very nearly constant acceleration due to the Earth’s attraction ― this follows from Newton’s Laws of Motion, which are presented as ‘Axioms’ though ‘amply supported by experiment’. If Newton is to be believed, the acceleration of a falling body varies inversely with the square of the distance from the Earth’s centre, i.e. the nearer you are to the Earth the more rapidly your speed increases (until it reaches a terminal velocity because of air resistance). Also, because the Earth is not a perfect sphere, there will be slight variations according to where you are on the Earth’s surface. However, over the sort of distances we are likely to be concerned with, there will be little, if any, observable variation in the value of g, the constant of acceleration.
        A third assumption is, however, also necessary. Either we suppose, as Galileo and Newton did (Newton with some hesitation), that ‘time and space’ are ‘infinitely divisible’, or we suppose that they are not and draw the necessary consequences. Ultimate Event Theory presupposes that all events are made up of ‘ultimate events’ (which cannot be further divided) and that there is an ‘ultimate’ interval of time within which no change of position or exchange of energy is possible. When dealing with a ‘sequence’ of events, there is always a first event, and this event has occurrence on a particular spot at a particular moment. There can be change but not continuous change (of position or anything else you like to mention). The dimensions of the three-dimensional spot where an ultimate event has occurrence are currently unknown, as also is the extent of the elementary interval of time which I denote a ksana (from the Sanscrit for ‘instant’). All we can say about these fundamental dimensions of Space/Time is that they are ‘very small’ compared to the dimensions we deal with in the observable macroscopic ‘world’. It may seem that there is not much one can do with such a hypothesis, but, surprisingly, there is, since the whole art of Calculus and Calculus-like procedures is to start off with unknown microscopic quantities and eventually dispense with them while, miraculously, ending up with the sort of values we can work with.
Fourthly, if our eventual formula is to be of any use, we have to assume that it is possible to determine an ‘initial state’, ‘initial distance’, ‘initial rate of change’ or the like. In this case, we have to assume that we can make at least a reasonable guess at the size of g, the constant of acceleration due to the proximity of the Earth.
(Incidentally, to avoid being pedantic, I shall sometimes lapse into the  ‘object-language’ of conventional physics, speaking of  ‘particles’ instead of events. But it is to be understood that what is ‘really’ happening is that ultimate events are appearing, disappearing and reappearing at particular spots on the Locality and that the ‘motion’ inasmuch as it exists at all is discontinuous.)
We have then a ‘particle’ (repeating ultimate event) displacing itself relative to some landmark event-cluster considered to be ‘at rest’, i.e. repeating regularly. Our particle (event-cluster) has received an impulse that has dislodged it from its previous position since, by hypothesis, it is now ‘in motion’. According to Newton’s Laws (Laws of Motion + Law of Universal Attraction) this impulse in a particular direction will not go away but will be repeated indefinitely ‘from moment to moment’ without diminution. Since the effect of an outside force is to accelerate the particle, its speed will increase but will, by the Law of Gravity, increase by a constant amount at each interval of time since the force responsible for the acceleration does not change appreciably over small distances. The resulting increase in distance fallen will be the same as the displacement ‘during’ the first interval since the ‘particle’ starts from rest. In the terms of UET, at the ksana labelled 0, the start point, the particle is at zero distance from a particular grid-point ― it is actually at this point ― and at the ksana labelled 1 it is m spaces further to the right (or left) of the original spot (strictly, this spot’s repeat). The initial ‘speed’, which is also the constant acceleration per ksana, is m spaces per ksana where m is an integer. At all subsequent ksanas, the ‘particle’ keeps this ‘speed’ and so displaces itself a further m spaces each time in the same direction. If there were no further force involved, it would keep this current speed indefinitely, but since the gravitational ‘pull’ is repeated it moves a further m spaces at each ksana.

ksana 0            . 

ksana 1            ←    m spaces  → .

ksana 2            ←    m spaces  →   .  ←      m spaces         →.

The equivalent of that will-o’-the-wisp, ‘instantaneous speed’, in Ultimate Event Theory is simply the current reappearance rate of the ultimate event, and in this case the current ‘speed’ is the difference between the current position (relative to a fixed point of origin) and the previous position, divided by unity. The current acceleration (increase in speed) in this case is the difference between the current speed and the previous speed, and this difference, by Newton’s Law, remains the same because the force involved remains the same (or very nearly the same). Defined recursively, which is the preferred method in UET, the displacement ksana by ksana is
d(0) = 0; d(1) = m; d(n+1) = d(n) + d(1)   n = 1, 2, 3….

  ksana     distance from previous position     distance from start position

  0                        0                                  0
  1                        m                                 1m
  2              (m + m) = 2m                     (1 + 2)m = 3m
  3             (2m + m) = 3m                 (1 + 2 + 3)m = 6m
  4                       4m                 (1 + 2 + 3 + 4)m = 10m
  ……………………………………………..
  n                       nm                 {1 + 2 + 3 + … + n}m

         Since the sum of the natural numbers up to n is the relevant triangular number, n(n+1)/2 = (n²/2) + (n/2),
the total distance traversed at the nth ksana is thus

        m{1 + 2 + 3 + … + n} = m·n(n+1)/2
                             = m(n² + n)/2 = (mn²)/2 + (mn)/2
        This total displacement has taken place in n ksanas where n is a positive integer, for example 3, 57, 1,456 or 10⁶⁸.
We, however, do not reckon in ksanas, this interval being so brief that our senses do not recognize it, even when our senses are extended and amplified by modern instruments. What we can  say, however, is that there are n* ksanas to a second, where n* , an unknown constant, is a very large positive integer.
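A minimal sketch of the table above (Python; the value m = 3 and the number of ksanas are arbitrary illustrative integers) builds the per-ksana displacements recursively and checks the running totals against the closed form m·n(n+1)/2.

    def uet_distances(m, ksanas):
        """Per-ksana displacements d(k) = k*m and running totals, following
        the recursive definition d(1) = m, d(k+1) = d(k) + d(1)."""
        per_ksana, totals, d, total = [0], [0], 0, 0
        for _ in range(ksanas):
            d += m                 # constant extra impulse of m spaces each ksana
            total += d
            per_ksana.append(d)
            totals.append(total)
        return per_ksana, totals

    m, n = 3, 8                            # illustrative values only
    per_ksana, totals = uet_distances(m, n)
    print(per_ksana)                       # 0, m, 2m, 3m, ...
    print(totals)                          # 0, m, 3m, 6m, 10m, ... (triangular numbers times m)
    print(m * n * (n + 1) // 2 == totals[n])   # the closed form agrees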

If we wish to convert our formula to seconds we must divide by n*, so we now have

        d(n/n*) = (mn²)/(2n*) + (mn)/(2n*)

If t is in seconds n = n* t or t = n/n*  since there are n* ksanas to a second. So,

 

        d(t) = m(n*t)²/(2n*) + m(n*t)/(2n*) = (mn*t²)/2 + (mt)/2     …..(i)

Now, if we have been able to deduce, by observation, that at the end of every second there is a constant acceleration of g metres per second which takes a full second to have effect, the speed at the end of the very first second will be g metres per second, where g is a known constant ― approximately known anyway. The speed at the n*th ksana is n*m spaces per ksana, which means that n*m corresponds to g, or m = g/n*. Substituting this into the above, we obtain
        d(t) = (gn*t²)/(2n*) + (gt)/(2n*) = ½gt² + ½(gt)/n* ………..(ii)

        Substituting t = 1, 2, 3, … we obtain
        d(t = 1) = ½g + ½g/n* = (g/2) + (g/2)/(n*)
        d(t = 2) = 4(g/2) + 2(g/2)/(n*) = 2g + g/n*
        d(t = 3) = 9(g/2) + 3(g/2)/(n*)
        d(t)     = t²(g/2) + t(g/2)/(n*)

         These are the total distances up to the end of the first, second, t th second. The actual distances traversed during each second can be obtained by subtraction. Thus, the distance traversed during the second second is

(4 – 1)(g/2) = (3)(g/2) and for the third (9 – 4)(g/2) 

If we examine the ratios, we see that the distance covered ‘during’ the second second is
        {4(g/2) + 2(g/2)/(n*)} – {(g/2) + (g/2)/(n*)}
        = 3(g/2) + (g/2)/(n*)

which is very nearly 3(g/2) if n* is large so the ratio is very nearly 3 : 1      Similarly, the increases during subsequent seconds compared to those in the preceding seconds will be approximately    5 : 3; 7 : 5  and so on.
Note that it was not necessary to say anything about ‘areas under a curve’, ‘infinitesimally small’ intervals or, for that matter, limits as n* → ∞. The formula d(t) = ½gt² + ½(gt)/n* is not a limit: it is an exact formula involving two constants, g and n*, one of which is known (at least approximately) and the other of which is currently not known because so far it has proved to be unobservable. Whether one actually takes a value such as n* into account (on the occasions when it is actually known) depends on the level of precision at which one is working, but this is a matter for the engineer or manufacturer to decide.

Determination of the value of n*

Let us return to the general formula in t
        d(t) = ½gt² + ½(gt)/n*

         If now we set t = n*  ― and there is no reason why we should not since n* is a number, even if we do not know what it is ― we have
        d(n*) = ½g(n*)² + ½(gn*)/n* = ½g(n*)² + ½g

In other words, the ‘extra’ distance, which we have tended to discount as being negligible, is now equal to the known increment of ½ g  . And this means that we can, at least in principle, determine the value of n*  from data and spreadsheets  —  for it will be that value of t which gives an ‘additional’ increment of g/2.

Now, n* is probably far too large a number for this to be currently possible even with today’s computers, but we do not have to leave it at that. We can work with nanoseconds instead of seconds and the relation will still be true, although we will now be determining n*/10⁹, which is probably still a large number. Of course, at this level of precision we shall have to be much more careful about the values of g ― will probably have to make further observations to determine them ― and perhaps will also have to take into account the effects of General Relativity. However, since the instruments now available are capable of detecting the tiny Mössbauer effect, this should not by any means be an impossible task. I confidently guess that within the next twenty years science will come up with a good estimate of the value of n* (the number of ksanas in a second) and that n* will take its rightful place alongside Avogadro’s Number and G, the universal constant of gravitation, as one of the most important numbers in science (Note 8).
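Purely as an illustration of the principle (the value of n* below is a made-up placeholder, chosen small enough for ordinary floating-point arithmetic to show the effect, and the ‘data’ are synthetic), one could in principle recover n* by comparing measured distances with the classical ½gt² and isolating the residual term ½gt/n*.

    g = 9.8          # illustrative value of g (m/s^2)
    n_star = 1e6     # pretend value of the number of ksanas per second; the real
                     # n* is presumably vastly larger and the residual far smaller

    def uet_distance(t):
        """Distance fallen after t seconds according to d(t) = (1/2)g t^2 + (1/2)g t / n*."""
        return 0.5 * g * t**2 + 0.5 * g * t / n_star

    # Given an 'observed' distance, the residual over the classical formula isolates n*:
    t = 2.0
    residual = uet_distance(t) - 0.5 * g * t**2     # equals (1/2) g t / n*
    estimated_n_star = 0.5 * g * t / residual
    print(estimated_n_star)                          # recovers the pretend n* (approximately)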

Problem about the formula for free fall

There is, however, a problem with the above. According to Newton’s Third Law every action calls forth an equivalent and oppositely directed reaction, and this means there is a fixed order of events since action and reaction cannot be strictly simultaneous, at any rate in Ultimate Event Theory (see post on Newton’s Third Law). Certainly there is a definite sequence of events when we are dealing with contact forces, which is why Newton used the terms ‘action’ and ‘reaction’ in the first place. But is gravity, action at a distance, the great exception to the rule? Seemingly so, because all my attempts to tinker around with the derivation ended up with something so far from the usual formula (½gt²) that it had to be wrong, because the latter has been shown by countless experiments to be at least approximately correct. If, for example, you envisage gravitational attraction as similar to a contact force, there will be a delay of at least one ksana before the next force is experienced by the falling body. That is, the Earth pulls the falling body, the latter feels the effect and in return exerts a pull on the Earth, while it continues for at least one ksana at its current reappearance rate. This gives

  ksana          distance at current ksana          Total distance

  0                         0                              0
  1                      1(g/2n)
  2                      1(g/2n)                        1(g/2n)
  3                      2(g/2n)                        3(g/2n)
  4                      2(g/2n)                        6(g/2n)
  5                      3(g/2n)
  6                      3(g/2n)                       12(g/2n)
  ……………………………………………..
  m–1 (odd)         (m/2)(g/2n)
  m (even)          (m/2)(g/2n)          2{1 + 2 + 3 + … + (m/2)}(g/2n)

The distance, after due substitutions, turns out to be half the previous one, as one might expect, namely ¼gt² + (gt)/(4n*), and this must be wrong.
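The effect of the one-ksana delay can be checked with a few lines (Python, toy numbers): if each new increment of ‘speed’ only takes effect every other ksana, the accumulated distance over many ksanas comes out at roughly half the undelayed total, which is where the unwanted factor of ½ (¼gt² instead of ½gt²) comes from.

    def total_distance(increment, ksanas, delay):
        """Accumulate displacement when the per-ksana 'speed' grows by `increment`
        every `delay` ksanas instead of every ksana (toy model of the delayed pull)."""
        speed, total = 0, 0
        for k in range(1, ksanas + 1):
            if k % delay == 0:
                speed += increment      # the pull is only 'felt' every `delay` ksanas
            total += speed
        return total

    ksanas = 10_000
    undelayed = total_distance(1, ksanas, 1)   # increment applied every ksana
    delayed   = total_distance(1, ksanas, 2)   # increment applied every second ksana
    print(delayed / undelayed)                  # ~0.5: the delayed total is about half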

This means that gravity must be treated as simultaneously affecting both bodies ‘at once’, i.e. within the same ksana, strange though this seems. Newton himself was worried by the issue since it implied that gravity propagated itself ‘instantaneously’ across the entire universe (Note 7)! It is, however, probably wrong to conceive of gravity in this way: in General Relativity it is the space between bodies that contracts rather than the separate bodies sending out impulses to each other. In Ultimate Event Theory terms, this makes gravitational phenomena states of the underlying substratum (that I call the Locality) rather than interactions between event-clusters ‘on’ the Locality. This issue will be gone into in more depth when I come to discussing the implications of Relativity for Ultimate Event Theory.

S.H.

Note 1   Galileo, who apparently never did throw iron balls off the top of the Leaning Tower of Pisa, does give a figure at one point: “an iron ball of one hundred pounds…falls from a height of one hundred yards in five seconds” (Galileo, Dialogue Concerning the Two World Systems). But this figure is so wildly off that one of his own pupils queried it at the time:

“[Galileo’s friend] B. Baliani wrote to him to question the figure and in reply, Galileo [said] that for the refutation of the statement under discussion, the exact time was of no consequence. He then went on to explain how one might determine the acceleration due to gravitation experimentally, if Baliani cared to trouble with it. Galileo does not assert that he had ever done so, and since he was then blind, it is improbable that he ever did.”

Drake, Notes p. 561 op. cit.

Note 2  Galileo does, however, go on to say cryptically that “the spaces passed over by the  body starting from rest have to each other the ratios of the squares of the times in which such spaces were traversed” (Dialogue, p. 257)

Note 3 Galileo discusses this possibility but rejects it as unlikely.

Note 4  Speed is not a basic SI unit, being simply the ratio of distance to time (metres to seconds). Curiously though, while speed is not something of which we have direct experience, the same is not true of momentum mv, mass × speed. All collisions involve forcible and abrupt changes of momentum that we certainly do experience, since such changes are often catastrophic (car crashes, tsunamis). I believe there are systems of measurement which take momentum mv (kg m s–1) or force m dv/dt = ma (kg m s–2) as a primary unit.

Note 5 In the more sophisticated modern derivation of the formula, we ‘squeeze’ the area between two limits, one an overestimate and the other an underestimate, and we chop up the interval into rectangles of variable width. But the basic strategy is the same as that of Galileo and Oresme before him.

Note 6  Newton presumably meant, ‘generates equal supplementary velocities’, since, as we have had it drummed into our heads in school, force produces accelerations, not velocities  as such. It is amusing to see one of the greatest minds in the history of science making a slip that would have earned him a reprimand in a modern school. This is actually not the only place in the Principia where Newton confuses velocity and acceleration; maybe part of the problem was that there did not exist a suitable Latin term (he wrote the Principia in Latin).

Note 7  Modern textbooks generally state that gravitational attraction travels at the speed of light.

Note 8  Unfortunately, the chances are that I shall not live to see this.

Now, lengths and distances are things we feel we know about, things accessible to the senses. Likewise, time in the sense of a regular sequence of audible or visual stimuli, the ticking of a clock, the beating of a pulse, the regular flashing of a light, is also something that falls within our sensory experience. But speed? Speed cannot be seen or heard and is, most of the time, extremely difficult to determine precisely: we experience the effects of speed, but not speed itself. The difficulty with problems like that of falling bodies is that we start with the known, recorded distances, familiar audible events and so on, but then, in order to determine further distances, have to work out speeds and deduce distances we cannot hope to evaluate directly from a formula which deals in changes of distance, not the distances themselves (Note 3).
There can be no doubt that the formula works, so it must be true, or very nearly true. But is the normal manner of deriving it entirely acceptable? To me it is not, because there is first of all a lingering doubt in my mind as to whether it is legitimate to represent distances, which are real things, by ‘areas under a speed/time curve’, since this involves plotting velocities which themselves depend on distances and have no reality of their own. But a more serious objection is that, as always in traditional calculus, there is the peculiar procedure of first of all stating that some quantity (speed in this case) is ‘always’ changing, but a moment later turning it into a constant and pretending that this does not matter because it is only a constant for such a short period of time! It is like saying someone is really a very nice guy, but that on every occasion you meet him he will be nasty, and that this doesn’t matter because you only ever see him for a very short time! The mathematical ‘passage to a limit’ does not so much resolve the conceptual problem as make it irrelevant, but only at the price of removing the whole discussion from the domain of physical reality where it originated and where it belongs. And in cases where we know that there really is a cut-off point (a smallest possible value for the independent variable), this sort of manipulation can, and sometimes does, give incorrect answers, which is why, increasingly, problems are dealt with by slogging them out numerically with computers rather than trusting blindly to analytical formulae. The dreadful truth that ought to be inscribed in letters of darkest black in every calculus textbook is that the famous mathematical limit is, in almost all cases, never attained, and those of us who live in the real world need to beware of this since the cut-off point may be much closer than the mathematics makes it appear. Generally, engineers simply evaluate some quantity to the level of precision they require and don’t bother about the limiting value.
But is there any other way of tackling the problem?  Yes, I believe there is, deriving the equation of motion directly  from Newton’s Laws while adding in the ‘finitist’ assumption on which most of Ultimate Event Theory is based (Note 4). That is, one can simply suppose (what common sense tells us must be the case) that when we continually reduce the intervals of time, we eventually arrive at  an interval of time so small that the particle does not accelerate (or even move) at all ― since there is not enough ‘time’ for it to do so.

Treatment of Free Fall in Ultimate Event Theory

Suppose we have observed that a particle (event-cluster) falling from rest has, or at any rate appears to have, a constant acceleration of g metres/second which takes 1 second to take effect. So, in ‘thing-speak’, if the particle starts from rest, it has moved g metres by the end of the first second.
In Ultimate Event Theory there is always a first event, whether observed or not, and, in general, a first cause of ‘motion’. Since the ‘particle’ is in motion, we conclude that, in accordance with Newton’s First Law (or its UET equivalent), the particle has received an impulse that has dislodged it from its previous position, and that in the first ksana we are concerned with, the ‘particle’ has been displaced by a certain number of spaces (grid-positions on the Locality) in some given direction. We do not know, and do not need to know, what this initial number of spaces is, but we can call it g/n metres since there are, by hypothesis, n ksanas to a second. (The quotient g/n is understood to be taken to the nearest whole number.) This is the ‘current velocity’, the equivalent of ‘instantaneous velocity’, namely the ‘velocity’ that the ‘particle’ would continue to have from now on if it were not interfered with by any outside forces. In other words, were this rate of displacement to remain unchanged, ksana by ksana, the particle would traverse n(g/n) = g spaces in n ksanas, which we take to be the equivalent of a second. As for n, all we really need to know at this stage is that this number ‘exists’, i.e. represents a real number of spaces, and that it is a very large number.
The particle (event-cluster) does not, however, as it happens in this case, maintain its current ‘velocity’ (rate of appearance) because of the influence of a massive event-cluster nearby. Since gravitation is a permanent force, the particle in question will keep on receiving the same impulse ksana after ksana, so its overall rate of displacement will increase at each stage, but by a fixed amount (g/n) from one ksana to the next (Note 5).
At the ksana we label 0 the particle is thus at zero distance from some grid position and at (not ‘during’) ksana 1 it is at the equivalent of g/n metres from this point. At each ksana it maintains its previous rate of appearance but with a constant supplementary distance added on because the effect of the outside force is repetitive and inexhaustible. We have

ksana        distance in metres traversed         total distance so far
             relative to previous ksana

  0                        0                                  0
  1                      1 g/n                              1 g/n
  2            (1 + 1) = 2 g/n                    (1 + 2) = 3 g/n
  3            (2 + 1) = 3 g/n                (1 + 2 + 3) = 6 g/n
  4            (3 + 1) = 4 g/n            (1 + 2 + 3 + 4) = 10 g/n
  ……………………………………………………
  m                      m g/n            {1 + 2 + 3 + …… + m} g/n

Now the sum of the natural numbers up to m is the relevant triangular number, m(m + 1)/2 = (m²/2) + (m/2).
The total distance traversed at the mth ksana is thus

        (g/n) {1 + 2 + 3 + …… + m}  =  (g/n) × m(m + 1)/2  =  (g/2) (m² + m)/n metres

This distance has been accomplished in m ksanas.
Now, since there are n ksanas to a second, to work out how many metres the particle has traversed in t seconds we have to convert to seconds, i.e. divide by n. Setting m = nt, the distance traversed in t seconds is

        (g/2) × (m² + m)/n²  =  (g/2) × (nt² + t)/n  =  ½gt² + ½g(t/n)

Now, if we take the limit as n → ∞, this becomes the familiar ½gt². For normal situations this is doubtless accurate enough, but I can conceive of cases where, if t were large enough, the extra term might need to be taken into consideration ― indeed this would be an occasion to get an estimate of n. Otherwise, what we can say is that the distance traversed is always > ½gt².
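For those who like to check such things numerically, here is a small sketch of the arithmetic above, with a purely illustrative value for n (the real n is unknown and certainly far larger): summing the table directly, ksana by ksana, and then dividing by n reproduces ½gt² + ½g(t/n) to within rounding error.

```python
# Sketch of a direct check on the formula above. Following the text, the
# displacement at the k-th ksana is k*(g/n), there are n ksanas to the second,
# and the grand total is divided by n to convert to metres. The value of n is
# a stand-in chosen only for illustration.

g = 9.81
n = 1000          # hypothetical number of ksanas per second
t = 2             # whole seconds of fall
m = n * t         # number of ksanas elapsed

total = sum(k * (g / n) for k in range(1, m + 1)) / n   # summed table, then / n
formula = 0.5 * g * t**2 + 0.5 * g * t / n              # (1/2)g t^2 + (1/2)g (t/n)

print(total, formula)   # the two agree, up to floating-point rounding
```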

        Note that I have not been obliged to make any appeal to ‘areas under a curve’ or to velocities as such, it being only necessary to add distances. Recursively, writing d(k) for the distance traversed during the kth ksana (k rather than n, to avoid confusion with the number of ksanas in a second), the behaviour of the ‘particle’ is given by the formula

                d(k+1) = d(k) + [d(k) – d(k–1)] = 2d(k) – d(k–1)

                d(0) = 0;   d(1) = g/n

There is thus a steady increase of d(1) = g/n in the distance traversed per ksana, and this is why the odd numbers appear over successive equal intervals of time : the odd numbers 1 + 3 + 5 + …. + (2n – 1) add up to n², and y = x² is the equation of a parabola, which is the curve formed by joining the dots.
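A sketch of the same calculation driven by the recursion rather than by the closed formula (again with a purely illustrative n): generating the per-ksana displacements from d(k+1) = 2d(k) – d(k–1) and totalling them second by second gives successive one-second distances very nearly in the ratio 1 : 3 : 5 : 7, Galileo’s odd-number rule.

```python
# Sketch (illustrative n again): generate the per-ksana displacements from the
# recursion d(k+1) = 2*d(k) - d(k-1) with d(0) = 0, d(1) = g/n, then total them
# second by second. The successive one-second distances come out very nearly
# in the ratio 1 : 3 : 5 : 7.

g = 9.81
n = 1000                      # hypothetical ksanas per second
seconds = 4

d = [0.0, g / n]              # d[k] = displacement during the k-th ksana
for k in range(1, seconds * n):
    d.append(2 * d[k] - d[k - 1])

totals = []
for s in range(1, seconds + 1):
    totals.append(sum(d[: s * n + 1]) / n)    # metres fallen after s seconds

per_second = [totals[0]] + [totals[i] - totals[i - 1] for i in range(1, seconds)]
print([round(x / per_second[0], 3) for x in per_second])   # approximately [1, 3, 5, 7]
```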

Correction to formula for free fall

There is, however, a problem with the above. According to Newton’s Third Law, every action calls forth an equivalent, oppositely directed, reaction, and this means there is a fixed order of events since action and reaction are not strictly simultaneous (see post on Newton’s Third Law). This is certainly the case for contact forces, but I had thought of making an exception for gravity, as Newton himself seems to have done, since he is on record as saying that it propagates instantaneously, though he had some difficulty believing this (Note 5). If we consider that, in the case of free fall under gravity, there should still be a succession of events, the ‘action’ will only take effect at every other ksana. I had thought of making it a general principle that an ultimate event always occupies the same position for at least two consecutive ksanas, otherwise it cannot be said to have any proper velocity. I thus had the particle (event-cluster) keep the same rate of displacement for two ksanas before being once more accelerated. This worried me at first since the result seemed completely different from ½gt², which must be more or less correct. However, all was well after all : the final result merely differed in the second term.

This time, for reasons that will become apparent, I set 1 second as equal to 2n ksanas (and not n) so the initial displacement is g/2n metres.

ksana        distance in metres traversed         total distance so far
             relative to previous ksana

  0                        0                                  0
  1                      1 g/2n                             1 g/2n
  2                      1 g/2n                             2 g/2n
  3                      2 g/2n                             4 g/2n
  4                      2 g/2n                             6 g/2n
  5                      3 g/2n                             9 g/2n
  6                      3 g/2n                            12 g/2n
  ……………………………………………………
  m–1                (m/2) g/2n
  m (even)           (m/2) g/2n           2{1 + 2 + 3 + …… + (m/2)} g/2n

The total distance traversed at the mth ksana, where m is even, is

        (g/2n) × 2{1 + 2 + 3 + …… + (m/2)}  =  (g/2n) × {(m/2)² + (m/2)}

                                             =  (g/2n) {(m²/4) + (m/2)} metres

Substituting in m = 2nt (since there are now 2n ksanas to a second) and dividing by n we obtain

        (g/2n) {(2nt)²/4 + nt/2} × 1/n  =  (g/2)(t² + t/2n)

                                        =  ½gt² + gt/4n
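As a check on the amended table (once more with a purely illustrative n), summing the held displacements 1, 1, 2, 2, 3, 3, … times g/2n up to an even ksana m does give the closed form (g/2n)(m²/4 + m/2) used above.

```python
# Sketch checking the amended table: with the displacement held constant for
# two ksanas at a time (1, 1, 2, 2, 3, 3, ... times g/2n), the running total
# at an even ksana m should equal (g/2n) * (m**2/4 + m/2).

from math import ceil

g = 9.81
n = 500                         # hypothetical, for illustration only

for m in (2, 4, 6, 10, 1000):   # even ksana numbers
    summed = sum(ceil(k / 2) * (g / (2 * n)) for k in range(1, m + 1))
    closed_form = (g / (2 * n)) * (m**2 / 4 + m / 2)
    print(m, summed, closed_form)   # the two columns agree
```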

_________________________________________________

Note 1   For the man who is hailed as the inventor of the experimental method, Galileo is surprisingly cavalier about the details. He actually gives a figure at one point : “an iron ball of one hundred pounds…falls from a height of one hundred yards in five seconds” (Galileo, Dialogue, p. 259), which is so wildly off that one of his own pupils queried it.
“[Galileo’s friend] B. Baliani wrote to him to question the figure and in reply, Galileo [said] that for the refutation of the statement under discussion, the exact time was of no consequence. He then went on to explain how one might determine the acceleration due to gravitation experimentally, if Baliani cared to trouble with it. Galileo does not assert that he had ever done so, and since he was then blind, it is improbable that he ever did.”
     (Drake, Notes, p. 561, op. cit.)

Note 2   To his credit, Galileo does consider the possibility that the speed of a falling body fluctuates erratically rather than increases uniformly though he discounts this as unlikely : “It seems much more reasonable for it [the falling body] to  pass first through those degrees nearest to that from which it set out, and from this to those farther on” (Galileo, op. cit. p. 22).

Note 3    Speed is not a basic SI unit, being simply the ratio of distance to time (metres/second). Interestingly, though speed is not something we have direct experience of, this seems not to be true of momentum mv, since all collisions are the result of a forcible change of momentum, often dramatic. It is possible, indeed tempting, to consider that a particle (event-cluster) really does possess momentum (whereas it never really ‘has’ speed) and, I believe, there are systems of measurement which take momentum mv (kg m s–1) or force m dv/dt = ma (kg m s–2) as a primary unit.

Note 4   This is most likely the reason why Newton continued to use the cumbersome apparatus of geometric ratios in his Principia instead of his ‘Method of Fluxions’, which he had already invented. Sticking to ratios and geometry sidesteps the problem of infinite divisibility and the reality of ‘indivisibles’, concerning which Newton had serious doubts.


Note 2  “In the Methodus Fluxionum Newton stated clearly the fundamental problem of the calculus: the relation of quantities being given, to find the relation of the fluxions to these, and conversely.”  Boyer, The History of the Calculus p. 194.

Note 3   By ‘fluxion’ Newton meant what we call the ‘rate of change’ or derivative.

 

Note 3   In the more sophisticated derivation of the formula, we ‘squeeze’ the area between two limits, one an overestimate and the other an underestimate, and, moreover, we chop up the interval into rectangles of variable width, not all the same. But the basic strategy is the same as that of Galileo and Oresme before him.

 

In the ‘infinitist’ treatment of Calculus, we always have to calculate velocities when what we are interested in is distances. In UET, at any rate in this case, we can deal directly with distances which is as it should be since velocity is a secondary quality whereas distance (relative position) is not.

Note 4   Newton expresses himself rather too succinctly on the problem. He writes:

“When a body is falling, the uniform force of its gravity acting equally, impresses, in equal intervals of time, equal forces upon the body, and therefore generates equal velocities; and in the whole time impresses a whole force, and generates a whole velocity proportional to the time. And the spaces described in proportional times are as the product of the velocities and the times; that is, as the square of the times.”     

                        Newton, Principia, Motte’s translation  p. 21

This is rather obscure and even wrong to modern ears, for Newton speaks of “equal forces…generating equal velocities” when every schoolchild today has had it drummed into them that force produces acceleration rather than velocity as such. However, Newton does not seem to have had a suitable (Latin) word for ‘acceleration’ and we must, I think, understand him to mean “equal forces generate equal supplementary velocities”, while assuming, in accordance with the First Law, that a body retains the velocity it already has.

Note, however, that Newton, like Galileo, does not give us an equation of motion but still talks the geometrical language of proportion ― and this is typical of the way the entire Principia is written, even though Newton had already invented his version of the Calculus, the Method of Fluxions.

Note 5  Modern textbooks generally state that gravitational attraction travels at the speed of light but there is some doubt as to whether one can really talk about gravity travelling anywhere since it is the intervening space that contracts according to General Relativity.