
In an earlier post I suggested that the vast majority of ultimate events appear for a moment and then disappear for ever: only very exceptionally do ultimate events repeat identically and form an event-chain. It is these event-chains, made up of (nearly) identically repeating clusters of ultimate events, that correspond to what we call ‘objects’ — and by ‘object’ I include molecules, atoms and subatomic particles. There is thus, according to the assumptions of this theory, ‘something’ much more rudimentary and fleeting than even the most short-lived particles known to contemporary physics. I furthermore conjectured that the ‘flicker of existence’ of ephemeral ultimate events might one day be picked up by instrumentation and that this would give experimental support to the theory. It may be that my guess will be realized more rapidly than I imagined since, according to an article in the February edition of Scientific American, an attempt is actually being made to detect such a ‘background hum’ (though those concerned interpret it somewhat differently).

“Craig Hogan, director of the Fermilab Particle Astrophysics Center near Batavia, Ill., thinks that if we were to peer down at the tiniest sub-divisions of space and time, we would find a universe filled with an intrinsic jitter, the busy hum of static. This hum comes not from particles bouncing in and out of being or other kinds of quantum froth. (…) Hogan’s noise arises if space is made of chunks. Blocks. Hogan’s noise would imply that the universe is digital.”     Michael Moyer, Scientific American, February 2012
Moreover, Hogan thinks “he has devised a way to detect the bitlike structure of space. His machine ― currently under construction ― will attempt to measure its grainy nature. (…) Hogan’s interferometer will search for a backdrop that is much like the ether ― an invisible (and possibly imaginary) substrate that permeates the universe”.
Various other physicists are coming round to the idea that ‘Space-Time’ is ‘grainy’, though Hogan is the first to my knowledge to speak unequivocally of a ‘backdrop’ permeated with ‘noise’ that has nothing to do with atomic particles or normal quantum fluctuations. However, the idea that the universe is a sort of giant digital computer, with these fluctuations as the ‘bits’, does not appeal to me. As I see it, ‘information’ only has meaning in the context of intelligent beings such as ourselves who require data to understand the world around them, make decisions and so forth. To view the universe as a vast machine running by itself and carrying out complicated calculations with bits of information (a category that includes ourselves) strikes me as fanciful, though it may prove to be a productive way of viewing things.      S.H.  30/08/12


Newton’s First Law states, in effect, that the ‘natural’ movement of a particle ― Newton says ‘innate’ ― is to travel in a straight line at constant speed. In consequence, any deviation from this ‘natural’ state requires explanation, and the conclusion to be drawn is that the particle has become subject to interference from an outside source (Newton’s Second Law).
After the square and the rectangle, the circle is probably the best known regular shape though there are precious few examples of true circles to be found in nature  ─ if we are so familiar with it, this is largely because of the wheel, one of the most ancient human inventions. Plato, enamoured of elegance and simplicity rather than actuality, considered that heavenly bodies must follow circular paths because the circle was the ‘most perfect’ shape (Note 1). Greek astronomy, as we know, dealt with departures from circular motion by introducing epicycles, circles within or on other circles. It would probably be possible, though hardly convenient, to trace out any regular curve in this manner and the Ptolemaic system proved to be a satisfactory way of calculating planetary orbits for nearly two thousand years.
However, for reasons that are probably physiological in origin, we do not feel at ease with anything other than straight lines, squares and rectangles, and a good deal of mathematics is concerned with the problem of ‘squaring the circle’ or, more generally, converting the areas and volumes of irregular shapes into so many squares. Newton, following Galileo, broke down circular motion into two straight-line components at right angles to each other: one tangential to a point on the circumference of a circle and the other along the radius from that point to the centre of the circle. He furthermore proposed that circular motion could be modelled in the following manner: the original tangential velocity does not change in magnitude but changes repeatedly in direction, and by the same amount each time. We do not normally view a change in direction as an ‘acceleration’ and Newton, writing in Latin, did not use the term. However, in Newtonian mechanics we consider any deviation from constant straight-line motion to be an ‘acceleration’ which, according to Newton’s Second Law, must be due to the action of an outside force. He coined the term centripetal or ‘centre-seeking’ force (after Latin peto, I seek) to characterise this force. What is more, since gravitational attraction is permanent and does not change in magnitude over relatively small distances, a particle subject to such a force would deviate repeatedly from its current straight-line motion, and deviate by the same amount each time. If the original velocity was ‘just right’, the particle would keep more or less at the same distance from the attracting body while perpetually changing direction: the result, if the attracting body was very much larger, being motion in a circle around that body.
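Newton’s decomposition can be illustrated numerically (a Python sketch of my own, with illustrative values; nothing in it is part of Newton’s text): over a small turn of the circle, the change in the tangential velocity vector points almost exactly along the radius towards the centre.

```python
import math

def velocity(r, omega, phi):
    """Tangential velocity vector at the point (r cos phi, r sin phi)
    for motion round a circle of radius r at angular speed omega."""
    return (-r * omega * math.sin(phi), r * omega * math.cos(phi))

r, omega, d_phi = 1.0, 1.0, 1e-4   # illustrative values
v1 = velocity(r, omega, 0.0)
v2 = velocity(r, omega, d_phi)
# change in velocity over the small turn d_phi
dv = (v2[0] - v1[0], v2[1] - v1[1])
# dv points almost exactly in the -x direction, i.e. from the
# point (r, 0) towards the centre O: the 'centre-seeking' deviation
print(dv)
```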
There can be little doubt that Newton based his mathematical treatment of circular motion on personal experience while generalising the principles involved, by a giant leap of thought, to model the motion of heavenly bodies. On the second page of his Principia he mentions in his Definition V how “a stone, whirled about in a sling, endeavours to recede from the hand that turns it; and by that endeavour, distends the sling, and that with so much the greater force, as it is revolved with the greater velocity” (Principia, Vol. I, Motte’s translation). It is a matter of experience (not theory) that a whirling conker or other projectile exerts a definite pressure on your finger, or on any other object that serves as an axis of rotation, and that we feel this pressure more or less on the opposite side to where the conker is ‘at that moment’. Secondly, as Newton says, the faster the whirling motion, the more the string cuts into your finger. Thirdly, if we use a rubber connection we can actually observe the ‘string’ being extended beyond its normal length. And finally, if we cut the string or otherwise break the connection between the particle at the circumference and the centre, the particle flies off sideways: it never flies off directly inwards nor, as far as we can judge, does it continue to follow a circular trajectory.

The Modern Derivation of the formula for centrifugal force

In the modern treatment of motion in a circle we have a particle which at time t = 0 is at position P0 on the circumference of a circle of radius r. It is propelled in a direction tangential to the circumference of the circle with an initial velocity of v metres/sec. At time t it has reached the point P1, having travelled vt metres, while the angle at O has turned through θ radians.

        Since it has travelled in a straight line along the tangent it has covered r tan θ metres, while a particle travelling along the circumference has travelled in the same t seconds a distance of rθ metres if θ is in radians. If we have an angular velocity of ω radians per second this distance is rωt metres. Now v, the speed of the particle travelling along the tangent, is not the same as rθ/t = rω, the speed of a particle travelling along the circumference, since r tan θ > rθ.

v/(rω) = (tan θ)/θ   or   v/r = ω (tan θ)/θ

For very small angles v ≈ rω since the limit of tan θ/θ is 1 as θ → 0
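Numerically, the gap between the two speeds, and its disappearance in the limit, is easy to exhibit (a Python sketch of my own, not part of the argument): the ratio tan θ/θ is strictly greater than 1 for any θ > 0 but approaches 1 as θ shrinks.

```python
import math

# v/(r*omega) = tan(theta)/theta: strictly greater than 1 for any
# theta > 0, but approaching 1 as theta shrinks
for theta in [0.5, 0.1, 0.01, 0.001]:
    ratio = math.tan(theta) / theta
    print(f"theta = {theta:<6} tan(theta)/theta = {ratio:.9f}")
```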

Now, the particle supposedly always maintains its constant speed but each time it is in contact with the circumference it changes direction and pursues a path along the tangent at that point. In the usual modern treatment, we take two velocity vectors v1 and v2 corresponding to the velocities at two points on the circumference of a circle when the angle at the centre has turned through θ radians. These two velocities are, according to Newton’s treatment of circular motion, equal in magnitude but differ in direction, and the angle between the two vectors is the same as the angle at the centre. We now evaluate the closing velocity vector, which is considered “for very small angles” to be approximately equal to vθ, as if this closing vector, instead of being a straight line, were part of the circumference of a circle of radius v metres subtended by the angle θ. We thus end up, after some manipulation, with an acceleration vector of v²/r metres per second per second, setting ωr equal to v. Since F = mass × acceleration, the centrifugal force is mv²/r for a particle of mass m.
Thus according to textbooks aimed at engineers and technicians. In textbooks aimed at pure mathematicians it is stressed that what we have here is a double ‘passage to a limit’: that involving v and rω, and that involving 2v sin (θ/2) and vθ. For, in reality, the closing vector would have length 2v sin (θ/2), which goes to vθ because the ratio (2v sin (θ/2)) : vθ = sin (θ/2) : (θ/2) tends to the limit 1 as θ/2 → 0. But what these textbooks do not make clear is that these limits are never attained, since tan θ is never strictly equal to θ and sin (θ/2) is never strictly equal to θ/2 for any 0 < θ < π/2.
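Both ratios in the double passage to the limit can be tabulated (again a sketch of mine, with illustrative values): the chord/arc ratio approaches 1 from below, while tan θ/θ approaches it from above.

```python
import math

v = 1.0   # illustrative tangential speed
for theta in [0.5, 0.05, 0.005]:
    chord = 2 * v * math.sin(theta / 2)   # exact length of the closing vector
    arc = v * theta                       # the approximation used in textbooks
    print(theta, chord / arc, math.tan(theta) / theta)
```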

Motion in a Circle according to Ultimate Event Theory

As before, we have a particle P at r units of distance from O, the centre of an imaginary circle. It is projected along the tangent at P0 with speed v units of distance per unit of time, and at time t it is at P1, having traversed vt units of distance. At time t the angle at O has gone through θ radians, so that vt = r tan θ.
So far, so good: everything is as in the normal treatment. However, note that t is integral, being ‘so many ksanas’ ― a ‘ksana’ is the minimal interval of time, the equivalent of the vague ‘instant’. Also, v is a rational number since it is the ratio of an integral number of grid positions to t ksanas. If the ‘particle’ (repeating ultimate event) takes 4 ksanas to reappear 7 grid positions further along, the speed of reappearance is 7/4 spaces/ksana. It is not to be supposed that the particle covers one and three-quarter spaces in one ksana, since the spaces are indivisible, only that four ksanas later the particle reappears seven positions to the right (or left, as the case may be) of where it was before. Moreover, these distances are ‘unstretched’, i.e. the grid positions are conceived to occupy ‘all’, or practically all, of the distance between two occupied positions. Just as in Special Relativity we consider measurements made from inside an inertial system to be ‘standard’, in Ultimate Event Theory we start by evaluating distances from the standpoint of a regularly repeating event not subject to influence from other event-chains. The repeating event E is such an event since it is not (yet) subject to an outside influence: it maintains a constant displacement of s spaces every t ksanas, and its speed is s/t spaces per ksana. r is likewise integral in Ultimate Event Theory, since it is so many (unstretched) spaces from O.
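The ‘reappearance speed’ of 7/4 spaces per ksana can be pictured with a minimal sketch (the function and values are mine, chosen to match the example in the text): the event has occurrence only at every fourth ksana, and no position at all in between.

```python
def occurrences(spaces=7, ksanas=4, repetitions=5):
    """Grid positions occupied by a repeating ultimate event that
    reappears `spaces` positions along every `ksanas` ksanas.
    At intermediate ksanas the event simply has no occurrence."""
    return {n * ksanas: n * spaces for n in range(repetitions + 1)}

print(occurrences())   # {0: 0, 4: 7, 8: 14, 12: 21, 16: 28, 20: 35}
```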
When P reaches P1 it is no longer r spaces from O but at a somewhat larger distance (and the same applies to event E when it repeats at this particular grid position). The distance P0O has thus been ‘stretched’, using the analogy of a string connecting two bodies. Stretched by how much? By Pythagoras,
    (OP1)² = r² + r² tan² θ,  so OP1 = r √(1 + tan² θ), and the extension is this quantity less the radius:  r √(1 + tan² θ) – r  =  r √(sec² θ) – r  =  r (1/cos θ – 1)
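The extension can be checked directly against the Pythagorean form (a sketch with illustrative values of my own):

```python
import math

def extension(r, theta):
    """Stretch of the radius when the particle is r*tan(theta) along
    the tangent at P0: OP1 - r, which equals r*(1/cos(theta) - 1)."""
    op1 = math.sqrt(r**2 + (r * math.tan(theta))**2)
    return op1 - r

r, theta = 10.0, 0.3
# agrees with the closed form r*(1/cos(theta) - 1)
assert abs(extension(r, theta) - r * (1 / math.cos(theta) - 1)) < 1e-9
print(extension(r, theta))
```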

        Now this spatial distance does not have to be integral, that is, correspond to a whole number of positions capable of receiving ultimate events (Note 2).
The initial displacement of the particle, once it has been impelled along its path, is unaccelerated and thus, by definition, not subject to an outside force. However, this displacement does have an effect on a similar, initially unaccelerated body situated at the centre O (since the distance between the two particles has increased). This ‘action’ at O calls forth an ‘equal and opposite reaction’ according to Newton’s Third Law, and exactly four ksanas later the ‘particle’ appears on the circumference at P2 with the same speed and proceeds along the tangent at P2 ― this according to Newton’s treatment of motion in a circle.
In event terms we have an event E originally appearing at P0, exactly r spaces from O, reappearing at P1 outside the imaginary circle t ksanas later, and finally appearing at P2, where it will travel along the tangent at P2 at the same speed v = s spaces per t ksanas ― in this particular case 7/4 spaces/ksana.
The first displacement calls forth an ‘equal and opposite reaction’ according to Newton’s Third Law, and exactly four ksanas later the ‘same’ ultimate event repeats on the circumference of the imaginary circle at P2. By hypothesis, it has the same ‘reappearance speed’ of v spaces/ksana ― in this particular case 7/4 spaces/ksana. Although it has the same speed, event E has been subject to an outside influence since it has changed its straight-line event-direction: it has been, in Newtonian terms, accelerated, and so subject to an outside force. This force, or rather influence, originates in the reaction of the event-chain situated at the centre O. The reaction will be, by Newton’s Laws, equal in magnitude to the original force to which the body (event-cluster) at O was subject.
It is essential to grasp the succession of events. When travelling from P0 to P1, the particle/event is not subject to any outside influence: this influence is in operation only for the second t ksanas, during the passage from P1 to P2. Similarly, once at P2, the particle/event is again free of outside influence for t ksanas, and the entire action/reaction cycle repeats indefinitely. This is so in the Newtonian treatment as well, but there is a tendency, even in authoritative textbooks, to assume that, because action and reaction take place in such swift succession, they are strictly simultaneous.
So how do we evaluate the strength of the restoring force, which in Newtonian Mechanics is known as the centripetal or centre-seeking force? We can do this by comparing the distances along the radial direction and noting the changes in speed. The particle/event starts off with zero speed in this direction since it is situated on the circumference; t ksanas later its distance from the centre O has increased by r (1/cos θ – 1); and t ksanas later still it has once more zero radial speed, having recovered its original configuration.

Now, the particle (event) has constant speed v and so, were it not for the restraining influence from the centre, it would, during the second t ksanas, have travelled another vt spaces along the tangent line from P0, and the angle at the centre would have gone through another θ radians. The particle’s distance from P2 on the circumference would thus be

        √(r² + r² tan² 2θ) – r  =  r √(sec² 2θ) – r  =  r (1 – cos 2θ)/cos 2θ

        We wish to have this distance and the eventual radial acceleration in terms of r and v rather than θ, since r and v are ‘macroscopic’ values that can be accurately measured whereas θ is microscopic ― what used to be called ‘infinitesimal’.
To bring the particle to P2 the above distance has had to be reduced to zero in the space of t (not 2t) ksanas, since the reaction did not begin until the particle was at P1. We thus evaluate

        [r (1 – cos 2θ)/cos 2θ] / t²    where, as before, t = r tan θ/v

                =   (v²/r) (1 – cos 2θ)/(cos 2θ tan² θ)

Since cos 2θ = 2 cos² θ – 1,
        1 – cos 2θ  =  1 – (2 cos² θ – 1)  =  2(1 – cos² θ)  =  2 sin² θ

Consequently        (v²/r) (2 sin² θ)(cos² θ) / [(2 cos² θ – 1)(sin² θ)]   =   2(v²/r) cos² θ / (2 cos² θ – 1)

         Since θ is not a macroscopic value and is certainly extremely small, we may take the limit as θ → 0 (Note 3).

Lim as θ → 0 of   cos² θ / (2 cos² θ – 1)   =   Lim as θ → 0 of   1 / (2 – 1/cos² θ)   =   1

        The acceleration is thus 2v²/r spaces per ksana per ksana, and the centrifugal ‘force’ on a particle of mass m is 2mv²/r (Note 4).
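The limiting value can be checked numerically by evaluating the un-approximated expression [r (1 – cos 2θ)/cos 2θ] / t², with t = r tan θ/v, for decreasing θ (a sketch of mine transcribing the formulas above; the values of r and v are arbitrary):

```python
import math

def radial_accel(r, v, theta):
    """[r(1 - cos 2θ)/cos 2θ] / t² with t = r*tan(θ)/v;
    should approach 2v²/r as θ → 0."""
    d = r * (1 - math.cos(2 * theta)) / math.cos(2 * theta)
    t = r * math.tan(theta) / v
    return d / t**2

r, v = 5.0, 3.0
for theta in [0.3, 0.03, 0.003]:
    # second column approaches the limiting value 2*v**2/r = 3.6
    print(theta, radial_accel(r, v, theta), 2 * v**2 / r)
```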

This is a most significant result since it differs from that derived by traditional methods, which is just v²/r. The latter is the average value over 2t ksanas since, during the first t ksanas, the centrifugal force was not active. It will be objected that the particle ‘in effect’ follows a circular path around the circumference at all moments but, if this really were the case, the particle would never obey Newton’s First Law but would be permanently accelerated (since it would never follow a straight-line trajectory). That the deviation is for normal practical purposes entirely negligible is irrelevant: the aim of an exact science should be to deduce what actually goes on, and in any case it is by no means impossible that the discrepancy might, in some cases, be significant. Whereas in the conventional treatment the centrifugal force is permanent, in Ultimate Event Theory it is, like all reactions, intermittent: it is only operative over t moments out of every 2t. It is by no means inconceivable that accurate experiments could detect this intermittence, just as they might one day detect the difference between v and rω.    S.H.


Note 1. In reality, all orbiting bodies eventually move away from the centre of attraction or fall towards it as even the Moon does — but so slowly that the difference from one year to the next is only a few centimetres.   

Note 2. The ‘stretched distance’ between two occupied spaces does not (by hypothesis) contain any more grid positions than an unstretched one, and the extra ‘distance’ is to be attributed to the widening of the gaps between grid-points, i.e. to the underlying (non-material) ‘substance’ that fills the Locality. This stretched distance is essentially a mathematical convenience that has no true existence. All that actually happens, and that is, or could be, observable, is the occurrence of a particular ultimate event at a certain spot on the Locality at a particular ksana and its reappearance at a subsequent ksana at a spot that is not exactly equivalent to the previous position (relative to some standard repeating event-chain).

Note 3. It must be stressed that this limiting value, like practically all those which occur in the Calculus, is not actually attained in practice, though one assumes that in a case such as that considered the approximation is very good. The value of θ in any actual occurrence will certainly be small but not ‘infinitesimal’, a term which has no meaning in Ultimate Event Theory (and should have none in traditional mechanics either).

Note 4. It remains to give a satisfactory definition of ‘mass’ in terms of Ultimate Event Theory. Newton calculated mass with respect to volume and density and spoke of it as “a measure of the quantity of matter in a body”. In traditional physics mass is not only measured by, but equated with, the ability of a particle to withstand any attempt to change its constant straight-line motion, and in contemporary physics mass has virtually lost any ‘material’ meaning by being viewed as interchangeable with energy.

Newton’s Third Law states rather cryptically that

“To every action there is an opposite and equal reaction.”

This law is the most misunderstood (though probably the most employed) of the three since it suggests at first sight that everything is in a permanent deadlock! Writers of mechanics textbooks hasten to point out that the action and reaction apply to two different bodies, call them body A and body B. The Third Law claims that the force exerted by body A on body B is met by an equivalent force, equal in magnitude but opposite in direction, which body B exerts on body A.
Does this get round the problem? Not entirely. The schoolboy or schoolgirl who somehow feels uneasy with the Third Law is on to something. What is either completely left out of the description, or not sufficiently emphasized, by physics and mechanics textbooks is the timing of the occurrences. Take the case of striking a wall: it is my push against the wall that is the prior occurrence, and the push back from the wall is a re-action. Without my decision to strike the wall, this ‘reaction’ would never have come about. What in fact happens at the molecular level is that the molecules of the wall are squeezed together by my blow, and it is their attempt to recover their original conformation that causes the compression in my hand or, in certain other circumstances, pushes me away from the wall altogether. (The ‘pain’ I feel is a warning message sent to the brain to tell it/me that something is amiss.) The reaction of the wall is a restoring force and its effectiveness depends on the elasticity or plasticity of the material from which the wall is made — if the ‘wall’ is made of putty I feel practically nothing at all but my hand remains embedded in it. As a reliable author puts it, “The force acting on a particle should always be thought of as the cause and the associated change of momentum as the effect” (Heading, Mathematical Methods in Science and Engineering).
In cases where the two bodies remain in contact, a lengthy toing and froing goes on until both sides subside into equilibrium (Note 1). For the reaction of the wall becomes the action in the subsequent cause/effect pair, with the subsequent painful compression of the tissues in my hand being the result. It is essential to realize that we are in the presence not of ‘simultaneous’ events but of a clearly differentiated event-chain involving two ‘objects’, namely the wall and my hand. It is this failure to distinguish between cause and effect, action and reaction, that gives rise to the conceptual muddle concerning centrifugal ‘force’. It is a matter of common experience that if objects are whirled around but restrained from flying away altogether, they seem to keep to the circumference of an imaginary circle — in the case of a spin dryer, the clothes press themselves against the inside wall of the cylinder, while a conker attached to my finger by a piece of string follows a roughly circular path with my finger as centre (only roughly because gravity and air pressure deform the trajectory from that of a perfect circle). At first sight it would seem, then, that there is a ‘force’ at work pushing the clothes or the conker outwards, since the position of the clothes on the inside surface of the dryer, or of the conker some distance away from my finger, is not their ‘normal’ position. However, the centrifugal ‘force’ (from Latin fugio, ‘I flee’) is not something applied to the clothes or the conker but is entirely a response to the initiating centripetal force (from Latin peto, ‘I seek’) without which it would never have come into existence. The centrifugal ‘force’ is thus entirely secondary in this action/reaction couple and, for this reason, is often referred to as a ‘fictitious’ force — though this is somewhat misleading since the effects are there for all to see, or rather to feel.
Newton does in certain passages make it clear that there is a definite sequence of events, but in other passages he is ambivalent because, as he fully realized, according to his assumptions gravitational influences seemed to propagate themselves over immense distances instantaneously (and in both directions) — which seemed extremely far-fetched and was one reason why continental scientists rejected the theory of gravitational attraction. Leaving gravity aside since it is ‘action at a distance’, what we can say is that in cases of direct contact there really is an explicit, and often visible, sequencing of events. In the well-known Ball with Two Strings experiment (Note 2) we have a heavy lead ball suspended from the ceiling by a cotton thread, with a second thread hanging underneath the ball. Where will the thread break? According to Newton’s Laws it should break just underneath the ceiling, since the upper thread has to support the weight of the ball as well as respond to my tug. However, if you pull smartly enough the lower thread will break first and the ball will stay suspended. Why is this? Simply because there is not ‘time enough’ for my pull to be transmitted right up through the ‘ball plus thread’ system to the ceiling and call forth a reaction there. And, if it is objected that this is a somewhat untypical case because there is a substantial speed of transmission involved, an even more dramatic demonstration is given by high-speed photographs of a golf club striking a ball. We can actually see the ball, still in contact with the club, massively deformed in shape, and it is the ball’s recovery of its original configuration (the reaction) that propels it into the air. As someone said, all (mechanical) propulsion is ‘reaction propulsion’, not just that of jet planes.
In Ultimate Event Theory the strict sequencing of events, which is only implicit in Newtonian mechanics, becomes explicit. If we leave aside for the moment the question of ‘how far’ a ksana extends (Note 3), it is possible to give a clear-cut definition of simultaneous (ultimate) events: any two events are simultaneous if they have occurrence within the same ksana. A ‘ksana’ (roughly ‘instant’) is a three-dimensional ‘slice’ of the Locality and, within this slice, everything is still because there is, if you like, not enough ‘time’ for anything to change. Consequently, an ultimate event which has occurrence within or at a particular ksana cannot possibly influence another event having occurrence within this same ksana: any effect it may have on other event-chains will have to wait until at least the next ksana. The entire chain of cause and effect is thus strictly consecutive (cases of apparent ‘causal reversal’ will be considered later). In effect, when bodies are in contact there is a ceaseless toing and froing, a sort of ‘passing the buck’ from one side to the other, until friction and other forces eventually dampen down the activity to almost nothing (while not entirely destroying it).
S.H. 21/08/12


Note 1. Complete static equilibrium does not and cannot exist since what we call ‘matter’ is always in a state of vibration and bodies in contact affect each other even when apparently completely motionless. What can and does exist, however, is a ‘steady state’, when the variations in pressure of two bodies in contact more or less cancel each other out over time (over a number of ksanas). We are, in chemistry, familiar with the notion that two fluids in solution are never perfectly mixed and that, for example, oxidation and reduction reactions take place continually; when we say a fluid is ‘in equilibrium’ we do not mean that no chemical reactions are taking place but that the changes more or less even out over a certain period of time. The same applies to solid bodies in contact, though the departures from the mean are not so obvious. Although it is practical to divide mechanics into statics and dynamics, there is in reality no hard and fast division.

Note 2. I am indebted to Den Hartog for pointing this out in his excellent book Mechanics (Dover 1948).

Note 3. It is not yet the moment — or maybe I should say the ksana — to see how Ultimate Event Theory squares with Relativity: it is hard enough seeing how it squares with Newtonian Mechanics. However, this issue will be tackled head on at a later date. Einstein, in his 1905 paper, threw a spanner in the works by querying the then current understanding of ‘simultaneity’, and physics has hardly recovered since. In his latter days, Einstein adhered to the belief that everything takes place in an eternal present, so that what is ‘going to’ happen has, in a sense, already been — in my terms, already has occurrence on the Locality. I am extremely reluctant to accept such a theory, which flies in the face of all our perceptions and would sap our will to live (mine at any rate). On the other hand, it would, I think, be fantastic to consider a single ksana (instant) stretching out across the known universe so that, in principle, all events are either ‘within’ this same ksana or within a previous one. At the moment I am inclined to think there is a sort of mosaic of ‘space/time’ regions and that it is only within a particular circumscribed region that we can talk meaningfully of (ultimate) events having occurrence within or at the same ksana. Nonetheless, if you give up sequencing, you give up causality, and this is to give up far too much. As Keith Devlin wrote, “It seems to me that there is nothing for it but to take as fundamental the relation of one event causing another” (Devlin, Logic and Information, p. 184).


Archimedes gave us the fundamental principles of Statics and Hydrostatics but somehow managed to avoid founding the science of dynamics, though, as a practising civil and military engineer, he must have had to deal with the mechanics of moving bodies. The Greek world-view, that which has been passed down to us anyway, was essentially geometrical, and the truths of geometry are, or were conceived to be, ‘timeless’: they referred to an ideal world where spherical bodies really were perfectly spherical and straight lines perfectly straight.
Given the choice between exact positioning and movement (which is change of position), you are bound to lose out on one or the other. But science and technology must somehow encompass both exact position and ‘continuous’ movement. So how did Newton cope with the slippery idea of velocity? From a pragmatic point of view, supremely well, since he put dynamics on a firm footing and went so far as to invent a new form of mathematics tailor-made to deal with the apparently erratic motions of heavenly bodies — his ‘Method of Fluxions’, which eventually became the Differential Calculus. Strangely, however, Newton completely avoided Calculus methods in his Principia and relied entirely on rational argument supplemented by cumbersome, essentially static, geometrical demonstrations. Why did he do this? Probably because he felt himself to be on uncertain ground, mathematically and philosophically, when dealing with velocity.
If you are confronted with steady straight-line motion you don’t need the Calculus — ordinary arithmetic such as even the ancient Babylonians and Egyptians employed is quite adequate. But, precisely, Newton was interested in the displacements of objects subject to a force, thus, by definition, not in constant straight-line motion. And when the force was permanent, as was the case with gravitational attraction, the consequent motions of the bodies were never going to be constant (if change of direction is taken into account).
Mathematically speaking, speed is simply the first derivative of displacement with respect to time, and velocity, a vector quantity, is ‘directed speed’, speed with a direction attached to it. The modern mathematical concept of a ‘limit’ artfully avoids the question of whether a ‘moving’ particle actually attains a particular speed at a particular moment: it is sufficient that the difference between the ratio distance covered/time elapsed and the proposed limit can be made “smaller than any finite quantity” as the time intervals are progressively reduced. This is a solution only to the extent that it removes the problem from the domain of reality where it originated. For the world of mathematics is an ideal, not a real, world, though in some cases the two overlap.
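The limit definition can be made concrete (my own sketch; the displacement function s(t) = t² is an arbitrary choice): the average speed over a shrinking interval gets as close as we please to the limiting value 2 at t = 1, without the quotient ever being evaluated at h = 0.

```python
def avg_speed(s, t, h):
    """Average speed (distance covered / time elapsed) over [t, t + h]."""
    return (s(t + h) - s(t)) / h

s = lambda t: t ** 2   # displacement; the instantaneous speed is 2t
for h in [0.1, 0.001, 1e-6]:
    # for this s the quotient equals exactly 2 + h: close to 2, never 2
    print(h, avg_speed(s, 1.0, h))
```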
Newton was not basically a pure mathematician : he was a mathematical realist and a hard-nosed materialist (at least in his physics). He was obviously bothered by the question that today you are not allowed to ask, namely “Did the particle attain this limit or did it only get very close to it?”
It is often said that Newton did not have the modern mathematical concept of a limit, but he came as close to it as was possible for a consistent realist. He speaks of “ultimate ratios” and “evanescent quantities” and, unlike Leibnitz, tends to avoid infinitesimals if he can possibly manage to. He sees that there is indeed a serious logical problem about these diminishing ratios somehow carrying on ad infinitum and yet bringing the particle to a standstill.

“Perhaps it may be objected that there is no ultimate proportion of evanescent quantities; because the proportion, before the quantities have vanished, is not the ultimate, and when they are vanished, is none. But by the same argument, it may be alleged that a body arriving at a certain place, and there stopping, has no ultimate velocity: because the velocity, before the body comes to the place, is not its ultimate velocity; when it has arrived, there is none.”
Newton, Principia (translation Andrew Motte)

Note that Newton speaks of ‘at a certain place’ and ‘its place’, making it clear that he believes there really are specific positions that a moving particle occupies. He continues :

“But the answer is easy; for by ultimate velocity is meant that with which the body is moved, neither before it arrives at its last place and the motion ceases, nor after, but at the very instant it arrives; that is, that velocity with which the body arrives at its last place and with which the motion ceases. And in like manner,  by the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities not before they vanish, nor afterwards, but with which they vanish.”
Newton, Principia (translation Andrew Motte)

      But this implies that there is a definite final velocity :  

        “There is a limit which the velocity at the end of the motion may attain, but not exceed. This is the ultimate velocity.”

         Well and good, but Newton now has to meet the objection that, if the ‘ultimate ratios’ are specific, so also, seemingly, are the ‘ultimate magnitudes’ (since a ratio is a comparison between two quantities). This would seem to imply that nothing can properly be compared with anything else or, as Newton puts it, that “all quantities consist of incommensurables, which is contrary to what Euclid has demonstrated”.  

“But,” Newton continues, “this objection is founded on a false supposition. For those ultimate ratios with which quantities vanish are not truly the ratios of the ultimate quantities, but limits towards which the ratios of quantities decreasing without limit do always converge; and to which they approach nearer than by any given difference, but never go beyond, nor in effect attain to, till the quantities are diminished in infinitum.”   (Newton, Principia, translation Andrew Motte)

       The last phrase (’till the quantities are diminished in infinitum’) seems to be tacked on. I was expecting as a grand climax, “to which they approach nearer than by any given difference, but never go beyond, nor in effect attain” full stop. This would make the ‘ultimate ratio’ something akin to an asymptote, a quantity or position at once unattained and unattainable. But this won’t do either because, after all, the particle does pass through such an ultimate value (‘limit’) since, were this not the case, it would not reach the place in question, ‘its place’. Bringing in infinity at the last moment (‘diminished in infinitum’ ) looks like a sign of desperation.
A little later, Newton is even more equivocal :
“Therefore if in what follows, for the sake of being more easily understood, I should happen to mention quantities as least, or evanescent, or ultimate, you are not to suppose that quantities of any determinate magnitude are meant, but such as are conceived to be always diminished without end.” 

But what meaning can one give to “quantities … diminished without end”? None to my mind, except that we need such quantities to do Calculus; but this does not make such concepts any more reasonable or well founded. The issue, as I said before, has ceased to worry mathematicians because they have lost interest in physical reality, but it obviously did worry Newton and it still worries generations of schoolboys and schoolgirls until they are cowed into acquiescence by their elders and betters. The fact of the matter is that to get to ‘its place’ (Newton’s phrase) a particle must have a velocity that is the ‘ultimate ratio’ between two quantities (distance and time) which are being ‘endlessly diminished’ and yet remain non-zero.
In Ultimate Event Theory there is no problem since there is always an ultimate ratio between the number of grid positions displaced in a certain direction and the number of ksanas required to get there. When doing mathematics, we are not going to specify this ratio, even supposing we knew it : it is for the physicist and engineer to give a value to this ratio, if need be, to the level of precision required in a particular case. But we know (or I do) that δf(t)/δt has a limiting value (Note 1) which we may call df(t)/dt if we so wish. Note, however, that the actual ‘ultimate ratio’ will almost always be more (or less) than the derivative since there will be non-zero terms that need to be taken into account. Also, the actual limiting value will vary according to the processes being studied since manifestly some event-chains are ‘faster’ than others (require fewer ksanas to reach a specified point). Nonetheless, the normal derivative will usually be ‘good enough’ for practical purposes, which is why Calculus is employed when dealing with processes that we know to be strictly finite, such as population growth or radioactive decay.   S.H.

Note 1 :      Why must there always be a limiting value?  Because δt can never be smaller than a single ksana — one of the basic assumptions of Ultimate Event Theory.
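The point of Note 1 can be made concrete with a short sketch (Python; the ksana duration and the function f are hypothetical values of my own, chosen purely for illustration, not quantities from the theory). It computes δf(t)/δt over the smallest admissible interval, one ksana, and compares it with the classical df/dt:

```python
# Sketch of the UET claim that delta-t has a floor: it can never be
# smaller than one ksana. KSANA and f below are illustrative assumptions.
KSANA = 1e-3          # hypothetical duration of one ksana, arbitrary units

def f(t):
    return t * t      # arbitrary illustrative process

def ultimate_ratio(t):
    # finite difference over the smallest admissible interval: one ksana
    return (f(t + KSANA) - f(t)) / KSANA

def derivative(t):
    return 2 * t      # the classical df/dt for f(t) = t**2

t = 1.0
print(ultimate_ratio(t))   # 2 + KSANA: differs from df/dt by a non-zero term
print(derivative(t))       # 2.0
```

As the text says, the finite ratio differs from the derivative by a non-zero term (here exactly one ksana’s worth), yet the derivative remains ‘good enough’ for most practical purposes.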

(The opening image is from a painting by Jane Maitland)         S.H.     5/08/12     

‘Speed’ is not a primary concept in the Système International d’Unités : it is defined by means of two quantities that are primary, the unit of length, the metre, and the unit of time, the second. ‘Speed’ is the ratio distance/time and its unit is metres/second.
It is, I think, possible to disbelieve in the reality of motion but not to disbelieve in the reality of distance and time, at least in some sense.
The difficulty with the concept of motion, and the associated notions of speed and velocity, is that we have somehow to combine place (exact position) and change of place, for if there is no change in a body’s position, it is motionless. The concepts of ‘exact position’ and movement are in fact irreconcilable (Note 1) : at the end of the day we have to decide which of the two we consider to be more fundamental. For this reason there are really only two consistent theories of motion, the continuous process theory and the cinematographic theory.
The former can be traced at least as far back as Heraclitus, the Ionian philosopher for whom “all things were a-flowing” and who likened the universe to “a never ending fire rhythmically rising and falling”. Barrow, Newton’s mathematics teacher, was also a proponent of the theory and some contemporary physicists, notably Lee Smolin, seem to belong to this camp.
Bergson goes so far as to seriously assert that, when a ‘moving object’ is in motion, it does not occupy any precise location whatsoever (and he is not thinking of Quantum Wave Theory, which did not yet exist). He writes,
“… supposons que la flèche puisse jamais être en un point de son trajet. Oui, si la flèche, qui est en mouvement, coïncidait jamais avec une position, qui est de l’immobilité. Mais la flèche n’est jamais à aucun point de son trajet”.
(“Suppose that the arrow actually could be at a particular point along its trajectory. This is possible if the arrow, which is on the move, ever were to coincide with a particular position, i.e. with an immobility. But the arrow never is at any point on its trajectory”.)
So how does he explain the apparent fact that, if we arrest a ‘moving’ object, we always find it at a particular point? His answer is that in such a case we ‘cut’ the trajectory and it falls, as it were, into two parts. But this is like the corpse compared to the living thing ― “c’est justement cette continuité indivisible de changement qui constitue la durée vraie” (“It is precisely this indivisible continuity of change that constitutes true duration”).

The cinematographic theory of movement finds its clearest expression in certain Indian thinkers of the first few centuries AD :
“Movement is like a row of lamps sending flashes one after the other and thus producing the illusion of a moving light. Motion consists in a series of immobilities. (…) ‘Momentary things,’ says Kamalasila, ‘cannot displace themselves, because they disappear at that very place at which they have appeared’.” Stcherbatsky, Buddhist Logic Vol. I, pp. 98-99

For almost as long as I can remember, I have had a strong sense that ‘everything is discontinuous’, that there are always breaks, interludes, gaps. By this I do not just mean breaks between lives, generations, peoples and so on, but that there are perceptible gaps between one moment and the next. Now, western science, partly because of the overwhelming influence of Newton and the Infinitesimal Calculus he invented, has leaned strongly towards the process theory of motion, as is obvious from the colossal importance of the notion of continuity in the mathematical sciences.
But the development of physical science requires both the notion of ‘continuous movement’ and precise positioning. Traditional Calculus is, at the end of the day, a highly ingenious, brilliantly successful but hopelessly incoherent procedure, as Bishop Berkeley pointed out in Newton’s own time. Essentially, Calculus has its cake and eats it too, since it represents projectiles as being in continuous motion while yet occupying precise positions at each and every instant (Note 2).
In Ultimate Event Theory exact position is paramount and continuous motion goes by the board. Each ultimate event is indivisible, ‘all of a piece’, and so, in this rather trivial sense, we can say that every ultimate event is ‘continuous’ while it lasts (but it does not last long). Also, K0, the underlying substratum or event Locality, may be considered to be ‘continuous’ in a rather special sense, but this need not bother us since K0 is not amenable to direct observation and does not interact with the events that constitute the world we experience. With these two exceptions, “Everything is discontinuous”. This applies to ‘matter’, ‘mind’, ‘life’, movement, anything you like to think of.
Furthermore, in the UET model, ultimate events have occurrence in (or on) three-dimensional grid-points on the Locality, but these grid-points are not pressed right up against one another as they are in certain other discontinuous physical models (Note 3). In Ultimate Event Theory there are real, and in principle measurable, gaps between one grid-position and the next, and consequently between one ultimate event and its neighbours, if there are any, or between each of its consecutive reappearances.
What we call a ‘body’ or ‘particle’ is a (nearly) identically repeating event-cluster which, in the simplest case, consists of a single endlessly repeating ultimate event. The trajectory of the repeating event as it ‘moves’ (appears/reappears) from one three-dimensional frame to the next may be represented in the normal way as a line — but it is a broken, not a continuous, line.
It is a matter of common experience that certain ‘objects’ (persisting event-clusters) change their position relative to other repeating event-clusters. For illustrative purposes, we consider three event-chains composed of single events that repeat identically at every ksana (roughly ‘instant’). One of these three event-chains, the black one, Z, is considered to be ‘regular’ in its reappearances, i.e. to occupy the equivalent grid-point at each ksana. Its trajectory or eventway will be represented by a column of black squares where each row is a one-dimensional representation of what in reality is a three-dimensional region of the Locality. The red and green event-chains, X and Y, are displaced to the right laterally by one and three grid-positions respectively relative to Z at each ksana (Note 4).

         X   Y                              Z

        In normal parlance, Y is a ‘faster’ event-chain (relative to Z) than X and its speed relative to Z is three grid-positions (I shall henceforth say ‘places’) per ksana. The speed of X relative to Z is one place/ksana. (It is to be remarked that Y reappears on the other side of Z without ‘colliding’ with it.)
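The three chains can be sketched as lists of occupied grid-positions, one entry per ksana. This is only an illustrative model in Python (the starting positions and the number of ksanas shown are my own arbitrary choices, not values from the theory):

```python
# Minimal sketch of the three event-chains: at each ksana, Z stays on the
# equivalent grid-point, X shifts 1 place to the right, Y shifts 3 places.
def trajectory(start, places_per_ksana, n_ksanas):
    """Grid-position occupied at each successive ksana."""
    return [start + k * places_per_ksana for k in range(n_ksanas)]

Z = trajectory(start=10, places_per_ksana=0, n_ksanas=5)  # 'regular' chain
X = trajectory(start=0,  places_per_ksana=1, n_ksanas=5)  # 1 place/ksana
Y = trajectory(start=0,  places_per_ksana=3, n_ksanas=5)  # 3 places/ksana

print(Z)  # [10, 10, 10, 10, 10]
print(X)  # [0, 1, 2, 3, 4]
print(Y)  # [0, 3, 6, 9, 12] -- passes Z without ever occupying Z's grid-point
```

Note that Y ends up on the other side of Z without the two chains ever occupying the same grid-point at the same ksana, which is the sense in which they do not ‘collide’.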
The point is that ‘velocity’ in Ultimate Event Theory is a straight numerical ratio, (number of grid-positions)/(number of ksanas), relative to a regular repeating event-chain whose trajectory is considered to be vertical.      S.H.  27/7/12


Note 1 :     “A particle may have a position or it may have velocity but it cannot in any exact sense be said to have both” (Eddington).

Note 2 :  Barrow, Newton’s geometry teacher, wrote, “To every instant of time, I say, there corresponds some degree of velocity, which the moving body is considered to possess at that instant”. Newton gave mathematical body to this notion in his ‘Theory of Fluxions’, his version of what came to be known as the Infinitesimal Calculus.

Note 3 :    According to the Principle of Relativity, there is no absolute direction for a straight event-line, and any one of a family of straight lines can be considered to be vertical. Other things being equal, we consider ourselves to be at rest if we do not experience any jolts or other disturbances, and we thus identify our ‘movement’ with that of Z, a vertical line. However, if we were ‘moving’, i.e. appearing and reappearing at regular intervals, alongside or within the (straight) event-chains X or Y, we would quite legitimately consider ourselves to be at rest and would expect our event-lines to be represented as vertical.
The point is that in classical physics, up to and including Special Relativity, the important distinction is not between rest and constant straight-line motion but between accelerated and unaccelerated motion, and both rest and constant straight-line motion count as unaccelerated motion. This capital distinction was first made by Galileo and incorporated into Newton’s Principia.
The distinction between ‘absolute’ rest and constant straight-line motion thus became a purely academic question of no practical consequence. However, by the end of the nineteenth century, certain physicists argued that it should be possible after all to distinguish between ‘absolute rest’ and constant straight-line motion by an optical experiment, essentially because the supposed background ether ought to offer a resistance to the passage of light, and this resistance ought to vary at different times of the year because of the Earth’s orbit. The Michelson-Morley experiment failed to detect any discrepancies and Einstein subsequently introduced as an axiom into his Theory of Special Relativity the total equivalence of all inertial systems with respect to the laws of physics. He later came to wonder whether there really was such a thing as a true inertial system and this led to the generalisation of the Relativity principle to take in any kind of motion whatsoever, inertial systems being simply a limiting case.
What I conclude from all this is that (in my terms) the Locality does not interact physically with the events that have occurrence in and on it; however, it seems that there are certain privileged pathways into which event-chains tend to fall. I currently envisage ultimate events, not as completely separate entities, but as disturbances of the substratum, K0, disturbances that will, one day, disappear without a trace. The Hinayana Buddhist schema is of an original ‘something’ existing in a state of complete quiescence (nirvana) that has, for reasons unknown, become disturbed (samsara) but which will eventually subside into quiescence once again. The time has come to turn this philosophic schema into a precise physical theory with its own form of mathematics, or rather symbolic system, and my aim is to contribute to this development as much as possible. Others will take things much, much further but the initial impulse has at least been given.

Note 4  Of course, this is a simplified picture : in reality event-chains will be more spread out, i.e. will consist of many more than a single element per ksana; also,  there is no reason a priori why they should be made up of events that reappear during every ksana.

S.H.  22/7/12

 We have, then, at any instant a three-dimensional grid extending in all directions composed of locations which can receive one and only one ultimate event. I mentioned in the last post (Co-ordinate Systems) that the best rough model of physical reality would be a three-dimensional framework traced out by lights within which pinpoints of coloured light, representing ultimate events, would occasionally make their appearance.  The entire three-dimensional framework with rectangular axes is set up to flash on and off rhythmically and when the framework of axes disappears, so do the coloured lights.  In the majority of cases the pinpricks of light never appear again. However, occasionally the pinpricks do reappear and, if the light machine is speeded up like a cine-camera, the coloured pinpricks eventually coalesce and form lines of coloured light, either straight or curved. This represents the case when ultimate events acquire persistence (to be discussed later) and form a repeating event-chain.
We consider the simplest case of a single repeating ultimate event, one that reoccurs identically at each successive instant. Also, for simplicity, I shall reduce the three-dimensional grid to a single line. The trajectory is thus traced out as a sequence of occupied squares on a repeating array of lines stretching out indefinitely in both directions. The ‘line’ is not a material object, of course; it is simply a set of positions where an ultimate event can have occurrence. When empty, a position on this line will be marked □ and, when occupied by an ultimate event, will be marked ■.    We have :


Now suppose that this ultimate event creates a facsimile of itself in an immediately adjacent cubicle and that, at all subsequent instants, the ‘daughter event’, marked in red, is displaced to the right by a single cell. ‘Right’ and ‘left’ are, of course, purely conventional since the substratum on which the ultimate events have occurrence is, by hypothesis, homogeneous and isotropic, i.e. directionless, but once a direction has been selected we keep to it. We have :


Joining up the coloured squares, we obtain in each case a straight line and we can easily imagine other straight lines representing different event-chains that differ only by the number of squares each diagonal  moves to the right at each successive instant. (The more complex case when an event-chain ‘jumps’ lines will be considered later.) Thus :


Each chain can be commenced and extended indefinitely if 1) we specify the starting position which we denote ☼ (in the above it is the same for all three chains), and 2) we specify the original increase in a chosen direction, either no increase at all or □, □□, □□□ and so on (Note 1).
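The recipe just given, a starting position plus a fixed increase in the chosen direction, can be written out as a short recursion. A Python sketch (the function name and the particular values are mine, chosen only to mirror the diagrams):

```python
# The two defining data of a 'regular' event-chain: a starting position
# and a fixed increase r per instant, as in the recursion of Note 1.
def event_chain(start, r, n):
    positions = [start]                       # f(0) = starting position
    for _ in range(n - 1):
        positions.append(positions[-1] + r)   # f(k+1) = f(k) + r
    return positions

# Three chains sharing a starting position, differing only in r:
for r in (0, 1, 3):
    print(event_chain(start=0, r=r, n=5))
```

Two successive positions from any of these chains suffice to reconstruct the whole chain, which is precisely what fails for the ‘arbitrary’ chains discussed next.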
This family of event-chains is to be distinguished from event-chains where either the increase is completely arbitrary or in some way depends on the current position of an event in a growing event-chain such as :


It is probable that the above event-chain, or rather its beginning stages, can be given by a formula, and probably by more than one, but, whether or not this is so, such an event-chain belongs to a completely different category from the ones shown above. Why is this? Because the evolution of such an event-chain cannot be gauged from two arbitrary successive positions only.
Now consider

Ignoring for the moment the problem of the reversal of direction for the red event-chain, what we have here is two event-chains, each with the same first and last event, one of which (the black one) has apparently kept to the ‘same’ position on the line while the members of the other event-chain have gone to the right and then back to the left. Now, one of the original suppositions was that all the ‘grids’ on K0 were identical in size, and that every ultimate event fills a site completely. Thus the total occurrence time (where the ‘time of transition’ from one occurrence to the next is not taken into account) is the same for both event-chains : fifteen instants. And so the red event-chain is seemingly no ‘longer’ than the black! This is contrary to all observation and, I am tempted to add, to all reason. Clearly, by any normal reckoning, the red trajectory is longer than the black : it is at any rate not identical in length. But if we keep to the original assumption about the ‘size’ of the grid positions, there is no way the two trajectories can be different in length unless we conclude that the transition times, the ‘breaks’ in the flow of events, are different for different event-chains. We thus have the concept of the backdrop, whatever it is, being somehow ‘elastic’ or, more precisely, not being measurable in the normal way that solid objects are measurable. This is, up to a point, acceptable since the backdrop is not part of the ‘normal’ physical world.
What makes things worse, however, is that, according to the Principle of Special Relativity, we could just as well have made the red event-chain a vertical line and the black line one that first of all slants to the left and then to the right to rejoin the red chain. Each description is, according to SR, equally valid — though I think it misleading to say they are ‘equivalent’, which manifestly they are not. If the black event-chain were part of an event-cluster with an observing agent on board, ‘he’ or ‘she’ would, in the case presented, consider himself to be motionless and the red spacecraft to be in constant motion, and the red observer would think the same. Of course, Special Relativity might be mistaken, but there are a large number of observations which suggest that the two different representations, each consistent within its own terms, really are equally legitimate and that there is no way of deciding which is ‘true’ — they both are. Any alternative theory — and there have been several proposed — has to explain why, for example, it does not seem to be possible to determine from inside an inertial system whether it is ‘really’ at rest or in straight-line constant motion.
Note, however, that there is a big difference between the schema of Eventrics and that of classical and modern science, including Relativity theory. In Eventrics, by hypothesis, every event-chain is composed of a finite number of ultimate events (see original assumptions) and so has associated with it an ‘event-number’ which is not relative, does not depend on one’s actual or hypothetical standpoint or state of motion, but is absolute. In classical and modern science the trajectories of ‘continuously moving objects’ are conceived as being composed of an infinite number of instantaneous locations. So, since twice or three times infinity equals infinity, a ‘longer’ trajectory has not passed through any more actual or possible point-locations than a shorter one — which is hard to believe, to say the least.    (To be continued)
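The contrast can be made concrete with a toy count. In the Python sketch below, the black (straight) and red (out-and-back) chains of the earlier diagram are modelled as finite lists of (place, ksana) pairs; the particular positions are my own illustrative choices, but the ‘event-number’, a plain count of ultimate events, is fifteen for both chains, while the spatial ground covered differs:

```python
# Two finite event-chains with the same first and last event (place 0),
# each occupying fifteen instants, as in the diagram under discussion.
straight = [(0, k) for k in range(15)]                       # stays put
dog_leg  = [(k if k < 8 else 14 - k, k) for k in range(15)]  # out and back

print(len(straight), len(dog_leg))   # both 15: identical event-numbers

# The place-to-place displacements nevertheless differ:
ground = sum(abs(b[0] - a[0]) for a, b in zip(dog_leg, dog_leg[1:]))
print(ground)                        # 14 for the dog-leg, 0 for the straight chain
```

A finite count of this kind is absolute: no change of standpoint can turn fifteen events into some other number, which is just what cannot be said of a continuum of instantaneous locations.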

Note 1 :  Mathematically, this is definition by recursion : f(n+1) = f(n) + r, where r = 0, 1, 2, 3, … and f(0) = ☼. In each case, the increase f(n+1) – f(n) = r and does not change however far the event-chain is extended. In particular, the coming increase does not depend on the current position of an event relative to a fixed (repeating) point. Ideally, all mathematical functions that describe actual behaviour should be defined by recursion since this would seem to be much closer to what actually goes on in nature. The analytic formula y = f(x) provides a ‘God’s eye’ view of the world : all possible values of the dependent variable are given ‘in one fell swoop’ and, as Ullmo put it rather well, “it is our fault if we have to discover piecemeal all the features of the curve”. This way of proceeding suited the world-view of the early scientists perfectly, for they all, to a man, were firm believers : Descartes, Kepler, Leibnitz, Newton, Boyle…   However, we know that in biology trial and error (subject to certain overriding physical constraints) is the rule and both species and organisms proceed step by step from a given departure point. With recursion you only need to know the starting point and how to get from one position to the next.
I do not know whether all analytic functions can be presented recursively and vice versa — I think someone has proved they cannot — but the two presentations are certainly not ‘equivalent’. Even such a simple function as y = x² is quite tricky to define recursively, while one of the simplest and most important recursive functions, f(n+1) = f(n) + f(n–1), gives a very complicated analytic formula.
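For the record, the recursion f(n+1) = f(n) + f(n–1) is the Fibonacci sequence, and its ‘complicated analytic formula’ is Binet’s. A short Python sketch comparing the two (the seed values f(0) = 0, f(1) = 1 are the usual convention):

```python
import math

# The recursion f(n+1) = f(n) + f(n-1), stated in two lines...
def fib_recursive(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# ...versus its analytic counterpart, Binet's formula, involving
# irrational powers of the golden ratio that must cancel exactly.
def fib_analytic(n):
    phi = (1 + math.sqrt(5)) / 2
    return round((phi**n - (-phi)**(-n)) / math.sqrt(5))

for n in range(10):
    assert fib_recursive(n) == fib_analytic(n)
print([fib_recursive(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The asymmetry is plain: the step-by-step rule is transparent, while the ‘God’s eye’ formula conceals the same content behind irrational numbers.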

Note 2 :  It was essentially the qualitative distinction between these two classes of event-chains that, perhaps more than anything else, gave rise to the formidable development of physical science in the West. Though anticipated to some extent by Oresme, Galileo was the first to grasp the signal importance of the distinction. The straight lines represent constant change, and the simplest case of constant change is no change at all (rest), whereas all other curves show acceleration, changing change as Oresme put it. Newton, following on from Galileo, classed straight line motion or rest as the ‘natural’ state which required no explanation : any deviation from this equilibrium state denoted the presence of a force which, since Newton did not know modern chemistry, was assumed to be an external force.
It is remarkable that Galileo hit upon the idea of what we today call an ‘inertial system’ since, in his day, there were no smooth-running means of transportation like our trains and aircraft. Galileo, in his thought experiment, supposed that he was in a windowless cabin in a boat on a perfectly calm sea, and he asked himself whether it was possible, by performing various tests inside the cabin only, to decide if he and the ship were at rest or moving at constant speed in a straight line. He decided that it was not possible. Einstein took up Galileo’s ‘Principle of Relativity’ and made it the cornerstone, along with the constancy of the speed of light, of the Theory of Special Relativity : “The laws of physics take the same form in all inertial systems”, as he put it. Subsequently, Einstein cast doubt on the validity of the concept of an ‘inertial system’ and extended his theory to cover all forms of motion.
In the terms of Eventrics, the trajectories shown as the three straight lines in the first diagram “are equivalent” (though I would not quite put it like this). ‘Relativity’ comes about because, if an observer were ‘moving’ at the same rate as the black squares, he or she would judge himself to be at rest and objects synchronised with the red or green squares to be ‘moving’ to his right at a steady pace. However, an observer ‘moving’ at the rate of the red or green squares would judge himself to be at rest and an object appearing and disappearing at the same rate as the black squares to be moving steadily towards his left (or vice versa).

(To be continued 11/7/12)


In daily life we do not use co-ordinate systems unless we are engineers or scientists and even they do not use them outside the laboratory or factory. If we wish to be passed a certain book or utensil, we do not say it has x, y and z co-ordinates of (3, 5, 7)  metres relative to the left hand bottom corner of the room ― anyone who behaved in such a way would be considered half-mad. We specify the position of an object by saying it is “on the table”, “below the sink”, “near the Church”, “to the right of the Post Office” and so on. As Bohm pointed out in an interview, these are, mathematically speaking, topological concepts since they do not involve distance or angles. In practice, in our daily life, we define an object’s position by referring it to some prominent object or objects whose position(s) we do know. Aborigines and other roving peoples start off by referring their position to a well-known landmark visible for miles around and refer subsequent focal points to it, in effect using a movable origin or set of origins. In this way one advances  step by step from the known to the unknown instead of plunging immediately into the unknown as we do when we refer everything to a ‘point’ like the centre of the Earth, something of which we have no experience and never will have. We do much the same when directing someone to an object in a room : we relate a hidden or not easily visible object by referring to large objects whose localization is well-known, is imprinted permanently on our mental map, such as a particular table, chair, sink and so on. Even when we do not know the exact localization of the object, a general indication will at least tell us where to look ― “It is on the floor”. Such a simple and informative (but inexact) statement would be nearly impossible to put into mathematical/scientific language precisely because the latter is exact, too exact for everyday use.
I have gone into this at some length because it is important to bear in mind how unnatural scientific and mathematical co-ordinate systems are. Such systems, like so much else in an ‘advanced’ culture, are patterns that we impose on natural phenomena for our convenience and which have no  independent existence whatsoever (though scientists are rather loath to admit this). So why bother with them ? Well, for a long time humanity did not bother with such things, getting along perfectly well with more rough and ready but also more user-friendly systems like the local reference point directional system, or the ‘person who looks like so-and-so’ reference system. It is only when society became urban and started manufacturing its own goods rather than taking them directly from nature that such things as  geometrical systems and co-ordinate systems became necessary. The great advantage of the GPS or rectangular  three-dimensional co-ordinate system is that such systems are universal, not local, though this is also their drawback. Such artifices give us a way of fixing the position of  any object anywhere,  by using three, and only three, numbers. Using topological concepts such as ‘on’, ‘under’, ‘behind’ and so on, we commonly need more than three directional terms and the specifications tend to differ markedly depending on the object we are looking for, or the person we are talking to. But the ‘scientific’ co-ordinate system works everywhere ― though it is useless for practical purposes if we do not know, cannot see or remember the point to which everything is related. When out walking, the scientific system is only necessary when you are lost, i.e. when the normal local reference point system has broken down. 
Anyone who went hiking and looked at their computer every ten minutes to check on their position would be a fool and, if ever deprived of electronic devices, would never be able to find their way in the wilderness because they would not be able to pick up the natural cues and clues.
Why rectangular axes and co-ordinates? As a matter of fact, we  sometimes do use curved lines instead of straight ones since this is what the lines of latitude and longitude are, but human beings, when they do think quantitatively, almost always tend to think in terms of straight lines, squares, cubes and rectangles, shapes that do not exist in Nature (Note 1). The ‘Method of Exhaustion’, ancestor of the Integral Calculus, was essentially a means of reducing the areas and volumes of irregular figures to so many squares (Note 2). I have indeed sometimes wondered whether there might be an intelligent species for whom circles were much more natural shapes than straight lines and who would evaluate the area of a square laboriously in terms of epicycles whereas we evaluate the area of a circle by turning it into so many half rectangles, i.e. triangles. Be that as it may, it seems that human beings cannot take too much curved reality and I doubt if even a student of General Relativity ever thinks in curvilinear Gaussian co-ordinates.
Now, if we wish to accurately pinpoint the position of an object, we can do so, as stated, using only three distances plus the specification of the origin. (In the case of an object on the surface of the Earth we use latitude and longitude with the assumed origin being the centre of the Earth, the height above sea level being the third ‘co-ordinate’.) However, this is manifestly inadequate if we wish to specify the position, not of an object, but of an event. It would be senseless to specify an occurrence such as a tap on the window or a knife thrust to the heart by giving the distance of the occurrence from the right hand corner of the room in which it took place. It shows what a space-orientated culture we live in that it is only relatively recently that it has been found necessary to tack on a ‘fourth’ dimension to the other three and a lot of people still find this somewhat bizarre. For certain cultures, Indian especially, time seems to have been more significant than space (inasmuch as the two can be separated) and, had modern science developed there rather than in the West, it would doubtless have been very different. For a long time the leading science and branch of mathematics in the West was Mechanics, which studies the motions of rigid bodies that change little over brief periods of time. But from the point of view of Eventrics, what we familiarly call an ‘object’ is simply a relatively persistent event-cluster and the only reason we do not need to specify a time co-ordinate is that this object is assumed to be unchanging at least over ‘small’ intervals of time. Even the most stable objects are always changing, or rather they flash into existence, disappear and (sometimes) reoccur in a more or less identical shape and position with respect to nearby ‘objects’.
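
The need for a fourth number can be shown in a few lines. A minimal sketch (the `Event` type is invented for illustration): two taps on the same window pane at different instants share all three spatial co-ordinates yet remain distinct occurrences.

```python
from collections import namedtuple

# Hypothetical sketch: an occurrence needs a time co-ordinate as well as
# three spatial ones.
Event = namedtuple("Event", ["x", "y", "z", "t"])

tap1 = Event(3, 5, 7, t=0)
tap2 = Event(3, 5, 7, t=4)   # same spot, later instant

print(tap1 == tap2)                # -> False: same place, different events
print(tap1[:3] == tap2[:3])        # -> True: spatial co-ordinates coincide
```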
Instead of somehow tacking on a mysterious ‘fourth dimension’ to the familiar three spatial dimensions, Ultimate Event Theory posits discrete ‘globules’ or three-dimensional grids spreading out in all possible directions, each of which can receive one, and only one, ultimate event. The totality of possible positions for ultimate events constitutes the enduring base-entity which I shall refer to as K0, or rather the only part of K0 with which we need to concern ourselves at the moment. It is misleading, if not meaningless, to refer to this backdrop or substratum as ‘Space-Time’. Although I believe that ‘succession’ and ‘co-existence’ really do exist ― since events can and do occur ‘in succession’ and can also exist ‘at the same moment’ ― ‘Space’ and ‘Time’ have no objective existence though one understands (sometimes) what people have in mind when they use the terms. For me ‘Space’ and ‘Time’ are basically mental constructs but I believe that the ultimate events themselves really do exist and likewise I believe that there really is an ‘entity’ on whose ‘surface’ ultimate events have occurrence. Newton fervently believed in the ‘absolute’ nature of Space and Time but his contemporary Leibniz viewed ‘Space’ as nothing but the sum-total of instantaneous relations between objects and some contemporary physicists such as Lee Smolin (Note 3) take a similar line. For me, however, if there are events there must be a ‘somewhere’ on or in which these events can and do occur. Indeed, I take the view that the backdrop is more fundamental than the ultimate events since they emerge from it and are essentially just momentary surface disturbances on it, froth on the ocean of K0.
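
The ‘one ultimate event per globule’ assumption can be sketched as a sparse grid keyed by four co-ordinates. Everything here (the class name, the method names) is hypothetical, simply a way of making the occupancy constraint explicit:

```python
# Illustrative sketch only: each grid position on K0 can receive one,
# and only one, ultimate event.

class Locality:
    def __init__(self):
        self._cells = {}            # sparse: only occupied positions stored

    def occur(self, pos, event):
        """Place an ultimate event at a grid position, if it is free."""
        if pos in self._cells:
            return False            # position already taken
        self._cells[pos] = event
        return True

K0 = Locality()
print(K0.occur((0, 0, 0, 0), "e1"))   # -> True
print(K0.occur((0, 0, 0, 0), "e2"))   # -> False: at most one event per cell
```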
For present purposes it is, however, not so very important how one views this underlying entity, or what one calls it : it is sufficient to assume that it exists and that ultimate events are localized on or within it. K0 is assumed to be featureless and homogeneous, stretching indefinitely in all possible directions. For most of the time its existence can be neglected since all that we can observe and experiment with are the events themselves and their inter-relations. In particular, K0 does not exert any ‘pressure’ on event-clusters or offer any noticeable resistance to their apparent movements although it does seem to restrict them to specific trajectories. As Einstein put it, referring to the ether, “It [the ether] has no physical effects, only geometrical ones”. (Note 4) In the terms of Ultimate Event Theory, what this means is that there are, or at least might be, ‘preferred pathways’ on the surface of K0 and, other things being equal, persisting event-clusters will pursue these pathways rather than others. Such pathways and their inter-connections are inherent to K0 but are not fixed for all time because the landscape and its topology is itself affected by the event-clusters that have occurrence on and within it.
Even though I have argued that co-ordinate systems are entirely man-made and have no independent reality, in practice I have found it impossible to proceed without an image at the back of my mind of a sort of fluid rectangular co-ordinate system consisting of an indefinite number of positions where ultimate events can and sometimes do occur. Ideally, instead of using two-dimensional diagrams for a four-dimensional reality, we ought to have a three-dimensional framework, traced out by lights for example, which appears and reappears at intervals ― possibly something like this is already in use. The trajectory of an object (i.e. repeating event-chain or event-cluster) would then be traced out, frame by frame, on this repeating three-dimensional co-ordinate backdrop. This would be a far more truthful image than the more convenient two-dimensional representation.
One point should be made at once and cannot be too strongly stressed. Whereas the three spatial dimensions co-exist and, as it were, run into each other ― in the sense that a position (x, y, z) co-exists alongside a position (x1, y1, z1) ― ‘moments of time’ do not co-exist. This may seem obvious enough since if ‘moments of time’ really did co-exist they would be simultaneous, in effect the ‘same’ moment. And if all moments co-existed there would be nothing but an eternal present and no ‘time’ at all (Note 4).  But there is an unexpected and drastic consequence : it means that for the next ‘moment in time’ to come about, the previous one must disappear and along with it everything that existed at that moment. If we had an accurate three–dimensional optical model, when the lights defining the axes were turned off, everything framed by the optical co-ordinate system, pinpoints of coloured light for example, would by rights also disappear.
Rather few Western thinkers and scientists have ever realized that there is a problem here, let alone resolved it. (And there is no problem if we assume that existence and ‘Space-Time’ and everything else is ‘continuous’ but I do not see how this can possibly be the case and Ultimate Event Theory is based on the hypothesis that it is not the case.) Most scientists and philosophers in the West have assumed that it is somehow inherent in the make-up of objects and, above all, human beings to carry on existing, at least for a certain ‘time’. Descartes was the great exception : he concluded that it required an effort that could only come from God Himself to stop the whole universe disintegrating at every single instant. To Indian Buddhists, of course, the ephemeral nature of reality was taken for granted, and they ascribed the re-appearance and apparent continuity of ‘objects’, not to a supernatural Being,  but to the operation of a causal Force, that of ‘Dependent Origination’ (Note 4). Similarly, in Ultimate Event Theory, it is not the appearance or disappearance of ultimate events that requires explanation ― it is their ‘nature’, if you like,  to flash into and out of existence ― but rather it is the apparent solidity and continuous existence of ‘things’ that requires explanation. (Note 5) This is taking the Newtonian schema one step back : instead of ascribing the altered motion of a particle to an external force, it is the continuing existence of a ‘particle’ that requires a ‘force’, in this case a self-generated one.
Although Relativity and other modern theories have done away with all sorts of things that classical physicists thought practically self-evident, the idea of a physical/temporal continuum is not one of them. Einstein, no less than Newton, believed that Space and Time were continuous. “The surface of a marble table is spread out in front of me. I can get from any point on this table to any other point by passing continuously from one point to a ‘neighbouring’ one and, repeating this process a (large) number of times, or, in other words, by going from point to point without executing ‘jumps’. (…) We express this property of the surface by describing the latter as a continuum” (Einstein, Relativity p. 83).  To me, however, it is not possible to go from one point to another without a ‘jump’ as Einstein puts it — quite the reverse, physical reality is made up of ‘jumps’. Also, the idea of a neighbourhood is quite different in Ultimate Event Theory : there are not an ‘infinite’ number of positions between a point where an ultimate event has occurrence and another point where a different ultimate event has occurrence (or will have, has had, occurrence) but only a finite number. This number is not relative but absolute (though the perceived or inferred ‘distances’ may differ according to one’s standpoint and state of motion). And, of course, the three-dimensional co-ordinate system we find appropriate need not necessarily be rectangular but might be curvilinear as in General Relativity.   S.H.   8 July 2012

Note 1 :  Extremely few natural objects have the appearance of our standard geometrical shapes, and the only ones that do are microscopic like rock crystals and radiolaria.

Note 2 : Geometry means literally ‘land measurement’ and was first developed for practical reasons —“According to most accounts, geometry was first discovered in Egypt, having had its origin in the measurement of areas. For this was a necessity for the Egyptians owing to the rising of the Nile which effaced the proper boundaries of everyone’s lands” (Proclus, Summary). Herodotus says something similar, claiming that the Pharaoh Ramses II distributed land in equal rectangular plots and levied an annual tax on them but that, subsequently, owners applied for tax reductions when their land got swept away by the overflowing Nile. To settle such disputes surveyors toured the country and had to work out accurately how much land had been lost. See Heath, A History of Greek Mathematics Vol. 1 pp. 119-22 from which these quotations were taken.

Note 3: “Space is  nothing apart from the things that exist; it is only an aspect of the relationships that hold between things” (Lee Smolin, Three Roads to Quantum Gravity, p. 18)

Note 4 : In the terms of Ultimate Event Theory, what this means is that there are, or at least might be, ‘preferred pathways’ on the surface of K0 and, other things being equal, persisting event-clusters will pursue these pathways rather than others. Such  pathways and their inter-connections are inherent to K0  but are not fixed for all time because the landscape and its topology is itself affected by the event-clusters that have occurrence on and within it.

Note 5 : This is the same force that operates within a single existence, or causal chain of individual existences, in which case it is named Karma (literally ‘activity’). The entire aim of meditation and related practices is to eliminate, or rather to still, this force which drives the cycle of death and rebirth. The arhat (Saint?) succeeds in doing this and is thus able to enter the state of complete quiescence that is nirvana ― a state to which, eventually, the entire universe will return. The image of something completely still, like the surface of a mountain lake, being disturbed and these disturbances perpetuating themselves could prove to be a useful schema for a future physics. It is a very different paradigm from that of indestructible atoms moving about in the void which we inherit from the Greeks. In the new paradigm, it is the underlying and invisible ‘substance’ that endures while everything we think of as material is a passing eruption on the surface of this something. The enormous event-cluster we currently call the ‘universe’ will thus not expand for ever, nor contract back again into a singularity : it will simply evaporate, return to the nothingness (that is also everything) from which it once emerged. In my unfinished SF novel The Web of Aoullnnia, the future mystical sect the Yther make this idea the cornerstone of their cosmology and activities ― Yther is a Lenwhil Katylin term which signifies ‘ebbing away’. Interested readers are referred to my personal site

Anyone who presents a radically new scientific theory must expect hostility, ridicule and stupefaction. Up to a point (up to a point) this is even healthy, since a society where new ways of viewing reality hove into view on the horizon every two years or so would be bewildering in the extreme. What generally happens is that the would-be innovator is told that everything that is true in the new theory is already contained in the current theory, while everything that differs from the existing theory is almost certainly wrong. The new theory is thus either redundant or misguided or both.
And yet we need new theories, by which I do not mean extensions of the current paradigm, or patched up versions, but something that really does start with substantially different first principles. Viable new ways of viewing the world are not easy to come by, and inventing a symbolic system appropriate to the new view is even more difficult.

Now, it is quite legitimate to keep in full view features of the official theory that are solidly based, provided one rephrases them in terms of the competing theory. Ideally, one would like to see the assumptions of the new theory leading to something similar but, clearly, it is all too easy to fudge things up when one knows where one would like to end up. Such an attempt is, however, instructive since it focuses attention on what extra assumptions apart from the basic postulates are necessary if one wants to find oneself in a certain place. But if predictions of the new theory don’t differ from the existing one, there is little justification for it, although the new theory may still have a certain explanatory power, intuitive or otherwise, which the prevailing theory lacks.
Now, at first sight, Ultimate Event Theory may appear to be nothing more than an eccentric and pretentious way of presenting the same stuff. Instead of talking of molecules and solid objects, Eventrics and Ultimate Event Theory speak of ‘event-clusters’, ‘event-chains’ and the like. But since the ‘laws’ governing these new entities must, so the argument goes, be the very same laws governing solid bodies and atoms, the whole enterprise seems pointless. Certainly, I am quite happy to do mechanics without continually reinterpreting ‘body’ as ‘relatively persistent event-cluster’ — I would be crazy to behave otherwise. However, as I examine the bases of modern science and re-interpret them in terms of the principles of Eventrics, I find that there are marked differences not only in the basic concepts but, occasionally, in what can be predicted. There are, for example, Newtonian concepts for which I cannot find any precise equivalent and the modern concept of Energy, not in fact employed by Newton, which has become the cornerstone of modern physics, is conspicuously absent (Note 1). There are also predictions that can be made on the basis of UET that completely conflict with experiment and observation (Note 2) but at least such discrepancies focus my attention on this particular area as a problematic one.
I start by examining Newton’s Laws of Motion, perhaps the most significant three sentences ever to have been penned by anyone anywhere.
They are :
1. Every body continues in its state of rest or uniform straight-line motion unless compelled to change this state by externally imposed forces.
2. Change of a body’s state of motion is proportional to the applied force and takes place in the direction of the straight line in which the force acts.
3. To every action there is an equal and oppositely directed reaction.

How does all this shape up in terms of Ultimate Event Theory?
      It is first necessary to make clear what ‘motion’ means in the context of Ultimate Event Theory (UET). Roughly speaking, motion is “being at different places at different times” (Bertrand Russell). Yes, but what is it that appears at the different places and what and where are these ‘places’? The answer in UET is : the ‘what’ are bundles of ultimate events, or, in the simplest case, a single ultimate event, while the ‘places’ are three-dimensional grid-positions on the Locality, K0, where all ultimate events are motionless. Each constituent of physical reality is, thus, always ‘at rest’ and it is only meaningful to speak of ‘motion’ with respect to event-chains (sequences of ultimate events). But these event-chains do not themselves ‘move’ : the constituent events flash in and out of existence while remaining somehow bonded together (Note 3). It is all like a rhythmically flashing lamp that we carry around from room to room — except that there is no lamp, only a connected sequence of flashes. As Heraclitus put it, “No man ever steps into the same river twice”.
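
The flashing-lamp picture can be sketched in a few lines. Everything below is an invented illustration, reduced to one spatial dimension: an event-chain is modelled as a bare list of (position, instant) pairs, and ‘motion’ is simply the difference between the positions of successive flashes.

```python
# Hypothetical sketch: nothing moves; 'velocity' is read off from
# successive re-occurrences, like frames of a film.

at_rest = [(0, t) for t in range(5)]       # re-occurs at the same grid cell
moving  = [(2 * t, t) for t in range(5)]   # re-occurs 2 cells further each tick

def apparent_displacements(chain):
    """Displacement between each flash and the next one in the chain."""
    return [b[0] - a[0] for a, b in zip(chain, chain[1:])]

print(apparent_displacements(at_rest))   # -> [0, 0, 0, 0]
print(apparent_displacements(moving))    # -> [2, 2, 2, 2]
```

Both chains consist entirely of motionless events; only the second would be perceived as a body ‘in motion’.
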
To clear the ground, we might thus take as the

Zeroth Law of Motion : There is no such thing as continuous motion.

We now introduce the idea of the successive appearance and disappearance of events which replaces the naïve concept of continuous motion.

First Law.  The ‘natural tendency’ of every ultimate event is to appear once on the Locality at a single spot and never reoccur.

(Remark. When this does not happen, we have to suppose that something equivalent to Newton’s ‘Force’ is at work, i.e. something that is not itself composed of ultimate events but which can affect them, for example displace them from the position where they would otherwise be expected to occur, or simply enable them to re-occur (repeat more or less identically).)

Second Law. When an event or event-cluster acquires ‘Dominance’ it is capable of influencing other ultimate events, but it must first of all acquire ‘Self-Dominance’, the power to repeat (nearly) identically.

From here on, the Laws are rephrasings of Newton though perhaps with an added twist:

Third Law.  An ultimate event, or event-cluster, that has acquired self-dominance continues to repeat (nearly) identically in a straight line from instant to instant except when subject to the dominance of other event-chains.  

(Remark: It is an open question whether an event or event-cluster that has acquired ‘Self-Dominance’, will carry on repeating indefinitely in this way, but for the moment we assume that it does.)

Fourth Law. The dominance of one event-chain over another is measured by the extent of the deviation from a straight line multiplied by the ‘event-momentum’ of the constituent events of the event-cluster.

(Remark. I am still searching  for the exact equivalent of Newton’s excellent, and by no means obvious,  concept of ‘momentum’ which gives us the ‘quantity’ of ‘matter-in-motion’ so to speak. Event-clusters  obviously differ in their spread (number of grid-positions occupied), their density (closeness of the occupied places) and the manner of their reappearance at successive instants, but there are other considerations also, such as ‘intensity’ which need exploration.)

Fifth Law.
In all interactions between event-clusters the dominance of one event-cluster over another is met by an equal and oppositely directed subsequent reverse dominance.  

(Remark. Note that Newton’s Third Law (the Fifth in this list) is the only one of his laws that refers to events only (action/reaction) without mentioning  bodies.)

Note 1. Newton did not use the term energy and even as late as the mid-nineteenth century physicists like Mayer and Helmholtz, who did so much to develop the energy concept, still talked of ‘Force’. William Thomson (Lord Kelvin) seems to have been the first physicist to introduce the term into physics.

Note 2. For example, I find I am unable to explain why what we call light does not pass right through every possible obstacle as neutrinos almost always do  — clearly this will require some new assumption.

Note 3 : No event is ever exactly the same as any other, since, even if two ultimate events are alike in all other respects, they do not occupy the same position on the Locality.

SH 23/7/12

The forces operating in a market are both exogenous, coming from the outside, and endogenous, coming from the inside. Broadly speaking, those from the outside are studied in fundamental analysis and those from the inside in technical analysis. Fundamental analysis concentrates on such things as a company’s assets, balance sheet, style of management, state of the market and so forth. Technical analysis ultimately bases itself on human behaviour since the trends and counter-trends originate in the minds, or rather in the emotional states, of the primates we are. But scientists are wrong to see the market as simply another complex system. Herding animal behaviour can certainly be observed (Note 1) but humans are at least partially aware of what is going on and can feed their views back into the system, thus modifying it. Also, the peculiarity of the market is that predator and prey are mixed together and each player takes on both roles at different moments.
What do we see when we examine charts and graphs of the Stock Market? We practically never see straight lines and smooth curves : market activity proceeds by short, sharp bursts which in the short term tend to wipe each other out while nonetheless, allegedly, dancing to a hidden deeper music. The typical zig-zag shape of the Dow and the FTSE demonstrates how a burst of trading activity in one direction almost immediately gives rise to a burst in the opposing direction. However, since this correcting burst, a fortiori trend, is reactive not pro-active, it is always one step behind. Like Alice in Wonderland, the counter-trend has a job even keeping up, let alone overtaking the main trend.
But eventually this is precisely what happens.  A tipping point occurs when the counter trend  ousts the main trend and itself becomes dominant. A stock is overvalued, certain influential traders realize this, sell their shares, the value of the stock depreciates and so on. Alternatively, a company starts recovering from a bad time, one or two sharp traders note this, buy when the price is temptingly low, and initiate a rally.
Need there be a reversal at all? Yes, because otherwise the market would cease to function : there would be a credit bubble so big that it would swallow up everything else, or a crash so devastating that it would bring down the entire economy. If there is danger of this, those in charge immediately end the game by closing down the Stock Market and, if necessary, forbidding further trading until they change the rules. In the tulip bubble of the 17th century, the price of a single tulip allegedly rose so high that it equalled the price of a house : it could clearly not go up as far as the value of the entire country.
“The trend is your friend”  — but is it?  The people who win out in the Stock Market are precisely those who buck the trend, sell stock during a rising trend that is, according to them, about to peak (thus contributing to this  very levelling off), or buy stock when the  market value, according to them, is about to level off and subsequently rise. The art is in judging the relative strengths of trend and counter-trend and throwing one’s own weight into the scales.
There are, in the Stock Market, always  two opposed forces at work since all traders, inasmuch as they are traders, rather than buy-to-hold investors, are both buyers and sellers. Traders in shares do not generally wish to keep the shares they acquire and buyers of futures certainly do not want the vast quantities of corn or rice they buy in anticipation. They have a single purpose,  to make money, but to achieve this aim at least two completely opposed actions are required, buying and selling the very same item, often many times over. This is the main reason why the Stock Market is such a peculiar place. To make a profit, a trader must successively live on both sides of the Great Divide, like the alleged Afghan fighters in the Civil War who drew two salaries by fighting for one side during the day and the other side during the night. On top of that, each trader is, or aspires to be, an ‘objective’ analyst who has a God’s eye view of the battlefield from up above.
The market is a unique place in that volatility is the norm rather than the exception : unlike most ‘curves’ that are studied in mathematics, the ‘curves’ in market graphs are always jagged and nearly always go up or down, rarely sideways.  The market is in a permanent state of disequilibrium : indeed, if it were otherwise no money could be made, or very little. Can any very general ‘laws’ be formulated which might apply to other event environments?
First, a definition. In terms of Eventrics a (market) trend is an event-chain that has persistence (repeats itself), direction (goes up or down) and dominance (has the power to attract other events and make them copy itself). In this context, the ‘direction’ is price movement while it is the increasing dominance of an event-chain that provokes herding behaviour, sometimes producing hysteria. Momentum, a vector quantity, may be defined as volume × dominance, i.e. the ‘spread’ of an event-chain multiplied by its power to influence other events.
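
Taking the definition literally, momentum could be computed as below. The numbers and the ±1 direction convention are invented for illustration, not drawn from market data:

```python
# Toy reading of the definition above: momentum = volume x dominance,
# signed by the trend's direction.

def trend_momentum(volume, dominance, direction):
    """direction: +1 for an up-trend, -1 for a down-trend."""
    return direction * volume * dominance

up   = trend_momentum(volume=10_000, dominance=0.8, direction=+1)
down = trend_momentum(volume=4_000, dominance=0.5, direction=-1)

print(up, down)     # -> 8000.0 -2000.0
print(up + down)    # -> 6000.0: the up-trend still dominates on balance
```
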

Rule 1   A market event-chain’s momentum is rarely static but tends to increase or decrease, and to increase or decrease irregularly (not linearly). As the saying goes, “Nothing succeeds like success” and one ought to add, “Nothing fails so successfully as failure”.

But :

Rule 2   Every burst, a fortiori trend, ‘normally’ gives rise to a correcting burst/trend whose direction is opposite to the main trend but which always  lags slightly behind the main trend.

Rule 3  At a certain point (tipping point) the relation main trend/correcting trend reverses.

Rule 4  In certain circumstances the correcting trend  disappears altogether with devastating consequences for the entire event-environment : examples are a runaway credit bubble or, conversely, a run on a bank.
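
Rules 1 to 4 can be run as a deterministic toy model. All the dynamics below (a main trend whose momentum fades, a counter-trend that gathers strength) are invented for illustration; the only point is that a lagging correction either eventually overtakes the main trend (Rule 3) or, if it cannot grow, never does (Rule 4):

```python
# Toy model only: not a trading rule, just the tipping-point logic.

def tipping_point(main=10.0, counter=1.0, decay=0.9, growth=1.4, steps=50):
    """Return the step at which the counter-trend first exceeds the main
    trend, or None if the correction never arrives within `steps`."""
    for step in range(steps):
        main *= decay        # the main trend's momentum slowly fades
        counter *= growth    # the correcting trend gathers strength (Rule 2)
        if counter > main:   # Rule 3 : the tipping point, trend reversal
            return step
    return None              # Rule 4 : the correction never comes

print(tipping_point())             # -> 5: the reversal arrives
print(tipping_point(growth=0.9))   # -> None: counter-trend decays too
```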

These rules are merely extrapolations from the data and will not enable you to actually make money on the Stock Exchange. To do this you need either to be lucky or be at once highly intuitive and self-disciplined (a rare combination); you also need to have a good understanding of what is going on in the outside world politically and economically and be able to derive rather more precise rules governing the fluctuations of share prices than the above (to say the least).
So what are these more precise rules? Elliott Wave Theory tells us and I refer the reader to the writings of Elliott himself or his most lucid and successful follower, Prechter. There is, on the face of it, no special reason why ‘trends’ should be made up of three dominant and two counter-dominant ‘waves’ as Elliott believed. However, if this turns out actually to be the case, it can only be that there is some deep-rooted physical reason. Elliott himself writes

“The forces that cause market trends have their origin in nature and human behaviour and can be measured in various ways. (…) Forces travel in waves….  [they] can be forecast with considerable accuracy by comparing the structure and extent of the waves”  (Elliott, Nature’s Law p. 81)

From my point of view, it is misleading to speak of market trends as composed of waves, since, mathematically speaking, a ‘wave’ is continuous whereas market activity is discrete, is composed of specific trades and these trades proceed by bursts, not continuously (Note 2). Also, Elliott does not say what causes these waves in the first place : he seems to think that they are universal and omnipresent — but are they ? If they are, why don’t we find them in other contexts?          S. H.      (30/06/12)

Note 1  “Shared mood trends [amongst traders] appear to derive from a herding impulse governed by the phylogenetically ancient, pre-reasoning portions of the brain. This emotionally charged mental drive developed through evolution to help animals survive, but it is maladaptive to forming successful expectations concerning future financial valuation” (Prechter, Conquer the Crash p. 25). As E.O. Wilson said in an interview : “We have Palaeolithic emotions, medieval institutions and godlike technology. That’s dangerous” (New Scientist, 21 April 2012)

Note 2   “Whenever a market ‘gaps’ up or down on an opening, it simply registers a new value on the first trade, which can be conducted by as few as two people” (Prechter, Conquer the Crash p. 93)     S.H.

         The image is  Perpetual Motion by  June Mitchell   (All rights reserved) .

The test of a model depends on what it can predict, though this is not the only consideration : models which stimulate the mind because they are ‘intuitively clear’ have proved to be extremely helpful in the development of science even if they were eventually discarded.
Anyone wishing to lay the foundations for a new science on the basis of preliminary assumptions must steer a narrow course between two extremes. On the one hand, he must beware of wrenching unjustifiable conclusions from the premises because he ‘knows exactly where he wants to land up’. On the other hand, there is no point in thrashing around in the dark and hoping for the best : once it is clear that one line of argument is leading nowhere, it must be abandoned. How do we know it is leading nowhere ? Often we don’t, but we can appeal to our own or others’ experience to judge how we are progressing. For example, de Sitter’s model derived from Einstein’s Equations of General Relativity was clearly wrong (or rather not applicable to the case that concerned us) since it predicted a universe completely empty of matter. In other cases, early scientists were eventually proved ‘right’ (though not necessarily for the reasons they believed at the time), for example Huygens’ wave theory of light.

        I shall attempt to avoid these two extremes. My sketchy knowledge of advanced physics and current experimentation (LHC and so on) could actually be an advantage in the sense that I am by no means sure ‘where I want to land up’, so I am less likely to fudge things. As for the second danger, a manifestly absurd conclusion will (hopefully) prompt me to re-examine my original assumptions and add to them, and, if this does not work, simply admit that something has gone wrong. But at this stage in the game it would be unfair to expect, and even foolish to desire, anything but the broadest qualitative predictions : being too specific in one’s forecasts too early can all too easily block off diverging avenues worth exploring.

Before drawing any conclusions, I will briefly review, in an informal manner, the preliminary assumptions on which the whole of Ultimate Event Theory is based. In a nutshell, I consider that “the universe is composed of events rather than things”. Although I have listed some properties of ‘events’ as I see them, at this stage I have to assume that the notion of an ‘event’, or at any rate the distinction between an event and a ‘thing’, is ‘intuitively clear’. Ultimate events associate together to form ‘ordinary’ events but cannot themselves be further decomposed — which is why they are called ‘ultimate’. They occupy ‘spots’ on the ‘Locality’ — the latter being, for the moment, nothing more than a sufficiently large expanse able to accommodate as many ultimate events as we are likely to need. Ultimate events are ‘brief’ : they flash into and out of existence, lasting for the space of a single ‘chronon’, the smallest temporal interval that can exist, in this ‘universe’ at any rate. A definite ‘gap’ exists between successive appearances of ultimate events : physical reality is discontinuous through and through (Note 2). Some ultimate events acquire what I call ‘dominance’ : this enables them to repeat identically, perhaps associate with other stabilized ultimate events and influence event clusters. ‘Objects’, a category that includes molecules and some elementary particles (but perhaps not quarks), are viewed as relatively persistent, dense event-clusters. Dominance is not conserved on the grand scale : there will always be some ultimate events that pass out of existence for ever, while there are also ultimate events which come into existence otherwise than by a causal process (random events) (Note 4).

The predictions are as follows:

(1)  There will always be an irreducible background ‘flicker’ because of the discontinuity of physical reality. This ‘rate of existence’ varies : essentially it depends on how many positions on the Locality are ‘missed out’ in a particular event-chain. The rate of most event-chains is so rapid that it is virtually imperceptible — though, judging by certain passages in the writings of Plato and J-J Rousseau, some people seem to have thought they perceived it. But there should be some ‘extended’ event-chains whose flicker can be recognized by the instruments we now have, or will shortly develop (Note 1).

(2)  The current search for ‘elementary particles’ will turn up a very large quantity of heterogeneous ‘traces’ which are too brief and too rare to be dignified with the title of ‘elementary particle’. The reason for this is that the vast majority of ultimate events do not repeat at all : they flash into existence and disappear for ever.

(3)  The number of ‘elementary particles’ detected by colliders and other instruments will increase, though some will never be detected again : this is so because ultimate events are perpetually forming themselves into clusters but also ‘breaking up’ into their component parts, in some cases dematerializing completely.

(4) Certain ‘elementary particles’ will pass clean through solid matter without leaving a trace : this will tend to occur whenever the (relative) speed of a very small event cluster is very large and the ‘thickness’ of the lumped cluster is small in the direction of travel (Note 3).

(5)  There will always be completely new event-clusters and macroscopic events, so the future of the universe is not completely determinate. This is so because not all ultimate events are brought into existence by previously existing ones : some ultimate events originate not in K1 (roughly what is known as the physical universe) but in K01, the source of all events. If these ‘uncaused events’ persist, i.e. acquire self-dominance, or come to dominate existing event clusters, something completely new will have come into existence — though whether it persists depends on how well it can co-exist with already well-established event-clusters. In brief, there is an irreducible random element built into the universe which stops it being fully determinate.

Notes :

(1) Since putting up this post on January 18th, I have come across what might be confirmation (of a sort) of this prediction. The February 2012 edition of Scientific American includes a mind-blowing article, Is Space Digital? by Michael Moyer. “Craig Hogan, director of the Fermilab Center… thinks that if we were to peer down at the tiniest subdivisions of space and time, we would find a universe filled with an intrinsic jitter, the busy hum of static. This hum comes not from particles bouncing in and out of being or other kinds of quantum froth that physicists have argued about in the past. Rather Hogan’s noise would come about if space was not, as we have long assumed, smooth and continuous, a glassy backdrop to the dance of fields and particles. Hogan’s noise arises if space is made of chunks. Blocks. Bits.”  This is not just a passing thought, for Hogan “has devised an experiment to explore the buzzing at the universe’s most fundamental scales.”
I originally thought that what I call the ‘flicker of existence’ would remain forever beyond the reach of our instrumentation and said as much in the original draft of this post. However, after thinking about the amazing advances made already, I added, perhaps prophetically, “There should be some ‘extended’ event chains whose flicker can be recognized by the instruments we now have, or will shortly develop.”  Maybe Hogan’s is one of them.
However, I do not ‘buy’ the current trend of envisaging the universe as a super-computer — for Hogan my ‘flicker of existence’ is ‘digital noise’. The analogy universe/computer strikes me as being too obviously rooted in what is becoming the dominant human activity — computing. I would have thought the ‘universe’ had better things to do than just process information. Like what, for example ? Like bringing something new into existence from out of itself, actualizing what is potential. In a nutshell : the ‘universe’ (not a term I would choose) is creative, not computational. But I suppose one cannot expect trained scientists to see things in this light.    S.H. (7/2/12)

(2) Heidegger put it more poetically, “Being is shot through with nothingness”.

(3)   This happens because a rapid event cluster ‘misses out’ more event locations on its path, so the chance of the two clusters ‘colliding’, i.e. ‘competing’ for the same spots on the Locality, is drastically reduced.

(4) This is so because not all ultimate events are brought into existence by previously existing ones : some ultimate events originate not in K1 (roughly what is known as the physical universe) but in K01, the source of all events. If these ‘uncaused events’ persist, i.e. acquire self-dominance, or come to dominate existing event clusters, something completely new will have come into existence — though whether it persists depends on how well it can co-exist with already well-established event-clusters. In brief, there is an irreducible random element built into the universe. (This is quite apart from Quantum Indeterminacy, which in any case would disappear if a coherent ‘hidden variables’ theory replaces the orthodox one.)

I will discuss in a subsequent post whether modern experiment and observation give any support to these predictions.