English, like all Indo-European languages, is an ‘object orientated language’. It presents us with an object, he, she, it, then tells us something about it in the so-called predicate, She is dark-haired, intelligent, European, whatever. Alternatively, we are presented with two ‘things’ (organic or inorganic) and an action linking them together, I hit him. Never, except in the case of imperatives, do we have a verb standing alone, and even imperatives do not express any actual state of affairs but only a hypothetical or desired state of affairs (desired by the speaker) as in Come here. Whorf is one of the very few linguists to have noticed this :
“We are constantly reading into nature fictional acting entities, simply because our verbs must have substantives in front of them. We have to say ‘It flashed’ or ‘A light flashed’, setting up an actor, ‘it’ or ‘light’, to perform what we call an action, ‘to flash’. Yet the flashing and the light are one and the same!  The Hopi language reports the flash with a single verb, rehpi : ‘flash (occurred)’. There is no division into subject and predicate, not even a suffix like –t of Latin tonat, ‘it thunders’. Hopi can and does have verbs without subjects…”  (Whorf, Language, Thought and Reality p. 243)

Nouns and names are inert : they do not do anything, which is why a sentence which is just a list of names strikes us as incomplete. But, although we can’t say it in contemporary English, ‘hit’ is perfectly adequate on its own; it pinpoints the essential, the action. Even more adequate on its own would be ‘killing’ (it is ridiculous that we cannot say ‘birthing’ but only ‘giving birth’, as if we were giving something away to someone). Think of all the films you have seen which start with a shot ringing out and a dead body lying on the ground, e.g. The Letter, Mildred Pierce. In both these cases, the entire rest of the film is taken up with retracing the series of events leading up to this all-important event and putting names to bodies. But the persons revolve around the central event, not the reverse: they interest us in the context of this event, not otherwise. In a more recent film, The Descendants, the entire film revolves around a water-skiing accident and it is extremely clever that the victim is only shown in a coma : she is of no interest in herself and, had she not been put into a coma, there would have been nothing to make a film about. Such a dramatic event as a murder or violent death carries a tremendous weight of accessory events which would otherwise remain unknown, and equally such events leave a long ‘trail’ of future events.
An ‘event language’ would have a (macroscopic) event as the central feature, and the sentence structure would of necessity be different. I have conjectured that the basic structure would be :

1. Presentation of a block of ultimate events
2. Its/their localisation
3. Flow of causality (dominance), i.e. which event causes what.

 One or more of these elements may be absent : in baby-talk (3) is often lacking.

     Instead of the bland “He was hit by a car” we would have something more like “Crash/he/car”.

Event/dharma: the hitting, the collision
Localisation :  ‘He’   (whoever he is)
Cause (origin of dominance) :  car
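
Purely as an illustration (the names below are my own and form no part of the theory), the proposed three-part structure could be sketched as a small data type:

from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the three-part 'event sentence':
# (1) the block of ultimate events, (2) its localisation, (3) the flow of causality.
@dataclass
class EventSentence:
    event: str                   # the central event/dharma, e.g. "collision"
    localisation: Optional[str]  # where or to whom it occurs, e.g. "he"
    cause: Optional[str]         # origin of dominance, e.g. "car"; may be absent

crash = EventSentence(event="collision", localisation="he", cause="car")   # "Crash/he/car"
gurgle = EventSentence(event="gurgling", localisation="baby", cause=None)  # baby-talk: (3) lacking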

Implicit in the subject/predicate syntax is an underlying ‘world-view’ or paradigm.

Whorf, remarkably, conjectures that a Hopi ‘physics’ would be very different from our traditional Western physics but ‘equally valid’. It is foolish to assume that an alien civilisation would have essentially the same mathematics and physics that we do though, certainly, there would be a certain overlap. Whorf thinks the main differences would be, on the one hand, a concentration on ‘events’ rather than ‘things’ and, on the other, a deep concern with the interaction between the ‘subjective’ and the ‘objective’.

To be continued

In the last Post I introduced what I called the ‘Classic Theory of Causality’ which would seem to be based on the four following assumptions :

1. There exists a necessary connection between certain pairs of events, and by extension, longer sequences;
2. The status of the two events in a causal pair is not equivalent, one of the two is, as it were, active and the other, as it were, passive or acted upon;
3. The ‘causal force’ always operates forwards in time, it is transmitted from the earlier event to the later;
4. All physical occurrences, and perhaps mental occurrences as well, are brought about by the prior occurrence of one or more previous events.

Actually, the four assumptions listed, necessary though they are, do not suffice to distinguish the post-Renaissance Western theory of causality (CTC) from earlier beliefs and theories. Further restrictions are required to eradicate the remaining vestiges of magical pre-scientific thinking. The most important of these principles seem to be :
1. The No Miracle Principle
2. The Principle of Spatio-Temporal Continuity
3. The Principle of Energy
4. The Principle of Localization
5. The Mind/Body Principle
6. The Principle of Parsimony

 (1.) No Miracle Principle

By ‘miracle’ I mean an event brought about by a supernatural agency, by someone or something considered to be outside or beyond the physical universe.
Practically all belief systems prior to the seventeenth and eighteenth centuries included some appeal to supernatural beings, although advanced monotheistic systems had reduced these agencies to a single all-powerful one. The ‘new philosophy’ of Descartes and Galileo still required God as a ‘Prime Mover’ and there was some debate as to whether His intervention in the day to day working of the universe was needed. Leibnitz strenuously denied this, arguing that the contrary opinion was ‘blasphemous’ since it implied that God had been unable to make a perfectly functioning universe in the first place. Newton, for his part, found himself obliged to give God a small role in keeping the celestial machinery in working order, for example by stopping stars getting too close together. But by the early nineteenth century, mathematicians had so successfully improved on Newton’s schema that Laplace, when berated by Napoleon for writing a long book on the universe without mentioning its Creator, famously replied, “I had no need of that hypothesis”.
The most important feature of what we know as the ‘Enlightenment’ was the exclusion of the supernatural from the physical world : all physical events were explicable according to mechanical laws (and in principle all mental events as well). This attitude had certain immediate social benefits, leading for example to the prohibition of trials for witchcraft in France and Russia ─ since sorcery was an ‘imaginary crime’ ─ and to a certain degree of religious tolerance. It did, however, imply total determinism : all events were caused by previous events and in ways that could be predicted by way of Newton’s Laws, at least in theory. There was thus no place for free will and  there were no ‘chance events’.

 (2.) The Principle of Spatio-Temporal Continuity.  

This Principle is sometimes called the Law of Transmission by Contact. It is the celebrated prohibition of ‘action-at-a-distance’ that is held to mark the line of demarcation between magic and science. In magical belief systems the possibility of action at a distance is implicitly accepted : I can perform a rite here which will, say, cause the death of an animal or person several miles away. And I can perform a rite now which will cause it to rain tomorrow. According to classical scientific thinking, the only way I can produce an effect some way off is by some sort of chain reaction.
This Principle was assumed by Newton though he was embarrassed by the undoubted fact that his Law of Universal Attraction seemed to violate it, since the gravitational impulse was propagated ‘instantaneously’ throughout the entire universe. This incidentally was the main reason why continental scientists, while accepting Newton’s terrestrial mechanics, rejected his gravitational theory as being much too far-fetched.
Einstein in his Special Theory of Relativity also assumed the Principle and made it rather more precise by stating that no message, or causal impulse, could be transmitted at a speed faster than that of light. Since the speed of light in a vacuum is known, and believed to be fixed for all time, this put a serious restriction on the effects that any action of mine, or anyone else, could have : whole chunks of the universe were condemned to follow quite different destinies with no possibility of interaction between them simply because they were too far away from each other.
Einstein was such a firm believer in the principle that he remained to his dying day deeply unhappy about Quantum Mechanics  because QM seemed to involve a sort of ‘telepathic’ connection between distant particles — the term was used by Einstein himself  (Note 1).
Despite Quantum Mechanics, most of us still assume the complete validity of the principle. If I want to get from A to B, I have to traverse ‘the space’ in between : if it really were possible to ‘leap-frog’ in this way, being seen miles away minutes before a crime would not constitute an alibi. Contemporary physics has found it necessary to deduce (or invent) ‘virtual particles’ that carry force between nucleons and to propose that there are ‘gravitons’ that carry gravity : all this essentially because of the Principle of Spatio-Temporal Continuity.

(3.)  The Principle of Energy  

The Principle may be stated thus :

All effective action on or in the world requires an energy source.

This is a (deliberately) vaguer and more general version of the 1st Law of Thermodynamics, which states that the total amount of energy within a closed system remains constant.
We are familiar with the feeling of ‘being drained’ when we have concluded an exhausting task : it is as if something has been taken from us. What is this something? Not seemingly anything we can touch or see.
Newton did not deal in ‘energy’ : the term only entered the vocabulary of physics in the nineteenth century and even then with some hesitation (Note 2). Strictly speaking, ‘energy’ is ‘Potential Work’, and Work is ‘the first integral of Force with respect to Distance’. But few people, even professional scientists, envisage energy in this way. The 19th century scientific and technological concept of energy fitted in well with a much more primitive notion, that of an immaterial ‘power’ that  is all around us and can be harnessed by man, what the Polynesians called mana, the North American Indians wakanda and the ancient Chinese ch’i.
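In symbols (this is just the standard textbook formulation, added as a gloss on the wording above) : work is the first integral of force with respect to distance, and the First Law says that the total energy of a closed system does not change :

W = \int_{x_1}^{x_2} F \, dx , \qquad \Delta E_{\text{closed system}} = 0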
Why “can’t you get owt for nowt?” Essentially, because of the Principle of Energy : if you want results, you must expend energy, either yours or someone else’s. Even money only brings about changes in the world because it enables you to command machines or persons to do your bidding, and both persons and machines wear out.
Why are scientists sceptical about Uri Geller’s alleged ability to bend spoons by touching them? Because of the Principle of Energy. Although the human body does contain electro-magnetic energy, the source is too feeble to bring about such effects directly. Most so-called occult phenomena involve a violation of the Principle of Energy which is why the present society, rightly or wrongly, dismisses them out of hand — a well-known physicist of the time damned Professor Taylor for even investigating Uri Geller.

(4.) The Principle of Localization

This Principle does not, as far as I know, appear explicitly in the writings of any thinker, ancient or modern : it is nonetheless extremely important. Put crudely, it is the claim that everything must be somewhere. As such, this is a very restrictive requirement indeed — too restrictive perhaps. Where are all these gods, spirits, demons that obsessed and terrified ancient man?  Where are the “thrones, principalities and powers” of which Saint Paul speaks?  In the past they could be safely relegated to unexplored parts of the Earth, or to the sky. But we, having been to many of these places and taken photographs of them, know that these beings are not to be found there and, if astronomers are to be believed, there is not much place for them in distant galaxies either, since the same set of natural laws is applicable everywhere in the universe. So, according to the Principle of Localization, these alleged beings must either be ‘nowhere’ or ‘in people’s heads’.
The Principle of Localization, or a natural extension of it, also stipulates that an entity cannot be in more than one place at a time, i.e. the possibility of multi-localization is explicitly denied. This makes all the ‘voyages’ of seers, shamans and other visionaries ‘imaginary voyages’, not real ones. It is basically because of the Principle of Localization that scientists and rationalists do not take to the idea of a ‘Group Mind’ or ‘species mind’ — for where exactly are these collective entities?  To be sure, ‘entities’ like ‘the nation’ or ‘the government’ are not precisely localized either, but they are, physically speaking, in the last resort made up of human beings who are localizable.
Quantum Mechanics, of course, does not respect the Principle of Localization since, prior to an ‘act of measurement’, an elementary particle does not (according to the orthodox interpretation of the theory) have an exact position : it is ‘all over the place’. But this is one of the main reasons why Quantum Mechanics is so worrisome.

(5.) The Mind/Body Principle  

The Principle may be stated thus :

          The mind (inasmuch as it exists at all) is confined within the bounds of the body, and can only bring about changes in the world via the body, or an extension of the body.

The Mind/Body Principle is really nothing but a special case of the Principle of Spatio-Temporal Continuity combined with the Principle of Localization — for the mind, if it exists, must be somewhere.
For many physicists and psychologists mind is just a handy word : only the brain exists. Dr. Susan Blackmore, for example, writes : “I want to emphasize that consciousness cannot do anything. The subjectivity, the ‘what it’s like to be me now’ is not a force, or a causal agent that can make things happen” (The Meme Machine p. 238).

Dr. Blackmore does not believe in the ‘self’ or a controlling mind, but even people who are prepared to accept that there are such things are usually not prepared to accept that the mind can be separated at will from the body, can ‘have a life of its own’, so to speak. When a burglar ties up a victim’s hands and feet and gags him or her, the burglar feels pretty confident that the victim will be unable to send for help. Why does he believe this? Because of the Mind/Body Principle.
A certain Zen exercise tells you to “Stop that ship on the distant ocean”. Science considers such a feat to be impossible. Why? Because of the Mind/Body Principle.
Not all persons and societies subscribe to the Mind/Body Principle — indeed I am not sure that I subscribe to it myself. The young child imagines that it can affect the world around it simply by an act of will, and most early societies were firm believers in the power of Mind over Matter. “Hopi attitudes,” writes Whorf, “stress the power of desire and thought. To the Hopi one’s desires and thoughts influence not only his immediate actions but the whole of nature as well” (Whorf, Language, Thought and Reality).
The radical dualism which we inherit from the Greeks (rather than the Jews) goes right back to shamanism, which has been described as mankind’s earliest religion (Note 3). In trance the shaman’s body remains on the floor of the hut in full view, but his ‘spirit’ travels far away. Contemporary people who claim to have had OBEs (Out-of-the-Body Experiences) — and there are plenty of them — clearly do not accept the Mind/Body Principle. Such people claim, for example, to have seen themselves (or rather their bodies) undergoing surgery, and have described what went on. Official science takes a dim view of such claims — why? Because of the Mind/Body Principle.

 (6.)  Occam’s Razor or the Principle of Parsimony 

One version of the principle is “Entities are not to be multiplied without necessity” though, according to Bertrand Russell, Occam did not say this but did say something rather similar which, translated, goes “It is pointless to do with many what can be done with less”. In other words, keep it brief when it comes to assumptions.
This principle is completely different from all the others : not only can it not be proved or disproved, but we would still use it even if we knew it to be wrong. When Occam was formulating this very important principle, most people in Europe believed in the reality of spirits, angels, succubi and all sorts of non-corporeal entities. The existence of these ‘entities’ could hardly be disproved, and indeed they have crept back again in disguised form as ‘aliens’ and inhabitants of the unconscious, but enough was enough and there was, understandably, a pressing need to sweep the whole lot of them away and start again. However, the theory that appeals to fewer entities or assumptions is not necessarily the right one : Quantum Mechanics and Relativity are vastly more complex than Newtonian Mechanics but they are also more accurate.
Suffice it to say that the Principle of Parsimony or Occam’s Razor is something we can’t do without, true or not.

Summary

It is remarkable that all these laws are essentially negative in character. They can be summed up as

CTC 1     No Miracles and No Chance Events
CTC 2     No Leapfrogging with Space and Time
CTC 3     No Action without an Energy Source
CTC 4     No Entity without a place
CTC 5     No Physical Change caused by Mind alone
CTC 6     No Unnecessary Entities

These Six Principles are hardly ‘self-evident’ and, in the last resort, their validity depends on their usefulness. Since they have been the intellectual background to the greatest technological change in history ─ or, at least, since the invention of agriculture ─ there must be something in them. But there is no ‘reason’ to believe that they are the be-all and end-all, and plenty of reasons to believe that they are not. I am outlining them here as a preamble to my own tentative views within the framework of Ultimate Event Theory, and to see how my own theory of causality differs, which it does. The chief difference is that I envisage causality not as a matter of logic but as a force, perhaps the most fundamental force of all, since without it physical reality would be entirely chaotic, whereas it manifestly is not.

What about Free Will ?

Our society believes, broadly speaking, that adult, sane human beings are responsible for their actions and, in consequence, can be justifiably applauded and/or rewarded for certain acts, likewise justifiably reproved and punished for certain others. However, it is becoming increasingly ‘scientifically correct’ to disbelieve in free will completely though, curiously, this seems not to have the slightest effect on scientists’ and rationalists’ actual behaviour, which is more or less the same as everyone else’s — sometimes worse. Most scientists keep their scientific and private lives completely separate ─ a very convenient arrangement, also a pusillanimous one. Dr. Blackmore, however, discussing this very point, writes, “I cannot divorce my science from the way I live my life. If my understanding of human nature is that there is no conscious self inside, then I must live this way” (The Meme Machine p. 242). The ancient Greek philosophers took their philosophy very seriously indeed. Diogenes believed in the simple life and so slept in a tub (actually a large amphora). It is said that one sceptical Greek philosopher was drowning in a quagmire and appealed to another wandering by to help him, but the latter took no notice. The first philosopher survived and allegedly complimented the other on being consistent in his rational selfishness.     SH  17/10/12

______________________________________

Note 1  “One can escape from the conclusion [that the quantum theory is incomplete] only by assuming that either the measurement of S(1) ‘telepathically’ changes the real situation of S(2) or by denying independently real situations as such to things which are spatially separated from each other. Both alternatives appear to me entirely unacceptable” (Einstein, Autobiographical Notes).

Note 2  See Jennifer Coopersmith’s remarkably interesting and scholarly work, Energy, the Subtle Concept (OUP).

Note 3  See the fifth chapter of E.R. Dodds, The Greeks and the Irrational

“It seems to me that there is nothing for it but to take as fundamental the relation of one event causing another” (Keith Devlin, Logic and Information).

Eventrics is the general study of events and their interactions while Ultimate Event Theory is, if you like, the nuclear physics of Eventrics. In these Posts I shall hop about more or less at random from the macro to the micro domains while concentrating nonetheless on the latter. Eventually, when enough material has accumulated, I may siphon off certain portions of the theory but at this stage it is more instructive for the reader to see the theory taking shape piecemeal, which is how event-clusters and event-chains themselves form, rather than attempt to systematize. In any case, the person reading this who will take the theory further than I can hope to will not only need to have a clear understanding of the behaviour of events at their most elemental level but, above all, will need to become adept in navigating (or rather surfing) the enormous event currents of the present society if he or she is to give the theory the audience it deserves.
The macro-events we are concerned with on a day to day basis are huge event-clusters, as large as galaxies in comparison to their constituent ultimate events, and the collective behaviour of events may well differ from the behaviour of individual ultimate events as much as the behaviour of human crowds or gases seems to differ from that of their constituent molecules in object-based physics (Note 1). Certainly, large-scale bulk event-chains, what we call ‘historical movements’, seem to have their own momentum and evolve in their own manner, sweeping people along as if the latter were torn pieces of paper. Those persons who obtain positions of power are those who, by luck or good judgment, align themselves with forces they do not control but can up to a point use to their advantage (Note 2). An analysis by way of events rather than by way of persons or by way of electrons and molecules may well prove to be more appropriate in the macro domain and is indeed already followed by various writers.
Events, some of them at any rate, do not occur at random : they form themselves into recognizable event-chains and so require something to stop them falling apart. This something we call ‘causality’.  So-called primitive man, if anything, believed more firmly in causality than people today do : rather than meekly accept that certain events came about ‘by chance’, primitive societies considered there must always be some agency at work, malign or benevolent, natural or supernatural. Causality is not so much a law — for a law requires a lawgiver — as a force, perhaps the most basic and essential kind of force imaginable since without some form of causality everything would be a bewildering confusion where any event could follow any other event and there would be no persistent patterns of any kind whatsoever anywhere.  Though I am prepared to dispense with quite a number of things I am not prepared to dispense with causality, or its equivalent. In Ultimate Event Theory it appears under the name of ‘dominance’. As one of the half dozen basic concepts of the theory, it cannot really be described in terms of anything more elementary and I define it as “a coercive influence which certain event clusters and event-chains have over others”.  This is not much of a definition but will do for the moment. Before saying more about ‘dominance’ and how it differs, if at all, from causality, it may be as well to examine the ‘classic’ theory of causality as it appears in Western science up to the twentieth century and in rationalistic thinking generally.
Causality — what is causality?  The basis of all theories of causality is the notion, more precisely intuition, that certain pairs of events are connected up in a  necessary fashion whilst others (the majority) are not. It doesn’t “just happen to be the case” that someone falls over if I give him a sudden hard push : if  he did fall, we would say that I caused him to stumble. On the other hand, if I am shaking his hand and he happens to trip over a stone at this precise moment, I didn’t cause him to fall over — though it might look as if I did to an observer  some distance away.
According to Piaget, the newborn child lives in a ‘world’ without space and time, without permanent objects and without causality. The universe “consists of shifting and insubstantial tableaux which appear and are then totally reabsorbed” (Piaget). However, the notion of causality arises very early on, perhaps even as early as a few months if we are to believe certain modern researchers (Note 1). Certainly, the baby very soon realises that by making certain movements or noises it can successfully attract the attention of its mother, though whether this quite constitutes an ‘understanding’ of causality is debatable. Event A, such as gurgling or screaming, becomes regularly associated in the baby’s mind with a quite dissimilar Event B, the physical proximity of its mother or another grown-up. The scream ‘causes’ the prompt arrival of a grown-up, never mind how or why.
The notion that certain occurrences can just arise ‘out of the blue’ without antecedents is repugnant to most adult human beings and any sort of an explanation, however fanciful,  is felt to be better than none at all. Belief in causality, whether well-founded or not, certainly seems to be a psychological necessity. The main motive for populating the universe in times past with so many unseen entities was to provide causal agents for observed phenomena. By the time we reach the 18th century, largely because of the astonishing success of Newtonian mechanics, most of these supernatural agencies became redundant, at any rate in the eyes of educated people. The “thrones, principalities and powers” against whom Saint Paul warns us had disappeared into thin air by the mid eighteenth century, leaving only an omniscient Creator God who had done such a good job in fashioning the universe that it could run on its own steam without the need of further intervention. The philosophers of the Enlightenment rejected ‘miraculous’ explanations of physical events : in theory at any rate mechanical explanations sufficed. Newton himself was puzzled that he was unable to provide a mechanical explanation of gravity and, later on, electrical phenomena caused problems : but most physicists prior to the last quarter of the nineteenth century assumed with Helmholtz  that “all physical problems can be reduced to mechanical problems” and that Calculus and Newton’s Laws were the key to the universe.
Through all this, belief in causality continued unimpaired. In principle there were no chance events, and the French astronomer Laplace went so far as to say that, if a Supermind knew in full detail the current state of the universe, it would be able to predict everything that was going to happen in the future. This view is no longer de rigueur, of course, mainly because of the discoveries of Quantum Theory which has uncertainty built right into it. But, for the moment, I propose to leave such complications aside in order to concentrate on what might be called the ‘Classic Theory of Causality’ — ‘classic’ in the sense that it was the theory upheld, or more often assumed, by the great majority of scientists and rational thinkers between the 16th and 20th centuries.

This Classic Theory would seem to be based on the four following assumptions:
1.    There exists a necessary connection between certain pairs of events, and by extension, longer sequences;
2.    The status of the two events in a causal pair is not equivalent, one of the two is, as it were, active and the other, as it were, passive or acted upon;
3.    The ‘causal force’ always operates forwards in time, it is transmitted from the earlier event to the later;
4.    All physical occurrences, and perhaps mental occurrences as well, are brought about by the prior occurrence of one or more previous events.

       These assumptions are so ‘commonsensical’ that almost everyone took them for granted for a long time, witness popular phrases such as “Every event has a cause” , “Nothing can arise from nothing” and so on. But then the 18th century British philosopher Hume threw a spanner into the works. He pointed out that these assumptions, and others like them, were, when all was said and done, simply assumptions — they could not be proved to be the case, and were not ‘self-evident’. We do not, Hume pointed out, ever see or hear this mysterious causal link : indeed it is notoriously difficult, even for trained observers, to distinguish between events which are (allegedly) causally related and those that are not — if this were not the case, the natural sciences would have developed much more rapidly than they actually did.
Nor do these assumptions appear to be ‘necessary truths’, though this is undoubtedly how Leibnitz and Kant and other rationalist thinkers viewed them. As Hume says, the fact that event A has up to now always and in all circumstances been followed by event B, does not mean that this will automatically be the case in the future. (Indeed, though Hume could not know this, the assumption is false if Quantum Mechanics is to be believed since in QM identical circumstances do not necessarily produce identical results.)
In brief, belief in causality is, so Hume argues, an act of faith. This was a very serious charge since most scientists regarded themselves as having left behind such modes of thought. The nineteenth century, as it happened, took very little notice of Hume’s devastating critique : science needed a cast iron belief in causality and Claude Bernard even went so far as to define science as determinism. And since science was clearly working, most educated people were happy to go along with its implicit assumptions — perhaps making an exception for mankind itself to whom God had given the capacity for free choice which the rest of Nature did not possess.
Actually, the four assumptions listed, necessary though they are, do not suffice to distinguish the post-Renaissance Western theory of causality from earlier beliefs and theories. Further restrictions were required to eradicate the remaining vestiges of magical pre-scientific thinking. The most important of these principles are :
1. The No Miracle Principle
2. The Principle of Spatio-Temporal Continuity
3. The Principle of Energy
4. The Principle of Localization
5. The Mind/Body Principle
6. The Principle of Parsimony

The first three principles are ‘scientific’ in the sense that they have had enormous importance in the progress of scientific thinking. The first gets rid of all deus ex machina and thus stimulates a search for ‘natural principles’; the second  prohibits ‘action at a distance’ (though ironically gravity and certain aspects of quantum mechanics require it); the third, roughly that “all change requires an energy source” is very much an issue today in this era of depleted stocks of fossil fuels ; the fourth, that “everything must be somewhere” is, or seems to be, commonsensical; the fifth, roughly that the “mind cannot by itself bring about changes in the outside world” is a corner-stone of materialist philosophy, while the last,  that “Entities are not to be multiplied without necessity” is more a matter of method and necessity than anything else. These principles will be discussed in detail in the following Post.   S.H.

________________________________________________________

Note 1 The researchers Alan Leslie and Stephanie Keeble [“Do Six-Month-Old Infants Perceive Causality?” Cognition 25, 1987, pp. 265-288] claim that, when babies are shown ‘acausal’ sequences of events mixed up with similar causal sequences, they [the babies] show unmistakeable symptoms of surprise such as a more rapid heart beat.

Note 2  This is indeed what may have prevailed ‘in the beginning’, but the world we find ourselves in today is very different from the Greek kaos from which everything came, and the main difference is that certain sequences of physical events have kept on repeating themselves with minor variations for millions of years. Such patterns have indeed become so firmly established that they are viewed as ‘laws of Nature’, though they are perhaps more fittingly described as ‘schemas’ or ‘event-moulds’ into which physical events have fallen.

 

Velocity has a different meaning in Ultimate Event Theory to that which it has in object-based physics. In the latter a particle traverses a multitude ─ usually an infinite number ─ of positions during a given time interval and the speed is the distance traversed divided by the time. One might, for example, note that it was 1 p.m. when driving through a certain village and 3 p.m. when driving through a different one known to be 120 kilometres distant from the first. Supposing the speed was constant throughout this interval, it would be 120 kilometres per 2 hours. However, speed is practically never cited in this fashion : it is always reduced to a certain number of kilometres, or metres, with respect to a unitary interval of time, an hour, minute or second. Thus my speed on this particular journey would be quoted in a physics textbook as 60 kilometres per hour, or more likely as (60 × 10³)/(60²) ≈ 16.67 metres per second = 16.67 m s⁻¹ (to two decimal places).
By doing this, different speeds can be compared at a glance, whereas if we quoted speeds as 356 metres per 7 seconds and 720 metres per 8 seconds it would not be immediately obvious which speed is the greater. When dealing with such measures as metres and seconds there would normally be no difference between object-based physics and event-based physics. However, even when dealing with minute distances and tiny intervals of time such as nanometres and nanoseconds, ‘speed’ is still stated in so many units of length per interval of time. This automatic conversion to standard unitary measures presupposes that space and time are ‘infinitely divisible’ in the sense that, no matter how small the interval of time, it is always possible for a particle to change its position, i.e. ‘move’. This assumption is, to say the least, hardly plausible and Hume went so far as to write, “No priestly dogma invented on purpose to tame and subdue the rebellious reason of mankind ever shocked common sense more than the doctrine of the infinite divisibility with its consequences” (Hume, Enquiry Concerning Human Understanding).
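
To illustrate the convenience of the standard unitary measure with the numbers quoted above, here is a short Python sketch (my own illustration) that reduces each speed to metres per second before comparing:

# Convert speeds quoted over arbitrary intervals to metres per second
# so that they can be compared at a glance.
def metres_per_second(distance_m, time_s):
    return distance_m / time_s

speed_a = metres_per_second(356, 7)              # 356 metres per 7 seconds
speed_b = metres_per_second(720, 8)              # 720 metres per 8 seconds
speed_c = metres_per_second(120_000, 2 * 3600)   # 120 kilometres per 2 hours

print(round(speed_a, 2), round(speed_b, 2), round(speed_c, 2))
# 50.86 90.0 16.67 -- the second speed is plainly the greater of the first two
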
In Ultimate Event Theory, which includes as an axiom that time and space are not infinitely divisible, this automatic conversion is not always feasible. Lengths eventually reduce to so many ‘grid-spaces’ on the Locality, and intervals of time to so many ksanas (‘instants’), and there is no such thing as a half or a third of a ‘grid-space’ or a quarter of a ksana. The ‘speed’, or ‘displacement rate’, of an ultimate event or event-cluster is defined as the distance on the Locality between two spots where the event has occurred. This distance is always a positive integer corresponding to the number of intermediary positions (+1) where an ultimate event could have had occurrence. If the position of the earlier occurrence is not the original position, we relate both positions to that of a repeating landmark event-sequence, the equivalent of the origin. So if the occurrences take place at consecutive ksanas, the current reappearance rate (‘speed’) is simply the ‘distance’ between the two spots divided by unity, i.e. a positive integer. But what if an event reoccurs 7 spaces to the left every 4 ksanas? The ‘actual’ reappearance rate is 7 spaces per 4 ksanas which, when converted to the ‘standard’ measure, comes out as 7/4 spaces per ksana or 7/4 sp ks⁻¹. However, since there is no such thing as seven-fourths of a position on the Locality, displacement rates like 7/4 sp ks⁻¹ are simply a convenient but somewhat misleading way of tracking a recurring event.
The ‘Finite Space/Time Axiom’ has curious consequences. It means that, except when the space/ksana ratio is an integer, all event-chains are ‘gapped’ : that is, there are intermediary ksanas between successive occurrences when the event or event-cluster does not make an appearance at all. Thus, the reappearance pattern, ksana by ksana, for an ultimate event displacing itself along a line at the ‘standardized’ rate of 7/4 sp ks⁻¹ will be

……..o■oooooooooooooooooo……..
……..oooooooooooooooooooo……..
……..oooooooooooooooooooo……..
……..oooooooooooooooooooo……..
……..oooooooo■ooooooooooo……..
……..oooooooooooooooooooo……..
……..oooooooooooooooooooo……..
……..oooooooooooooooooooo……..
……..ooooooooooooooo■oooo……..
……..oooooooooooooooooooo……..

And this in turn means that when s/n is a ratio of relatively prime numbers, there will be gaps n–1 ksanas long, and the ‘particle’ (repeating ultimate event) will completely disappear during this time interval !   The importance of the distribution of primes and factorisation generally, which has been so intensively studied over the last two centuries, may thus have practical applications after all since it relates to the important question of whether there can be ‘full’ reappearance rates for certain processes (Note 1).
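
A small Python sketch (entirely my own illustration, with positions and ksanas numbered from 0) generates this gapped pattern for any ‘standardized’ rate of s grid-spaces per n ksanas and makes the gaps of n–1 ksanas visible:

from math import gcd

# For a displacement rate of s grid-spaces every n ksanas (reduced to lowest terms),
# the event only has occurrence at every n-th ksana, shifted s spaces each time;
# in between lie gaps of n-1 ksanas with no appearance at all.
def reappearance_pattern(s, n, ksanas):
    g = gcd(s, n)
    s, n = s // g, n // g
    return {k: (k // n) * s for k in range(ksanas) if k % n == 0}

print(reappearance_pattern(7, 4, 12))
# {0: 0, 4: 7, 8: 14} -- appearances at ksanas 0, 4 and 8, displaced 7 spaces each time
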
In consequence, when specifying in full detail the re-appearance rate of an event, or, what comes to the same thing, the re-occurrence speed of members of an event-chain, we not only need to give the magnitude of the displacement and the change of direction, but also the ‘gap number’ or ‘true’ reappearance rate of an event-chain.

Constants of Ultimate Event Theory 

n*, the number of ksanas to a second, and s*, the number of grid-spaces to a metre, are basic constants in Ultimate Event Theory that remain to be determined but which I have no doubt will be determined during this century. (s*/n*) is thus the conversion factor required to reduce speeds given in metres/second to spaces/ksana. Thus c(s*/n*) = 3 × 10⁸ (s*/n*) sp ks⁻¹ is seemingly a displacement rate that cannot be exceeded. c(s*/n*) is not, as I view things, the actual speed of light but merely the limiting speed for all ‘particles’, or, more precisely, the limiting value of the possible ‘lateral’ displacement of members of a single event-chain. Any actual event-chain would have at most a lateral displacement rate that approaches but does not attain this limit. While there is good reason to believe that there must be a limiting value for all event-chains (particles) — since there is a limit to everything — there is no need to believe that anything actually attains such a limit. In object-based physics, the neutrino used until recently to be thought to travel at the speed of light and thus to be massless, but it is now known that it has a small mass. The idea of a ‘massless’ particle is ridiculous (Note 2) for, if there really were such a thing, it would have absolutely no resistance to any attempt to change its state of rest or straight-line motion and so it is hard to see how it could be anything at all even for a single instant, or maybe it would just be a perpetually changing erratic ‘noise’. Mass, of course, does not have the same meaning in Ultimate Event Theory and will be defined in a subsequent post, but the idea that there is a ‘displacement limit’ to an event-chain passes over into UET. This ‘speed’ in reality shows the absolute limit of the lateral ‘bonding’ between events in an event-chain and in this sense is a measure of ‘event-energy’. Any greater lateral displacement than c(s*/n*) would result in the proto-event aborting, as it were, since it would no longer be tied to the same event-chain.
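
If and when s* and n* are ever determined, the conversion itself would be a one-line affair. The sketch below uses made-up placeholder values purely to show the arithmetic; they are not proposed values of the constants.

# Hypothetical conversion from metres/second to grid-spaces/ksana.
# S_STAR (grid-spaces per metre) and N_STAR (ksanas per second) are placeholders only.
S_STAR = 1.0e35
N_STAR = 1.0e43

def to_spaces_per_ksana(speed_m_per_s):
    return speed_m_per_s * (S_STAR / N_STAR)

c = 3.0e8                          # metres per second
print(to_spaces_per_ksana(c))      # the limiting displacement rate, c(s*/n*), in sp/ksana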

 Reappearance rates

So far I have assumed that an event in an event-chain reappears as soon as it is able to do so. This may well not be the case; indeed I think it very unlikely that it is the case. For example, an event in an event-chain with a standardized ‘speed’ of 1/2 sp ks⁻¹ might not in reality re-appear every second ksana : it could reappear two spaces to the left (or right) every fourth ksana, or three spaces every sixth ksana and so on. In this respect the ‘gap number’ would be more informative than the reappearance rate as such, and it may be that slight interference with other event-chains would shift the gap number without actually changing the overall displacement ratio. Thus, an event shifting one space to the right every second ksana might only appear every fourth ksana, shifted two spaces in the same direction, and so on. It is tempting to see these shifts as in some way analogous to the orbital shifts of electrons, while more serious interference would completely disrupt the displacement ratio. Once we evolve instruments sensitive enough to register the ‘flicker’ of ultimate events, we may find that there are all sorts of different event patterns, as intricate as the close packing of molecules.
It has also occurred to me that different re-appearance rates for event-chains that have the same standard displacement rate might explain why certain event-chains behave in very different ways despite having, as far as we can judge, the same ‘speed’. In our macroscopic world, the effect of skipping a large number of grid-spaces and ksanas (which might well be occupied by other event-clusters) would give the impression that a particularly dense event-cluster (‘object’) had literally gone right through some other cluster if the latter were thinly extended spatio-temporally. Far from being impossible or incredible, something like this actually happens all the time since, according to object-based physics, neutrinos are passing through us in their millions every time we blink. Why, then, is it so easy to block the passage of light, which travels at roughly the same speed, certainly no less? I found this a serious conceptual problem, but a difference of reappearance rates might explain it : maybe a stream of photons has the same ‘speed’ but a much tighter re-appearance rate than a stream of neutrinos. This is only a conjecture, of course, and there may be other factors at work, but there may be some way to test whether there really is such a discrepancy between the two ‘particles’, which would result in the neutrino having far better penetrating power with regard to obstructions.

Extended and combined reappearance rates

  Einstein wondered what would happen if an object exceeded the speed of light, i.e. in UET terms when an event-chain got too extended laterally. One  might also wonder what would happen if an event-chain got too extended temporally, i.e. if its re-appearance rate was 1/N where N was an absolutely huge number of ksanas. In such a case, the re-appearance of an event would not be recognized as being a re-appearance : it would simply be interpreted as an event (or event-cluster) that was entirely unrelated to anything in its immediate vicinity. Certain macroscopic events we consider to be random are perhaps not really so : the event-chains they belong to are so extended temporally that we just don’t recognize them as being event-chains (the previous appearance might have been years or centuries ago). Likewise the interaction of different event-chains in the form of ‘cause and effect’ might be so spread out in time that a ‘result’ would appear to come completely out of the blue (Note 3).
There must, however, be a limit to vertical extension (since everything has a limit). This would be an important number for it would show the maximum temporal extension of the ‘bonding’ between events of a single chain. We may also conjecture that there is a combined limit to lateral and vertical extension taken together, i.e. the product grid-positions × ksanas has a maximum which again would be a basic constant of nature.     S.H. 8/10/12

_________________________

Note 1  A ‘full’ re-appearance rate is one where an ultimate event makes an appearance at every ksana from its first appearance to its last.

Note 2. De Broglie, who first derived the famous relation p = h/λ linking a particle’s momentum p with Planck’s constant divided by the wavelength, believed that photons, like all particles, had a small mass. There is no particular reason why the observed speed of light should be strictly equated with c, the limiting speed for all particles, except that this makes the equations easier to handle, and no experiment will ever be able to determine whether the two are strictly identical.

Note 3 This, of course, is exactly what Buddhists maintain with regard to the consequences of bad or good actions ─ they can follow you around in endless reincarnations. Note that it is only certain events that have this temporal extension in the karma theory: those involving the will, deliberate acts of malice or benevolence. If we take all this out of the moral context, the idea that effects can be widely separated temporally from their causes and that these effects come up  repeatedly is quite possibly a useful insight into what actually goes on in the case of certain abnormal event-chains that are over-extended vertically.

 

“The acceleration  of  straight motion in heavy bodies proceeds according to the odd numbers beginning with one. That is, marking off whatever equal times you wish, and as many of them, then if the moving body leaving a state of rest shall have passed during the first time such a space as, say, an ell, then in the second time it will go three ells, in the third, five; in the fourth, seven, and it will continue thus according to the succession of odd numbers.”      Galileo, Dialogue Concerning the Two World Systems   p. 257  Drake’s translation

I had originally supposed that Galileo based this conjecture, one of the most important in the whole of physics, on actual observations but, if so, Galileo kept very quiet about it. For the man who is hailed as the ‘first empiricist’, Galileo seems to have been singularly uninterested in checking out this remarkable relation which would have delighted the Pythagoreans, believing as they did that all physical phenomena were reducible to  simple ratios between whole numbers. Admittedly, Galileo was blind when the Dialogue Concerning Two World Systems was published but he surely had ample time to investigate the matter during his long life ― perhaps he was reluctant to test his beautiful theory because he was afraid it was not entirely correct (Note 1). Or, more likely, he wanted to give the impression that he had deduced the ‘Law of Falling Bodies’ entirely from first principles without recourse to experiment. We must remember that a Christianised Platonism provided the philosophical framework for the thinking  of all the early classical scientists right up to and including Newton. Galileo himself wrote that “As to the truth of which mathematical demonstrations give us the knowledge, it is the same which the Divine Wisdom knoweth” (Galileo, Dialogue).  He does claim (via his spokesman Salviati in the Dialogue) that there exists a proof and “one most purely mathematical” (to be given in outline in a moment) but one wonders how on earth he got hold of the relation in the first place since it does not seem to be based on general physical principles as Newton’s formula for gravitation was. As with so many other important discoveries, Galileo most likely just hit on this striking relation  by a combination of observation and inspired guesswork.
Note that Galileo still speaks the language of continued ratio just as Euclid did (and Newton continued to do in the Principia) : the ‘Law of Falling Bodies’ does not constitute an equation of motion in the modern sense (Note 2). There are various reasons for this apart from Galileo’s respect for the ancient Greek geometers. For there is safety in ratios : they do not tie you down to actual quantities, nor do they implicitly assume that such quantities really exist beyond what one can ever hope to examine in practice. Above all, the language of proportion or continued ratio sidesteps the question of whether space and time are ‘infinitely divisible’, since the ratio holds either way.
Galileo’s continued ratio concerns distances and not speeds. The statement that the successive distances are in the ratio of the odd numbers would still be true  if, during a given interval of time, the particle moved at a steadily accelerating pace, if it stayed motionless for most of the time only to suddenly surge forward at the close, or again move around erratically (Note 3). Provided the recorded distances in successive intervals of time are in the ratio 1:3:5 (or 7:9:11), the rule holds. Moreover, supposing the rule is correct, we only need to determine a value for a single interval (and not  necessarily the first one) to work out all the others.
The difficulty with problems concerning speed is that speed, unlike distance, is not something we can actually observe (Note 4). We have perforce to start with distances, at least one, that we can determine by observation and check against repeating signals like the ticking of a clock or the beating of a pulse (Galileo used the latter to time the swinging of a censer when at mass). However, once we have at least one clearcut distance/time ratio, in order to work out further distances ― which is basically what we are interested in ― we have to deal with speeds (rates of change of distance with respect to time) and then backtrack again to distances. That is, we have to move from the observable (distances, intervals of time) to the unknown and unobservable (speeds, accelerations) and finally back to the observable. The very idea of a ‘formula of motion’, i.e. a statement which, in algebraic shorthand, gives all distances at all times, is a thoroughly modern invention that required a gigantic leap of thought : not only did the Greeks not have a clear conception of such a thing, but Galileo himself seems to have hesitated on the threshold of the Promised Land without really entering it.

Derivation and Justification of Galileo’s ‘Law of Falling Bodies’

Suppose we start off with Galileo’s rule d₁ : d₂ : d₃ : … = 1 : 3 : 5 : … and generalize. We have observed, or think we have observed, that a body starting from rest falls, say, 1 metre in the first second. So it will fall 3 metres during the second interval, 5 during the third and by the end of the third second will have fallen altogether 1 + 3 + 5 = 9 metres = 3² metres. Since the odd numbers 1 + 3 + … + (2n–1), n = 1, 2, 3…, add up conveniently to n², we have a formula already. And, since there is nothing sacrosanct about a second as an interval of time, we might try going backwards and consider halves, thirds and fourths of seconds and so on. So, when dealing in half-seconds, since the particle falls 1 metre per second, it must fall ½ metre in the first half-second, 3/2 metres in the second half-second, i.e. 2 metres already (even though it has only fallen 1 metre in the first second). And if we deal in thirds of seconds, the particle falls (1/3 + 3/3 + 5/3) = 9/3 = 3 metres, and if we deal in fourths of seconds it has fallen (1/4 + 3/4 + 5/4 + 7/4) = 16/4 = 4 metres, so it looks as if the particle’s speed increases the briefer the intervals of time we consider! What has gone wrong?
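
Both calculations can be checked mechanically. The short Python sketch below, my own addition, verifies that the first n odd numbers sum to n² and reproduces the apparent paradox of the shrinking intervals:

from fractions import Fraction

# The first n odd numbers sum to n squared.
for n in range(1, 6):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2

# Naive back-projection: take 1/m of a second as the 'first interval', assume the
# body falls 1/m metres in it, and apply the odd-number rule m times.
def distance_after_one_second(m):
    return sum(Fraction(2 * k - 1, m) for k in range(1, m + 1))

print([distance_after_one_second(m) for m in (1, 2, 3, 4)])
# [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1), Fraction(4, 1)]
# i.e. 1, 2, 3, 4 metres 'fallen in the first second', growing with the number of subdivisions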

The answer is that Galileo’s ratio does not tell us anything about speeds as such, only distances, and we cannot just project back the observed ‘speed’ evaluated over a given interval because this speed has maybe been changing during the interval considered. As a matter of common observation a falling body falls faster and faster as time goes on, so it does not have a fixed speed over a given interval. When Galileo spoke of the distances fallen, he was referring to the distance the body had fallen by or at the end of the interval. And like practically everyone else then and since, Galileo assumed that, in such a case,  the speed changed ‘continually’ (or continuously) rising from an initial speed, which the particle has at the very beginning of the interval, to a final speed which the particle attains only at the very close of the interval. So how do you work out the speed, and thus the distance traversed, during any interval of time? To get out a formula, it is seemingly no good just knowing the distances, and thus the speeds, of falling bodies over macroscopic intervals of time like seconds: we need to know the distances they fall in intervals of time too small for us to measure directly. Moreover, any error we make in an observed value over a relatively large interval like a second will most likely get magnified when we extrapolate forward to immense stretches of time, or backward to ‘microscopic’ time.
The ‘Law of Falling Bodies’ (that, during equal intervals of time, the distances fallen are as the odd numbers) takes for granted the following : (1) that, during free fall, a particle’s motion is always increasing; and (2) that the speed increases steadily ― does not, for example, increase and then decrease a little or halt for a moment. We can take (1) as being based on observation. (2) is also based on observation in the sense that we do not notice any fluctuations (though there might well be some too small for us to pick up) and Salviati, Galileo’s spokesman in the Dialogue, after discussing the point, concludes that it is “more reasonable” to conclude that the increase is regular.
But this is not all. Not only does Galileo (and practically all physicists since his day) assume that there is a ‘steady’ increase (in the sense that the particle does not backtrack or even stay motionless for a moment) but that the accelerating particle takes on all possible intermediate speeds.  “The acceleration is made continuously from moment to moment, and not discretely (intercisamente) from one time to another “ (Galileo, op. cit. p. 266)  This implies, as Galileo well realized, that space and time are ‘infinitely divisible’ since speed is the ratio of distance to time ― “Thus, we may understand that whenever space is traversed by the moving body with a motion which began from rest and continues uniformly accelerating, it has consumed and made use of infinite degrees of increasing speed….”  (Galileo, op. cit. p. 266).
Galileo and one or two of his medieval predecessors resolved the ‘continuous acceleration problem’ by geometricizing it, even though it belonged to dynamics, the science of movement, while geometry is the science of the changeless (Euclidian geometry anyway). As we all learned at school, the area of a triangle is half the height times the base. But most people have forgotten, if they ever knew, why the formula works. It works because, taking the simplest case, that of a right-angled triangle, you can cut off the upper ‘half-triangle’ and bring it round to form a rectangle. And the area of this rectangle is half the original height times the base. (Subsequently, we deal with non-right-angled triangles by showing that the area of a parallelogram between the same two parallels is that of the equivalent rectangle.)
What is more, we can (in imagination if not in practice) change a triangle into an equivalent rectangle no matter how tiny the triangle and resulting rectangle are. Distance is speed × time, so we can present distance as the ‘area under the curve’ of a speed/time graph, plotting speed vertically and time horizontally. Now, in the case under consideration, the speed is ‘always’ changing and so has a different value at every single instant no matter how brief (does anyone really believe this can be the case?). Also, provided the time interval is ‘brief’, we can, without too much exaggeration, treat the speed as constant and equal to the average height of the two uprights, (h₁ + h₂)/2.

We can now measure this narrow rectangle : it is (h1 + h2)/2 × (t2 – t1) and, doing this again and again, we can get out a total for the whole area. I shall not give the details of the derivation, which can be found in any book on Mechanics or Calculus (Note 5), because I shall in a moment derive a formula without any appeal to geometry. Suffice it to say that the final result is the well-known equation of motion for the case of constant acceleration from rest, s = ½gt², where s is the distance fallen and g, the acceleration due to the gravitational attraction of the Earth, is a constant that has to be determined by observation.
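For readers who like to see the procedure in action, here is a minimal sketch (in Python, with illustrative values of my own choosing, not anything taken from Galileo) of the strip-by-strip evaluation just described : the speed over each narrow strip is treated as constant and equal to the average of the two ‘uprights’, and the strips are then added up.

# Approximating the distance fallen as the 'area under the curve' of v = g*t,
# using narrow strips in which the speed is taken as constant and equal to the
# average of the two 'uprights'. g and T are illustrative values only.
g = 9.81          # metres per second per second (approximate value)
T = 3.0           # total time of fall in seconds
strips = 1000     # number of narrow strips

def v(t):
    return g * t  # speed at time t for constant acceleration from rest

dt = T / strips
area = 0.0
for i in range(strips):
    t1, t2 = i * dt, (i + 1) * dt
    area += ((v(t1) + v(t2)) / 2) * dt   # average height x width

print(area)             # roughly 44.145
print(0.5 * g * T**2)   # the exact value of (1/2)g*t^2, also 44.145

For a speed that increases strictly in proportion to the time, the strips happen to give the exact answer; the point at issue in what follows is whether the speed really does take on ‘all’ intermediate values.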
There can be no doubt that the formula works, so it must be true, or very nearly true. But is the normal manner of deriving it acceptable? To my mind it is not because, as always in calculus, there is the peculiar procedure of first of all stating that some quantity (speed in this case) is ‘always’ changing, but then, a moment later, turning it into a constant while claiming that this does not matter because it is only a constant for such a short period of time! It is like being told that so-and-so is really a very nice guy and, when you object that on every occasion you meet him he is despicable, being told that this doesn’t matter because, every time you meet him, it’s only for a ‘very short time’!
Galileo, of course, did not have the modern concept of a ‘limit’ in the mathematical sense since it was only evolved, and very painfully at that, during the late nineteenth century.  But, contrary to what most people assume, the modern mathematical treatment using limits does not so much resolve the conceptual problem as make it technically irrelevant. But there is a cost to pay : the whole discussion has been removed from the domain of physical reality where it originated and where it belongs.
Is there any other way of tackling the problem?  Yes, I believe there is. We can simply suppose that when we continually reduce the intervals of time, we eventually arrive at  an interval of time so small that the particle does not accelerate (or even move) at all ― since there is not enough ‘time’ for it to do so. This is the ‘finitist’ hypothesis which is the kicking-off point for Ultimate Event Theory.
However, before passing on to the UET treatment, we ought to see what Newton made of the problem. He is over-generous to Galileo when he states right at the beginning of his Principia that “By the first two laws [of motion] and the first two Corollaries, Galileo discovered that the descent of bodies varied as the square of the time and that the motion of projectiles was in the curve of a parabola” (Newton, Principia, Motte’s translation p. 21). If this really is the case, Galileo did not express himself as clearly and succinctly as Newton and, above all, did not explain why this is so. Galileo does speak of the ‘heaviness of bodies’ but does not quite manage to conceive of gravity in the Newtonian sense.
By Newton’s Laws, any deviation from a state of rest or of uniform straight-line motion betrays the action of a force, so in the case of a falling body there must be a force at work. Since this force (that of gravity) is permanent and does not get ‘used up’ like most forces we are familiar with, the body is repeatedly accelerated and, by Newton’s First Law, repeatedly retains the extra velocity it acquires. This means the acceleration cannot be other than constant :
“When a body is falling, the uniform force of its gravity acting equally, impresses, in equal intervals of time, equal forces upon the body, and therefore generates equal velocities (Note 6) … And the spaces described in proportional times are as the product of the velocities and the times; that is, as the square of the times.” ( Newton, Principia, p. 21)
In effect, a falling body is on the receiving end of a repeating force which moves it from one state of uniform velocity (which it would keep for ever if subject to no further outside impulse) to the next. Newton thus, as no one had done before him, derived the observed phenomenon of constant acceleration in a falling body from just two assumptions, his first two Laws of Motion.

Treatment of Free Fall in Ultimate Event Theory

When deriving an equation, especially a well-known one, one must beware of circular reasoning and of assuming what we wish to prove. So, let us lay our cards on the table and declare what basic assumptions we require.
First of all, we need to assume that, in cases of free fall, the body ‘keeps on accelerating’, i.e. goes faster and faster as time goes on. This assumption is, in Newton, an Axiom, or an immediate deduction from an Axiom, but it was originally the fruit of observation. For there might conceivably be worlds where falling bodies fall at a constant rate, i.e. don’t speed up as they fall. However, it seems to be generally true.
Secondly, we need to assume that a falling body, while its speed is variable, has a constant or very nearly constant acceleration due to the Earth’s attraction ― this follows from Newton’s Laws of Motion, which are presented as ‘Axioms’ though ‘amply supported by experiment’. If Newton is to be believed, the force of gravity (and so the acceleration it produces) varies inversely with the square of the distance from the centre of the Earth, i.e. the nearer you are to the Earth the more rapidly your speed increases (until air resistance imposes a terminal velocity). Also, because the Earth is not a perfect sphere, there will be slight variations according to where you are on the Earth’s surface. However, over the sort of distances we are likely to be concerned with, there will be little, if any, observable variation in the value of g, the constant of acceleration.
A third assumption is, however, also necessary. Either we suppose, as Galileo and Newton did (Newton with some hesitation), that ‘time and space’ are ‘infinitely divisible’, or we suppose that they are not and draw the necessary consequences. Ultimate Event Theory presupposes that all events are made up of ‘ultimate events’ (which cannot be further divided) and that there is an ‘ultimate’ interval of time within which no change of position or exchange of energy is possible. When dealing with a ‘sequence’ of events, there is always a first event, and this event has occurrence on a particular spot at a particular moment. There can be change but not continuous change (of position or anything else you like to mention). The dimensions of the three-dimensional spot where an ultimate event has occurrence are currently unknown, as also is the extent of the elementary interval of time which I denote a ksana (from the Sanskrit for ‘instant’). All we can say about these fundamental dimensions of Space/Time is that they are ‘very small’ compared to the dimensions we deal with in the observable macroscopic ‘world’. It may seem that there is not much one can do with such a hypothesis but, surprisingly, there is, since the whole art of Calculus and Calculus-like procedures is to start off with unknown microscopic quantities and eventually dispense with them while, miraculously, ending up with the sort of values we can work with.
Fourthly, if our eventual formula is to be of any use, we have to assume that it is possible to determine an ‘initial state’, ‘initial distance’, ‘initial rate of change’ or the like. In this case, we have to assume that we can make at least a reasonable guess at the size of g, the constant of acceleration due to the proximity of the Earth.
(Incidentally, to avoid being pedantic, I shall sometimes lapse into the  ‘object-language’ of conventional physics, speaking of  ‘particles’ instead of events. But it is to be understood that what is ‘really’ happening is that ultimate events are appearing, disappearing and reappearing at particular spots on the Locality and that the ‘motion’ inasmuch as it exists at all is discontinuous.)
We have then a ‘particle’ (repeating ultimate event) displacing itself relative to some landmark event-cluster considered to be ‘at rest’, i.e. repeating regularly. Our particle (event-cluster) has received an impulse that has dislodged it from its previous position since, by hypothesis, it is now ‘in motion’. According to Newton’s Laws (Laws of Motion + Law of Universal Attraction) this impulse in a particular direction will not go away but will be repeated indefinitely ‘from moment to moment’ without diminution. Since the effect of an outside force is to accelerate the particle, its speed will increase, but will, by the Law of Gravity, increase by a constant amount at each interval of time since the force responsible for the acceleration does not change appreciably over small distances. The resulting increase in distance fallen will be the same as the displacement ‘during’ the first interval since the ‘particle’ starts from rest. In the terms of UET, at the ksana labelled 0, the start point, the particle is at zero distance from a particular grid-point ― it is actually at this point ― and at the ksana labelled 1 it is m spaces further to the right (or left) of the original spot (strictly, this spot’s repeat). The initial ‘speed’, which is also the constant acceleration per ksana, is m spaces per ksana where m is an integer. At all subsequent ksanas, the ‘particle’ retains whatever ‘speed’ it has already acquired; if there were no further force involved, it would keep this current speed indefinitely, but since the gravitational ‘pull’ is repeated, it gains a further m spaces per ksana at each ksana.

ksana 0            . 

ksana 1            ←    m spaces  → .

ksana 2            ←    m spaces  →   .  ←      m spaces         →.

The equivalent of that will-o’-the-wisp, ‘instantaneous speed’, in Ultimate Event Theory is simply the current reappearance rate of the ultimate event and, in this case, the current ‘speed’ is the difference between the current position (relative to a fixed point origin) and the previous position divided by unity. The current acceleration (increase in speed) in this case is the difference between the current speed and the previous speed and this difference, by Newton’s Law, remains the same because the force involved remains the same (or very nearly the same). Defined recursively, which is the preferred method in UET, the displacement during each successive ksana is given by
d(0) = 0;  d(1) = m;  d(n+1) = d(n) + d(1)     n = 1, 2, 3 ….

ksana       distance from previous position       distance from start position

0                              0                                        0
1                              m                                       1m
2                 (m + m) = 2m                          (1 + 2)m = 3m
3               (2m + m) = 3m                      (1 + 2 + 3)m = 6m
4                             4m                     (1 + 2 + 3 + 4)m = 10m
……………………………………………..
n                             nm                     {1 + 2 + 3 + … + n}m

Since the sum of the natural numbers up to n is the relevant triangular number n(n + 1)/2, the total distance traversed at the nth ksana is thus

        m{1 + 2 + 3 + … + n} = m·n(n + 1)/2 = (mn²)/2 + (mn)/2
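A quick way to check this closed form is simply to add up the displacements ksana by ksana. The sketch below does this for a purely illustrative value of m (nothing hangs on the particular number) :

# The displacement during ksana k is k*m spaces, so the running total after k
# ksanas should be the triangular number m*k*(k+1)/2. m is an invented value.
m = 7                      # spaces gained per ksana (illustrative only)
position = 0
for k in range(1, 11):     # ksanas 1 to 10
    position += k * m      # displacement during ksana k
    print(k, k * m, position, m * k * (k + 1) // 2)
# the last two columns agree: 7, 21, 42, 70, 105, ...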
This total displacement has taken place in n ksanas, where n is a positive integer, for example 3, 57, 1,456 or 10⁶⁸.
We, however, do not reckon in ksanas, this interval being so brief that our senses do not recognize it, even when our senses are extended and amplified by modern instruments. What we can  say, however, is that there are n* ksanas to a second, where n* , an unknown constant, is a very large positive integer.

If we wish to convert our formula to seconds, we must divide by n*, so we now have

        d(n/n*) = (mn²)/(2n*) + (mn)/(2n*)

If t is in seconds, n = n*t or t = n/n*, since there are n* ksanas to a second. So

        d(t) = m(n*t)²/(2n*) + m(n*t)/(2n*) = (mn*t²)/2 + (mt)/2     …..(i)

Now, if we have been able to deduce, by observation, that, at the end of every second, there is a constant acceleration of g metres per second which takes a full second to have effect, the speed at the end of the very first second will be g metres per second, where g is a known constant ― approximately known anyway. The speed at the n*th ksana is n*m, which means that n*m = g, or m = g/n*. Substituting this into the above, we obtain

        d(t) = (gn*t²)/(2n*) + (gt)/(2n*) = ½gt² + ½(gt)/n*     …..(ii)

Substituting t = 1, 2, 3 … we obtain

        d(t = 1) = ½g + ½g/n* = (g/2) + (g/2)/n*
        d(t = 2) = 4(g/2) + 2(g/2)/n* = 2g + g/n*
        d(t = 3) = 9(g/2) + 3(g/2)/n*
        d(t)      = t²(g/2) + t(g/2)/n*

These are the total distances up to the end of the first, second, … tth second. The actual distances traversed during each second can be obtained by subtraction : ignoring the small correction term, the distance traversed during the second second is (4 – 1)(g/2) = 3(g/2), and during the third (9 – 4)(g/2) = 5(g/2).

Keeping the correction term, the distance covered ‘during’ the second second is

        {4(g/2) + 2(g/2)/n*} – {(g/2) + (g/2)/n*} = 3(g/2) + (g/2)/n*

which is very nearly 3(g/2) if n* is large, so the ratio of the distance covered during the second second to that covered during the first is very nearly 3 : 1. Similarly, the distances traversed during subsequent seconds, compared to those in the preceding seconds, will be approximately in the ratios 5 : 3, 7 : 5 and so on.
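The arithmetic above can be checked mechanically. The sketch below takes a purely hypothetical value for n* (the true value being unknown), sets m = g/n* as in the text, and confirms both that the ksana form and the seconds form of the formula agree and that the distances fallen in successive seconds come out very nearly in the ratios 1 : 3 : 5.

# Checking that m*n^2/(2n*) + m*n/(2n*) with n = n_star*t and m = g/n_star
# agrees with (1/2)g*t^2 + (1/2)g*t/n_star. n_star is invented for illustration.
g = 9.81
n_star = 10**5              # hypothetical number of ksanas in a second
m = g / n_star              # spaces per ksana, as in the text

def d_ksana_form(t):
    n = n_star * t          # number of ksanas in t seconds
    return (m * n**2) / (2 * n_star) + (m * n) / (2 * n_star)

def d_seconds_form(t):
    return 0.5 * g * t**2 + 0.5 * g * t / n_star

for t in (1, 2, 3):
    print(t, d_ksana_form(t), d_seconds_form(t))   # the two columns agree

first = d_seconds_form(1)
second = d_seconds_form(2) - d_seconds_form(1)
third = d_seconds_form(3) - d_seconds_form(2)
print(second / first, third / first)               # very nearly 3 and 5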
Note that it was not necessary to say anything about ‘areas under a curve’, ‘infinitesimally small’ intervals or, for that matter, limits as n* → ∞. The formula d(t) = ½gt² + ½(gt)/n* is not a limit : it is an exact formula involving two constants, g and n*, one of which is known (at least approximately) and the other of which is currently unknown because so far it has proved to be unobservable. Whether one actually takes a value such as n* into account (on the occasions when it is actually known) depends on the level of precision at which one is working, but this is a matter for the engineer or manufacturer to decide.

Determination of the value of n*

Let us return to the general formula in t :

        d(t) = ½gt² + ½(gt)/n*

         If now we set t = n*  ― and there is no reason why we should not since n* is a number, even if we do not know what it is ― we have
        d(n*) = ½g(n*)² + ½(gn*)/n* = ½g(n*)² + ½g

In other words, the ‘extra’ distance, which we have tended to discount as being negligible, is now equal to the known increment of ½g. And this means that we can, at least in principle, determine the value of n* from observational data ― for it will be that value of t which gives an ‘additional’ increment of g/2.

Now, n* is probably far too large a number for this to be currently possible, even with today’s computers, but we do not have to leave it at that. We can work with nanoseconds instead of seconds and the relation will still be true, although we will now be determining n*/10⁹, which is probably still a large number. Of course, at this level of precision we shall have to be much more careful about the value of g ― we will probably have to make further observations to determine it ― and perhaps will also have to take into account the effects of General Relativity. However, since the instruments now available are capable of detecting the tiny Mössbauer effect, this should not by any means be an impossible task. I confidently guess that within the next twenty years science will come up with a good estimate of the value of n* (the number of ksanas in a second) and that n* will take its rightful place alongside Avogadro’s Number and G, the universal constant of gravitation, as one of the most important numbers in science (Note 8).
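The following toy calculation (with an invented value of n*, far smaller than any realistic one, standing in for the unknown real value) merely illustrates the principle : if the formula held exactly, the ‘extra’ distance beyond ½gt² would grow linearly with t, would reach g/2 when t = n*, and could be used to read off n* from sufficiently precise measurements.

# The 'excess' over (1/2)g*t^2 is (1/2)g*t/n_star; it equals g/2 at t = n_star,
# and n_star can be recovered from the excess measured at any time t.
# n_star below is a purely invented number, not a physical estimate.
g = 9.81
n_star = 10**4

def d(t):
    return 0.5 * g * t**2 + 0.5 * g * t / n_star

def excess(t):
    return d(t) - 0.5 * g * t**2

print(excess(n_star), g / 2)        # both approximately 4.905
t = 250.0
print(g * t / (2 * excess(t)))      # approximately 10000, recovering n_star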

Problem about the formula for free fall

There is, however, a problem with the above. According to Newton’s Third Law every action calls forth an equivalent and oppositely directed reaction, and this means there is a fixed order of events since action and reaction cannot be strictly simultaneous, at any rate in Ultimate Event Theory (see post on Newton’s Third Law). Certainly there is a definite sequence of events when we are dealing with contact forces, which is why Newton used the terms ‘action’ and ‘reaction’ in the first place. But is gravity, action at a distance, the great exception to the rule? Seemingly so, because all my attempts to tinker around with the derivation ended up with something so far from the usual formula (½gt²) that it had to be wrong, because the latter has been shown by countless experiments to be at least approximately correct. If, for example, you envisage gravitational attraction as similar to a contact force, there will be a delay of at least one ksana before the next force is experienced by the falling body. That is, the Earth pulls the falling body, the latter feels the effect and in return exerts a pull on the Earth, while continuing for at least one ksana at its current reappearance rate. This gives

ksana       distance at current ksana       total distance

0                        0                                   0

1                  1(g/2n*)                         1(g/2n*)

2                  1(g/2n*)                         2(g/2n*)

3                  2(g/2n*)                         4(g/2n*)

4                  2(g/2n*)                         6(g/2n*)

5                  3(g/2n*)                         9(g/2n*)

6                  3(g/2n*)                       12(g/2n*)

……………………………………………..

m – 1        (m/2)(g/2n*)
m (even)   (m/2)(g/2n*)       2{1 + 2 + 3 + … + (m/2)}(g/2n*)

The distance, after due substitutions, turns out to be half the previous one, as one might expect, namely ¼gt² + gt/(4n*), and this must be wrong.

This means that gravity must be treated as simultaneously affecting both bodies ‘at once’, i.e. within the same ksana, strange though this seems. Newton himself was worried by the issue since it implied that gravity propagated itself ‘instantaneously’ across the entire universe (Note 7)! It is, however, probably wrong to conceive of gravity in this way : in General Relativity it is the space between bodies that contracts rather than the separate bodies sending out impulses to each other. In Ultimate Event Theory terms, this makes gravitational phenomena states of the underlying substratum (that I call the Locality) rather than interactions between event-clusters ‘on’ the Locality. This issue will be gone into in more depth when I come to discussing the implications of Relativity for Ultimate Event Theory.

S.H.

Note 1   Galileo, who apparently never did throw iron balls off the top of the Leaning Tower of Pisa, does give a figure at one point : “an iron ball of one hundred pounds…falls from a height of one hundred yards in five seconds” (Galileo, Dialogue Concerning the Two World Systems). But this figure is so wildly off that one of his own pupils queried it at the time :

“[Galileo’s friend] B. Baliani wrote to him to question the figure and in reply, Galileo that for the refutation of statement under discussion, the exact time was of no consequence. He then went on to explain how one might determine the acceleration due to gravitation experimentally, if Baliani cared to trouble with it. Galileo does not assert that he had ever done so, and since he was then blind, it is improbable that he ever did.”

Drake, Notes p. 561 op. cit.

Note 2  Galileo does, however, go on to say cryptically that “the spaces passed over by the  body starting from rest have to each other the ratios of the squares of the times in which such spaces were traversed” (Dialogue, p. 257)

Note 3 Galileo discusses this possibility but rejects it as unlikely.

Note 4  Speed is not a basic SI unit, being simply the ratio of distance to time (metres to seconds). Curiously though, while speed is not something of which we have direct experience, the same is not true of momentum mv, mass × speed. All collisions involve forcible and abrupt changes of momentum that we certainly do experience, since such changes are often catastrophic (car crashes, tsunamis). I believe there are systems of measurement which take momentum mv (kg m s⁻¹) or force m dv/dt = ma (kg m s⁻²) as a primary unit.

Note 5 In the more sophisticated modern derivation of the formula, we ‘squeeze’ the area between two limits, one an overestimate and the other an underestimate, and we chop up the interval into rectangles of variable width. But the basic strategy is the same as that of Galileo and Oresme before him.

Note 6  Newton presumably meant, ‘generates equal supplementary velocities’, since, as we have had it drummed into our heads in school, force produces accelerations, not velocities  as such. It is amusing to see one of the greatest minds in the history of science making a slip that would have earned him a reprimand in a modern school. This is actually not the only place in the Principia where Newton confuses velocity and acceleration; maybe part of the problem was that there did not exist a suitable Latin term (he wrote the Principia in Latin).

Note 7  Modern textbooks generally state that gravitational attraction travels at the speed of light.

Note 8  Unfortunately, the chances are that I shall not live to see this.

time are in the ratio 1:3:5 (or 7:9:11), the rule holds. Now, lengths and distances are things we feel we know about, things accessible to the senses. Likewise, time in the sense of a regular sequence of audible or visual stimuli, the ticking of a clock, the beating of a pulse, the regular flashing of a light, is also something that falls within our sensory experience. But speed? Speed cannot be seen or heard and is, most of the time, extremely difficult to determine precisely : we experience the effects of speed, but not speed itself. The difficulty with problems like that of falling bodies is that we start with the known, recorded distances, familiar audible events and so on, but then, in order to determine further distances, have to work out speeds and deduce distances we cannot hope to evaluate directly from a formula which deals in changes of distance, not the distances themselves (Note 3).
       The way round the problem was to geometricize it. As we all learned at school the area of a triangle is half the height times the base. But most people don’t know or have forgotten why this works : it is because, taking the simplest case, that of a right angled triangle, you can remove the upper ‘half-triangle’ and bring it round to form a rectangle. And, for us, rectangles are easier to deal with.
What is more, we can (in imagination if not in practice) do this no matter how tiny the right-angled triangle and resulting rectangle are. Distance is speed × time (sort of), so we can envisage distance as the ‘area under the curve’ of a speed/time graph, plotting speed vertically and time horizontally. Now, in the case under consideration, the speed is supposed to be ‘always’ changing and so has a different value at every single instant no matter how brief (does anyone really believe this?). But over a fairly short time interval we can, without too much exaggeration, consider it to be constant and equal to the average height of the two uprights, (h1 + h2)/2. We can now measure this narrow rectangle : it is (h1 + h2)/2 × (t2 – t1) and, doing this again and again, we can get out a total for the whole area. I shall not give the details of the derivation, which can be seen in any book on Mechanics or Calculus (Note 5), because I shall in a moment derive a formula without any appeal to geometry. Suffice it to know that the final result is the well-known equation of motion for the case of constant acceleration from rest, s = ½gt², where s is the distance and g, the acceleration due to the gravitational attraction of the Earth, is a constant whose value was obtained, not from first principles, but from meticulous experiment and observation.
There can be no doubt that the formula works, so it must be true, or very nearly true. But is the normal manner of deriving it entirely acceptable? To my mind it is not. First of all, there is a lingering doubt as to whether it is legitimate to represent distances, which are real things, by ‘areas under a speed/time curve’, since this involves plotting velocities which themselves depend on distances and have no reality of their own. But a more serious objection is that, as always in traditional calculus, there is the peculiar procedure of first of all stating that some quantity (speed in this case) is ‘always’ changing, but a moment later turning it into a constant and pretending that this does not matter because it is only a constant for such a short period of time! It is like saying someone is really a very nice guy and, when you object that on every occasion you meet him he is nasty, being told that this doesn’t matter because you only ever see him for a very short time! The mathematical ‘passage to a limit’ does not so much resolve the conceptual problem as make it irrelevant, but only at the price of removing the whole discussion from the domain of physical reality where it originated and where it belongs. And in cases where we know that there really is a cut-off point (a smallest possible value for the independent variable), this sort of manipulation can, and sometimes does, give incorrect answers, which is why, increasingly, problems are dealt with by slogging them out numerically with computers rather than trusting blindly to analytical formulae. The dreadful truth that ought to be inscribed in letters of darkest black in every calculus textbook is that the famous mathematical limit is, in almost all cases, never attained, and those of us who live in the real world need to beware of this, since the cut-off point may be much closer than the mathematics makes it appear. Generally, engineers simply evaluate some quantity to the level of precision they require and don’t bother about the limiting value.
But is there any other way of tackling the problem? Yes, I believe there is : we can derive the equation of motion directly from Newton’s Laws while adding in the ‘finitist’ assumption on which most of Ultimate Event Theory is based (Note 4). That is, one can simply suppose (what common sense tells us must be the case) that when we continually reduce the intervals of time, we eventually arrive at an interval of time so small that the particle does not accelerate (or even move) at all ― since there is not enough ‘time’ for it to do so.

Treatment of Free Fall in Ultimate Event Theory

Suppose we have observed that a particle (event-cluster) falling from rest has, or at any rate appears to have, a constant acceleration of g metres/second which takes 1 second to take effect. So, in ‘thing-speak’, if the particle starts from rest, it is moving at g metres per second by the end of the first second.
In Ultimate Event Theory there is always a first event, whether observed or not, and, in general, a first cause of ‘motion’. Since the ‘particle’ is in motion, we conclude that, in accordance with Newton’s First Law (or its UET equivalent), the particle has received an impulse that has dislodged it from its previous position and that in the first ksana we are concerned with the ‘particle’ has been displaced by a certain number of spaces (grid-positions on the Locality) in some given direction. We do not know, and do not need to know, what this initial number of spaces is but we can call it g/n metres since there are, by hypothesis, n ksanas to a second. (The quotient g/n is understood to be taken to the nearest whole number.) This is the ‘current velocity’, the equivalent of ‘instantaneous velocity’, namely the ‘velocity’ that the ‘particle’ would continue to have from now on if it were not interfered with by any outside forces. In other words, were this rate of displacement to remain unchanged, ksana by ksana, the particle would traverse n(g/n) = g spaces in n ksanas, which we take to be the equivalent of a second. As for n, all we really need to know at this stage is that this number ‘exists’, i.e. represents a real quantity, and that it is a very large number.
The particle (event-cluster) does not, however, as it happens in this case, maintain its current ‘velocity’ (rate of reappearance) because of the influence of a massive event-cluster nearby. Since gravitation is a permanent force, the particle in question will keep on receiving the same impulse ksana after ksana, so its overall rate of displacement will increase at each stage, but by a fixed amount (g/n) from one ksana to the next (Note 5).
At the ksana we label 0 the particle is thus at zero distance from some grid position and at (not ‘during’) ksana 1 it is at the equivalent of g/n metres from this point. At each ksana it maintains its previous rate of appearance but with a constant supplementary distance added on because the effect of the outside force is repetitive and inexhaustible. We have

ksana      distance in metres traversed      total distance
                relative to previous ksana            so far

0                        0                                    0

1                     1(g/n)                             1(g/n)

2           (1 + 1)(g/n) = 2(g/n)             (1 + 2)(g/n) = 3(g/n)

3           (2 + 1)(g/n) = 3(g/n)         (1 + 2 + 3)(g/n) = 6(g/n)

4                     4(g/n)            (1 + 2 + 3 + 4)(g/n) = 10(g/n)

……………………………………………..

m                   m(g/n)             {1 + 2 + 3 + … + m}(g/n)

Since the sum of the natural numbers up to m is the relevant triangular number m(m + 1)/2 = (m²/2) + (m/2), the total distance traversed at the mth ksana is thus

        (g/n){1 + 2 + 3 + … + m} = (g/n)·m(m + 1)/2 = (g/2)(m² + m)/n metres

This distance has been accomplished in m ksanas.
Now, since there are n ksanas to a second, to work out how many metres the particle has traversed in t seconds, we have to convert to seconds, i.e. divide by n. If m = nt , then the distance traversed in t seconds is

        (g/2)·(m² + m)/n² = (g/2)·(n²t² + nt)/n² = (g/2)(t² + t/n) = ½gt² + ½gt/n

Now, if we take the limit as n → ∞, this is the familiar ½gt². For normal situations this is doubtless accurate enough, but I can conceive of cases where, if t were large enough, the extra term might need to be taken into consideration ― indeed this would be an occasion to get an estimate of n. Otherwise, what we can say is that the distance fallen is always > ½gt².

Note that I have not been obliged to make any appeal to ‘areas under a curve’ or to velocities as such, it being only necessary to add distances. Recursively, the displacement of the ‘particle’ during each successive ksana is given by

                d(0) = 0;  d(1) = g/n

                d(n+1) = d(n) + (d(n) – d(n–1)) = 2d(n) – d(n–1)

There is thus a steady increase of d(1) = g/n in the displacement from one ksana to the next, and this is why the odd numbers appear at successive intervals of time : the distances fallen during successive seconds come out in the ratios 1 : 3 : 5 …, and since the odd numbers 1 + 3 + 5 + … + (2n – 1) add up to n², the curve formed by joining the dots is what we know as a parabola, y = x² being the equation of a parabola.
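Here is the recursion written out as a short program (g is the usual approximate value; n is an invented number of ksanas to the second, far smaller than anything realistic, purely so that the run finishes quickly). It generates the per-ksana displacements, adds them up second by second, and the odd-number ratios 1 : 3 : 5 duly appear.

# d[k] is the displacement during ksana k; the recursion keeps the increase
# from one ksana to the next fixed at d[1] = g/n. The per-second totals then
# come out very nearly in the ratios 1 : 3 : 5.
g, n = 9.81, 1000            # n is an invented, unrealistically small value
d = [0.0, g / n]
for k in range(1, 3 * n):
    d.append(2 * d[k] - d[k - 1])     # d[k+1] = 2d[k] - d[k-1]

totals = [sum(d[:n * t + 1]) for t in range(4)]           # distance after t seconds
per_second = [totals[t + 1] - totals[t] for t in range(3)]
print([round(x / per_second[0], 3) for x in per_second])  # close to [1.0, 3.0, 5.0]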

Correction to formula for free fall

There is, however, a problem with the above. According to Newton’s Third Law every action calls forth an equivalent, oppositely directed, reaction, and this means there is a fixed order of events since action and reaction are not strictly simultaneous (see post on Newton’s Third Law). This is certainly the case for contact forces, but I had thought of making an exception for gravity, as Newton himself seems to have done, since he is on record as saying that it propagates instantaneously, though he had some difficulty believing this (Note 8). If we consider that, in the case of free fall under gravity, there should still be a succession of events, the ‘action’ will only take effect at every other ksana. I had thought of making it a general principle that an ultimate event always occupies the same position for at least two consecutive ksanas, otherwise it cannot be said to have any proper velocity. I thus had the particle (event-cluster) keep the same rate of displacement for two ksanas before being once more accelerated. This worried me at first since the result seemed completely different from ½gt², which must be more or less correct. However, all was well after all : the final result merely differed in the second term.

This time, for reasons that will become apparent, I set 1 second as equal to 2n ksanas (and not n) so the initial displacement is g/2n metres.

ksana      distance in metres traversed      total distance
                relative to previous ksana            so far

0                        0                                    0

1                  1(g/2n)                            1(g/2n)

2                  1(g/2n)                            2(g/2n)

3                  2(g/2n)                            4(g/2n)

4                  2(g/2n)                            6(g/2n)

5                  3(g/2n)                            9(g/2n)

6                  3(g/2n)                          12(g/2n)

……………………………………………..

m – 1        (m/2)(g/2n)

m (even)   (m/2)(g/2n)      2{1 + 2 + 3 + … + (m/2)}(g/2n)

The total distance traversed at the mth ksana where m is even is

        (g/2n)·2{1 + 2 + 3 + … + (m/2)} = (g/2n)·2·{(m/2)² + (m/2)}/2 = (g/2n){(m²/4) + (m/2)} metres

Substituting in m = nt and dividing by n we obtain

        (g/2n){(2nt)²/4 + nt/2}/n = (g/2)(t² + t/2n) = ½gt² + gt/(4n)
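As a check on the arithmetic of this ‘every other ksana’ scheme (and only on the arithmetic; the value of n below is invented), one can add up the displacements directly and compare the total with the closed form used above :

# The displacement during ksanas 2j-1 and 2j is j*(g/2n); after an even number
# m of ksanas the total should equal (g/2n)*(m^2/4 + m/2). n is illustrative.
g = 9.81
n = 1000                         # 1 second = 2n ksanas in this scheme
step = g / (2 * n)               # initial displacement per ksana

def total_by_summation(m):       # m assumed even
    # k = 0,1 give 1 step, k = 2,3 give 2 steps, and so on
    return sum((k // 2 + 1) * step for k in range(m))

def total_by_formula(m):
    return (g / (2 * n)) * (m**2 / 4 + m / 2)

m = 2 * n * 3                    # three seconds' worth of ksanas
print(total_by_summation(m), total_by_formula(m))   # the two values agree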

_________________________________________________

Note 1   For the man who is hailed as the inventor of the experimental method, Galileo is surprisingly cavalier about the details. He actually gives a figure at one point : “an iron ball of one hundred pounds…falls from a height of one hundred yards in five seconds” (Galileo, Dialogue p. 259), which is so wildly off that one of his own pupils queried it.
“[Galileo’s friend] B. Baliani wrote to him to question the figure and in reply, Galileo that for the refutation of statement under discussion, the exact time was of no consequence. He then went on to explain how one might determine the acceleration due to gravitation experimentally, if Baliani cared to trouble with it. Galileo does not assert that he had ever done so, and since he was then blind, it is improbable that he ever did.” 
     Drake, Notes p. 561, op. cit.

Note 2   To his credit, Galileo does consider the possibility that the speed of a falling body fluctuates erratically rather than increases uniformly though he discounts this as unlikely : “It seems much more reasonable for it [the falling body] to  pass first through those degrees nearest to that from which it set out, and from this to those farther on” (Galileo, op. cit. p. 22).

Note 3    Speed is not a basic SI unit, being simply the ratio of distance to time (metres/seconds). Interestingly, though speed is not something we have direct experience of, the same does not seem to be true of momentum mv, since all collisions are the result of forcible changes of momentum, often dramatic ones. It is possible, indeed tempting, to consider that a particle (event-cluster) really does possess momentum (whereas it never really ‘has’ speed) and, I believe, there are systems of measurement which take momentum mv (kg m s⁻¹) or force m dv/dt = ma (kg m s⁻²) as a primary unit.

Note 4  This is most likely the reason why Newton continued to use the cumbersome apparatus of geometric ratios in his Principia instead of his ‘Method of Fluxions’, which he had already invented. Sticking to ratios and geometry sidesteps the problem of infinite divisibility and the reality of ‘indivisibles’, concerning which Newton had serious doubts. By ‘fluxion’ Newton meant what we call the ‘rate of change’ or derivative : “In the Methodus Fluxionum Newton stated clearly the fundamental problem of the calculus: the relation of quantities being given, to find the relation of the fluxions to these, and conversely.” (Boyer, The History of the Calculus p. 194)

 

Note 5  In the more sophisticated derivation of the formula, we ‘squeeze’ the area between two limits, one an overestimate and the other an underestimate, and, moreover, we chop up the interval into rectangles of variable width, not all the same. But the basic strategy is the same as that of Galileo and Oresme before him.

 

Note 6  In the ‘infinitist’ treatment of Calculus, we always have to calculate velocities when what we are interested in is distances. In UET, at any rate in this case, we can deal directly with distances, which is as it should be since velocity is a secondary quality whereas distance (relative position) is not.

Note 7  Newton expresses himself rather too succinctly on the problem. He writes:

“When a body is falling, the uniform force of its gravity acting equally, impresses, in equal intervals of time, equal forces upon the body, and therefore generates equal velocities; and in the whole time impresses a whole force, and generates a whole velocity proportional to the time. And the spaces described in proportional times are as the product of the velocities and the times; that is, as the square of the times.”     

                        Newton, Principia, Motte’s translation  p. 21

This is rather obscure and even wrong to modern ears, for Newton speaks of “equal forces…generating equal velocities” when every schoolchild today has had it drummed into them that force produces acceleration rather than velocity as such. However, Newton does not seem to have had a suitable (Latin) word for ‘acceleration’ and we must, I think, understand him to mean “equal forces generate equal supplementary velocities”, while assuming, in accordance with the First Law, that a body retains the velocity it already has.

Note, however, that Newton, like Galileo, does not give us an equation of motion but still talks the geometrical language of proportion ― and this is typical of the way the entire Principia is written, even though Newton had already invented his version of the Calculus, the Method of Fluxions.

Note 8  Modern textbooks generally state that gravitational attraction travels at the speed of light but there is some doubt as to whether one can really talk about gravity travelling anywhere since it is the intervening space that contracts according to General Relativity.

 

In an earlier post I suggested that the vast majority of ultimate events appear for a moment and then disappear for ever : only very exceptionally do ultimate events repeat identically and form an event-chain. It is these event-chains, made up of (nearly) identically repeating clusters of ultimate events, that correspond to what we call ‘objects’ — and by ‘object’ I include molecules, atoms and subatomic particles. There is, thus, according to the assumptions of this theory, ‘something’ much more rudimentary and fleeting than even the most short-lived particles known to contemporary physics. I furthermore conjectured that the ‘flicker of existence’ of ephemeral ultimate events might one day be picked up by instrumentation and that this would give experimental support to the theory. It may be that my guess will be realized more rapidly than I imagined since, according to an article in the February 2012 edition of Scientific American, an attempt is actually being made to detect such a ‘background hum’ (though those concerned interpret it somewhat differently).

Craig Hogan, director of the Fermilab Particle Astrophysics Center near Batavia, Ill., thinks that if we were to peer down at the tiniest sub-divisions of space and time, we would find a universe filled with an intrinsic jitter, the busy hum of static. This hum comes not from particles bouncing in and out of being or other kinds of quantum froth……..Hogan’s noise arises if space is made of chunks. Blocks. Hogan’s noise would imply that the universe is digital.”     Michael Moyer, Scientific American, February 2012
Moreover, Hogan thinks “he has devised a way to detect the bitlike structure of space. His machine ― currently under construction ― will attempt to measure its grainy nature.(…) Hogan’s interferometer will search for a backdrop that is much like the ether ― an invisible (and possibly imaginary) substrate that permeates the universe”.
         Various other physicists are coming round to the idea that ‘Space-Time’ is ‘grainy’ though Hogan is the first to my knowledge to speak unequivocally of a ‘backdrop’ permeated with ‘noise’ that has nothing to do with atomic particles or the normal quantum fluctuations. However, the idea that the universe is a sort of giant digital computer with these fluctuations being the ‘bits’ does not appeal to me. As I see it, the ‘information’ only has meaning in the context of intelligent beings such as ourselves who require data to understand the world around them, make decisions and so forth. To view the universe as a vast machine running by itself and carrying out complicated calculations with bits for information (a category that includes ourselves) strikes me as fanciful though it may prove to be a productive way of viewing things.      S.H.  30/08/12

Newton’s First Law states, in effect, that the ‘natural’ movement of a particle ― Newton says ‘innate’ ― is to travel in a straight line at constant speed. In consequence, any deviation from this ‘natural’  state requires explanation and the conclusion to be drawn is that the particle has become subject to interference from an outside source (Newton’s Second Law).
After the square and the rectangle, the circle is probably the best known regular shape though there are precious few examples of true circles to be found in nature  ─ if we are so familiar with it, this is largely because of the wheel, one of the most ancient human inventions. Plato, enamoured of elegance and simplicity rather than actuality, considered that heavenly bodies must follow circular paths because the circle was the ‘most perfect’ shape (Note 1). Greek astronomy, as we know, dealt with departures from circular motion by introducing epicycles, circles within or on other circles. It would probably be possible, though hardly convenient, to trace out any regular curve in this manner and the Ptolemaic system proved to be a satisfactory way of calculating planetary orbits for nearly two thousand years.
However, for reasons that are probably physiological in origin, we do not feel at ease with anything other than straight lines, squares and rectangles and a good deal of mathematics is concerned with the problem of ‘squaring the circle’ or, more generally, converting the areas and volumes of irregular shapes into so many squares. Newton, following Galileo, broke down circular motion into two straight-line components at right angles to each other  : one tangential to a point on the circumference of a circle and the other along the radius from that point to the centre of the circle. He furthermore proposed that circular motion could be modelled in the following manner : the original tangential velocity did not change in magnitude but did change repeatedly in its direction and by the same amount.  We do not normally view a change in direction as an ‘acceleration’ and Newton, writing in Latin, did not use the term. However, in Newtonian mechanics we consider any deviation from constant straight-line motion  to be an ‘acceleration’ which, according to  Newton’s Second Law must be due to the action of an  outside force. He coined the term centripetal or ‘centre-seeking’ force (after Latin peto, I seek) to characterise this force. What is more,  since gravitational attraction was permanent and did not change in magnitude over relatively small distances, a particle subject to such a force would deviate repeatedly from its current straight line motion and deviate by the same amount each time. If the original velocity was ‘just  right’, the particle would keep more or less at the same distance from the attracting body while perpetually changing direction : the result if the attracting body was very much larger being motion in a circle around that body.
There can be little doubt that Newton based his mathematical treatment of circular motion on personal experience while generalising the principles involved, by a giant leap of thought, to model the motion of heavenly bodies. On the second page of his Principia he mentions in his Definition V how “a stone, whirled about in a sling, endeavours to recede from the hand that turns it; and  by that endeavour, distends the sling, and that with so much the greater force, as it is revolved with the greater velocity” (Principia, Vol. I Motte’s translation). It is a matter of experience (not theory) that a whirling conker or other projectile exerts a definite pressure on your finger, or on any other object that serves as an axis of rotation, and that we feel this pressure more or less on the opposite side to where the conker is ‘at that moment’. Secondly, as Newton says, the faster the whirling motion, the more the string cuts into your finger. Thirdly, if we use a rubber  connection we can actually observe the ‘string’ being extended beyond its normal length. And finally, if we cut the string or otherwise break the connection between particle at the circumference and the centre, the particle flies off sideways : it never flies off directly inwards nor, as far as we can judge, does it continue to follow a circular trajectory. 

The Modern Derivation of the formula for centrifugal force

In the modern treatment of motion in a circle we have a particle which is at rest at time t = 0 at position P0 on the circumference of a circle of radius r. It is propelled in a direction tangential to the circumference of the circle at an initial  velocity of v metres/sec.  At time t it has reached the point P1 having travelled vt metres while the angle at O has turned through θ radians.

Since it has travelled in a straight line along the tangent it has covered r tan θ metres, while a particle travelling along the circumference has travelled in the same t seconds a distance of rθ metres if θ is in radians. If we have an angular velocity of ω radians per second, this distance is rωt metres. Now, v, the speed of the particle travelling along the tangent, is not the same as rθ/t = rω, the speed of a particle travelling along the circumference, since r tan θ > rθ.

        v/(rω) = (r tan θ/t)/(rθ/t) = (tan θ)/θ     or     v = rω·(tan θ)/θ

For very small angles v ≈ rω since the limit of tan θ/θ is 1 as θ → 0

Now, the particle supposedly always maintains its constant speed but, each time it is in contact with the circumference, changes direction and pursues a path along the tangent at that point. In the usual modern treatment, we take two velocity vectors v1 and v2 corresponding to the velocities at two points on the circumference of a circle when the angle at the centre has turned through θ radians. These two velocities are, according to Newton’s treatment of circular motion, equal in magnitude but differ in direction, and the angle between the two vectors is the same as the angle at the centre. We now evaluate the closing velocity vector, which is considered “for very small angles” to be approximately equal to vθ, as if this closing vector, instead of being a straight line, were part of the circumference of a circle of radius v metres subtended by the angle θ. We thus end up, after some manipulation, with an acceleration of v²/r metres per second per second, setting ωr equal to v. Since F = mass × acceleration, the centrifugal force is mv²/r for a particle of mass m.
So much for textbooks aimed at engineers and technicians. In textbooks aimed at pure mathematicians it is stressed that what we have here is a double ‘passage to a limit’ : that involving v and rω, and that involving 2v sin(θ/2) and vθ. For, in reality, the closing vector would have length 2v sin(θ/2), which goes to vθ because the ratio 2v sin(θ/2) : vθ = sin(θ/2) : (θ/2) tends to the limit 1 as θ/2 → 0. But what these textbooks do not make clear is that these limits are never attained, since tan θ is never strictly equal to θ and sin(θ/2) is never strictly equal to θ/2 for any 0 < θ < π/2.
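A few lines of arithmetic make the point about the limit concrete (v and r below are arbitrary illustrative values, nothing more) : the change of velocity 2v sin(θ/2) divided by the elapsed time rθ/v creeps up towards v²/r as θ is made smaller, without ever reaching it for any θ > 0.

# Over a turn of theta radians the elapsed time is r*theta/v and the change of
# velocity has magnitude 2*v*sin(theta/2); their ratio tends to v**2/r.
import math

v, r = 10.0, 2.0                     # illustrative speed (m/s) and radius (m)
for theta in (0.5, 0.1, 0.01, 0.001):
    dt = r * theta / v
    dv = 2 * v * math.sin(theta / 2)
    print(theta, dv / dt)            # 49.48..., 49.97..., 49.999..., ...
print(v**2 / r)                      # the limiting value, 50.0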

Motion in a Circle according to Ultimate Event Theory

As before, we have a particle P at r units of distance from O, the centre of an imaginary circle. It is projected along the tangent at P0 with speed v (units of distance per unit of time) and at time t it is at P1, having traversed vt units of distance. At time t the angle at O has gone through θ radians, so that vt = r tan θ.
So far, so good : everything is as in the normal treatment. However, note that t is integral, being ‘so many ksanas’ ― a ‘ksana’ is the minimal interval of time, the equivalent of the vague ‘instant’. Also, v is a rational number since it is the ratio of an integral number of grid positions to t ksanas. If the ‘particle’ (repeating ultimate event) takes 4 ksanas to reappear 7 grid positions further along, the speed of reappearance is 7/4 spaces/ksana. It is not to be supposed that the particle covers one and three-quarter spaces in one ksana, since the spaces are indivisible, only that four ksanas later the particle reappears seven positions to the right (or left as the case may be) of where it was before. Moreover, these distances are ‘unstretched’, i.e. the grid positions are conceived to occupy ‘all’, or practically all, of the distance between two occupied positions. Just as, in Special Relativity, we consider measurements made from inside an inertial system to be ‘standard’, so in Ultimate Event Theory we start by evaluating distances from the standpoint of a regular repeating event not subject to influence from other event-chains. The repeating event E is such an event since it is not (yet) subject to an outside influence : it maintains a constant speed of s spaces per t ksanas and its speed is s/t spaces per ksana. r is likewise integral in Ultimate Event Theory since it is so many (unstretched) spaces from O.
When P reaches P1 it is no longer at r spaces from O but a somewhat larger distance (and the same applies to event E when it repeats at this particular grid position). The distance P0 O has thus been ‘stretched’ using the analogy of a string connecting two bodies. Stretched by how much?  By Pythagoras,
        (OP1)² = r² + r² tan² θ,  or  OP1 = r√(1 + tan² θ), and the extension is this quantity less the radius, or r√(1 + tan² θ) – r = r√(sec² θ) – r = r(1/cos θ – 1)

Now this spatial distance does not have to be integral, that is, correspond to a whole number of positions capable of receiving ultimate events (Note 2).
The initial displacement of the particle, once it has been impelled along its path, is unaccelerated and thus, by definition, not subject to an outside force. However, this displacement does have an effect on a similar, initially unaccelerated, body situated at the centre O (since the distance between the two particles has increased). This ‘action’ at O calls forth an ‘equal and opposite reaction’ according to Newton’s Third Law and exactly four ksanas later the ‘particle’ appears on the circumference at P2 with the same speed and proceeds along the tangent at P2 ― this according to Newton’s treatment of motion in a circle.
In event terms we have an event E originally appearing at P0 exactly r spaces from O, reappearing at P1 outside the imaginary circle t ksanas later, and finally appearing at P2 where it will travel along the tangent at P2  at the same speed v = m spaces per t ksana ― in this particular case 7/4 spaces/ksana.
The first displacement calls forth an ‘equal and opposite reaction’ according to Newton’s Third Law and exactly four ksanas later the ‘same’ ultimate event repeats on the circumference of the imaginary circle at P2. By hypothesis, it has  the same ‘reappearance speed’ of v spaces/ksana ― in this particular case 7/4 spaces/ksana. Although it has the same speed, event E has been subject to an outside influence since it has changed its straight-line event-direction : it has been, in Newtonian terms, accelerated and subject to an outside force. This force, or rather  influence, originates in  the reaction of the event-chain situated at the centre O .  The reaction will be, by Newton’s Laws, equal in magnitude to the original force to which the body (event-cluster) at O was subject.
It is essential to grasp the succession of events. When travelling from P0 to P1, the particle/event is not subject to any outside influence : this influence is in operation only for the second t ksanas, during the passage from P1 to P2. Similarly, once at P2, the particle/event is again free of outside influence for t ksanas, and the entire action/reaction cycle repeats indefinitely. This is so in the Newtonian treatment as well, but there is a tendency, even in authoritative textbooks, to assume that, because action and reaction take place in such swift succession, they are strictly simultaneous.
So how do we evaluate the strength of the restoring force, which in Newtonian Mechanics is known as the centripetal or centre-seeking force (from Latin peto, I seek)? We can do this by comparing the distances along the radial direction and noting the changes in speed. The particle/event starts off with zero speed in this direction since it is situated on the circumference; t ksanas later it is at a distance r(1/cos θ – 1) beyond the circumference, and t ksanas later still it has once more zero radial speed, having recovered its original configuration.

Now, the particle (event) has constant speed v and so, were it not for the restraining influence from the centre, it would have, during the second t ksanas, travelled another vt spaces along the tangent line from P0 and the angle at the centre would have gone through another θ radians. The particle’s distance from P2 on the circumference would thus be

        √(r² + r² tan² 2θ) – r = r√(sec² 2θ) – r = r(1 – cos 2θ)/cos 2θ

        We wish to have this distance and the eventual radial acceleration in terms of r and v rather than θ, since r and v are ‘macroscopic’ values that can be accurately measured whereas θ is microscopic ― what used to be called ‘infinitesimal’.
To bring the particle to P2 the above distance has had to be reduced to zero in the space of t (not 2t) ksanas since the reaction did not begin until the particle was at P1. We thus evaluate   r(1 – cos 2θ)/(t² cos 2θ)   where, as before, t = r tan θ/v

                =   (v²/r) (1 – cos 2θ)/(cos 2θ tan² θ)

Since cos 2θ  =  2 cos² θ – 1,
        1 – cos 2θ  =  1 – (2 cos² θ – 1)  =  2(1 – cos² θ)  =  2 sin² θ

Consequently         (v²/r) (2 sin² θ)(cos² θ)/((2 cos² θ – 1)(sin² θ))     =   2(v²/r) cos² θ/(2 cos² θ – 1)

         Since θ is not a macroscopic value and is  certainly extremely small, we may take the limit as θ → 0  (Note 2).

Lim as θ → 0  of  cos² θ/(2 cos² θ – 1)   =   Lim as θ → 0  of  1/(2 – 1/cos² θ)   =   1/(2 – 1)   =   1

        The acceleration is thus 2v²/r spaces per ksana per ksana and the centripetal ‘force’ on a particle of mass m is 2mv²/r (Note 3).
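As a rough numerical check on the above algebra (a sketch of my own, using the illustrative values r = 1 space and v = 7/4 spaces per ksana from the worked example), the following few lines of Python evaluate the exact ratio r(1 – cos 2θ)/(t² cos 2θ) for progressively smaller θ and compare it with 2v²/r :

    import math

    # Illustrative values only : r = 1 space, v = 7/4 spaces per ksana
    r, v = 1.0, 7.0 / 4.0

    for theta in (0.1, 0.01, 0.001):                  # θ in radians, made progressively smaller
        t = r * math.tan(theta) / v                   # ksanas taken to reach P1 (t = r tan θ / v)
        overshoot = r * (1 - math.cos(2 * theta)) / math.cos(2 * theta)   # distance beyond the circle
        a = overshoot / t ** 2                        # this distance is recovered in t (not 2t) ksanas
        print(theta, a, 2 * v ** 2 / r)               # a approaches 2v²/r as θ shrinks

For θ = 0.001 the computed value is already within about one part in a million of 2v²/r = 6.125, which is at least consistent with the limit taken above.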

This value of 2v²/r is a most significant result since it differs from the one derived by traditional methods, which is just v²/r. The latter is the average value over 2t ksanas since, during the first t ksanas, the centripetal force was not active. It will be objected that the particle ‘in effect’ follows a circular path at all moments but, if this really were the case, the particle would never obey Newton’s First Law but would be permanently accelerated (since it would never follow a straight-line trajectory). That the deviation is for normal practical purposes entirely negligible is irrelevant : the aim of an exact science should be to deduce what actually goes on and, in any case, it is by no means certain that the discrepancy would not, in some cases, prove significant. Whereas in the conventional treatment the centripetal force is permanent, in Ultimate Event Theory it is, like all reactions, intermittent : it is only operative over t ksanas out of every 2t. It is by no means inconceivable that accurate experiments could detect this intermittence, just as they might one day detect the difference between v and .    S.H.

_____________________________________ 

Note 1. In reality, all orbiting bodies eventually move away from the centre of attraction or fall towards it as even the Moon does — but so slowly that the difference from one year to the next is only a few centimetres.   

Note 2 The ‘stretched distance’ between two occupied spaces does not (by hypothesis) contain any more grid positions than an unstretched one, and the extra ‘distance’ is to be attributed to the widening of the gaps between grid-points, i.e. to the underlying (non-material) ‘substance’ that fills the Locality. This stretched distance is essentially a mathematical convenience that has no true existence. All that actually happens, and all that is, or could be, observable, is the occurrence of a particular ultimate event at a certain spot on the Locality at a particular ksana and its reappearance at a subsequent ksana at a spot that is not exactly equivalent to the previous position (relative to some standard repeating event-chain).

Note 2. It must be stressed that this limiting value, like practically all those which occur in Calculus, is not actually attained in practice though one assumes that in such a case as that considered the approximation is very good. The value of θ in any actual occurrence will certainly be small but not ‘infinitesimal’, a term which has no meaning in Ultimate Event Theory (and should have none in traditional mechanics either).

Note 3  It remains to give a satisfactory definition of ‘mass’ in terms of Ultimate Event Theory. Newton calculated mass with respect to volume and density and spoke of it as “a measure of the quantity of matter in a body”. In traditional physics mass is not only measured by, but equated with, the ability of a particle to withstand any attempt to change its constant straight-line motion, and in contemporary physics mass has virtually lost any ‘material’ meaning by being viewed as interchangeable with energy.

Newton’s Third Law states rather cryptically that

“To every action there is an opposite and equal reaction.”

This law is the most misunderstood (though probably most employed) Law of the three since it suggests at first sight that everything is in a permanent deadlock !   Writers of Mechanics textbooks hasten to point out that the action and reaction apply to  two different bodies, call them body A and body B.  The Third Law claims that the force exerted by body A on body B is met by an equivalent force, equal in magnitude but opposite in direction, which body B exerts on body A.
Does this get round the problem? Not entirely. The schoolboy or schoolgirl who somehow feels uneasy with the Third Law is on to something. What is either completely left out of the description, or not sufficiently  emphasized by physics and mechanics textbooks, is the timing of the occurrences. It is my push against the wall that is the prior occurrence, and the push back from the wall is a re-action. Without my decision to strike the wall, this ‘reaction’ would never have come about. What in fact is happening at the molecular level is that the molecules of the wall have been squeezed together by my blow and it is their attempt to recover their original conformation that causes the compression in my hand, or in certain other circumstances, pushes me away from the wall altogether. (The ‘pain’ I feel is a warning message sent to the brain to warn it/me that something is amiss.) The reaction of the wall is a restoring force and its effectiveness depends on the elasticity or plasticity of the material substance from which the wall is made — if the ‘wall’ is made of putty I feel practically nothing at all but my hand remains embedded in the wall. As a reliable author puts it, “The force acting on a particle should always be thought of as the cause and the associated  change of momentum as the effect” (Heading, Mathematical methods in Science and Engineering).
In cases where the two bodies remain in contact, a lengthy toing and froing goes on until both sides subside into equilibrium (Note 1). For the reaction of the wall becomes the action in the subsequent cause/effect pair, with the subsequent painful compression of the tissues in my hand being the result. It is essential to realize that we are in the presence not of ‘simultaneous’ events, but of a clearly differentiated event-chain involving two ‘objects’, namely the wall and my hand. It is this failure to distinguish between cause and effect, action and reaction, that gives rise to the conceptual muddle concerning centrifugal ‘force’. It is a matter of common experience that if objects are whirled around but restrained from flying away altogether, they seem to keep to the circumference of an imaginary circle — in the case of a spin dryer, the clothes press themselves against the inside wall of the cylinder while a conker attached to my finger by a piece of string follows a roughly circular path with my finger as centre (only roughly because gravity and air pressure deform the trajectory from that of a perfect circle). At first sight, it would seem, then, that there is a ‘force’ at work pushing the clothes or the conker outwards since the position of the clothes on the inside surface of the dryer or of the conker some distance away from my finger is not their ‘normal’ position. However, the centrifugal ‘force’ (from Latin fugio, ‘I flee’) is not something applied to the clothes or the conker but is entirely a response to the initiating centripetal force (from Latin peto, ‘I seek’) without which it would never have come into existence. The centrifugal ‘force’ is thus entirely secondary in this action/reaction couple and, for this reason, is often referred to as a ‘fictitious’ force — though this is somewhat misleading since the effects are there for all to see, or rather to feel.
Newton does in certain passages make it clear that there is a definite sequence of events but in other passages he is ambivalent because, as he fully realized, according to his assumptions, gravitational influences seemed to propagate themselves over immense distances instantaneously (and in both directions) — which seemed extremely far-fetched and was one reason why continental scientists rejected the theory of gravitational attraction. Leaving gravity aside since it is ‘action at a distance’, what we can say is that in cases of direct contact there really is an explicit, and often visible, sequencing of events. In the well-known Ball with Two Strings experiment (Note 2) we have a heavy lead ball suspended from the ceiling by a cotton thread with a second thread hanging underneath the ball. Where will the thread break? According to Newton’s Laws it should break just underneath the ceiling since the upper thread has to support the weight of the ball as well as responding to my tug. However, if you pull smartly enough the lower thread will break first and the ball will stay suspended. Why is this? Simply because there is not ‘time enough’ for my pull to be transmitted right up through the ‘ball plus thread’ system to the ceiling and call forth a reaction there. And, if it is objected that this is a somewhat untypical case because an appreciable transmission time is involved, an even more dramatic demonstration is given by high-speed photographs of a golf club striking a ball. We can actually see the ball, still in contact with the club, massively deformed in shape, and it is the ball’s recovery of its original configuration (the reaction) that propels it into the air. As someone said, all (mechanical) propulsion is ‘reaction propulsion’, not just that of jet planes.
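A toy calculation may make the point clearer (entirely my own sketch, with made-up numbers; the ‘slow’ case simply assumes the pull has time to be transmitted to the upper thread, the ‘sharp’ case assumes it does not) :

    # Assumed values : ball weight 1.0, breaking strain 1.5 for either thread
    WEIGHT, STRAIN_LIMIT = 1.0, 1.5

    def which_breaks(pull, transmitted):
        lower = pull                                   # the lower thread always feels the tug first
        upper = WEIGHT + (pull if transmitted else 0)  # a slow pull reaches the ceiling, a sharp one does not
        if upper > STRAIN_LIMIT:
            return "upper thread"
        if lower > STRAIN_LIMIT:
            return "lower thread"
        return "neither"

    print(which_breaks(pull=1.0, transmitted=True))    # slow pull : the upper thread goes (1.0 + 1.0 > 1.5)
    print(which_breaks(pull=2.0, transmitted=False))   # sharp tug : the lower thread snaps first

The numbers are arbitrary; the only point being illustrated is that the outcome depends on whether the tug has ‘time enough’ to be transmitted up the system.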
In Ultimate Event Theory the strict sequencing of events, which is only implicit in Newtonian mechanics, becomes explicit. If we leave aside for the moment the question of ‘how far’ a ksana extends (Note 3), it is possible to give a clearcut definition of simultaneous (ultimate) events : any two events are simultaneous if they have occurrence within the same ksana. A ‘ksana’ (roughly ‘instant’) is a three-dimensional ‘slice’ of the Locality and, within this slice, everything is still because there is, if you like, not enough ‘time’ for anything to change. Consequently, an ultimate event which has occurrence within or at a particular ksana cannot possibly influence another event having occurrence within this same ksana : any effect it may have on other event-chains will have to wait until at least the next ksana. The entire chain of cause and effect is thus strictly consecutive (cases of apparent ‘causal reversal’ will be considered later). In effect, when bodies are in contact there is a ceaseless toing and froing, a sort of ‘passing the buck’ from one side to the other, until friction and other forces eventually dampen down the activity to almost nothing (while not entirely destroying it).
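The ‘passing the buck’ picture can be sketched in a few lines (again a toy model of my own, not the theory’s formalism) : action and reaction never share a ksana but strictly alternate, each reaction being slightly weaker than the action that provoked it, so the exchange dies down towards the quasi-equilibrium described in Note 1 below :

    DAMPING = 0.8          # assumed fraction of each push handed back at the next ksana
    push = 1.0             # the initiating action : A pushes B during ksana 0

    for ksana in range(10):
        actor = "A" if ksana % 2 == 0 else "B"        # the reaction must wait for the next ksana
        print("ksana", ksana, ":", actor, "pushes with", round(push, 3))
        push *= DAMPING                               # each successive reaction is a little weaker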
S.H. 21/08/12

_______________________________________________

Note 1 Complete static equilibrium does not and cannot exist since what we call ‘matter’ is always in a state of vibration and bodies in contact affect each other even when apparently completely motionless. What can and does exist, however, is a ‘steady state’ where the variations in pressure of two bodies in contact more or less cancel each other out over time (over a number of ksanas). We are, in chemistry, familiar with the notion that two fluids in solution are never uniformly mixed and that, for example, oxidation and reduction reactions take place continually; when we say a fluid is ‘in equilibrium’ we do not mean that no chemical reactions are taking place but that the changes more or less even out over a certain period of time. The same applies to solid bodies in contact though the departures from the mean are not so obvious. Although it is practical to divide mechanics into statics and dynamics, there is in reality no hard and fast division.

Note 2  I am indebted to Den Hartog for pointing this out in his excellent book Mechanics (Dover 1948).

Note 3  It is not yet the moment — or maybe I should say ksana — to see how Ultimate Event Theory squares with Relativity : it is hard enough seeing how it squares with Newtonian Mechanics. However, this issue will certainly be tackled head-on at a later date. Einstein, in his 1905 paper, threw a spanner in the works by querying the then current understanding of ‘simultaneity’ and physics has hardly recovered since. In his latter days, Einstein adhered to the belief that everything takes place in an eternal present so that what is ‘going to’ happen has, in a sense, already been — in my terms, already has occurrence on the Locality. I am extremely reluctant to accept such a theory, which flies in the face of all our perceptions and would sap our will to live (mine at any rate). On the other hand, it would, I think, be fantastic to consider a single ksana (instant) stretching out across the known universe so that, in principle, all events are either ‘within’ this same ksana or within a previous one. At the moment I am inclined to think there is a sort of mosaic of ‘space/time’ regions and it is only within a particular circumscribed region that we can talk meaningfully of (ultimate) events having occurrence within or at the same ksana. Nonetheless, if you give up sequencing, you give up causality and this is to give up far too much. As Keith Devlin wrote, “It seems to me that there is nothing for it but to take as fundamental the relation of one event causing another” (Devlin, Logic and Information, p. 184).

 

  1. Physical reality is everything that has occurrence, ultimate reality is the source of everything that can have occurrence.
  2. Physical reality has a source, K0, which is not itself an event or a collection of events — rather, events are to be viewed as ephemeral and peripheral disturbances of this source.
  3. All physical and mental phenomena are composed of these disturbances called ‘ultimate events’ or, by Buddhists, dharmas.
  4. What we perceive as solid objects are in reality ‘flashings into existence’ of the dharmas (ultimate events).
  5. Certain ultimate events acquire persistence and form stable sequences that have the power to influence other event-chains.
  6. A binding force (karma) holds these event-clusters and event-chains together.
  7. This binding force can be, and sometimes is, abolished, in which case the event-chain dissociates and its constituent events soon cease to repeat.
  8. The complete abolition of all dominance (the power to affect other event-chains and to persist as a distinct event-chain) returns the physical universe to a quiescent state (nirvana) indistinguishable from the backdrop K0 itself. Whether the backdrop will give rise to other universes and realities need not concern us : our universe will have come to an end.

Ultimate reality can be known — inasmuch as such a thing can be known — because we, like everything else, are grounded in this ultimate reality.

Physical reality is not governed by eternal laws : all observed regularities are relatively persistent patterns, no more, no less. These patterns, being patterns rather than laws, can change, can evolve. The entire universe, as Descartes said, is at every instant creating itself out of nothing and subsiding into nothing — except that this ‘Nothing’ is the ground of everything.

Ultimate Event Theory is a new, or rather resuscitated, paradigm, a paradigm which, it is suggested, could have given rise to the natural sciences but which, for various cultural and historical reasons, did not. This paradigm originated in Northern India in the first few centuries AD. The process of meditation itself was seen as a metaphor for the evolution of the entire universe, since the aim of meditation is to still the restless, ceaselessly active mind. ‘Deliverance’ is deliverance from this commotion : the stilling of the excitation that is the mind and ultimately the universe itself.
But it is not important where this paradigm came from : the important question is whether it is apt and whether it can be transformed into the bare bones of a physical theory. Matter has traditionally been viewed as a ‘given’, as something both solid and persistent : this was Newton’s view, and Western science stems from the Greek atomists via Galileo and Newton. According to Ultimate Event Theory ‘matter’ is neither enduring nor ‘solid’ : it is made up of evanescent flashes that sometimes form relatively stable and persistent patterns. These flashings are disturbances on the ‘periphery’ of the only ‘thing’ that really exists, and will one day disappear. Instead of atoms being eternal (as Newton imagined), the only ‘eternal’ thing is, thus, the underlying substance itself, K0. And K0 does not, apparently, exist in an unchanging state; on the contrary, it is always evolving — though it must presumably possess a core that does not change. In any case this core does not concern us here : it is only the evolution of the surface fluctuations that is amenable to direct observation and experiment.

Image : Instead of a co-ordinate system with continuity built into it, we should rather think in terms of a three-dimensional ‘reality’ flashing on and off with definite gaps between each flash. Every event-line should strictly speaking be represented as a sequence of dots : there are, in Ultimate Event Theory, no continuous functions or processes, only more or less dense and regular ones.
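As a crude illustration of this ‘sequence of dots’ picture (my own toy representation, re-using the 7/4 spaces-per-ksana chain from the earlier example), an event-line can be stored as nothing more than a finite list of (ksana, grid position) pairs, with nothing at all between the entries :

    # A toy event-chain : one ultimate event per ksana, displaced roughly 7/4 grid positions each time
    event_chain = [(k, (7 * k) // 4) for k in range(8)]

    for ksana, position in event_chain:
        # print one 'slice' of the Locality per ksana, with a single mark for the event
        print("ksana", ksana, ":", "." * position + "*")

There is no function defined ‘between’ the ksanas : the chain simply is the list of its occurrences.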
The gaps between ultimate events are not metrical, that is, although there is a definite size to each event-globule, the distances between globules have no absolute specific ‘length’, are ‘elastic’ if you like. We could imagine reality to consist of certain hard seeds (ultimate events) swimming around in a jelly (the Locality K0) except that this image is only valid for a single ksana (instant) — the jelly and the seeds appear, disappear, appear and so on indefinitely (but not eternally).    S.H. 9/8/12

Archimedes gave us the fundamental principles of Statics and Hydrostatics but somehow managed to avoid founding the science of dynamics though, as a practising civil and military engineer, he must have had to deal with the mechanics of moving bodies. The Greek world-view, that which has been passed down to us anyway, was essentially geometrical and the truths of geometry are, or were conceived to be, ‘timeless’ : they referred to an ideal world where spherical bodies really were perfectly spherical and straight lines perfectly straight.
Given the choice between exact positioning and movement (which is change of position) you are bound to lose out on one or the other. But science and technology must somehow encompass both exact position and ‘continuous’ movement. So how did Newton cope with the slippery idea of velocity ?  From a pragmatic point of view, supremely well, since he put dynamics on a firm footing and went so far as to invent a new form of mathematics tailor-made to deal with the apparently erratic motions of heavenly bodies — his ‘Method of Fluxions’ which eventually became the Differential Calculus. Strangely, however, Newton completely avoided Calculus methods in his Principia and relied entirely on rational argument supplemented by cumbersome, essentially static, geometrical demonstrations. Why did he do this? Probably because he felt himself to be on uncertain ground mathematically and philosophically when dealing with velocity.
If you are confronted with steady straight-line motion you don’t need Calculus — ordinary arithmetic such as even the ancient Babylonians and Egyptians employed is quite adequate. But, precisely, Newton was interested in the displacements of objects subject to a force, thus, by definition, not in constant straight-line motion. And when the force was permanent, as was the case when dealing with gravitational attraction, the consequent motions of the bodies were never going to be constant (if change of direction was taken into account).
Mathematically speaking, speed is simply the first derivative of distance with respect to time, and velocity, a vector quantity, is ‘directed speed’, speed with a direction attached to it. The modern mathematical concept of a ‘limit’ artfully avoids the question of whether a ‘moving’ particle actually attains a particular speed at a particular moment : it is sufficient that the difference between the ratio distance covered/time elapsed and the proposed limit can be made “smaller than any given difference” as the time intervals are progressively reduced. This is a solution only to the extent that it removes the problem from the domain of reality where it originated. For the world of mathematics is an ideal, not a real, world, though in some cases there is a certain overlap.
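To see what the modern definition does (and does not) claim, here is a minimal sketch with an arbitrary illustrative distance function s(t) = 5t² : the ratio of distance covered to time elapsed can be brought as close as we please to 10t by shrinking the interval, but nothing is said about the particle actually attaining that value :

    def s(t):
        return 5 * t ** 2        # an arbitrary illustrative distance function

    t = 3.0
    for h in (1.0, 0.1, 0.001, 1e-6):
        print(h, (s(t + h) - s(t)) / h)    # tends to 30 as h is progressively reduced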
Newton was not basically a pure mathematician, he was a mathematical realist and a hard-nosed materialist (at least in his physics). He was obviously bothered by the question that today you are not allowed to ask, namely “Did the particle attain this limit or did it only get very close to it?”
It is often said that Newton did not have the modern mathematical concept of limit, but he came as close to it as was possible for a consistent realist. He speaks of “ultimate ratios” and “evanescent quantities” and, unlike Leibniz, tends to avoid infinitesimals if he can possibly manage to. He sees that there is indeed a serious logical problem about these diminishing ratios somehow carrying on ad infinitum and yet bringing the particle to a standstill.

“Perhaps it may be objected that there is no ultimate proportion of evanescent quantities; because the proportion, before the quantities have vanished, is not the ultimate, and when they are vanished, is none. But by the same argument, it may be alleged that a body arriving at a certain place, and there stopping, has no ultimate velocity: because the velocity, before the body comes to the place, is not its ultimate velocity; when it has arrived, is none.”
Newton, Principia    
Translation Andrew Motte

Note that Newton speaks of ‘at a certain place’ and ‘its place’, making it clear that he believes there really are specific positions that a moving particle occupies. He continues :

“But the answer is easy; for by ultimate velocity is meant that with which the body is moved, neither before it arrives at its last place and the motion ceases, nor after, but at the very instant it arrives; that is, that velocity with which the body arrives at its last place and with which the motion ceases. And in like manner,  by the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities not before they vanish, nor afterwards, but with which they vanish.”
  Newton, Principia     Translation Andrew Motte

      But this implies that there is a definite final velocity :  

        “There is a limit which the velocity at the end of the motion may attain, but not exceed. This is the ultimate velocity.”

         Well and good, but Newton now has to meet the objection that, if the ‘ultimate ratios’ are specific, so also, seemingly, are the ‘ultimate magnitudes’ (since a ratio is a comparison between two quantities). This would seem to imply that nothing can properly be compared with anything else or, as Newton puts it, that “all quantities consist of incommensurables, which is contrary to what Euclid has demonstrated”.  

     “But,” Newton continues, “this objection is founded on a false supposition. For those ultimate ratios with which quantities vanish are not truly the ratios of the ultimate quantities, but limits towards which the ratios of quantities decreasing without limit do always converge; and to which they approach nearer than by any given difference, but never go beyond, nor in effect attain to, till the quantities are diminished in infinitum.”   (Newton, Principia     Translation Andrew Motte)

       The last phrase (’till the quantities are diminished in infinitum’) seems to be tacked on. I was expecting as a grand climax, “to which they approach nearer than by any given difference, but never go beyond, nor in effect attain” full stop. This would make the ‘ultimate ratio’ something akin to an asymptote, a quantity or position at once unattained and unattainable. But this won’t do either because, after all, the particle does pass through such an ultimate value (‘limit’) since, were this not the case, it would not reach the place in question, ‘its place’. Bringing in infinity at the last moment (‘diminished in infinitum’ ) looks like a sign of desperation.
A little later, Newton is even more equivocal :
“Therefore if in what follows, for the sake of being more easily understood, I should happen to mention quantities as least, or evanescent, or ultimate, you are not to suppose that quantities of any determinate magnitude are meant, but such as are conceived to be always diminished without end.” 

But what meaning can one give to “quantities….diminished without end” ?  None to my mind, except that we need such quantities to do Calculus, but this does not make such concepts any more reasonable or well founded. The issue, as I said before, has ceased to worry mathematicians because they have lost interest in physical reality, but it obviously did worry Newton and still worries generations of schoolboys and schoolgirls until they are cowed into acquiescence by their elders and betters. The fact of the matter is that to get to ‘its place’ (Newton’s phrase) a particle must have a velocity that is the ‘ultimate ratio’ between two quantities (distance and time) which are being ‘endlessly diminished’ and yet remain non-zero.
In Ultimate Event Theory there is no problem since there is always an ultimate ratio between the number of grid positions displaced in a certain direction and the number of ksanas required to get there. When doing mathematics, we are not going to specify this ratio, even supposing we knew it : it is for the physicist and engineer to give a value to this ratio, if need be, to the level of precision needed in a particular case. But we know (or I do) that δf(t)/δt has a limiting value (Note 1) which we may call df(t)/dt if we so wish. Note, however, that the actual ‘ultimate ratio’ will almost always be more (or less) than the derivative since there will be non-zero terms that need to be taken into account. Also, the actual limiting value will vary according to the processes being studied since manifestly some event-chains are ‘faster’ than others (require fewer ksanas to reach a specified point). Nonetheless, the normal derivative will usually be ‘good enough’ for practical purposes, which is why Calculus is employed when dealing with processes that we know to be strictly finite such as population growth or radio-active decay.   S.H.

Note 1 :      Why must there always be a limiting value?  Because δt can never be smaller than a single ksana — one of the basic assumptions of Ultimate Event Theory.
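To make the point concrete (a sketch only, with f(t) = t² and units of one grid position and one ksana chosen purely for illustration) : once δt is floored at a single ksana, the ‘ultimate ratio’ for f(t) = t² works out to 2t + 1, always slightly more than the conventional derivative 2t — exactly the ‘non-zero terms’ mentioned above :

    KSANA = 1                    # by hypothesis, the smallest possible interval

    def f(t):
        return t ** 2            # an illustrative displacement function

    for t in (10, 100, 1000):
        ultimate_ratio = (f(t + KSANA) - f(t)) // KSANA   # = 2t + 1, never exactly 2t
        print(t, ultimate_ratio, 2 * t)                   # the discrepancy is the 'non-zero term'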

(The opening image is from a painting by Jane Maitland)         S.H.     5/08/12