
Are there/can there be events that are truly random?
First of all we need to ask ourselves what  we understand by randomness. As with many other properties, it is much easier to say what randomness is not than to say what it is.

Definitions of Randomness

“If a series of events or other assortment exhibits a definite pattern, then it is not random” ─ I think practically everyone would agree to this.
This may be called the lack of pattern definition of randomness. It is the broadest and also the vaguest definition but at the end of the day it is what we always seem to come back to. Stephen Wolfram, the inventor of the software programme Mathematica and a life-long ‘randomness student’  uses the ‘lack of pattern’ definition. He writes, “When one says that something seems random, what one usually means is that one cannot see any regularities in it” (Wolfram, A New Kind of Science p. 316). 
        The weakness of this definition, of course, is that it offers no guidance on how to distinguish between ephemeral patterns and lasting ones (except to keep on looking) and some people have questioned whether the very concept of ‘pattern’ has any clear meaning. For this reason, the ‘lack of pattern’ definition is little used in science and mathematics, at least explicitly.

The second definition of randomness is the unpredictable definition and it follows on from the first since if a sequence exhibits patterning we can usually tell how it is going to continue, at least in principle. The trouble with this definition is that it has nothing to say about why such and such an event is unpredictable, whether it is unpredictable simply because we don’t have the necessary  information or for some more basic reason. Practically speaking, this may not make a lot of difference in the short run but, as far as I am concerned, the difference is not at all academic since it raises profound issues about the nature of physical reality and where we stand on this issue can lead to very different life-styles and life choices.

The third definition of randomness, the frequency definition, goes something like this. If, given a well-known and well-defined set-up, a particular outcome, or set of outcomes, in the long run crops up just as frequently (or just as infrequently for that matter) as any other feasible outcome, we class this outcome as ‘random’ (Note 1). A six coming up when I throw a dice is a typical example of a ‘random event’ in the frequency sense. Even though any particular throw is perfectly determinate physically, over thousands or millions of throws a six would come up no more and no less often than any of the other possible outcomes, or would deviate from this ‘expected value’ by a very small amount indeed. So at any rate it is claimed and, as far as I know, experiment fully supports this claim.
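As a rough illustration of the frequency definition (a sketch of my own, with arbitrary throw counts, using a pseudo-random generator to stand in for a fair dice):

```python
# Relative frequency of a six over increasingly many simulated throws.
# The pseudo-random generator stands in for a 'fair' dice; the throw counts are arbitrary.
import random

for n_throws in (100, 10_000, 1_000_000):
    sixes = sum(1 for _ in range(n_throws) if random.randint(1, 6) == 6)
    print(f"{n_throws:>9} throws: frequency of a six = {sixes / n_throws:.4f} (expected ~0.1667)")
```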
It is the frequency definition that is usually employed in mathematics and mathematicians are typically always on the look-out for persistent deviations from what might be expected in terms of frequency. The presence or absence of some numerical or geometrical feature without any obvious reason suggests that there is, or at any rate might be, some hidden principle at work (Note 2).
The trouble with the frequency definition is that it is pretty well useless in the real world since a vast number of instances is required to ‘prove’ that an event is random or not ─ in principle an ‘infinite’ number ─ and when confronted with messy real life situations we have neither the time nor the capability to carry out extensive trials. What generally happens is that, if we have no information to the contrary, we assume that a particular outcome is ‘just as likely’ as another one and proceed from there. The justification for such an assumption is post hoc : it may or may not ‘prove’ to be a sound assumption and the ‘proof’ involved has nothing to do with logic, only with the facts of the matter, facts that originally we do not and usually cannot know.

The fourth and least popular definition of randomness is the causality definition. For me, ‘randomness’ has to do with causality ─ or rather the lack of it. If an event is brought about by another event, it may be unexpected but it is not random. Not being a snooker player I wouldn’t bet too much money on exactly what is going to happen when one ball slams full pelt into the pack. But, at least according to Newtonian Mechanics, once the ball has left the cue, whatever does happen “was bound to happen” and that is that. The fact that the outcome is almost certainly unpredictable in all its finest details even for a powerful computer is irrelevant.
The weakness of this definition is that there is no foolproof way to test the presence or absence of causality: we can at best only infer it and we might be mistaken. A good deal of practical science is taken up with distinguishing between spurious and genuine cases of causality and, to make matters worse, philosophers such as Hume and Wittgenstein go so far as to question whether this intangible ‘something’ we call causality is a feature of the real world at all. Ultimately, all that can be said in answer to such systematic sceptics is that belief in causality is a psychological necessity and that it is hard to see how we could have any science or reliable knowledge at all without bringing causality into the picture either explicitly or implicitly. I am temperamentally so much a believer in causality that I view it as a force, indeed as the most basic force of all since, if it stopped operating in the way we expect, life as we know it would be well-nigh impossible. For we could not be sure of the consequences of even the most ordinary actions; indeed, if we could in some way voluntarily disturb the way in which causes and effects get associated, we could reduce an enemy state to helplessness much more rapidly and effectively than by unleashing a nuclear bomb. I did actually, only half-facetiously, suggest that the Pentagon would be well advised to do some research into the matter ─ and quite possibly they already have done. Science has not paid enough attention to causality: it tends either to take its ‘normal’ operation for granted or to dispense with it altogether by invoking the ‘Uncertainty Principle’ when this is convenient. No one as far as I know has suggested there may be degrees of causality or that there could be an unequal distribution of causality amongst events.

Determinism and indeterminism

Is randomness in the ‘absence of causality’ sense in fact possible?  Not so long ago it was ‘scientifically correct’ to believe in total determinism and Laplace, the French 19th century mathematician, famously claimed  that if we knew the current state of the universe  with enough precision we could predict its entire future evolution (Note 3). There is clearly no place for inherent randomness in this perspective, only inadequate information.
Laplace’s view is no longer de rigueur in science largely because of Quantum Mechanics and Chaos Theory. But the difference between the two world-views has been greatly exaggerated. What we get in Quantum Mechanics (and other branches of science not necessarily limited to the world of the very small) is generally the replacement of individual determinism by so-called statistical determinism. It is, for example, said to be the case that a certain proportion of the atoms in a radio-active substance will decay within a specified time, but which particular atom out of the (usually very large) number in the sample actually will decay is classed as ‘random’. And in saying this, physics textbooks do not usually mean merely that such an event is unpredictable in practice: they mean that it is genuinely unknowable, and thus indeterminate.
But what exactly is it that is ‘random’? Not the micro-events themselves (the radio-active decay of particular atoms) but only their order of occurrence. Within a specified time limit, half, three quarters or some other proportion of the atoms in the sample will have decayed and, if you are prepared to wait long enough, the entire sample will decay. Thus, even though the next event in the sequence is not only unpredictable for practical reasons but actually indeterminate, the eventual outcome for the entire sample is completely determined and, not only that, completely predictable !
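The point can be made concrete with a toy simulation (my own sketch; the decay probability per step and the sample size are illustrative, not physical values): the order in which individual atoms decay differs on every run, yet the proportion remaining follows much the same half-life curve each time.

```python
# Individually 'random' decays, collectively predictable: a toy half-life model.
# N atoms each have the same fixed chance p of decaying in any given time step.
import random

N, p = 100_000, 0.01
remaining = N
for step in range(1, 350):
    remaining -= sum(1 for _ in range(remaining) if random.random() < p)
    if step % 69 == 0:            # ln(2)/p is roughly 69 steps, i.e. about one half-life here
        print(f"step {step:3}: {remaining:6} atoms left (~{remaining / N:.1%} of the sample)")
```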
Normally, if one event follows another we assume, usually but not always with good reason, that this prior event ‘caused’ the subsequent event, or at least had something to do with its occurrence. And even if we cannot specify the particular event that causes such and such an outcome, we generally assume that there is such an event. But in the case of this particular class of events, the decay of radio-active atoms, no single event has, as I prefer to put it, any ‘dominance’ over any other. Nonetheless, every atom will eventually decay : they have no more choice in the matter than Newton’s billiard balls.
Random Generation

To me, the only way the notion of ‘overall determinism without individual determinism’ makes any sense at all is by supposing that there is some sort of a schema which dictates the ultimate outcome but which leaves the exact order of events unspecified. This is an entirely Platonic conception since it presupposes an eventual configuration that has, during the time decay is going on, no physical existence whatsoever and can even be prevented from manifesting itself by my forcibly intervening and disrupting the entire procedure ! Yet the supposed schema must be considered in some sense ‘real’ for the very good reason that it has bona fide observable physical effects which the vast majority of imaginary shapes and forms certainly do not have (Note 4).

An example of something similar can be seen in the case of the development of an old-fashioned (non-digital) photograph taken in such faint light that the lens only allows one photon to get through at a time. “The development process is a chemical amplification of an initial atomic event…. If a photograph is taken with exceedingly feeble light, one can verify that the image is built up by individual photons arriving independently and, it would seem at first, almost randomly distributed in position” (French & Taylor, An Introduction to Quantum Physics p. 88-9). This case is slightly different from that of radio-active decay since the photograph has already been taken. But the order of events leading up to the final pattern is arbitrary and, as I understand it, will be different on different occasions. It is almost as if, because the final configuration is fixed, the order of events is ‘allowed’ to be random.

Uncertainty or Indeterminacy ?

 Almost everyone who addresses the subject of randomness somehow manages to dodge the central question, the only question that really matters as far as I am concerned, which is : Are unpredictable events merely unpredictable because we lack the necessary information  or are they inherently indeterminate?
        Nassim Nicholas Taleb is the contemporary thinker responsible more than anyone else for opening up Pandora’s Box of Randomness, so I looked back at his books to see what his stance on the uncertainty/indeterminacy issue was. His deep-rooted conviction that the future is unpredictable and his obstinacy in sticking to his guns against the experts would seem to be driving him in the indeterminate direction but at the last minute he backs off and retreats to the safer sceptical position of “we just don’t know”.

“A true random system is in fact random and does not have predictable properties. A chaotic system [in the scientific sense] has entirely predictable properties, but they are hard to know.” (The Black Swan p. 198 )

This is excellent and I couldn’t agree more. But he proceeds: “…in theory randomness is an intrinsic property, in practice, randomness is incomplete information, what I called opacity in Chapter 1. (…) Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.” (The Black Swan p. 198)
As far as I am concerned randomness either is or is not an intrinsic property and the difference between theory and practice doesn’t come into it. No doubt, from the viewpoint of an options trader, it doesn’t really matter whether market prices are ‘inherently unpredictable’ or ‘indeterminate’ since one still has to decide whether to buy or not. However, even from a strictly practical point of view, there is a difference, and a big one, between intrinsic and ‘effective’ randomness.
Psychologically, human beings feel much easier working with positives than negatives, as all the self-help books will tell you, and it is even claimed that “the unconscious mind does not understand negatives”. At first sight ‘uncertainty’ and ‘indeterminacy’ appear to be equally negative but I would argue that they are not. If you decide that some outcome is ‘uncertain’ because we will never have the requisite information, you will most likely not think any more about the matter but instead work out a strategy for coping with uncertainty ─ which is exactly what Taleb advocates and claims to have put into practice successfully in his career on the stock market.
On the other hand, if one ends up by becoming convinced that certain events really are indeterminate, then this raises a lot of very serious questions. The concept of a truly random event, even more so a stream of them, is very odd indeed. One is at once reminded of the quip about random numbers being so “difficult to generate that we can’t afford to leave it to chance”. This is rather more than a weak joke. There is a market for ‘random numbers’ and very sophisticated methods are employed to generate them. The first ‘random number generators’ in computer software were based on negative feedback loops, the irritating ‘noise’ that modern digital systems are precisely designed to eliminate. Other lists are extracted from the expansion of π (which has been taken to over a billion digits) since mathematicians are convinced this expansion will never show any periodicity and indeed none has been found. Other lists are based on so-called linear congruential generators. But all this is in the highest degree paradoxical since these last two methods are based on specific procedures or algorithms and so the numbers that actually turn up are not in the least random by my definition. These numbers are random only by the frequency and lack of pattern definitions and, as for predictability, the situation is ambiguous. The next number in an arbitrary section of the expansion of π is completely unpredictable if all you have in front of you is a list of numbers but it is not only perfectly determinate but perfectly predictable if you happen to know the underlying algorithm.
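For what it is worth, a linear congruential generator of the kind just mentioned is only a few lines long. The sketch below uses the well-known ‘minimal standard’ parameters (multiplier 16807, modulus 2³¹ − 1); the seed is arbitrary. It makes the paradox plain: the output looks patternless, yet every value is completely determined, and instantly reproducible, once the seed and the rule are known.

```python
# A classic linear congruential generator: 'random-looking' but entirely rule-governed.
# Parameters are the widely used 'minimal standard' ones; the seed is arbitrary.
M, A = 2**31 - 1, 16807

def lcg(seed, count):
    x = seed
    for _ in range(count):
        x = (A * x) % M          # the whole of the 'randomness' is this one deterministic line
        yield x

print([x % 10 for x in lcg(seed=42, count=20)])   # the digits look patternless...
print([x % 10 for x in lcg(seed=42, count=20)])   # ...yet the same seed reproduces them exactly
```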

Three types of Randomness

 Stephen Wolfram makes a useful distinction between three basic kinds of randomness. Firstly, we have randomness which relies on the connection of a series of events to its environment. The example he gives is the rocking of a boat on a rough sea. Since the boat’s movements depend on the overall state of the ocean, its motions are certainly unpredictable for us because there are so many variables involved ─ but perhaps not for Laplace’s Supermind.
Wolfram’s second type of ‘randomness’ arises, not because a series of events is continuously interacting with its environment, but because it is sensitively dependent on the initial conditions. Changing these conditions even very slightly can dramatically alter the entire future of the system and one consequence is that it is quite impossible to trace the current state of a system back to its original state. This is the sort of case studied in chaos theory. However, such a system, though it behaves in ways we don’t and can’t anticipate, is strictly determinate in the sense that every single event in a ‘run’ is completely fixed in advance (Note 5).
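The standard toy example of this kind of behaviour (mine, not Wolfram’s) is the logistic map x → 4x(1 − x): two starting values differing by one part in a billion track each other for a while and then part company completely, as the sketch below shows.

```python
# Sensitive dependence on initial conditions: the logistic map x -> 4x(1 - x).
# Every step is strictly determinate, yet a seed difference of 1e-9 soon dominates the outcome.
x, y = 0.300000000, 0.300000001
for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2}: x = {x:.6f}  y = {y:.6f}  |difference| = {abs(x - y):.2e}")
```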
Both these methods of generating randomness depend on something or someone from outside the sequence of events : in the first case the randomness is imported from the far larger and more complex system that is the ocean, and in the second case the randomness lies in the starting conditions, which themselves derive from the environment or are deliberately set by the experimenter. In neither case is the randomness inherent in the system itself and so, for this reason, we can generally reduce the amount of randomness by, for example, securely tethering the boat to a larger vessel or by only allowing a small number of possible starting conditions.
Wolfram’s third and final class of generators of randomness is, however, quite different since they are inherent random generators. The examples he gives are special types of cellular automaton. A cellular automaton consists essentially of a ‘seed’, which can be a single cell, and a ‘rule’ which stipulates how a cell of a certain colour or shape is to evolve. In the simplest cases we just have two colours, black and white, and start with a single black or white cell. Most of the rules produce simple repetitive patterns as one would expect, others produce what looks like a mixture of ‘order’ and ‘chaos’, while a few show no semblance of repetitiveness or periodicity whatsoever. One of these, which Wolfram calls Rule 30, has actually been employed in Random[Integer], part of Mathematica, and so has proved its worth by contributing to the financial success of the programme; it has also, according to its inventor, passed all the tests for randomness to which it has been subjected.
Why is this so remarkable? Because in this case there is absolutely no dependence on anything external to the particular sequence which is entirely defined by the (non-random) start point and by an extremely simple rule. The randomness, if such it is to be called, is thus ‘entirely self-generated’ : this is not production of randomness by interaction with other sets of events  but is, if you like, randomness  by parthenogenesis. Also, and more significantly, the author claims that it is this type of randomness that we find above all in nature (though the other two types are also present).
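Rule 30 itself is remarkably easy to state: a cell’s new colour is its left neighbour XOR-ed with the OR of itself and its right neighbour. The sketch below is my own illustration (the width, the number of generations and the use of the centre column as the ‘random’ bit stream are arbitrary choices, and the row wraps around at the edges); it shows unpredictable-looking output growing from a single seed cell and one fixed rule.

```python
# Wolfram's Rule 30: new_cell = left XOR (centre OR right), applied to every cell in parallel.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width = 63
row = [0] * width
row[width // 2] = 1                       # a single black 'seed' cell
centre_bits = []
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    centre_bits.append(row[width // 2])   # the centre column is what serves as 'random' bits
    row = rule30_step(row)
print("centre column:", "".join(map(str, centre_bits)))
```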

Causal Classification of types of randomness

This prompts me to introduce a classification of my own with respect to causality, or dominance as I prefer to call it. In a causal chain there is a forward flow of ‘dominance’ from one event to the next and, if one connection is missing, the event chain terminates (though perhaps giving rise to a different causal chain by ricochet). An obvious example is a row of dominoes where each knocks over the next until one domino, spaced out a little more than the rest, does not get toppled. A computer programme acts essentially in the same way : an initial act sets off a sequential chain of events, and the chain terminates if the connection between two successive states is interrupted.
In the environmental case of the bobbing boat, we have a sequence of events, the movements of the boat, which do not  by themselves form an independent causal chain since each bob depends, not on the previous movement of the boat, but on the next incoming wave, i.e. depends on something outside itself. (In reality, of course, what actually goes on is more complex since, after each buffeting, the boat will be subject to a restoring force tending to bring it  back to equilibrium before it is once more thrown off in another direction, but I think the main point I am making still stands.)
In the statistical or Platonic case such as the decay of a radio-active substance or the development of the photographic image, we have a sequence of events which is neither causally linked within itself nor linked to any actual set of events in the exterior like the state of the ocean. What dictates the behaviour of the atoms is seemingly the eventual configuration (the decay of half, a quarter or all of the atoms) or rather the image or anticipation of this eventual configuration (Note 6).

So we have what might be called (1) forwards internal dominance; (2) sideways dominance; and (3) downwards dominance (from a Platonic event-schema).

Where does the chaotic case fit in? It is an anomaly since, although there is clear forwards internal dominance, it seems also to have a Platonic element and thus to be a mixture of (1) and (3).

Randomness within the basic schema of Ultimate Event Theory

Although the atomic theory goes back to the Greeks, Western science during the ‘classical’ era (16th to mid 19th century) took over certain key elements from Judaeo-Christianity, notably the idea of there being unalterable ‘laws of Nature’, and this notion has been retained even though modern science has dispensed with the lawgiver. An older theory, of which we find echoes in Genesis, views the ‘universe’ as passing from an original state of complete incoherence to the more or less ordered structure we experience today. In Greek and other mythologies the orderly cosmos emerges from an original kaos (from which our word ‘gas’ is derived) and the untamed forces of Nature are symbolized by the Titans and other monstrous creatures. These eventually give way to the Olympians who, significantly, control the world from above and do not participate in terrestrial existence. But the Titans, the ancient enemies of the gods, are not destroyed since they are immortal, only held in check, and there is the fear that at any moment they may break free. And there is perhaps also a hint that these forces of disruption (of randomness in effect) are necessary for the successful functioning of the universe.
Ultimate Event Theory reverts to this earlier schema (though this was not my intention) since there are broadly three phases (1) a period of total randomness (2) a period of determinism and (3) a period when a certain degree of randomness is re-introduced.
In Eventrics, the basic constituents of everything ─ everything physical at any rate ─  are what I call ‘ultimate events’ which are extremely brief and occupy a very small ‘space’ on the event Locality. I assume that originally all ultimate events are entirely random in the sense that they are disconnected from all other ultimate events and, partly for this reason, they disappear as soon as they happen and never recur. This is randomness in the causality sense but it implies the other senses as well. If all events are disconnected from each other, there can be no recognizable pattern and thus no means of predicting which event comes next.
So where do these events come from and how is it they manage to come into being at all? They emerge from an ‘Event Source’ which we may call ‘the Origin’ and which I sometimes refer to as K0 (as opposed to the physical universe which is K1).  It is an important facet of the theory that there is only one source for everything that can and does occur. If one wants to employ the computer analogy, the Origin either is itself, or contains within itself, a random event generator and, since there is nothing else with which the Origin can interact and it does  not itself have any starting conditions  (since it has always existed), this  generator can only be what Wolfram calls an inherent randomness generator. It is not, then, order and coherence that is the ‘natural’ state but rather the reverse : incoherence and discontinuity is the ‘default position’ as it were (Note 7).
Nonetheless, a few ultimate events eventually acquire ‘self-dominance’ which enables them to repeat indefinitely more or less identically and, in a few even rarer cases, some events manage to associate with other repeating events to form conglomerates.
This process is permanent and is still going on everywhere in the universe and will continue to go on at least for some time (though eventually all event-chains will terminate and return the ‘universe’ to the nothingness from which it originally came). Thus, if you like, ‘matter’ is being created all the time though at an incredibly slow rate just as it is in Hoyle’s Steady State model (Note 7).
Once ultimate events form conglomerates they cease to be random and are subject to ‘dominance’ from other sets of events and from previous occurrences of themselves. There will still, at this stage, be a certain unpredictability in the outcomes of these associations because determinism has not yet ousted randomness completely. Later still, certain particular associations of events become stabilized and give rise to ‘event-schemas’. These ‘event-schemas’ are not themselves made up of ultimate events and are not situated in the normal event Locality I call K1 (roughly what we understand by the physical universe). They are situated in a concocted secondary ‘universe’ which did not exist previously and which can be called K2. The reader may baulk at this but the procedure is really no different from the distinction that is currently made between the actual physical behaviour of bodies which exemplify physical laws (whether deterministic or statistical) and the laws themselves which are not in any sense part of the physical world. Theoretical physicists routinely speculate about other possible universes where the ‘laws’, or more usually the constants, “are different”, thus implying that these laws, or principles, are in some sense independent of what actually goes on. The distinction is somewhat similar to the distinction between genotype and phenotype and, in the last resort, it is the genotype that matters, not the phenotype.
Once certain event-schemas have been established, they are very difficult to modify : from now on they ‘dictate’ the behaviour of actual systems of events. There are thus already three quite different categories of events (1) those that emerge directly from the Origin and are strictly random; (2) those that are brought about by previously occurring physical events and (3) events that are dependent on event-schemas rather than on other individual events.
So far, then, everything has become progressively more determined though evolving from an original state of randomness somewhat akin to the Greek kaos or the Hebrew tohu va-vohu, the original state when the Earth was “without form and void and darkness was upon the face of the deep”.
The advent of intelligent beings introduces a new element since such  beings can, or believe they can, impose their own will on events, but this issue will not be discussed here. Whether an outcome is the result of a deliberate act or the mere product of circumstances is an issue that vitally concerns juries but has no real bearing on the determinist/indeterminist dilemma.
Macroscopic events are conglomerates of ultimate events and one might suppose that if the constituent events are completely determined, it follows that so are they. This is what contemporary reductionists actually believe, or at least preach, and, within a materialist world-view, it is difficult to avoid some such conclusion. But, according to the Event Paradigm, physical reality is not a continuum but a complicated mosaic where in general blocks of events fit together neatly into interlocking causal chains and clusters. The engineering is, however, perhaps not quite faultless, and there are occasional mismatches and irregularities much as there are ‘errors’ in the transcription of DNA ─ indeed, genetic mutations are the most obvious example of the more general phenomenon of random ‘connecting errors’. And it is this possibility that allows for the reintroduction of randomness into an increasingly deterministic universe.
Despite the subatomic indeterminacy due to Quantum Mechanics, contemporary science nonetheless in practice gives us a world that is very nearly as predictable as the Newtonian one, and in certain respects more so. But human experience keeps turning up events that do not fit our rational expectations at all : people act ‘completely out of character’, ‘as if they were someone else’, regimes collapse for no apparent reason, wars break out where they are least expected and so on. This is currently attributed to the complexity of the systems involved but there may be a deeper reason. There remains an obstinate tendency for events not to ‘keep to the book’ and one suspects that Taleb’s profound conviction that the future is unpredictable, and the tremendous welcome this idea has received from the public, is based on an intuitive awareness that a certain type of randomness is hard-wired into the normal functioning of the universe. Why is it there, supposing that it really is there? For the same sort of reason that there are persistent random errors in the transcription of the genetic code : it is a productive procedure that works in the long run by turning up possibilities that no one could possibly have planned or worked for. One hesitates to say that this randomness is deliberately put there but it is not a wholly accidental feature either : it is perhaps best conceived as a self-generated controlling mechanism that is reintroducing randomness as a means of propelling the system forward into a higher level of order, though quite what this will be is anyone’s guess.      SH  28/2/13

Note 1  Charles Sanders Peirce, who inaugurated this particular definition, did not speak of ‘random events’ but restricted himself to discussing the much more tractable (but also much more academic) issue of taking a random sample. He defined this as one “taken according to a precept or method which, being applied over and over again indefinitely, would in the long run result in the drawing of any one of a set of instances as often as any other set of the same number”.

Note 2  Take a simple example. One might at first sight think that a square number could end with any digit whatsoever just as a throw of a dice could produce any one of the possible six outcomes. But glancing casually through a list of smallish square numbers one notes that every one seems to be either a multiple of 5 like 25, one less than a multiple of 5 like 49 or one more than a multiple of 5 like 81. We could (1) dismiss this as a fluke, (2) simply take it as a fact of life and leave it at that or (3) suspect there is  a hidden principle at work which is worth bringing into the light of day.
In this particular case, it is not difficult to establish that the pattern is no accident and will repeat indefinitely. This is so because, in the language of congruences, the square of a number that is ±1 (mod 5) is +1 (mod 5), the square of a number that is ±2 (mod 5) is 4, i.e. –1 (mod 5), and the square of a multiple of 5 is again a multiple of 5. This covers all possibilities, so we never get squares that are two units less or two units more than a multiple of five.
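A brute-force check of the claim (a one-line sketch, verifying rather than proving it) confirms that squares only ever leave remainders 0, 1 or 4 (i.e. –1) on division by 5:

```python
# Every perfect square is congruent to 0, +1 or -1 (i.e. 4) modulo 5; never 2 or 3.
print(sorted({(n * n) % 5 for n in range(1, 100_001)}))   # -> [0, 1, 4]
```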

Note 3  Laplace, a born survivor who lived through the French Revolution, the Napoleonic era and the Bourbon Restoration, was careful to restrict his professed belief in total determinism to physical (non-human) events. But clearly, there was no compelling reason to do this except the pragmatic one of keeping out of trouble with the authorities. More audacious thinkers such as Hobbes and La Mettrie, the author of the pamphlet L’Homme Machine, both found themselves obliged to go into exile during their lives and were vilified far and wide as ‘atheists’. Nineteenth century scientists and rationalists either avoided the topic as too contentious or, following Descartes, made a hard and fast distinction between human beings, who possessed free will, and the rest of Nature, whose behaviour was entirely reducible to the ‘laws of physics’ and thus entirely predictable, at any rate in theory.

Note 4 The current  notion of the ‘laws of physics’ is also, of course, an entirely  Platonic conception since these laws are not  in any sense physical entities and are only deducible by their presumed effects.
Plato definitely struck gold with his notion of a transcendent reality of which the physical world is an imperfect copy since this is still the overriding paradigm in the mathematical sciences. If we did not have the yardstick of, for example, the behaviour of an ‘ideal gas’ (one that obeys Boyle’s Law exactly) we could hardly do any chemistry at all ─ but, in reality, as everyone knows, no gas actually behaves exactly like this, hence the eminently Platonic term ‘ideal gas’.
Where Plato went wrong as far as I am concerned was in visualizing his ‘Forms’ strictly in terms of the higher mathematics of his day, which was Euclidean geometry. I view them as ‘event-schemas’ since events, and not geometric shapes, are the building-blocks of reality in my theory. Plato was also mistaken in thinking these ‘Ideas’ were fixed once and for all. I believe that the majority ─ though perhaps not all ─ of the basic event-schemas which are operative in the physical universe were built up piecemeal, evolve over time and are periodically displaced by somewhat different event-schemas much as species are.

Note 5. Because of the interest in chaos theory and the famous ‘butterfly effect’, some people seem to conclude that any slight perturbation is likely to have enormous consequences. If this really were the case, life would be intolerable. In ‘normal’ systems tinkering around with the starting conditions makes virtually no difference at all and every ‘run’, apart from maybe the first few events, ends up more or less the same. Each time you start your car in the morning it is in a different physical state from yesterday if only because of the ambient temperature. But, after perhaps some variation, provided the weather is not too extreme, the car’s behaviour settles down into the usual routine. If a working machine behaved  ‘chaotically’ it would be useless since it could not be relied on to perform in the same way from one day to the next, even from one hour to the next.

Note 6  Some people seem to be prepared to accept ‘backwards causation’, i.e. that a future event can somehow dictate what leads up to it, but I find this totally unacceptable. I deliberately exclude this possibility in the basic Axioms of Ultimate Event Theory by stating that “only an event that has occurrence on the Locality can have dominance over other events”. And the final configuration certainly does not have occurrence on the Locality ─ or at any rate the same part of the Locality as actual events ─ until it actually occurs!

 Note 7   Readers will maybe be shocked at there being no mention of the Big Bang. But although I certainly believe in the reality of the Big  Bang, it does not at all follow from any of the fundamental assumptions of Ultimate Event Theory and it would be dishonest of me to pretend it did. When I first started thinking along these lines Hoyle’s conceptually attractive Steady State Theory was not entirely discredited though even then very much on the way out. The only way I can envisage the Big Bang is as a kind of cataclysmic ‘phase-transition’, presumably preceded by a long slow build up. If we accept the possibility of there being multiple universes, the Big Bang is not quite such a rare or shocking event after all : maybe when all is said and done it is a cosmic ‘storm in a teacup’.

Pagoda

To all whom it may concern: I am speaking to the London Futurists (plus anyone else who cares to come along) on “Does Infinity Exist?” at the Peace Pagoda, Battersea Park, London, at 2 p.m. on Saturday 8th December.

This incidentally will be the first time that I will be talking about Ultimate Event Theory in public (and it is only last year that I started putting posts up about it). (It has taken me all of thirty-five years to reach this point of no return.) It seems that the Pagoda is entirely the right location for such a discussion though it was not deliberately chosen by me, indeed not chosen at all. I had originally aimed to hold the meeting (the first I have ever called on such a subject) indoors somewhere in a venue in central London but could find nowhere available for this date chosen entirely at random. Then a few Sundays ago, my partner, the painter Jane Maitland, suddenly said “Why don’t we visit Battersea Park today?”, something we never do — the last time I was there was at least eight years ago. We passed by the Pagoda but didn’t go into it. That night it suddenly came to me that the best place to meet up was the Pagoda. Why the best place? Because the origins of Ultimate Event Theory go back all of two thousand and five hundred years to the ponderings of an Indian ascetic about the nature of the physical world and the misery of human existence.

English like all Indo-European languages is an ‘object orientated language’.  It presents us with an object, he, she, it, then tells us something about it in the so-called predicate, She is dark-haired, intelligent, European, whatever. Alternatively, we are presented with two ‘things’ (organic or inorganic) and an action linking them together, I hit him. Never, except in the case of imperatives, do we have a verb standing alone and even imperatives do not express any actual state of affairs but only a hypothetical  or desired state of affairs (desired by the speaker) as in Come here. Whorf is one of the very few linguists to have noticed this :
“We are constantly reading into nature fictional acting entities, simply because our verbs must have substantives in front of them. We have to say ‘It flashed’ or ‘A light flashed’, setting up an actor, ‘it’ or ‘light’, to perform what we call an action, ‘to flash’. Yet the flashing and the light are one and the same!  The Hopi language reports the flash with a single verb, rehpi : ‘flash (occurred)’. There is no division into subject and predicate, not even a suffix like –t of Latin tonat, ‘it thunders’. Hopi can and does have verbs without subjects…”  (Whorf, Language, Thought and Reality p. 243)

Nouns and names are inert : they do not do anything, which is why a sentence which is just a list of names strikes us as being incomplete. But, although we can’t say it in contemporary English, ‘hit’ is perfectly adequate on its own; it pinpoints the essential, the action. Even more adequate on its own  would be ‘killing’  (it is ridiculous that we cannot say ‘birthing’ but only ‘giving birth’ as if we were giving something away to someone). Think of all the films you have seen which start with a shot ringing out and a dead body lying on the ground e.g. The Letter, Mildred Pierce. In both these cases, the entire rest of the film is taken up with retracing the series of events leading up to this all-important event and putting names to bodies. But the persons revolve around the central event, not the reverse, they interest us in the context of this event, not otherwise. In a more recent film, The Descendants, the entire film revolves around a water-skiing accident and it is extremely clever that the victim is only shown in a coma : she is of no interest in herself and had she not been put into a coma, there would have been nothing to make a film about. Such a dramatic event as a murder or violent death carries a tremendous weight of accessory events which otherwise would remain unknown and equally such events leave a long ‘trail’ of future events.
An ‘event’ language’ would have a (macroscopic) event as the central feature and the sentence structure would of necessity be different.  I have conjectured that the basic structure would be :

1. Presentation of a block of ultimate events
2. Its/their  localisation
3. Flow of causality (dominance), i.e. which event causes what.

 One or more of these elements may be absent : in baby-talk (3) is often lacking.

     Instead of the bland “He was hit by a car” we would have something more like “Crash/he/car”:

Event/dharma: the hitting, the collision
Localisation :  ‘He’   (whoever he is)
Cause (origin of dominance) :  car
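Purely as an illustration (nothing in the conjectured grammar dictates this particular representation), the three-part structure might be held in a simple record type:

```python
# Illustrative only: one way of representing the conjectured three-part 'event sentence'.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventSentence:
    event: str                       # 1. the block of events presented
    localisation: str                # 2. where / to whom it has occurrence
    dominance: Optional[str] = None  # 3. flow of causality; may be absent, as in baby-talk

print(EventSentence(event="crash/collision", localisation="he", dominance="car"))
```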

Implicit in the subject/predicate syntax is an underlying ‘world-view’ or paradigm.

Whorf, remarkably, conjectures that a Hopi ‘physics’ would be very different from our Western traditional physics but ‘equally valid’. It is foolish to assume that an alien civilisation would have essentially the same mathematics and physics that we do though, certainly, there would be a certain overlap. Whorf thinks the main differences would be, on the one hand, a concentration on ‘events’ rather than ‘things’ and, on the other, a deep concern with the interaction between the ‘subjective’ and the ‘objective’.

To be continued     

 

 

 

“It seems to me that there is nothing for it but to take as fundamental the relation of one event causing another” (Keith Devlin, Logic and Information).

Eventrics is the general study of events and their interactions while Ultimate Event Theory is, if you like, the nuclear physics of Eventrics. In these Posts I shall hop about more or less at random from the macro to the micro domains while concentrating nonetheless on the latter. Eventually, when enough material has accumulated, I may siphon off certain portions of the theory but at this stage it is more instructive for the reader to see the theory taking shape piecemeal, which is how event-clusters and event-chains themselves form, rather than attempt to systematize. In any case, the person reading this who will take the theory further than I can hope to will not only need to have a clear understanding of the behaviour of events at their most elemental level but, above all, will need to become adept in navigating (or rather surfing) the enormous event currents of the present society if he or she is to give the theory the audience it deserves.
The macro-events we are concerned with on a day to day basis are huge event-clusters, as large as galaxies in comparison to their constituent ultimate events, and the collective behaviour of events may well differ from the behaviour of individual ultimate events as much as the behaviour of human crowds or gases seems to differ from that of their constituent molecules in object-based physics (Note 1). Certainly, large-scale bulk event-chains, what we call ‘historical movements’, seem to have their own momentum and evolve in their own manner, sweeping people along as if the latter were torn pieces of paper. Those persons who obtain positions of power are those who, by luck or good judgment, align themselves with forces they do not control but can up to a point use to their advantage (Note 2). An analysis by way of events rather than by way of persons or by way of electrons and molecules may well prove to be more appropriate in the macro domain and is indeed already followed by various writers.
Events, some of them at any rate, do not occur at random : they form themselves into recognizable event-chains and so require something to stop them falling apart. This something we call ‘causality’.  So-called primitive man, if anything, believed more firmly in causality than people today do : rather than meekly accept that certain events came about ‘by chance’, primitive societies considered there must always be some agency at work, malign or benevolent, natural or supernatural. Causality is not so much a law — for a law requires a lawgiver — as a force, perhaps the most basic and essential kind of force imaginable since without some form of causality everything would be a bewildering confusion where any event could follow any other event and there would be no persistent patterns of any kind whatsoever anywhere.  Though I am prepared to dispense with quite a number of things I am not prepared to dispense with causality, or its equivalent. In Ultimate Event Theory it appears under the name of ‘dominance’. As one of the half dozen basic concepts of the theory, it cannot really be described in terms of anything more elementary and I define it as “a coercive influence which certain event clusters and event-chains have over others”.  This is not much of a definition but will do for the moment. Before saying more about ‘dominance’ and how it differs, if at all, from causality, it may be as well to examine the ‘classic’ theory of causality as it appears in Western science up to the twentieth century and in rationalistic thinking generally.
Causality — what is causality?  The basis of all theories of causality is the notion, more precisely intuition, that certain pairs of events are connected up in a  necessary fashion whilst others (the majority) are not. It doesn’t “just happen to be the case” that someone falls over if I give him a sudden hard push : if  he did fall, we would say that I caused him to stumble. On the other hand, if I am shaking his hand and he happens to trip over a stone at this precise moment, I didn’t cause him to fall over — though it might look as if I did to an observer  some distance away.
According to Piaget, the newborn child lives in a ‘world’ without space and time, without permanent objects and without causality. The universe “consists of shifting and insubstantial tableaux which appear and are then totally reabsorbed” (Piaget). However, the notion of causality arises very early on, perhaps even as early as a few months if we are to believe certain modern researchers (Note 1). Certainly, the baby very soon realises that by making certain movements or noises it can successfully attract the attention of its mother, though whether this quite constitutes an ‘understanding’ of causality is debatable. Event A, such as gurgling or screaming, becomes regularly associated in the baby’s mind with a quite dissimilar Event B, the physical proximity of its mother or another grown-up. The scream ‘causes’ the prompt arrival of a grown-up, never mind how or why.
The notion that certain occurrences can just arise ‘out of the blue’ without antecedents is repugnant to most adult human beings and any sort of an explanation, however fanciful, is felt to be better than none at all. Belief in causality, whether well-founded or not, certainly seems to be a psychological necessity. The main motive for populating the universe in times past with so many unseen entities was to provide causal agents for observed phenomena. By the time we reach the 18th century, largely because of the astonishing success of Newtonian mechanics, most of these supernatural agencies became redundant, at any rate in the eyes of educated people. The “thrones, principalities and powers” against whom Saint Paul warns us had disappeared into thin air by the mid eighteenth century, leaving only an omniscient Creator God who had done such a good job in fashioning the universe that it could run under its own steam without the need of further intervention. The philosophers of the Enlightenment rejected ‘miraculous’ explanations of physical events : in theory at any rate mechanical explanations sufficed. Newton himself was puzzled that he was unable to provide a mechanical explanation of gravity and, later on, electrical phenomena caused problems : but most physicists prior to the last quarter of the nineteenth century assumed with Helmholtz that “all physical problems can be reduced to mechanical problems” and that Calculus and Newton’s Laws were the key to the universe.
Through all this, belief in causality continued unimpaired. In principle there were no chance events, and the French astronomer Laplace went so far as to say that, if a Supermind knew in full detail the current state of the universe, it would be able to predict everything that was going to happen in the future. This view is no longer de rigueur, of course, mainly because of the discoveries of Quantum Theory which has uncertainty built right into it. But, for the moment, I propose to leave such complications aside in order to concentrate on what might be called the ‘Classic Theory of Causality’ — ‘classic’ in the sense that it was the theory upheld, or more often assumed, by the great majority of scientists and rational thinkers between the 16th and 20th centuries.

This Classic Theory would seem to be based on the four following assumptions:
1.    There exists a necessary connection between certain pairs of events, and by extension, longer sequences;
2.    The status of the two events in a causal pair is not equivalent, one of the two is, as it were, active and the other, as it were, passive or acted upon;
3.    The ‘causal force’ always operates forwards in time, it is transmitted from the earlier event to the later;
4.    All physical occurrences, and perhaps mental occurrences as well, are brought about by the prior occurrence of one or more previous events.

       These assumptions are so ‘commonsensical’ that almost everyone took them for granted for a long time, witness popular phrases such as “Every event has a cause” , “Nothing can arise from nothing” and so on. But then the 18th century British philosopher Hume threw a spanner into the works. He pointed out that these assumptions, and others like them, were, when all was said and done, simply assumptions — they could not be proved to be the case, and were not ‘self-evident’. We do not, Hume pointed out, ever see or hear this mysterious causal link : indeed it is notoriously difficult, even for trained observers, to distinguish between events which are (allegedly) causally related and those that are not — if this were not the case, the natural sciences would have developed much more rapidly than they actually did.
Nor do these assumptions appear to be ‘necessary truths’, though this is undoubtedly how Leibnitz and Kant and other rationalist thinkers viewed them. As Hume says, the fact that event A has up to now always and in all circumstances been followed by event B, does not mean that this will automatically be the case in the future. (Indeed, though Hume could not know this, the assumption is false if Quantum Mechanics is to be believed since in QM identical circumstances do not necessarily produce identical results.)
In brief, belief in causality is, so Hume argues, an act of faith. This was a very serious charge since most scientists regarded themselves as having left behind such modes of thought. The nineteenth century, as it happened, took very little notice of Hume’s devastating critique : science needed a cast iron belief in causality and Claude Bernard even went so far as to define science as determinism. And since science was clearly working, most educated people were happy to go along with its implicit assumptions — perhaps making an exception for mankind itself to whom God had given the capacity for free choice which the rest of Nature did not possess.
Actually, the four assumptions listed, necessary though they are, do not suffice to distinguish the post-Renaissance Western theory of causality from earlier beliefs and theories. Further restrictions were required to eradicate the remaining vestiges of magical pre-scientific thinking. The most important of these principles are :
1.    The No Miracle Principle;
2.    The Principle of Spatio-Temporal Continuity;
3.    The Principle of Energy;
4.    The Principle of Localization;
5.    The Mind/Body Principle;
6.    The Principle of Parsimony.

The first three principles are ‘scientific’ in the sense that they have had enormous importance in the progress of scientific thinking. The first gets rid of all deus ex machina and thus stimulates a search for ‘natural principles’; the second  prohibits ‘action at a distance’ (though ironically gravity and certain aspects of quantum mechanics require it); the third, roughly that “all change requires an energy source” is very much an issue today in this era of depleted stocks of fossil fuels ; the fourth, that “everything must be somewhere” is, or seems to be, commonsensical; the fifth, roughly that the “mind cannot by itself bring about changes in the outside world” is a corner-stone of materialist philosophy, while the last,  that “Entities are not to be multiplied without necessity” is more a matter of method and necessity than anything else. These principles will be discussed in detail in the following Post.   S.H.

________________________________________________________

Note 1  The researchers Alan Leslie and Stephanie Keeble [“Do Six-Month-Old Infants Perceive Causality?” Cognition 25, 1987, pp. 265-288] claim that, when babies are shown ‘acausal’ sequences of events mixed up with similar causal sequences, they [the babies] show unmistakeable symptoms of surprise such as a more rapid heartbeat.

Note 2  This is indeed what may have prevailed ‘in the beginning’ but the world we find ourselves in today is very different from the Greek kaos from which everything came, and the main difference is that certain sequences of physical events have kept on repeating themselves with minor variations for millions of years. Such patterns have indeed become so firmly established that they are viewed as ‘laws of Nature’ though they are perhaps more fittingly described as ‘schemas’ or ‘event-moulds’ into which physical events have fallen.

 

Velocity has a different meaning in Ultimate Event Theory to that which it has in object-based physics. In the latter a particle traverses a multitude ─ usually an infinite number ─ of positions during a given time interval and the speed is the distance traversed divided by the time. One might, for example, note that it was 1 p.m. when driving through a certain village and 3 p.m. when driving through a different one known to be 120 kilometres distant from the first. Supposing the speed was constant throughout this interval, it would be 120 kilometres per 2 hours. However, speed is practically never cited in this fashion : it is always reduced to a certain number of kilometres, or metres, with respect to a unitary interval of time, an hour, minute or second. Thus my speed on this particular journey would be quoted in a physics textbook as 60 kilometres per hour or, more likely, as (60 × 10³)/(60²) ≈ 16.67 metres per second = 16.67 m s⁻¹ (to two decimal places).
        By doing this different speeds can be compared at a glance whereas if we quoted speeds as 356 metres per 7 seconds and 720 metres per 8 seconds it would not be immediately obvious which speed is the greater. When dealing with such measures as metres and seconds there would normally be no difference between object-based physics and event-based physics. However, even when dealing with minute distances and tiny intervals of time such as nanometres and nanoseconds, ‘speed’ is still stated in so many units of length per interval of time. This automatic conversion to standard unitary measures presupposes that space and time are ‘infinitely divisible’ in the sense that, no matter how small the interval of time, it is always possible for a particle to change its position, i.e. ‘move’. This assumption is, to say the least, hardly plausible and Hume went so far as to write, “No priestly dogma invented on purpose to tame and subdue the rebellious reason of mankind ever shocked common sense more than the doctrine of the infinite divisibility with its consequences” (Hume, Enquiry Concerning Human Understanding).
        In Ultimate Event Theory, which includes as an axiom that time and space are not infinitely divisible, this automatic conversion is not always feasible. Lengths eventually reduce to so many ‘grid-spaces’ on the Locality, and intervals of time to so many ksanas (‘instants’), and there is no such thing as a half or a third of a ‘grid-space’ or a quarter of a ksana. The ‘speed’, or ‘displacement rate’, of an ultimate event or event-cluster is defined as the distance on the Locality between two spots where the event has occurred. This distance is always a positive integer corresponding to the number of intermediary positions (+1) where an ultimate event could have had occurrence. If the position of the earlier occurrence is not the original position, we relate both positions to that of a repeating landmark event-sequence, the equivalent of the origin. So if the occurrences take place at consecutive ksanas, the current reappearance rate (‘speed’) is simply the ‘distance’ between the two spots divided by unity, i.e. a positive integer. But what if an event reoccurs 7 spaces to the left every 4 ksanas? The ‘actual’ reappearance rate is 7 spaces per 4 ksanas which, when converted to the ‘standard’ measure, comes out as 7/4 spaces per ksana or 7/4 sp ks⁻¹. However, since there is no such thing as seven-fourths of a position on the Locality, displacement rates like 7/4 sp ks⁻¹ are simply a convenient but somewhat misleading way of tracking a recurring event.
The ‘Finite Space/Time Axiom’ has curious consequences. It means that, except when the space/ksana ratio is an integer, all event-chains are ‘gapped’ : that is, there are intermediary ksanas between successive occurrences when the event or event-cluster does not make an appearance at all. Thus, the reappearance pattern ksana by ksana for an ultimate event displacing itself along a line at the ‘standardized’ rate of 7/4 sp ks⁻¹ will be:

……..o■oooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooo■ooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..ooooooooooooooo■oooo……………
……..oooooooooooooooooooo……………

And this in turn means that when s/n is a ratio of relatively prime numbers, there will be gaps n–1 ksanas long, and the ‘particle’ (repeating ultimate event) will completely disappear during this time interval !   The importance of the distribution of primes and factorisation generally, which has been so intensively studied over the last two centuries, may thus have practical applications after all since it relates to the important question of whether there can be ‘full’ reappearance rates for certain processes (Note 1).
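For readers who like to see this laid out mechanically, here is a minimal Python sketch (not part of the theory, just an illustration) that prints the ksana-by-ksana pattern for the standardized rate of 7/4 sp ks–1 discussed above; the grid width, the choice of origin and the ‘#’ marker standing in for ■ are arbitrary.

# Print the ksana-by-ksana reappearance pattern of an ultimate event with a
# standardized displacement rate of s/n = 7/4 spaces per ksana: the event
# only has occurrence every n-th ksana, shifted s grid-spaces each time,
# and the n-1 intermediate ksanas are 'gaps'.

def reappearance_pattern(s, n, ksanas, width):
    rows = []
    for k in range(ksanas):
        row = ["o"] * width
        if k % n == 0:                  # the event has occurrence at this ksana
            position = (k // n) * s     # s spaces further on at each appearance
            if position < width:
                row[position] = "#"     # '#' marks an occurrence
        rows.append("".join(row))
    return rows

for line in reappearance_pattern(s=7, n=4, ksanas=10, width=20):
    print(line)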
In consequence, when specifying in full detail the re-appearance rate of an event or, what comes to the same thing, the re-occurrence speed of members of an event-chain, we need to give not only the magnitude and direction of the displacement but also the ‘gap number’, or ‘true’ reappearance rate, of the event-chain.

Constants of Ultimate Event Theory 

n*, the number of ksanas to a second, and s*, the number of grid-spaces to a metre, are basic constants of Ultimate Event Theory that remain to be determined but which, I have no doubt, will be determined during this century. (s*/n*) is thus the conversion factor required to reduce speeds given in metres/second to spaces/ksana. Thus c(s*/n*) = 3 × 10⁸ (s*/n*) sp ks–1 is seemingly a displacement rate that cannot be exceeded. c(s*/n*) is not, as I view things, the actual speed of light but merely the limiting speed for all ‘particles’ or, more precisely, the limiting value of the possible ‘lateral’ displacement of members of a single event-chain. Any actual event-chain would have at most a lateral displacement rate that approaches but does not attain this limit. While there is good reason to believe that there must be a limiting value for all event-chains (particles) — since there is a limit to everything — there is no need to believe that anything actually attains such a limit. In object-based physics, the neutrino was until recently thought to travel at the speed of light and thus to be massless, but it is now known to have a small mass. The idea of a ‘massless’ particle is ridiculous (Note 2), for if there really were such a thing it would have absolutely no resistance to any attempt to change its state of rest or straight-line motion, and so it is hard to see how it could be anything at all even for a single instant, or maybe it would just be a perpetually changing erratic ‘noise’. Mass, of course, does not have the same meaning in Ultimate Event Theory and will be defined in a subsequent post, but the idea that there is a ‘displacement limit’ for an event-chain passes over into UET. This ‘speed’ in reality marks the absolute limit of the lateral ‘bonding’ between events in an event-chain and in this sense is a measure of ‘event-energy’. Any greater lateral displacement than c(s*/n*) would result in the proto-event aborting, as it were, since it would no longer be tied to the same event-chain.
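As a purely arithmetical illustration of the conversion factor, here is a short Python sketch; the values chosen for s* and n* are arbitrary placeholders (nobody knows the real ones), used only to show how a speed in metres per second would be re-expressed in spaces per ksana.

# Unit conversion sketch. s_star (grid-spaces per metre) and n_star (ksanas
# per second) are NOT known constants; the figures below are placeholders.

s_star = 1e35   # hypothetical grid-spaces to a metre
n_star = 1e43   # hypothetical ksanas to a second

def spaces_per_ksana(v_metres_per_second):
    """Convert a speed in m/s to grid-spaces per ksana: v * s*/n*."""
    return v_metres_per_second * s_star / n_star

print(spaces_per_ksana(3e8))     # c * (s*/n*), the limiting displacement rate
print(spaces_per_ksana(16.67))   # the 60 km/h example given earlier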

 Reappearance rates

 So far I have assumed that an event in an event-chain reappears as soon as it is able to do so. This may well not be the case; indeed I think it very unlikely that it is. For example, an event in an event-chain with a standardized ‘speed’ of 1/2 sp ks–1 might not in reality re-appear every second ksana: it could reappear two spaces to the left (or right) every fourth ksana, or three spaces every sixth ksana, and so on. In this respect the ‘gap number’ would be more informative than the reappearance rate as such, and it may be that slight interference from other event-chains would shift the gap number without actually changing the overall displacement ratio. Thus, an event shifting one space to the right every second ksana might only appear every fourth ksana, shifted two spaces in the same direction, and so on. It is tempting to see these shifts as in some way analogous to the orbital shifts of electrons, while more serious interference would completely disrupt the displacement ratio. Once we evolve instruments sensitive enough to register the ‘flicker’ of ultimate events, we may find that there are all sorts of different event patterns, as intricate as the close packing of molecules.
It has also occurred to me that different re-appearance rates for event-chains that have the same standard displacement rate might explain why certain event-chains behave in very different ways despite having, as far as we can judge, the same ‘speed’. In our macroscopic world, the effect of skipping a large number of grid-spaces and ksanas (which might well be occupied by other event-clusters) would give the impression that a particularly dense event-cluster (‘object’) had literally gone right through some other cluster if the latter were thinly extended spatio-temporally. Far from being impossible or incredible, something like this actually happens all the time since, according to object-based physics, neutrinos are passing through us in their millions every time we blink. Why, then, is it so easy to block the passage of light, which travels at roughly the same speed, certainly no less? I found this a serious conceptual problem, but a difference of reappearance rates might explain it: maybe a stream of photons has the same ‘speed’ but a much tighter re-appearance rate than a stream of neutrinos. This is only a conjecture, of course, and there may be other factors at work, but there may be some way to test whether there really is such a discrepancy between the reappearance rates of the two ‘particles’, which would give the neutrino far better penetrating power with regard to obstructions.

Extended and combined reappearance rates

  Einstein wondered what would happen if an object exceeded the speed of light, i.e. in UET terms when an event-chain got too extended laterally. One  might also wonder what would happen if an event-chain got too extended temporally, i.e. if its re-appearance rate was 1/N where N was an absolutely huge number of ksanas. In such a case, the re-appearance of an event would not be recognized as being a re-appearance : it would simply be interpreted as an event (or event-cluster) that was entirely unrelated to anything in its immediate vicinity. Certain macroscopic events we consider to be random are perhaps not really so : the event-chains they belong to are so extended temporally that we just don’t recognize them as being event-chains (the previous appearance might have been years or centuries ago). Likewise the interaction of different event-chains in the form of ‘cause and effect’ might be so spread out in time that a ‘result’ would appear to come completely out of the blue (Note 3).
There must, however, be a limit to vertical extension (since everything has a limit). This would be an important number for it would show the maximum temporal extension of the ‘bonding’ between events of a single chain. We may also conjecture that there is a combined limit to lateral and vertical extension taken together, i.e. the product grid-positions × ksanas has a maximum which again would be a basic constant of nature.     S.H. 8/10/12

_________________________

Note 1  A ‘full’ re-appearance rate is one where an ultimate event makes an appearance at every ksana from its first appearance to its last.

Note 2. De Broglie, who first derived the famous relation p = h/λ linking a particle’s momentum p with Planck’s constant divided by the wavelength, believed that photons, like all particles, have a small mass. There is no particular reason why the observed speed of light should be strictly equated with c, the limiting speed for all particles, except that this makes the equations easier to handle, and no experiment will ever be able to determine whether the two are strictly identical.

Note 3 This, of course, is exactly what Buddhists maintain with regard to the consequences of bad or good actions ─ they can follow you around in endless reincarnations. Note that it is only certain events that have this temporal extension in the karma theory: those involving the will, deliberate acts of malice or benevolence. If we take all this out of the moral context, the idea that effects can be widely separated temporally from their causes and that these effects come up  repeatedly is quite possibly a useful insight into what actually goes on in the case of certain abnormal event-chains that are over-extended vertically.

 

“The acceleration  of  straight motion in heavy bodies proceeds according to the odd numbers beginning with one. That is, marking off whatever equal times you wish, and as many of them, then if the moving body leaving a state of rest shall have passed during the first time such a space as, say, an ell, then in the second time it will go three ells, in the third, five; in the fourth, seven, and it will continue thus according to the succession of odd numbers.”      Galileo, Dialogue Concerning the Two World Systems   p. 257  Drake’s translation

I had originally supposed that Galileo based this conjecture, one of the most important in the whole of physics, on actual observations but, if so, Galileo kept very quiet about it. For the man who is hailed as the ‘first empiricist’, Galileo seems to have been singularly uninterested in checking out this remarkable relation, which would have delighted the Pythagoreans, believing as they did that all physical phenomena were reducible to simple ratios between whole numbers. Admittedly, Galileo was blind in his last years, but he surely had ample time to investigate the matter earlier in his long life ― perhaps he was reluctant to test his beautiful theory because he was afraid it was not entirely correct (Note 1). Or, more likely, he wanted to give the impression that he had deduced the ‘Law of Falling Bodies’ entirely from first principles without recourse to experiment. We must remember that a Christianised Platonism provided the philosophical framework for the thinking of all the early classical scientists right up to and including Newton. Galileo himself wrote that “As to the truth of which mathematical demonstrations give us the knowledge, it is the same which the Divine Wisdom knoweth” (Galileo, Dialogue). He does claim (via his spokesman Salviati in the Dialogue) that there exists a proof, and “one most purely mathematical” (to be given in outline in a moment), but one wonders how on earth he got hold of the relation in the first place, since it does not seem to be based on general physical principles as Newton’s formula for gravitation was. As with so many other important discoveries, Galileo most likely just hit on this striking relation by a combination of observation and inspired guesswork.
Note that Galileo still speaks the language of continued ratio just as Euclid did (and as Newton continued to do in the Principia): the ‘Law of Falling Bodies’ does not constitute an equation of motion in the modern sense (Note 2). There are various reasons for this apart from Galileo’s respect for the ancient Greek geometers. For there is safety in ratios: they do not tie you down to actual quantities, nor do they implicitly assume that such quantities really exist beyond what one can ever hope to examine in practice. Above all, the language of proportion or continued ratio sidesteps the question of whether space and time are ‘infinitely divisible’, since the relation holds either way.
Galileo’s continued ratio concerns distances and not speeds. The statement that the successive distances are in the ratio of the odd numbers would still be true if, during a given interval of time, the particle moved at a steadily accelerating pace, if it stayed motionless for most of the time only to surge forward suddenly at the close, or again if it moved around erratically (Note 3). Provided the recorded distances in successive intervals of time are in the ratio 1:3:5 (or 7:9:11), the rule holds. Moreover, supposing the rule is correct, we only need to determine a value for a single interval (and not necessarily the first one) to work out all the others.
The difficulty with problems concerning speed is that speed, unlike distance, is not something we can actually observe (Note 4). We have perforce to start with distances, at least one, that we can determine by observation and check against repeating signals like the ticking of a clock or the beating of a pulse (Galileo used the latter to time the swinging of a censer when at mass). However, once we have at least one clear-cut distance/time ratio, in order to work out further distances ― which is basically what we are interested in ― we have to deal with speeds (rates of change of distance with respect to time) and then backtrack again to distances. That is, we have to move from the observable (distances, intervals of time) to the unknown and unobservable (speeds, accelerations) and finally back to the observable. The very idea of a ‘formula of motion’, i.e. a statement which, in algebraic shorthand, gives all distances at all times, is a thoroughly modern invention that required a gigantic leap of thought: not only did the Greeks not have a clear conception of such a thing, but Galileo himself seems to have hesitated on the threshold of the Promised Land without really entering it.

Derivation and Justification of Galileo’s ‘Law of Falling Bodies’

Suppose we start off with Galileo’s rule d1 : d2 : d3 … = 1 : 3 : 5 … and generalize. We have observed, or think we have observed, that a body starting from rest falls, say, 1 metre in the first second. So it will fall 3 metres during the second interval, 5 during the third, and by the end of the third second will have fallen altogether 1 + 3 + 5 = 9 = 3² metres. Since the odd numbers 1 + 3 + … + (2n–1), n = 1, 2, 3…, add up conveniently to n², we have a formula already. And, since there is nothing sacrosanct about a second as an interval of time, we might try going backwards and consider halves, thirds and fourths of seconds and so on. So, when dealing in half-seconds, since the particle falls 1 metre in the first second, it must fall ½ metre in the first half-second and 3/2 metres in the second half-second, i.e. 2 metres already by the end of the first second (even though we observed that it falls only 1 metre in that second). And if we deal in thirds of seconds, the particle falls (1/3 + 3/3 + 5/3) = 9/3 = 3 metres, and if we deal in fourths of seconds it has fallen (1/4 + 3/4 + 5/4 + 7/4) = 16/4 = 4 metres, so it looks as if the particle’s speed increases the briefer the intervals of time we consider! What has gone wrong?

The answer is that Galileo’s ratio does not tell us anything about speeds as such, only distances, and we cannot just project back the observed ‘speed’ evaluated over a given interval because this speed has maybe been changing during the interval considered. As a matter of common observation a falling body falls faster and faster as time goes on, so it does not have a fixed speed over a given interval. When Galileo spoke of the distances fallen, he was referring to the distance the body had fallen by or at the end of the interval. And like practically everyone else then and since, Galileo assumed that, in such a case,  the speed changed ‘continually’ (or continuously) rising from an initial speed, which the particle has at the very beginning of the interval, to a final speed which the particle attains only at the very close of the interval. So how do you work out the speed, and thus the distance traversed, during any interval of time? To get out a formula, it is seemingly no good just knowing the distances, and thus the speeds, of falling bodies over macroscopic intervals of time like seconds: we need to know the distances they fall in intervals of time too small for us to measure directly. Moreover, any error we make in an observed value over a relatively large interval like a second will most likely get magnified when we extrapolate forward to immense stretches of time, or backward to ‘microscopic’ time.
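The contrast can be made concrete with a few lines of Python (a sketch only, assuming with the text that the body falls exactly 1 metre in its first second, so that ½g = 1 in these units): the formula distributes the first metre over the two half-seconds as ¼ m and ¾ m, whereas the naive projection wrongly assigned ½ m to the first half-second.

# Why back-projecting the odd-number rule goes wrong. Assume, as in the text,
# that the body falls 1 metre in its first second, so d(t) = (1/2) g t^2
# with (1/2) g = 1 metre.

def d(t):
    return 1.0 * t ** 2      # (1/2) g t^2 with (1/2) g = 1

halves = [d((k + 1) / 2) - d(k / 2) for k in range(4)]
print(halves)          # [0.25, 0.75, 1.25, 1.75]: ratios 1 : 3 : 5 : 7 again
print(sum(halves[:2])) # 1.0 metre in the first second, as observed

# The naive projection instead took 0.5 m for the first half-second and
# 1.5 m for the second, giving 2 m by the end of the first second.
print(0.5 + 1.5)       # 2.0, the contradiction noted above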
The ‘Law of Falling Bodies’ (that, during equal intervals of time, the distances fallen are as the odd numbers) takes for granted the following: (1) that, during free fall, a particle’s speed is always increasing; and (2) that the speed increases steadily ― does not, for example, increase and then decrease a little, or halt for a moment. We can take (1) as being based on observation. (2) is also based on observation in the sense that we do not notice any fluctuations (though there might well be some too small for us to pick up), and Salviati, Galileo’s spokesman in the Dialogue, after discussing the point, concludes that it is “more reasonable” to conclude that the increase is regular.
But this is not all. Not only does Galileo (and practically all physicists since his day) assume that there is a ‘steady’ increase (in the sense that the particle does not backtrack or even stay motionless for a moment) but that the accelerating particle takes on all possible intermediate speeds.  “The acceleration is made continuously from moment to moment, and not discretely (intercisamente) from one time to another “ (Galileo, op. cit. p. 266)  This implies, as Galileo well realized, that space and time are ‘infinitely divisible’ since speed is the ratio of distance to time ― “Thus, we may understand that whenever space is traversed by the moving body with a motion which began from rest and continues uniformly accelerating, it has consumed and made use of infinite degrees of increasing speed….”  (Galileo, op. cit. p. 266).
Galileo and one or two of his medieval predecessors resolved the ‘continuous acceleration problem’ by geometricizing it, even though it belonged to dynamics, the science of movement, while geometry is the science of the changeless (Euclidian geometry anyway). As we all learned at school, the area of a triangle is half the height times the base. But most people have forgotten, if they ever knew, why the formula works. It works because, taking the simplest case, that of a right-angled triangle, you can cut off the upper ‘half-triangle’ and bring it round to form a rectangle, and the area of this rectangle is half the original height times the base. (Subsequently, we deal with non-right-angled triangles by showing that the area of a parallelogram between the same two parallels is that of the equivalent rectangle.)
       What is more, we can (in imagination if not in practice) change a triangle into an equivalent rectangle no matter how tiny the triangle and resulting rectangle are. Distance is speed × time, so we can present distance as the ‘area under the curve’ of a speed/time graph, plotting speed vertically and time horizontally. Now, in the case under consideration, the speed is ‘always’ changing and so has a different value at every single instant, no matter how brief (does anyone really believe this can be the case?). Also, provided the time interval is ‘brief’, we can, without too much exaggeration, treat the speed over that interval as constant and equal to the average height of the two uprights, (h1 + h2)/2.

We can now measure each narrow rectangle: its area is (h1 + h2)/2 × (t2 – t1) and, doing this again and again, we can get out a total for the whole area. I shall not give the details of the derivation, which can be seen in any book on Mechanics or Calculus (Note 5), because I shall in a moment derive a formula without any appeal to geometry. Suffice it to know that the final result is the well-known equation of motion for the case of constant acceleration from rest, s = ½gt², where s is the distance and g, the acceleration due to the gravitational attraction of the Earth, is a constant whose value was obtained, not from first principles, but from meticulous experiment and observation.
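As a quick numerical check of the geometrical argument (a sketch, not a derivation): if we chop the interval into narrow strips, give each strip the average height of its two uprights and add up the strip areas, we recover ½gT² exactly for the straight-line graph v = gt, whatever the strip width.

# Represent distance as the area under the speed/time graph v = g*t,
# chop [0, T] into narrow strips, treat the speed over each strip as the
# average of its two uprights, and sum the strip areas.

g = 9.8          # m/s^2, approximate acceleration due to gravity
T = 3.0          # seconds of free fall
strips = 1000    # number of narrow rectangles

dt = T / strips
area = 0.0
for i in range(strips):
    t1, t2 = i * dt, (i + 1) * dt
    average_height = (g * t1 + g * t2) / 2   # average of the two uprights
    area += average_height * dt              # area of one narrow rectangle

print(area, 0.5 * g * T ** 2)   # both come out as 44.1 (up to rounding)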
There can be no doubt that the formula works, so it must be true, or very nearly true. But is the normal manner of deriving it acceptable? To me it is not because, as always in calculus, there is the peculiar procedure of first of all stating that some quantity (speed in this case) is ‘always’ changing, but then, a moment later, turning it into a constant while claiming that this does not matter because it is only a constant for a very short period of time! It is like hearing that so-and-so is really a very nice guy and, when you object that on every occasion you meet him he is despicable, being told that this doesn’t matter because, every time you meet him, it’s only for a ‘very short time’!
Galileo, of course, did not have the modern concept of a ‘limit’ in the mathematical sense since it was only evolved, and very painfully at that, during the late nineteenth century.  But, contrary to what most people assume, the modern mathematical treatment using limits does not so much resolve the conceptual problem as make it technically irrelevant. But there is a cost to pay : the whole discussion has been removed from the domain of physical reality where it originated and where it belongs.
Is there any other way of tackling the problem?  Yes, I believe there is. We can simply suppose that when we continually reduce the intervals of time, we eventually arrive at  an interval of time so small that the particle does not accelerate (or even move) at all ― since there is not enough ‘time’ for it to do so. This is the ‘finitist’ hypothesis which is the kicking-off point for Ultimate Event Theory.
However, before passing on to the UET treatment, we ought to see what Newton made of the problem. He is over generous to Galileo when he states right at the beginning of his Principia that “By the first two laws [of  motion ] and the first two Corollaries, Galileo discovered that the descent of bodies varied as the square of the time and that the motion of projectiles was in the curve of a parabola” (Newton, Principia, Motte’s translation p. 21). If this really is the case, Galileo did not express himself as clearly and succinctly as Newton and, above all, did not explain why this is so. Galileo does speak of the ‘heaviness of bodies’ but does not quite manage to conceive of gravity in the  Newtonian sense.
By Newton’s first two Laws, any change in a body’s state of rest or uniform straight-line motion requires a force, and so there must be a force at work. Since the force (that of gravity) is permanent and does not get ‘used up’ like most forces we are familiar with, the body is repeatedly accelerated and, by Newton’s First Law, repeatedly retains the extra velocity it acquires. This means the acceleration cannot be other than constant :
“When a body is falling, the uniform force of its gravity acting equally, impresses, in equal intervals of time, equal forces upon the body, and therefore generates equal velocities (Note 6) … And the spaces described in proportional times are as the product of the velocities and the times; that is, as the square of the times.” ( Newton, Principia, p. 21)
In effect, a falling body is on the receiving end of a repeating force which moves it from one state of uniform velocity (which it would keep for ever if subject to no further outside impulse) to the next. Newton thus, as no one had done before him, derived the observed phenomenon of constant acceleration in a falling body from just two assumptions, his first two Laws of Motion.
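Newton's reasoning can be mimicked numerically (again only a sketch, with the interval size chosen arbitrarily): give the body the same small increment of velocity at each of many equal intervals, let it keep what it has acquired, and the accumulated distances come out very nearly as the square of the times.

# Discrete version of Newton's argument: at each small, equal interval the
# 'uniform force of gravity' adds the same increment of velocity, and the
# body retains the velocity it has already acquired.

g = 9.8                      # m/s^2
steps_per_second = 100_000   # size of the 'equal intervals of time'
dt = 1.0 / steps_per_second

def distance_after(seconds):
    v, d = 0.0, 0.0
    for _ in range(int(seconds * steps_per_second)):
        v += g * dt          # equal force, equal added velocity each interval
        d += v * dt          # the body keeps the velocity it has acquired
    return d

d1, d2, d3 = distance_after(1), distance_after(2), distance_after(3)
print(d1, d2, d3)            # roughly 4.9, 19.6, 44.1 metres
print(d2 / d1, d3 / d1)      # roughly 4 and 9: 'as the square of the times'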

Treatment of Free Fall in Ultimate Event Theory

When deriving an equation, especially a well-known one, one must beware of circular reasoning and assuming what we wish to prove. So, let us lay our hands on the table and declare what basic assumptions we require.
First of all, we need to assume that, in cases of free fall, the body ‘keeps on accelerating’, i.e. goes faster and faster as time goes on. This assumption is, in Newton, an Axiom, or an immediate deduction from an Axiom, but it was originally  the fruit of observation. For there might conceivably be worlds where falling bodies fall at a constant rate, i.e. , don’t speed up as they fall. However, it seems to be generally true.
Secondly, we need to assume that a falling body, while its speed is variable, has a constant or very nearly constant acceleration due to the Earth’s attraction ― this follows from Newton’s Laws of Motion which are presented as ‘Axioms’ though ‘amply supported by experiment’. If Newton is to be believed, the force on a falling body, and hence its acceleration, varies inversely with the square of its distance from the centre of the Earth, i.e. the nearer you are to the Earth the more rapidly your speed increases (until it reaches a terminal velocity because of air resistance). Also, because the Earth is not a perfect sphere, there will be slight variations according to where you are on the Earth’s surface. However, over the sort of distances we are likely to be concerned with, there will be little, if any, observable variation in the value of g, the constant of acceleration.
        A third assumption is, however, also necessary. Either we suppose, as Galileo and Newton did (Newton with some hesitation), that ‘time and space’ are ‘infinitely divisible’, or we suppose that they are not and draw the necessary consequences. Ultimate Event Theory presupposes that all events are made up of ‘ultimate events’ (which cannot be further divided) and that there is an ‘ultimate’ interval of time within which no change of position or exchange of energy is possible. When dealing with a ‘sequence’ of events, there is always a first event, and this event has occurrence on a particular spot at a particular moment. There can be change but not continuous change (of position or anything else you like to mention). The dimensions of the three-dimensional spot where an ultimate event has occurrence are currently unknown, as is the extent of the elementary interval of time, which I denote a ksana (from the Sanskrit for ‘instant’). All we can say about these fundamental dimensions of Space/Time is that they are ‘very small’ compared to the dimensions we deal with in the observable macroscopic ‘world’. It may seem that there is not much one can do with such a hypothesis but, surprisingly, there is, since the whole art of Calculus and Calculus-like procedures is to start off with unknown microscopic quantities and eventually dispense with them while, miraculously, ending up with the sort of values we can work with.
Fourthly, if our eventual formula is to be of any use, we have to assume that it is possible to determine an ‘initial state’, ‘initial distance’, ‘initial rate of change’ or the like. In this case, we have to assume that we can make at least a reasonable guess at the size of g, the constant of acceleration due to the proximity of the Earth.
(Incidentally, to avoid being pedantic, I shall sometimes lapse into the  ‘object-language’ of conventional physics, speaking of  ‘particles’ instead of events. But it is to be understood that what is ‘really’ happening is that ultimate events are appearing, disappearing and reappearing at particular spots on the Locality and that the ‘motion’ inasmuch as it exists at all is discontinuous.)
We have then a ‘particle’ (repeating ultimate event) displacing itself relative to some landmark event-cluster considered to be ‘at rest’, i.e. repeating regularly. Our particle (event-cluster) has received an impulse that has dislodged it from its previous position since, by hypothesis, it is now ‘in motion’. According to Newton’s Laws (Laws of Motion + Law of Universal Attraction) this impulse in a particular direction will not go away but will be repeated indefinitely ‘from moment to moment’ without diminution. Since the effect of an outside force is to accelerate the particle, its speed will increase but will, by the Law of Gravity, increase by a constant amount at each interval of time, since the force responsible for the acceleration does not change appreciably over small distances. The resulting increase in distance fallen will be the same as the displacement ‘during’ the first interval, since the ‘particle’ starts from rest. In the terms of UET, at the ksana labelled 0, the start point, the particle is at zero distance from a particular grid-point ― it is actually at this point ― and at the ksana labelled 1 it is m spaces further to the right (or left) of the original spot (strictly, this spot’s repeat). The initial ‘speed’, which is also the constant acceleration per ksana, is m spaces per ksana where m is an integer. At all subsequent ksanas, the ‘particle’ keeps this ‘speed’ and so displaces itself a further m spaces each time in the same direction. If there were no further force involved, it would keep this current speed indefinitely, but since the gravitational ‘pull’ is repeated it moves a further m spaces at each ksana.

ksana 0            . 

ksana 1            ←    m spaces  → .

ksana 2            ←    m spaces  →   .  ←      m spaces         →.

The equivalent of that will-o’-the-wisp, ‘instantaneous speed’, in Ultimate Event Theory is simply the current reappearance rate of the ultimate event and, in this case, the current ‘speed’ is the difference between the current position (relative to a fixed point origin) and the previous position, divided by unity. The current acceleration (increase in speed) in this case is the difference between the current speed and the previous speed, and this difference, by Newton’s Law, remains the same because the force involved remains the same (or very nearly the same). Defined recursively, which is the preferred method in UET, the ksana-by-ksana displacement is
d(0) = 0;  d(1) = m;  d(n+1) = d(n) + d(1)    n = 1, 2, 3…
and the total distance traversed up to ksana n is the sum d(1) + d(2) + … + d(n).

ksana      distance from previous position      distance from start position

0                        0                                     0
1                        m                                   1 m
2             (m + m) = 2m                      (1 + 2) m  =  3 m
3            (2m + m) = 3m                  (1 + 2 + 3) m  =  6 m
4                       4m               (1 + 2 + 3 + 4) m  = 10 m
…                        …                                     …
n                       nm               {1 + 2 + 3 + … + n} m

        Since the sum of the natural numbers up to n is the relevant triangular number, n(n+1)/2 = (n²/2) + (n/2), the total distance traversed at the nth ksana is thus

m{1 + 2 + 3 + … + n}  =  m·n(n+1)/2  =  m(n² + n)/2  =  (mn²)/2 + (mn)/2
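The sum is easy to check by brute force in a few lines of Python (pure grid-space and ksana units, nothing physical assumed):

# The per-ksana displacement grows by m spaces at every ksana, so the running
# total at ksana n is m times the triangular number 1 + 2 + ... + n.

def total_distance(m, n):
    displacement, total = 0, 0
    for _ in range(n):
        displacement += m        # d(k+1) = d(k) + d(1), with d(1) = m
        total += displacement
    return total

m, n = 3, 10
print(total_distance(m, n))      # 165
print(m * n * (n + 1) // 2)      # 165 as well: m(n^2 + n)/2, the closed form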
        This total displacement has taken place in n ksanas, where n is a positive integer, for example 3, 57, 1,456 or 10⁶⁸.
We, however, do not reckon in ksanas, this interval being so brief that our senses do not recognize it, even when our senses are extended and amplified by modern instruments. What we can  say, however, is that there are n* ksanas to a second, where n* , an unknown constant, is a very large positive integer.

If we wish to convert our formula to seconds we must divide by n*, so we now have

d(n/n*)  =  (mn²)/(2n*) + (mn)/(2n*)

If t is in seconds n = n* t or t = n/n*  since there are n* ksanas to a second. So,

 

        d(t)  =  m(n*t)²/(2n*) + m(n*t)/(2n*)  =  (mn*t²)/2 + (mt)/2   …..(i)

Now, if we have been able to deduce, by observation, that at the end of every second there is a constant gain in speed of g metres per second, which takes a full second to have its effect, the speed at the end of the very first second will be g metres per second, where g is a known constant ― approximately known, anyway. The speed at the n*th ksana is n*m spaces per ksana, and this corresponds to g metres per second, so m = g/n*. Substituting this into the above, we obtain
        d(t)  =  (gn*t²)/(2n*) + (gt)/(2n*)  =  ½gt² + ½(gt)/n*   …..(ii)

        Substituting for t = 1, 2, 3 we obtain

        d(t = 1)  =  (g/2) + (g/2)/n*
        d(t = 2)  =  4(g/2) + 2(g/2)/n*  =  2g + g/n*
        d(t = 3)  =  9(g/2) + 3(g/2)/n*
        d(t)       =  t²(g/2) + t(g/2)/n*

         These are the total distances up to the end of the first, second, t th second. The actual distances traversed during each second can be obtained by subtraction. Thus, the distance traversed during the second second is

(4 – 1)(g/2) = (3)(g/2) and for the third (9 – 4)(g/2) 

If we examine the ratios including the correction terms, the distance covered ‘during’ the second second is

        {4(g/2) + 2(g/2)/n*}  –  {(g/2) + (g/2)/n*}  =  3(g/2) + (g/2)/n*

which is very nearly 3(g/2) if n* is large, so the ratio is very nearly 3 : 1. Similarly, the increases during subsequent seconds compared to those in the preceding seconds will be approximately 5 : 3, 7 : 5 and so on.
Note that it was not necessary to say anything about ‘areas under a curve’, ‘infinitesimally small’ intervals or, for that matter, limits as n* → ∞. The formula d(t) = ½gt² + ½(gt)/n* is not a limit: it is an exact formula involving two constants, g and n*, one of which is known (at least approximately) and the other of which is currently unknown because it has so far proved to be unobservable. Whether one actually takes a value such as n* into account (on the occasions when it is actually known) depends on the level of precision at which one is working, but this is a matter for the engineer or manufacturer to decide.
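To give a feel for the size of the extra term, here is a short evaluation in Python; the figure used for n* is, of course, a pure placeholder, since n* is precisely what is not yet known.

# The exact formula versus the usual (1/2) g t^2. n_star is a placeholder.

g = 9.8
n_star = 1e43    # hypothetical ksanas to a second

def d_terms(t):
    """Return (classical part, correction term) of d(t) = ½gt² + ½gt/n*."""
    return 0.5 * g * t ** 2, 0.5 * g * t / n_star

for t in (1.0, 10.0, 1000.0):
    classical, correction = d_terms(t)
    print(t, classical, correction)   # the correction stays below 1e-39 m here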

Determination of the value of n*

Let us return to the general formula in t

        d(t)  =  ½gt² + ½(gt)/n*

If now we set t = n* ― and there is no reason why we should not, since n* is a number, even if we do not know what it is ― we have

        d(n*)  =  ½g(n*)² + ½(gn*)/n*  =  ½g(n*)² + ½g

In other words, the ‘extra’ distance, which we have tended to discount as being negligible, is now equal to the known increment of ½g. And this means that we can, at least in principle, determine the value of n* from data and spreadsheets ― for it will be that value of t which gives an ‘additional’ increment of g/2.
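The principle can be shown with a toy calculation (a sketch only: the ‘data’ are generated from the formula itself, g is rounded to 10, and n* is given a deliberately tiny placeholder value so that the search finishes at once):

# Generate 'exact' fall distances from d(t) = (1/2) g t^2 + (1/2) g t / n*
# with a small placeholder n_star, then look for the t at which the excess
# over (1/2) g t^2 reaches g/2. That value of t is n* itself.

g = 10.0         # rounded so the arithmetic is exact
n_star = 5000    # deliberately small placeholder for n*

def d(t):
    return 0.5 * g * t ** 2 + 0.5 * g * t / n_star

t = 1
while d(t) - 0.5 * g * t ** 2 < 0.5 * g:   # excess not yet equal to g/2
    t += 1
print(t)   # 5000: the t at which the excess equals g/2 recovers n_star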

Now, n* is probably far too large a number for this to be currently possible even with today’s computers, but we do not have to leave it at that. We can work with nanoseconds instead of seconds and the relation will still hold, although we will now be determining n*/10⁹, which is probably still a large number. Of course, at this level of precision we shall have to be much more careful about the values of g ― we will probably have to make further observations to determine them ― and perhaps will also have to take into account the effects of General Relativity. However, since the instruments now available are capable of detecting the tiny Mössbauer effect, this should not by any means be an impossible task. I confidently guess that within the next twenty years science will come up with a good estimate of the value of n* (the number of ksanas in a second), and that n* will take its rightful place alongside Avogadro’s Number and G, the universal constant of gravitation, as one of the most important numbers in science (Note 8).

Problem about the formula for free fall

There is, however, a problem with the above. According to Newton’s Third Law every action calls forth an equivalent and oppositely directed reaction, and this means there is a fixed order of events, since action and reaction cannot be strictly simultaneous, at any rate in Ultimate Event Theory (see the post on Newton’s Third Law). Certainly there is a definite sequence of events when we are dealing with contact forces, which is why Newton used the terms ‘action’ and ‘reaction’ in the first place. But is gravity, action at a distance, the great exception to the rule? Seemingly so, because all my attempts to tinker around with the derivation ended up with something so far from the usual formula (½gt²) that it had to be wrong, because the latter has been shown by countless experiments to be at least approximately correct. If, for example, you envisage gravitational attraction as similar to a contact force, there will be a delay of at least one ksana before the next force is experienced by the falling body. That is, the Earth pulls the falling body, the latter feels the effect and in return exerts a pull on the Earth, while it continues for at least one ksana at its current reappearance rate. This gives

ksana        distance at current ksana        Total distance

0                        0                           0
1                     1 (g/2n)                    1 (g/2n)
2                     1 (g/2n)                    2 (g/2n)
3                     2 (g/2n)                    4 (g/2n)
4                     2 (g/2n)                    6 (g/2n)
5                     3 (g/2n)                    9 (g/2n)
6                     3 (g/2n)                   12 (g/2n)
…                        …                           …
m–1                (m/2) (g/2n)
m (even)           (m/2) (g/2n)         2{1 + 2 + 3 + … + (m/2)} (g/2n)

The distance, after due substitutions, turns out to be half the previous one, as one might expect, namely ¼gt² + gt/4n*, and this must be wrong.
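The halving is easy to verify with a small simulation in pure units (a sketch; no claim is made about the exact form of the small second term):

# Compare the accumulated distance when the velocity increment is applied at
# every ksana with the case where it is only applied at every other ksana
# (action and reaction alternating). The second total tends to half the first.

def total(every_other, ksanas):
    v, d = 0, 0
    for k in range(1, ksanas + 1):
        if not every_other or k % 2 == 1:   # increment only on 'active' ksanas
            v += 1                          # one unit of extra displacement
        d += v
    return d

for ksanas in (10, 100, 10_000):
    print(ksanas, total(True, ksanas) / total(False, ksanas))  # tends to 0.5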

This means that gravity must be treated as simultaneously affecting both bodies ‘at once’, i.e. within the same ksana, strange though this seems. Newton himself was worried by the issue since it implied that gravity propagated itself ‘instantaneously’ across the entire universe (Note 7)! It is, however, probably wrong to conceive of gravity in this way: in General Relativity it is the space between bodies that contracts rather than the separate bodies sending out impulses to each other. In Ultimate Event Theory terms, this makes gravitational phenomena states of the underlying substratum (which I call the Locality) rather than interactions between event-clusters ‘on’ the Locality. This issue will be gone into in more depth when I come to discuss the implications of Relativity for Ultimate Event Theory.

S.H.

Note 1   Galileo, who apparently never did throw iron balls off the top of the Leaning Tower of Pisa, does give a figure at one point: “an iron ball of one hundred pounds…falls from a height of one hundred yards in five seconds” (Galileo, Dialogue Concerning the Two World Systems). But this figure is so wildly off that one of his own pupils queried it at the time:

“[Galileo’s friend] B. Baliani wrote to him to question the figure and in reply, Galileo [said] that for the refutation of [the] statement under discussion, the exact time was of no consequence. He then went on to explain how one might determine the acceleration due to gravitation experimentally, if Baliani cared to trouble with it. Galileo does not assert that he had ever done so, and since he was then blind, it is improbable that he ever did.”

Drake, Notes p. 561 op. cit.

Note 2  Galileo does, however, go on to say cryptically that “the spaces passed over by the  body starting from rest have to each other the ratios of the squares of the times in which such spaces were traversed” (Dialogue, p. 257)

Note 3 Galileo discusses this possibility but rejects it as unlikely.

Note 4  Speed is not a basic SI unit, being simply the ratio of distance to time (metres to seconds). Curiously though, while speed is not something of which we have direct experience, the same is not true of momentum mv, speed × mass. All collisions involve forcible and abrupt changes of momentum that we certainly do experience, since such changes are often catastrophic (car crashes, tsunamis). I believe there are systems of measurement which take momentum mv (kg m s–1) or force m dv/dt = ma (kg m s–2) as a primary unit.

Note 5 In the more sophisticated modern derivation of the formula, we ‘squeeze’ the area between two limits, one an overestimate and the other an underestimate, and we chop up the interval into rectangles of variable width. But the basic strategy is the same as that of Galileo and Oresme before him.

Note 6  Newton presumably meant, ‘generates equal supplementary velocities’, since, as we have had it drummed into our heads in school, force produces accelerations, not velocities  as such. It is amusing to see one of the greatest minds in the history of science making a slip that would have earned him a reprimand in a modern school. This is actually not the only place in the Principia where Newton confuses velocity and acceleration; maybe part of the problem was that there did not exist a suitable Latin term (he wrote the Principia in Latin).

Note 7  Modern textbooks generally state that gravitational attraction travels at the speed of light.

Note 8  Unfortunately, the chances are that I shall not live to see this.

Provided the recorded distances in successive intervals of time are in the ratio 1:3:5 (or 7:9:11), the rule holds. Now, lengths and distances are things we feel we know about, things accessible to the senses. Likewise, time in the sense of a regular sequence of audible or visual stimuli, the ticking of a clock, the beating of a pulse, the regular flashing of a light, is also something that falls within our sensory experience. But speed? Speed cannot be seen or heard and is, most of the time, extremely difficult to determine precisely: we experience the effects of speed, but not speed itself. The difficulty with problems like that of falling bodies is that we start with the known (recorded distances, familiar audible events and so on) but then, in order to determine further distances, have to work out speeds and deduce, from a formula which deals in changes of distance rather than in the distances themselves, distances that we cannot hope to evaluate directly (Note 3).
There can be no doubt that the resulting formula, s = ½gt², works, so it must be true, or very nearly true. But is the normal manner of deriving it entirely acceptable? To me it is not, because there is, first of all, a lingering doubt in my mind as to whether it is legitimate to represent distances, which are real things, by ‘areas under a speed/time curve’, since this involves plotting velocities which themselves depend on distances and have no reality of their own. But a more serious objection is that, as always in traditional calculus, there is the peculiar procedure of first of all stating that some quantity (speed in this case) is ‘always’ changing, but a moment later turning it into a constant and pretending that this does not matter because it is only a constant for such a short period of time! It is like saying that someone is really a very nice guy but that on every occasion you meet him he will be nasty, and that this doesn’t matter because you only ever see him for a very short time! The mathematical ‘passage to a limit’ does not so much resolve the conceptual problem as make it irrelevant, and only at the price of removing the whole discussion from the domain of physical reality where it originated and where it belongs. And in cases where we know that there really is a cut-off point (a smallest possible value for the independent variable), this sort of manipulation can, and sometimes does, give incorrect answers, which is why, increasingly, problems are dealt with by slogging them out numerically with computers rather than by trusting blindly to analytical formulae. The dreadful truth that ought to be inscribed in letters of darkest black in every calculus textbook is that the famous mathematical limit is, in almost all cases, never attained, and those of us who live in the real world need to beware of this, since the cut-off point may be much closer than the mathematics makes it look. Generally, engineers simply evaluate some quantity to the level of precision they require and don’t bother about the limiting value.
But is there any other way of tackling the problem? Yes, I believe there is: deriving the equation of motion directly from Newton’s Laws while adding in the ‘finitist’ assumption on which most of Ultimate Event Theory is based (Note 4). That is, one can simply suppose (what common sense tells us must be the case) that when we continually reduce the intervals of time, we eventually arrive at an interval of time so small that the particle does not accelerate (or even move) at all ― since there is not enough ‘time’ for it to do so.

Treatment of Free Fall in Ultimate Event Theory

Suppose we have observed that a particle (event-cluster) falling from rest has, or at any rate appears to have, a constant acceleration of g metres/second which takes 1 second to take effect. So, in ‘thing-speak’, if the particle starts from rest, it has moved g metres by the end of the first second.
In Ultimate Event Theory there is always a first event, whether observed or not, and, in general, a first cause of ‘motion’. Since the ‘particle’ is in motion, we conclude that, in accordance with Newton’s First Law (or its UET equivalent), the particle has received an impulse that has dislodged it from its previous position and that, in the first ksana we are concerned with, the ‘particle’ has been displaced by a certain number of spaces (grid-positions on the Locality) in some given direction. We do not know, and do not need to know, what this initial number of spaces is, but we can call it g/n metres since there are, by hypothesis, n ksanas to a second. (The quotient g/n is understood to be taken to the nearest whole number.) This is the ‘current velocity’, the equivalent of ‘instantaneous velocity’, namely the ‘velocity’ that the ‘particle’ would continue to have from now on if it were not interfered with by any outside forces. In other words, were this rate of displacement to remain unchanged, ksana by ksana, the particle would traverse n(g/n) = g metres in n ksanas, which we take to be the equivalent of a second. As for n, all we really need to know at this stage is that this number ‘exists’, i.e. represents a real number of spaces, and that it is a very large number.
The particle (event-cluster) does not, however, as it happens in this case, maintain its current ‘velocity’ (rate of appearance) because of the influence of a massive event-cluster nearby. Since gravitation is a permanent force, the particle in question will keep on receiving the same impulse ksana after ksana, so its overall rate of displacement will increase at each stage, but by a fixed amount (g/n) from one ksana to the next (Note 6).
At the ksana we label 0 the particle is thus at zero distance from some grid position and at (not ‘during’) ksana 1 it is at the equivalent of g/n metres from this point. At each ksana it maintains its previous rate of appearance but with a constant supplementary distance added on because the effect of the outside force is repetitive and inexhaustible. We have

ksana      distance in metres traversed        Total distance
           relative to previous ksana             so far

0                        0                              0
1                     1 (g/n)                        1 (g/n)
2           (1 + 1) = 2 (g/n)                (1 + 2) = 3 (g/n)
3           (2 + 1) = 3 (g/n)            (1 + 2 + 3) = 6 (g/n)
4                     4 (g/n)        (1 + 2 + 3 + 4) = 10 (g/n)
…                        …                              …
m                     m (g/n)        {1 + 2 + 3 + … + m} (g/n)

         Since the sum of the natural numbers up to m is the relevant triangular number, m(m+1)/2 = (m²/2) + (m/2), the total distance traversed at the mth ksana is thus

        (g/n){1 + 2 + 3 + … + m}  =  (g/n)·m(m+1)/2  =  (g/2)(m² + m)/n metres

This distance has been accomplished in m ksanas.
Now, since there are n ksanas to a second, to work out how many metres the particle has traversed in t seconds, we have to convert to seconds, i.e. divide by n. If m = nt , then the distance traversed in t seconds is

  (g/2)·((m² + m)/n) / n  =  (g/2)(nt² + t)/n  =  (g/2)(t² + t/n)  =  ½gt² + ½gt/n

Now, if we take the limit as n → ∞, this is the familiar ½gt². For normal situations this is doubtless accurate enough, but one can conceive of cases where, if t were large enough, the extra term might need to be taken into consideration ― indeed this would be an occasion to get an estimate of n. Otherwise, what we can say is that the distance fallen is always > ½gt².

        Note that I have not been obliged to make any appeal to ‘areas under a curve’ or to velocities as such, it being only necessary to add distances. Recursively, the behaviour of the ‘particle’ is given by the formula

                d(n+1) = d(n) + d(n) – d(n–1) = 2d(n) – d(n–1)

                d(0) = 0;  d(1) = g/n

where d(n) is the displacement between ksana n–1 and ksana n. This displacement thus increases by a steady d(1) = g/n at each ksana and, over successive stretches of n ksanas, the distances covered consequently increase by twice the distance covered in the first such stretch, which is why the odd numbers appear at successive intervals of time. The curve formed by joining the dots is what we know of as a parabola, because the odd numbers 1 + 3 + 5 + … + (2n – 1) add up to n², and y = x² is the equation of a parabola.
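The recursion and the odd-number pattern can both be checked numerically in pure units (a sketch, with the number of ksanas to a ‘second’ chosen arbitrarily):

# Build the per-ksana displacements from d(k+1) = 2*d(k) - d(k-1) with
# d(0) = 0, d(1) = 1 unit, accumulate them, and check that (a) the running
# total at ksana k is the triangular number k(k+1)/2 and (b) the distances
# covered in successive stretches of n ksanas are very nearly as 1 : 3 : 5 : 7.

n = 1000                           # ksanas to a 'second', arbitrary but large
d = [0, 1]                         # per-ksana displacements d(0), d(1)
totals = [0, 1]                    # running total of the displacements
for k in range(1, 4 * n):
    d.append(2 * d[k] - d[k - 1])
    totals.append(totals[-1] + d[-1])

print(totals[10], 10 * 11 // 2)    # 55 and 55: the triangular numbers

stretches = [totals[(i + 1) * n] - totals[i * n] for i in range(4)]
print([round(s / stretches[0], 3) for s in stretches])  # close to 1, 3, 5, 7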

Correction to formula for free fall

There is, however, a problem with the above. According to Newton’s Third Law every action calls forth an equivalent, oppositely directed reaction, and this means there is a fixed order of events, since action and reaction are not strictly simultaneous (see the post on Newton’s Third Law). This is certainly the case for contact forces, but I had thought of making an exception for gravity, as Newton himself seems to have done, since he is on record as saying that it propagates instantaneously, though he had some difficulty believing this (Note 5). If we consider that, in the case of free fall under gravity, there should still be a succession of events, the ‘action’ will only take effect at every other ksana. I had thought of making it a general principle that an ultimate event always occupies the same position for at least two consecutive ksanas, otherwise it cannot be said to have any proper velocity. I thus had the particle (event-cluster) keep the same rate of displacement for two ksanas before being once more accelerated. This worried me at first since the result seemed completely different from ½gt², which must be more or less correct. However, all was well after all: the final result merely differed in the second term.

This time, for reasons that will become apparent, I set 1 second as equal to 2n ksanas (and not n) so the initial displacement is g/2n metres.

ksana      distance in metres traversed        Total distance
           relative to previous ksana             so far

0                        0                            0
1                     1 (g/2n)                     1 (g/2n)
2                     1 (g/2n)                     2 (g/2n)
3                     2 (g/2n)                     4 (g/2n)
4                     2 (g/2n)                     6 (g/2n)
5                     3 (g/2n)                     9 (g/2n)
6                     3 (g/2n)                    12 (g/2n)
…                        …                            …
m–1                (m/2) (g/2n)
m (even)           (m/2) (g/2n)         2{1 + 2 + 3 + … + (m/2)} (g/2n)

The total distance traversed at the mth ksana where m is even is

(g/2n) · 2{1 + 2 + 3 + … + (m/2)}  =  (g/2n) · 2{(m/2)² + (m/2)}/2
                                   =  (g/2n){(m²/4) + (m/2)} metres

Substituting in m = 2nt and dividing by n we obtain

(g/2n)((2nt)²/4 + nt/2)/n  =  (g/2)(t² + t/2n)  =  ½gt² + gt/4n
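The summation step can be checked directly (a sketch in units of g/2n, making no assumption about the conversion to seconds):

# The per-ksana displacements run 1, 1, 2, 2, 3, 3, ... so the running total
# at an even ksana m is 2*(1 + 2 + ... + m/2) = m**2/4 + m/2 of these units.

def total_at(m):
    v, total = 0, 0
    for k in range(1, m + 1):
        if k % 2 == 1:
            v += 1            # the 'pull' only takes effect every other ksana
        total += v
    return total

for m in (4, 10, 100):
    print(m, total_at(m), m * m // 4 + m // 2)   # the two columns agree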

_________________________________________________

Note 1   For the man who is hailed as the inventor of the experimental method, Galileo is surprisingly cavalier about the details. He actually gives a figure at one point: “an iron ball of one hundred pounds…falls from a height of one hundred yards in five seconds” (Galileo, Dialogue, p. 259), which is so wildly off that one of his own pupils queried it.
“[Galileo’s friend] B. Baliani wrote to him to question the figure and in reply, Galileo [said] that for the refutation of [the] statement under discussion, the exact time was of no consequence. He then went on to explain how one might determine the acceleration due to gravitation experimentally, if Baliani cared to trouble with it. Galileo does not assert that he had ever done so, and since he was then blind, it is improbable that he ever did.”
     Drake, Notes p. 561 op. cit.)

Note 2   To his credit, Galileo does consider the possibility that the speed of a falling body fluctuates erratically rather than increases uniformly though he discounts this as unlikely : “It seems much more reasonable for it [the falling body] to  pass first through those degrees nearest to that from which it set out, and from this to those farther on” (Galileo, op. cit. p. 22).

Note 3   Speed is not a basic SI unit, being simply the ratio of distance to time (metres/second). Interestingly, though speed is not something we have direct experience of, this seems not to be true of momentum mv, since all collisions are the result of a forcible change of momentum, often a dramatic one. It is possible, indeed tempting, to consider that a particle (event-cluster) really does possess momentum (whereas it never really ‘has’ speed) and, I believe, there are systems of measurement which take momentum mv (kg m s–1) or force m dv/dt = ma (kg m s–2) as a primary unit.

Note 4   This is most likely the reason why Newton continued to use the cumbersome apparatus of geometric ratios in his Principia instead of his ‘Method of Fluxions’ which he had already invented. Sticking to ratios and geometry sidesteps the problem of infinite divisibility and the reality of ‘indivisibles’, concerning which Newton had serious doubts.


Note 2  “In the Methodus Fluxionum Newton stated clearly the fundamental problem of the calculus: the relation of quantities being given, to find the relation of the fluxions to these, and conversely.”  Boyer, The History of the Calculus p. 194.

Note 3   By ‘fluxion’ Newton meant what we call the ‘rate of change’ or derivative.

 

Note 3   In the more sophisticated derivation of the formula, we ‘squeeze’ the area between two limits, one an overestimate and the other an underestimate, and, moreover, we chop up the interval into rectangles of variable width, not necessarily all the same. But the basic strategy is the same as that of Galileo and Oresme before him.
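By way of illustration only (this is my own sketch, and it uses equal-width strips for simplicity rather than the variable widths mentioned above): the following Python lines bracket the distance fallen between a lower rectangle sum and an upper rectangle sum taken over the speed v = gt, both of which close in on ½gt². The values of g and t are arbitrary.

# 'Squeezing' the area under v = g*t between an underestimate (lower sum)
# and an overestimate (upper sum). Equal-width strips are used purely for
# simplicity; g and t are arbitrary values of my own choosing.

g = 9.8      # m/s^2 (assumed)
t = 1.0      # total time of fall in seconds (assumed)

for strips in (10, 100, 1000):
    dt = t / strips
    lower = sum(g * (k * dt) * dt for k in range(strips))        # speed at start of each strip
    upper = sum(g * ((k + 1) * dt) * dt for k in range(strips))  # speed at end of each strip
    print(strips, round(lower, 4), round(upper, 4), round(0.5 * g * t**2, 4))
# both sums close in on ½gt² = 4.9 as the strips get thinner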

 

In the ‘infinitist’ treatment of the Calculus, we always have to calculate velocities when what we are really interested in is distances. In UET, at any rate in this case, we can deal directly with distances, which is as it should be, since velocity is a secondary quality whereas distance (relative position) is not.

Note 4   Newton expresses himself rather too succinctly on this problem. He writes:

“When a body is falling, the uniform force of its gravity acting equally, impresses, in equal intervals of time, equal forces upon the body, and therefore generates equal velocities; and in the whole time impresses a whole force, and generates a whole velocity proportional to the time. And the spaces described in proportional times are as the product of the velocities and the times; that is, as the square of the times.”     

                        Newton, Principia, Motte’s translation  p. 21

This is rather obscure, and even wrong, to modern ears, for Newton speaks of “equal forces…generating equal velocities” when every schoolchild today has had it drummed into them that force produces acceleration rather than velocity as such. However, Newton does not seem to have had a suitable (Latin) word for ‘acceleration’ and we must, I think, understand him to mean “equal forces generate equal supplementary velocities”, while assuming, in accordance with the First Law, that a body retains the velocity it already has.

Note, however, that Newton, like Galileo, does not give us an equation of motion but still talks the geometrical language of proportion ― and this is typical of the way the entire Principia is written, even though Newton had already invented his version of the Calculus, the Method of Fluxions.
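To make the gloss above concrete, here is a toy check of my own (not Newton’s or Galileo’s procedure, and the step length and value of g are arbitrary): if an equal ‘supplementary velocity’ is added in each equal interval of time, the distances traversed after 1, 2 and 3 seconds come out very nearly in the ratio 1 : 4 : 9, i.e. ‘as the square of the times’.

# Toy check of the gloss above: equal velocity increments in equal intervals
# of time make the distances traversed proportional to the squares of the
# times (1 : 4 : 9). The interval length dt and g are arbitrary choices.

g = 9.8        # m/s^2 (assumed)
dt = 0.001     # length of each equal interval in seconds (assumed)

def distance_fallen(t_total):
    v, s = 0.0, 0.0
    for _ in range(int(round(t_total / dt))):
        v += g * dt     # the equal 'supplementary velocity' each interval
        s += v * dt     # distance covered during that interval
    return s

d1 = distance_fallen(1.0)
print([round(distance_fallen(t) / d1, 2) for t in (1.0, 2.0, 3.0)])
# prints ratios close to [1.0, 4.0, 9.0]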

Note 5  Modern textbooks generally state that gravitational attraction travels at the speed of light but there is some doubt as to whether one can really talk about gravity travelling anywhere since it is the intervening space that contracts according to General Relativity.

 

In an earlier post I suggested that the vast majority of ultimate events appear for a moment and then disappear for ever : only very exceptionally do ultimate events repeat identically and form an event-chain. It is these event-chains, made up of (nearly) identically repeating clusters of ultimate events, that correspond to what we call ‘objects’ — and by ‘object’ I include molecules, atoms and subatomic particles. There is, thus, according to the assumptions of this theory, ‘something’ much more rudimentary and fleeting than even the most short-lived particles known to contemporary physics. I furthermore conjectured that the ‘flicker of existence’ of ephemeral ultimate events might one day be picked up by instrumentation and that this would give experimental support to the theory. It may be that my guess will be realized more rapidly than I imagined since, according to an article in the February edition of Scientific American, an attempt is actually being made to detect such a ‘background hum’ (though those concerned interpret it somewhat differently).

“Craig Hogan, director of the Fermilab Particle Astrophysics Center near Batavia, Ill., thinks that if we were to peer down at the tiniest sub-divisions of space and time, we would find a universe filled with an intrinsic jitter, the busy hum of static. This hum comes not from particles bouncing in and out of being or other kinds of quantum froth (…) Hogan’s noise arises if space is made of chunks. Blocks. Hogan’s noise would imply that the universe is digital.”     Michael Moyer, Scientific American, February 2012
Moreover, Hogan thinks “he has devised a way to detect the bitlike structure of space. His machine ― currently under construction ― will attempt to measure its grainy nature.(…) Hogan’s interferometer will search for a backdrop that is much like the ether ― an invisible (and possibly imaginary) substrate that permeates the universe”.
         Various other physicists are coming round to the idea that ‘Space-Time’ is ‘grainy’, though Hogan is the first, to my knowledge, to speak unequivocally of a ‘backdrop’ permeated with ‘noise’ that has nothing to do with atomic particles or the normal quantum fluctuations. However, the idea that the universe is a sort of giant digital computer, with these fluctuations being the ‘bits’, does not appeal to me. As I see it, the ‘information’ only has meaning in the context of intelligent beings such as ourselves who require data to understand the world around them, make decisions and so forth. To view the universe as a vast machine running by itself and carrying out complicated calculations with bits of information (a category that includes ourselves) strikes me as fanciful, though it may prove to be a productive way of viewing things.      S.H.  30/08/12