Archives for category: Calculus

Despite having already invented the Calculus (which he called the Theory of Fluxions), Newton did not use it in his magnum opus, the Principia Mathematica, probably because he felt uneasy about its logical basis. Instead he employed cumbersome strictly geometrical arguments without even employing co-ordinates ─ which makes the Principia almost unreadable for the modern student. Feynman, one of the greatest mathematical physicists of all time, confessed that he could not follow Newton’s proof that planets must follow elliptical orbits and instead offered his own geometrical proof (see Feynman’s Lost Lecture by Goodstein and Goodstein).
However, it is not true, as is often said, that Newton had no concept of limits. The very first section of Book I is entirely given over to eleven ‘Lemmas’ about Limits which he needs in order to show, amongst other things, that planets and other heavenly bodies verify an inverse square distance law. The key limit is the first:
“LEMMA I
Quantities, and the ratios of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer to each other than by any given difference, become ultimately equal.

 If you deny it, suppose them to be ultimately equal, and let D be their ultimate difference. Therefore they cannot approach nearer to equality than by that given difference D; which is contrary to the supposition.”
Newton, Principia (Motte/Cajori translation  p. 29)

In particular, this Lemma leads on to the all-important Lemma VII which states that “the ultimate ratio of the arc, chord and tangent, any one to any other, is the ratio of equality”.

So, what are we to make of Lemma I? On the face of it, it sounds foolproof. Either diminishing ratios that converge to unity attain their goal or they do not ─ 'or' in the exclusive sense. In practice, of course, this will not do; essentially the Calculus wishes to have it both ways, making such ratios attain equality when this is convenient and keeping them short of equality when this is embarrassing. At least Newton grasps the nettle: by this all-round Lemma he affirms that the limit is attained.
Or, does he? In the Scholium (Commentary) which concludes the section, Newton admits that there is a conceptual problem, at any rate when we consider speed. Why so? Because speed is not an independent entity but rather a ratio of distance to time, and, in dynamics, we desire to know a body’s speed at a particular moment of time. In such a case, is there, or is there not, such a thing as an ‘ultimate ratio’ of distance/time? Newton writes:

“Perhaps it may be objected, that there is no ultimate proportion of evanescent quantities; because the proportion, before the quantities have vanished, is not the ultimate, and when they are vanished, is none.”

Newton’s reply to this objection is interesting:

“By the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities not before they vanish, nor afterwards, but with which they vanish… There is a limit which the velocity at the end of the motion may attain, but not exceed.”

This is all very well but contradicts Lemma I, since Newton says in the above passage that this 'ultimate ratio' 'may be attained' ─ in which case it would constitute a difference D that is not supposed to exist according to Lemma I.
And, a little further on, Newton even contradicts what he has just said since he now denies that this ‘ultimate ratio’ is in fact attained:
        “Those ultimate ratios with which quantities vanish are not truly the ratios of ultimate quantities, but limits towards which the ratios of quantities decreasing without limit do always converge; and to which they approach nearer than by any given difference, but never go beyond, nor in effect attain to, till the quantities are diminished ad infinitum.” (p. 39 Motte/Cajori)

The contradiction remained like a worm in the apple of Calculus until the radical reworking the latter underwent at the end of the 19th century. The definition of a ‘limit’ that every mathematics student encounters today neatly sidesteps the problem ─ without resolving it. Mathematically speaking, it is immaterial whether a sequence or series actually attains the proposed limit; the only issue is whether the absolute value of the difference between all terms after a given point and the proposed limit can be made “smaller than any positive quantity” (Note 1).
The mathematics student of today is discouraged, sometimes even specifically prohibited, from asking the question that every enquiring person wants to pose: does the function or sequence actually attain the limit? In most cases of any interest in Calculus and Analysis the answer is that it does not. (The sequence 1, 1/2, 1/4, 1/8, …, 1/2^n, for example, never attains its obvious limiting value of zero.) The adroit way in which the limit is defined, usually credited to the 19th-century mathematician Heine (drawing on Weierstrass's lectures), means that, mathematically speaking, we get what we want, namely a clearcut test of whether or not a function 'tends to a limit', while avoiding altogether situations where, for example, we might find ourselves tempted to 'divide by zero'.
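The point can be made concrete with a minimal sketch in Python (the function name is my own, hypothetical): for the sequence 1, 1/2, 1/4, …, 1/2^n we can always find a term closer to zero than any given positive difference, yet no term is ever exactly zero.

```python
# Sketch of the modern limit test applied to the sequence
# 1, 1/2, 1/4, ..., 1/2**n, whose proposed limit is 0.
# For any positive epsilon we can find an index n beyond which
# every term differs from the limit by less than epsilon --
# yet no term of the sequence is ever exactly 0.

from fractions import Fraction

def index_within(epsilon: Fraction) -> int:
    """Smallest n such that 1/2**n < epsilon."""
    n = 0
    while Fraction(1, 2**n) >= epsilon:
        n += 1
    return n

epsilon = Fraction(1, 1_000_000)
n = index_within(epsilon)
term = Fraction(1, 2**n)
assert term < epsilon    # the epsilon test is passed...
assert term != 0         # ...but the limit is never attained
print(n, term)
```

This is exactly the 'sidestep': the test asks only whether the difference can be made smaller than any given positive quantity, never whether it reaches zero.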
However, Newton, despite being the greatest pure mathematician this country has produced, was a physicist first and a mathematician second, which is why the modern 'solution' to the problem of limits, even had he thought of it, would probably not have appealed to him. I am afraid that I, as a philosophic empiricist, at any rate with regard to applied mathematics, am not at all satisfied by the sleight of hand; in cases of obvious physical importance I want to know whether a function or mode of behaviour actually does attain the proposed limit or not. However, Newton's Lemma VII, which makes the “ultimate ratio of the arc, chord and tangent… the ratio of equality”, does not convince me any more than it convinces any contemporary mathematics student.
So, what to do? The solution is quite simple and, I contend, perfectly valid mathematically ─ even though it will arouse howls of protest and derision from the aficionados of modern Analysis. We simply excise Newton’s lemma I and replace it by a positive statement:

“LEMMA I

Quantities, and the ratios of quantities, which in any finite time converge continually to equality, do not in general become ultimately equal but differ from strict equality by a small but finite amount. 

        Now, it is true that in general we do not know what this 'small amount' is ─ although in most applications it either is or could conceivably be ascertained. We now know that all energy interactions are quantised, and the inevitable inefficiency (because of friction and similar considerations) of an actual machine can be (and often is) estimated. Not only that, calculus is already used in situations where we know the value of the independent variable cannot be arbitrarily diminished. For example, dn in molecular thermodynamics cannot be made smaller than the size of a single molecule, and dx in population studies cannot be smaller than a single living person. This does not matter too much since we are dealing with millions of entities, although it must be said that in more accurate work the tendency these days is not to bother with calculus but to slog it out numerically with computers to the degree of precision required.
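The numerical 'slog' can be sketched in a few lines (the function and step sizes are illustrative assumptions of mine, not drawn from the text): a rate of change computed with a small but finite dx, as one must when dx cannot be arbitrarily diminished.

```python
# Sketch: a 'derivative' computed with a small but finite step dx,
# as in numerical work where dx cannot be arbitrarily diminished.

def finite_rate(f, x, dx):
    """Average rate of change of f over the finite interval [x, x + dx]."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x          # f(x) = x^2, whose exact derivative at x is 2x
for dx in (0.1, 0.01, 0.001):
    print(dx, finite_rate(f, 3.0, dx))
# The values approach 6.0 as dx shrinks, but for any finite dx the
# rate is exactly 2x + dx: a 'small but finite amount' away from the limit.
```

Note that the finite answer differs from the limiting value by precisely dx, which is the 'small but finite amount' of the amended Lemma.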
This drastic pruning of calculus does not make Analysis, and all that depends on it, altogether redundant, since there is often no great difference in practice between assuming that dx has an 'ultimate' final value and letting it go as near to zero as we wish ─ the dx terms, and a fortiori second and third order terms, will most likely end up being discarded anyway. Nonetheless, one can and should question whether the assumptions of Analysis, especially infinite divisibility, are realistic. I believe they are not. There is a growing movement amongst physicists (e.g. Causal Set Theory, Loop Quantum Gravity &c.) holding that even spacetime, the last refuge of the devotees of the continuous, might be 'grainy'.

SH  25/02/20

Note 1 The technical definition for a function is:
f(x) tends to a limit l as x tends to a if, given any positive number ε (however small), there is a positive number δ (which depends on ε) such that, for all x, except possibly a itself, lying between a − δ and a + δ, f(x) lies between l − ε and l + ε. The definition of the limit of a sequence is similar.
Such a definition will probably not mean much to the non-mathematical reader, but the idea behind it is a sort of guessing game. I claim that some sequence or function tends to a limit l, and my opponent challenges me to produce terms of my sequence or function that get closer to this limit than some arbitrarily small quantity such as 10^−6 = 0.000001. If I succeed, my opponent chooses an even smaller difference and so the contest goes on. The point is that this difference, though it can be reduced to zero in some cases, need not necessarily go to zero. For example, I might claim that the diminishing sequence 1; 0.1; 0.01; 0.001; 0.0001; and so on, has zero as a limit. My opponent asks me to make the difference d smaller than, say, 1/1000. I do this easily enough by presenting him with 0.00001, which is a term in the sequence and is smaller than 1/1000 (since 0.00001 = 1/100000). Moreover, since this is a strictly diminishing sequence, all terms further down the line will also have a smaller difference than the one I have to better. If my opponent ups his challenge, I can easily meet it, since if he comes up with 1/10^N (for some positive integer N) I can get closer simply by adding more zeroes to the denominator. Yet, in such a case, if actually asked to produce a term in my sequence that makes the difference zero exactly, I cannot do so ─ since any term 1/10^N, however large N is, is still a positive quantity, albeit a small one. But this does not matter: the limit still holds since I can get as close to it as I am required to.
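The guessing game itself can be sketched in Python (a hypothetical encoding of my own): given any challenge 1/10^N, the defender simply answers with the next term of the sequence, which beats the challenge but is never zero.

```python
# The 'guessing game' for the sequence 1, 0.1, 0.01, ... with
# proposed limit 0.  Against a challenge 1/10**N, the defender
# answers with the term 1/10**(N+1): smaller than the challenge,
# yet still strictly positive.

from fractions import Fraction

def respond(N: int) -> Fraction:
    """A term of the sequence closer to 0 than the challenge 1/10**N."""
    return Fraction(1, 10**(N + 1))

challenge = Fraction(1, 10**3)   # opponent demands a difference < 1/1000
answer = respond(3)
assert answer < challenge        # the challenge is met...
assert answer > 0                # ...yet the difference is never exactly 0
```

However large N grows, the same move wins; what can never be produced is a term making the difference exactly zero.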

[Brief summary:  For those who are new to this website, a brief recapitulation. Ultimate Event Theory aims to be a description of the physical world where the event as opposed to the object (or field) is the primary item. The axioms of UET are given in an earlier post but the most important is the Axiom of Finitude which stipulates that Every event is composed of a finite number of ultimate events which are not further decomposable. Ultimate events ‘have occurrence’ on an Event Locality which exists only so as to enable ultimate events to take place somewhere  and to remain discrete. Spots on the Locality where events may and do have occurrence have three ‘spatial’ dimensions each of unit size, 1 stralda, and one temporal dimension of 1 ksana. Both the stralda and ksana are minimal and cannot be meaningfully subdivided. The physical world, or what we apprehend of it, is made up of event-chains and event-clusters which are bonded together and appear as relatively persistent objects. All so-called objects are thus, by hypothesis, discontinuous at a certain level and there are distinct gaps between the successive appearances of recurring ultimate events. These gaps, as opposed to the ‘grid-spots’, have ‘flexible’ extent with however a minimum and a maximum.]

Every repeating event, or event-cluster, is in UET attributed a recurrence rate (r/r) given in absolute units stralda/ksana, where the stralda is the minimal spatial interval and the ksana the minimal temporal interval. r/r can in principle take the value 0 or any rational number n/m ─ but no irrational value. The r/r is quite distinct from the space/time displacement rate, the equivalent of 'speed', since it concerns the number of times an ultimate event repeats in successive ksanas, quite apart from how far the repeat event is displaced 'laterally' from its previous position.
If r/r = 0, this means that the event in question does not repeat.
But this value is to be distinguished from r/r = 0/1 which signifies that the ultimate event reappears at every ksana but does not displace itself ‘laterally’ ― it is, if you like, a ‘rest’ event-chain.
If r/r = 1/1 the ultimate event reappears at every ksana and displaces itself one stralda at every ksana, the minimal spatial displacement. (Both the stralda and the ksana, being absolute minimums, are indivisible.)
        If r/r = m/n (with m, n positive whole numbers) this signifies that the ultimate event repeats m positions to the right every n ksanas and if r/r = −m/n it repeats m positions to the left.
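The cases above can be sketched as a small Python model (the class and field names are my own, hypothetical conveniences, not part of UET itself): r/r = 0 is marked by a flag, while a repeating chain carries the signed pair (m, n).

```python
# Sketch of the recurrence-rate cases listed above.
# repeats=False encodes r/r = 0 (the event never recurs);
# otherwise (m, n) encodes a displacement of m stralda
# (negative = leftwards) every n ksanas.

from dataclasses import dataclass

@dataclass(frozen=True)
class RecurrenceRate:
    m: int               # stralda displaced per recurrence (signed)
    n: int               # ksanas between recurrences (n >= 1)
    repeats: bool = True

    def describe(self) -> str:
        if not self.repeats:
            return "does not repeat (r/r = 0)"
        if self.m == 0:
            return f"rest chain: reappears every {self.n} ksana(s), no displacement"
        side = "right" if self.m > 0 else "left"
        return f"repeats {abs(self.m)} stralda to the {side} every {self.n} ksana(s)"

print(RecurrenceRate(0, 1).describe())                  # r/r = 0/1, a 'rest' chain
print(RecurrenceRate(1, 1).describe())                  # r/r = 1/1
print(RecurrenceRate(-3, 5).describe())                 # r/r = -3/5
print(RecurrenceRate(0, 1, repeats=False).describe())   # r/r = 0
```

The flag is needed precisely because, as stated above, r/r = 0 (no recurrence) and r/r = 0/1 (recurrence without displacement) must not be conflated.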

But right or left relative to what? It is necessary to assume a landmark event-chain where successive ultimate events lie exactly above (or underneath) each other, as it were, when one space-time slice is replaced by the next. We generally assume that we ourselves constitute a standard  inertial system relative to which all others can be compared ─ we ‘are where we are’ at all instants and feel ourselves to be at rest except when our ‘natural state’ is manifestly disrupted, i.e. when we are accelerated by an outside force. In a similar way, in UET we conceive of ourselves as constituting a rest event-chain to which all others can be related. But we cannot see ourselves so we generally choose instead as a standard landmark event chain some (apparent) object that remains fixed at a constant distance as far as we can tell.

Such a choice is clearly relative, but we have to choose some repeating event-chain as standard in order to get going at all — 'normal' physics has the same problem. The crucial difference is, of course, not between 'vertical' event-paths ('stationary' event-chains) and 'slanting' event-paths (the equivalent of straight-line constant motion), but rather between 'straight' paths (whether vertical or slanting) and ones that are not straight, i.e. curved. As we know, dynamics only really took off when Galileo, unlike Aristotle, realized that it was the distinction between accelerated and non-accelerated motion that was fundamental, not that between rest and constant straight-line motion.
So, the positive or negative (right or left) m variable in m/n assumes some convenient 'vertical' landmark sequence. The denominator n of the stralda/ksana ratio cannot ever be zero ─ not so much because 'division by zero is not allowed' as because time only stands still for the space of a single ksana — "the moving finger writes and, having writ, moves on", as the Rubáiyát puts it. So an r/r where an event repeats but 'stays where it is' at each appearance takes the value 0/n, which we need to distinguish from 0.
m/n is a ratio but, since the numerator is in the absolute unit of distance, the stralda, m:n is not the same as (m/n) : 1 unless m = n.  To say a particle’s speed is 4/5ths of a metre per second is meaningful, but if the r/r of an event is 4/5 stralda per ksana we cannot conclude that the event in question shifts 4/5ths of a stralda to the right at every ksana (because there is no such thing as a fifth of a stralda). All we can conclude is that the event in question repeats every fifth ksana at  a position four spaces to the right relative to its original position.

We thus need to distinguish between recurrence rates which appear to be the same because of cancelling. The denominator will, unless stipulated otherwise, always refer to the next appearance of an event. 187/187 s/k is for example very different from 1/1 s/k since in the first case the event only repeats every 187th ksana while in the second case it repeats every ksana. This distinction is important particularly when we consider collisions. If there is any likelihood of confusion the denominator, which is the ksana value,  will be marked in bold, thus 187/187.
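Because cancelling is not allowed, a recurrence rate behaves as an ordered pair rather than a fraction. A sketch (the function name and encoding are my own, hypothetical): 187/187 s/k and 1/1 s/k agree as fractions, and even give the same average displacement per ksana, yet generate very different event-chains.

```python
# Sketch: recurrence rates compared as ordered pairs (m, n), never
# reduced.  An event with r/r = m/n reappears at ksanas n, 2n, 3n, ...
# displaced m stralda further at each appearance.

from fractions import Fraction

def appearances(m: int, n: int, horizon: int):
    """(ksana, position) pairs at which the event reappears, up to horizon."""
    return [(k, (k // n) * m) for k in range(n, horizon + 1, n)]

dense  = appearances(1, 1, 1000)      # r/r = 1/1: repeats every ksana
gapped = appearances(187, 187, 1000)  # r/r = 187/187: repeats every 187th ksana

assert Fraction(1, 1) == Fraction(187, 187)   # equal as fractions...
assert len(dense) == 1000 and len(gapped) == 5  # ...but very different chains
print(gapped)   # [(187, 187), (374, 374), (561, 561), (748, 748), (935, 935)]
```

The average slope (position per ksana) is identical in both cases; what differs is the density of appearances, which is exactly the distinction that matters in collisions.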
Also, the stralda/ksana ratio for event-chains has an upper limit. That is, it is not possible for a given ultimate event to reappear more than M stralda to the right or left of its original position at the next ksana ─ this is more or less equivalent to setting c ≈ 3 × 10^8 metres/second as the upper limit for causal processes. There is also an absolute limit N for the denominator irrespective of the value of the numerator, i.e. the event-chain with r/r = m/n terminates after n = (N−1). Since N is such an enormous number, this constraint can usually be ignored. An event or event-chain simply ceases to repeat when it reaches the limit.
These restrictions imply that the Locality, even when completely void of events, has certain inbuilt constraints. Given any two positions A and B occupied by ultimate events at ksana k, there is an upper limit to the number of ultimate events that can be fitted into the interval AB at the next or any subsequent ksana. This means that, although the Locality is not metrical in the way ordinary spatial expanses are, it is not true in UET that "Between any two ultimate events, it is always possible to introduce an intermediate ultimate event" (Note 1). (And this in turn means that many of the mathematical assumptions of Analysis and other areas of mathematics are unrealistic.)
Why is all this important or even worth stating? Because, unlike traditional physical theories, UET not only makes a distinction between constant and accelerated 'motion' (or rather their equivalents) but also between event-chains which have the same displacement rate but very different 'reappearance rates' — some repeating event-chains 'miss out' more ksanas than others.
A continuous function in Calculus is modelled as an unbroken line and, if we are dealing with a ‘moving object’, this object is assumed to ‘exist’ at every instant. In UET even a solid object is always discontinuous in that there is always a minute gap between consecutive appearances even in the case of the ‘densest’ event-chains. But, over and above this discontinuity which, since it is general and so minute, can usually be neglected, there remains the possibility of far more substantial discontinuities when a regularly repeating event may ‘miss out’ a large number of intermediate ksanas while nonetheless maintaining a regular rhythm. Giving the overall ‘speed’ and direction of an event-chain is not sufficient to identify it: a third property, the re-appearance rate, is required. There is all the difference in the world between an event-chain whose members (constituent ultimate events) appear at every consecutive ksana and an event-chain which only repeats at, say, every seventh or twentieth or hundredth ksana.
An important consequence is that a ‘particle’ (dense event-chain) can ‘pass through’ a solid barrier if the latter has a ‘tight’ reappearance rate while the ‘particle’ has one that is much more ‘spaced out’. Moreover, two ‘particles’ that have the same ‘speed’ (lateral displacement rate) but very different reappearance rates will behave very differently especially if their speeds are high relative to a barrier in front of them.
This feature of UET enables me to make a prediction even at this early stage. Both photons and neutrinos have speeds that are close to c, but their behaviour is remarkably different. Whilst it is very easy to block light rays, neutrinos are incredibly difficult to detect because they have no difficulty ‘passing through’ barriers as substantial as the Earth itself without leaving a trace. It has been said that a neutrino can pass through miles of solid lead without interacting with anything and indeed at this moment thousands are believed to be passing through my body and yours. On the basis of UET principles, this can only be so if the two event-chains known as ‘photon’ and ‘neutrino’ have wildly different reappearance rates, the neutrino being the most ‘spaced out’ r/r that is currently known to exist. Thus, if it should ever become possible to detect the ultimate event patterns of these event-chains, the ‘neutrino’ event-chain would be extremely ‘gapped’ while the photon would be extremely dense, i.e. apparently ‘continuous’ (Note 2). The accompanying diagram will give some idea of what I have in mind.


The existence and importance of reappearance-rates is one of the two principal innovations of Ultimate Event Theory and it may well have a bearing on the vexed question of so-called wave-particle duality. From the UET perspective, neither waves nor particles are truly fundamental  entities since both are bonded collections of ultimate events. A ‘wave’ is a ‘spaced-out’ collection of ultimate events, a ‘particle’ a dense conglomeration (although both wave and particle at sufficiently high magnification would reveal themselves to be discontinuous). Nonetheless, the perspective of UET is clearly much closer to the ‘particle’ approach to electro-magnetism (favoured by Newton) and gives rise to the following prediction. Since high frequency, short wave phenomena are clearly more ‘bunched up’ than low frequency, long-wave phenomena, it should one day, perhaps soon, be possible to detect discontinuities in very long wave radio transmissions while short wavelength phenomena would still appear to be continuous. The discontinuity would manifest itself as an irreducible ‘flicker’ like that of a light rapidly turned on and off — and may well have been already observed as a strangely persistent annoyance. Moreover, one can only suppose that there is some mechanism at work which shifts wave-particle phenomena from one mode to the other; such a mechanism simply ‘spaces out’ the constituent ultimate events of an apparent particle, or forcefully combines wave-like ultimate events into a dense bundle.
SH   16/03/20

Note 1 The statement “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” would be the equivalent in UET terms of the axiom “Between any two points there is always another point” which underlies both classical Calculus and modern number theory. Coxeter (Introduction to Geometry p. 178) introduces “Between any two points there is always another point” as a theorem derived from the axioms of ‘Ordered Geometry’, an extremely basic form of geometry that takes as ‘primitive concepts’ only points and betweenness. The proof only works because the geometrical space in question entirely lacks the concept of distance whereas in UET the Locality, although in general non-metrical and thus distance-less, does have the concept of a minimum separation between positions where ultimate events can have occurrence. This follows from the general principle of UET, the  so-called Principle of Parmenides (who first enunciated it) slightly adapted,  “If there were no limits, nothing would persist”.
As against the above axiom of continuity of ‘Ordered Geometry’ which underlies Calculus and much else, one could, if need be,  introduce into UET the axiom, “It is not always possible to introduce a further ultimate event between two distinct ultimate events”.

Note 2. It is possible that this facility of passing through apparently impenetrable barriers is the explanation of 'electron tunnelling', which undoubtedly exists, since a microscope (the scanning tunnelling microscope) has been manufactured that relies on the principle.


“In the last analysis it is the ultimate picture which an age forms of the nature of its world that is its most fundamental possession”
   Burtt, The Metaphysical Foundations of Modern Science

Today, since the cultural environment is so violently anti-metaphysical, it has become fashionable for physical theories to be almost entirely mathematical. Not so long ago, people we now refer to as scientists regarded themselves as ‘natural philosophers’ which is why Newton’s great work is entitled Philosophiae Naturalis Principia Mathematica. When developing a radically new ‘world-view’, the ‘reality-schema’ must come first and the mathematics second, since new symbolic systems may well be required to flesh out the new vision ─ in Newton’s case Calculus (though he makes very sparing use of it in the Principia).
Newton set out his philosophic assumptions very clearly at the beginning, in particular his belief in ‘absolute positioning’ and ‘absolute time’ ─ “All things are placed in time as to order of succession; and in space as to order of situation” (Scholium to Definition VIII). And, as it happened, the decisive break with the Newtonian world-view did not come about because of any new mathematics, nor even primarily because of new data, but simply because Einstein denied what everyone had so far taken for granted, namely that “all things are placed in time as to order of succession” ─ in Special Relativity ‘space-like separated’ pairs of events do not have an unambiguous temporal ordering. The case of QM is more nuanced since the mathematics did come first but it was the apparent violation of causal process that made the theory so revolutionary (and which incidentally outraged Einstein).
The trouble with the current emphasis on mathematics is that, from an ‘eventric’ point of view, the tail is wagging the dog. What is real is what actually happens, not what is supposed to happen.
Moreover, mathematics is very far from being so free from metaphysical and ‘intuitive’ assumptions as is generally assumed.  Arithmetic and number theory go right back to Pythagoras who seems to have believed that, at bottom, everything could be explained in terms of whole number relations, hence the watchword “All is Number” (where number meant ‘ratio between positive integers’). And this ancient paradigm received unexpected support from the 20th century discovery that chemistry depends on whole number ratios between the elements (Note 1).
The rival theory of continuous quantity goes back to Plato who, essentially for philosophic reasons, skewed higher mathematics decisively towards the geometrical, which is why even those parts of Euclid that deal with (Whole) Number Theory (Books VII ─ X) present numbers as continuous line segments rather than as arrays of dots. And Newton invented his Fluxions (what is now known as the calculus) because he believed reality was 'continuous' ─ "I consider mathematical Quantities in this place not as consisting of very small parts but as described by a continued Motion… These Geneses really exist, really take place in the Nature of Things" (Newton, De Quadratura).
The hold of the continuous, as opposed to the discrete, over mathematicians and physicists alike has been extraordinarily strong and held up the eventual triumph of the atomic hypothesis. Planck, the man who introduced ‘quanta’ into physics, wrote “Despite the great success that the atomic theory has so far involved, ultimately it will have to be abandoned in favour of the assumption of continuous matter”.
        Even contemporary branches of mathematics are far from being so ‘abstract’ as their authors claim, witness the excessive importance of the misleading image of the ‘Number Line’ and the general prejudice in favour of the continuous. Only logic is ‘reality-schema free’ and even here, there are systems of ‘deviant logic’ that attempt to make sense of the quantum world. The wholesale mathematisation of physics has itself been given philosophic support by authors such as Tegmark who claim that “at bottom reality is mathematical, not physical”.
All this to say that I make no apology for presenting a broad-brushed reality-schema or ‘world-view’ before attempting to develop a symbolic model and make predictions. It seems  we need some sort of general ‘metaphysical’ schema if only as a form of intellectual housekeeping, and it is much better to lay one’s cards on the table from the very beginning (as Newton does).
So, what is the schema of Eventrics and Ultimate Event Theory? The fundamental notion is of the ultimate event (an event that cannot be further decomposed). I believe there are such things as events and that they are (at least for the purposes of this theory) more fundamental than 'things'. I also claim that events must 'have occurrence' somewhere ─ hence the need for an Event Locality which either precedes all events or is brought into existence as and when events 'have occurrence'. Secondly, since most of the events that I and other humans are familiar with seem to be caused by other, usually preceding, events, I feel that this datum needs to be introduced into the theory at the very start. There is thus, by hypothesis, a Causal Force operating on and between (most) events. This force I term Dominance in order to emphasize its usually one-sided operation, and perhaps also to be able to extend the sense a little (Note 2).
I have thus already found it necessary in a theory of events to introduce two entities that are not events, namely the Locality and Dominance. Nonetheless, they are very closely tied up with the production of events, since without the first nothing at all could happen (as I see it), and, without the second, all events would be disconnected from each other and reality would be a permanent vast blooming confusion which, reputedly, it is for the new-born infant.
Are all events caused by other events? This is the deterministic view which was for a long time favoured by the scientific community. The mathematician and astronomer Laplace went so far as to claim that if the positions and momenta of all bodies at the present moment were known, the entire future evolution of the universe could be deduced using only Newton's laws. But, as we know, Quantum Mechanics and the Uncertainty Principle have put paid to such naïve assumptions; the notion of a strictly random event has now become entirely respectable. It can be loosely defined as "an event without causal predecessors" or, in the jargon of UET, "an event that has no passive relation of dominance to any other event that has occurrence on the Locality". Because of QM and other considerations, I thus found it essential from the very outset to leave some room for 'undominated' or 'random' events in the general schema. (Not only that, I shall argue that, at one time, random events greatly outnumbered ordinary caused events.)
This naturally leads on to the question of origins and whether we need any. Most ‘origin-schemas’ require the prior existence either of ‘beings of another order’ (Brahman, God, Allah, Norse gods &c.) or of states that are barely conceivable to us mere mortals (Nirvana, the original Tao, the Quantum Vacuum &c.). All such beings/states/entities are other, fundamentally different from the world we (think we) know and the beings within it.
A few ‘origin-schemas’ envisage the universe as remaining basically the same at all times, or at most evolving from something not fundamentally different from the world we now (think we) inhabit. The Stoic cosmology of Eternal Recurrence, Fred Hoyle’s Steady State and perhaps the Hawking-Hartle ‘No Boundary’ theory fall into this  class. For the partisans of these schemas, the present universe is ‘self-explanatory’ and self-sufficient, requiring nothing outside itself for its existence or explication (Note 3).
For a long time modern science did indeed adhere to the 'self-explanatory' point of view, but current physical orthodoxy is a strange mixture of 'other-' and 'no-other' origin-schemas. After dismissing for decades the question "What was there before the Big Bang?" as meaningless, most current cosmological theories involve pre-Big-Bang uni- or multi-verses very different from our own but still 'obeying the laws of physics' which, though distilled uniquely from observations of this world, have somehow become timeless and transcendent, in effect replacing the role of God Himself.
Partly for rational and partly for non-rational, i.e. temperamental, reasons I subscribe firmly to the first class of ‘origin theories’. I do not believe the physical universe is ‘self-explanatory’ notwithstanding the amazing success of the natural sciences, and it is significant that present cosmological theorists  have themselves found it necessary to push back into uncharted and inaccessible territory in their search for ultimate origins. The quasi-universality of religious belief throughout history, which, pared down to its essentials, means belief in a Beyond (Note 4) is today explained away as an ingrained habit of wishful thinking, useful perhaps when times are bad but  which humanity will eventually outgrow. However, I don’t find this explanation entirely convincing. There is perhaps more to it than that; this feeling that there is a reality beyond the physical sounds more like a faint but strangely persistent memory that the world of matter and its enforcers have never been able to completely obliterate. (This was precisely the view of the Gnostics.)
Be that as it may, I do assume an ultimate origin for events, a source which is definitely not itself composed of events and is largely independent even of the Locality. This source ejects events randomly from itself, as it were, or, to change the metaphor, events keep ‘leaking out’ of it. The source is familiar to anyone who is conversant with mysticism: it is the Brahman of Hinduism, the original Tao of Lao Tse, Ain Soph, the Boundless, of the Kabbalah, and what Bohm calls the ‘Implicate Order’. It is unfashionable today to think in terms of ‘potentiality’ and contrast it with ‘actuality’, but it could be said that this source is “nothing in actuality but everything in potentiality”. Ain Soph is, as Bohm emphasizes, immeasurable in the strong sense ─ measurement is completely irrelevant to it. Since science and mathematics deal only with the measurable and the formal, Ain Soph does not fall within their remit ─ but equally well one can maintain, as all mystics do, that such a thing/place/entity is beyond our comprehension (but perhaps not entirely beyond our apprehension).
What, however, above all one must not do is to mix the measurable and the immeasurable ─ which is exactly what Cantor did, to the great detriment of modern mathematics. Inasmuch as the Unknowable can be known, science and mathematics are definitely not suitable means: ritual, ecstatic dance or directed meditation are traditionally regarded as more suitable ─ and part of their purpose is precisely to quieten or sideline the rational faculty which is, in this context, a hindrance rather than a help.
Ain Soph, or whatever one wants to call the source, should not have any role to play in a physical or mathematical theory except, at most, to function as the ultimate origin of uncaused events. We can, in practice, forget about it. This means, however, that ‘infinity’, ‘eternity’ and suchlike (pseudo)concepts should have no place in science or in mathematics since they belong entirely to the immeasurable (Note 5).
‘Reality’ thus splits up into two ‘regions’, which I name the Unmanifest and the Manifest. The former is the ultimate source of all events but does not itself consist of events, whilst the latter is ‘manifest’ (to us or other conscious beings) precisely because it is composed of events that we can observe.
These two regions each divide into two, giving the schema:
        (1) The Unmanifest Non-Occurrent
        (2) The Unmanifest Pre-Occurrent
        (3) The Manifest Occurrent
        (4) The Manifest Post-Occurrent.

Why do we need (2.) and (4.)?
We need (2) largely because of Quantum Mechanics ─ more precisely because of the ‘orthodox’ Copenhagen interpretation of QM. This interpretation in effect splits the physical world into two layers, one of which is described by the wave function in its ‘independent state’ while the other arises when a human intervention causes the wave function to ‘collapse’ — an interesting metaphor. In the former (pure) state, whatever ‘goes on’ (and something apparently does) lacks entirely the specificity and discreteness of an ultimate event. We are, for example, invited to believe that a ‘photon’ (or rather a photo-wavicle) has no specific location prior to an intervention on our part ─ rather misleadingly termed a ‘measurement’. There is thus a layer of reality, and ‘physical reality’ at that, which does not consist of events but which seemingly does in some sense exist, and is all around and even in us. There is thus the need for an intermediary level between the remoteness of the true Unmanifest and the immediacy of the world of actual events we are familiar with (Note 6).
What of (4.), the Manifest Post-Occurrent ? It would seem that there are ‘entities’ of some sort which are not observable, not composed of bona fide observable events, but which are  nonetheless capable of giving rise to observable phenomena. I am thinking of such things as archetypes, myths, belief systems, generalized abstractions such as Nation, State, Humanity, perhaps even the self, Dawkins’s memes and so on. Logic and rational discourse tend to dismiss such things as pseudo-entities: there is the well-known anecdote of the tourist being shown around the Oxford colleges and asking where the university is. But the ‘university’ does have a reality of a sort, something in between the clearcut reality of a blow to the head and the unreality of a meaningless squiggle.
Moreover, it is in (4.) that I place such things as mathematical and physical theories. As far as I am concerned it is not the Oxford tourist but people like Tegmark (and Plato) who are guilty of a ‘category mistake’: in my terms, they situate mathematics in (1.), the Unmanifest Non-Occurrent, rather than in (4.) the Manifest Post-Occurrent. (1.) is a wholly transcendent level of reality, while (4.) is a manufactured realm which, though giving an appearance of solidity, would not exist, and would never have existed, if there had never been any human mathematicians (or other conscious beings). The Platonic view of mathematics, though tempting, is, I believe, a delusion: mathematics was made by man(kind) and was, originally at any rate, an extrapolation from human sense-impressions, though admittedly it is a very successful one.                                      SH 20/12/19 

 

Note 1. See the chapter on the ‘New Pythagoreanism’ in Shanks’s excellent book, Number Theory, or, for a more accessible treatment, in Valens’s The Number of Things.
Note 2 Dominance is roughly the equivalent of the Buddhist/Hindu concept of karma ─ but applied to all categories of events, not just morally significant ones.
Note 3. Newton granted a small role to God in the evolution of the universe, for example stopping heavenly bodies converging together, but Leibnitz argued that it was blasphemous to suppose any such intervention was needed since this implied that the Creator had not been a good enough designer in the first place. “No need for miracles” became a principal tenet of the Enlightenment though most thinkers found it necessary to introduce a Prime Mover to ‘get the ball rolling’, so to speak. Even this shadowy deus ex cathedra faded away into nothingness by the time of Laplace who famously informed Napoleon, “I had no need of that hypothesis” ─ the hypothesis in question being the existence of God.

Note 4 The Koran, for example, addresses itself specifically to “those who believe in the unseen” (Koran, sura 2, ‘The Heifer’, v. 3).

Note 5. This is precisely the point made by Lao Tse in the very first line of the Tao Te Ching, which may be translated, “The Tao that can be named is not the original Tao”. Lao Tse was writing at a time when language, not mathematics or physics, was the most advanced intellectual achievement, and, were he alive today, he would doubtless have written “The Tao that can be mathematized is not the original Tao”.

Note 6. QM is, incidentally, not the only system that posits an intermediary realm between the Limitless and the Limited. Hinayana Buddhism has a curious theory about ‘events’ passing through various stages of progressive ‘realization’ before becoming actual ─ most Indian authors for some reason cite 17.

 

 

 

CALCULUS

“He who examines things in their growth and first origins, obtains the clearest view of them” Aristotle.

Calculus was developed mainly in order to deal with two seemingly intractable problems: (1) how to estimate accurately the areas and volumes of irregularly shaped figures and (2) how to predict physical behaviour once you know the initial conditions and the ‘rates of change’.
We humans have a strong penchant for visualizing distances and areas in terms of straight lines, squares and rectangles ― I have sometimes wondered whether there might be an amoeba-type civilization which would do the reverse, visualizing straight lines as consisting of curves, and rectangles as extreme versions of ellipses. ‘Geo-metria’ (lit. ‘land measurement’) was, according to Herodotus, first developed by the Egyptians for taxation purposes. Now, once you have chosen a standard unit of distance for a straight line and a standard square as a unit of area, it becomes a relatively simple matter to evaluate the length of any straight line and the area of any rectangle (provided they are not too large or too distant, of course). Taking things a giant step forward, various Greek mathematicians, notably Archimedes, wondered whether one could in like manner estimate accurately the ‘length’ of arbitrary curves and the areas of arbitrarily shaped expanses.

At first sight, this seems impossible. A curve such as the circumference of a circle is not a straight line and never will become one. However, by making your unit of length progressively smaller and smaller, you can ‘measure’ a given curve by seeing how many equal little straight lines are needed to ‘cover’ it as nearly as possible. Lacking power tools, I remember once deciding to reduce a piece of wood of square section to a cylinder using a hand plane and repeatedly running across the edges. This took me a very long time indeed but I did see the piece of wood becoming progressively more and more cylindrical before my eyes. One could view a circle as the ‘limiting case’ of a regular polygon with an absolutely enormous number of sides which is basically how Archimedes went about things with his ‘method of exhaustion’ (Note 1).
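Archimedes’ doubling procedure can be sketched in a few lines of modern Python (my own illustration, naturally, not anything Archimedes could have written): start from an inscribed hexagon of side 1 in a circle of radius 1, then repeatedly double the number of sides using nothing but square roots; the polygon’s perimeter creeps up towards the circumference 2π.

```python
import math

# Archimedes-style 'exhaustion': inscribe a regular hexagon (side 1)
# in a circle of radius 1, then repeatedly double the number of sides.
# If s is the side of the n-gon, the side of the 2n-gon is
# sqrt(2 - sqrt(4 - s*s)), so only square roots are needed (no pi).
n, s = 6, 1.0
for _ in range(10):
    s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
    n *= 2

# The perimeter n*s approaches the circumference 2*pi from below,
# so n*s/2 is an ever-improving under-estimate of pi.
print(n, n * s / 2)
```

Ten doublings already give a 6,144-sided polygon whose perimeter agrees with 2π to roughly seven decimal places: the curve is never actually reached, but the covering gets as close as we please.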

It is important to stop at this point and ask under what conditions this stratagem is likely to work. The most important requirement is the ability to make your original base unit progressively smaller at each successive trial measurement while keeping the successive units proportionate to each other. Though there is no need to drag in the infinite, which the Greeks avoided like the plague, we do need to suppose that we can reduce our original unit of length indefinitely in a regular manner, say by halving it at each trial. In practice, this is never possible and craftsmen and engineers have to call a halt at some stage, though, hopefully, only when an acceptable level of precision has been attained. This is the point, historically, where mathematics and technology part company, since mathematics typically deals with the ‘ideal’ case, not with what is realizable or directly observable. With the Greeks, the gulf between observable physical reality and the mathematical model had already started to widen.

What about (2), predicting physical behaviour when you know the initial conditions and the ‘rates of change’? This was the great achievement of the age of Leibnitz and Newton. Newton seems to have invented his version of the Calculus in order to show, amongst other things, that planetary orbits had to be ellipses, as Kepler had found was in fact the case for Mars. Knowing the orbit, one could predict where a given planet or comet would be at a given time. Now, a ‘rate of change’ is not an independently ‘real’ entity: it is a ratio of two more fundamental items. Velocity, our best known ‘rate of change’, does not have its own unit in the SI system ― but the metre (the unit of distance) and the second (the unit of time) are internationally agreed basic units. So we define speed in terms of metres per second.

Now, the distance covered in a given time by a body is easy enough to estimate if the body moves in a straight line and its velocity does not increase or decrease; but what about the case where velocity is changing from one moment to the next? As long as we have a reliable correlation between distance and time, preferably in the form of an algebraic formula y = f(t), Newton and others showed that we can cope with this case in somewhat the same way as the Greeks coped with irregular shapes. The trick is to assume that the supposedly ever-changing velocity is constant (and thus representable by a straight line) over a very brief interval of time. Then we add up the distances covered in all the relevant time intervals. In effect, what the age of Newton did was to transfer the exhaustion procedure of Archimedes from the domain of statics to dynamics. Calculus does the impossible twice over: the Integral Calculus ‘squares the circle’, i.e. gives its area in terms of so many unit squares, while the Differential Calculus allows us to predict the exact whereabouts of something that is perpetually on the move (and thus never has a fixed position).
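The trick can be sketched in Python (the velocity law v(t) = 9.8t, as for a freely falling body, is a hypothetical example of my own choosing): treat the velocity as constant over each brief interval δt, add up the little distances, and compare with the exact answer the Integral Calculus supplies.

```python
# A changing velocity treated as constant over each brief interval dt.
# Hypothetical example: v(t) = 9.8*t (a freely falling body), t = 0 to 2 s.
def distance(v, t_end, steps):
    """Sum v(t)*dt over 'steps' equal slices of [0, t_end]."""
    dt = t_end / steps
    return sum(v(i * dt) * dt for i in range(steps))

v = lambda t: 9.8 * t
approx = distance(v, 2.0, 100_000)   # exhaustion-style estimate
exact = 0.5 * 9.8 * 2.0 ** 2         # 19.6 m, from the Integral Calculus
print(approx, exact)
```

With a hundred thousand slices the summed estimate agrees with the exact 19.6 metres to better than a thousandth of a metre, while always falling slightly short, since each slice pretends the velocity stays at its initial value.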

For this procedure to work, it must be possible, at least in principle, to reduce all spatial and temporal intervals indefinitely. Is physical reality actually like this? The post-Renaissance physicists and mathematicians seem to have assumed that it was, though such assumptions were rarely made explicit. Leibnitz got round the problem mathematically by positing ‘infinitesimals’ and ultimate ratios between them : his ‘Infinitesimal Calculus’ gloriously “has its cake and eats it too”. For, in practice, when dealing with an ‘infinitesimal’, we are (or were once) at liberty to regard it as entirely negligible in extent when this suits our purposes, while never permitting it to be strictly zero since division by zero is meaningless. Soon after Newton’s death, Bishop Berkeley, in The Analyst (1734), pointed out the illogicality of the procedure, as indeed of the very concept of ‘instantaneous velocity’.

The justification of the procedure was essentially that it seemed to work magnificently in most cases. Why did it work? Calculus typically deals with cases where there are two levels, a ‘micro scale’ and a ‘macro scale’, the latter being all that is directly observable to humans ― the world of seconds, metres, kilos and so on. If a macro-scale property or entity is believed to increase by micro-scale chunks, we can (sometimes) safely discard all terms involving δt (or δx) which appear on the Right Hand Side but still have a ‘micro/micro’ ratio on the Left Hand Side of the equation (Note 2). This ‘original sin’ of Calculus was only cleaned up in the late 19th century by the key concept of the mathematical limit. But there was a price to pay: the mathematical model had become even further removed from observable physical reality.

The artful concept of a limit does away with the need for infinitesimals as such. An indefinitely extendable sequence or series is said to ‘converge to a limit’ if the gap between the suggested limit and any and every term after a certain point is less than any proposed positive quantity. For example, it would seem that the sequence 1/2, 1/3, 1/4, … 1/n … gets closer and closer to zero as n increases, since for any proposed gap we can do better by making n twice as large and thus 1/n twice as small. This definition gets round the problem of actual division by zero.
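A few lines of Python (a toy illustration of mine, not a proof) make the definition concrete: for any proposed gap, one can exhibit the point in the sequence 1/n beyond which every term lies within that gap of the suggested limit, zero.

```python
# For the sequence 1/2, 1/3, 1/4, ... the suggested limit is 0.
# Given any proposed gap eps, find the point beyond which every term
# is within eps of the limit -- exactly what the definition demands.
def n_beyond_which_within(eps):
    n = 1
    while 1.0 / n >= eps:
        n += 1
    return n

for eps in (0.1, 0.001, 0.000001):
    n = n_beyond_which_within(eps)
    print(eps, n, 1.0 / n)   # 1/n is within eps, and stays so ever after
```

Note that no term of the sequence is ever actually 0: the limit is approached as closely as you please but never attained, which is the point taken up in Note 3.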

But what the mathematician does not address is whether in actual fact a given process ever actually attains the mathematical limit (Note 3), or how near it gets to it. In a working machine, for example, the input energy cannot be indefinitely reduced and still give an output, because there comes a point when the input is not capable of overcoming internal friction and the machine stalls. All energy exchange is now known to be ‘quantized’ ― but, oddly, ‘space’ and ‘time’ are to this day still treated as being ‘continuous’ (which I do not believe they are). In practice, there is almost always a gulf between how things ought to behave according to the mathematical treatment and the way things actually do or can behave. Today, because of computers, the trend is towards slogging it out numerically to a given level of precision rather than using fancy analytic techniques. Calculus is still used even in cases where the minimal value of the independent variable is actually known. In population studies and thermodynamics, for example, the increase δx or δn cannot be less than a single person, or a single molecule. But if we are dealing with hundreds of millions of people or molecules, the Calculus treatment still gives satisfactory results. Over some three hundred years Calculus has evolved from being an ingenious but logically flawed branch of applied mathematics to being a logically impeccable branch of pure mathematics that is rarely if ever directly embodied in real-world conditions.                                         SH
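The population point can be illustrated numerically (the growth rate and population size below are hypothetical figures of my own): compare growth applied in small discrete chunks with the smooth exponential that the continuous Calculus treatment gives.

```python
import math

# Growth applied in small discrete chunks versus the smooth continuous
# model N(t) = N0*exp(r*t). Population and rate are hypothetical.
def discrete_growth(n0, r, t_end, steps):
    """Apply the increase r*n*dt in 'steps' separate chunks."""
    n = n0
    dt = t_end / steps
    for _ in range(steps):
        n += r * n * dt
    return n

n0, r, t_end = 100_000_000, 0.02, 10.0          # 100 million, 2% per year
discrete = discrete_growth(n0, r, t_end, 10_000)
continuous = n0 * math.exp(r * t_end)
print(discrete, continuous)
```

With a hundred million individuals the chunked and continuous answers agree to a small fraction of a percent, which is why the Calculus treatment remains satisfactory even where the increase can never really be smaller than one person or one molecule.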

 

 

 

Note 1 It is still a subject of controversy whether Archimedes can really be said to have invented what we now call the Integral Calculus, but certainly he was very close.

Note 2 Suppose we have two variables, one of which depends on the other. The dependent variable is usually noted as y while the independent variable is, in the context of dynamics, usually t (for time). We believe, or suppose, that any change in t, no matter how tiny, will result in a corresponding increase (or decrease) in y the dependent variable. We then narrow down the temporal interval δt to get closer and closer to what happens at a particular ‘moment’, and take the ‘final’ ratio which we call dy/dt. The trouble is that we need to completely get rid of δt on the Right Hand Side but keep it non-zero on the Left Hand Side because dy/0 is meaningless ― it would correspond to the ‘velocity’ of a body when it is completely at rest.
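A worked example with the standard case y = t² may make this clearer. Over a finite interval δt the ratio δy/δt works out to exactly 2t + δt: the lone δt is the term we discard on the Right Hand Side, while the ratio itself stays well-defined because δt is never actually zero. A few lines of Python (my own illustration) show the discarded term fading as δt shrinks:

```python
# For y = t**2 the ratio over a finite interval dt is exactly 2*t + dt:
#   ((t + dt)**2 - t**2) / dt  =  (2*t*dt + dt**2) / dt  =  2*t + dt
# Discarding the lone dt gives dy/dt = 2*t, yet dt itself is never zero.
def ratio(t, dt):
    return ((t + dt) ** 2 - t ** 2) / dt

for dt in (0.1, 0.001, 0.000001):
    print(dt, ratio(3.0, dt))   # creeps towards 2*t = 6 as dt shrinks
```

At t = 3 the ratio runs 6.1, 6.001, 6.000001… : it converges on 6 without the division ever being a division by zero.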

Note 3   Contrary to what is generally believed, practically all the sequences we are interested in do not actually attain the limit to which they are said to converge. Mathematically, this does not matter — but logically and physically it often does.

 

Space-Time

Minkowski, Einstein’s old teacher of mathematics, inaugurated the hybrid ‘Space-Time’ which is now on everyone’s lips. In an address delivered in 1908, shortly before his death, he uttered the now famous lines,

“Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

But why should Minkowski, and whole generations of scientists, have ever thought that ‘space’ and ‘time’ could be completely separate in the first place? Certain consequences of a belief in ‘Space-Time’ in General Relativity do turn out to be scarcely credible, but there is nothing weird or paradoxical per se about the idea of ‘time’ being a so-called fourth dimension. To specify an event accurately it is convenient to give three spatial coordinates which tell you how far the occurrence of this event is, or will be, along three different directions relative to an agreed fixed point. If I want to meet someone in a city laid out like a grid as New York is (more or less), I need to specify the street, say Fifth Avenue, the number of the building and the floor (how high above the ground it is). But this by itself will not be enough for a successful meet-up : I also need to give the time of the proposed rendez-vous, say, three o’clock in the afternoon. The wonder is, not that science has been obliged to bring time into the picture, but that it was possible for so long to avoid mentioning it (Note 1).

Succession

 Now, if you start off with ‘events’, which are by definition ‘punctual’ and impermanent, rather than things or ‘matter’ you cannot avoid bringing time into the picture from the start: indeed one  might be inclined to say that ‘time’ is a good deal more important than space. Events happen ‘before’ or ‘after’ each other; what happened yesterday preceded what happened this morning, and you read the previous sentence before you started on the current one. The very idea of ‘simultaneous’ events, events that have occurrence ‘at the same time’, is a tricky concept even without bringing Special Relativity into the picture. But  the idea of succession is both clearcut and basic and one could, as a first bash, even define ‘simultaneous’ events negatively as bona fide  occurrences that are not temporally ordered.

So, when I started trying to elaborate an ‘event-orientated’ world-view, I felt I absolutely had to have succession as a primary ingredient : if anything it came higher up the list than ‘space’. Originally I tried to kick off with a small number of basic assumptions (axioms or postulates) which seemed absolutely unavoidable. One such assumption was that most events are ‘ordered temporally’, that they have occurrence successively, ‘one after the other’ ─ with the small exception of so-called ‘simultaneous events’. Causality also seemed to be something I could not possibly do without, and causality is very much tied up with succession since it is usually the prior event that is seen as ‘causing’ the other event in a causal pair. Again, one might tentatively define ‘simultaneous events’ as events which cannot have a direct causal bond, i.e. function as cause and effect (Note 2). And, in an era innocent of Special Relativity and light cones, one might well define space as the totality of all distinct events that are not temporally ordered.

From an ‘event-based’ viewpoint,  chopping up reality into ‘space’ and ‘time’ is not fundamental : all we require is a ‘place’ where events can and do have occurrence, an  Event Locality. Such a Locality starts off empty of events but has the capacity to receive them, indeed I have come to regard ultimate events as in some sense concretisations or condensations of an underlying substratum.

 Difference between Space and Time

 There is, however, a problem with having a single indivisible entity whether we call it ‘Space-Time’ or simply ‘the Locality’. The two parts or aspects of this creature are not at all equivalent. Although I believe, as some physicists have suggested, that, at a certain level, ‘space’ is ‘grainy’, it certainly appears to be continuous : we do not notice any dividing line, let alone a gap, between the different spatial ‘dimensions’ or between different spatial regions. We don’t have to ‘add’ the dimension height to pre-existing dimensions of length and width for example : experience always provides us with a three-dimensional physical block of reality (Note 3). And the fact that the choice of directions, up/down, left/right and so on, is more often than not completely arbitrary suggests that physical reality does not have inbuilt directions, is ‘all-of-a-piece’.

Another point worth mentioning is that we seem to have a strong sense of being ‘at rest’ spatially : not only are we ‘where we are’, and not where we are not, but we actually feel this to be the case. Indeed we tend to consider ourselves to be at rest even when we know we are moving : when in a train we consider that it is the other things, the countryside, that are in motion, not us. It is indeed this that gives Galileo’s seminal concept of inertia its force and plausibility; in practice all we notice is a flagrant disturbance of the ‘rest’ sensation, i.e. an ‘acceleration’.

What about time? Now it is true that time is often said to ‘flow’ and we do not notice any clearcut temporal demarcation lines any more than we notice spatial ones. Nonetheless, I would argue that it is much less natural and plausible to consider ‘time’ as a continuum because we have such a strong sense of sequence. We continually break up time into ‘moments’ which occur ‘one before the other’ even though the extent of the moment varies or is left vague. Sense of sequence is part of our world and since our impressions are themselves bona fide events even if only subjective ones, it would appear that sequence is a real feature of the physical world. There is in practice always an arrow of time, an arrow which points from the non-actual to the actual. Moreover, the process of ‘actualization’ is not reversible : an event that has occurrence cannot be ‘de-occurred’ as it were (Note 4).

And it is noteworthy that one very seldom feels oneself to be ‘at rest’ temporally, i.e. completely unaware of succession and variation. The sensation is so rare that it is often classed as  ‘mystical’, the feeling of being ‘out of time’ of which T.S. Eliot writes so eloquently in The Four Quartets. Heroin and certain other drugs,  by restricting one’s attention to the present moment and the recent past, likewise ‘abolish time’, hence their appeal. In the normal way,  even when deprived of all external physical stimuli, one still retains the sensation of there being a momentum and direction to one’s own thoughts and inner processes : one idea or internal sensation follows another and one never has any trouble assigning order (in the sense of sequence) to one’s inner feelings and thoughts. It is now thought that the brain uses parallel processing on a big scale but, if so, we are largely unaware of this incessant multi-tasking. Descartes in his thought experiment of being entirely cut off from the outside world and considering what he simply could not doubt, might well have concluded that sequence, rather than the (intemporal) thinking ego, was the one item that could not be dispensed with. For one can temporarily disbelieve in one’s existence as a particular person but not in the endless succession of thoughts and subjective sensations that stream through one’s mind/brain.

All this will be dismissed as belonging to psychology rather than physics. But our sense impressions and thoughts are rooted in our physiology and should not be waved aside for that very reason : in a sense they are the most important and inescapable ‘things’ we have for without them we would be zombies. Physical theories that deny sequence, that consider the laws of physics to be ‘perfectly reversible’, are both implausible and seemingly  unliveable, so great is our sense of ‘before and after’. Einstein towards the end of his life decided that it followed from General Relativity that everything happened in an ‘eternal present’. He took this idea seriously enough to mention it in a letter to the son of his college friend, Besso, on receiving news of the latter’s death, writing “For those of us who believe in physics, this separation between past, present and future is only an illusion, however tenacious”.

Breaks in Time

If, then, we accept succession as an unavoidable feature of lived reality, are we to suppose that one moment shifts seamlessly into the next without any noticeable demarcation lines, let alone gaps? Practically all physicists, even those who toy with the idea that Space-time is in some sense ‘grainy’, seem to be stuck with the concept of a continuum. “There is time, but there is not really any notion of a moment in time. There are only processes that follow one another by causal necessity”, as Lee Smolin puts it in Three Roads to Quantum Gravity.

But I cannot see how this can possibly be the case, and this is precisely why the ‘time dimension’ of the Event Locality is so different from the spatial one. If I shift my attention from two items in a landscape, from a rock and its immediate neighbourhood to a tree, there is no sense that the tree displaces the rock : the two items can peaceably co-exist and do not interfere with each other. But if one moment follows another, it displaces it, pushes it out of the way, as it were, since past and present moments, prior and subsequent events, cannot by definition co-exist ─ except perhaps in the inert way they might be seen to co-exist in an Einsteinian perpetual now. And all the attributes and particular features of a given moment must themselves disappear to make way for what follows. We do not usually see this happening, of course, because most of the time the very same objects are recreated and our senses do not register the transition. We only notice change when a completely different physical feature replaces another one, but the same principle must apply even if the same feature is recreated identically. Since a single moment is, in its physical manifestation, three-dimensional, all these three dimensions must go if a new moment comes into being.

Whether there is an appreciable gap between moments apart from there being a definite change is an open question. In the first sketch of Ultimate Event Theory I attribute a fixed extent to the minimal temporal interval, the ksana, and I allow for the possibility of flexible gaps between ksanas. The phenomenon of  time dilation is interpreted as the widening of the gap between ksanas rather than as an extension of the ‘length’ of a ksana itself. This feature, however, is not absolutely essential to the general theory.

What we actually perceive and consider to constitute  a ‘moment’ is, of course, a block containing millions of ksanas since the length of a ksana must be extremely small (around the Planck scale). However, it would seem that ksanas do form blocks and that there are transitions between blocks and that sometimes, if only subliminally, we are aware of these gaps. Instead of being a flowing river, ‘time’ is more like beads on a string though the best image would be a three-dimensional shape pricked out in coloured lights that is switched on and off incessantly.

Mosaic Time

Temporal succession is either a real feature of the world or it is not; I cannot see that there is a possible third position. In Einstein’s universe “everything that can have occurrence already has occurrence”, to put things in event terms. “In the ‘block universe’ conception of general relativity….the present moment has no meaning ─ all that exists is the whole history of the universe at once, timelessly. When laws of physics are represented mathematically, causal processes which are the activity of time are represented by timeless logical implications…. Mathematical objects, being timeless, don’t have present moments, futures or pasts” (Lee Smolin, ‘It’s Time to Rewrite Time’, New Scientist, 20 April 2014).

This means that there is no free will since what has occurrence cannot be changed, cannot be ‘de-occurred’. It also makes causality redundant as Lee Smolin states. One could indeed focus on certain pairs of events and baptise them ‘cause and effect’ but, since they both have occurrence, neither of them has brought the other about, nor has a third ‘previous’ event brought both of them about simultaneously. Causality becomes of no account since it is not needed.

Even a little acquaintance with Special Relativity leads one to conclude that it is impossible to establish a universally valid ‘now’. Instead we have the two light cones, one leading back to the past and one to the future (the observer’s future), and a large region classed as ‘elsewhere’. It is notorious that the order of events in ‘elsewhere’, viewed from inside a particular light cone, is not fixed for all observers : for one observer it can be said that event A precedes event B and for another that event B precedes A. This indeterminacy is of little or no practical consequence since there is (within SR) no possibility of interaction between the two regions. However, it does mean that it is on the face of it impossible to speak of a universally valid ‘now’ ─ although physicists do use expressions like the “present state of the universe”.

I personally cannot conceive of a ‘universe’ or a life or indeed anything at all without succession being built into it : the timeless world of mathematics is not reality but a ‘take’ on reality. The only way to conceptually save succession while accepting some of the more secure aspects of Relativity would seem to be to have some sort of ‘mosaic time’, physical reality split up into zones. How exactly these zones, which are themselves subjective in that they depend on a real or imagined ‘observer’, fit together is not a question I can answer though certain areas of research into general relativity can presumably be taken over into UET.  One could perhaps define the next best thing to a universal ‘now’ by taking a weighted average of all possible time zones : Eddington suggested something along these lines though he neglected to give any details. Note that if physical reality is a mosaic rather than a continuum, it would in principle be possible to shift the arrangement of particular tesserae in a small way, exchange one with another and so on.                     SH 23/01/15

 

 Note 1 Time was left out of the picture for so long, or at any rate neglected, because the first ‘science’ to be developed to a high degree of precision in the West was geometry. And the truths of (Euclidian) geometry, if they are truths at all, are ‘timeless’ which is why Plato prized geometry above all other branches of knowledge except philosophy. Inscribe a triangle in a circle with the diameter as base line and you will always find that it is right-angled. And if you don’t, this is to be attributed to careless drawing and measurement : in an ‘ideal’ Platonic world such an angle has to be a right angle. How do we know? Because the theorem has been proved.

This concentration on space rather than time meant that although the Greeks set out the basic principles of statics, the West had to wait another 1,600 years or so before Galileo more or less invented the science of dynamics from scratch. And the prestige of Euclid and the associated static view of phenomena remained so great that Newton, perversely to our eyes, cast his Principia into a cumbrous geometrical mould using copious geometrical diagrams, even though he had already invented a ‘mathematics of motion’, the Calculus.

Note 2  Kant did in point of fact defend the idea of ‘simultaneous causation’, where each of two ‘simultaneous’ events affects the other ‘at the same time’. He gave the example of a ball resting on a cushion, arguing that the ball presses down on the cushion for the same amount of time as the cushion is deformed by the presence of the ball. And if we take Newton’s Third Law as operating on two different objects at exactly the same time, we have to accept the possibility of simultaneous causation.

Within Ultimate Event Theory, what would normally be called ‘causality’ is (sometimes) referred to as ‘Dominance’. I chose this term precisely because it signifies an unequal relation between two events, with one event, referred to as the ‘cause’, as it were ‘dominating’ the other, the ‘effect’. In most, though perhaps not all, cases of causal relations I believe there really is priority and succession despite Newton’s Third Law. I would conceive of the ball pressing on the cushion as occurring at least a brief moment before its effect ─ though this is admittedly debatable. One could introduce the category of ‘Equal Dominance’ to cover cases of Kant’s ‘simultaneous causation’ between two events.

Note 3  I have always found the idea of Flatland, which is routinely trotted out in popular books on Relativity, completely unconvincing. I can more readily conceive of there being more than three spatial dimensions than of there being a world with fewer than three : a line, any line, always has some width and height.

Note 4  If it is possible for an event in the future to have an effect ‘now’, this can only be because the ‘future’ event has somehow already occurred, whereas intermediate events between ‘now’ and ‘then’ have not. I cannot conceive of a ‘non-event’ having any kind of causal repercussion ─ except, of course, in the trivial sense that current wishes or hopes about the future might affect our behaviour. Such wishes and desires belong to the present or recent past, not to the future.