A ksana is the minimal temporal interval : within the space of a ksana one and only one ultimate event can have occurrence. There can thus be no change whatsoever within the space of a ksana — everything is at rest.
In Ultimate Event Theory every ultimate event is conceived to fill a single spot on the Locality (K0) and every such spot has the same extent, a ‘spatial’ extent which includes (at least) three dimensions and a single temporal dimension. A ksana is  the temporal interval between the ‘end’ of one ultimate event and the ‘end’ of the next one. Since there can be nothing smaller than an ultimate event, it does not make too much sense to speak of ‘ends’, or ‘beginnings’ or ‘middles’ of ultimate events, or their emplacements, but, practically speaking, it is impossible to avoid using such words. Certainly the extent of the spot occupied by an ultimate event is not zero.
The ksana is, however, considerably more extensive than the ‘vertical’ dimension of the spot occupied by an ultimate event. Physical reality is, in Ultimate Event Theory, a ‘gapped’ reality and, just as an atom is apparently mainly empty space, a ksana is mainly empty time (if the term is allowed). Thus, when evaluating temporal intervals, the ‘temporal extent’ of the ultimate events that have occurrence within this interval can, to a first approximation, be neglected. As to the actual value of a ksana in terms of seconds or nanoseconds, this remains to be determined by experiment, but certainly the extent of a ksana must be at least as small as the Planck time, roughly 5.39 × 10–44 seconds.
A ‘full’ event-chain is a succession of bonded ultimate events within which it would not be possible to fit in any more ultimate events. So if we label the successive ultimate events of a ‘full’ event-chain 0, 1, 2, 3……N, there will be as many ksanas in this temporal interval as there are ultimate events.
Suppose we have a full event-chain which, in its simplest form, may be just a single ultimate event repeated identically at or during each successive ksana. Such an event-chain can be imaged as a column of dots where each dot represents an ultimate event and the space in between the dots represents the gap between successive ultimate events of the chain. Thus, using the standard spacing of 2.5 on this computer, we have

•

•

•

Now, although the ‘space’ occupied by all ultimate events is fixed and an absolute quantity (true for ‘all inertial and non-inertial frames’ if you like), the spacing between the spots where ultimate events can occur, both ‘laterally’ — laterally is to be understood as including all three normal spatial dimensions — and vertically, i.e. in the temporal direction, is not constant but variable. So, although the spots where ultimate events can occur have fixed (minuscule) dimensions, the ‘grid-distance’, the distance between the closest spots which have occurrence within the same ksana, varies, and so does the temporal distance between successive ultimate events of a full event-chain. The ksana thus varies in extent. However, there is, by hypothesis, a minimum value for both the grid-distance and the ksana. The minimal value of both is attained whenever we have a completely isolated event-chain. In practice, there is no such event-chain any more than, in traditional physics, there is a body that is completely isolated from all other bodies in the universe. However, these minimal values can be considered to be attained for event-chains that are sufficiently ‘far away’ from all other chains. And, more significantly, these minimal values apply whenever we have a full regular event-chain considered in isolation from its event environment.
The most important point, that cannot be too strongly emphasized, is that although the number of ultimate events in an event-chain, or any continuous section of an event-chain, is absolute, the interval between successive events varies from one chain to another, though remaining constant within a single event-chain (providing it is regular). Unless stated otherwise, by ‘ksana’ I mean the interval between successive ultimate events in a ‘static’ or isolated regular event-chain. This need not cause any more trouble than the concept of intervals of time in Special Relativity where ‘time’ is understood to mean ‘proper time’, the ‘time’ of a system at rest, unless a contrary indication is given.
Thus, the ‘vertical’ spacing of events in different chains can and does differ and the minimal value will be represented by the smallest spacing available on the computer I am using. I could, for example, increase the spacing from the standard to

•           or to                     •

•                                     •


S.H. 11/7/13

General Laws :  I suspect that there are no absolutely general ‘laws of Nature’, no timeless laws such as those given by a mathematical formula : such a formula at best only indicates norms or physical constraints. Of all so-called laws, however, the most general and the most solidly established are arithmetic (not physical) laws, rules based on the properties of the natural numbers. To this extent Pythagoras was in the right.

Platonic Forms  Plato was also essentially right in proclaiming the need for ‘ideal’ forms : patterns which are not themselves physical but which dictate the shape and behaviour of physical things. But he was wrong to see these patterns as geometrical, and thus both static and timeless (the two terms are equivalent). With one or two exceptions contemporary science has done away with Platonic Forms though it still puts mathematics in the supreme position.
In practice, I do not see how one can avoid bringing in a secondary ‘ideal’ domain which has a powerful effect on actual behaviour. In Ultimate Event Theory, associations of events and event-chains, once they have attained a critical point, bring into existence ‘event schemas’ which from then on dictate the behaviour of similar collections of events. From this point onwards they are ‘laws’ to all intents and purposes but there was a time when they did not exist and there will perhaps be a future time when they will cease to be operative.
Random Generation   Take the well-known example of interference patterns produced by photons or electrons on a blank screen. It is possible to fire off these ‘particles’ one at a time so that the pattern takes shape point by point, or pixel by pixel if you like. At first the dots are distributed randomly and in different experiments the pattern builds up differently. But the final pattern, i.e. distribution of dots, is identical ─ or as nearly identical as experiment allows. This makes no kind of sense in terms of traditional physics with its assumption of strict causality. The occurrence of a particular event, a dot in a particular place, has no effect whatsoever (as far as we can tell) on the position of the next dot. So the order of events is not fixed even though the final pattern is completely determinate. So what dictates which event comes next? ‘Chance’ it would seem. But nonetheless the eventual configuration is absolutely fixed. This only makes sense if the final configuration follows an ‘event schema’ which does, in some sense, ‘exist’ though it has no place in the physical universe. This is a thoroughly Platonic conception.
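To make the point concrete, here is a minimal Python sketch (an editorial illustration only; the fringe profile, seeds and sample sizes are arbitrary assumptions, not part of the theory). Dots are drawn one at a time from a fixed double-slit-like intensity profile: two runs with different seeds build up in entirely different orders, yet end with virtually the same histogram.

    import random
    import math

    def fringe_sample(rng):
        # Draw one dot position from a double-slit-like intensity
        # profile I(x) ~ cos^2(x) on [-pi, pi], by rejection sampling.
        while True:
            x = rng.uniform(-math.pi, math.pi)
            if rng.random() < math.cos(x) ** 2:
                return x

    def build_pattern(seed, n_dots=50_000, n_bins=20):
        # Accumulate dots one at a time and bin them into a histogram.
        rng = random.Random(seed)
        counts = [0] * n_bins
        for _ in range(n_dots):
            x = fringe_sample(rng)
            i = int((x + math.pi) / (2 * math.pi) * n_bins)
            counts[min(i, n_bins - 1)] += 1
        return counts

    # Two seeds: the dots arrive in completely different orders, yet
    # the final histograms very nearly coincide.
    a, b = build_pattern(1), build_pattern(2)
    print(max(abs(x - y) for x, y in zip(a, b)) / 50_000)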

 Ultimate Reality   Relatively persistent patterns on an  underlying invisible ‘substance’ ─ that is all there is in the last resort. Hinduism was quite right to see all this as an essentially purposeless, i.e. gratuitous, display ─ the dance of Shiva. Far from being disheartening, this perspective is inspiring. It is at the opposite extreme both to the goal-directed ethos of traditional Christianity ─ the goal being to ‘save’ your soul ─ and to the drearily functional universe of contemporary biology where everything is perpetually seeking a fleeting  physical advantage over competitors.
What, then, is the difference between the organic and the inorganic?  Both are persistent, the inorganic more so than the organic. Without a basic ‘something’, nothing visible or tactile could or does exist. Without persistence there would be no recognizable patterns, merely noise, random flashes of light emerging from the darkness and subsiding into darkness after existing for a moment only. ‘Matter’ is an illusion, a mental construct : patterns of light (radiation) emerging and disappearing, that is all there is.

Dominance  The ‘universe’ must be maintained by some sort of force, otherwise it would collapse into nothingness at any moment. For Descartes this force came from God; Schopenhauer viewed it as something inherent in Nature, what he called ‘Will’, which he saw as entirely negative, indeed monstrous. This ‘force’ is what I term dominance, the constraining effect one event or event-chain has on another (including on itself), and without it everything would slow down and very soon disappear without leaving a trace. Take away Schopenhauer’s Will, the force of karma, and this is what would happen ─ and in the Buddhist world schema will eventually happen. For Buddhism, the natural state of everything is rest, inaction, and the universe came about because of some unexplained disturbance of the initial state of rest, indeed is this disturbance. Subsequently, it is as if the ‘universe’ were frantically trying to get back to its original state of complete rest but by its ceaseless striving is precisely making this goal more and more unattainable.

Disappearance  In both traditional and contemporary physics, it is impossible for an object to simply disappear without leaving a trace. The dogma of the conservation of mass/energy says that nothing ever really disappears, merely changes its form. However, according to Ultimate Event Theory, ultimate events are appearing and disappearing all the time and they need no ‘energy’ to do this. Certain of these ultimate events produced at random eventually coalesce into repeating event-chains we perceive as solids or liquids because they acquire ‘persistence’ or self-dominance, but it is conceivable that they can, in certain exceptional circumstances, lose this property and in such a case they will simply stop reappearing.
Are there any genuine cases where objects have completely disappeared in this way? The only evidence would seem to be anecdotal : one hears of certain Hindu magic-men who are able to make small objects disappear and reappear in a different place, but it is, of course, difficult to distinguish genuine magic from the stage variety. And any such alleged cases rarely if ever get investigated by scientists since the latter are terrified of being accused of credulity or worse. Professor Taylor, who investigated Uri Geller, was told by colleagues that no reputable scientist would do such a thing. Clearly, if one is not allowed to investigate a phenomenon it has no chance of ever being verified, which is what the rationalist/scientific lobby desires.
Contemporary science and rationalist thinking implicitly assume that ‘real’ entities, while they actually exist, exist continuously ─ in fact the previous statement would be regarded as so obvious as to be hardly worth stating. But in UET nothing exists for more than an instant (ksana) and entities that seem to exist for a ‘long time’ are in reality composed of repeating ultimate events strongly bonded together. If reality is ‘gapped’, as UET affirms, all so-called objects alternately appear and disappear (though so rapidly that we do not notice the change), so there is much less of a problem involved in making something disappear. Instead of actually destroying the object in some way (and in the destructive process transferring the object’s mass into different mass or pure energy), it would simply be sufficient to prevent an event cluster from reappearing, which is not quite so hard to imagine. In UET, an apparent object reappears regularly because it possesses ‘self-dominance’; if it could be made to lose this property, it would not reappear, i.e. would disappear, and it would not necessarily leave any trace. Moreover, to make something disappear in this manner, it would not be necessary to use any kind of physical force, high temperature, pressure and so on. To say that the theoretical possibility is there is not, of course, the same thing as saying that a supposed occurrence actually takes place : that is a matter of experiment and observation. In my unfinished SF novel The Web of Aoullnnia, devotees of a mystical sect called the Yther are not only convinced that the entire universe is going to disappear into the nothingness from which it emerged, but believe that they should hasten this progressive movement, which they call Aoullnnia-yther, where yther means ‘ebbing’, ‘withdrawal’, hence the name of the sect. Although contemporary Buddhists do not usually put it quite so starkly, essentially the aim of Buddhism is to return the entire universe to an entirely quiescent state “from which it never will arise again”.

On the other hand, deliberately bringing something into existence from nothing is just as inconceivable in Ultimate Event Theory as in contemporary physics, maybe more so.                  SH  22/5/13


 

Aristotle   Aristotle considered ‘story line’ and ‘plot’ to be the key ingredients in drama, not character. More precisely:

“Tragedy is not an imitation of persons, but of actions and of life. (…) The events, i.e. the plot, are what the tragedy is there for, and that is the most important thing of all.”  Aristotle, Poetics 4.3

           Aristotle emphasizes that it is the ‘sequence of events’ not the end result that is important : in my terms he is concerned with an event-chain as an unfolding sequence rather than with the last event of this chain. What makes a good plot in both ancient myth and current Hollywood movies, is not the conclusion which was, in the case of myth, generally known by the audience (fall of Troy) and, in the case of action movies, predictable (the hero generally survives). It is the unravelling, the manner in which the action unfolds that counts.
Aristotle believed (rightly) that drama should concern itself with what we would today call archetypal (human) situations rather than actual events which, he says, are the subject of history rather than ‘poetry’ — by ‘poetry’ we must understand poetic drama.

“…Poetry tends to express universals and history particulars. The universal is the kind of speech or action which is consonant with a person of a given kind in accordance with probability or necessity; this is what poetry aims at, even though it applies individual names.”   Aristotle, Poetics 5.5 

Such archetypal plots (sequences of events) have a feeling of inner necessity about them and this makes them  compelling — compelling to watch (because we identify ourselves in what is represented on the stage) but, more significantly, compelling for the characters themselves who are enmeshed in sequences of events over which they have, as the drama progresses, less and less control. This is precisely the feature of event-chains that, in the jargon of Ultimate Event Theory, I call ‘dominance’. The archetypal stories of Adam and Eve, the homecoming of Ulysses and so on have a ‘power’ which other stories, real or imagined, lack. In terms of Eventrics, they have more ‘dominance’, an event-chain’s dominance being its capacity to make other event-chains conform to its own pattern.
Aristotle, interestingly, believes that it is more important that a certain sequence of events in a drama should be ‘probable’, i.e. ‘contain an inner necessity’, than that it should have really taken place.

“…The function of the poet [dramatist] is not to say what has happened, but to say the kind of thing that would happen, i.e. what is possible in accordance with probability or necessity.”     Aristotle, Poetics 5.5 

Rather than fiction being justified because it is ‘plausible’ (could have happened), it is to be justified with regard to basic event-chains because the latter have (in my terms) more ‘dominance’ even though they never took place. Inasmuch as the ‘poet’ (read dramatist) deals at all with actual past or present occurrences, he should concentrate on those which possess this extra potency which most mundane events lack.

“…there is nothing to prevent some of the things which have happened from being the kind of thing which probably would happen, and it is in this respect that he [the author] is concerned with them as a poet.”  Aristotle, Poetics 5.5 

A few historical sequences of events do indeed seem to have a ‘rightness’ about them which is extremely satisfying and which is much more typical of myth than everyday reality. The true story of Antony and Cleopatra attains the universality of myth, as does, nearer our own time, the career of Rommel or indeed that of Hitler himself. This ‘rightness’ is an aesthetic (or perhaps logical?) criterion rather than a moral one though we do have the revealing concept of ‘poetic justice’. It seems ‘right’ in a sort of cosmic sense that Macbeth should be caught in his own noose and he even shares this view himself, hence his final speech (“Life’s but a walking shadow…..” &c.). It is also ‘right’ that Lear should pay for his folly and insupportable treatment of Cordelia, that Mark Antony should commit suicide &c. &c. It was ‘right’ in a sense that goes beyond law and ethics that all the Nazi leaders except Speer should have either committed suicide or been executed — and right that Speer should not have been executed since he was the only one of the prisoners at Nuremberg who pleaded guilty.
Within Eventrics, there is the notion of a secondary, manufactured domain called the Manifest Non-Occurrent, which is composed, not of actual events, but of event-schemas which have spontaneously evolved out of myriads of actual events. These schemas are subsequently able to influence and even dictate much of what goes on in the physical world. We are familiar with the idea of Jungian and Freudian archetypes but most of the so-called ‘laws of nature’ are, at bottom, of the same character, that is, they are ‘schemas’ and not eternal ‘laws’. In the past scientists thought that physical ‘laws’ were laid down by God and so could not be changed and, moreover, that they had no capacity themselves to evolve. But there is no need to believe this today when most scientists have dispensed with the necessity of a lawgiver. Of all the so-called laws, the most basic would seem to be arithmetic (rather than physical) laws : in this respect Pythagoras was not far from the truth. A universe could function perfectly well with quite different values for many constants (and maybe does so somewhere at this moment) but it is hard to imagine any universe where the basic properties of the natural numbers were not upheld.
Archetypal event-chains do not seem to evolve if left to themselves. If and when they re-emerge, they do so in their previous form and affect contemporary people in much the same way as they affected people who lived hundreds or thousands of years earlier.
It is, according to Eventrics, not gods, heroes or even ‘great men’ who drive history, but neither is it impersonal deterministic ‘physical laws’ : it is dominant event-chains. These schemas are themselves in many cases the result of human effort but, once fully formed, they exert a powerful influence over the subsequent behaviour of living people. In this sense, one can say they are ‘alive’, if by being alive we mean exerting an influence; they even ‘reproduce’ in the sense of producing copies of themselves. Classic tragic sequences of events produce actual tragedies, or at least the mould into which occurring tragedies tend to fall. But the event schemas are not alive in the sense of being able to change into something else. It is not surprising that Plato, a rational visionary, considered his ‘Ideas’ to be fixed and unchanging. But he was wrong on two counts, firstly because he concentrated his attention on static forms rather than on sequences of events (Aristotle’s plots) and, secondly, because these archetypal schemas can lose their effectiveness as and when actual events depart more and more from the basic mould.
Successful men and women in politics, business, warfare and most active professions align themselves, consciously or unconsciously, with already existing event-chains which they bring back to life, as it were. This is what Hitler and the Nazis did with phenomenal success, resurrecting all sorts of powerful ancient symbols and battle schemas, even to the extent of facilitating their own destruction since in Norse mythology the gods are not immortal and all-powerful but actually die at Ragnarök, the Twilight of the Gods.

 


 

It is said that certain Gnostic sects which flourished in North Africa during the first few centuries of our era not only encouraged but actually required candidates to give a written or verbal account of how they thought the universe began (Note 1). It would be interesting to know what these people came up with and, most likely, amongst a great deal of chaff there were occasional anticipations of current scientific theories. It is mistaken to imagine that great ideas go hand in hand with experimentation and mathematical implementation : on the contrary, important ideas often predate true discovery by centuries or even millennia. Democritus’ atomic theory (Vth century BC) could not possibly have been ‘proved’ prior to modern times and he certainly could not have put it in quantum or even Newtonian mathematical form. Similarly, one or two brave people put forward the germ theory of disease while the ‘miasmic’ theory was still orthodoxy ─ and were usually dismissed as cranks.
As a body of beliefs, ‘science’ is currently entering a period of consolidation comparable to that experienced by the early Church after its final victory over paganism. Materialism has decisively vanquished idealism and religion is no longer a force to be reckoned with, at least in the West. Along with increasing potency and accuracy goes a certain narrowing of focus and a growing intolerance : science is now a university phenomenon with all that this implies and no longer a ‘pastime of leisured persons’. To some extent, this tendency towards orthodoxy is inevitable, even beneficial : as someone said, it doesn’t matter too much if a poet departs from the prescribed form of a sonnet, but it may matter a great deal if a bridge-builder uses the wrong equations. Nonetheless, there are warning signs : ‘scientific correctness’ has replaced not only free enquiry but the very idea of scientific validity. Professional scientists worry, not so much about whether their results are flawed or their theories tentative, as about whether they are going to get in trouble with the establishment, and offending the latter can have grave career and financial consequences.

        It is true that free, indeed often extremely erratic, speculation is still allowed in certain areas, especially cosmology and particle physics. But it is subject to certain serious constraints. Firstly, it is only permitted to persons who already hold more than one degree and who are able to couch their theories in such abstruse mathematics that journals find it difficult to find anyone to peer review the work. Is not this how it should be? Maybe not. Certainly, you are likely to need some knowledge of a subject before cobbling together a theory, but there is such a thing as knowing too much. Once someone has been through the mill and spent years doing things in the prescribed manner, it is well-nigh impossible to break out of the mental mould ─ and this is most likely the reason why really new ideas in science come from people in their twenties (Einstein, Heisenberg, Dirac, Gamow et al.), not because of any miraculous effect of youth as such.

        So. Where’s all this leading? I didn’t do science at university or even at school, which puts me in many respects at an enormous disadvantage, but this has certain good aspects as well. I have no vested interest in orthodoxy and only accept something because I am convinced that it really is true, or is at least the best theory going for the time being. Almost all current would-be innovators in science, however maverick they may appear at first sight, take on board certain key doctrines of modern science such as the conservation of energy or the laws of thermo-dynamics. But one might as well be killed for a sheep as a lamb and I have finally decided to take the plunge and, instead of trying to fit my ideas into an existing official framework, to swim out into the open sea, starting as far back as possible and assuming only what seems to be essential. I originally envisaged ‘Ultimate Event Theory’ as a sort of ‘new science’ but now realize that what I really have been trying to do is give birth to a new ‘paradigm’ ─ a ‘paradigm’ being a systematic way of viewing the world or reality. Should this paradigm ever come to fruition, it will engender new sciences and new technologies, but the first step is to start thinking within a different framework and draw conclusions. In other words, one is obliged to start with theory ─ not experiment or mathematics, though certainly I hope eventually experiments will give support to the key concepts and that a new symbolic system will be forthcoming (Note 2).

       Four Paradigms

To date there have been basically four ways of viewing the world, four all-englobing ‘paradigms’ : (1) the Animistic paradigm; (2) the Mechanistic paradigm; (3) the Information paradigm; and (4) the Event paradigm.
According to (1) the universe is full of life, replete with ‘beings’ in many respects like ourselves inasmuch as ‘they’ have emotions and wills and cause things deliberately to happen. This conception goes far beyond mere belief in a pantheon of gods and goddesses : as Thales is supposed to have said, if a lodestone draws a piece of iron it is exercising ‘will’ and “All things are full of gods”. This world-view lasted a very long time and, even though it is largely discredited today, it still has plenty of life  left in it which is why we still speak of ‘charm’, ’charisma’, ‘fate’, and so on and why, despite two centuries of rationalistic propaganda, most of the population still believes in ‘jinxes’ and in ‘spirits’ (as I myself do at least part of the time).
The countless deities and “thrones, principalities and powers” against whom Saint Paul warns the budding Christian eventually gave way to a single all-powerful Creator God who made the world by a deliberate act of will. In its crudest form, Mechanism views the universe as a vast and complicated piece of clockwork entirely controlled by physical and mathematical laws, some of which we already know. No living things of any sort here unless we make an exception for humanity and, even if we do make such an exception, it is hard to see how free will can enter the picture. Modern science has dispensed with the Creator but retained the mechanistic vision, somewhat updated by quantum uncertainty and other exotic side effects.
The invention of the computer and its resounding success sometimes seems to be ushering in a new paradigm: the universe is an enormous integrated circuit endowed with intelligence of a sort and we are the humble bits. Seductive though this vision is in certain respects, it is not without serious dangers for the faithful since it looks disturbingly like a sort of reversion to the most ancient paradigm of all, the animistic one ─ the universe is alive and capable of creating itself and everything else out of itself.
The paradigm that I am working with harks back to certain Indian Buddhist thinkers of the early centuries AD though I originally discovered it for myself when I knew nothing about Buddhism and Taoism. No Creator God, no matter or mind as such, only evanescent point-like entities (‘dharmas’, ‘ultimate events’) forming relatively persistent patterns on a featureless backdrop which will eventually be returned to the original emptiness (‘sunyata’) from which the “thousand things” emerged.

Broad schema of Eventrics 

Following my own instincts and the larger cosmology of Taoism and other mystical belief systems, I divide reality into two broad categories, what I call the Manifest and the Unmanifest, each of which is further divided into two, the Non-Occurrent and the Occurrent. If one feels more comfortable with a symbolic notation, we can speak of K0 and K1 with further regions K00 and K01, K10 and K11. Of the Unmanifest Non-Occurrent, K00, little need or can be said. It is the ultimate origin of everything, the original Tao, Ain Soph (‘the Boundless’) of Jewish mysticism, the Emptiness of nirvana, the vacuum of certain contemporary physical theories (perhaps).

(To be continued)

Note 1  As soon as Christianity, or a particular version of it, became the official religion of the declining Roman Empire, all such cosmological speculation was actively discouraged and penalized.

Are there/can there be events that are truly random?
First of all we need to ask ourselves what  we understand by randomness. As with many other properties, it is much easier to say what randomness is not than to say what it is.

Definitions of Randomness

“If a series of events or other assortment exhibits a definite pattern, then it is not random” ─ I think practically everyone would agree to this.
This may be called the lack of pattern definition of randomness. It is the broadest and also the vaguest definition but at the end of the day it is what we always seem to come back to. Stephen Wolfram, the inventor of the software programme Mathematica and a life-long ‘randomness student’  uses the ‘lack of pattern’ definition. He writes, “When one says that something seems random, what one usually means is that one cannot see any regularities in it” (Wolfram, A New Kind of Science p. 316). 
        The weakness of this definition, of course, is that it offers no guidance on how to distinguish between ephemeral patterns and lasting ones (except to keep on looking) and some people have questioned whether the very concept of ‘pattern’ has any clear meaning. For this reason, the ‘lack of pattern’ definition is little used in science and mathematics, at least explicitly.

The second definition of randomness is the unpredictable definition and it follows on from the first since if a sequence exhibits patterning we can usually tell how it is going to continue, at least in principle. The trouble with this definition is that it has nothing to say about why such and such an event is unpredictable, whether it is unpredictable simply because we don’t have the necessary  information or for some more basic reason. Practically speaking, this may not make a lot of difference in the short run but, as far as I am concerned, the difference is not at all academic since it raises profound issues about the nature of physical reality and where we stand on this issue can lead to very different life-styles and life choices.

The third definition of randomness, the frequency definition goes something like this. If, given a well-known and well-defined set-up, a particular outcome, or set of outcomes, in the long run crops up just as frequently (or just as infrequently for that matter) as any other feasible outcome, we class this outcome as ‘random’ (Note 1). A six coming up when I throw a dice is a typical example of a ‘random event’ in the frequency sense. Even though any particular throw is perfectly determinate physically, over thousands or millions of throws, a six would come up no more and no less than any of the other possible outcomes, or would deviate from this ‘expected value’ by a very small amount indeed. So at any rate it is claimed and, as far as I know, experiment fully supports this claim.
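A minimal Python sketch of the frequency definition (the seed and number of throws are arbitrary choices for illustration): over a long run each face of the dice turns up with a relative frequency close to 1/6.

    import random
    from collections import Counter

    rng = random.Random(42)
    N = 600_000
    throws = Counter(rng.randint(1, 6) for _ in range(N))

    # Each face is 'random' in the frequency sense: its relative
    # frequency settles close to 1/6, the deviation shrinking as the
    # number of throws grows.
    for face in range(1, 7):
        print(face, round(throws[face] / N, 4))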
It is the frequency definition that is usually employed in mathematics and mathematicians are typically always on the look-out for persistent deviations from what might be expected in terms of frequency. The presence or absence of some numerical or geometrical feature without any obvious reason suggests that there is, or at any rate might be, some hidden principle at work (Note 2).
The trouble with the frequency definition is it is pretty well useless in the real world since a vast number of instances is required to ‘prove’ that an event is random or not  ─ in principle an ‘infinite’ number ─ and when confronted with messy real life situations we have neither the time nor the capability to carry out extensive trials. What generally happens is that, if we have no information to the contrary, we assume that a particular outcome is ‘just as likely’ as another one and proceed from there. The justification for such an assumption is post hoc : it may or may  not ‘prove’ to be a sound assumption and the ‘proof’ involved has nothing to do with logic, only with the facts of the matter, facts that originally we do not and usually cannot know.

The fourth and least popular definition of randomness is the causality definition. For me, ‘randomness’ has to do with causality ─ or rather the lack of it. If an event is brought about by another event, it may be unexpected but it is not random. Not being a snooker player I wouldn’t bet too much money on exactly what is going to happen when one ball slams full pelt into the pack. But, at least according to Newtonian Mechanics, once the ball has left the cue, whatever does happen “was bound to happen” and that is that. The fact that the outcome is almost certainly unpredictable in all its finest details even for a powerful computer is irrelevant.
The weakness of this definition is that there is no foolproof way to test the presence or absence of causality: we can at best only infer it and we might be mistaken. A good deal of practical science is taken up with distinguishing between spurious and genuine cases of causality and, to make matters worse,   philosophers such as Hume and Wittgenstein go so far as to question whether this intangible ‘something’ we call causality is a feature of the real world at all. Ultimately, all that can be said in answer to such systematic sceptics is that belief in causality is a psychological necessity and that it is hard to see how we could have any science or reliable knowledge at all without bringing causality into the picture either explicitly or implicitly. I am temperamentally so much a believer in causality that I view it as a force, indeed as the most basic force of all since if it stopped operating in the way we expect life as we know it would be well-nigh impossible. For we could not be sure of the consequences of even the most ordinary actions; indeed if we could in some way voluntarily disturb the way in which causes and effects get associated, we could reduce an enemy state to helplessness much more rapidly and effectively than by unleashing a nuclear bomb. I did actually, only half-facetiously, suggest that the Pentagon would be advised to do some research into the matter ─ and quite possibly they already have done. Science has not paid enough attention to causality, it tends either to take its ‘normal’ operation for granted or to dispense with it altogether by invoking the ‘Uncertainty Principle’ when this is convenient. No one as far as I know has suggested there may be degrees of causality or that there could be an unequal distribution of causality amongst events.

Determinism and indeterminism

Is randomness in the ‘absence of causality’ sense in fact possible?  Not so long ago it was ‘scientifically correct’ to believe in total determinism and Laplace, the French 19th century mathematician, famously claimed  that if we knew the current state of the universe  with enough precision we could predict its entire future evolution (Note 3). There is clearly no place for inherent randomness in this perspective, only inadequate information.
Laplace’s view is no longer de rigueur in science, largely because of Quantum Mechanics and Chaos Theory. But the difference between the two world-views has been greatly exaggerated. What we get in Quantum Mechanics (and other branches of science not necessarily limited to the world of the very small) is generally the replacement of individual determinism by so-called statistical determinism. It is, for example, said to be the case that a certain proportion of the atoms in a radio-active substance will decay within a specified time, but which particular atom out of the (usually very large) number in the sample actually will decay is classed as ‘random’. And in saying this, physics textbooks usually mean, not merely that such an event is unpredictable in practice, but that it is genuinely unknowable, thus indeterminate.
But what exactly is it that is ‘random’? Not the micro-events themselves (the  radio-active decay of particular atoms) but only their order of occurrence. Within a specified time limit half, or three quarters or some other  proportion of the atoms in the sample, will have decayed and if you are prepared to wait long enough the entire sample will decay. Thus, even though the next event in the sequence is not only unpredictable for practical reasons but actually indeterminate,  the eventual outcome of the entire sample is completely determined and, not only that, completely predictable !
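A toy simulation makes this vivid (a hedged Python sketch; the half-life, sample size and seeds are invented for illustration, not drawn from any real isotope). Which atom decays at which step differs from run to run, yet after one half-life almost exactly half the sample remains every time.

    import random

    HALF_LIFE = 100                          # in arbitrary time steps
    P_DECAY = 1 - 0.5 ** (1 / HALF_LIFE)     # per-step decay probability

    def survivors_after_one_half_life(seed, n_atoms=100_000):
        # Each atom decays at a random, individually 'uncaused' moment;
        # count how many survive one half-life.
        rng = random.Random(seed)
        alive = n_atoms
        for _ in range(HALF_LIFE):
            alive -= sum(1 for _ in range(alive) if rng.random() < P_DECAY)
        return alive

    # The order of individual decays differs in each run, but the
    # collective outcome is fixed: every result is close to 50_000.
    print([survivors_after_one_half_life(s) for s in (1, 2, 3)])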
Normally, if one event follows another we assume, usually but not always with good reason, that this prior event ‘caused’ the subsequent event, or at least had something to do with its occurrence. And even if we cannot specify the particular event that causes such and such an outcome, we generally assume that there is such an event. But in the case of this particular class of events, the decay of radio-active atoms, no single event has, as I prefer to put it, any ‘dominance’ over any other. Nonetheless, every atom will eventually decay : they have no more choice in the matter than Newton’s billiard balls.
Random Generation       To me, the only way the notion of ‘overall determinism without individual determinism’ makes any sense at all is by supposing that there is some sort of a schema which dictates the ultimate outcome but which leaves the exact order of events unspecified. This is an entirely Platonic conception since it presupposes an eventual configuration that has, during the time decay is going on, no physical existence whatsoever and can even be prevented from manifesting itself by my forcibly intervening and disrupting the entire procedure ! Yet the supposed schema must be considered in some sense ‘real’ for the very good reason that it has bona fide observable physical effects which the vast majority of imaginary shapes and forms certainly do not have (Note 4).

An example of something similar can be seen in the case of the development of an old-fashioned (non-digital) photograph taken in such faint light that the lens only allows one photon to get through at a time. “The development process is a chemical amplification of an initial atomic event…. If a photograph is taken with exceedingly feeble light, one can verify that the image is built up by individual photons arriving independently and, it would seem at first, almost randomly distributed in position” (French & Taylor, An Introduction to Quantum Physics pp. 88-9). This case is slightly different from that of radio-active decay since the photograph has already been taken. But the order of events leading up to the final pattern is arbitrary and, as I understand it, will be different on different occasions. It is almost as if, because the final configuration is fixed, the order of events is ‘allowed’ to be random.

Uncertainty or Indeterminacy ?

 Almost everyone who addresses the subject of randomness somehow manages to dodge the central question, the only question that really matters as far as I am concerned, which is : Are unpredictable events merely unpredictable because we lack the necessary information  or are they inherently indeterminate?
        Taleb is the contemporary thinker responsible more than anyone else for opening up Pandora’s Box of Randomness, so I looked back at his books to see what his stance on the uncertainty/indeterminacy issue was. His deep-rooted conviction that the future is unpredictable and his obstinacy in sticking to his guns against the experts would seem to be driving him in the indeterminate direction but at the last minute he backs off and retreats to the safer sceptical position of “we just don’t know”.

“A true random system is in fact random and does not have predictable properties. A chaotic system [in the scientific sense] has entirely predictable properties, but they are hard to know.” (The Black Swan p. 198 )

This is excellent and I couldn’t agree more. But he proceeds : “…in theory randomness is an intrinsic property, in practice, randomness is incomplete information, what I called opacity in Chapter 1. (…) Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.” (The Black Swan p. 198)
As far as I am concerned randomness either is or is not an intrinsic property, and the difference between theory and practice doesn’t come into it. No doubt, from the viewpoint of an options trader, it doesn’t really matter whether market prices are ‘inherently unpredictable’ or ‘indeterminate’ since one still has to decide whether to buy or not. However, even from a strictly practical point of view, there is a difference, and a big one, between intrinsic and ‘effective’ randomness.
Psychologically, human beings feel much easier working with positives than negatives, as all the self-help books will tell you, and it is even claimed that “the unconscious mind does not understand negatives”. At first sight ‘uncertainty’ and ‘indeterminacy’ appear to be equally negative but I would argue that they are not. If you decide that some outcome is ‘uncertain’ because we will never have the requisite information, you will most likely not think any more about the matter but instead work out a strategy for coping with uncertainty ─ which is exactly what Taleb advocates and claims to have put into practice successfully in his career on the stock market.
On the other hand, if one ends up by becoming convinced that certain events really are indeterminate, then this raises a lot of very serious questions. The concept of a truly random event, even more so a stream of them, is very odd indeed. One is at once reminded of the quip about random numbers being so “difficult to generate that we can’t afford to leave it to chance”. This is rather more than a weak joke. There is a market for ‘random numbers’ and very sophisticated methods are employed to generate them. The first ‘random number generators’ in computer software were based on negative feedback loops, the irritating ‘noise’ that modern digital systems are precisely designed to eliminate. Other lists are extracted from the expansion of π (which has been taken to over a billion digits) since mathematicians are convinced this expansion will never show any periodicity and indeed none has been found. Other lists are based on so-called linear congruences. But all this is in the highest degree paradoxical since these last two methods are based on specific procedures or algorithms and so the numbers that actually turn up are not in the least random by my definition. These numbers are random only by the frequency and lack-of-pattern definitions, and as for predictability the situation is ambivalent. The next number in an arbitrary section of the expansion of π is completely unpredictable if all you have in front of you is a list of numbers, but it is not only perfectly determinate but perfectly predictable if you happen to know the underlying algorithm.
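The paradox is easily exhibited with a linear congruence. The following Python sketch uses the well-known ‘Numerical Recipes’ constants (an illustration only, not the method of any particular random-number supplier): the output passes casual frequency and pattern tests, yet anyone who knows the seed and the rule can reproduce the whole sequence exactly.

    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        # Classic linear congruential generator: x -> (a*x + c) mod m.
        # The multiplier and increment are the standard Numerical
        # Recipes constants.
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = lcg(seed=2013)
    digits = [next(gen) % 10 for _ in range(30)]
    print(digits)   # patternless-looking, yet completely reproducible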

Three types of Randomness

 Stephen Wolfram makes a useful distinction between three basic kinds of randomness. Firstly, we have randomness which relies on the connection of a series of events to its environment. The example he gives is the rocking of a boat on a rough sea. Since the boat’s movements depend on the overall state of the ocean, its motions are certainly unpredictable for us because there are so many variables involved ─ but perhaps not for Laplace’s Supermind.
Wolfram’s second type of ‘randomness’ arises, not because a series of events is continuously interacting with its environment, but because it is sensitively dependent on the initial conditions. Changing these conditions even very slightly can dramatically alter the entire future of the system and one consequence is that it is quite impossible to trace the current state of a system back to its original state. This is the sort of case studied in chaos theory. However, such a system, though it behaves in ways we don’t and can’t anticipate, is strictly determinate in the sense that every single event in a ‘run’ is completely fixed in advance (Note 5).
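The logistic map is the stock example of such sensitive dependence, and a few lines of Python show the mechanism (the starting values here are arbitrary): two orbits whose initial conditions differ by one part in a billion agree at first and then diverge completely, although every step of each orbit is strictly determined.

    def orbit(x0, steps=50, r=4.0):
        # Iterate the logistic map x -> r*x*(1 - x) in its chaotic
        # regime (r = 4) and return the whole trajectory.
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = orbit(0.400000000)
    b = orbit(0.400000001)          # shifted by one part in 10^9
    for t in (0, 10, 30, 50):
        print(t, abs(a[t] - b[t]))  # the gap grows roughly like 2^t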
Both these methods of generating randomness depend on something or someone from outside the sequence of events : in the first case the randomness is imported from the far larger and more complex system that is the ocean, and in the second case the randomness lies in the starting conditions which themselves derive from the environment or are deliberately set by the experimenter. In neither case is the randomness inherent in the system itself and so, for this reason, we can generally reduce the amount of randomness by, for example, securely tethering the boat to a larger vessel or by only allowing a small number of possible starting conditions.
Wolfram’s third and final class of generators of randomness is, however, quite different since they are inherent random generators. The examples he gives are special types of cellular automaton. A cellular automaton consists essentially of a ‘seed’, which can be a single cell, and a ‘rule’ which stipulates how a cell of a certain colour or shape is to evolve. In the simplest cases we just have two colours, black and white, and start with a single black or white cell. Most of the rules produce simple repetitive patterns as one would expect, others produce what looks like a mixture of ‘order’ and ‘chaos’, while a few show no semblance of repetitiveness or periodicity whatsoever. One of these, which Wolfram classes as Rule 30, has actually been employed in Random[Integer], which is part of Mathematica, and so has proved its worth by contributing to the financial success of the programme; it has also, according to its inventor, passed all tests for randomness it has been subjected to.
Why is this so remarkable? Because in this case there is absolutely no dependence on anything external to the particular sequence which is entirely defined by the (non-random) start point and by an extremely simple rule. The randomness, if such it is to be called, is thus ‘entirely self-generated’ : this is not production of randomness by interaction with other sets of events  but is, if you like, randomness  by parthenogenesis. Also, and more significantly, the author claims that it is this type of randomness that we find above all in nature (though the other two types are also present).
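Rule 30 itself is simple enough to reproduce in a few lines of Python (a sketch of my own, not Wolfram’s code): each new cell is the old left-hand neighbour XORed with the OR of the old centre and right-hand cells, and the centre column of the evolution supplies the bit stream referred to above.

    def rule30_centre_column(steps):
        # Evolve Rule 30 from a single black cell on a white background
        # and return the centre-column cells as bits.
        width = 2 * steps + 3       # wide enough that the edges stay white
        row = [0] * width
        row[width // 2] = 1         # the single-cell 'seed'
        bits = []
        for _ in range(steps):
            bits.append(row[width // 2])
            # Rule 30: new cell = left XOR (centre OR right)
            inner = [row[i - 1] ^ (row[i] | row[i + 1])
                     for i in range(1, width - 1)]
            row = [0] + inner + [0]
        return bits

    print(''.join(str(b) for b in rule30_centre_column(64)))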

Causal Classification of types of randomness

This prompts me to introduce a classification of my own with respect to causality, or dominance as I prefer to call it. In a causal chain there is a forward flow of ‘dominance’ from one event to the next and, if one connection is missing, the event chain terminates (though perhaps giving rise to a different causal chain by ricochet). An obvious example is a row of dominoes where each knocks over the next, but one domino is spaced out a bit more and so does not get toppled. A computer programme acts essentially in the same way : an initial act activates a sequential chain of events which terminates if the connection between two successive states is interrupted.
In the environmental case of the bobbing boat, we have a sequence of events, the movements of the boat, which do not  by themselves form an independent causal chain since each bob depends, not on the previous movement of the boat, but on the next incoming wave, i.e. depends on something outside itself. (In reality, of course, what actually goes on is more complex since, after each buffeting, the boat will be subject to a restoring force tending to bring it  back to equilibrium before it is once more thrown off in another direction, but I think the main point I am making still stands.)
In the statistical or Platonic case such as the decay of a radio-active substance or the development of the photographic image, we have a sequence of events which is neither causally linked within itself nor linked to any actual set of events in the exterior like the state of the ocean. What dictates the behaviour of the atoms is seemingly the eventual configuration (the decay of half, a quarter or all of the atoms) or rather the image or anticipation of this eventual configuration (Note 6).

So we have what might be called (1) forwards internal dominance; (2) sideways dominance; and (3) downwards dominance (from a Platonic event-schema).

Where does the chaotic case fit in? It is an anomaly since, although there is clear forwards internal dominance, it seems to have a Platonic element also and thus to be a mixture of (1) and (3).

Randomness within the basic schema of Ultimate Event Theory

Although the atomic theory goes back to the Greeks, Western science during the ‘classical’ era (16th to mid 19th century) took over certain key elements from Judaeo-Christianity, notably the idea of there being unalterable ‘laws of Nature’, and this notion has been retained even though modern science has dispensed with the lawgiver. An older theory, of which we find echoes in Genesis, views the ‘universe’ as passing from an original state of complete incoherence to the more or less ordered structure we experience today. In Greek and other mythologies the orderly cosmos emerges from an original kaos (from which our word ‘gas’ is derived) and the untamed forces of Nature are symbolized by the Titans and other monstrous creatures. These eventually give way to the Olympians who, significantly, control the world from above and do not participate in terrestrial existence. But the Titans, the ancient enemies of the gods, are not destroyed since they are immortal, only held in check, and there is the fear that at any moment they may break free. And there is perhaps also a hint that these forces of disruption (of randomness in effect) are necessary for the successful functioning of the universe.
Ultimate Event Theory reverts to this earlier schema (though this was not my intention) since there are broadly three phases (1) a period of total randomness (2) a period of determinism and (3) a period when a certain degree of randomness is re-introduced.
In Eventrics, the basic constituents of everything ─ everything physical at any rate ─  are what I call ‘ultimate events’ which are extremely brief and occupy a very small ‘space’ on the event Locality. I assume that originally all ultimate events are entirely random in the sense that they are disconnected from all other ultimate events and, partly for this reason, they disappear as soon as they happen and never recur. This is randomness in the causality sense but it implies the other senses as well. If all events are disconnected from each other, there can be no recognizable pattern and thus no means of predicting which event comes next.
So where do these events come from and how is it they manage to come into being at all? They emerge from an ‘Event Source’ which we may call ‘the Origin’ and which I sometimes refer to as K0 (as opposed to the physical universe which is K1).  It is an important facet of the theory that there is only one source for everything that can and does occur. If one wants to employ the computer analogy, the Origin either is itself, or contains within itself, a random event generator and, since there is nothing else with which the Origin can interact and it does  not itself have any starting conditions  (since it has always existed), this  generator can only be what Wolfram calls an inherent randomness generator. It is not, then, order and coherence that is the ‘natural’ state but rather the reverse : incoherence and discontinuity is the ‘default position’ as it were (Note 7).
Nonetheless, a few ultimate events eventually acquire ‘self-dominance’ which enables them to repeat indefinitely more or less identically and, in a few even rarer cases, some events manage to associate with other repeating events to form conglomerates.
This process is permanent and is still going on everywhere in the universe and will continue to go on at least for some time (though eventually all event-chains will terminate and return the ‘universe’ to the nothingness from which it originally came). Thus, if you like, ‘matter’ is being created all the time though at an incredibly slow rate just as it is in Hoyle’s Steady State model (Note 7).
Once ultimate events form conglomerates they cease to be random and are subject to ‘dominance’ from other sets of events and from previous occurrences of themselves. There will still, at this stage, be a certain unpredictability in the outcomes of these associations because determinism has not yet ousted randomness completely. Later still, certain particular associations of events become stabilized and give rise to ‘event-schemas’. These ‘event-schemas’ are not themselves made up of ultimate events and are not situated in the normal event Locality I call K1 (roughly what we understand by the physical universe). They are situated in a concocted secondary ‘universe’ which did not exist previously and which can be called K2. The reader may baulk at this but the procedure is really no different from the distinction that is currently made between the actual physical behaviour of bodies which exemplify physical laws (whether deterministic or statistical) and the laws themselves which are not in any sense part of the physical world. Theoretical physicists routinely speculate about other possible universes where the ‘laws’, or more usually the constants, “are different”, thus implying that these laws, or principles, are in some sense independent of what actually goes on. The distinction is somewhat similar to the distinction between genotype and phenotype and, in the last resort, it is the genotype that matters, not the phenotype.
Once certain event-schemas have been established, they are very difficult to modify : from now on they ‘dictate’ the behaviour of actual systems of events. There are thus already three quite different categories of events (1) those that emerge directly from the Origin and are strictly random; (2) those that are brought about by previously occurring physical events and (3) events that are dependent on event-schemas rather than on other individual events.
So far, then, everything has become progressively more determined though evolving from an original state of randomness somewhat akin to the Greek kaos or the Hebrew tohu va-vohu, the original state when the Earth was “without form and void and darkness was upon the face of the deep”.
The advent of intelligent beings introduces a new element since such  beings can, or believe they can, impose their own will on events, but this issue will not be discussed here. Whether an outcome is the result of a deliberate act or the mere product of circumstances is an issue that vitally concerns juries but has no real bearing on the determinist/indeterminist dilemma.
Macroscopic events are conglomerates of ultimate events and one might suppose that if the constituent events are completely determined, it follows that so are they. This is what contemporary reductionists actually believe, or at least preach, and, within a materialist world-view, it is difficult to avoid some such conclusion. But, according to the Event Paradigm, physical reality is not a continuum but a complicated mosaic where in general blocks of events fit together neatly into interlocking causal chains and clusters. The engineering is, however, perhaps not quite faultless, and there are occasional mismatches and irregularities much as there are ‘errors’ in the transcription of DNA ─ indeed, genetic mutations are the most obvious example of the more general phenomenon of random ‘connecting errors’. And it is this possibility that allows for the reintroduction of randomness into an increasingly deterministic universe.
Despite the subatomic indeterminacy due to Quantum Mechanics, contemporary science nonetheless in practice gives us a world that is very nearly as predictable as the Newtonian one, and in certain respects more so. But human experience keeps turning up events that do not fit our rational expectations at all : people act 'completely out of character', 'as if they were someone else', regimes collapse for no apparent reason, wars break out where they are least expected, and so on. This is currently attributed to the complexity of the systems involved but there may be a deeper reason. There remains an obstinate tendency for events not to 'keep to the book' and one suspects that Taleb's profound conviction that the future is unpredictable, and the tremendous welcome this idea has received from the public, is based on an intuitive awareness that a certain type of randomness is hard-wired into the normal functioning of the universe. Why is it there, supposing that it really is there? For the same sort of reason that there are persistent random errors in the transcription of the genetic code : it is a productive procedure that works in the long run by turning up possibilities that no one could possibly have planned or worked for. One hesitates to say that this randomness is deliberately put there, but it is not a wholly accidental feature either : it is perhaps best conceived as a self-generated controlling mechanism that reintroduces randomness as a means of propelling the system forward into a higher level of order, though quite what this will be is anyone's guess.      SH  28/2/13

Note 1  Charles Sanders Peirce, who inaugurated this particular definition, did not speak of ‘random events’ but restricted himself to discussing the much more tractable (but also much more academic) issue of taking a random sample. He defined this as one “taken according to a precept or method which, being applied over and over again indefinitely, would in the long run result in the drawing of any one of a set of instances as often as any other set of the same number”.

Note 2  Take a simple example. One might at first sight think that a square number could end with any digit whatsoever, just as a throw of a die could produce any one of the six possible outcomes. But glancing casually through a list of smallish square numbers one notes that every one seems to be either a multiple of 5 like 25, one less than a multiple of 5 like 49, or one more than a multiple of 5 like 81. We could (1) dismiss this as a fluke, (2) simply take it as a fact of life and leave it at that or (3) suspect there is a hidden principle at work which is worth bringing into the light of day.
In this particular case, it is not difficult to establish that the pattern is no accident and will repeat indefinitely. This is so because, in the language of congruences, the square of a number that is 0 (mod 5) is 0 (mod 5), the square of a number that is ±1 (mod 5) is 1 (mod 5), and the square of a number that is ±2 (mod 5) is 4, i.e. –1 (mod 5). This covers all possibilities, so we never get squares that are two units less or two units more than a multiple of five.
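For readers who prefer brute force to congruence arithmetic, the claim is easily checked. A minimal sketch, purely illustrative:

```python
# Check which residues (mod 5) actually occur among the first thousand
# squares. If the congruence argument above is right, only 0, 1 and 4
# (i.e. -1 mod 5) should ever appear; 2 and 3 should never turn up.
residues = {(n * n) % 5 for n in range(1, 1001)}
print(sorted(residues))  # prints [0, 1, 4]
```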

Note 3  Laplace, a born survivor who lived through the French Revolution, the Napoleonic era and the Bourbon Restoration, was careful to restrict his professed belief in total determinism to physical (non-human) events. But clearly there was no compelling reason to do this except the pragmatic one of keeping out of trouble with the authorities. More audacious thinkers such as Hobbes and La Mettrie, the author of the pamphlet L'Homme Machine, both found themselves obliged to go into exile during their lives and were vilified far and wide as 'atheists'. Nineteenth-century scientists and rationalists either avoided the topic as too contentious or, following Descartes, made a hard and fast distinction between human beings, who possessed free will, and the rest of Nature, whose behaviour was entirely reducible to the 'laws of physics' and thus entirely predictable, at any rate in theory.

Note 4  The current notion of the 'laws of physics' is also, of course, an entirely Platonic conception since these laws are not in any sense physical entities and are only deducible from their presumed effects.
Plato definitely struck gold with his notion of a transcendent reality of which the physical world is an imperfect copy, since this is still the overriding paradigm in the mathematical sciences. If we did not have the yardstick of, for example, the behaviour of an 'ideal gas' (one that obeys Boyle's Law exactly) we could hardly do any chemistry at all ─ but, in reality, as everyone knows, no gas actually behaves exactly like this, hence the eminently Platonic term 'ideal gas'.
Where Plato went wrong, as far as I am concerned, was in visualizing his 'Forms' strictly in terms of the higher mathematics of his day, which was Euclidean geometry. I view them as 'event-schemas' since events, and not geometric shapes, are the building-blocks of reality in my theory. Plato was also mistaken in thinking these 'Ideas' were fixed once and for all. I believe that the majority ─ though perhaps not all ─ of the basic event-schemas which are operative in the physical universe were built up piecemeal, evolve over time and are periodically displaced by somewhat different event-schemas, much as species are.

Note 5. Because of the interest in chaos theory and the famous ‘butterfly effect’, some people seem to conclude that any slight perturbation is likely to have enormous consequences. If this really were the case, life would be intolerable. In ‘normal’ systems tinkering around with the starting conditions makes virtually no difference at all and every ‘run’, apart from maybe the first few events, ends up more or less the same. Each time you start your car in the morning it is in a different physical state from yesterday if only because of the ambient temperature. But, after perhaps some variation, provided the weather is not too extreme, the car’s behaviour settles down into the usual routine. If a working machine behaved  ‘chaotically’ it would be useless since it could not be relied on to perform in the same way from one day to the next, even from one hour to the next.

Note 6  Some people seem to be prepared to accept 'backwards causation', i.e. that a future event can somehow dictate what leads up to it, but I find this totally unacceptable. I deliberately exclude this possibility in the basic Axioms of Ultimate Event Theory by stating that "only an event that has occurrence on the Locality can have dominance over other events". And the final configuration certainly does not have occurrence on the Locality ─ or at any rate the same part of the Locality as actual events ─ until it actually occurs!

Note 7   Readers may be shocked at there being no mention of the Big Bang. But although I certainly believe in the reality of the Big Bang, it does not at all follow from any of the fundamental assumptions of Ultimate Event Theory and it would be dishonest of me to pretend it did. When I first started thinking along these lines, Hoyle's conceptually attractive Steady State Theory was not entirely discredited, though even then very much on the way out. The only way I can envisage the Big Bang is as a kind of cataclysmic 'phase-transition', presumably preceded by a long slow build-up. If we accept the possibility of there being multiple universes, the Big Bang is not quite such a rare or shocking event after all : maybe when all is said and done it is a cosmic 'storm in a teacup'.

Dharma

A radically new theory, model or manner of viewing the world is inevitably going to be attacked as (1) absurd and (2) not new after all. Eventrics, which views physical reality as composed of 'ultimate events' (rather than atoms), can more justifiably be dismissed as crazy than as the same old stuff in a different wrapping.
During the last few centuries the leading paradigm (consensus view of reality) has been a mixture of atomism which goes back to the Greeks, usefully combined with a belief in unalterable ‘laws of Nature’ which owes a lot to Judaeo-Christianity. The classical synthesis of Newton introduced the un-Greek (but actually very ancient) concept of force. A very satisfactory world-view emerged consisting of atoms moving around subject to contact or remote forces, the whole regulated by immutable laws laid down by God. 19th century science removed God from the picture and 20th century scientific discoveries have made matter a very tenuous substance indeed and have introduced forces unknown to Newton, nuclear, electro-magnetic and so on.
At the end of the day, however, we still work with a matter-based world-view and in some cases atoms and molecules can actually be imaged, if not literally 'seen' with a microscope. So what is the difference between such a view and the viewpoint of Eventrics that I am proposing? Essentially this : that atoms are in some sense solid and extremely long-lasting, if not quite eternal as Democritus and Newton imagined. Eventrics is the study of events and their interactions, and Ultimate Event Theory is the part of the theory that deals with the ultimate constituents of all events, so-called ultimate events. UET thus bears roughly the same relation to Eventrics as nuclear physics does to physics. Now, ultimate events are quite definitely not substantial in any usual sense of the word and, more significantly, they only last for an incredibly short time span, that of the ksana (or chronon). The actual interval involved is yet to be determined but it would most likely be at least 10–36 seconds (the Planck scale).
This, however, is not all. It is true that certain contemporary physicists have come round to the idea that there is a minimal temporal interval, and the sort of 'reality constituents' they deal in ─ 'causal nodes', for example ─ do seem to be rather similar to my ultimate events. However, where my theory departs from current theories (apart from the lack of mathematical sophistication, which will come later) is that I believe that 'physical reality', or 'Space/Time', or whatever else you want to call it, is radically discontinuous. There are, quite literally, gaps between ultimate events and, because macroscopic events are conglomerates of ultimate events much as chemical compounds are conglomerates of atoms, there are gaps between everything we see and hear and, for that matter, are. This is certainly not a popular view in Western thought, scientific or not, and would be regarded by most people as lunatic. It was, however, a very widespread viewpoint amongst Hinayana Buddhists in India in the first few centuries AD. This is not, I hasten to add, the reason why I am adopting it : on the contrary, I formed my rough view before I had even heard of Buddhism, though, naturally, when I came across the views of these people via Stcherbatsky's books on Buddhism, they at once found in me a favourable audience. I must also insist that, although I do practise my own brand of meditation, I would not call myself a Buddhist, and my interest in Buddhism is largely in the underlying physical theory, not the moral theory, which is what these monks were essentially interested in (and still are).
The assumption that 'physical reality' is made up of ultimate events has two especially important consequences with regard to current science and mathematics. As far as mathematics is concerned, it means that Calculus and similar branches are all based on a false (though extremely useful) assumption, namely that everything ─ matter, energy, time, space and so on ─ is 'infinitely divisible'. According to UET this cannot be so since there is always a final ratio between input and output, dependent and independent variable. We know as a matter of fact that there is such a limit when dealing with energy exchanges (because of Planck's quanta) but this principle is in UET extended to absolutely everything you can think of (perhaps with a single exception). There are ultimate time/space blocks, for example, and physical reality, instead of being modelled as a continuous substance, should be conceived as a sort of mosaic with the 'pieces' being molecules of ultimate events. This, incidentally, raises the question of whether these pieces are separable in certain circumstances, whether the building blocks can dissolve like certain chemical substances in solution.
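As a crude illustration of what a 'final ratio' would mean in practice, here is a sketch of my own with an invented step size; it is not a result of the theory:

```python
# In orthodox Calculus the difference quotient is pushed to a limit; on
# the UET picture there is a smallest admissible step, so the derivative
# is replaced by a 'final ratio' evaluated at that step. The step size
# below is purely hypothetical.
SMALLEST_STEP = 1e-9  # stand-in for the ultimate time/space block

def final_ratio(f, x):
    """Difference quotient at the smallest step, never taken to a limit."""
    return (f(x + SMALLEST_STEP) - f(x)) / SMALLEST_STEP

# For f(x) = x**2 the final ratio at x = 3 is 6 + SMALLEST_STEP:
# close to, but never identical with, the 'ideal' derivative 6.
print(final_ratio(lambda x: x * x, 3.0))
```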
The second consequence of the theory is that everything in physics is, as it were, pushed one stage backwards. We are familiar with the notion of inertia which may be described as the tendency of a body to maintain its current state of unaccelerated motion. Newton and Galileo also took for granted the idea that a solid object, or at any rate an atom, could not simply disappear : in other words that it was self-perpetuating. We now know that this is not entirely true since atoms can decay, though this remains a very rare occurrence in the portion of reality we can readily observe. But the idea of an object “remaining what it is” still seems pretty reasonable, at least as a first approximation.
Now, in UET, the 'natural' tendency of ultimate events is to disappear as soon as they come into existence and never to appear again. What we sense as solid objects must, then, be exceptions to the general rule and UET must provide some sort of reason for this state of affairs. 'Objects' in UET are repeating event-chains which have acquired what I term 'self-dominance' ─ without this property they would never repeat at all. This is, incidentally, part of the preliminary assumptions of the theory, not something derived from experience. And the plausibility of this seemingly mad suggestion depends on what can be deduced from it.
As to the ‘gaps’ between ultimate events, although as humans there is little chance of us experiencing them directly, it may well be that we can experience gaps between blocks of ultimate events and many people have indeed had such experiences, or had some sort of experience that they interpreted in this fashion. As Heidegger put it, “Being is shot through with nothingness”.
This is a very radical assumption. It means that ‘reality’ has two ─ and essentially only two ─ states : On or Off, Existence or Non-existence. The ‘event-trajectory’ of any repeating event-chain (including ourselves) is gapped. Interestingly, this very basic conception fits in well with the current digital revolution and the victory of the digital two-state computer over the many-state analogue computer.
Is there anything that persists between ultimate events, anything that ‘fills the gap’? It is difficult to avoid the conclusion that there must be ‘something’ since otherwise it is hard to see why everything doesn’t just switch off permanently. This ‘something’ must, however, be so unphysical, so vacuous, so insubstantial, that it would be more pertinent to describe it as ‘Emptiness’ or ‘Void’ ─ and this is precisely how it is described in innumerable Hinayana Buddhist texts. Also, one notes the idea floating around contemporary physics ─ but I hasten to add without any ‘spiritual’ or religious meaning ─ that ‘nothingness’ is at the origin of everything and in particular at the origin of our universe which may be one of many. ‘Nothing’ is in fact making something of a come-back and even becoming scientifically respectable.
In a nutshell, the paradigm, or world-model, of Ultimate Event Theory is that reality is made up of three, and only three, elements : (1) the ultimate events themselves, (2) the interconnections event-chains have with each other and (3) the underlying invisible immaterial 'substance' of which they are, as it were, concretisations or droplets of foam. As it happens, these features remind one of the three main constituents of the Buddhist world-view : dharma, karma, nirvana. The dharma correspond to the ultimate events, karma to the 'causal force' propelling events, nirvana to the underlying reality. I must once again add that the way I am using these terms is slightly different from the way Buddhists did and do use them, since Buddhists are concerned almost exclusively with the 'moral law' and with ways of hastening the extinction of the world-process and returning everything to the quiescent state of nirvana. The image of a calm pool of water (nirvana) being somehow, for unknown reasons, 'stirred up' or set into commotion seems, nonetheless, extremely apt.
It will be objected (and has been already vehemently) that all this is just speculation and metaphysics and essentially mumbo-jumbo, not science. To this I answer that science evolved from so-called ‘natural philosophy’ and that contemporary science is not just a prediction system but contains a (relatively) coherent world-view or world-system that is not regarded as philosophy only because the main tenets are not seriously questioned. It seems to be not only psychologically necessary but extremely useful to have some sort of a model of reality or picture in the mind, though naturally there are dangers in taking it too literally. On the whole science and mathematics have erred in taking their assumptions too seriously rather than in not having any : mathematicians, for example, apparently the least metaphysical of beasts, would be hard put to expel the idea of the ‘number line’ with all that that entails from their minds. And the crude Bohr model of the atom is still holding its ground against quantum mechanics and, in a sense, with good reason.
So what's the point of having a different world-view or paradigm from the present one, especially since the current one has led to such amazing results? To this I would say that one may be temperamentally inclined in a different direction and there is nothing wrong with that, though certainly that is no argument for the 'truth' or usefulness of the view. One judges a tree by its fruits. Firstly, the current world-view seems to be getting into serious conceptual difficulties and it may be that there is no way of patching it up : perhaps better to go back, take a different turning and see where it leads. It is to be hoped that this different direction will lead to some or most of the undeniable benefits of the current method, but if that's all it did there would be little point in my offering it to the public or even believing it myself. There are, however, certain assumptions that I do not feel obliged to take on board, or not quite to the same extent as contemporary scientists, and so I may be able to accommodate certain data that they cannot, or explain what is going on in a more comprehensible fashion. Science, or rather scientism, currently assumes that it holds the whole truth, at any rate in principle, and anything that doesn't fit is either ignored or dismissed as coincidence. This is a rather unsatisfactory way of proceeding and plenty of scientists in private will admit that they have doubts about certain matters or are inclined to views that would get them into serious trouble if expressed in public.
So what assumptions do I have in mind?  Well, one is the notion of there being immutable ‘laws of nature’ from which there can be no deviation, except in the sense of quantum uncertainties which are of no use to us practically speaking. Although there must seemingly be some underlying principles, much of the behaviour we see around us seems to be the result of habit, or blind repetition. Physicists readily agree to this when speaking of customs and human opinions but the same may apply to physical laws as well (as Sheldrake has dared to suggest). ‘Natural laws’ are, in terms of Eventrics, event-schemas which have been built up by countless repetitions and have become so strong they appear to be absolute. But perhaps they are not absolute after all.
For example, though not denying for a moment its utility, I do not have to subscribe to the absolute nature of the conservation of mass/energy. There is, as a matter of fact, no obvious equivalent of 'energy' in Eventrics, the nearest being the 'force' between blocks of events that I call Dominance. Energy is an entirely hypothetical concept in the sense that one cannot see or hear or touch energy but only deduce its presence from certain effects. It is the work that could be done, not the work that actually is being done (work having the physical sense of force × distance). The idea that the universe has a specific amount of energy delivered at the Big Bang which has been conserved ever since would mean, translated into the terms of Eventrics, that the total amount of 'dominance' is given and remains constant. This means, for one thing, that it cannot decrease. However, I am inclined to believe that this quantity can and indeed must decrease eventually, since the entire universe (or what we call a universe) will eventually disappear and everything will return to the nothingness from which it came. There may also be local or temporary fluctuations in the level of dominance. Why will the universe disappear if the quantity of dominance (karma if you like) declines? Because, according to my preliminary assumptions, it is in the nature of ultimate events not to persist but to disappear for ever, the persistent ones being the exceptions. Events that have acquired dominance can themselves persist and can interact with other chains, producing new chains, but eventually the entire machine will 'run out of steam'. Also, there is a finite amount of dominance emanating from the 'sink', at least within the limits of 'our' universe. Events consist either of combinations and repeats of existing events or of completely new events emerging from nothingness. Eventually, no new events will emerge and the existing event-chains will die away : this at any rate is how I view things.
This overview is necessarily general and subsequent posts will derive from this schema conclusions which can be tested, or will give an alternative and perhaps preferable explanation of well-attested physical phenomena.     SH  5/2/13

Pagoda

The speaker alias myself commenced by saying he aimed to give a rapid overview of the subject : (1) infinity as a mathematical and physical concept, (2) its connection to religion and mysticism and (3) possible social and technological consequences of the elimination of the concept from science and mathematics.

Definition  I defined 'infinity' as "a process that can be started but never concluded". Usually the process involves making something 'bigger and bigger' or 'smaller and smaller' as in Calculus. "Infinity is a process or procedure, not a quantity". I should have added that I was well aware that there are more precise, also more sophisticated, mathematical definitions but at the end of the day we come down to this : that infinity is a procedure or activity that never terminates and never can.
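In programming terms the definition can be stated almost word for word. A trivial sketch, offered purely as an illustration:

```python
# 'A process that can be started but never concluded': the generator
# below can always be asked for one more term, and there is no last one.
def bigger_and_bigger():
    n = 1
    while True:   # never terminates, and never can
        yield n
        n += 1

counter = bigger_and_bigger()
print(next(counter), next(counter), next(counter))  # 1 2 3 ... no end
```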

Is the concept attractive?  "Not to me," the speaker said. "My dislike of infinity dates back to my childhood when a familiar sight on the breakfast table was a brand of honey which had a bear on the label. The bear was holding a jar of honey with a bear on it holding a jar of honey, and so on… This used to torture me at night until I fell asleep with exhaustion."

Necessary as a concept?  The Greeks got on very well indeed without it though, arguably, their finitism stopped them developing the science of dynamics. It would have been possible in my view to have developed the calculus without dragging in the infinite, but the Greeks stopped just short of doing this, though Archimedes came near. Fast forward to the Renaissance. This was a period when the West was liberated from the medieval obsession with 'infinite time' (eternity), and the new optic gave rise to the exploration of the physical world by navigation and the loving depiction of the human body in painting and sculpture. "However, the concept of infinity made its fateful appearance with Galileo and others, leading eventually to the 'Infinitesimal Calculus' (as it was called until very recently), though Newton seems to have had some doubts about the validity of his great invention." The grip of infinity finally began to loosen at the end of the pragmatic nineteenth century, but then mathematics plunged ever deeper into the mire with Cantor's theory of the Transfinite ─ infinity gone mad (and Cantor himself did go mad). Adapting a simile from Nietzsche, I said that the concept of infinity was like the gigantic statue of a dead god whose baleful shadow lay across the valley below, terrifying the inhabitants and stopping them going about their daily business.

The Revealing Case of Pascal  "Le silence éternel de ces espaces infinis m'effraie" (The eternal silence of these infinite spaces terrifies me), Pascal famously wrote.
"It is interesting that Pascal, the man who discovered the Law of Uniform Pressure for Gases, built the first working calculator and contributed to the Calculus, had a mystical experience one night around this time of year and from then on abandoned the 'sterile infinity of mathematics' for the warmth of a personal relationship with God."
"But," I added, "if Pascal had been alive today he would perhaps not have needed to abandon the world and science. For we now know that the universe is not infinite ─ we can even judge its extent ─ and it is not silent, since we hear it, if not with our ears, at least with radio telescopes. The universe is no longer forbidding and distant; we know, or think we know, the constituents of the stars, and the galaxies, however huge, are like grains of sand scattered across the ground. We are part of the universe since, as this gentleman will tell you [a chemist in the audience], the carbon and other elements in our bodies come from exploded stars."

Everything physical is finite  At one time almost everything was thought to be ‘infinite,’ ‘eternal’. But we now know, for example, that the speed of light is  not infinite, that the universe itself had a beginning in time and has a specific size. Energy is not continuous but can only be  distributed in definite quantities (the famous quanta of Quantum Mechanics); molecules and atoms can even be ‘seen’ by electron microscopes.
The Differential Calculus is basically the study of how two sets of quantities change with respect to each other, one variable 'depending' on the other. In mathematics the independent variable can be made arbitrarily small. But if you reduce the input of a mechanical system beyond a certain point, this input is unable to overcome internal friction and there is no output whatsoever. And this limit is miles away from the mathematical one. "Touch the person next to you as lightly as possible. Then lighter still. You will soon get to the point when this person does not recognize the pressure of your hand. Everything is like this, there is always a smallest and largest possible amount in real life. Calculus models an ideal world, not the real one."
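The point can be put in the form of a toy model (my own, with invented numbers): below a certain threshold a real mechanical system does nothing at all, whereas the mathematical model responds to arbitrarily small inputs.

```python
# A crude model of a mechanical system with internal friction: inputs
# below the threshold produce no output whatsoever. The numbers are
# invented for illustration.
FRICTION_THRESHOLD = 0.05  # hypothetical smallest effective input

def output(force):
    """Zero until the input overcomes internal friction."""
    return max(0.0, force - FRICTION_THRESHOLD)

for f in (0.5, 0.06, 0.05, 0.001):
    print(f, "->", output(f))
# 0.05 and 0.001 give exactly 0.0: the real limit sits far above the
# 'arbitrarily small' input that Calculus allows itself.
```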
Today there is some talk of there being a finite 'smallest length', the Planck scale (10–34), but very rarely talk of there being a smallest interval of time, what I call the ksana (from the Sanskrit for 'instant'). Time is actually the most important dimension since, as Peirce wrote, "one can imagine a world without space but not a world without time". Although the 'space' of our dreams is completely distorted, this does not happen with time : one event leads to another just as in real life. In dreams as in real life you never get stuck in a vicious circle going round and round for ever : there is a ceaseless drive onwards and in a single direction. Time is very different from space since it only has one 'dimension' and it is dislocated from the three spatial dimensions : "the spatial three-dimensional reality must disappear when time is introduced since otherwise there would be no difference between what exists at one moment and the next".

The Infinite compared with the ‘Non-finite’

Is the universe self-sufficient and self-explanatory? It would seem not since even science is now seriously talking about it coming from something that was there before, and which will perhaps give rise to other, different, universes. This deeper reality, the ‘Origin’, Ain Soph, call it what you will, has (so I would claim) nothing in common with the mathematical concept of infinity – the Buddha is credited with the just observation that “nirvana is neither finite nor infinite”.
The speaker said he envisaged 'reality' as made up of two regions with a veil separating them (the veil of Isis). Mystics have lifted a corner of this veil and have sometimes described what they have seen on the other side. The Beyond is so completely different from everything in the physical universe that mystics quite rightly describe it in contradictory or negative terms. On the other side there is no number, no shape, no name, no elementary particles, no difference between the part and the whole : "All is One".
However, we live on this side of the veil, in the world of separation, the world of extension and number, and mathematics and physics should confine themselves to what is measurable and/or deducible from our (ordinary) sense impressions. Above all we should not bring into science and mathematics any knowledge (or delusory imaginings) concerning the 'non-finite' domain of reality.
"The Tao that can be named is not the original Tao" – the first line of the Tao Te Ching. In Lao Tse's time, language was the most accurate analytic tool known to mankind : if Lao Tse were alive today he would have written "The Tao that can be numbered or mathematized is not the original Tao".
I have found this stratagem of separating reality into two, and only two, incompatible regions, one finite, specific, measurable, the other non-finite and immeasurable very useful indeed (Note 1).
Strangely enough, the bridge, inasmuch as there is one, between the two realms is not to be found by reaching out into the vastness with bigger telescopes and torturing oneself with the concept of the infinite, but on the contrary by focussing on the present moment, any moment, this moment. I pointed to the sunlight falling on the grass alongside where we were standing.

Is it possible to show that there is a ‘smallest interval of time’? Is the hypothesis testable?

I made the prediction in one of the early posts on this site that "during this century science will be able to determine the ratio of the smallest interval of distance to that of the smallest interval of time". I added that I thought this would not happen in my lifetime. But to my astonishment, someone (Craig Hogan) is currently building a machine, an interferometer, in Chicago precisely to show that, as he conceives things, "Space/Time is grainy" or, in the current jargon, "at a certain level the universe is digital". Hogan is looking for a basic 'static' that goes deeper even than fluctuations of the quantum vacuum and which he sees as the "froth of Space/Time" (Note 2).
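For comparison (standard physics, not a result of Ultimate Event Theory): the ratio of the conventional smallest length to the conventional smallest time is already on the books, and it is exactly the speed of light.

```python
# Planck length divided by Planck time gives the speed of light, c.
PLANCK_LENGTH = 1.616255e-35   # metres
PLANCK_TIME   = 5.391247e-44   # seconds
print(PLANCK_LENGTH / PLANCK_TIME)  # ~2.998e8 m/s, i.e. c
```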

There are, incidentally, several thinkers today who view the universe as a giant computer and this came up in the discussion later. "What I note is that a digital computer is made up of a finite collection of bits, carries out finite series of operations sequentially and has two and only two 'states', 'on' and 'off'." In life terms, 'on' is 'existent' and 'off' is 'inexistent' and reality is flickering on and off perpetually. "We are bits", and the person who raised the issue, somewhat to my surprise, did not take this as an insult but nodded in agreement.

I also noted that the universe is expanding ever faster and it now looks as though it will never contract again. Everything has its time : "You will die, I will die and the universe will die", the speaker said somewhat melodramatically.

Social and technological aspects of ‘infinity’

The speaker, i.e. myself, did not have time to say too much about social and technological matters because of the cold. He would have liked to say more about how science and mathematics, now everywhere triumphant, have made the world and life almost totally incomprehensible (hence the headlong flight towards religious fundamentalism). Theoretical scientists and mathematicians seem to be engaged in a sort of competition : on the one hand they say "See all the improbable or impossible things I can believe in and you can't!" and on the other "See all the stupid things you believe in and I don't" (things like free will and the ability to change your life overnight if you really want to). We are moving at an alarming rate towards scientific totalitarianism : science has ceased to be a free enquiry and become a matter of signing up to a credo, and watch your step if you disagree with Richard Dawkins & co. on any point, for you'll live to regret it. If you're a professional scientist, that is; I can think what I want.
I believe all knowledge is based on sense impressions and this is the point where we should start. No exception should be made for mathematics and speculative science. What is dismissed by science as 'anecdotal' is actually in a way more genuine and more real than what is carried out in the artificial environment of laboratories. A practising chemist at a family gathering discussing these sorts of issues said, like a bolt from the blue and to everyone's astonishment, "Only the experiment is real, all the rest is theory". I'm not sure that I wouldn't go one step further and say "Only the experience is real".
On the social level, he/I referred to a book popular for a while in the Sixties but now forgotten, Cain's Book by Alexander Trocchi. In this book the central character lives on a boat moored near New York. He is paid to be there by the owner and does not have to do anything much except potter around, so he has plenty of time on his hands. He sees in the distance the vast city that he calls "the city of outrageous purpose" (an excellent phrase) but rarely ventures into it. He spends his time desultorily (but on the whole enjoyably) looking at the water and occasionally meeting one or two drop-outs. He contrasts "the city of outrageous purpose" (spatial) with "the meaningless texture of the present moment" (temporal). He prefers the second to the first, obviously – though unfortunately Trocchi's interest in the 'texture of the present moment' took him into hard drugs, an unnecessary and counter-productive move.

Spatial and Temporal Cultures

We live in a spatial civilisation which prizes 'things' above sensations. We have an 'object-orientated outlook' which ultimately goes back to the Greeks, whose greatest achievements were strictly spatial (geometry and sculpture). Democritus supposedly said "Nothing exists except atoms and void" and his atoms, like Newton's, were indestructible and eternal. This view of the world, duly extended by Galileo and Newton, has taken us to where we are now and I certainly don't want to disparage the fantastic achievements dependent upon it.
But at around the same time as Democritus was active (Vth century BC) a homeless wanderer came to exactly the opposite conclusion, namely that "Everything is ephemeral, a ceaseless succession of point-like instants in a state of commotion". This is the great thought of a timelike civilization and, strangely, though it has given rise to great art and poetry, it never gave rise to a form of science and technology like the spatial take on reality (Note 3). The speaker stated cryptically that the concept of 'the moment' will soon give rise to a different kind of science and even a new technology. (My ponderings on this theme will be the subject of a subsequent post.)

Conclusion

I concluded by saying that it was completely appropriate that this discussion was taking place in (or rather just outside) an edifice built in honour of the Buddha. The briefest summary of (Hinayana) Buddhism is the following credo
“The Great Recluse identified the elements of existence (dharma), their causal interconnection (karma) and their ultimate extinction (nirvana)”.

Finally – and this was completely unplanned and a surprise even to me – I said “My message to you is ‘Hold fast to the moment’, ‘Seize the moment’ ”.

Intelligent discussion followed from the audience but we had to call it a day because of the weather.

Postscript Subsequently, I formed the project of giving a series of talks on related subjects in the open air at the Pagoda, probably on the last Sunday of each month (watch this space). If no one turns up it doesn’t really matter as it is a good place to be.      SH   15/12/12

_______________________________

Note 1   This principle of the 'Separation of the Spheres' enables me to dismiss at one fell swoop the Theory of the Transfinite and all the Set Theory that depends on it as nonsense, which indeed is how it appears to the ordinary person (if such still exist). I must admit to having some trouble deciding how to fit the 'reality', if it be reality, of what is described by the wave function in Quantum Mechanics into my schema — does the Schrödinger equation describe anything that really exists or not? But I'm in good company here since debate on the subject still rages unabated.

Note 2  See the article in Scientific American, February 2012 : "Craig Hogan believes that the world is fuzzy… [he] thinks that if we were to peer down at the tiniest subdivisions of space and time, we would find a universe filled with an intrinsic jitter, the busy hum of static. This hum does not come from particles bouncing in and out of being or other kinds of quantum froth that the physicists have argued about in the past. Rather Hogan's noise would come about if space was not, as we have long assumed, smooth and continuous, a glassy backdrop to the dance of fields and particles. Hogan's noise arises if space is made of chunks. Blocks. Bits. Hogan's noise would imply that the universe is digital. He has devised an experiment to explore the buzzing at the universe's most fundamental scales."

As I see it, if Hogan picks up an irreducible 'static' that is regular, this may well be caused by the spatial shift from one ksana to another. If, however, as I would expect, the noise is random, it would not come from 'Space/Time' (what I call the Locality) but from stray 'ultimate events' springing into existence and then disappearing without being able to form stable event-chains. There are, I suspect, very many more (I nearly said an 'infinite number' of) ultimate events that 'do not make it' and merely disappear for ever — just as there are many, many more elementary particles than the ones that form themselves into stable atoms.
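How might a 'regular' static be told apart from a random one? One schematic possibility (entirely my own illustration, nothing to do with Hogan's actual analysis) is to compare power spectra: a periodic hum concentrates its power in a sharp spike, while genuinely random noise does not.

```python
# Toy comparison of a 'regular' hum and random noise via the power
# spectrum. Both signals are stand-ins, invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(4096)
regular = np.sin(2 * np.pi * t / 64)    # stand-in for a ksana-rate hum
random_ = rng.standard_normal(t.size)   # stand-in for stray-event noise

for name, sig in (("regular", regular), ("random", random_)):
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    peak_ratio = spectrum[1:].max() / spectrum[1:].mean()
    print(name, "peak/mean power ratio:", round(float(peak_ratio), 1))
# The periodic hum stands out by orders of magnitude; the random static
# shows no comparable spike.
```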

Note 3  The men who elaborated the 'dharma theory' certainly had the clarity and intelligence to initiate a scientific revolution but their principal, indeed exclusive, concern was 'soteriological' : to provide a cure for mankind's unhappiness. There was no point in delving deeper into the mechanisms underlying the physical (pseudo)world, the world of maya, and so, although they developed a system of logic and psychology (to help people towards enlightenment), they never developed a systematic physics.       SH

 

Pagoda

I want to start by expressing my gratitude to MeetUp in general and the London Futurists in particular for enabling this event to take place at all, the first time ever that my ideas have been aired in a public place. I intended to conclude the meeting with an expression of my debt to MeetUp, the Futurists and founder/organiser David Wood, but unfortunately this slipped my mind as the meeting broke up fairly rapidly after a full hour in the cold. (A summary of my talk will be given in a subsequent post.)
The meeting at the Pagoda on Sunday was, as far as I am concerned, well attended — I did not expect or desire crowds. All those present seemed to have serious intent and, to judge by the thoughtful comments made in the discussion afterwards (drastically curtailed because of the cold), they grasped the main drift of my argument. Some missed the meeting because of the weather or did not find us because we were hidden behind a wall on the south side of the Pagoda.

Two persons have already said they would like to have heard the talk and wondered whether there could be a repeat. However, I feel that my ideas are rather far from the framework and general ethos of the London Futurists — though naturally if asked I would be glad to repeat the talk indoors somewhere at a later date. Instead, I plan to have a monthly series of talks/discussions on various issues arising from ‘Ultimate Event Theory’, the scientific and philosophical system I am currently developing. The place will remain the Peace Pagoda, Battersea Park, South facing wall, at 2 p.m. on a date to be announced, probably the last Sunday of each month — watch this site in January. If no one comes at all, the session won’t be wasted since I will be periodically renewing my contact with the ideas of the Buddha via the beautiful edifice in Battersea Park.

What follows is ‘matters arising’ from the talk:

Three stages

It is said that every new scientific idea goes through three stages : Firstly, they say it is not true, secondly, they say it is not important and, thirdly, they credit the wrong person.
Although I am to my knowledge the first person to have taken the world-view of Hinayana Buddhism seriously as a physical theory (as opposed to a religious or metaphysical doctrine), it is entirely appropriate that the first time Ultimate Event Theory was presented verbally to the public the venue was the Peace Pagoda (built by practising Buddhist craftsmen), since the theory can be traced back to the founder of one of the five great world religions, Buddhism.
Our science stems from the Greeks, in particular the atomist Democritus of Abdera, whose works have unfortunately been lost. He is credited with the amazing statement — reductionist if ever there was one — "Nothing exists except atoms and void". These atoms Democritus (and Newton) believed to be indestructible and eternal. Although we now know that some atoms decay, the statement is not so far out : around us are protons and neutrinos that have existed since the Big Bang nearly 14 billion years ago (or very soon afterwards). And as for the void, it is healthier and more vibrant than ever, since it is seething with quantum activity (Note 1).
Dharma    But at around the same time as Democritus decided that the ultimate elements of existence were eternal atoms, Gautama Buddha in India reached exactly the opposite conclusion, namely that the dharma ('elements') were evanescent and that everything (except nirvana) 'lasted for a moment only'. A Buddhist credo summarised the teaching of the Buddha thus: "The Great Recluse identified the elements of existence (dharma), their causal interconnection (karma) and their ultimate extinction (nirvana)" (Stcherbatsky, The Central Conception of Buddhism).
I must emphasize that the theory I am developing, Ultimate Event Theory, is a physical theory (though it has ramifications far beyond physics) and does not presuppose any religious belief, still less is it an underhand way of 'preaching Buddhism' or any other form of religion. The Buddha himself founded no Church and spent the latter part of his long life wandering around India giving talks in the open air to anyone who cared to listen. My original interest in Buddhist theory was 'scientific/philosophical' rather than 'spiritual'. It seemed to me that Gautama Buddha had, through the practice of meditation, intuited certain basic features of physical and mental reality, and concluded correctly that matter, mind, soul, personality and so on are all 'secondary', not primary, entities — in today's parlance they are 'emergent' entities. He also saw, or rather felt, that 'existence' was not continuous but that everything (including the physical universe) is, as it were, being destroyed and recreated at every instant (the Theory of Instantaneous Being). I do not personally, however, conclude that the personality, consciousness, free will and so on are 'illusory', as the Buddhist tradition seems to have inferred, merely not primary, not basic. At bottom we are seemingly all made up of elementary particles and forces between these particles, but at a deeper level still I believe that everything is composed of momentary 'ultimate events' flashing into existence and then disappearing for ever. As far as I am concerned the buck stops here : beyond the dharma lies only the Absolute, the ground of all being, and this, though it can perhaps be glimpsed by mystics, is wholly outside the domain of science, rational thought and mathematics. "The Tao that can be named (or measured) is not the original Tao".      SH  5 December 2012

Note 1  For the claim that Space/Time is "grainy" see "Is Space Digital?" by Michael Moyer (Scientific American, February 2012), and "How big is a grain of space-time?" by Anil Ananthaswamy (New Scientist, 9 July 2011).

______________________________________________________________________

Genesis of Ultimate Event Theory :  My life could be divided into two periods, the first ending one morning in the late seventies when I came across a curious book with the bizarre title Buddhist Logic in Balham Public Library, London. In this book I came across, for the first time, the idea that had always seemed to me intuitively to be true : that reality and existence are not continuous but discontinuous and, moreover, punctured by gaps — as the German philosopher Heidegger put it, "Being is shot through with nothingness". A whole school of thinkers, those of the later Hinayana, took this statement as so obvious it was hardly worth arguing about (though they did produce arguments to persuade their opponents, hence the title of the book).
This well-written tome of Stcherbatsky, who was not himself a practising Buddhist, thus introduced me to the ideas of certain Hinayana thinkers of the first few centuries of the common era (Dignaga, Vasubandhu et al.). I saw at once how 'modern' their views were and how, with a certain ingenuity, one could perhaps transform their 'metaphysics' into a physical theory very different from what is taught today in schools. These deep and subtle thinkers, in every way the equal of the Greeks, had no interest in developing a physical theory for its own sake since their concern was with personal 'enlightenment' rather than the elucidation of the physical world. Had they and their followers wished it, quite conceivably the world-wide scientific revolution would have taken place, not in the then backward West, but in India. But maybe the time has now come for the insights of these men to take root some 1,800 years later on the other side of the world and eventually to become the basis of a new science and a new technology. Matter is getting thinner and thinner in contemporary physics, so why not drop it entirely and stop viewing the world as the interaction of atoms or elementary particles? According to Buddhism the 'natural' tendency of everything is not to last for ever (like Newton's atoms) but to disappear, and the relative persistence of certain rare event-chains is to be ascribed to a causal binding force, a sort of physical equivalent of karma. There is no Space/Time continuum, only a connected discontinuum which is full of gaps. The universe itself will come to an end and everything will return to the absolute quiescence of nirvana — though some later Buddhist thinkers, like some contemporary cosmologists, envisage a never-ending cycle of emergence/extinction/emergence……

Recommended Reading  Those interested in Buddhism as a ‘way of life’ are recommended to start (and also perhaps finish) with Conze, A Short History of Buddhism. This book really is short (132 small size pages) and so good that I seriously doubt whether anyone really needs to read any other book on the subject (unless they want to follow up a particular aspect of the theory) : the writing is clear, concise, comprehensive, pungent. If I were allowed to take only twenty books on a desert island, this would be one of them.
The Russian scholar Stcherbatsky, whose books had such a big effect on me, has written three seminal works covering the three main aspects of (Hinayana) Buddhism. The Central Conception of Buddhism concerns what I call 'ultimate events' (dharma), Buddhist Logic deals in the main with causality (karma) and The Buddhist Conception of Nirvana with nirvana, as one might expect. Although it is the second book, Buddhist Logic (Volume 1 only), that influenced me, most interested readers would probably find it forbidding in aspect and would be advised to read The Central Conception of Buddhism first (100 pages only), and not to bother at all with The Buddhist Conception of Nirvana, which I found quite poor.

Pagoda

To all whom it may concern :  I am speaking to the London Futurists (plus anyone else who cares to come along) on "Does Infinity Exist?" at the Peace Pagoda, Battersea Park, London, 2 p.m., Saturday 8th December.

This incidentally will be the first time that I will be talking about Ultimate Event Theory in public (and it is only last year that I started putting posts up about it). (It has taken me all of thirty-five years to reach this point of no return.) It seems that the Pagoda is entirely the right location for such a discussion though it was not deliberately chosen by me, indeed not chosen at all. I had originally aimed to hold the meeting (the first I have ever called on such a subject) indoors somewhere in a venue in central London but could find nowhere available for this date chosen entirely at random. Then a few Sundays ago, my partner, the painter Jane Maitland, suddenly said “Why don’t we visit Battersea Park today?”, something we never do — the last time I was there was at least eight years ago. We passed by the Pagoda but didn’t go into it. That night it suddenly came to me that the best place to meet up was the Pagoda. Why the best place? Because the origins of Ultimate Event Theory go back all of two thousand and five hundred years to the ponderings of an Indian ascetic about the nature of the physical world and the misery of human existence.

The Theory of Special Relativity is based on two simple postulates, that “1. the laws of physics take the same form in all inertial frames” and “2. the observed speed of light in a vacuum is constant for all (inertial) observers irrespective of their relative motion”. I shan’t say much about the first postulate now or define an ‘inertial frame’ — basically a ‘frame’ where you can’t say whether you’re moving or not except by looking out of the window — but we need to look at the second.
It is important to realize that (2) is an extremely surprising claim. The speed of a train, for example, is by no means the same for all observers : for the person inside the train the speed is essentially zero, since he/she quite rightly considers him/herself to be at rest unless there is a sudden jolt, but for someone standing alongside the track the speed of the train is, say, 120 miles an hour. And for an observer in a spacecraft orbiting the Earth it is different again (Note 1). Normally, we add speeds together and, if I rolled a marble along the corridor of an unaccelerating train in the direction of travel, the marble's speed, judged by someone outside, would be its speed in the train plus the speed of the train. How is it possible for light to have a constant recorded speed whether the emitter is in a spaceship receding from you or in your own train or spacecraft?
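Before giving the answer of Ultimate Event Theory, it is worth seeing how orthodox Special Relativity enforces the constancy arithmetically: velocities compose, they do not simply add. The law below is standard textbook physics, not part of UET; the sketch is merely illustrative.

```python
# Relativistic composition of two collinear velocities: composing any
# velocity with c returns c, which is postulate 2 in action.
C = 299_792_458.0  # speed of light in m/s

def compose(u, v):
    """The relativistic replacement for naive addition u + v."""
    return (u + v) / (1 + u * v / C ** 2)

print(compose(0.5 * C, 0.5 * C) / C)  # 0.8, not 1.0
print(compose(C, 0.9 * C) / C)        # 1.0 (up to rounding): light stays at c
```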
According to Ultimate Event Theory, light, like everything that "has occurrence", is composed of a finite number of ultimate events (Axiom of Finitude). Suppose, simply for the sake of argument, that the 'reappearance rate' of a photon (a specific type of repeating ultimate event) is 1 space/ksana (Note 2). We can represent this by the following diagram:

[Diagram: blue, green and red blocks plotted row by row, one row per ksana]
The blue block represents a repeating event that (rightly or wrongly) we consider to be 'stationary' from ksana to ksana. My position from ksana to ksana is given by the green blocks : I consider myself to be drifting eastwards away from the blue blocks by one grid-position at each ksana or, more likely, would consider the blue blocks to be drifting steadily away from me westwards. The red blocks represent the positions of some other repeating event that I judge to be moving steadily away from me at a rate of 1 grid-position per ksana. Note that all three coloured blocks joined up give straight lines (they are, in traditional parlance, inertial systems). From the standpoint of the blue blocks, which arbitrarily we take as our 'landmark sequence', both the green and red event-chains are moving steadily to the right, and the red event-chain is moving 'faster' since it has a shallower gradient. The 'speed' (reappearance rate) of the red line can be calculated by noting the speed of the green blocks relative to the blue and adding on the speed of the red relative to the green. Whereas the green blocks gain an extra space each ksana, the red gain rather more, but the gain is regular ─ the same at each ksana. All this is what one would expect.
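The diagram can be mimicked numerically. The following toy sketch is my own illustration, not part of the theory's formal apparatus; the speeds are reappearance rates in grid-positions per ksana.

```python
# Positions (in grid-positions) of the three event-chains at successive
# ksanas, with the blue chain arbitrarily taken as the landmark sequence.
KSANAS = range(5)

blue  = [0 for _ in KSANAS]       # the 'landmark' chain, taken as at rest
green = [1 * k for k in KSANAS]   # 1 grid-position per ksana rel. to blue
red   = [2 * k for k in KSANAS]   # 2 grid-positions per ksana rel. to blue

for k in KSANAS:
    print(k, blue[k], green[k], red[k])
# Speed of red rel. to blue = (green rel. blue) + (red rel. green) = 1 + 1.
```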
However, according to Einstein’s Theory of Special Relativity, if light is emitted from the green blocks and the red simultaneously (i.e. within the same ksana), when we eventually pick up the signals at the blue block, compare distances and so on, we do not judge the speed of the light ray from red to be any different from the speed of the light ray from green.  This is extremely unexpected  but will have to be accepted, not because modern physics textbooks say this is so, but because countless actual experiments have (allegedly) failed to detect any difference in the observed speed of light irrespective of the relative movement of the source. Instruments have measured the speed of a light beam projected from an aircraft moving towards the observer and the speed of a light beam projected backwards from the tail of an aircraft moving away ─ and there is no appreciable difference (within experimental error). To see how astonishing this is, imagine a fighter aircraft gunning you down : if it is travelling towards you, the bullets will hit you rather sooner than if you were both travelling at around the same speed. And if the fighter aircraft is moving away from you faster than the ‘muzzle velocity’ of the machine-gun, the bullets from the tail-gun will never reach you at all! Light clearly behaves unlike material objects.
Assuming that Einstein’s prediction about the observed speed of light is substantially correct (which I believe), how can this anomaly be explained in terms of Ultimate Event Theory?  Certainly, there is nothing in my preliminary postulates or my original ‘universe model’ that would lead me to expect anything of the kind, quite the reverse. Since everything that has occurrence is composed of a finite number of ultimate events (the Axiom of Finitude) any and every apparently continuous burst of light is made up of so many individual ‘photonic events’. And the number of these events between two recognizable end-points is fixed once and for all. Also, I absolutely refuse to countenance the notion that the occurrence or not of an ultimate event depends on my personal state of motion or anything else pertaining to me since I consider this the worst kind of subjectivism. If we accept this, we have the absurd consequence that all sorts of things can be conjured into existence just by jumping into a train or a spaceship while they simply never happen at all for someone left behind on the ground !
It is true that I could account for the observed constancy of the photonic event-chain we call light by making the ultimate events themselves larger or smaller according to the relative motion of the observer and observed. But once again I am very reluctant to do this since the advantage of having truly elementary entities is that they have a minimum of attributes and these attributes (such as size) are fixed, are 'absolute'. It would be equivalent to making the size or charge of a proton changeable in differing situations in order to make certain observations come out right, something one would only wish to do if there were no alternative. The merit of the basic assumptions of Ultimate Event Theory is that they provide a comprehensible, simple framework (or so I would claim) and certainly the simplest and most reasonable assumption is to suppose that all ultimate events are of fixed size (supposing it makes any sense to talk of their having a size) and likewise that the positions available on the Locality are also of fixed size. And finally, for reasons of simplicity and also perhaps aesthetics, I insist on the 'ksana', the 'temporal' dimension of every event block, being of fixed size.
If I were stuck with a strictly continuous model of reality, I would now be in an impossible situation. But my Event Locality — which the reader may envisage as, very roughly, the equivalent of ‘Space/Time’ in normal physics — is radically discontinuous, that is, there are gaps. The Locality is not a continuum but a connected dis-continuum, at any rate that section of it that is available to ultimate events. To make Ultimate Event Theory square with Special Relativity (which I certainly consider desirable) the only possibility is to consider the ‘gaps’ between events, i.e. the ‘interval’ between co-existing grid-positions and also between successive grid-positions (i.e. between ksanas), as being ‘elastic’, ‘flexible’. These gaps are ‘non-metrical’, have no objective fixed extent and may thus function differently in different event-chains, or rather in the same event-chain envisaged from a different perspective (Note 3).
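As a purely illustrative sketch (in Python, and no part of the theory itself), one can picture the events of a chain as occupying fixed, numbered grid-slots while the ‘length’ assigned to the gaps between them is left as a free parameter : nothing about the events themselves changes when a different gap-value is adopted.

    # Illustrative only : event positions are fixed integer grid-slots,
    # but the 'length' assigned to each gap is not intrinsic to the chain.
    events = list(range(5))   # five ultimate events of a full chain

    def chain_duration(n_events, ksana_gap):
        # Total 'temporal length' of the chain if every gap between
        # successive events is assigned the (non-intrinsic) value ksana_gap.
        return (n_events - 1) * ksana_gap

    print(chain_duration(len(events), ksana_gap=1.0))   # 4.0 from one standpoint
    print(chain_duration(len(events), ksana_gap=2.5))   # 10.0 from another

The same five events, the same chain; only the value ascribed to the gaps differs.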

Now, it is possible to maintain the same gradient in the diagram by adjusting the lateral and vertical spacings. Suppose I increase the drift to the right of the red square to represent an increase in speed of the spaceship as perceived by me. Instead of the original speed of ‘one space to the right per ksana’ we have, say, ‘two spaces/ksana’, i.e. a steeper drift of the red square from ksana to ksana; a rough sketch of both patterns follows.
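The sketch below (mine, purely for visualisation) prints the two drift patterns as character grids, with ‘#’ marking the spot occupied by the repeating ultimate event at each ksana and ‘.’ an unoccupied grid-position :

    # Illustrative only : '#' = occupied spot, '.' = unoccupied grid-position.
    def print_chain(rate, ksanas=5, width=12):
        for k in range(ksanas):
            row = ['.'] * width
            row[rate * k] = '#'   # the event drifts 'rate' spaces per ksana
            print(''.join(row))
        print()

    print_chain(rate=1)   # original : one space to the right per ksana
    print_chain(rate=2)   # increased speed : two spaces per ksana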
However, if I compensate by spacing out the rows, representing the situation at successive ksanas, we have something more like the same drift pattern but with the rows spaced further apart.

The increased gap between rows, i.e. between successive ksanas,  corresponds to the famous ‘time dilation’ of Special Relativity.
There is, however, still an ‘extra space’ between the red squares in any row, a space which, by hypothesis, cannot be filled — since, if so, we would have something travelling faster than light, which (according to Einstein) cannot occur. If we want to keep the ‘one space per ksana’ as the maximum ‘speed’ (reappearance rhythm) we can adjust matters by ‘spacing out’ the grid-positions within each ksana, in effect by suppressing the extra black square. This gives a diagram in which the diagonal line of red squares has roughly the same slant as in my original diagram — the difference is due to the deficiencies of my computer graphics. Spacing out the black squares (which correspond to possible locations of ultimate events) is equivalent to a ‘space contraction’, also a standard alleged effect of Special Relativity.
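Continuing the same illustrative sketch, both adjustments can be imitated directly : printing an extra blank line after each row stretches the gap between ksanas (the ‘time dilation’), while suppressing every second grid-position merges the positions two-by-two, so that the event once again shifts exactly one (larger) space per ksana (the ‘space contraction’). None of this is part of the theory proper; it merely mimics the diagrams.

    # Illustrative only : the 'two spaces/ksana' chain redrawn with elastic gaps.
    def print_dilated(rate=2, ksanas=5, width=12):
        # An extra blank line after each row : the stretched ksana gap
        # makes the drawn slant shallower ('time dilation').
        for k in range(ksanas):
            row = ['.'] * width
            row[rate * k] = '#'
            print(''.join(row))
            print()

    def print_contracted(rate=2, ksanas=5, width=12):
        # Suppress every second grid-position (the 'extra black square') :
        # measured in the new, larger positions the event again shifts
        # exactly one space per ksana ('space contraction').
        for k in range(ksanas):
            row = ['.'] * width
            row[rate * k] = '#'
            print(''.join(row[::2]))

    print_dilated()
    print_contracted()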
It must be stressed that there is a significant difference between this model and that of Special Relativity, at least as commonly understood. While the ‘length’ and ‘duration’ of objects (event conglomerates) or trajectories (event chains) are, as in SR, dependent on relative states of motion (reappearance rates), the number of ultimate events in any event chain is not relative but is ‘absolute’. Every trajectory between two marker events will have associated with it an ‘Ultimate Event Number’ which is completely independent of states of motion or material constituents or anything else you like to mention. We will not normally know this number — though we will perhaps one day be able to make an informed guess, much as we can make an informed guess as to the number of molecules in a given piece of chalk — but it suffices to know that (according to the postulates of UET) this number exists and is unchangeable. I have enshrined this in one of the fundamental assumptions of the theory, the Axiom of Occurrence : “Once an ultimate event has occurrence, there is no way in which it can be altered or prevented from having occurrence : its occurrence is absolute.”
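The invariance of the Ultimate Event Number can be read off the same toy model : however the drawing of a chain is rescaled, the count of events in it never changes. A minimal check, under the same purely illustrative assumptions as before :

    # Illustrative only : rescaling the drawing never alters the event count.
    def chain(rate, ksanas=5, width=12):
        return [''.join('#' if c == rate * k else '.' for c in range(width))
                for k in range(ksanas)]

    def event_number(rows):
        return sum(row.count('#') for row in rows)

    original = chain(rate=1)
    speeded  = chain(rate=2)
    rescaled = [row[::2] for row in chain(rate=2)]   # the 'contracted' drawing

    print(event_number(original), event_number(speeded), event_number(rescaled))
    # -> 5 5 5 : the 'Ultimate Event Number' is independent of the rescaling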
It is not yet entirely clear to me what consequences this principle would have in actual physical situations. It would mean, for example, that the ‘event number’ for the voyage of the twin who goes off on a trip at nearly the speed of light would be the same for both brothers : simply travelling around is not going to conjure into existence events which do not exist for the stay-at-home brother. If the twin is indeed ‘younger’ when he returns (as Special Relativity predicts) this can only be because the gaps between the two twins’ biological events, such as heart beats, are relatively shorter or longer. Of course, no such experiment could ever be carried out and the case is not in fact covered by the theory of Special Relativity since accelerations are involved when the space traveller takes off, turns round and lands. However, there may be a way to test the independence of the event number in cases of the decay of particles entering the Earth’s atmosphere, the usual example given of differing time scales because of SR.        SH  26/11/12

___________________________________________

Note 1  Zeno of Elea pointed out the relativity of motion in his Paradox of the Chariot. What, he asked, was the ‘true’ speed of a chariot in a chariot race? This differed according to whether you adopted the standpoint of a spectator in the stand or that of one of the other charioteers in the race. By this thought experiment, Zeno seems to have been attempting to show that there is no such thing as ‘absolute motion’, since the perceived motion depends on the observer’s own state of motion. Newton was deeply bothered by the problem and came to the strange-sounding conclusion that ‘absolute motion’ could only mean the motion an object has “relative to the fixed stars”. But today we know that the position of the stars is not at all fixed, because of expansion, galactic rotation and so on.

Note 2 This ‘speed’ is, I must emphasize, purely illustrative. The actual speed, or rather ‘reappearance rate’, of a photonic event chain would be far, far greater than this : a photonic event would have to shift billions of grid positions to the right or left from ksana to ksana relative to a ‘stationary’ event-chain. It would be interesting to know if there is an event chain whose reappearance rate is exactly 1 space/ksana. This is, incidentally, not the slowest possible rate since, as will be discussed subsequently, I envisage reappearance rates where, during many ksanas, the event does not repeat at all. For example, there could be a reappearance rate of 1 space/7 ksanas or 1 space/100 ksanas and so on. This could be expressed as a reappearance rate of “1/7 spaces per ksana” but this would give the unwary the wrong impression : neither grid positions nor ksanas can be subdivided and that is that.
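As a small illustrative sketch of this point (again mine, not part of the theory), a reappearance rate can be represented as a pair of whole numbers rather than as a fraction, so that nothing is ever subdivided :

    # Illustrative only : a reappearance rate as a pair of whole numbers,
    # not a fraction of anything.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ReappearanceRate:
        spaces: int   # whole grid-positions shifted per cycle
        ksanas: int   # whole ksanas per cycle (the event may skip ksanas)

        def occurs_at(self, ksana: int) -> bool:
            # The event has occurrence only on whole multiples of its
            # cycle; there is no occurrence 'part way through' a cycle.
            return ksana % self.ksanas == 0

    slow = ReappearanceRate(spaces=1, ksanas=7)          # '1 space/7 ksanas'
    print([k for k in range(22) if slow.occurs_at(k)])   # [0, 7, 14, 21]

The pair (1, 7) is thus not the number 1/7 : it says that the event has occurrence every seventh ksana and shifts one whole grid-position when it does.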

Note 3  This solution would seem to be closest to the spirit of the Special Theory of Relativity. Einstein and his followers continually emphasize that an observer within a given inertial frame would notice nothing untoward : he or she would consider himself to be at rest and the other inertial frame to be ‘moving’. There is only ever a problem when, at a later date, the two observers, one within a given frame and one outside it and in a second inertial frame, confront each other with their meticulous observations. In my terms, each observation is ‘correct’ for the individual concerned because the gaps between events “have no intrinsic length” and thus may legitimately ‘vary’ according to the standpoint adopted. Are these discrepancies ‘real’ or simply how things appear? There is general agreement that the viewpoint of any and every ‘inertial observer’ is equally legitimate : “there is no truth of the matter” as Martin Gardner put it. I am not sure that this answer is sufficient but I cannot improve on it : I ‘resolve’ the problem by simply positing that the Locality is non-metrical, so that all sorts of different metrics can be legitimately ascribed to it provided we keep to the chosen metric.
But what is there between ultimate events? Just the emptiness between adjacent grid-positions. This may remind some readers of the so-called ‘ether’ in which all 19th-century physicists believed. It is commonly stated that Einstein ‘did away with the ether’ but this is not strictly true. In a quote that unfortunately I cannot at present trace, he said that “the ether has no physical properties but does have geometrical properties”. By this one should understand that the background ether does not, for example, offer any noticeable resistance to the passage of bodies through it but can (and does) affect the geometry of space-time and thus the direction of trajectories. After being banished from physics for sixty or so years, the ‘ether’ is well and truly back again, re-baptised the vacuum, and far from being empty it is vibrant with quantum energy. “The modern conception of the vacuum is one of a seething ferment of quantum field activity, with waves surging randomly this way and that. In quantum mechanics waves also have characteristics of particles — photons for the electro-magnetic field, gravitons for the gravitational field and so on — popping out of nowhere and disappearing again. Wave or particle, what one gets is a picture of the vacuum that is reminiscent, in some respects, of the ether. It does not provide a special frame of rest against which bodies may be said to move, but it does fill all of space and have measurable physical properties such as energy density and pressure.”    Paul Davies, New Scientist, 19 Nov 2011