Bertrand Russell bewails the passing of the scientific spirit with the Greeks and notes that from Plotinus (A.D. 204-70) onwards “men were encouraged to look within rather than to look without”. But there is much to be gained from looking within: the only thing is that the insights to be gained have not yet been turned into science and technology. Maybe their time has come or is coming.
India is a strange civilization since its leading thinkers seem not only to have considered what I call the Unmanifest as more important than the everyday physical world (the Manifest) but to have actually been more at home there. Nonetheless, lost within the dense thickets of abstruse Hindu and Buddhist speculation, there are ideas which may yet find their application, in particular the concept of dharma.
We think of Buddhism today as a philosophical religion that recommends non-violence and compassion but, admirable though such aims may be, they do not appear to have been at all the Buddha’s main concern, to judge by the development of the religion he founded during the six or seven centuries after his supposed life.

“The formula of the Buddhist Credo — which professedly contains the shortest statement of the essence and spirit of Buddhism — declares that Buddha discovered the elements of existence (dharmas), their causal connection, and a method to suppress their efficiency for ever. Vasubandhu makes a similar statement about the essence of the doctrine : it is a method of converting the elements of existence into a condition of rest, out of which they never will arise again.” Stcherbatsky, The Central Conception of Buddhism

The (Hinayana) Buddhist equivalent of Democritus’ terse statement “Nothing exists except atoms and void” would thus be something like
“Nothing exists except Nirvana, Karma and Dharma”

       Nirvana is the state of absolute quiescence which is the end and origin of everything.
     Karma (literally ‘activity’, ‘action’) almost always has a strong moral sense in Buddhism — “[It is] that kind of activity which has an ethical charge and which must give rise to a ‘retributionary’ reverberation at a later time” (Anacker, Works of Vasubandhu). To be karmic an act must first of all be deliberate and, secondly, must be the result of an intent to harm another sentient being, or of an intent to relieve suffering. Although the Buddha categorically affirmed freedom of will, Buddhist psychology, known as Abhidharma, naturally accepted that most of our daily actions, such as eating, sleeping and so on, are ‘quasi-automatic’ and do not bring about either reward or punishment in this or a future life. But this concentration on moral acts and their consequences merely underlines the whole aim and approach of Buddhism as a religion/philosophy, which is to bring to an end the suffering that is an inevitable part of human (and animal) existence. It would thus seem perfectly legitimate to extend the sense of karma to causal processes in general — “the law of karma … is only a special case of causality” (Stcherbatsky, BL). Had the Buddhist thinkers wished to develop a physical as opposed to a spiritual/psychological belief system, they would most likely have made karma (in this extended sense) a prominent feature of such a system. Not only did they abstain from doing so; they would have regarded excessive interest in the physical world as altogether undesirable since it did not further enlightenment but on the contrary tended to obstruct it.
     For (Hinayana) Buddhist thinkers of the time dharma(s) are the ephemeral but constantly reappearing ‘elements’ which make up absolutely everything we think of as real, material or immaterial. All alleged ‘entities’ such as Matter, Soul, the universe, individual objects, persons  &c. &c. are not true entities but merely bundles (skandhas) or sequences (santanas) of dharmas. Hence the first line of my poem (see Note)

                      “Just elements exist, there is no world”

     Although the subsequent Mahayana (‘Greater Vehicle’) Buddhist thinkers denied the ‘absolute reality’ of the ‘dharma(s)’, the Hinayana thinkers of this era (Vasubandhu, Dignaga, Dharmottara &c.) emphatically affirmed their reality — but with the proviso that our ‘normal’ perceptions are hopelessly distorted by irrelevant intellectual additions that are delusory. The dharma(s) have sometimes been compared to the noumena or ‘Things-in-themselves’ of Kant, but they are in fact what Kant would have called phenomena, but phenomena purified by a (usually) long and painful process of demystification and deintoxication. The Hinayana philosophical approach is all on its own in claiming that knowledge of what is ‘really real’ does not at all entail fleeing from the physical world into a transcendent Neverneverland but on the contrary recovering the pristine world of ‘direct sensation’ — “in pure reality there is not the slightest bit of imaginative construction” (Stcherbatsky, BL).

   This is all very well, but what exactly are the dharma(s) and to what extent can they be made to form the basis of a physical theory? (Although the plural of dharma is made by adding an ‘s’, I cannot quite accustom myself to doing this.) Being irreducibles, there is nothing more elementary in terms of which the dharma(s) can be defined. However, what can be said, summarizing the conclusion of Stcherbatsky’s excellent book and other sources, is that dharma(s):

1. are entirely separate from one another;
2. have no duration;
3. tend to congregate in bundles;
4. are subject to a causal force which makes them ‘co-operate’ with one another;
5. are in a perpetual state of commotion.

    I draw certain far-reaching, possibly fanciful, conclusions from (1-5) above — or, if you like, I interpret them in accord with my own independent thought-experience.
    (1) to my mind implies that there are gaps between dharma(s) and thus that there are no continuous entities whatsoever — with the exception of nirvana which one could (perhaps) just conceivably equate with the quantum vacuum.
    (1) in combination with (2) means that there is incessant change (replacement of one dharma by another) but strictly speaking no motion, no continuous motion that is. What we call motion is nothing but consecutive dharma(s) which are so close to each other that the mind merges them together just as it does the separate images on a cinema screen. “Momentary things” writes Kamalasila, “cannot displace themselves because they disappear at the very place at which they appeared”.
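The cinema-screen comparison can be turned into a small runnable sketch (a toy illustration of my own, not anything found in the Buddhist sources). Nothing in it is ever displaced: at each tick a brand-new momentary ‘event’ appears one cell further along and vanishes where it appeared, yet the succession of printed frames reads as a single object in motion.

```python
# Toy model of 'motion without displacement': at each tick a fresh
# momentary event appears at the place where it occurs and vanishes
# there; only the succession of frames suggests a moving object.

GRID_WIDTH = 10

def frames(n_ticks, start=0):
    """Yield one frame per tick; each frame holds one brand-new event."""
    for tick in range(n_ticks):
        position = start + tick  # where this tick's event has occurrence
        yield ["X" if cell == position else "." for cell in range(GRID_WIDTH)]

for frame in frames(5):
    print(" ".join(frame))
```

Read one row at a time and you see only isolated, momentary occurrences; read the rows in succession and the mind supplies the ‘motion’, exactly as it does with the separate images on a cinema screen.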
    (3) explains, or rather describes, the appearance of what we consider to be matter : it is the result of the ‘combining’ — the Indian sources say ‘co-operating’ — tendencies of the dharma(s).
    (4) recognizes that what has occurrence is subject to certain formal ‘laws’, i.e. events do not usually occur at random and certain events are invariably followed by specific other events with which they are regularly associated (‘This being, that arises’).
    It is difficult to know what to make of (5), the claim that the dharma are ‘turbulent’, ‘agitated’ — though this is perhaps the most important characteristic of the dharma(s). (The Buddha was doubtless thinking of the great difficulty of ‘quietening’ the mind during meditation and, for that matter, during all conscious states.) Now, air or water can be turbulent — what does this mean? Physically, if we are to believe the current scientific world view, it means that the microscopic molecules that make up what we loosely call ‘air’ or ‘water’ are rushing about in a random manner, colliding violently with each other. This state of commotion is to be contrasted with the state of affairs when everything is ‘still’ — though, according to contemporary science, the molecules of a fluid are still moving about randomly even when the fluid is ‘in equilibrium’ (albeit less violently). There is, interestingly, no mention in Buddhist literature of the dharma(s) actually colliding with one another even when they are collected into bundles (‘skandhas’). So the ‘turbulence’ should perhaps be interpreted as the tendency of these ‘elements’ to re-form, or rather to bring into momentary existence other similar dharma(s) until, eventually, when finally pacified, they cease altogether to conglomerate in space or to persist in time: Vasubandhu’s “condition of rest from which they never shall arise again”.
    Although the following conception is much more Hindu than Buddhist in spirit, and would have been strenuously rejected by the Buddhists who developed the dharma theory, I personally envisage the ‘turbulence’ as pertaining to an invisible, all-pervasive substratum: the dharma(s) are specks of turbulence on the surface of a sort of cosmic fluid, foam on an invisible ocean. When the turbulence dies away, the ocean returns to its original state of quiet — until the next cycle commences. Where have the dharma(s) gone to? Nowhere. What we call ‘matter’ and ‘life’ are nothing more (nor less) than a temporary surface film on this enduring ‘sub-stance’. The universe is a knot tied in a (non-material) string : it is pointless to ask where the knot has gone to when the knot is finally untied.

S.H. 4/10/19

Abbreviations:     BL  refers to Buddhist Logic Vol. I by Stcherbatsky (Dover Publications 1962, an “unabridged republication of the work first published by the Academy of Sciences of the U.S.S.R., Leningrad, circa 1930”).

Note  The full version is:

Just elements exist, there is no world,
Events emerge from nowhere, blossom, fall,
just elements exist, there is no world,

Events emerge from nowhere, blossom, fall,
Like hail upon the earth or glistening froth,
Just elements exist, there is no world.

 Like hail upon the earth or glistening froth,
The dharma form and open, scatter, burst,
Each moment brings forth others, vanishes.

The dharma form and open, scatter, burst,
Glistening the froth appears and thunderous the hail,
Just elements exist, there is no world.

Glistening the froth appears and thunderous the hail,
As ceaselessly the living dharma form,
Each moment brings forth others and then vanishes.

from Origins by Sebastian Hayes

A completely axiomatic theory purports to make no appeal to experience whatsoever though one doubts whether any such expositions are quite as ‘pure’ as their authors claim. Even Hilbert’s 20th century formalist version of Euclid, his Grundlagen der Geometrie, has been found wanting in this respect ─ “A 2003 effort by Meikle and Fleuriot to formalize the Grundlagen with a computer found that some of Hilbert’s proofs appear to rely on diagrams and geometric intuition” (Wikipedia).

What exactly is the axiomatic method anyway? It seems to have been invented by the Greeks and in essence it is simply a scheme that divides a subject into :
(1) that part which has to be taken for granted in order to get started at all ─ in Euclid the Axioms, Definitions and Postulates (Note 1); and
(2) that part which is derived by valid chains of reasoning from the first, namely the Theorems ─ Heath calls them ‘Propositions’.
A strictly deductive, axiomatic presentation of a scientific subject made perfect sense in the days when Western scientists believed that an all-powerful God had made the entire universe with the aid of a handful of mathematical formulae but one wonders whether it is really appropriate today when biology has become the leading science. Evolution proceeds via random mutation plus ruthless selection and human societies and/or individuals often seem to owe more to happenstance and experience than reasoning and logic. Few, if any, important discoveries in mathematics have been strictly deductive: I doubt if anyone ever sat down of an evening with the Axioms of von Neumann Set Theory in order to deduce something interesting and original, and certainly no one ever learned mathematics that way (except possibly a robot). For all that, the structural simplicity and elegance of the axiomatic method remains extremely appealing and is one of the reasons why Euclid’s Elements and Newton’s Principia are among the half dozen best-selling books of all time ─ though few people read them today.
Apart from the axioms which are an integral part of a science or branch of mathematics, there exist also certain methodological principles (or prejudices) which, properly speaking, don’t belong to the subject, but nonetheless determine the general approach and overshadow the whole work. These principles should, ideally, be stated at the outset though they rarely are.

There are two principles that I find I have used implicitly or explicitly throughout my attempts to kick-start Ultimate Event Theory. The first is Occam’s Razor, or the Principle of Parsimony, which in practice means preferring the simplest and most succinct explanation ‘other things being equal’. According to Russell, Occam, a mediaeval logician, never actually wrote that “Entities are not to be multiplied without necessity” (as he is usually quoted as stating), but he did write “It is pointless to do with more what can be done with less” which comes to much the same thing. The Principle of Parsimony is uncontroversial and very little needs to be said about it except that it is a principle that is, as it were, imposed on us by necessity rather than being in any way ‘self-evident’. We do not really have any right to assume that Nature always chooses the simplest solution: indeed it sometimes looks as if Nature enjoys complication just for the sake of it. Aristotle’s Physics is a good deal simpler than Newton’s and the latter’s much easier to visualize than Einstein’s: but the evidence so far seems to favour the more complicated theory.
The second most important principle that I employ may be called the Principle of Parmenides, since he first stated it in its most extreme form,
       “If there were no limits, there would be nothing”.
In the context of Ultimate Event Theory this often becomes:
        “If there were no limits, nothing would exist, except (possibly) the Locality itself”
and the slightly different “If there were no limits, nothing would persist”.

      This may sound unexceptional but what I deduce from this principle is highly controversial, namely the necessity to expel the notion of actual infinity from science altogether, and likewise from mathematics (Note 2). The ‘infinite’ is by definition ‘limitless’ and so falls under the ban of this very sensible principle. Infinity has no basis in our sense experience since no one, with the exception of certain mystics, has ever claimed to have ‘known’ the infinite. And mystical experience, though perfectly valid and apparently extremely enjoyable, obviously requires careful assessment before it can be introduced into a theory, scientific or otherwise. In the majority of cases, it is clear that what mystics (think they) experience is not at all what mathematicians mean by the sign ∞ but is rather an alleged reality which is ‘non-finite’ in the sense that any form of measurement would be totally inappropriate and irrelevant. (It is closer to what Bohm calls the Implicate Order as opposed to the Explicate Order ─ unhappy names for a very useful dichotomy.) In present-day science, ‘infinity’ simply functions as a sort of deus ex machina (Note 3) to get one out of a tight spot, and even then only temporarily. As far as I know, there is not a scrap of evidence to suggest that any known process or observable entity actually is either ‘infinitely large’ or ‘infinitely small’. All energy exchanges are subject to quantum restrictions (i.e. come in finite packages) and all sorts of entities which were once regarded as ‘infinitely small’, such as atoms and molecules, can now actually be ‘seen’, if only via an electron tunnelling microscope. Even the universe we live in, which, for Newton and everyone else alive in his time, was ‘infinite’ in size, is sometimes thought today to have a finite current extent and is certainly thought to have a specific age (around 13.8 billion years).
All that is left as a final bastion of the infinity delusion is space and time, and even here one or two noted contemporary physicists (e.g. Lee Smolin and Fay Dowker) dare to suggest that the fabric of Space-Time may be ‘grainy’. But enough on this subject which, in my case, tends to become obsessive.
What can an axiomatic theory be expected to do? One thing it cannot be expected to do is to give specific quantitative results. Newton showed that the law of gravitation had to be an inverse square distance law but it was some time before a value could be attributed to the indispensable gravitational constant, G. And Eddington quite properly said that we could conclude simply by reasoning that in any physical universe there would have to be an upper bound for the speed of a particle or the transmission of information, but that we would not be able to deduce by reasoning alone the actual value of this upper bound (namely c ≈ 3 × 10⁸ metres/second).
It is also legitimate, even in a broadly axiomatic presentation, to appeal to common experience from time to time, provided one does not abuse this facility. For example, de Sitter’s solution of Einstein’s field equations could not possibly apply to the universe we (think we) live in, since his solution required that such a ‘universe’ would be entirely empty of matter ─ which we believe not to be the case.
One would, however, require a broadly axiomatic theory to lead, by reasoning alone, to some results which, as it happens, we know to be correct, and also, if possible, to make certain other predictions that no rival theory had made. And a theory which embodies a very different ‘take’ on life and the world might still prove worthwhile stating even if it is destined to be promptly discarded: it might prepare the ground for other, more mature, theories by pointing in a certain unexpected direction. Predictive power is not the only goal and raison d’être of a scientific theory: the old Ptolemaic astronomy was for a long time perfectly satisfactory as a predictive system and, according to Koestler, Copernicus’s original heliocentric system was no simpler. As a piece of kinematics, the Ptolemaic Earth-centred system was adequate and, with the addition of more epicycles, could probably ‘give the right answer’ even today. However, Copernicus’s revolution paved the way for Galileo’s and Newton’s dynamical world-view in which the movements of planets were viewed in terms of applied forces, and so proved far more fruitful.
It is also worth saying that a different world-view from the current established one may remain more satisfactory with respect to certain specific areas, while being utterly inadequate for other purposes. If one is completely honest, one would, I think, have to admit that the now completely discredited magical animistic world-view has a certain cogency and persuasiveness when applied to aberrant human behaviour: this is why we still talk meaningfully of charm, charisma, inspiration, luck, jinxes, fascination, fate ─ concepts that belong firmly to another era.
Finally, the world-views of other cultures and societies are not just historical curiosities: people in these societies had different priorities and may well have noticed, and subsequently sought to explain, things that modern man is unaware of. Ultimate Event Theory has its roots in the world-views of societies long dead and gone: in particular the world-view of certain Hinayana Buddhist monks in Northern India during the first few centuries of our era, and that of certain Native Amerindian tribes like the Hopi as reflected in the structure of their languages (according to the Sapir-Whorf hypothesis).

                                                                                                                                SH  26/09/19

Notes :
Note 1  The status of the fourth and last Euclidian subsection, the Definitions, is not entirely clear: they were supposed to be ‘informative’ only in the manner of an entry in a dictionary and “to have no existential import”. On the other hand, Russell concedes that “definitions are often nothing more than disguised axioms”.

Note 2 This is in line with Poincaré’s categorical statement, “There is, and can be, no actual infinity”. Gauss, often considered the greatest mathematician of all time, said something similar.

Note 3 A deus ex machina was, in Greek tragedy, a supernatural being who was lowered onto the stage by a sort of crane and whose purpose was to ‘save’ the hero or heroine when no one else could.
Larry Constantine, in an insightful letter to the New Scientist (13 Aug 2011 p. 30), wrote : “Accounting for our universe by postulating infinite parallel universes or explaining the Big Bang as the collision of “branes” are not accounts at all, but merely ignorance swept under a cosmic rug — a rug which itself demands explanation but is in turn buried under still more rugs.”



Events rather than Things

Descartes kicked off modern Western philosophy with his thought experiment of deciding what he absolutely couldn’t disbelieve in. He concluded that he could, momentarily at any rate, disbelieve in all sorts of things, even the existence of other people, but that he couldn’t disbelieve in the existence of himself, the ‘thinking being’. Now for anyone who has done meditation (and for some who haven’t), Descartes is way off. It really is possible to disbelieve in one’s own existence if by this we mean the ‘person’ who was born at such and such a date and place, went to such and such a school, and so on (Note 1). This ‘entity’ simply drifts away once you are alone, reduce the input from the outside world and confine yourself strictly to your present sensations. Indeed, it is often more difficult to believe that such a ‘being’ ever did exist than to doubt its existence!
However, what you can’t dismiss even when meditating in isolation in a dark quiet room is the idea that there are some sort of events continually occurring, call them mental or  physical or psychic (at this level such distinctions have little meaning). My version of the cogito ergo sum is thus, “There are thought/sensations, therefore there is something”. Sensations and thoughts are not physical objects but events of a particular kind, so why not take the concept of the event as primary and see where one gets to from there.
Moreover, one can at once draw certain conclusions. There must seemingly be a ‘somewhere’ for these sensations/thoughts to occur just as there must be a location for extended bodies. We require a ‘place’: let us call it the Locality. There is, however, no obligation to identify this ‘place where mental/physical events are occurring’ as the head (or brain) of René Descartes or of Sebastian Hayes (the author of the present pamphlet) and to rashly conclude, as Descartes does, that such a person necessarily exists. Nor is there any need just yet to identify the Locality with modern Einsteinian ‘Space-Time’ (though clearly for some people there is an irresistible temptation to do so). A second deduction, or rather observation, is that these mental/physical events do not occur ‘all at once’; they come ‘one after the other’, i.e. they are successive events.
A further question that requires settling is whether these fleeting thought/sensations are connected up in some way. This is not so easy to answer. In some cases quite clearly a certain thought does give rise to another in much the same way as a certain physical impulse triggers an action. But there also seem to be cases when thought/sensations simply emerge from nowhere and drift away into nowhere, i.e. appear to be entirely disconnected from neighbouring events. The first category, the thoughts that follow each other according to a recognizable pattern, naturally leads us to believe in some form of Causality, but there is reason to believe that it is not always operative.
All this seemed enough to make a start. I had a primary entity, the Event — primary because I couldn’t disbelieve in it — and, following closely after it in the sequence of ideas, the notions of an Event-Locality and of an Ordering of Events or Event-Succession. Finally, some causal principle linking one thought/event to another was needed, which I eventually baptised Dominance, partly to emphasize the (usually) one-sided rapport between two or more events but also to stress that a force is at work. Today, the notion of a binding causal connection between disparate events has been largely replaced by the much weaker statistical concept of correlation; indeed there is a strong tendency in contemporary scientific thought to expel both causality and force from physics altogether.

What is an Event?

Modern axiomatic systems usually leave the basic notions, such as ‘lines’, ‘points’ &c., undefined for the good reason that, if they really are fundamental, there is nothing more basic in terms of which they can be described. At first glance this seems reasonable enough but the practice has always struck me as being rather deceitful. The authors of new-fangled geometries such as Hilbert knew perfectly well that they could take for granted the reader’s prior knowledge of what a line or a point is — so why not say so? My basic concept, the event, cannot be defined using other concepts that are more fundamental, but what I can do is to openly appeal to the ‘intuitive’, or rather experiential, knowledge that people have of ‘events’ while striving to make this ‘prior knowledge’ more precise.
So what is an event? ‘Something that happens’, an ‘occurrence’… It is easier to say what it is not. An event is not a thing. Why not? Because things are long-lasting, relatively permanent. An event is punctual; it is not lasting, not permanent; it is here and it is gone, never to be experienced again. And it seems to have more to do with time (in the sense of succession) than space (in the sense of extension). An event is usually pinpointed by referring to events of the same type that happened before or after it, rather than by referring to events that happened alongside it. The Battle of Stalingrad came after the fall of France and before the Normandy invasion: the fighting that was going on in parts of Russia other than Stalingrad is not usually mentioned. An event is ‘entire’, ‘all of a piece’, ‘has no parts’; it is not a process and has no inner development since there is no ‘time’ (duration) for it to develop; it is here and then gone. The decline-and-fall of the Roman Empire is not an event.
An important consequence is that events cannot be tampered with ─ once they have happened, they have happened.

    “The moving finger writes and having writ,
Moves on, nor all thy piety nor wit
Can lure it back to cancel half a line
    Nor all thy tears wash out a word of it”

But objects, since they are more spread out in time, are alterable: they can be expanded, diminished, painted over, vandalized, restored, bought and sold; and likewise individuals can change for the better or worse, otherwise life would not be worth living.
Events also seem to be more intimately involved with causality than things. The question “Why is that tree there?”, though by no means nonsensical, sounds somewhat peculiar. But “Why did that branch break?” is a natural question to ask. Why indeed. And, as stated, events usually appear to be causally connected; we feel them to be very strongly bonded to specific other events, which is why we look for ‘cause-and-effect’ pairs of events.
To sum up: An event is punctual, sequential, entire, evanescent, irrevocable, and usually dependent on earlier events.

Ultimate Events

But here we come across a problem.
Although an event such as a battle, an evening out, a chance meeting with a friend, even a fall, is perceived as a ‘single item’, as being entire — otherwise we would not call it an event — it is obvious that any event you like to mention can be subdivided into a myriad of smaller events. Even a blow with a hammer, though treated in mechanics as an impulsive force and thus as having no duration to speak of, is not in point of fact a single event since, thanks to modern technology, we can take snapshots showing the progressive deformation of the object struck.
So, are we to conclude that all events are in fact composite? This is, I suppose, a permissible assumption but it does not appeal to me since it leads at once to infinite regress. It is already bad enough having to treat ‘space’ as ‘infinitely divisible’, as the traditional mathematical treatment of motion assumes it to be. But it is much worse to suppose that any little action we make is in reality made up of an ‘infinite’ number of smaller events. I certainly don’t want to go down this path and so I find myself obliged at the very outset to introduce an axiom which states that there are certain events which cannot be further decomposed. I name these ultimate events and they play much the same role in Eventrics as atoms once did in physical theory.
Ultimate events, if they exist at all (and I am convinced they do), must be very short-lived indeed since there are many physical processes which are known to take only a few nanoseconds and any such process must contain more than one ultimate event. Perhaps ultimate events will remain forever unobservable and unrecordable in any way, though I doubt this since the same was until recently said of atoms prior to the invention of the electron tunnelling microscope. Today it is possible to count the atoms on the surface of a piece of metal, and sheets of graphene a single atom thick have either already been manufactured, or very soon will be. I can easily foresee that one day we will have the equivalent of Avogadro’s number for standard bundles of ultimate events. Whether or not this will come to pass, what we can do right now is to assume that all the features that we attribute to ordinary events, but which are only approximately true of them, are strictly true of ultimate events. Thus ultimate events really are punctual, all of a piece, have no parts and so on.
Assuming that a macroscopic event is made up of a large number of ultimate events, there must seemingly be something that keeps the ultimate events separate, i.e. stops them fusing. There is here a further choice. Are the ultimate events stacked up tightly against each other so that their extremities touch, as it were, or are they separated by gaps? Almost all thinkers who have taken the concept of ‘atoms of time’ seriously have opted for the first possibility but it does not appeal to me; indeed I find it implausible. If ultimate events (or chronons) have a sort of skin, as cells apparently have, this would imply that there is at least a rudimentary structure, an ‘inside’ and an ‘outside’, to an ultimate event. This seems an unnecessary and, to me, rather artificial assumption; also there are advantages in opting for the second alternative that will only become apparent later. At any rate, I decided from the very beginning to assume that there are gaps between ultimate events, which means that bundles and streams of macroscopic events are not just made up of discrete micro-entities (ultimate events) but are discontinuous in a very strong sense. This is an extremely important assumption and it applies right across the board. Since everything is (by hypothesis) made up of ultimate events, it means that there are no truly continuous physical entities whatsoever, with the single exception of ultimate events themselves (since they are entire by definition) and (possibly) the Locality itself. As the philosopher Heidegger put it in a memorable phrase, “Being is shot through with nothingness”.

A  (very) Rough Visual Image

Many of the early Western scientists had a clear mental picture of solid bodies knocking into each other like billiard balls and, reputedly, Newton had a Eureka moment on seeing an apple falling to the ground in the orchard of the family farm (Note 1). Such mental pictures, though they do not always stand up to close scrutiny, have nonetheless been extremely helpful (as well as misleading) to scientists and philosophers in the past. Today abstraction is the name of the game but I suspect that many a pure mathematician employs crude images on the sly when no one is looking — some brave spirits even admit to doing so. I think it is better to declare one’s mental imagery from the outset. I picture to myself a sort of grid extending in all possible directions, or, better, a featureless expanse which reveals itself to be such a grid as soon as, or just before, an event ‘has occurrence’. Moreover, I imagine an ultimate event completely filling a grid-cube or grid-disc so that there is no room for any other ultimate event at this spot. This is the image that comes to mind when I say to myself, “This particular event has occurrence there and nowhere else”.
I now stipulate that a ‘spot’ of this grid is either occupied or empty but not both at once. This might seem obvious but it is nonetheless worth stating : it is the equivalent of the logical Law of Non-Contradiction but applied to events. No kind of prediction system would be much use to anyone if, say, it predicted that there would be an eclipse of the moon at a particular place and time and that there would simultaneously not be an eclipse at the same spot. One might reasonably object that Quantum Mechanics with its superposition of states does not verify this principle, but that is precisely why Quantum Mechanics is so worrisome (Note 3).
Thirdly, I assume that once a square of the grid is occupied it remains occupied ‘forever’. This is merely another way of saying, “What has happened has happened”, and I doubt if many people would quarrel with that. It is not possible to rewrite the (supposed) past because such events are not accessible to us and, even if they were, they could not be tampered with : there is no way to un-occur an event, or so I at any rate believe.
Finally, for the sake of simplicity, I assume to begin with that all ultimate events are the ‘same size’, i.e. occupy a spot of equivalent size on the Locality.

Axioms of Ultimate Event Theory

Putting these last assumptions together, along with my requirement that every occurrence can be decomposed into so many ultimate events and my requirement that there must be some sort of interconnectedness between certain events, we have a set of axioms, i.e. assertions which it is not necessary or possible to ‘prove’ — you either take them or leave them. The whole art of finding the right axioms is to choose those that seem the most ‘reasonable’ (least far-fetched) but which readily give rise to non-obvious deductions. Ultimately the validity of the axioms depends on what one can make them do (Note 4).
Ultimate Event Theory, or my contemporary version of it, thus seems to require the following set of Definitions and Axioms :

 FUNDAMENTAL ITEMS:    Events, the Locality, Succession, Dominance.

An ultimate event is an event that cannot be further decomposed.
The Locality is the connected totality of all spots where ultimate events may have occurrence.
Dominance is an influence that certain ultimate events exert on other events and on (repetitions of) themselves.


Axiom 1. Everything that has occurrence is made up of a finite number of ultimate events.

Axiom 2. All ultimate events have the same extent, i.e. occupy spots on the Locality of equivalent size.

Axiom 3. A spot on the Locality may receive at most one ultimate event, and every ultimate event that has occurrence occupies one, and only one, spot on the Locality.

Axiom 4. A spot on the Locality is either full, i.e. occupied by an ultimate event, or it is empty, but not both at once.

Axiom 5. If an ultimate event has occurrence, there is no way in which it can be altered or prevented from having occurrence.

Axiom 6. Only events that have occurrence on the Locality may exercise dominance over other events.

Axiom 7. There are gaps between successive ultimate events.
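For the reader who thinks in code, these stipulations can be sketched as a toy data structure. This is purely my own illustration: the class name `Locality`, the method names and the integer grid coordinates are inventions for the sketch, not part of the theory.

```python
# Toy model of the Locality: a write-once collection of 'spots'.
# Class name, method names and coordinates are illustrative only.

class Locality:
    def __init__(self):
        self._spots = {}  # spot -> event label; absence means 'empty'

    def is_occupied(self, spot):
        # A spot is either full or empty, never both at once.
        return spot in self._spots

    def occupy(self, spot, event):
        # At most one ultimate event per spot, and an event that has
        # occurrence can never be altered or removed ("what has
        # happened has happened").
        if spot in self._spots:
            raise ValueError("spot already occupied; occurrence cannot be altered")
        self._spots[spot] = event

loc = Locality()
loc.occupy((0, 0, 0), "e1")   # an ultimate event has occurrence at one spot
assert loc.is_occupied((0, 0, 0)) and not loc.is_occupied((1, 0, 0))
```

The write-once dictionary captures the stipulations that a spot holds at most one ultimate event, is full or empty but never both, and that an occurrence, once it has happened, cannot be undone.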

                                                                                                                                            SH  25/09/19

Note 1  “Introspective experience [according to Buddhists] shows us no ‘ego’ at all and no ‘world’ but only a stream of all sorts of sensations, strivings, and representations which together constitute ‘reality'”  (Max Weber, The Religions of India)

Note 2 Historians are often embarrassed by anecdotes about Newton seeing an apple fall and wondering whether there might be a universal ‘force of attraction’. But there is plenty of good evidence that the story is based on fact though Newton gave slightly different accounts of it in later life as we all tend to do about really important events. It is notable that (arguably) the three greatest scientists of all time, Archimedes, Newton and Einstein all had Eureka moments.

Note 3   If one accepts the original Schrödinger schema of Quantum Mechanics, the wave function itself does not model ‘events’ since whatever is going on prior to the collapse of the wave function entirely lacks the specificity and decisiveness of events. So there are apparently ‘real entities’ that are not composed of ultimate events. But we lack appropriate terms to deal with such semi-realities: ‘probability’ is far too weak a term, ‘potentiality’ is in every way preferable. Contemporary scientific parlance studiously avoids the concept of ‘potentiality’, so important for Aristotle, because of the dead weight of positivism — but the concept is due for a revival.

Note 4   Since sketching out the bare bones of this theory some thirty years ago, I have somewhat lost faith in the appropriateness of the axiomatic method but, until something better is available, one continues to use it. We enter the drama of life in medias res and I am inclined to think that, like human societies or animal species, the universe itself ‘makes things up as it goes along’, as it were, subject to some very general fundamental constraints of a logical nature. Such an Experimental Universe Theory is not yet the accepted contemporary scientific paradigm by a long shot but we seem to be moving steadily towards it.




The Rise and Fall of Atomism

So-called ‘primitive’ societies by and large split the world into two, what one might call the Manifest (what we see, hear &c.) and the Unmanifest (what we don’t perceive directly but intuit or are subliminally aware of). For the ‘primitives’ everything originates in the Unmanifest, especially drastic and inexplicable changes like earthquakes, sudden storms, avalanches and so on,  but also more everyday but nonetheless mysterious occurrences like giving birth, changing a substance by heating it (i.e. cooking), growing up, aging, dying. The Unmanifest is understandably considered to be much more important than the Manifest — since the latter originates in the first but not vice-versa — and so the shaman, or his various successors, the ‘sage’, ‘prophet’, ‘initiate’ &c. claims to have special knowledge because he or she has ready access to the Unmanifest which normal people do not.  The shaman and more recently the priest is, or claims to be, an intermediary between the two realms, a sort of spiritual marriage broker. Ultimately, a single principle or ‘hidden force’ drives everything, what has been variously termed in different cultures mana, wakanda, ch’i ….  Ch’i is ‘what makes things go’ as Chuang-tzu puts it, in particular what makes things more, or less, successful. If the cheetah can run faster than all other animals, it is because the cheetah has more mana and the same goes for the racing car; a warrior wins a contest of strength because he has more mana, a young woman has more suitors because of mana and so on.
Charm and charisma are watered down modern versions of mana and, like mana, are felt to originate in the beyond, in the non here and now, in the Unmanifest. This ancient dualistic scheme is far from dead and is likely to re-appear in the most unexpected places despite the endless tut-tutting of rationalists and sceptics; as a belief system it is both plausible and comprehensible, even conceivably contains a kernel of truth. As William James put it, “The darker, blinder strata of character are the only places in the world in which we catch real fact in the making”.
Our own Western civilization, however, is founded on that of Ancient Greece (much more so than on ancient Palestine). The Greeks, the ones we take notice of at any rate, seem to have been the first people to have disregarded the Unmanifest entirely and to have considered that supernatural beings, whether they existed or not, were not required if one wanted to understand the physical universe: basic natural processes properly understood sufficed (Note 1). Democritus of Abdera, whose works have unfortunately been lost, kicked off a vast movement which has ultimately led to the building of the Large Hadron Collider, with his amazing statement, reductionist if ever there was one: “Nothing exists except atoms and void”.

Atoms and void, however, proved to be not quite enough to describe the universe : Democritus’s whirling atoms and the solids they composed when they settled themselves down were seemingly subject to certain ‘laws’ or ‘general principles’ such as the Law of the Lever or the Principle of Flotation, both clearly stated in quantitative form by Archimedes. But a new symbolic language, that of higher mathematics, was required to talk about such things since the “Book of Nature is written in the language of mathematics” as Galileo, a Renaissance successor and great admirer of the Greeks, put it. Geometry stipulated the basic shapes and forms to which the groups of atoms were confined when they combined together to form regular solids — and so successfully that, since the invention of the high-definition microscope, ‘Platonic solids’ and other fantastical shapes studied by the Greek geometers can actually be ‘seen’ today embodied in the arrangement of molecules in rock crystals and in the fossils of minute creatures known as radiolarians.
To all this Newton added the important notion of Force and gave it a precise meaning, namely the capacity to alter a body’s state of rest or constant straight-line motion, either by way of contact (pushes and pulls) or, more mysteriously, by ‘gravitational attraction’ which could operate at a distance through a vacuum. Nothing succeeds like success: by the early nineteenth century Laplace had famously declared that he had “no need of that hypothesis” — the existence of God — to explain the movements of heavenly bodies, while Helmholtz later declared that “all physical problems are reducible to mechanical problems” and thus, in principle, solvable by applying Newton’s Laws. Why stop there? The dreadful implication, spelled out by maverick thinkers such as Hobbes and La Mettrie, was that human thoughts and emotions, maybe life itself, were also ultimately reducible to “matter and motion” and that it was only a question of time before everything would be completely explained scientifically.
The twentieth century has at once affirmed and destroyed the atomic hypothesis. Affirmed it because molecules and atoms, at one time considered by most physicists simply as useful fictions, can actually be ‘seen’ (i.e. mapped precisely) with an electron tunnelling microscope, and substances ‘one atom thick’ like graphene are actually being manufactured, or soon will be. However, atoms have turned out not to be indestructible or even indivisible as Newton and the early scientists supposed. Atomism and materialism have, by a curious circuitous route, led us back to a place not so very far from our original point of departure since the 20th century scientific buzzword, ‘energy’, has disquieting similarities to mana. No one has ever seen or touched ‘Energy’ any more than they have ever seen or touched mana. And, strictly speaking, energy in physics is ‘Potential Work’, i.e. Work which could be done but is not actually being done, while ‘Work’ in physics has the precise meaning, Force × distance moved in the direction of the applied force. Energy is not something actual at all, certainly not something perceptible by the senses or their extensions; it is “strictly speaking a definition rather than a physical entity, merely being the first integral of the equations of motion” (Heading, Mathematical Methods in Science and Engineering p. 546). It is questionable whether statements in popular science books such as “the universe is essentially radiant energy” have any real meaning — taken literally they imply that the universe is ‘pure potentiality’, which it clearly isn’t.
The present era thus exhibits the contradictory tendencies of being on the one hand militantly secular and ‘materialistic’, both in the acquisitive and the philosophic senses of the word, while the foundations of this entire Tower of Babel, good old solid ‘matter’ composed of “hard, massy particles” (Newton) and “extended bodies” (Descartes), have all but evaporated. When he wished to refute the idealist philosopher, Bishop Berkeley, Samuel Johnson famously kicked a stone, but it would seem that the Bishop has had the last laugh.

A New Starting Point?

Since the wheel of thought concerning the physical universe has more or less turned full circle, a few brave 20th century souls have wondered whether, after all, ‘atoms’ and ‘extended bodies’ were really the best starting point, and whether one might not do better starting with something else. What though? There was in the early 20th century a resurgence of ‘animism’ on the fringes of science and philosophy, witness Bergson’s élan vital (‘Life-force’), Driesch’s ‘entelechy’ and similar concepts. The problem with such theories is not that they are implausible — on the contrary they have strong intuitive appeal — but that they seem to be scientifically and technologically sterile. In particular, it is not clear how such notions can be represented symbolically by mathematical (or other) symbols, let alone tested in laboratory conditions.
Einstein, for his part, pinned his faith on ‘fields’ and went so far as to state that “matter is merely a region where the field is particularly intense”. However, his attempt to unify physics via a ‘Unified Field’ was unsuccessful: unsuccessful for the layman because the ‘field’ is an elusive concept at best, and unsuccessful for the physicist because Einstein never did succeed in combining mathematically the four basic physical forces: gravity, electro-magnetism and the strong and weak nuclear forces.
More recently, there have been one or two valiant attempts to present and elucidate the universe in terms of ‘information’, even to the extent of viewing it as a vast computer or cellular automaton (Chris Langton, Stephen Wolfram et al.). But such attempts may well one day appear just as crudely anthropomorphic as Boyle’s vision of the universe as a sort of glorified town clock. Apart from that, one hopes that the universe, or whatever is behind it, has better things to do than simply pile up endless stacks of data like the odious Super Brains of Olaf Stapledon’s prescient SF fantasy Last and First Men whose only ’emotion’ is curiosity.

The Event

During the Sixties and Seventies, at any rate within the booming counter-culture, there was a feeling that the West had somehow ‘got it wrong’ and was leading everyone towards disaster with its obsessive emphasis on material goods and material explanations. The principal doctrine of the hippie movement, inasmuch as it had one, was that “Experiences are more important than possessions” — and the more outlandish the experiences the better. Zen-style ‘Enlightenment’ suddenly seemed much more appealing than the Eighteenth-century movement of the same name which spearheaded Europe into the secular, industrial era. A few physicists, such as Fritjof Capra, argued that, although classical physics was admittedly very materialistic in the bad sense, modern physics “wasn’t like that” and had strong similarities with the key ideas of eastern mysticism. However, though initially attracted, I found modern physics (wave/particle duality, quantum entanglement, Block Universe, &c. &c.) a shade too weird, and what followed soon after, String Theory, completely opaque to all but a small band of elite advanced mathematicians.
But the trouble didn’t start in the 20th century. Newtonian mechanics was clearly a good deal more sensible but Calculus, when I started learning mathematics towards middle age, proved to be a major stumbling block, not so much because it was difficult to learn as because its basic principles and procedures were so completely unreasonable. D’Alembert is supposed to have said to a student who expressed some misgivings about manipulating infinitesimals, “Allez à l’avant; la foi vous viendra” (“Keep going, conviction will follow”), but in my case it never did. Typically, the acceleration (change of velocity) of a moving body is computed by supposing the velocity of the body to be constant during a certain ‘short’ interval of time; we then reduce this interval ‘to the limit’ and, hey presto! we have the derivative appearing like the rabbit out of the magician’s hat. But if the particle is always accelerating its speed is never constant, and if the particle is always moving, it is never at a fixed location. The concept of ‘instantaneous velocity’ is mere gobbledegook, as Bishop Berkeley pointed out in Newton’s time. In effect, ‘classical’ Calculus has its cake and eats it too — something we all like doing if we can get away with it — since it merrily sets δx to non-zero and zero simultaneously on opposite sides of the same equation. ‘Modern’, i.e. post mid-nineteenth-century, Calculus ‘solved’ the problem by the ingenious concept of a ‘limit’, the key idea in the whole of Analysis. Mathematically speaking, it turns out to be irrelevant whether or not a particular function actually attains a given limit (assuming it exists) just so long as it approaches closer than any desired finite quantity. But what anyone with an enquiring mind wants to know is whether in reality the moving arrow actually attains its goal or whether the closing door ever actually slams shut (to use two examples mentioned by Zeno of Elea).
As a matter of fact, in neither case do they attain their objectives according to Calculus, modern or classical, since, except in the most trivial case of a constant function, ‘taking the derivative’ involves throwing away non-zero terms on the Right Hand Side which, however puny, we have no right to get rid of just because they are inconvenient. As Zeno of Elea pointed out over two thousand years ago, if the body is in motion it is not at a specific point, and if situated exactly at a specific point, it is not in motion.
This whole issue can, however, be easily resolved by the very natural supposition (natural to me at any rate) that intervals of time cannot be indefinitely diminished and that motion consists of a succession of stills in much the same way as a film we see in the cinema gives the illusion of movement. Calculus only works, inasmuch as it does work, if the increment in the independent variable is ‘very small’ compared to the level of measurement we are interested in, and the more careful textbooks warn the student against relying on Calculus in cases where the minimum size of the independent variable is actually known — for example in molecular thermodynamics, where dn cannot be smaller than a single molecule.
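The point about the discarded remainder can be shown in a few lines of arithmetic (a sketch of my own, nothing more): for s(t) = t², the average velocity over any actual increment δt is exactly 2t + δt; the leftover term shrinks with δt but, for any genuinely non-zero increment, it never vanishes — it simply gets thrown away to yield the ‘instantaneous’ value 2t.

```python
# Average velocity of s(t) = t*t over a finite increment dt:
# (s(t+dt) - s(t))/dt = 2t + dt. The residual term dt shrinks as the
# increment shrinks but is only ever discarded, never actually zero.

def s(t):
    return t * t

t = 3.0
for dt in (1.0, 0.1, 0.001):
    average = (s(t + dt) - s(t)) / dt
    residual = average - 2 * t          # this is just dt itself
    assert abs(residual - dt) < 1e-9    # never exactly zero for dt != 0
```

However small δt is taken, the computed ratio overshoots 2t by precisely δt, which is the non-zero term Calculus quietly drops.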
In any case, on reflection, I realized that I had always felt ‘time’ to be discontinuous, and life to be made up of a succession of discrete moments. This implies — taking things to the limit — that there must be a minimal ‘interval of time’ which, moreover, is absolute and does not depend on the position or motion of an imaginary observer. I was thus heartened when, in my casual reading, I learned that nearly two thousand years ago certain Indian Buddhist thinkers had advanced the same supposition and even apparently attempted to give an estimate of the size of such an ‘atom of time’, which they referred to as a ksana. More recently, Whitrow, Stephen Wolfram and one or two others have given estimates of the size of a chronon based on the Planck limit — but it is not the actual size that is important so much as the necessary existence of such a limiting value (Note 2).
Moreover, taking seriously the Sixties mantra that “experiences are more important than things” I wondered whether one could, and should, apply this to the physical world and take as a starting point not the ‘fundamental thing’, the atom, but the fundamental event, the ultimate event, one that could not be further decomposed. The resulting general theory would be not so much physics as Eventrics, a theory of events which naturally separates out into the study of the equivalent of the microscopic and macroscopic realms in physics. Ultimate Event Theory, as the name suggests, deals with the supposed ultimate constituents of physical (and mental) reality – what Hinayana Buddhists referred to as dharma(s) — while large-scale Eventrics deals with ‘historical events’ which are massive bundles of ultimate events and which have their own ‘laws’.
The essential point, as far as I was concerned, was that I suddenly had the bare bones of a physical schema : ‘reality’ was composed of events, not of objects (Note 3), or “The world is the totality of events and not of things” to adapt Wittgenstein’s aphorism. Ultimate Event Theory was born, though it has taken me decades to pluck up the courage to put such an intuitively reasonable theory into the public domain, so enormous is the paradigm shift involved in these few innocuous-sounding assumptions.       S.H. (3/11/2019)

Note 1 There exists, however, an extremely scholarly (but nonetheless very readable) book, The Greeks and the Irrational by E.R. Dodds, which traces the history of an ‘irrational’ counter-current in Greek civilisation from Homer to Hellenistic times. The author, a professor of Greek and a one time President of the Psychical Research Society, asked himself the question, “Were the Greeks in fact quite so blind to the importance of non-rational factors in man’s experience and behaviour as is commonly assumed both by their apologists and by their critics?” The book in question is the result of his erudite ponderings on the issue.

Note 2 Caldirola suggests 6.97 × 10⁻²⁴ seconds for the minimal temporal interval, the chronon — what I refer to by the Sanskrit term ksana. Other estimates exist, such as 5.39 × 10⁻⁴⁴ seconds. Causal Set Theory and some other contemporary relativistic theories assume minimal values for spatial and temporal intervals, though I did not know this at the time (sic).

Note 3 Bertrand Russell, of all people, clearly anticipated the approach taken in UET, but made not the slightest attempt to lay out the conceptual foundations of the subject.  “Common sense thinks of the physical world as composed of ‘things’ which persist through a certain period of time and move in space. Philosophy and physics developed the notion of ‘thing’ into that of ‘material substance’, and thought of material substance as consisting of particles, each very small, and each persisting throughout all time. Einstein substituted events for particles; each event had to each other a relation called ‘interval’, which could be analyzed in various ways into a time-element and a space-element. (…) From all this it seems to follow that events, and not particles, must be the ‘stuff’ of physics. What has been thought of as a particle will have to be thought of as a series of events. (…) ‘Matter’ is not part of the ultimate material of the world, but merely a convenient way of collecting events into bundles.”  Russell, History of Western Philosophy p. 786 (Counterpoint, 1979)


In Ultimate Event Theory (UET), every event or event-cluster is attributed a recurrence rate (r/r) given in absolute units stralda/ksana, where the stralda is the minimal spatial interval and the ksana the minimal temporal interval. r/r can in principle take the value of any rational number m/n, or zero — but no irrational value. The r/r of an event is roughly the equivalent of its speed in traditional physics, i.e. it is a distance/time ratio.

If r/r = 0, this means that the event in question does not repeat.
If r/r = m/n this signifies that the event repeats m positions to the right every n ksanas and if r/r = −m/n it repeats m positions to the left.

But right or left relative to what? It is necessary to assume a landmark event-chain where successive ultimate events lie exactly above (or underneath) each other when one space-time ‘slice’ is replaced by the next. Such an event-chain is roughly the equivalent of an inertial system in normal physics. We generally assume that we ourselves constitute a standard inertial system relative to which all other inertial systems can be compared — we ‘are where we are’ at all instants and so, in a certain sense, are always at rest. In a similar way we constitute a sort of standard landmark event-chain to which all other event-chains can be related. But we cannot see ourselves, so we choose instead as standard landmark event-chain some object (= repeating event-cluster) that remains at a constant distance from us as far as we can tell. Such a choice is clearly relative, but we have to choose some repeating event-chain as standard in order to get going at all. The crucial difference is, of course, not between ‘vertical’ event-paths and ‘slanting’ event-paths but between ‘straight’ paths, whether vertical or not, and ones that are jagged or curved, i.e. not straight (assuming these terms are appropriate in this context). As we know, dynamics only really took off when Galileo, unlike Aristotle, realized that it was the distinction between accelerated and non-accelerated motion that was fundamental, not that between rest and motion.

So, the positive or negative (right or left) m variable in m/n assumes some convenient ‘vertical’ landmark sequence.

The denominator n of the stralda/ksana ratio cannot ever be zero — not so much because ‘division by zero is not allowed’ as because “the moving finger writes and, having writ, moves on”, as the Rubáiyát puts it, i.e. time only stands still for the space of a single ksana. So an r/r where an event repeats but ‘stays where it is’ at each appearance takes the value 0/n, which we need to distinguish from 0.
Thus 0/n ≠ 0

m/n is a ratio but, since the numerator is in the absolute unit of distance, the stralda, m:n is not the same as (m/n) : 1 unless m = n. To say a particle’s speed is 4/5ths of a metre per second is meaningful, but if r/r = 4/5 stralda per ksana we cannot conclude that the event in question shifts 4/5ths of a stralda to the right every ksana (because the stralda is indivisible). All we can conclude is that the event in question repeats every fifth ksana at a position four spaces to the right relative to its original position.
We thus need to distinguish between recurrence rates which appear to be the same because of cancelling. The denominator will thus, unless stipulated otherwise, always refer to the next appearance of an event. 187/187 s/k is for example very different from 1/1 s/k since in the first case the event only repeats every 187th ksana while in the second case it repeats every ksana. This distinction is important when we consider collisions. If there is any likelihood of confusion the denominator will be marked in bold, thus 187/187.
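Since cancelling changes the meaning, an r/r is better modelled as an ordered pair (m, n) than as an ordinary rational number. A minimal sketch along these lines (the class and method names are my own, purely illustrative):

```python
# A recurrence rate m/n kept as an ordered pair: 187/187 is NOT 1/1.

class RecurrenceRate:
    def __init__(self, m, n):
        if n <= 0:
            raise ValueError("denominator must be a positive whole number")
        self.m = m      # grid positions shifted (negative = leftward)
        self.n = n      # ksanas elapsed before the next appearance

    def next_appearance(self, position, ksana):
        # The event reappears m spots over, n ksanas later; it occupies
        # no intermediate position in between (the stralda is indivisible).
        return (position + self.m, ksana + self.n)

    def __eq__(self, other):
        # Compare as ordered pairs -- no cancelling allowed.
        return (self.m, self.n) == (other.m, other.n)

assert RecurrenceRate(187, 187) != RecurrenceRate(1, 1)
assert RecurrenceRate(4, 5).next_appearance(0, 0) == (4, 5)
```

So an event with r/r = 4/5 next has occurrence four positions over at the fifth ksana, while 187/187 and 1/1 remain distinct pairs, exactly as the bold-denominator convention requires.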

Also, the stralda/ksana ratio for event-chains always has an upper limit. That is, it is not possible for a given ultimate event to reappear more than M stralda to the right or left of its original position at the next ksana — this is more or less equivalent to setting c ≈ 3 × 10⁸ metres/second as the upper limit for causal processes according to Special Relativity. There is also an absolute limit N for the denominator irrespective of the value of the numerator, i.e. the event-chain with r/r = m/n terminates after n = (N−1) — or at the Nth ksana if it is allowed to attain the limit.
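These two caps can be expressed as a simple admissibility check (illustrative only; the function name and the sample values of M and N are mine — the theory assigns no actual figures):

```python
# Hypothetical caps: M = maximum spatial shift per ksana,
# N = absolute limit on the denominator. Sample values only.
M, N = 100, 1000

def is_admissible(m, n):
    # An r/r of m/n is admissible only if the shift works out to no
    # more than M stralda per ksana and the denominator stays below N
    # (taking the simpler reading, where the limit is not attained).
    return n >= 1 and n < N and abs(m) <= M * n

assert is_admissible(4, 5)          # well within both caps
assert not is_admissible(501, 5)    # shift exceeds M stralda per ksana
assert not is_admissible(1, 1000)   # denominator reaches the limit N
```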

These restrictions mean that the Locality, even when completely void of events, has certain inbuilt constraints. Given any two positions A and B occupied by ultimate events at ksana k, there is an upper limit to the number of ultimate events that can be fitted into the interval AB at the next or any subsequent ksana. This means that, although the Locality is certainly not metrical in the way ordinary spatial expanses are, it is not true in UET that “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” (Note 1).       SH  11/09/19

Note 1 The statement “Between any two ultimate events, it is always possible to introduce an intermediate ultimate event” is the equivalent in UET of the axiom “Between any two points there is always another point” which underlies both classical Calculus and modern number theory. Coxeter (Introduction to Geometry p. 178) introduces “Between any two points….” as a theorem derived from the axioms of ‘Ordered Geometry’, an extremely basic form of geometry that takes as ‘primitive concepts’ only points and betweenness. The proof only works because the geometrical space in question entirely lacks the concept of distance whereas in UET the Locality, although in general non-metrical and thus distance-less, does have the concept of a minimum separation between positions where ultimate events can have occurrence. This follows from the general principle of UET based on a maxim of the great ancient Greek philosopher Parmenides:
“If there were no limits, nothing would persist except the limitless itself”.

The genesis of Ultimate Event Theory can be traced back to a stray remark made by the author of popular books on mathematics, W.W. Sawyer. In the course of an exchange of views on contradiction in mathematics, Sawyer threw off the casual remark that “a scientific theory would be useless if it predicted that an event such as an eclipse of the sun would happen at a given place and time, and also that it would not happen at the same time and place”. Such a ‘Law of Non-Contradiction for Events’ was assumed by all the classical physicists and seems to be a necessary (though never stated) assumption for doing science at all. Arguably, Quantum Mechanics does not verify this principle, but this is precisely why QM is so worrisome (Note 1).
Sawyer’s chance remark sounds innocuous enough, but the principle involved turns out to be extremely far-reaching. We have in effect a non-contradiction law for events (not statements), a building block of a physical rather than a logical theory. Now, it seems of the essence of an ‘event’ that it either happens ‘at a particular time and place’ or it does not: there is no middle ground. There would be little point in announcing that a certain musical or theatrical event was scheduled to take place in such and such a Town Hall on, say, Monday, the 25th of December in the year 20**, but also scheduled not to take place at the given time and date. And certainly, once the time and date have passed, the ‘event’ either has occurred or it has not. Moreover, it seems to be of the essence of an ‘event’ to be ‘punctual’, ‘precise’ as to place and time.
An ‘event’, however, is clearly itself made up of smaller events, there are, as it were, macro- and micro-events.  Narrowing everything down and ‘taking the limit’, we end up with the eminently reasonable supposition that there are ‘ultimate events’, i.e. events that cannot be further decomposed. Secondly, since like macro-events they are ‘precise as to time and place’, we may presume that they, as it were, occupy a single ‘grid position’ on the Event Locality. This at any rate is the schema I proposed to work with.
There are two philosophic assumptions, one negative and one positive, built into this schema, namely 1. that there is no such thing as infinite regress and 2. that an ‘event’ has to happen ‘somewhere’. Calculus and much of traditional physics has ‘infinite regress’ (or ‘infinite divisibility’, which comes to the same thing) built into it, i.e. it rejects (1). Some contemporary systems such as Loop Quantum Gravity (LQG) are prepared to consider that space-time is perhaps ‘grainy’, but they do not see the need for an ‘event locality’, i.e. they reject (2). In LQG what we call time and space are simply ‘relations’ between basic entities (nodes) and have no real existence. And one could, of course, dispense both with actual infinity and an Event Locality, i.e. reject both (1) and (2) — but such a course does not appeal to me. I opted to exclude infinity from my proposed system of the world but, on the other hand, to accept that there is indeed a ‘Locality’, i.e. a ‘place’ where ultimate events can and do have occurrence.

Dispensing with actual infinity rids us in one fell swoop both of Zeno’s ingenious paradoxes and of Cantor’s transfinite sets, in whose reality no one except Cantor himself ever really believed. Instead of ‘infinite sets’, we have ‘indefinitely extendable sets’ which, as far as I can see, do all the work required of them without obliging us to (pretend to) believe in ‘actual infinity’. It is tedious to have to explain to mathematicians that so-called infinite sequences can indeed (and very often do) have a finite limit, but that this limit is, in the vast majority of cases, manifestly not attained. The terms ‘sum’ and ‘limit’ are not interchangeable: so-called ‘infinite’ series only ever have partial sums, indeed are indefinitely extendable sequences of partial sums. For example, the well-known series 1 + 1/2 + 1/4 + 1/8 + …. has limit 2 but can never attain it. Most (all?) non-trivial so-called ‘infinite’ series are, strictly speaking, interminable.
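The distinction between a partial sum and a limit can be checked directly. Here is a minimal sketch in Python (the function name is mine; exact rational arithmetic via the standard `fractions` module) verifying that every partial sum of 1 + 1/2 + 1/4 + … falls strictly short of the limit 2:

```python
from fractions import Fraction

def partial_sum(n_terms):
    """Exact n-th partial sum of the series 1 + 1/2 + 1/4 + ..."""
    return sum(Fraction(1, 2 ** k) for k in range(n_terms))

# The n-th partial sum has the closed form 2 - 1/2**(n-1):
# it approaches the limit 2, but no finite partial sum ever attains it.
for n in range(1, 30):
    s = partial_sum(n)
    assert s == 2 - Fraction(1, 2 ** (n - 1))
    assert s < 2
```

However many terms we add, the (indefinitely extendable) sequence of partial sums remains below 2; only its limit is 2, which is precisely the point about ‘sum’ and ‘limit’ not being interchangeable.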
As to (2), the positive requirement, it is to me inconceivable that ‘something that happens’, i.e. an event, does not happen somewhere, i.e. does not have a precise position on some underlying substratum without which it simply could not occur. The idea of space and time being ‘relations’ between things that exist, rather than things that exist in their own right, goes back to Leibnitz and is one of the features that distinguishes his mathematics and science from that of Newton, who was a great believer in absolute time and space and thus in absolute position. I do not think any experiment can settle the issue one way or the other, and doubtless temperament comes into play here but, for what it is worth, I believe that Newton’s approach makes much more sense and has been more fruitful. As far as I am concerned, I am convinced that an event, if it occurs at all, occurs somewhere, though there is no reason at this stage to attribute any property to this ‘somewhere’ except that it allows events to ‘have occurrence’. It does, however, make the ‘Event Locality’ a primary actor, since this Locality seemingly existed prior to any particular events taking place. One could alternatively consider that an event, when and as it has occurrence, as it were carves out a place for itself in which to happen. In this schema the Locality is an essentially negative entity which does nothing except refrain from obstructing occurrence. This is a perfectly reasonable approach but, again, one that does not appeal to me for aesthetic or temperamental reasons. However, once I had accepted ultimate events and an Event Locality, I realized that I had two ‘primary entities’ that henceforth could be taken for granted. A third primary entity was some ‘force of causality’ providing order and coherence to events as they occurred, or rather re-occurred, and so we have the three primary entities of Ultimate Event Theory: ultimate events, the Locality and a kind of causality that I call Dominance.       SH

Note 1  It is not, I think, at this stage worth getting involved in interminable discussions about Schrödinger’s dead-and-alive cats, though the issue will have to be faced at some stage. Suffice it to say, for the moment, that the wave function, prior to an intervention by a human or other conscious agent, does not satisfy the Law of Non-Contradiction for Events — and one way out is simply to accept that the wave function does not describe ‘events’ at all, though it does deal in ‘potential’ physical entities that are capable of producing bona fide events.




                             “There is a tide in the affairs of men,
                            Which taken in the flood leads on to fortune;
                            Omitted, all the voyage of their life
                            Is bound in shallows and in miseries.”

                                                         Julius Caesar, IV. 3

Eventrics, like ordinary physics, divides into two parts: macro and micro. The micro part is covered in Ultimate Event Theory while macro-Eventrics, or just Eventrics for short, deals with ‘bulk events’, the only ones we perceive directly. As in physics, it is not at all clear whether the interplay of events at the macro-level is, or is not, ultimately reducible to behaviour at the micro level. In what follows, I shall for the moment simply take for granted that there are such things as ‘individuals’, ‘society’, ‘historical forces’ and so on, without attempting to ‘explain’ them in terms of more basic entities.
Nonetheless, the focus remains firmly fixed on ‘events’ (as opposed to persons or processes). In particular, it is assumed that particular bundles of complex macro-events have an intrinsic momentum that is to a considerable extent independent of the personalities involved. This does not, however, mean that individuals or close-knit associations of individuals are powerless, quite the reverse. The successful individual ‘goes with the current’ when it suits him and immediately abandons it when it ceases to be favourable. Moreover, depending on where one is situated, some control over the direction of the current is possible: as the 19th-century diplomat Talleyrand put it, “L’homme supérieur épouse les évènements pour les conduire” (‘The superior man embraces events in order to direct them’) (Note 1).
The most important question in ‘Eventrics’ is whether there exists a completely general method for dealing with whatever one is confronted with, something that can be applied, with appropriate modifications, to any specific context right across the board. Such a life-skill is what the Chinese Taoists referred to as the ‘tao’ (literally ‘way’ or ‘path’). If there really is such a method, it follows that, when examining ‘world-historical figures’, we should expect to find very similar defining circumstances, roughly similar life trajectories and, above all, similar ‘event-strategies’. Is this the case?


We start by asking to what extent conquerors and world-historical figures foreshadowed their future greatness (good or bad) at an early age. Surprisingly, the answer seems to be ‘not very much’. The early lives of Abraham Lincoln, Hitler, even Julius Caesar, showed no particular promise; the sense that they, and people like them, were destined to be world-movers and world-shakers often only came with maturity, and even then somewhat by accident (Note 2). As we know, Hitler was twice refused entry to the Vienna Academy of Fine Arts for lack of talent and, incredibly for a future war leader and strategist, he started his military career as an Austrian draft dodger ─ though he volunteered promptly enough when World War I broke out. Lincoln was an ungainly, self-educated man from the backwoods who, though a reasonably successful lawyer, only got the Republican Presidential nomination because the support for the other, more popular, candidates was evenly divided. At the age of forty, Oliver Cromwell was a provincial squire, holding no office, local or national, and not even possessing the land on which he grazed his cattle. As for the Duke of Wellington ─ “Until his early twenties, Arthur showed little sign of distinction and his mother grew increasingly concerned at his idleness, stating, ‘I don’t know what I shall do with my awkward son Arthur’ ” (Wikipedia). The list can be extended endlessly.
Even Julius Caesar, the most famous Roman of all,  though he had some minor military successes, was, up to the age of forty, notorious not for his victories but for his debts and dissipated life style ─ Curio referred to him contemptuously as “every man’s woman and every woman’s man”. Even in the case of military prodigies like Alexander the Great and the 17th century Charles XII of Sweden (now somewhat forgotten but hailed at the time as ‘the second Alexander’), circumstances played at least as great a part in their future celebrity as the drumroll of destiny. Both Alexander the Great and Charles XII came unexpectedly to the throne at a very young age (20 for Alexander, 15 for Charles XII). It was ‘sink or swim’ and, as it happened, their enemies, the anti-Macedon Greek states in the case of Alexander and Denmark in the case of Charles XII, got the shock of their lives when they took them on. But it was in both cases as much ‘forced to become great’ as ‘predestined to conquer’ (Note 3). In war and politics, it is often the case that, after an early success, the only way forward is up since retreat is actually more dangerous than the attempt to scale the peak ahead (Note 4).

Summarizing so far, one might even hazard a sort of  ‘power law’ of Eventrics, namely:
       An early disadvantage overcome confers a much greater advantage than an outright advantage does.

Machiavelli even makes this a sine qua non:
“Fortune, when she wants to make a new ruler powerful….makes him start off surrounded by enemies and endangered by threats, so that he can overcome the obstacles and climb higher on a ladder supplied by his enemies”   The Prince, ch. 20

Ruthlessness and Luck

It is sometimes said that people such as Hitler and Julius Caesar only got to the top because they were extremely ruthless and extremely lucky. Certainly, they were both, but this explanation doesn’t get you very far. Ruthlessness is, unfortunately, not a particularly rare human trait ─ every incumbent mafioso has it, but how many get to be controllers of nations? Moreover, to demonstrate cold-bloodedness too readily, or too systematically, can be a liability, as it makes it extremely difficult to form the alliances which every future leader needs at some stage. Psychopaths don’t usually become conquerors: even Genghis Khan, who comes nearest to being one, spent years forging (and unforging) alliances in the complicated world of Mongolian tribal politics before he was finally accepted as the ‘Great Khan’.
As for luck, Pasteur rightly said that it comes to the prepared mind, and Machiavelli agrees:
“You will find that they [Moses and Cyrus] were only dependent on chance for their first opportunity. They seized their chance to make it what they wanted. Without that first opportunity, their strength of purpose [virtù] would never have been revealed. Without their strength of purpose [virtù], the opportunity they were offered would not have amounted to anything”    The Prince ch. 6


Future commanders and world-leaders are rarely exceptionally intelligent in the ‘normal’, that is academic, sense. Napoleon is in this respect an exception, since he was a brilliant pupil at the École Militaire and is one of the few (only?) Western rulers who was a capable mathematician. Nonetheless, it is generally accepted today that Napoleon was not a great military theorist or even innovator: he took almost all his ideas from the Maréchal de Saxe ─ but then again, why not? “Napoleon was wise enough not to tinker with his legacy; [but] he knew how to exploit it to the full”, writes Marshall-Cornwall in Napoleon as Military Commander.
Oliver Cromwell, one of the greatest cavalry leaders of all time, was certainly no intellectual and, indeed, prided himself on being a man of common sense, hence his approval (doubtless with himself in mind) of “the plain russet-coated captain that knows what he fights for and loves what he knows”. Stalin was an extremely capable Bolshevik hit-man (hence the nickname ‘Stalin’ or ‘man of steel’) but, unlike Lenin and Mao, he made no important contribution to Marxist theory apart from the nebulous doctrine of “Socialism in One Country” which was forced upon him by events. Hitler, surely the most unprepossessing  of all modern leaders, turned his lack of formal education and undistinguished appearance  (“He looks like the house painter he once was”) into an advantage since it enabled him to relate effectively to ‘ordinary people’ ─ and had the immense additional benefit of making aristocrats and educated people underrate him to their cost.

Similarity of Situation  

So, if we discount intellectual brilliance, ruthlessness and an early sense of mission as essentials, what does one notice about the rise and fall of famous historical figures? The four most powerful, non-hereditary Western leaders in recent centuries are probably Oliver Cromwell, Napoleon, Hitler and Stalin. Now, the first thing to note is that they all came to prominence in a fractured society near to breakdown: this gave them the chance they would never have had otherwise. Take Napoleon. Had Buonaparte been born just a few years earlier, he would never have been able to obtain a scholarship to a French École Militaire. For Corsica belonged to Genoa until 1768  and, anyway, it was well nigh impossible for someone outside the leading French families to get real advancement in the army prior to the revolution ─ as it was, the teenage Buonaparte was mocked by his fellow cadets for his dreadful accent and flimsy claim to noble birth. Moreover, the revolution came at exactly the right time for him: all but three of the cadets of Napoleon’s year offered their services to the monarchy ─ which meant the Republic desperately needed trained officers and was eager to promote them. Much the same applied to the Roundhead armies: the professional soldiers mostly fought for Charles I and, once Cromwell’s ability as an organiser of scratch troops was noticed, advancement followed.
As for Hitler, one can with difficulty see him getting anywhere at all in a different time and place, not because he had no talents but because a different environment might never have revealed them to him. He only discovered his uncanny ability as a public speaker by chance, when addressing a tiny patriotic society in Munich in 1919; and as for military experience, he would never have had any but for the outbreak of WWI, which enabled him to gain the Iron Cross and the respect of his comrades and superior officers.
But why, we must ask, did social breakdown favour these individuals? Because there was suddenly a power vacuum and someone had to fill it (Note 5). But this is not the only reason. A revolutionary situation drives a society close to ‘tipping point’, and it may then require only a very slight action on the part of a single individual to propel it irreversibly over a certain threshold. In normal circumstances this is almost never the case: a slight action produces a slight outcome. But when a complex system is near a ‘phase transition’ or ‘tipping point’, the effects of tiny actions are ‘non-linear’, i.e. they can produce disproportionately large consequences. Princip, the Bosnian Serb nationalist who shot the Austrian Archduke Franz Ferdinand in Sarajevo in 1914, did not, as it happens, intend to bring about a European war, but this is what ensued. The five great powers were locked in a tense, complex web of alliances, so much so that what was in itself a fairly minor incident at once set off a frantic round of threat, bluff and counter-bluff between Austria, Serbia, Russia and Germany which, within a couple of months, culminated in the invasion of Belgium ─ and we know the rest.
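The contrast between ‘slight action, slight outcome’ in normal times and runaway consequences near a tipping point can be sketched with a toy threshold-cascade model. Everything below (the node names, the alliance web, the thresholds) is an illustrative assumption, not history: each node mobilises once enough of its allies have, and the very same trigger either fizzles out or sweeps the whole system depending on how ‘tense’ it is.

```python
def cascade(alliances, thresholds, trigger):
    """Return the set of mobilised nodes after a single trigger event.

    A node mobilises once at least thresholds[node] of its allies have.
    """
    mobilised = {trigger}
    changed = True
    while changed:
        changed = False
        for node, allies in alliances.items():
            if node not in mobilised and \
                    sum(a in mobilised for a in allies) >= thresholds[node]:
                mobilised.add(node)
                changed = True
    return mobilised

# A small, hypothetical web of alliances (four mutually entangled powers).
alliances = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"],
}
calm = {n: 2 for n in alliances}   # normal times: two allies needed to move
tense = {n: 1 for n in alliances}  # near tipping point: one ally suffices

print(cascade(alliances, calm, "A"))   # trigger fizzles: only {'A'}
print(cascade(alliances, tense, "A"))  # same trigger sweeps all four nodes
```

The point is the non-linearity: the marginal change from threshold 2 to threshold 1 is tiny, but the outcome of an identical trigger jumps from a contained incident to a system-wide conflagration.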
The 9/11 attack on the Twin Towers is one of those rare historical ‘avalanche events’ that really was deliberate. Without the destruction of the Twin Towers there would almost certainly have been no invasion of Iraq and thus none of its sequels. Bin Laden seems to have known what he was doing, his aim being not to ‘overcome’ America militarily, which was and is impossible, but to tempt it into invading an Arab country in reprisal. The Middle East then, and even more so today, exhibits all four classic attributes of a ‘complex system’ on the brink: the states involved are (1) diverse; (2) closely connected geographically; (3) interdependent; and (4) ceaselessly adapting to each other’s initiatives. 9/11 drew America directly into the fray (invasions of Afghanistan and Iraq) and this extra ingredient made the whole Middle East tip.

One lesson to draw from all this is that someone, or some group, wishing to bring about really big changes must be positioned at an ‘event hub’, somewhere that is extensively connected to diverse, rival, mutually interacting power groups. In such a position, minor personal initiatives really can have vast consequences ─ this is Archimedes’s “Give me a fixed point and I will move the world” translated into geopolitics. Napoleon and Hitler found themselves ‘by chance’ at such event hubs: revolutionary France at the end of the 18th century, and Germany in the Twenties after her ignominious defeat in WWI and subsequent hyper-inflation (Note 6).

Opposing Strategies

Supposing one happens to find oneself in an ‘event-hub’ of potentially momentous importance, what then? Broadly speaking, there exist two opposing strategies for the ambitious person, the first active, deliberate, calculating, the second passive, indirect, instinctive. Nineteenth-century Western thinkers such as Carlyle and Nietzsche emphasized ‘will’ and ‘character’, while Clausewitz stressed the importance of sheer numbers ─ variants of the active approach. Eastern philosophies generally recommend the second, indirect approach. China’s most famous military theorist, Sun Tzu (who is said to have influenced Mao), recommends systematically avoiding direct confrontation and relying instead on manoeuvre and deception. (Not that China’s history is any less bloody than Europe’s for all that.)
To employ Taoist terms, the first method is ‘Doing’, the second ‘Not-Doing’ (wu-wei), a strange concept to our ears though it is central to Taoism. The Tao Te Ching is a peculiar work because it can be (and has been) interpreted in two mutually contradictory ways. On the one hand, it purports to preach a form of quietism: it recommends retirement from the ‘world’ with all its bustle and senseless striving in order to cultivate the ‘inner self’. The Tao Te Ching specifically condemns the use of brute force in government, viewing it as both inhumane and ultimately ineffective. At the same time the title Tao Te Ching means, literally, Way Power Book and it has been interpreted as a sort of manual for an aspiring ruler. According to this view, the aim of the book is to show the future ‘philosopher king’ how to rule effectively without appearing to govern at all. At first sight this sounds all very civilised ─ but is it really? Such a ruler, according to the Tao Te Ching, gets people to do what he thinks right because they admire him for his ‘moral authority’ and ‘inner poise’ ─ but this sounds dangerously close to the ‘charisma’ that mass-murderers like Hitler and Stalin undoubtedly possessed to a high degree.
What does all this mean in practice? ‘Not-Doing’ does not necessarily mean abstaining from action, though it can mean this ─ sometimes the best plan is simply to let things take their course. Sun Tzu speaks a great deal about ‘momentum’ which he sees as an intrinsic property of certain sequences of events ─ what in Ultimate Event Theory I term ‘dominance’. “Skilful warriors” he writes, “are able to allow the force of momentum to seize victory for them without exerting their strength”. And this ‘momentum’ is impersonal, does not depend on individuals: “Good warriors seek effectiveness in battle from the force of momentum, not from individual people”.
The Tao Te Ching assumes that only a ‘good’ man or woman can possess the mysterious moral authority that makes the use of force secondary, or even unnecessary. This is too optimistic by far, not to say dangerously naive. ‘Not-Doing’ is certainly useful (and preferable to coercion) but will not take you all the way: one thinks at once of Stalin’s immortal quip, “How many divisions has the Pope?”
The truth seems to be that both ‘Doing’ and ‘Not-Doing’ are essential for success in practically every sphere, but above all in warfare and government. If we look at famous European leaders, especially Cromwell and Hitler, we find that they practised both ‘Doing’ and ‘Not-Doing’ in more or less equal doses, were alternately ‘active’ and ‘passive’, and were at ease in both modes. It is now known that a great deal of mental and physical activity is ‘unconscious’: in a routine situation, it is often better, and even safer, to put oneself on ‘auto-pilot’. However, the ‘self’ must remain ultimately in control, able to step in and overrule learned behaviour when changing circumstances make it inappropriate. Warfare is inevitably an activity that requires intense training, since the aim is to turn a warm-blooded human being into a killing machine (Note 7). But the soldier who is completely incapable of taking initiative is a liability: part of Napoleon’s success lay precisely in his ability to maintain firm overall control of strategy while encouraging his subordinates to act independently when necessary. This is one reason why he outclassed the Prussians and Austrians, who tended to make war strictly by rote and were thrown into confusion by the unexpected. Similarly, the historian Grant says that Julius Caesar’s “supreme qualities as a commander were speed, timing, and adaptability to suddenly changing circumstances” (my italics).

If we consider the English Civil War and the Protectorate, we see that Cromwell and the Roundheads in general owed much of their success to their belief system. The Puritan world-view, though hardly logical, proved to be a very suitable one for men of action whether soldiers or, at a later date, pioneers of the Industrial Revolution. For, while the Puritans, and Protestants generally, firmly believed that  ‘grace’ trumped virtue (since God chose whoever He wished), they simultaneously stressed the importance of an ‘active life in the world’ ─ as opposed, for example, to retreating into a monastery to ‘watch and pray’. Cromwell’s belief in Providence is central to his character and to his conduct as a military and political leader. The moral earnestness of the Puritan obliged him not only to ‘take up arms’ for a just cause, but also to plan ahead carefully since he could not expect any miraculous intervention from above. “Duties are ours, events are the Lord’s” as Samuel Rutherford put it in a nutshell. Such a belief system protected the Puritans from the dangers of cocksureness, and induced in Cromwell a state of mind somewhere in between ‘meditation’ and ‘rational analysis’. Typically, when a categorical decision one way or the other was required, Cromwell would retire to weigh up the situation and commune with God. In ‘event’ terms, he was trying to get the feel of the mysterious ‘momentum’ of which Sun Tzu speaks ─ except that, for Cromwell, this ‘momentum’ had something to do with Providence. But if a plan did not work, it was his fault, never God’s ─ he had not been sufficiently alert to the signs pointing the way. This was clearly a very favourable mind-set for the leader of a rebellion.
Cromwell’s admirers encouraged him in the belief that he was chosen by God: “Your victories have been given you of God himself, it is himself that has raised you up amongst men, and hath called you to high enjoyments”, as John Desborough put it. This sense of being a ‘man with a mission’ obviously gave Cromwell enormous self-confidence, as it always does, but, again, the Puritan in him stopped him from being completely carried away: he did not, as Napoleon seems to have done, conclude that he was invincible or, like Hitler, that his judgment was infallible. Cromwell thus combined to a remarkable degree the advantages of the indirect and the direct approaches. His ‘Not-Doing’ was making himself a passive instrument for God and Providence; his ‘Doing’ was giving full attention to meticulous military planning and logistics. One of the reasons he was such a successful cavalry leader was the seemingly mundane one that he trained his troops to advance at a trot and regroup smartly in good order once they had penetrated enemy lines, whereas the Cavaliers charged at full gallop and typically wasted precious time ransacking the supply train behind the lines.
We find much the same combination of opposites in Hitler as in Cromwell. “I carry out the commands that Providence has laid upon me” might well have come from Cromwell, but it is in fact Hitler speaking. As for ‘Not-Doing’, we have Hitler’s chilling statement, “I go to my goal with the precision and security of a sleep-walker”. But this sense of mission, even combined with Hitler’s oratory, would not have ‘taken him to his goal’ if he had only been a sleep-walker. Halder, his one-time Chief of the General Staff, writes of Hitler’s “astonishing grasp of technical detail” ─ and, since Halder was eventually sent to a concentration camp by Hitler, he was not playing the flatterer. We are talking about data such as the range of certain guns or the tonnage of certain ships, hardly the bedside reading of a visionary.
Bullock cites the diary of an ordinary German who heard Hitler speak long before he became Chancellor and who wrote, “I have never heard an orator so fanatical or so logical”. Logical? Hitler? In fact, yes, given his premises, which were to make Germany great at all costs. Hitler saw more clearly than anyone else at the time that Germany could not become a world power in post-WWI circumstances for two reasons: (1) it did not produce enough food for its burgeoning population; and (2) it was woefully deficient in raw materials for a leading industrial power. His solution was simple: invade the Soviet Union to get hold of the wheat-growing areas of the Ukraine and the oil-rich Caucasus. This was clearly stated in the so-called Hossbach Memorandum, in which Hitler outlined (to his generals) the reasons for the forthcoming invasion of Russia. An additional plus for this strategy was that it did not involve war with Britain (or so Hitler supposed); there was no point, Hitler argued, in trying to recover Germany’s lost colonies in Africa ─ let Britain rule the waves and Germany the land. Furthermore, this devastatingly rational economic analysis dictated Hitler’s basic military strategy, Blitzkrieg. Aggression suited Hitler’s temperament, of course, but the main reason for ‘lightning war’ was that Germany would never have been able to sustain a long war, since it needed to import food, oil and steel amongst many other commodities.
Bullock observes that most historians writing about Hitler either stress his “fanatical will” or “insist that he relied for his success on calculation and lack of scruple”. But Bullock goes on to say, correctly, that these interpretations are not mutually exclusive. “He [Hitler] was at once fanatical and cynical, unyielding in his assertion of will power and cunning in calculation”. In particular, “His foreign policy….combined consistency of aim with complete opportunism in method and tactics”. Now, this is an extremely unusual combination and shows where Hitler differed from Mussolini, “an opportunist who snatched eagerly at any chance that was going”. In summing up, Bullock writes, “Fixity of aim by itself, or opportunism by itself, would have produced nothing like the same results”.

Application to other areas

To what extent can these precepts be applied in more congenial areas of human activity than war and government?

Firstly, there is the importance of being at a cultural ‘event-hub’. It is possible for geniuses like Nietzsche to mature in more or less complete isolation, but this is hardly to be recommended ─ it doubtless contributed to his mental collapse. Writers, painters and composers tend to congregate in particular spots where they cross-fertilize each other even if, or maybe above all if, they quarrel. For reasons that are none too clear, Elizabethan London suddenly produced more great dramatists than perhaps any other place or time. And the Restoration London coffee-houses were suddenly all agog at once with sparkling comedies, Locke’s philosophizing, Defoe’s political and social broadsheets and the revolutionary physical ideas emanating from the newly created Royal Society. The Scotland of the late 18th century, centred on Edinburgh and Glasgow, is another notable hub, since it produced James Watt, Adam Smith and Hume alongside many lesser but still significant thinkers. Vienna at the beginning of the 20th century saw the birth of psycho-analysis, Boltzmann’s statistical physics, logical positivism and early abstract painting. I have read somewhere that post-war Paris deliberately kept its exchange rate artificially low relative to the dollar and the pound in order to attract Americans and Britons; this, along with the incredible profusion of cafés and cheap hotels, made Paris the cultural world-centre for half the 20th century, spewing out surrealism, cubism, modernist fiction and finally existentialism. More recently, the cultural ‘world event-hub’ seems to have shifted to California, since that state gave rise to two utterly opposed but strangely interrelated cultural phenomena, the hippie movement with all that it entailed and Silicon Valley. Today, the awakening giant, China, has given birth to an unexpected amalgam of laissez-faire capitalism and centralized government but has, so far, not produced anything equally new and wonderful in the cultural domain. Maybe this is to come.
So, the advice to an aspiring author, artist or entrepreneur is to position yourself near the coming (not actual) cultural centre-point, or at least pass through to absorb the vibes. (However, the existence of the world-wide web has, arguably, made geography far less important.) As for luck and ruthlessness, much the same principles apply to artists as to politicians and military commanders. In a writer, ruthlessness translates as clarity, precision and economy with respect to words and lack of sentimentality with respect to one’s own early productions ─ though it is also crucial not to overdo this. In mathematics, rigour is the rule but the really great mathematicians such as Leibnitz, Newton and Euler were very far from being logic-machines and relied to a large extent on their undefinable ‘mathematical intuition’ (which is why they sometimes made mistakes). And Bullock’s ‘consistency of aim combined with opportunism of execution’ certainly sounds as much a winning formula in the arts as in foreign policy.

SH 15/2/2018

Note 1 Talleyrand was the ultimate survivor: he not only lived through, but flourished during, (1) the French Revolution, (2) the Directory, (3) the Napoleonic period and (4) the Bourbon Restoration, eventually dying quietly in his bed.

Note 2 Hitler, the prototypical modern ‘man of destiny’, only got the first intimations of his future role in the trenches in 1915, when he was twenty-six. And it was only during his brief imprisonment after the failed ‘Beer-hall putsch’ of 1923 that he finally cast himself in the role of Germany’s predestined leader. As a youth Hitler had no interest in warfare and little enough in politics: his passion was, and remained, architecture.

Note 3 If it is true that Alexander’s mother had Philip of Macedon assassinated, as some historians think, she has a better claim than Cleopatra to being a woman who changed the course of history.

Note 4 In the case of Julius Caesar, he absolutely had to stay continuously in office once launched on his career since, like certain contemporary heads of state, this gave him immunity from prosecution.

Note 5 I had occasion to witness something similar in person, albeit on a much smaller scale, during the May 1968 ‘student revolution’ and the ensuing General Strike. For a few weeks the Parisian faculties were occupied and even the police didn’t dare to go in. It was amazing to see, alongside genuine ‘revolutionaries’, future Robespierres and Stalins manoeuvring shamelessly in committees to get themselves into positions of power (see my reminiscences ‘Le Temps des Cérises, May ’68 and aftermaths’ in the anarchist quarterly The Raven No. 38).

Note 6 In Hitler’s case, it was perhaps not entirely chance that had him end up in Germany: some obscure instinct made him leave Vienna for what, as it transpired, was an even more suitable locale, Munich ─ since Germany offered far greater scope to his ambitions than Austria. Hitler, though Austrian by birth, was permitted, by special request, to enlist in the German (not Austrian) army in WWI.

Note 7 “The aim [of military training]… to reduce the conduct of war to a set of rules and a system of procedures ─ and thereby make orderly and rational what is essentially chaotic and instinctive” ( Keegan, The Face of Battle p. 20).