Archives for category: Thinkers

Two Models of the Beginning of the Universe

 There are basically two models for how the universe began. According to the first, the universe, by which we should understand the whole of physical reality, was deliberately created by a unique Being. This is the well-known Judaeo-Christian schema which until recently reigned supreme.
According to the second schema, the universe simply came about spontaneously: no one planned it and no one made it happen. It ‘need not have been’; it was essentially ‘the product of chance’. This seems to be the Eastern view, though we also come across it in some Western societies at an early stage of their development, for example in Greece (Note 1).
Although for a long time the inhabitants of the Christian West were totally uninterested in the workings of the natural world, the ‘Creationist’ model eventually led on to the development of science as we know it. For, so it was argued, if the universe was deliberately created, its creator must have had certain rules and guidelines that He imposed on his creation. These rules could conceivably be discovered, in which case many of the mysteries of the physical universe would be explained. Moreover, if the Supreme Designer or Engineer really was all-knowing, one set of rules would suffice for all time. This was basically the world-view of the men who masterminded the scientific revolution in the West, men such as Galileo, Kepler, Descartes, Newton and Leibniz, all firm believers in both God and the power of mathematics, which they viewed as the ‘language of God’ inasmuch as He had one.
If, on the other hand, the universe was the product of chance, one would not expect it to necessarily obey a set of rules, and if the universe was in charge of itself, as it were, things could change abruptly at any moment. In such a case, clever people might indeed notice certain regularities in the natural world but there would be no guarantee that these regularities were binding or would continue indefinitely. The Chinese equivalent of Euclid was the Y Ching, The Book of Changes, where the very title indicates a radically different world view. The universe is something that is in a perpetual state of flux, while nonetheless remaining ‘in essence’ always the same. According to Needham, the main reason why the scientific and technological revolution did not happen in China rather than the West, given that China was for a long time centuries ahead of the West technically, was that Chinese thinkers lacked  the crucial notion of unchanging ‘laws of Nature’ (Note 2).
Interestingly, there is a noticeable shift in Western thought towards the second model : the consensus today is that the universe did indeed come about ‘by chance’ and the same goes for life. However, contemporary physicists still hold tenaciously onto the idea that there are nonetheless certain more or less unchanging physical laws and rational principles which are in some sense ‘outside Nature’ and independent of it.  So the laws remain even though the Lawmaker has long since died quietly in his bed.

Emergent Order and Chaos

Models of the second ‘Spontaneous Emergence’ type generally posit an initial ‘Chaos’ which eventually settles down into a semblance of Order. True Chaos (not the contemporary physical theory of the same name (Note 3)) is a disordered free-for-all: everything runs into everything else and the world, life, us, are at best an ephemeral emergent order that suddenly occurs like the ripples the wind makes on the surface of a pond ─ and may just as suddenly disappear.
Despite the general triumph of Order over Chaos in Western thinking, even in the 19th century a few discordant voices dissented from the prevailing  orthodoxy ─ but none of them were practising scientists. Nietzsche, in a remarkable passage quoted by Sheldrake, writes:

“The origin of the mechanical world would be a lawless game which would ultimately acquire such consistency as the organic laws seem to have… All our mechanical laws would not be eternal but would have survived innumerable alternative mechanical laws” (Note 4)

Note that, according to this view, even the ‘laws of Nature’ are not fixed once and for all : they are subject to a sort of natural selection process just like everything else. This is essentially the viewpoint adopted in Ultimate Event Theory i.e. the universe was self-created, it has ascertainable ‘laws’ but these regularities need not be unchanging nor binding in all eventualities.

In the Beginning…. Random Ultimate Events  

In the beginning was the Void but the Void contained within itself the potential for ‘something’. For some reason a portion of the Void became active and random fluctuations appeared across its surface. These flashes that I call ‘ultimate events’ carved out for themselves emplacements within or on the Void, spots where they could and did have occurrence. Part at least of the Void had become a place where ultimate events could happen, i.e. an Event Locality. Such emplacements or ‘event-pits’ do not, by assumption, have a fixed shape but they do have fixed ‘extent’.
Usually, ultimate events occur once and disappear for ever, having existed for the ‘space’ of a single ksana only. However, if this were all that ever happened, there would be no universe, no matter, no solar system, no us. There must, then, seemingly have been some mechanism which allowed for the eventual formation of relatively persistent event clusters and event-chains: randomness must ultimately be able to give rise to its opposite, causal order. This is reasonable enough since, if a ‘system’ is truly random and is allowed to go on long enough, it will eventually cover all possibilities, and the emergence of ‘order’ is one of them.
As William James writes:
“There must have been a far-off antiquity, one is tempted to suppose, when things were really chaotic. Little by little, out of all the haphazard possibilities of that time, a few connected things and habits arose, and the rudiments of regular performance began.”

This suggests the most likely mechanism : repetition which in time gave rise to ingrained habits. Such a simple progression requires no directing intelligence and no complicated physical laws.
Suppose an ultimate event has occurrence at a particular spot on the Locality; it then disappears for ever. However, one might imagine that the ‘empty space’ remains, at least for a certain time. (Or, more correctly, the emplacement repeats, even though its original occupant is long gone). The Void has thus ceased to be completely homogeneous because it is no longer completely empty: there are certain mini-regions where emplacements for further ultimate events persist. These spots  might attract further ultimate events since the emplacement is there already, does not have to be created.
This goes on for a certain time until a critical point is reached. Then something completely new happens: an ultimate event repeats in the ‘same’ spot at the very next ksana, and, having done this once, carries on repeating for a certain time. The original ultimate event has thus acquired the miraculous property of persistence and an event-chain is born. Nothing succeeds like success and the persistence of one  event-chain makes the surrounding region more propitious for the development of similar rudimentary event-chains which, when close enough, combine to form repeating event-clusters. This is roughly how I see the ‘creation’ of the massive repeating event-cluster we call the universe. Whether the latter emerged at one fell swoop (Big Bang Theory) or bit by bit as in Hoyle’s modified Steady State Theory is not the crucial point and will be decided by observation. However, I must admit that piecemeal manifestation seems more likely a priori. Either way, according to UET, the process of event-chain formation ‘from nothing’ is still going on. 

The Occurrence Function  

This, then, is the general schema proposed ─ how to model it mathematically? We require a ‘Probability Occurrence Function’ which increases very slowly but, once it has reached a critical point, becomes unity or slightly greater than unity.
The Void or Origin, referred to in UET as K0, is ‘endless’, but we shall only be concerned with a small section of it. When empty of ultimate events, K0 is featureless but, when active, it has the capacity to provide emplacements for ultimate events ─ for otherwise they would not occur. A particular region of K0 can accommodate a maximum of, say, N ultimate events at one and the same ksana. N is a large, but not ‘infinite’, number ─ ‘infinity’ and ‘infinitesimals’ are completely excluded from UET. If there are N potential emplacements and the events appear at random, there is initially a 1/N chance of an ultimate event occurring at one particular emplacement.
However, once an ultimate event has occurred somewhere (and subsequently disappeared), the emplacement remains and the re-occurrence of an event at this spot, or within a certain radius of this spot,  becomes very slightly more likely, i.e. the probability is greater than 1/N. For no two events are ever completely independent in Ultimate Event Theory. Gradually, as more events have occurrence within this mini-region, the radius of probable re-occurrence narrows and  eventually an ultimate event acquires the miraculous property of repeating at the same spot (strictly speaking, the equivalent spot at a subsequent ksana). In other words, the probability of re-occurrence is now a certainty and the ultimate event has turned into an event-chain.
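This ‘rich-get-richer’ re-occurrence rule can be sketched as a toy simulation (a minimal illustration, not part of the theory proper; the spot count, boost factor, number of ksanas and seed are all arbitrary assumptions). Each ksana one event occurs at a spot chosen with probability proportional to its weight, and the chosen spot becomes more attractive for subsequent events:

```python
import random

def simulate_reoccurrence(n_spots=50, boost=2.0, ksanas=300, seed=42):
    """Toy sketch of reinforced re-occurrence: each ksana one ultimate
    event occurs at a spot chosen with probability proportional to the
    spot's weight; a spot that has hosted an event keeps its emplacement
    and becomes more attractive for the next event."""
    rng = random.Random(seed)
    weights = [1.0] * n_spots          # initially every spot equally likely: 1/N each
    for _ in range(ksanas):
        spot = rng.choices(range(n_spots), weights=weights)[0]
        weights[spot] *= boost         # re-occurrence here is now more likely
    # probability that the *next* event lands on the currently favoured spot
    return max(weights) / sum(weights)

print(f"favoured-spot probability after reinforcement: "
      f"{simulate_reoccurrence():.3f} (uniform baseline: {1/50:.3f})")
```

With `boost=1.0` (no reinforcement) the favoured-spot probability stays at the uniform 1/N; with any boost above 1 the weights concentrate, which is the qualitative point of the passage above.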
As a first very crude approximation I suggest something along the following lines. P(m) stands for the probability of the occurrence of an ultimate event at a particular spot. The Rule is : 

P(m+1) = P(m) (1/N) e^k     for m = 1, 2, 3, …

P(0) = 1     P(1) = (1/N)

Then,

P(2) = (1/N) (1/N) e^k = (1/N^2) e^k
P(3) = ((1/N^2) e^k) (1/N) e^k = (1/N^3) e^(2k)
P(4) = ((1/N^3) e^(2k)) (1/N) e^k = (1/N^4) e^(3k)
P(5) = ((1/N^4) e^(3k)) (1/N) e^k = (1/N^5) e^(4k)
and in general P(m+1) = (1/N^(m+1)) e^(mk)

Now, to have P(m+1) ≥ 1  we require

(1/N^(m+1)) e^(mk) ≥ 1
e^(mk) ≥ N^(m+1)
mk ≥ (m+1) ln N     (taking natural logarithms of both sides)
k ≥ ((m+1)/m) ln N

       If we set k as the first integer greater than ln N, this will eventually do the trick.
For example, if we take N = 10^50, then ln N = 115.129… and so k = 116.
       The condition mk ≥ (m+1) ln N is then satisfied for every m ≥ 133, since 133 × 116 > 134 × ln(10^50).
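The threshold can be checked numerically. Since the raw probabilities e^(mk)/N^(m+1) underflow to zero in floating point for N as large as 10^50, the sketch below works in logarithms; the values N = 10^50 and k = 116 follow the example in the text:

```python
import math

def first_certain_step(N, k):
    """Smallest m for which P(m+1) = e^(mk) / N^(m+1) reaches 1,
    i.e. the first m with m*k >= (m+1) * ln N. Works in logarithms,
    since the raw probabilities underflow for N as large as 10^50."""
    m = 1
    while m * k < (m + 1) * math.log(N):
        m += 1
    return m

N = 10**50
k = math.floor(math.log(N)) + 1   # first integer above ln N = 115.129..., so 116
print(f"k = {k}, certainty reached at m = {first_certain_step(N, k)}")
```

For these values the occurrence probability first reaches unity at m = 133, so the function does climb to certainty, though only after a long run of repetitions.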

However, we do not wish the function to reach unity or above straightaway. Rather, we want some function of N which converges very slowly towards a value slightly above ln N (so that the threshold can eventually be attained). Thus k = f(N) such that e^(f(N) m) ≥ N^(m+1) only for sufficiently large m.
       I leave someone more competent than myself to provide the details of such a function.
This ‘Probability Occurrence Function’ is the most important function in Ultimate Event Theory since without it  there would be no universe, no us, indeed nothing at all except random ultimate events firing off aimlessly for all eternity. Of course, when I speak of a mathematical function providing a mechanism for the emergence of the universe,  I do not mean to imply that a mathematical formula in any way ‘controls’ reality, or is even a ‘blueprint’ for reality. From the standpoint of UET, a mathematical formula is simply a description in terms comprehensible to humans of what apparently goes on and,  given the basic premises of UET, must go on.

Note the assumptions made. They are that:

(1) There is a region of K0 which can accommodate N ultimate events within a single ksana, i.e. can become an Event Locality with event capacity N;
(2) Ultimate events occur at random and continue to occur at random except inasmuch as they are more likely to re-appear at a spot where they have previously appeared;
(3) ‘Time’ in the sense of a succession of moments of equal duration, i.e. ksanas, exists from the very beginning, but not ‘space’;
(4) ‘Space’ comes into existence in a piecemeal fashion as, or maybe just before, ultimate events have occurrence — without events there is no need for space;
(5) Causality comes into existence when the first event-chain is formed : prior to that, there is no causality, only random emergence of events from wherever events come from (Note 5).

What happens once an event-chain has been formed? Does the Occurrence Function remain ≥ 1 or does it decline again? There are two reasons why the Probability Occurrence Function probably (sic) does at some stage decline, one theoretical and one observational. Everything in UET, except K0 the Origin, is finite ─ and K0 should be viewed as being neither finite nor infinite, ‘para-finite’ perhaps. Therefore, no event can keep on repeating indefinitely : all event-chains must eventually terminate, either giving rise to different event-chains or simply disappearing back into the Void from which they emerged. This is the theoretical reason.
Now for the observational reason. As it happens, we know today that the vast majority of ‘elementary particles’ are very short-lived and, since all particles are, from the UET point of view, relatively persistent event-chains or event-clusters, we can conclude that most event-chains do not last for very long. On the other hand, certain particles like the proton and the neutrino are so long-lasting as to be virtually immortal. The cause of ‘spontaneous’ radioactive decay is, incidentally, not known; indeed the process is considered to be completely random (for a particular particle), which is tantamount to saying there is no cause. This is interesting since it shows that randomness re-emerges precisely where it was least expected. I conceive of event-chains that have lost their causal bonding dwindling away in much the same way as they began, only in reverse. There is a pleasing symmetry here: randomness gives rise to order, which gives rise to randomness once more.
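The memoryless randomness of ‘spontaneous’ decay can be illustrated with a toy simulation (the per-ksana termination probability and sample size are arbitrary assumptions): if an event-chain has a fixed chance of failing to repeat at each ksana, lifetimes come out geometrically (i.e. exponentially) distributed, with most chains short-lived and a few lasting far longer than the mean:

```python
import random

def chain_lifetime(p_end, rng):
    """Ksanas an event-chain survives if, at each ksana, it has a fixed
    probability p_end of failing to repeat -- a memoryless rule, the
    discrete analogue of radioactive decay."""
    ksanas = 1
    while rng.random() >= p_end:
        ksanas += 1
    return ksanas

rng = random.Random(1)
lifetimes = [chain_lifetime(0.01, rng) for _ in range(10_000)]
mean_life = sum(lifetimes) / len(lifetimes)
print(f"mean lifetime {mean_life:.1f} ksanas (theory: 1/0.01 = 100); "
      f"longest chain: {max(lifetimes)} ksanas")
```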
There is the question of how we are to conceive the ‘build-up’ of probability in the occurrence function: exactly where does this occur? Since the process has observable effects, it is more than a mathematical fiction. One could imagine that this slow build-up, and eventual weakening and fading away, takes place in a sort of semi-real domain, a hinterland between K0 and K1, the physical universe. I denote this K01.
I am incidentally perfectly serious in this suggestion. Some such half-real domain is required  to cope, amongst many other things, with the notorious ‘probabilities’ — more correctly ‘potentialities’ — of the Quantum Wave Function. The notion of a semi-real region where ‘semi-entities’ gradually become more and more real, i.e. closer to finalization, is a perfectly respectable idea in Hinayana Buddhism ─ many  authors speak of 17 stages in all,  though I am not so sure about that. Western science and thought generally has considerable difficulty coping with phenomena that are clearly neither completely actual nor completely imaginary (Note 6); this is so because of the dogmatic philosophic materialism that we inherit from the Enlightenment and Newtonian physics. Physicists generally avoid confronting the issue, taking refuge behind a smoke-screen of mathematical abstraction.                                                                SH  8/6/14

Note 1  This tends to be the Eastern view: neither the Chinese nor the Hindus seem to have felt much need for a purposeful, all-powerful creator God. For the Chinese, there were certain patterns and trends to be discerned but nothing more, a ceaseless flux with one situation engendering another like the hexagrams of the Y Ching. Consulting the Y Ching involves a chance event, the fall of the yarrow sticks that the consultant throws at random. Whereas in divination chance is essential, in science every vestige of randomness is eliminated as much as is humanly possible.
For the Hindus, the universe was not an artefact as it was for Boyle who likened it to the Strasbourg clock : it was a ‘dance’, that of Shiva. This is a very different conception since dances do not have either meaning or purpose apart from display and self-gratification. Also, although they may be largely repetitive, the (solitary) dancer is at liberty to introduce new movements at any moment.
As for the Buddhists, there was never any question of the universe being created : the emergence of the physical world was regarded as an accident with tragic consequences.

Note 2 “Needham tells of the irony with which Chinese men of letters of the eighteenth century greeted the Jesuits’ announcement of the triumphs of modern science. The idea that nature was governed by simple, knowable laws appeared to them as a perfect example of anthropomorphic foolishness. (…) If any law were involved [in the harmony and regularity of phenomena] it would be a law that no one, neither God nor man, had ever conceived of. Such a law would also have to be expressed in a language undecipherable by man and not be a law established by a creator conceived in our own image.”
Prigogine, Order out of Chaos p. 48 

Note 3  Contemporary Chaos Theory deals with systems that are deterministic in principle but unpredictable in practice. This is because of their sensitive dependence on initial conditions which can never be known exactly. True chaos cannot be modelled by Chaos Theory so-called. 

Note 4 See pages 12-14 of Rupert Sheldrake’s remarkable book, The Presence of the Past, where he quotes this passage, likewise that from William James. Dr Sheldrake has perhaps contributed more than any other single person to the re-emergence of the ‘randomness/order’ paradigm. In his vision, ‘eternal physical laws’ are essentially reduced to habits and the universe as a whole is viewed as in some sense a living entity. “The cosmos now seems more like a growing and developing organism than like an eternal machine. In this context, habits may be more natural than immutable laws” (Sheldrake, The Presence of the Past, Introduction).
  Stephen Wolfram also adopts a similar philosophic position, believing as he does that randomness not only can give rise to complex order, but must eventually do so. Both thinkers would probably concur with the idea that “systems with complex behaviour in nature must be driven by the same kind of essential spirit as humans” (Wolfram, A New Kind of Science p. 845).

Note 5.  This idea that causality comes into existence when, and only when, the first event-chains are formed, may be compared to the Buddhist doctrine that ‘karma’ ceases in nirvana, or rather that nirvana is to be defined as the complete absence of karma. Karma literally means ‘activity’ and there is no activity in the Void, or K0. Ultimate events are the equivalent of the Buddhist dharma ─ actually it should be dharmas plural but I cannot bring myself to write dharmas. Reality is basically composed of three ‘entities’, nirvana, karma, dharma, whose equivalents within Ultimate Event Theory are K0 or the Void, Causality (or Dominance) and Ultimate Events. All three are required for a description of phenomenal reality because the ultimate events must come from somewhere and must cohere together if they are to form ‘objects’, the causal force providing the force of cohesion. There is no need to mention matter nor for that matter (sic) God.

Note 6   “ ‘The possible’ cannot interact with the real: non-existent entities cannot deflect real ones from their paths. If a photon is deflected, it must have been deflected by something, and I have called that thing a ‘shadow photon’. Giving it a name does not make it real, but it cannot be true that an actual event, such as the arrival and detection of a tangible photon, is caused by an imaginary event such as what that photon ‘could have done’ but did not do. It is only what really happens that can cause other things really to happen. If the complex motions of the shadow photon in an interference experiment were mere possibilities that did not in fact take place, then the interference phenomena we see would not, in fact, take place.”       David Deutsch, The Fabric of Reality pp. 48-9

Comment by SH: This is fine but I cannot go along with Deutsch’s resolution of the problem by having an infinite number of different worlds; indeed I regard it as crazy.

 


The phenomenon of time dilation, though not noticeable in everyday circumstances, is not a mathematical trick but really exists: corrections for time dilation are made regularly to keep the Global Positioning System from getting out of sync. The phenomenon becomes a good deal more comprehensible if we consider a network of ultimate events which does not change and spaces between them which can and do change. We are familiar with the notion that ‘time speeds up’ or ‘slows down’ when we are elated or anxious: the same, or very similar, occurrences play out differently according to our moods. Of course, it will be pointed out that the distances between events do not ‘actually change’ in such cases, only our perceptions. But essentially the same applies to ‘objective reality’, or would apply if our senses or instruments were accurate enough: the selfsame events would slow down or speed up according to our viewpoint and relative state of motion.
[Diagram: relativity time dilation]

This could easily be demonstrated by making a hinged ‘easel’ or double ladder which can be extended at will in one direction without altering the spacing in the other, lateral, direction. The ‘time dimension’ is down the page. The stars represent ultimate events, light flashes perhaps, which are reflected back and forth in a mirror (though light flashes are made of trillions of ultimate events packed together). The slanting zig-zag line connects the ultimate events: they constitute an event-chain. By pulling the red right-hand side of the double ladder outwards and extending it at the same time, we increase the distance between the ultimate events on this part of the ladder but do not increase the ‘lateral’ distance. These events would appear to an observer on the black upright plane as ‘stretched out’, and the angle we use represents the relative speed. As the angle approaches 90 degrees, i.e. as the red section nearly becomes horizontal, the red part of the ladder becomes enormously long. Setting different angles shows the extent of the time dilation for different relative speeds.
Note that in this diagram the corresponding space contraction is not shown since it is not this spatial dimension that is being contracted (though there will still be a contraction in the presumed direction of motion). We are to imagine someone flashing a torch across a spaceship and the light being reflected back. Any regular repeating event can be considered to be a ‘clock’.
What such a diagram does not show, however, is that, from the point of view of the red ladder, it is the other event-chain that will be stretched : the situation is reversible.
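The stretch the ladder is meant to display can be put numerically. In special relativity the time-dilation factor is γ = 1/√(1 − (v/c)²); if we choose the tilt angle θ so that sin θ = v/c (an interpretive assumption on my part, not something the diagram itself fixes), the Euclidean stretch 1/cos θ of the tilted leg coincides exactly with γ:

```python
import math

def dilation_factor(beta):
    """Relativistic time-dilation factor gamma = 1/sqrt(1 - (v/c)^2).
    If the ladder's tilt angle theta is set so that sin(theta) = v/c,
    this equals the Euclidean stretch 1/cos(theta) of the tilted leg."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.1, 0.5, 0.9, 0.99):
    theta = math.degrees(math.asin(beta))   # tilt corresponding to this speed
    print(f"v/c = {beta:5.2f}: tilt = {theta:5.1f} deg, "
          f"gamma = {dilation_factor(beta):6.3f}")
```

As the text says, the stretch stays modest until the tilt gets close to 90 degrees, where it grows without bound.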

The idea for this diagram is not original: Stephen Wolfram has a similar, more complicated set of diagrams on p. 524 of his great work A New Kind of Science. However, he makes the ‘event-lines’ continuous and does not use stars to mark ultimate events. More elaborate models could actually be made and shown in science museums to demonstrate time dilation. There is, I think, nothing outrageous in the idea that the ‘distance between two events’ is variable: as stated, we experience this ourselves all the time. What is shocking is the idea of the whole of Space/Time contracting and dilating. Ultimate events provide as it were the skeleton which shows up in an X-ray: distances between events are flesh that does not show up. There is ‘nothing’ between these events, nothing physical at any rate.        SH


In its present state, Ultimate Event Theory falls squarely between two stools: too vague and ‘intuitive’ to get a hearing from professional scientists, let alone be taken seriously, it is at the same time too technical and mathematical to appeal to the ‘ordinary reader’. Hopefully, this double negative can eventually be turned into a double positive, i.e. a rigorous mathematical theory capable of making testable predictions that is nonetheless comprehensible and has strong intuitive appeal. I will personally not be able to take the theory to the desired state because of my insufficient mathematical and above all computing expertise: this will be the work of others. What I can do is strengthen the mathematical and logical side as much as I can while putting the theory in a form the non-mathematical reader can at least comprehend. One friend in particular who got put off by the mathematics asked me whether I could not write something that gives the gist of the theory without any mathematics at all. Hence this post, which recounts the story of how and why I came to develop Ultimate Event Theory in the first place some thirty-five years ago.

Conflicting beliefs

Although scientists and rationalists are loath to admit it, personal temperament and cultural factors play a considerable part in the development of theories of the universe. There are always individual and environmental factors at work although the accumulation of unwelcome but undeniable facts may eventually overpower them. Most people today are, intellectually speaking, opportunists with few if any deep personal convictions, and there are good reasons for this. As sociological and biological entities we are strongly impelled to accept what is ‘official doctrine’ (in whatever domain) simply because, as a French psycho-analyst whose name escapes me famously wrote, “It is always dangerous to think differently from the majority”.
At the same time, one is inclined, and in some cases compelled, to accept only those ideas about the world that make sense in terms of our own experience. The result is that most people spend their lives doing an intellectual balancing act between what they ‘believe’ because this is what they are told is the case, and what they ‘believe’ because this is what their experience tells them is (likely to be) the case. Such a predicament is perhaps inevitable if we decide to live in society and most of the time the compromise ‘works’; there are, however, moments in the history of nations and in the history of a single individual when the conflict becomes intolerable and something has to give.

The Belief Crisis : What is the basis of reality?

Human existence is a succession of crises interspersed with periods of relative stability (or boredom). First, there is the birth crisis (the most traumatic of all), the ‘toddler crisis’ when the infant starts to try to make sense of the world around him or her, the adolescent crisis, the ‘mid-life’ crisis which kicks in at about forty and the age/death crisis when one realizes the end is nigh. All these crises are sparked off by physical changes which are too obvious and powerful to be ignored with the possible exception of the mid-life crisis which is not so much biological as  social (‘Where am I going with my life?’ ‘Will I achieve what I wanted?’).
Apart from all these crises ─ as if that were not enough already ─  there is the ‘belief crisis’. By ‘crisis of belief’ I mean pondering the answer to the question ‘What is real?’ ‘What do I absolutely have to believe in?’. Such a crisis can, on the individual level, come at any moment, though it usually seems to hit one between the eyes midway between the adolescent ‘growing up’ crisis and the full-scale mid-life crisis. As a young person one couldn’t really care less what reality ‘really’ is, one simply wants to live as intensely as possible and ‘philosophic’ questions can just go hang. And in middle age, people usually find they want to find some ‘meaning’ in life before it’s all over. Now, although the ‘belief crisis’ may lead on to the ‘middle age meaning crisis’ it is essentially quite different. For the ‘belief crisis’ is not a search for fulfilment but simply a deep questioning about the very nature of reality, meaningful or not. It is not essentially an emotional crisis nor is it inevitable ─ many people and even entire societies by-pass it altogether without being any the worse off, rather the reverse (Note 1).
Various influential thinkers in history went through such a  ‘belief crisis’ and answered it in memorable ways : one thinks at once of the Buddha or Socrates. Of all peoples, the Greeks during the Vth and VIth centuries BC seem to have experienced a veritable epidemic of successive ‘belief crises’ which is what  makes them so important in the history of civilization  ─ and also what made the actual individuals and city-states so unstable and so quarrelsome. Several of the most celebrated answers to the ‘riddle of reality’ date back to this brilliant era. Democritus of Abdera answered the question, “What is really real?” with the staggering statement, “Nothing exists except atoms and void”. The Pythagoreans, for their part, concluded that the principle on which the universe was based was not so much physical as numerical, “All is Number”. Our entire contemporary scientific and technological ‘world-view’ (‘paradigm’) can  be traced back to the  two giant thinkers, Pythagoras and Democritus, even if we have ultimately ‘got beyond’  them since we have ‘split the atom’ and replaced numbers as such by mathematical formulae. In an equally turbulent era, Descartes, another major ‘intellectual crisis’ thinker, famously decided that he could disbelieve in just about everything but not that there was a ‘thinking being’ doing the disbelieving, cogito ergo sum (Note 2).
In due course, in my mid to late thirties, at about the time of life when Descartes decided to question the totality of received wisdom, I found myself with quite a lot of time on my hands and a certain amount of experience of the vicissitudes of life behind me to ponder upon. I too became afflicted by the ‘belief crisis’ and spent the greater part of my spare time (and working time as well) pondering what was ‘really real’ and discussing the issue interminably with the same person practically every evening (Note 3). 

Temperamental Inclinations or Prejudices

 My temperament (genes?) combined with my experience of life pushed me in certain well-defined philosophic directions. Although I only  started formulating Eventrics and Ultimate Event Theory (the ‘microscopic’ part of Eventrics) in the early nineteen-eighties and by then had long since retired from the ‘hippie scene’, the heady years of the late Sixties and early Seventies provided me with my  ‘field notes’ on the nature of reality (and unreality), especially the human part of it. The cultural climate of this era, at any rate in America and the West, may be summed up by saying that, during this time “a substantial number of people between the ages of fifteen and thirty decided that sensations were far more important than possessions and arranged their lives in consequence”. In practice this meant forsaking steady jobs, marriage, further education and so on and spending one’s time looking for physical thrills such as doing a ton up on the M1, hitch-hiking aimlessly around the world, blowing your mind with drugs, having casual but intense sexual encounters and so on. Not much philosophy here but when I and other shipwrecked survivors of the inevitable débâcle took stock of the situation, we retained a strong preference for a ‘philosophy’  that gave primary importance to sensation and personal experience.
The physical requirement ruled out traditional religion since most religions, at any rate Christianity in its later public  form, downgraded the body and the physical world altogether in favour of the ‘soul’ and a supposed future life beyond the grave. The only aspect of religion that deserved to be taken seriously, so I felt, was mysticism since mysticism is based not on hearsay or holy writ but on actual personal experience. The mystic’s claim that there was a domain ‘beyond the physical’ and that this deeper reality can to some degree actually be experienced within this life struck me as not only inspiring but even credible ─ “We are more than what we think we are and know more than what we think we know” as someone (myself) once put it.
At the same time, my somewhat precarious hand-to-mouth existence had given me a healthy respect for the ‘basic physical necessities’ and thus inclined me to reject all theories which dismissed physical reality as ‘illusory’, tempting though this sometimes is (Note 4). So ‘Idealism’ as such was out. In effect I wanted a belief system that gave validity and significance to the impressions of the senses, preferring sentio ergo sum to Descartes’ cogito ergo sum or, better still, sentio ergo est : ‘I feel therefore there is something’.

Why not physical science ?

 Why not indeed. The main reason that I didn’t decide, like most people around me,  that “science has all the answers” was that, at the time, I knew practically no science. Incredible though this seems today, I had managed to get through school and university without going to a single chemistry or physics class and my knowledge of biology was limited to one period a week for one year and with no exam at the end of it.
But ignorance was not the only reason for my disqualifying science as a viable ‘theory of everything’. Apart from being vaguely threatening ─ this was the era of the Cold War and CND ─ science simply seemed monumentally irrelevant to every aspect of one’s personal daily life. Did knowing about neutrons and neurons make you any more effective at taking decisions on a day-to-day basis? Seemingly not. Scientists and mathematicians often seemed to be less (not more) astute in running their lives than ordinary practical people.
Apart from this, science was going through a difficult period when even the physicists themselves were bewildered by their own discoveries. Newton’s billiard-ball universe had collapsed into a tangled mess of probabilities and uncertainty principles : when even Einstein, the most famous modern scientist, could not manage to swallow Quantum Theory, there seemed little hope for Joe Bloggs. The solid observable atom was out and unobservable quarks were in, but Murray Gell-Mann, the co-originator of quark theory, stated on several occasions that he did not ‘really believe in quarks’ but merely used them as ‘mathematical aids to sorting out the data’. Well, if even he didn’t believe in them, why the hell should anyone else? Newton’s clockwork universe was bleak and soulless but was at least credible and tactile : modern science seemed nothing more than a farrago of abstruse nonsense that for some reason ‘worked’, often to the amazement of the scientists themselves.
There was another, deeper, reason why physical science appeared antipathetic to me at the time : science totally devalues personal experience. Only repeatable observations in laboratory conditions count as fact : everything else is dismissed as ‘anecdotal’. But the whole point of personal experience is that (1) it is essentially unrepeatable and (2) it must be spontaneous if it is to be worthwhile. The famous ‘scientific method’ might have a certain value if we are studying lifeless atoms but seemed unlikely to uncover anything of interest in the human domain ─ the best ‘psychologists’ such as conmen and dictators are sublimely ignorant of psychology. Science essentially treats everything as if it were dead, which is why it struggles to come up with any strong predictions in the social, economic and political spheres. Rather than treat living things as essentially dead, I was more inclined to treat ‘dead things’ (including the universe itself) as if they were in some sense alive.

Descartes’ Thought Experiment 

Although I don’t think I had actually read Descartes’ Discours sur la méthode at the time, I had heard about it and the general idea was presumably lurking at the back of my mind. Supposedly, Descartes who, incredibly, was an Army officer at the time, spent a day in what is described in history books as a poêle (‘stove’) pondering the nature of reality. (The ‘stove’ must have been a small chamber close to a source of heat.) Descartes came to the conclusion that it was possible to disbelieve in just about everything except that there was a ‘thinking  being’, cogito ergo sum. To anyone who has done meditation, even in a casual way, Descartes’ conclusion appears by no means self-evident. The notion of individuality drops away quite rapidly when one is meditating and all one is left with is a flux of mental/physical impressions. It is not only possible but even ‘natural’ to temporarily disbelieve in the reality of the ‘I’ (Note 5)─ but one cannot and does not disbelieve in the reality of the various sensations/impressions that are succeeding each other as ‘one’ sits (or stands).

Descartes’ thought experiment nonetheless seemed  suggestive and required, I thought, more precise evaluation. Whether the ‘impressions/sensations’ are considered to be mental, physical or a mixture of the two, they are nonetheless always events and as such have the following features:

(1) they are, or appear to be, ‘entire’, ‘all of a piece’, there is no such thing as a ‘partial’ event/impression;

(2) they follow each other very rapidly;

(3) the events do not constitute a continuous stream, on the contrary there are palpable gaps between the events (Note 6);

(4) there is usually a connection between successive events, one thought ‘leads on’ to another and we can, if we are alert enough, work backwards from one ‘thought/impression’ to its predecessor and so on back to the start of the sequence;

(5) occasionally ‘thought-events’ crop up that seem to be completely disconnected from all previous ‘thought-events’, arriving as it were ‘out of the blue’.

Now, with these five qualities, I already have a number of features which I believe must be part of reality, at any rate individual ‘thought/sensation’ reality. Firstly, whether my thoughts/sensations are ‘wrong’, misguided, deluded or what have you, they happen, they take place, and cannot be waved away. Secondly, there is always sequence : thought ‘moves from one thing to another’ by specific stages. Thirdly, there are noticeable gaps between the thought-events. Fourthly, there is causality : one thought/sensation gives rise to another in a broadly predictable and comprehensible manner. Finally, there is an irreducible random element in the unfolding of thought-events ─ so not everything is deterministic apparently.
These are properties I repeatedly observe and feel I have to believe in. There are also a number of conclusions to be drawn from the above; like all deductions these ‘derived truths’ are somewhat less certain than the direct impressions, are ‘second-order’ truths as it were, but they are nonetheless compelling, at least to me. What conclusions? (1) Since there are events, there  must seemingly be a ‘place’ where these events can and do occur, an Event Locality. (2) Since there are, and continue to be, events, there  must be an ultimate source of events, an Origin, something distinct from the events themselves and also (perhaps) distinct from the Locality.
A further and more radical conclusion is that this broad schema can legitimately be generalized to ‘everything’, at any rate to everything in the entire known and knowable universe. Why make any hard and fast distinction between mental events and their features and ‘objective’ physical events and their features? Succession, discontinuity and causality are properties of the ‘outside’ world as well, not just that of the private world of an isolated thinking individual.
What about other things we normally assume exist such as trees and tables and ourselves? According to the event model, all these things must either be (1) illusory or irrelevant (same thing essentially) (2) composite and secondary and/or (3) ‘emergent’.
Objects are bundles of events that keep repeating more or less in the same form. And though I do indeed believe that ‘I’ am in some sense a distinct entity and thus ‘exist’, this entity is not fundamental, not basic, not entirely reducible to a collection of events. If the personality exists at all ─ some persons  have doubts on this score ─ it is a complex, emergent entity. This is an example of a ‘valid’ but not  fundamental item of reality.
Ideas, if they take place in the ‘mind’, are events whether true, false or meaningless. They are ‘true’ to the extent that they can ultimately be grounded in occurrences of actual events and their interactions, or interpretations thereof. I suppose this is my version of the ‘Verification Principle’ : whatever is not grounded in actual sensations is to be regarded with suspicion. This does not necessarily invalidate abstract or metaphysical entities but it does draw a line in the sand. For example, contrary to most contemporary rationalists and scientists, I do not entirely reject the notion of a reality beyond the physical because the feeling that there is something ‘immeasurable’ and ‘transcendent’ from which we and the world emerge is a matter of experience to many people, is a part of the world of sensation though somewhat at the limits of it. This reality, if it exists, is ‘beyond name and form’ (as Buddhism puts it), is ‘non-computable’, ‘transfinite’. But I entirely reject the notion of the ‘infinitely large’ and the ‘infinitely small’ which has bedevilled science and mathematics since these (pseudo)entities are completely outside personal experience and always will be. With the exception of the Origin (which is a source of events but not itself an event), my standpoint is that everything, absolutely everything, is made up of a finite number of ultimate events, an ultimate event being an event that cannot be further decomposed. This principle is not, perhaps, quite so obvious as some of the other principles. Nonetheless, when considering ‘macro’ events ─ events which clearly can be decomposed into smaller events ─ we have two and only two choices : either the process comes to an end with an ‘ultimate’ event or it carries on interminably, never coming to an end. I believe the first option is by far the more reasonable one.
With this, I feel I have the bare bones of not just a philosophy but a ‘view of the world’, a schema into which pretty well everything can be fitted ─ the contemporary buzzword is ‘paradigm’. Like Descartes emerging from his ‘stove’, I considered I had a blueprint for reality or at least that part of it amenable to direct experience. To sum up, I could disbelieve, at least momentarily, in just about everything but not that (1) there were events; (2) that events occurred successively; (3) that events were subject to some sort of omnipresent causal force with occasional lapses into lawlessness. Also, (4) these events happened somewhere, (5) emerged from something or somewhere and (6) were decomposable into ‘ultimate’ events that could not be further decomposed. This would do for a beginning; other essential features would be added to the mix as and when required.     SH

Note 1  Many extremely successful societies seem to have been perfectly happy in  avoiding the ‘intellectual crisis’ altogether : Rome did not produce a single original thinker and the official Chinese Confucian world-view changed little over a period of more than two thousand years. This was doubtless  one of the main reasons why these societies lasted so long while extremely volatile societies such as VIth century Athens or the city states of Renaissance Italy blazed with the light of a thousand suns for a few moments and then were seen and heard no more.

Note 2 “Je pris garde que, pendant que je voulais ainsi penser que tout était faux, il fallait nécessairement que moi, qui le pensais, fusse quelque chose. Et remarquant que cette vérité : je pense, donc je suis, était si ferme et si assurée, que toutes les plus extravagantes suppositions des sceptiques n’étaient pas capables de l’ébranler, je jugeai que je pouvais la recevoir, sans scrupule, pour le premier principe de la philosophie que je cherchais.”
      René Descartes, Discours sur la Méthode Quatrième Partie
“I noted, however, that even while engaged in thinking that everything was false, it was nonetheless a fact that I, who was engaged in thought, was ‘something’. And observing that this truth, I think, therefore I am, was so strong and so incontrovertible, that the most extravagant proposals of sceptics could not shake it, I concluded that I could justifiably take it on  board, without misgiving, as the basic proposition of philosophy that I was looking for.”  [loose translation]

Note 3  The person in question was, for the record, a primary school teacher by the name of Marion Rowse, unfortunately now long deceased. She was the only person to whom I spoke about the ideas that eventually became Eventrics and Ultimate Event Theory and deserves to be remembered for this reason.

Note 4   As someone at the other end of the social spectrum, but who seemingly also went through a crisis of belief at around the same time, put it, “I have gained a healthy respect for the objective aspect of reality by having lived under Nazi and Communist regimes and by speculating in the financial markets” (Soros, The Crash of 2008 p. 40).
According to Boswell, Dr. Johnson refuted Bishop Berkeley, who argued that matter was essentially unreal, by kicking a wall. In a sense this was a good answer but perhaps not entirely in the way Dr. Johnson intended. Why do I believe in the reality of the wall? Because if I kick it hard enough I feel pain and there is no doubt in my mind that pain is real ─ it is a sensation. The wall must be accorded some degree of reality because, seemingly, it was the cause of the pain. But the reality of the wall is, as it were, a ‘derived’ or ‘secondary’ reality : the primary reality is the sensation, in this case the pain in my foot. I could, I argued to myself, at a pinch, disbelieve in the existence of the wall, or at any rate accept that it is not perhaps so ‘real’ as we like to think it is, but I could not disbelieve in the reality of my sensation. And it was not even important whether my sensations were, or were not, corroborated by other people, were entirely ‘subjective’ if you like, since, subjective or not, they remained sensations and thus real.

Note 5 In the Chuang-tzu Book, Yen Ch’eng, a disciple of the philosopher Ch’i, is alarmed because his master, when meditating, appeared to be “like a log of wood, quite unlike the person who was sitting there before”. Ch’i replies, “You have put it very well; when you saw me just now my ‘I’ had lost its ‘me’” (Chuang-tzu Book II. 1).

Note 6 The practitioner of meditation is encouraged to ‘widen’ these gaps as much as possible (without falling asleep) since it is by way of the gaps that we can eventually become familiar with the ‘Emptiness’ that is the origin and end of everything.

 

Are there/can there be events that are truly random?
First of all we need to ask ourselves what  we understand by randomness. As with many other properties, it is much easier to say what randomness is not than to say what it is.

Definitions of Randomness

“If a series of events or other assortment exhibits a definite pattern, then it is not random” ─ I think practically everyone would agree to this.
This may be called the lack of pattern definition of randomness. It is the broadest and also the vaguest definition but at the end of the day it is what we always seem to come back to. Stephen Wolfram, the inventor of the software programme Mathematica and a life-long student of randomness, uses the ‘lack of pattern’ definition. He writes, “When one says that something seems random, what one usually means is that one cannot see any regularities in it” (Wolfram, A New Kind of Science p. 316).
        The weakness of this definition, of course, is that it offers no guidance on how to distinguish between ephemeral patterns and lasting ones (except to keep on looking) and some people have questioned whether the very concept of ‘pattern’ has any clear meaning. For this reason, the ‘lack of pattern’ definition is little used in science and mathematics, at least explicitly.

The second definition of randomness is the unpredictable definition and it follows on from the first since if a sequence exhibits patterning we can usually tell how it is going to continue, at least in principle. The trouble with this definition is that it has nothing to say about why such and such an event is unpredictable, whether it is unpredictable simply because we don’t have the necessary  information or for some more basic reason. Practically speaking, this may not make a lot of difference in the short run but, as far as I am concerned, the difference is not at all academic since it raises profound issues about the nature of physical reality and where we stand on this issue can lead to very different life-styles and life choices.

The third definition of randomness, the frequency definition goes something like this. If, given a well-known and well-defined set-up, a particular outcome, or set of outcomes, in the long run crops up just as frequently (or just as infrequently for that matter) as any other feasible outcome, we class this outcome as ‘random’ (Note 1). A six coming up when I throw a dice is a typical example of a ‘random event’ in the frequency sense. Even though any particular throw is perfectly determinate physically, over thousands or millions of throws, a six would come up no more and no less than any of the other possible outcomes, or would deviate from this ‘expected value’ by a very small amount indeed. So at any rate it is claimed and, as far as I know, experiment fully supports this claim.
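The ‘frequency’ claim is easy to put to the test with a few lines of code. The sketch below is my own illustration, not part of the original argument : it simulates a large number of throws of a fair die and reports the relative frequency of sixes, which settles ever closer to 1/6 as the number of throws grows.

```python
import random

def six_frequency(trials, seed=0):
    """Throw a simulated fair die `trials` times and return the
    relative frequency with which a six comes up."""
    rng = random.Random(seed)
    sixes = sum(1 for _ in range(trials) if rng.randint(1, 6) == 6)
    return sixes / trials

# Each individual throw is unpredictable, yet the long-run
# frequency is remarkably stable.
for n in (100, 10_000, 1_000_000):
    print(n, six_frequency(n))
```

With a million throws the frequency rarely strays more than a fraction of a percent from 1/6, which is precisely the stability the frequency definition relies on.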
It is the frequency definition that is usually employed in mathematics and mathematicians are typically always on the look-out for persistent deviations from what might be expected in terms of frequency. The presence or absence of some numerical or geometrical feature without any obvious reason suggests that there is, or at any rate might be, some hidden principle at work (Note 2).
The trouble with the frequency definition is it is pretty well useless in the real world since a vast number of instances is required to ‘prove’ that an event is random or not  ─ in principle an ‘infinite’ number ─ and when confronted with messy real life situations we have neither the time nor the capability to carry out extensive trials. What generally happens is that, if we have no information to the contrary, we assume that a particular outcome is ‘just as likely’ as another one and proceed from there. The justification for such an assumption is post hoc : it may or may  not ‘prove’ to be a sound assumption and the ‘proof’ involved has nothing to do with logic, only with the facts of the matter, facts that originally we do not and usually cannot know.

The fourth and least popular definition of randomness is the causality definition. For me, ‘randomness’ has to do with causality ─ or rather the lack of it. If an event is brought about by another event, it may be unexpected but it is not random. Not being a snooker player I wouldn’t bet too much money on exactly what is going to happen when one ball slams full pelt into the pack. But, at least according to Newtonian Mechanics, once the ball has left the cue, whatever does happen “was bound to happen” and that is that. The fact that the outcome is almost certainly unpredictable in all its finest details even for a powerful computer is irrelevant.
The weakness of this definition is that there is no foolproof way to test the presence or absence of causality: we can at best only infer it and we might be mistaken. A good deal of practical science is taken up with distinguishing between spurious and genuine cases of causality and, to make matters worse,   philosophers such as Hume and Wittgenstein go so far as to question whether this intangible ‘something’ we call causality is a feature of the real world at all. Ultimately, all that can be said in answer to such systematic sceptics is that belief in causality is a psychological necessity and that it is hard to see how we could have any science or reliable knowledge at all without bringing causality into the picture either explicitly or implicitly. I am temperamentally so much a believer in causality that I view it as a force, indeed as the most basic force of all since if it stopped operating in the way we expect life as we know it would be well-nigh impossible. For we could not be sure of the consequences of even the most ordinary actions; indeed if we could in some way voluntarily disturb the way in which causes and effects get associated, we could reduce an enemy state to helplessness much more rapidly and effectively than by unleashing a nuclear bomb. I did actually, only half-facetiously, suggest that the Pentagon would be advised to do some research into the matter ─ and quite possibly they already have done. Science has not paid enough attention to causality, it tends either to take its ‘normal’ operation for granted or to dispense with it altogether by invoking the ‘Uncertainty Principle’ when this is convenient. No one as far as I know has suggested there may be degrees of causality or that there could be an unequal distribution of causality amongst events.

Determinism and indeterminism

Is randomness in the ‘absence of causality’ sense in fact possible?  Not so long ago it was ‘scientifically correct’ to believe in total determinism and Laplace, the French 19th century mathematician, famously claimed  that if we knew the current state of the universe  with enough precision we could predict its entire future evolution (Note 3). There is clearly no place for inherent randomness in this perspective, only inadequate information.
Laplace’s view is no longer de rigueur in science largely because of Quantum Mechanics and Chaos Theory. But the difference between the two world-views has been greatly exaggerated. What we get in Quantum Mechanics (and other branches of science not necessarily limited to the world of the very small) is generally the replacement of individual determinism by so-called statistical determinism. It is, for example, said to be the case that a certain proportion of the atoms in a radio-active substance will decay within a specified time, but which particular atom out of the (usually very large) number in the sample actually will decay is classed as ‘random’. And in saying this, physics textbooks do not usually mean that such an event is merely unpredictable in practice but that it is genuinely unknowable, thus indeterminate.
But what exactly is it that is ‘random’? Not the micro-events themselves (the radio-active decay of particular atoms) but only their order of occurrence. Within a specified time limit half, or three quarters, or some other proportion of the atoms in the sample will have decayed, and if you are prepared to wait long enough the entire sample will decay. Thus, even though the next event in the sequence is not only unpredictable for practical reasons but actually indeterminate, the eventual outcome for the entire sample is completely determined and, not only that, completely predictable !
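This combination of individual indeterminacy with overall predictability can be mimicked in a short simulation. The sketch below is mine, and the per-step decay probability is a stand-in for the real physics : which particular ‘atoms’ go, and in what order, is left entirely to chance, yet the surviving fraction after one ‘half-life’ is reliably close to one half.

```python
import random

def surviving_fraction(n_atoms, p, steps, seed=1):
    """Each 'atom' decays independently with probability p per
    time step.  The identity and order of the decaying atoms is
    random; the overall decay rate is not."""
    rng = random.Random(seed)
    alive = n_atoms
    for _ in range(steps):
        alive = sum(1 for _ in range(alive) if rng.random() >= p)
    return alive / n_atoms

# Choose p so that ten steps correspond to one half-life:
# (1 - p) ** 10 == 0.5.
p = 1 - 0.5 ** (1 / 10)
print(surviving_fraction(100_000, p, steps=10))
```

Run with a different seed, a different set of atoms survives ─ but the printed fraction barely moves.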
Normally, if one event follows another we assume, usually but not always with good reason, that this prior event ‘caused’ the subsequent event, or at least had something to do with its occurrence. And even if we cannot specify the particular event that causes such and such an outcome, we generally assume that there is such an event. But in the case of this particular class of events, the decay of radio-active atoms, no single event has, as I prefer to put it, any ‘dominance’ over any other. Nonetheless, every atom will eventually decay : they have no more choice in the matter than Newton’s billiard balls.
Random Generation

To me, the only way the notion of ‘overall determinism without individual determinism’ makes any sense at all is by supposing that there is some sort of a schema which dictates the ultimate outcome but which leaves the exact order of events unspecified. This is an entirely Platonic conception since it presupposes an eventual configuration that has, during the time decay is going on, no physical existence whatsoever and can even be prevented from manifesting itself by my forcibly intervening and disrupting the entire procedure ! Yet the supposed schema must be considered in some sense ‘real’ for the very good reason that it has bona fide observable physical effects which the vast majority of imaginary shapes and forms certainly do not have (Note 4).

An example of something similar can be seen in the case of the development of an old-fashioned (non-digital) photograph taken in such faint light that the lens only allows one photon to get through at a time. “The development process is a chemical amplification of an initial atomic event…. If a photograph is taken with exceedingly feeble light, one can verify that the image is built up by individual photons arriving independently and, it would seem at first, almost randomly distributed in position” (French & Taylor, An Introduction to Quantum Physics pp. 88-9). This case is slightly different from that of radio-active decay since the photograph has already been taken. But the order of events leading up to the final pattern is arbitrary and, as I understand it, will be different on different occasions. It is almost as if, because the final configuration is fixed, the order of events is ‘allowed’ to be random.

Uncertainty or Indeterminacy ?

 Almost everyone who addresses the subject of randomness somehow manages to dodge the central question, the only question that really matters as far as I am concerned, which is : Are unpredictable events merely unpredictable because we lack the necessary information  or are they inherently indeterminate?
        Taleb is the contemporary thinker responsible more than anyone else for opening up Pandora’s Box of Randomness, so I looked back at his books to see what his stance on the uncertainty/indeterminacy issue was. His deep-rooted conviction that the future is unpredictable and his obstinacy in sticking to his guns against the experts would seem to be driving him in the indeterminate direction but at the last minute he backs off and retreats to the safer sceptical position of “we just don’t know”.

“A true random system is in fact random and does not have predictable properties. A chaotic system [in the scientific sense] has entirely predictable properties, but they are hard to know.” (The Black Swan p. 198 )

This is excellent and I couldn’t agree more. But he proceeds, “…in theory randomness is an intrinsic property, in practice, randomness is incomplete information, what I called opacity in Chapter 1. (…) Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.” (The Black Swan p. 198)
As far as I am concerned randomness either is or is not an intrinsic property and the difference between theory and practice doesn’t come into it. No doubt, from the viewpoint of an options trader, it doesn’t really matter whether market prices are ‘inherently unpredictable’ or ‘indeterminate’ since one still has to decide whether to buy or not. However, even from a strictly practical point of view, there is a difference, and a big one, between intrinsic and ‘effective’ randomness.
Psychologically, human beings feel much easier working with positives than negatives, as all the self-help books will tell you, and it is even claimed that “the unconscious mind does not understand negatives”. At first sight ‘uncertainty’ and ‘indeterminacy’ appear to be equally negative but I would argue that they are not. If you decide that some outcome is ‘uncertain’ because we will never have the requisite information, you will most likely not think any more about the matter but instead work out a strategy for coping with uncertainty ─ which is exactly what Taleb advocates and claims to have put into practice successfully in his career on the stock market.
On the other hand, if one ends up by becoming convinced that certain events really are indeterminate, then this raises a lot of very serious questions. The concept of a truly random event, even more so a stream of them, is very odd indeed. One is at once reminded of the quip about random numbers being so “difficult to generate that we can’t afford to leave it to chance”. This is rather more than a weak joke. There is a market for ‘random numbers’ and very sophisticated methods are employed to generate them. The first ‘random number generators’ in computer software were based on negative feedback loops, the irritating ‘noise’ that modern digital systems are precisely designed to eliminate. Other lists are extracted from the expansion of π (which has been taken to over a billion digits) since mathematicians are convinced this expansion will never show any periodicity and indeed none has been found. Other lists are based on so-called linear congruences. But all this is in the highest degree paradoxical since these last two methods are based on specific procedures or algorithms and so the numbers that actually turn up are not in the least random by my definition. These numbers are random only by the frequency and lack-of-pattern definitions, and as for predictability the situation is ambivalent. The next number in an arbitrary section of the expansion of π is completely unpredictable if all you have in front of you is a list of numbers, but it is not only perfectly determinate but perfectly predictable if you happen to know the underlying algorithm.
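For what it is worth, a ‘linear congruence’ generator of the kind just mentioned can be written down in a few lines. The constants below are the widely used Numerical Recipes values, chosen purely for illustration; the point is that the output passes for random while every term is completely fixed by the seed and the rule.

```python
def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
    However patternless the output looks, each term is entirely
    determined by the seed and the three constants."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(42)
sequence = [next(gen) for _ in range(5)]
# Anyone who knows the rule and the seed 'predicts' the whole
# sequence perfectly:
replay = lcg(42)
assert [next(replay) for _ in range(5)] == sequence
print(sequence)
```

So, by the causality definition, nothing here is random at all : the unpredictability exists only for someone who sees the numbers but not the algorithm.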

Three types of Randomness

 Stephen Wolfram makes a useful distinction between three basic kinds of randomness. Firstly, we have randomness which relies on the connection of a series of events to its environment. The example he gives is the rocking of a boat on a rough sea. Since the boat’s movements depend on the overall state of the ocean, its motions are certainly unpredictable for us because there are so many variables involved ─ but perhaps not for Laplace’s Supermind.
Wolfram’s second type of ‘randomness’ arises, not because a series of events is continuously interacting with its environment, but because it is sensitively dependent on the initial conditions. Changing these conditions even very slightly can dramatically alter the entire future of the system and one consequence is that it is quite impossible to trace the current state of a system back to its original state. This is the sort of case studied in chaos theory. However, such a system, though it behaves in ways we don’t and can’t anticipate, is strictly determinate in the sense that every single event in a ‘run’ is completely fixed in advance (Note 5).
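A standard illustration of this second type ─ the usual textbook example rather than Wolfram’s boat ─ is the logistic map. Two starting values differing by one part in a million soon yield orbits bearing no resemblance to one another, although every step of each orbit is completely fixed in advance :

```python
def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map x -> r * x * (1 - x), a textbook
    example of sensitive dependence on initial conditions."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.200000, 40)
b = logistic_orbit(0.200001, 40)   # one part in a million away
# The orbits start out indistinguishable and end up completely
# different, yet both are strictly deterministic.
print(a[-1], b[-1])
```

The ‘randomness’ here lives entirely in the sixth decimal place of the starting value, not in the rule itself.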
Both these methods of generating randomness depend on something or someone outside the sequence of events : in the first case the randomness is imported from the far larger and more complex system that is the ocean, and in the second case the randomness lies in the starting conditions, which themselves derive from the environment or are deliberately set by the experimenter. In neither case is the randomness inherent in the system itself and so, for this reason, we can generally reduce the amount of randomness by, for example, securely tethering the boat to a larger vessel or by only allowing a small number of possible starting conditions.
Wolfram’s third and final class of generators of randomness is, however, quite different since they are inherent random generators. The examples he gives are special types of cellular automaton. A cellular automaton consists essentially of a ‘seed’, which can be a single cell, and a ‘rule’ which stipulates how a cell of a certain colour or shape is to evolve. In the simplest cases we just have two colours, black and white, and start with a single black or white cell. Most of the rules produce simple repetitive patterns as one would expect, others produce what looks like a mixture of ‘order’ and ‘chaos’, while a few show no semblance of repetitiveness or periodicity whatsoever. One of these, which Wolfram classes as Rule 30, has actually been employed in Random[Integer], which is part of Mathematica, and so has proved its worth by contributing to the financial success of the programme; it has also, according to its inventor, passed all tests for randomness it has been subjected to.
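Rule 30 itself is easy to sketch. Each new cell depends only on the three cells above it, and the updated value works out to left XOR (centre OR right); starting from a single black cell, the rows quickly lose any obvious pattern. A minimal version, treating everything beyond the edges as white :

```python
def rule30_step(cells):
    """Apply Wolfram's Rule 30 to one row of 0/1 cells, with
    everything beyond the edges taken to be 0.  The new cell
    value is: left XOR (centre OR right)."""
    padded = [0] + cells + [0]
    return [padded[i] ^ (padded[i + 1] | padded[i + 2])
            for i in range(len(cells))]

# Evolve a single black cell for a few generations and print the
# familiar Rule 30 triangle.
row = [0] * 8 + [1] + [0] * 8
for _ in range(8):
    print(''.join('#' if c else '.' for c in row))
    row = rule30_step(row)
```

Nothing enters from outside : the seed is a single cell and the rule is one line, yet the central column of the triangle is the sequence Mathematica once drew its ‘random’ integers from.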
Why is this so remarkable? Because in this case there is absolutely no dependence on anything external to the particular sequence which is entirely defined by the (non-random) start point and by an extremely simple rule. The randomness, if such it is to be called, is thus ‘entirely self-generated’ : this is not production of randomness by interaction with other sets of events  but is, if you like, randomness  by parthenogenesis. Also, and more significantly, the author claims that it is this type of randomness that we find above all in nature (though the other two types are also present).

Causal Classification of types of randomness

This prompts me to introduce a classification of my own with respect to causality, or dominance as I prefer to call it. In a causal chain there is a forward flow of ‘dominance’ from one event to the next and, if one connection is missing, the event chain terminates (though perhaps giving rise to a different causal chain by ricochet). An obvious example is a row of dominoes where each knocks over the next, but one domino is spaced out a little more and so does not get toppled. A computer programme acts in essentially the same way : an initial act sets off a sequential chain of events, and the chain terminates if the connection between two successive states is interrupted.
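The domino analogy can be put in a few lines of code. This toy sketch (entirely my own illustration, not part of any real system) shows how forward ‘dominance’ propagates until a missing connection terminates the chain:

```python
def run_chain(gaps, reach=1.0):
    """Toy domino chain. `gaps` lists the distance before each domino;
    a domino topples only if the previous one can reach it.
    Returns how many dominoes fall before the chain terminates."""
    fallen = 0
    for gap in gaps:
        if gap > reach:        # the connection is missing:
            break              # the causal chain terminates here
        fallen += 1
    return fallen

# Third domino is spaced out too far, so only the first two fall.
print(run_chain([0.5, 0.5, 2.0, 0.5]))  # → 2
```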
In the environmental case of the bobbing boat, we have a sequence of events, the movements of the boat, which do not  by themselves form an independent causal chain since each bob depends, not on the previous movement of the boat, but on the next incoming wave, i.e. depends on something outside itself. (In reality, of course, what actually goes on is more complex since, after each buffeting, the boat will be subject to a restoring force tending to bring it  back to equilibrium before it is once more thrown off in another direction, but I think the main point I am making still stands.)
In the statistical or Platonic case such as the decay of a radio-active substance or the development of the photographic image, we have a sequence of events which is neither causally linked within itself nor linked to any actual set of events in the exterior like the state of the ocean. What dictates the behaviour of the atoms is seemingly the eventual configuration (the decay of half, a quarter or all of the atoms) or rather the image or anticipation of this eventual configuration (Note 6).

So we have what might be called (1) forwards internal dominance; (2) sideways dominance; and (3) downwards dominance (from a Platonic event-schema).

Where does the chaotic case fit in? It is an anomaly since, although there is clear forwards internal dominance, it seems also to have a Platonic element and thus to be a mixture of (1) and (3).

Randomness within the basic schema of Ultimate Event Theory

Although the atomic theory goes back to the Greeks, Western science during the ‘classical’ era (16th to mid 19th century) took over certain key elements from Judaeo-Christianity, notably the idea of there being unalterable ‘laws of Nature’, and this notion has been retained even though modern science has dispensed with the lawgiver. An older theory, of which we find echoes in Genesis, views the ‘universe’ as passing from an original state of complete incoherence to the more or less ordered structure we experience today. In Greek and other mythologies the orderly cosmos emerges from an original kaos (from which our word ‘gas’ is derived) and the untamed forces of Nature are symbolized by the Titans and other monstrous creatures. These eventually give way to the Olympians who, significantly, control the world from above and do not participate in terrestrial existence. But the Titans, the ancient enemies of the gods, are not destroyed since they are immortal, only held in check, and there is the fear that at any moment they may break free. And there is perhaps also a hint that these forces of disruption (of randomness in effect) are necessary for the successful functioning of the universe.
Ultimate Event Theory reverts to this earlier schema (though this was not my intention) since there are broadly three phases (1) a period of total randomness (2) a period of determinism and (3) a period when a certain degree of randomness is re-introduced.
In Eventrics, the basic constituents of everything ─ everything physical at any rate ─  are what I call ‘ultimate events’ which are extremely brief and occupy a very small ‘space’ on the event Locality. I assume that originally all ultimate events are entirely random in the sense that they are disconnected from all other ultimate events and, partly for this reason, they disappear as soon as they happen and never recur. This is randomness in the causality sense but it implies the other senses as well. If all events are disconnected from each other, there can be no recognizable pattern and thus no means of predicting which event comes next.
So where do these events come from and how is it they manage to come into being at all? They emerge from an ‘Event Source’ which we may call ‘the Origin’ and which I sometimes refer to as K0 (as opposed to the physical universe which is K1).  It is an important facet of the theory that there is only one source for everything that can and does occur. If one wants to employ the computer analogy, the Origin either is itself, or contains within itself, a random event generator and, since there is nothing else with which the Origin can interact and it does  not itself have any starting conditions  (since it has always existed), this  generator can only be what Wolfram calls an inherent randomness generator. It is not, then, order and coherence that is the ‘natural’ state but rather the reverse : incoherence and discontinuity is the ‘default position’ as it were (Note 7).
Nonetheless, a few ultimate events eventually acquire ‘self-dominance’ which enables them to repeat indefinitely more or less identically and, in a few even rarer cases, some events manage to associate with other repeating events to form conglomerates.
This process is permanent and is still going on everywhere in the universe and will continue to go on at least for some time (though eventually all event-chains will terminate and return the ‘universe’ to the nothingness from which it originally came). Thus, if you like, ‘matter’ is being created all the time though at an incredibly slow rate just as it is in Hoyle’s Steady State model (Note 7).
Once ultimate events form conglomerates they cease to be random and are subject to ‘dominance’ from other sets of events and from previous occurrences of themselves. There will still, at this stage, be a certain unpredictability in the outcomes of these associations because determinism has not yet ousted randomness completely. Later still, certain particular associations of events become stabilized and give rise to ‘event-schemas’. These ‘event-schemas’ are not themselves made up of ultimate events and are not situated in the normal event Locality I call K1 (roughly what we understand by the physical universe). They are situated in a concocted secondary ‘universe’ which did not exist previously and which can be called K2. The reader may baulk at this, but the procedure is really no different from the distinction that is currently made between the actual physical behaviour of bodies which exemplify physical laws (whether deterministic or statistical) and the laws themselves, which are not in any sense part of the physical world. Theoretical physicists routinely speculate about other possible universes where the ‘laws’, or more usually the constants, “are different”, thus implying that these laws, or principles, are in some sense independent of what actually goes on. The distinction is somewhat similar to the distinction between genotype and phenotype and, in the last resort, it is the genotype that matters, not the phenotype.
Once certain event-schemas have been established, they are very difficult to modify : from now on they ‘dictate’ the behaviour of actual systems of events. There are thus already three quite different categories of events (1) those that emerge directly from the Origin and are strictly random; (2) those that are brought about by previously occurring physical events and (3) events that are dependent on event-schemas rather than on other individual events.
So far, then, everything has become progressively more determined though evolving from an original state of randomness somewhat akin to the Greek kaos or the Hebrew tohu va-vohu, the original state when the Earth was “without form and void and darkness was upon the face of the deep”.
The advent of intelligent beings introduces a new element since such  beings can, or believe they can, impose their own will on events, but this issue will not be discussed here. Whether an outcome is the result of a deliberate act or the mere product of circumstances is an issue that vitally concerns juries but has no real bearing on the determinist/indeterminist dilemma.
Macroscopic events are conglomerates of ultimate events and one might suppose that, if the constituent events are completely determined, it follows that so are they. This is what contemporary reductionists actually believe, or at least preach, and, within a materialist world-view, it is difficult to avoid some such conclusion. But, according to the Event Paradigm, physical reality is not a continuum but a complicated mosaic where in general blocks of events fit together neatly into interlocking causal chains and clusters. The engineering is, however, perhaps not quite faultless, and there are occasional mismatches and irregularities much as there are ‘errors’ in the transcription of DNA ─ indeed, genetic mutations are the most obvious example of the more general phenomenon of random ‘connecting errors’. And it is this possibility that allows for the reintroduction of randomness into an increasingly deterministic universe.
Despite the subatomic indeterminacy due to Quantum Mechanics, contemporary science nonetheless in practice gives us a world that is very nearly as predictable as the Newtonian, and in certain respects more so. But human experience keeps turning up events that do not fit our rational expectations at all : people act ‘completely out of character’, ‘as if they were someone else’, regimes collapse for no apparent reason, wars break out where they are least expected and so on. This is currently attributed to the complexity of the systems involved, but there may be a deeper reason. There remains an obstinate tendency for events not to ‘keep to the book’, and one suspects that Taleb’s profound conviction that the future is unpredictable, and the tremendous welcome this idea has received from the public, is based on an intuitive awareness that a certain type of randomness is hard-wired into the normal functioning of the universe. Why is it there, supposing that it really is there? For the same sort of reason that there are persistent random errors in the transcription of the genetic code : it is a productive procedure that works in the long run by turning up possibilities that no one could possibly have planned or worked for. One hesitates to say that this randomness is deliberately put there, but it is not a wholly accidental feature either : it is perhaps best conceived as a self-generated controlling mechanism that is reintroducing randomness as a means of propelling the system forward into a higher level of order, though quite what this will be is anyone’s guess.      SH  28/2/13

Note 1  Charles Sanders Peirce, who inaugurated this particular definition, did not speak of ‘random events’ but restricted himself to discussing the much more tractable (but also much more academic) issue of taking a random sample. He defined this as one “taken according to a precept or method which, being applied over and over again indefinitely, would in the long run result in the drawing of any one of a set of instances as often as any other set of the same number”.

Note 2  Take a simple example. One might at first sight think that a square number could end with any digit whatsoever, just as a throw of a die could produce any one of the six possible outcomes. But glancing casually through a list of smallish square numbers one notes that every one seems to be either a multiple of 5, like 25, one less than a multiple of 5, like 49, or one more than a multiple of 5, like 81. We could (1) dismiss this as a fluke, (2) simply take it as a fact of life and leave it at that, or (3) suspect there is a hidden principle at work which is worth bringing into the light of day.
In this particular case, it is not difficult to establish that the pattern is no accident and will repeat indefinitely. This is so because, in the language of congruences, the square of a multiple of 5 is 0 (mod 5), the square of a number that is ±1 (mod 5) is +1 (mod 5), and the square of a number that is ±2 (mod 5) is –1 (mod 5). This covers all possibilities, so we never get squares that are two units less or two units more than a multiple of five.
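The congruence argument can also be checked by brute force; a couple of lines of Python (my own check, not part of the original argument) confirm that no square ever leaves a remainder of 2 or 3 on division by 5:

```python
# Collect the residues mod 5 of the first thousand squares.
residues = sorted({(n * n) % 5 for n in range(1000)})

# Only 0 (multiples of 5), +1 (n = ±1 mod 5) and 4, i.e. -1 (n = ±2 mod 5), occur.
print(residues)  # → [0, 1, 4]
```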

Note 3  Laplace, a born survivor who lived through the French Revolution, the Napoleonic era and the Bourbon Restoration, was careful to restrict his professed belief in total determinism to physical (non-human) events. But clearly there was no compelling reason to do this except the pragmatic one of keeping out of trouble with the authorities. More audacious thinkers such as Hobbes and La Mettrie, the author of the pamphlet L’Homme Machine, were both obliged to go into exile during their lives and were vilified far and wide as ‘atheists’. Nineteenth-century scientists and rationalists either avoided the topic as too contentious or, following Descartes, made a hard and fast distinction between human beings, who possessed free will, and the rest of Nature, whose behaviour was entirely reducible to the ‘laws of physics’ and thus entirely predictable, at any rate in theory.

Note 4 The current  notion of the ‘laws of physics’ is also, of course, an entirely  Platonic conception since these laws are not  in any sense physical entities and are only deducible by their presumed effects.
Plato definitely struck gold with his notion of a transcendent reality of which the physical world is an imperfect copy since this is still the overriding paradigm in the mathematical sciences. If we did not have the yardstick of, for example, the behaviour of an ‘ideal gas’ (one that obeys Boyle’s Law exactly) we could hardly do any chemistry at all ─ but, in reality, as everyone knows, no gas actually does behave like this exactly hence the eminently Platonic term ‘ideal gas’.
Where Plato went wrong, as far as I am concerned, was in visualizing his ‘Forms’ strictly in terms of the higher mathematics of his day, which was Euclidean geometry. I view them as ‘event-schemas’ since events, and not geometric shapes, are the building-blocks of reality in my theory. Plato was also mistaken in thinking these ‘Ideas’ were fixed once and for all. I believe that the majority ─ though perhaps not all ─ of the basic event-schemas which are operative in the physical universe were built up piecemeal, evolve over time and are periodically displaced by somewhat different event-schemas, much as species are.

Note 5. Because of the interest in chaos theory and the famous ‘butterfly effect’, some people seem to conclude that any slight perturbation is likely to have enormous consequences. If this really were the case, life would be intolerable. In ‘normal’ systems tinkering around with the starting conditions makes virtually no difference at all and every ‘run’, apart from maybe the first few events, ends up more or less the same. Each time you start your car in the morning it is in a different physical state from yesterday if only because of the ambient temperature. But, after perhaps some variation, provided the weather is not too extreme, the car’s behaviour settles down into the usual routine. If a working machine behaved  ‘chaotically’ it would be useless since it could not be relied on to perform in the same way from one day to the next, even from one hour to the next.
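Sensitive dependence on initial conditions is easy to exhibit with the logistic map, a standard example from chaos theory (my own illustration, not Wolfram’s): two ‘runs’ started a hair’s breadth apart stay together for a while and then diverge completely, exactly the behaviour a working machine must not have.

```python
def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)   # starting condition nudged in the 10th decimal

early_gap = abs(a[1] - b[1])                       # still tiny after one step
late_gap = max(abs(x - y) for x, y in zip(a, b))   # eventually of order 1
```

After one step the two runs are still indistinguishable; well before the sixtieth step they bear no resemblance to one another, even though each run is, step by step, completely determinate.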

Note 6  Some people seem to be prepared to accept ‘backwards causation’, i.e. that a future event can somehow dictate what leads up to it, but I find this totally unacceptable. I deliberately exclude this possibility in the basic Axioms of Ultimate Event Theory by stating that “only an event that has occurrence on the Locality can have dominance over other events”. And the final configuration certainly does not have occurrence on the Locality ─ or at any rate the same part of the Locality as actual events ─ until it actually occurs!

Note 7   Readers may be shocked at there being no mention of the Big Bang. But although I certainly believe in the reality of the Big Bang, it does not at all follow from any of the fundamental assumptions of Ultimate Event Theory and it would be dishonest of me to pretend it did. When I first started thinking along these lines Hoyle’s conceptually attractive Steady State Theory was not entirely discredited, though even then very much on the way out. The only way I can envisage the Big Bang is as a kind of cataclysmic ‘phase-transition’, presumably preceded by a long slow build-up. If we accept the possibility of there being multiple universes, the Big Bang is not quite such a rare or shocking event after all : maybe, when all is said and done, it is a cosmic ‘storm in a teacup’.

Pagoda

I want to start by expressing my gratitude to MeetUp in general and the London Futurists in particular for enabling this event to take place at all, the first time ever that my ideas have been aired in a public place. I intended to conclude the meeting with an expression of my debt to MeetUp, the Futurists and founder/organiser David Wood, but unfortunately this slipped my mind as the meeting broke up fairly rapidly after a full hour in the cold. (A summary of my talk will be given in a subsequent post.)
The meeting at the Pagoda on Sunday was, as far as I am concerned, well attended — I did not expect or desire  crowds. All those present seem to have had serious intent and to judge by the thoughtful comments made in the discussion afterwards (drastically curtailed because of the cold) they grasped the main drift of my argument. Some missed the meeting because of the weather or did not find us because we were hidden behind a wall on the south side of the Pagoda.

Two persons have already said they would like to have heard the talk and wondered whether there could be a repeat. However, I feel that my ideas are rather far from the framework and general ethos of the London Futurists — though naturally if asked I would be glad to repeat the talk indoors somewhere at a later date. Instead, I plan to have a monthly series of talks/discussions on various issues arising from ‘Ultimate Event Theory’, the scientific and philosophical system I am currently developing. The place will remain the Peace Pagoda, Battersea Park, South facing wall, at 2 p.m. on a date to be announced, probably the last Sunday of each month — watch this site in January. If no one comes at all, the session won’t be wasted since I will be periodically renewing my contact with the ideas of the Buddha via the beautiful edifice in Battersea Park.

What follows is ‘matters arising’ from the talk:

Three stages

It is said that every new scientific idea goes through three stages : first, people say it is not true; second, they say it is not important; and third, they credit the wrong person.
Although I am, to my knowledge, the first person to have taken the world-view of Hinayana Buddhism seriously as a physical theory (as opposed to a religious or metaphysical doctrine), it is entirely appropriate that the first time Ultimate Event Theory was presented verbally to the public the venue was the Peace Pagoda, built by practising Buddhist craftsmen, since the theory can be traced back to the founder of one of the five great world religions, Buddhism.
Our science stems from the Greeks, in particular the atomist Democritus of Abdera, whose works have unfortunately been lost. He is credited with the amazing statement — reductionist if ever there was one — “Nothing exists except atoms and void“. These atoms Democritus (and Newton) believed to be indestructible and eternal. Although we now know that some atoms decay, the statement is not so far out : around us are protons and neutrinos that have existed since the Big Bang almost 14 billion years ago (or very soon afterwards). And as for the void, it is healthier and more vibrant than ever, since it is seething with quantum activity (Note 1).
Dharma

But around the same time that Democritus decided that the ultimate elements of existence were eternal atoms, Gautama Buddha in India reached exactly the opposite conclusion, namely that the dharma (‘elements’) were evanescent and that everything (except nirvana) ‘lasted for a moment only’. A Buddhist credo summarised the teaching of the Buddha thus: “The Great Recluse identified the elements of existence (dharma), their causal interconnection (karma) and their ultimate extinction (nirvana)” (Stcherbatsky, The Central Conception of Buddhism).
I must emphasize that the theory I am developing, Ultimate Event Theory, is a physical theory (though it has ramifications far beyond physics) and does not presuppose any religious belief, still less is it an underhand way of ‘preaching Buddhism’ or any other form of religion. The Buddha himself founded no Church and spent the latter part of his long life wandering around India giving talks in the open air to anyone who cared to listen. My original interest in Buddhist theory was ‘scientific/philosophical’ rather than ‘spiritual’. It seemed to me that Gautama Buddha had, through the practice of meditation, intuited certain basic features of physical and mental reality, and concluded correctly that matter, mind, soul, personality and so on are all ‘secondary’ not primary entities — in today’s parlance they are ’emergent’ entities. He also saw, or rather felt, that ‘existence’ was not continuous but that everything (including the physical universe) is, as it were, being destroyed and recreated at every instant (the Theory of Instantaneous Being). I do not personally, however, conclude that the personality, consciousness, free will and so on are ‘illusory’ as the Buddhist tradition seems to have inferred, merely not primary, not basic. At bottom we are seemingly all made up of elementary particles and forces between these particles, but at a deeper level still I believe that everything is composed of momentary ‘ultimate events’ flashing into existence and then disappearing for ever. As far as I am concerned the buck stops here : beyond the dharma lies only the Absolute, the ground of all being, and this, though it can perhaps be glimpsed by mystics, is wholly outside the domain of science, rational thought and mathematics. “The Tao that can be named (or measured) is not the original Tao”.      SH  5 December 2012

Note 1  For the claim that Space/Time is “grainy” see Is Space Digital by Michael Moyer, Scientific American Feb. 2012, also  “How big is a grain of space-time?”  by Anil Ananthaswamy (New Scientist 9 July 2011)

______________________________________________________________________

Genesis of Ultimate Event Theory :  My life could be divided into two periods, the first ending one morning in the late seventies when I came across a curious book with the bizarre title Buddhist Logic in Balham Public Library, Battersea, London. In this book for the first time I came across the idea that had always seemed to me intuitively to be true, that reality and existence were not continuous but discontinuous and, moreover, punctuated by gaps — as the German philosopher Heidegger put it, “Being is shot through with nothingness”. A whole school of thinkers, those of the later Hinayana, took this statement as so obvious it was hardly worth arguing about (though they did produce arguments to persuade their opponents, hence the title of the book).
This well-written tome of Stcherbatsky, not himself a practising Buddhist, thus introduced me to the ideas of certain Hinayana thinkers during the first few centuries of the modern era (Dignaga, Vasubandhu et al.). I saw at once how ‘modern’ their views were and how, with a certain ingenuity, one could perhaps transform their ‘metaphysics’ into a physical theory very different from what is taught today in schools. These deep and subtle thinkers, in every way the equal of the Greeks, had no interest in developing a physical theory for its own sake since their concern was with personal ‘enlightenment’ rather than the elucidation of the physical world. Had they and their followers wished it, quite conceivably the world-wide scientific revolution would have taken place, not in the then backward West, but in India. But maybe the time has now come for the insights of these men to take root, some 1,800 years later, on the other side of the world and to eventually become the basis of a new science and a new technology. Matter is getting thinner and thinner in contemporary physics, so why not drop it entirely and stop viewing the world as the interaction of atoms or elementary particles? According to Buddhism the ‘natural’ tendency of everything is not to last for ever (like Newton’s atoms) but to disappear, and the relative persistence of certain rare event-chains is to be ascribed to a causal binding force, a sort of physical equivalent of karma. There is no Space/Time continuum, only a connected discontinuum which is full of gaps. The universe itself will come to an end and everything will return to the absolute quiescence of nirvana — though some later Buddhist thinkers, like some contemporary cosmologists, envisage a never-ending cycle of emergence/extinction/emergence……

Recommended Reading  Those interested in Buddhism as a ‘way of life’ are recommended to start (and also perhaps finish) with Conze, A Short History of Buddhism. This book really is short (132 small size pages) and so good that I seriously doubt whether anyone really needs to read any other book on the subject (unless they want to follow up a particular aspect of the theory) : the writing is clear, concise, comprehensive, pungent. If I were allowed to take only twenty books on a desert island, this would be one of them.
The Russian scholar Stcherbatsky whose books had such a big effect on me has written three seminal works covering the three main aspects of (Hinayana) Buddhism. The Central Conception of Buddhism concerns what I call ‘ultimate events’ (dharma),  Buddhist Logic deals in the main with causality (karma) and The Buddhist Conception of Nirvana with nirvana as one might expect.  Although it is the second book, Buddhist Logic (Volume 1 only), that influenced me, most interested readers would probably find it forbidding in aspect and would be advised to read the Central Conception of Buddhism first (100 pages only) , and not to bother at all with The Buddhist Conception of Nirvana which I found quite poor.

(Note: This is not a review of the best-selling book, The Black Swan, The Impact of the Highly Improbable, by Nassim Nicholas Taleb : I shall merely discuss its relevance to the practical side of ‘Eventrics’.  S.H.)

The chief drawback of this otherwise very interesting and insightful book, The Black Swan, is that it is too negative. It tends to focus on catastrophic Black Swan events and argues that such events are strictly unpredictable, inherently so, not just because we lack the necessary information or computing power. In my own lifetime I have witnessed incredible turnarounds that strictly no one saw coming: the sudden collapse of communism in Eastern Europe, the advent of the Internet, 9/11, the financial meltdown of 2008, the sudden emergence of China as the 21st century’s superpower; the list is endless. I was personally a witness of the May 1968 ‘Student Revolution’ when a scuffle between students and the police in the Sorbonne rapidly led on to a general collapse of law and order and the longest General Strike in a Western country during the 20th century. The amazing thing was that all the Left (and Right) political groups and parties didn’t know whether they were coming or going : this was an entirely unforeseen and above all spontaneous movement emerging from nowhere (Note 1).
For these and other reasons, I had no difficulty in agreeing with Taleb’s main thesis that history moves by jumps, not small steps, and that the big jumps are caused by events few people, if any, predicted, by what he calls ‘Black Swan events’. (To recap : a Black Swan event is an event that is rare, sudden, unexpected and has extreme impact.)
But what about one’s personal life? Can anything be done about Black Swan events, the unpredictables of life, apart from getting out of the way when they start looming up and you don’t like the look of them ? That Taleb thinks something can be done is shown by the uncharacteristic aside made on p. 206 “As a matter of fact, I suspect that the most successful businesses are precisely those that know how to work round inherent unpredictability and even exploit it“.  I entirely concur with this, except that I would remove the word ‘even‘.
In the active professions (business, warfare, invention, living by your wits, staying alive when you should be dead, etc.), it is essential not only to fully recognize the role of the unexpected but to be prepared to turn it to one’s (apparent or real) advantage. This is an extremely difficult skill that may need a lifetime of practice, but it is worth learning because it can lead to outcomes that otherwise would be unthinkable — this is why it is called ‘Not-Doing’ in the Tao Te Ching, to distinguish it from ‘Doing’ which requires the use of force and/or intellect.
The “modest tricks” (Taleb’s term) that Taleb has gleaned from his life as an option trader and sceptical observer of humanity are given on pp. 206-11 of his book, The Black Swan (Penguin edition). The key principle which may be called Taleb’s Wager derived from Pascal’s rather dubious ‘wager’ concerning the existence of God, goes as follows :
“I will use statistical and inductive methods to make aggressive bets, but I will not use them to manage my risks and exposures.”
Taleb, Fooled by Randomness p. 130

I am not sure that this principle quite follows from the author’s basic premises; it sounds more like a ‘rule of thumb’, the sort of thing Taleb in other contexts tends to look down on, since he has little time for instinct and ‘gut reactions’. But the logical argument seems to go something like this :
“It makes sense to use conventional wisdom when calculating likely positive outcomes because the conventional economic wisdom works (up to a point) if we entirely disregard the possibility of potent unexpected events, Black Swans. Now, if a Black Swan event is fortunate (for us) we don’t need to take it into account because it will happen when and if it will happen : all we need to bother about is the everyday events which are, up to a point, predictable. But the reverse applies to an unfortunate Black Swan : we can’t stop it, no one can say when and if it will strike, so the best thing to do is cover ourselves against such an occurrence and completely disregard received opinion in the matter because it almost always discounts such events.”
The ensuing ‘life-strategy’ is to ‘make oneself available to fortunate Black Swans’ while ‘covering one’s defences against unfortunate ones’, e.g. by having a Plan B, not putting all one’s eggs in one basket and so on. Taleb claims that “all the surviving traders I know seem to have done the same… They make sure that the costs of being wrong are limited (and their probability [i.e. the probability of the unfortunate Black Swan events S.H.] is not derived from past data)” (op. cit.).
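Taleb’s wager can be caricatured in a few lines of simulation. The following is my own toy model, not anything from his book, and every number in it is an arbitrary assumption: routine gains are small and predictable, shocks are rare and extreme, and the only thing under our control is whether the cost of a negative shock is capped by a ‘Plan B’.

```python
import random

def lifetime_payoff(capped, years=5000, seed=1):
    """Toy model: steady routine gains plus rare extreme shocks.
    If `capped`, a Plan B limits the damage of a negative shock.
    All magnitudes and probabilities are arbitrary illustrations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        total += 1.0                     # routine, predictable gain
        if rng.random() < 0.01:          # a rare Black Swan strikes
            shock = -200.0 if rng.random() < 0.5 else 300.0
            if capped and shock < 0:
                shock = -10.0            # the cost of being wrong is limited
            total += shock
    return total

print(lifetime_payoff(capped=True) > lifetime_payoff(capped=False))
```

Because both runs use the same seed, they experience identical shocks, so the difference between the two totals measures only the value of limiting one’s exposure while staying available to the fortunate shocks.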
Taleb’s Wager seems to have worked in his particular case. He tells us at one point, without giving the details, that he made a substantial amount of money during the shock 1987 Wall Street ‘Black Monday’ crash, and, without any prior literary credentials, he also succeeded in publishing a book which turned out to be a best-seller — no small feat given the exclusiveness and current jitteriness of the publishing industry.
    How do you get yourself into a position of ‘maximum exposure’ to positive Black Swans? Well, one way is via chance encounters in bars, which is how James Dean and Rock Hudson got ‘spotted’ (Rock Hudson was a truck-driver at the time). Consequently, according to Taleb, it is advantageous to live in (or at least assiduously frequent) a big city, because serendipitous chance encounters are much more likely to happen there. It also pays to ‘go out’: as he remarks, diplomats, a fortiori spies, know the rich spoils to be had from hosting or attending parties. But even more bread-and-butter professionals should take note: “If you’re a scientist, you will chance upon a remark that might spark new research”. Learned societies, including the Royal Society itself, began as informal get-togethers of enthusiastic amateurs and often met in inns; Paris owed its central cultural (and political) position for two whole centuries not so much to its progressive educational system as to its unique café ambiance. You could still see Sartre sipping coffee at Les Deux Magots on occasion when I first hit the Boulevard Saint-Germain (Note 1), and Cocteau recounts meeting at La Rotonde, a café I used to frequent, a funny little man with a pointed beard who, when asked what he wanted to do in life, replied, to general hilarity, that he was trying to bring down the Russian government — it was Lenin.
A point not mentioned, I think, by Taleb is that a negative Black Swan, if it doesn’t completely finish you off, can morph into a positive one: losing a battle could make you seriously revise a defective strategy; failing dismally to make a go of it as a commercial traveller might propel you into a less lucrative but much more satisfying profession. One might even hazard the guess that a Black Swan turned on its head, so to speak, is more effective than a straightforwardly fortunate Black Swan. Steve Jobs lost the fight with Bill Gates over PCs, but this prompted him to move into mobile phones, iPods and so on: the result is that Apple is, so I have been told, currently rated as an even more valuable company than Microsoft. Hitler transformed the complete fiasco of the Beer Hall Putsch into a resounding success by using his appearance in court as a way to broadcast his poisonous views to the nation, and it is said that it was to prevent this happening again that the Marines were told not to take Bin Laden alive.
Another useful tip from Taleb is not to be too precise about the sort of positive Black Swan you’re looking for. Since Black Swan events are by definition unexpected, they will appear in unexpected disguises — even, or above all, to those who are out hunting for them.
Certain other pieces of advice, especially those relating to probability and ‘rational decision-making’, I find a good deal less useful: they may be of value on Wall Street but not in the sort of places I’m used to frequenting. The entire apparatus of traditional logic, ‘straight thinking’, probability theory, even mathematics, is almost completely irrelevant to the hurly-burly of ‘real life’, which is one reason why so many people with little formal education, e.g. Edison, Bill Gates and Richard Branson, have been spectacularly successful in business. Mathematics creates a (virtually) foolproof little world closed off from the exterior: this is its strength, sometimes its beauty, but also its hopeless limitation. In real life, you generally have totally inadequate, even untrustworthy, data, and there is no time to fit the data to equations, no time to quantify what you’ve got in front of you. You have to make quick qualitative decisions — exactly what mathematicians and logicians spurn — sign or don’t sign that document, fight or flee if you’re attacked in the street. ‘Rules of thumb’ based on experience are a good deal more use out in the real world than training in formal logic. Amusingly, someone I knew who worked in information technology told me that his firm did not welcome mathematicians, was indeed rather wary of them. The reason is not hard to guess: used as they are to perfectly well set-up situations, they are flummoxed by the unexpected and are no better at everyday decision-making than other people, often worse.
But betting on your own life’s best option is completely different from betting on the Stock Exchange. Why? For a start (as Taleb mentions in passing), in most professions you pay for your bad business decisions because the money’s your own, and this clarifies the mind (or alternatively destabilizes it). Traders don’t, the big fish anyway, since even if they fail lamentably, they exit with golden handshakes. But, even leaving this aside, there are several other differences. On the Stock Exchange, a single action of a single individual, unless he is Warren Buffett or George Soros, will not have much effect; in one’s own personal life, however, a single decision at a decisive point may count for more than years of effort. Retrospectively — though usually not prospectively — one sees certain key choices sticking out like signposts. Again, in real life there is no objective standard, no Moody’s publishing objective commercial ratings, since one man’s wine may be another man’s poison.
Real-life Black Swan situations also have a complication which seemingly does not apply to the Stock Exchange or the Board Meeting (though maybe it does after all): at the beginning it is almost impossible to distinguish between a very favourable and a very unfavourable occasion, a negative or a positive Black Swan. Not only do you have to learn to deal with the unexpected, but you must also learn to cope with not knowing into which category the event cluster you find yourself committed to falls. Is the charming man or woman you have just met, and whom you feel you know so well already after only twenty minutes, going to be the person who will waft you out of obscurity to fame and fortune (if that’s what you want), or maybe tell you an important secret about the meaning of life? Or is he or she simply a confidence trickster or, which is almost as bad, someone who is going, almost deliberately, to put you on the wrong track? In the fascinating but lethal hothouse habitat of big cities, it pays to hone your ability to sum people up quickly and accurately: Richard Branson is on record as saying that he sums up a potential customer in the first two minutes and has rarely had reason to regret his verdict. It is also important, in many present-day chance encounters, to be ready to run away if and when things turn nasty (fleeing is usually safer than fighting).
Taleb does mention an important defence stratagem: setting yourself in advance a “cut-off point”, the moment when you will stop lending someone more money (or stop asking someone for more and so lose him as a friend, which is more difficult to practise). “[In trading circles] this is called a ‘stop loss’, a predetermined exit point, a protection from the black swan. I find it rarely practised.” (Taleb, Fooled by Randomness p. 131).
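The essence of the ‘stop loss’ is that the exit point is fixed before anything begins and is never revised mid-stream. A toy sketch in Python of the lending version — the function name and the amounts are my own invention, not Taleb’s:

```python
def lend_with_stop_loss(requests, stop_loss):
    """Grant successive loan requests until the predetermined
    limit would be breached, then exit for good."""
    total = 0.0
    granted = []
    for amount in requests:
        if total + amount > stop_loss:
            break  # the predetermined exit point: no exceptions
        total += amount
        granted.append(amount)
    return granted, total

# The requests escalate; the cut-off of 500 was fixed in advance,
# so the fourth, ruinous request is refused automatically.
granted, total = lend_with_stop_loss([100, 150, 200, 250], stop_loss=500)
```

The whole protection lies in the `break`: the decision to stop was made in calmer times, so there is nothing left to agonise over when the moment arrives.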
Personal human situations are, anyway, very different from the complex physical systems, such as the weather, studied by chaos theory and complexity theory: there is a further layer of complexity added on, since human beings are at one and the same time players and observers, inside the game and on the side-lines. They can in theory, if not in practice, “learn by their mistakes”. This happens in nature as well, of course, but the time-scale is rather longer — for a species it might be millions of years.
I disagree with Taleb’s strictures against what he calls the ‘narrative fallacy’, the tendency of human beings to jump to the conclusion that “where there is pattern, there is significance”. This faculty doubtless has deep evolutionary origins: it goes back to the days when it was essential to interpret rapidly the scarcely perceptible visual or auditory patterns which might well betray the proximity of a predator. Even on the cultural/intellectual level, pattern interpretation on the grand scale, though fraught with danger, has been incredibly productive even when it has turned out to be quite misguided: scarcely anyone today believes in Plato’s, on the face of it, fantastic doctrine of Eternal Forms, but the approach has been extremely useful in the development of science — the very concept of an ‘ideal gas’ is thoroughly Platonic.
In any case, from the point of view of Ultimate Event Theory, even ‘meaningless’ transitory patterns are significant, since they are the result of ephemeral associations and dissociations of events: the point is not whether these patterns are ‘spurious’ or ‘real’ — everything that has occurrence is real — but whether they persist or not. Mandelbrot, who, like Taleb, warns against seeing significant patterns in financial price shifts, says more than he realizes when he remarks that such changes “can be persistent, meaning that they reinforce each other; a trend once started tends to keep going. (…) Or they can be anti-persistent, meaning they contradict each other; a trend once begun is likely to reverse itself” (Mandelbrot, The (Mis)Behaviour of Markets p. 245). From my point of view, what this shows is that there are varying hidden forces of consolidation and dispersion at work, not unlike the tiny electro-magnetic forces between molecules in a gas (van der Waals forces). It is strange that, although many writers are quite prepared to view the ‘market’ as a ‘living system’ not unlike a bacterial colony, no one seems prepared to view the components of such a system, events, as in some sense ‘alive’. But it is these myriad independent events, decisions, forecasts, sales &c., which direct everything and which, when and if they come together and act in unison, cause a boom or a crash.
A more serious limitation of Taleb’s nonetheless excellent book is that he tends to view human beings as essentially passive victims of the unforeseen rather than as deliberate activators of change. You do not have to just sit waiting for a fortunate event to happen: you can sometimes deliberately put yourself in the way of a likely Black Swan event, or even manufacture one, a technique which I call ‘Doing the Opposite’. If you are naturally an orderly, rational sort of person, do something wild and completely out of character, like the woman who, after working in the City for many years, rowed single-handed across the Atlantic (Note 2); if you are naturally a spontaneous and romantic person, enrol for a course in calculus or Business Studies; you might even find you enjoy it. Such an unexpected course of action administers a severe shock to the system and, if it recovers (which it usually does), you will find yourself thoroughly invigorated.
Do I practise what I preach on this last point? Pretty much. Last year, to the stupefaction of everyone who knows me, including myself, I suddenly decided to buy a ticket I couldn’t really afford, to travel for the first time to a country that has always repelled me (America), using a form of transport I disapprove of (air), in order to engage in an activity I detest (trying to flog some of my work to strangers) in an environment I’d been warned I would absolutely loathe (Hollywood). I didn’t land an option with Warner Brothers, but I view this decision as one of the most fruitful I’ve ever made in my life, simply for the experience, and I even plan to go back for more punishment this year. (Actually, contrary to their reputation, I found almost everyone I met in LA charming, and parts of downtown Los Angeles hauntingly beautiful, chock-a-block with the most incredible Art Deco buildings slowly crumbling into the dust, the whole area suffused with the faded glamour of the lost Golden Era of Los Angeles, the Twenties.)   S.H.

Notes: (1) My reminiscences of these events, under the title Le Temps des Cerises: May ’68 and Aftermaths, have been published in The Raven, Anarchist Quarterly, Vol. 10 Number 2, Summer 1998

(2) “In 2005 Roz Savage became the first solo woman to compete in the Atlantic Rowing Race. She set out from the Canaries to row 3,000 miles alone and unsupported, and eventually arrived in Antigua after 103 days alone at sea.
What really intrigued me about her story was the process that led to her embarking on this extraordinary adventure. (…) After University, she followed a typical career path, working her way up the corporate ladder, firstly as a Management Consultant and then moving on to be an Investment Banker. (…) In her early thirties, she started to get a niggling feeling that something was missing and that perhaps there was ‘more to life than this’ ”
From 52 Ways to Handle It, by Annabel Sutton (Neal’s Yard Press 2007)
