Archives for category: Taleb

Are there/can there be events that are truly random?
First of all we need to ask ourselves what we understand by randomness. As with many other properties, it is much easier to say what randomness is not than to say what it is.

Definitions of Randomness

“If a series of events or other assortment exhibits a definite pattern, then it is not random” ─ I think practically everyone would agree to this.
This may be called the lack of pattern definition of randomness. It is the broadest and also the vaguest definition but at the end of the day it is what we always seem to come back to. Stephen Wolfram, the inventor of the software programme Mathematica and a life-long ‘randomness student’, uses the ‘lack of pattern’ definition. He writes, “When one says that something seems random, what one usually means is that one cannot see any regularities in it” (Wolfram, A New Kind of Science p. 316).
        The weakness of this definition, of course, is that it offers no guidance on how to distinguish between ephemeral patterns and lasting ones (except to keep on looking) and some people have questioned whether the very concept of ‘pattern’ has any clear meaning. For this reason, the ‘lack of pattern’ definition is little used in science and mathematics, at least explicitly.

The second definition of randomness is the unpredictable definition and it follows on from the first since if a sequence exhibits patterning we can usually tell how it is going to continue, at least in principle. The trouble with this definition is that it has nothing to say about why such and such an event is unpredictable, whether it is unpredictable simply because we don’t have the necessary  information or for some more basic reason. Practically speaking, this may not make a lot of difference in the short run but, as far as I am concerned, the difference is not at all academic since it raises profound issues about the nature of physical reality and where we stand on this issue can lead to very different life-styles and life choices.

The third definition of randomness, the frequency definition, goes something like this. If, given a well-known and well-defined set-up, a particular outcome, or set of outcomes, in the long run crops up just as frequently (or just as infrequently for that matter) as any other feasible outcome, we class this outcome as ‘random’ (Note 1). A six coming up when I throw a dice is a typical example of a ‘random event’ in the frequency sense. Even though any particular throw is perfectly determinate physically, over thousands or millions of throws a six would come up no more and no less often than any of the other possible outcomes, or would deviate from this ‘expected value’ by a very small amount indeed. So at any rate it is claimed and, as far as I know, experiment fully supports this claim.
It is the frequency definition that is usually employed in mathematics and mathematicians are typically always on the look-out for persistent deviations from what might be expected in terms of frequency. The presence or absence of some numerical or geometrical feature without any obvious reason suggests that there is, or at any rate might be, some hidden principle at work (Note 2).
The trouble with the frequency definition is that it is pretty well useless in the real world since a vast number of instances is required to ‘prove’ that an event is random or not ─ in principle an ‘infinite’ number ─ and when confronted with messy real-life situations we have neither the time nor the capability to carry out extensive trials. What generally happens is that, if we have no information to the contrary, we assume that a particular outcome is ‘just as likely’ as another one and proceed from there. The justification for such an assumption is post hoc: it may or may not ‘prove’ to be a sound assumption and the ‘proof’ involved has nothing to do with logic, only with the facts of the matter, facts that originally we do not and usually cannot know.
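The frequency definition, impractical as it is in the real world, is at least easy to demonstrate in a simulated one. The following sketch (the seed and the number of throws are arbitrary choices of mine) counts how often each face of a simulated die turns up:

```python
import random

# Simulate many throws of a fair six-sided die and compare the
# observed frequency of each face with the expected 1/6.
random.seed(42)          # fixed seed so the run is reproducible
throws = 600_000
counts = [0] * 6
for _ in range(throws):
    counts[random.randrange(6)] += 1

for face, n in enumerate(counts, start=1):
    print(f"face {face}: {n / throws:.4f}")

# Every observed frequency lies very close to 1/6 ≈ 0.1667, and the
# deviations shrink as the number of throws grows.
deviation = max(abs(n / throws - 1 / 6) for n in counts)
print(f"largest deviation from 1/6: {deviation:.5f}")
```

The largest deviation shrinks roughly as the inverse square root of the number of trials, which is precisely why ‘proving’ randomness this way in real life, where trials are scarce, is hopeless.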

The fourth and least popular definition of randomness is the causality definition. For me, ‘randomness’ has to do with causality ─ or rather the lack of it. If an event is brought about by another event, it may be unexpected but it is not random. Not being a snooker player I wouldn’t bet too much money on exactly what is going to happen when one ball slams full pelt into the pack. But, at least according to Newtonian Mechanics, once the ball has left the cue, whatever does happen “was bound to happen” and that is that. The fact that the outcome is almost certainly unpredictable in all its finest details even for a powerful computer is irrelevant.
The weakness of this definition is that there is no foolproof way to test the presence or absence of causality: we can at best only infer it and we might be mistaken. A good deal of practical science is taken up with distinguishing between spurious and genuine cases of causality and, to make matters worse, philosophers such as Hume and Wittgenstein go so far as to question whether this intangible ‘something’ we call causality is a feature of the real world at all. Ultimately, all that can be said in answer to such systematic sceptics is that belief in causality is a psychological necessity and that it is hard to see how we could have any science or reliable knowledge at all without bringing causality into the picture either explicitly or implicitly. I am temperamentally so much a believer in causality that I view it as a force, indeed as the most basic force of all, since if it stopped operating in the way we expect, life as we know it would be well-nigh impossible. For we could not be sure of the consequences of even the most ordinary actions; indeed, if we could in some way voluntarily disturb the way in which causes and effects get associated, we could reduce an enemy state to helplessness much more rapidly and effectively than by unleashing a nuclear bomb. I did actually, only half-facetiously, suggest that the Pentagon would be well advised to do some research into the matter ─ and quite possibly they already have done. Science has not paid enough attention to causality: it tends either to take its ‘normal’ operation for granted or to dispense with it altogether by invoking the ‘Uncertainty Principle’ when this is convenient. No one as far as I know has suggested that there may be degrees of causality or that there could be an unequal distribution of causality amongst events.

Determinism and indeterminism

Is randomness in the ‘absence of causality’ sense in fact possible? Not so long ago it was ‘scientifically correct’ to believe in total determinism and Laplace, the French mathematician, famously claimed that if we knew the current state of the universe with enough precision we could predict its entire future evolution (Note 3). There is clearly no place for inherent randomness in this perspective, only inadequate information.
Laplace’s view is no longer de rigueur in science, largely because of Quantum Mechanics and Chaos Theory. But the difference between the two world-views has been greatly exaggerated. What we get in Quantum Mechanics (and other branches of science not necessarily limited to the world of the very small) is generally the replacement of individual determinism by so-called statistical determinism. It is, for example, said to be the case that a certain proportion of the atoms in a radio-active substance will decay within a specified time, but which particular atom out of the (usually very large) number in the sample actually will decay is classed as ‘random’. And in saying this, physics textbooks do not usually mean that such an event is merely unpredictable in practice but that it is genuinely unknowable, thus indeterminate.
But what exactly is it that is ‘random’? Not the micro-events themselves (the radio-active decay of particular atoms) but only their order of occurrence. Within a specified time limit half, or three quarters, or some other proportion of the atoms in the sample will have decayed, and if you are prepared to wait long enough the entire sample will decay. Thus, even though the next event in the sequence is not only unpredictable for practical reasons but actually indeterminate, the eventual outcome of the entire sample is completely determined and, not only that, completely predictable!
Normally, if one event follows another we assume, usually but not always with good reason, that this prior event ‘caused’ the subsequent event, or at least had something to do with its occurrence. And even if we cannot specify the particular event that causes such and such an outcome, we generally assume that there is such an event. But in the case of this particular class of events, the decay of radio-active atoms, no single event has, as I prefer to put it, any ‘dominance’ over any other. Nonetheless, every atom will eventually decay : they have no more choice in the matter than Newton’s billiard balls.
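The two levels of this ‘statistical determinism’ can be imitated numerically. In the sketch below (the per-step decay probability and the sample size are arbitrary illustrative choices of mine, not physical values), each simulated atom decays at a random moment, yet the count of survivors tracks the deterministic decay law very closely:

```python
import random

# Each atom decays independently with the same small probability per
# time step. Which atom goes next is 'random', yet the number still
# surviving after each step tracks the exponential decay law.
random.seed(1)
n_atoms = 100_000
p = 0.05                     # decay probability per step (assumed)
alive = n_atoms
survivors = []
for step in range(20):
    decayed = sum(1 for _ in range(alive) if random.random() < p)
    alive -= decayed
    survivors.append(alive)

# Compare the simulated counts with the deterministic prediction
# n_atoms * (1 - p)**t : individually random, collectively determined.
for t, s in enumerate(survivors, start=1):
    predicted = n_atoms * (1 - p) ** t
    print(f"t={t:2d}  simulated={s:6d}  predicted={predicted:8.0f}")
```

The order in which individual atoms ‘go’ differs on every run, but the aggregate curve barely changes, which is exactly the situation described above.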
Random Generation

To me, the only way the notion of ‘overall determinism without individual determinism’ makes any sense at all is by supposing that there is some sort of a schema which dictates the ultimate outcome but which leaves the exact order of events unspecified. This is an entirely Platonic conception since it presupposes an eventual configuration that has, during the time decay is going on, no physical existence whatsoever and can even be prevented from manifesting itself by my forcibly intervening and disrupting the entire procedure! Yet the supposed schema must be considered in some sense ‘real’ for the very good reason that it has bona fide observable physical effects which the vast majority of imaginary shapes and forms certainly do not have (Note 4).

An example of something similar can be seen in the case of the development of an old-fashioned (non-digital) picture taken in such faint light that the lens only allows one photon to get through at a time. “The development process is a chemical amplification of an initial atomic event…. If a photograph is taken with exceedingly feeble light, one can verify that the image is built up by individual photons arriving independently and, it would seem at first, almost randomly distributed in position” (French & Taylor, An Introduction to Quantum Physics pp. 88-9). This case is slightly different from that of radio-active decay since the photograph has already been taken. But the order of events leading up to the final pattern is arbitrary and, as I understand it, will be different on different occasions. It is almost as if, because the final configuration is fixed, the order of events is ‘allowed’ to be random.

Uncertainty or Indeterminacy ?

Almost everyone who addresses the subject of randomness somehow manages to dodge the central question, the only question that really matters as far as I am concerned, which is: Are unpredictable events unpredictable merely because we lack the necessary information, or are they inherently indeterminate?
        Taleb is the contemporary thinker responsible more than anyone else for opening up Pandora’s Box of Randomness, so I looked back at his books to see what his stance on the uncertainty/indeterminacy issue was. His deep-rooted conviction that the future is unpredictable and his obstinacy in sticking to his guns against the experts would seem to be driving him in the indeterminate direction but at the last minute he backs off and retreats to the safer sceptical position of “we just don’t know”.

“A true random system is in fact random and does not have predictable properties. A chaotic system [in the scientific sense] has entirely predictable properties, but they are hard to know.” (The Black Swan p. 198)

This is excellent and I couldn’t agree more. But he proceeds: “…in theory randomness is an intrinsic property, in practice, randomness is incomplete information, what I called opacity in Chapter 1. (…) Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.” (The Black Swan p. 198)
As far as I am concerned randomness either is or is not an intrinsic property, and the difference between theory and practice doesn’t come into it. No doubt, from the viewpoint of an options trader, it doesn’t really matter whether market prices are ‘inherently unpredictable’ or ‘indeterminate’ since one still has to decide whether to buy or not. However, even from a strictly practical point of view, there is a difference, and a big one, between intrinsic and ‘effective’ randomness.
Psychologically, human beings feel much easier working with positives than negatives, as all the self-help books will tell you, and it is even claimed that “the unconscious mind does not understand negatives”. At first sight ‘uncertainty’ and ‘indeterminacy’ appear to be equally negative but I would argue that they are not. If you decide that some outcome is ‘uncertain’ because we will never have the requisite information, you will most likely not think any more about the matter but instead work out a strategy for coping with uncertainty ─ which is exactly what Taleb advocates and claims to have put into practice successfully in his career on the stock market.
On the other hand, if one ends up by becoming convinced that certain events really are indeterminate, then this raises a lot of very serious questions. The concept of a truly random event, even more so a stream of them, is very odd indeed. One is at once reminded of the quip about random numbers being so “difficult to generate that we can’t afford to leave it to chance”. This is rather more than a weak joke. There is a market for ‘random numbers’ and very sophisticated methods are employed to generate them. The first ‘random number generators’ in computer software were based on negative feedback loops, the irritating ‘noise’ that modern digital systems are precisely designed to eliminate. Other lists are extracted from the expansion of π (which has been taken to over a billion digits) since mathematicians are convinced this expansion will never show any periodicity and indeed none has been found. Other lists are based on so-called linear congruences.  But all this is in the highest degree paradoxical since these two last methods are based on specific procedures or algorithms and so the numbers that actually turn up are not in the least random by my definition. These numbers are random only by the frequency and lack of pattern definitions and as for predictability the situation is ambivalent. The next number in an arbitrary  section of the expansion of π  is completely unpredictable if all you have in front of you is a list of numbers but it is not only perfectly determinate but perfectly predictable if you happen to know the underlying algorithm.
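A linear congruence makes the paradox concrete. The sketch below uses the classic Park–Miller ‘minimal standard’ multiplier (a real and widely published choice); every output is completely determined by the algorithm and the seed, so the numbers are not in the least random by the causality definition, yet the sequence passes a simple frequency test:

```python
from collections import Counter

def lcg(seed, a=16807, m=2**31 - 1):
    """Park-Miller 'minimal standard' linear congruential generator:
    fully deterministic, yet random-looking by the frequency test."""
    x = seed
    while True:
        x = (a * x) % m
        yield x

gen = lcg(seed=1)
sample = [next(gen) % 6 for _ in range(60_000)]   # map to die faces 0..5

# Frequency test: each 'face' should appear close to 1/6 of the time.
freqs = {face: n / len(sample) for face, n in sorted(Counter(sample).items())}
print(freqs)

# But knowing the algorithm and the seed, the 'next number' is
# perfectly predictable: restarting from the same seed reproduces
# the whole sequence exactly.
gen2 = lcg(seed=1)
assert [next(gen2) % 6 for _ in range(60_000)] == sample
```

Exactly as with the digits of π: unpredictable if all you have is the list, perfectly predictable if you have the algorithm.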

Three types of Randomness

 Stephen Wolfram makes a useful distinction between three basic kinds of randomness. Firstly, we have randomness which relies on the connection of a series of events to its environment. The example he gives is the rocking of a boat on a rough sea. Since the boat’s movements depend on the overall state of the ocean, its motions are certainly unpredictable for us because there are so many variables involved ─ but perhaps not for Laplace’s Supermind.
Wolfram’s second type of ‘randomness’ arises, not because a series of events is continuously interacting with its environment, but because it is sensitively dependent on the initial conditions. Changing these conditions even very slightly can dramatically alter the entire future of the system, and one consequence is that it is quite impossible to trace the current state of a system back to its original state. This is the sort of case studied in chaos theory. However, such a system, though it behaves in ways we don’t and can’t anticipate, is strictly determinate in the sense that every single event in a ‘run’ is completely fixed in advance (Note 5).
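This sensitive dependence is easy to exhibit with a standard textbook example, the logistic map (my choice of illustration, not one of Wolfram’s own): two starting values differing by one part in a billion soon follow completely different trajectories, although every single step is strictly determinate.

```python
# Iterate the logistic map x -> r*x*(1-x) at r = 4, a standard
# chaotic system, from two almost identical starting points.
def logistic_orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.400000000)
b = logistic_orbit(0.400000001)   # perturbed by one part in a billion

# Early on the orbits are indistinguishable; after a few dozen
# perfectly determinate steps they have nothing to do with each other.
for t in (0, 10, 30, 50):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}")
```

Nothing here is indeterminate: rerunning the program gives identical numbers every time. The ‘randomness’ lies entirely in our inability to know the starting condition to unlimited precision.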
Both these methods of generating randomness depend on something or someone from outside the sequence of events: in the first case the randomness is imported from the far larger and more complex system that is the ocean, and in the second case the randomness lies in the starting conditions, which themselves derive from the environment or are deliberately set by the experimenter. In neither case is the randomness inherent in the system itself and so, for this reason, we can generally reduce the amount of randomness by, for example, securely tethering the boat to a larger vessel or by only allowing a small number of possible starting conditions.
Wolfram’s third and final class of generators of randomness is, however, quite different since they are inherent random generators. The examples he gives are special types of cellular automaton. A cellular automaton consists essentially of a ‘seed’, which can be a single cell, and a ‘rule’ which stipulates how a cell of a certain colour or shape is to evolve. In the simplest cases we just have two colours, black and white, and start with a single black or white cell. Most of the rules produce simple repetitive patterns as one would expect, others produce what looks like a mixture of ‘order’ and ‘chaos’, while a few show no semblance of repetitiveness or periodicity whatsoever. One of these, which Wolfram classes as Rule 30, has actually been employed in Random[Integer], which is part of Mathematica, and so has proved its worth by contributing to the financial success of the programme; it has also, according to its inventor, passed all the tests for randomness it has been subjected to.
Why is this so remarkable? Because in this case there is absolutely no dependence on anything external to the particular sequence which is entirely defined by the (non-random) start point and by an extremely simple rule. The randomness, if such it is to be called, is thus ‘entirely self-generated’ : this is not production of randomness by interaction with other sets of events  but is, if you like, randomness  by parthenogenesis. Also, and more significantly, the author claims that it is this type of randomness that we find above all in nature (though the other two types are also present).
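Rule 30 itself is simple enough to state in a few lines. The sketch below evolves it from a single black cell; the rule number 30, read as an eight-bit binary string, specifies the new colour of a cell for each of the eight possible configurations of the cell and its two neighbours, and nothing whatsoever is imported from outside:

```python
RULE = 30

def step(cells):
    """Apply Rule 30 to a row of 0/1 cells (padded with white cells).
    The new colour is bit 'n' of RULE, where n encodes the left,
    centre and right neighbours as a 3-bit number."""
    padded = [0, 0] + cells + [0, 0]
    new = []
    for i in range(1, len(padded) - 1):
        neighbourhood = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        new.append((RULE >> neighbourhood) & 1)
    return new

row = [1]                      # the 'seed': a single black cell
centre_column = []
for _ in range(16):
    centre_column.append(row[len(row) // 2])
    # print the row, widened by one cell each side per step
    print(''.join('#' if c else '.' for c in row).center(40))
    row = step(row)

print('centre column:', centre_column)
```

The centre column of this triangle, which begins 1, 1, 0, 1, 1, 1, 0, 0, …, is the kind of sequence whose randomness Wolfram reports has passed every statistical test applied to it, despite being ‘entirely self-generated’.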

Causal Classification of types of randomness

This prompts me to introduce a classification of my own with respect to causality, or dominance as I prefer to call it. In a causal chain there is a forward flow of ‘dominance’ from one event to the next and, if one connection is missing, the event chain terminates (though perhaps giving rise to a different causal chain by ricochet). An obvious example is a set of dominoes where each knocks over the next, but one domino is spaced out a bit more and so does not get toppled. A computer programme acts essentially in the same way: an initial act activates a sequential chain of events, and the chain terminates if the connection between two successive states is interrupted.
In the environmental case of the bobbing boat, we have a sequence of events, the movements of the boat, which do not  by themselves form an independent causal chain since each bob depends, not on the previous movement of the boat, but on the next incoming wave, i.e. depends on something outside itself. (In reality, of course, what actually goes on is more complex since, after each buffeting, the boat will be subject to a restoring force tending to bring it  back to equilibrium before it is once more thrown off in another direction, but I think the main point I am making still stands.)
In the statistical or Platonic case such as the decay of a radio-active substance or the development of the photographic image, we have a sequence of events which is neither causally linked within itself nor linked to any actual set of events in the exterior like the state of the ocean. What dictates the behaviour of the atoms is seemingly the eventual configuration (the decay of half, a quarter or all of the atoms) or rather the image or anticipation of this eventual configuration (Note 6).

So we have what might be called (1) forwards internal dominance; (2) sideways dominance; and (3) downwards dominance (from a Platonic event-schema).

Where does the chaotic case fit in? It is an anomaly since, although there is clear forwards internal dominance, it seems also to have a Platonic element and thus to be a mixture of (1) and (3).

Randomness within the basic schema of Ultimate Event Theory

Although the atomic theory goes back to the Greeks, Western science during the ‘classical’ era (16th to mid 19th century) took over certain key elements from Judaeo-Christianity, notably the idea of there being unalterable ‘laws of Nature’, and this notion has been retained even though modern science has dispensed with the lawgiver. An older theory, of which we find echoes in Genesis, views the ‘universe’ as passing from an original state of complete incoherence to the more or less ordered structure we experience today. In Greek and other mythologies the orderly cosmos emerges from an original kaos (from which our word ‘gas’ is derived) and the untamed forces of Nature are symbolized by the Titans and other monstrous creatures. These eventually give way to the Olympians who, significantly, control the world from above and do not participate in terrestrial existence. But the Titans, the ancient enemies of the gods, are not destroyed since they are immortal, only held in check, and there is the fear that at any moment they may break free. And there is perhaps also a hint that these forces of disruption (of randomness in effect) are necessary for the successful functioning of the universe.
Ultimate Event Theory reverts to this earlier schema (though this was not my intention) since there are broadly three phases (1) a period of total randomness (2) a period of determinism and (3) a period when a certain degree of randomness is re-introduced.
In Eventrics, the basic constituents of everything ─ everything physical at any rate ─  are what I call ‘ultimate events’ which are extremely brief and occupy a very small ‘space’ on the event Locality. I assume that originally all ultimate events are entirely random in the sense that they are disconnected from all other ultimate events and, partly for this reason, they disappear as soon as they happen and never recur. This is randomness in the causality sense but it implies the other senses as well. If all events are disconnected from each other, there can be no recognizable pattern and thus no means of predicting which event comes next.
So where do these events come from and how is it they manage to come into being at all? They emerge from an ‘Event Source’ which we may call ‘the Origin’ and which I sometimes refer to as K0 (as opposed to the physical universe which is K1). It is an important facet of the theory that there is only one source for everything that can and does occur. If one wants to employ the computer analogy, the Origin either is itself, or contains within itself, a random event generator and, since there is nothing else with which the Origin can interact and it does not itself have any starting conditions (since it has always existed), this generator can only be what Wolfram calls an inherent randomness generator. It is not, then, order and coherence that is the ‘natural’ state but rather the reverse: incoherence and discontinuity is the ‘default position’ as it were (Note 7).
Nonetheless, a few ultimate events eventually acquire ‘self-dominance’ which enables them to repeat indefinitely more or less identically and, in a few even rarer cases, some events manage to associate with other repeating events to form conglomerates.
This process is permanent and is still going on everywhere in the universe and will continue to go on at least for some time (though eventually all event-chains will terminate and return the ‘universe’ to the nothingness from which it originally came). Thus, if you like, ‘matter’ is being created all the time though at an incredibly slow rate just as it is in Hoyle’s Steady State model (Note 7).
Once ultimate events form conglomerates they cease to be random and are subject to ‘dominance’ from other sets of events and from previous occurrences of themselves. There will still, at this stage, be a certain unpredictability in the outcomes of these associations because determinism has not yet ousted randomness completely. Later still, certain particular associations of events become stabilized and give rise to ‘event-schemas’. These ‘event-schemas’ are not themselves made up of ultimate events and are not situated in the normal event Locality I call K1 (roughly what we understand by the physical universe). They are situated in a concocted secondary ‘universe’ which did not exist previously and which can be called K2. The reader may baulk at this but the procedure is really no different from the distinction that is currently made between the actual physical behaviour of bodies which exemplify physical laws (whether deterministic or statistical) and the laws themselves which are not in any sense part of the physical world. Theoretical physicists routinely speculate about other possible universes where the ‘laws’, or more usually the constants, “are different”, thus implying that these laws, or principles, are in some sense independent of what actually goes on. The distinction is somewhat similar to the distinction between genotype and phenotype and, in the last resort, it is the genotype that matters, not the phenotype.
Once certain event-schemas have been established, they are very difficult to modify : from now on they ‘dictate’ the behaviour of actual systems of events. There are thus already three quite different categories of events (1) those that emerge directly from the Origin and are strictly random; (2) those that are brought about by previously occurring physical events and (3) events that are dependent on event-schemas rather than on other individual events.
So far, then, everything has become progressively more determined though evolving from an original state of randomness somewhat akin to the Greek kaos or the Hebrew tohu va-vohu, the original state when the Earth was “without form and void and darkness was upon the face of the deep”.
The advent of intelligent beings introduces a new element since such  beings can, or believe they can, impose their own will on events, but this issue will not be discussed here. Whether an outcome is the result of a deliberate act or the mere product of circumstances is an issue that vitally concerns juries but has no real bearing on the determinist/indeterminist dilemma.
Macroscopic events are conglomerates of ultimate events and one might suppose that if the constituent events are completely determined, it follows that so are they. This is what contemporary reductionists actually believe, or at least preach, and, within a materialist world-view, it is difficult to avoid some such conclusion. But, according to the Event Paradigm, physical reality is not a continuum but a complicated mosaic where in general blocks of events fit together neatly into interlocking causal chains and clusters. The engineering is, however, perhaps not quite faultless, and there are occasional mismatches and irregularities, much as there are ‘errors’ in the transcription of DNA ─ indeed, genetic mutations are the most obvious example of the more general phenomenon of random ‘connecting errors’. And it is this possibility that allows for the reintroduction of randomness into an increasingly deterministic universe.
Despite the subatomic indeterminacy due to Quantum Mechanics, contemporary science nonetheless in practice gives us a world that is very nearly as predictable as the Newtonian one, and in certain respects more so. But human experience keeps turning up events that do not fit our rational expectations at all: people act ‘completely out of character’, ‘as if they were someone else’, regimes collapse for no apparent reason, wars break out where they are least expected and so on. This is currently attributed to the complexity of the systems involved but there may be a deeper reason. There remains an obstinate tendency for events not to ‘keep to the book’ and one suspects that Taleb’s profound conviction that the future is unpredictable, and the tremendous welcome this idea has received from the public, are based on an intuitive awareness that a certain type of randomness is hard-wired into the normal functioning of the universe. Why is it there, supposing that it really is there? For the same sort of reason that there are persistent random errors in the transcription of the genetic code: it is a productive procedure that works in the long run by turning up possibilities that no one could possibly have planned or worked for. One hesitates to say that this randomness is deliberately put there but it is not a wholly accidental feature either: it is perhaps best conceived as a self-generated controlling mechanism that is reintroducing randomness as a means of propelling the system forward into a higher level of order, though quite what this will be is anyone’s guess.      SH  28/2/13

Note 1  Charles Sanders Peirce, who inaugurated this particular definition, did not speak of ‘random events’ but restricted himself to discussing the much more tractable (but also much more academic) issue of taking a random sample. He defined this as one “taken according to a precept or method which, being applied over and over again indefinitely, would in the long run result in the drawing of any one of a set of instances as often as any other set of the same number”.

Note 2  Take a simple example. One might at first sight think that a square number could end with any digit whatsoever just as a throw of a dice could produce any one of the possible six outcomes. But glancing casually through a list of smallish square numbers one notes that every one seems to be either a multiple of 5 like 25, one less than a multiple of 5 like 49 or one more than a multiple of 5 like 81. We could (1) dismiss this as a fluke, (2) simply take it as a fact of life and leave it at that or (3) suspect there is  a hidden principle at work which is worth bringing into the light of day.
In this particular case, it is not difficult to establish that the pattern is no accident and will repeat indefinitely. This is so because, in the language of congruences, the square of a number that is ±1 (mod 5) is +1 (mod 5), the square of a number that is ±2 (mod 5) is –1 (mod 5), and the square of a multiple of 5 is 0 (mod 5). This covers all possibilities, so we never get squares that are two units less or two units more than a multiple of five.
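The congruence argument can also be checked exhaustively by machine, since residues mod 5 repeat with period 5:

```python
# Check the claim of Note 2: every square is congruent to 0, +1 or -1
# (mod 5), i.e. a square is always a multiple of 5, one more than a
# multiple of 5, or one less -- never two more or two less.
residues = {(n * n) % 5 for n in range(1000)}
print(sorted(residues))

# 4 is -1 (mod 5), so the only residues are 0, +1 and -1.
assert residues == {0, 1, 4}
```

The range checked is arbitrary; the congruence argument above shows the pattern continues forever.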

Note 3  Laplace, a born survivor who lived through the French Revolution, the Napoleonic era and the Bourbon Restoration, was careful to restrict his professed belief in total determinism to physical (non-human) events. But clearly, there was no compelling reason to do this except the pragmatic one of keeping out of trouble with the authorities. More audacious thinkers such as Hobbes and La Mettrie, the author of the pamphlet L’Homme Machine, both found themselves obliged to go into exile during their lives and were vilified far and wide as ‘atheists’. Nineteenth-century scientists and rationalists either avoided the topic as too contentious or, following Descartes, made a hard and fast distinction between human beings, who possessed free will, and the rest of Nature, whose behaviour was entirely reducible to the ‘laws of physics’ and thus entirely predictable, at any rate in theory.

Note 4 The current  notion of the ‘laws of physics’ is also, of course, an entirely  Platonic conception since these laws are not  in any sense physical entities and are only deducible by their presumed effects.
Plato definitely struck gold with his notion of a transcendent reality of which the physical world is an imperfect copy since this is still the overriding paradigm in the mathematical sciences. If we did not have the yardstick of, for example, the behaviour of an ‘ideal gas’ (one that obeys Boyle’s Law exactly) we could hardly do any chemistry at all ─ but, in reality, as everyone knows, no gas actually does behave like this exactly hence the eminently Platonic term ‘ideal gas’.
Where Plato went wrong, as far as I am concerned, was in visualizing his ‘Forms’ strictly in terms of the higher mathematics of his day, which was Euclidean geometry. I view them as ‘event-schemas’ since events, and not geometric shapes, are the building-blocks of reality in my theory. Plato was also mistaken in thinking these ‘Ideas’ were fixed once and for all. I believe that the majority ─ though perhaps not all ─ of the basic event-schemas operative in the physical universe were built up piecemeal, evolve over time and are periodically displaced by somewhat different event-schemas, much as species are.

Note 5. Because of the interest in chaos theory and the famous ‘butterfly effect’, some people seem to conclude that any slight perturbation is likely to have enormous consequences. If this really were the case, life would be intolerable. In ‘normal’ systems tinkering around with the starting conditions makes virtually no difference at all and every ‘run’, apart from maybe the first few events, ends up more or less the same. Each time you start your car in the morning it is in a different physical state from yesterday if only because of the ambient temperature. But, after perhaps some variation, provided the weather is not too extreme, the car’s behaviour settles down into the usual routine. If a working machine behaved  ‘chaotically’ it would be useless since it could not be relied on to perform in the same way from one day to the next, even from one hour to the next.

Note 6  Some people seem prepared to accept ‘backwards causation’, i.e. that a future event can somehow dictate what leads up to it, but I find this totally unacceptable. I deliberately exclude this possibility in the basic Axioms of Ultimate Event Theory by stating that “only an event that has occurrence on the Locality can have dominance over other events”. And the final configuration certainly does not have occurrence on the Locality ─ or at any rate the same part of the Locality as actual events ─ until it actually occurs!

Note 7   Readers may be shocked at there being no mention of the Big Bang. But although I certainly believe in the reality of the Big Bang, it does not at all follow from any of the fundamental assumptions of Ultimate Event Theory and it would be dishonest of me to pretend it did. When I first started thinking along these lines, Hoyle’s conceptually attractive Steady State Theory was not entirely discredited, though even then very much on the way out. The only way I can envisage the Big Bang is as a kind of cataclysmic ‘phase-transition’, presumably preceded by a long slow build-up. If we accept the possibility of there being multiple universes, the Big Bang is not quite such a rare or shocking event after all : maybe, when all is said and done, it is a cosmic ‘storm in a teacup’.

(Note: This is not a review of the best-selling book, The Black Swan, The Impact of the Highly Improbable, by Nassim Nicholas Taleb : I shall merely discuss its relevance to the practical side of ‘Eventrics’.  S.H.)

The chief drawback of this otherwise very interesting and insightful book, The Black Swan, is that it is too negative. It tends to focus on catastrophic Black Swan events and argues that such events are strictly unpredictable, inherently so, not just because we lack the necessary information or computing power. In my own lifetime I have witnessed incredible turnarounds that strictly no one saw coming : the sudden collapse of communism in Eastern Europe, the advent of the Internet, 9/11, the financial meltdown of 2008, the sudden emergence of China as the 21st century’s superpower, the list is endless. I was personally a witness of the May 1968 ‘Student Revolution’, when a scuffle between students and the police at the Sorbonne rapidly led on to a general collapse of law and order and the longest General Strike in a Western country during the 20th century. The amazing thing was that all the Left (and Right) political groups and parties didn’t know whether they were coming or going : this was an entirely unforeseen and above all spontaneous movement emerging from nowhere (Note 1).
For these and other reasons, I had no difficulty in agreeing with Taleb’s main thesis that history moves by jumps, not small steps, and that the big jumps are caused by events few people, if any, predicted, by what he calls ‘Black Swan events’. (To recap : a Black Swan event is an event that is rare, sudden, unexpected and has extreme impact.)
But what about one’s personal life? Can anything be done about Black Swan events, the unpredictables of life, apart from getting out of the way when they start looming up and you don’t like the look of them ? That Taleb thinks something can be done is shown by the uncharacteristic aside made on p. 206 “As a matter of fact, I suspect that the most successful businesses are precisely those that know how to work round inherent unpredictability and even exploit it“.  I entirely concur with this, except that I would remove the word ‘even‘.
In the active professions (business, warfare, invention, living by your wits, staying alive when you should be dead &c.), it is essential not only to fully recognize the role of the unexpected but to be prepared to turn it to one’s (apparent or real) advantage. This is an extremely difficult skill that may need a lifetime of practice but it is worth learning because it can lead to outcomes that otherwise would be unthinkable  — this is why it is called ‘Not-Doing’ in the Tao Te Ching, to distinguish it from ‘Doing’ which requires the use of force and/or intellect.
The “modest tricks” (Taleb’s term) that Taleb has gleaned from his life as an option trader and sceptical observer of humanity are given on pp. 206-11 of his book, The Black Swan (Penguin edition). The key principle, which may be called Taleb’s Wager (derived from Pascal’s rather dubious ‘wager’ concerning the existence of God), goes as follows :
“I will use statistical and inductive methods to make aggressive bets, but I will not use them to manage my risks and exposures.”
Taleb, Fooled by Randomness p. 130

I am not sure that this principle quite follows from the author’s basic principles; it sounds more like a ‘rule of thumb’, the sort of thing Taleb in other contexts tends to look down on, since he has little time for instinct and ‘gut reactions’. But the logical argument seems to go something like this :
“It makes sense to use conventional wisdom when calculating likely positive outcomes because the conventional economic wisdom works (up to a point) if we entirely disregard the possibility of potent unexpected events, Black Swans. Now, if a Black Swan event is fortunate (for us) we don’t need to take it into account because it will happen when and if it will happen : all we need to bother about is the everyday events which are, up to a point, predictable. But the reverse applies to an unfortunate Black Swan : we can’t stop it, no one can say when and if it will strike, so the best thing to do is cover ourselves against such an occurrence and completely disregard received opinion in the matter because it almost always discounts such events.”
The ensuing ‘life-strategy’ is to ‘make oneself available to fortunate Black Swans’ while ‘covering one’s defences against unfortunate ones’, e.g. by having a Plan B, not putting all one’s eggs in one basket and so on. Taleb claims that “all the surviving traders I know seem to have done the same… They make sure that the costs of being wrong are limited (and their probability [i.e. the probability of the unfortunate Black Swan events S.H.] is not derived from past data)” (op. cit.).
Taleb’s Wager seems to have worked in his particular case. He tells us at one point, without giving the details, that he made a substantial amount of money during the shock 1987 Wall Street ‘Black Monday’ crash and he has also succeeded in publishing a  book which turned out to be a best-seller without any prior literary credentials — no small feat given the exclusiveness and current jitteriness of the  publishing industry.
    How do you get yourself into a position of ‘maximum exposure’ to positive Black Swans ? Well, one way is via chance encounters in bars, which is how James Dean and Rock Hudson got ‘spotted’ (Rock Hudson was a truck-driver at the time). Consequently, according to Taleb, it is advantageous to live in (or at least assiduously frequent) a big city because serendipitous chance encounters are much more likely to happen there. It also pays to ‘go out’ : as he remarks, diplomats, a fortiori spies, know the rich spoils to be had from hosting or attending parties. But even more bread-and-butter professionals should take note : “If you’re a scientist, you will chance upon a remark that might spark new research”. Learned societies, including the Royal Society itself, were originally informal get-togethers of enthusiastic amateurs and often took place in inns; Paris owed its central cultural (and political) position for two whole centuries not so much to its progressive educational system as to its unique cafe ambiance. You could still see Sartre sipping coffee at Les Deux Magots on occasion when I first hit the Boulevard Saint-Germain, and Cocteau recounts meeting at La Rotonde, a cafe I used to frequent, a funny little man with a pointed beard who, when asked what he wanted to do in life, replied, to general hilarity, that he was trying to bring down the Russian government — it was Lenin.
A point not mentioned, I think, by Taleb is that a negative Black Swan, if it doesn’t completely finish you off, can morph into a positive one : losing a battle could make you seriously revise a defective strategy; dismally failing to make a go of it as a commercial traveller might propel you into a less lucrative but much more satisfying profession. One might even hazard the guess that a Black Swan turned on its head, so to speak, is more effective than a straightforward fortunate Black Swan. Steve Jobs lost the fight with Bill Gates over PCs, but this prompted him to move into mobile phones, iPods and so on : the result is that Apple is, so I have been told, currently rated as an even bigger, more lucrative company than Microsoft. Hitler transformed the complete fiasco of the Beer Hall Putsch into a resounding success, using his appearance in court as a way to broadcast his poisonous views to the nation, and it is said that it was to prevent this happening again that the Marines were told not to take Bin Laden alive.
Another useful tip from Taleb is not to be too precise about the sort of positive Black Swan you’re looking for. Since Black Swan events are by definition unexpected, they will appear in unexpected disguises — even, or above all, to those who are out hunting for them.
Certain other pieces of advice, especially those relating to probability and ‘rational decision-making’, I find a good deal less useful : they may be of value on Wall Street but not in the sort of places I’m used to frequenting. The entire apparatus of traditional logic, ‘straight thinking’, probability theory, even mathematics, is almost completely irrelevant to the hurly-burly of ‘real life’, which is one reason why so many people with little formal education, e.g. Edison, Bill Gates and Richard Branson, have been spectacularly successful in business. Mathematics creates a (virtually) foolproof little world closed off from the exterior : this is its strength, sometimes its beauty, also its hopeless limitation. In real life, you generally have totally inadequate, even untrustworthy data, and there is no time to fit the data to equations, no time to quantify what you’ve got in front of you. You have to make quick qualitative decisions — exactly what mathematicians and logicians spurn — sign or don’t sign that document, fight or flee if you’re attacked in the street. ‘Rules of thumb’ based on experience are a good deal more use out in the real world than training in formal logic. Amusingly, someone I knew who worked in information technology told me that his firm does not welcome mathematicians, is indeed rather wary of them. The reason is not hard to guess : used as they are to perfectly well set-up situations, they are flummoxed by the unexpected and are no better at everyday decision-making than other people, often worse.
But betting on your own life’s best option is completely different to betting on the Stock Exchange. Why? For a start (as Taleb mentions in passing), in most professions you pay for your bad business decisions because the money’s your own, and this clarifies the mind (or alternatively destabilizes it). Traders don’t, the big fish anyway, since even if they fail lamentably, they exit with golden handshakes. But, even laying this aside, there are several other differences. On the Stock Exchange, a single action of a single individual, unless he is Warren Buffett or George Soros, will not have much effect; however, in one’s own personal life, a single decision at a decisive point may count for more than years of effort. Retrospectively — though usually not prospectively — one sees certain key choices sticking out like signposts. Again, there is in real life no objective standard, no Moody’s publishing objective commercial ratings, since one man’s wine may be another man’s poison.
Real life Black Swan situations also have a complication which seemingly does not apply to the Stock Exchange or the Board Meeting (though maybe it does after all) : at the beginning it is almost impossible to distinguish between a very favourable and a very unfavourable occasion, a negative or a positive Black Swan. Not only do you have to learn to deal with the unexpected, but you must also learn to cope with not knowing into which category the event cluster you find yourself committed to falls. Is the charming man or woman you have just met, and whom you feel you know so well already after only twenty minutes, going to be the person who will waft you out of obscurity to fame and fortune (if that’s what you want), or maybe tell you an important secret about the meaning of life? Or is he/she simply a confidence trickster or, which is almost as bad, someone who is going to put you, almost deliberately, on the wrong track ? In the fascinating but lethal hothouse habitat of big cities, it pays to hone your ability to sum people up quickly and accurately : Richard Branson is on record as saying that he sums up a potential customer in the first two minutes and has rarely had reason to regret his verdict. It is also important, in many present-day chance encounters, to be ready to run away if and when things turn nasty (fleeing is usually safer than fighting).
Taleb does mention an important defence stratagem : setting in advance for yourself a “cut-off point”, the moment when you will stop lending someone more money, (or stop asking someone for more and so lose him as a friend, which is more difficult to practice). “[In trading circles] this is called a ‘stop loss’, a predetermined exit point, a protection from the black swan. I find it rarely practised.” (Taleb, Fooled by Randomness p. 131).
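Taleb gives no code, of course, but the ‘stop loss’ idea is mechanical enough to sketch. A minimal Python illustration (mine, with a hypothetical position and an arbitrary 10% threshold) of a predetermined exit point:

```python
def stop_loss_return(entry_price, later_prices, stop_fraction=0.10):
    """Exit at the first price that falls to stop_fraction (or more)
    below the entry price; otherwise hold to the end.
    Returns the realised fractional gain or loss."""
    stop_level = entry_price * (1 - stop_fraction)
    for price in later_prices:
        if price <= stop_level:            # the predetermined exit point
            return price / entry_price - 1
    return later_prices[-1] / entry_price - 1

# A crash wipes out 60% of the asset's value, but the rule exits early,
# so the realised loss stays close to the chosen 10% threshold:
print(stop_loss_return(100, [105, 88, 60, 40]))
```

The point of the rule is that the downside of any single commitment is decided in advance, before the negative Black Swan arrives, rather than estimated from past data once it is already under way.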
Personal human situations are, anyway, very different from the complex physical systems such as the weather studied by chaos theory and complexity theory : there is a further layer of complexity added on since human beings are at one and the same time players and  observers, are inside the game and on the side-lines. They can in theory, if not in practice, “learn by their mistakes”. This happens in nature as well, of course, but the time-scale is rather longer — for a species it might be millions of years.
I disagree with Taleb’s strictures against what he calls the ‘narrative fallacy’, the tendency of human beings to jump to the conclusion that “where there is pattern, there is significance”. This faculty doubtless has deep evolutionary origins : it goes back to the days when it was essential to interpret rapidly scarcely perceptible visual or auditory patterns which might well betray the proximity of a predator. Even on the cultural/intellectual level, pattern interpretation on the grand scale, though fraught with danger, has been incredibly productive even when it has turned out to be quite misguided : scarcely anyone today believes in Plato’s, on the face of it, fantastic doctrine of Eternal Forms, but the approach has been extremely useful in the development of science — the very concept of an ‘ideal gas’ is thoroughly Platonic.
In any case, from the point of view of Ultimate Event Theory, even ‘meaningless’ transitory patterns are significant since they are the result of ephemeral associations and dissociations of events : the point is not whether these patterns are ‘spurious’ or ‘real’ — everything that has occurrence is real — but whether they persist or not. Mandelbrot, who, like Taleb, warns against seeing significant patterns in financial price shifts, says more than he realizes when he remarks that such changes “can be persistent, meaning that they reinforce each other; a trend once started tends to keep going. (…) Or they can be anti-persistent, meaning they contradict each other; a trend once begun is likely to reverse itself” (Mandelbrot, The (Mis)Behaviour of Markets p. 245). From my point of view, what this shows is that there are varying hidden forces of consolidation and dispersion at work, not unlike the tiny electro-magnetic forces between molecules in a gas (Van der Waals forces). It is strange that, although many writers are quite prepared to view the ‘market’ as a ‘living system’ not unlike a bacterial colony, no one seems prepared to view the components of such a system, events, as in some sense ‘alive’. But it is these myriad independent events, decisions, forecasts, sales &c. which direct everything and which, when and if they come together and act in unison, cause a boom or a crash.
A more serious limitation of Taleb’s nonetheless excellent book is that he tends to view human beings as essentially passive victims of the unforeseen rather than as deliberate activators of change. You do not have to just sit waiting for a fortunate event to happen : you can sometimes deliberately put yourself in the way of a likely Black Swan event or even manufacture one deliberately, a technique which I call ‘Doing the Opposite’. If you are naturally an orderly, rational sort of person, do something wild and completely out of character, like the woman who, after working in the City for many years, rowed single-handed across the Atlantic (Note 2) ; if you are naturally a spontaneous and romantic person, enrol for a course in calculus or Business Studies, you might even find you enjoy it. Such an unexpected course of action administers a severe shock to the system and, if it recovers (which it usually does), you will find yourself thoroughly invigorated.
Do I practice what I preach on this last point ? Pretty much. Last year, to the stupefaction of everyone who knows me, including myself, I suddenly decided to buy a ticket I couldn’t really afford, to travel for the first time to a country that has always repulsed me (America), using a form of transport I disapprove of (air), in order to engage in an activity I detest (trying to flog some of my work to strangers) in an environment I’d been warned I would absolutely loathe (Hollywood). I didn’t land an option with Warner Brothers but I view this decision as one of the most fruitful I’ve ever made in my life, simply for the experience, and I even plan on going back for more punishment this year. (Actually, contrary to their reputation, I found almost everyone I met in LA charming and parts of downtown Los Angeles hauntingly beautiful, chock-a-block with the most incredible Art Deco buildings slowly crumbling into the dust, the whole area suffused with the faded glamour of the lost Golden Era of Los Angeles, the Twenties.)   S.H.

Notes :  (1) My reminiscences of these events, under the title Le Temps des Cerises : May ’68 and Aftermaths, have been published in The Raven, Anarchist Quarterly, Vol. 10 Number 2 Summer 1998

(2) “In 2005 Roz Savage became the first solo woman to compete in the Atlantic Rowing Race. She set out from the Canaries to row 3,000 miles alone and unsupported, and eventually arrived in Antigua after 103 days alone at sea.
What really intrigued me about her story was the process that led to her embarking on this extraordinary adventure. (…) After University, she followed a typical career path, working her way up the corporate ladder, firstly as a Management Consultant and then moving on to be an Investment Banker. (…) In her early thirties, she started to get a niggling feeling that something was missing and that perhaps there was ‘more to life than this’ ”
From 52 Ways to Handle It, by Annabel Sutton (Neal’s Yard Press 2007)

S.H.

(Note: This is not a review of the best-selling book, The Black Swan, The Impact of the Highly Improbable, by Nassim Nicholas Taleb : I shall merely discuss its relevance to the scientific study of events and their interactions, what I call ‘Eventrics’. S.H.)

Before Westerners colonised Australia, it was assumed that all swans were white. However, it is now known that a few (very few) are black. The term ‘Black Swan’, though a brilliant choice of name for an important phenomenon, is somewhat misleading. For Taleb is not concerned with unusual birds but exclusively with unusual events. Moreover, although the term could, by extension, be applied to any kind of event, Taleb seems to be entirely concerned with human events, either general historical ones like wars and financial crashes, or, on an individual level, serendipitous circumstances leading to an important discovery like penicillin, or a chance encounter in a bar leading to a career change.
Taleb lists three characteristics of Black Swan events : “rarity, extreme impact, retrospective (though not prospective) predictability” (Note 1). These are best broken down into four : rarity, unpredictability, extreme impact, subsequent plausibility. Taleb rightly emphasizes how ready commentators and historians are to list the causes of, say, the outbreak of WWI or the 1929 Wall Street Crash after the event, but how slow people living at the time were to even realise that anything special was about to happen. Human blindness and inveterate bias is an important issue but it is not my main concern : I wish to determine whether there are any discernible ‘general principles’ controlling the occurrence of events. Nonetheless, the fact that a reasonable attempt can be made at tracing the steps leading up to a devastating crisis like the outbreak of WWI or the 1929 and 2008 Wall Street crashes shows that a Black Swan event is not only possible — because it took place — but can be made to appear plausible, even inevitable (after the event). Had the event under scrutiny been totally fantastic, historians would have had a much harder job trying to trace such an event’s antecedents, and, unless it was a very well verified and important event, they would have been very happy to ignore it completely. (Scientists and rationalists do this all the time when confronted with aberrant, apparently ‘psychic’ phenomena, on the principle that what they cannot explain causally cannot exist and, if it does, is best left well alone.)
The retrospective ‘plausibility’ of a Black Swan event is something to be borne in mind for future reference, but the really important features of a Black Swan event are (1) unpredictability and (2) extreme impact, with rarity coming very much in third place (since Black Swan events turn out not to be so rare as all that) (Note 2). Nassim Taleb has in effect divided (human and historical) events into two classes, one comprising ‘ordinary events’, those taking place in Mediocristan as he puts it, and one containing Black Swan events — those taking place in Extremistan. The difference between these two classes is not so much the rarity of Black Swan events as (1) their apparent lack of causal antecedents combined with (2) their sudden appearance and (3) their colossal consequences.
‘Normal’ human event-chains proceed by small increments that can be roughly figured out in advance : a nation becomes steadily wealthier as new markets open up or alternatively poorer as sources of valuable raw materials become exhausted, an individual gets promoted every five years until he retires and so on. Each step is defined by the previous one and such progress, if it is modelled mathematically, gives a so-called ‘continuous’ function. In the terms of Eventrics, there is a steady flow of what I call ‘dominance’ from one macroscopic event to the next — dominance, to be explained in due course, is roughly  causality viewed as a perfectly real ‘force’.
Black Swan events are not like this at all : they seem to appear from nowhere, strike like a ‘bolt out of the blue’ (a significant expression) and have consequences that may last centuries in the case of historical turning-points, or a lifetime in the case of personal reversals of fortune. As an example of historical Black Swans, take the rise of Islam, a movement that swept through half the known world after emerging from an insignificant, arid, scarcely civilized region no one bothered about, or, even more incredibly, the largest land Empire the world had yet seen emerging like a volcano from the steppes of Outer Mongolia. More recently, we have had 9/11, the financial crisis of 2008, the sudden appearance of China as the twenty-first century’s super-power when not so long ago we had people in laundromat suits reading the Little Red Book and collecting scrap iron for “backyard steel foundries” (Mao’s own phrase). The examples are endless : it is actually more difficult to find examples of ‘ordinary’, predictable events that have turned out to be historical turning points, though there are one or two.
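Taleb’s Mediocristan/Extremistan contrast can be illustrated numerically (the sketch is mine, not his, and the distributions chosen are merely conventional stand-ins) : in a thin-tailed sample, such as a Gaussian, no single observation matters much to the total, whereas in a fat-tailed sample one extreme draw, the statistical analogue of a Black Swan, can dwarf everything else.

```python
import random

random.seed(42)
N = 100_000

# 'Mediocristan': thin-tailed Gaussian draws (think human heights)
gauss = [abs(random.gauss(0, 1)) for _ in range(N)]

# 'Extremistan': fat-tailed Pareto draws, tail index 1.1 (think wealth)
pareto = [random.paretovariate(1.1) for _ in range(N)]

for name, sample in (("Mediocristan", gauss), ("Extremistan", pareto)):
    # share of the whole total accounted for by the single largest draw
    share = max(sample) / sum(sample)
    print(f"{name:>12}: largest single observation = {share:.2%} of total")
```

In the Gaussian sample the largest of 100,000 draws is a vanishing fraction of the sum; in the Pareto sample a single draw routinely accounts for a sizeable share of it, which is exactly the ‘history moves by jumps’ picture.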
As for personal life : “Look into your own existence. Count the significant events, the technological changes, and the inventions that have taken place in our environment since you were born and compare them to what was expected before their advent. How many of them came on schedule ? Look into your own personal life… How often did these things [important changes] occur according to plan?” (The Black Swan p. xix)
I would certainly concur with Taleb when he says that history is discontinuous (“History and societies do not crawl. They make jumps” (p. 11)), that history is dominated by Black Swan events and that such events seem to be on the increase (“The modern world being Extremistan, is dominated by rare — very rare — events“(p. 61) (Note 3).
Two questions immediately spring to mind : (1) Are Black Swan events truly random, or only unpredictable because of insufficient data? (2) Is there a ‘hidden pattern’ to Black Swan events, a driving force concealed from our vision ?
On the first point, Taleb is adamant that Black Swan events are truly random,  unpredictable through and through, not just because our instruments are too coarse or our information inadequate. He makes this an article of faith, not to say an obsession, and, in a sense, he ought to know what he is talking about since he has spent much of his adult life as a trader.  He is careful to distinguish true indeterminism even from the sort of randomness we associate with Chaos Theory since the latter somehow combines strict determinism with unpredictability (amongst other reasons, because the systems studied in Chaos Theory are very sensitive to the initial conditions). What do I think? I remain open minded  for the moment : though there is no doubt in my mind that Black Swan ‘causality’, if it exists at all, is totally different from the usual “Event A brings about event B” sort of causality. Black Swan events do not seem to originate in the everyday world, as if there were some sort of underground well of events which occasionally forces itself through to the surface, spewing out molten lava.
So, what of ‘hidden patterns’, ‘hidden historical sources’? Taleb, as a systematic Sceptical Rationalist, gives any such ideas short shrift : those who think they can read the writing on the wall, people like Plato, Hegel, Marx, Spengler et al., are deluding themselves — and doubtless he would say the same about my own humble attempts in the same direction. Few people living after the 2008 financial crash would quarrel with Taleb’s scathing attacks on economic experts, all of whom, despite their sophisticated mathematical and computer telescopes, somehow managed not to see the hurricane that was blowing their way. Indeed, I would go further than Taleb (who sensibly suggests that the Nobel Prize for Economics be abolished) by also suggesting that Economics Faculties be closed down world-wide, especially the London School of Economics. However, market analysis, like weather forecasting, is a notoriously complex area and the ‘experts’ are not so much to blame for getting it wrong as for arrogantly refusing to admit that they did get it wrong. Weather forecasters have in fact been challenged and have mended their ways : after getting long-term (a month or more) weather forecasting palpably wrong they have (in the UK) decided to restrict themselves to short-term forecasting, where they do a good job.
But there is certainly no need to abandon the whole principle of drawing inferences from previous instances — which Taleb tends to rubbish as ‘tunneling’. The adage, “Where there is pattern, there is significance”, though it can at times mislead, has underpinned almost all of mankind’s greatest intellectual and cultural developments, whereas nothing great has ever been achieved by doubt — certainly not the great scientific discoveries such as gravitation and evolution which were wild generalisations from very inadequate evidence. The human mind and psyche does not work well with negatives : scepticism is a useful restraining force, a counter weight, but no more than that.
Taleb resuscitates the ‘Turkey Paradox’ (originally the Chicken Paradox) invented by Bertrand Russell, a man who had a deeply destructive influence on modern logic and mathematics. The ‘paradox’ goes something like this. A turkey, well-fed and well-protected by its owner for a good period of time, say forty days, concludes (or would conclude if it could reason) that, on the basis of the past record, it is going to continue to be well-fed and well-protected indefinitely. On Day 41 the turkey is slaughtered since it is Thanksgiving Day. “Jumping to conclusions!”, “Unsound inference!”, “Incomplete induction!” scream Hume and Russell and all the rest of the rationalist posse. Not really. On Day 41, unbeknown to the turkey, the conditions of the problem have dramatically changed and that is why the forecast, based on outmoded conditions, turns out to be wrong. The lesson to draw is not that ‘incomplete induction is risky’ (we know that already) but rather that one should make sure that the conditions have not suddenly changed in a way that is not at first sight evident. Neither Hume nor Russell nor Popper was a practising scientist, businessman or military commander — and they would have failed in all three areas. The adroit general or businessman knows that there will never be enough data relating to economic conditions or the movements of enemy troops; he thus learns to make rapid conclusions on the basis of thoroughly inadequate data : a skill that can only really be learned by practice in the market place or on the battlefield. In terms of the Turkey Problem, one tries to guess whether the conditions have remained the same or not and one develops a certain flair or scent for this — Taleb himself, who is dismissive of the ability of most millionaires, recognizes that Soros had this.
Science is a somewhat different case. The very existence of science rests on the unprovable (and conceivably mistaken) assumption that identical conditions produce identical results (Note 3). If we did not believe this, we would not trouble ourselves to repeat a scientific test : the very same experiment carried out in a different laboratory on a different day might well give completely different results. The ancient Greeks, likewise the Chinese, did not make this assumption, which is one reason why neither quite managed to develop natural science as we understand it today, though they came very close. The Greeks, under the influence of Plato, considered that the sublunary world was subject to random influences, ‘chaotic’ in our terms, and thus could not be reduced to behaviour governed by a handful of ‘physical laws’. Heavenly bodies were different : they were regular in their movements. The result was a very advanced astronomy and mathematics but a somewhat defective mechanics despite Archimedes (no science of movement, dynamics). We have extended the application of ‘natural law’ to the entire universe : “One universe, one set of laws”. This enormous intellectual gamble seems to have paid off by and large, though it has been found necessary to exclude the inflationary period of the early universe, and there are some indications that the basic physical ‘constants’ have changed over time. The “one universe, one set of laws” assumption is by no means self-evident : it is justified, not by its inherent plausibility, since it is not plausible, but solely by its results — “One judges a tree by its fruits”. Similarly, the adage “Where there is pattern, there is significance” — which Taleb castigates as the ‘narrative fallacy’ — has, by and large, served humanity very well and will continue to do so despite all the dreadful cautionary tales of positivists and sceptics.
You learn to distinguish between the important and unimportant patterns, false prophecies and true, on the job, not in the study, and in some cases, errors of judgement can be as fruitful as genuine discoveries, witness the wild guesses of intuitive mathematicians such as Fermat and Ramanujan (Note 4).
So, can any reasonable conclusions be drawn from Taleb’s discussion of Black Swan events, other than “Take care how you go”? I think, yes, and Taleb, even though he categorically declined to provide specific forecasts on repeated occasions when interviewed, does as a matter of fact sneak in one or two predictions — remarkably (since the book was written before the 2008 Wall Street crash) he says that “the government-sponsored institution Fannie Mae, seems to be sitting on a barrel of dynamite” (Note 5).
If we consider large historical Black Swans like the two world wars of the 20th century, we see that there is a family resemblance — the same big European powers opposed each other on much the same terrain — but the second Black Swan was not an identical repeat, indeed with respect to tactics it was almost exactly the reverse. WWI was a trench war that lasted nearly four years; WWII, as far as France was concerned, was a Blitzkrieg that lasted a few weeks. Likewise, the 2008 financial meltdown has a family resemblance to 1929 in America but is not an identical repeat. One can, from this and other evidence, hazard a good guess that successive large-scale Black Swans are never identical repeats : the preparations made for the ‘next’ Black Swan (the Maginot Line) tend to be quite useless, indeed counter-productive, since they draw attention away from the coming danger, and thus increase its eventual effects. This, combined with the very plausible guess that Black Swan events are becoming more and more frequent, allows one to hazard certain guesses about the 21st century.
Taleb remarks shrewdly that the supposedly ‘safe bets’, in the case of both banks and regimes, may well be the most dangerous of all : not so long ago Iceland was considered a pretty safe country to invest in, and standard economic wisdom says that government bonds are the safest of all investments, even though it is now whole countries, including Spain and Italy, that are on the verge of defaulting. Taleb says at one point that apparently ‘safe’ (but oppressive) regimes may be the very ones to fall, on the principle that the taller you are, the harder you fall. He mentions as examples Syria and Saudi Arabia and, since his book was written long before the Arab Spring, this is pretty good going.
I will myself stick my neck out and hazard a few guesses. It may well be that one of the affluent Arab Middle Eastern kingdoms, which no one pays much attention to at the moment, will precipitate the next financial crisis — as nearly happened with Abu Dhabi. I also predict that the natural resource most likely to precipitate war and economic turmoil is not oil, which everyone is bothered about, but water (Note 6).
What about science and technology? It seems clear that the leading science of the 21st century will be biology; but biology, though something of a Black Swan science when DNA was discovered, is hardly an outlier at the moment — it is, on the contrary, advancing steadily year by year, the difference being that the steps it takes are getting larger. Traditional mathematics — or rather modern traditional mathematics — will decline in importance and (hopefully) be largely replaced by more flexible modelling such as computer simulations, cellular automata, ‘evolutionary computing’ and so forth. The trouble with current mathematics is that it is fixed, inert, capable of modelling change and motion (up to a point) but, by definition, incapable of growing from within itself. Everything unexpected is kept outside the hallowed domain, which is at once the source of mathematics’ strength and elegance and its hopeless limitation. In the turbulent environment of today, we need symbolic systems that do not exclude the random and the uncontrollable, the source of evolutionary innovation, but, on the contrary, welcome it into the system while nonetheless keeping it under control — “The price of freedom is endless vigilance”. We want computer programmes that advance by trial and error and take initiatives themselves : strangely enough, since writing these lines yesterday I have come across two articles in back issues of the New Scientist dealing with this very issue (Note 7). As for theoretical physics, it looks like it will continue cascading into total unintelligibility until the basic concepts are rigorously re-examined and a radically new outlook emerges. Enough of all that, the rest is history.
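As a tiny illustration of the kind of flexible modelling meant here, the sketch below runs Wolfram’s elementary cellular automaton Rule 30 — a completely fixed local rule that nonetheless generates an apparently random, unpredictable pattern from a single live cell. The grid width and number of steps are arbitrary choices of mine.

```python
# Elementary cellular automaton Rule 30: each cell's next state is read
# off from the rule number, indexed by the (left, centre, right) triple.

RULE = 30
WIDTH = 31
STEPS = 15

def step(cells, rule=RULE):
    """Apply the elementary-CA rule to every cell (dead cells at the edges)."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # single live cell in the centre
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Despite the rule being only eight bits of information, the central column of the output passes standard randomness tests — Wolfram’s point about patternless behaviour arising from simple deterministic rules.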
S.H.
Notes : (1) “What we call here a Black Swan… is an event with the following three attributes :
First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. I stop and summarize the triplet : rarity, extreme impact, and retrospective (though not prospective) predictability.”

(2) An actual Black Swan (the bird) has Taleb’s first attribute, ‘rarity’, and his third, ‘unpredictability’ (since it was not even known to exist), but it completely lacks his second attribute, ‘extreme impact’. Who, apart from a few professionals, cares whether the bird is classed as a swan or not? Viruses and archaea are closer to being true ‘Black Swans’ since their belated and very unexpected discovery has provoked a mini-revolution in biology.

(3) Of course, it is not true in Quantum Mechanics that “the same conditions produce identical results” but this is precisely why QM, in the orthodox Copenhagen interpretation, is so worrisome.

(4) Fermat was an amateur mathematician — he was a jurist by profession — and, though regarded as the founder of an important branch of mathematics, Number Theory, gave few proofs. He claimed to have a proof of his famous Last Theorem, but noted that it was too long to fit into the margin of the book where he jotted it down, Diophantus’s Arithmetica. Modern mathematicians think his proof was almost certainly spurious.
The strange, largely self-taught Hindu mathematician Ramanujan produced a wealth of stunning mathematical theorems while working with chalk and slate (because he could not afford ink and paper) on the verandah of his parents’ house near Madras in the early twentieth century. Hardy, the leading British pure mathematician of the day and himself a model of mathematical ‘rigour’, nonetheless had the breadth of vision to recognize Ramanujan’s genius and invited him to come to Cambridge. This was perhaps a mixed blessing for mathematics since it seems that Ramanujan’s really creative work, some of which turned out to be wrong, wilted in the aridity of modern ‘prove it or be damned’ mathematics (Ramanujan didn’t bother with proofs) and only flowered again briefly when he returned to India to die at the age of 32. Hardy said that “Ramanujan’s mistakes were as remarkable as his correct theorems” — or something to that effect. Today, Ramanujan would stand even less chance of being recognized (except perhaps via the Internet), as few universities, let alone the Royal Society, would welcome such a maverick into their ranks, and I doubt if any contemporary Hardy would have the vision and above all the generosity to aid such a person.

(5) “The Black Swan” footnote p. 225 Penguin edition. Taleb also writes (p. 226), “We have moved from a diversified ecology of small banks, with varied lending policies, to a more homogeneous framework of firms that all resemble one another. True, we now have fewer failures, but when they occur… I shiver at the thought. I rephrase here : we will have fewer but more severe crises”.

(6) Since writing this I have come across the following : “A shortage of water is a more serious peril than any of the others mentioned in this report [concerning Pakistan]. Combined with fast growth of population, it is the true existential threat to Pakistan. (…) The study forecasts that by 2025 Pakistan’s annual water supply will fall short of demand by around 100 billion cubic metres, about half of the entire present flow of the Indus [!!]. In parts of the country the shortage is already acute.” (“Going with the Flow”, The Economist, February 11th 2012)
Also, “We are facing a planet without enough water and with a rapidly warming atmosphere….” (Princess Sumaya of Jordan, Interview in New Scientist 18 February 2012)

(7) This development has already taken place, though the mathematical fraternity, which even now does not accept an innovator such as Mandelbrot into its ranks, has not yet woken up to the fact :
“Evolutionary computing allows computers to do things they haven’t been programmed to do and is already being used to solve problems as diverse as creating train timetables to designing aircraft.” (“Move Over Einstein” by Justin Mullins, New Scientist, 19 March 2011)
        “Genetic algorithms mimic natural selection by describing a design as if it were a genome constructed from segments. Each segment describes a parameter of the invention, varying from its shape, say, to much finer-grained aspects, such as electrical resistance or a chemical’s molecular affinities. By randomly changing some segments — or ‘mutating’ them — the algorithm improves the design. The best results are then bred together to improve things further.” (“The Next Wave” by Paul Marks, New Scientist, 14 May 2011)
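The scheme Marks describes — a genome of parameter segments, random mutation, breeding of the best designs — can be sketched in a few lines of Python. The fitness function below is a toy stand-in of my own (each segment scored by its distance from 0.5), not anything from the article; a real application would score timetable conflicts or aerodynamic drag instead.

```python
import random

random.seed(0)               # fixed seed so the sketch is reproducible
GENOME_LENGTH = 10           # number of parameter "segments" in a design

def fitness(genome):
    # Toy objective: segments closer to 0.5 score higher (maximum is 0).
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.2):
    # Randomly change some segments, as in the article's description.
    return [random.random() if random.random() < rate else g for g in genome]

def breed(a, b):
    # One-point crossover: combine segments from two good designs.
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # keep the best designs
    population = parents + [mutate(breed(random.choice(parents),
                                         random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print(round(fitness(best), 3))    # near 0 after selection has done its work
```

The population size, mutation rate and generation count are arbitrary; the essential point is that improvement comes from selection acting on random variation, not from any explicit programme of steps toward the answer.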