Archives for category: Uncategorized

What is random? That which cannot be predicted with any confidence. But there is a weak and a strong sense to ‘unpredictable’. We might say that the motion of a leaf blown about by the wind is ‘random’ ― but that may simply be because we don’t know the exact speed and direction of the wind or the aerodynamic properties of this particular leaf. In classical mechanics, there is no room for randomness since all physical phenomena are fully determined and so could in principle be predicted if one had sufficient data. Indeed, the French astronomer Laplace claimed that a super-mind, aware of the current positions and momenta of all particles in existence, could predict the entire future of the universe from Newtonian principles.

In practice, of course, one never does know the initial conditions of any physical system perfectly. Whether this is going to make a substantial difference to the outcome hinges on how sensitively dependent on the initial conditions the system happens to be. Whether or not the flap of a butterfly’s wings in the bay of Tokyo could give rise to a hurricane in Barbados as chaos theory claims, systems that are acutely sensitive to initial conditions undoubtedly exist, and this is, of course, what makes accurate weather forecasting so difficult. Gaming houses retire dice after a few hundred throws because of inevitable imperfections creeping in, and a certain Jagger made a good deal of money because he noted that certain numbers seemed to come up slightly more often than others on a particular roulette wheel and bet on them. Later on, he guessed that the cause was a slight scratch on this particular wheel, and there seems to have been something in this, for eventually the management thwarted him by changing the roulette wheels every night (Note 1). All sorts of other seemingly ‘random’ phenomena turn out, on close examination, to exhibit a definite bias or trend: for example, certain digits turn up in miscellaneous lists of data more often than others (Benford’s Law) and this bias, or rather its absence, has been successfully used to detect tax fraud.
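For the curious, Benford’s Law says that the leading digit d of such ‘natural’ data turns up with frequency log10(1 + 1/d), so a 1 leads about 30% of the time while a 9 leads under 5%. The following Python sketch (my own illustration, not from any of the sources cited) simulates a multiplicative growth process and compares the observed digit frequencies with the law’s prediction:

```python
import math
import random

# Benford's Law: P(leading digit = d) = log10(1 + 1/d), so '1' leads
# about 30.1% of the time and '9' only about 4.6% of the time.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x: float) -> int:
    """First significant digit of a positive number."""
    return int(f"{x:e}"[0])   # scientific notation: the mantissa starts with the digit

# A multiplicative growth process -- the kind of 'naturally grown' data
# (populations, prices, lengths of rivers) that tends to obey the law.
random.seed(0)
N = 20_000
counts = {d: 0 for d in range(1, 10)}
x = 1.0
for _ in range(N):
    x *= random.uniform(1.0, 2.0)
    if x > 1e300:             # rescale to avoid overflow; leading digits unaffected
        x *= 1e-300
    counts[leading_digit(x)] += 1

for d in range(1, 10):
    print(d, round(counts[d] / N, 3), "vs", round(benford[d], 3))
```

Data fabricated by a tax cheat tends to have roughly flat digit frequencies instead, which is precisely the giveaway mentioned above.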

There is, however, something very unsatisfactory about the ‘unpredictable because of insufficient data’ definition of randomness: it certainly does not follow that there is an inherent randomness in Nature, nor does chaos theory imply that this is the case either. Curiously, quantum mechanics, that monstrous but hugely successful creation of modern science, does maintain that there is an underlying randomness at the quantum level. The radioactive decay of a particular nucleus is held to be not only unforeseeable but actually ‘random’ in the strong sense of the word ― though the bulk behaviour of a collection of atoms can be predicted with confidence. Likewise, genetic mutation, the pace setter of evolution, is regarded today as not just being unpredictable but, in certain cases at least, truly ‘random’. Randomness seems to have made a strong and unexpected come-back since it is now a key player in the game or business of living ― a bizarre volte-face given that science had previously been completely deterministic.

The ‘common sense’ meaning of randomness is the lack of any perceived regularity or repeating pattern in a sequence of events, and this will do for our present purposes (Note 2). Now, it is extremely difficult to generate a random sequence of events in the above sense, and in the recent past there was big money involved in inventing a really good random number generator. Strangely, most random number generators are not based on the behaviour of actual physical systems but depend on algorithms deliberately concocted by mathematicians. Why is this? Because, to slightly misquote Moshe, “complete randomness is a kind of perfection” (Note 3).
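By way of illustration (a sketch of my own, not from the essay), even a two-line ‘linear congruential generator’ produces sequences that look patternless yet are completely determined by the seed:

```python
# A linear congruential generator (LCG), one of the oldest algorithmic
# 'random' number generators. It is entirely deterministic: the same
# seed always yields the same sequence. The constants are the well-known
# Numerical Recipes values (a = 1664525, c = 1013904223, m = 2**32).
class LCG:
    def __init__(self, seed: int):
        self.state = seed % 2**32

    def next(self) -> int:
        # x_{n+1} = (a * x_n + c) mod m
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

gen = LCG(seed=42)
sample = [gen.next() % 10 for _ in range(12)]
print(sample)   # looks patternless, but re-seeding with 42 reproduces it exactly
```

Generators of this kind pass many statistical tests, yet the sequence is no more ‘random’ than clockwork: hence the difficulty, and the money, involved in approaching that ‘perfection’.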

The more one thinks about the idea of randomness, the weirder the concept appears since a truly ‘random’ event does not have a causal precursor (though it usually does have a consequence). So, how on earth can it occur at all and where does it come from? It arrives, as common language puts it very well, ‘out of the blue’.

Broadly speaking there are two large-scale tendencies in the observable universe: firstly the dissipation of order and decline towards thermal equilibrium and mediocrity because of the ‘random’ collision of molecules, secondly the spontaneous emergence of complex order from processes that appear to be, at least in part, ‘random’. The first principle is enshrined in the 2nd Law of Thermodynamics: the entropy (roughly, the extent of disorder) of a closed system always increases, or (just possibly) stays the same. Contemporary biologists have a big problem with the emergence of order and complexity in the universe since it smacks of creationism. But at this very moment the molecules of tenuous dispersed gases are clumping together to form stars and the trend of life forms on earth is, and has been for some time, a movement from relative structural simplicity (bacteria, archaea &c.) to the unbelievable complexity of plants and mammals. Textbooks invariably trot out the caveat that any local ‘reversal of entropy’ must always be paid for by increased entropy elsewhere. This is, however, not a claim that has been, or ever could be, comprehensively tested on a large scale, nor is it at all ‘self-evident’ (Note 4). What we do know for sure is that highly organized structures can and do emerge from very unpromising beginnings and this trend seems to be locally on the increase ― though it is conceivable that it might be reversed.

For all that, it seems that there really are such things as truly random events and they keep on occurring. What can one conclude from this? That, seemingly, there is a powerful mechanism for spewing forth random, uncaused events, and that this procedure is, as it were, ‘hard-wired’ into the universe at a very deep level. But this makes the continued production of randomness just as mysterious, or perhaps even more so, than the capacity of whatever was out there in the beginning to give rise to complex life!

The generation of random micro-events may in fact turn out to be just about the most basic and important physical process there is. For what do we need to actually produce a ‘world’? As far as I am concerned, there must be something going on, in other words we need ‘events’ and these events require a source of some sort. But this source is remote and we don’t need to attribute to it any properties except that of being a permanent store of proto-events. The existence of a source is not enough though. Nothing would happen without a mechanism to translate the potential into actuality, and the simplest and, in the long run, most efficient mechanism is to have streams of proto-events projected outwards from the source at random. Such a mechanism will, however, by itself not produce anything of much interest. To get order emerging from the primeval turmoil we require a second mechanism, contained within the first, which enables ephemeral random events to, at least occasionally, clump together, and eventually build up, simply by spatial proximity and repetition, coherent and quasi-permanent event structures (Note 5). One could argue that this possibility, namely the emergence of ‘order from chaos’, however remote, will eventually come up ― precisely because randomness in principle covers all realizable possibilities. A complex persistent event conglomeration may be termed a ‘universe’, and even though an incoherent or contradictory would-be ‘universe’ will presumably rapidly collapse into disorder, others may persist and maybe even spawn progeny.

So, which tendency is going to win out, the tendency towards increasing order or reversion to primeval chaos? It certainly looks as if a recurrent injection of randomness is necessary for the ‘health’ of the universe and especially for ourselves ― this is one of the messages of natural selection and it explains, up to a point, the extraordinarily tortuous process of meiosis (roughly sexual reproduction) as against mitosis when a cell simply duplicates its DNA and splits in two (Note 6). But there is also the “nothing succeeds like success” syndrome. And, interestingly, the evolutionary biologist John Bonner argues that microorganisms “are more affected by randomness than large complex organisms” (Note 7). This and related phenomena might tip the balance in favour of order and complexity ― though specialization also makes the larger organisms more vulnerable to sudden environmental changes.

SH

 

Note 1 This anecdote is recounted and carefully analysed in The Drunkard’s Walk by Mlodinow.

Note 2 Alternative definitions of randomness abound. There is the frequency definition whereby, “If a procedure is repeated over and over again indefinitely and one particular outcome crops up as many times as any other possible outcome, the sequence is considered to be random” (adapted from Peirce). And Stephen Wolfram writes: “Define randomness so that something is considered random only if no short description whatsoever exists of it”.

Note 3 Moshe actually wrote “Complete chaos is a kind of perfection”.

Note 4 “The vast majority of current physics textbooks imply that the Second Law is well established, though with surprising regularity they say that detailed arguments for it are beyond their scope. More specialized articles tend to admit that the origins of the Second Law remain mysterious” (Stephen Wolfram, A New Kind of Science, p. 1020).

Note 5 This is essentially the principle of ‘morphic resonance’ advanced by Rupert Sheldrake. Very roughly, the idea is that if a certain event, or cluster of events, has occurred once, it is slightly more likely to occur again, and so on and so on. Habit thus eventually becomes physical law, or can do. At bottom the ‘Gambler’s Fallacy’ contains a grain of truth: I suspect that current events are never completely independent of previous similar occurrences despite what statisticians say. Clearly, for the theory to work, there must be a very slow build-up and a tipping point after which a trend really takes off. We require in effect the equivalent of the Schrödinger equation to show how initial randomness evolves inexorably towards regularity and order.

Note 6 In meiosis not only does the offspring get genes from two individuals rather than one, but there is a great deal of ‘crossing over’ of segments of chromosomes and this reinforces the mixing process.

Note 7 The reason given for this claim is that there are many more developmental steps in the construction of a complex organism and so “if an earlier step fails through a deleterious mutation, the result is simple: the death of the embryo”. On the other hand “being small means very few developmental steps, with little or no internal selection” and hence a far greater number of species, witness radiolaria (50,000) and diatoms (10,000). See the article ‘Evolution, by chance?’ in New Scientist, 20 July 2013, and Randomness in Evolution by John Bonner.


                 “There is a tide in the affairs of men
Which, taken at the flood, leads on to fortune”

Shakespeare, Julius Caesar

In a previous post I suggested that the three most successful non-hereditary ‘power figures’ in Western history were Cromwell, Napoleon and Hitler. Since none of the three had advantages that came by birth, as, for example, Alexander the Great or Louis XIV did, the meteoric rise of these three persons suggests either very unusual abilities or very remarkable ‘luck’.
From the viewpoint of Eventrics, success depends on how well a particular person fits the situation and there is no inherent conflict between ‘luck’ and ability. Quite the reverse: the most important ‘ability’ that a successful politician, military commander or businessman can have is precisely the capacity to handle events, especially unforeseen ones. In other words, success to a considerable extent depends on how well a person handles his or her ‘good luck’ if and when it occurs, or how well a person can transform ‘bad luck’ into ‘good luck’. One doubts whether everyone gets brilliant opportunities that they fail to seize but, certainly, most of us are blind to the opportunities that do arise and, when not blind, lack the self-confidence to seize such an offered ‘chance’ and turn it to our advantage.
The above is hardly controversial though it does rule out the view that everything is determined in advance, or, alternatively, the exact opposite, that ‘more or less anything can happen at any time anywhere’. I take the commonsense view that there are certain tendencies that really exist in a given situation. It is, however, up to the individual to reinforce or make use of such ‘event-currents’ or, alternatively, to ignore them and, as it were, pass by on the other side like the Levite in the Parable of the Good Samaritan. The driving forces of history are not people but events and ‘event dynamics’; however, this does not reduce individuals to the status of puppets, far from it. Either through instinct or correct analysis (or a judicious mixture of the two) the successful person identifies a ‘rising’ event current, gets with it if it suits him or her, and abandons it abruptly when it ceases to be advantageous. This is easy enough to state, but supremely difficult to put into practice. Everyone who speculates on the Stock Exchange knows that the secret of success is no secret at all: it consists in buying when the price of stock is low but just about to rise and selling when the price is high but just about to fall. For one Soros, there are a hundred thousand or maybe a hundred million ‘ordinary investors’ who either fail entirely or make very modest gains.
But why, one might ask, is it advantageous to identify and go with an ‘event trend’ rather than simply decide what you want to do and pursue your objective off your own bat? Because the trend will do a good deal of the work for you: the momentum of a rising trend is colossal and, for a while, seems to be unstoppable. Pit yourself against a rising trend and it will overwhelm you; identify yourself with it and it will take you along with a force equivalent to that of a million individuals. If you can spot coming trends accurately and go with them, you can succeed with only moderate intelligence, knowledge, looks, connections, what have you.

Is charisma essential for success?

It is certainly possible to succeed spectacularly without charisma since Cardinal Richelieu, the most powerful man in the France and Europe of his day, had none, whereas Joan of Arc, who had plenty, had a pitifully short career. Colbert, finance minister of Louis XIV, is another example; indeed, in the case of ministers it is probably better not to stick out too much from the mass, even to the extent of appearing a mediocrity.
Nonetheless, Richelieu and Colbert lived during an era when it was only necessary to obtain the support of one or two big players such as kings or popes, whereas, in a democratic era, it is necessary to inspire and fascinate millions of ‘ordinary people’. No successful modern dictator lacked charisma: Stalin, Mao Tse-tung and Hitler all had plenty and this made up for much else. Charisma, however, is not enough, or not enough if one wishes to remain in power: to do this, an intuitive or pragmatic grasp of the behaviour of event patterns is a sine qua non and this is something quite different from charisma.

Hitler as failure and mediocrity

Many historians, especially British, are not just shocked but puzzled by Hitler ─ though less now than they were fifty years ago. For how could such an unprepossessing individual, with neither looks, polish, connections nor higher education, succeed so spectacularly? One British newspaper writer described Hitler, on the occasion of his first big meeting with Mussolini, as looking like “someone who wanted to seduce the cook”.
Although he had participated in World War I and shown himself to be a dedicated and brave ‘common soldier’, Hitler never had any experience as a commander on the battlefield even at the level of a platoon ─ he was a despatch runner who was told what to do (deliver messages) and did it. Yet this was the man who eventually got control of the greatest military machine in history and blithely disregarded the opinions of seasoned military experts, initially with complete success. Hitler also proved to be a vastly successful public speaker, but he never took elocution lessons and, when he started, even lacked the experience of handling an audience that an amateur actor or stand-up comedian possesses.
Actually, Hitler’s apparent disadvantages proved to be more of a help than a hindrance once he had begun to make his mark, since they gave his adversaries and rivals the erroneous impression that he would be easy to manipulate and outwit. Hitler learned about human psychology, not by reading learned tomes written by Freud and Adler, but by eking out a precarious living in Vienna as a seller of picture postcards and sleeping in workingmen’s hostels. This was learning the hard way which, as long as you last the course (which the majority don’t), is generally the best way.
It is often said that Hitler was successful because he was ruthless. But ruthlessness is, unfortunately, not a particularly rare human trait, at any rate in the lower levels of a not very rich society. Places like Southern Italy or Colombia by all accounts have produced and continue to produce thousands or tens of thousands of exceedingly ruthless individuals, but how many ever get anywhere? At the other end of the spectrum, one could argue that it is impossible to be a successful politician without a certain degree of ruthlessness ─ though admittedly Hitler took it to virtually unheard of extremes. Even ‘good’ successful political figures such as Churchill were ruthless enough to happily envisage dragging neutral Norway into the war (before the Germans invaded), to authorise the deliberate bombing of civilian centres and even to approve in theory the use of chemical weapons. Nor did de Gaulle bother unduly about the bloody repercussions for the rural population that the activities of partisans would inevitably bring about. Arguably, if people like Churchill and de Gaulle had not had a substantial dose of ‘ruthlessness’ (aka ‘commitment’), we would have lost the war long before the Americans ever got involved ─ which is not, of course, to put such persons on a level with Hitler and Stalin.
To return to Hitler. Prior to the outbreak of WWI, Hitler, though by all accounts already quite as ruthless and opinionated as he subsequently proved himself to be in a larger arena, was a complete failure. He had a certain, rather conventional, talent for pencil drawing and some vague architectural notions but that is about it. Whether Hitler would or could have made a successful architect, we shall never know since he was refused entry twice by the Viennese School of Architecture. He certainly retained a deep interest in the subject and did succeed in spotting and subsequently promoting an architect of talent, Speer. But there is no reason to think we would have heard of Hitler if he had been accepted as an architectural student and subsequently articled to a Viennese firm of Surveyors and Architects.
As for public speaking, Hitler didn’t do any in his Vienna pre-war days, only discovering his flair in Munich in the early twenties. And although Hitler enlisted voluntarily for service at the outbreak of WWI, he was for many years actually a draft-dodger wanted for national service by Austria, his country of birth. Hardly a promising start for a future grand military strategist.

Hitler’s Decisive Moment : the Beer Hall Putsch

Hitler did, according to the few accounts we have by people who knew him at the time, have boyhood dreams of one day becoming a ‘famous artist’ — but what adolescent has not? Certainly, Hitler did not, in his youth and early manhood, see himself as a future famous political or military figure, far from it. Even when Hitler started his fiery speeches about Germany’s revival and the need for strong government, he did not at first cast himself in the role of ‘Leader’. On the contrary, it would seem that awareness of his own mission as saviour of the German nation came to him gradually and spasmodically. Indeed, one could argue that it was only after the abortive Munich Beer-Hall putsch that Hitler decisively took on this role : it was in a sense thrust on him.
The total failure of this rather amateurish plot to take over the government of Bavaria by holding a gun to the governor’s face and suchlike antics turned out to be the turning-point of his thinking, and of his life. In Quattrocento Italy it was possible to seize power in such a way ─ though only the Medici with big finance behind them really succeeded on a grand scale ─ and similar coups have succeeded in modern Latin American countries. But in an advanced industrial country like Germany where everyone had the vote, such methods were clearly anachronistic. Even if Hitler and his supporters had temporarily got control of Munich, they would easily have been put down by central authority : they would have been seven-day wonders and no more. It was this fiasco that decided Hitler to obtain power via the despised ballot box rather than the more glamorous but outmoded methods of an Italian condottiere.
The failed Beer-hall putsch landed Hitler in court and, subsequently, in prison; and most people at the time thought this would be the end of him. However, Hitler, like Napoleon before him in Egypt after the destruction of his fleet, was a strong enough character not to be brought down by the disaster but, on the contrary, to view it as a golden opportunity. This is an example of the ‘law’ of Eventrics that “a disadvantage, once turned into an advantage, is a greater advantage than a straightforward advantage”.
What were the advantages of the situation? Three at least. Firstly, Hitler now had a regional and soon a national audience for his views and he lost no time in making the court-room a speaker’s platform with striking success. His ability as a speaker was approaching its zenith: he had the natural flair and already some years of experience. Hitler was given an incredibly lenient sentence and was even at one point thanked by the judge for his informative replies concerning Germany’s recent history! Secondly, while in prison, Hitler had the time to write Mein Kampf which, given his lax, bohemian life-style, he would probably have never got round to doing otherwise. And his court-room temporary celebrity meant the book was sure to sell if written and published rapidly.
Thirdly, and perhaps most important of all, the various nascent extreme Right groups made little or no headway with the ‘leader’ in prison which confirmed them in the view that Hitler was indispensable. Once out of prison, he found himself without serious competitors on the Right and his position stronger than ever.
But the most important outcome was simply the realization that the forces of the State were far too strong to be overthrown by strong-arm tactics. The eventual break with Röhm and the SA was an inevitable consequence of Hitler’s fateful decision to gain power within the system rather than by openly opposing it.

Combination of opposite abilities

As a practitioner of Eventrics or ‘handler of events’, Hitler held two trump cards that are rarely dealt to the same individual. Firstly, even though his sense of calling seems to have come relatively late, by the early nineteen-thirties he was entirely convinced that he was a man of destiny. He is credited with the remarkable statement, very similar to one made by Cromwell, “I follow the path set by Providence with the precision and assurance of a sleepwalker”. It was this messianic side that appealed to the masses of ordinary people, and it was something that he retained right up to the end. Even when the Russian armies were at the gates of Berlin, Hitler could still inspire people who visited him in the Bunker. And Speer recounts how, even at Germany’s lowest ebb, he overheard (without being recognized) German working people in a factory repeating like a mantra that “only Hitler can save us now”.
However, individuals who see themselves as chosen by the gods, usually fail because they do not pay sufficient attention to ordinary, mundane technicalities. Richelieu said that someone who aims at high power should not be ashamed to concern himself with trivial details ─ an excellent remark. Napoleon has been called a ‘map-reader of genius’ and to prepare for the Battles of Ulm and Austerlitz, he instructed Berthier “to prepare a card-index showing every unit of the Austrian army, with its latest identified location, so that the Emperor could check the Austrian order of battle from day to day” (Note 1). Hitler had a similar capacity for attention to detail, supported by a remarkable memory for facts and figures — there are many records of him reeling off correct data about the range of guns and the populations of certain regions to his amazed generals.
This ‘combination of contraries’ also applies to Hitler as a statesman. Opponents and many subsequent historians could never quite decide whether Hitler, from the beginning, aimed for world domination, or whether he simply drifted along, waiting to see where events would take him. In reality, as Bullock rightly points out, these contradictions are only apparent : “Hitler was at once fanatical and cynical, unyielding in his assertion of will power and cunning in calculation” (Bullock, Hitler and the Origins of the Second World War). This highly unusual combination of two opposing tendencies is the key to Hitler’s success. As Bullock again states, “Hitler’s foreign policy… combined consistency of aim with complete opportunism in method and tactics. (…) Hitler frequently improvised, kept his options open to the last possible moment and was never sure until he got there which of several courses of action he would choose. But this does not alter the fact that his moves followed a logical (though not a predetermined) course ─ in contrast to Mussolini, an opportunist who snatched eagerly at any chance that was going, but never succeeded in combining even his successes into a coherent policy” (Bullock, p. 139).
Certainly, sureness of ultimate aim combined with flexibility in day to day management is a near infallible recipe for conspicuous success. Someone who merely drifts along may occasionally obtain a surprise victory but will be unable to build on it; someone who is completely rigid in aim and means will not  be able to adapt to, and take advantage of, what is unforeseen and unforeseeable. Clarity of goal and unshakeable conviction is the strategic part of Practical Eventrics while the capacity to respond rapidly to the unforeseen belongs to the tactical side.

Why did Hitler ultimately fail?

Given the favourable political circumstances and Hitler’s unusual abilities, the wonder is, not that he lasted as long as he did, but that he eventually failed. On a personal level, there are two reasons for this. Firstly, Hitler’s racial theories, while they originally helped him to power, eventually proved much more of a drawback than an advantage. For one thing, since Hitler regarded ‘Slavs’ as inferior, this conviction unnecessarily alienated large populations in Eastern Europe, many of whom were originally favourable to German intervention since they had had enough of Stalin. Moreover, Hitler allowed ideological and personal prejudices to influence his choice of subordinates : rightly suspicious of the older Army generals but jealous of brilliant commanders like von Manstein and Guderian, he ended up with a General Staff of supine mediocrities.
Secondly, Hitler, though he had an excellent intuitive grasp of overall strategy, was a poor tactician. Not only did he have no actual experience of command on the battlefield but, contrary to popular belief, he was easily rattled and unable to keep a clear head in emergencies.
Jomini considered that “the art of war consists of six distinct parts:

  1. Statesmanship in relation to war
  2. Strategy, or the art of properly directing masses upon the theatre of war, either for defence or invasion.
  3. Grand Tactics.
  4. Logistics, or the art of moving armies.
  5. Engineering ─ the attack and defence of fortifications.
  6. Minor tactics.”
    Jomini, The Art of War p. 2

Hitler certainly ticks the first three boxes. But certainly not (4), Logistics. Hitler tended to override his highly efficient Chief of General Staff, Halder, whereas Napoleon always listened carefully to what Halder’s equivalent, Berthier, had to say. According to Liddell Hart, the invasion of Russia failed, despite the high quality of the commanders and fighting men, because of an error in logistics.
“Hitler lost his chance of victory because the mobility of his army was based on wheels instead of on tracks. On Russia’s mud-roads its wheeled transport was bogged when the tanks could move on. If the panzer forces had been provided with tracked transport they could have reached Russia’s vital centres by the autumn in spite of the mud” (Liddell Hart, History of the Second World War). On such mundane details does the fate of empires and even of the world often depend.
As for (5), the attack on fortifications, it had little importance in World War II, though the long-drawn-out siege of Leningrad exhausted resources and troops and should probably have been abandoned. Finally, on (6), what Jomini calls ‘minor tactics’, Hitler was so poor as to be virtually incompetent. By ‘minor tactics’, we should understand everything relating to the actual movement of troops on the battlefield (or battle zone) ─ the area in which Napoleon and Alexander the Great were both supreme. Hitler was frequently indecisive and vacillating as well as nervy, all fatal qualities for a military commander.
On two occasions, Hitler made monumental blunders that cost him the war. The first was the astonishing decision to hold back the victorious tank units just as they were about to sweep into Dunkirk and cut off the British forces. And the second was Hitler’s rejection of Guderian’s plan for a headlong drive towards Moscow before winter set in; instead, following conventional Clausewitzian principles, Hitler opted for a policy of encirclement and head-on battle. Given the enormous manpower of the Russians and their scorched-earth policy, this was a fatal decision.
Jomini, as opposed to Clausewitz, recognized the importance of statesmanship in the conduct of a war, something that professional army officers and even commanders are prone to ignore. Whereas Lincoln often saw things that his generals could not, and on occasion successfully overrode them because he had a sounder long-term view, Hitler, a political rather than a military man, introduced far too much statesmanship into the conduct of war.
It has been plausibly argued, especially by Liddell Hart, that the decision to halt the tank units before Dunkirk was a political rather than a military decision. Blumentritt, operational planner for General Rundstedt, said, at a later date, that “the ‘halt’ had been called for more than military reasons, it was part of a political scheme to make peace easier to reach. If the British Expeditionary Force had been captured at Dunkirk, the British might have felt that their honour had suffered a stain which they must wipe out. By letting it escape, Hitler hoped to conciliate them” (Liddell Hart, History of the Second World War, Vol. I, pp. 89-90). This did make some kind of sense: a rapid peace settlement with Britain would have wound up the Western campaign and freed Hitler’s hands to advance eastwards which had seemingly always been his intention. However, if this interpretation is correct, Hitler made a serious miscalculation, underestimating Britain’s fighting spirit and inventiveness.

Hitler’s abilities and disabilities

It would take us too far afield from the field of Eventrics proper to go into the details of Hitler’s political, economic and military policies. My overall feeling is that Hitler was a master in the political domain, time and again outwitting his internal and external rivals and enemies, and that he had an extremely good perception of Germany’s economic situation and what needed to be done about it. But he was an erratic and often incapable military commander ─ for we should not forget that, following the resignation of von Brauchitsch, Hitler personally supervised the entire conduct of the war in the East (and eventually everywhere else). This is something like the reverse of the conventional assessment of Hitler, so it is perhaps worth explaining.
Hitler is credited with the invention of Blitzkrieg, a new way of waging war and, in particular, with one of the most successful campaigns in military history, the invasion of France, when the tank units moved in through the Ardennes, thought to be impassable. The original idea was in reality not Hitler’s but von Manstein’s (who got little credit for it), though Hitler did have the perspicacity to see the merits of this risky and unorthodox plan of attack, which the German High Command unanimously rejected. It is also true that Hitler took a special interest in the tank and does seem to have had some good ideas regarding tank design.
However, Hitler never seems to have rid himself completely of the conventional Clausewitzian idea that wars are won by large-scale confrontations of armed men, i.e. by modern ‘pitched battles’. Practically all (if not all) the German successes depended on surprise, rapidity of execution and artful manoeuvre ─ that is, precisely on the avoidance of direct confrontation. Thus the invasion of France, the early stages of the invasion of Russia, Rommel in North Africa and so on. When the Germans fought it out on a level playing field, they either lost, as at El Alamein, or achieved ‘victories’ that were so costly as to be more damaging than defeats, as in the latter part of the Russian campaign. Hitler was in fact only a halfway-modernist in military strategy. “The school of Fuller and Basil Liddell Hart [likewise Guderian and Rommel] moved away from using manoeuvre to bring the enemy’s army to battle and destroy it. Instead, it [the tank] should be used in such a way as to numb the enemy’s command, control, and communications and bring about victory through disintegration rather than destruction” (Messenger, Introduction to Jomini’s Art of War).

As to the principle of Blitzkrieg (Lightning War) itself, though it doubtless appealed to Hitler’s imagination, it was in point of fact forced on him by economic necessity : Germany just did not have the resources to sustain a long war. It was make or break. And much the same went for Japan.
Hitler’s duplicity and accurate reading of his opponents’ minds in the realm of politics needs no comment. But what is less readily recognized is how well he understood the general economic situation. Hitler had doubtless never read Keynes ─ though his highly capable Economics Minister, Schacht, doubtless had. But with his talent for simplification, Hitler realized early on that Germany laboured under two crippling economic disadvantages : she did not produce enough food for her growing population and, as an industrial power, lacked indispensable natural resources, especially oil and quality iron-ore. So where to obtain these and many other essential items? By moving eastwards, absorbing the cereal-producing areas of the Ukraine and getting hold of the oilfields of the Caucasus. This was the policy expounded to the German High Command in the so-called ‘Hossbach Memorandum’ to justify the invasion of Russia to an unenthusiastic general staff.
The policy of finding Lebensraum in the East was based on a ruthless but shrewd and essentially correct analysis of the economic situation in Europe at the time. But precisely because Germany would need even more resources in a wartime situation, victory had to be rapid, very rapid. The gamble nearly succeeded : as a taster, Hitler’s armies overwhelmed Greece and Yugoslavia in a mere six weeks and at first looked set to do much the same in Russia in three months. Perhaps if Hitler had followed Guderian’s plan of an immediate all-out tank attack on Moscow, instead of getting bogged down in Southern Russia and failing to take Stalingrad, the gamble would actually have paid off.

Hitler: Summary from the point of view of Eventrics

The main points to recall from this study of Hitler as a ‘handler of events’ are the following.

  1. The methods chosen must fit the circumstances (witness Hitler’s switch to a strategy based on the ballot box rather than the revolver after the Beer-Hall putsch);
  2. An apparent defeat can be turned into an opportunity, a disadvantage into an advantage (e.g. Hitler’s trial after the Beer-Hall putsch);
  3. Combining inflexibility of ultimate aim with extreme flexibility on a day-to-day basis is a near invincible combination (Hitler’s conduct of foreign affairs during the Thirties);
  4. It is disastrous to allow ideological and personal prejudices to interfere with the conduct of a military campaign, and worse still to become obsessed with a specific objective (e.g. Hitler’s racial views, his obsession with taking Stalingrad).

 

As related in the previous post, Einstein, in his epoch-making 1905 paper, based his theory of Special Relativity on just two postulates:

  1. The laws of physics take the same form in all inertial frames.
  2. The speed of light in free space has the same value for all observers in inertial frames, irrespective of the relative motion of the source and the observer.

I asked myself if I could derive the main results of the Special Theory, the Rule for the Addition of Velocities, Space Contraction, Time Dilation and the ‘Equivalence’ of Mass and Energy from UET postulates.
Instead of Einstein’s Postulate 2, the ‘absolute value of the speed of light’, I employ a more general but very similar principle, namely that there is a ‘limiting speed’ for the propagation of causal influences from one spot on the Locality to another. In the simplest case, that of an  event-chain consisting of a single ultimate event that repeats at every ksana, this amounts to asking ourselves ‘how far’ the causal influence can travel ‘laterally’ from one ksana to the next. I see the Locality as a sort of grid extending indefinitely in all directions where  each ‘grid-position’ or ‘lattice-point’ can receive one, and only one, ultimate event (this is one of the original Axioms, the Axiom of Exclusion). At each ksana the entire previous spatial set-up is deftly replaced by a new, more or less identical one. So, supposing we can locate the ‘same’ spot, i.e. the ‘spot’ which replaces the one where the ultimate event had occurrence at the last ksana, is there a limit to how far to the left (or right) of this spot the ultimate event can re-occur? Yes, there is. Why? Well, I simply cannot conceive of there being no limit to how far spatially an ‘effect’ ─ in this case the ‘effect’ is a repetition of the original event ─ can be from its cause. This would be a holographic nightmare where anything that happens here affects, or at least could affect, what happens somewhere billions of light years away. One or two physicists, notably Heisenberg, have suggested something of the sort but, for my part, I cannot seriously contemplate such a state of affairs.  Moreover, experience seems to confirm that there is indeed a ‘speed limit’ for all causal processes, the limit we refer to by the name of c.
However, this ‘upper speed limit’ has a somewhat different and sharper meaning in Ultimate Event Theory than it does in matter-based physics because c (actually c*) is an integer and corresponds to a specific number of adjacent ‘grid-positions’ on the Locality existing at or during a single ksana. It is a distance rather than a speed, and even this is not quite right : it is a ‘distance’ estimated not in terms of ‘lengths’ but only in terms of the number of intermediary ultimate events that could conceivably be crammed into this interval.
In UET a distinction is made between an attainable limiting number of grid-positions to right (or left), denoted c*, and the lowest unattainable limit, c, though this finicky distinction can in many cases be neglected. But the basic schema is this. A ‘causal influence’, to be effective, must not only be able to at least traverse the distance between one ksana and the next ‘vertically’ (otherwise nothing would happen) but must also stretch out ‘laterally’, i.e. ‘traverse’ or rather ‘leap over’ a particular number of grid-positions. There is an upper limit to the number of positions that can be ‘traversed’, namely c*, an integer. This number, which is very great but not infinite ─ actual infinity is completely banished from UET ─ defines the universe we (think we) live in since it puts a limit to the operation of causality (as Einstein clearly recognized), and without causality there can, as far as I am concerned, be nothing worth calling a universe. Quite conceivably, the value of this constant c (or c*) is very different in other universes, supposing they exist, but we are concerned only with this ‘universe’ (massive causally connected more or less identically repeating event-cluster).
So far, so good. This sounds a rather odd way of putting things, but we are still pretty close to Special Relativity as it is commonly taught. What of Einstein’s other principle? Well, firstly, I don’t much care for the mention of “laws of physics”, a concept which Einstein, along with practically every other modern scientist, inherited from Newton and which harks back to a theistic world-view whereby God, the supreme law-giver, formulated a collection of ‘laws’ that everything must from the moment of Creation obey ─ everything material at any rate. My concern is with what actually happens, whether what happens is ‘lawful’ or not. Nonetheless, there do seem to be certain very general principles that apply across the board and which may, somewhat misleadingly, be classed as laws. So I shall leave this question aside for the moment.
The UET Principle that replaces Einstein’s First Principle (“that the laws of physics are the same in all inertial frames”) is rather tricky to formulate but, if the reader is patient and broad-minded enough, he or she should get a good idea of what I have in mind. As a first formulation, it goes something like this:

The occupied region between two or more successive causally related positions on the Locality is invariant. 

         This requires a little elucidation. To start with, what do I understand by ‘occupied region’? At least to a first approximation, I view the Locality (the ‘place’ where ultimate events can and do have occurrence) as a sort of three-dimensional lattice extending in all directions which flashes on and off rhythmically. It would seem that extremely few ‘grid-spots’ ever get occupied at all, and even fewer spots ever become the seats of repeating events, i.e. the location of the first event of an event-chain. The ‘Event Locality’ of UET, like the Space/Time of matter-based physics, is a very sparsely populated place.
Now, suppose that an elementary event-chain has formed but is marooned in an empty region of the Locality. In such a case, it makes no sense to speak of ‘lateral displacement’ : each event follows its predecessor and re-appears at the ‘same’ ─ i.e.  ‘equivalent’ ─ spot. Since there are no landmark events and every grid-space looks like every other, we can call such an event-chain ‘stationary’. This is the default case, the ‘inertial’ case to use the usual term.
We concentrate for the moment on just two events, one the clone of the other re-appearing at the ‘same spot’ a ksana later. These two events in effect define an ‘Event Capsule’ extending from the centre (called ‘kernel’ in UET) of the previous grid-space to the centre of the current one and span a temporal interval of one ksana. Strictly speaking, this ‘Event Capsule’ has two parts, one half belonging to the previous ksana and the other to the second ksana, but, at this stage, there is no more than a thin demarcation line separating the two extremities of the successive ksanas. Nonetheless, it would be quite wrong (from the point of view of UET) to think of this ‘Event Capsule’ and the whole underlying ‘spatial/temporal’ set-up as being ‘continuous’. There is no such thing as a ‘Space/Time Continuum’ as Minkowski understood the term.  ‘Time’ is not a dimension like ‘depth’ which can seamlessly be added on to ‘length’ or ‘width’ : there is a fundamental opposition between the spatial and temporal aspect of things that no physical theory or mathematical artifice can completely abolish. In the UET  model, the demarcations between the ‘spatial’ parts of adjacent Event Capsules do not widen, they  remain simple boundaries, but the demarcations between successive ksanas widen enormously, i.e. there are gaps in the ‘fabric’ of time. To be sure there must be ‘something’ underneath which persists and stops everything collapsing, but this underlying ‘substratum’ has no physical properties whatsoever, no ‘identity’, which is why it is often referred to, not inaccurately, both in Buddhism and sometimes even in modern physics, as ‘nothing’.
To return to the ‘Constant Region Postulate’. The elementary ‘occupied region’ may be conceived as a ‘Capsule’ having the dimensions s0 × s0 × s0 = s0³ for the spatial extent and t0 for time, i.e. a region of extent s0³ × t0. These dimensions are fixed once and for all and, in the simplest UET model, s0 is a maximum and t0 is a minimum. Restricting ourselves for simplicity to a single spatial dimension and a single temporal dimension, we thus have an ‘Event Rectangle’ of s0 by t0.
        For anything of interest to happen, we need more than one event-chain and, in particular, we need at least three ultimate events, one of which is to serve as a sort of landmark for the remaining pair. It is only by referring to this hypothetical or actual third event, occurring as it does at a particular spot independently of the event-pair, that we can meaningfully talk of the ‘movement’ to left or right of the second ultimate event in the pair with relation to the first. Alternatively, one could imagine an ultimate event giving rise to two events, one occurring ‘at the same spot’ and the other so many grid-spaces to the right (or left). In either case, we have an enormously expanded ‘Event Capsule’, spatially speaking, compared to the original one. The Principle of the Constancy of the Area of the Occupied Region asserts that this ‘expanded’ Event Capsule, which we can imagine as a ‘Space/Time rectangle’ (rather than a Space/Time parallelepiped), always has the ‘same’ area.
How can this be possible? Quite simply by making the spatial and temporal ‘dimensions’ inversely proportional to each other. As I have detailed in previous posts, we have in effect a ‘Space/Time Rectangle’ of sides sv and tv (subscript v for variable) such that sv × tv = s0 × t0 = Ω = constant. Just conceivably, one could make s0 a minimum and t0 a maximum, but this would result in a very strange universe indeed. In this model of UET, I take s0 as a maximum and t0 as a minimum. These dimensions are those of the archetypal ‘stationary’ or ‘inertial’ Event Capsule, one far removed from the possible influence of any other event-chains. I do not see how the ‘mixed ratio’ s0 : t0 can be determined on the basis of any fundamental physical or logical considerations, so this ratio just ‘happens to be’ what it is in the universe we (think we) live in. This ratio, along with the determination of c, which is a number (positive integer), are the most important constants in UET and different values would give rise to very different universes. In UET s0/t0 is often envisaged in geometrical terms : tan β = s0/t0 = constant. The variable dimensions sv and tv also have a minimum and a maximum value respectively, noted su and tu, the subscript u standing for ‘ultimate’. We thus have a hyperbola but one constrained within limits so that there is no risk of ‘infinite’ values.
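As a concrete illustration of this constrained hyperbola, here is a minimal numerical sketch. The names and values (S0, T0, C_STAR and the function temporal_dimension) are my own stand-ins for the post’s s0, t0, c*, su and tu; in UET itself c* would be an astronomically large integer.

```python
# Sketch of the constrained 'Space/Time Rectangle' hyperbola s_v * t_v = s_0 * t_0 = Omega.
# All numerical values are arbitrary stand-ins chosen for illustration.

S0 = 100.0          # default ('rest') spatial dimension s0, a maximum
T0 = 1.0            # default ('rest') temporal dimension t0, a minimum
OMEGA = S0 * T0     # the invariant area of the occupied region
C_STAR = 50         # stand-in for c* (astronomically large in reality)
SU = S0 / C_STAR    # ultimate minimum spatial dimension su
TU = C_STAR * T0    # ultimate maximum temporal dimension tu

def temporal_dimension(s_v):
    """Given a spatial dimension s_v, return the temporal dimension t_v
    that keeps the occupied area equal to Omega. s_v is constrained to
    the closed interval [su, s0], so no 'infinite' values can arise."""
    if not SU <= s_v <= S0:
        raise ValueError("s_v must lie between su and s0")
    return OMEGA / s_v

# Every admissible point (s_v, t_v) on the hyperbola encloses the same area:
for k in range(1, C_STAR + 1):
    s_v = S0 * k / C_STAR
    t_v = temporal_dimension(s_v)
    assert abs(s_v * t_v - OMEGA) < 1e-9
    assert T0 <= t_v <= TU + 1e-9
```

The clamping in temporal_dimension is the point of the exercise: the hyperbola is cut off at su and tu, so the inverse proportionality never runs away to infinity.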

 

 

What is ‘speed’? Speed is not one of the basic SI units. The three SI mechanical units are the metre, the standard of length, the kilogram, the standard of mass, and the second, the standard of time. (The remaining four units are the ampere, kelvin, candela and mole.) Speed is a secondary entity, being the ratio of space to time, metre to second. For a long time, since Galileo in fact, physicists have recognized the ‘relational’ nature of speed, or rather velocity (which is a ‘vector’ quantity, speed + direction). To talk meaningfully about a body’s speed you need to refer it to some other body, preferably a body that is, or appears to be, fixed (Note 1). This makes speed a rather insubstantial sort of entity, a will-o’-the-wisp, at any rate compared to ‘weight’, ‘impact’, ‘position’, ‘pain’ and so forth. The difficulty is compounded by the fact that we almost always consider ourselves to be ‘at rest’ : it is the countryside we see and experience whizzing by us when seated in a train. It requires a tremendous effort of imagination to see things from ‘the other object’s point of view’. Even a sudden jolt, an acceleration, is registered as a temporary annoyance that is soon replaced by the same self-centred ‘state of rest’. Highly complex and contrived set-ups like roller-coasters and other fairground machines are required to give us the sensation of ‘acceleration’ or ‘irregular movement’, a sensation we find thrilling precisely because it is so unaccustomed. Basically, we think of ourselves as more or less permanently at rest, even when we know we are moving around. In UET everything actually is at rest for the space of a single ksana; it does not just appear to be, and everything that happens occurs ‘at’ or ‘within’ a ksana (the elementary temporal interval).
I propose to take things further ─ not in terms of personal experience but physical theory. As stated, there is in UET no such thing as ‘continuous motion’, only succession ─ a succession of stills. An event takes place here, then a ksana or more later, another event, its replica perhaps, takes place there. What matters is what occurs and the number and order of the events that occur, everything else is secondary. This means not only that ultimate events do not move around ─ they simply have occurrence where they do have occurrence ─  but also that the distances between the events are in a sense ‘neither here nor there’, to use the remarkably  apt everyday expression. In UET v signifies a certain number of grid-spaces to right or left of a fixed point, a shift that gets repeated every ksana (or in more complex cases with respect to more than one ksana). In the case of a truncated event-chain consisting of just two successive events, v is the same as d, the ‘lateral displacement’ of event 2 with respect to the position of event 1 on the Locality (more correctly, the ‘equivalent’ of such a position a ksana later). Now, although the actual number of ‘grid-positions’ to right or left of an identifiable spot on the Locality is fixed, and continues to be the same if we are dealing with a ‘regular’ event-chain, the distance between the centres (‘kernels’) of adjacent spots is not fixed but can take any number (sic) of permissible values ranging from 0 to c* according to the circumstances. The ‘distance’ from one spot to another can thus be reckoned in a variety of legitimate ways ─ though the choice is not ‘infinite’. The force of the Constancy of the Occupied Region Principle is that, no matter how these intra-event distances are measured or experienced, the overall ‘area’ remains the same and is equal to that of the ‘default’ case, that of a ‘stationary’ Event Capsule (or in the more extended case a succession of such capsules).
This is a very different conception from that which usually prevails within Special Relativity as it is understood and taught today. Discussing the question of the ‘true’ speed of a particular object, whose speed is different according to what co-ordinate system you use, the popular writer on mathematics Martin Gardner famously wrote, “There is no truth of the matter”. Although I understand what he meant, this is not how I would put it. Rather, all permissible ‘speeds’, i.e. all integral values of v, are “the truth of the matter”. And this does not lead us into a hopeless morass of uncertainty where “everything is relative” because, in contrast to ‘normal’ Special Relativity, there is in UET always a fixed framework of ultimate events whose number within a certain region of the Locality, and whose individual ‘size’, never changes. How we evaluate the distances between them, or more precisely between the spots where they can and do occur, is an entirely secondary matter (though often one of great interest to us humans).

Space contraction and Time dilation 

In most books on Relativity, one has hardly begun before being launched into what is pretty straightforward stuff for someone at undergraduate level but what is, for the layman, a completely indigestible mass of algebra. This is a pity because the physical principle at work, though it took the genius of Einstein to detect its presence, is actually extremely simple and can be presented much more conveniently geometrically than, as is usual today, algebraically. As far as I am concerned, space contraction and time dilation are facts of existence that have been shown to be true in any number of experiments : we do not notice them because the effects are very small at our perceptual level. Although it is probably impossible to avoid talking about ‘points of view’ and ‘relative states of motion’ and so forth altogether, I shall try to reduce such talk to a minimum. It makes a lot more sense to forget about hypothetical ‘observers’ (who most of the time do not and could not possibly exist) and instead envisage length contraction and time dilation as actual mechanisms which ‘kick in’ automatically, much as the centrifugal governor on Watt’s steam-engine kicks in to regulate the supply of heat and the consequent rate of expansion of the piston. See things like this, keep at the back of your mind a skeletal framework of ultimate events, and you won’t have too much trouble with the concepts of space contraction and time dilation. After all, why should the distances between events have to stay the same? It is like only being allowed to take photographs from a standing position. These distances don’t need to stay the same provided the overall area or extent of the ‘occupied region’ remains constant, since it is this, and the causally connected events within it, that really matters.
Take v to represent a certain number of grid-spaces in one direction which repeats; for our simple truncated event-chain of just two events it is d, the ‘distance’ between two spots. d is itself conceived as a multiple of the ‘intra-event distance’, that between the ‘kernels’ of any two adjacent ‘grid-positions’ in a particular direction. For any specific case, i.e. a given value of d or v, this ‘inter-possible-event’ distance does not change, and the specific extent of the kernel, where every ultimate event has occurrence if it does have occurrence, never changes. There is, as it were, a certain amount of ‘pulpy’, ‘squishy’ material (cf. cytoplasm in a cell) which surrounds the ‘kernel’ and which is compressible. So much for the ‘spatial’ part of the ‘Event Capsule’. The ‘temporal’ part, however, has no pulp but is ‘stretchy’, or rather the interval between ksanas is.
If the Constant Region Postulate is to work, we have somehow to arrange things so that, for a given value of v or d, the spatial and temporal distances sort themselves out so that the overall area nonetheless remains the same. How to do this? The following geometrical diagram illustrates one way of doing this by using the simple formula tan θ = v/c = sin φ. Here v is an integral number of grid-positions ─ the more complex case where v is a rational number will be considered in due course ─ and c is the lowest unattainable limit of grid-positions (in effect (c* + 1)).
Do these contractions and dilations ‘actually exist’ or are they just mathematical toys? As far as I am concerned, the ‘universe’ or whatever else you want to call what is out there, does exist and such simultaneous contractions and expansions likewise. Put it like this. The dimensions of loci (spots where ultimate events could in principle have occurrence) in a completely empty region of the Locality do not expand and contract because there is no ‘reason’ for them to do so : the default dimensions suffice. Even when we have two spots occupied by independent, i.e. completely disconnected,  ultimate events nothing happens : the ‘distances’ remain the ordinary stationary ones. HOWEVER, as soon as there are causal links between events at different spots, or even the possibility of such links, the network tightens up, as it were, and one can imagine causal tendrils stretching out in different directions like the tentacles of an octopus. These filaments or tendrils can and do cause contractions and expansions of the lattice ─ though there are definite elastic limits. More precisely, the greater the value of v, the more grid-spaces the causal influence ‘misses out’ and the more tilted the original rectangle becomes in order to preserve the same overall area.
We are for the moment only considering a single ‘Event Capsule’ but, in the case of a ‘regular event-chain’ with constant v ─ the equivalent of ‘constant straight-line motion’ in matter-based physics ─ we have  a causally connected sequence of more or less identical ‘Event Capsules’ each tilted from the default position as much as, but no more than, the last (since v is constant for this event-chain).
This simple schema will take us quite a long way. If we compare the ‘tilted’ spatial dimension to the horizontal one, calling the latter d and the former d′ we find from the diagram that d′ cos φ = d and likewise that t′ = t/cos φ . Don’t bother about the numerical values : they can be worked out  by calculator later.
These are essentially the relations that give rise to the Lorentz Transformations but, rather than state these formulae and get involved in the whole business of convertible co-ordinate systems, it is better for the moment to stay with the basic idea and its geometrical representation. The quantity noted cos φ, which depends on v and c and only on v and c, crops up a lot in Special Relativity. Using the Pythagorean Formula for the case of a right-angled triangle with hypotenuse of unit length, we have

(1 × cos φ)² + (1 × sin φ)² = 1²  or  cos² φ + sin² φ = 1
        Since sin φ is set at v/c we have
        cos² φ = 1 – sin² φ = 1 – (v/c)²        cos φ = √(1 – (v/c)²)

         More often than not, this quantity √(1 – (v/c)²) (referred to as 1/γ in the literature) is transferred over to the other side so we get the formula

         d′ = (1/cos φ) d  =  d/√(1 – (v/c)²)  =  γ d

Viewed as an angle, or rather the reciprocal of the cosine of an angle, the ubiquitous γ of Special Relativity is considerably less frightening.
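Since γ depends only on the ratio v/c, the algebra above is easy to check numerically. Here is a minimal sketch; the function name and the sample values of v and c are mine, not the post’s:

```python
import math

def gamma(v, c):
    """Lorentz factor 1/cos(phi), where sin(phi) = v/c as in the text.
    v is an integral number of grid-spaces per ksana with 0 <= v < c."""
    if not 0 <= v < c:
        raise ValueError("v must satisfy 0 <= v < c")
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# gamma(0, c) = 1 : the default 'stationary' case, where d' = d.
# For v/c = 3/5, cos(phi) = 4/5, so d' = gamma * d lengthens d by a factor of 5/4.
d = 8.0
d_prime = gamma(3, 5) * d   # approximately 10.0
```

Reading γ as the reciprocal of a cosine, as the text suggests, also explains its behaviour at a glance: cos φ shrinks towards zero as v approaches c, so γ grows without bound there and equals exactly 1 at v = 0.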

A Problem
It would appear that there is going to be a problem as d, or in the case of a repeating ‘rate’, v, approaches the limit c. Indeed, it was for this reason that I originally made a distinction between an attainable distance (attainable in one ksana), c*, and an unattainable one, c. Unfortunately, this does not eliminate all the difficulties but discussion of this important point will  be left to another post. For the moment we confine ourselves to ‘distances’ that range from 0 to c* and to integral values of d (or v).

Importance of the constant c* 

Now, it must be clearly understood that all sorts of ‘relations’ ─ perhaps correlations is an apter term ─ ‘exist’ between arbitrarily distant spots on the Locality (distant either spatially or temporally or both), but we are only concerned with spots that are either occupied by causally connected ultimate events, or could conceivably be so occupied. For event-chains with a 1/1 ‘reappearance rhythm’, i.e. one event per ksana, the relation tan θ = v/c = sin φ (v < c) applies (see diagram) and this means that grid-spots beyond the point labelled c (and indeed c itself) lie ‘outside’ the causal ‘Event Capsule’. Anything that I am about to deduce, or propose, about such an ‘Event Capsule’ in consequence does not apply to such points and the region containing them. Causality operates only within the confines of single ‘Event Capsules’ of fixed maximum size, and, by extension, connected chains of similar ‘Event Capsules’.
Within the bounds of the ‘Event Capsule’ the Principle of Constant Area applies. Any way of distinguishing or separating the spots where ultimate events can occur is acceptable, provided the setting is appropriate to the requirements of the situation. Distances are in this respect no more significant than, say, colours, because they do not affect what really matters : the number of ultimate events (or number of possible emplacements of ultimate events) between two chosen spots on the Locality, and the order of such events.
Now, suppose an ultimate event can simultaneously produce a clone just underneath the original spot, and also a clone as far as possible to the right. (I doubt whether this could actually happen but it is a revealing way of making a certain point.)
What is the least shift to the right or left? Zero. In such a case we have the default case, a ‘stationary’ event-chain, or a pair belonging to such a chain. The occupied area, however, is not zero : it is the minimal s0³ t0. Setting v = 0 in the formula d′ = (1/cos φ) d makes γ = 1/√(1 – (0/c)²) = 1, so there is no difference between d′ and d. (But it is not the formula that dictates the size of the occupied region, as physicists tend to think : it is the underlying reality that validates the formula.)
For any value of d, or, in the case of repetition of the same lateral distance at each ksana, any value of v, we tilt the rectangle by the appropriate amount, or fit this value into the formula. For v = 10 grid-spaces, for example, we will have a tilted Space/Time Rectangle with one side (10 cos φ) s0 and the other side (1/(10 cos φ)) t0, where sin φ = 10/c so cos φ = √(1 – (10/c)²). This is an equally valid space/time setting because the overall area is
         (10 cos φ) s0  ×  (1/(10 cos φ)) t0  =  s0 t0

We can legitimately apply any integral value of v < c and we will get a setting which keeps the overall area constant. However, this is done at a cost : the distances between the centres of the spatial elements of the event capsules shrink while the temporal distances expand. The default distance s0 has been shrunk to s0 cos φ, a somewhat smaller intra-event distance, and the default temporal interval t0 has been stretched to t0/cos φ, a somewhat greater interval. Remark, however, that sticking to integral values of d or v means that cos φ does not, as in ‘normal’ physics, run through an ‘infinite’ gamut of values ─ and even when we consider the more complex case, taking reappearance rhythms into account, v is never, strictly never, irrational.
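A quick numerical check of the v = 10 example above. The value of C below is an arbitrary stand-in for c, which in UET would be astronomically large, and the ‘rest’ dimensions are set to 1 for simplicity:

```python
import math

C = 100              # arbitrary stand-in for c, the lowest unattainable limit
S0, T0 = 1.0, 1.0    # default 'rest' dimensions s0 and t0 (arbitrary units)

v = 10
sin_phi = v / C                           # sin(phi) = v/c
cos_phi = math.sqrt(1 - sin_phi ** 2)     # cos(phi) = sqrt(1 - (v/c)^2)

side_spatial = (v * cos_phi) * S0         # (10 cos(phi)) * s0
side_temporal = (1 / (v * cos_phi)) * T0  # (1/(10 cos(phi))) * t0

# The tilted rectangle has the same area as the default one:
assert abs(side_spatial * side_temporal - S0 * T0) < 1e-12

# The per-capsule contraction/dilation preserves the area for every integral v < c:
for w in range(C):
    cp = math.sqrt(1 - (w / C) ** 2)
    assert abs((S0 * cp) * (T0 / cp) - S0 * T0) < 1e-12
```

Note how the check runs over the finite set of integral values 0 to c* = c – 1 only, mirroring the text’s insistence that cos φ never ranges over an ‘infinite’ gamut.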
What is the greatest possible lateral distance? Is there one? Yes, by Postulate 2 there is and this maximal number of grid-points is labelled c*. This is a large but finite number and is, in the case of integral values of v, equal to c – 1. In other words, a grid-space c spaces to the left or right is just out of causal range and everything beyond likewise (Note 2).

Dimensions of the Elementary Space Capsule

I repeat the two basic postulates of Ultimate Event Theory that are in some sense equivalent to Einstein’s two postulates. They are

1. The mixed Space/Time volume/area of the occupied parallelepiped/rectangle remains constant in all circumstances

2. There is an upper limit to the lateral displacement of a causally connected event relative to its predecessor in the previous ksana

Now suppose we have an ultimate event that simultaneously produces a clone at the very next ksana in an equivalent spot AND another clone at the furthest possible grid-point, c*. Taking things to a ridiculous extreme to make the point, suppose that a clone event is produced at every possible emplacement in between as well. By the Principle of the Constancy of the Occupied Region, the entire occupied line of events in the second ksana can either have the ‘normal’ spacing between events, that of the ‘rest’ distance between kernels, s0, or, alternatively, we may view the entire line as being squeezed into the dimensions of a single ‘rest’ capsule, of dimension s0 in each of three spatial directions (only one of which concerns us). In the latter case, the intra-event spacing will have shrunk to zero ─ though the precise region occupied by each ultimate event remains the same. Since intra-event distancing is really of no importance, either of these two opposed treatments is ‘valid’.
What follows is rather interesting: we have the spatial dimension of a single ‘rest’ Event Capsule in terms of su, the dimension of the kernel. Since, in this extreme case, we have c* events squashed inside a lateral dimension of s0, this means that s0 = c* su, i.e. s0 : su = c* : 1. But s0 and su are, by hypothesis, universal constants, and so is c*. Furthermore, since by definition sv tv = s0 t0 = Ω = constant, we have t0/tv = sv/s0 and, fitting in the ‘ultimate’ s value, t0/tu = su/(c* su) = 1/c*, i.e. tu = c* t0. In the case of ‘time’, the ‘ultimate’ dimension tu is a maximum since (by hypothesis) t0 is a minimum. c* is a measure of the extent of the elementary Event Capsule and this is why it is so important.
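The extreme-case ratios just derived (s0 = c* su and tu = c* t0) can be sanity-checked in a few lines. A sketch only: the value of c* is an illustrative placeholder, since the text leaves its magnitude open.

```python
# Illustrative check of the extreme-case ratios; c_star is hypothetical.
c_star = 1000
s_u, t_0 = 1.0, 1.0          # kernel width and minimal ksana taken as units
s_0 = c_star * s_u           # s0 = c* su  (maximal spatial side)
t_u = c_star * t_0           # tu = c* t0  (maximal temporal side)

omega = s_0 * t_0            # the invariant Omega = s0 * t0
# at the opposite extreme the same area is recovered: su * tu = Omega
assert s_u * t_u == omega
```

The single assertion verifies that the capsule squeezed to kernel width but stretched to the maximal ksana still occupies the same area Ω.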
In UET everything is, during the space of a single ksana, at rest: problems of motion in normal matter-based physics become problems of statics in UET ─ in effect I am picking up the lead given by the ancient Greek physicists, for whom statics was all and infinity non-existent. Anticipating the discussion of mass in UET, or its equivalent, this interpretation ‘explains’ the tremendously increased resistance of a body to (relative) acceleration, something that Bucherer and others have demonstrated experimentally. This resistance is not the result of some arbitrary “You mustn’t go faster than light” law: it is the resistance of a region on the Locality of fixed extent to being crammed full to bursting with ultimate events. And it does not matter if the emplacements inside a single Event Capsule are not actually filled: these emplacements, the ‘kernels’, cannot be compressed whether occupied or not. But an event occurring at the maximum number of places to the right is going to put the ‘Occupied Region’ under extreme pressure, to say the least. In another post I will speculate as to what happens if c* is exceeded, supposing this to be possible.     SH    9/3/14

Notes:

Note 1  Zeno of Elea noted the ‘relativity of speed’ about two and a half thousand years before Einstein. In his “Paradox of the Chariot”, the least known of his paradoxes, Zeno asks what is the ‘true’ speed of a chariot engaged in a chariot race. A particular chariot has one speed with respect to its nearest competitor, another compared to the slowest chariot, and a completely different one again relative to the spectators. Zeno concluded that “there was no true speed” ─ I would say, “no single true speed”.

Note 2  The observant reader will have noticed that when evaluating sin φ = v/c and thus, by implication, cos φ as well, I have used the ‘unattainable’ limit c while restricting v to the values 0 to c*, thus stopping 1/cos φ from becoming infinite. Unfortunately, this finicky distinction, which makes actual numerical calculations much more complicated,  does not entirely eliminate the problem as v goes to c, but this important issue will be left aside for the moment to be discussed in detail in a separate post.
If we allow only integral values of v ranging from 0 to c* = (c – 1), the final tilted Causal Rectangle has a ludicrously short ‘spatial side’ and a ridiculously long ‘temporal side’ (which means there is an enormous gap between ksanas). We have in effect

tan θ = (c – 1)/c  (i.e. the angle is nearly 45 degrees or π/4)
and γ = 1/√(1 – (c – 1)²/c²) = c/√(c² – (c – 1)²) = c/√(2c – 1)
Now, 2c – 1 is very close to 2c, so γ ≈ c/√(2c) = √(c/2)
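The approximation γ ≈ √(c/2) at the maximal displacement v = c – 1 is easy to test numerically. A sketch under the assumption of a large, purely illustrative value for c:

```python
import math

def gamma(v, c):
    """The factor 1/sqrt(1 - (v/c)^2) appearing in the text."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 10**6                    # illustrative, hypothetical magnitude for c
exact = gamma(c - 1, c)      # equals c / sqrt(2c - 1)
approx = math.sqrt(c / 2)    # the approximation for large c
# the relative error of the approximation shrinks as c grows
assert abs(exact - approx) / exact < 1e-3
```

For c of this size the exact and approximate values agree to better than one part in a thousand, which is the point of replacing 2c – 1 by 2c.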

I am undecided as to whether any particular physical importance should be given to this value ─ possibly experiment will decide the issue one day.
In the event of v taking rational values (which requires a re-appearance rhythm other than 1/1), we get even more outrageous ‘lengths’  for sv and tv . In principle, such an enormous gap between ksanas, viewed from a vantage-point outside the speeding event-chain, should become detectable by delicate instruments and would thus, by implication, allow us to get approximate values for c and c* in terms of the ‘absolute units’ s0 and t0 . This sort of experiment, which I have no doubt will be carried out in this century, would be the equivalent in UET of the famous Millikan ‘oil-drop’ series of experiments that gave us the first good value of e, the basic unit of charge.

In Ultimate Event Theory (UET) the basic building-block of physical reality is not the atom or elementary particle (or the string, whatever that is) but the ultimate event enclosed by a four-dimensional ‘Space/Time Event-capsule’. This capsule has fixed extent s³t = s0³t0, where s0 and t0 are universal constants, s0 being the maximum ‘length’ of s, the ‘spatial’ dimension, and t0 being the minimal ‘length’ of t, the basic temporal interval or ksana. Although s³t = s0³t0 = Ω (a constant), s and t can and do vary, though they have maximum and minimum values (as does everything in UET).
All ultimate events are, by hypothesis, of the same dimensions, or better, they occupy a particular spot on the Event Locality, K0, whose dimensions do not change (Note 1). The spatial region occupied by an ultimate event is very small compared to the dimensions of the ‘Event capsule’ that contains it and, as was demonstrated in the previous post (Causality and the Upper Limit), the ratio of ‘ultimate event volume’ to ‘capsule volume’, su³/s0³, is 1 : (c*)³, and of single dimension to single dimension 1 : c* (where c* is the space/time displacement rate of a causal impulse (Note 2)). Thus s³ varies from a minimum value su³, the exact region occupied by an ultimate event, to a maximum value of s0³ where s0 = c* su. In practice, when the direction of a force or velocity is known, we only need bother about the ‘Space/Time Event Rectangle’ st = constant, but we should not forget that this is only a matter of convenience: the ‘event capsule’ cannot be decomposed and always exists in four dimensions (possibly more).

Movement and ‘speed’ in UET     If by ‘movement’ we mean change, then obviously there is movement on the physical level unless all our senses are in error. If, however, by ‘movement’ we are to understand ‘continuous motion of an otherwise unchanging entity’, then, in UET, there is no movement. Instead there is succession: one event-capsule is succeeded by another with the same dimensions. The idea of ‘continuous motion’ is thus thrown into the trash-can along with the notion of ‘infinity’ with which it has been fatally associated because of the conceptual underpinning of the Calculus. It is admittedly difficult to avoid reverting to traditional science-speak from time to time, but I repeat that, strictly speaking, in UET there is no ‘velocity’ in the usual sense: instead there is a ‘space/time ratio’ which may remain constant, as in a ‘regular’ event-chain, or may change, as in an ‘irregular’ (accelerated) event-chain. For the moment we will restrict ourselves to regular event-chains and, amongst regular event-chains, only those with a 1/1 reappearance rate, i.e. where the constituent ultimate events recur at each ksana.
An event-chain is a bonded sequence of events which in its simplest form is simply a single repeating ultimate event. We associate with every event-chain an ‘occupied region’ of the Locality constituted by the successive ‘event-capsules’. This region is always increasing since, at any ksana,  any ‘previous spots’ occupied by members of the chain remain occupied (cannot be emptied). This is an important feature of UET and depends on the Axiom of Irreversibility which says that once an event has occurrence on the Locality there is no way it can be removed from the Locality or altered in any way. This property of ‘irreversible occurrence’ is, if you like, the equivalent of entropy in UET since it is a quantity that can only increase ‘with time’.
So, if we have two regular event-chains, a and d , each with the same 1/1 re-appearance rhythm, and which emanate from the same spot (or from two adjacent spots), they define a ‘Causal Four-Dimensional Parallelipod’ which increases in volume at each ksana. The two event-chains can be represented as two columns, one  strictly vertical, the other slanting, since we only need to concern ourselves with the growing Space-Time Rectangle.

•

•    •

•    •    •

•    •    •    •

The two bold dotted lines (black and red) thus define the limits of the ‘occupied region’ of the Locality, although these ‘guard-lines’ of ultimate events, standing there like sentinels, are not capable of preventing other events from occurring within the region whose extreme limits they define. Possible emplacements for ultimate events not belonging to these two chains are marked by grey points. The red dotted line may be viewed as displacing itself by so many spaces to the right at each ksana (relative to the vertical column). If we take the vertical distance from bold black dot to dot to represent t0, the ‘length’ of a single ksana (the smallest possible temporal interval), and the distance between neighbouring dots in a single row to be s0, then, if there are v spaces in a row (numbered 0, 1, 2 ….. v), we have a Space/Time Event Rectangle of v s0 × 1 t0, the ‘Space/time ratio’ being v grid-spaces per ksana.

It is important to realize what v signifies. ‘Speed’ (undirected velocity) is not a fundamental unit in the Système Internationale but a ratio of the fundamental SI units of spatial distance and time. For all that, v is normally conceived today as an independent  ‘continuous’ variable which can take any possible value, rational or irrational, between specified limits (which are usually 0 and c). In UET v is, in the first instance, simply a positive integer  which indicates “the number of simultaneously existing neighbouring spots on the Event Locality where ultimate events can have occurrence between two specified spots”. Often, the first spot where an ultimate event does or can occur is taken as the ‘Origin’ and the vth spot in one direction (usually to the right) is where another event has occurrence (or could have). The spots are thus numbered from 0 to v where v is a positive integer. Thus

0      1      2       3      4       5                v
•       •       •       •       •       • ………….•     

There are thus v intervals, the same number as the label applied to the final event ─ though, if we include the very first spot, there are (v + 1) spots in all where ultimate events could have (had) occurrence. This number, since it applies to ultimate events and not to distances or forces or anything else, is absolute.
      A secondary meaning of v is: the ratio of lateral spatial displacement compared to ‘vertical’ temporal displacement. In the simplest case this ratio will be v : 1 where the ‘rest’ values s0 and t0 are understood. This is the nearest equivalent to ‘speed’ as most of you have come across it in physics books (or in everyday conversation). But, at the risk of seeming pedantic, I repeat that there are (at least) two significant differences between the meaning of v in UET and that of v in traditional physics. In UET, v is (1) strictly a static space/time ratio (when it is not just a number) and (2) it cannot ever take irrational values (except in one’s imagination). If we are dealing with event-chains with a 1/1 reappearance rate, v is a positive integer, but the meaning can be intelligibly extended to include m/n where m, n are integers. Thus v = m/n spaces/ksana would mean that successive events in an event-chain are displaced laterally by m spaces every nth ksana. But, in contrast to ‘normal’ usage, there is no such thing as a displacement of m/n spaces per (single) ksana. For both the ‘elementary’ spatial interval, the ‘grid-space’, and the ksana are irreducible.
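This reading of v = m/n (m whole grid-spaces every nth ksana, never a fractional displacement within a ksana) can be modelled as an integer-valued position sequence. A throwaway sketch; the helper name is mine, not the text's:

```python
def chain_positions(m, n, ksanas):
    """Grid positions of an event-chain displaced m grid-spaces every
    n-th ksana (rate m/n). Positions are always whole numbers: there
    is no fractional displacement inside a single ksana."""
    pos, out = 0, []
    for k in range(1, ksanas + 1):
        if k % n == 0:       # the lateral jump happens only every n-th ksana
            pos += m
        out.append(pos)
    return out

# v = 4/7: four grid-spaces every seventh ksana
print(chain_positions(4, 7, 14))
# -> [0, 0, 0, 0, 0, 0, 4, 4, 4, 4, 4, 4, 4, 8]
```

Note that the chain sits still for six ksanas and then jumps four whole spaces, which is exactly the distinction the text draws between a rate of 4/7 and a (non-existent) displacement of 4/7 of a grid-space per ksana.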
One might suppose that the ‘distance’ from the 0th  to the vth spot does not change ─ ‘v is v’ as it were. However, in UET, ‘distance’ is not an absolute but a variable quantity that depends on ‘speed’ ─ somewhat the reverse of how we see things in our everyday lives since we usually think of distances between spots as fixed but the time it takes to get from one spot to the other as variable.

The basic ‘Space-Time Rectangle’ st can be represented thus

[Diagram: Relativity cos diagram]

Rectangle   s0 × t0  =  (s0 cos φ) × (t0/cos φ)
where PR cos φ = t0

sv = s0 cos φ        tv = t0/cos φ
sv/s0 = cos φ        tv/t0 = 1/cos φ
s0/t0 = tan β = constant
tv² = t0² + v²s0²
v²s0² = t0²((1/cos²φ) – 1)
s0²/t0² = tan²β = (1/v²)((1/cos²φ) – 1) = (1/v²) tan²φ

tan β = s0/t0 = (tan θ)/(v cos φ)      since sin φ = tan θ = v/c

v = (tan θ)/(tan β cos φ)

 

So we have s = s0 cos φ, where φ ranges from 0 to the highest possible value that gives a non-zero length, in effect that value of φ for which s0 cos φ = s0/c* = su. What is the relation of s to v? If sv is the spacing associated with the ratio v, and dependent on it, we have sv = s0 cos φ and so sv/s0 = cos φ. So cos φ is the ‘shrink factor’ which, when applied to any distance reckoned in terms of s0, converts it by changing the spacing. The ‘distance’ between two spots on the Locality is composed of two parts. Firstly, there is the number of intermediate spots where ultimate events can/could have/had occurrence, and this ‘Event-number’ does not change, ever. Secondly, there is the spacing between these spots, which has a minimum value su, which is just the diameter of the exact spot where an ultimate event can occur, and a maximum value s0, which is the diameter of the Event capsule and thus the maximum distance between one spot where an ultimate event can have occurrence and the nearest neighbouring spot. The spacing varies according to v, and it is now incumbent on us to determine the ‘shrink factor’ in terms of v.
The spacing s is dependent on v, so s = g(v). It decreases as v increases and is at a maximum when v = 0. One might make a first guess that the shrink factor will be of the form 1 – f(v)/h(c), where f(v) ranges from 0 to h(c). The simplest such function is just (1 – v/c).
As stated, v in UET can only take rational values since it denotes a specific integral number of spots on the Locality. There is a maximum number of such spots between any two ultimate events or their emplacements, namely c – 1 such spots if we label spot A as 0 and spot B as c. (If we make the second spot c + 1, we have c intermediate positions.) Thus v = c/m where m is a rational number. If we are concerned only with event-chains that have a 1/1 reappearance ratio, i.e. one event per ksana, then m is an integer. So v = c/n.

We thus have tan θ = v/c, where v varies from 0 to c* = (c – 1) (since in UET a distinction is made between the highest attainable space/time displacement ratio and the lowest unattainable ratio c).
So 0 ≤ θ < π/4 ─ since tan π/4 = 1. These are the only permissible values for tan θ.

[Diagram: Relativity tangent diagram]

If now we superimpose the ‘v/c’ triangle above on the st Rectangle to the previous diagram we obtain

[Diagram: Relativity Circle Diagram (tan, sin)]

 

Thus tan θ = sin φ, which gives
                cos φ = √(1 – sin²φ) = √(1 – (v/c)²)

This is more complicated than our first guess, cos φ = 1 – (v/c), but it has the same desired features: it goes to cos φ = 1 as v goes to zero and the shrinkage is greatest as v approaches c.
(At v = c – 1 the factor is (1/c)√(2c – 1) ≈ √2/√c.)
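The derived factor and the first guess can be compared directly. A small sketch, with an illustrative value for c, showing that both factors equal 1 at v = 0 but that √(1 – (v/c)²) shrinks distances less severely than 1 – v/c at every intermediate v:

```python
import math

def shrink_exact(v, c):
    """Derived shrink factor: cos(phi) = sqrt(1 - (v/c)^2)."""
    return math.sqrt(1.0 - (v / c) ** 2)

def shrink_guess(v, c):
    """The text's first guess: 1 - v/c."""
    return 1.0 - v / c

c = 100                      # illustrative, hypothetical magnitude for c
for v in range(c):
    # the exact factor always lies at or above the linear guess
    assert shrink_exact(v, c) >= shrink_guess(v, c)
assert shrink_exact(0, c) == 1.0
```

The inequality sqrt(1 – x²) ≥ 1 – x on 0 ≤ x ≤ 1 is what the loop verifies; the two candidate factors agree only at the endpoints.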


         cos φ = √(1 – (v/c)²) is thus a ‘shrinkage factor’ which is to be applied to all lengths within event-chains that are in lateral motion with respect to a ‘stationary’ event-chain. Students of physics will, of course, recognize this factor as the famous ‘Fitzgerald contraction’ of all lengths and distances along the direction of motion within an ‘inertial system’ that is moving at constant speed relative to a stationary one (Note 3).

Parable of the Two Friends and Railway Stations

It is important to understand what exactly is happening. As all books on Relativity emphasize, the situation is exactly symmetrical. An observer in system A would judge all distances within system B to be ‘contracted’, but an observer situated within system B would think exactly the same about the distances in system A. This symmetry is a consequence of Einstein’s original assumption that ‘the laws of physics take the same form in all inertial frames’. In effect, this means that one inertial frame is as good as any other, because if we could distinguish between two frames, for example by carrying out identical mechanical or optical experiments, the two frames would not be equivalent with respect to their physical behaviour. (In UET, ‘relativity’ is a consequence of the constancy of the area on the Locality occupied by the Event-capsule, whereas Minkowski deduced an equivalent principle from Einstein’s assumption of relativity.)
As an illustration of what is at stake, consider two friends undertaking train journeys from a station which spans the frontier between two countries. Each train will stop at exactly the same number of stations, say 10, and both friends are assured that the stations are ‘equally spaced’ along each line. The friends start at the same time in Grand Central Station but go to platforms which take passengers to places in different countries.
On each journey there are to be exactly ten stops (including the final destination) of the same duration, and the friends are assured that the two trains will run at the ‘same’ constant speed. The two friends agree to get off at the tenth station along their respective lines and then compare their positions relative to each other. The tracks are straight and close to the border, so it is easy to compare the location of one station with another.
Friend B will thus be surprised if he finds that friend A has travelled a lot further when they both get off at the tenth station. He might conclude that the tracks were not straight, that the trains were dissimilar or that they didn’t keep to the ‘same’ speed. He might even conclude that, even though the distances between stations as marked on a map were the same for both countries, say 20 kilometres, the map-makers had got it wrong. However, the problem would be cleared up if the two friends eventually learned that, although the two countries assessed distances in metres, the standard metre in the two countries was not the same. (This could not happen today, but in the not so distant past measurements of distance, often employing the same terms, did differ not only from one country to another but, at any rate within the Holy Roman Empire, from one ‘free town’ to another. A Leipzig ‘metre’ (or other basic unit of length) was thus not necessarily the same as a Basle one. It was only with the advent of the French Revolution and the Napoleonic system that weights and measures were standardized throughout most of Europe.)

    This analogy is far from exact but makes the following point. On each journey, there are exactly the same number of stops, in this case 10, and both friends would agree on this. There is no question of a train in one country stopping at a station which did not exist for the friend in the other country. The trouble comes because of the spacing between stations which is not the same in the two countries, though at first sight it would appear to be because the same term is used.
    The stops correspond to ultimate events: their number and the precise region they occupy on the Locality is not relative but absolute. The ‘distance’ between one event and the next is, however, not absolute but varies according to the way you ‘slice’ the Event capsules and the region they occupy, though there is a minimum distance which is that of a ‘rest chain’. As Rosser puts it, “It is often asked whether the length contraction is ‘real’? What the principle of relativity says is that the laws of physics are the same in all inertial frames, but the actual measures of particular quantities may be different in different systems” (Note 4).

Is the contraction real?  And, if so,  why is the situation symmetrical? 

   What is not covered in the train journey analogy is the symmetry of the situation. But if the situation is symmetrical, how can there be any observed discrepancy?
This is a question frequently asked by students and quite rightly so. The normal way of introducing Special Relativity does not, to my mind, satisfactorily answer the question. First of all, rest assured that the discrepancy really does exist : it is not a mathematical fiction invented by Einstein and passed off on the public by the powers that be.
μ mesons from cosmic rays hitting the atmosphere get much farther than they ought to — some even get close to sea level before decaying. Distance contraction explains this and, as far as I know, no other theory does. From the point of view of UET, the μ meson is an event-chain and, from its inception to its ‘decay’, there is a finite number of constituent ultimate events. This number is absolute and has nothing to do with inertial frames or relative velocities or anything else you like to mention. We, however, do not see these occurrences and cannot count the number of ultimate events — if we could, there would be no need for Special Relativity or UET. What we do register, albeit somewhat imprecisely, is the first and last members of this (finite) chain: we consider that the μ meson ‘comes into existence’ at one spot and goes out of existence at another spot on the Locality (‘Space/Time’ if you like). These events are recognizable to us even though we are not moving in sync with the μ meson (or at rest compared to it). But as for the distance between the first and last event, that is another matter. For the μ meson (and for us, if we were in sync with it) there would be a ‘rest distance’ counted in multiples of s0 (or su). But since we are not in sync with the meson, these distances are (from our point of view) contracted — but not from the meson’s ‘point of view’. We have thus to convert ‘his’ distances back into ours. Now, for the falling μ meson, the Earth is moving upwards at a speed close to that of light, and so the Earth distances are contracted. If the μ meson covers n units of distance in its own terms, this corresponds to rather more in our terms. The situation is somewhat like holding n strong dollars as against n debased dollars. Although the number of dollars remains the same, what you can get with them is not: the strong dollars buy more essential goods and services.
Thus, when converting back to our values we must increase the number. We find, then, that the meson has fallen much farther than expected though the number of ultimate events in its ‘life’ is exactly the same. We reckon, and must reckon, in our own distances which are debased compared to that of a rest event-chain. So the meson falls much farther than it would travel (longitudinally) in a laboratory. (If the meson were projected downwards in a laboratory there would be a comparable contraction.) This prediction of Special relativity has been confirmed innumerable times and constitutes the main argument in favour of its validity.
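The conversion described above is, in standard Special Relativity, just the gamma factor applied to the naive range of the muon. A sketch of the textbook arithmetic, using conventional illustrative values (mean muon lifetime of about 2.2 microseconds, speed around 0.998c) rather than anything asserted in the post itself:

```python
import math

# Textbook muon numbers, used here purely as illustrative inputs:
tau = 2.2e-6          # mean muon lifetime in its own rest frame (seconds)
v_frac = 0.998        # speed as a fraction of c (typical cosmic-ray value)
c = 3.0e8             # speed of light (m/s)

gamma = 1.0 / math.sqrt(1.0 - v_frac ** 2)

naive_range = v_frac * c * tau        # without relativity: roughly 660 m
observed_range = gamma * naive_range  # with the gamma-factor conversion

# the corrected range is more than an order of magnitude larger,
# enough to carry the muon far down into the atmosphere
assert observed_range > 10 * naive_range
```

With these inputs gamma comes out near 16, turning a naive range of a few hundred metres into roughly ten kilometres, which is why muons created high in the atmosphere are detected near sea level.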
From the point of view of UET, what has been lost (or gained) in distance is gained (or lost) in ‘time’, since the area occupied by the event capsule or event capsules remains constant (by hypothesis).  The next post will deal with the time aspect.        SH  1 September 2013

 

Note 1  An ultimate event is, by definition, an event that cannot be further decomposed. To me, if something has occurrence, it must have occurrence somewhere, hence the necessity of an Event Locality, K0, whose function is, in the first instance, simply to provide a ‘place’ where ultimate events can have occurrence and, moreover, to stop them from merging. However, as time went on I found it more natural and plausible to consider an ultimate event, not as an entity in its own right, but rather as a sort of localized ‘distortion’ or ‘condensation’ of the Event Locality. Thus attention shifts from the ultimate event as primary entity to that of the Locality. There has been a similar shift in Relativity from concentration on isolated events and inertial systems (Special Relativity) to concentration on Space-Time itself. Einstein, though he pioneered the ‘particle/finitist’ approach ended up by believing that ‘matter’ was an illusion, simply being “that part of the [Space/Time] field where it is particularly intense”. Despite the failure of Einstein’s ‘Unified Field Theory’, this has, on the whole,  been the dominant trend in cosmological thinking up to the present time.
But today, Lee Smolin and others, reject the whole idea of ‘Space/Time’ as a bona fide entity and regard both Space and Time as no more than “relations between causal nodes”. This is a perfectly reasonable point of view which in its essentials goes back to Leibnitz, but I don’t personally find it either plausible or attractive. Newton differed from Leibnitz in that he emphatically believed in ‘absolute’ Space and Time and ‘absolute’ motion ─ although he accepted that we could not determine what was absolute and what was relative with current means (and perhaps never would be able to). Although I don’t use this terminology I am also convinced that there is a ‘backdrop’ or ‘event arena’ which ‘really exists’ and which does in effect provide ‘ultimate’ space/time units. 

Note 2. Does m have to be an integer? Since all ‘speeds’ are integral grid-space/ksana ratios, it would seem at first sight that m must be integral since c  (or c*) is an exact number of grid-spaces per ksana and v = (c*/m). However, this is to neglect the matter of reappearance ratios. In a regular event-chain with a 1/1 reappearance ratio, m would have to be integral ─ and this is the simplified event-chain we are considering here. However, if a certain event-chain has a space/time ratio of 4/7 , i.e. there is a lateral displacement of 4 grid-spaces every 7  ksanas, this can be converted to an ‘ideal’ unitary rate of 4/7 sp/ks.
In contemporary physics space and time are assumed to be continuous, so any sort of ‘speed’ is possible. However, in UET there is no such thing as a fractional unitary rate, e.g. 4/7th of a grid-space per ksana, since grid-spaces cannot be chopped up into anything smaller. An ‘ideal’ fractional rate per ksana is intelligible but it does not correspond to anything that actually takes place. Also, although a rate of m/n is intelligible, all rates, whether simple or ideal, must be rational numbers ─ irrational numbers are man-made conventions that do not represent anything that can actually occur in the real world.

Note 3  Rosser continues :
     “For example, in the example of the game of tennis on board a ship going out to sea, it was reasonable to find that the velocity of the tennis ball was different relative to the ship than relative to the beach. Is this change of velocity ‘real’? According to the theory of special relativity, not only the measures of the velocity of the ball relative to the ship and relative to the seashore will be different, but the measures of the dimensions of the tennis court parallel to the direction of relative motion and the measures of the times of events will also be different. Both the reference frames at rest relative to the beach and to the ship can be used to describe the course of the game and the laws of physics will be the same in both systems, but the measures of certain quantities will differ.”                          W.G.V. Rosser, Introductory Relativity

 

 

Almost every schoolboy these days has heard of the Lorentz transformations which replace the Galilean transformations in Special Relativity. They are basically a means of dealing with the relative motion of two bodies with respect to two orthogonal co-ordinate systems. Lorentz first developed them in an ad hoc manner, somewhat out of desperation, in order to ‘explain’ the null result of the Michelson-Morley experiment and other puzzling experimental results. Einstein, in his 1905 paper, derived them from first principles and always maintained that he did not at the time even know of Lorentz’s work. What were Einstein’s assumptions?

  1.  The laws of physics take the same form in all inertial frames.
  2.  The speed of light in free space has the same value for all observers in inertial frames irrespective of the relative motion of the source and the observer.

As has since been pointed out, Einstein did, in fact, assume rather more than this. For one thing, he assumed that ‘free space’ is homogeneous and isotropic (the same in all directions) (Note 1). A further assumption that Einstein seems to have made is that ‘space’ and ‘time’ are continuous ─ certainly all physicists at the time assumed this without question, and the wave theory of electro-magnetism required it, as Maxwell was aware. However, the continuity postulate does not seem to have played much of a part in the derivation of the equations of Special Relativity, though it did stop Einstein’s successors from thinking in rather different ways about ‘Space/Time’. Despite everything that has happened, the success of Quantum Mechanics, the photo-electric effect and all the rest of it, practically all students of physics think of ‘space’, ‘time’ and electro-magnetism as being ‘continuous’ rather than made up of discrete bits, especially since Calculus is primarily concerned with ‘continuous functions’. Since nothing in the physical world is continuous, Calculus is in the main a false model of reality.

Inertial frames, which play such a big role in Special Relativity, as it is currently taught, do not exist in Nature : they are entirely man-made. It was essentially this realisation that motivated Einstein’s decision to try to formulate physics in a way that did not depend on any particular co-ordinate system whatsoever. Einstein assumed relativity and the constancy of the speed of light and independently deduced the Lorentz  transformations. This post would be far too long if I went into the details of Special Relativity (I have done this elsewhere) but, for the sake of the general reader, a brief summary can and should be given. Those who are familiar with Special Relativity can skip this section.

The Lorentz/Einstein Transformations     Ordinary people sometimes find it useful, and physicists find it indispensable, to locate an object inside a real or imaginary three-dimensional box. Then, if one corner of the box (e.g. the room of a house, a railway carriage &c.) is taken as the Origin, the spot to which everything else is related, we can pinpoint an object by giving its distance from this corner/Origin, either directly or in terms of three directions. That is, we say the object is so many spaces to the right along the ground, so many spaces along the ground at right angles to this, and so many spaces upwards. These are the three co-ordinate axes x, y and z. (They do not actually need to be at right angles but usually they are and we will assume this.)

Also, if we are locating an event rather than an object, we will need a fourth specification, a ‘time’ co-ordinate telling us when such and such an event happened. For example, if a balloon is floating around the room, to pinpoint it at a particular time it would not be sufficient to give its three spatial co-ordinates, we would need to give the precise time as well. Despite all the hoo-ha, there is nothing in the least strange or paradoxical about us living in a ‘four-dimensional universe’. Of course, we do : the only slight problem is that the so-called fourth dimension, time, is rather different from the other three. For one thing, it seems to have only one possible direction instead of two; also, the three ‘spatial’ directions are much more intimately connected to each other than they are to the ‘time’ dimension. A single unit serves for the first three, say the metre, but for the fourth we need a completely different unit, the second, and we cannot ‘see’ or ‘touch’ a second whereas we can see and touch a metre rod or ruler.
Now, suppose we have a second ‘box’ moving within the original box in a single direction at a constant speed. We select the x axis for the direction of motion. Now, an event inside the smaller box, say a pistol shot, also takes place within the larger box : one could imagine a man firing from inside the carriage of a train while it has not yet left the station. If we take the corner of the railway carriage to be the origin, clearly the distance from where the shot was fired to the railway carriage origin will be different from the distance to the buffers of the station. In other words, relative to the railway carriage origin, the distance is less than the distance to the buffers. How much less? Well, that depends on the speed of the train as it pulls out. The difference will be the distance the train has covered since it pulled out. If the train pulls out at a constant speed of 20 metres/second and there has been a lapse of, say, 4 seconds, the difference will be 80 metres. More generally, the difference will be vt where t starts at 0 and is counted in seconds. So, supposing the distance relative to the buffers is x, the distance relative to the railway carriage is x – vt, a rather lesser distance.
Everything else, however, remains the same. The time in the railway carriage is the same as what is marked on the station clock. And, if there is only displacement in one dimension, the other co-ordinates don’t change : the shot is fired from a metre above ground level, for example, in both systems, and so many spaces in from the near side in both systems. This all seems commonsensical and, putting it in formal mathematical language, we have the so-called Galilean Transformations

x′ = x – vt     y′ = y     z′ = z     t′ = t
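As a quick sketch, the Galilean rule can be written out in a few lines of code. The numbers are the train example from the text, with an assumed 100 metre distance from the shot to the buffers (the text does not specify one):

```python
# A minimal sketch of the Galilean transformations above: the 'primed'
# frame (the railway carriage) moves at constant speed v along the x axis
# of the unprimed frame (the station).

def galilean(x, y, z, t, v):
    """Map an event's station co-ordinates to carriage co-ordinates."""
    return (x - v * t, y, z, t)   # only x changes; y, z and t are untouched

# The pistol-shot example: train doing 20 m/s, 4 seconds after pulling out.
# A shot fired 100 m from the buffers is 100 - 20*4 = 20 m from the carriage origin.
event = galilean(100.0, 1.0, 2.0, 4.0, 20.0)
```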

All well and good and nobody before the dawn of the 20th century gave much more thought to the matter. Newton was somewhat puzzled as to whether there was such a thing as ‘absolute distance’ and ‘absolute time’, hence ‘absolute motion’, and though he believed in all three, he accepted that, in practice, we always had to deal with relative quantities, including speed.
If we consider sound in a fluid medium such as air or water, the ‘speed’ at which the disturbance propagates differs markedly depending on whether you are yourself stationary with respect to the medium or in motion, in a motor-boat for example. Even if you are blind, or close your eyes, you can tell whether a police car is moving towards or away from you by the pitch of the siren, the well-known Doppler effect. The speed of sound is not absolute but depends on the relative motion of the source and the observer. There is something a little unsettling in the idea that an object does not have a single ‘speed’ full stop, but rather a variety of speeds depending on where you are and how you are yourself moving. However, this is not too troublesome.
What about light? In the late 19th century it was viewed as a disturbance rather like sound that was propagated in an invisible medium, and so it also should have had a variable speed depending on one’s own state of motion with respect to this background, the ether. However, no differences could be detected. Various methods were suggested, essentially to make the figures come right, but Einstein cut the Gordian knot once and for all and introduced as an axiom (basic assumption) that the speed of light in a vacuum (‘free empty space’) was fixed and completely independent of the observer’s state of motion. In other words, c, the speed of light, was the same in all co-ordinate systems (provided they were moving at a constant speed relative to each other). This sounded crazy and brought about a completely different set of ‘transformations’, known as the Lorentz Transformations, although Einstein derived them independently from his own first principles. This derivation is given by Einstein himself in the Appendix to his ‘popular’ book “Relativity : The Special and General Theory”, a book which I heartily recommend. Whereas physicists today look down on books which are intelligible to the general reader, Einstein himself, who was not a brilliant student at university (he got the lowest physics pass of his year) and was, unlike Newton, not a particularly gifted pure mathematician, took the writing of accessible ‘popular’ books extremely seriously. Einstein is the author of the staggering put-down, “If you cannot state an issue clearly and simply, you probably don’t understand it”.
If we use the Galileian Transformations and set v = c, the speed of light (or any form of electro-magnetism) in a vacuum, we have x = ct or, with x in metres and t in seconds, x = 3 × 10⁸ metres (approximately) when t = 1 second. Transferring to the other co-ordinate system, which is moving at v metres/sec relative to the first, we have x′ = x – vt and, since t′ is the same as t, dividing we obtain for x′/t′ the value (x – vt)/t = (x/t) – v = c – v, a somewhat smaller speed than c. This is exactly what we would expect if dealing with a phenomenon such as sound in a fluid medium. However, Einstein’s postulate is that, when dealing with light, the ratio distance/time is constant in all inertial frames, i.e. in all real or imaginary ‘boxes’ moving in a single direction with a constant difference in their speeds.

One might doubt whether it is possible to produce ‘transformations’ that do keep c the same for different frames. But it is. We need not bother about the y and z co-ordinates because they stay the same ─ we can arrange to set both at zero if we are just considering an object moving along in one direction. However, the x and t equations are radically changed. In particular, it is not possible to set t′ = t, meaning that ‘time’ itself (whatever that means) will be modified when we switch to the other frame. The equations are

         x′ = γ(x – vt)     t′ = γ(t – vx/c²)     where γ = 1/√(1 – v²/c²)

The reader unused to mathematics will find them forbidding and they are indeed rather tiresome to handle, though one gets used to them. If you take the ratio x′/t′ you will find ─ unless you make a slip ─ that, using the Lorentz Transformations, you eventually obtain c as desired.

We have x = ct, or t = x/c, and the Lorentz Transformations

                    x′ = γ(x – vt)     t′ = γ(t – vx/c²)     where γ = 1/√(1 – v²/c²)

Then

         x′/t′ = γ(x – vt) / γ(t – vx/c²)
               = (x – vt) / (t – vx/c²)
               = c²(x – vt) / (c²t – vx)
               = (c²x – cv(ct)) / (c(ct) – vx)

and, substituting x for ct,

               = (c²x – cvx) / (cx – vx)
               = cx(c – v) / x(c – v)
               = c
The amazing thing is that this is true for any value of v ─ provided it is less than c ─ so it applies to any sort of system moving relative to the original ‘box’, as long as the relative motion is constant and in a straight line. It is even true for v = 0, i.e. when the two boxes are not moving relative to each other : in such a case γ = 1 and the complicated Lorentz Transformations reduce to x′ = x, t′ = t and so on.
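The derivation above can also be checked numerically. The following sketch (illustrative only, using the approximate value of c) applies the Lorentz Transformations to a light signal x = ct and confirms that x′/t′ comes out as c for several values of v:

```python
import math

C = 3.0e8   # speed of light in metres/second (approximate, as in the text)

def lorentz(x, t, v):
    """One-dimensional Lorentz transformation of the event (x, t)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v / (C * C))
    return gamma * (x - v * t), gamma * (t - v * x / (C * C))

t = 1.0
x = C * t                     # a light signal: x = ct
for v in (0.0, 30.0, 0.5 * C, 0.9 * C, 0.99 * C):
    xp, tp = lorentz(x, t, v)
    assert abs(xp / tp - C) < 1e-3   # x'/t' is c in every frame with v < c
```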
The Lorentz/Einstein Transformations have several interesting and revealing properties. Though complicated, they do not contain terms in x² or t² or higher powers : they are, in mathematical parlance, ‘linear’. This is what we want for systems moving at a steady pace relative to each other : squares and higher powers rapidly produce erratic increases and a curved trajectory on a space/time graph. Secondly, if v is very small compared to c, the ratio v/c which appears throughout the formulae is negligible since c is so enormous. For normal speeds we do not need to bother about these terms and the Galileian formulae give satisfactory results.
Finally, and this is possibly the most important feature : the Lorentz/Einstein Transformations are ‘symmetric’. That is, if you work backwards, starting with the ‘primed’ frame and x′ and t′, and convert to the original frame, you end up with a mirror image of the formulae with a single difference, a change of sign in the x formula denoting motion in the opposite direction (since this time it is the original frame that is moving away). Poincaré was the first to notice this and could have beaten Einstein to the finishing line by enunciating the Principle of Relativity several years earlier ─ but for some reason he didn’t, or couldn’t, make the conceptual leap that Einstein made. The point is that each way of looking at the motion is equally valid, or so Einstein believed, whether we envisage the countryside as moving towards us when we are in the train, or the train as moving relative to the static countryside.
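The symmetry can be checked directly: transforming with +v into the ‘primed’ frame and then back with the sign of v reversed (the mirror-image formulae) recovers the original co-ordinates exactly. A minimal sketch, with arbitrary illustrative numbers:

```python
import math

C = 3.0e8   # approximate speed of light, m/s

def lorentz(x, t, v):
    """One-dimensional Lorentz transformation of the event (x, t)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v / (C * C))
    return gamma * (x - v * t), gamma * (t - v * x / (C * C))

# Forward into the 'primed' frame, then back with the sign of v reversed.
x, t, v = 5000.0, 2.0e-5, 0.6 * C
xp, tp = lorentz(x, t, v)
x_back, t_back = lorentz(xp, tp, -v)
assert abs(x_back - x) < 1e-6    # round trip returns the original x
assert abs(t_back - t) < 1e-12   # ... and the original t
```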

Relativity from Ultimate Event Theory?

    Einstein assumed relativity and the constancy of the speed of light and deduced the Lorentz Transformations : I shall proceed somewhat in the opposite direction and attempt to derive certain well-known features of Special Relativity from basic assumptions of Ultimate Event Theory (UET). What assumptions?

To start with, the Event Number Postulate, which says that
  Between any two  events in an event-chain there are a fixed number of ultimate events. 
And (to recap basic definitions) an ultimate event is an event that cannot be further decomposed — this is why it is called ultimate.
Thus, if the ultimate events in a chain, or subsection of a chain, are numbered 0, 1, 2, 3…n, there are n intervals. And if the event-chain is ‘regular’, the rough equivalent of an inertial system, the ‘distance’ between any two successive events stays the same. By convention, we can treat the ‘time’ dimension as vertical — though, of course, this is no more than a useful convention. The ‘vertical’ distance between the first and last ultimate events of a regular event-chain thus has the value n × the ‘vertical’ spacing, or n × t. Note that whereas the number indicating the quantity of ultimate events and intervals is fixed in a particular case, t turns out to be a parameter which, however, has a minimum ‘rest’ value noted t0. This minimal ‘rest’ value is (by hypothesis) the same for all regular event-chains.

Likewise, between any two ‘contemporary’, i.e. non-successive, ultimate events there are a fixed number of spots where ultimate events could have (had) occurrence. If there are two or more neighbouring contemporary ultimate events bonded together we speak of an event-conglomerate and, if this conglomerate repeats or gives rise to another conglomerate of the same size, we have a ‘conglomerate event-chain’. (But normally we will just speak of an event-chain.)
A conglomerate is termed ‘tight’, and the region it occupies within a single ksana (the minimal temporal interval) is ‘full’ if one could not fit in any more ultimate events (because there are no available spots). And, if all the contemporary ultimate events are aligned, i.e. have a single ‘direction’, and are labelled   0, 1, 2, 3…….n  , then, there are likewise n ‘lateral’ intervals along a single line.

♦        ♦       ♦       ♦       ♦    ………

If the event-conglomerate is ‘regular’, the distance between any two neighbouring events will be the same and, for events labelled 0 to n, has the value n × the ‘lateral’ inter-event spacing, or n × s. Although s, the spacing between contemporary ultimate events, must obviously always be greater than the spot occupied by an ultimate event, in normal circumstances it does not attain a minimum. It has, however, a maximum value s0.

The ‘Space-Time’ Capsule

Each ultimate event is thus enclosed in a four-dimensional ‘space-time capsule’ much, much larger than itself — but not so large that it can accommodate another ultimate event. This ‘space-time capsule’ has the mixed dimension s³t.
In practice, when dealing with ‘straight-line’ motion, it is only necessary to consider a single spatial dimension, which can be set as the x axis. The other two dimensions remain unaffected by the motion and retain the ‘rest’ value s0. Thus we need only be concerned with the ‘space-time’ rectangle st.
We now introduce the Constant Size Postulate

      The extent, or size, of the ‘space-time capsule’ within which an ultimate event can have occurrence (and within which only one can have occurrence) is absolute. This size is completely unaffected by the nature of the ultimate events and their interactions with each other.

           We are talking about the dimensions of the ‘container’ of an ultimate event. The actual region occupied by an ultimate event, while being non-zero, is extremely small compared to the dimensions of the container and may for most purposes be considered negligible, much as we generally do not count the mass of an electron when calculating an atom’s mass. Just as an atom is mainly empty space, a space time capsule is mainly empty ‘space-time’, if the expression is allowed.
Note that the postulate does not state that the ‘shape’ of the container remains constant, or that the ‘spatial’ and ‘temporal’ dimensions should individually remain constant. It is the extent of the ‘space-time parallelepiped’ s³t which remains the same or, in the case of the rectangle, it is the product st that is fixed, not s and t individually. All quantities have minimum and maximum values, so let the minimum temporal interval be named t0 and, conversely, let s0 be the maximum value of s. Thus the quantity s0t0, the ‘area’ of the space-time rectangle, is fixed once and for all even though the temporal and spatial lengths can, and do, vary enormously. We have, in effect, a hyperbola where xy = constant, but with the difference that the hyperbola is traced out by a series of dots (is not continuous) and does not extend indefinitely in any direction (Note 3).
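The constraint can be pictured numerically. In the sketch below the rest values s0 and t0 are arbitrary placeholder units (nothing in the text fixes them); the point is only that as the temporal spacing grows beyond t0, the spatial spacing must shrink below s0 so that the ‘area’ st stays at s0t0, giving the dotted hyperbola just described:

```python
S0 = 1.0   # maximum spatial spacing s0 (arbitrary placeholder unit)
T0 = 1.0   # minimum temporal spacing t0 (arbitrary placeholder unit)

def spatial_spacing(t):
    """Spatial spacing s of a capsule whose temporal spacing is t >= T0."""
    return S0 * T0 / t

# Discrete points on the hyperbola s*t = s0*t0: dots, not a continuous curve.
dots = [(spatial_spacing(k * T0), k * T0) for k in (1, 2, 3, 4, 5)]
for s, t in dots:
    assert abs(s * t - S0 * T0) < 1e-12   # the 'area' is invariant
```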
         This quantity s0t0 is an extremely important constant, perhaps the most important of all. I would guess that different values of s0t0 would lead to very different universes. The quantity is mixed, so it is tacitly assumed that there is a common unit. What this common unit is, is not clear : it can only be based on the dimensions of an ultimate event itself, or its precise emplacement (not its container capsule), since K0, the backdrop or Event Locality, does not have a metric and is elastic, indeterminate in extent.
         Although one can, in imagination, associate or combine all sorts of events with each other, only events that are bonded sequentially constitute an event-chain, and only bonded contemporary events remain contemporary in successive ksanas. This ‘bonding’ is not a mathematical fiction but a very real force, indeed the most basic and most important force in the physical universe without which the latter would collapse at any and every moment — or rather at every ksana.
         Now, within a single ksana one and only one ultimate event can have occurrence. However, the ‘length’ of a ksana varies from one event-chain to another since, although the size of the emplacements where the ultimate events occur is (by hypothesis) fixed, the spacing is not fixed, is indeterminate though the same in similar conditions (Note 5). The length of a ksana has a minimum and this minimal length is attained only when an event-chain is at rest, i.e. when it is considered without reference to any other event-chain. This is the equivalent of a ‘proper interval’ in Relativity. So t is a parameter with minimal value t0. It is not clear what the maximum value is though there must be one.
         The inter-space distance s does not have a minimum, or not one that is ever attained in normal conditions — this minimum would be the exact ‘width’ of the emplacement of an ultimate event, an extremely small distance. It transpires that the inter-space distance s is at a maximum in a rest-chain, taking the value s0. (I am not absolutely sure whether this needs to be stated as an assumption or whether it can be derived later from the assumptions already made.)

         Thus, the ‘space-time’ parallelepiped s³t has the value (s0)³t0, an absolute value.

The Rest Postulate

This says that

          Every event-chain is at rest with respect to the Event Locality K0 and may be considered to be ‘stationary’.

          Why this postulate and what does it mean? We all have experience of objects immersed in a fluid medium and there can also be events, such as sounds, located in this medium. Now, from experience, it is possible to distinguish between an object ‘at rest’ in a fluid medium such as the ocean and one ‘in motion’ relative to this medium. And similarly there will be a clear difference between a series of siren calls or other sounds emitted from a ship in a calm sea, and the same sequence of sounds when the ship is in motion. Essentially, I envisage ultimate events as, in some sense, immersed in an invisible omnipresent ‘medium’, K0 — indeed I envisage ultimate events as being localized disturbances of K0. (But if you don’t like this analogy, you can simply conceive of ultimate events having occurrence on an ‘Event Locality’ whose purpose is simply to allow ultimate events to have occurrence and to keep them separate from one another.) The Rest Postulate simply means that, on the analogy with objects in a fluid medium, there is no friction or drag associated with chains of ultimate events and the medium in or on which they have occurrence. This is basically what Einstein meant when he said that “the ether does not have a physical existence but it does have a geometric existence”.

What’s the point of this constant if no one knows what it is? Firstly, it by no means follows that this constant s0 t0 is unknowable since we can work backwards from experiments using more usual units such as metres and seconds, giving at least an approximate value. I am convinced that the value of s0 t0  will be determined experimentally within the next twenty years, though probably not in my lifetime unfortunately. But even if it cannot be accurately determined, it can still function as a reference point. Galileo was not able to determine the speed of light even approximately with the apparatus at his disposal (though he tried) but this did not stop him stating that this speed was finite and using the limit in his theories without knowing what it was.

Diverging Regular Event-chains

Imagine a whole series of event-chains with the same reappearance rate which diverge from neighbouring spots — ideally, which fork off from a single spot. Now, if all of them are regular with the same reappearance rate, the nth member of event-chain E0 will be ‘contemporaneous’ with the nth members of all the other chains, i.e. they will have occurrence within the same ksana. Imagine them spaced out so that each nth ultimate event of each chain is as close as possible to those of the neighbouring chains. Thus, we imagine E0 as a vertical column of dots (not a continuous vertical line), E1 as a slanting line next to it, then E2 and so on. The first event of each of these chains (not counting the original event common to all) will thus be displaced by a single ‘grid-space’ and there will be no room for any events to have occurrence in between. The ‘speed’ or displacement distance of each event-chain relative to the first (vertical) one is thus the lateral distance in successive ksanas divided by the vertical distance in successive ksanas. For a ‘regular’ event-chain the ‘slant’ or speed remains the same and is tan θ = 1s/t0, 2s/t0 and so on where, if θr is the slant angle,

tan θr = vr = 1, 2, 3, 4……
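On this picture the possible ‘speeds’ form a discrete ladder rather than a continuum. A sketch, with the grid spacing s and rest ksana t0 as placeholder units, and an arbitrary small integer standing in for the limiting value c (all values here are illustrative assumptions):

```python
S = 1.0       # lateral grid spacing s (placeholder unit)
T0 = 1.0      # rest ksana t0 (placeholder unit)
C_LIMIT = 8   # hypothetical limiting displacement, in grid-spaces per ksana

# tan(theta_r) = v_r = 0, 1, 2, ... up to the limit: no in-between values.
ladder = [n * S / T0 for n in range(C_LIMIT + 1)]
```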

“What,” asked Zeno of Elea, “is the speed of a particular chariot in a chariot race?” Clearly, this depends on what your reference body is. We usually take the stadium as the reference body, but the charioteer himself perceives the spectators as moving towards or away from him, and he is much more concerned about his speed relative to that of his nearest competitor than about his speed relative to the arena. We have grown used to the idea that ‘speed’ is relative, counter-intuitive though it appears at first.
But ‘distance’ is a man-made convenience as well : it is not an ‘absolute’ feature of reality. People were extremely put out by the idea that lengths and time intervals could be ‘relative’ when the concept was first proposed but scientists have ‘relatively’ got used to the idea. But everything seems to be slipping away — is there anything at all that is absolute, anything at all that is real? Ultimate Event Theory evolved from my attempts to ponder this question.
The answer is, as far as I am concerned, yes. To start with, there are such things as events and there is a Locality where events occur. Most people would go along with that. But it is also one of the postulates of UET that every macroscopic ‘event’ is composed of a specific number of ultimate events which cannot be further decomposed. Also, it is postulated that certain ultimate events are strongly bonded together into event-chains temporally and event-conglomerates laterally. There is a bonding force, causality.
Also, associated with every event chain is its Event Number, the number of ultimate events between the first event A and the last Z. This number is not relative but absolute. Unlike speed, it does not change as the event-chain is perceived in relation to different bodies or frames of reference. Every ultimate event is precisely localised and there are only a certain number of ultimate events that can be interposed between two events both ‘laterally’ (spatially) and ‘vertically’ (temporally). Finally, the size of the ‘space-time capsule’ is fixed once and for all. And there is also a maximum ‘space/time displacement ratio’ for all event-chains.
This is quite a lot of absolutes. But the distance between ultimate events is a variable since, although the dimensions of each ultimate event are fixed, the spacing is not fixed though it will remain the same within a so-called ‘regular’ event-chain.
It is important to realize that the ‘time’ dimension, the temporal interval measured in ksanas, is not connected up to any of the three spatial dimensions whereas each of the three spatial dimensions is connected directly to the other two. It is customary to take the time dimension as vertical and there is a temptation to think of t, time, being ‘in the same direction’ as the z axis in a normal co-ordinate system. But this is not so : the time dimension is not in any spatial direction but is conceived as being orthogonal (at right angles) to the whole lot. To be closer to reality, instead of having a printed diagram on the page, i.e. in two dimensions, we should have a three dimensional optical set-up which flashes on and off at rhythmic intervals and the trajectory of a ‘particle’ (repeating event-chain) would be presented as a repeating pinpoint of light in a different colour.
Supposing we have a repeating regular event-chain consisting, for simplicity, of just one ultimate event. We present it as a column of dots, i.e. conceive of it as vertical though it is not. The dots are numbered 0, 1, 2…. and the vertical spacing does not change (since this is a regular event-chain) and is set at t0 since this is a ‘rest chain’. Similar regular event-chains can then be presented as slanting lines to the right (or left), regularly spaced along the x axis. The slant of the line represents the ‘speed’. Unlike the treatment in Calculus and conventional physics, increasing v does not ‘pass through a continuous set of values’ : it can only move up by a single ‘lateral’ space each time. The speeds of the different event-chains are thus 0s/t0 (= 0) ; 1s/t0 ; 2s/t0 ; 3s/t0 ; 4s/t0 ;…… ns/t0 and so on right up to cs/t0. But to what do we relate the spacing s ? To the ‘vertical’ event-chain or to the slanting one? We must relate s to the event-chain under consideration, so that its value depends on v : call it sv. The ratio s/t0 is thus a mixed ratio sv/t0. tv gives the intervals between successive events in the ‘moving’ event-chains, and the number of these intervals does not increase, because there are only a fixed number of events in any event-chain evaluated in any way. These temporal intervals undoubtedly increase, because the hypotenuse gets larger. What about the spacing along the horizontal? Does it also increase? Stay the same? If we now introduce the Constant Size Postulate, which says that the product sv tv = s0 t0, we find that sv decreases with increasing v, since tv certainly increases. There is thus an inverse ratio, and one consequence of this is that the mixed ratio sv/t0 = s0/tv and we get symmetry. This leads to relativity, whereas any other relation does not : we would have to specify which regular event-chain ‘really’ is the vertical one. One can legitimately ask which is the ‘real’ spatial distance between neighbouring events. The answer is that every distance is real, and not just real for a particular observer. Most phenomena are not observed at all, but they still occur, and the distances between these events are real : we as it were take our pick or, more usually, one distance is imposed on us.

Now the real pay-off is that each of these regular event-chains with different speeds v is an equally valid description of the same event-chain. Each of these varying descriptions is true even though the time intervals and distances vary. This is possible because the important thing, what really matters, does not change : in any event-chain the number and order of the individual events is fixed once and for all, although the distances and times are not. Rosser, in his excellent book Introductory Relativity, when discussing such issues gives the useful analogy of a game of tennis being played on a cruise liner in calm weather. The game would proceed much as on land and, if in a covered court, exactly as on land. And yet the ‘speed’ of the ball varies depending on whether you are a traveller on the boat or someone watching with a telescope from another boat or from land. The ‘real’ speed doesn’t actually matter or, as I prefer to put it, is indeterminate though fixed within a particular inertial frame (event system). Taking this one step further, not just the relative speed but the spacing between the events of a regular event-chain ‘doesn’t matter’, because the constituent events are the same and appear in the same order. It is interesting that, on this interpretation, a certain indeterminacy with regard to distance is already making its appearance before Quantum Theory has even been mentioned.

Which distance or time interval to choose?

Since, apparently, the situation between regular event-chains is symmetric (or between inertial systems if you like) one might legitimately wonder how there ever could be any observed discrepancy since any set of measurements a hypothetical observer can make within his own frame (repeating event system) will be entirely consistent and unambiguous. In Ultimate Event Theory, the situation is, in a sense, worse since I am saying that, whether or not there is or can be an observer present, the time-distance set-up is ‘indeterminate’ — though the number and order of events in the chain is not. Any old ‘speed’ will do provided it is less than the limiting value c. So this would seem to make the issue quite academic and there would be no need to learn about Relativity. The answer is that this would indeed be the case if we as observers and measurers or simply inhabitants of an event-environment could move from one ‘frame’ to another effortlessly and make our observations how and where we choose. But we can’t : we are stuck in our repeating event-environment constituted by the Earth and are at rest within it, at least when making our observations. We are stuck with the distance and time units of the laboratory/Earth event-chain and cannot make observations using the units of the electron event-chain (except in imagination). Our set of observations is fully a part of our system and the units are imposed on us. And this does make a difference, a discernible, observable difference when dealing with certain fast-moving objects.
Take the µ-meson. µ-mesons are produced by cosmic rays in the upper reaches of the atmosphere and are normally extremely short-lived, lasting about 2.2 × 10⁻⁶ sec. This is the (average) ‘proper’ time, i.e. when the µ-meson is at rest — in my terms it would be N × t0 ksanas. Now, these mesons would, using this t value, hardly go more than 660 metres even if they were falling with the speed of light (Note 4). But a substantial portion actually reach sea level, which seems impossible. Now, we have two systems: the meson event-chain, which flashes on and off N times, whatever N is, before terminating, i.e. not reappearing. Its own ‘units’ are t0 and s0 since it is certainly at rest with itself. For the meson, the Earth and the lower atmosphere are rushing up towards it at something approaching the limiting speed. We are inside the Earth system and use Earth units : we cannot make observations within the meson. The time intervals of the meson’s existence are, from our rest perspective, distended : there are exactly the same number of ksanas for us as for the meson but, from our point of view, the meson is in motion and each ‘motion’ ksana is longer, in this case much, much longer. It thus ‘lives’ longer, if by living longer we mean having a longer time span in a vague absolute way, rather than having more ‘moments of existence’. The meson’s ksana is worth, say, eight of our ksanas. But the first and last ultimate event of the meson’s existence are events in both ‘frames’, in ours as well as its. And if we suppose that each time it flashed into existence there was a (slightly delayed) flash in our event-chain, the flashes would be much more spaced out and so would be the effects. So we would ‘observe’ a duration of, say, eight of ‘our’ ksanas between consecutive flashings instead of one.
And the spatial distance between flashes would also be evaluated in our system of metres and kilometres : this is imposed on us since we cannot measure what is going on within the meson event-chain. The meson actually would travel a good deal further in our system — not ‘would appear to travel farther’. Calculations show that it is well within the meson’s capacity to reach sea level (see full discussion in Rosser, Introductory Relativity pp. 71-3).
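The meson figures can be checked with the standard time-dilation formula. The speed used below, 0.998c, is a typical textbook value and an assumption here (the text does not give one):

```python
import math

C = 3.0e8          # approximate speed of light, m/s
TAU = 2.2e-6       # rest ('proper') lifetime of the mu-meson, seconds
V = 0.998 * C      # assumed meson speed: a typical textbook figure

# Upper bound on the distance covered if there were no time dilation:
naive_range = C * TAU                 # = 660 metres, as quoted in the text

# Evaluated in Earth units, each interval of the meson's existence is
# stretched by the factor gamma, so the distance covered is far greater:
gamma = 1.0 / math.sqrt(1.0 - V * V / (C * C))
earth_range = V * gamma * TAU         # roughly 10 km: sea level is reachable
```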
What if we envisaged things from the perspective of the meson? Supposing, just supposing, we could transfer to the meson event-chain or its immediate environment and could remember what things were like in the world outside, the familiar Earth event-frame. We would notice nothing strange about ‘time’ : the intervals between ultimate events, or the brain’s merging of them, would not surprise us at all. We would consider ourselves to be at rest. What if we looked out of the window at the Earth’s atmosphere speeding by? Not only would we recognize that there was relative motion but, supposing there were clear enough landmarks (skymarks rather), the distances between these marks would appear far smaller than expected — in effect there would be a double or triple sense of motion, since our perception of motion is itself based on estimates of distance. As the books say, the Earth and its atmosphere would be ‘Lorentz contracted’. There would be exactly the same number of ultimate events in the meson’s trajectory, temporarily our trajectory also. The first and last events of the meson’s lifetime would be separated by the same number of temporal intervals and, if these first and last events left marks on the outside system, these marks would also be separated by exactly the same number of spatial intervals. Only these spatial intervals — distances — would be smaller. This would very definitely be observed : it is as if we were looking out at the countryside on a familiar journey in a train fantastically speeded up. We would still consider ourselves at rest, but what we saw out of the window would be ludicrously foreshortened, and for this reason we would conclude that we were travelling a good deal faster than on our habitual journey. I do not think there would be any obvious way to recognize the time dilation of the outside system.
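The meson-frame description can be sketched the same way. Assuming, purely for illustration, the same v = 0.998c and a production height of 10 km, the Lorentz-contracted distance is crossed within a single mean lifetime:

```python
import math

# Meson-frame sketch: the atmosphere is Lorentz-contracted by gamma.
c = 3.0e8                   # m/s
v = 0.998 * c               # illustrative speed, as in the Earth-frame sketch
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

L_earth = 10_000.0          # m, assumed height of production above sea level
L_muon = L_earth / gamma    # contracted distance in the meson's rest frame
crossing_time = L_muon / v  # proper time needed to cross it

print(L_muon)               # roughly 630 metres
print(crossing_time)        # about 2.1e-6 s, within the 2.2e-6 s mean lifetime
```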

One is often tempted to think that the time dilation and the spatial contraction cancel each other out, so that all this talk of relativity is purely academic. This would indeed be the case if we were able to make our observations inside the event-chain we are observing, but we make the measurements (or perceptions) within a single frame. Although it is the meson event-chain that is dictating what is happening, both the time and the spatial-distance observations are made in our system. It is indeed only because of this that there is so much talk about ‘observers’ in Special Relativity. The point is not that some intelligent being observes something, because usually he or she doesn’t : the point is that the fact of observation, i.e. the interaction with another system, seriously confuses the issue. The ‘rest-motion’ situation is symmetrical, but the ‘observing’ situation is not symmetrical, nor can it be in such circumstances.

This raises an important point. In Ultimate Event Theory, as in Relativity, the situation is ‘kinematically’ symmetrical. But is it causally symmetrical? Although Einstein stressed that c was a limit to the “transfer of causality”, he was more concerned with light and electro-magnetism than with causality. UET is concerned above all with causality — I have not mentioned the speed of light yet and do not need to. In situations of this type, it is essential to identify clearly the primary causal chain. This is obviously the meson : we observe it, or rather we pick up indications of its flashings into and out of existence. The observations we make, or simply perceptions, are dependent on the meson; they do not by themselves constitute a causal chain. So it looks at first sight as if we have a fundamental asymmetry : the meson event-chain is the controlling one and the Earth/observer event-chain is essentially passive. This is how things first appeared to me, but on reflection I am not so sure. In many (all?) cases of ‘observation’ there is interaction with the system being observed, and the latter is inevitably going to be affected by this even if it has no senses or observing apparatus of its own. One could thus argue that there is causal symmetry after all, at least in some cases. There is thus a kind of ‘uncertainty principle’ due to the interaction of two systems latent in Relativity before even Quantum Mechanics had been formulated. This issue, and the related one of the limiting speed of transmission of causality, will be dealt with in the subsequent post.

Sebastian Hayes  26 July
Note 1. And in point of fact, if General Relativity is to be believed, ‘free space’ is not strictly homogeneous even when empty of matter and neither is the speed of light strictly constant since light rays deviate from a straight path in the neighbourhood of massive bodies.

Note 2  For those people like me who wish to believe in the reality of K0 — rather than seeing it as a mere mathematical convenience like a co-ordinate system — the lack of any ‘friction’ between the medium or backdrop and the events or foreground would, I think, be quite unobjectionable, even ‘obvious’, likewise the entire lack of any ‘normal’ metrical properties such as distance. The ‘backdrop’, that which lies ‘behind’ material reality though in some sense permeating it, is not physical and hence is not obliged to possess familiar properties such as a shape, a metric, a fixed distance between two points and so on. Nevertheless, this backdrop is not completely devoid of properties : it does have the capacity to receive (or produce) ultimate events and to keep them separate, which involves a rudimentary type of ‘geometry’ (or topology). Later, as we shall see, it would seem that it is affected by the material points on it, so that this ‘geometry’, or ‘topology’, is changed and so, in turn, affects the subsequent patterning of events. And so it goes on, in a vicious or creative circle, or rather spiral.
            The relation between K0, the underlying substratum or omnipresent medium, and the network of ultimate events we call the physical universe, K1  is somewhat analogous to the distinction between nirvana and samsara in Hinayana Buddhism. Nirvana  is completely still and is totally non-metrical, indeed non-everything (except non-real), whereas samsara is turbulence and is the world of measure and distancing. It is alma de-peruda, the ‘domain of separation’, as the Zohar puts it.  The physical world is ruled by causality, or karma, whereas nirvana is precisely the extinction of karma, the end of causality and the end of measurement.

Note 3   The ‘Space-time hyperbola’, as stated, does not extend indefinitely either along the ‘space’ axis s (equivalent of x) or indefinitely upwards along the ‘time’ axis (equivalent of y) — at any rate for the purposes of the present discussion. The variable t has a minimum t0 and the variable s a maximum s0, which one suspects is very much greater than tc. Since there is an upper limit to the speed of propagation of a causal influence, c, there will in practice be no values of t greater than tc and no s values smaller than sc. It thus seems appropriate to start marking off the s axis at the smallest value sc = s0/c, which can function as the basic unit of distance. Then s0 is equal to c of these ‘units’. We thus have a hyperbola something like this — except that the curve should consist of a string of separate dots which, for convenience, I have run together.

Note 4  See Rosser, Introductory Relativity pp. 70-73. Incidentally, I cannot recommend this book too highly.

Note 5   I have not completely decided whether it is the ‘containers’ of ultimate events that are elastic, indeterminate, or the ‘space’ between the containers (which have the ultimate events inside them). I am inclined to think that there really are temporal gaps, not just between ultimate events themselves but even between their containers, whereas this is probably not so in the case of spatial proximity. This may be one of the reasons, perhaps even the principal reason, why ‘time’ is felt to be a very different ‘dimension’. Intuitively, or rather experientially, we ‘feel’ time to be different from space, and all the talk about the ‘Space/Time continuum’ — a very misleading phrase — is not enough to dispel this feeling.

To be continued  SH  18 July 2013

 

A ksana is the minimal temporal interval : within the space of a ksana one and only one ultimate event can have occurrence. There can thus be no change whatsoever within the space of a ksana — everything is at rest.
In Ultimate Event Theory every ultimate event is conceived to fill a single spot on the Locality (K0) and every such spot has the same extent, a ‘spatial’ extent which includes (at least) three dimensions and a single temporal dimension. A ksana is  the temporal interval between the ‘end’ of one ultimate event and the ‘end’ of the next one. Since there can be nothing smaller than an ultimate event, it does not make too much sense to speak of ‘ends’, or ‘beginnings’ or ‘middles’ of ultimate events, or their emplacements, but, practically speaking, it is impossible to avoid using such words. Certainly the extent of the spot occupied by an ultimate event is not zero.
The ksana is, however, considerably more extensive than the ‘vertical’ dimension of the spot occupied by an ultimate event. Physical reality is, in Ultimate Event Theory, a ‘gapped’ reality and, just as an atom is apparently mainly empty space, a ksana is mainly empty time (if the term is allowed). Thus, when evaluating temporal intervals, the ‘temporal extent’ of the ultimate events that have occurrence within the interval can, to a first approximation, be neglected. As to the actual value of a ksana in terms of seconds or nanoseconds, this remains to be determined by experiment, but certainly the extent of a ksana must be at least as small as the Planck time, about 5.39 × 10⁻⁴⁴ seconds.
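To get a feel for the orders of magnitude involved, here is a rough sketch that takes the Planck time as the upper bound on the ksana just mentioned (purely for illustration; the actual value of a ksana is undetermined):

```python
# Order-of-magnitude sketch: how many ksanas fit into one second
# if a ksana is no longer than the Planck time?
planck_time = 5.39e-44                 # seconds
ksanas_per_second = 1 / planck_time
print(f"{ksanas_per_second:.2e}")      # 1.86e+43, i.e. at least ~10^43 ksanas per second
```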
A ‘full’ event-chain is a succession of bonded ultimate events within which it would not be possible to fit in any more ultimate events. So if we label the successive ultimate events of a ‘full’ event-chain 0, 1, 2, 3, … N, there will be as many ksanas in this temporal interval as there are ultimate events.
Suppose we have a full event-chain which, in its simplest form, may be just a single ultimate event repeated identically at or during each successive ksana. Such an event-chain can be imaged as a column of dots, where each dot represents an ultimate event and the space in between the dots represents the gap between successive ultimate events of the chain. Thus, using the standard spacing of 2.5 on this computer, we have

•

•

•

•
Now, although the ‘space’ occupied by all ultimate events is fixed and an absolute quantity (true for ‘all inertial and non-inertial frames’ if you like), the spacing between the spots where ultimate events can occur, both ‘laterally’ (to be understood as including all three normal spatial dimensions) and ‘vertically’, i.e. in the temporal direction, is not constant but variable. So, although the spots where ultimate events can occur have fixed (minuscule) dimensions, the ‘grid-distance’, the distance between the closest spots which have occurrence within the same ksana, varies, and so does the temporal distance between successive ultimate events of a full event-chain. So the ksana varies in extent. However, there is, by hypothesis, a minimum value for both the grid-distance and the ksana. The minimal value of both is attained whenever we have a completely isolated event-chain. In practice, there is no such event-chain, any more than, in traditional physics, there is a body completely isolated from all other bodies in the universe. However, these minimal values can be considered to be attained for event-chains that are sufficiently ‘far away’ from all other chains. And, more significantly, these minimal values apply whenever we have a full regular event-chain considered in isolation from its event environment.
The most important point, that cannot be too strongly emphasized, is that although the number of ultimate events in an event-chain, or any continuous section of an event-chain, is absolute, the interval between successive events varies from one chain to another, though remaining constant within a single event-chain (providing it is regular). Unless stated otherwise, by ‘ksana’ I mean the interval between successive ultimate events in a ‘static’ or isolated regular event-chain. This need not cause any more trouble than the concept of intervals of time in Special Relativity where ‘time’ is understood to mean ‘proper time’, the ‘time’ of a system at rest, unless a contrary indication is given.
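The point can be put as a toy calculation: the event count N is absolute, while the per-ksana interval differs between chains (the factor 8 echoes the meson example earlier; all numbers are illustrative):

```python
# Same number of ultimate events, different ksana lengths.
N = 1_000_000           # ultimate events in the chain (absolute, same for all)
ksana_rest = 1.0        # interval in the chain's own (rest) units, arbitrary
gamma = 8.0             # assumed dilation factor for a chain in motion

span_rest = (N - 1) * ksana_rest             # duration in the chain's own terms
span_moving = (N - 1) * ksana_rest * gamma   # duration as evaluated from our system

print(span_rest, span_moving)   # event count unchanged; only the spacing differs
```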
Thus, the ‘vertical’ spacing of events in different chains can and does differ, and the minimal value will be represented by the smallest spacing available on the computer I am using. I could, for example, increase the spacing from the standard value to

•           or to                     •

•                                     •


S.H. 11/7/13

General Laws :  I suspect that there are no absolutely general ‘laws of Nature’, no timeless laws such as those given by a mathematical formula : such a formula at best only indicates norms or physical constraints. Of all so-called laws, however, the most general and the most solidly established are arithmetic (not physical) laws, rules based on the properties of the natural numbers. To this extent Pythagoras was in the right.

Platonic Forms  Plato was also essentially right in proclaiming the need for ‘ideal’ forms : patterns which are not themselves physical but which dictate the shape and behaviour of physical things. But he was wrong to see these patterns as geometrical, and thus both static and timeless (the two terms are equivalent). With one or two exceptions contemporary science has done away with Platonic Forms though it still puts mathematics in the supreme position.
In practice, I do not see how one can avoid bringing in a secondary ‘ideal’ domain which has a powerful effect on actual behaviour. In Ultimate Event Theory, associations of events and event-chains, once they have attained a critical point, bring into existence ‘event schemas’ which from then on dictate the behaviour of similar collections of events. From this point onwards they are ‘laws’ to all intents and purposes but there was a time when they did not exist and there will perhaps be a future time when they will cease to be operative.
Random Generation   Take the well-known example of interference patterns produced by photons or electrons on a blank screen. It is possible to fire off these ‘particles’ one at a time so that the pattern takes shape point by point, or pixel by pixel if you like. At first the dots are distributed randomly and in different experiments the pattern builds up differently. But the final pattern, i.e. the distribution of dots, is identical ─ or as nearly identical as experiment allows. This makes no kind of sense in terms of traditional physics with its assumption of strict causality. The occurrence of a particular event, a dot in a particular place, has no effect whatsoever (as far as we can tell) on the position of the next dot. So the order of events is not fixed even though the final pattern is completely determinate. So what dictates which event comes next? ‘Chance’, it would seem. But nonetheless the eventual configuration is absolutely fixed. This only makes sense if the final configuration follows an ‘event schema’ which does, in some sense, ‘exist’ though it has no place in the physical universe. This is a thoroughly Platonic conception.
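This dot-by-dot build-up is easy to imitate in a toy simulation. In the Python sketch below, each ‘particle’ lands at a position drawn independently from a fixed two-slit-like intensity (a schematic cosine-squared envelope, not the real experimental distribution), so the order of the dots is random while the final histogram is determinate:

```python
import math
import random

def intensity(x):
    """Schematic two-slit intensity envelope, x in screen units (assumed form)."""
    return (math.cos(3 * x) ** 2) * math.exp(-x * x)

def next_dot():
    """Rejection-sample one dot position from the intensity pattern."""
    while True:
        x = random.uniform(-3, 3)
        if random.uniform(0, 1) < intensity(x):
            return x

# Build the pattern up one dot at a time, as in the experiment.
bins = [0] * 12
for _ in range(20_000):
    x = next_dot()
    bins[min(11, int((x + 3) / 0.5))] += 1

print(bins)   # fringes emerge: high counts near the centre, almost none at the edges
```

Run it twice and the early dots differ completely, yet the final histogram always settles, within statistical noise, into the same shape, which is exactly the point being made.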

 Ultimate Reality   Relatively persistent patterns on an  underlying invisible ‘substance’ ─ that is all there is in the last resort. Hinduism was quite right to see all this as an essentially purposeless, i.e. gratuitous, display ─ the dance of Shiva. Far from being disheartening, this perspective is inspiring. It is at the opposite extreme both to the goal-directed ethos of traditional Christianity ─ the goal being to ‘save’ your soul ─ and to the drearily functional universe of contemporary biology where everything is perpetually seeking a fleeting  physical advantage over competitors.
What, then, is the difference between the organic and the inorganic?  Both are persistent, the inorganic more so than the organic. Without a basic ‘something’, nothing visible or tactile could or does exist. Without persistence there would be no recognizable patterns, merely noise, random flashes of light emerging from the darkness and subsiding into darkness after existing for a moment only. ‘Matter’ is an illusion, a mental construct : patterns of light (radiation) emerging and disappearing, that is all there is.

Dominance  The ‘universe’ must be maintained by some sort of force, otherwise it would collapse into nothingness at any moment. For Descartes this force came from God; Schopenhauer saw it as something inherent in Nature, what he calls ‘Will’ and views as being entirely negative, indeed monstrous. This ‘force’ is what I term dominance, the constraining effect one event or event-chain has on another (including on itself), and without it everything would slow down and very soon disappear without leaving a trace. Take away Schopenhauer’s Will, the force of karma, and this is what would happen ─ and in the Buddhist world schema will eventually happen. For Buddhism, the natural state of everything is rest, inaction, and the universe came about because of some unexplained disturbance of the initial state of rest, indeed is this disturbance. Subsequently, it is as if the ‘universe’ were frantically trying to get back to its original state of complete rest but by its ceaseless striving is precisely making this goal more and more unattainable.

Disappearance  In both traditional and contemporary physics, it is impossible for an object to simply disappear without leaving a trace. The dogma of the conservation of mass/energy says that nothing ever really disappears, merely changes its form. However, according to Ultimate Event Theory, ultimate events are appearing and disappearing all the time and they need no ‘energy’ to do this. Certain of these ultimate events produced at random eventually coalesce into repeating event-chains we perceive as solids or liquids because they acquire ‘persistence’ or self-dominance, but it is conceivable that they can, in certain exceptional circumstances, lose this property and in such a case they will simply stop reappearing.
Are there any genuine cases where objects have completely disappeared in this way? The only evidence for them would seem to be anecdotal : one hears of certain Hindu magic-men who are able to make small objects disappear and appear in a different place, but it is, of course, difficult to distinguish genuine magic from the stage variety. And any such alleged cases rarely if ever get investigated by scientists since the latter are terrified of being accused of credulity or worse. Professor Taylor, who investigated Uri Geller, was told by colleagues that no reputable scientist would do such a thing. Clearly, if one is not allowed to investigate a phenomenon, it has no chance of ever being verified, which is what the rationalist/scientific lobby desires.
Contemporary science and rationalist thinking implicitly assume that ‘real’ entities, while they actually exist, exist continuously ─ in fact the previous statement would be regarded as so obvious as to be hardly worth stating. But in UET nothing exists for more than an instant (ksana), and entities that seem to exist for a ‘long time’ are in reality composed of repeating ultimate events strongly bonded together. If reality is ‘gapped’, as UET affirms, all so-called objects alternately appear and disappear (though so rapidly that we do not notice the change), so there is much less of a problem involved in making something disappear. Instead of actually destroying the object in some way (and in the destructive process transferring the object’s mass into different mass or pure energy), it would simply be sufficient to prevent an event cluster from reappearing, which is not quite so hard to imagine. In UET, an apparent object reappears regularly because it possesses ‘self-dominance’; if it could be made to lose this property, it would not reappear, i.e. would disappear, and it would not necessarily leave any trace. Moreover, to make something disappear in this manner, it would not be necessary to use any kind of physical force, high temperature, pressure and so on. To say that the theoretical possibility is there is not, of course, the same thing as saying that a supposed occurrence actually takes place : that is a matter of experiment and observation. In my unfinished SF novel The Web of Aoullnnia, devotees of a mystical sect called the Yther are not only convinced that the entire universe is going to disappear into the nothingness from which it emerged, but believe that they should hasten this progressive movement, which they call Aoullnnia-yther, where yther means ‘ebbing’, ‘withdrawal’, hence the name of the sect.
Although contemporary Buddhists do not usually put it quite so starkly, essentially the aim of Buddhism is to return the entire universe to an entirely quiescent state “from which it never will arise again”.

On the other hand, deliberately bringing something into existence from nothing is just as inconceivable in Ultimate Event Theory as in contemporary physics, maybe more so.                  SH  22/5/13