Velocity has a different meaning in Ultimate Event Theory from the one it has in object-based physics. In the latter a particle traverses a multitude ─ usually an infinite number ─ of positions during a given time interval, and the speed is the distance traversed divided by the time. One might, for example, note that it was 1 p.m. when driving through a certain village and 3 p.m. when driving through a different one known to be 120 kilometres distant from the first. Supposing the speed was constant throughout this interval, it would be 120 kilometres per 2 hours. However, speed is practically never cited in this fashion: it is always reduced to a certain number of kilometres, or metres, with respect to a unitary interval of time, an hour, minute or second. Thus my speed on this particular journey would be quoted in a physics textbook as 60 kilometres per hour or, more likely, as (60 × 10³)/(60²) ≈ 16.67 metres per second = 16.67 m s⁻¹ (to two decimal places).
By doing this, different speeds can be compared at a glance, whereas if we quoted speeds as 356 metres per 7 seconds and 720 metres per 8 seconds it would not be immediately obvious which speed is the greater. When dealing with such measures as metres and seconds there would normally be no difference between object-based physics and event-based physics. However, even when dealing with minute distances and tiny intervals of time such as nanometres and nanoseconds, ‘speed’ is still stated in so many units of length per interval of time. This automatic conversion to standard unitary measures presupposes that space and time are ‘infinitely divisible’ in the sense that, no matter how small the interval of time, it is always possible for a particle to change its position, i.e. ‘move’. This assumption is, to say the least, hardly plausible, and Hume went so far as to write, “No priestly dogma invented on purpose to tame and subdue the rebellious reason of mankind ever shocked common sense more than the doctrine of the infinite divisibility with its consequences” (Hume, Enquiry Concerning Human Understanding).
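The point about comparability can be made concrete with a trivial calculation. The sketch below (the function name is mine, purely for illustration) reduces the two awkwardly stated speeds above, and the car journey, to the standard unitary form:

```python
def unit_rate(distance_m: float, time_s: float) -> float:
    """Reduce a speed to metres per unit second, assuming time is
    infinitely divisible so that any such division is meaningful."""
    return distance_m / time_s

a = unit_rate(356, 7)   # 356 metres per 7 seconds
b = unit_rate(720, 8)   # 720 metres per 8 seconds
print(f"{a:.2f} m/s vs {b:.2f} m/s; second is faster: {b > a}")

# The car journey: 120 km in 2 hours, reduced to metres per second.
v = unit_rate(120 * 1000, 2 * 3600)
print(f"{v:.2f} m/s")
```

Once both rates are expressed per unit second, the comparison is immediate (90 m/s against roughly 50.86 m/s), which is exactly why physics always performs this reduction.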
In Ultimate Event Theory, which includes as an axiom that time and space are not infinitely divisible, this automatic conversion is not always feasible. Lengths eventually reduce to so many ‘grid-spaces’ on the Locality, and intervals of time to so many ksanas (‘instants’), and there is no such thing as a half or a third of a ‘grid-space’ or a quarter of a ksana. The ‘speed’, or ‘displacement rate’, of an ultimate event or event-cluster is defined as the distance on the Locality between two spots where the event has occurred. This distance is always a positive integer corresponding to the number of intermediary positions (+1) where an ultimate event could have had occurrence. If the position of the earlier occurrence is not the original position, we relate both positions to that of a repeating landmark event-sequence, the equivalent of the origin. So if the occurrences take place at consecutive ksanas, the current reappearance rate (‘speed’) is simply the ‘distance’ between the two spots divided by unity, i.e. a positive integer. But what if an event reoccurs 7 spaces to the left every 4 ksanas? The ‘actual’ reappearance rate is 7 spaces per 4 ksanas which, when converted to the ‘standard’ measure, comes out as 7/4 spaces per ksana, or 7/4 sp ks⁻¹. However, since there is no such thing as seven-fourths of a position on the Locality, displacement rates like 7/4 sp ks⁻¹ are simply a convenient but somewhat misleading way of tracking a recurring event.
The ‘Finite Space/Time Axiom’ has curious consequences. It means that except when the space/ksana ratio is an integer, all event-chains are ‘gapped’: that is, there are intermediary ksanas between successive occurrences when the event or event-cluster does not make an appearance at all. Thus, the reappearance pattern ksana by ksana for an ultimate event displacing itself along a line at the ‘standardized’ rate of 7/4 sp ks⁻¹ will be

……..o■oooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooo■ooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..oooooooooooooooooooo……………
……..ooooooooooooooo■oooo……………
……..oooooooooooooooooooo……………

And this in turn means that when s/n is a ratio of relatively prime numbers, there will be gaps n−1 ksanas long, and the ‘particle’ (repeating ultimate event) will completely disappear during this time interval! The importance of the distribution of primes and of factorisation generally, which has been so intensively studied over the last two centuries, may thus have practical applications after all, since it relates to the important question of whether there can be ‘full’ reappearance rates for certain processes (Note 1).
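The gapped pattern shown above can be generated mechanically. The sketch below (the function name and layout are mine, not part of the theory) prints one row of grid-spaces per ksana for an actual rate of s spaces per n ksanas; with s = 7, n = 4 the event has occurrence only at ksanas 0, 4, 8, …, displaced 7 grid-spaces each time, with n−1 = 3 empty ksanas in between:

```python
def reappearance_pattern(s: int, n: int, ksanas: int, width: int) -> list[str]:
    """One row per ksana: '■' marks an occurrence, 'o' an empty grid-space.
    The event appears only at every n-th ksana, shifted s spaces each time."""
    rows = []
    for k in range(ksanas):
        row = ["o"] * width
        if k % n == 0:                      # occurrence ksanas: 0, n, 2n, ...
            row[(k // n) * s + 1] = "■"     # start one space in, as in the diagram
        rows.append("".join(row))
    return rows

for line in reappearance_pattern(s=7, n=4, ksanas=9, width=20):
    print(line)
```

Any s/n in lowest terms with n > 1 produces such gaps, which is why only integer space/ksana ratios give ungapped (‘full’) event-chains.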
In consequence, when specifying in full detail the re-appearance rate of an event or, what comes to the same thing, the re-occurrence speed of members of an event-chain, we need to give not only the magnitude and direction of the displacement but also the ‘gap number’, or ‘true’ reappearance rate, of the event-chain.

Constants of Ultimate Event Theory

n*, the number of ksanas to a second, and s*, the number of grid-spaces to a metre, are basic constants in Ultimate Event Theory that remain to be determined but which I have no doubt will be determined during this century. (s*/n*) is thus the conversion factor required to reduce speeds given in metres/second to spaces/ksana. Thus c(s*/n*) = 3 × 10⁸ (s*/n*) sp ks⁻¹ is seemingly a displacement rate that cannot be exceeded. c(s*/n*) is not, as I view things, the actual speed of light but merely the limiting speed for all ‘particles’, or, more precisely, the limiting value of the possible ‘lateral’ displacement of members of a single event-chain. Any actual event-chain would have at most a lateral displacement rate that approaches but does not attain this limit. While there is good reason to believe that there must be a limiting value for all event-chains (particles) — since there is a limit to everything — there is no need to believe that anything actually attains such a limit. In object-based physics, the neutrino was until recently thought to travel at the speed of light and thus to be massless, but it is now known that it has a small mass. The idea of a ‘massless’ particle is ridiculous (Note 2), for if there really were such a thing it would have absolutely no resistance to any attempt to change its state of rest or straight-line motion, and so it is hard to see how it could be anything at all even for a single instant; or maybe it would just be a perpetually changing erratic ‘noise’. Mass, of course, does not have the same meaning in Ultimate Event Theory and will be defined in a subsequent post, but the idea that there is a ‘displacement limit’ to an event-chain passes over into UET. This ‘speed’ in reality shows the absolute limit of the lateral ‘bonding’ between events in an event-chain and in this sense is a measure of ‘event-energy’.
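Since s* and n* are, as stated, undetermined, any numerical illustration of the conversion factor must use placeholders. The sketch below shows only the arithmetic of (s*/n*); the two constants are given purely invented values and carry no physical claim:

```python
# s_star and n_star are unknown constants of the theory; the values here
# are arbitrary placeholders chosen only to illustrate the conversion.
s_star = 1e35   # hypothetical grid-spaces per metre
n_star = 1e43   # hypothetical ksanas per second

def to_sp_per_ksana(v_metres_per_second: float) -> float:
    """Reduce a speed in metres/second to grid-spaces/ksana via (s*/n*)."""
    return v_metres_per_second * (s_star / n_star)

c = 3e8  # metres per second
print(to_sp_per_ksana(c))  # the limiting displacement rate c(s*/n*) in sp/ksana
```

Whatever the true values of s* and n*, the structure of the limit is the same: one fixed multiplier converts every metre/second figure into a spaces/ksana figure, and c(s*/n*) bounds them all.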
Any greater lateral displacement than c(s*/n*) would result in the proto-event aborting, as it were, since it would no longer be tied to the same event-chain.

Reappearance rates

So far I have assumed that an event in an event-chain reappears as soon as it is able to do so. This may well not be the case; indeed I think it very unlikely that it is. For example, an event in an event-chain with a standardized ‘speed’ of 1/2 sp ks⁻¹ might not in reality re-appear every second ksana: it could reappear two spaces to the left (or right) every fourth ksana, or three spaces every sixth ksana, and so on. In this respect the ‘gap number’ would be more informative than the reappearance rate as such, and it may be that slight interference from other event-chains would shift the gap number without actually changing the overall displacement ratio. Thus, an event shifting one space to the right every second ksana might only appear every fourth ksana, shifted two spaces in the same direction, and so on. It is tempting to see these shifts as in some way analogous to the orbital shifts of electrons, while more serious interference would completely disrupt the displacement ratio. Once we evolve instruments sensitive enough to register the ‘flicker’ of ultimate events, we may find that there are all sorts of different event patterns, as intricate as the close packing of molecules.
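The distinction between the standardized rate and the underlying rhythm can be made concrete. In the sketch below (terminology mine), each pair (spaces, ksanas) is a different ‘true’ rhythm; all reduce to the same standardized 1/2 sp ks⁻¹, yet their gap numbers differ:

```python
from fractions import Fraction

def standardized_rate(spaces: int, ksanas: int) -> Fraction:
    """Reduce an actual reappearance rhythm to its standardized sp/ks form."""
    return Fraction(spaces, ksanas)  # Fraction reduces to lowest terms

# Different true rhythms, identical standardized 'speed' of 1/2 sp per ksana.
realizations = [(1, 2), (2, 4), (3, 6)]
for spaces, ksanas in realizations:
    gap = ksanas - 1   # empty ksanas between successive occurrences
    print(f"{spaces} sp per {ksanas} ks -> "
          f"{standardized_rate(spaces, ksanas)} sp/ks, gap number {gap}")
```

The standardized rate alone cannot distinguish these three event-chains; only the gap number does, which is why it would carry the extra physical information.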
It has also occurred to me that different re-appearance rates for event-chains that have the same standard displacement rate might explain why certain event-chains behave in very different ways despite having, as far as we can judge, the same ‘speed’. In our macroscopic world, the effect of skipping a large number of grid-spaces and ksanas (which might well be occupied by other event-clusters) would give the impression that a particularly dense event-cluster (‘object’) had literally gone right through some other cluster if the latter were thinly extended spatio-temporally. Far from being impossible or incredible, something like this actually happens all the time since, according to object-based physics, neutrinos are passing through us in their millions every time we blink. Why, then, is it so easy to block the passage of light, which travels at roughly the same speed, certainly no less? I found this a serious conceptual problem, but a difference of reappearance rates might explain it: maybe a stream of photons has the same ‘speed’ but a much tighter re-appearance rate than a stream of neutrinos. This is only a conjecture, of course, and there may be other factors at work, but there may be some way to test whether there really is such a discrepancy in the ‘speeds’ of the two ‘particles’, which would result in the neutrino having far better penetrating power with regard to obstructions.

Extended and combined reappearance rates

Einstein wondered what would happen if an object exceeded the speed of light, i.e. in UET terms when an event-chain got too extended laterally. One might also wonder what would happen if an event-chain got too extended temporally, i.e. if its re-appearance rate was 1/N where N was an absolutely huge number of ksanas. In such a case, the re-appearance of an event would not be recognized as being a re-appearance: it would simply be interpreted as an event (or event-cluster) that was entirely unrelated to anything in its immediate vicinity. Certain macroscopic events we consider to be random are perhaps not really so: the event-chains they belong to are so extended temporally that we just don’t recognize them as being event-chains (the previous appearance might have been years or centuries ago). Likewise the interaction of different event-chains in the form of ‘cause and effect’ might be so spread out in time that a ‘result’ would appear to come completely out of the blue (Note 3).
There must, however, be a limit to vertical extension (since everything has a limit). This would be an important number for it would show the maximum temporal extension of the ‘bonding’ between events of a single chain. We may also conjecture that there is a combined limit to lateral and vertical extension taken together, i.e. the product grid-positions × ksanas has a maximum which again would be a basic constant of nature.     S.H. 8/10/12

_________________________

Note 1  A ‘full’ re-appearance rate is one where an ultimate event makes an appearance at every ksana from its first appearance to its last.

Note 2. De Broglie, who first derived the famous relation p = h/λ, linking a particle’s momentum p with Planck’s constant divided by the wave-length, believed that photons, like all particles, had a small mass. There is no particular reason why the observed speed of light should be strictly equated with c, the limiting speed for all particles, except that this makes the equations easier to handle, and no experiment will ever be able to determine whether the two are strictly identical.

Note 3 This, of course, is exactly what Buddhists maintain with regard to the consequences of bad or good actions ─ they can follow you around in endless reincarnations. Note that it is only certain events that have this temporal extension in the karma theory: those involving the will, deliberate acts of malice or benevolence. If we take all this out of the moral context, the idea that effects can be widely separated temporally from their causes and that these effects come up repeatedly is quite possibly a useful insight into what actually goes on in the case of certain abnormal event-chains that are over-extended vertically.