**CALCULUS**

*“He who examines things in their growth and first origins, obtains the clearest view of them.”* ― Aristotle

Calculus was developed mainly in order to deal with two seemingly intractable problems: (1) how to estimate accurately the areas and volumes of irregularly shaped figures and (2) how to predict physical behaviour once you know the initial conditions and the ‘rates of change’.

We humans have a strong penchant for visualizing distances and areas in terms of straight lines, squares and rectangles ― I have sometimes wondered whether there might be an amoeba-type civilization which would do the reverse, visualizing straight lines as consisting of curves, and rectangles as extreme versions of ellipses. ‘Geo-metria’ (lit. ‘land measurement’) was, according to Herodotus, first developed by the Egyptians for taxation purposes. Now, once you have chosen a standard unit of distance for a straight line and a standard square as a unit of area, it becomes a relatively simple matter to evaluate the length of *any* straight line and the area of *any* rectangle (provided they are not too large or too distant, of course). Taking things a giant step forward, various Greek mathematicians, notably Archimedes, wondered whether one could in like manner estimate accurately the ‘length’ of arbitrary curves and the areas of arbitrarily shaped expanses.

At first sight, this seems impossible. A curve such as the circumference of a circle is *not* a straight line and never will become one. However, by making your unit of length progressively smaller and smaller, you can ‘measure’ a given curve by seeing how many equal little straight lines are needed to ‘cover’ it as nearly as possible. Lacking power tools, I remember once deciding to reduce a piece of wood of square section to a cylinder using a hand plane and repeatedly running across the edges. This took me a very long time indeed, but I did see the piece of wood becoming progressively more and more cylindrical before my eyes. One could view a circle as the ‘limiting case’ of a regular polygon with an absolutely enormous number of sides, which is basically how Archimedes went about things with his ‘method of exhaustion’ (**Note 1**).
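Archimedes’ doubling procedure can be sketched in a few lines of Python (a minimal modern illustration, not his actual computation; the function name `exhaust_pi` is invented here). Starting from a regular hexagon inscribed in a unit circle, each step doubles the number of sides using only the half-angle relation, so no prior knowledge of π is required:

```python
import math

def exhaust_pi(doublings):
    """Estimate pi as the half-perimeter of an inscribed regular polygon,
    doubling the number of sides at each step (method of exhaustion)."""
    n, side = 6, 1.0  # a hexagon inscribed in a unit circle has side 1
    for _ in range(doublings):
        # half-angle relation: side of the 2n-gon from the side of the n-gon
        side = math.sqrt(2.0 - math.sqrt(4.0 - side * side))
        n *= 2
    return n * side / 2.0  # half the perimeter approximates pi

print(exhaust_pi(0))   # hexagon: exactly 3.0
print(exhaust_pi(4))   # 96-gon, the polygon Archimedes actually used
print(exhaust_pi(12))  # 24576-gon: correct to about 8 decimal places
```

Four doublings reproduce Archimedes’ 96-gon, which already pins π down near 3.141; further doublings close in on the circle much as the hand-planed block of wood closes in on the cylinder.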

It is important to stop at this point and ask under what conditions this stratagem is likely to work. The most important requirement is the ability to make your original base unit progressively smaller at each successive trial measurement while keeping the successive units in a fixed proportion to one another. Though there is no need to drag in the infinite, which the Greeks avoided like the plague, we do need to suppose that we can reduce our original unit of length *indefinitely* in a regular manner, say by halving it at each trial. In practice, this is never possible, and craftsmen and engineers have to call a halt at some stage, though, hopefully, only when an acceptable level of precision has been attained. This is the point historically where mathematics and technology part company, since mathematics typically deals with the ‘ideal’ case, not with what is realizable or directly observable. With the Greeks, the gulf between observable physical reality and the mathematical model had started to widen.

What about (2), predicting physical behaviour when you know the initial conditions and the ‘rates of change’? This was the great achievement of the age of Leibniz and Newton. Newton seems to have invented his version of the Calculus in order to show, amongst other things, that planetary orbits *had* to be ellipses, as Kepler had found was in fact the case for Mars. Knowing the orbit, one could predict where a given planet or comet would be at a given time. Now, a ‘rate of change’ is not an independently ‘real’ entity: it is a *ratio* of two more fundamental items. Velocity, our best-known ‘rate of change’, does not have its own unit in the SI system ― but the metre (the unit of distance) and the second (the unit of time) are internationally agreed basic units. So we define speed in terms of metres per second.

Now, the distance covered in a given time by a body is easy enough to estimate if the body’s motion is in a straight line and its speed does not increase or decrease; but what about the case where velocity is changing from one moment to the next? As long as we have a reliable correlation between distance and time, preferably in the form of an algebraic formula *y = f(t)*, Newton and others showed that we *can* cope with this case in somewhat the same way as the Greeks coped with irregular shapes. The trick is to assume that the supposedly ever-changing velocity is *constant* (and thus representable by a straight line) over a *very brief* interval of time. Then we add up the distances covered in all the relevant time intervals. In effect, what the age of Newton did was to transfer the exhaustion procedure of Archimedes from the domain of statics to dynamics. Calculus does the impossible twice over: the Integral Calculus ‘squares the circle’, i.e. gives its area in terms of so many unit squares, while the Differential Calculus allows us to predict the exact whereabouts of something that is perpetually on the move (and thus *never* has a fixed position).
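The trick just described (treat the velocity as constant over each brief interval, then add up the little distances) is what we would now call a Riemann sum. A minimal Python sketch, with invented names, for a body whose velocity grows steadily as in free fall:

```python
def distance_travelled(velocity, t0, t1, steps):
    """Approximate the distance covered by pretending the velocity is
    constant over each brief interval dt, then summing the pieces."""
    dt = (t1 - t0) / steps
    total = 0.0
    for i in range(steps):
        total += velocity(t0 + i * dt) * dt  # distance = speed x time on this slice
    return total

# Free fall from rest: v(t) = g*t, so the exact distance over [0, 2] seconds
# is g * 2**2 / 2 = 19.6 metres (taking g = 9.8 m/s^2).
g = 9.8
approx = distance_travelled(lambda t: g * t, 0.0, 2.0, 100_000)
print(approx)  # close to 19.6; halving dt halves the remaining error
```

The more brief intervals we use, the closer the sum creeps to the exact answer ― the dynamical twin of covering a curve with ever-smaller straight segments.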

For this procedure to work, it must be possible, at least in principle, to reduce all spatial and temporal intervals indefinitely. Is physical reality actually like this? The post-Renaissance physicists and mathematicians seem to have assumed that it was, though such assumptions were rarely made explicit. Leibniz got round the problem mathematically by positing ‘infinitesimals’ and ultimate ratios between them: his ‘Infinitesimal Calculus’ gloriously “has its cake and eats it too”. For, in practice, when dealing with an ‘infinitesimal’, we are (or were once) at liberty to regard it as entirely negligible in extent when this suits our purposes, while never permitting it to be strictly zero, since division by zero is meaningless. Already in Newton’s own lifetime, Bishop Berkeley pointed out the illogicality of the procedure, as indeed of the very concept of ‘instantaneous velocity’.

The justification of the procedure was essentially that it seemed to work magnificently in most cases. Why did it work? Calculus typically deals with cases where there are two levels, a ‘micro scale’ and a ‘macro scale’, the latter being all that is directly observable to humans ― the world of seconds, metres, kilos and so on. If a macro-scale property or entity is believed to increase by micro-scale chunks, we can (sometimes) safely discard all terms involving *δt* (or *δx*) which appear on the Right Hand Side while still keeping a ‘micro/micro’ ratio on the Left Hand Side of the equation (**Note 2**). This ‘original sin’ of Calculus was only cleaned up in the late 19^{th} century by the key concept of the mathematical *limit*. But there was a price to pay: the mathematical model had moved even further away from observable physical reality.

The artful concept of a limit does away with the need for infinitesimals as such. An indefinitely extendable sequence or series is said to ‘converge to a limit’ if the gap between the suggested limit and any and every term after a certain point is smaller than *any* proposed positive quantity. For example, it would seem that the sequence *1/2, 1/3, 1/4, … 1/n* gets closer and closer to zero as *n* increases, since for any proposed gap, we can do better by making *n* twice as large and thus *1/n* twice as small. This definition gets round the problem of actual division by zero.
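The definition can be checked mechanically: given any proposed gap ε, repeated doubling of *n* (as suggested above) eventually makes *1/n* smaller than ε, and it stays smaller for every later term. A small Python sketch (the function name `index_beyond` is invented here):

```python
def index_beyond(epsilon):
    """Find, by repeated doubling, an index n beyond which every term
    1/n of the sequence lies within epsilon of the limit 0."""
    n = 1
    while 1.0 / n >= epsilon:
        n *= 2  # doubling n halves 1/n, as in the text
    return n

for eps in (0.1, 0.001, 1e-6):
    n = index_beyond(eps)
    # every later term also stays within the proposed gap
    assert all(1.0 / m < eps for m in range(n, n + 1000))
    print(eps, n)
```

Note that no term of the sequence ever *equals* zero: the limit is approached but, as Note 3 stresses, never attained.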

But what the mathematician does not address is whether in actual fact a given process ever actually attains the mathematical limit (**Note 3**), or how near it gets to it. In a working machine, for example, the input energy *cannot* be indefinitely reduced and still give an output, because there comes a point when the input is not capable of overcoming internal friction and the machine stalls. All energy exchange is now known to be ‘quantized’ ― but, oddly, ‘space’ and ‘time’ are to this day still treated as being ‘continuous’ (which I do not believe they are). In practice, there is almost always a gulf between how things ought to behave according to the mathematical treatment and the way things actually do or can behave. Today, because of computers, the trend is towards slogging it out numerically to a given level of precision rather than using fancy analytic techniques. Calculus is still used even in cases where the minimal increment of the independent variable is actually *known*. In population studies and thermodynamics, for example, the increase *δx* or *δn* cannot be less than a single person, or a single molecule. But if we are dealing with hundreds of millions of people or molecules, the Calculus treatment still gives satisfactory results. Over some three hundred years, Calculus has evolved from being an ingenious but logically flawed branch of applied mathematics to being a logically impeccable branch of pure mathematics that is rarely if ever directly embodied in real-world conditions. *SH*
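The point about large numbers can be illustrated directly. Suppose a total grows by whole units (one person, one molecule at a time), so that the true total is a discrete sum; replacing the sum by the corresponding integral, as Calculus does, introduces a relative error of roughly *1/N*. A hypothetical Python sketch:

```python
def discrete_total(N):
    """The 'true' total: 1 + 2 + ... + N, accumulated one whole unit at a time."""
    return sum(range(1, N + 1))

def calculus_total(N):
    """The Calculus idealization: the integral of x dx from 0 to N."""
    return N * N / 2.0

N = 1_000_000
rel_error = abs(discrete_total(N) - calculus_total(N)) / discrete_total(N)
print(rel_error)  # about 1e-6: negligible when N runs to the millions
```

For small populations the discrepancy matters; for millions of people or molecules it vanishes into the rounding, which is why the continuous treatment keeps giving satisfactory results.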

**Note 1** It is still a subject of controversy whether Archimedes can really be said to have invented what we now call the Integral Calculus, but he certainly came very close.

**Note 2** Suppose we have two variables, one of which depends on the other. The dependent variable is usually denoted *y*, while the independent variable is, in the context of dynamics, usually *t* (for time). We believe, or suppose, that *any* change in *t*, no matter how tiny, will result in a corresponding increase (or decrease) in *y*, the dependent variable. We then narrow down the temporal interval *δt* to get closer and closer to what happens at a particular ‘moment’, and take the ‘final’ ratio, which we call *dy/dt*. The trouble is that we need to get rid of *δt* completely on the Right Hand Side but keep it non-zero on the Left Hand Side, because *dy/0* is meaningless ― it would correspond to the ‘velocity’ of a body when it is completely at rest.
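As a worked instance of this procedure (my example, for the simple case *y = t²*):

```latex
\delta y = (t+\delta t)^2 - t^2 = 2t\,\delta t + (\delta t)^2 ,
\qquad\text{so}\qquad
\frac{\delta y}{\delta t} = 2t + \delta t .
```

Discarding the lone *δt* on the right (while keeping *δt* non-zero on the left, so that the ratio remains legitimate) gives *dy/dt = 2t* ― precisely the step Berkeley objected to, and the one the limit concept later made rigorous.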

**Note 3** Contrary to what is generally believed, practically all the sequences we are interested in do *not* actually attain the limit to which they are said to converge. Mathematically, this does not matter ― but logically and physically it often does.