When Alan Turing was born 100 years ago, on June 23, 1912, a computer was not a thing–it was a person. Computers, most of whom were women, were hired to perform repetitive calculations for hours on end. The practice dated back to the 1750s, when Alexis-Claude Clairaut recruited two fellow astronomers to help him plot the orbit of Halley's comet. Clairaut's approach was to slice time into segments and, using Newton's laws, calculate the changes to the comet's position as it passed Jupiter and Saturn. The team worked for five months, repeating the process again and again as they slowly plotted the course of the celestial bodies.
Today we call this process dynamic simulation; Clairaut’s contemporaries called it an abomination. They desired a science of fundamental laws and beautiful equations, not tables and tables of numbers. Still, his team made a close prediction of the perihelion of Halley’s comet. Over the following century and a half, computational methods came to dominate astronomy and engineering.
By the time Turing entered King's College in 1931, human computers had been employed for a wide variety of purposes–and often they were assisted by calculating machines. Punch cards were used to control looms and tabulate the results of the American census. Telephone calls were switched using numbers dialed on a rotary dial and interpreted by a series of 10-step relays. Cash registers were ubiquitous.
A “millionaire” was not just a very rich person–it was also a mechanical calculator that could multiply and divide with astonishing speed.