A Daily History of Holes, Dots, Lines, Science, History, Math, the Unintentional Absurd & Nothing |1.6 million words, 7000 images, 3.5 million hits| Press & appearances in The Times, The Paris Review, Le Figaro, The Economist, The Guardian, Discovery News, Slate, Le Monde, Sci American Blogs, Le Point, and many other places... 3,000+ total posts
This was a surprise, finding M. Bollée's article (Sur une nouvelle machine à calculer) in this 1889 Comptes Rendus, pecking around in that big 10-pound volume looking for something else. It was very easy to miss if you weren't looking for it, just a few pages long in a 1000-page book. But there it was, nestled comfortably on pp. 737-739. In these few pages Bollée describes his machine, with particular reference to his innovative approach to direct multiplication--a fine addition (ha!) to the long line of contributions by Babbage and Clement, Scheutz, Wiberg, Grant and Hamann.
Léon Bollée, "Sur une nouvelle machine à calculer," in Comptes Rendus de l'Académie des Sciences (Paris), vol. 109, 1889, pp. 737-739.
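To see why direct multiplication mattered: earlier machines multiplied by repeated addition (6 × 347 meant adding 347 six times), while Bollée built a one-digit multiplication table into the mechanism, so each multiplier digit produced its partial product in a single operation. The sketch below is only a numerical analogue of that idea--the function names and the dictionary "table" are mine, and the actual machine's tongue-plate mechanics are of course far more intricate.

```python
# A built-in one-digit times table -- the numerical stand-in for the
# machine's mechanical multiplication body.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def direct_multiply(multiplicand: int, multiplier: int) -> int:
    """Multiply using one table lookup per digit pair, rather than one
    addition per unit of the multiplier (the repeated-addition method)."""
    total = 0
    for place, m_digit in enumerate(reversed(str(multiplier))):
        # One pass per multiplier digit: look up each partial product.
        partial = sum(TABLE[(int(d), int(m_digit))] * 10**i
                      for i, d in enumerate(reversed(str(multiplicand))))
        total += partial * 10**place
    return total
```

The point of the sketch is the cost model: repeated addition does work proportional to the size of the multiplier, while the table method does work proportional only to the number of its digits.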
An image of the machine from The Manufacturer and Builder:
“The most ignorant person at a reasonable charge, and with little bodily labor, may write books in philosophy, poetry, law, mathematics, and theology, without the least assistance from genius or study.” Jonathan Swift, in Gulliver’s Travels (Actually, Travels into Several Remote Nations of the World, in Four Parts. By Lemuel Gulliver, First a Surgeon, and then a Captain of several Ships). 1726.
I've produced the beginning of an alphabet of -Punkisms for variations of robot/machine/computer pasts and futures, science fiction indicators of possibility. Why should we stop at "Steampunk" when there's FuturePunk and DeadPunk and such to be had? So, please find following a few possibilities, and accept them in the playful way in which they are offered--also, the very abbreviated descriptions of the science fiction works described are open to interpretation. And please give this a "pass" for the over-abundance of hyphens.
ActorPunk: Walter Miller, 'The Darfsteller' (1954), human actors are replaced by robots on stage, as compared to being replaced by digital figures online. Some steps in this direction have been taken with great care over the years by "perfecting" the imaging of women in magazine advertisements--in this way even the models who appear in the ads, once modified, find it impossible to live up to the expectations of what their ads depict.
Anti-technologicalPunk-topia: Samuel Butler, Erewhon, (1872).
AutomatoPunk: Kurt Vonnegut, Player Piano (1952), like Brazil and 1984, but with machines.
BiologoPunk: Philip K. Dick, 'Autofac' (1955), machines find that they can reproduce themselves in a '50's iron-bio kind of way.
BrainPunk: Miles J. Breuer, 'Paradise and Iron' (1930).
ConsciousnessPunk: Philip K. Dick, Vulcan's Hammer (1960) and the development of computer consciousness. Also David Gerrold, When Harlie Was One (1972); Frank Herbert, Destination: Void (1966); Harlan Ellison, 'I Have No Mouth, and I Must Scream' (1967); Robert Heinlein, The Moon Is a Harsh Mistress (1966), and many others.
I'm unsure of when the first images appear representing the human mind as a sort of anthropomorphic filing system, utilizing a desk or filing cabinet or (later) a computer. From the early history of human memory-making, mnemonic devices and memory palaces offer representations of where information in the brain could be stored for rapid access and retrieval--memories stored in rows of theater seating, or in the branches of complex trees, or in the buildings of a bird's-eye street map. But these show where the memory "goes," and not where this memory set sits in the brain of the individual.
This came to mind on seeing this odd little ad in the magazine Illustrated World for July 1919, with its fairly high-creep-factor faces endowed with different sorts of cerebral applications. The top man is depicted with a messy desk and a hand-cranked calculator; at bottom we see the organized man, with papers sorted in their labeled places. (As it turns out, the image was used by a company selling memory-improvement books.)
Perhaps this image was in a small way a pre-historic insight into the brain-computer interface (the acquired, direct signal processing of the brain to a computer), in the same way, say, as Hans Berger's 1924 invention of the electroencephalogram, where we can actually see electrical activity of the brain displayed on a piece of paper. Of course, one image is a simple semi-folly to help hawk a mostly-useless huckster's book on memory improvement, while the other is a bona fide medical breakthrough. But in similar ways they were insights into looking at the activity of the brain in connection to an external resource.
In 1944 there was something else, something quite different allocated to the Leonardo-like head, something far in advance of the filing system of 1919: the computer.
This may well be the first public, popular report on the Harvard Automatic Sequence Controlled Calculator (ASCC) (appearing in the American Weekly, 15 October 1944), the first automatic, general-purpose, digital calculator. Known as the MARK I, it was the brainchild of Howard Aiken (1900-1973), a graduate student at Harvard, who started it all in 1937 by proposing a series of coordinated Monroe calculators that would function as a unified whole, moving calculations that were theoretically possible but physically impracticable into the realm of the eminently doable. The project was immeasurably aided by the input of Harvard astronomer Harlow Shapley (who had earlier dealt with the enormously problematic aspects of the scale of the universe), who put Aiken in the hands of IBM at Endicott (NY). From there the building of the computer came under the supervision of Clair D. Lake, with the engineering and theoretical team of Francis Hamilton, Ben Durfee and Howard Aiken. The machine was basically completed in 1943 and tested at Endicott for the better part of a year before it was shipped off to Harvard in February 1944, where it was put almost instantly to work on ballistic calculations (like its cousin ENIAC at the Moore School at U Penn), as well as naval engineering and design issues.
The author of this article, Gobind Behari Lal ("noted science analyst"), actually does a pretty decent job describing the machine and its (1940's) possibilities, noting at the end that "it may even unleash for Man's Use the long-dreamed-of energy of the atom". This part did come true, especially post-war, when the machine was put to good use by the US Atomic Energy Commission.
The part I really don't understand in this article is the comparison of the speed and function of the ASCC to two women working on calculators and to Albert Einstein, working with pencil, paper and pipe, none of whom look comfortable or happy. (Actually the dresses on the women look a little hiked-up to me, just a little too high.) Lal does make a decent comparative point (of uncertain veracity) about four generations of humans (the three above-mentioned calculators?) doing calculations that the ASCC could do in seconds. Right or wrong, it gave the crowds in 1944 a real something to think about.
And so at the end of this post I believe that the representation of the brain as a mechanical device is relatively new, and I wonder if it isn't a mostly-20th century creation. Finding images of SteamPunk humans and robots and such roaming their ways through the literature in the 1920's-1950's is fairly easy, but showing the stuff inside the head and representations of how the mechanical brain was functioning seems to be a different matter.
Leonardo wrote backwards, from right to left; Benjamin Button lived backwards at the hands of Scott Fitzgerald; René Magritte's man in the bowler saw the back of his head; Herriman's Ignatz the Mouse, I am sure, saw the back of his head looking around the world with the world's most powerful telescope; rugby passes are all made backwards; paper images of vues d'optique appear backwards; lightning for all intents and purposes starts backwards, from the ground up; reverse mathematics is worked from theorems to axioms; and the Chicago River (1900) was engineered to flow backwards for the foreseeable future, while the Mississippi River famously flowed backwards for just a bit in the New Madrid earthquake of 1812.
I can only imagine what audiences must have felt when they saw the first moving pictures played backwards--seeing them played forwards was a novel-enough (and revolutionary) idea, but the simple idea of reversing the direction of the film would have proved to be equally fascinating.
Imagine the first time you witnessed a staged train wreck on film, back there in 1897, and imagine being able to see it played over and over again, until you were filled. I'm not so sure that there were even any still photographs of a train wreck as it occurred up to this point, even with advances in film speed and lenses, so seeing the event unfold in front of you at leisure must have been overwhelming. Now imagine these same folks seeing the event and watching the locomotives reconstitute themselves. It would have been an extraordinary event. Even observing the Étienne-Jules Marey sequences and seeing what actually happens when a person bends over to pick up a pail of water would have revealed almost as much in new detail as when Galileo was in the middle of his earliest observations.
Looking at things backwards is a good idea so far as thinking about engineering problems goes, and of course in checking experimental results in the sciences--it's not so good an idea, though, to change the results produced by the scientific method because they're not a good intuitive fit to expected parameters.
Such was the case with the first (and successful) employment of a computer to predict the outcome of a presidential election. The computer was the UNIVAC (the world's first commercial computer and a blazingly fast machine at 10,000 operations a second, nearly six orders of magnitude slower than "superfast" by contemporary standards), which was brought in by Remington Rand to CBS News to crunch the numbers on the tight race between General Dwight Eisenhower and Gov. Adlai Stevenson (II) on 4 November 1952. (Stevenson was the grandson of a former U.S. Vice President and would run again against Eisenhower in 1956.) Pioneers Pres Eckert and John Mauchly ran the operation, along with Max Woodbury (and programmer Harold Sweeney, who is seated at the UNIVAC's control panel in the iconic photo at top, with Eckert at center and anchorman Walter Cronkite at left, and who seems never to be mentioned). CBS News Chief Sig Mickelson and Cronkite were not comfortable with the proposal, but ran with it anyway, sensing a moment of the-future-is-now.
The Eisenhower/Stevenson race was seen by the large majority of pundits to be too close to call, so when the UNIVAC's results pointed to a landslide for Eisenhower (438 electoral votes and 43 states to Stevenson's 93 electoral votes and 5 states) folks got very sweaty and nervous, not trusting the outcome. As this was still a very early age in human-machine interaction, and the computed results fell far away from perception and expected response, changes were made in the UNIVAC's programming to coax a more "reasonable" response from the machine, the new results making the race very tight, fitting human expectations, and giving Eisenhower a very slim margin of victory. As poll results started to sweep in an hour or so later indicating that Eisenhower was headed to a huge victory, the UNIVAC was again reprogrammed, and at about midnight the announcement was made that the UNIVAC had indeed been correct in the first place. The final results were 442 electoral votes for Eisenhower and 89 for Stevenson. In the next presidential election, in 1956, the three networks all had computers working for them--with them--and a different perception had been formed of working with computers.
I wrote a few days ago on one great event of 1876--the invention of the Bell telephone, the successful, most appropriate, best-working telephone--mentioning that there were other great achievements in this year as well. The four-stroke engine (Nikolaus Otto), the application of thermodynamics to chemical change (J. Willard Gibbs, one of the very few people whose portrait would later hang on Einstein's Princeton walls), Robert Koch's bacterial cultivation, and Eugen Goldstein's work on Plücker's (cathode) rays all came into being in this year. But there was another remarkable development in that year that also contained a pretty clear vision of the future, of the shape of things to come.
That vision belonged to Lord Kelvin, whose work across many different fields, and at great and expanded levels, was extraordinary; but it was in his tide predictor that the future materialized. He was one of the few people who could wear the future-specs really well, right alongside Strickland, and Babbage, and Bush, and Turing and von Neumann, a true visionary whose vision awaited the appropriate technology to machine it.
Thomson was attracted to many things, not the least of which were gadgets, like slide rules, to which he brought his profound capacities. He saw that these arithmetical tools were analog computers, and that bound together they represented much more than just themselves--the seeds of far more powerful calculating engines. His great breakthrough was a tidal predictor, which he devised in 1873 and wrote about in a seminal 1876 paper describing what is basically the world's first analog computer.
"1876-1878, Baron [ Lord ] Kelvin builds his harmonic analyzer and tide predictor machines. The harmonic analyzer broke down complex harmonic, or repeating, waves into the simpler waves that made them up. The tide predictor machine could calculate the time and height of the ebb and flood tides for any day of the year."--York University, here.
See Thomson's excellent lecture given at the 25 August 1882 Southampton Meeting of the British Association for the Advancement of Science, here.
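The principle behind both machines can be shown numerically: the harmonic analyzer recovered the simple constituent waves from a tide record, and the predictor then summed those sinusoids (mechanically, with pulleys and a wire) to trace future heights. Below is a minimal numerical sketch of the prediction step; the constituent names are standard tidal ones, but the amplitudes and phases are illustrative values I've made up, not measurements for any real port.

```python
import math

# Each tidal constituent: (name, amplitude in meters,
# speed in degrees per hour, phase lag in degrees).
# Values are illustrative, not real harmonic constants.
CONSTITUENTS = [
    ("M2", 1.20, 28.984, 110.0),  # principal lunar semidiurnal
    ("S2", 0.40, 30.000, 140.0),  # principal solar semidiurnal
    ("K1", 0.25, 15.041, 200.0),  # lunisolar diurnal
    ("O1", 0.15, 13.943, 180.0),  # lunar diurnal
]

def tide_height(t_hours: float, mean_level: float = 2.0) -> float:
    """Predicted water height at time t: the mean level plus one
    cosine term per constituent -- one pulley per term on the machine."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * t_hours - phase))
        for _name, amp, speed, phase in CONSTITUENTS)
```

Kelvin's insight was that this sum, tedious by hand for every hour of a year's almanac, could be formed continuously by a mechanism, each pulley oscillating at one constituent's speed.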
I've found a supplement to an earlier computer tree that I published on this blog (here) as a part of a chronological list of (nearly every) computer manufactured from 1943 to 1990. The new one is interesting, differing from its predecessor in that it divides its generations of computers by logic technology. It is found in a 1960 NSF pamphlet called "The Family Tree of Computer Design," a brief summary of computer development; I found its reference in a good book by I.B. Cohen, Howard Aiken: Portrait of a Computer Pioneer, published by MIT Press and available here.
There is a terrific find on Alex Bellos' website exhibiting Alan Turing's "report cards" from his time at the great Sherborne School, 1926-1930 (transcribed by archivist Rachel Hassall), from the time when Turing was 14 to 19 years old. Turing (1912-1954) I think needs no introduction for his importance to mathematics and computing (and code breaking during WWII), and it is very interesting--thrilling, even--to see how his instructors were coming to grips with the developing genius. Even at such a school as Sherborne (a very old school, with 39 headmasters overseeing the place since 1437), where the teachers were I am sure familiar with gifted pupils, the comments on the reports of Turing's progress showed that many weren't quite sure what Turing was all about. Obviously Turing as a boy was very gifted, but many instructors reported as many hindrances to his intellectual development as there were advances--more, even.
Perhaps people at the school didn't know exactly how to deal with him; perhaps they did, but still at the end of the day Turing had to meet the common standards of the school. Or perhaps not--I really can't tell from the transcripts presented by Bellos and I don't know the intricate history of the school. But certainly as time progressed Turing's abilities were more readily recognized; early on it seems that his talents didn't overwhelm his many supposed shortcomings, the faults of the parts larger than the whole of what he could accomplish. In instructors' comments across all of his disciplines, Turing was "capricious", "untidy", "lacking in life", "need(ed) concentration", "depressing unless it amuses him", "careless", "absent minded", "un-methodological", "slovenly", (made) "mistakes as a result of hasty work", and so on. He "could do much better" though one instructor felt that "he may fail through carelessness". All of which may well have been true--from the outside. These statements may have simply been the result of teachers not being able to reach a boy genius, and perhaps the boy couldn't be reached, at least early on in his academic career.
The statements in general—especially in the maths—I think are fascinating things. It may be easy to judge some of the remarks as intemperate, the teachers unable to clearly see the genius-in-the-making who (70 years later) we can so clearly see today. I think the remarks need more careful consideration than that, and that is where they become interesting.
Here are some selections from the reports on Alan Turing, 1926-1930, below; a fuller list exists at the Bellos site, here.
1926. Works well. He is still very untidy. He must try to improve in this respect
1927. Very good. He has considerable powers of reasoning and should do well if he can quicken up a little and improve his style.
____. A very good term’s work, but his style is dreadful and his paper always dirty.
____. Not very good. He spends a good deal of time apparently in investigations in advanced mathematics to the neglect of his elementary work. A sound ground work is essential in any subject. His work is dirty.
____. Despite absence he has done a really remarkable examination (1st paper). A mathematician I think.
____. I think he has been somewhat tidier, though there is still plenty of room for improvement. A keen & able mathematician.
The History of ASCII. I just wanted to include this short bit on a small archive of material related to the development of ASCII from one of the team members who helped to create it. A fuller description appears in the "continue reading" section, below.
"Leprechaun, an Automatic Digital Computer the Size of a Television Set" was a short article in the July 1957 issue of Computers and Automation. It is uncredited, but the two-pager I think was the product of the editor, Edmund Berkeley, and discussed a remarkable new computer that was as "small" as a (not-by-today's-standards small) television set. The Leprechaun used seriously fewer components than a "regular" machine (though that standard is not identified)--"only about 9,000 electrical components", half of which were transistors.
Leprechaun represented a significant advance in computer design, with its innards very reachable and accessible, making it a highly useful tool for testing components for other machines. The TRADIC (for TRAnsistor DIgital Computer, or TRansistorized Airborne DIgital Computer), completed in 1954, was the first transistorized computer in America, the brainchild of J. H. Felker of Bell Labs for the U.S. Air Force; it was initially designed for use aboard an aircraft or a naval vessel (making it the first airborne transistorized digital computer). The idea of the manageably-sized computer had only recently lifted itself from science fiction thoughts of it being able to fit in the trunk of a car (as envisioned somewhat earlier by Isaac Asimov), and here we see it, already in the future-made-present, miniaturized to the size of a television set.
Well, perhaps not "fun", unless that was an acronym for "fabulously understated nomenclature". The first mobile computers--as science fiction-y
The Mobile Digital Computer was intended to be a transistorized van-mounted computer used to store and route data as part of the U.S. Army's Fieldata system. The machine was indeed built and deployed by 1959--as were the MOBIDIC A, B, C, D, E and 7a by the early 1960's--and it was a successful component, even though the overall network was not successful. Fieldata was supposed to integrate all manner of information and distribute it to battlefield recipients. My friend (Dr.) Carl Hammer (1914-2004), whom I knew from being in the neighborhood in Georgetown, was a delightful man who had a long and significant history in the development of the modern computer. He told me one afternoon--stopping in to visit on his constitutional--in his sly and amusing way about working on the MOBIDIC while he was at Sylvania. (He had just finished heading up Remington Rand's UNIVAC European Division before going to Sylvania.) Anyway, he started his story about the MOBIDIC by telling me that it was the world's first portable computer (sitting in a 42-foot-long semitrailer) and that it had gun racks. The reason for the gun racks was simple--if something was made by the U.S. Army, and it had wheels, then it had to have a gun rack. Case closed.
Now, to the contender, the "other" first mobile computer, the DYSEAC on its computer trailer. Most of what I have read gives the MOBIDIC priority, but others clearly place the DYSEAC in operation in 1954, years before the MOBIDIC became operational. In any event the DYSEAC was the Second Standards Electronic Automatic Computer, a first-generation National Bureau of Standards computer built for the U.S. Army Signal Corps. Here's the cross-section cutaway for it:
And so on to the battle between the two, outfitting them perhaps with crunching and sawing devices, metal biting bits, and so on, I wonder which might be the one to come out on top? I think I'd like to claim the MOBIDIC, if for no other reason than it was armed. And the name, of course.
[Source: Columbia University computing history site, here.]
I guess that the only reason why HP charges for their printers is that they can. When you buy one of these products you're basically purchasing the need to keep the things fed with semi-proprietary HP ink--and as everyone knows, printers are notoriously thirsty creatures, and one can easily spend multiples of the cost of the printer on ink in the first year alone.
This is a great idea so far as the manufacturer goes, but it is hardly a new one--International Business Machines counted on this sort of income for several decades, partially getting the company through the Great Depression.
And what was the IBM necessary-suppliable that their customers had to keep buying over and over? It wasn't the business machines themselves, because it was IBM practice to rent their machines out (which would pay for the initial investment and production of the machine in about two years, and most customers seemed to keep their rented material for 5 or 7 or 10 years). What IBM kept supplying their customers with was the stuff that they sent through the machines--the IBM cards. The customer needed the cards from IBM itself, mainly because it was part of the contractual agreement for the lease of the machine, and also because the IBM product was superior to other mass-produced cards. In the 1930's the card business for IBM amounted to something like a few billion cards per year, which evidently would account for 30-40% of IBM's yearly profit. And that's quite something.
The idea of the necessary refill is not IBM's to claim for themselves--years earlier, Eastman Kodak accomplished the same deal with film for their cameras, and razor blades were supplied by Gillette to users of their razors. And although you don't need Ford gasoline to run a Ford automobile, in the 1930's you did need a General Motors spark plug to run a G-M vehicle. Radio Corporation of America sold radios and also the necessary tubes to replace the ones in the stock radio; Thomas Edison too had a vastly controlling interest in how his light bulbs would be installed and replaced.
So as annoying as it might be to have to pay a fair amount of money for a small amount of ink to make your printer function, the printer-producing companies are just following an old (and highly profitable) business practice.
The cover of this magazine--Computers and Automation, one of the earliest popularly-based journals dealing with electronic computation--features a rather remarkable image, an English message, a response to a problem, from a computer. The editor, Edmund C. Berkeley, wrote a short appreciation of this "output device for an automatic computer", a "symbol generator and viewer".
"The screen of the picture tube shown will present as many as 10,000 characters per second. Each character is formed by an array of bright spots, a selection from a rectangular array of a total of 35 spots, five wide and seven deep. For a capital letter T, for example, the selection is five spots across the top and six more spots down through the middle...
For 1957, it would have been a remarkable thing to see messages displayed in such a fashion, in a revolutionary new way.
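The character scheme Berkeley describes is easy to reproduce: each glyph is a selection of bright spots from a 5-wide by 7-deep rectangular array, and his own example, the capital T, is five spots across the top plus six more down through the middle. The little sketch below (names and representation mine) just encodes that description literally.

```python
# A capital T on the 5x7 array described in the 1957 article:
# five spots across the top row, six more down the middle column.
# '#' marks a bright spot, '.' an unlit position.
T_GLYPH = [
    "#####",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
]

def lit_spots(glyph):
    """Count the bright spots used out of the 35 available positions."""
    return sum(row.count("#") for row in glyph)
```

So the T uses 11 of the 35 possible spots, and at the claimed 10,000 characters per second the tube was selecting such spot patterns hundreds of thousands of times each second.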
This lovely illustrated story appears in the December 1957 issue of Edmund Berkeley's Computers and Automation, which was the world's first semi-popular magazine devoted to the computer. The article is divided into sections by question, including:
"What is 'Operating a Computer' Like?", showing the "new Computing Center" at the famous Moore School of Electrical Engineering at the University of Pennsylvania, the place where most of modern computing in America was born in 1944/5/6.