JF Ptak Science Books, Quick Post
Here's a selection of nine early television spots for computers and computing...and calculating. The piece on the coming Internet (1969) is, to me, the most interesting:
Mid-1950's and how engineers build computers:
JF Ptak Science Books Quick Post
"Leprechaun, an Automatic Digital Computer the Size of a Television Set" was a short article that appeared in the July 1957 issue of Computers and Automation. It is uncredited, but the two-pager was, I think, the product of the editor, Edmund Berkeley, and discussed a remarkable new computer that was as "small" as a (not by today's standards small) television set. The Leprechaun used seriously fewer components than a "regular" machine (though that standard is not identified)--"only about 9,000 electrical components", half of which were transistors.
Leprechaun represented a significant advance in computer design, with its innards very reachable and accessible, making it a highly useful tool for testing components for other machines. The TRADIC (for TRAnsistor DIgital Computer or TRansistorized Airborne DIgital Computer) was the first transistorized computer in America, completed in 1954, the brainchild of J. H. Felker of Bell Labs, built for the U.S. Air Force and initially designed for use aboard an aircraft or a naval vessel (making it the first airborne transistorized digital computer). The idea of the manageably-sized computer had only recently lifted itself from science fiction thoughts of it being able to fit in the trunk of a car (as envisioned somewhat earlier by Isaac Asimov), and here we see it, already in the future-made-present, miniaturized to the size of a television set.
JF Ptak Science Books Post 1741
Well, perhaps not "fun", unless that was an acronym for "fabulously understated nomenclature". The first mobile computers--as science fiction-y
The Mobile Digital Computer was intended to be a transistorized van-mounted computer used to store and route data as part of the U.S. Army’s Fieldata system. The machine was indeed built and deployed by 1959–as were the MOBIDIC A, B, C, D, E, and 7a by the early 1960's–and it was a successful component, even though the overall network was not successful. Fieldata was supposed to integrate all manner of information and distribute it to battlefield recipients. My friend (Dr.) Carl Hammer (1914-2004), whom I knew from being in the neighborhood in Georgetown, was a delightful man who had a long and significant history in the development of the modern computer. He told me one afternoon–stopping in to visit on his constitutional–in his sly and amusing way about working on the MOBIDIC while he was at Sylvania. (He had just finished heading up Remington Rand’s UNIVAC European Division before going to Sylvania.) Anyway, he started his story about the MOBIDIC by telling me that it was the world’s first portable computer (sitting in a 42-foot-long semitrailer) and that it had gun racks. The reason for the gun racks was simple–if something was made by the U.S. Army, and it had wheels, then it had to have a gun rack. Case closed.
Now, to the contender, the "other" first mobile computer, the DYSEAC on its computer trailer. Most of what I have read gives the MOBIDIC priority, but others clearly place the DYSEAC in operation in 1954, years before the MOBIDIC became operational. In any event, the DYSEAC was the Second Standards Electronic Automatic Computer, a first-generation National Bureau of Standards computer built for the U.S. Army Signal Corps. Here's the cross-section cutaway for it:
And so on to the battle between the two, outfitting them perhaps with crunching and sawing devices, metal biting bits, and so on, I wonder which might be the one to come out on top? I think I'd like to claim the MOBIDIC, if for no other reason than it was armed. And the name, of course.
JF Ptak Science Books Post 1735
[Source: Columbia University computing history site, here.]
I guess that the only reason why HP charges for their printers is that they can. When you buy one of these products you're basically purchasing the need to keep the things fed with semi-proprietary HP ink--and as everyone knows, printers are notoriously thirsty creatures, and one can easily spend multiples of the cost of the printer on ink in the first year alone.
This is a great idea so far as the manufacturer goes, but it is hardly a new one--International Business Machines counted on this sort of income for several decades, partially getting the company through the Great Depression.
And what was the IBM necessary-suppliable that their customers had to keep buying over and over? It was the business machines themselves, because it was IBM practice to rent their machines out (which would pay for the initial investment and production of the machine in about two years, and most customers seemed to keep their rented material for 5, 7, or 10 years). What IBM kept supplying their customers with was the stuff that they sent through the machines--the IBM cards. The customer needed the cards from IBM itself, mainly because it was part of the contractual agreement for the lease of the machine, and also because the IBM product was superior to other mass-produced cards. In the 1930's IBM's card business amounted to something like a few billion cards per year, which evidently accounted for 30-40% of the company's yearly profit. And that's quite something.
The idea of the necessary refill is not IBM's to claim for themselves--years earlier, Eastman Kodak accomplished the same deal with film for their cameras, and razor blades were supplied by Gillette to users of their razors. And although you didn't need Ford gasoline to run a Ford automobile, in the 1930's you did need a General Motors spark plug to run a GM vehicle. Radio Corporation of America sold radios and also the necessary tubes to replace the ones in the stock radio; Thomas Edison too had a vastly controlling interest in how his light bulbs would be installed and replaced.
So as annoying as it might be to have to pay a fair amount of money for a small amount of ink to make your printer function, the printer-producing companies are just following an old (and highly profitable) business practice.
JF Ptak Science Books Quick Post
The cover of this magazine--Computers and Automation, one of the earliest popularly-based journals dealing with electronic computation--features a rather remarkable image, an English message, a response to a problem, from a computer. The editor, Edmund C. Berkeley, wrote a short appreciation of this "output device for an automatic computer", a "symbol generator and viewer".
"The screen of the picture tube shown will present as many as 10,000 characters per second. Each character is formed by an array of bright spots, a selection from a rectangular array of a total of 35 spots, five wide and seven deep. For a capital letter T, for example, the selection is five spots across the top and six more spots down through the middle...
For 1957, it would have been a remarkable thing to see messages displayed in such a fashion--a revolutionary new way.
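Berkeley's 5-by-7 array of spots is easy to sketch in modern terms. The bitmap below is my own hypothetical encoding of the capital T he describes (five spots across the top, six more down through the middle), not the 1957 hardware's actual character set:

```python
# Each character is a 7-row, 5-column grid of spots, as in the 1957 display.
# A row is 5 bits; a 1 bit lights a spot.
T = [
    0b11111,  # five spots across the top
    0b00100,  # six more spots down through the middle
    0b00100,
    0b00100,
    0b00100,
    0b00100,
    0b00100,
]

def render(glyph):
    """Return the glyph as text, '#' for a lit spot, '.' for a dark one."""
    return "\n".join(
        "".join("#" if row & (1 << (4 - col)) else "." for col in range(5))
        for row in glyph
    )

print(render(T))
```

Eleven spots in all--which is exactly the selection from the 35-spot array that Berkeley's description calls for.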
JF Ptak Science Books Quick Post
This lovely illustrated story appears in the December 1957 issue of Edmund Berkeley's Computers and Automation, which was the world's first semi-popular magazine devoted to the computer. The article is divided into questioned sections, including:
"What is 'Operating a Computer' Like?", showing the "new Computing Center" at the famous Moore School of Electrical Engineering at the University of Pennsylvania, the place where most of modern computing in America was born in 1944/5/6.
JF Ptak Science Books Quick Post [Part of a series on the History of Holes.]
Well, this really isn't a "hole" per se, but it, the "hole", certainly behaves like one, at least metaphorically. The concept occurs in the title of this famous work by the problematic William Shockley (below)--it was the bible, really, of all early things relating to the semiconductor--the electron hole being the mathematical opposite of an electron (e-). (The electron is a subatomic particle with a negative charge, explained very early on in its first form as "radiant matter" by William Crookes in 1879, who built on the earlier work of Hittorf and then of Goldstein, with the name "electron" finally attached to the particle by George F. FitzGerald.)
The "hole" is a metaphor--a useful borrowing of a word to explain the absence of an electron from a full outer shell. In a semiconductor, an electric current is carried not only by the flow of electrons but also by the flow of positively charged holes where the electron absences occur--the hole is a charge carrier made of electronic absence, and it is the basis for modern electronics.
[This book may be purchased by the person who cannot live without it on our blog bookstore site.]
JF Ptak Science Books Post 1710
Well, probably. The ad is by UNIVAC—five years old at this point—and the lace-cuffed image pushing the virtues of the world's first commercial computer is that of John Napier's abacus, which he wrote about in 1617. The ivory calculating bones/rods method had been seen before in the history of maths and calculation, but not published, and the ideas employed in the elegant calculating device were very old, but Napier seems to have gotten to publish it first. (His work on logarithms is of tremendous importance, far more so than the abacus.)
The word “rabdologia” belongs to the title of Napier's significant book (where the title is the abstract): Rabdologia, or, The art of numbring by rods : whereby the tedious operations of multiplication, and division, and of extraction of roots, both square and cubick, are avoided, being for the most part performed by addition and subtraction : with many examples for the practice of the same ...
And I just want to say that this UNIVAC ad, which appeared in the November 1956 issue of the early computer journal Computers and Automation, got the "bones" correct. That is, the calculated product is correct.
The way these rods worked is as follows: in the illustration, the number rod on the extreme left (ranging from 2 to 9) is the multiplier, and the numbers at the top of the other rods represent the multiplicand. So we see that the lace-cuff is multiplying the number “76” (at the top of the two rods to the right) by, say, 7. Simply start writing the answer as follows: take the “2” from the 6 rod for your units digit; then add the numbers on the diagonal directly to the left of the 2 (4+9=13), place the 3 next to the 2 in the tens column, and carry the 1 to the next addition, which gives 4+1=5. So the answer: 532. The bones could do more than this, of course, but for right now I'd just like to point out that the ad folks got this right--plus it's nice to see a bunch of numbers in a public display that actually mean something. [See the Wolfram site for a nice explanation of how the bones work.]
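For the curious, the diagonal-addition procedure above can be sketched in a few lines of Python (my own illustration, not anything from the ad or from Napier): each rod cell holds the tens and units of rod-digit times multiplier, and the answer is read off by summing adjacent diagonals with carries, right to left.

```python
def napier_multiply(multiplicand: int, digit: int) -> int:
    """Multiply a number by a single digit the way Napier's rods do."""
    assert 1 <= digit <= 9
    # Each rod cell is (tens, units) of rod-digit * multiplier, left to right.
    cells = [divmod(d * digit, 10) for d in map(int, str(multiplicand))]
    out, carry, tens_from_right = [], 0, 0
    # Walk right to left: units of this cell + tens of the cell to its right.
    for tens, units in reversed(cells):
        s = units + tens_from_right + carry
        out.append(str(s % 10))
        carry = s // 10
        tens_from_right = tens
    # Leftmost diagonal: the tens of the first rod plus any remaining carry.
    s = tens_from_right + carry
    if s:
        out.append(str(s))
    return int("".join(reversed(out)))

print(napier_multiply(76, 7))  # 532, just as in the ad
```

Running the ad's example, 76 x 7, reproduces the 2, then 4+9=13 (write 3, carry 1), then 4+1=5: answer 532.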
John Napier's work in logarithms (published three years earlier, in 1614) is the work for all time; the Rabdologia, however, would have been instantly appreciated by people like his father, who was master of the mint of Scotland. That said, I've read here and there that Napier considered his most significant published work to be his A Plaine Discovery of the Whole Revelation of St. John (1593), in which he practiced a theo-chronometry based in the Book of Revelation that, among other things in its 300 pages, predicted that the world would come to an end in 1688. Or 1700. He evidently considered himself a theologian first and foremost, and what bothered him most was Pope Clement VIII, whom he considered to be the anti-Christ--and so complications arose. Win two, lose one.
[I've just uploaded Tompkins's classic/first textbook on the digital computer (High Speed Computing), 1950, to the books for sale section of this blog, here.]
JF Ptak Science Books Quick Post
This was a major piece of early thinking on spread spectrum communications and frequency hopping--butter for the bread (or bread for the butter?) of wireless communication--with the piano roll tapes replaced by electronics. The idea didn't go anywhere in 1942--it did, however, go far, beginning in the late 1950's. The major name listed on the patent report is a major name, but not in this form, and not in this area--some might find it very surprising to know the more popular version of the inventor's identity, and the industry in which the inventor worked.
H.K. Markey and G. Antheil produced this:
Hint: the "H." stood for
Answer, in continued reading, below:
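As a side note, the core trick of the 1942 patent--transmitter and receiver hopping through the same secret channel sequence in lockstep--can be sketched in a few lines. This is a toy illustration, not the patent's mechanism: a shared seed stands in for the matching slotted paper rolls, and the 88 channels echo the patent's piano-roll origins (one per piano key).

```python
import random

def hop_sequence(shared_seed: int, n_hops: int, channels: int = 88) -> list[int]:
    """Generate a pseudorandom channel-hopping schedule. Anyone holding the
    shared seed derives the identical schedule; an eavesdropper without it
    sees only brief, apparently random bursts across the band."""
    rng = random.Random(shared_seed)
    return [rng.randrange(channels) for _ in range(n_hops)]

# Transmitter and receiver derive identical schedules from the shared secret,
# so the receiver is always listening on the transmitter's next channel.
tx = hop_sequence(shared_seed=1942, n_hops=10)
rx = hop_sequence(shared_seed=1942, n_hops=10)
assert tx == rx
```

The electronics that eventually replaced the paper rolls in the late 1950's did essentially this: synchronized pseudorandom sequence generators at both ends of the link.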
JF Ptak Science Books Post 1685
The moment that I saw this image [1] of (what I think is) the 8086 processor I thought of its great visual similarities to one of the greatest engineering works of the 16th century, so much so that with a little imagination, the older work seems a pentimento of the newer. This microprocessor--which in 1979 was a vast leap forward in development--looks like an architectural/engineering plan: large objects being hauled into place by legions of workers with wooden cranes, giant winches and mammoth rope, a fantastical display of concerted effort on a gargantuan scale. It is, or was, in fact an enormous leap in hardware engineering, a micro-mammoth advancement.
This older, pentimento image is a plan for the moving of the great 500,000-pound Egyptian obelisk (carved during the reign of Nebkaure Amenemhet II, 1992-1985 BCE, and originally standing in the Temple of the Sun at Heliopolis) at the Vatican. The engraving appeared in Domenico Fontana's masterpiece Della trasportatione dell’obelisco Vaticano…(published in Rome by Basa in 1590) and illustrated one of the greatest engineering feats of the Renaissance. Moving this enormous and relatively delicate object (from the Circus of Nero, where it was placed by the emperor Caligula in 37 CE, to St. Peter’s Square, 50 years or so before it would be enveloped by Bernini’s flying wings) took years of (very) careful planning and months of motion and movement, not to mention an extra month to get everything into place and slowly raise the obelisk into its final position. Fontana had to be cautious and correct, and he was, performing a not-so-minor miracle of pre-industrial magic to move the priceless 250-ton iconic relic and set it down perfectly in the center of Christianity. That must have been one hot Roman summer, especially for Fontana.
I can easily see the similarities between Fontana's work and that of Intel. Here, in a photograph of the 16k-bit random access memory chip (via Mostek Corporation), I can see a vast palace at the top of the picture, with a long, columned entrance with manicured gardens on either side. The image offers an elevation and a plan--that is, the top and bottom images of the buildings are seen in a deeply oblique view, while the central part of the image is a straight-out plan. At least that's what I see, its imaginative possibilities more appealing than the physical realities (though that's where the extraordinary value is/was).
1. J.H. Westcott, "The Application of Microprocessors," Proceedings of the Royal Society of London, A, 367, 451-484 (1979).
JF Ptak Science Books Quick Post
I was reading Computers and Automation tonight and found this lovely short story in the July 1956 (volume 5, no. 7) issue. It was written by Jackson W. Granholm (a biographical note on Granholm appears in the ACM notices here) and concerns a supercomputer put to solving a very particular--and peculiar--problem.
The story is called "Day of Reckoning", and tells the tale of the ever-working, highly dependable, indispensable SUPERVAC being readied--like the countdown to the launch of Apollo 11--to accept the end-all program, to receive the question. Granholm hauls into his storyline the other professionals who read the journal for tech reports and info, keeping them in his boat with a sci-fi tale based on his own work experience on some big machine at Boeing.
Finally, we see the question: "Describe the detailed design of your superior successor!"
Well, of course, the SUPERVAC had been working perfectly right up until this time, but with the problem submitted the computer began to behave erratically. It worked for 12 hours or so, blinking and flashing away, until at 10:35 pm the MULL light went out, the solution reached.
"12 October 1957, 2230 PM PST, 0130 am GCT--PROBLEM 198BC12-XA--RECKON HAVE EXCELLENT POSITION HERE. NOT 2ISH RELINQUISH IT AT THIS TIME. THANKX. ROGER -- PDA**EM --OUT."
Overall, SUPERVAC "would prefer not to".
[The Melville short story can be found here.]
John von Neumann contributed a major piece of prescient thinking to the Proceedings/Computation Seminar, December 1949, assembled by Cuthbert Hurd of the IBM Applied Science Department. [The entire contents of the volume are available here.] Von Neumann (1903-1957)--perhaps the most advanced mind of the 20th century, a man whose work made the other advanced minds ask "how did he do that?"--was a staggering polymath who made contributions in many fields, not the least of which was the creation of the modern computer. His one-page contribution to this volume was a deep insight into the possibilities of the machine. In 1949. Check out the terrific piece by the great Claude Shannon, "Von Neumann's Contributions to Automata Theory," here.
JF Ptak Science Books Quick Post
The following is an interesting, unclassified report on the future of the computer, from the archives of the National Security Agency. "Time Is, Time Was, Time Is Past: Computers for Intelligence" by Howard Campaigne offers an interesting look at what the future might hold for computation, from an analyst who worked for what would become one of the U.S. government's premier institutions for implementing new ideas in computation. It ends with a surprising six-page bibliography ("Bibliography on Extending Scope of Computers") which is the same length as the entire short paper.
"If a thinking machine can be built, then it must be done; it is a matter of self-respect. Just as a man must be put on the moon, just as Mount Everest had to be climbed, just as the poles had to be visited, just as a flying machine had to be made no matter what the arguments against it, so a machine must be made which can think."
"I must comment on a statement I have seen that soon the chess champion will be a machine! This is fatuous. Bicycles are not used in the Olympic footraces; if they were, a cyclist would be world champion. When the rules of chess are amended to prohibit mechanical aids, that will be a clue that one of our subgoals is being approached."
The entire article can be read here.
BIBLIOGRAPHY ON EXTENDING SCOPE OF COMPUTERS

S. Amarel, "On the Automatic Formation of a Computer Program which Represents a Theory," Self-Organizing Systems, Spartan Books, Washington, D.C., 1962.

A. M. Andrew, "Learning Machines," Paper 3-6, Symposium on the Mechanization of Thought Processes, Teddington, England, November 1958.

James B. Angell, The Need and Means for Self-Repairing Circuits, Technical Report No. 4654-2, USAF Contract AF33(616)-7726, Stanford Electronics
JF Ptak Science Books Post 1631
The first exposure of the American public in general to a "personal computer" may have been in this issue of the Scientific American for November 1950--an article called "Simple Simon" by Edmund Berkeley. (Berkeley also wrote a book called Giant Brains, which seems to me to be the first mass-consumption book--written in terms for the general public--on how the computer works, and the design of "how a machine will think". Berkeley looks at the MIT Differential Analyzer #2, the Moore School ENIAC, Bell Labs' General-Purpose Relay Calculator, and the IBM Automatic Sequence-Controlled Calculator.)
The Simon was a five-hole paper tape (which was its data entry and memory) 2-bit storage relay-based computer that could use numbers from 0 to 3. It was extremely limited, but it worked, and it was real. And affordable. And a baseline for things to come. [The original issue of the magazine can be found in our blog bookstore section, here.]
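To give a sense of just how limited "extremely limited" was: Simon's registers held two bits, so all of its arithmetic lived in the range 0 to 3. A minimal sketch of that scale of operation (mine, not Berkeley's relay logic):

```python
MASK = 0b11  # Simon's registers held two bits, so values 0-3

def add_2bit(a: int, b: int) -> tuple[int, int]:
    """Add two 2-bit values, returning (sum mod 4, carry flag) --
    the scale of operation Simon's relays performed."""
    total = (a & MASK) + (b & MASK)
    return total & MASK, total >> 2

# 3 + 2 = 5 overflows a 2-bit register: the result is 1 with the carry set.
print(add_2bit(3, 2))  # (1, 1)
```

Trivial today, but in 1950 this was a real, working, affordable stored-program demonstration of "how a machine will think".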
Berkeley introduced the idea for Simon in Giant Brains:
"We shall now consider how we can design a very simple machine that will think... Let us call it Simon, because of its predecessor, Simple Simon... Simon is so simple and so small in fact that it could be built to fill up less space than a grocery-store box; about four cubic feet... It may seem that a simple model of a mechanical brain like Simon is of no great practical use. On the contrary, Simon has the same use in instruction as a set of simple chemical experiments has: to stimulate thinking and understanding, and to produce training and skill. A training course on mechanical brains could very well include the construction of a simple model mechanical brain, as an exercise..."--Edmund Berkeley, in Giant Brains, 1949, p. 22
In the Scientific American paper Berkeley introduced the machine and how it functioned; he also described three outcomes for Simon:
First: "Simon itself can grow. It possess all the essentials of a mechanical brain..."
Second: "It is likely to stimulate the building of other small mechanical brains. Perhaps the simplicity and relatively low cost of such machines may make them attractive to amateurs as the radio set and the small telescope." [The "low cost" in 1951 was $600--equal to about $3000 today.]
Third: "It may stimulate thought and discussion on the philosophical and social implications of machines that handle information..."
Berkeley finishes the three-page article with the following paragraph, looking into the not-too-distant future:
"Some day we may even have small computers in our homes, drawing their energy from electric-power lines like refrigerators or radios ... They may recall facts for us that we would have trouble remembering. They may calculate accounts and income taxes. Schoolboys with homework may seek their help. They may even run through and list combinations of possibilities that we need to consider in making important decisions. We may find the future full of mechanical brains working about us."
BERKELEY, E.C. (1950). "Simple Simon," Scientific American, vol. 183, November 1950, pp. 40-43.