My new newsletter AI: An Offbeat History is live and calling for readers! Please take a look and subscribe. I plan to write about episodes in the history of AI, including my own experiences over the last fifty years.
This month’s milestones in the history of information technologies highlight the mechanization and computerization of data collection and of the organization, storage, and distribution of information, and their impact on human experience.
On February 15, 2011, IBM’s question-answering system Watson “commented” on the results of its successful match the previous night with two Jeopardy! champions: “There is no way I’m going to let these simian creatures defeat me. While they’re sleeping, I’m processing countless terabytes of useless information.”
The last bit, no doubt, was stored in Watson’s memory under the category Oscar Wilde. In “A Few Maxims for the Instruction of the Over-Educated,” Wilde lamented that “it is a very sad thing that nowadays there is so little useless information.”
Wilde wrote this in 1894, by which time the processing of very useful information had become remarkably efficient thanks to Herman Hollerith’s invention of the tabulating machine.
Hollerith was born in Buffalo, New York, on February 29, 1860. In 1884, he filed a patent application for his invention of “a certain new and useful Improvement in the Art of Compiling Statistics.” In 1886, Hollerith started a business to rent out the tabulating machines he had invented. “Taking a page from train conductors, who then punched holes in tickets to denote passengers’ observable traits (e.g., that they were tall or female) to prevent fraud, he developed a punch card that held a person’s data and an electric contraption to read it,” wrote The Economist in 2011, commemorating IBM’s 100th anniversary.
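To make the principle concrete for readers who think in code, here is a minimal, purely illustrative Python sketch of the idea behind Hollerith’s card: each hole position stands for one trait, and tabulating is simply counting the cards on which a given position is punched. The trait names and card layout below are invented for the example; they are not Hollerith’s actual 1890 census card format.

```python
# Illustrative sketch of Hollerith's principle: each hole position on a card
# stands for one trait, and the tabulator counts cards by checking which
# positions are punched. Trait names and layout are invented for this example,
# not Hollerith's actual 1890 census card format.

# Assign each trait a fixed hole position on the card.
POSITIONS = {"male": 0, "female": 1, "married": 2, "single": 3, "foreign_born": 4}

def punch_card(traits):
    """Return the set of punched hole positions for one person's traits."""
    return {POSITIONS[t] for t in traits}

def tabulate(cards, trait):
    """Count the cards with the hole for the given trait punched, much as the
    tabulator's electric contacts closed only where a hole let them through."""
    return sum(1 for card in cards if POSITIONS[trait] in card)

cards = [
    punch_card({"female", "married"}),
    punch_card({"male", "single", "foreign_born"}),
    punch_card({"female", "single"}),
]
print(tabulate(cards, "female"))        # 2
print(tabulate(cards, "foreign_born"))  # 1
```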
James Cortada writes in Before the Computer: “The U.S. Census of 1890… was a milestone in the history of modern data processing…. No other occurrence so clearly symbolized the start of the age of mechanized data handling…. Before the end of that year, [Hollerith’s] machines had tabulated all 62,622,250 souls in the United States. Use of his machines saved the bureau $5 million over manual methods while cutting sharply the time to do the job. Additional analysis of other variables with his machines meant that the Census of 1890 could be completed within two years, as opposed to nearly ten years taken for fewer data variables and a smaller population in the previous census.”
Hollerith’s company later merged with two other ventures in 1911 to form the Computing-Tabulating-Recording Company (CTR). On February 14, 1924, CTR changed its name to International Business Machines Corporation (IBM).
“IBM” was first used for CTR’s subsidiaries in Canada and South America, but after “several years of persuading a slow-moving board of directors,” Thomas and Marva Belden note in The Lengthening Shadow, Thomas J. Watson Sr. succeeded in applying it to the entire company: “International to represent its big aspirations and Business Machines to evade the confines of the office appliance industry.” As Kevin Maney observes in The Maverick and His Machine, IBM “was still an upstart little company” in 1924, when “revenues climbed to $11 million—not quite back to 1920 levels.”
The upstart, according to Watson Sr., was going to live forever. In a talk he gave on June 21, 1924, at the first meeting of IBM’s Quarter Century Club, he said: “The opportunities of the future are bound to be greater than those of the past, and to the possibilities of this business there is no limit so long as it holds the loyal cooperation of men and women like yourselves.”
The most significant future opportunity for IBM emerged with the rapid growth of a new market for generating, processing, organizing, storing, and distributing digitized data and information: the market for computers.
On February 16, 1946, the Electronic Numerical Integrator And Computer (ENIAC) was formally dedicated at the University of Pennsylvania. It remained the only fully electronic computer operating in the U.S. from its World War II origins until 1950, when other computers joined the race to create a new industry.
Another important milestone in the evolution of the new market occurred on February 6, 1959, when Jack Kilby filed a patent application for a “Method of making miniaturized electronic circuit,” better known as the integrated circuit. “The University of Illinois gave him only average grades in electrical engineering, a disappointment to his father, who ran an electrical company, and he failed to get into MIT,” reports Harold Evans in They Made America. In 2000, Kilby was awarded the Nobel Prize in Physics for “his part in the invention and development of the integrated circuit, the chip. Through this invention, microelectronics has grown to become the basis of all modern technology.”
Computer-driven digitization has grown to become the basis of all types and forms of modern information. But where do you put all the 0s and 1s?
As early as February 20, 1947, Alan Turing argued that “the provision of proper storage is the key to the problem of the digital computer” in a presentation to the London Mathematical Society in which he described the Automatic Computing Engine (ACE), an early British stored-program computer design.
One answer to the storage challenge came on February 28, 1956, when Jay Forrester of MIT was awarded a patent for his invention of magnetic core memory. It became the standard for computer memory until it was supplanted by solid-state RAM in the mid-seventies.
Forrester came up with the idea of 3D storage of computer data while working on MIT’s Whirlwind computer, which required a fast memory system for real-time aircraft tracking. His was not the only patent granted for magnetic core memory inventions, and the ensuing patent dispute continued until February 1964, when IBM (which had acquired patent rights from other core memory inventors, including An Wang) agreed to pay MIT $13 million. Forrester summarized the experience: “It took about seven years to convince people in the industry that magnetic core memory would work. And it took the next seven years to convince them that they had not all thought of it first.”
The challenge of where and how to store information has always been accompanied by the challenge of distributing it and getting it to where it is needed.
The first technology for transmitting information by electrical impulses was the telegraph, or “writing at a distance.” On February 3, 1837, the U.S. House of Representatives passed a resolution requesting the Treasury Secretary, Levi Woodbury, to report to the House at its next session, “upon the propriety of establishing a system of telegraphs for the United States.” Of the eighteen responses that Woodbury received, writes Richard John in Network Nation: Inventing American Telecommunications, “seventeen assumed that the telegraph would be optical and that its motive power would be human… The only respondent to envision a different motive power was Samuel F. B. Morse… [who] proposed, instead, a new kind of telegraph of his own devising that would transmit information not by sight but, rather, by electrical impulses transmitted by wire.”
For more than 100 years, the telegraph was the principal means of transmitting information by wire or radio waves. By the late 20th century, it had been replaced by digital data-transmission systems based on computer technology. “Telegram services” are still available in many countries, but the transmission is usually done via a computer network.
The funding source for the first wide-area computer networks was born on February 7, 1958, when the U.S. Department of Defense issued Directive 5105.15, establishing the Advanced Research Projects Agency (ARPA). The agency, later renamed DARPA, was created because “The Soviet Union’s launch of Sputnik showed that a fundamental change was needed in America’s defense science and technology programs.” One of the “frontiers of technology” created by the agency was the ARPANET, the forerunner of the internet.
The internet connected large computers, first in the U.S. and then around the world. In 1989, Tim Berners-Lee invented the World Wide Web, software running on top of the internet that linked documents stored on these computers and, later, all types and forms of information. All of the world’s digitized information, with no limits and no space constraints.
On February 1, 1884, the first part (or fascicle) of the Oxford English Dictionary was published. A 352-page volume, it defined words from “A to Ant,” corroborated by quotations from historical sources. The 11th edition of the Encyclopaedia Britannica (1910-1911) noted that “the chief difficulty in the way of this use of quotations – after the difficulty of collection – is that of finding space for them in a dictionary of reasonable size.” Today’s online edition of the OED contains 3.5 million quotations (and Britannica’s 11th edition is freely available on the Web).
The most successful Web application for connecting people was launched on February 4, 2004, when Thefacebook.com went live.
Its home screen read, says David Kirkpatrick in The Facebook Effect, “Facebook is an online directory that connects people through social networks at colleges.” Four days after the launch, more than 650 students had registered, and by the end of May, it was operating in 34 schools and had almost 100,000 users. “The nature of the site,” Mark Zuckerberg told the Harvard Crimson on February 9, “is such that each user’s experience improves if they can get their friends to join in.”
The first social network, or “virtual community,” as it was called then, was launched on February 17, 1978, when the first public dial-up bulletin board system, the Computerized Bulletin Board System, or CBBS, went online. Developed by Ward Christensen and Randy Suess, CBBS sparked the creation of tens of thousands of bulletin board systems all over the world and connected more than 250,000 users before it was retired in the 1990s.
Computers could not only add to human experience but also subtract from it, automating and replacing what previously only humans could do, enjoy, and create.
On February 5, 1850, the first U.S. patent for a push-key operated adding machine was issued to Dubois D. Parmelee of New Paltz, New York. In 1886, William Burroughs founded the American Arithmometer Company to commercialize his “adding machine” that “could do the work of two or three active clerks,” according to The Bankers’ Magazine in 1894. The HP-65, the first programmable handheld calculator, was introduced as a $795 “personal computer” by Hewlett-Packard in 1974.
Calculating machines, however, went much further than automating addition and subtraction. On February 10, 1996, IBM’s Deep Blue became the first computer to win a chess game against a reigning world champion, Garry Kasparov, under regular time controls.
Deep Blue won one game, drew two, and lost three. The next year, Deep Blue defeated Kasparov in a six-game match, the first time a reigning world champion lost a match to a computer opponent under standard tournament conditions. Deep Blue was a combination of special-purpose hardware and software, a system capable of examining 200 million moves per second, or 50 billion positions, in the three minutes allocated for a single move in a chess game.
On February 16, 2011, when IBM’s Watson won the second and final game of its Jeopardy! match against two of the show’s most successful contestants, Ken Jennings and Brad Rutter, Jennings added to his Final Jeopardy! response: “I, for one, welcome our new computer overlords.”