The film 2001: A Space Odyssey had its world premiere at the Uptown Theater in Washington, D.C., on April 2, 1968. Reflecting the mixed reactions to the film, Renata Adler wrote in The New York Times that it was “somewhere between hypnotic and immensely boring.” Pauline Kael was more decisive, calling the movie “monumentally unimaginative.” Reacting more favorably were “the potheads and pill-poppers of the late 1960s counterculture,” observed Paul Whitington fifty years later, reporting that “one particularly enthusiastic hippie is said to have charged at the screen shouting ‘it’s God!’”
The “God” of this anecdote may have referred to HAL 9000, the artificially intelligent computer that kills, one after the other, the astronauts on a space mission to Jupiter. Or it may have referred to the God-like humans capable of creating sentient machines, an expression of the core belief of the Zeitgeist, the modern religion worshipping tool-wielding humans as masters of the universe. Four months after the movie’s release, Stewart Brand opened the first issue of the Whole Earth Catalog with the statement “We are as gods and might as well get good at it.”
The 160-minute film with only 40 minutes of dialogue became “the movie that changed all movies forever,” as the poster for its 50th-anniversary re-release immodestly, perhaps accurately, proclaimed. There is no doubt, however, of its influence on popular culture, specifically on the perception of sentient machines and their killer instincts. HAL’s chilling admonishment to the last surviving astronaut, “I’m sorry, Dave. I’m afraid I can’t do that,” entered the popular lexicon.
The science fiction writer and futurist Arthur C. Clarke, who worked with the director Stanley Kubrick on the plot for the movie (and later published a novel with the same title), said in a later interview:
Of course the key person in the expedition was the computer HAL, who, as everyone said, is the only human character in the movie. HAL developed slowly. At one time we were going to have a female voice. Athena, I think, was a suggested name. I don’t know again when we changed to HAL. I’ve been trying for years to stamp out the legend that HAL was derived from IBM by the transposition of one letter. But, in fact, as I’ve said in the book, HAL stands for Heuristic Algorithmic, H-A-L. And that means that it can work on programs already set up, or it can look around for better solutions, and you get the best of both worlds. So, that’s how the name HAL originated.
That both Clarke and Kubrick denied the allusion to IBM may have had to do with the fact that IBM was one of the many organizations and individuals consulted while creating the film. They started the four-year process in 1964, the year IBM introduced its own masterpiece, the computer that sealed the company’s domination of the industry for the next quarter of a century.
On April 7, 1964, IBM announced the System/360, the first family of computers spanning the performance range of all existing (and incompatible) IBM computers. Thomas J. Watson Jr., IBM’s CEO at the time, wrote in his autobiography Father, Son & Co.:
By September 1960, we had eight computers in our sales catalog, plus a number of older vacuum-tube machines. The internal architecture of each of these computers was quite different; different software and different peripheral equipment, such as printers and disk drives, had to be used with each machine. If a customer’s business grew and he wanted to shift from a small computer to a large one, he had to get all new everything and rewrite all his programs, often at great expense. …
[The] new line was named System/360—after the 360 degrees in a circle—because we intended it to encompass every need of every user in the business and the scientific worlds. Fortune magazine christened the project “IBM’s $5,000,000,000 Gamble” and billed it as “the most crucial and portentous—as well as perhaps the riskiest—business judgment of recent times.”… It was the biggest privately financed commercial project ever undertaken. The writer at Fortune pointed out that it was substantially larger than the World War II effort that produced the atom bomb.
Like nuclear energy, the System/360 and all the computers and networks of computers that came after it could be used for creation or destruction. It was a tool, and as 2001: A Space Odyssey depicts in the first part of the movie, tools can be used to help humanity or as weapons in humanity’s wars.
In Profiles of the Future, published in 1962, Arthur Clarke wrote:
The old idea that Man invented tools is… a misleading half-truth; it would be more accurate to say that tools invented Man. They were very primitive tools… yet they led to us—and to the eventual extinction of the apeman who first wielded them… The tools the apemen invented caused them to evolve into their successor, Homo sapiens. The tool we have invented is our successor. Biological evolution has given way to a far more rapid process—technological evolution. To put it bluntly and brutally, the machine is going to take over.
Talk of the machine taking over has risen to the surface of public discourse over the last 50-plus years each time computer engineers have added yet another “human-like” capability to the tools they create. After IBM’s Watson defeated Jeopardy! champion Ken Jennings in 2011, Jennings wrote in Slate:
I understood then why the engineers wanted to beat me so badly: To them, I wasn’t the good guy, playing for the human race. That was Watson’s role, as a symbol and product of human innovation and ingenuity. So my defeat at the hands of a machine has a happy ending, after all. At least until the whole system becomes sentient and figures out the nuclear launch codes…
The fear that machines will figure out the nuclear codes and destroy their creators had been called “absurd” by the man who, a few years later, went on to co-create 2001. Clarke wrote in Profiles of the Future:
The popular idea, fostered by comic strips and the cheaper forms of science fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent… Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.
According to this materialistic fantasy, which many computer and AI engineers adhere to today, tools created us and will surpass us, and will either do us no harm or wipe out humanity altogether.
Here’s Piers Bizony in Nature in 2018:
Certainly, in the film, the surviving astronaut’s final conflict with HAL prefigures a critical problem with today’s artificial-intelligence (AI) systems. How do we optimize them to deliver good outcomes? HAL thinks that the mission to Jupiter is more important than the safety of the spaceship’s crew. Why did no one program that idea out of him? Now, we face similar questions about the automated editorship of our searches and news feeds, and the increasing presence of AI inside semi-autonomous weapons…
Should we watch out for superior “aliens” closer to home, and guard against AI systems one day supplanting us in the evolutionary story yet to unfold? Or does the absence of anything like HAL, even after 50 years, suggest that there is, after all, something fundamental about intelligence that is impossible to replicate inside a machine?
Yes, our imagination, for example.
The imagination, creativity, and drive behind the evolution of modern computing, or “artificial intelligence,” have constantly expanded the functionality and capabilities of the original giant digital calculator of the late 1940s, taking advantage of the steadily increasing power of computation, or what has come to be known as “Moore’s Law.”
On April 19, 1965, Gordon E. Moore published “Cramming more components onto integrated circuits” in Electronics, predicting that the number of components that could be placed on an integrated circuit would double every year, with a corresponding increase in computing power.
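To make the arithmetic of that prediction concrete, here is a minimal, illustrative Python sketch (not code from Moore’s paper or any source cited here) that projects component counts under the doubling-every-year assumption; the starting figure of roughly 64 components in 1965 and the ten-year horizon follow Moore’s own extrapolation, which reached about 65,000 components by 1975.

```python
# Illustrative sketch only: project chip component counts under the
# "doubling every year" assumption of Moore's 1965 article.
# The 1965 starting point of ~64 components mirrors Moore's own graph,
# which extrapolated to roughly 65,000 components by 1975.

def projected_components(start_count, start_year, end_year):
    """Return {year: component count}, doubling the count once per year."""
    return {
        year: start_count * 2 ** (year - start_year)
        for year in range(start_year, end_year + 1)
    }

if __name__ == "__main__":
    for year, count in projected_components(64, 1965, 1975).items():
        print(f"{year}: ~{count:,} components per chip")
```

Run as written, the sketch ends at 65,536 components for 1975, which is the exponential curve Moore drew for his readers, and his potential customers.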
Four years earlier, on April 25, 1961, Robert Noyce was granted a patent for a “Semiconductor Device-and-Lead Structure,” a type of integrated circuit made of silicon. Integrating large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits from discrete electronic components. The integrated circuit’s suitability for mass production, its reliability, and its building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs based on discrete transistors.
Three years after the publication of Moore’s paper, he and Noyce left Fairchild Semiconductor to establish Intel with $2.5 million in funding arranged by Arthur Rock, who coined the term “venture capitalist.” Everything we associate with today’s Silicon Valley was already there: audacious risk-taking, willing investors, the creation of (temporary) monopolies, and lots of luck. Arthur Rock was a personal friend of the founders (another attribute of today’s Silicon Valley), and he convinced others, with a 3-page memo, to invest in the “nebulous” idea of the two entrepreneurs. Moore’s article and prediction probably also helped convince the other investors that they were betting on a technology with a guaranteed exponential future.
It was a very specific prediction, couched in quantitative, “scientific” terms, with the convincing appearance of a law of nature. There was no better way to sell a new industry to a bunch of fellows with money who were also creating a new one, the venture capital industry. And there was competition for the funds provided by the nascent VC industry: some twenty-six new semiconductor firms were established between 1967 and 1970, observes George Gilder in Microcosm.
More important than the VCs, however, were the potential customers for Moore’s and Noyce’s (and a few other tinkerers’) innovations at Fairchild and Intel. Moore’s 1965 article originated as an internal document titled “The Future of Integrated Electronics,” written to encourage the company’s customers to adopt the most advanced technology in their new computer designs.
In Understanding Moore’s Law, David C. Brock writes: “While the market for silicon integrated circuits was growing in the early 1960s, Moore and others in the semiconductor industry experienced customer resistance to and skepticism of the new microchips… In addition to advancing the new technology itself, Moore was getting his message across to potential customers and the semiconductor industry. Finally, in early 1965 came the opportunity to publicize and advance the cause.”
Moore told Jeffrey Zygmont in 2001: “When we tried to sell these things, we did not run into a receptive audience.” Writes Zygmont: “[by 1965], under assault by competing approaches to circuit miniaturization, feeling their product poorly appreciated, IC advocates felt a competitive urgency to popularize the concept. Therefore, they proselytized… [Moore’s] prophecy was desperate propaganda.”
One of the industry leaders proselytizing, for example, was C. Harry Knowles, manager of Westinghouse’s molecular electronics division, who wrote in June 1964 in IEEE Spectrum: “Speed has doubled every year over the past seven years on average.” Moore did not “discover” his law, as many commentators write; he brilliantly came up with the best formulation of a marketing slogan.
Like other marketing slogans, and unlike the laws of physics, Moore’s Law was revised to fit the changing competitive and technological environment. In a 2006 IEEE Annals of the History of Computing article, Ethan Mollick convincingly showed that the “law” and the prediction were adjusted periodically in response to changing conditions (e.g., the rise of the Japanese semiconductor industry): “The semiconductor industry has undergone dramatic transformations over the past 40 years, rendering irrelevant many of the original assumptions embodied in Moore’s Law. These changes in the nature of the industry have coincided with periodic revisions of the law, so that when the semiconductor market evolved, the law evolved with it.”
Humans are not machines, and their intelligence is adaptable, flexible, and creative. Our imagination can follow many paths: inducing anxiety and fear, driving ambition and desire, or exaggerating consequences and possibilities. Some people envision malevolent machines, others hope to see or create rational and perfect thinking machines devoid of human foibles and emotions, others are worried about hype and anticipated disappointments, and others easily imagine all of the above.
In the must-see Korean TV series My Mister, one of the protagonists (played by Kwon Nara) imagines living in a future “AI world” in which AI will be smarter than lawyers, doctors, and other educated people, so that no one will be able “to boast” and we will all be liberated to simply love one another.
Careful, Gil! You are getting very close to science-fictional thinking. Your historical cover can only go so far!