Before I turned to writing as a full-time occupation, I had been engaged in the marketing of computer technology for more than two decades. Watching and experiencing a rapidly changing industry, driven by constant innovation—of the underlying computer technology, the business models of developing and selling it, and how and where companies and consumers use computers—I became skeptical of “laws” and “competitive strategies,” rigid prescriptions attempting to explain business success and technology trends.
I also learned one key lesson, articulated by one of my bosses as “marketing is the selling of ideas.” I’ll take it further: Business success and technological trajectories are the result of the timely selling of ideas, ideas that motivate investors, employees, and customers.
On June 16, 1911, the Computing-Tabulating-Recording Company was incorporated. It changed its name to IBM in 1924.
In “Ideas make IBM 100 years young,” IBM’s Bernard Meyerson writes: “…if you really think about what keeps a company going, it’s that you have to keep reinventing yourself. You cannot reinvent yourself in the absence of great ideas. You have to have the great ideas, and you have to follow them through.” Meyerson equates the great ideas that sustain the life of a company with great technological innovations, but in “1100100 and counting,” The Economist quotes Forrester Research’s George Colony, “IBM is not a technology company, but a company solving business problems using technology” and concludes:
Over time [the close relationships between IBM and its customers] became IBM’s most important platform—and the main reason for its longevity. Customers were happy to buy electric “calculating machines,” as Thomas Watson senior insisted on calling them, from the same firm that had sold them their electromechanical predecessors. They hoped that their trusted supplier would survive in the early 1990s. And they are now willing to let IBM’s services division tell them how to organise their businesses better.
Kevin Maney lists five lessons he drew from his close study of IBM’s history (The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM), the first one being “At the start, convince the troops you’re a company of destiny, even if that seems crazy.” Thomas Watson Sr. did this and more. In a 1917 speech, he said: “My duty is not the building of this business; it is rather the building of the organization. … I [know] only one definition of good management; that is, good organization. So, as I see it, my work consists in trying to build a bigger and better organization. The organization, in its turn, will take care of the building of the business.”
So what was the Big IBM Idea? A trusted supplier? A focus on destiny and longevity? Building a bigger and better organization? All of the above?
In a 1994 Harvard Business Review article titled “The theory of the business,” management guru Peter Drucker argued that great businesses revolve around a specific idea or “a theory of the business,” articulating the company’s assumptions about its environment, its mission, and its core competencies.
In response, I discussed in a letter to the editor the similarities and dissimilarities between scientists and managers:
Managers [like scientists] must articulate their theories and how they can be refuted and then seek data that proves their theories wrong. That will prevent them from falling into the trap of discarding successful theories… the theory of the business may not just explain reality or past business success; it may also define it by communicating and convincing employees and customers that the company is unique. A business theory, then, unlike a scientific theory, can be true and false at the same time. That is how, as Drucker has illustrated, IBM and General Motors could both succeed and fail when they applied the same business theory to two different businesses.
In short, an idea or a set of ideas may explain past business success. But, business school education and management gurus notwithstanding, one cannot extract from history “management lessons,” prescriptions, and predictions about the future of this or any other business. Even if we had a perfect understanding of the reasons for IBM’s longevity (and its relative decline after 1990), that would not tell us anything about the future of Apple or Google or Facebook. There is no one explanation or theory of business success; the reasons for success in one case can be the very same reasons for failure in another, or even in the same business.
I didn’t know it in 1994, but it turns out I was channeling Thomas Watson Sr., who said in another speech, delivered in January 1915, shortly after he joined C-T-R:
We all know there have been numerous books written on scientific factory management, scientific sales management, the psychology of selling goods, etc. Many of us have read some of those books. Some of them are good; but we can’t accept any of them as a basis for us to work on. Neither can you afford to accept my ideas as whole and attempt to carry them out, because I do not believe in a fixed method–in any fixed way of selling goods, or of running a business.
Watson Sr. was a great marketer, a great motivator, a great salesman of the idea of building “a bigger and better organization,” with IBM successfully transitioning from an office equipment supplier to the dominant supplier of computer technology, or what we today call artificial intelligence, or AI. The narrowly defined AI marketing campaign (“thinking machines”) launched in 1955, and the subsequent AGI marketing campaign (“superintelligence”) launched in 2007, mask the historical reality of “artificial intelligence”: the steady expansion of the capabilities and functionality of modern computers.
In A New History of Modern Computing, Thomas Haigh and Paul E. Ceruzzi trace the transformation of computing from scientific calculations to administrative assistance to personal appliances to a communications medium. Starting with “superintelligent” calculators that surpassed humans in the speed and complexity of calculation eighty years ago, this constant reinvention continues with today’s “AI,” the perfect storm of GPUs, statistical analysis models, and big data, adding content analysis and processing (not content “generation”) to what a computer can do.
This evolution has been punctuated by peaks of inflated expectations, troughs of disillusionment, and plateaus of productivity. It has been defined by various marketing campaigns and by specific “moments,” each marking a new stage in where computing is done and in its impact on how we live and work.
The first such milestone, the IT moment, highlighted the shift from scientific calculations to business use. In 1958, Harold Leavitt and Thomas Whisler published “Management in the 1980s” in the Harvard Business Review, inventing the term “Information Technology.” Stating that “over the last decade, a new technology has begun to take hold in American business, one so new that its significance is still difficult to evaluate,” they predicted IT “will challenge many long-established practices and doctrines.”
Leavitt and Whisler delineated three major IT components. The first described what they observed at the time—high-speed computers processing large amounts of information. The second was just emerging—applying statistical and mathematical methods to decision-making problems. The third was a prediction based on the authority of leading contemporary AI researchers—the “simulation of higher-order thinking through computer programs,” including programming “much of the top job [in corporations] before too many decades have passed.”
The article focused on the organizational implications of this new technology, such as the centralization (or recentralization) of authority and decision-making facilitated by IT and IT’s negative impact on the middle manager’s job. While the pros and cons of “automation” continued to be debated in the following years, the main topic of discussion, admiration, and trepidation was the constantly increasing speed of the modern computer and its implications for new computing devices and their applications.
The idea of constantly increasing speed as the key driver of modern computing was captured by Gordon E. Moore in “Cramming more components onto integrated circuits,” published in Electronics in 1965, in which he predicted that the number of components that could be placed on a chip would double every year, and with it the speed of computers. It was a very specific prediction, couched in quantitative, “scientific” terms, with the convincing appearance of a law of nature.
“Moore’s Law,” as this prediction came to be known, is not about physics, not even about economics. And it is much more than the self-fulfilling prophecy guiding an industry that it is generally perceived to be. Moore’s Law is a marketing campaign promoting a specific way of designing computer chips, selling this innovation to investors, potential customers, and future employees.
Like other marketing slogans, and unlike the laws of physics, Moore’s Law was revised to fit evolving market realities. In a 2006 article in the IEEE Annals of the History of Computing, Ethan Mollick convincingly showed that the “law” was adjusted periodically in response to changing competitive conditions (e.g., the rise of the Japanese semiconductor industry): “The semiconductor industry has undergone dramatic transformations over the past 40 years, rendering irrelevant many of the original assumptions embodied in Moore’s Law. These changes in the nature of the industry have coincided with periodic revisions of the law, so that when the semiconductor market evolved, the law evolved with it.”
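To see why the doubling period matters so much, and why periodically revising it, as Mollick documents, amounts to rewriting the “law” itself, consider a back-of-the-envelope sketch in Python. The starting count and time horizon are arbitrary illustrations, not figures from Moore’s paper, and the two-year pace stands in for the commonly cited later revision of the original annual doubling.

```python
# Illustrative only: how a projected component count depends on the assumed
# doubling period. Starting count and horizon are arbitrary, not Moore's figures.

def projected_components(initial: int, years: int, doubling_period_years: float) -> int:
    """Project a chip's component count under a fixed exponential doubling schedule."""
    return int(initial * 2 ** (years / doubling_period_years))

start, horizon = 64, 10  # hypothetical starting count, projected ten years out

annual = projected_components(start, horizon, doubling_period_years=1)    # the 1965 pace
biennial = projected_components(start, horizon, doubling_period_years=2)  # a slower, revised pace

print(f"Doubling every year:      {annual:,} components")    # 65,536
print(f"Doubling every two years: {biennial:,} components")  # 2,048
```

Over a single decade the two schedules diverge by a factor of thirty-two; stretch the horizon further and the gap grows without bound. A prediction that elastic can always be kept “true” by adjusting its parameters, which is exactly how a slogan behaves and how a law of nature does not.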
As the power and miniaturization of computers steadily advanced, another milestone arrived: the networking moment. In 1973, Bob Metcalfe and David Boggs invented Ethernet and implemented the first local-area network, or LAN, at Xerox PARC. Metcalfe later explained that what became known as “Metcalfe’s Law” is a “vision thing.” It helped him jump over a big hurdle: the first Ethernet card Metcalfe sold went for $5,000 in 1980, and he used the “law” to “convince early Ethernet adopters to try LANs large enough to exhibit network effects,” in effect promising them that the value of their investment would grow as more people got connected to the office network. Metcalfe’s Law, which holds that the value of a network is proportional to the square of the number of its users, encapsulates a brilliant marketing concept, engineered to get early adopters (and their accountants) over the difficulty of calculating the ROI for a new, expensive, unproven technology. It provided the ultimate promise: this technology gets more “valuable” the more you invest in it.
The networking moment made scale (how many people and devices are connected) even more important than speed (how fast data is processed). Together with the PC moment (1982), the internet moment (1993), and the mobile moment (2007), it begat Big Data and today’s version of AI. Scale also turned into the rallying cry of an energized Silicon Valley: “at scale” became the guiding light of internet startups. And it led to yet another marketing campaign, the one about the “new economy,” and to the dot-com bubble.
In 2006, IEEE Spectrum published “Metcalfe’s Law is Wrong.” Its authors convincingly corrected Metcalfe’s mathematical formulation of network effects: “…for a small but rapidly growing network, it may be a decent approximation for a while,” said one of the authors in 2023. “But our correction applies after you hit scale.”
IEEE Spectrum published my response to the article, which started with the following: “Asking whether Metcalfe’s Law is right or wrong is like asking whether ‘reach out and touch someone’ is right or wrong. A successful marketing slogan is a promise, not a verifiable empirical statement. Metcalfe’s Law – and Moore’s – proved only one thing: Engineers, or more generally, entrepreneurs, are the best marketers.”
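For concreteness, here is a small Python sketch contrasting Metcalfe’s n-squared valuation with the n·log(n) scaling the IEEE Spectrum authors proposed in its place. The unit value k is an arbitrary placeholder, and neither curve is an empirical measurement, which is precisely the point: the formulas are promises about growth, not verifiable statements about value.

```python
# Illustrative comparison of two network-value formulas; the constant k is an
# arbitrary placeholder, not an empirical estimate of per-connection value.
import math

def metcalfe_value(n: int, k: float = 1.0) -> float:
    """Metcalfe's Law: value proportional to the square of the number of users."""
    return k * n * n

def corrected_value(n: int, k: float = 1.0) -> float:
    """The n*log(n) scaling proposed in the 2006 IEEE Spectrum critique."""
    return k * n * math.log(n)

for users in (10, 1_000, 1_000_000):
    ratio = metcalfe_value(users) / corrected_value(users)
    print(f"{users:>9,} users: n^2 exceeds n*log(n) by a factor of about {ratio:,.0f}")
```

For a small, fast-growing office LAN the two curves tell early adopters roughly the same encouraging story; only “after you hit scale” does the gap become enormous, which is why the correction changes the mathematics without retracting the promise.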
Moore’s and Metcalfe’s “laws” are two prominent examples of the remarkable marriage of engineering and marketing ingenuity that has made many American entrepreneurs successful. However, the most successful marketing and branding campaign invented by an engineer (John McCarthy) has been promoting “artificial intelligence” since 1955. Today’s entrepreneurs, asking for billions of dollars of venture capital and seeking to distinguish what they develop from the somewhat tarnished image of old-fashioned AI, are promoting the new AGI campaign, promising to finally deliver what’s been promised many times in the past.
By doing so, they obscure the true distinction between what they have achieved and the various practical approaches taken since the 1950s to achieve the “simulation of higher-order thinking” by computers. Most importantly, they mislead the public with this disinformation campaign, diverting attention from the potential impact of their machine learning models on how we work.
Researchers and business executives not occupied with raising billions of dollars have started to provide empirical indicators of the organizational implications of AI, similar to what was captured by the IT moment of 1958. For example, a recent randomized controlled trial of the use of AI by 776 professionals at Procter and Gamble found that “AI effectively replicated the performance benefits of having a human teammate—one person with AI could match what previously required two-person collaboration,” reports Ethan Mollick.
As a professor at the Wharton Business School (and author of a popular guide to the new AI), Mollick is sensitive to the real-world impact of AI on how work is done in organizations: “AI sometimes functions more like a teammate than a tool… The most exciting implication may be that AI doesn't just automate existing tasks, it changes how we can think about work itself. The future of work isn't just about individuals adapting to AI, it's about organizations reimagining the fundamental nature of teamwork and management structures themselves.”
To paraphrase a re-engineering guru of the 1990s, don’t automate; recreate with AI. We may be arriving at a new milestone, the AI moment, with empirically validated insights into new ways to refashion the organization of work. We may also have arrived at the peak of the AI bubble, or it may still have a year or two before deflation. A very safe prediction, however, is that the marketing campaign about creating superintelligence will never expire.