Today in 1295, the Catalan poet and theologian Ramon Llull began writing Arbor Scientiae (Tree of Science), presenting sixteen trees of knowledge.
The sixteen trees range from earthly and moral to divine and pedagogical. Each tree is divided into seven parts (roots, trunk, branches, twigs, leaves, flowers, fruits). The roots always consist of the Lullian divine principles and from there the tree grows into the differentiated aspects of its respective category of reality.
In 1308, Llull completed Ars Generalis Ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.
Many histories of AI start with Homer and his description of how the crippled blacksmith god Hephaestus fashioned for himself self-propelled tripods on wheels and “golden” assistants, “in appearance like living young women” who “from the immortal gods learned how to do things.”
In my histories of AI, I prefer to stay as close as possible to the notion of “artificial intelligence” in the sense of intelligent humans actually creating, not just imagining, tools and mechanisms for assisting our cognitive processes or automating (and imitating) them.
Llull devised a system of thought that he wanted to impart to others to assist them in theological debates, among other intellectual pursuits. He wanted to create a universal language using a logical combination of terms. The tool Llull created consisted of seven paper discs, or circles, listing concepts (e.g., attributes of God such as goodness, greatness, eternity, power, wisdom, love, virtue, truth, and glory) that could be rotated to generate combinations of concepts and so produce answers to theological questions.
Llull’s system was based on the belief that only a limited number of undeniable truths exist across all fields of knowledge, and that by studying all combinations of these elementary truths, humankind could attain the ultimate truth. His art could be used to “banish all erroneous opinions” and to arrive at “true intellectual certitude removed from any doubt.”
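Llull’s rotating discs were, in effect, a mechanical generator of concept pairings. A minimal sketch of that combinatorial procedure in Python (the attribute list follows the nine divine attributes named above; the pairing scheme is a simplification of Llull’s actual figures):

```python
from itertools import combinations

# The nine divine attributes Llull placed on his discs
# (labeled with the letters B through K in his notation).
attributes = ["goodness", "greatness", "eternity", "power",
              "wisdom", "love", "virtue", "truth", "glory"]

# Rotating one disc against another yields every unordered
# pairing of distinct attributes -- the raw material for
# propositions such as "goodness is great".
pairs = list(combinations(attributes, 2))

print(len(pairs))  # 9 choose 2 = 36 pairings
for a, b in pairs[:3]:
    print(f"{a} is {b}")
```

Thirty-six pairings from nine terms: the point of the device is that the exhaustive enumeration is done mechanically, leaving the practitioner only the work of interpretation.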
In early 1666, 19-year-old Gottfried Leibniz wrote De Arte Combinatoria (On the Combinatorial Art), an extended version of his doctoral dissertation in philosophy. Influenced by the works of previous philosophers, including Ramon Llull, Leibniz proposed an alphabet of human thought. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters, he argued. All truths may be expressed as appropriate combinations of concepts, which in turn can be decomposed into simple ideas.
Leibniz wrote: “Thomas Hobbes, everywhere a profound examiner of principles, rightly stated that everything done by our mind is a computation.” He believed such calculations could resolve differences of opinion: “The only way to rectify our reasonings is to make them as tangible as those of the mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right” (The Art of Discovery, 1685). In addition to settling disputes, the combinatorial art could provide the means to compose new ideas and inventions.
Contrary to today’s common portrayal of these early descriptions of cognitive aids as “thinking machines,” Leibniz, just as Llull before him, was anti-materialist, rejecting the notion that perception and consciousness can be given mechanical or physical explanations. Perception and consciousness cannot possibly be explained mechanically, he argued, and therefore cannot be physical processes.
In Monadology (1714), Leibniz wrote:
One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.
For Leibniz, no matter how complex the inner workings of a “thinking machine,” nothing about them reveals that what is being observed are the inner workings of a conscious being. Two and a half centuries later, the founders of the new discipline of “artificial intelligence,” materialists all, assumed that the human brain is a machine and therefore could be replicated with physical components, with computer hardware and software. They believed that if they could only find the basic computations, the universal language, they could create an intelligent system equal to or better than the human “system.” Being rational was replaced by being digital.
In 1726, Jonathan Swift published Gulliver's Travels, in which he described (possibly as a parody of Llull’s system) a device that generates permutations of word sets. At the Academy of Projectors, Gulliver meets a professor who tells him that
Everyone knew how laborious the usual method is of attaining to arts and sciences; whereas, by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.
He then led me to the frame, about the sides whereof all his pupils stood in ranks. It was twenty feet square, placed in the middle of the room. The superfices was composed of several bits of wood, about the bigness of a die, but some larger than others. They were all linked together by slender wires. These bits of wood were covered, on every square, with paper pasted on them; and on these papers were written all the words of their language, in their several moods, tenses, and declensions; but without any order. The professor then desired me to observe; for he was going to set his engine at work.
The pupils, at his command, took each of them hold of an iron handle, whereof there were forty fixed round the edges of the frame; and giving them a sudden turn, the whole disposition of the words was entirely changed. He then commanded six-and-thirty of the lads, to read the several lines softly, as they appeared upon the frame; and where they found three or four words together that might make part of a sentence, they dictated to the four remaining boys, who were scribes. This work was repeated three or four times, and at every turn, the engine was so contrived, that the words shifted into new places, as the square bits of wood moved upside down.
Six hours a-day the young students were employed in this labour, and the Professor showed me several volumes in large folio already collected, of broken sentences, which he intended to piece together, and out of those rich materials to give the world a complete body of all arts and sciences; which however might be still improved, and much expedited, if the publick would raise a fund for making and employing five hundred such frames in Lagado, and oblige the managers to contribute in common their several collections.
There you have it: brute-force deep learning (today’s AI) and its deep-pockets funding requirements, described in the 18th century. ChatGPT, as envisioned by Jonathan Swift.
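Swift’s engine, as the passage describes it, is a random-permutation loop: crank the handles to reshuffle the frame, have the scribes copy down any runs of words that “might make part of a sentence,” and repeat. A toy sketch of that procedure (the vocabulary and the crude “keep the first run” rule are invented for illustration; Swift’s lads judged fragments by eye):

```python
import random

# A toy vocabulary standing in for "all the words of their language".
words = ["the", "tree", "of", "science", "grows", "truth",
         "engine", "writes", "books", "without", "genius"]

random.seed(42)  # a reproducible "sudden turn" of the handles


def turn_handle(vocab):
    """One crank of the engine: the whole disposition of words changes."""
    frame = vocab[:]
    random.shuffle(frame)
    return frame


# The scribes scan each new disposition for short runs of words
# and collect them into "volumes of broken sentences".
collected = []
for _ in range(4):  # "This work was repeated three or four times"
    frame = turn_handle(words)
    collected.append(" ".join(frame[:4]))  # naively keep the first run

for fragment in collected:
    print(fragment)
```

Almost everything the engine emits is noise; the value, such as it is, comes entirely from the human scribes deciding which fragments to keep, which is precisely Swift’s joke.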
In the late 2000s, when “data science” emerged, bringing to the fore the sophisticated statistical analysis that is the foundation of deep learning, some observers and participants reminded us that “correlation does not imply causation.” A Swift today would probably add: “Correlation does not imply creativity.”