A History Of Artificial Intelligence

In his seminal 1950 paper Computing Machinery and Intelligence, Alan Turing famously asked: “Can machines think?” Or, more precisely, can machines successfully imitate thought?

70 years later, the answer is still “no,” as no machine has convincingly passed the Turing test.

Turing clarifies that he’s interested in machines that “are intended to carry out any operations which could be done by a human computer.” In other words, he’s interested in complex digital machines.

Since a thinking digital machine would be the product of the long evolution of machines, it makes sense to start at the beginning of machine history.

The History of Machines

A machine is a device that does work. In engineering terms, work means transferring energy from one object to another. Machines enable us to apply more force, and/or do it more efficiently, resulting in more work being done.
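As a rough illustration (the numbers below are invented for the example), the work done by a constant force is the force times the distance over which it acts, and an ideal lever trades force for distance, letting a small effort produce a much larger output force:

```latex
W = F\,d, \qquad F_{\text{in}}\, d_{\text{in}} = F_{\text{out}}\, d_{\text{out}}
% e.g. pushing with 50 N through 2 m does 100 J of work; through an ideal lever,
% that same 100 J can lift a 200 N load by 0.5 m: far more force than the bare push.
```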

Modern machines, like Boston Dynamics’ humanoid robot Atlas, use hundreds of parts, including hydraulic joints, pistons, gears, and valves, to accomplish complex tasks such as self-correcting stabilization and even backflips.

Simple Machines

However, “simple machines” fit our earlier definition as well, including wheels, levers, pulleys, inclined planes, wedges, and screws. In fact, all mechanical machines are made of some combination of those six simple machines.

Atlas is not just a mechanical machine, but also a digital one.

Simple mechanical machines are millions of years old. For instance, “stonecutting tools [a type of wedge] are as old as human society,” and archaeologists have found stone tools “from 1.5 to 2 million years ago.”

Complex Machines

Combinations of simple machines could be used to make everything from a wheelbarrow to a bicycle to a mechanical robot.

In fact, records of mechanical robots date back to over 3,000 years ago.

The Daoist text Lieh-tzu, written in the 5th century BCE, includes an account of a much earlier meeting between King Mu of the Zhou Dynasty (1023–957 BCE) and an engineer named Yen Shi. Yen Shi presented the king with a life-sized, human-shaped mechanical automaton:

“The king stared at the figure in astonishment. It walked with rapid strides, moving its head up and down, so that anyone would have taken it for a live human being. The artificer touched its chin, and it began singing, perfectly in tune. He touched its hand, and it began posturing, keeping perfect time… As the performance was drawing to an end, the robot winked its eye and made advances to the ladies in attendance, whereupon the king became incensed and would have had Yen Shi executed on the spot had not the latter, in mortal fear, instantly taken the robot to pieces to let him see what it really was. And, indeed, it turned out to be only a construction of leather, wood, glue and lacquer...”

The king asked: “Can it be that human skill [in creating a machine] is on a par with that of the great Author of Nature [God]?”

In other words, Turing’s question of whether machines can imitate humans is actually thousands of years old.
Around the same period, Greek scientists were creating a wide range of automata. Archytas (c. 428–347 BCE) built a mechanical dove, described as an artificial, steam-propelled flying device, that reportedly flew some 200 meters.

“Archytas made a wooden model of a dove with such mechanical ingenuity and art that it flew.”

Some modern historians believe it may have been aided by suspension from wires, but in any case, it was a clear attempt at building a self-moving machine.

Greek myth also credits the legendary craftsman Daedalus with statues that moved:

“Daedalus was said to have created statues that were so lifelike that they could move by themselves.”

The Greek inventor Ctesibius, famous for his water clocks, is credited with what has been called the “first cuckoo clock” in The Rise and Fall of Alexandria: Birthplace of the Modern World (page 132):

“Soon Ctesibius’s clocks were smothered in stopcocks and valves, controlling a host of devices from bells to puppets to mechanical doves that sang to mark the passing of each hour — the very first cuckoo clock!”

Over the centuries, more and more complex contraptions were used to create automata, such as wind-powered moving machines.

Programmable Complex Mechanical Machines

It took until the 9th century CE for the first recorded programmable complex mechanical machine:

“The earliest known design of a programmable machine is the automatic flute player that was described in the 9th century by the brothers Musa in Baghdad.”

This was also described as “the instrument that plays itself.” The brothers’ book describing these devices is kept in the Vatican Library.

Mechanical Calculating Machines

Another step on the long road to modern AI was the creation of mechanical calculators.

The first mechanical calculator was built by Wilhelm Schickard in the first half of the 17th century; it could add and subtract, and assisted with multiplication.

The next mechanical calculator, Blaise Pascal’s Pascaline, likewise performed addition and subtraction, and was built in larger numbers and became far better known.

These machines inspired thinkers like Gottfried Wilhelm Leibniz to consider the following idea:

“If every area of human experience can be understood by means of mathematical thinking and if thinking is a form of calculation and calculation can be mechanised, then all questions about reality can, in principle, be answered by means of a calculation executed by a machine.”

In many ways, this is similar to our concept of Artificial General Intelligence today.

Leibniz’s idea was that a characteristica universalis, a universal formal language of reasoning, could in principle be used to answer any question about reality.

Programmable Calculating Machines

In 1833, Charles Babbage combined the 9th-century innovation of programmable machines with the 17th-century innovation of calculating machines to conceive of the Analytical Engine: a programmable calculating machine.

Babbage never managed to build a complete machine, but his punched-card programming technique was later used in early digital computers.

Digital Machines (Computers)

The move from mechanical to digital computers was a massive leap toward where we are today.

In the late 1930s and 1940s, several machines emerged that now compete for the title of “first digital computer.”

The ENIAC, whose construction was completed in 1946, is widely considered the first digital computer, as it was the first that was both fully functional and general-purpose.

Other early digital computers included the Colossus (1943), which helped British code breakers read encrypted German messages, and the Atanasoff–Berry Computer (ABC, 1942); the former was built for a single purpose, and the latter was not programmable.

Progress accelerated rapidly from there, with advances such as stored programs in memory, RAM, real-time graphics, and transistors arriving in relatively quick succession.

Machine Learning

Finally, with the advent of complex digital machines, we can broach the subject of machine learning.

As explored at the beginning of this article, the rise of machines prompted Alan Turing to ask, in 1950, “can machines think?” Five years later, the proposal for the Dartmouth Summer Research Project on Artificial Intelligence laid out the field’s founding agenda, and its fundamental principles have remained similar ever since.

In 1955, M.L. Minsky wrote:

A “machine may be ‘trained’ by a ‘trial and error’ process to acquire one of a range of input-output functions. Such a machine, when placed in an appropriate environment and given a criterion of ‘success’ or ‘failure’ can be trained to exhibit ‘goal-seeking’ behavior.”

In other words, machine learning algorithms build mathematical models on “training data” to make decisions, without being explicitly programmed to make those decisions.

That is the key difference between a calculator and machine learning (or AI): a calculator, or any other automaton, produces pre-determined output, while an ML system learns its behavior from data and makes probabilistic decisions on the fly.
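To make that distinction concrete, here is a minimal sketch, in plain Python, of a model whose behavior is learned from example data rather than hard-coded. The data points, learning rate, and iteration count are invented for illustration:

```python
# Fit y = w*x + b to example points by gradient descent on mean squared error.
# Nothing below tells the program what the answer is; it is learned from the data.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (x, y) training pairs
w, b = 0.0, 0.0   # model parameters, learned rather than programmed
lr = 0.01         # learning rate

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned mapping: y = {w:.2f} * x + {b:.2f}")  # roughly y = 2x + 1
```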

A mechanical machine also has much stricter physical limitations, in terms of how many machine components (e.g. pulleys, levers, gears) can be fit in a contraption, while a modern digital machine’s CPU can fit billions of transistors.

The phrase “machine learning” itself was coined by Arthur Samuel in 1959, after years of work on a computer program that learned to play checkers using rote learning.

In 1957, Frank Rosenblatt created the perceptron, a supervised learning algorithm for binary classification, and implemented it in hardware as the Mark I Perceptron for image recognition.
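The Mark I was a hardware device, but the learning rule behind it is simple enough to sketch in a few lines of Python. The toy task below (learning logical AND), the learning rate, and the epoch count are illustrative assumptions, not details of Rosenblatt’s system:

```python
# A toy perceptron: a linear threshold unit trained with the perceptron learning rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in samples:
            # Predict 1 if the weighted sum crosses the threshold, else 0.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            # Nudge the weights toward the correct answer whenever we are wrong.
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Learn a simple linearly separable concept: output 1 only when both inputs are 1.
weights, bias = train_perceptron([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
print(weights, bias)
```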

After Rosenblatt presented his work to the US Navy in 1958, The New York Times reported:

The perceptron is “the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

Even in 1958, researchers were foreseeing a day of sentient AI.

Later achievements included feedforward neural networks (like a perceptron, but with multiple layers), the nearest neighbor algorithm in ‘67, backpropagation on computers in the 70s (which is now used to train deep neural networks), boosting algorithms in the early 90s, and LSTMs in ‘97.
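As one example from that list, the nearest neighbor rule classifies a new point by copying the label of the closest training point. This is a minimal sketch with made-up points and labels:

```python
# 1-nearest-neighbor classification: label a query with the label of its closest neighbor.
import math

def nearest_neighbor(query, labeled_points):
    """labeled_points: list of (coordinates, label); returns the closest point's label."""
    return min(labeled_points, key=lambda item: math.dist(query, item[0]))[1]

training = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((5.0, 5.0), "dog")]
print(nearest_neighbor((0.9, 1.1), training))  # -> cat
```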

Improvements Due to Data and Computing Power

In his recent AI course, leading AI researcher Andrew Ng notes that there has been “almost no progress” in Artificial General Intelligence, but that incredible progress has been made in “narrow intelligence”: input-output functions “that do one thing such as a smart speaker or a self-driving car.”

At a high level, AI is still about “learning a function that maps from x to y.”

The incredible advances we’ve seen recently are mainly due to an explosion in available data and computational power, alongside higher-quality data and a growing number of AI engineers.

More data and computational power naturally increase the accuracy of most AI models, especially in deep learning.

The Democratization of AI

Alongside the evolution of AI architectures, computing power, and data, AI has recently taken a strong hold in industry, thanks to the proliferation of more accessible AI tools.

The emergence of tools that make technologies more accessible has a long history. For instance, Gutenberg’s printing press democratized knowledge in the 15th century.

In the Internet age, “no-code” tools like Wordpress and Wix democratized site-building.

In the same vein, for decades after AI was first proposed in the 1950s, it remained largely confined to academia and saw little practical use.

Tools like TensorFlow and Keras made it feasible for more businesses to implement AI, although they are still complex tools that require highly paid machine learning engineers.
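For a sense of what implementing AI with those libraries involves, here is a minimal sketch of defining and compiling a small Keras classifier. The layer sizes, input shape, and the placeholder names x_train and y_train are assumptions for illustration, not part of any particular product:

```python
import tensorflow as tf

# A small binary classifier: 4 input features -> 16 hidden units -> 1 probability.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=10)  # x_train, y_train: your labeled data
# model.predict(x_new)                    # probabilistic predictions on new inputs
```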

Compounding this complexity, a shortage of data science professionals results in sky-high salaries for those who can build AI systems. As a result, large corporations like the FAANG companies dominate much of AI.

The emergence of no-code AI tools like Apteo reduces up-front costs while removing the need for technical expertise, enabling truly democratized AI.

No Code AI

No-code AI tools are the logical next step on the path to democratizing AI.

Early humans made stonecutting tools 2 million years ago to do more work than they could with their hands alone. Today, AI makes us more efficient and can do work for us, and no-code AI brings those benefits to everyone. With the rise of no-code tools, we’re moving into an era of truly accessible AI.