This year I have had the pleasure of teaching the subject of Business Innovation at ESIC BUSINESS & MARKETING SCHOOL, where I have worked with 22-year-olds full of enthusiasm and curiosity. This challenge has led me to delve into the fascinating world of innovation, and I have discovered two aspects that have particularly caught my attention:
The first is the enormous confusion about what people understand by innovation and what it really means. This topic deserves a deeper analysis, and I promise to dedicate an article to it in the future.
The second revelation is that many of the technologies we consider “new” actually have roots that reach much further back than we tend to assume. I have realised that, for reasons unknown to me, certain technologies only reach their explosion point at specific moments, when certain people or aligned interests decide that the time is right to push them into society.
To explain this last thought, I will focus on the now-ubiquitous field of Artificial Intelligence (AI). Although many believe that AI is a recent development, its beginnings date back several decades. What is surprising is that its true explosion and mass adoption have occurred only recently, influenced by economic interests, technological advances and the perception of its necessity in our daily lives.
This “new” technology of several decades ago has been carefully developed and finally introduced at the moment when it was considered most beneficial and necessary for society. And this raises a question for me: who decided this, and why?
I do not know the answer, but I can assure you that the question is more than pertinent, and that true innovation goes beyond popular perception: it is often linked to historical and strategic factors that determine when and how a technology becomes an essential part of our lives.
From Past to Present: A Look at the Evolution of AI over Time
The trajectory of artificial intelligence (AI) is long and captivating, marked by a series of milestones and achievements that have shaped its evolution. In this article, we will review the highlights of its history, from its early days to its current applications.
AI is an ever-expanding field that has attracted a great deal of interest in recent years. It has become a fundamental part of our lives, present in virtual assistants, chatbots, autonomous vehicles and medical diagnostics. Its history spans several decades and has undergone numerous changes and advances along the way.
Exploring the Ancient Origins of Artificial Intelligence
Although the 1950s are commonly considered to mark the formal beginning of artificial intelligence (AI), it is important to recognise that the precursors of this fascinating discipline lie much earlier in history, even centuries earlier. From antiquity to the Middle Ages, and then during the Renaissance and the Modern Age, concepts and developments can be traced that laid the foundations for what would eventually become AI as we know it today.
Robots and Legends: Exploring Technology in Greek Mythology
Ancient Greek mythology tells of automatons created by Hephaestus, the god of fire and metalworking, capable of moving and speaking like humans. It also features tales of intelligent robots and artificial beings such as Pandora, as well as an early notion of “biotechnology”, exploring how technology might alter biological phenomena.
Centuries of Vision: Ramon Llull and Gottfried Leibniz and their Contribution to the Creation of Intelligent Machines
In the 13th century, the Spanish philosopher Ramon Llull developed a system of mechanical logic based on symbols and diagrams. Later, in the 17th century, the philosopher and mathematician Gottfried Wilhelm Leibniz imagined a universal language of symbols to solve problems, thus laying the foundations for the future creation of intelligent machines.
Medieval Innovation: Programmable Automata and Logic Machines
During the Middle Ages, Al-Jazari built what is often described as the first programmable humanoid automaton: a water-powered boat carrying four mechanical musicians, a device later recreated and studied by the computer scientist Noel Sharkey. In addition, the Mallorcan philosopher and mathematician Ramon Llull, mentioned above, designed logic machines intended to produce as much knowledge as possible by combining basic truths through simple logical operations.
The Machine Renaissance: Mechanising Human Thought
During the Renaissance, the possibility of mechanising “human” thought into the non-human was considered. At this time, Leonardo da Vinci exhibited his “mechanical knight”, able to move its arms by means of pulleys and cables, and Regiomontanus is said to have built an iron automaton eagle capable of flight.
Creative Minds, Revolutionary Ideas: Literature and Technology in the Eighteenth and Nineteenth Centuries
Literature begins to hint at modern technology. Swift’s “Gulliver’s Travels” (1726) includes an “engine” that can enhance knowledge and skills with the help of a non-human mind, and Samuel Butler’s “Erewhon” (1872) entertains the idea that machines could one day become conscious and supplant mankind. In the 1830s, the collaboration between Charles Babbage and Ada Lovelace gave rise to the Analytical Engine, a design for a general-purpose mechanical computer. In addition, Bernard Bolzano made the first modern attempt to formalise semantics, and George Boole invented Boolean algebra.
Seeds of Revolution: Crucial Advances in 20th Century AI
This period saw several important milestones on the road to artificial intelligence. Bertrand Russell and Alfred North Whitehead’s “Principia Mathematica” (1910–1913) helped lay the logical foundations that would later inform type checking and type inference algorithms. In 1921, the Czech playwright Karel Čapek’s science fiction play “Rossum’s Universal Robots” introduced the concept of factory-made artificial persons, called robots, a term soon adopted in research, art and popular culture. In 1939, John Vincent Atanasoff and Clifford Berry created the Atanasoff-Berry Computer (ABC), and in 1949, Edmund Berkeley published “Giant Brains, or Machines That Think”, in which he highlighted how machines can handle large amounts of information at great speed.
Turing’s Impact: From the Universal Machine to Artificial Intelligence
As we have observed, the evolution of artificial intelligence (AI) takes us on a fascinating journey through time. To continue this journey, we must go back to the year 1936 and delve into one of the most outstanding creations of the father of modern computing, Alan Turing, a British mathematician and computer scientist.
With the invention of the Universal Turing Machine, an amazing story begins, full of key elements that have evolved beyond our own imagination, eventually giving rise to intelligent machines capable of assisting us in solving everyday problems. This theoretical machine could carry out any formally defined computation: given a description of another machine, it could simulate it, adapting the same mechanism to any specific task.
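To make the idea concrete, here is a minimal sketch, written in Python rather than Turing’s original formalism, of a single-tape machine driven by a transition table. The table shown, which adds one to a binary number, is purely illustrative and not taken from any historical source.

```python
# Minimal single-tape Turing machine simulator (illustrative sketch).
# A "machine" is just a transition table: (state, symbol) -> (new state, symbol to write, head move).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break                          # halt when no rule applies
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Illustrative program: add 1 to a binary number (head starts at the leftmost digit).
increment = {
    ("start", "0"): ("start", "0", "R"),   # scan right over the digits
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),   # reached the end: start carrying
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("done",  "1", "L"),   # 0 + carry -> 1, stop
    ("carry", "_"): ("done",  "1", "L"),   # overflow: prepend a 1
}

print(run_turing_machine(increment, "1011"))   # -> 1100
```

The same simulator runs any other transition table, which is precisely the point of the universal machine: the hardware never changes, only the description of the machine it imitates.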
Turing’s innovation not only contributed to formalising the concept of an algorithm, but also served as the initial catalyst for the revolution that produced the computers we now have in our homes. His legacy, however, goes beyond this remarkable contribution.
In 1950, in his paper “Computing Machinery and Intelligence”, Turing made further significant contributions to the field of artificial intelligence (AI), including the proposal of the Turing Test. This test evaluates the ability of a machine to exhibit intelligent behaviour that is equivalent to, or indistinguishable from, that of a human being.
Challenges and Achievements: A Journey Through AI since the 1950s
Let’s revisit this exciting journey through the history and development of Artificial Intelligence, exploring the turning points, the challenges faced and the significant advances that have shaped its trajectory from the 1950s to the current era. Throughout this journey, we will dive into a vast ocean of discoveries, from the first attempts to mimic human intelligence to the advanced systems that dominate modern applications.
1950s: The First Steps of Artificial Intelligence
The 1950s marked a crucial milestone in artificial intelligence (AI) research. With the first digital computers now available, scientists and researchers embarked on exploring the possibilities of AI. However, progress was slow due to limitations in computing power and a shortage of funding. Despite these challenges, the 1950s laid the foundation for future advances in the field of AI.
In 1950, Claude Shannon, known as the “Father of Information Theory”, published “Programming a Computer for Playing Chess”.
In the same year, as I mentioned earlier, Alan Turing published “Computing Machinery and Intelligence” and proposed the Turing test, which evaluates the ability of a machine to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
Also in 1950, Isaac Asimov published “I, Robot”, a work that explores the ethical dilemmas and implications of AI in society. In 1955, John McCarthy and his colleagues coined the term “artificial intelligence”, giving a formal name to this emerging area of research.
In the same year, Allen Newell, Herbert Simon and Cliff Shaw created the Logic Theorist, widely regarded as the first artificial intelligence computer program and a major milestone in the history of AI. In 1956, Arthur Samuel demonstrated his checkers-playing program, one of the first game-playing programs capable of learning, paving the way for future research in strategy games.
Finally, in 1958, John McCarthy developed Lisp, a programming language that would become one of the most widely used in artificial intelligence research.
Although the 1950s laid the foundations for modern AI, the decade closed with one more landmark: in 1959, Arthur Samuel coined the term “machine learning”, a fundamental concept for AI systems that can improve their performance through experience.
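As a purely illustrative sketch of what “improving through experience” means, and not a reconstruction of Samuel’s program, the snippet below fits a single-parameter model that predicts better the more examples it sees; the data and numbers are invented for the example.

```python
# Illustrative sketch of "learning from experience": a one-parameter model
# whose predictions improve as it sees more (x, y) examples.

def train(examples, learning_rate=0.05, epochs=200):
    w = 0.0                                  # start knowing nothing
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y                # how wrong the current guess is
            w -= learning_rate * error * x   # nudge w to reduce the error
    return w

# Invented data generated by the rule y = 3x; the learner is never told the rule.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]
w = train(data)
print(round(w, 2))        # ~3.0: performance improved purely from experience
print(round(w * 5, 1))    # predicts ~15.0 for an unseen input
```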
The Golden Age of AI: Advances and Discoveries in the 1960s
In the 1960s, the field of artificial intelligence experienced a major surge of interest and investment. This decade saw the development of early natural language processing programs, new machine learning methods and a growing presence of artificial intelligence in popular culture.
In 1961, Unimate, the first industrial robot, invented by George Devol, begins work on the General Motors assembly line. In the same year, James Slagle develops SAINT, a program that solves symbolic integration problems.
In 1963, MIT launches Project MAC, whose AI group later becomes the MIT Artificial Intelligence Laboratory, while John McCarthy founds the AI laboratory at Stanford. In 1964, Daniel Bobrow creates STUDENT, an early AI program written in Lisp that solves algebra word problems and is cited as one of the first milestones in natural language processing (NLP).
In 1965, Joseph Weizenbaum develops ELIZA, widely considered the first chatbot. In 1966, Ross Quillian demonstrates semantic networks, which use graphs to model the structure and storage of human knowledge. In the same year, Charles Rosen and his team begin developing the Shakey robot.
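To give a flavour of how ELIZA-style chatbots worked, here is a heavily simplified, purely illustrative sketch of the approach: a handful of invented pattern-and-response rules (not Weizenbaum’s original script) that echo fragments of the user’s input back inside canned templates.

```python
import re
import random

# Heavily simplified ELIZA-style chatbot: match a pattern, echo part of the
# input back inside a canned template. The rules below are invented for illustration.
RULES = [
    (r"i need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "Does {0} explain anything else?"]),
    (r"(.*)",         ["Please tell me more.", "How does that make you feel?"]),
]

def respond(user_input):
    text = user_input.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())

print(respond("I need a holiday"))   # e.g. "Why do you need a holiday?"
print(respond("I am tired"))         # e.g. "How long have you been tired?"
```

The illusion of understanding comes entirely from reflecting the user’s own words back, which is why ELIZA surprised its creator with how readily people attributed intelligence to it.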
In 1968, Stanley Kubrick’s “2001: A Space Odyssey” is released, featuring HAL, a sentient computer. Also in 1968, Terry Winograd begins work on SHRDLU, an early natural language understanding program.
The Quiet Revolution: The Era of Expert Systems in the 1970s
In the 1970s, AI shifted its focus from symbolic reasoning to more practical applications, such as expert systems and natural language processing. Expert systems were designed to mimic the decision-making ability of human experts in specific domains, while natural language processing sought to develop machines capable of understanding and responding to human language. However, advances in AI were limited by computational constraints and lack of funding, leading to what became known as the “AI winter”.
In 1970, Waseda University in Japan builds the first anthropomorphic robot, WABOT-1. In 1973, in a report for the British Science Research Council, James Lighthill states that AI research has not had a major impact, leading to a reduction in government support for AI research. In 1977, George Lucas’ “Star Wars” is released, introducing the humanoid robot C-3PO. Finally, in 1979, a television-equipped, remote-controlled mobile robot known as the Stanford Cart becomes one of the first examples of an autonomous vehicle.
From Failure to Success: The Ups and Downs of AI in the 1980s
In the 1980s, the development of machine learning algorithms marked an important turning point in the history of AI. These algorithms allowed computers to learn and adapt based on input data, rather than being explicitly programmed to perform a specific task. This opened the door to more complex and sophisticated AI systems. However, despite these advances, the hype surrounding AI in the 1980s ended in another “AI winter”, as the technology failed to live up to some of the high expectations that had been placed on it.
In 1981, Japan’s Ministry of International Trade and Industry allocates $850 million to the Fifth Generation Computer Systems project. In 1984, Roger Schank and Marvin Minsky warn of an impending “AI winter”. In 1986, Mercedes-Benz demonstrates a driverless van equipped with cameras and sensors. Finally, in 1988, Rollo Carpenter develops Jabberwacky to “simulate natural human speech in an interesting, entertaining and humorous way”.
Discovering the Future: The Innovation Explosion of the 1990s
In the 1990s, there was a resurgence of interest in artificial intelligence after a period of reduced funding and attention in the 1980s. This was partly due to the emergence of new technologies such as neural networks. In addition, the World Wide Web became accessible to the public, leading to the development of search engines that used natural language processing to improve the accuracy of search results. This decade also saw significant advances in the development of intelligent agents and multi-agent systems, which contributed to the advancement of AI research.
In 1995, Richard Wallace developed the chatbot A.L.I.C.E. In 1997, Sepp Hochreiter and Jürgen Schmidhuber developed long short-term memory (LSTM), a type of recurrent neural network (RNN) later widely used for handwriting and speech recognition. In 1998, Dave Hampton and Caleb Chung invented Furby, the first widely sold robotic toy for children. Finally, in 1999, Sony introduced AIBO, a robotic dog capable of understanding and responding to over 100 voice commands.
The Future Arrives: Artificial Intelligence Milestones in the 2000s
The turn of the century laid the groundwork for the intelligent assistants that would arrive in the following decade, such as Apple’s Siri and Amazon’s Alexa, which use NLP technology to understand and respond to voice commands. The development of self-driving cars also began in the 2000s, with companies such as Google, later joined by Tesla, leading the way in this field.
In 2000, the Y2K problem came to a head: the fear of widespread computer failures caused by systems that stored calendar years with only two digits, making dates ambiguous from 01/01/2000 onwards. In the same year, Cynthia Breazeal developed Kismet, a robot capable of recognising and simulating emotions, and Honda launched the humanoid robot ASIMO.
In 2001, Steven Spielberg released “A.I. Artificial Intelligence”. In 2002, iRobot launched the Roomba, an autonomous robotic vacuum cleaner. In 2004, NASA’s robotic rovers navigated the surface of Mars without human intervention. In 2006, Oren Etzioni, Michele Banko and Michael Cafarella coined the term “machine reading”. Finally, in 2009, Google began work on a driverless car.
AI Takes Off: Key Innovations in the 2010s
The 2010s saw major advances in AI, including the development of deep learning algorithms, which enabled the creation of even more sophisticated AI systems. AI began to play a key role in a number of sectors, such as healthcare, finance and customer service.
In 2010, Microsoft launched Kinect for Xbox 360, the first gaming device designed to track full-body movement. In 2011, Watson, a natural language question-answering computer created by IBM, competed on, and won, the televised quiz show Jeopardy!. In the same year, Apple launched Siri.
In 2012, Jeff Dean and Andrew Ng trained a large neural network on 16,000 processors to recognise images of cats from unlabelled YouTube videos. In 2013, Carnegie Mellon University launched a semantic machine learning system that analyses relationships between images. In 2014, Microsoft launched Cortana and Amazon introduced Alexa.
In 2016, Hanson Robotics unveiled Sophia, a humanoid robot that would later become the first “robot citizen”, and Google launched Google Home. In 2017, Facebook trained two chatbots to communicate and negotiate with each other, and Samsung launched its Bixby assistant. In 2018, Alibaba’s language-processing AI outperformed humans on a Stanford reading-comprehension test, and Google introduced BERT.
AI in the 2020s: Towards an Unstoppable Future
Artificial Intelligence continues its ascent at an unprecedented pace, ushering in this decade with a series of extraordinary advances in chatbots, virtual assistants, natural language processing and machine learning. These advances have vastly expanded the capabilities of AI, enabling its application in areas ranging from data analytics to decision support.
The positive transformation that artificial intelligence promises is evident in its growing presence in key sectors such as customer service, personalisation of experiences, content management, and disease diagnosis and treatment. Notable examples of these developments include OpenAI Codex, introduced in 2021 as a revolutionary tool to assist programmers through automatic code generation, and ChatGPT, launched in 2022 as an AI chatbot that has generated both praise for its fluency and debate about its social impact.
The year 2023 brought the introduction of GPT-4, an enhanced, multi-modal version of OpenAI’s language model, and Google Bard, an alternative based on Google’s LaMDA model, challenging the chatbot landscape. By then, ChatGPT had already reached over 100 million users, establishing itself as one of the fastest growing consumer applications, while GPT-4 achieved impressive scores in standardised tests, demonstrating its ability to understand and reason at levels comparable to humans in various fields of knowledge. These developments are just one indication of the continued progress and potential of AI in the future.
The Future of AI: Between Promises and Ethical Concerns
The history of Artificial Intelligence has been a fascinating journey from its humble beginnings in the 1950s to today’s sophistication. Each milestone in this timeline has represented a significant advance in our understanding and ability to develop machines that think and act similarly to humans. From the earliest chess programs to today’s deep learning systems, we have witnessed astonishing progress in the field of AI.
However, as we move into the future, it is essential to reflect on the impact of AI on our society. These technologies have the potential to radically transform various aspects of our lives, from healthcare to transportation and beyond. But along with these opportunities come significant ethical and moral challenges and considerations.
On the one hand, AI can improve efficiency, accuracy and accessibility in areas such as medical diagnosis, resource management and automation of tedious tasks. However, it also raises concerns about job losses, data privacy, algorithmic discrimination and control over our daily lives.
It is essential to address these ethical dilemmas and to ensure that AI is developed and deployed in a responsible and ethical manner. This requires the participation of a wide range of stakeholders, including policymakers, technology companies, academics, ethicists and society as a whole.
Ultimately, the future of AI will depend on how we manage these challenges and how we use this technology to improve people’s quality of life and promote overall social well-being. With careful planning and a human-centred approach, we can harness the transformative potential of AI while mitigating its potential negative impacts.