Artificial Intelligence, or AI, has been widely discussed in recent years. As the name suggests, the intelligence is artificial in the sense that it is simulated rather than natural. Intelligence itself is a broad term that spans abilities ranging from logical reasoning to problem solving. Artificial Intelligence, then, is a subfield of computer science concerned with building machines and software that mimic human cognitive functions: systems that observe, analyze, and act on data much as the human brain would.
Here is a brief history of Artificial Intelligence that shows how the field has grown by leaps and bounds in recent years.
Ancient History: A fascinating fact about artificial intelligence is that the idea dates back to antiquity. In Greek mythology, the blacksmith god Hephaestus forged mechanical servants to help him with his work. Later, real devices and formal systems were built on similar principles, such as the mechanical pigeon of Archytas of Tarentum and Aristotle's syllogistic logic in the 4th century B.C.
History of AI over the last 100 years:
Rossum’s Universal Robots
In 1920, the Czech writer Karel Capek wrote a science-fiction play called R.U.R. (Rossum's Universal Robots). The play, which introduced the word "robot", depicted artificial people comparable to clones in today's terms. It gave researchers an early picture of Artificial Intelligence and Robotics and highlighted the importance of AI in society and research.
The Alan Turing Machine
Alan Turing, the renowned mathematician, introduced a model called the Turing Machine. This abstract machine can express the logic of any given algorithm, and it still holds a special place in the field of Computer Science today. Shortly after World War II, Turing proposed his famous Turing Test, designed to judge whether a machine is intelligent. In the test, a human judge holds text conversations in natural language with both a machine and a person. If the judge cannot reliably tell which is which, the machine is said to be intelligent.
The Dartmouth Conference
In 1956, the first recorded Artificial Intelligence workshop was held at Dartmouth College. This marked the start of organized research in the field: researchers from institutions including MIT, Carnegie Tech, and IBM met and brainstormed ideas related to the concept. They believed that building artificially intelligent machines would, over time, significantly reduce the work that people have to do.
Research in AI declined significantly in the subsequent years due to a lack of enthusiasm and of funding from the US and British governments, and during the early 1970s AI was a largely neglected field. A few years later, however, expert systems revived interest. An expert system is a program confined to a particular domain that answers questions and solves problems posed to it, emulating a human expert by following a set of explicit rules. It typically consists of two parts: a knowledge base, which stores the facts and rules of the domain, and an inference engine, which applies those rules to known facts to derive new ones.
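The fact-and-rule mechanism described above can be sketched in a few lines of code. The following is a minimal illustration of forward chaining, the inference strategy many classic expert systems used; the animal-identification facts and rules here are invented for the example and are far simpler than a real system's knowledge base.

```python
# Minimal sketch of an expert system's inference engine (forward chaining).
# The domain, facts, and rules are illustrative assumptions, not taken
# from any real system.

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule when all its conditions are known facts
            # and its conclusion is not yet known.
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Knowledge base: each rule is (set of conditions, conclusion).
rules = [
    ({"has feathers"}, "is a bird"),
    ({"is a bird", "can swim"}, "is a penguin"),
]

inferred = forward_chain({"has feathers", "can swim"}, rules)
print(inferred)  # the derived facts now include "is a penguin"
```

The engine keeps looping because one rule's conclusion ("is a bird") may satisfy another rule's condition, which is exactly how an expert system chains simple rules into multi-step reasoning.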
Shortly after, in the late 1980s and early 1990s, AI research suffered another setback due to a lack of funding.
21st Century: The 21st century has seen renewed growth of Artificial Intelligence, extending into Machine Learning and Big Data analysis.