An Overview of Artificial Intelligence

John McCarthy is regarded as the father of artificial intelligence. He coined the term and was one of the founders of the concept that developed into a discipline over the years. A computer scientist and cognitive scientist, McCarthy is credited with developing Lisp, a family of computer programming languages. He influenced and improved ALGOL, helped popularize time-sharing and made significant progress in developing artificial intelligence in its early phase. The American scientist received the Turing Award in 1971 and went on to win the Kyoto Prize and the U.S. National Medal of Science.

Origin of Artificial Intelligence

The pursuit of an intelligent machine predates the first computer. Scientists and engineers have been working on smarter, more efficient, more reliable, faster and multipurpose machines for centuries. The invention of the first computer set the stage for a truly intelligent machine. Until then, most machines were capable of one specific task, or of multiple tasks serving one particular objective. The computer was the first machine with seemingly infinite possibilities. The basic pursuit was to create a system that would be as intelligent as humans but significantly faster and more efficient. Efficiency, accuracy and speed were accomplished by improving primitive computers and then perfecting them. Intelligence remained elusive, since computers could only work on the basis of inputs and outputs.

What is Artificial Intelligence?

John McCarthy described artificial intelligence as the science and engineering of creating intelligent machines, and more specifically intelligent programs. Hardware and software do not think. They do not have a mind of their own. Their abilities are limited to what humans determine. Artificial intelligence is essentially an inorganic version of human intelligence, hence the term artificial. Humans can learn, grow and improve, decide, work, solve problems that crop up from time to time, act and react differently depending on the triggers of the moment, and evolve. Computers or programs cannot do so without human intervention. Artificial intelligence deals with the premise that programs will be able to learn, grow and improve, albeit digitally and inorganically, decide, work, solve specific and generic problems, act and react distinctly, evolve and keep thriving on their own without human intervention.

An Illustration of Artificial Intelligence

The philosophy of artificial intelligence is fairly simple. Computers, programs or systems can possess the curiosity of humans, which leads them to think, wonder, behave and grow like humans. Artificial intelligence has different objectives. One goal is to develop systems that exhibit intelligence in the form of behavior, explanation, demonstration, advice and learning. Another goal is to develop systems that understand, learn on their own, think for themselves and actually behave just like humans, without an organic physical existence. Robots or machines are inorganic even though they have a physical form.

There are many components of artificial intelligence. It is not just computer science or engineering. It is both, and it involves other disciplines such as biology, linguistics, mathematics and psychology. Reasoning, problem solving and learning are the primary objectives of developing artificial intelligence. These are inherent qualities of humans, or of the brain to be more precise. A program empowered by artificial intelligence can answer generic and specific questions, the former being a result of its learning and adaptability and the latter being a predetermined or programmed response. Artificial intelligence can enable a program to undergo modifications without any changes to its original structure, as the sketch below illustrates.
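To make that last point concrete, here is a minimal perceptron sketch in Python. It is an illustrative toy, not any particular production system: the training data, learning rate and function names are all invented for this example. The point is that learning changes only the stored numbers (the weights and bias), never the program's code.

    def train(samples, epochs=10, lr=0.1):
        """Learn weights w and a bias b for a tiny binary classifier."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), label in samples:
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = label - pred       # the learning signal
                w[0] += lr * err * x1    # only these numbers change;
                w[1] += lr * err * x2    # the program's structure does not
                b += lr * err
        return w, b

    # Toy data: learn the logical AND of two inputs.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train(samples))  # weights and bias that separate the AND cases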

Artificial intelligence is a realm of unlimited possibilities, and the potential developments or outcomes are hard to imagine. It is a challenging discipline for many reasons. Artificial intelligence can be extremely difficult to organize owing to the sheer volume of data involved. Formatting, sorting or any kind of regulation may be difficult because the program keeps changing all the time. The natural evolution of the program may or may not have an adverse effect on its functioning and on how users perceive it. Only a truly effective artificial intelligence program can separate accurate information or learning from inaccurate or incomplete data.

Applications of Artificial Intelligence

Artificial intelligence is already a reality. We may not have a thinking, walking, talking, responding and growing robot yet. Nor do we have an omnipotent program that can control all other programs, or millions of independent programs operating autonomously and intelligently. However, we have artificial intelligence in gaming, including chess, tic-tac-toe and poker. The programs used in these games can consider a vast number of possibilities and respond differently in distinct situations, as the search sketch below shows. Machine learning, natural language processing, expert systems that help with reasoning and advising, and vision systems that understand, comprehend and interpret data based on their input are some examples of artificial intelligence that have already been developed and deployed.
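Game programs typically do this with game-tree search. Below is a minimal minimax sketch in Python for tic-tac-toe; it is an illustrative toy under simple assumptions (a 9-cell board held as a list, with 'X' maximizing), and all function names are invented for this example rather than taken from any actual game engine.

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        """Return 'X' or 'O' if a line is complete, else None."""
        for a, b, c in WIN_LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Score a position for 'X': +1 win, -1 loss, 0 draw.
        Recursing over every legal move is exactly how such programs
        consider a vast number of possibilities."""
        w = winner(board)
        if w:
            return 1 if w == 'X' else -1
        if ' ' not in board:
            return 0  # board full: a draw
        other = 'O' if player == 'X' else 'X'
        scores = []
        for i, cell in enumerate(board):
            if cell == ' ':
                board[i] = player
                scores.append(minimax(board, other))
                board[i] = ' '
        return max(scores) if player == 'X' else min(scores)

    def best_move(board, player='X'):
        """Pick the legal move with the best minimax score for player."""
        other = 'O' if player == 'X' else 'X'
        def score(i):
            board[i] = player
            s = minimax(board, other)
            board[i] = ' '
            return s
        moves = [i for i, c in enumerate(board) if c == ' ']
        return (max if player == 'X' else min)(moves, key=score)

    print(best_move([' '] * 9))  # 0: with perfect play, every opening draws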

Doctors and healthcare professionals use expert systems in clinical settings to diagnose different kinds of medical conditions; a simplified sketch of how such a system works follows below. Law enforcement agencies use various kinds of software that apply artificial intelligence to image recognition and forensics. Speech recognition programs have become quite common. There are software applications that can recognize handwriting. Intelligent robots have also been developed that can perform tasks usually done by humans. Equipped with sensors, these robots are sensitive to physical touch, heat or temperature, light, movement, pressure and sound. They have state-of-the-art processors providing sufficient memory, and they behave intelligently in many situations. However, robots have not truly become intelligent to the extent that the premise of artificial intelligence promises. Although some programs help robots learn from mistakes and make them adaptable, they are yet to become truly autonomous, and the scope of growth of their intelligence is yet to be tested.
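At their core, classic expert systems match known facts against a knowledge base of if-then rules. The toy Python sketch below shows the idea; the rules and symptoms are entirely made up for illustration and have no medical validity.

    # Each rule pairs a set of required findings with a conclusion.
    RULES = [
        ({'fever', 'cough'}, 'possible flu'),
        ({'fever', 'rash'}, 'possible measles'),
        ({'sneezing', 'itchy eyes'}, 'possible allergy'),
    ]

    def diagnose(findings):
        """Return every conclusion whose conditions are all present."""
        facts = set(findings)
        return [conclusion for conditions, conclusion in RULES
                if conditions <= facts]

    print(diagnose({'fever', 'cough', 'rash'}))
    # ['possible flu', 'possible measles']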

A Timeline of Artificial Intelligence

The first time the word robot was ever used was in 1923, predating computers and even the concept of artificial intelligence. It appeared in a science-fiction play called R.U.R., or Rossum's Universal Robots, written by Karel Čapek. In 1941, Columbia University alumnus Isaac Asimov coined the word robotics, and the first mathematical model of neural networks dates back to 1943. Alan Turing developed a test to evaluate machine intelligence in 1950. It came to be known as the Turing Test, and he detailed it in his paper "Computing Machinery and Intelligence."

John McCarthy coined the term artificial intelligence in 1956. The same year, the first running artificial intelligence program was demonstrated at Carnegie Mellon University. Two years later, McCarthy created LISP, a family of programming languages designed specifically for artificial intelligence. In 1964, MIT student Danny Bobrow submitted a dissertation showing that computers could understand natural language well enough to solve algebra word problems. A year later, Joseph Weizenbaum developed ELIZA at MIT, an interactive program that could hold a conversation in English.

The first robot that could move, perceive and solve problems was developed in 1969 by scientists at the Stanford Research Institute. This robot was called Shakey. Then came Freddy, developed by the Assembly Robotics group at the University of Edinburgh in 1973. The first computer-controlled autonomous vehicle, the Stanford Cart, was built in 1979. A drawing program called Aaron, created by Harold Cohen, was demonstrated in 1985. The most significant progress in artificial intelligence happened in the nineties, when machine learning improved and subsequently leapfrogged toward what we have today. Noteworthy developments during the nineties include scheduling and data mining, multi-agent planning and case-based reasoning, web crawling, vision and virtual reality, understanding and translating natural language, and, in 1997, IBM's Deep Blue becoming the first chess program to beat reigning world champion Garry Kasparov.