
Artificial intelligence: what is it, history and definition

  • March 22, 2023



From improving our browser search results to assisting in spinal surgeries, artificial intelligence (AI) is increasingly taking on essential functions in our daily lives.

But what is AI?

What is artificial intelligence?

Artificial intelligence is a field that combines computer science with robust datasets to solve problems. It encompasses the subfields of machine learning and deep learning, which are often mentioned alongside it and which we will explain in this article. AI can also be defined as the ability of a machine to replicate human skills such as reasoning, learning, planning, and creativity.

History of artificial intelligence

In 1943, a paper published by Warren McCulloch and Walter Pitts described a structure of mathematical reasoning that resembled the human nervous system. It introduced neural networks, a technique now widely used in AI for data processing and decision making.
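The idea behind the McCulloch-Pitts unit can be sketched in a few lines: it sums weighted binary inputs and fires when a threshold is reached. This is a minimal modern illustration, not their original notation, and the weights and thresholds below are chosen just for the example:

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, a single unit realizes basic logic gates:
def and_gate(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def or_gate(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)
```

Networks of such units, McCulloch and Pitts argued, could in principle compute any logical function.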

A 1950 paper by Alan Turing provided a philosophical, computational, and mathematical analysis of the first concepts of machine intelligence. Turing framed his paper around "The Imitation Game", the first and most famous theoretical contest between a "thinking" machine and a human, and used it to explore the question "Can machines think?".

Two years after Turing theorized about AI, his lab colleague Christopher Strachey created one of the world's first functional AI programs with his MUC Draughts checkers game, which managed to beat amateur players despite being in its infancy.




Finally, in 1956, the field of artificial intelligence research was formally established during a seminar that brought together the leading computer-science authorities of the time. The eight-week event was such a success that several public and private institutions began investing heavily in the research that emerged from it.

Between the 1950s and the early 1970s, large corporate and government funding enabled enormous scientific advances in the field.

One of the highlights was the creation of the field of Natural Language Processing (NLP), responsible for recognizing and generating text, transcribing speech, and translating languages. NLP was needed to build intelligent personal assistants and chatbots like ELIZA, the first functional chatbot.
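ELIZA worked by pattern matching and substitution rather than understanding. A minimal sketch of that idea, with a hypothetical miniature rule script (the real ELIZA used a much larger one):

```python
import re

# Hypothetical rule script: (pattern, response template) pairs,
# tried in order; the first match wins, and the catch-all rule
# guarantees some reply.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*", "Please tell me more."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = re.fullmatch(pattern, utterance.strip(), re.IGNORECASE)
        if match:
            return template.format(*match.groups())
```

For example, `respond("I am tired")` echoes the captured phrase back as a question, which is essentially the trick ELIZA used to simulate a Rogerian psychotherapist.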

Also created were expert systems (SE): software designed to specialize in a specific domain and make decisions based on the knowledge needed to carry out its tasks.
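The core of a classic expert system is a rule engine applied to a knowledge base of facts. A toy forward-chaining sketch, with made-up diagnostic rules purely for illustration:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known facts,
    adding their conclusions until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: (premises, conclusion)
RULES = [
    (["has_fever", "has_cough"], "flu_suspected"),
    (["flu_suspected"], "recommend_rest"),
]
```

Real expert systems of the era, such as MYCIN, encoded hundreds of such rules elicited from human specialists.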

The period also launched avant-garde interactive robotics projects such as Shakey, the first mobile intelligent robot, which debuted several technology concepts still in use today.

AI winters

The excessive optimism of scientists gave rise to very high expectations among investors, who counted on ever better results. However, due to the continued need for computers with more processing power and memory, along with other technological limitations, many investments were reduced or even cancelled.

This period became known as the "First AI Winter"; it lasted from 1974 to 1980 and was heavily criticized both inside and outside academia.

In the early 1980s, investment returned to AI research and development, driven by strong uptake of expert systems in consumer markets and heavy funding from the Japanese government, which was racing to develop fifth-generation computers.




Applying artificial intelligence in the software of these computers would be essential to give the advanced hardware capabilities consumers had never seen before, which prompted the Defense Advanced Research Projects Agency (DARPA) to triple its investment in AI.

But in 1987, the US economic crisis hit the AI field hard, setting off the "Second AI Winter", which lasted nearly a decade.

The reasons were varied: the constant turnover of computer generations, which bankrupted many companies unable to keep up with the technological race; the failure of multimillion-dollar projects to deliver on their promises; and, most importantly, shareholders' and investors' fear of having no guarantee of returns, as a result of which research budgets were frozen once again.

By the mid-1990s, more than 200 AI companies had closed or been bought and merged into other companies.

AI of the new millennium

In the 1990s, AI continued to grow and expand with the first prototypes of sixth-generation computers and the advent of techniques such as data mining and probabilistic logic.

A giant example of this development came in 1997, when Deep Blue became the first chess-playing supercomputer to defeat world champion Garry Kasparov. The growth of the Internet and the increased availability of information made data mining a key area of artificial intelligence, while probabilistic logic allowed machines to handle uncertainty more accurately and efficiently.
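Probabilistic reasoning of this kind typically rests on Bayes' rule, which updates a belief when new evidence arrives. A minimal sketch with illustrative, made-up numbers:

```python
def bayes_update(prior, likelihood, false_alarm):
    """P(H|E) via Bayes' rule, where `likelihood` is P(E|H)
    and `false_alarm` is P(E|not H)."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# Example: a 1% prior, a 90%-sensitive test, and a 5% false-alarm
# rate yield a posterior of roughly 15%.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_alarm=0.05)
```

This is exactly the kind of calculation that lets a machine weigh uncertain evidence instead of treating every input as certain.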

With the advent of the new millennium, AI was driven by the increased processing and storage capabilities of the recently introduced sixth-generation computers. New techniques such as deep learning and convolutional neural networks have enabled machines to perform more complex tasks such as face recognition and autonomous driving.

An example of this was the US DARPA Grand Challenge, which rewarded autonomous vehicles for the longest distance traveled off-road and in urban areas. AI also became increasingly used in areas such as finance, marketing, and customer service.

From 2010 to the present day, AI has experienced explosive growth driven by increased demand for automation and data-mining solutions across many areas. Companies such as Google, Microsoft, Apple, Amazon, and IBM have invested heavily in research and launched AI-based products and services; Google Assistant, Cortana, Siri, Alexa, and Watson became the flagship virtual assistants of these big tech companies.

AI is already present in many areas, whether improving supply-chain efficiency, as Amazon does in its warehouses; streamlining industrial processes, as Tesla does in its factories; or improving cybersecurity, as in offerings developed by IBM. AI has also reached non-tech areas such as advertising, journalism, education, and public relations.

However, greater power brings big problems. AI is already being used for espionage, cybercrime, and other illegal activities that are deeply damaging to society. The outlook is complex enough that governments are already taking regulatory action on the use and capabilities of AI, given its harmful potential.

The future is uncertain, but history leaves no doubt about the enormous benefits this kind of technology can bring to the world.

Source: History of Artificial Intelligence.

Source: Mundo Conectado
