
Everything we call artificial intelligence – not scientists

  • January 7, 2023

In August 1955, a group of scientists requested $13,500 in funding for a summer seminar at Dartmouth College in New Hampshire. The field they proposed to study was artificial intelligence (AI). While the funding request was modest, the researchers’ conjecture was not: “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Since these humble beginnings, movies and media have romanticized AI or cast it as a villain. Yet for most people, AI has remained a topic of discussion rather than part of everyday lived experience.

AI entered our lives

Late last month, artificial intelligence in the form of ChatGPT broke free of science-fiction speculation and research labs and landed on the public’s computers and phones. It is what is known as “generative AI” – suddenly a well-crafted prompt can write an essay, draw up a recipe and shopping list, or compose a poem in the style of Elvis Presley.

While ChatGPT is the most dramatic entry in a year of generative-AI breakthroughs, similar systems have shown even broader potential for creating new content, with text-to-image prompts used to produce lifelike images that have even won art competitions. AI may not yet possess the living consciousness or theory of mind popular in science-fiction films and novels, but it is at least getting close to disrupting our notions of what such systems can do.

Researchers working closely with these systems have been stunned by hints of something like sentience, as in the case of Google’s LaMDA large language model (LLM). An LLM is a model trained to process and generate natural language. Generative AI has also raised concerns about plagiarism, the exploitation of original content to build models, the ethics of information manipulation and abuse of trust, and even the “end of programming.” At the heart of all this lies a question whose relevance has only grown since the Dartmouth summer seminar: is AI different from human intelligence?

What does “AI” really mean?

For a system to be considered artificial intelligence, it must display some level of learning and adaptation. For that reason, decision-making systems, automation and statistics on their own are not AI. AI falls into two broad categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). AGI does not exist today. The main challenge in building it is to model the world, with all of its knowledge, in a consistent and useful way. That is a colossal undertaking, to put it mildly.

Most of what we know as AI today has narrow intelligence: a particular system addresses a particular problem. Unlike human intelligence, such narrow AI is effective only in the domain it was trained in – fraud detection, facial recognition or social recommendations, for example. AGI, by contrast, would function as humans do. Currently, the most prominent attempt to reach it is the use of neural networks and “deep learning” trained on enormous amounts of data.

Neural networks are built on a loose analogy to the human brain. Unlike most machine learning models, which run their calculations over the training data once, neural networks work by passing each data point through an interconnected network of nodes, adjusting the parameters a little each time. As more and more data flows through, the parameters settle; the end result is a “trained” neural network that can produce the desired output on new data – recognizing whether an image contains a cat or a dog, for instance.
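The loop described above can be sketched in miniature. The following toy example, invented purely for illustration, trains a single artificial “neuron” (two weights and a bias as its only parameters) on the OR truth table, nudging the parameters slightly on every pass through the data:

```python
import math
import random

random.seed(0)

# Toy dataset: the OR truth table (inputs -> label).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# A single "neuron": two weights and a bias are its only parameters.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Pass every data point through the network many times, adjusting
# each parameter a little against the error (gradient descent).
lr = 0.5
for _ in range(2000):
    for x, y in data:
        error = predict(x) - y   # gradient of the cross-entropy loss
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b -= lr * error

# After training, the parameters have settled and the neuron
# reproduces OR: each prediction rounds to the correct label.
for x, y in data:
    assert round(predict(x)) == y
```

Real networks differ only in scale: millions of neurons stacked in layers, trained on millions of examples, but adjusted by the same basic mechanism.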

The huge leap in AI today stems from technological advances in how we train large neural networks, adjusting enormous numbers of parameters on every pass, thanks to the capabilities of large cloud-computing infrastructures. For example, GPT-3 (the AI system behind ChatGPT) is a giant neural network with 175 billion parameters.
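To get a feel for what “parameters” means at that scale, here is a back-of-the-envelope sketch (the layer sizes are invented for illustration): each fully connected layer mapping n_in inputs to n_out outputs contributes n_in * n_out weights plus n_out biases.

```python
# Parameters of one fully connected layer: weights plus biases.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# A small illustrative network, e.g. for 28x28-pixel digit images:
# 784 inputs -> 128 hidden units -> 10 output classes.
layers = [(784, 128), (128, 10)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 101770 trainable parameters

# GPT-3's 175 billion parameters dwarf this toy network by a
# factor of more than a million.
print(175_000_000_000 // total)
```

Every one of those parameters must be stored, updated and re-updated throughout training, which is why large cloud infrastructure is the price of entry.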

What does AI need to work?

AI needs three things to be successful.

First, it needs high-quality, unbiased data – and plenty of it. Researchers building neural networks draw on the large datasets that appear as society digitizes. Copilot, which augments the work of human programmers, draws its data from billions of lines of code posted on GitHub. ChatGPT and other large language models are trained on the billions of websites and text documents stored across the internet.

Text-to-image tools such as Stable Diffusion, DALL-E 2 and Midjourney use image-text pairs from datasets such as LAION-5B. AI models will continue to evolve and improve as we digitize more of our lives and feed them alternative data sources, such as simulated data or data from game environments like Minecraft.

AI also needs computing infrastructure for effective training. As computers become more powerful, models that currently demand intensive effort and large-scale computation may soon run locally. Stable Diffusion, for example, can already be run on a local computer rather than in the cloud.

The third need is improved models and algorithms. Data-driven systems continue to make rapid advances into domains once considered the preserve of human cognition. But because the world around us is constantly changing, AI systems must be continually retrained on new data. Without this crucial step, they will give answers that are factually incorrect, or fail to account for information that has emerged since they were trained.
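The retraining point can be shown with a deliberately trivial “model” (everything here is invented for illustration): it answers with the most frequent word it has seen, so its answers go stale unless new text is folded in.

```python
from collections import Counter

# A trivially simple stand-in for a language model: it predicts
# the single most frequent word seen during training.
class FrequencyModel:
    def __init__(self):
        self.counts = Counter()

    def train(self, corpus):
        """Fold a new batch of text into the model's statistics."""
        self.counts.update(corpus.split())

    def predict(self):
        return self.counts.most_common(1)[0][0]

model = FrequencyModel()
model.train("the cat sat on the mat the cat")
print(model.predict())  # 'the'

# The world changes and new text arrives. Without retraining the
# model keeps answering from stale data; retraining folds it in.
model.train("dogs dogs dogs dogs dogs dogs dogs dogs dogs")
print(model.predict())  # 'dogs'
```

A real LLM is incomparably more complex, but faces the same issue: its knowledge is frozen at training time until someone pays the cost of retraining.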

Neural networks are not the only approach to artificial intelligence. Another prominent camp in AI research is symbolic AI: instead of digesting huge datasets, it relies on explicit rules and knowledge, much as humans construct internal symbolic representations of particular phenomena.
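A minimal sketch of the symbolic style (the facts and the rule are textbook placeholders, not taken from any real system): knowledge lives in hand-written symbols and rules, and answers come from inference rather than from training data.

```python
# Known facts, as (subject, property) symbol pairs.
facts = {("socrates", "human")}

# One hand-written rule: anything that is human is mortal.
# Encoded as (antecedent property, consequent property).
rules = [("human", "mortal")]

def infer(facts, rules):
    """Apply every rule to every matching fact until nothing new
    can be derived (forward chaining to a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for subject, prop in list(derived):
                if prop == antecedent and (subject, consequent) not in derived:
                    derived.add((subject, consequent))
                    changed = True
    return derived

print(("socrates", "mortal") in infer(facts, rules))  # True
```

No dataset, no parameters, no training: the system’s competence is exactly the knowledge its authors wrote down, which is both the appeal and the limitation of the symbolic camp.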

But over the last decade the balance of power has shifted decisively toward data-driven approaches, and the “founding fathers” of modern deep learning were recently awarded the Turing Award, the equivalent of a Nobel Prize in computer science. Data, computation and algorithms form the foundation of future artificial intelligence, and all indicators point to rapid progress in all three for the foreseeable future.

Source: Port Altele
