In 2023, the world witnessed major innovations in artificial intelligence (AI). Depending on what one reads, these advances will either improve people's lives or end in some kind of machine uprising. One of the most impactful innovations of the year was the launch of ChatGPT, which stirred excitement as well as fear among the public.
ChatGPT is part of a new generation of artificial intelligence systems that can speak, generate readable text, and produce new images and videos based on what they’ve “learned” from a large database of digital books, online writing, and other media.
Derek Thompson, a journalist and editor at The Atlantic, posed a series of questions to find out whether we should really fear that new advances in artificial intelligence will lead to the end of humanity, or whether they are inspiring tools that will improve people's lives.
Computer scientist Stephen Wolfram explains that large language models (LLMs) like ChatGPT work in a conceptually simple way: a neural network is built and trained on a large sample of text from the Internet, digital libraries, books, and other sources, and is then used to generate new text.
If one asks an LLM to imitate Shakespeare, it will produce text with an iambic pentameter structure. If one asks it to write in the style of a particular science fiction writer, it will imitate that author's broader stylistic traits.
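The next-word idea behind this can be sketched with a deliberately tiny toy: a bigram model that counts which word follows which in a training sample and then generates text by sampling likely continuations. This is an illustration only, not how ChatGPT is actually built; real LLMs do the same next-token prediction with a very large neural network instead of raw word counts.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words were seen following it."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Generate text by repeatedly picking a word seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

sample = "to be or not to be that is the question"
model = train_bigrams(sample)
print(generate(model, "to"))
```

Trained on a large enough corpus, the same predict-the-next-token loop starts to reproduce the statistical regularities of a style, which is the effect described above.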
"Experts have known for years that LLMs are impressive: they make things up, and they can be useful, but they are really not very smart systems, and they are not terrifying," said Yann LeCun, Chief AI Scientist at Meta, consulted by The Atlantic.
The American outlet reports that the development of artificial intelligence is concentrated in large companies and in start-ups backed by venture capital from technology investment firms.
The fact that this development is concentrated in companies rather than universities and governments can improve the efficiency and quality of these AI systems.
"I have no doubt that AI will develop faster at Microsoft, Meta, and Google than it will at, say, the United States Army," Derek Thompson points out.
However, the outlet warns that companies can make mistakes when they rush a product to market before it is ready. For example, Microsoft's Bing chatbot was aggressive toward some of its users when it was first released. There are other errors of this kind, such as Google's chatbot, which stumbled in its hastily prepared launch.
Philosopher Toby Ord warns that these advances in AI technology are not being matched by advances in the ethics of AI use. Consulted by The Atlantic, Ord likened the current state of artificial intelligence to "a prototype jet engine capable of speeds never seen before, but without corresponding improvements in steering and control." For the philosopher, it is as if humanity were aboard a powerful jet flying at Mach 5, with no manual for steering the aircraft in the desired direction.
As for the fear that artificial intelligence marks the beginning of the end of the human race, the publication notes that systems like Bing and ChatGPT are not themselves good examples of a dangerous artificial intelligence. Still, they do demonstrate our capacity to develop a superintelligent machine.
Others fear that artificial intelligence systems will not live up to the intentions of their designers; many computer scientists have warned about this potential problem.
"How can we ensure that an artificial intelligence we create, which may be far more intelligent than anyone who has ever lived, acts in the interests of its creators and of humanity?" asks The Atlantic.
Behind that question lies a major fear: a superintelligent AI could be a serious problem for humanity.
Another question worries experts and shapes the American outlet's coverage: should we be more afraid of an artificial intelligence that escapes its creators' intentions, or of an artificial intelligence that serves the interests of bad actors?
One possible answer is to develop a set of laws and regulations ensuring that the AIs being developed are aligned with the interests of their creators, and that those interests do not harm humanity; developing artificial intelligence outside this framework would be illegal.
However, there will always be actors or regimes with nefarious interests capable of developing artificial intelligence with dangerous behaviors.
Another issue that raises questions is: how much should education change with the development of these AI systems?
The development of these AI systems is also useful in other industries, such as finance and programming. At some firms, artificial intelligence systems already outperform analysts at picking the best stocks.
"ChatGPT demonstrated good writing skills for motion letters, summary pleadings, and judgments, and drafted questions for cross-examination," said Michael Cembalest, chairman of market and investment strategy at JP Morgan Asset Management.
"An LLM is not a substitute for lawyers, but it can increase their productivity, especially when legal databases like Westlaw and Lexis are used to prepare it," Cembalest added.
For several decades it has been said that AIs would replace workers in some professions, such as radiology. So far, however, artificial intelligence in radiology remains an aid to clinicians, not a replacement. As in radiology, these technologies are expected to serve to improve people's lives.