
Are our machines really creative?

September 20, 2023

With the sudden hype surrounding ChatGPT and some of its controversial excesses, it suddenly became clear to the masses how far the technology has already advanced and how close it is coming to (or even exceeding) human intelligence. Should we fear that machines will replace us? Have our machines really become creative? And can we still trust our eyes and ears amid all of today’s deepfakes? According to Sogeti’s Michiel Boreel, as long as we prepare and draw the right conclusions, we don’t have to worry.

Distorting reality is nothing new. Humanity has always played with reality. Even when we still lived in caves, we created an alternative reality with shadow plays and wall drawings. Thanks to the enormous technical advances of recent years, we can now do this in an increasingly realistic and therefore deceptive way. Synthesia (with GPT-4) can create a complete welcome video for you in minutes, with a very realistic avatar, based on a few keywords, an audience description and the desired tone of voice. Until a few years ago, this would have taken weeks.
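
To make the script-generation step concrete, here is a minimal sketch. It uses the OpenAI Python client as an assumed stand-in; Synthesia’s own API and avatar rendering are not shown, and the model name, prompt and function are purely illustrative choices.

```python
# Minimal sketch of generating a welcome-video script from a few inputs.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; all names here are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_welcome_script(keywords: list[str], audience: str, tone: str) -> str:
    """Turn a handful of keywords into a short welcome-video script."""
    prompt = (
        f"Write a 60-second welcome-video script for {audience}. "
        f"Tone: {tone}. Cover these points: {', '.join(keywords)}."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_welcome_script(
    keywords=["company history", "core values", "first-week checklist"],
    audience="new employees at a mid-sized software firm",
    tone="warm and informal",
))
```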

The biggest problem is that there are still no clear agreements on the use of these new resources. We know that a Netflix series is entirely fiction unless it is a documentary. We should take the same approach to deepfakes: you can have a lot of fun with them, as long as it’s clear to everyone that the result is fiction or sponsored news.

This distinction is important because the potential of “generative AI” to improve our lives is enormous. You go from the first idea to a first deliverable very quickly, whereas in the past this took so much time and effort that many ideas went unused. Generative AI offers us shortcuts that let us try out our ideas more often and thus reach a useful result more quickly.

The ideas still have to come from people and the implementation will certainly not be successful the first time. But if you use AI intelligently, you can take many steps in a very short time. In other words: new technologies only make a difference if they also create new behavior and new values.

The technology is now almost completely ready for this. We have become so advanced that some machines seem to pass the Turing test (convincing humans that they are talking to another human). GPT-4 passed the American LSAT exam for law school admission, and with flying colors: a score in the 88th percentile would be good enough for admission to a top-20 law school.

Intelligence and creativity

Is GPT-4 ready to become a lawyer or judge? Absolutely not. After all, it is nothing more than a language model that manages to make the right connections between words and sentences and generate meaningful content from them. A language model is not the same as a knowledge model. So GPT-4 actually knows nothing about law and the legal world.

GPT-4 actually knows nothing about law and the legal world.

Michiel Boreel, Global Chief Technology Officer at Sogeti (part of Capgemini)

Many people do not recognize this limitation, which has already led to plenty of disappointment. It is already noticeable that GPT-4 has lost popularity in recent weeks. It is not unusual for a technology that was adopted very quickly to lose users just as quickly; that is the nature of hype. But it also points to widespread disappointment with the supposed intelligence of this software. Only when language models are linked to mathematical models and other models that contain knowledge about the world will we really make great progress.
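
What such a link could look like in miniature: the sketch below routes questions that need exact computation to a deterministic math engine instead of letting the language model guess. It is a toy illustration of the principle, not anyone’s production architecture; the llm() stub and all other names are our own.

```python
# Toy "language model + math model" router: arithmetic goes to an exact
# evaluator, everything else to a (stubbed) language model.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate arithmetic via the AST, refusing anything but numbers and operators."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a pure arithmetic expression")
    return walk(ast.parse(expr, mode="eval"))

def llm(question: str) -> str:
    return f"[language-model answer to: {question!r}]"  # placeholder stub

def answer(question: str) -> str:
    if re.fullmatch(r"[\d\s+\-*/().]+", question.strip()):
        return str(safe_eval(question))   # computed exactly, not predicted
    return llm(question)

print(answer("12 * (7 + 3)"))            # -> 120
print(answer("Who was Edward Hopper?"))  # deferred to the language model
```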

What is the state of creativity in our machines? Is beating a Go world champion a testament to creativity? In any case, it is the result of a machine that taught itself, by playing against itself countless times, to become better at this mental sport, which can be seen as a form of creativity. The same thing happens with artificial image generators: you can give them as many examples as possible of what a particular image should look like, or you can provide a limited number of images and then have the generator run an enormous number of trials, each of which may or may not be validated.
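
The self-play idea can be shown in heavily compressed form. The sketch below is a toy version, tabular value learning on tic-tac-toe with Monte-Carlo updates, nothing like the scale of the Go systems, but the loop of playing against yourself and learning from the outcome is the same.

```python
# Toy self-play: an agent plays tic-tac-toe against itself and updates a
# table of move values from each game's outcome.
import random
from collections import defaultdict

Q = defaultdict(float)                 # (board, move) -> learned value
EPSILON, ALPHA = 0.2, 0.5              # exploration rate, learning rate
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == "."]

def choose(b):
    if random.random() < EPSILON:                    # explore
        return random.choice(moves(b))
    return max(moves(b), key=lambda m: Q[(b, m)])    # exploit

def self_play_episode():
    board, player, history = "." * 9, "X", []
    while True:
        m = choose(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or not moves(board):                    # game over
            for state, move, p in history:           # credit every move played
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):
    self_play_episode()
print("board positions with learned move values:", len(Q))
```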

Federer playing tennis against himself? Dall-E adding dogs to a painting? A teddy bear on a skateboard in Times Square, including shadow effects? You can call it creativity, but ultimately it is the result of an enormous amount of input, an unimaginable amount of trial and error, and/or the necessary interaction between human and machine about the desired and generated result.

Wanted: Rules and tools

In any case, the possible uses are extremely diverse and – as mentioned – sometimes very questionable. What should we do with extremely realistic photos of Trump in prison? How do we deal with this kind of “lying with pictures”? This requires well-thought-out regulations, ideally in combination with tools that can distinguish fake from authentic content.
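
One very simple building block for such tools, shown purely for illustration: comparing an incoming image against a registry of known-authentic originals using a perceptual hash. This catches re-encodes and light edits, not sophisticated fakes; real provenance schemes such as C2PA go much further. The file paths and function names below are hypothetical.

```python
# Perceptual-hash lookup against trusted originals.
# Requires: pip install pillow imagehash. All file paths are hypothetical.
from PIL import Image
import imagehash

def load_registry(paths):
    """Precompute perceptual hashes for a set of trusted original images."""
    return {p: imagehash.phash(Image.open(p)) for p in paths}

def closest_match(candidate_path, registry, max_distance=8):
    """Return the best-matching trusted image, or None if nothing is close."""
    candidate = imagehash.phash(Image.open(candidate_path))
    path, h = min(registry.items(), key=lambda item: candidate - item[1])
    return path if (candidate - h) <= max_distance else None

registry = load_registry(["official/photo1.png", "official/photo2.png"])
print(closest_match("incoming/suspect.jpg", registry))
```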

You face similar debates in art: is a photo created with AI worth less than one edited by a photographer? Or is the end result – the beauty or impact of the image – ultimately the most important thing? Clear rules are needed here too. Photographer Boris Eldagsen, for example, is advocating for a new competition category, “promptography”, in which images are the result not of light but of prompts entered into a generative AI.

But it goes further. According to Europol, by 2026, 90 percent of what you find on the internet will be AI-generated. What happens when the next generations of AI build their “intelligence” on these artificially generated texts and images? Do you then get a kind of inbred internet reality? What about copyright if Dall-E creates a Hopper-esque painting? How do we deal with ChatGPT “hallucinating” references and facts? Who is responsible if you rely on generative AI that gives you incorrect information?
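
That “inbred” dynamic can be made tangible with a toy simulation: repeatedly fit a distribution to data, then replace the data with samples from the fit. This is an assumption-laden sketch, not a model of the real internet, but it shows how information in the tails erodes when each generation learns only from the previous generation’s output.

```python
# Toy model-collapse simulation: each "generation" is fitted only to the
# previous generation's synthetic output. Requires numpy.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # stand-in for human-made content

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=100)       # next gen trained on outputs
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# The estimated spread performs a downward-biased random walk: any single
# run is noisy, but over many generations the variance tends to collapse.
```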

Language models like ChatGPT are not knowledge models.

Michiel Boreel, Global Chief Technology Officer at Sogeti (part of Capgemini)

In any case, we must be clear about this: language models like ChatGPT are not knowledge models. You can use them as support, but not as your only source of knowledge. That this is happening anyway can be attributed to OpenAI, which suddenly showed the world this imperfect model without warning about its limitations. But of course we also have ourselves to blame, along with the lack of a legal and social framework that puts all of these solutions into perspective.

Should we be afraid of AI?

Our fear of artificial intelligence has largely been instilled in us by Hollywood, which almost always associates AI with catastrophic consequences for humanity. But scientists and entrepreneurs such as Stephen Hawking, Bill Gates and Elon Musk have also repeatedly pointed out the danger of superintelligent machines and advocated strict guidelines for the use of AI.

Perhaps the most concrete fear today is that AI will cost us jobs. This is of course true: many repetitive and simple tasks, such as bookkeeping and telemarketing, will disappear through automation. But by now everyone knows that a large number of new jobs will take their place: AI and ML experts and BI analysts, but also sustainability specialists and agricultural equipment operators, according to the latest forecast from the World Economic Forum. The well-known saying that it is not AI that will replace you, but someone who understands AI better than you, is completely justified.

In general, no one has to fear that AI will destroy jobs on balance. The increased productivity that each of us will achieve thanks to AI – programmers through code automation, all of us through a digital assistant that helps us focus on our core business – will stimulate the economy and lead to a significant increase in the number of open jobs. If there is anything we have to worry about, it is that there will be too few people to do the work that AI cannot (yet) do.

Conclusion

A law named after the American futurist Roy Amara states that the effects of a new technology are usually overestimated in the short term and underestimated in the long term. AI may not make much progress in the short term, but in the long term it will radically change our society. That is why it is important to create, now, a framework within which we can and may use this technology. AI itself may have limited creativity, but it can do wonders for ours.

This is a post by Michiel Boreel, Global Chief Technology Officer at Sogeti (part of Capgemini). He describes his main mission as “getting customers to dream about what technology makes possible.” Click here for more information about the company.

Source: IT Daily
