Artificial intelligence makes its presence felt in thousands of different ways. It helps scientists make sense of vast troves of data; it detects financial fraud; it drives our cars; it serves up music suggestions; its chatbots drive us crazy. And it’s only getting started. Are we capable of understanding how quickly AI will continue to develop? And if the answer is no, does that constitute the Great Filter?
The Fermi Paradox is the discrepancy between the apparent high likelihood that advanced civilizations exist and the total lack of evidence that they do. Many solutions have been proposed for why the discrepancy exists. One of the ideas is the “Great Filter.”
The Great Filter is a hypothesized event or condition that prevents intelligent life from becoming interplanetary and interstellar, and may even lead to its demise. Think climate change, nuclear war, asteroid strikes, supernova explosions, plagues, or any number of other entries in the rogues’ gallery of cataclysms.
What about the rapid development of artificial intelligence?
A new article in Acta Astronautica explores the idea of artificial intelligence becoming artificial superintelligence (ASI), and of ASI being the Great Filter. The article is titled “Is Artificial Intelligence the Great Filter That Makes Advanced Technical Civilizations Rare in the Universe?” Its author is Michael Garrett of the Department of Physics and Astronomy at the University of Manchester.
“There is every reason to believe that without practical regulation, artificial intelligence could pose a serious threat to the future development of not only our technical civilization, but all technical civilizations.” Michael Garrett, University of Manchester
Some think the Great Filter prevents technological species like ours from becoming multi-planetary. That’s bad, because a species is at greater risk of extinction or stagnation when confined to a single world. According to Garrett, without a backup planet, a species is in a race against time.
“Such a filter is believed to have arisen before these civilizations developed a stable multi-planetary existence, suggesting that the typical lifespan (L) of a technical civilization is less than 200 years,” writes Garrett.
If this is true, it could explain why we detect no technosignatures or other evidence of ETIs (extraterrestrial intelligences). What does that tell us about our own technological trajectory? If we face a 200-year limit, and if ASI is the cause, where does that leave us?
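The “L” Garrett refers to is the longevity term familiar from the Drake equation, which estimates the number N of detectable civilizations in the galaxy. As a rough sketch, with the other factors written symbolically rather than as measured values:

\[
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
\]

Here R_* is the galactic star-formation rate, f_p the fraction of stars with planets, n_e the number of habitable planets per system, f_l, f_i, and f_c the fractions of those that develop life, intelligence, and detectable technology, and L the lifetime of a communicating civilization. Because N scales linearly with L, capping L at roughly 200 years keeps N small enough that a silent sky is about what we should expect.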
Garrett emphasizes “…the critical need to rapidly establish a regulatory framework for the development of artificial intelligence on Earth and the development of a multi-planetary society to mitigate such existential threats.”
Many scientists and other thinkers say we’re on the verge of an enormous transformation. AI is just beginning to change how we do things, and much of the transformation is happening behind the scenes. AI looks poised to eliminate millions of jobs, and when paired with robotics, the transformation seems almost unbounded. That’s a fairly obvious concern.
But there are deeper, more systemic concerns. Who writes the algorithms? Will AI discriminate somehow? Almost certainly. Will competing algorithms undermine powerful democratic societies? Will open societies remain open? Will ASI start making decisions for us, and who will be held accountable if it does?
This is a vast tree of branching questions with no clear end.
Stephen Hawking (RIP) warned that artificial intelligence could destroy humanity if it begins to develop independently.
“I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans,” he told Wired in 2017. Once AI can outperform humans, it becomes ASI.
Hawking may be one of the most recognizable voices to issue warnings about AI, but he’s far from the only one. The media is full of discussions and warnings, alongside articles about the work AI does for us. The most alarming warnings say that ASI could go rogue. Some people dismiss that as science fiction, but Garrett doesn’t.
“The concern that an artificial superintelligence (ASI) will eventually go rogue is considered a major issue; grappling with this possibility over the next few years is a growing challenge for leaders in the field,” Garrett writes.
The problem would be much simpler if AI offered no advantages. But it provides all kinds of benefits, from improved medical imaging and diagnostics to safer transportation systems. The trick for governments is to allow the benefits to flourish while limiting the harm.
“This is especially true in areas such as national security and defense, where responsible and ethical development must be a priority,” Garrett writes.
The problem is that we, and our governments, are unprepared. There’s never been anything like AI, and no matter how we try to conceptualize it and understand its trajectory, we come up short.
And if we’re in that position, so would be any other biological species that develops AI. The advent of AI, and then of ASI, could be universal, which makes it a candidate for the Great Filter.
This is the risk ASI poses in concrete terms: it may no longer need the biological life that created it.
“Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics,” Garrett explains.
How might ASI rid itself of the pesky biological life that corrals it? It could engineer a deadly virus, it could inhibit agricultural food production and distribution, it could force a nuclear power plant to melt down, it could start wars.
We really don’t know, because it’s all uncharted territory. Hundreds of years ago, cartographers drew monsters on the unexplored parts of their maps, and that’s kind of what we’re doing now.
If this all sounds bleak and inevitable, Garrett says it’s not.
His analysis so far is based on AI and humans occupying the same space. However, if we can achieve multi-planet status, the perspective will change.
“For example, a multi-planetary species could benefit from independent experiences on different planets, diversifying survival strategies and perhaps avoiding the single-point failure faced by a planetary civilization,” Garrett writes.
If we can spread the risk across multiple planets and multiple stars, we can protect ourselves from the worst possible ASI consequences.
“This distributed model of existence creates redundancy, increasing the resilience of biological civilization to AI-induced disasters,” he writes.
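A back-of-the-envelope calculation illustrates the redundancy argument; the independence assumption and the numbers here are illustrative, not Garrett’s. If each of n self-sufficient settlements independently survives an AI-induced disaster with probability p, the chance that at least one endures is

\[
P(\text{at least one survives}) = 1 - (1 - p)^{n}.
\]

With p = 0.5, a single planet gives even odds, while five independent settlements push the odds of survival above 96 percent.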
If one of the planets or outposts that future humans occupy fails to survive the ASI technological singularity, others may. And they would learn from it.
Multi-planet status could do more than just help us survive ASI. It could help us master it. Garrett imagines situations where we could experiment more thoroughly with AI while keeping it contained. Imagine an AI on an isolated asteroid or dwarf planet, doing our bidding without access to the resources needed to escape its prison.
“This allows for isolated environments where the impact of advanced AI can be studied without the risk of immediate global extinction,” Garrett writes.
But here’s the puzzle. While the development of artificial intelligence is accelerating, our efforts to become multi-planetary are not.
“The difference between the rapid development of artificial intelligence and the slow advancement of space technology is striking,” writes Garrett.
The difference is that AI is computational and informational, while space travel involves multiple physical obstacles we don’t yet know how to overcome. Our own biological nature restrains space travel, but no such obstacle restrains AI.
“While AI can theoretically evolve its capabilities with virtually no physical limitations, space travel must contend with energy constraints, materials science limits, and the harsh realities of the space environment,” Garrett writes.
Right now, AI operates within the constraints we set. But that may not always be the case. We don’t know when, or even if, AI will become ASI. But we can’t ignore the possibility. That leads to two intertwined conclusions.
If Garrett is correct, humanity needs to work harder on space travel. It may seem far-fetched, but anyone who knows the subject knows it’s true: Earth won’t be habitable forever. Unless we expand into space, humanity will perish here either by our own hand or by nature’s. Garrett’s 200-year estimate just puts an exclamation point on it. The renewed emphasis on reaching the Moon and Mars offers some hope.
The second implication concerns the legal regulation and governance of artificial intelligence, a difficult task in a world where psychopaths can take control of entire nations and are determined to wage war.
“With industry stakeholders, politicians, individual experts and governments already warning of the need for regulation, creating a regulatory framework that can be accepted globally will be a difficult task,” writes Garrett.
“Difficult” barely describes it. Humanity’s internecine squabbling makes it all even more unmanageable. And no matter how quickly we develop guidelines, ASI might change even faster.
“There is every reason to believe that without practical regulation, artificial intelligence could pose a serious threat to the future development of not only our technological civilization, but all technological civilizations,” Garrett writes.
In some ways, it may all come down to something that seems boring and mundane: wrangling over legislation.
“The preservation of intelligent and sentient life in the universe may depend on the timely and effective implementation of such international regulatory measures and technological efforts,” Garrett writes.