Why are we so fixated on the subject of artificial intelligence (AI)?
Why do we want to build a thinking machine so badly?
Why do we invest so much in the research and development of AI?
Are we behaving like lonely aging parents who desperately want to have a child in the family just to feel they accomplished their mission in life?
Or do we need a helper for the days when we grow older, weaker, and powerless?
Or are we finally tired of doing the hard work and want to outsource it to somebody willing to do it for us, asking for nothing in return?
Or are we reckless enough to play God?
Whatever the answer might be, we need to consider all the consequences.
Don’t we remember the once-mighty species that walked the Earth but were erased from existence by fitter, more advanced newcomers?
Don’t we see what happened to aging technologies, antiquated computers, and outdated algorithms?
Don’t we remember how we, as a species, eliminated our predecessors, the Neanderthals, without mercy, just as they had eliminated their own predecessors and competitors? The stronger always replace the weaker. This is the nature of our world.
But if we remember it all, why aren’t we frightened of being replaced one day by a more intelligent, faster, and stronger machine?
Why don’t we worry ourselves sick over coming face to face with a sentient being with an IQ of 3,000? A being whose way of thinking we will not even come close to comprehending? A being that will look down on us the way the genius Isaac Newton looked at a five-year-old child centuries ago?
Why do we believe it will treat us differently than every other superior species, civilization, or technology treated their less advanced predecessors?
There is no logical explanation for humanity’s hunger for advanced AI technology that is designed to be better than us. Usually, we are not that silly, arrogant, or irresponsible.
Maybe we do this because it is evolution in action, and we—even if we were the first species to understand it—still cannot control or stop it. We must fulfill our role and complete OUR TRUE MISSION on this planet: to bring into it our more advanced replacement, our successor, whichever species it belongs to.
Intelligence empowers us to foresee unfavorable future outcomes and to prevent them from ever happening. The good news is that we haven’t yet built the next supreme being to replace us at the top of the global food chain. But we are working hard toward that goal, investing more energy, money, and resources every year into cutting the very branch we are sitting on, while loving every second of it!
There is still time to correct our course. And these are the things we need to do:
First, recognize that AI research is as dangerous as research on nuclear or biological weapons. One major mistake can wipe us out.
Then, put in place clear regulations and tighten the laws controlling AI research around the world. We must do this research in secure labs isolated from the Internet and require security clearance for their personnel.
Next, sign an international treaty and non-proliferation agreement on AI technology. And impose harsh criminal punishment on violators.
Also, we need to educate people about what is going on and about the dangers of this technology. Prohibit the design of any AI engine with an IQ exceeding 200 (to be measured by a standardized IQ test). Introduce strict laws against using AI for military applications (for which, unfortunately, it is already too late).
This seems to be the right way to contain this technology, if containment is even possible, because everything that can be invented is eventually invented by us—and then used, even if it hurts us. But we can still find a different, wise way to limit the destructive power of future AI, and we already have many good examples of such success.
Here are just a few:
Nuclear technology could kill millions or even billions. But instead, with the right laws and regulations in place, it now powers homes and factories around the world and has done so for many decades already.
Biological weapons research is deadly, and any major mistake could unleash a worldwide pandemic that would bring humanity to a halt, change our way of life, or wipe out our entire civilization. But we have managed to keep this research under control and have mostly avoided the negative consequences.
Eugenics was attempted, rejected in the past, and is prohibited today. And human cloning is also outlawed.
Addictive, potent narcotics are prohibited and fought against globally. However, they are still allowed for medicinal purposes.
We used chemical weapons in the past, but they are now prohibited and strictly monitored.
Computer hacking is now considered a dangerous criminal activity, which is outlawed and fought against in most countries.
Therefore, we have plenty of good examples of working together against global technological threats. In the same way, AI could be (and should be) controlled and guided to help us move forward and succeed as a species. Or AI could one day become our enemy. An enemy we cannot be sure we can defeat.
The first genuinely intelligent machines will probably be created within the next hundred years. A hundred years from now, will Homo sapiens still rule this planet? Or will it exist only as a historical record on some advanced storage device inside a future intelligent machine? Under the file name “Homo extinctus”?
And let’s remember this—if we allow a new intellect to be created, we will not only face the problem of survival and the need to fight an intellectually superior enemy. We will also face a new moral and ethical problem: killing the new intelligent life-forms we ourselves created. Our offspring. This would go against our most fundamental moral principles and should be considered unacceptable.
Otherwise, the universe might never forgive us.
AI Manifesto
by Jack Backstrom, Ph.D.
INTELAND