Venturing into the era of artificial intelligence: the other side of the coin (Parts 2, 3, 4)
February 1, 2020
This is part of a four-part news feature package. Click here to read part one.
Though the idea of an AI-driven car is often criticized, it is hard to ignore that 95 percent of automobile accidents (which are among the leading causes of injuries in the United States) happen due to human error. A computer, however, cannot be distracted. Although it is not clear exactly how many lives would be saved, many agree that human-driven cars come at a very high cost in terms of risk, and that AI-driven cars could bring down automobile accident numbers.
There is a thought experiment that often finds itself being used by those who doubt AI. Based on the famous philosophical trolley problem, the hypothetical question frames many of the ethical concerns raised around the use of AI.
Say one sets an AI-driven car on a straight road, where it reaches high speed. All of a sudden, two obstacles appear on the road: a tall tree and a child. In this hypothetical situation, the car cannot stop in time, nor can it pass between the two. It must choose which one to hit.
Driving into the tree may damage the car, but a human’s foundational ethics dictate that a wrecked car does not weigh more than a child’s life.
However, AI is essentially a blur of ones and zeros; it cannot grasp morality and ethics the same way a human brain does. From its perspective, the tree is the greater danger simply because it is the bigger object, whereas the child, a much smaller object, would do less damage to the car. Hypothetically, the car would ultimately steer toward the child instead.
Now, imagine an AI system with the same undeveloped set of morals, but equipped with nukes. A single mishap would bring chaos. Yet this is scarily close to the reality the world might face if the powers given to AI are not kept in check.
“So the rate of improvement is really dramatic,” billionaire tech entrepreneur Elon Musk said at the South by Southwest event in March 2018. “We have to figure out some way to ensure that the advent of digital super-intelligence is one which is symbiotic with humanity. That is the single biggest existential crisis that we face and the most pressing one.”
It is questions and hypothetical situations such as these that daunt some. The argument has been addressed by many of the leading minds in science and tech. While most excitedly support the advancement of AI, some, including Musk, Stephen Hawking and Bill Gates, have warned time and time again that AI could ultimately be detrimental to humans if left unchecked.
AI will be a crucial technology in the near future and has jaw-dropping raw potential, but like a child, as many are realizing, it must be taught and kept in check.
“[AI] has the potential to be prosperous in the future,” CHS9 engineering teacher Grant Garner said. “But, it’s going to all boil down to what it’s used for. If you’re creating something like artificial intelligence, which has the potential to learn, you have to realize it’s basing all its decisions on knowledge, past precedents, what it deems best for that moment; it will be devoid of emotions and morals. So ultimately, you’re going to have problems with them making decisions that have no moral bases.”
Contrary to the depictions seen in pop culture, numerous experts in the field state AI will not pose a threat to humankind as long as it is kept in check and given rules, regulations and boundaries, somewhat similar to the famous Three Laws of Robotics by Isaac Asimov.
The Greek mythological figure of King Midas serves as an example. Known as the "greediest king in the land," Midas wished for the ability to turn everything he touched to gold, a blessing that became a curse when he accidentally turned his own daughter to gold, killing her.
Like Midas, humans have a wish of their own, and they must be careful how they use it. AI is a tremendous power, and the outcomes of its evolution can be either good or bad. With the right guidance, experts say, it can be greatly beneficial to humanity.
Shouvik Pradhan is a senior data scientist at Fidelity Investments. As someone who regularly interacts with AI, Pradhan predicts AI will be a significant component of humanity's future.
“Some people believe AI and machines will run everything, and we will lose our jobs,” Pradhan said. “However, I believe the future is better than what we think. The mobile phone we have today is as powerful as the mainframe computer that took us to the moon; therefore, in 10 years, technology will be infinitely more powerful than what we can imagine. So to make AI safe for humans, we need to observe what is coming and adapt ourselves to it.”
But AI is not a computer that can be shut down by simply pulling the plug, which is why experts say it is critical to tread carefully down this road. With a strategy and a clear-cut set of rules, humanity can keep this power under control, as AI's full potential has yet to be realized.
The late theoretical physicist Stephen Hawking warned the world of this.
“The development of full artificial intelligence could spell the end of the human race,” Hawking said in a BBC interview in 2014. “It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Life on this planet has been ruled by the unchanging, merciless law of evolution for billions of years–adapt or be left behind. It has seen the advent and extinction of millions of species. Homo sapiens have held the throne for ages, but the question still remains: will AI be humankind’s greatest ally? Or will it be the beast to finally dethrone it?
Follow Akif (@AkifAbidi) and @CHSCampusNews on Twitter.