AI - Science or science fiction?
Is "artificial intelligence" a thing? Or is it an attempt to repackage and rebrand already existing concepts and solutions? Within the AI context there are lots of terms meant to make the solutions feel more "human". An AI isn't being programmed, but trained. If it gives you the wrong answer, it's not bugs but hallucinations. Still, what we're dealing with is programming. And bugs.
Perhaps the issue we see here is science fiction being mixed up with science?
I've always loved science fiction (or science fantasy). I grew up with Star Wars, Star Trek, and other sci-fi franchises. If you're familiar with the genre, you know the standard tropes too: faster-than-light travel, artificial gravity, anti-gravity, thinking - and sentient - machines, cryogenic sleep, infinite energy sources, and a lot more.
These tropes or concepts often rely on in-universe pseudoscience for the sake of suspension of disbelief - to make us ignore the fact that they break fundamental natural laws. I often see people defending them with arguments like "it might not be possible with the science or technology of today, but it might be in the future", sometimes referring to authors like Jules Verne, who predicted some of today's solutions.
| Image credit: CarlosOlmos, Pixabay |
But I shouldn't digress too much. Many of the sci-fi tropes will - can - never be realised because they require pure magic to work. So, is artificial intelligence another example of this?
Well... It depends. The artificial intelligence we see in the movies, like Bishop the android in Aliens, R2-D2 in Star Wars, or the bushel of Schwarzenegger-looking killer droids from the Terminator franchise? They will never exist. Yet this is what many AI proponents imagine for the future. Thought and sentience are traits exclusive to the brain, and arguably what defines intelligence. What makes these robots an impossibility is that they rely on computers for this.
Here's the catch: A computer is utterly incapable of thought. It can never be "intelligent" in the way a brain is.
"But a brain is an organic computer" is an argument that pops up every now and then. That's true: a brain can be considered a computer. But a digital computer isn't a brain. A digital computer is unmatched when it comes to logic, and it's based on the millennia-old principles of the invention known as the abacus - a device for calculations, which evolved into punch cards, then into mechanical computers, and finally into the computers of today.
All of them are more reliable than humans at executing clearly defined algorithms and calculations. A human brain is capable of this, but comes nowhere close to a computer, which excels at it. Why?
| Image credit: MerandaDevan, Pixabay |
Because a computer computes, it doesn't think.
It will always be dependent on its programming to carry out its tasks. And this is where the "AI" solutions of today come into the picture. Because no matter how many data centres we build, they will still be computers dependent on programming. We have different means of programming them today - what the AI proponents call "training". It's another word for programming, and can be done in more ways than just coding. It can be data driven, allowing the machines to "program themselves". This is where the "hallucinations" - or bugs - often manifest themselves. To "train" these solutions, you often need vast amounts of data or information (sometimes stolen IP), and small errors can ripple and spread unless we have lots of people monitoring them.
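The point that "training" is just data-driven programming - and that errors in the training data become bugs - can be illustrated with a toy sketch. This is not any real AI system, just a hypothetical one-parameter model whose behaviour is derived entirely from example data:

```python
# A minimal sketch (not a real AI system): "training" a one-parameter
# model y = w * x from example data, instead of hard-coding w.
def train(examples):
    # Closed-form least squares for w: the program's behaviour
    # is derived from the data it was given, not written by hand.
    sxy = sum(x * y for x, y in examples)
    sxx = sum(x * x for x, _ in examples)
    return sxy / sxx

clean = [(1, 2), (2, 4), (3, 6)]   # data consistent with y = 2x
print(train(clean))                # 2.0

# A single bad data point "programs" a different behaviour into
# the model - a bug introduced by the data, not by any coder:
noisy = clean + [(4, 80)]          # one erroneous example
print(train(noisy))                # 11.6 - far from the true w = 2
```

The bug here isn't in the code, which is identical in both runs; it's in the data the model was "trained" on, which is why such errors are so hard to spot without people monitoring the data itself.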
The sad truth is that these solutions might represent the pinnacle of AI evolution, simply because all our efforts so far have gone into creating computers that emulate intelligence without actually having it. As it stands now, the "evolution" of AI is about building more and larger data centres to cater to the needs of increasingly complex algorithms used by "AI enabled" solutions elsewhere. These solutions will never be self-contained in small devices; they will always need to constantly send queries to large data centres.
And it's still mostly about conducting searches, aggregating data and automating tasks, with the computers behind it appearing "human" in their interactions with us. Perhaps the need here isn't one for "thinking" machines, but rather more flexible ways of interacting with them, better aggregation of data and more ways of automating tasks?
My take is that we're heading into a dead end, but instead of accepting that fact we push down on the accelerator, because we have already invested so much in developing the AI solutions of today. This is what we do instead of slowing down and rethinking what we're actually trying to achieve.
I also believe that we're headed down this path because of wishful thinking. A lot of people simply wish it were possible to build thinking machines. That wish has created a demand, and others have stepped in to try to satisfy it, whether it's possible or not. Couple that with the sunk cost fallacy, and you have a toxic combination of circumstances resulting in a bubble that will inevitably burst.
Since this isn't sustainable, the price will be high, and it will be paid by us all. It also creates a very fragile society, with every device dependent on a data centre somewhere. If those data centres begin to shut down, the devices will become useless overnight.