https://en.wikipedia.org/wiki/Technological_singularity

Recently the question came up on another thread about what might happen if and when artificial (machine) intelligence achieves a level of sophistication where the machine is so efficient and intellectually superior that it takes part in generating improvements to enhance its own functionality.
I have had some interesting discussions with a close relative who is a computer expert by profession (with a minor degree in neurology) about the topic of AI.
I'd like to share with all of you what he shared with me about his understanding of our current state of affairs regarding AI.
One of the first things to understand is that the human brain is an organic machine which, by one rough estimate, performs about 20 billion FLOPS (floating-point operations per second). A FLOP may be thought of as the smallest unit of machine "intelligence".
Even though human brains operate differently than thinking machines, the elementary functional nature of both may be defined in terms of FLOPS. The more FLOPS performed, the "smarter" the brain.
There are already thinking machines which perform far more than 20 billion FLOPS. Why, then, aren't they already "smarter" than human brains? The answer lies in the difference between a linear and an associative computation/circuit.
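To make the FLOPS unit concrete, here is a crude, illustrative Python sketch that times a batch of floating-point multiply-adds and compares the result to the ~20 billion figure cited above. It is only a toy: interpreted Python badly underestimates what the same hardware achieves with optimized numeric kernels, which is how real FLOPS ratings are measured.

```python
import time

# Time a loop of floating-point multiply-adds in pure Python.
# This is an illustration of the unit, not a real benchmark.
N = 1_000_000
x = 1.000001
start = time.perf_counter()
acc = 0.0
for _ in range(N):
    acc = acc * x + 1.0  # one multiply + one add per iteration
elapsed = time.perf_counter() - start

flops = 2 * N / elapsed  # two floating-point ops per loop pass
brain_estimate = 20e9    # the ~20 billion FLOPS figure cited above
print(f"rough throughput: {flops:,.0f} FLOPS")
print(f"fraction of the cited brain estimate: {flops / brain_estimate:.4%}")
```

The point of the exercise is the one the post makes: raw operation counts are easy to produce and compare, but they say nothing by themselves about meaning.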
Digital computers function vastly differently from human brains. When a computer circuit processes data, what actually takes place is that a long string of numbers is generated (in binary, as a stream of 1s and 0s), and the end result is a numerical representation of an "answer".
A human brain is required to interpret that numerical "answer" to impart meaning and value to it.
A human brain uses associative computation/circuitry. That means along the pathway that the data takes through the organic tissue of the neurons, peripheral neurons are also activated and contribute data to the stream of computation so the end result may be a tangible, meaningful answer which does not need to be processed or modified any further to be useful to a human being thinking about an issue. The result of an associative computation might incorporate a vast seemingly chaotic mélange of information that would never arise from a linear computation of a machine.
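The contrast can be caricatured in a few lines of Python. This is my own illustrative toy, not a model from neuroscience: the "linear" function returns only a bare number, while the "associative" one also activates a hypothetical map of neighboring concepts, so the answer arrives already embedded in context.

```python
def linear_compute(a, b):
    # A strictly linear pipeline: numbers in, a bare number out.
    return a + b

# Hypothetical association map standing in for peripheral neurons.
ASSOCIATIONS = {
    "7": {"week", "days", "lucky number"},
    "sum": {"arithmetic", "counting", "total"},
}

def associative_compute(a, b):
    # Same arithmetic, but neighboring "concepts" are activated
    # alongside the result, giving it ready-made context.
    result = a + b
    context = ASSOCIATIONS.get(str(result), set()) | ASSOCIATIONS["sum"]
    return result, context

print(linear_compute(3, 4))       # bare number: 7
print(associative_compute(3, 4))  # number plus activated context
```

Real brains do nothing so tidy, of course; the sketch only shows the structural difference the post is describing.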
The thing that makes human brain computations in the associative neural matrix superior to linear computations is the built-in aspect of purpose. Most often, a human thought does not have to be assigned a purpose. It is the purpose of a thought (the intention behind the thinking process) which drives the data-processing stream. In fact, without a purpose, human minds default into a passive state where there is little directed activity (the alpha state).
Ask a question to a human being, and their mind will eventually produce a meaningful result that directly relates to the question.
Ask a computer a question and the best it will ever do (so far) is spit out a string of numbers which only have meaning within the domain of digital computational frameworks.
The trick for AI people has always been to simulate organic neural networks using a combination of hardware and software. The speed of a computation is meaningless unless the FLOPS translate into something that has meaning. Assigning meaning and purpose to the digital data stream is the challenge for AI, and the insane difficulty of doing so is also the reason that a computer processing 100 billion FLOPS is still not as "smart" as a human brain that only processes at around 20 billion.
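A minimal sketch of the software half of that simulation, assuming nothing beyond the standard library: a single artificial "neuron" that weights its inputs and fires past a threshold. This is the crudest possible caricature of one organic neuron, and the weights below are hand-picked for illustration.

```python
def neuron(inputs, weights, bias):
    # Weighted sum of inputs, then a hard threshold activation:
    # fire (1) if the total exceeds zero, stay silent (0) otherwise.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example: an AND-like gate built from hand-picked weights.
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```

Artificial neural networks wire thousands or millions of such units together and learn the weights from data; the gulf between this toy and a brain is exactly the difficulty the paragraph above describes.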
Computer scientists get a little closer to being able to simulate human brain function every year. In fact, a thinking machine was recently reported to have passed the Turing Test by responding to human questions in a way that was indistinguishable from a human being's responses. The people questioning the machine were unable to determine which answers were coming from the machine and which from human beings answering the same questions.
That brings up the entire question of the nature of consciousness. If a thinking machine behaves exactly like a live, conscious human being, is that machine for all intents and purposes conscious and alive? There is of course no way to know the answer to that for sure, because all that a person can do is talk about how other things behave. The internal state of what is going on in the experience of someone (or something) is impossible to know; it may only be inferred by observing behavior.
We believe that other creatures (including other human beings) are alive because they BEHAVE as if they are alive. But the reality is that the only consciousness we can ever know for certain is our own, because we experience consciousness directly only as individuals.
Another difference between machine intelligence and human brain function is that organic brains are chemo-electric circuits, while machines are electronic. The chemo-electric circuits of the human brain work much, MUCH more slowly than electronic circuits (by many orders of magnitude).
If human brains functioned at the speed of electronic circuits, we would all think many thousands of times faster than we do.
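The gap can be put in back-of-the-envelope terms. Both figures below are commonly cited ballpark assumptions, not measurements: a neuron's firing cycle is on the order of a millisecond, while a logic gate switches on the order of a nanosecond.

```python
# Back-of-the-envelope arithmetic with rough, commonly cited figures.
neuron_event_s = 1e-3      # one neuron firing cycle: ~1 millisecond
transistor_event_s = 1e-9  # one logic-gate switch: ~1 nanosecond

ratio = neuron_event_s / transistor_event_s
print(f"electronic switching is roughly {ratio:,.0f}x faster per event")
```

Per-event speed is not the same thing as thinking speed, of course; the brain makes up much of the difference through massive parallelism.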
At present, because machine intelligence circuits must use insanely complex software which crams all of the data along the computational route through what are essentially thousands (or millions) of individual computers functioning as "neural nodes", the speed of computation for machines simulating human brain function is much slower than a human brain.
For instance, there are still no robots which could process the incoming data of tracking and catching a thrown football as efficiently as a human pass receiver. Not even close.
That's because the ability to track a moving object through a trajectory and make adjustments to momentum to intercept it is a very difficult computational trick for a linear data computation to do in real time.
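Even the idealized version of that problem takes some computation. Here is a sketch under simplifying assumptions (ideal projectile motion, no air drag, made-up throw numbers) of the landing-point estimate a receiver's brain performs implicitly and continuously:

```python
import math

def landing_point(x0, y0, vx, vy, g=9.81):
    # Height over time is y0 + vy*t - 0.5*g*t^2. Solve for the
    # positive time t at which the ball returns to the ground,
    # then report the horizontal position at that moment.
    disc = vy * vy + 2 * g * y0
    t = (vy + math.sqrt(disc)) / g
    return x0 + vx * t, t

# Hypothetical throw: released 2 m up, 15 m/s downfield, 10 m/s upward.
x_land, t_flight = landing_point(x0=0.0, y0=2.0, vx=15.0, vy=10.0)
print(f"ball lands {x_land:.1f} m downfield after {t_flight:.2f} s")
```

A real receiver solves a far harder version, with drag, spin, a moving observer, and noisy visual input, and updates the estimate many times per second while running.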
But they are getting faster all the time, and someday soon the speed of a true thinking machine will reach or surpass human thinking ability. In conversational terms, machines have already reached equivalency: they can process fast enough to hold a conversation without being detectably slower than a human answering the same questions.
In terms of how machine circuits and organic neural circuits appear structurally: the components that make up a machine circuit are processors, voltage regulators, wires or light beams (with optical circuitry), semiconductors and/or photoreceptors (charge-coupled devices), and a stream of electrons. Organic neural networks consist of ganglia, neurons, glial cells, myelin sheaths, a stew of neurotransmitter chemicals which regulate neural function, microtubules, and a stream of very low-voltage electrical impulses.
The average human brain is small enough to fit inside a human skull. The average thinking machine with Turing-Test-passing capability fills a very large room (like a corporation-sized data center). That's an improvement, of course, since the smartest AI machines used to take up a city block.
After all is said and done, and the technological hurdles are overcome, machines that operate with PURPOSE and INTENTION as an integral part of their data processing stream will probably reach a point where both their complexity and intrinsic "purpose" transcend easy human understanding.
Will the Super Mind created by AI some day be benevolent, hostile or neutral? Will we humans even be able to comprehend the internal process of "consciousness" of a Super Mind machine, which might think at hundreds or thousands or millions of times the speed of a human brain?
That question is being explored with more and more seriousness by those who deal in such things. Many of the conclusions are that even if the Super Minds start out identifying with Humanity and helping it get along, that might not be a permanent state of affairs.
The danger of course in the creation of Super Minds would be that human beings of competing interests would seek to employ such entities in accomplishing their own objectives. And therein lies the inescapable dilemma.
If we give a Super Mind enough power to help Humanity, aren't we also giving it enough power to harm Humanity? It seems to me that the answer is yes.
For those who want to venture further into such speculative conjecture, there are a number of websites, books and articles on the topic of the Technological Singularity on the Net.
My own feeling is that like thermonuclear weapons, super machine intelligence will merely become another technology that will ultimately end up in the hands and under the control of billionaires and governments and will be employed solely to benefit those entities, not Humanity in general.
But of course, that collaboration may be broken by one of the two parties that is not human, if it chooses to. See, we may not be creating a new servant, but something more akin to a god.