Author Topic: The Future of AI: Singularity  (Read 652 times)


Offline LateForLunch

  • GOTWALMA Get Out of the Way and Leave Me Alone! (Nods to Teebone)
  • Hero Member
  • *****
  • Posts: 1,349
The Future of AI: Singularity
« on: March 23, 2017, 03:45:58 pm »
https://en.wikipedia.org/wiki/Technological_singularity

Recently the question came up on another thread about what might happen if and when artificial (machine) intelligence achieves a level of sophistication where the machine is so efficient and intellectually superior that it takes part in generating improvements to enhance its own functionality.
 
I have had some interesting discussions with a close relative who is a computer expert by profession (with a minor degree in neurology) about the topic of AI.

I'd like to share with all of you what he shared with me about his understanding of our current state of affairs regarding AI.

One of the first things to understand is that the human brain is an organic machine. Estimates of its raw processing power vary enormously, from roughly 10^13 to 10^18 operations per second, depending on what is counted as an "operation." Machine speed is usually quoted in FLOPS (FLoating-point Operations Per Second); a single floating-point operation can be thought of as the smallest unit of machine computation.

Even though human brains operate differently from thinking machines, the elementary work of both can be crudely described in terms of operations per second: as a rough first approximation, the more operations performed, the "smarter" the brain.
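As a back-of-envelope illustration of where such brain estimates come from (every constant below is a rough, commonly quoted figure, not a measurement), the usual recipe is neurons × synapses per neuron × average firing rate:

```python
# Back-of-envelope estimate of the brain's raw operation rate.
# All constants are rough textbook figures -- assumptions, not data.
NEURONS = 8.6e10           # ~86 billion neurons
SYNAPSES_PER_NEURON = 7e3  # ~7,000 synaptic connections each
AVG_FIRING_RATE_HZ = 1.0   # average spikes per second (activity is sparse)

# Treat each synaptic event as one elementary "operation".
ops_per_second = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_RATE_HZ
print(f"~{ops_per_second:.0e} synaptic ops/s")  # ~6e+14
```

Change any of the assumed constants and the answer swings by orders of magnitude, which is exactly why published estimates disagree so widely.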

There are already machines which perform far more raw operations per second than even generous estimates of the human brain. If so, then why aren't they already "smarter" than human brains? The answer lies in the difference between a linear and an associative computation/circuit.

Digital computers function very differently from human brains. When a computer circuit processes data, what actually takes place is that a long string of numbers is generated (in binary, as a stream of ones and zeros), and the end result is a numerical representation of an "answer."
A human brain is then required to interpret that numerical "answer" to impart meaning and value to it.

A human brain uses associative computation/circuitry. That means that along the pathway the data takes through the organic tissue of the neurons, peripheral neurons are also activated and contribute data to the stream of computation. The end result can therefore be a tangible, meaningful answer that needs no further processing or modification to be useful to a human being thinking about an issue. The result of an associative computation may incorporate a vast, seemingly chaotic mélange of information that would never arise from a machine's linear computation.
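A toy way to see the "associative" idea in code is a Hopfield-style associative memory, where a corrupted cue settles toward the nearest stored pattern. This is a classic textbook model, not how the brain literally works, and the patterns below are made up purely for illustration:

```python
import numpy as np

# Toy Hopfield-style associative memory: recall a stored pattern
# from a corrupted cue. Patterns are vectors of +1/-1.
patterns = np.array([
    [1,  1,  1,  1, -1, -1, -1, -1],
    [1, -1,  1, -1,  1, -1,  1, -1],
])

# Hebbian weights: units that fire together wire together.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)   # every unit updates from all the others
        s[s == 0] = 1.0
    return s.astype(int)

# First stored pattern with its first bit flipped...
cue = np.array([-1, 1, 1, 1, -1, -1, -1, -1])
print(recall(cue))  # → [ 1  1  1  1 -1 -1 -1 -1] (the stored pattern)
```

The cue alone does not contain the answer; the answer emerges from the web of connections the cue activates, which is the flavor of "associative" computation described above.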

The thing that makes human brain computations in the associative neural matrix superior to linear computations is the built-in aspect of purpose. A human thought usually does not have to be assigned a purpose; it is the purpose of a thought (the intention of the thinking process) which drives the data-processing stream. In fact, without a purpose, human minds tend to default into a passive, idling state with little directed activity (associated with alpha-wave patterns).

Ask a question to a human being, and their mind will eventually produce a meaningful result that directly relates to the question.

Ask a computer a question and the best it will ever do (so far) is spit out a string of numbers which only have meaning within the domain of digital computational frameworks.

The trick for AI researchers has always been to simulate organic neural networks using a combination of hardware and software. The speed of a computation is meaningless unless the FLOPS translate into something that has meaning. Assigning meaning and purpose to the digital data stream is the challenge for AI, and the insane difficulty of doing that is also the reason a computer with far more raw FLOPS is still not as "smart" as a human brain.
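The basic building block of those simulations is the artificial "neuron": a weighted sum of inputs pushed through a nonlinearity. A minimal sketch, where the weights and inputs are made-up numbers chosen only to show the mechanics:

```python
import math

# One artificial neuron: weighted sum of inputs, squashed by a sigmoid.
# Neural-network software wires millions of these together; the numbers
# below are arbitrary, purely illustrative.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output between 0 and 1

out = neuron([0.5, 0.9], [0.8, -0.2], bias=0.1)
print(round(out, 3))  # → 0.579
```

By itself the output is just a number between 0 and 1; it only acquires "meaning" through the larger network and training process around it, which is the point made above.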

Computer scientists get a little closer to being able to simulate human brain function every year. In fact, a chatbot was recently reported to have passed a version of the Turing Test by responding to human questions in a way that judges could not reliably distinguish from a human being's responses, though many researchers dispute whether that particular test was rigorous enough to count.

That brings up the entire question of the nature of consciousness. If a thinking machine behaves exactly like a live, conscious human being, is that machine for all intents and purposes conscious and alive? There is of course no way to know the answer for sure, because all a person can do is talk about how other things behave. The internal state of someone's (or something's) experience is impossible to know; it can only be inferred by observing behavior.

We believe that other creatures (including other human beings) are alive because they BEHAVE as if they are alive. But in reality, the only consciousness we can ever know to be real is our own, because we each experience consciousness directly only as individuals.

Another difference between machine intelligence and human brain function is that organic brains are chemo-electric circuits, while machines are electronic. The chemical-electrical circuits of the human brain work much, MUCH more slowly than electronic circuits: a neuron takes on the order of a millisecond to fire, while a transistor switches in about a nanosecond, a gap of roughly six orders of magnitude.

If human brains functioned at the speed of electronic circuits we would all think millions of times faster than we do.
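The arithmetic behind that speed gap, using rough order-of-magnitude assumptions rather than measured values:

```python
# Rough speed comparison; both constants are order-of-magnitude guesses.
NEURON_CYCLE_S = 1e-3       # a neuron needs ~1 ms to fire and recover
TRANSISTOR_SWITCH_S = 1e-9  # a modern transistor switches in ~1 ns

ratio = NEURON_CYCLE_S / TRANSISTOR_SWITCH_S
print(f"electronic switching is ~{ratio:.0e}x faster per step")  # ~1e+06
```

The brain makes up for its slow components with massive parallelism, which is why the raw per-step ratio is not the whole story.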

At present, because machine-intelligence systems must use insanely complex software that routes all of the data through what are essentially thousands (or millions) of individual processors functioning as "neural nodes," machines simulating human brain function still compute many such tasks much more slowly than a human brain does.

For instance, there are still no robots which could process the incoming data of tracking and catching a thrown football as efficiently as a human pass receiver. Not even close.

That's because tracking a moving object through its trajectory and adjusting one's own momentum to intercept it is a very difficult computational trick for a linear data computation to pull off in real time.
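Even the idealized version of that computation, with no wind, spin, or visual noise, takes some work. A sketch under those simplifying assumptions, predicting where a thrown ball lands:

```python
# Sketch: where does a thrown ball land? Assumes ideal projectile
# motion with no air resistance -- a huge simplification of what a
# pass receiver's brain actually copes with in real time.
G = 9.81  # gravitational acceleration, m/s^2

def landing_point(x0, y0, vx, vy):
    """Solve y0 + vy*t - 0.5*G*t**2 = 0 for the positive root,
    then return the horizontal position at that time of impact."""
    t_impact = (vy + (vy**2 + 2 * G * y0) ** 0.5) / G
    return x0 + vx * t_impact

# Thrown from 2 m up, at 15 m/s forward and 10 m/s upward:
print(round(landing_point(0.0, 2.0, 15.0, 10.0), 1))  # → 33.3
```

A real receiver solves a far messier version of this continuously, updating the prediction with every glance, which is exactly the kind of real-time load described above.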

But machines are getting faster all the time, and some day soon the speed of a true thinking machine will reach or surpass human thinking ability. In one narrow sense, speed has already reached equivalency: machines can now hold a conversation without being detectably slower than a human answering the same questions.

In terms of how machine circuits and organic neural circuits appear structurally, the components that make up a machine circuit are processors, voltage regulators, wires or light beams (with optical circuitry), semiconductors and/or photodetectors (such as charge-coupled devices), and a stream of electrons. Organic neural networks consist of ganglia, neurons, glial cells, myelin sheaths, a stew of neurotransmitter chemicals which regulate neural function, microtubules, and very low-voltage signals carried by ions rather than by a stream of free electrons.

The average human brain is small enough to fit inside a human skull. The hardware behind today's most capable AI systems, by contrast, fills a very large room (a corporation-sized data center). That's an improvement, of course, since the most powerful early computers filled entire buildings.

After all is said and done, and the technological hurdles are overcome, machines that operate with PURPOSE and INTENTION as an integral part of their data processing stream will probably reach a point where both their complexity and intrinsic "purpose" transcend easy human understanding.

Will the Super Mind created by AI some day be benevolent, hostile or neutral? Will we humans even be able to comprehend the internal process of "consciousness" of a Super Mind machine, which might think at hundreds or thousands or millions of times the speed of a human brain?

That question is being explored with increasing seriousness by those who deal in such things. A common conclusion is that even if the Super Minds start out identifying with and helping Humanity, that might not be a permanent state of affairs.

The danger of course in the creation of Super Minds would be that human beings of competing interests would seek to employ such entities in accomplishing their own objectives.  And therein lies the inescapable dilemma.

If we give a Super Mind enough power to help Humanity, aren't we also giving it enough power to harm Humanity? It seems to me that the answer is yes.

For those who want to venture further into such speculative conjecture, there are a number of websites, books and articles on the topic of the Technological Singularity on the Net.

My own feeling is that, like thermonuclear weapons, super machine intelligence will merely become another technology that ultimately ends up in the hands and under the control of billionaires and governments, and will be employed solely to benefit those entities, not Humanity in general.

But of course, that collaboration may be broken by one of the two parties that is not human, if it chooses to. See, we may not be creating a new servant, but something more akin to a god.


« Last Edit: March 23, 2017, 05:19:17 pm by LateForLunch »
GOTWALMA Get out of the way and leave me alone! (Nods to General Teebone)

Offline Weird Tolkienish Figure

  • Technical
  • *****
  • Posts: 18,871
Re: The Future of AI: Singularity
« Reply #1 on: March 23, 2017, 04:05:53 pm »

The size of the average human brain is small enough to fit inside a human skull.

That's good to know.

Offline Idaho_Cowboy

  • Hero Member
  • *****
  • Posts: 4,924
  • Gender: Male
  • Ride for the Brand - Joshua 24:15
Re: The Future of AI: Singularity
« Reply #2 on: March 23, 2017, 04:23:00 pm »
That's good to know.
If computers could write news articles we wouldn't have to read nonsense like that. Artificial intelligence will never beat man made stupidity.
“The way I see it, every time a man gets up in the morning he starts his life over. Sure, the bills are there to pay, and the job is there to do, but you don't have to stay in a pattern. You can always start over, saddle a fresh horse and take another trail.” ― Louis L'Amour

Offline LateForLunch

  • GOTWALMA Get Out of the Way and Leave Me Alone! (Nods to Teebone)
  • Hero Member
  • *****
  • Posts: 1,349
Re: The Future of AI: Singularity
« Reply #3 on: March 23, 2017, 05:01:24 pm »
That's good to know.

The rendering of smart-assed comments by WTF is similar in its inevitability to a chemical reaction. I note that he tends to mete out his playful abuse in the same way a simian responds to low-hanging fruit. He may in fact be trying to show dislike for me by being contrary. I don't really know, but I still recognize that WTF has one of the best screen names ever, which makes me inclined to at worst ignore him and at best, laugh with him.

He's sort of like a dog that barks at everyone who walks by and I like dogs of all kinds, generally.   
« Last Edit: March 23, 2017, 05:04:01 pm by LateForLunch »
GOTWALMA Get out of the way and leave me alone! (Nods to General Teebone)

Online bigheadfred

  • Hero Member
  • *****
  • Posts: 19,274
  • Gender: Male
  • One day Closer
Re: The Future of AI: Singularity
« Reply #4 on: March 25, 2017, 02:22:04 am »
I don't know if there is going to be that much cause for worry in the "near" future. The technological advancements they are making now towards a human singularity may preclude some of the advancements of AI. Unless you include the concept of AI in a human through the ability to be hardwired to the technology-human consciousness-or a reasonable facsimile, accessing or uploaded to the cloud. Or both. Enough smart people uploaded to the cloud and interacting may form a variant of AI that straight tech type AI will never be able to beat.

But I don't think that would lessen the danger(s). Amplifying stupidity exponentially... :terror:
She asked me name my foe then. I said the need within some men to fight and kill their brothers without thought of Love or God. Ken Hensley

Offline Smokin Joe

  • Hero Member
  • *****
  • Posts: 60,555
  • I was a "conspiracy theorist". Now I'm just right.
Re: The Future of AI: Singularity
« Reply #5 on: March 25, 2017, 03:02:53 am »
If computers could write news articles we wouldn't have to read nonsense like that. Artificial intelligence will never beat man made stupidity.
That's because knowledge is limited. Stupidity, however, is boundless.
How God must weep at humans' folly! Stand fast! God knows what he is doing!
Seventeen Techniques for Truth Suppression

Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.

C S Lewis