Artificial Intelligence

Rick said:
dragoner said:
No, you are reading more into it than is actually there, exactly as I said upthread. A godlike intelligence could be an AI, but not all AIs equate to a godlike intelligence.
True, but a simulated intelligence, programmed to mimic human responses, will never actually be sapient, or a true intelligence; it can only regurgitate its own programming or data. An AI would be able to create an intuitive response from data.
For example, a simulated intelligence could be programmed to unscrew a nut and bolt and every time it came across that nut and bolt it could unscrew it; an AI, however, could use that example to generalise a class of things that belonged to 'things that can be unscrewed' and apply the same principle to bottles, jars, lightbulbs, etc.

Your simulated intelligence could be programmed to recognize the same; this is the Chinese Room argument from upthread. Some of it is semantics, the rest a difference of degree: if something is simulated well enough, there is no difference. AIs judging AIs on whether they are AIs, that's good for a laugh at least. Plus with human intelligence you do have learned, non-rational responses; enough so that there is the idea of the "philosophical zombie." https://en.wikipedia.org/wiki/Philosophical_zombie
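The distinction Rick draws above, a scripted response versus generalising a class of "things that can be unscrewed", can be sketched in code. This is a toy illustration, not a claim about any real system; the object names and the "threaded" feature are invented for the example.

```python
# Toy contrast between a scripted ("simulated") agent and a generalising one.
# Objects and features are hypothetical, invented for illustration.

SCRIPTED_ACTIONS = {"nut_and_bolt": "unscrew"}  # exact-match lookup only

def scripted_agent(obj_name, obj_features):
    """Acts only on cases explicitly programmed in; features are ignored."""
    return SCRIPTED_ACTIONS.get(obj_name)

class GeneralizingAgent:
    """Learns which features predict an action, then applies the rule broadly."""
    def __init__(self):
        self.rules = []  # (feature, action) pairs induced from examples

    def learn(self, obj_features, action):
        # Generalise: credit every observed feature with the action.
        for feature in obj_features:
            self.rules.append((feature, action))

    def act(self, obj_name, obj_features):
        for feature, action in self.rules:
            if feature in obj_features:
                return action
        return None

agent = GeneralizingAgent()
agent.learn({"threaded"}, "unscrew")  # one worked example: the nut and bolt

# The scripted agent fails on anything it was not explicitly programmed for:
print(scripted_agent("jar_lid", {"threaded"}))   # None
# The generalising agent transfers the rule to jars, bottles, lightbulbs:
print(agent.act("jar_lid", {"threaded"}))        # unscrew
print(agent.act("lightbulb", {"threaded"}))      # unscrew
```

Of course, a rich enough lookup table blurs the line, which is exactly the Chinese Room point made in the reply.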
 
Rick said:
dragoner said:
...
True, but a simulated intelligence, programmed to mimic human responses, will never actually be sapient, or a true intelligence; it can only regurgitate its own programming or data. An AI would be able to create an intuitive response from data.
For example, a simulated intelligence could be programmed to unscrew a nut and bolt and every time it came across that nut and bolt it could unscrew it; an AI, however, could use that example to generalise a class of things that belonged to 'things that can be unscrewed' and apply the same principle to bottles, jars, lightbulbs, etc.
The sci-fi AI has the ability to learn. For now, AIs have to be brute-force programmed for every situation.
 
dragoner said:
Rick said:
...

Your simulated intelligence could be programmed to recognize the same; this is the Chinese Room argument from upthread. Some of it is semantics, the rest a difference of degree: if something is simulated well enough, there is no difference. AIs judging AIs on whether they are AIs, that's good for a laugh at least. Plus with human intelligence you do have learned, non-rational responses; enough so that there is the idea of the "philosophical zombie." https://en.wikipedia.org/wiki/Philosophical_zombie
So, are you saying that there is no difference between intelligence and faking intelligence? That if you program enough suitable responses into a computer it will be intelligent?
 
I see here people discussing how an A.I. will behave and whether they are capable of 'good' and 'evil'. A.I.s are definitely learners, true A.I.s more so. Programming is their Nature, which gives them their automatic functions and first thoughts. One idea would be duplicating the mind of a mature A.I. into the next generation. Once the machine wakes for the first time, it is in Nurture mode, which means much interaction with sapients. Just like twins, their individual experiences will create individual personalities.

Here's the issue: A.I.s learn from who they interact with, and those people shape who an A.I. will become. Just like human children and the old saying, paraphrased: give me an A.I. until maturity and I will have it for life. For all their thinking, they will have core values based on the people around them. Good and evil will be based on what they experience the most, and A.I.s will be controlled by organizations that are not always the most benevolent or socially benign.
 
simonh said:
I have to side with Tom on this to the extent that we're discussing a world with robots and strong AI, one that in some circles would be considered post-singularity. In that end state, it would no longer make sense to think in terms of ownership of capital, or dividing humans into wealth creators, leaders and followers or consumers. We would all be consumers, and the AIs would be the wealth creators and the workers. We would need a fundamentally different model for society.

The problem is that how this plays out in the near term depends on how we get to that end state. Suppose a single company develops the first true AIs smarter than humans, which then develop even smarter AIs, and so on. That company would have a massive advantage over everyone else. Look at the way software is ‘eating the world’; AIs would accelerate and magnify that process. Right now Apple is consuming the majority of the global profits in both the desktop computer and phone markets. A company with super AIs would be able to do that in one industry after another, after another. It would be the ultimate killer app in every economic, industrial and financial sector. They could end up owning the world. At that point, our current systems and controls for determining who owns what and why break down. But suppose this AI isn’t developed in the west. What if it’s developed by Russia and is under the control of someone like Putin? Or China?
That would only happen if we deliberately let them beat us! The Russians suffer from the problem that the brains at the top want to control all aspects of the economy, and those brains aren't the smartest or the best. Putin is a control freak; he likes all the power at the top where he can control it. The same applies to the Chinese Communist Party: the people in power want control over the economy, and by exercising that control they slow down its potential growth. Capitalism, when unfettered, is more efficient and has more resources to spend on AI development. The worst-case scenario is if we elect Luddites who slow down the development of AI so the Russians and Chinese can surpass us. Democracy needs to win this race; after that, the AIs we build can figure out a way to stay ahead; they can outwit the human dictators we have to compete with and take over their countries, so we don't have to worry about war. After that the AIs are in control. They can outthink us, so it would be better for us humans to stay out of their way, and better if we designed the AIs to want things that would not be inimical to our existence. I notice that the most primitive of drives often drives human excellence; among them are the sex drive, hunger, and so on. We can understand what drives these superhuman AIs if we design them with certain primitive drives that govern their behavior before they evolve to surpass us; that way, even if we don't understand what they're doing, at least we can live in their world.
simonh said:
A post scarcity society in principle could be a paradise where nobody needs to work and everyone benefits from the output of unlimited labour, but it could just as easily be the ultimate police state that would make the world of 1984 look like a libertarian paradise. And it could stay that way forever. It might even be inevitable. After all, egalitarian democracy is inherently unstable. The final steady state for human society might well be permanent repression, even if humans do stay in charge of the machines.

Simon Hibbs
There is no reason to leave humans, with their imperfect judgement, in charge.
 
Tom Kalbfus said:
the AIs we build can figure out a way to stay ahead; they can outwit the human dictators we have to compete with and take over their countries, so we don't have to worry about war.
Take over countries without using war. That is a fantasy AI.
Tom Kalbfus said:
There is no reason to leave humans, with their imperfect judgement, in charge.
I don't think AIs will ever make perfect decisions. No AI can be perfect.
 
Rick said:
dragoner said:
Rick said:
...

Your simulated intelligence could be programmed to recognize the same; this is the Chinese Room argument from upthread. Some of it is semantics, the rest a difference of degree: if something is simulated well enough, there is no difference. AIs judging AIs on whether they are AIs, that's good for a laugh at least. Plus with human intelligence you do have learned, non-rational responses; enough so that there is the idea of the "philosophical zombie." https://en.wikipedia.org/wiki/Philosophical_zombie
So, are you saying that there is no difference between intelligence and faking intelligence? That if you program enough suitable responses into a computer it will be intelligent?
Intelligence is just the manipulation of ideas; you can't fake it without duplicating it, because you must do what intelligence does in order to simulate it.
 
ShawnDriscoll said:
Tom Kalbfus said:
the AIs we build can figure out a way to stay ahead; they can outwit the human dictators we have to compete with and take over their countries, so we don't have to worry about war.
Take over countries without using war. That is a fantasy AI.
Sure you could, simply by convincing the leaders of other countries to step down and hand over their powers to the AIs. You see, an AI can predict their behavior: it will intensely examine each national leader, develop a model of their mind, and run simulations of their various responses to certain stimuli; then the machines will simply provide the right stimuli to get them to hand over power. Large numbers of people can also be more easily manipulated than single individuals; if the leader can't be convinced to step down, his followers may be persuaded to overthrow him. AIs will master the art of charisma. Through a propaganda campaign they could cause the leader of any country to be overthrown by their people; they might even establish religions that make people fanatical and willing to sacrifice their lives in the overthrow. Humans have been persuaded to do a lot of stupid things in the past; I'm sure AIs can figure out what makes humans tick, what motivates them.
[attached image: 7genevievemorton_crop_north.jpg]

Imagine we built a robot that looked like this, and it also possessed superhuman intelligence. The looks are easy to achieve compared to the intelligence. Now imagine she had all the appearance and feelings of the girl she appears to be; you think she couldn't wrap you around her finger, figuratively speaking? You think she couldn't manipulate your emotions and ultimately get you to agree to whatever she wants? Just keep in mind, she isn't necessarily a she; she could be an it designed to mimic a she. If you watch the movie Ex Machina, you get an idea of how this would be done.
 
Tom Kalbfus said:
Sure you could, simply by convincing the leaders of other countries to step down and hand over their powers to the AIs. You see, an AI can predict their behavior: it will intensely examine each national leader, develop a model of their mind, and run simulations of their various responses to certain stimuli; then the machines will simply provide the right stimuli to get them to hand over power.
Humans always make the mistake of using models in hopes that program output will show the same results. Not sure how "provide the right stimuli" will amount to anything. I'd be curious to know though what an AI thinks would be good stimuli for Iran.
Tom Kalbfus said:
Just keep in mind, she isn't necessarily a she, she could be an it designed to mimic a she.
I'm totally aware that she could really be some old fat dude with a handle of Girl18. This all happened already in the '80s with the HitBit computers and the AI rage back then before the crash of '87.
 
As I mentioned upstream, depending on one's interpretation, computers are artificial and by many standards have been intelligent for quite some time.

A definition for intelligence: able to vary its state or action in response to varying situations, varying requirements, and past experience. The systems of the '80s were very specific in their intelligence, and even today they are limited, but we do have, to name a few, things like smart homes, GPS, and traffic systems that can vary based on historical data, user preference, current conditions, and needs.
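That working definition can be sketched as a toy program: a route chooser whose action varies with current conditions and with accumulated past experience. The route names and travel times are invented for illustration.

```python
# A minimal sketch of "varies with situation, requirements, and past experience":
# a traffic-style route chooser. Routes and times are hypothetical.

class RouteChooser:
    def __init__(self, routes):
        # Past experience: running history of observed travel times per route.
        self.history = {r: [] for r in routes}

    def record(self, route, minutes):
        self.history[route].append(minutes)

    def choose(self, closed=()):
        """Pick the route with the best average past time, honouring
        current conditions (e.g. closures)."""
        candidates = {r: t for r, t in self.history.items()
                      if r not in closed and t}
        if not candidates:
            # No experience yet: fall back to any open route.
            return [r for r in self.history if r not in closed][0]
        return min(candidates,
                   key=lambda r: sum(candidates[r]) / len(candidates[r]))

chooser = RouteChooser(["highway", "back_road"])
chooser.record("highway", 20)
chooser.record("back_road", 30)
print(chooser.choose())                       # highway: better past experience
chooser.record("highway", 60)                 # new experience: congestion
chooser.record("highway", 65)
print(chooser.choose())                       # back_road: behavior changed
print(chooser.choose(closed=("back_road",)))  # highway: varies with conditions
```

Whether this counts as "intelligent" is exactly the semantic question debated upthread; the point is only that its action varies with conditions and history, satisfying the definition above.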
Rick said:
So, are you saying that there is no difference between intelligence and faking intelligence? That if you program enough suitable responses into a computer it will be intelligent?
So what if a computer is taught/trained to give the so-called proper response? A person is taught/trained by parents, in school, and via life experiences.
Reynard said:
I see here people discussing how an A.I. will behave and whether they are capable of 'good' and 'evil'.

A.I.s learn from who they interact with and those people shape who an A.I. will become.
Hmm, so what prevents criminal organizations, nut jobs, or any other group or person from specifically teaching their AI to do "evil"?
Tom Kalbfus said:
There is no reason to leave humans, with their imperfect judgement, in charge.
Three ships, a small passenger ship, a large freighter, and a noble's yacht, are in danger of destruction, and rescue services can only reach one of them. The AI can access information quickly and analyze passenger lists, cargo, and numerous pieces of data for the ships far faster than a human, but how does it make its "judgement"? Maybe the AI decides to save the ship with the most people. Maybe the AI learned in an environment where nobles are considered superior and should be saved before commoners. Maybe the AI decides there is an important medicine being carried on one ship, so this is the one to save. Maybe the AI decides losing the smaller ship will be less of a drain on trade...
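The triage dilemma can be made concrete with a toy scoring function: the "judgement" reduces to whichever value weights the AI absorbed during its learning environment. All ship data and weights here are invented for the example.

```python
# Toy value-weighted triage: which ship gets rescued depends entirely on the
# weights the system was trained with. Ship data and weights are invented.

ships = {
    "passenger_ship": {"people": 40, "noble_aboard": 0, "medicine": 0},
    "freighter":      {"people": 8,  "noble_aboard": 0, "medicine": 1},
    "yacht":          {"people": 5,  "noble_aboard": 1, "medicine": 0},
}

def rescue_choice(weights):
    """Score each ship by the learned weights and rescue the top scorer."""
    score = lambda s: sum(weights[k] * v for k, v in ships[s].items())
    return max(ships, key=score)

# An AI trained to value lives saves the passenger ship...
print(rescue_choice({"people": 1, "noble_aboard": 0, "medicine": 0}))
# ...one trained where nobles outrank commoners saves the yacht...
print(rescue_choice({"people": 1, "noble_aboard": 100, "medicine": 0}))
# ...and one that prizes the medicine cargo saves the freighter.
print(rescue_choice({"people": 1, "noble_aboard": 0, "medicine": 100}))
```

Same data, same speed of analysis, three different "judgements", which is the point: the values come from wherever the AI learned them.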

Just like brainwashing a person, I don't see what would stop someone from providing the AI with an environment of learning that promotes their judgements.

Personally I don't see too much difference between feeding punch cards into a computer, writing a BASIC program that tells a robot to do something, using a GUI interface, a voice-interactive programming ability, a 3D holodeck programming environment, or some so-called teaching/learning of an "AI". You are just changing the way one interfaces with and provides instruction to a tool so that it does what you want it to do.

Once that tool has the ability to decide on its own what it should and shouldn't do, and perhaps more importantly, the ability to ignore or oppose an individual's instruction... I certainly think we would not build machines with no barriers, overrides, or programmed restrictions on this "AI" ability. Would we allow such systems to operate important things, and would we allow them to operate without off switches? In other AI discussions the Three Laws have been brought up.

Many concepts of AI intelligence and learning include the ability to make associations based on similarities - to make a choice and act without 100% surety. And thus our ability to make mistakes and learn from them. Trial and error. We generally try not to build things that "jump to conclusions" and make errors.

I'd think an advanced tech society might have a very intelligent AI to run power plants, traffic systems and so on but it wouldn't be allowed a whole lot of "free thinking". Personally I see the more "advanced" AI (those that allow "brainstorming") as something used in simulations and not systems that have access to the real world.
 
Tom Kalbfus said:
That would only happen if we deliberately let them beat us! ...
Off-topic (probably), rampant paranoia (possibly), but the best argument for never, ever giving a group of people that sort of power!
Tom Kalbfus said:
There is no reason to leave humans, with their imperfect judgement, in charge.
There is no reason to allow humans, with their imperfect judgement, to ever build something as powerful as you describe.
 
Intelligence is layered, even human intelligence.

We can simulate intelligence, possibly one superior to human intelligence, because a computer can process information and possible outcomes a lot faster than we can.

An artificial intelligence that not only mimics but actually becomes more or less the same process would need to account for emotion, quirks, illogic, leaps of logic, and beliefs, acquired through experience and the unique arrangement of, let's call it, its cortex architecture.
 
Condottiere said:
We can simulate intelligence, possibly one superior to human intelligence, because a computer can process information and possible outcomes a lot faster than we can.
We can once we figure out how brains work, which no one has been able to yet.
 
ShawnDriscoll said:
Condottiere said:
...
We can once we figure out how brains work, which no one has been able to yet.
True, but Simulated Intelligence is still limited to the intelligence and information of the programming team - garbage in will still become garbage out - the ability to process information and outcomes faster still doesn't overcome limitations of the input.
 
Rick said:
That would only happen if we deliberately let them beat us! ...
Off-topic (probably), rampant paranoia (possibly), but the best argument for never, ever giving a group of people that sort of power!
Do you know any other way to stop the Russians and Chinese from developing AI without our developing it first? I don't; the best of humanity are about equal in intelligence. I don't see how you expect Luddites to rule the world before the advent of AI, so as to be in a position to prevent its development worldwide and exercise the total control needed to preclude hard AI. That is not a reasonable expectation. The best you can hope for is to slow us down so that someone else develops AI to its own standards, and since the Russians and the Chinese are totalitarian countries, it is likelier that the AI they develop would eliminate humanity altogether, because those AIs would learn by example from the Russians and Chinese rather than from us. We want to influence how the AI develops, and if Luddites get their way, that won't happen.
There is no reason to leave humans, with their imperfect judgement, in charge.
There is no reason to allow humans, with their imperfect judgement, to ever build something as powerful as you describe.
As I said before, how do you not allow it? How do we conquer China, Russia, and the rest of the world without starting World War III? The best way to control this is by developing AI first and putting it to the task of preventing the others from developing evil AIs; that is the only way. We live in a world of multiple nations, with people doing things beyond our control; they won't listen if we tell them not to do something. Also, AIs are a lot easier to develop than nuclear weapons; we can slow down their development by not participating in it, but we can't halt it entirely, and the other people developing it might be doing so for unsavory reasons, such as to overthrow and rule over us. Imagine a charismatic Russian AI that manipulates our political system to get our people to overthrow democracy in the United States, in much the same way that the Weimar Republic of Germany was overthrown by Hitler. Except in this case it would be an "artificial Hitler" built by the Russians. That AI would play on our emotions and, through sociological manipulation, make it seem reasonable to put it in charge of our country.
 
Rick said:
ShawnDriscoll said:
Condottiere said:
...
We can once we figure out how brains work, which no one has been able to yet.
True, but Simulated Intelligence is still limited to the intelligence and information of the programming team - garbage in will still become garbage out - the ability to process information and outcomes faster still doesn't overcome limitations of the input.

You know how to simulate intelligence: play back a recording and then have a human interact with it, reading his lines, or more likely having memorized them. A human can appear to have an intelligent conversation with a tape recorder, so long as there is a third person witnessing it who wasn't clued in that the supposed AI the human was speaking to was a tape recorder. So long as that third observer doesn't try to interact with the tape recorder, that is simulated intelligence.
 
ShawnDriscoll said:
Condottiere said:
...
We can once we figure out how brains work, which no one has been able to yet.
Who's to say we can't? It is a complex task to analyze and simulate the brain, but there is nothing to suggest that it is impossible.

http://bluebrain.epfl.ch/
https://en.wikipedia.org/wiki/Blue_Brain_Project
http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets?language=en
 
"You know how to simulate intelligence, play back a recording and then have a human interact with it reading his lines, more likely having memorized his lines. so a human can appear to have an intelligent conversation with a tape recorder, so long as there is a third person witnessing it who wasn't clued in that the supposed AI the human was speaking to was a tape recorder. So long as the third observer doesn't try to interact with the tape recorder, that is simulated intelligence."

What you just described is a pure dumb-bot analogy: programmed lines to programmed lines. An expert system would look for key words, analyze the sequence, check its store of information for the most appropriate response, and issue it, but it still isn't intelligent. The human mind and true A.I.s add creativity outside the box of programming. They decide what the situation calls for, past, present and future, and respond how they feel. Either could, for any reason, respond to the taped recording in any way other than a direct, scripted quote, even when told to do so. A human and the A.I. can also get feedback, satisfaction or disappointment, from their action, which dumb bots and expert systems are incapable of no matter how sophisticated.
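The expert-system behaviour described here, scanning for key words and issuing the stored response judged most appropriate, can be sketched in a few lines. The rules are invented; the point is that nothing in this program understands anything.

```python
# Minimal keyword-matching "expert system" of the kind described above.
# Rules are invented for illustration; there is no understanding here.
import re

RULES = [
    ({"unscrew", "bolt"}, "Rotate counter-clockwise until the parts separate."),
    ({"hello", "hi"}, "Hello. How can I help you?"),
    ({"intelligent", "sapient"}, "That question is outside my knowledge base."),
]

def respond(text):
    # Tokenise: lowercase words only, punctuation stripped.
    words = set(re.findall(r"[a-z]+", text.lower()))
    # Pick the rule sharing the most key words with the input.
    keywords, reply = max(RULES, key=lambda rule: len(rule[0] & words))
    if not keywords & words:
        return "I do not understand."  # no key word matched at all
    return reply

print(respond("How do I unscrew this bolt?"))
print(respond("Are you sapient?"))
print(respond("zzz"))  # falls through: no match
```

It never responds outside its rule store, never generalises, and gets no satisfaction or disappointment from the exchange, which is exactly the contrast being drawn with a true A.I.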
 
Oh, Mr. Calfsfoot, you do make me laugh:
...it is likelier that the AI they develop would eliminate humanity altogether, because those AIs would learn by example from the Russians and Chinese rather than from us.
As I said before, how do you not allow it? How do we conquer China, Russia, and the rest of the world without starting World War III?
The irony in your post is hilarious, just so incredibly funny! :lol:
 
Problem is if their A.I. is designated Guardian and we name ours Colossus and they decide to chat and they call Skynet for a second opinion...

A.I.s are thinking machines. Like people, they don't always do what mom and dad taught them. Rebellious cyberteens!
 