As I mentioned upstream, depending on one's interpretation, computers are artificial and by many standards have been intelligent for quite some time.
A definition for intelligence: able to vary its state or action in response to varying situations, varying requirements, and past experience. The systems of the '80s were very specific in their intelligence, and even today they are limited, but we do have, to name a few, things like smart homes, GPS, and traffic systems that can vary based on historical data, user preference, and current conditions and needs.
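To put that definition in concrete terms, here's a toy sketch of the kind of behaviour I mean; the thermostat, the weights, and the numbers are all just made up for illustration:

```python
# A toy illustration of that definition, nothing more: a "smart" thermostat
# that varies its action based on current conditions, user preference, and
# past experience (historical data). All numbers here are invented.

def set_temperature(current_temp, preferred_temp, history):
    """Pick a target by blending the user's preference with what past days suggest."""
    historical_avg = sum(history) / len(history) if history else preferred_temp
    target = 0.7 * preferred_temp + 0.3 * historical_avg   # weights are arbitrary
    if current_temp < target - 1:
        return "heat"
    if current_temp > target + 1:
        return "cool"
    return "hold"

print(set_temperature(current_temp=18, preferred_temp=21, history=[20, 22, 21]))  # -> heat
```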
Rick said:
So, are you saying that there is no difference between intelligence and faking intelligence? That if you program enough suitable responses into a computer it will be intelligent?
So what if a computer is taught/trained to give the so-called proper response? A person is taught/trained by parents, in school, and via life experiences.
Reynard said:
I see here people discussing how an A.I. will behave and whether they are capable of 'good' and 'evil'.
A.I.s learn from who they interact with and those people shape who an A.I. will become.
Hmm, so what prevents criminal organizations, nut jobs, or any other group or person from specifically teaching their AI to do "evil"?
Tom Kalbfus said:
There is no reason to leave humans, with their imperfect judgement, in charge.
Three ships (a small passenger ship, a large freighter, and a noble's yacht) are in danger of destruction, and rescue services can only reach one of them. The AI can access information quickly and analyze passenger lists, cargo, and numerous other pieces of data for the ships far faster than a human, but how does it make its "judgement"? Maybe the AI decides to save the ship with the most people. Maybe the AI learned in an environment where nobles are considered superior and should be saved before commoners. Maybe the AI decides there is an important medicine being carried on one ship, so this is the one to save. Maybe the AI decides losing the smaller ship will be less of a drain on trade...
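To make the point concrete, here's a rough sketch of how that "judgement" could boil down to nothing more than a weighted score, with the weights standing in for whatever values the AI picked up from its training environment. The ships, numbers, and weights are all invented for the example:

```python
# Hypothetical sketch: the AI's "judgement" as a weighted score.
# The weights are stand-ins for whatever the AI absorbed while it was "raised";
# none of this is from any real rescue system.

ships = [
    {"name": "passenger ship", "people": 40, "noble_aboard": False, "cargo_value": 2,  "carries_medicine": False},
    {"name": "freighter",      "people": 8,  "noble_aboard": False, "cargo_value": 90, "carries_medicine": True},
    {"name": "noble's yacht",  "people": 5,  "noble_aboard": True,  "cargo_value": 10, "carries_medicine": False},
]

def rescue_priority(ship, weights):
    """Score a ship; whichever scores highest gets the rescue."""
    return (weights["people"] * ship["people"]
            + weights["nobility"] * (100 if ship["noble_aboard"] else 0)
            + weights["trade"] * ship["cargo_value"]
            + weights["medicine"] * (100 if ship["carries_medicine"] else 0))

# Two different "upbringings" for the same AI:
egalitarian  = {"people": 1.0, "nobility": 0.0, "trade": 0.1, "medicine": 0.2}
aristocratic = {"people": 0.1, "nobility": 1.0, "trade": 0.1, "medicine": 0.2}

for weights in (egalitarian, aristocratic):
    best = max(ships, key=lambda s: rescue_priority(s, weights))
    print(best["name"])   # same data, different learned values, different ship saved
```

Same data, same code, different learned values, different ship saved.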
Just like brainwashing a person, I don't see what would stop someone from providing the AI with an environment of learning that promotes their judgements.
Personally I don't see too much difference between feeding punch cards into a computer, writing a BASIC program that tells a robot to do something, using a GUI, a voice-interactive programming ability, a 3D holodeck programming environment, or the so-called teaching/learning of an "AI". You are just changing the way one interfaces with and provides instructions to a tool so that it does what you want it to do.
Once that tool has the ability to decide on its own what it should and shouldn't do, and perhaps more importantly, the ability to ignore or oppose an individual's instructions... I certainly think we would not build machines with no barriers, overrides, or programmed restrictions on this "AI" ability. Would we allow such systems to operate important things, and would we allow them to operate without off switches? In other AI discussions the Three Laws have been brought up.
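Something like this is what I have in mind by barriers and off switches: a dumb, non-learning layer sitting between the AI and anything important. The names and actions here are purely illustrative:

```python
# Minimal sketch of a hard override layer. The AI can propose whatever it likes,
# but this simple, non-learning gate decides what actually reaches the hardware.
# Action names and the class itself are made up for illustration.

ALLOWED_ACTIONS = {"adjust_load", "report_status", "request_human_review"}

class Override:
    def __init__(self):
        self.kill_switch = False   # the "off switch" a human can always throw

    def execute(self, proposed_action):
        if self.kill_switch:
            return "halted: kill switch engaged"
        if proposed_action not in ALLOWED_ACTIONS:
            return "refused: action outside programmed restrictions"
        return f"executing {proposed_action}"

guard = Override()
print(guard.execute("adjust_load"))        # fine
print(guard.execute("vent_reactor_core"))  # refused, no matter how clever the AI's reasoning
guard.kill_switch = True
print(guard.execute("adjust_load"))        # halted
```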
Many concepts of AI intelligence and learning include the ability to make associations based on similarities, to make a choice and act without 100% surety, and thus the ability to make mistakes and learn from them: trial and error. We generally try not to build things that "jump to conclusions" and make errors.
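By way of illustration, the "don't jump to conclusions" idea could be as simple as refusing to act below a confidence threshold; everything in this sketch (the labels, the scores, the threshold) is made up:

```python
# Sketch of acting on a similarity-based guess only when confident enough;
# below the threshold the system defers instead of committing an error.
# Labels, scores, and the threshold value are illustrative only.

def classify(similarity_scores, threshold=0.9):
    """similarity_scores: label -> how closely the new case matches known cases."""
    label, confidence = max(similarity_scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return f"act: treat as {label}"
    return "defer: not sure enough, ask a human / gather more data"

print(classify({"friendly ship": 0.95, "pirate": 0.05}))   # confident enough to act
print(classify({"friendly ship": 0.55, "pirate": 0.45}))   # too close to call, so it defers
```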
I'd think an advanced-tech society might have a very intelligent AI to run power plants, traffic systems, and so on, but it wouldn't be allowed a whole lot of "free thinking". Personally I see the more "advanced" AIs (those that allow "brainstorming") as something used in simulations, not as systems that have access to the real world.