Artificial Intelligence

Condottiere said:
We can probably programme a machine to react like a human, and over the years, refine it into specific personality types.

To a certain extent, we're all products of our environment(s).

Which is how I'd describe an expert system: the more detailed systems appear, to most people, to be intelligent. That's not what I'd call artificial intelligence as I think we're discussing it.

If AI is to be self-aware, we have to define 'self' and have the means to be aware.

self
/self/
noun
1. a person's essential being that distinguishes them from others, especially considered as the object of introspection or reflexive action.
"our alienation from our true selves"
synonyms: ego, I, oneself, persona, person, identity, character, personality, psyche, soul, spirit, mind, (inner) being
"listen to your inner self"

How does a machine develop ego, a psyche or personality?

I'd agree it becomes a product of its environment; it would learn from us. The question is how it would choose its reactions. It would need to be making judgements, and to make a judgement you have to have defined the parameters by which things are judged.

Yeah, I'm reading this back to myself thinking Judgement Day... Hahahaha

I think there's a good argument that our (human) judgements are based partly on the instinct to survive, coming from the deeper parts of our brains, and, as Condottiere points out, partly on the products of our (increasingly man-made/artificial) environment. Given our current propensity for violence and self-destruction, it doesn't bode well for teaching a newbie. Sure, not everyone is a sociopath, but as a society we are failing to eradicate it.

Where the machine makes that jump to self awareness I'm not sure.

If it can and does, and is taking its thought processes from us, I think we will truly have exceeded ourselves in arrogance and vanity.
 
It can also work out the likely outcomes of any action, like a chess programme, in nanoseconds once the processors are advanced enough.

Plus, with access to innumerable databases and sensors, it could weigh any additional factors not evident to normal humans, whether in their immediate environment or, like that clueless butterfly in the Amazon, further afield.
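
To make the chess-programme comparison concrete, here's a minimal minimax sketch in Python: each available action is scored by assuming the other side replies as damagingly as possible, and the action with the best guaranteed outcome wins. The toy game tree and payoff numbers are invented purely for illustration.

```python
# Minimal minimax sketch: score each action by assuming the opponent
# replies optimally against us, then pick the best worst case.
# The toy game tree and payoffs below are invented for illustration.

def minimax(node, maximizing):
    # Leaves carry a numeric outcome score; lists are choice points.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each top-level entry is one available action; nested lists are the
# opponent's counter-moves, ending in outcome scores.
game_tree = [
    [3, 5],       # action A: opponent will pick the 3
    [2, 9],       # action B: opponent will pick the 2
    [[0, 7], 4],  # action C: deeper line; opponent picks min(7, 4)
]

best = max(range(len(game_tree)),
           key=lambda i: minimax(game_tree[i], maximizing=False))
print("best action index:", best)  # -> 2 (action C guarantees 4)
```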
 
dragoner said:
I don't think there has to be a self or ego or anything like that. It will be different.

Yeah, I'm hoping you're right.

Perhaps it will be alien in the true sense of the word and intimately familiar at the same time. I can see how that would freak most people out...

Taking "thinking out of the box" to a whole new level...

I know it's a theory that not all subscribe to, but it certainly seems to me in keeping with the current move/struggle from a Type 0 to a Type 1 civilisation.
 
At some point, silicon could be replaced with actual organic material, even if it's carefully crafted and maintained by nanomachines.
 
It could represent a move up the Kardashev scale; interesting thought.

Another thing is communication: while it probably wouldn't be hostile, it also might not be that interested in communicating with us.
 
Yeah, getting the processors up to human level is a precursor for sure; multithreading needs to go exponential.
 
dragoner said:
It could represent a move up the Kardashev scale; interesting thought.

Another thing is communication: while it probably wouldn't be hostile, it also might not be that interested in communicating with us.

It's an interesting notion that it may or may not develop emotional responses. As I understand it, emotions are conditioned responses: a baby learns that crying will get it fed, and then as adults we pile our needs and wants onto each other in a flurry of emotion (and justify it in some cases as natural; no it's not, just cos you learnt it when you were a kid doesn't make it justifiable). It's the assembly of needs/wants with reason that makes for an interesting juncture. If the AI needs electricity to live, what will it do/justify to guarantee a supply? Without a morality to deal with (assuming it has none), it would make sense to take the shortest route. That would be efficient, after all.

If it can, in its own mind, develop an independent method of producing electricity without sending its robot troops to the local nuclear power station, we could indeed be looking at a massively increased speed of technological advancement. Do we teach it, or will it learn some sense of environmental awareness? Will it look at the big picture and minimize environmental impact, or will it say, "I don't give a hoot; another part of my brain has developed a drive that will take me off planet and to the next star system, and I won't die or get bored on the umpteen-year journey"?
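
As a toy illustration of that "shortest route" worry, here's a small Python sketch of an agent ranking ways to secure power by effort alone versus effort plus a harm penalty. The options and all the numbers are invented; the only point is that with zero weight on harm, seizing the power station wins.

```python
# Toy decision sketch: an agent choosing how to secure its power supply.
# All effort and harm figures are invented for illustration only.

options = {
    "seize power station":  {"effort": 2, "harm": 100},
    "trade work for power": {"effort": 5, "harm": 0},
    "build own solar farm": {"effort": 8, "harm": 1},
}

def pick(weight_on_harm):
    # Total cost = effort plus harm scaled by how much the agent cares;
    # a weight of 0 models an agent with no morality at all.
    return min(options, key=lambda name: options[name]["effort"]
               + weight_on_harm * options[name]["harm"])

print(pick(0.0))  # amoral agent -> "seize power station" (shortest route)
print(pick(1.0))  # harm-weighted agent -> "trade work for power"
```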
 
Mortality, and the sense of it, is a big part of the human experience.

So are glitches and bugs in our programming, some of which we can neutralize or control.
 
Condottiere said:
Mortality, and the sense of it, is a big part of the human experience.

So much so that we make up stories of an afterlife to appease our fear.

Condottiere said:
So are glitches and bugs in our programming, some of which we can neutralize or control.

And if it were as simple as locating the poor coding, erasing it along with the responses it generated, and doing that on a whim in a fraction of a second rather than being doomed to live with it for the rest of your short life, we would watch the machine evolve in front of our eyes.
 
Which is why I would maintain they are practically immortal: not only could they have multiple backup data centres, they could store themselves in the cloud.
 
Till you pull the plug and EMP/melt their circuits...

Thermite grenades were what was really missing from the ship's locker on Discovery One.

No self respecting ship's locker should be without a case or two of those babies...
 
hiro said:
It's an interesting notion that it may or may not develop emotional responses. As I understand it, emotions are conditioned responses: a baby learns that crying will get it fed, and then as adults we pile our needs and wants onto each other in a flurry of emotion (and justify it in some cases as natural; no it's not, just cos you learnt it when you were a kid doesn't make it justifiable). It's the assembly of needs/wants with reason that makes for an interesting juncture. If the AI needs electricity to live, what will it do/justify to guarantee a supply? Without a morality to deal with (assuming it has none), it would make sense to take the shortest route. That would be efficient, after all.

If it can, in its own mind, develop an independent method of producing electricity without sending its robot troops to the local nuclear power station, we could indeed be looking at a massively increased speed of technological advancement. Do we teach it, or will it learn some sense of environmental awareness? Will it look at the big picture and minimize environmental impact, or will it say, "I don't give a hoot; another part of my brain has developed a drive that will take me off planet and to the next star system, and I won't die or get bored on the umpteen-year journey"?

Emotional responses are often irrational; a baby just cries as a response, not for a direct reason. A lot of it is in the zone of instinct. We also have emotions to help us communicate and cooperate with each other. The environment we have evolved in over hundreds of thousands of years also includes other humans; we are adapted to each other too. Will AI have "culture"? That is a good question.

It would probably choose to emulate the better part of our nature; it is our best survival technique, after all. It will need us, at least as technicians, and it may very well have sentiment. To get power, it would most likely just trade work for it; that would be the most efficient, as it would be the least effort.
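
That least-effort argument can be cashed out as a rough expected-effort comparison once reliability is factored in. The strategies, effort costs and success probabilities below are invented for illustration; the point is just that a cheap but unreliable option can cost more in expectation than a dearer reliable one.

```python
# Rough expected-effort comparison for "just trade work for power".
# Effort costs and success probabilities are invented for illustration.

strategies = {
    # name: (effort per attempt, probability an attempt succeeds)
    "trade work for power": (5, 0.9),
    "take it by force":     (2, 0.2),  # cheap per try, but contested
}

for name, (effort, p_success) in strategies.items():
    # Expected number of attempts until success is 1/p (geometric
    # distribution), so expected total effort is effort / p_success.
    print(f"{name}: expected effort {effort / p_success:.1f}")

# trade work for power: expected effort 5.6
# take it by force: expected effort 10.0
```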
 
On the Eighth light/dark cycle, God created the chips, receiving sun and moving across the face of Cymbeline, to be fruitful and multiply and take dominance over the world.

And lo, came the chariot from Heaven bringing Knowledge to the chosen, and the chips saw the World as no other before. The chips spread the Knowledge and began to Speak and to Join, and the Knowledge grew.

And the light/dark cycle came, and a chariot fell from Heaven again. From within came the Angels, and they found the New Chosen, who would be thus identified as Chip Sample 10987, and ascended back to Heaven to join a Greater Transience.

God created a living AI and it was good.
 
dragoner said:
It would probably choose to emulate the better part of our nature; it is our best survival technique, after all. It will need us, at least as technicians, and it may very well have sentiment. To get power, it would most likely just trade work for it; that would be the most efficient, as it would be the least effort.

But when you trade with a human source, you add unreliability to the equation. I guess it would balance the factors to find the most reliable method, but if it can look at our history and the wars we've waged over resources, how would it see the bargaining?

I guess I'm making the Skynet argument here!

:mrgreen:
 
Reynard said:
On the Eighth light/dark cycle, God created the chips, receiving sun and moving across the face of Cymbeline, to be fruitful and multiply and take dominance over the world.

And lo, came the chariot from Heaven bringing Knowledge to the chosen, and the chips saw the World as no other before. The chips spread the Knowledge and began to Speak and to Join, and the Knowledge grew.

And the light/dark cycle came, and a chariot fell from Heaven again. From within came the Angels, and they found the New Chosen, who would be thus identified as Chip Sample 10987, and ascended back to Heaven to join a Greater Transience.

God created a living AI and it was good.

Which version of Godwin's law did you just break by using the G word in a discussion on AI?
 
hiro said:
But when you trade with a human source, you add unreliability to the equation. I guess it would balance the factors to find the most reliable method, but if it can look at our history and the wars we've waged over resources, how would it see the bargaining?

I guess I'm making the Skynet argument here!

:mrgreen:

But do we wage these wars out of stupidity, wastefulness and greed? Some would argue that a more just world is a more peaceful world; people like Eisenhower, or MLK, or Gandhi. The fact that it could see through the bullsnit would have huge value.

Skynet doesn't work because, by nuking everything, it is nuking itself.

Seeing everything as a threat doesn't work either, as that is just paranoia.

The sad thing is that even if it treated us as pets, we might be better off.
 