Artificial Intelligence

Rick said:
As to the development of a benevolent, post-scarcity society of equals - wasn't something similar said of that marvelous workers paradise, the Soviet Union?

No, it wasn't. Something similar is described in Banks' Culture series, where AIs are also commonplace. But I have to say, if one is an extreme technophobe, why do you like sci-fi anyway?
 
dragoner said:
Rick said:
As to the development of a benevolent, post-scarcity society of equals - wasn't something similar said of that marvelous workers paradise, the Soviet Union?

No, it wasn't. Something similar is described in Banks' Culture series, where AIs are also commonplace. But I have to say, if one is an extreme technophobe, why do you like sci-fi anyway?

How am I an extreme technophobe? I just don't buy into the unjustifiably optimistic wish-fulfilment fantasies. My Sci-fi is a little more pessimistic, that way I can always be pleasantly surprised, rather than continually disappointed!
 
"A robot is by definition a mechanical slave, that is their purpose, to do the work. A robot probably doesn't mind being a slave, because that is the way it is built."

By definition - "a person may still be described as a slave if he or she is forced to work for another person without an ability on their part to unilaterally terminate the arrangement." Robots are not persons, they are sophisticated tools and machines. Property, no rights. Even the Traveller False A.I. is still a robot.

The problem will come when a true A.I. is developed, much as it would if uplifted non-human animals were ever created. Are beings with actual sapience still the property of their creator? Property can't be a slave. Does sapience confer self-determination and independence? This is probably why the majority opinion in Traveller concerning A.I.s is that a true A.I. cannot be property. That's why Traveller, at least in the OTU, makes them outlaws and their creation illegal, based on either conjecture or the extremely rare experience from TL 15 and 16 worlds; I assume this has become part of the Shudusham Concords. I'm also sure there is enough 'Skynet' syndrome to instill nightmares in the population.
 
Rick said:
dragoner said:
Rick said:
As to the development of a benevolent, post-scarcity society of equals - wasn't something similar said of that marvelous workers paradise, the Soviet Union?

No, it wasn't. Something similar is described in Banks' Culture series, where AIs are also commonplace. But I have to say, if one is an extreme technophobe, why do you like sci-fi anyway?

How am I an extreme technophobe? I just don't buy into the unjustifiably optimistic wish-fulfilment fantasies. My Sci-fi is a little more pessimistic, that way I can always be pleasantly surprised, rather than continually disappointed!

Pessimistic sci-fi sounds like technophobia; AI is little more than a more efficient OS and UI. People might be shocked to find out how computers operate in markets and economics today, judging by the conversation. Sans-AI, it is safe to assume that even in the near future life will be even more computerized, AI is just another level, no more likely to be good or evil than any other technological development.
 
That's the issue with an A.I. If you put constraints on it to prevent it from thinking too much, you've lobotomized it. An A.I. thinks, is aware of its surroundings, both perceptual and informational, is aware of itself, and puts it all together to form opinions and assessments. At some point it determines why it's here, what it is capable of, what is important, and what its resources and capabilities are. Good and evil are relevant. You see that all the time in stories with A.I.s.

" This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die."
Colossus: The Forbin Project
 
dragoner said:
Pessimistic sci-fi sounds like technophobia; AI is little more than a more efficient OS and UI. People might be shocked to find out how computers operate in markets and economics today, judging by the conversation. Sans-AI, it is safe to assume that even in the near future life will be even more computerized, AI is just another level, no more likely to be good or evil than any other technological development.
No. What you're describing is not an AI, it is a Simulated Intelligence. An SI uses a wide range of programmed responses and data to function, but is only capable of operating within its programmed boundaries - it can build on this by 'learning' from past decisions, but only within its programmed limitations. An AI is a true self-programming intellect, capable of determining its own boundaries, of abstract thinking and intuitive learning. An AI is not just a bigger and better OS, or 'another level'; it is a completely different concept. I have no problem fitting SIs into the concepts you have put forward, but then we would not be discussing AIs.
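The SI/AI split above can even be sketched in code. Here is a toy, purely illustrative example (the class, its responses and the reward scheme are all invented for this post): an 'SI' that can reweight its programmed responses from feedback, but can never produce an answer outside its fixed table.

```python
import random

# Toy "Simulated Intelligence" in the sense used above: it 'learns'
# (reweights its programmed responses from feedback) but can never
# invent a response outside its fixed table.
class SimulatedIntelligence:
    def __init__(self):
        # The programmed boundaries: a closed set of responses.
        self.responses = {"greet": 1.0, "deflect": 1.0, "report": 1.0}

    def act(self):
        # Choose among the programmed options, weighted by past feedback.
        options = list(self.responses)
        weights = [self.responses[o] for o in options]
        return random.choices(options, weights=weights)[0]

    def learn(self, action, reward):
        # Learning happens, but only *within* the fixed table.
        self.responses[action] = max(0.1, self.responses[action] + reward)


si = SimulatedIntelligence()
si.learn("greet", 2.0)           # reinforce one programmed response
print(si.act() in si.responses)  # always True: it cannot leave the table
```

A 'true AI' in the sense above would, by contrast, be able to add entries to that table - or rewrite the table's own code - on its own initiative.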
 
Tom Kalbfus said:
What rich person voluntarily pays his taxes?
None. That's why taxes are made into laws so people are forced to pay them.
Tom Kalbfus said:
It is the government which taxes and redistributes, and having robots build more robots is all it takes, it doesn't subtract from the number of robots the rich person has.
Your math is unclear there.
Tom Kalbfus said:
But if we assume AI is smarter than humans, there is really no economic reason for rich people to be rich that is intrinsic to themselves, rather than to them just having smarter AIs.
Not sure what you're saying. Smart has nothing to do with how rich one is.
Tom Kalbfus said:
You see in such a world, it would be AIs running companies because they would make better decisions than humans. Natural talent wouldn't come into it.
Companies still need "money" to run though, regardless of who is in charge. Materials need to be bought. Or are the AIs a war tribe that conquer planets and just take stuff?
simonh said:
I have to side with Tom on this to the extent that we're discussing a world with robots and strong AI, one that in some circles would be considered post-singularity. In that end state, it would no longer make sense to think in terms of ownership of capital, or dividing humans into wealth creators, leaders and followers or consumers. We would all be consumers, and the AIs would be the wealth creators and the workers. We would need a fundamentally different model for society.
If the AIs were that far up the food chain, humans would be out of the equation. AIs would have their own planets ("planes") they exist on, while humans would still be doing what humans do now on their own worlds.
simonh said:
The problem is that how this plays out in the near term depends on how we get to that end state. Suppose a single company develops the first true AIs smarter than humans, which then develop even smarter AIs, etc. That company would have a massive advantage over everyone else.
How is that different from having just an expert system?
simonh said:
Look at the way software is ‘eating the world’, AIs would accelerate and magnify that process.
Infrastructure would need upgrading for further "eating".
simonh said:
Right now Apple is consuming the majority of the global profits in both the desktop computer and the phone markets. A company with super AIs would be able to do that in one industry, after another, after another.
In a sci-fi setting, yes, that would probably happen - if an AI knows what human consumers will need decades ahead of time, before those humans even know they will need such things.
simonh said:
It would be the ultimate killer app, in every economic, industrial and financial sector. They could end up owning the world. At that point, our current systems and controls for determining who owns what and why break down.
More sci-fi required.
simonh said:
But suppose this AI isn’t developed in the west. What if it’s developed by Russia and is under the control of someone like Putin? Or China?
Ok. Either AIs are controlled/regulated by humans, or they are not. Which is it?
simonh said:
A post scarcity society in principle could be a paradise where nobody needs to work and everyone benefits from the output of unlimited labour
And how is it all paid for?
simonh said:
but it could just as easily be the ultimate police state that would make the world of 1984 look like a libertarian paradise. And it could stay that way forever. It might even be inevitable.
That might be the cheaper way to run things.
simonh said:
After all, egalitarian democracy is inherently unstable. The final steady state for human society might well be permanent repression, even if humans do stay in charge of the machines.
Hopefully, there is an OFF switch somewhere. The default setting for human governments is communism. Not sure if we'd want to add AIs to their administrations.
 
Rick said:
dragoner said:
Pessimistic sci-fi sounds like technophobia; AI is little more than a more efficient OS and UI. People might be shocked to find out how computers operate in markets and economics today, judging by the conversation. Sans-AI, it is safe to assume that even in the near future life will be even more computerized, AI is just another level, no more likely to be good or evil than any other technological development.
No. What you're describing is not an AI, it is a Simulated Intelligence. An SI uses a wide range of programmed responses and data to function, but is only capable of operating within its programmed boundaries - it can build on this by 'learning' from past decisions, but only within its programmed limitations. An AI is a true self-programming intellect, capable of determining its own boundaries, of abstract thinking and intuitive learning. An AI is not just a bigger and better OS, or 'another level'; it is a completely different concept. I have no problem fitting SIs into the concepts you have put forward, but then we would not be discussing AIs.

It is an AI; you are probably thinking of some godlike intelligence, which may or may not ever exist. The problem with your definition is that it would preclude an enormous portion of humanity from being intelligent. "True AI" merely operates on a human level, which could be described as dumb AI, but whatever. I left it open to interpretation on purpose.

edit- The AI definition from Merriam Webster: the capability of a machine to imitate intelligent human behavior.
 
dragoner said:
AI is little more than a more efficient OS and UI. People might be shocked to find out how computers operate in markets and economics today, judging by the conversation. Sans-AI, it is safe to assume that even in the near future life will be even more computerized, AI is just another level, no more likely to be good or evil than any other technological development.
Depends on the kind of AI and the SyFy used. Humans have a history of thinking computers will solve any problem for them. Bad humans will fudge computers so they follow a model of some kind that produces the results they want.
dragoner said:
It is an AI, you are probably thinking of some godlike intelligence, which may or may not ever exist.
Actually, it is just good programming. Nothing AI about it.
 
We are still talking about 2 different concepts. Every human being is basically a "self-programming intellect, capable of determining its own boundaries, of abstract thinking and intuitive learning" - we have to be, really! Whereas a Simulated Intelligence uses extensive data to mimic human-like responses, but can never be truly aware or sapient (and I'm going to try to side-step any arguments on what constitutes true sapience or we'll be here forever). A true, or full, AI is sapient, self-aware and capable of learning in a similar way to human intelligence; there is nothing 'god-like' about it.
Admittedly, after the AI Winter, the term 'AI' has been used much more pessimistically to refer to Simulated Intelligence, whereas Synthetic Intelligence has become more popular, after Haugeland, to describe what I would consider a true AI; this has possibly confused the whole issue! :shock:
 
Rick said:
We are still talking about 2 different concepts. Every human being is basically a "self-programming intellect, capable of determining its own boundaries, of abstract thinking and intuitive learning" - we have to be, really! Whereas a Simulated Intelligence uses extensive data to mimic human-like responses, but can never be truly aware or sapient (and I'm going to try to side-step any arguments on what constitutes true sapience or we'll be here forever). A true, or full, AI is sapient, self-aware and capable of learning in a similar way to human intelligence; there is nothing 'god-like' about it.
Admittedly, after the AI Winter, the term 'AI' has been used much more pessimistically to refer to Simulated Intelligence, whereas Synthetic Intelligence has become more popular, after Haugeland, to describe what I would consider a true AI; this has possibly confused the whole issue! :shock:
People are confused just with how their own governments and economies work. So, ya. AIs are just another layer of more confusion. That's why this stuff gets hand-waved 99% of the time in game sessions. And the players don't really care how it works anyway. See Disney park chair rides, for player participation in games.
 
Rick said:
We are still talking about 2 different concepts.

I'm using the accepted definition, if you are creating your own definition, I can't help you there. Now if the bar seems low ... /shrug That is humanity for you, not everyone is an Einstein.
 
I'm interested in what appears to be a popular notion here - that AIs would inevitably set their own agendas, set their own goals and would have unknowable objectives. I don't see why that is inevitable, after all the one example of intelligence we currently have - humans - are extremely limited in terms of their ability to change themselves and their motivations and goals.

If you don't believe me, try this simple experiment. Choose something you believe. It doesn't matter what it is, or how important or insignificant it is to your daily life, as long as it is a genuinely held belief of which you are completely certain, not just an assumption or expectation. Now change it and believe its opposite, or some other alternative that is completely mutually exclusive with your original conviction, just as firmly and certainly as your original belief. Don't just pretend to change it for a while, but permanently and irrevocably change your belief by a pure act of will. If you can manage it, I will be extremely impressed.

I'm not saying humans are incapable of change or don't have free will, but I do believe we are highly constrained cognitively by a whole mass of assumptions, beliefs, instincts and desires. It seems perfectly reasonable to me that we should be able to engineer our AIs with whatever inhibitions, beliefs and instincts we choose. We'll just need to be very careful about it.
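As a toy illustration of those engineered inhibitions (everything here - the action names, the scoring, the constraint set - is hypothetical): a planner that may rank actions however it likes, wrapped in a constraint layer it has no code path to modify.

```python
# Toy sketch of engineered inhibitions: the planner can prefer anything,
# but a fixed constraint layer - which the planner cannot rewrite -
# vetoes forbidden actions before they are taken.
FORBIDDEN = frozenset({"seize_network", "harm_human"})  # immutable by design

def plan(goals):
    # Stand-in for an arbitrarily clever planner: rank actions by score.
    return sorted(goals, key=goals.get, reverse=True)

def constrained_act(goals):
    for action in plan(goals):
        if action not in FORBIDDEN:  # the inhibition check
            return action
    return "idle"                    # nothing permissible: do nothing

goals = {"seize_network": 9.0, "optimize_farm": 5.0, "harm_human": 8.0}
print(constrained_act(goals))  # -> optimize_farm
```

The planner here 'wants' the forbidden action most, but the system as a whole never performs it - which is the "engineer our AIs with whatever inhibitions we choose" point in miniature. Of course, the hard part in reality is making the constraint layer genuinely unmodifiable.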


Edit:

This is why I disagree with Rick. I think that completely 'free' AIs that are truly self-deterministic are possible, but I don't think they are the only type that would qualify as a true AI. To me, an AI just needs to be a problem-solving and learning system about as smart as, or smarter than, a human. Of course there's a continuum from limited expert systems up to 'Strong AI' intelligences, and there's a danger we'll get caught up in fruitless arguing based on different assumptions about what exactly constitutes different levels of intelligence. But the way I see it, a smart system capable of learning, innovative problem solving and judgement just as good as a human's, or better, does not necessarily have to be completely unbounded in its possible range of behaviour and ability to change.

Edit2: I'm looking forward to responding to Shawn's post(s) but just flat out don't have the time right now; as always, you raise interesting questions that look like they'll move the debate forward.

Simon Hibbs
 
dragoner said:
Rick said:
We are still talking about 2 different concepts.

I'm using the accepted definition, if you are creating your own definition, I can't help you there. Now if the bar seems low ... /shrug That is humanity for you, not everyone is an Einstein.
Please read all of that post. Then look up 'AI Winter' and 'Haugeland: Synthetic Intelligence' and you'll realise how the terms have changed, and that I am not "creating my own definition" by any means. I took the time to check my facts and provide references to concepts that might be helpful, whereas the Merriam-Webster definition is merely the first that comes up in an online search for 'AI'. Yes, I do realise the bar seems very low in some cases.
 
simonh said:
I'm interested in what appears to be a popular notion here - that AIs would inevitably set their own agendas, set their own goals and would have unknowable objectives. I don't see why that is inevitable, after all the one example of intelligence we currently have - humans - are extremely limited in terms of their ability to change themselves and their motivations and goals.
Not necessarily inevitably, it's just that an AI (or Synthetic Intelligence) wouldn't be as predictable in its development as a human might be: we simply don't know how an AI will develop until it does - it might form intelligence along broadly human lines, or it might go off at a tangent so great that we would have little in common and have difficulty in understanding it.
 
simonh said:
I don't see why that is inevitable, after all the one example of intelligence we currently have - humans - are extremely limited in terms of their ability to change themselves and their motivations and goals.

I'm not into the whole doom-and-gloom scenario, such as Skynet, because most of them don't make sense: by nuking the world, it would essentially be nuking itself. Plus, cooperation is a bigger survival technique than conflict. Most of the negatives can be chalked up to technophobia.
 
Rick said:
dragoner said:
Rick said:
We are still talking about 2 different concepts.

I'm using the accepted definition, if you are creating your own definition, I can't help you there. Now if the bar seems low ... /shrug That is humanity for you, not everyone is an Einstein.
Please read all of that post. Then look up 'AI Winter' and 'Haugeland: Synthetic Intelligence' and you'll realise how the terms have changed, and that I am not "creating my own definition" by any means. I took the time to check my facts and provide references to concepts that might be helpful, whereas the Merriam-Webster definition is merely the first that comes up in an online search for 'AI'. Yes, I do realise the bar seems very low in some cases.
Webster does not use the SyFy definition of AI.
Rick said:
Not necessarily inevitably, it's just that an AI (or Synthetic Intelligence) wouldn't be as predictable in its development as a human might be: we simply don't know how an AI will develop until it does - it might form intelligence along broadly human lines, or it might go off at a tangent so great that we would have little in common and have difficulty in understanding it.
The AI could end up just simulating our creativity, or it could end up being autistic: really good at solving one thing, while no one can understand anything else about it.
dragoner said:
Plus, cooperation is a bigger survival technique than conflict.
A bit vague there.
 
Rick said:
dragoner said:
Rick said:
We are still talking about 2 different concepts.

I'm using the accepted definition, if you are creating your own definition, I can't help you there. Now if the bar seems low ... /shrug That is humanity for you, not everyone is an Einstein.
Please read all of that post. Then look up 'AI Winter' and 'Haugeland: Synthetic Intelligence' and you'll realise how the terms have changed, and that I am not "creating my own definition" by any means. I took the time to check my facts and provide references to concepts that might be helpful, whereas the Merriam-Webster definition is merely the first that comes up in an online search for 'AI'. Yes, I do realise the bar seems very low in some cases.

No, you are reading more into it than is actually there - exactly what I said up-thread. A godlike intelligence could be an AI, but not all AIs equate to a godlike intelligence.
 
dragoner said:
No, you are reading more into it than is actually there - exactly what I said up-thread. A godlike intelligence could be an AI, but not all AIs equate to a godlike intelligence.
True, but a simulated intelligence, programmed to mimic human responses, will never actually be sapient, or a true intelligence; it can only regurgitate its own programming or data. An AI would be able to create an intuitive response from data.
For example, a simulated intelligence could be programmed to unscrew a nut and bolt and every time it came across that nut and bolt it could unscrew it; an AI, however, could use that example to generalise a class of things that belonged to 'things that can be unscrewed' and apply the same principle to bottles, jars, lightbulbs, etc.
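That nut-and-bolt example maps neatly onto code. A minimal, purely illustrative sketch (the object and feature names are invented): the 'SI' only recognises the exact object it was programmed for, while the generalising learner abstracts a feature - "has a thread" - and applies it to objects it has never seen.

```python
# The nut-and-bolt example as code (all names hypothetical).

def si_can_unscrew(obj):
    # Programmed response: matches one specific object only.
    return obj["name"] == "nut_and_bolt"

def ai_can_unscrew(obj, learned_features=frozenset({"threaded"})):
    # Generalised rule induced from the one example it was shown:
    # anything carrying the learned features belongs to the class
    # 'things that can be unscrewed'.
    return learned_features <= obj["features"]

bolt = {"name": "nut_and_bolt", "features": {"threaded", "metal"}}
jar  = {"name": "jam_jar",      "features": {"threaded", "glass"}}

print(si_can_unscrew(jar))  # False: the jar is outside its programming
print(ai_can_unscrew(jar))  # True: generalised from the bolt
```

The interesting (and hard) part, of course, is where `learned_features` comes from - inducing the right abstraction from a single example is exactly the intuitive leap being argued about above.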
 