The Perennial Robot Question

Robots should never replace humans except for tasks that are too dangerous for humans or of such a repetitive nature as to be detrimental to mental and/or physical health. Humans should always take priority in any job function, and robots must be TOOLS to assist human labor. When robots replace humans for tasks they are perfectly capable of performing, it's always for the benefit of certain humans who wish to accumulate personal wealth and resources. It's an attempt to create a society of an absolute elite, and that means the elimination of lower class populations either by attrition or....

We keep seeing people imagine a paradise of humanity at perfect leisure to 'attain their potential'. Total fallacy if every task is taken away and humans have nothing useful to do and no incentive to advance and produce. If you are led to believe a robot can do it better, there's nothing left. Most of those dream worlds envision a super upper class literally separated from all other humanity.

Star Trek is a much better vision of a perfect society when it comes to automation. It's an obvious socialist society that focuses on the needs of its members, and robots and automation are transparent. As advanced as they are, you see a human janitor cleaning the floor behind Kirk and Spock near the test simulator rather than an automaton. Even in Star Wars, with robots so common, their replacement of organics becomes more abusive going up the social ladder. You realize robotics in the galaxy reflects the vastness of worlds compared to sentient life, and robots fill in voids but aren't normally used to remove or replace. Very often, the films hammer home how many populations make it clear that robots, no matter how sophisticated, are property and tools.

Traveller, including most fans and players, has never made robots more than tools, except for an occasional robot society or Virus. Descriptions in most robot sources show the major races accepting them as tools over replacements, and sometimes there are social aversions to robots for their possible abuse toward the population. This keeps robots in check and makes organics, especially the various game races, the focus of the stories. Personally, you can keep your I, Robot/Colossus worlds.
 
Reynard said:
Robots should never replace humans
Robots could be ubiquitous; but where faced with a human making a judgment, they need to be able to defer to human overrides on the grounds that the robot cannot be the one to make a decision.

For instance, if you have a lawbot, with full knowledge of local legal tariffs, and there is a trial, a human judge can request the lawbot's advice on the best sentence for a perp who has been found guilty of a crime. The judge could accept the lawbot's recommendation, or choose his own. But the lawbot should never be put in the place to determine the accused's innocence or guilt, no matter how great the lawbot's programmed Edu +DM and Advocate skill.

A linguabot is great for quick translation, in the same way as Google Translate is fantastic for translating short phrases in a hurry; but the linguabot could be helpless in a situation where there is no organic linguist around to separate idiomatic speech or colloquialisms from exact, literal speech.

I recommended to three people here that, if it is unaided by a sophont, a robot's skill check can achieve no more than a +1 Effect, no matter how difficult or easy the task; and unaided by a sophont who can override the robot's recommendations, no robot's skill check can have a Boon die. Though they can suffer negative Effects, a Bane die, and the likelihood of Exceptional Failure, no unaided robot could or should achieve Exceptional Success.

I suggested that if there is a sophont guiding the robot, it can have an Effect greater than +1 and even Exceptional Success; and the sophont can lend the robot its Boon die for the robot's skill check. That can be a human doctor, an organic engineer or even a sophont Ship's Captain responsible for every robot crewman.
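For anyone who wants to see the cap in play, here is a minimal dice sketch, assuming Mongoose's standard 2D6 mechanics (target number 8+, Effect = total − 8, Boon/Bane as roll-3-keep-2). The function name and parameters are my own illustration, not anything from the rulebook:

```python
import random

def robot_skill_check(skill_dm, boon=False, bane=False, aided=False):
    """One Traveller-style 2D6 skill check under the house rule:
    an unaided robot loses its Boon die and caps its Effect at +1;
    a sophont-guided robot rolls normally and may use the sophont's Boon."""
    # Unaided robots are denied the Boon die, so they only roll 3 dice for a Bane.
    n = 3 if ((boon and aided) or bane) else 2
    dice = sorted(random.randint(1, 6) for _ in range(n))
    kept = dice[:2] if bane else dice[-2:]  # Bane keeps the two lowest, Boon the two highest
    effect = sum(kept) + skill_dm - 8       # Effect relative to the 8+ target
    if not aided:
        effect = min(effect, 1)             # no Exceptional Success without a sophont
    return effect
```

Rolled repeatedly, an aided robot with a lent Boon die can reach Effect +2 and beyond (Exceptional Success at +6), while the unaided version never climbs past +1 no matter how well the dice fall.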

That was my idea, anyway.
 
Reynard said:
Robots should never replace humans except for tasks that are too dangerous for humans or of such a repetitive nature as to be detrimental to mental and/or physical health. Humans should always take priority in any job function, and robots must be TOOLS to assist human labor. When robots replace humans for tasks they are perfectly capable of performing, it's always for the benefit of certain humans who wish to accumulate personal wealth and resources. It's an attempt to create a society of an absolute elite, and that means the elimination of lower class populations either by attrition or....

We keep seeing people imagine a paradise of humanity at perfect leisure to 'attain their potential'. Total fallacy if every task is taken away and humans have nothing useful to do and no incentive to advance and produce. If you are led to believe a robot can do it better, there's nothing left. Most of those dream worlds envision a super upper class literally separated from all other humanity.

And how is this different to what we have today? ;).

Robots are "doing it better" than humans now - they've been doing it better for years, in fact. If you are led to believe that they aren't or can't then you're in denial.

Humans still have plenty of incentive to advance and produce with or without robots. Just because robots and AIs could do it, that doesn't mean that humans suddenly have to stop doing everything - at that point it becomes "what does the human WANT to do".

Shrieking and throwing poop at the future isn't going to help you handle it any better.
 
alex_greene said:
Robots could be ubiquitous; but where faced with a human making a judgment, they need to be able to defer to human overrides on the grounds that the robot cannot be the one to make a decision.

There's your fallacy, right there. They don't "need" to defer to humans at all. What's the point of having a human there if it's just going to repeat what the AI says?

You're just being flat-out anti-robot here. Your assumption is that robots should never be allowed to make decisions that affect humans, even when humans would make exactly the same decisions - even ones that literally take the robot's output and utter it through a human mouth instead. Why? It just reeks of "I don't want robots to be our overlords" rather than anything rational.
 
fusor said:
alex_greene said:
Robots could be ubiquitous; but where faced with a human making a judgment, they need to be able to defer to human overrides on the grounds that the robot cannot be the one to make a decision.

There's your fallacy, right there. They don't "need" to defer to humans at all. What's the point of having a human there if it's just going to repeat what the AI says?

No, it is the culpability issue, today and for the 3I as well: per the Shudusham Concords (http://wiki.travellerrpg.com/Shudusham_Concords), robots aren't afforded culpability.
 
alex_greene said:
I suggested that if there is a sophont guiding the robot, it can have an Effect greater than +1 and even Exceptional Success; and the sophont can lend the robot its Boon die for the robot's skill check. That can be a human doctor, an organic engineer or even a sophont Ship's Captain responsible for every robot crewman.

If you're going to have a human checking over the robot's results and doing what the robot says anyway, then either the human or the robot is redundant.

It's one thing to have a human run a program that gives results that they can use to make a decision. At that point, there's a point to having a human in the mix.

But when you have a robot/AI doing all of that and making the same decision just as reliably (if not more so) then you really don't need the human at all.

And remember, human error is the primary cause of a lot of problems: someone forgets to put the flaps down. Someone forgets one step in a critical process. Someone fiddles with the radio while driving. Someone's tired while looking at the tumour X-rays. Someone's biased and ignores a result. Granted, 'intelligent systems' are as flawed as the people who write the programs too, but an AI that learns can overcome such limitations.
 
dragoner said:
No, it is the culpability issue, today and for the 3I as well: per the Shudusham Concords (http://wiki.travellerrpg.com/Shudusham_Concords), robots aren't afforded culpability.

While the systems are not so reliable, sure, a human should be there to make the decision (right now, driverless cars are still a bit flaky; in a pretty short time, though, they will be able to drive far better than a human ever could. And again, most errors on the road are caused by human error at the wheel. In fact it'll become vastly more likely that accidents would be caused by humans doing stupid things around driverless cars). But they're very rapidly going to become more reliable than humans. At that point the human's just there to rubber-stamp what the AI says. If it's a matter of "taking responsibility" then they can just say "well, the AI suggested this course of action, and it knows far more about all the factors than I ever possibly could, so I agreed to it".

Humans today (ideally) make decisions based on what they know. A fully trained expert system knows far more than one human possibly could, and can assess things much better than a human possibly could too. It's vastly more likely that there's nothing to take responsibility for because the decision would be the appropriate one to make.
 
fusor said:
It's vastly more likely that there's nothing to take responsibility for because the decision would be the appropriate one to make.

It's the tip of the iceberg, which is why there is so much resistance, because the easiest solution is to have someone there to push an off button when an anomalous situation arises (subway trains are like this, and have been since the '70s). Otherwise, the operator/owner could be declared negligent, with negligent homicide being an extinction-level event for a company, legally (which also applies to medical malpractice). To actually declare the machine culpable would be to give it some sort of legal personhood, or an independent status, at which point can it say no? What is its remuneration? Is it free, indentured or a slave? It is almost like the chicken or the egg, legally: why build it if you can't use it? So back to the simplest answer: it is just a machine, a tool, and keep a human (or other sophont) operator present for culpability's sake.
 
dragoner said:
fusor said:
To actually declare the machine culpable would be to give it some sort of legal personhood, or an independent status, at which point can it say no? What is its remuneration? Is it free, indentured or a slave? It is almost like the chicken or the egg, legally: why build it if you can't use it? So back to the simplest answer: it is just a machine, a tool, and keep a human (or other sophont) operator present for culpability's sake.

At some point, when true AI is created, some states may declare them to be "people" legally, with all the implications. These aren't impossible questions to answer or consider (other transhumanist RPGs have already done that).

That doesn't change the fact that Traveller is still incredibly (and unreasonably) conservative about technological effects on society, though. The only reason that robots and AI don't take over from humans, and that human society is essentially unchanged from what it is today, is that the designers didn't want them to, didn't consider that they could, and didn't want things to change, so they contrived reasons to limit them, and that was that.
 
dragoner said:
It's the tip of the iceberg ...
Indeed, and it is quite fascinating to watch how the human experts wrangle with the ethical and legal problems of robotics and artificial intelligence. For example, there is a Working Group on legal questions related to the development of robotics, set up by the European Parliament Committee on Legal Affairs and tasked with reflecting on legal issues, and especially with paving the way for the drafting of civil law rules in connection with robotics and artificial intelligence. Overall, the trend of the debate seems to go towards a new kind of legal status for robots and artificial intelligences, but at the moment the human decision makers still seem rather helpless when having to deal with the problem.

Here is a nice example of what can happen:

http://www.cnbc.com/2015/04/21/robot-with-100-bitcoin-buys-drugs-gets-arrested.html
 
fusor said:
At some point, when true AI is created, some states may declare them to be "people" legally, with all the implications. These aren't impossible questions to answer or consider (other transhumanist RPGs have already done that).
I agree. I mean, look at how in real life the US Supreme Court has given rights to corporations that were reserved for people in the past. And a corporation is not even a living thing. So the idea that a court somewhere will give rights to a thinking machine is not too hard to see at all. :D
 
I for one have a problem seeing robots as anything more than complex appliances, at least in Traveller terms. But that is my opinion.
 
Infojunky said:
I for one have a problem seeing robots as anything more than complex appliances, at least in Traveller terms. But that is my opinion.
I have always liked the idea of a robot as a toolbox that helps the person rather than replaces them. As I said in another post, I have always liked the idea of Huey, Dewey, and Louie rather than Ash, Sonny, and Data. :mrgreen:
 
rust2 said:
Here is a nice example of what can happen:

http://www.cnbc.com/2015/04/21/robot-with-100-bitcoin-buys-drugs-gets-arrested.html

It's Bender! :mrgreen:


Infojunky said:
I for one have a problem seeing robots as anything more than complex appliances, at least in Traveller terms. But that is my opinion.

I have had robots as PCs in my campaign; it worked out fine. There are a couple of books to make it possible for Mongoose, and if someone wanted to run a Culture-ish campaign with an AI ship that used a robot "avatar", it's possible. Though I'd say there is a gulf between a true AI and a simple robot filling a crew position.
 
Infojunky said:
I for one have a problem seeing robots as anything more than complex appliances, at least in Traveller terms. But that is my opinion.

Let me expand on this a little.

While I do allow intelligent machines and limited AI in my games, for the most part specialized droids are only intelligent as it pertains to their programming (kinda like a number of PhD candidates I know are). And any AIs are human-class intelligences with access to a fair number of idiot-savant-like processes, i.e. basic maths and memory recall.... (No more abilities than I would allow a character with a computer implant link to have.)

As for world-economy-ending devices, nope: bots are part of the figures as far as I am concerned, going so far as to assume that there are more than a fair number of robotic processes involved in every starship operation; they just aren't independent robots....
 
Infojunky said:
While I do allow intelligent machines and limited AI in my games, for the most part specialized droids are only intelligent as it pertains to their programming (kinda like a number of PhD candidates I know are)...
Oh my, isn't that the truth. Some PhDs even seem limited to a single area of intelligence and lack any other abilities. :lol:
 
-Daniel- said:
Oh my, isn't that the truth. Some PhDs even seem limited to a single area of intelligence and lack any other abilities. :lol:

If you're talking about being too specialised in a single area of knowledge? Perhaps you have a point. But general intelligence? No way. I've worked with a lot of people who have PhDs and they're very smart, well-rounded people. I think it's sad that you feel the need to insult them.
 
fusor said:
-Daniel- said:
Oh my, isn't that the truth. Some PhDs even seem limited to a single area of intelligence and lack any other abilities. :lol:
If you're talking about being too specialised in a single area of knowledge? Perhaps you have a point. But general intelligence? No way. I've worked with a lot of people who have PhDs and they're very smart, well-rounded people. I think it's sad that you feel the need to insult them.
fusor, please don't twist my words, I didn't say ALL PhDs. I said some. You are right, I have met some PhDs who are smarter than I ever will be and are well-rounded folk. I have also met many who do not have higher degrees who are smart and well-rounded as well. But as I said, *SOME* PhDs seem limited to a single area of intelligence and lack any other abilities.
 
-Daniel- said:
fusor, please don't twist my words, I didn't say ALL PhDs. I said some. You are right, I have met some PhDs who are smarter than I ever will be and are well-rounded folk. I have also met many who do not have higher degrees who are smart and well-rounded as well. But as I said, *SOME* PhDs seem limited to a single area of intelligence and lack any other abilities.

Doesn't really matter if you said "all" or "some" - it's still an unnecessary insult and nothing to do with the thread. There's no reason to bash people who are making an effort to learn more.
 
It's a judgement call by the Dungeon Master, on the tone he wants to set in his milieu.

As to why? The chickens may have already come home to roost, and we beat them off by the skin of our teeth.

Actual zombiebots.
 