The Perennial Robot Question

The question keeps getting asked on the Traveller boards. Such as here. If we can use robots, why would we need human crews? Why, indeed, would humans go out into space at all? Why would we even leave this paradise of ours and venture out into the stars when our AIs and mechanical servitors can do all the hard work for us? *swoon*

Some good reasons.

One, because when it comes to fighting we will still need humans to make the big decisions, even if robots can develop better tactics and even make better leaders. We need to know that it is a person who decides who has to lay down his life for his fellow humans in a given action, and we'd need a human to send other people off on suicide missions.

Two, because even if a robot can calculate a better route to turn a profit, some customers and brokers are not going to deal with "a hunk of tin."

Three, because even if a robot knew more about diplomacy when dealing with alien species, we'd still need people to represent the human race in dealings with aliens.

Four, because for many decisions, even those where the mathematics favour one outcome, we would still need humans to go out there and decide. Not a machine.

Basically, if the robots outnumbered humans a hundred to one, we'd still need at least one human to direct them, to organise their repair and maintenance schedules, to make the decisions and to give the orders for robots and other humans to carry out.

Would anyone else like to come up with good reasons to send humans out into space, even with robots to do the job for them?
 
While I certainly prefer human characters to robots, I disagree with your points. :(

Point One, since we can send robots commanded by other robots to do the fighting, no humans have to enter the battlefield to fight and die.
Point Two, when the customers and brokers are robots themselves there is no need for humans to get involved.
Point Three, I see no reason why the human species cannot be represented by humanoid robots.
Point Four, we are already in a situation where we leave many important decisions to computers.

In my view whether robots could replace humans completely depends on the degree of sophistication of the robots. Once a robot can do everything a human can do, and can do most or all of it better than a human, there is no longer any logical reason to use humans for these tasks. What remains is the human desire for personal involvement and for some degree of adventure, but I am not entirely convinced that this will always remain sufficient to keep humans as the main actors.
 
rust2 said:
While I certainly prefer human characters to robots, I disagree with your points. :(

Point One, since we can send robots commanded by other robots to do the fighting, no humans have to enter the battlefield to fight and die.
Point Two, when the customers and brokers are robots themselves there is no need for humans to get involved.
Point Three, I see no reason why the human species cannot be represented by humanoid robots.
Point Four, we are already in a situation where we leave many important decisions to computers.

In my view whether robots could replace humans completely depends on the degree of sophistication of the robots. Once a robot can do everything a human can do, and can do most or all of it better than a human, there is no longer any logical reason to use humans for these tasks. What remains is the human desire for personal involvement and for some degree of adventure, but I am not entirely convinced that this will always remain sufficient to keep humans as the main actors.
But you are missing some vital points.

Humans, not robots, value credits as a means to live. Humans, not robots, want to compose music and to appreciate what is being played and the emotions being generated. Humans, not robots, want to know what's out there.

And when it comes to decisions which affect human and robot lives, it is best that a human make the decisions, rather than a pragmatic robot who will only go down the most practical, most obvious, most logical and ultimately least productive path.

If a robot broker dealt solely in speculative trade, high-yield goods would circulate rapidly, while goods such as medicines and food - stuff of importance only to squishies - would be given low priority, sitting there and stinking the place out until a robot decided not to trade in squishy stuff any more, because non-perishables require less energy and cost to store. So what if planets full of squishies start to starve or sicken? That's organics for you. That's what squishies do.

They would be programmed to maximise productivity and profitability, to maximise cred yield and minimise outgoings, but the actual value of what they trade in would be lost to them. The ultimate point of getting rich - that humans can enjoy lives of prosperity - would mean nothing to mechanical minds which have nothing to want, and which are never going to be programmed to want.

Even if the planetside robots were Brokers, they would still have to default to a human decision rather than go by what their spreadsheets tell them to do.

Same goes for tactical and strategic decisions, music, art, exploration - they all need a human, because even if the technical ability can be studied, duplicated and enhanced, what would be the point if there is nobody but other robots to hear the songs and the poems?

A universe full of robots would still be an imitation of human society, unable to understand why they do the things they do.
 
There was a short story in one of the Legion books where robots couldn't handle FTL travel, so it was left to humans to do the exploring.
 
AndrewW said:
There was a short story in one of the Legion books where robots couldn't handle FTL travel, so it was left to humans to do the exploring.
I just posted an elegant possible solution in your PMs. CC'ed it to Matthew too. Wondering if it, or a similar rule, could be put in the Vehicle Handbook's robots chapter.
 
alex_greene said:
A universe full of robots would still be an imitation of human society, unable to understand why they do the things they do.
Again, I would disagree. :(

The robots you describe still seem to be robots created and programmed by humans according to human concepts and for human purposes, and therefore unable to truly replace humans. However, in my view a universe full of robots able to replace humans would be a universe of robots able to program themselves - and I think we cannot know or even imagine what their programs would lead them to want or do, they would probably be a truly alien species to us. A species which would no longer have to rely on humans for any kind of guidance.
 
rust2 said:
alex_greene said:
A universe full of robots would still be an imitation of human society, unable to understand why they do the things they do.
and I think we cannot know or even imagine what their programs would lead them to want or do, they would probably be a truly alien species to us. A species which would no longer have to rely on humans for any kind of guidance.
That is a mightily arrogant presumption. A robot could work out an algorithm and compute things like optimal courses of action, but it cannot and should not be considered "infallible", even if it seems to work out solutions more quickly and efficiently than people do.

Where there is decision-making, you have the option of compassion. Where there is only calculation, there is no alleviation of suffering.
 
alex_greene said:
Where there is decision-making, you have the option of compassion. Where there is only calculation, there is no alleviation of suffering.
Thanks to the TL 13 Emotion Generator from MGT1 Book 9: Robot, the Third Imperium's robots can feel compassion. :wink:

More seriously, I think that humans will continue to improve robots and their artificial intelligence in an attempt to create robots which can have all the mental properties of humans, including creativity and emotions. This does not necessarily have to be based on calculation, there may be other approaches to artificial intelligence besides the purely mathematical one - just think of biotechnology and artificial organic brain structures.
 
Autonomous machines have existed since the 1940s, and as always the issue is one of culpability - so the KISS answer for the future is that this has remained the same.
 
alex_greene said:
That is a mightily arrogant presumption. A robot could work out an algorithm and compute things like optimal courses of action, but it cannot and should not be considered "infallible", even if it seems to work out solutions more quickly and efficiently than people do.

Where there is decision-making, you have the option of compassion. Where there is only calculation, there is no alleviation of suffering.

I think you have a very archaic view of "robots". And are ignoring all the other effects they would have on society.

"Robots" (or more precisely, automation) are already "taking over". Factories are full of robotic assemblers. Driverless cars will be a reality within 5 years. Programs already buy and sell shares way more quickly and effectively than humans to the point where they pretty much run the stock market. They're at the point now where they can analyse data and come to more accurate conclusions than human analysts could (even in radiology and cancer detection). We even have automated burger flippers.

Your question is just an extension of "what happens to humans if robots can do our jobs?". There are several possible answers, but I think the one that leads to our general survival is to provide a "basic income" to everyone and let them do whatever they want because they want to do it. After all, nobody will be able to find a job if there are no jobs available because they're all automated. Eventually that would probably lead to the idea that money itself is pointless. Naturally it'll take a lot of pain to get to that point, because the people who have money won't want to get rid of it, and some groups feel that anyone getting money without doing anything is leeching from society. But it pretty much has to happen whether they want it or not.

The other way your view is archaic is that true artificial intelligence will not produce "uncaring, calculating machines" - it'll produce intelligent machines. There's no reason to believe that their intelligence would be any different to our own (granted, their processing speed would be faster). If humans can figure out compassion or write music then a true AI absolutely can. And humans are pretty capable of emotionless calculation as well (just ask any CEO). AI would be able to understand reason and sympathise and enjoy things and feel sadness about things just as well as a human, because those are things that arise from intelligence itself.

The issue with Traveller is that it forces all this aside because it wants the humans of the 51st century to be exactly the same as the humans of the 20th century. It has amazing technological advancements, and yet none of those advancements changes society - people still struggle over money, still have jobs, are still exactly the same as they've always been. And that's just utter nonsense. Hell, we've changed dramatically in only 100 years, never mind what would happen in 3000 years!

So instead of thinking "ooo, robots will do everything so human characters can't do anything", maybe you should be thinking "AI will be doing everything, so we'll be playing the AI instead. Or an uploaded human intelligence. Or a bioroid. Or the spaceship itself".
 
Alex, you raise some interesting thoughts. But for me it will always come down to one simple answer: it seems to be human nature for us to want to see things with our own eyes. We just can't seem to help ourselves when it comes to seeking out experiences. We climb mountains, scuba dive, explore forests, travel to exotic places. If we were content to let robots do our exploring for us, then pictures of these places would be enough for us as a species. So what would stop us from just sending out ships full of robots? Our own desires. :D
 
alex_greene said:
The question keeps getting asked on the Traveller boards. Such as here. If we can use robots, why would we need human crews?
[ . . . ]

I've had 'robots-lite' universes without C3PO-style 'droids, with one or more of the following backgrounds:

i. It's too hard to put together training data sets to give a robot real wisdom. You can wind up with something about as smart as a Cherry 2000 (can sort of fake a human personality) or something that can do tasks in a single domain well. However, we never get the Ash-type synthetic humans.

ii. You just can't stuff enough computing power to run a general purpose A.I. into a 'droid brain. Look at the size of kit you need to run Watson, and we're not far off the limits of what can actually be done with integrated circuits.

iii. The killer app of A.I. isn't personal robot servants after all. The real money in A.I. is using it as a mass surveillance platform. Eventually society rebels against this and you get a sort of Butlerian Jihad that outlaws A.I. and even goes so far as to make computerised database systems a fairly dodgy legal proposition. Within a generation or two, artificial intelligence and mass surveillance come to be viewed as barbaric, in much the same way as we now view medieval torture instruments.
 
-Daniel- said:
So what would stop us from just sending out ships full of robots? Our own desires. :D
I think we would do both, just as we already do today. A real world example could be the current exploration of the deep sea, where we mostly use robotic AUVs, but there still are people who want to get down as far as possible to see as much as possible. :)
 
rust2 said:
-Daniel- said:
So what would stop us from just sending out ships full of robots? Our own desires. :D
I think we would do both, just as we already do today. A real world example could be the current exploration of the deep sea, where we mostly use robotic AUVs, but there still are people who want to get down as far as possible to see as much as possible. :)
Oh I agree - unmanned exploration equipment will always be used in one form or another. I was more addressing how we will always follow in the end. It's why we talk of sending man to Mars or back to the Moon or out even farther. We just love to see and experience things for ourselves. :D
 
Nobby-W said:
You can wind up with something about as smart as a Cherry 2000 (can sort of fake a human personality) or something that can do tasks in a single domain well.

Enough so you go on a dangerous journey just to get another one of the same model when yours fails? But in the end the Cherry 2000 gets dumped for a human.
 
Consider the scenario where a human doctor and a machine doctor, both with a +2 Edu DM and Medic-5, are arguing over a prognosis. The machine's knowledge base, consulting the actuarial tables, indicates a terminal prognosis: six months. The doctor's experience tells him that the patient's going to live no more than three months.

Both the medbot and the doctor tell the patient. The patient goes away, puts his affairs in order, and lives a further nineteen years.

Only one expert was wrong, in this case - the doctor. Because the doctor's experience did not consider that the patient might live; only that people in the patient's stage of the disease have never lasted more than three months.

The robot, in stating six months, was actually more accurate than the doctor; it might have access to more information on treatments not known to the doctor at his TL; and its knowledge base might have been updated with new rules based on the lifespans of actual patients with the disease the patient has. But its report was no more correct or incorrect than a number rolled up by a pile of dice. The robot could pass no value judgments, could not decide what was right or wrong, and could only come up with a number.

In the end, only the human doctor could be right or wrong. A machine not guided by a person can be neither right nor wrong, in the same way as a scalpel cannot be held responsible for nicking the wrong artery and killing a patient.

We'll need humans to make the decisions because the universe will still need judgment to supersede calculation.
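Incidentally, the mechanics of that scenario are easy to sketch. Here is a rough Python version of a Mongoose-style 2d6 task check - the Medic-5 and +2 Edu DM are just my example numbers from above, and note that with +7 in total modifiers the check literally cannot fail, for human and medbot alike:

```python
import random

def task_check(skill, dm, target=8):
    """Mongoose-style task check: 2d6 + skill + DM, success on target or higher."""
    roll = random.randint(1, 6) + random.randint(1, 6)
    return roll + skill + dm >= target

# Doctor and medbot both roll Medic-5 with a +2 Edu DM. With +7 in
# modifiers even a minimum roll of 2 totals 9, so both always succeed -
# the dice cannot distinguish them. The difference lies in what each
# does with the prognosis, not in the mechanics.
print(task_check(skill=5, dm=2))  # True
```

Which is rather the point: the rules make them identical experts, so whatever separates them has to come from outside the dice.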
 
AndrewW said:
There was a short story in one of the Legion books where robots couldn't handle FTL travel, so it was left to humans to do the exploring.
I missed this the first time. :(

Please can we not do this. I think it smacks of heavy-handed game mechanic restrictions. If you do not want robots to be common in the 3I, then just keep them costly. As long as human labor is cheaper, robots will remain limited. But restricting them only from FTL will not stop them from becoming common in the setting - just from serving on ships.
 
alex_greene said:
Consider the scenario where a human doctor and a machine doctor, both with a +2 Edu DM and Medic-5, are arguing over a prognosis. The machine's knowledge base, consulting the actuarial tables, indicates a terminal prognosis: six months. The doctor's experience tells him that the patient's going to live no more than three months.

Both the medbot and the doctor tell the patient. The patient goes away, puts his affairs in order, and lives a further nineteen years.

Only one expert was wrong, in this case - the doctor. Because the doctor's experience did not consider that the patient might live; only that people in the patient's stage of the disease have never lasted more than three months.

The robot, in stating six months, was actually more accurate than the doctor; it might have access to more information on treatments not known to the doctor at his TL; and its knowledge base might have been updated with new rules based on the lifespans of actual patients with the disease the patient has. But its report was no more correct or incorrect than a number rolled up by a pile of dice. The robot could pass no value judgments, could not decide what was right or wrong, and could only come up with a number.

In the end, only the human doctor could be right or wrong. A machine not guided by a person can be neither right nor wrong, in the same way as a scalpel cannot be held responsible for nicking the wrong artery and killing a patient.

We'll need humans to make the decisions because the universe will still need judgment to supersede calculation.

Nonsense.

For starters, the "machine doctor" will be more likely to be correct than the human one. Why? Because it can access more data more quickly and do more accurate comparisons (as http://www.diagnosticimaging.com/pacs-and-informatics/radiology-man-versus-machine notes, the usual assumption is "normalcy" so a lot of tumours get missed, while an AI has no such bias). Its report is more accurate, and therefore it is "more correct".

For another, if the "machine doctor" is actually intelligent then it will make the diagnosis (why have the human one there if all it's going to do is repeat what the machine says?). You're still viewing the machine/robot/AI as a "tool" and not as an intelligence.

And finally, what you call "judgement" is calculation. Any decision is a calculation for that matter. You assess the ups and downs, the risks and the benefits, and you make a decision. Whether you calculate that "by feel", or by crunching probabilities in your processor makes no difference whatsoever.

Though I think you've really made up your mind over this already. You want humans to always be preferable to robots and machine intelligence because that's the way you want your games to be. I don't think anyone is going to persuade you otherwise, because you insist on viewing robots and AI as things that they're not.
 
-Daniel- said:
If you do not want robots to be common in 3I, then just keep them costly. As long as human labor is cheaper robots will remain limited.
They are already expensive, but will still outcompete humans in space: the most expensive part of sending humans into space is the 4 - 5 dT of very costly living space each of them needs.

On a planet, a Cr 100 000 utility droid is not likely to make a Cr 5000-per-year waiter obsolete.
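The arithmetic behind that is worth spelling out. A back-of-the-envelope sketch, deliberately ignoring maintenance, power and depreciation (all of which would only make the droid look worse):

```python
droid_price = 100_000   # Cr, purchase price of the utility droid
waiter_wage = 5_000     # Cr per year, the human waiter's salary

# Years of a waiter's wages needed just to cover the droid's sticker
# price - before repairs, power or spare parts are even counted.
break_even_years = droid_price / waiter_wage
print(break_even_years)  # 20.0
```

Twenty years of wages before the droid even starts paying for itself, which is why cheap human labour keeps planetside robots rare.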
 