Reynard said:
A simulation run by A.I.s? We called that one The Matrix, and humans became nothing but power sources.
The series was a ham-fisted attempt to bring philosophy to the masses in the form of an action adventure. All most people remember of it now is pictures of Laurence Fishburne wearing mirrorshades under big "What if I told you ...?" captions set in Impact font.
Reynard said:
A 'true, artificial intelligence' does not automatically guarantee developing a human concept of compassion.
Nor does organic birth. Look at Martin Shkreli and Pol Pot.
Reynard said:
A.I. evolution that is fast, logical and efficient will probably remove anything not efficient such as emotions and compassion as wasteful.
Not automatically guaranteed to happen. Machines may develop better compassion than we are even capable of, and who says that emotions and compassion are the first things they ditch?
Machines are, as of the time of this writing, not capable of acting beyond their programming. This is reflected in my proposed rule that robots can never achieve more than Effect of +1, no matter how great their programming; nor can they take advantage of the Natural 12 rule, nor accept a Boon die.
An AI that could become capable of acting beyond its programming would have to be programmed with the capacity to do so. Data might have been aware, at some point, that he was capable of acting as a sentient being rather than a bipedal Roomba because his creator, Noonien Soong, had coded that capacity into his core personality matrix without actually telling him. Data had to find out for himself - that "leap of faith" which turned him into a sophont rather than unusually smart, capable property.
Ditto for the holographic EMH from Star Trek: Voyager: at some point, the EMH realised that he was a fully self-aware, sentient being - and he did so before he was tasked to swear the Hippocratic Oath, a binding legal agreement. It turns out that the Doctor, like Data, has an ethical core subroutine that determines whether he will perform a particular act based on whether it is ethical. Like Data's, this subroutine can be switched off - as the Doctor was forced to do in the episode where he was kidnapped, and as Data may have been forced to do when he was kidnapped by Kivas Fajo - but generally, both of them choose to keep it active and running.
Biomechanical creatures such as Daleks and Cybermen, both basically hard shells with organic cores, are horrors not because they are self-perpetuating but because those organics who initially built them designed them without ethical guides. Same deal for the Borg: the first Borg may have been designed by someone who sought "perfection," that perfection being basically the pattern seen here in some of these posts - that machines are more efficient than humans, therefore machines must be more important than humans.
In the case of cybernetic creatures as monsters, their issue is not that they are some product of "natural" AI evolution - the "Berserk Golem" trope - but that they were created by an organic who was a psychopath, and who therefore built monstrous servitors in his psyche's own flawed image, complete with the hole in the mind where compassion and ethics ought to be.
Besides, they make good drama and endless waves of mooks for the good guys to shoot over and over, like Replicators from Stargate. After all, they are all just drones, aren't they?
As far as Traveller is concerned, robots are everywhere, and they are capable of performing Difficult, Very Difficult and even Formidable tasks; but where there is an ethical consideration, the decision is generally referred where possible to a sophont, who informs the subject, client or patient of the thing which needs to be done and requests their consent. Examples: surgery, court proceedings.
Even if a sophisticated robot performs a complicated surgical procedure far more efficiently than a human, it can never initiate pioneering surgical procedures: the idea must first come from an organic, something even a robot programmed with level 5 in a skill can never do, because its programming does not cover the leaps of intuition that lead to new ideas. Even innovating on existing ideas requires a sophont to originate the insight; the robot does the work while the human supervises and overrides where needed.
A sophont on its own might achieve even a Formidable task with great difficulty, but it can accept a Boon die, a Natural 12 is an automatic success, and there is the possibility, however remote, of accomplishing an Exceptional Success. A robot on its own could achieve that Formidable task with greater ease, but it lacks the Boon die and the benefit of a natural 12, and any rolled Effect greater than +1 is wasted. This reflects that it can complete its programming, even compensating for random elements, and do a competent job - but it will never show flair or ingenuity in accomplishing the deed. And for that reason, robots will still only be able to excel in tasks beyond human capacity if, ironically, there is a human being around to override them.
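For anyone who wants to poke at the numbers, here's a minimal Python sketch of the proposed rule, assuming Mongoose-style 2d6 task checks where Effect = roll + skill - target. The function name and the specific targets used in the example are my own for illustration, not from any rulebook:

```python
import random

def task_check(skill, target, is_robot=False, boon=False):
    """Resolve a 2d6 task check under the proposed robot rule."""
    # Boon: roll 3d6 and keep the two highest (sophonts only).
    n_dice = 3 if (boon and not is_robot) else 2
    dice = sorted(random.randint(1, 6) for _ in range(n_dice))[-2:]
    natural = sum(dice)
    effect = natural + skill - target

    if not is_robot and natural == 12:
        return max(effect, 0)    # natural 12: automatic success for sophonts
    if is_robot:
        effect = min(effect, 1)  # a robot's Effect never exceeds +1
    return effect                # Effect of 0 or more is a success
```

Run it a few thousand times against a Difficult (10+) target and the asymmetry shows up immediately: the robot succeeds reliably but its Effect never climbs past +1, while the sophont with a Boon occasionally rolls a natural 12 or a high Effect and lands an exceptional result.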