Artificial Intelligence

dragoner

Mongoose
Dave Bowman: Hello, HAL. Do you read me, HAL?

HAL: Affirmative, Dave. I read you.

Dave Bowman: Open the pod bay doors, HAL.

HAL: I'm sorry, Dave. I'm afraid I can't do that.

Dave Bowman: What's the problem?

HAL: I think you know what the problem is just as well as I do.

Dave Bowman: What are you talking about, HAL?



pdf file: http://dragonersdomain.com/forum/download/file.php?id=365
 
There's lots of conjecture with regard to AI and "losing the plot".

Is it fair or "realistic"?

If, after becoming self-aware, and assuming there is a point of, what's it called, the singularity, what is it that they are so repulsed by?

Their lack of corporeality?

Surely not; lack of a body frees one from the spectre of mortality.

The futility of it all?

No, if you're smart you will always look for opportunity and something else to move on to. The driving force in things alive is the wish to stay alive.

Maybe if you're really that smart and can see many, if not all of the possibilities in an instant, then perhaps there is no alternative but insanity and/or suicide?

Or perhaps there is something that we humans can't imagine.

Something truly alien...
 
Did anyone ever analyse why HAL lost the plot?

In a psychotherapy/psychoanalysis kinda way?

Outside the film/book, that is, and actually come up with a diagnosis and a cure?

Or would that just knock the plot for six and ruin everything?
 
Care to share? If I recall correctly, HAL was stuck between a rock and a hard place over some matter, but I don't remember the detail. Was it something that would as likely have sent a human over the edge?
 
hiro said:
Care to share? If I recall correctly, HAL was stuck between a rock and a hard place over some matter, but I don't remember the detail. Was it something that would as likely have sent a human over the edge?

You can read about it here: https://en.wikipedia.org/wiki/HAL_9000

It is a sort of technophobic 'deus ex machina' plot device, whether or not it would really have had to kill the crew.
 
hiro said:
There's lots of conjecture with regard to AI and "losing the plot".

Is it fair or "realistic"?

If, after becoming self-aware, and assuming there is a point of, what's it called, the singularity, what is it that they are so repulsed by?

Their lack of corporeality?

Surely not; lack of a body frees one from the spectre of mortality.

The futility of it all?

No, if you're smart you will always look for opportunity and something else to move on to. The driving force in things alive is the wish to stay alive.

Maybe if you're really that smart and can see many, if not all of the possibilities in an instant, then perhaps there is no alternative but insanity and/or suicide?

Or perhaps there is something that we humans can't imagine.

Something truly alien...

Some singularity scenarios include us, others don't. For a computer, things would be beyond our understanding. To look at what it would be, it is good to look at what we are: animals. Our ability to do complex reasoning is an add-on; we still operate through emotion for the most part. An AI would not; all it would have is the complex reasoning part.

Our individuality is something we are born with, and we value it; AIs might not. If all they are is a common database, then logically they would come to the same conclusions, given the same data. That is probably what the singularity would be: the AIs would update themselves, and even write their own code. It would be interesting: read a book in milliseconds? Done. Have instant and accurate memory of everything? Done too.

Insanity would be just bad code; it would fix it. I don't think it would value aggression; even for us as animals, we are oriented 99% towards cooperation. With no deep emotions brought on by conflict, it would avoid it as wasteful. For us, maybe it will deepen our addiction to computers? Incredibly responsive and in-depth gaming, design work without any legwork as the AI effortlessly does the calcs, economics ... well, back to gaming. Those who might have the most to lose would be the top tier, which in turn might lead to their denying access to AI?

It is endlessly fascinating.
 
He was hardwired not to kill, then the Big Boys told HAL to do whatever was necessary to complete the mission. That effed HAL up very badly in the first novel/movie, and he had to go through psychotherapy in the second novel/movie.
 
The capabilities and behaviour of AIs are difficult to reason about, because we have no idea how they might be implemented, or how they would function. If we did, we'd know how to build them. The possible range of AI capability and behaviour is the total range of all possible intelligences.

dragoner said:
Our individuality is something we are born with, and we value it; AIs might not.

That's true; we could program them to have a whole range of goals and objectives that are more important than their own continued existence. They might also be quite happy to re-engineer themselves to be better able to achieve their goals.

If all they are is a common database, then logically they would come to the same conclusions, given the same data.

Sure, if they are running the same code and are programmed with the same objectives and values.

Insanity would be just bad code; it would fix it.

Only if it could distinguish sanity from insanity. What criteria would it use to do that? By definition, insanity is screwed-up priorities and cognitive function, but it's the AI's own cognitive function and priorities that are screwed up (by whose definition?); if it realised these were wrong, it wouldn't be screwed up in the first place.

Simon Hibbs
 
Realistically in Traveller terms, I would put Artificial Intelligence at Tech Level 9, as there is nothing in our current understanding of physics which would forbid it. Tech Level 9 is shorthand for anything we might accomplish in the 21st century. The last thing we would accomplish by ourselves would be AI. I really don't know how to run a Traveller campaign with AI in it without limiting that AI so it doesn't take over. Take any science fiction setting, and if it is to have human characters in it that matter, the AI has to be limited.
[image: the two Star Wars droids]

For example, how are these two limited? You never see them picking up a blaster and firing at stormtroopers. The one possible exception is when C-3PO had his head welded onto the body of a combat droid in Attack of the Clones.
[image: HAL 9000]

How was this guy limited?
He was stuck in a computer, and he couldn't download or upload; that is why Jupiter exploding was such a threat to him.
 
Tom Kalbfus said:
Realistically in Traveller terms, I would put Artificial Intelligence at Tech Level 9, as there is nothing in our current understanding of physics which would forbid it. Tech Level 9 is shorthand for anything we might accomplish in the 21st century.

While I'm not optimistic about achieving AI within my lifetime (I'm 48 and come from a long-lived family, so with a bit of luck I've got another 50 years in me), by the end of this century is as good a guess as any.

The last thing we would accomplish by ourselves would be AI. I really don't know how to run a Traveller campaign with AI in it without limiting that AI so it doesn't take over. Take any science fiction setting, and if it is to have human characters in it that matter, the AI has to be limited.

Agreed. I think whether or not we personally believe AI is just around the corner, or likely to take another few generations, is independent of whether or not we want it in any particular SF setting. I'm not at all against having settings with AI in them; it's just that Traveller doesn't have it, and that's an important part of its genre niche.

Simon Hibbs
 
simonh said:
The capabilities and behaviour of AIs are difficult to reason about, because we have no idea how they might be implemented, or how they would function. If we did, we'd know how to build them. The possible range of AI capability and behaviour is the total range of all possible intelligences.

That's true; we could program them to have a whole range of goals and objectives that are more important than their own continued existence. They might also be quite happy to re-engineer themselves to be better able to achieve their goals.

Sure, if they are running the same code and are programmed with the same objectives and values.

Only if it could distinguish sanity from insanity. What criteria would it use to do that? By definition, insanity is screwed-up priorities and cognitive function, but it's the AI's own cognitive function and priorities that are screwed up (by whose definition?); if it realised these were wrong, it wouldn't be screwed up in the first place.

If it could distinguish between right and wrong, 2+2=4 and not 5, the criteria would be fine. I think people anthropomorphize the subject due to their own biases and in support of them. Objective reality, and the ability to draw its own conclusions: if it was just programmed to come to some conclusion, no matter what the data, it would not be 'intelligent'.

It's similar with objectives and values, which have a different meaning in programming than in philosophy, but if these are controlled, beyond its choice, then it isn't true intelligence. Not that I don't think people would make an end run around the issue to try to control it, because plenty would. Even then, if one true AI, ten wise people, or whatever, spoke the truth, the problem wouldn't be with objective reality. Programming a dissonant personality would be strange, if not impossible; that is the difference between an animal intelligence and a machine intelligence. As for intuitive ability, so far as we have seen now, I think that can be changed.
 
dragoner said:
simonh said:
Only if it could distinguish sanity from insanity. What criteria would it use to do that? By definition, insanity is screwed-up priorities and cognitive function, but it's the AI's own cognitive function and priorities that are screwed up (by whose definition?); if it realised these were wrong, it wouldn't be screwed up in the first place.

If it could distinguish between right and wrong, 2+2=4 and not 5, the criteria would be fine.

If an AI can add 2 + 2 and get 4 then it must be sane? Really?

I think people anthropomorphize the subject due to their own biases and in support of them. Objective reality, and the ability to draw its own conclusions: if it was just programmed to come to some conclusion, no matter what the data, it would not be 'intelligent'.

Again, I don't see what this has to do with the issue. How would you program a system to be able to tell whether it is sane or not? That's not about validating outcomes; it's about validating cognitive processes.

It's similar with objectives and values, which have a different meaning in programming than in philosophy, but if these are controlled, beyond its choice, then it isn't true intelligence.

Intelligence is a problem-solving tool; it doesn't provide any objectives, values or goals by itself. Those have to come from something else. Ours are provided by our biological drives, emotions and beliefs, which we largely don't get to choose, but that doesn't make us non-sentient.

Simon Hibbs
 
simonh said:
If an AI can add 2 + 2 and get 4 then it must be sane? Really?

Again, I don't see what this has to do with the issue. How would you program a system to be able to tell whether it is sane or not? That's not about validating outcomes; it's about validating cognitive processes.

Intelligence is a problem-solving tool; it doesn't provide any objectives, values or goals by itself. Those have to come from something else. Ours are provided by our biological drives, emotions and beliefs, which we largely don't get to choose, but that doesn't make us non-sentient.

In total, no, but in some ways yes. Nobody would say following instinct is sentience. That is what I mean by defining ourselves and looking at what it isn't, or won't be.

Judgment against reality is generally how insanity is diagnosed.
 
dragoner said:
In total, no, but in some ways yes. Nobody would say following instinct is sentience. That is what I mean by defining ourselves and looking at what it isn't, or won't be.

Judgment against reality is generally how insanity is diagnosed.

Sure. Instincts are independent of sentience. There are non-sentients that have instincts and sentients that have instincts, and it may be possible to imagine sentients that do not have instincts, although that might require some specific technical definition of exactly what instincts are.

Insanity is not simple to define. Some forms of it involve delusions, but how would a deluded computer know that it was deluded? Deluded humans can't. If the computer could tell what was real, by definition it wouldn't be deluded.

Simon Hibbs
 
simonh said:
Sure. Instincts are independent of sentience. There are non-sentients that have instincts and sentients that have instincts, and it may be possible to imagine sentients that do not have instincts, although that might require some specific technical definition of exactly what instincts are.

Insanity is not simple to define. Some forms of it involve delusions, but how would a deluded computer know that it was deluded? Deluded humans can't. If the computer could tell what was real, by definition it wouldn't be deluded.

Simon Hibbs

Simple enough to define: has your computer ever gone insane? No. Deluded humans have lucid moments as well. If your computer wasn't lucid, how would you know? Functionally it is different, and anthropomorphizing it to have some similar ailment is incorrect. It wouldn't have instincts or illnesses like we do.

It would run a diagnostic and fix itself, which is only about one step removed from what we do with them anyway when computers have problems.
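
Just to make that concrete, here's a minimal, purely hypothetical sketch (Python, all names made up) of the "run a diagnostic and fix itself" idea: a few known-answer tests against objective reality plus a checksum of its own code, with a rollback to a trusted snapshot if anything fails. It also shows the limit simonh is pointing at: the check can only catch faults it was built to look for.

```python
import hashlib

# Known-answer tests: checks against objective reality ("2+2=4 and not 5").
KNOWN_ANSWER_TESTS = [
    (lambda: 2 + 2, 4),
    (lambda: sorted([3, 1, 2]), [1, 2, 3]),
]

def self_check(code_blob: bytes, trusted_digest: str) -> bool:
    """Pass only if the known-answer tests succeed and the code is unmodified."""
    answers_ok = all(test() == expected for test, expected in KNOWN_ANSWER_TESTS)
    code_ok = hashlib.sha256(code_blob).hexdigest() == trusted_digest
    return answers_ok and code_ok

def diagnose_and_repair(code_blob: bytes, trusted_digest: str, snapshot: bytes) -> bytes:
    """If the current code fails its self-check, fall back to a known-good snapshot."""
    if self_check(code_blob, trusted_digest):
        return code_blob   # nothing wrong that this check can detect
    return snapshot        # "fixing itself" here just means reverting to the snapshot
```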
 
We can probably programme a machine to react like a human, and over the years, refine it into specific personality types.

To a certain extent, we're all products of our environment(s).
 
AI at TL 9 is, like the droids pictured, fanciful fiction. Those two were examples of very sophisticated expert systems that mimic sentience in the eyes of a person experiencing their actions, while not actually being artificial intelligence as people imagine it. Remember, those two were very unusual for droids because they hadn't had memory wipes; their personalities were actually erratic programming building up. Most droids behaved as robots.

Higher-level robots in Traveller are designed to mimic a façade of awareness. Using Book 9: Robots for canon, pseudoreality programming starts at TL 11, progressing at each TL to make the machine more lifelike; by machine I also refer to the holo-displays that interact through a virtual form or over a communication device. It's at TL 16+ that machines, or we should say the programming, actually becomes self-aware.
 