That was probably the sanest possible response to prolonged exposure to an interactive AI. There's a fairly famous animation problem: the closer something gets to human (or in this case sentient) behaviour, the more we notice its flaws. An animated skeleton that moves with 80% fidelity "looks" much more lifelike than one animated at 99% fidelity, even though the latter matches human motion much more closely than the former. Basically, the fewer errors there are, the more the remaining errors stand out to the pattern matching we inherited from our ancestors.
In terms of skilled AI, I assumed the problems with abstract representation remained: you can get programs to perform in a remarkably skilled fashion, but only within very narrow parameters. The narrower your parameters, the more "intelligent" the system seems. This is great for simulations and games but terrible for anything that has to interact with chaotic systems (i.e. the real world).
In terms of so-called "machine life", created by mimicking basic learning behaviours and functions (mostly from insects, which have shockingly simple neural structures), I didn't go there. B5 could, presumably, have a mechanical race of this latter type. However, its "intelligence" would be so alien to that of organic life that it seems an unlikely inclusion in a game focused on the relationships between races - if it could communicate with us, what kind of relationship could we have with it? How would its "mind" be structured? Would it even have one as we understand the term?
Now, you could have a lot of fun with the above speculation in something derived from David Brin's work. In something derived from JMS's, though, it seems wiser to stick to things more in keeping with the setting's focus: things we can talk to, which have the capacity to make moral choices.
Shannon
Current Status: Deciphering "uah...uha...uhhha....uah.". It contains meaning. I know this to be true.