Traveller, TAS, and AI

Coming to this convo late, but I did read through it. My personal take:

Mongoose perceived a problem, real or potential, and made a decision on how to address it. This is their right and obligation as a business; whether it is the right decision will be shown by their future success under the policy.

I perceive the same problem, as a potential one, but my 'business' needs are different, so I made a different choice. This, too, is my right and obligation as a 'business' owner. There is some overlap, and my 'business' depends on Mongoose allowing it to exist. Therefore, I will take note of any statements that Mongoose makes on the subject, evaluate if/how they affect me, and respond accordingly in both my public statements and my actions. This means that - at least at present - I cannot release, even at a zero price point, any compilations of Freelance Traveller on DTRPG under the TAS seal.

I believe that I've stated in other threads touching on this subject what Freelance Traveller's policy is with respect to AI, but I will state it again here: Textual content entirely generated by AI is not welcome. You may use AI to assist in working up an idea, but unless you are specifically trying to spotlight the use of AI in your game or in game prep [e.g., Timothy Collinson's article on the topic in May/June 2024], we expect that the work will be mostly your own, and that you will acknowledge the use of AI and which AI you used if you are including AI-generated text in your submission. We would appreciate, but do not require, acknowledgement if you have developed the idea with AI assistance, but the text is entirely your own.

Graphical content generated by AI is acceptable, but we again expect that you will acknowledge its use and identify the AI used. (Several covers have been AI-generated without retouching; they've been identified as by "Norman Nescio via DayBistro" (or whatever AI was involved).) This applies regardless of whether you have subsequently modified it sufficiently to allow copyright under US law.
 
A strawman:
- is not just disagreeing with someone.
- is not simply criticising an argument.

It is misrepresenting the original claim and then attacking that misrepresentation.

Put another way, I do not think it means what you think it means.
Wrong. He switched his argument once he was cornered and shown to have a flawed argument. I know EXACTLY what a straw man argument is.
 
@jwlovell
Any time anyone says we are not coming for your jobs, your money, your kids, your guns, your freedom of speech or religion or anything else, that is exactly what they are coming for.
Don’t get me wrong: just because I advocate for AI as a tool, I have no delusions about how dangerous a tool it can be now, and - with no apocalyptic fictional exaggeration - how dangerous it will be in just a few years. The true magnitude of that danger, and the ways jobs and society as a whole can and likely will be impacted negatively, is evident, and seen so clearly by experts in the field that a paper was recently written detailing, in a sobering analysis, a potential month-by-month timeline over the next several years for one possible eventuality given a few bad choices - and how we as a society are not ready to reach AGI, or superintelligence. These experts do not talk in terms of "what if" or "maybe", only whether these things pan out in a few years or in ten.

The lure of incredible utility, wealth, and discovery - with rapid technological advances across every field - will be too irresistible not to risk pushing ahead, as seen by nearly every industry investing billions into leveraging and developing AI. Over the next few years, wrong decisions and poor ones could open a Pandora’s box that makes Skynet look like a children’s story.

So AI can’t be ignored; it has to be managed, adapted to, and dealt with strategically, if only to avoid the economic losses that organizations which ignore it will suffer relative to those that don’t. No blinders here - but people should understand where the real dangers are, whether to jobs, income, freedom, or life, and take practical measures rather than hide from it. Whether an AGI presents an existential threat to humanity is going to become very clear within the next ten years. In the meantime, I’m going to keep advocating for practical measures, not putting on blinders and hoping it all goes away.
 
 
impacted negatively

A concern I have is human misuse of it, such as creating some ai that isn't all that great but using it anyway for some not-very-good reason like cheaper cost, so that the people who are forced to interact with it suffer. SISO, management denying its failures and denying their responsibility for them - we've seen this before in decades past, when important people were beginning sentences with, "Well, the computer says..."

Maybe tech companies have some kind of secret good ai, because when I've interacted with it I haven't been particularly impressed. It's impossible to teach it to play ttrpgs, or to get it to remember much of a conversation, or even to get it to stop returning the political bias it was trained on. The art it produces is usually pap or slop, and it frequently "hallucinates", which means it lies, or tells you what it calculates you want to hear, or fabricates false information on the fly. It'll give false positives for facial identification, give people bad medical or psychiatric advice, or reach bad conclusions, and it won't be anybody's fault because "the ai did it". Any sort of redress will require lengthy and expensive legal investigations which are beyond the resources of the ordinary people who will then be its victims.

There's going to be a kind of blindness in which people bringing up serious problems will be ignored by decisionmakers because of the perceived benefits. For example, companies aren't hiring junior programmers because "there's an app for that". This means there won't be senior programmers to replace the retiring senior programmers, but they'll say "we'll make an app for that". They'll make ai robots to do unskilled labor jobs, then make robots to maintain the robots, and they'll gleefully caper and prance while they say things like "...maximizing shareholder value..." - and they'll be helpless when problems happen that need someone at the on-the-job level to use his head to solve them.
I suspect that over time contingencies will mount, expenses will mount, and the advantages won't be that great after all. Ordinary people will suffer from these failures, more people will die in wars fighting against ai robots and self guiding munitions, self-driving car failures, massive ai-driven hacking attacks by organized crime and state actors (many major scamming operations are run by Chinese triads working with the Chinese government), and exploitation of all the vulnerabilities digital automation brings. And we'll be totally screwed when the next major solar flare event turns it all to scrap.
 
A concern I have is human misuse of it, such as creating some ai that isn't all that great but using it anyway for some not very good reason like cheaper cost, and people who are forced to interact with it suffer.
Sounds like all the customer service outsourced to third-world countries today. Nothing different.
 
Didn't say it wasn't, but it is better than no one being fed, as is the case with AI.
Having large numbers of people not gainfully employed sets up conditions for riots, slavery or the culling of not particularly useful people.
This is the big picture of AI. It does away with jobs without leaving new avenues of employment. It betters no one when used in this manner.
Using it for microsecond adjustments and six-dimensional calculus is fine. Painting, writing, coding, customer service, and other jobs - these are abuses and degrade the human experience.
 