Traveller, TAS, and AI

Coming to this convo late, but I did read through it. My personal take:

Mongoose perceived a problem, real or potential, and made a decision on how to address it. This is their right and obligation as a business; whether it is the right decision will be shown by their future success under the policy.

I perceive the same problem, as a potential one, but my 'business' needs are different, so I made a different choice. This, too, is my right and obligation as a 'business' owner. There is some overlap, and my 'business' depends on Mongoose allowing it to exist. Therefore, I will take note of any statements that Mongoose makes on the subject, evaluate if/how they affect me, and respond accordingly in both my public statements and my actions. This means that - at least at present - I cannot release - even at a zero price point - any compilations of Freelance Traveller on DTRPG under the TAS seal.

I believe that I've stated in other threads touching on this subject what Freelance Traveller's policy is with respect to AI, but I will state it again here: Textual content entirely generated by AI is not welcome. You may use AI to assist in working up an idea, but unless you are specifically trying to spotlight the use of AI in your game or in game prep [e.g., Timothy Collinson's article on the topic in May/June 2024], we expect that the work will be mostly your own, and that you will acknowledge the use of AI and which AI you used if you are including AI-generated text in your submission. We would appreciate, but do not require, acknowledgement if you have developed the idea with AI assistance, but the text is entirely your own.

Graphical content generated by AI is acceptable, but we again expect that you will acknowledge its use and identify the AI used. (Several covers have been AI-generated without retouching; they've been credited as by "Norman Nescio via DayBistro", or whichever AI was involved.) This applies regardless of whether you have subsequently modified the image sufficiently to allow copyright under US law.
 
A strawman is not just disagreeing with someone, and it is not simply criticising an argument.

It is misrepresenting the original claim and then attacking that misrepresentation.

Put another way, I do not think it means what you think it means.
wrong. He switched his argument once he was cornered and shown to have a flawed argument. I know EXACTLY what a straw man argument is.
 
Technically, a strawman argument is making up an argument, incorrectly claiming that it is your position, and then disproving it.
 
@jwlovell
Any time anyone says we are not coming for your jobs, your money, your kids, your guns, your freedom of speech or religion or anything else, that is exactly what they are coming for.
Don’t get me wrong: just because I advocate for AI as a tool does not mean I have any delusions about how dangerous a tool it can be now, or - with no apocalyptic fictional exaggeration - how dangerous it will be in just a few years. The magnitude of that danger, and the ways jobs and society as a whole can and likely will be impacted negatively, is evident. Experts in the field see it clearly enough that a recent paper laid out, in a sobering analysis, a potential timeline - broken down month by month over the next several years - of one possible eventuality given a few bad choices, and of how we as a society are not ready to reach AGI and superintelligence. These experts do not talk in terms of 'what if' or 'maybe', only of whether these things pan out in a few years or in ten. The lure of incredible utility, wealth, and discovery - with rapid technological advances across every field - will be too irresistible not to risk pushing ahead, as shown by nearly every industry investing billions into leveraging and developing AI. Over the next few years, wrong or poor decisions could open a Pandora's box that makes Skynet look like a children's story.

So AI can't be ignored; it has to be managed, adapted to, and dealt with strategically, if only to avoid conceding economic ground to the organizations that embrace it. No blinders here - but people should understand where the real dangers are, whether to jobs, income, freedom, or life, and take practical measures rather than hide from it. Whether or not an AGI will present an existential threat to humanity is going to become very clear within the next ten years. In the meantime, I'm going to keep advocating for practical measures, not putting on blinders and hoping it all goes away.
 