Mainframe Computer

But what is the reason for the restriction that the combined bandwidth must be less than the TL being used to build the computer?
It doesn't say. Probably because the combined effect of the "specialised equipment" used to link mainframes is hardware-based, and the TL of that equipment is assumed to match the TL of the mainframe. Later, software-configurable Message-Passing Interfaces are used to build clusters, so those aren't limited by hardware bandwidth.
It is standard practice to automatically drop individual computers from the cluster if their bandwidth drops to the point that they would hinder the cluster.
Only with Load-Balancing clusters, such as in web servers. MgT seems to have gone for High-Performance clusters and, arguably, those are not configured to the same standard.
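The drop rule described above amounts to a simple health check in a Load-Balancing cluster. A minimal sketch; the function name and threshold are made up for illustration, not from any Traveller rulebook:

```python
# Illustrative sketch: drop nodes whose bandwidth has degraded below a
# fraction of the cluster average, so they cannot hinder the cluster.
# All names and thresholds here are invented placeholders.

def prune_cluster(node_bandwidths, min_fraction=0.5):
    """Return the node bandwidths kept after dropping laggards."""
    if not node_bandwidths:
        return []
    average = sum(node_bandwidths) / len(node_bandwidths)
    return [bw for bw in node_bandwidths if bw >= average * min_fraction]

# A node degraded to bandwidth 1 drags down a cluster averaging 3.5,
# so it is dropped automatically.
print(prune_cluster([4, 4, 5, 1]))  # -> [4, 4, 5]
```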
 

We can transfer an unlimited amount of data between systems now; you just need more connectors to handle the increased traffic and still maintain your processing speed. 10,000 gallons is easier to put through 1,000 one-inch hoses than through a single one-inch hose.
 
I tackled this a couple of years ago (https://forum.mongoosepublishing.com/threads/retrotech-and-ships-computers.123925/)
The cluster rules create the backlog because of the geometric progression required to increase bandwidth (see the worked examples below).

I would recommend a simple fix that says:
This approach uses a geometric expansion of TL8 Personal Computer/1 units, upgraded through advancing tech levels. Though the base architecture remains unchanged, each TL step improves efficiency through better fabrication, denser integration, and increased bandwidth capacity.

To determine how many computers are needed in a cluster, square the additional bandwidth and divide it by the retrotech advantage: the difference between the current TL and TL8, raised to the power of the current TL. The result is the scaling factor for cluster size: the lower the result, the fewer computers required.
For example, a TL10 Improved Cluster/25 with 24 additional bandwidth computes as (24²) ÷ (2¹⁰) ≈ 0.56, while a TL15 Ultimate Cluster/60 with 59 additional bandwidth computes as (59²) ÷ (7¹⁵) ≈ 7.3 × 10⁻¹⁰. Higher tech levels allow clusters to extract more from the same foundation, turning an obsolete computer into a backbone for vast, modern supercomputing arrays.
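Taken literally, the proposed rule computes as sketched below. The function and names are mine, and it reproduces the formula as written rather than any official rule; note that the TL10 division as stated works out to 0.5625:

```python
def cluster_scaling_factor(current_tl, additional_bandwidth, base_tl=8):
    """Scaling factor for cluster size: square the additional bandwidth,
    then divide by the retrotech advantage, i.e. the difference between
    the current TL and TL8, raised to the power of the current TL.
    Illustrative sketch only."""
    retrotech_advantage = (current_tl - base_tl) ** current_tl
    return additional_bandwidth ** 2 / retrotech_advantage

# TL10 Improved Cluster/25 with 24 additional bandwidth:
print(cluster_scaling_factor(10, 24))  # 576 / 1024 = 0.5625
```

The lower the result, the fewer TL8 base units the cluster needs.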
 
Also, what defines a portable computer? The biggest one is 5kg at TL-7, Computer/0. Advanced and Superior Mid-Size computers weigh the same amount. Can they now count as portable computers and use the rules for Weight Reduction by TL?
Good question. In my mind I include both the amount of electronics needed to run the software and the size of the equipment needed to input and view the data. The electronics keep shrinking and so can be embedded in wafer jacks, controllers for prosthetic limbs, etc. But the size of a laptop can't really get smaller than 500 g or so, as you need a keyboard and screen. I guess that holds until you can integrate holographic displays and keyboards; then you get into the realm of rings as computers!
 
In theory, you can run parallel computers.

In practice, not on the same network on a spacecraft.

I have had several goes at trying to come up with a unified algorithm that allows scaling of computer performance, but they tend to veer off since the given published values don't really synchronize.
 
That would work.
If you felt like it, you could give all computers a volume, or a weight for the smaller computers (Portable, Hand-held, Mid-Size, etc.). Then just make Vehicle computers take up 1 Space at their base TL and free if the computer is 1 TL or more above its base TL. The same goes for Starship computers, except make it 1 ton instead of 1 Space.
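That suggestion boils down to a one-line rule. A minimal sketch, with hypothetical names; only the 1-Space-or-free rule comes from the post:

```python
def computer_spaces(installed_tl, base_tl):
    """Vehicle computers: 1 Space when built at the computer's base TL,
    free (0 Spaces) when built 1+ TL above it. For starships, read the
    result as tons instead of Spaces. Illustrative only."""
    return 0 if installed_tl >= base_tl + 1 else 1

print(computer_spaces(10, 10))  # -> 1: full Space at base TL
print(computer_spaces(12, 10))  # -> 0: miniaturised at higher TL
```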
 
ST: Voyager had an interesting take on the computer: theirs was a distributed network, but there was also a main core (and a sizable one at that). The distributed CPUs were biological, sort of like man-made gel-pack brains. They stored tremendous amounts of data and also had tremendous processing capability. And, like all computer tech, if you make it available someone will figure out ways to fill it all up (or it will end up too small because you keep stuffing data into it).

I think it's realistic to make it so that bigger IS better - but the thing comes down to where you draw the lines to be able to describe just why that big-ass and expensive 'mainframe' system you installed in your ship really IS so much better/worse than a different one that's way cheaper / more expensive. If you can fix the reasoning then you'll fix the scaling factor. And that tonnage isn't just a room, it's miles and miles of cabling, distributed storage and CPU nexuses, interface terminals, etc. All the detritus that comes with making it useful.
 
It needn't be complicated, but it would be nice to see something on:
Travellers creating software programs
doing away with "Bandwidth" and replacing it with a synthetic benchmark for costing software/hardware requirements
Expert Systems, cluster computing and Wafer Jacks, which need more clarity
introducing Net Running (I haven't seen this elsewhere.)
 
What you want is the ability to scale the capability of the computers.

Since the volume they take up is virtual, bandwidth need only be linked to cost, which would start to show diminishing returns after the stated technological level cap.
 
Heat.

Just make the key limitation (the thing that's simplified from many limiting factors into "bandwidth") mostly a matter of how hot the things run, always an issue in a vacuum.

But I would also point out you can do an awful lot at once with bandwidth/0 programs. It's largely the ones that can act instead of a person, or which give task roll modifiers, that go above that.

Or which can calculate interstellar jumps through another dimension with the accuracy to arrive near a planet.

And that IS getting into serious computing power.
 
Why make it that complicated? Just use TL of the Computer and TL of the Software. Make the rule that Software can only run on a Computer of its TL or above. Simple. No other mechanics are really needed. We tend to like to over complicate things in Traveller.
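That rule reduces to a single comparison. A minimal sketch; the function name is mine:

```python
def can_run(software_tl, computer_tl):
    """Software can only run on a computer of its TL or above."""
    return computer_tl >= software_tl

print(can_run(10, 12))  # True: a TL12 computer runs TL10 software
print(can_run(12, 10))  # False: a TL10 computer cannot run TL12 software
```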
 
It’s a holdover from Classic starship combat, where you could only have one or two programs running at a time and there was a distinct turn phase where you could swap software programs. It made things a little more tense and strategic, cat-and-mouse sort of.

But now we know what computers are actually capable of and the idea is quaint at best.

TL as a limiter is a great idea. But we need a new rules paradigm that’s not as fiddly as T5/MegaT but not as simplistic as MgT.
 
There needs to be a way for a military computer network at a given TL to be way more capable than the civilian bare bones model.

Can a type A trader computer at TL9 run everything a TL9 warship computer can?

Or are we doing it wrong.

Ship computers could be like the sensor arrays. Basic, advanced, military...


Ship computer - runs the ship and avionics
ship sensor computer - runs the ship's sensor and EW capability
ship weapons computer - runs the ship's weapon systems
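One way to capture that military/civilian split is a grade tier alongside TL, like the sensor arrays. The roles come from the post; the multipliers below are invented placeholders, not published values:

```python
# Illustrative sketch: same-TL computers differentiated by grade, so a
# TL9 warship computer outclasses a TL9 free trader's. Numbers invented.
GRADE_MULTIPLIER = {"basic": 1.0, "advanced": 1.5, "military": 2.0}

def effective_capability(tl, grade):
    """A hypothetical capability rating: TL scaled by grade."""
    return tl * GRADE_MULTIPLIER[grade]

print(effective_capability(9, "basic"))     # type A trader: 9.0
print(effective_capability(9, "military"))  # TL9 warship: 18.0
```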
 
It’s a holdover from Classic starship combat, where you could only have one or two programs running at a time and there was a distinct turn phase where you could swap software programs. It made things a little more tense and strategic, cat-and-mouse sort of.
PCs tended to upgrade their ship computers, a model 3 could run 5 and a model 4 could run 8.
But now we know what computers are actually capable of and the idea is quaint at best.
I have no idea what a TL12 computer will be capable of... :)
TL as a limiter is a great idea. But we need a new rules paradigm that’s not as fiddly as T5/MegaT but not as simplistic as MgT.
I agree.
 