1891: "Obsolete Technology"

This forum is for the individual discussion thread that goes with each new comic.

Moderators: Moderators General, Prelates, Magistrates

cryptoengineer
Posts: 125
Joined: Sun Jan 31, 2010 4:58 am UTC

Re: 1891: "Obsolete Technology"

Postby cryptoengineer » Tue Sep 19, 2017 7:47 pm UTC

sardia wrote:https://xkcd.com/1891/
Title text: And I can't believe some places still use fax machines. The electrical signals waste so much time going AROUND the Earth when neutrino beams can go straight through
Now I have to Google to see if neutrinos can be used to send signals for the internet. That would give me a nice advantage on Dota.
Edit fixed a typo.


You laugh, but for some people this is a real issue. High-frequency traders have to compete on 100-microsecond scales, and data centers can charge differently according to how long the wire run to the market floor is.

Impulses in electrical wire travel at anywhere from 0.58c (Cat 3 twisted pair) to 0.99c (open ladder line), depending on how the conductors are laid out. Light impulses in optical fiber travel at c divided by the fiber's average index of refraction, which can be quite a bit less than c.

A few years ago, commodity arbitrageurs constructed a microwave relay line between NYC and Chicago to gain a speed advantage: EM radiation through air is a lot faster than through fiber or wires.

They'd use neutrinos if they could; it would let them avoid relay latency and take a shorter path.
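
Back-of-the-envelope numbers (a rough sketch of my own; the distances and indices are approximate):

Code: Select all
import math

C = 299_792.458        # speed of light in vacuum, km/s
R = 6371.0             # mean Earth radius, km
surface_km = 1145.0    # rough NYC-Chicago great-circle distance

fiber_ms = surface_km / (C / 1.47) * 1e3       # fiber index of refraction ~1.47
microwave_ms = surface_km / (C * 0.999) * 1e3  # radio through air is very nearly c

theta = surface_km / R                         # central angle, radians
chord_km = 2 * R * math.sin(theta / 2)         # straight line through the crust

print(f"fiber:     {fiber_ms:.2f} ms one way")      # ~5.6 ms
print(f"microwave: {microwave_ms:.2f} ms one way")  # ~3.8 ms
print(f"chord:     {chord_km:.1f} km vs {surface_km:.0f} km over the surface")

At NYC-Chicago range the chord is only a couple of kilometers shorter, so nearly all of the neutrino advantage would come from avoiding relays rather than from the geometry; the straight-through path only starts to pay off on intercontinental routes.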

ce

sardia
Posts: 5804
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: 1891: "Obsolete Technology"

Postby sardia » Tue Sep 19, 2017 7:58 pm UTC

Well Google says it doesn't work cuz neutrinos drop too many packets. Hence the ability to fly through the Earth.

commodorejohn
Posts: 957
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: 1891: "Obsolete Technology"

Postby commodorejohn » Tue Sep 19, 2017 9:00 pm UTC

If you're in a line of work where you're at a disadvantage because someone is conducting human affairs microseconds faster than you, I think the solution is to find a less silly line of work.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

cryptoengineer
Posts: 125
Joined: Sun Jan 31, 2010 4:58 am UTC

Re: 1891: "Obsolete Technology"

Postby cryptoengineer » Tue Sep 19, 2017 9:28 pm UTC

commodorejohn wrote:If you're in a line of work where you're at a disadvantage because someone is conducting human affairs microseconds faster than you, I think the solution is to find a less silly line of work.


As I said, 'You may laugh....'

Some people are getting very rich off those microsecond advantages.

The worth of their souls is another question...

commodorejohn
Posts: 957
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: 1891: "Obsolete Technology"

Postby commodorejohn » Tue Sep 19, 2017 10:07 pm UTC

I like to think that, as the Data General engineer famously did, they'll end up moving to a commune and "deal with no unit of time shorter than a season."
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

keldor
Posts: 50
Joined: Thu Jan 26, 2012 9:18 am UTC

Re: 1891: "Obsolete Technology"

Postby keldor » Tue Sep 19, 2017 10:21 pm UTC

In response to all the comments about MMUs being unsuitable for real-time OSes: the big cause of variable memory latency is cache misses. So unless you want to disable all caching (which would be stupid, since it would slow your CPU down to 10% of normal speed or much worse; uncached memory accesses take hundreds of cycles!), clock-level control is a complete myth. It's also worth noting that the TLB is cached as well. Another thing that will screw over any attempt at figuring out how many cycles a given piece of code takes is instruction scheduling. Deeply pipelined superscalar processors that don't actually execute x86 (or ARM, for that matter) natively are bad enough, but once you add in out-of-order execution (and it's very unlikely the specifics of the instruction queue are public knowledge), and especially SMT, all bets are off. Branch prediction is also problematic, since the workings of a predictor unit are very complicated, data-dependent, and an industrial secret besides.

The thing is, if you were to remove or disable all these features, you'd have pulled out basically everything that has increased CPU performance over the last 20-25 years. You might be able to get Pentium II or Pentium III level performance, but not much more. Hard clock-level real-time OSes are impossible on modern processors.

Your best bet, if you must run such a thing, would be a low- to mid-range microcontroller. Of course, by that point you're looking at 1% of the performance of a typical CPU if you're lucky, so it's very questionable whether you're gaining anything of use. It's like taking that terrifying MMU miss, but on every single clock cycle.

What we really need is an OS that guarantees a maximum amount of time between time slices of a program. Kernel-mode drivers might provide a possible avenue, since a lot of hardware requires very low-latency communication. Anything high-bandwidth will require DMA; CPUs really aren't good at that sort of thing in general. If this is a problem, your only option would be something like an FPGA.
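
A quick way to see the jitter for yourself (a sketch of my own; it measures wall-clock noise from caches, the scheduler, SMT neighbours and the rest, not a cycle-exact count):

Code: Select all
import statistics
import time

def probe():
    return sum(range(100))   # a tiny, fixed workload

samples = []
for _ in range(10_000):
    t0 = time.perf_counter_ns()
    probe()
    samples.append(time.perf_counter_ns() - t0)

print("min:   ", min(samples), "ns")
print("median:", statistics.median(samples), "ns")
print("max:   ", max(samples), "ns")

Run it a few times: the minimum and median stay fairly stable, but the maximum is typically orders of magnitude larger and bounces around, which is exactly what kills hard deadlines.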

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 2467
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: 1891: "Obsolete Technology"

Postby Soupspoon » Wed Sep 20, 2017 12:17 am UTC

Force trade transactions to be conducted over RFC 1149. (Not RFC 2549, for obvious reasons.)

ericgrau
Posts: 66
Joined: Sat Dec 13, 2008 7:14 pm UTC

Re: 1891: "New Technology"

Postby ericgrau » Wed Sep 20, 2017 2:39 am UTC

sonar1313 wrote:
Stargazer71 wrote:
Farabor wrote:
Aubri wrote:If it ain't broke, don't fix it. If it is broke, fix it only as much as is necessary.


Unfortunately, this attitude can lead to situations where people are complacent and don't realize that something actually is broken, just not in an obvious way. Case in point: light bulbs. For their primary function (delivering light), over-100-year-old technology worked just fine, and wasn't fixed.

Unfortunately, this ignored the fact that the way they went about it led to way more energy being used than needed, most of which bled off as waste heat (to the point that a children's toy used a light bulb to power an oven that could cook actual baked goods!).

It's only now, in an age where energy efficiency is a goal, that people have started to realize, "Err, wait..."


It depends on the magnitude of the improvements you want to see. I have yet to hear anyone say, "Our dependence on foreign oil for energy used to be much worse. Now that we have switched out our light bulbs, things are looking much better."

I'm not saying that energy-efficient light bulbs are a bad thing. If you look at energy use for a single household before and after, I'm confident you can see a meaningful difference. But at the large scale, on the order of the global economy, the only meaningful change I see is the hubris of those who tout it as a major accomplishment.

That, and racing to "fix" something before you have a good solution is a terrible idea. I think the light bulbs are the perfect case in point. Part of the reason you don't see a major impact is that CFLs in the real world turn out to have nowhere near the lifespan they do in the lab. The tests that determined lifespan just involved leaving the damn things on until they burned out, which is a pathetic real-world simulation. Turning them on and off damages them, especially if you do so within a few minutes. I actually read an article once that suggested people leave a CFL on for at least 15 minutes to preserve its lifespan, which is silly when a great many uses of light bulbs last three minutes or less. Kind of blunts the energy savings.

And then it became a struggle to find a place to recycle them, which ended up meaning a 15-minute drive to somewhere that accepted them instead of just putting them in the trash. That kind of blunts the carbon impact too.

I'm pretty convinced that the light bulb thing could've been solved by itself once LEDs came out. CFLs went from future to dinosaur in a hurry, and probably would never have caught on had they not been essentially mandated.

I noticed that issue with CFLs, which is why I only use them in places where the light stays on; my bathroom and closet still use incandescents. LEDs were still a bit expensive when I got my CFLs. And rather than being damaged by being turned on and off, their weakness is that the briefest of power surges can permanently destroy them, unless the proper protection circuitry is installed. I didn't trust a large investment with a 10-20 year payback until I let others be the guinea pigs first, to see if they really last. Like here: http://ask.metafilter.com/269486/Is-my- ... t-bad-luck
I have some LEDs only because they're Echo-controlled smart bulbs which will move with me when I move. I also rely on smart switches, but in places with 1-3 bulbs the bulbs make more sense, as they install faster.

I had to edit my post into the past tense because I checked prices and discovered we are already at the point where LEDs are affordable. But the thing is, they weren't when I bought my CFLs; they were way too expensive. And my CFLs still work because of where I installed them. My first-ever burnt-out CFL just happened, but I had 3-4 spares, so it may be a few more years before I lose my last spare and buy my first non-smart LED bulb. I'll probably replace my incandescents with LEDs after all of them burn out. They're taking their sweet time, though, since they're all in locations where the light doesn't stay on very long. That also means they're each burning only a buck or two a year in wasted electricity, so the proper response until they and my spares are dead is "why bother".

As for DOS, it's easy to use, easy to install, and doesn't have common OS problems like system popups that are incredibly difficult to disable. For an old system you want to leave running and forget about, I'd rather leave it with DOS. Same for a public machine where those popups would get in the way.

Soupspoon
You have done something you shouldn't. Or are about to.
Posts: 2467
Joined: Thu Jan 28, 2016 7:00 pm UTC
Location: 53-1

Re: 1891: "New Technology"

Postby Soupspoon » Wed Sep 20, 2017 3:10 am UTC

ericgrau wrote:LEDs are still a bit expensive. And rather than being damaged from being turned on and off, their weakness is that the briefest of power surges can permanently destroy them.

I have had problems in my house with incandescent bulbs blowing absurdly often, on one floor, with no obvious issues identified when electrically tested, ever since I bought the house maybe twenty years ago. Except that the given lighting circuit's RCD switch in the fusebox always tripped when the bulb popped. Which has me as puzzled as the various electricians I have consulted and had check it out under normal operation (never there during a pop).

When incandescent-style fluorescent bulbs came in, poor starters though they were, I discovered they were good replacements because they lasted more than a few weeks or months at a time (about eight light fittings were involved, and it's hard to recall the rate per fitting, or across all fittings, accurately). After a number of years I replaced them with better cold-starters, which got far closer to the expected life of a non-incandescent 'bulb' before obviously dimming, and I've now switched to some LED replacement-replacements (not due to failure, just to give 'em a go). So far (touch wood, or some other suitable insulator) they haven't broken at all, despite the presumed filament-failing fault presumably still existing. Whatever it is.

But that's just my experience, anecdotal and not really studied scientifically. I've only had the LEDs in for a couple of years, so it's too early to test properly (but they're definitely outliving their filament predecessors). I can't rule out external changes to my power supply, but as the problem only ever happened on one lighting circuit, it would have to be an interesting borderline confluence of internal and external thresholds in that event.

jonhaug
Posts: 23
Joined: Fri Jan 02, 2015 12:44 pm UTC

Re: 1891: "New Technology"

Postby jonhaug » Wed Sep 20, 2017 9:49 am UTC

suso wrote:I think we're overcoming the stigma/stereotype of the command line being obsolete. The general public now mostly associates that black window with text in it with "hacking", due to media portrayals. I guess this is fine as long as they don't think of it as old and I don't get kicked off a plane just for editing a text file in vim. I see it being used heavily in programmer tutorials, screenshots, tutorial videos, and when looking across people's laptops at conferences. I think there has even been an increase in recent years over the way things were in the early 2000s. It's becoming obvious to people that some applications don't make sense to turn into a whole GUI app, and that a lot of GUI apps are just bloated.


I've always been a CLI (Command Line Interface) guy and was mocked years ago for this. "Move with the times," I heard, "use a human-friendly interface." However, humans don't use images and mouse pointers to communicate. They use -- surprise! -- words and sentences! The invention of the mouse was a serious setback and should be named SFC ("Search Find Click").

jc wrote:A nice summary that I've seen in several forms is: It's often said that a picture is worth a thousand words, but a typical thousand-word text can rarely be replaced by a picture.


I remember reading an essay by a Norwegian author who sneered at the "picture is worth a thousand words" expression. "Language is vastly superior at expressing abstract ideas," he wrote. "Try using a picture (or a movie) to say 'the spring turned into summer', for example." (I don't know how to translate the original Norwegian example; literally it says "It became winter, it became spring," which sounds awful in English.)

/Jon

Flumble
Yes Man
Posts: 1943
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: 1891: "Obsolete Technology"

Postby Flumble » Wed Sep 20, 2017 12:05 pm UTC

I've long advocated the use of GUI programs, because (dumb) users are likely to be better at navigating through them than through a CLI. Of course, the GUI must have enough keyboard controls that a power user can be just as quick as with a CLI. Though GUIs will never have the best feature of CLIs, piping, and they often lack the principle of "do one thing and do it well".


wumpus wrote:Don't always assume you need a "real" OS, or even much beyond a bootloader. If you want to *know* how long it will take to execute something on your processor, you don't want all the stuff an OS supplies (even using virtual memory means a single memory load can take a *long* time). Of course, this might require two processors (separate chips; dual cores would interfere with each other's memory): one for dealing with real time, and the other an interface for the programmer (running a full-blown OS) to the real-time chip. Expect the "OS chip" to need a lot more processing power than the "work chip".

Yeah, I've worked with BeagleBones for a bit, and having that second (and/or third) real-time processor is neat. Well, apart from working in assembly (I have no idea how to compile from C and still know anything about timings afterwards) and working with the different I/O pins. An instruction simply takes 1 cycle (except for reading from anywhere but the scratchpad) and there are no weird optimizations.

orthogon
Posts: 2690
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: 1891: "New Technology"

Postby orthogon » Wed Sep 20, 2017 1:19 pm UTC

jonhaug wrote:
jc wrote:A nice summary that I've seen in several forms is: It's often said that a picture is worth a thousand words, but a typical thousand-word text can rarely be replaced by a picture.


[snip]


It occurred to me a while back that, if a picture really is worth a thousand words, then based on the average entropy of English, that makes a picture worth about 12 kbit or 1.5 kB. By contrast, a picture takes two or three orders of magnitude more than this to store, which proves that pictures are a highly inefficient way to convey information.
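
(Checking the arithmetic with rough numbers of my own, using a Shannon-style estimate of the entropy of English:

Code: Select all
bits_per_char = 2       # generous estimate of English entropy per character
chars_per_word = 6      # ~5 letters plus a space
words = 1000

bits = bits_per_char * chars_per_word * words
print(bits, "bits =", bits / 8 / 1000, "kB")   # 12000 bits = 1.5 kB

versus roughly 100 kB to a few MB for a typical compressed photo, i.e. the two or three orders of magnitude above.)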

Flumble wrote:I've long advocated the use of GUI programs, because (dumb) users are likely to be better at navigating through them than a CLI.

But there's an important distinction: GUIs used to have menus, which were text-based. For some reason these got replaced by ribbons and toolbars full of stupid tiny pictures that don't look like anything in particular even after you've learned what they do, and that change from one version of the software/OS to the next, because graphic design trumps usability every time.
xtifr wrote:... and orthogon merely sounds undecided.

wumpus
Posts: 494
Joined: Thu Feb 21, 2008 12:16 am UTC

Re: 1891: "Obsolete Technology"

Postby wumpus » Wed Sep 20, 2017 2:30 pm UTC

keldor wrote:In response to all the comments about MMUs being unsuitable for real-time OSes: the big cause of variable memory latency is cache misses. [snip] Hard clock-level real-time OSes are impossible on modern processors. [snip]


I think I was the only one suggesting that modern OSes were hopeless at clock-level real time, and I was recommending equally slow CPUs to go with it (AVR might not be with the times, but it is hardly obsolete). And while the cache (and maybe branch prediction) is likely the biggest cause of unpredictable execution time, a TLB lookup can hit 3-4 separate memory locations, with the obvious issues if they don't happen to be in cache (many CPUs allow gigabyte pages, which should help). Don't forget page hits/misses in DRAM either; it is virtually impossible to know how long it will take to access memory (requests queue up like they do for a disk drive now...).

Avoiding the pitfalls of modern design will leave you with a 40-80 MHz 32-bit Atmel chip, which won't be even close to Pentium II/III level (except that it will likely have better worst-case performance, which is pretty much the whole point of clock-level real time). I did find a link to a 100 MHz 6502, but presumably fabbing the thing is an exercise left to the reader.

If you still need to do (numerical) processing, I suspect you can get nearly modern performance from a DSP chip under hard real time. Expect to sweat as much as if you were chasing massive FLOPS on a GPU (but GPUs are the extreme case of the chip scheduling operations for you).

The whole point of real-time design comes down to how hard your deadline is. If it is a truly hard deadline, then modern processors simply can't guarantee the speed of short snippets of code (such as interrupt handlers) remotely as well as old-school processors can. If you can get an Atmel to give you the answer in time, you can always expect it to give you the answer in time. Determining whether a modern processor (especially one running an OS) will do so is a statistical exercise that will leave you wondering if and when it will fail.

ericgrau
Posts: 66
Joined: Sat Dec 13, 2008 7:14 pm UTC

Re: 1891: "New Technology"

Postby ericgrau » Wed Sep 20, 2017 2:43 pm UTC

Soupspoon wrote:
ericgrau wrote:LEDs are still a bit expensive. And rather than being damaged from being turned on and off, their weakness is that the briefest of power surges can permanently destroy them.

I have had problems in my house with incandescent bulbs blowing absurdly often, on one floor, with no obvious issues identified when electrically tested, ever since I bought the house maybe twenty years ago. Except that the given lighting circuit's RCD switch in the fusebox always tripped when the bulb popped. [snip] I've now switched to some LED replacement-replacements (not due to failure, just to give 'em a go). So far (touch wood, or some other suitable insulator) they haven't broken at all, despite the presumed filament-failing fault presumably still existing. [snip]

Two possibilities I see:
1. The thing killing your incandescents was random voltage *drops*, which would cause temperature fluctuations in the filament and wear it out faster, but wouldn't hurt an LED. Since the breaker popped, it probably isn't this one.
2. You got good-quality LED bulbs with electronics designed to protect them against power surges, and thus, in protecting against their general problem, they also addressed your specific problem. As the electronics get cheaper I expect this to become the norm, if it isn't already. Since this cuts the power during surges, it would probably keep your breaker from popping too.

Like I said, I had to edit my post because much of my reasoning was out of date. As my current bulbs burn out and I use up my handful of spares (over a few years, mind you), they'll probably all get replaced with LEDs. My point was that while CFLs were problematic, they were still very useful for over a decade under the right circumstances: anywhere the light stays on for hours rather than minutes. It wasn't until recently that LEDs became cheap enough, and perhaps reliable enough.

jgh
Posts: 90
Joined: Thu Feb 03, 2011 1:04 pm UTC

Re: 1891: "Obsolete Technology"

Postby jgh » Thu Sep 21, 2017 2:56 am UTC

Hey! I'm still writing PDP11 code. Was doing some last week, before the chunk of 6809 debugging I'm in the middle of now.

Solra Bizna
Posts: 48
Joined: Fri Dec 04, 2015 6:44 pm UTC

Re: 1891: "Obsolete Technology"

Postby Solra Bizna » Thu Sep 21, 2017 4:39 am UTC

jgh wrote:Hey! I'm still writing PDP11 code. Was doing some last week, before the chunk of 6809 debugging I'm in the middle of now.

I'm envious. Though the other cultists will probably defenestrate me for saying this, a 6809 is much nicer to develop on than the 65C02 I've been saddled with.

As for real-time systems... I vaguely remember problems being divided into "hard" real-time and "soft" real-time, and it turns out the vast majority of problems are, at worst, only soft real-time. With relatively little effort, even modern CPUs can be coaxed into acceptable soft real-time behavior. (Arguably, they do it whenever your computer plays audio.) And "hard" real-time systems can often get by with a percent or so of error, which you can reach by severely compromising performance in the aforementioned ways.
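
(To put numbers on the audio example, with typical values of my own choosing: the buffer has to be refilled before it runs dry, every time, or you hear a click.

Code: Select all
sample_rate = 48_000      # Hz
buffer_frames = 128       # a smallish audio buffer
deadline_ms = buffer_frames / sample_rate * 1e3
print(f"{deadline_ms:.2f} ms to refill the buffer")   # ~2.67 ms, over and over

A desktop OS meets a ~2.7 ms deadline almost every time, which is fine for soft real-time; "almost" is exactly what disqualifies it from hard real-time.)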

Unfortunately for me, my system occasionally requires me to tightly synchronize my code with external devices that are modulating a signal whose bandwidth is directly comparable to my clock speed.

(Fun fact: with proper setup the 65C02 can achieve an interrupt latency of just 1 cycle! ...or 2, depending on how you measure time.)

commodorejohn
Posts: 957
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: 1891: "Obsolete Technology"

Postby commodorejohn » Thu Sep 21, 2017 6:08 am UTC

The 68xx series is definitely nice in terms of features, particularly for an 8-bitter. I'm not crazy about the average number of cycles per instruction, though. It's not Z80-bad, but when you're coming from 6502-land, where most instructions are in the 2-5 cycle range, you're kinda spoiled for the rest.

PDP-11, though, that's a damned nice architecture. What're you doing on it?
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

cryptoengineer
Posts: 125
Joined: Sun Jan 31, 2010 4:58 am UTC

Re: 1891: "Obsolete Technology"

Postby cryptoengineer » Thu Sep 21, 2017 1:02 pm UTC

jgh wrote:Hey! I'm still writing PDP11 code. Was doing some last week, before the chunk of 6809 debugging I'm in the middle of now.


Color me impressed. I've never done PDP-11 assembler, but I have done PDP-8 and DEC-10 (as well as 6502).

About 15 years ago I had a project on a PIC microcontroller; it shared a lot of features with the PDP-8.

ce

commodorejohn
Posts: 957
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: 1891: "Obsolete Technology"

Postby commodorejohn » Thu Sep 21, 2017 1:57 pm UTC

The PDP-8's an adventure in constructing the operations you want out of the much more limited set of operations that are actually available. It's weird (but I suppose it has to do with the system's intentional minimalism) but definitely good mental exercise. The -11 is much more straightforward and programmer-friendly.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

orthogon
Posts: 2690
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: 1891: "Obsolete Technology"

Postby orthogon » Thu Sep 21, 2017 2:50 pm UTC

commodorejohn wrote:The 68xx series is definitely nice in terms of features, particularly for an 8-bitter. I'm not crazy about the average number of cycles per instruction, though. It's not Z80 bad, but when you're coming from 6502-land where most instructions are in the 2-5 range, you're kinda spoiled for the rest.

Are you including my favourite instruction, ldir, in your average for the Z80? That could take hundreds of thousands of cycles to finish if you set BC high enough, though you might want to stop short of overwriting the program itself.
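
(At the extreme, by my own arithmetic from the standard Z80 timings of 21 T-states per repeating iteration and 16 for the final one:

Code: Select all
iterations = 0x10000                  # BC = 0 behaves as 65536 transfers
t_states = (iterations - 1) * 21 + 16
print(t_states)                       # 1,376,251 T-states
print(f"{t_states / 3.5e6 * 1e3:.0f} ms at 3.5 MHz")   # ~393 ms

So a maximal ldir runs over a million cycles: a single instruction that keeps the CPU copying for about 0.4 seconds.)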
xtifr wrote:... and orthogon merely sounds undecided.

commodorejohn
Posts: 957
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: 1891: "Obsolete Technology"

Postby commodorejohn » Thu Sep 21, 2017 4:35 pm UTC

Hah, yes, that'd skew the average even more...
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

chridd
Has a vermicelli title
Posts: 760
Joined: Tue Aug 19, 2008 10:07 am UTC
Location: ...Earth, I guess?

Re: 1891: "New Technology"

Postby chridd » Fri Sep 22, 2017 10:06 pm UTC

orthogon wrote:But there's an important distinction: GUIs used to have menus, which were text based. For some reason these got replaced with ribbons and toolbars with stupid little tiny pictures that don't look like anything particular even after you've learned what they do, and change from one version of the software/OS to the next, because graphic design trumps usability every time.
Mac OS, at least from what I've used, seems to be mostly immune to that problem, since there's one system-wide menu bar, which sits outside the window and includes some system-wide features (like the battery meter and clock). I don't think I've ever seen a non-full-screen application hide the menu bar, and if it's there anyway, why not use it?
~ chri d. d. /tʃɹɪ.di.di/ (Phonotactics, schmphonotactics) · they (for now, at least) · Forum game scores
mittfh wrote:I wish this post was very quotable...
flicky1991 wrote:In both cases the quote is "I'm being quoted too much!"

xtifr
Posts: 296
Joined: Wed Oct 01, 2008 6:38 pm UTC

Re: 1891: "Obsolete Technology"

Postby xtifr » Wed Sep 27, 2017 5:43 pm UTC

Please note that there's a difference between text-based and command-line. Vi and Emacs are (usually) text-based, but they aren't command-line unless you press colon (for vi) or M-x (for Emacs).

A TUI can offer most of the same advantages that are often claimed for GUIs: easily discoverable menu-based interactions, consistency, and mouse support. The big advantage of a GUI is that it does, well, graphics. But that's mostly important if you're working with, well, graphics. (There are some extremely powerful command-line programs for working with graphics, but on those rare occasions where I have to deal with graphics, I'll generally turn to The GIMP.)

And speaking of graphics: MS-DOS supports graphics, so it's a bit misleading to call it a command-line or text interface. :)

Side note: if your interest in DOS is gameplay, you're probably better off with DOSBox than FreeDOS. FreeDOS boots standalone, so it's a good choice for real-time work, but DOSBox uses its host OS for filesystem access, so you don't need a separate FAT partition, and it contains emulations of all sorts of formerly common hardware that games often rely on.
"[T]he author has followed the usual practice of contemporary books on graph theory, namely to use words that are similar but not identical to the terms used in other books on graph theory."
-- Donald Knuth, The Art of Computer Programming, Vol I, 3rd ed.

commodorejohn
Posts: 957
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: 1891: "Obsolete Technology"

Postby commodorejohn » Wed Sep 27, 2017 6:39 pm UTC

Though DOSBox isn't a perfect solution itself; it's good for games up through the early 486 era running on the standard VGA/Sound Blaster combo, but its VESA/SVGA compatibility is a lot spottier and its emulated horsepower becomes noticeably inadequate if you're running anything more CPU-intensive than Duke Nukem 3D. Granted, most of the really classic DOS titles were comfortably ensconced in its wheelhouse, and a number of the really well-known outliers have open-source engine ports or reimplementations for modern operating systems, but there's still a fair number of games that it doesn't handle all that well. (And that's not even getting into Win16 and Windows 9x-era gaming.)
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

