War of the workstations: How the lowest bidders shaped today’s tech landscape

Feature Digging into stories of 1980s OSes, a forgotten war for the future of computing emerges. It was won by the lowest bidders, and then the poor users and programmers forgot it ever happened.

Retrocomputing is a substantial, and still growing, interest for many techies and hobbyists, and has been for a decade or two now. There’s only so much you can write about a game that’s decades old, but operating systems and the different paths they’ve taken still have rich veins to be explored.

Anyone who played with 1980s computers remembers the Amiga versus the Atari ST and other battles. But digging down past the stratum of cheap home computers and gaming reveals bigger, more profound differences. The winners of the battles got to write the histories, as they always do, and that means that what is now received wisdom, shared and understood by almost everyone, contains and conceals propaganda and dogma. Things that were once just marketing BS are now holy writ, and when you discover how the other side saw it, dogma is uncovered as just big fat lies.

The biggest lie

The first of the big lies is the biggest, but it’s also one of the simplest, one that you’ve probably never questioned.

It’s this: Computers today are better than they have ever been before. Not just that they have thousands of times more storage and more speed, but that everything, the whole stack – hardware, operating systems, networking, programming languages and libraries and apps – is better than ever.

The myth is that early computers were simple, and they were replaced by better ones that could do more. Gradually, they evolved, getting more sophisticated and more capable, until now, we have multi-processor multi-gigabyte supercomputers in our pockets.

Which gives me an excuse to use my favorite German quote, generally attributed to physicist Wolfgang Pauli: “Das ist nicht nur nicht richtig; es ist nicht einmal falsch!” (That is not only not right, it is not even wrong!)

The story is probably apocryphal, but someone is said to have asked John Glenn, one of America’s first people in space, what it felt like just before launch. Supposedly, he replied: “I felt about as good as anybody would, sitting in a capsule above a rocket, both of them built by the lowest bidder.”

Well, that is where we are today.

The story about evolution is totally wrong. What really happened is that, time after time, each generation of computers developed until it was very capable and sophisticated, and then it was totally replaced by a new generation of relatively simple, stupid ones. Then those were improved, usually completely ignoring all the lessons learned in the previous generation, until the new generation was replaced in turn by something smaller, cheaper, and far more stupid.

Evolution

The first computers, of course, were huge room-sized things that cost millions. They evolved into mainframes: very big, very expensive, but a whole company could share one, running batch jobs submitted on punched cards and stuff like that.

After a couple of decades, mainframes were replaced by minicomputers, shrunk to the size of filing cabinets, but cheap enough for a mere department to afford. They were also fast enough for multiple users to work on them at the same time, via interactive terminals.

All the mainframes’ intelligent peripherals, networked to their CPUs, and their sophisticated, hypervisor-based operating systems with rich role-based security – all of it was just thrown away.

Then, it gets more complicated. The conventional story, for those who have looked back to the 1970s, is that microcomputers came along, based on cheap single-chip microprocessors, and swept away minicomputers. Then they gradually evolved until they caught up.

But that’s not really true.

First, and less visibly because they were so expensive, department-scale minicomputers shrank down to desk-sized, then desk-side, and later desk-top workstations. Instead of being shared by a department, these were single-user machines. Very powerful, very expensive, but just about affordable for one person – as long as they were someone important enough.

Unfortunately, though, in the course of being shrunk down to single-user boxes, most of their ancestors’ departmental-scale sophistication was thrown away. Rich file systems, with built-in version tracking? Gone, because hard disks cost as much as cars. Clustering, enabling a handful of machines costing hundreds of thousands to work as a seamless whole? Not needed, gone. Rich built-in groupware, enabling teams to cooperate and work on shared documents? Forgotten. Plain-text email was enough.

Meanwhile, down at the budget end, at the same time as these tens-of-thousands-of-dollar single-user workstations, dumb terminals evolved into microcomputers. Every computer in the world today is, at heart, a “micro.”

At first, they were feeble. They could hardly do anything. So, this time around, we lost tons of stuff.

Hard disks? Too expensive. Dropped. Multitasking? Not enough memory. Removed. Choice of programming languages? Retained on the 1970s CP/M machines; then, when machines got cheaper still in the early ’80s, dropped: kids don’t need that. Shove BASIC in a ROM; that will do. The machines mostly got used for playing games anyway.

Early micros could handle floppy disk drives and had a DOS, but at the budget end, even those got eliminated: too expensive. Instead, you got an audio cassette recorder.

How we got here

Those early-1980s eight-bit micros – the weakest, feeblest, most pathetic computers since the first 1940s mainframes – are the ancestors of the computers you use today.

In fact, of all the early 1980s computers, the one with the most boring, unoriginal design, the one with no graphics and no sound – that, with a couple of exceptions, is the ancestor of what you use today: the IBM PC, which was expanded and enhanced over and over again to catch up with, and eventually, over about 15 years, exceed the abilities of its cheaper but cleverer rivals.

It is a sort of law of nature that when you try to replace features that you eliminated, the result is never as good as if you designed it in at the beginning.

We run the much-upgraded descendants of the simplest, stupidest, and most boring computer that anyone could get to market. They were the easiest to build and to get working. Easy means cheap, and cheap sells more and makes more profit. So they are the ones that won.

There’s a proverb about choosing a product: “You can have good, fast, and cheap. Choose any two!”

We got “fast” and “cheap.” We lost “good,” replaced by “reliable,” which is definitely a virtue, but one that comes with an extremely high price.

What we lost

Like many a middle-aged geek, I went through a phase of collecting 1980s hardware, because what had been inaccessible when it was new – I couldn’t afford it – was being given away for free. However, it got bulky and I gave most of it away, focusing on battery-powered portable kit instead, partly because it’s interesting in its own way, and partly because it doesn’t take up much room. You don’t need to keep big screens and keyboards around.

That led me to an interesting machine: the other line of Apple computers, the ones that Steve Jobs had no hand in at all.

The machine that inspired Jobs, which led to the Lisa and then the Mac, was of course the Xerox Alto, a $30,000 deskside workstation. In Jobs’ own words, he saw three amazing technologies that day, but he was so dazzled by one of them that he missed the other two. He was so impressed by the GUI that he missed the object-oriented graphical programming language, Smalltalk, and the Alto’s built-in ubiquitous networking, Ethernet.

The Lisa had none of that. The Mac had less. Apple spent much of the next 20 years trying to put them back. In that time, Steve Jobs hired John Sculley from PepsiCo to run Apple, and in return, Sculley fired Jobs. What Apple came up with during Sculley’s reign was the Newton.

I have two Newtons, an Original MessagePad and a 2100. I love them. They’re a vision of a different future. But I never used them much: I used Psions, which had a far simpler and less ambitious design, meaning that they were cheaper but did the job. This should be an industry proverb.

The Newton that shipped was a pale shadow of the Newton that Apple originally planned. There are traces of that machine out there, though, and following them is what led me to uncover the great computing war.

The Newton was – indeed, still is – a radical machine. It was designed to live in your pocket and to store and track your information and habits. It had an address book, a diary, a note-taking app, astonishing handwriting recognition, and a primitive AI assistant. You could write “lunch with Alice” on the screen, and it would work out what you wrote, analyze it, work out from your diary when you normally had lunch and from your call history where you took it most often and which Alice you contacted most often, then book a time slot in your diary and send her a message asking if she’d like to come.

It was something like Siri, but 20 years earlier, and in that time, Apple seems to have forgotten all this: It had to buy Siri in.

NewtonOS also had no file system. I don’t mean it wasn’t visible to the user; I mean there wasn’t one. It had some non-volatile memory on board, expandable via memory cards – huge PCMCIA ones, credit-card-sized but half a centimetre thick – and it kept stuff in a sort of OS-integrated object database, segregated by function. The stores were called “soups” and the OS kept track of what was stored where. No file names, no directories, nothing like that at all.
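
The real soup API is long gone, but the shape of the idea is easy to sketch. Here is a minimal, hypothetical analogy in Python – the names Soup, SoupUnion, and query are invented for illustration and are not NewtonOS calls – showing an app that stores and finds records without ever naming a file or a directory.

    # A toy analogy for NewtonOS "soups": no file names, no directories.
    # Applications add tagged records; the system decides where they live
    # and answers queries across every store it knows about.
    # All class and method names here are invented for illustration.

    class Soup:
        """One store: internal memory, or a PCMCIA card."""
        def __init__(self, location):
            self.location = location
            self.entries = []          # records live here, not in named files

        def add(self, **slots):
            self.entries.append(dict(slots))

    class SoupUnion:
        """The OS-level view: every store it knows about, queried as one."""
        def __init__(self, *soups):
            self.soups = soups

        def query(self, **criteria):
            for soup in self.soups:
                for entry in soup.entries:
                    if all(entry.get(k) == v for k, v in criteria.items()):
                        yield entry, soup.location

    internal = Soup("internal")
    card = Soup("pcmcia-card-1")

    internal.add(kind="contact", name="Alice", phone="555-0100")
    card.add(kind="note", text="lunch with Alice")

    # The app never says where anything is stored; it just asks.
    for entry, where in SoupUnion(internal, card).query(kind="contact", name="Alice"):
        print(entry, "is stored on", where)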

Apps, and some of the OS itself, were written in a language called NewtonScript, which is very distantly related to both AppleScript on modern macOS and JavaScript. But that was not the original plan. That was for a far more radical OS, in a more radical language, one that could be developed in an astounding graphical environment.

The language was called Dylan, which is short for Dynamic Language. It still exists as a FOSS compiler. Apple seems to have forgotten about it too, because it reinvented that wheel, worse, with Swift.

Have a look. Dylan is amazing, and its SK8 IDE even more so (a few screenshots and downloads are left). Dylan is very readable, very high-level, and before the commercial realities of time and money prevailed, Apple planned to write an OS, and the apps for that OS, in Dylan.

Now that is radical: Using the same, very high-level, language for both the OS and the apps. It was feasible because Dylan is built as a layer on top of one of the oldest programming languages that’s still in active use, Lisp.

Both Smalltalk and Lisp are very much still around. For both, there are commercial and FOSS versions. Both can run on the .NET CLR and on the JVM. There’s even a Smalltalk that runs in your browser on the JavaScript engine. The primary text editor of the Lisp environment is still widely used today, mostly by old-timers.

But these are only traces. They are faint memorials left after the war. Because once, these things were not just slightly weird languages that ran on commodity OSes.

They were OSes in their own right

I started digging into that, and that’s when the ground crumbled away and I found that, like the reveal in some very impressive CGI special-effects shot, I wasn’t excavating a graveyard but a whole, hidden, ruined city.

Once, Lisp ran on the bare metal, on purpose-built computers that ran operating systems written in Lisp, ones with GUIs that could connect to the internet.

And if you find the people who used Lisp machines … wow. They loved them, with a passion that makes Amiga owners look like, well, amateurs, hobbyists. This level of advocacy makes Vi versus Emacs look like a playground squabble.

Some of the references are easy to find. There’s a wonderful book called The Unix-Haters Handbook [PDF] which I highly recommend. It’s easy to read and genuinely funny. It’s a digest of a long-running internet community from the time when universities were getting rid of Lisp machines and replacing them with cheaper, faster Unix workstations – Sun boxes and things like that.

Lisp machine users were not impressed. For them, Unix was a huge step backwards. The code was all compiled, whereas Lisp OSes ran a sort of giant shared dynamic environment (it’s hard to explain something when the words for it have been lost). On a Unix machine, if you didn’t like the way a program did something, you had to go find the source code, edit it, save it in a new file, compile that file, restart the program to try it … and if it worked, find your existing binary, and replace it with the new one. Then you would probably find that you’d made a mistake and it didn’t work, and try again.

This is why, apart from the hardcore Gentoo users, we all outsource this stuff to OS vendors and Linux distributors.

On the Lisp machines, your code wasn’t trapped inside frozen blocks. You could just edit the live running code and the changes would take effect immediately. You could inspect or even change the values of variables, as the code ran. Developer and blogger Steve Yegge called it “living software.”
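
You can get a faint taste of this from any dynamic language with a REPL. The toy Python sketch below is an analogy, not a Lisp machine: it rebinds a function and changes a variable while the “program” is still running, and the next call simply picks up the new definitions.

    # A faint, toy echo of "living software": the loop looks names up at call
    # time, so rebinding them changes behavior without a restart. On a real
    # Lisp or Smalltalk system you would do this to the live image, from the
    # system's own editor; here it is only simulated within one script.

    import time

    greeting = "hello"

    def handler(n):
        return f"{greeting}, request {n}"

    def main_loop(ticks):
        for n in range(ticks):
            # Name lookup happens on every call, so a new 'handler' or a new
            # value of 'greeting' takes effect on the very next iteration.
            print(handler(n))
            time.sleep(0.1)

            if n == 1:
                # Change the value of a variable as the code runs.
                globals()["greeting"] = "bonjour"
            if n == 3:
                # "Edit" the running code: rebind the function mid-run.
                globals()["handler"] = lambda n: f"{greeting} (patched), request {n}"

    main_loop(6)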

Lisp machines booted slowly, but that didn’t matter much because you rarely cold booted them. At the end of the day, the OS wrote the values of all its objects and variables to disk – called “saving a world” – and then just stopped. When you turned it back on, it reread these values into memory, and resumed exactly where it was.
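
Very loosely, you can mimic the idea with Python’s pickle module: dump the live objects at “shutdown”, reload them at the next “boot”, and carry on. A real world save captured the entire running system – code, data, open windows – not just one dictionary, and the file name world.bin below is invented for this sketch.

    # A very loose analogy for "saving a world": write the live state out at
    # shutdown, read it back at the next boot, and resume where you left off.

    import os
    import pickle

    WORLD_FILE = "world.bin"       # hypothetical file name for this sketch

    def boot():
        """Reload the saved world if there is one; otherwise start fresh."""
        if os.path.exists(WORLD_FILE):
            with open(WORLD_FILE, "rb") as f:
                return pickle.load(f)
        return {"counter": 0, "notes": []}

    def save_world(world):
        """'Shut down' by writing every live object back to disk."""
        with open(WORLD_FILE, "wb") as f:
            pickle.dump(world, f)

    world = boot()
    world["counter"] += 1
    world["notes"].append(f"session {world['counter']}")
    print("resumed with:", world)
    save_world(world)              # the next run picks up exactly this state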

Most of this, incidentally, also applies to Smalltalk machines. That’s why these two are sometimes called languages of the gods.

This is what Steve Jobs missed. He was distracted by the shiny. He brought the world the GUI, but he got his team to reimplement it on top of fairly conventional OSes, originally in a mixture of assembly and Pascal.

He left behind the amazing rich development environment, where it was objects all the way down.

And we never got it back

We got networking back, sure, but not this.

Now Lisp and Smalltalk are just niche languages – but once, both of them were whole other universes. Full-stack systems, all live, all dynamic, all editable on the fly.

The closest thing we have today is probably JavaScript apps running in web browsers, and they are crippled little things by comparison.

The difference between the two, though, is where the biggest losses of the war came.

Smalltalk machines ran on relatively normal processors. Smalltalk is all about objects, and you can’t really handle objects at hardware level.

(Well, you can – a very expensive Hi-Fi manufacturer called Linn tried with a machine called the Rekursiv, but it flopped badly. So did Intel’s attempt to do a chip that implemented high-level stuff in hardware – not the Itanium, no, long before that, the iAPX 432.)

But Lisp machines ran on dedicated chips, and this is where stuff gets real. As in, the stuff that hits the fan.

There were several big vendors of Lisp machines. As we covered recently, Xerox sold them, and its Lisp OS is now FOSS.

But the bigger one, and the most influential, was Symbolics. At the risk of sounding like a hipster, “you’ve probably never heard of it.” Forgotten as it is now, the company was significant enough to own the first ever dot-com domain on the internet. It launched in 1980 with a commercial version of the MIT CADR Lisp machine, and made dedicated Lisp hardware until 1993.

The company’s dead, but the OS, OpenGenera, is still out there and you can run it on an emulator on Linux. It’s the end result of several decades of totally separate evolution from the whole Mac/Windows/Unix world, so it’s kind of arcane, but it’s out there.

There are a lot of accounts of the power and the productivity possible in Lisp and on Lisp machines.

One of the more persuasive is from a man called Kalman Reti, the last working Symbolics engineer. So loved are these machines that people are still working on their 30-year-old hardware, and Reti maintains them. He’s made some YouTube videos demonstrating OpenGenera on Linux.

He talks about the process of implementing the single-chip Lisp machine processors: a job done, by his account, with a fraction of the people and in a fraction of the time that comparable chips took elsewhere.

Now that is significant.

When different people tell you that they can achieve such a huge differential in productivity – one tenth of the people taking one tenth of the time to do the same job – you have to pay attention.


‘Better is the enemy of good’

I am not here to tell you that Lisp machines were some ultimate super workstation. An environment that is mostly semi-interpreted code, running in a single shared memory space, is not very stable … and when it crashes, if you don’t have a snapshot to go back to, you have pain in store.

The point here is not that this long-gone technology was better in every way. It wasn’t. But it did have advantages, and it’s instructive to look at some of these that were shared by both Lisp and Smalltalk machines.

They were written in one language, or mostly in one, all the way down. As original Smalltalk implementer Dan Ingalls put it: “An operating system is a collection of things that don’t fit inside a language; there shouldn’t be one.”

They had a pervasive model of data, all the way down the stack. In Smalltalk, everything is objects. In Lisp, everything is lists. What the Unix model offers by comparison is weak stuff: Everything is a file.

More big lies

The Unix model of computation was designed in response to Multics, the original all-singing, all-dancing 1960s OS. Unix was intended, in contrast, to be the ultimate in minimalism (although it very much is not anymore).

This shows up another of the big lies that everyone just takes as read, but this one has layers like a cabbage:

  • For speed, you need a language that’s close to the metal. That means it must be very simple. This has costs, but they’re worth it: the programmer must manage memory manually, which means they must be very, very careful.
  • But that’s hard, so to make life easier, you layer simpler languages on top. Early on, AWK and SED and so on; later, Perl and Python; then later still, runtimes such as the JVM and WASM, and languages on top of those, such as Clojure or Scala.
  • As the stack matures, it grows ever more layers. The higher layers are ever further from the metal.
  • Many of these languages are much easier but slower (see the sketch after this list). In the old days, you needed to rewrite code in a lower-level language, such as C++, for speed. So what if it’s huge? You don’t need all of it! So what if it’s hard to read? It was hard to write!
  • Then, over time, computers keep getting faster, so you can keep the inefficient implementation and carry on.
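
Here is a rough Python illustration of that “easier but slower” trade-off: both functions below compute the same total, but the built-in sum() does its looping a layer down, in the C runtime, while the explicit loop runs in the interpreter. The absolute timings depend on your machine; only the gap matters.

    # A rough sketch of the "easier but slower" trade-off: both functions
    # compute the same total, but sum() loops a layer down, inside the C
    # runtime, while python_loop() runs in the interpreter.

    from timeit import timeit

    def python_loop():
        total = 0
        for i in range(1_000_000):
            total += i
        return total

    def c_layer():
        return sum(range(1_000_000))   # the loop happens in the C layer

    assert python_loop() == c_layer()

    print("interpreted loop:", timeit(python_loop, number=20), "s")
    print("built-in sum:    ", timeit(c_layer, number=20), "s")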

The result is a folk belief that there is a necessary trade-off between “readable” and “fast.” This is one of the big assumptions behind both the Unix and Windows schools of OS design: that different languages are best for different jobs, and so you need a mixture of them. That, in turn, means the layers are sealed off from one another, because they have different models of variable storage, of memory management, and so on.

That’s one big lie.

Firstly, because the layers are not sealed off: higher-level languages are usually implemented in lower-level ones, and vulnerabilities in those permeate the stack.

For instance, a common way of trying to make stuff safer is to wrap it in a runtime or VM, but that doesn’t solve the problem; it creates new ones. Problem: The language eliminates whole types of error, but they persist in the VM. Problem: Now your code is dependent on the performance of the VM. Problem: Try to fix either of these and you risk breaking the code. Problem: Because the VM isn’t part of the OS, you end up with multiple VMs, all sharing these issues.

Secondly, there is an existence proof: multiple projects and products, successful in their time, show that if you pick the right language, you can build your entire stack all in one.

Lisp code is structured as lists – the very data structure that Lisp is designed to manipulate. That is the reason for its legendary plethora of parentheses. It also makes it very easy to write macros that manipulate program code, meaning programs can modify themselves. This sort of thing is why Neal Stephenson referred to it thus:

It is highly readable, and vastly powerful. Alan Kay, the designer of Smalltalk, said of Lisp:

These are powerful qualities – but possibly only to a certain type of mind.
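
Lisp’s own notation is the honest way to show “code is data”, but to stick to one language in this piece, here is a toy imitation in Python: an expression is just nested lists, and a “macro” is an ordinary function that rewrites those lists before they are evaluated. Real Lisp does this natively – the parentheses are the lists – and its macros are far more capable than this sketch.

    # A toy imitation of "code is data": an expression is nested lists, and a
    # "macro" is an ordinary function that rewrites those lists before they
    # are evaluated. Real Lisp does this natively; the parentheses are the lists.

    import operator

    OPS = {"+": operator.add, "*": operator.mul}

    def evaluate(expr):
        """Evaluate a nested-list expression such as ['+', 1, ['*', 2, 3]]."""
        if not isinstance(expr, list):
            return expr
        op, *args = expr
        return OPS[op](*(evaluate(a) for a in args))

    def expand_double(expr):
        """The 'macro': rewrite ['double', x] into ['*', 2, x] before evaluation."""
        if not isinstance(expr, list):
            return expr
        if expr[0] == "double":
            return ["*", 2, expand_double(expr[1])]
        return [expr[0]] + [expand_double(a) for a in expr[1:]]

    program = ["+", 1, ["double", ["+", 3, 4]]]   # source code, held as data
    expanded = expand_double(program)             # the program, rewritten
    print(expanded)                               # ['+', 1, ['*', 2, ['+', 3, 4]]]
    print(evaluate(expanded))                     # 15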

Dylan, however, shows that it need not be like that. If you lose the list-based notation, yes, there is a price in efficiency and power, but the result is readable by mere mortals. Dylan was not the only attempt to do this. There have been quite a few – PLOT, CGOL, sweet expressions, the cancelled Lisp 2 project, and more.

The plan was that the Newton would be a Dylan-based Lisp machine in your pocket. The Newton was Sculley’s baby. Jobs didn’t invent it, so he didn’t like it, derided it as a “scribble thing,” and on his return to Apple he killed it along with HyperCard.

In 1993, the Newton was Apple’s first Arm-powered computer, a CPU architecture it returned to 17 years later. It was planned to be a pocket Lisp workstation, and it launched the same year that Symbolics quit making Lisp processors and moved to the DEC Alpha [PDF] instead.

In the end, the big fight was dedicated hardware – custom processors running custom operating systems – versus a shared, lowest-common-denominator technology: Unix, running at first on off-the-shelf chips such as the Sun-1’s Motorola 68000 and later on RISC chips such as Sun’s SPARC and the MIPS R2000, launched the same year as Acorn’s RISC OS 2 on its ARM chips. Unix itself was a cross-platform OS, developed at Bell Labs and refined at the University of California, compiled from source written in a cut-down form of BCPL.

As is usually the way, the cheaper, simpler solution won. Commodity chips were replaced with faster RISC chips, broadly speaking designed to run compiled C code efficiently. Then, a decade or so later, the PC caught up: 32-bit x86 processors became comparable in performance to the RISC chips, and we ended up with a few varieties of Unix on x86, plus Windows NT. Unix, of course, was originally developed on the DEC PDP-7, moved to the PDP-11, and later to its 32-bit successor, the VAX. Windows NT, meanwhile, was designed by the chief architect of the VAX’s native OS, VMS, which is why NT is so visibly influenced by it. Instead of DEC’s BLISS language, though, NT is implemented in Unix’s native language, C.

From the perspective of a Smalltalk or Lisp machine, they are siblings, almost twins, with deep roots in DEC. They are representatives of a school of design called the “New Jersey approach” by Lisp luminary Richard Gabriel in his essay “Lisp: Good News, Bad News, How to Win Big” [PDF]. He compares it to what he calls the “MIT approach,” and his summary of the New Jersey way is called “Worse is Better.”

It’s well worth reading at least the summary, but the key points are this.

He continues:

The conclusion, though, is the stinger:

Time has certainly proved Dr Gabriel correct. Systems of the New Jersey school so dominate the modern computing industry that the other way of writing software is banished to a tiny niche.

Smalltalk evolved into Self, which begat JavaScript, which in the hands of a skilled Smalltalker can do amazing things – but it was only Smalltalk’s visual design that transformed the computer industry.

The Lisp text editor is now just one of the more arcane options on Linux boxes. Lisp itself is a fringe programming language, beloved by some industry heroes but ignored by most – even those who need the very best tools. As one famous Lisp programmer put it:

If you want to become a billionaire from software, you don’t want rockstar geniuses; you need fungible cogs, interchangeable and inexpensive.

Software built with tweezers and glue is not robust, no matter how many layers you add. It’s terribly fragile, and needs armies of workers constantly fixing its millions of cracks and holes.

There was once a better way, but it lost out to cold hard cash, and now, only a few historians even remember it existed. ®

Bootnote

For a look at a Lisp machine in action, as well as BTRON and IBM i, this talk from the Chaos Computer Congress entitled “What have we lost?” is worth a watch.
