Linux of Troy
When I was in diapers, there were generally two kinds of computer: the big, powerful, expensive mainframes and the dinky, affordable home PCs.
The former was typically owned by a university or a large business and used by many people in the institution at once. Because so many people worked on the same computer, the system had to be as robust as possible: no software glitch or user error could be allowed to bring the machine down, because if one did, dozens of people would lose their work at once, and if your company relied on that work, it could cost thousands of dollars.
These massive computers sported software designed around the assumption that many people would be working on the one machine at the same time; programs had to be able to run concurrently. These programs also tended to be very small so they could fit into memory alongside all the other programs everyone else was running.
Because the system was multi-tasking, it seemed only logical to make each program capable of communicating with the others. The result was an ecosystem of programs all working together to create a rich environment of computing functionality. And because each program was small and had very few features, it had very little room to hide bugs, which made for great stability.
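That cooperation survives in every Unix-like system today, most visibly in the pipe. As a minimal illustrative sketch (POSIX C, not anything from an actual mainframe), here is roughly what a shell does behind the scenes when you chain two small programs together with ls | wc -l:

    /* Two small programs cooperating through a pipe:
       the output of ls becomes the input of wc -l. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                     /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); exit(1); }

        if (pid == 0) {                /* child: ls writes into the pipe */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp ls");
            _exit(1);
        }

        /* parent: wc -l reads from the pipe */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc");
        return 1;
    }

Each program stays tiny and ignorant of the other; the operating system does the plumbing. That division of labor is the whole philosophy in miniature.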
Home computers were inexpensive, personable affairs. They lacked the heavy-duty computing power of their industrial cousins and thus emphasized efficiency over robustness, which meant the overhead of multitasking was not an option. That wasn't really a problem, since the computer serviced only one user at a time, but it did mean that each program had to supply all of its features for itself; it could not leverage any existing program to provide functionality. The programs were therefore bigger and more complex – which left room for bugs to lurk within each application.
This also meant that these computers could not offer (and did not need) the security features of a mainframe. Data corruption came from the cat walking across the one and only keyboard connected to the computer, not from the mistakes of a coworker at some distant dumb terminal.
As time went on, the hardware of the cheap home computer caught up with the big business machines in processing speed. Today's desktop computers make the old mainframes of the early '80s look rather pathetic. But the two philosophies of software design had never met – until today.
The computing world is at an interesting crossroads. We have programmers, administrators, and end users who grew up in the home-PC world, where every program is king of the system while it runs, where system resources and data need not be clearly divided, and where program interoperability is not the fundamental tenet of the system's design; now they're being asked to develop for, maintain, and use systems designed around the opposite of all these principles.
In this battle for the heart and soul of computing, Linux is entangled as Helen was, watching the Trojans and Greeks tear each other apart as they vie for her. Linux was born in the Unix world, having (or at least emulating) the security and interoperability features that were needed on the old mainframe computers. But it grew up in an era when the business world had accepted PC-descended systems as viable tools for real work.
This duality in Linux's background is reaching critical mass as more and more people accept it as a reasonable platform for both home computing and business servers. Programmers from all camps write software with their preferred philosophies – each leveling scorn at the other. Microsoft Certified Systems Engineers are now being asked by their employers to also maintain Linux web servers or databases, and are starting to ask these systems to work more like the ones they already know. End users are expecting all the shiny clicky-bits they have on their Macs and Windows PCs, and developers accustomed to those environments offer such solutions without regard to the costs in versatility, scalability, and stability that come with writing software in the Cathedral style.
Thanks to mobile computing, Microsoft seems less and less the fifty-ton gorilla in the room – it has even gone so far as to threaten to port Office to Linux. It seems likely that, as time goes on, the ARM CPU will do to the Pentium family what the x86 CPUs did to the SPARC and Alpha architectures: close the power gap and out-proliferate them at a lower price. Some now seem determined to have Linux preside over the last days of the desktop computer, and the refugees from the fallen mainframe world are a bit incensed by the influx of new arrivals from the crumbling PC world – not because they disapprove of the new company, but because these people seem to have tacky computing habits.
What will become of Linux – this thing that is neither Trojan nor Greek but is claimed equally by both? Will it transform into just another Windows wannabe that happens to be licensed under the GPL? Or will it retain the virtues it strove to emulate in its predecessors?
I don't care much since I'm running BSD.