Better than what, you may ask? Better than:
-Older versions of Linux ... it is always improving.
-Windows. Hands down.
-Apple's operating system before Apple chose, essentially, Linux (a Unix variant) to run its eye candy and development environment on
But why, specifically, is it better?
One reason, apparently, is that the Linux kernel does not have a stable API.
So what, you ask, is a kernel and/or an API? Very simple: the kernel is the guts, the most basic part, the way-down way-down of the operating system. What is the API? It stands for Application Programming Interface. The API is the list of "functions" that a computer program can use. For instance, this might be found in the programmer's manual for a fictional API of some computer system:
USB_check(port#)
port# is the number of the port to check for a USB device. This function exists only for backward compatibility. You should use USB_IsIt(port#).
USB_New_Check(port#)
port# is the number of the port to check for a USB device. This function exists only for backward compatibility. You should use USB_IsIt(port#).
USB_IsIt(port#)
port# is the number of the port to check for a USB device. Returns an internal file handle for the USB device if it exists. Returns 0 if no USB device is attached. Returns -9 if the device reports an error.
Obviously, I made all this up, but it demonstrates something that can happen with an API. Once you have an API and everyone uses it, you can't really edit the API later. You have to leave all those functions in place, although of course you can add to it (very, very carefully, in some cases...). A "stable" API is an API that will grow and grow and grow, and that will always include code that has been superseded for one reason or another.
Check this out:
The Linux USB code has been rewritten at least three times. We've done this over time in order to handle things that we didn't originally need to handle, like high speed devices, and just because we learned the problems of our first design, and to fix bugs and security issues. Each time we made changes in our api, we updated all of the kernel drivers that used the apis, so nothing would break. And we deleted the old functions as they were no longer needed, and did things wrong. Because of this, Linux now has the fastest USB bus speeds when you test out all of the different operating systems. We max out the hardware as fast as it can go, and you can do this from simple userspace programs, no fancy kernel driver work is needed.
Now Windows has also rewritten their USB stack at least 3 times, with Vista, it might be 4 times, I haven't taken a look at it yet. But each time they did a rework, and added new functions and fixed up older ones, they had to keep the old api functions around, as they have taken the stance that they can not break backward compatibility due to their stable API viewpoint. They also don't have access to the code in all of the different drivers, so they can't fix them up. So now the Windows core has all 3 sets of API functions in it, as they can't delete things. That means they maintain the old functions, and have to keep them in memory all the time, and it takes up engineering time to handle all of this extra complexity. That's their business decision to do this, and that's fine, but with Linux, we didn't make that decision, and it helps us remain a lot smaller, more stable, and more secure.
This is a quote from Greg Kroah-Hartman.
In a proprietary model, you can't rewrite downstream code in step with important upstream changes. And people working downstream can't (often) know what is actually in the upstream code, so they can't make helpful suggestions. In the open-source model, there is a lot of collaboration.
As a result, with respect to USB connectivity, Linux is the least broken and the fastest of all operating systems on the planet.
So it's better.
On the other hand, the Linux kernel folk decided to completely rewrite the FireWire (IEEE 1394) code. They did this in a way that broke it for some devices. It also breaks packages with dependencies, such as libdc1394, which handles streaming video from cameras. The revised FireWire code is already being distributed in Fedora, but a revised libdc1394 is not.
It's also better when the OS quits supporting things that eventually become obsolete. A function no longer used is not a function: it is a chunk of useless disk space.
I may be wrong about this, but I'm pretty sure Apple's "Darwin" operating system is a FreeBSD/Mach 3 microkernel hybrid, not exactly GNU/Linux.
Actually, Apple chose FreeBSD as the base for OS X, not Linux. Both FreeBSD and Linux chose many GPL utilities for their OSes. But at the core, OS X is a BSD/Mach kernel.
Not keeping stable APIs makes the job of the OS developer easier, but it plays absolute Hell on anybody who develops downstream. From the application developer's perspective, this is a good reason to stay away from Linux, since your application will stop working every time the APIs you use are updated; an unstable API is more or less a "Fuck you, we're more important." to application developers. Their answer to the updates issue is to make your software open source and include it in the kernel tree, which, beyond surrendering any commercial potential for the software itself and opening a gargantuan can of IP-related worms about related works (like your Windows drivers for the same device, which will likely share some code), takes the change management process out of your hands.
Refusing to maintain stable APIs doesn't eliminate the fundamental change management problem, it just shifts it to the downstream developers instead - rather than having the upstream code branch to support all the different APIs that downstream code may have, the downstream code has to branch to support the changing APIs of the upstream code. It makes the OS look better on paper, since the API change management issues are now considered application problems, but it doesn't really solve anything.
It's also better when the OS quits supporting things that eventually become obsolete.
It's easy to say that when you're one of the very few people who don't use obsolete software or equipment.
Do you realise that people are still using computers that are ten years old? And even older software, mixed in with the latest stuff? Maybe Linux is only used by those who can afford a new computer every year.
MattXIV: Then how do you explain the fact that Windows, with its stable API, sucks donkey dicks, while Linux, with its dynamic API, strong interactive development community, and non-traditional business model, rocks?
Windows sucks donkey dicks because their programmers delight in making the simplest things complex and unwieldy. Actually I think the whole company is that way. I sometimes wonder if they have organized training seminars on making things complex, or whether it's just something that comes naturally to the people they hire.
Linux does not rock and neither does Windows. Microsoft sees itself as building a platform that others build their apps on top of. Because of that, they have to keep all sorts of foundations available and supported in all versions of Windows. This allows the *user* to run pretty much anything they want on whatever version of Windows they want. They also leave it up to the developer to start using the newer (and presumably better) APIs in order to serve their customers.
Linux (and BSD) take the other view: that it's the *user's* job to keep up. We get user-friendly niceties like compiling from source and editing conf files, and a security model that's a tad too draconian (on some Linux distros I've used, changing the system time and date required the root password).
Of course, when the Linux kernel people decide to break their APIs, all the downstream developers have to follow in line. That's not a very friendly thing to do, let alone pragmatic.
And before anyone jumps on me as a Microsoft apologist, back off. I use Windows, Macs, and Linux. I know them all.
Pierre
Wow. I honestly never thought I'd ever see anyone argue that API stability and backwards-compatibility were bad things.
Remember folks, the only purpose of an OS is to enable you to use a computer to run other software in order to achieve useful work. It's just a tool. I don't give a damn how elegant or up-to-date the OS is if it breaks already deployed software. Double that for essential line-of-business software. Also remember that there are people out there still running LOB apps that were written on freaking DOS!
Sheesh, and people wonder why Linux is having a hard time breaking into the corporate desktop...
Linux does not support the latest Seagate USB drives.
I've been using Linux for a few years now, and I have yet to have any software "break" because the API was updated. It may be a pain for developers, but it is probably also a pain for developers to work in the Microsoft environment.
At the same time, Linux is lean and mean. It is not true that one is expected to compile code ... that is an option if you like that sort of thing, nothing like an expectation.
The theory is interesting, but the reality is what counts here, I think.
Greg,
Linux will run faster and more reliably because it offloads API-related maintenance to the downstream developer. But this creates a lot more work for downstream developers and administrators, since they need to produce and install updates to match the new APIs more often. This design philosophy is why Linux is more popular for servers, which are generally fewer in number, monitored by knowledgeable people, and frequently have performance and stability prioritized over easy maintenance, than for PCs.
As the presentation itself says, this is one of the common reasons that people chose not to develop on Linux. You may not have much of an issue maintaining your own Linux PC, but I take it you're not writing device drivers or trying to maintain a client-server app over a couple hundred PCs. The API stability issue isn't specific to the OS-app level. It occurs within the development of a single app, in interpretative languages, in client-server development, and so on and so forth. This is a universal issue in software development.
The answer Kroah-Hartman gives is to more or less tell the downstream guys that it's their problem, not the kernel developers' - it's a very "over the wall" attitude, and the attempt to play off API instability as a strength rather than the downside of a tradeoff is an insult to the intelligence of anybody he's trying to convince to develop on Linux. Quit being such a fanboy and think about it - and don't cite the "reality" of the fact that Linux runs fine on your PC as proof that Linux is a superior platform for development. The "reality" that commercial software developers tend to prefer Windows to Linux for PC software is much more salient to the point at hand.
This commentator has obviously never tried to install Windows XP or Vista on an old system and has no idea what they are talking about at all! I haven't laughed this hard in awhile.
The reason for removing obsolete functions from the Linux API is that plenty of them are no longer being used. For instance, why would anyone still need kernel functions for 5 1/4" floppy drives? These functions are removed or folded into existing functions to keep the Linux kernel more streamlined.
But besides that, for those that think a stable API is a good thing, I would love to see you take the newest OS from M$ and apply an older kernel with those functions in it. Or better yet, re-compile the kernel to work with your hardware. Oh what's that, you can't. Exactly!
Or better yet, take the newest stable release of Knoppix or Ubuntu and put it on a ten-year-old system. Then do the same with Vista. Let me know which works better for you. Okay, I will be fair and allow you to try XP. Oh, that didn't work either... Well, even if you could, try taking a guess at which one works better or faster. Guess which one you can run a desktop manager on. That's right: just the other week someone got Compiz Fusion running on a 900 MHz P3 system. Try running Aero Glass and Vista on that.
When people hear "non-stable API" they think of stability in the sense of a program being stable or unstable. What the Linux world means by non-stable is that the Linux kernel API can change; it can evolve; it isn't stuck with what it previously had.
A stable API means that once it is stable and operating, it can never be changed, only added to, for newer functions and compatibility. I fail to see how this is a good thing. Basically, at some point in the future M$ is going to have to decide to scrap their current kernel and start all over again with a new system, or the same system without the older functions and old hardware compatibility, because they are going to reach the point where the kernel is so large and convoluted that it bogs down the system and fails to run properly. Maybe this is the problem with Vista.
Sure, but your software needs may not be representative. From what I've seen, the software lifecycle in an academic or scientific environment is rather different to that in business. In many cases, a lot of stuff just doesn't get updated ever unless you can make a compelling business (i.e. financial bottom-line) case for it. In others, every single application (or update) has to go through a lengthy and very expensive validation process.
Kicking the problem downstream to application developers is all very well if they're doing it for fun, or if they've got a good ongoing development effort. However, if you go to a business user and tell them they need to get someone in to patch up a bespoke app that they've been using completely unchanged for years, and that this will cost them money, they'll throw you out in the street. There's a vast ecosystem of really crappy old VB apps out there that no-one even has the source code for anymore. Yes, this is terrible from a software development perspective - but the business doesn't care.