Recently at work, we’ve been "audited" by a firm helping us determine the cost of operations in our department, and how we measure up to other organizations of similar structure and size. One of the comments that came back to us is that we could potentially save on cost of ownership if we were running a Windows shop on the server side. The argument put forth is that Windows Server expertise is more prevalent, and therefore less expensive, whereas Linux is exotic, so less expertise is available and the cost of Linux expertise is higher. The argument assumes, of course, that the cost of ownership would drop enough with Windows to make it more affordable as a whole over time.
Personally, I don’t buy it. I think this theory is based on the idea that everyone runs Windows desktops (I admit that I do as well, for desktop support reasons), and Windows Server is just like Windows, therefore it’s easier to administer a Windows Server than a Linux server because (and I know I’m exaggerating here) any old Joe can work Windows. Usually, when this assumption is made, it’s false. I’ve seen guys mess around with Windows Server because they "just want to set up a server". Next thing they know, they’ve got a broken pile of crap for a network. Setting up a server properly, whether it’s Windows or Linux, requires expertise. Not that I discourage experimenting with either; I’m just saying that a person needs experience with the server environment and its paradigms in order to understand how to properly set up a server, and knowing desktop Windows does not better qualify a person for that.
I would argue that Linux actually offers a lower cost of ownership, in addition to the obvious up-front cost (free), for one simple reason (though there are many): scriptability. Take for example the need to create a new VPN tunnel from 100 different servers back to a secondary site (something I’m working on this very moment, actually). On Windows Server with the built-in PPTP VPN client, you would have to connect to each system individually, create the connection, and set up the proper routing, if applicable, by hand. At best you could create a macro with something like AutoHotKey to help you do this, or purchase some uber-expensive tool for these sorts of replications. But even in those cases, I would challenge anyone to do it as quickly as I could with a handful of free command line tools in Linux. With one system that’s been granted passwordless SSH to all of our branch servers, I can script pretty much any change with a minimum of fuss, and more importantly, I can proceed to other tasks while it executes. Rolling out a change to 5 servers takes literally as long as rolling it out to 50, or 500, all without a single mouse click.
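To show what I mean, here’s a minimal sketch of that kind of fan-out. It assumes passwordless SSH keys are already in place on every branch server; the host names and the routing command are made up for illustration:

```shell
#!/bin/sh
# run_on_all: run the same command on every host read from stdin, in parallel.
# A sketch, not production code -- no error handling or per-host logging.
run_on_all() {
    remote_cmd=$1
    ssh_cmd=${2:-ssh}   # pass "echo" as the second argument for a dry run
    while read -r host; do
        "$ssh_cmd" "$host" "$remote_cmd" &   # fan out; don't wait per host
    done
    wait                                     # collect all background jobs
}

# Dry run: substitute echo for ssh so nothing actually connects.
# Prints one line per host showing what would be executed.
printf 'branch01\nbranch02\nbranch03\n' |
    run_on_all 'ip route add 10.8.0.0/24 dev ppp0' echo
```

Because every host runs in the background and `wait` collects them at the end, the wall-clock time is roughly that of the slowest host, not the sum of all of them.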
Now that’s scalability. I’d like to see that happen on Windows Server.
I’ve administered an organization with multiple Windows Servers. I know what a pain it is. It’s a mess. Word to the wise: GUIs are for the desktop, not for the server. We’re currently looking at switching from CommuniGate to Exchange at work, and I cringe at the thought. Administration will be a headache. I know this because I’ve administered Exchange before. Everything is a kludge, and you can blame Microsoft for that. God forbid they follow an established industry standard once in a while.