I just had my monthly meeting with my fellow technicians, a kind of after-work social club where we go over all things networking and computing, but this month's meeting was more about security: keeping the internal network safe from the bad guys on the outside, reached through the Internet. I did indeed learn a lot and took many valuable lessons home with me. When it comes to server vulnerability in today's world of corporate and business integrity, the stakes could never be higher, especially when your business's foundation rests on the interoffice network and its connection to the outside world. So we set up a contest, pitting five servers against one another, each running the latest, greatest O.S. and equipped with state-of-the-art security software.
I will only talk about three of the contest participants: OpenBSD, Linux (Ubuntu), and Microsoft Server 2012. I maintain a Linux Ubuntu server, and in the past I have also worked on OpenBSD networks, but my last employer ran MS Server 2012, which I had little to no patience for due to its constant upkeep and maintenance needs. Each machine was given a 390-point inspection during the final scan, but I will only show the one chart that pertains to security breaches (some 80 types) and network scans (anything that can scan through the firewall), so my presentation in this blog covers just those two aspects of our contest.
A security breach is anything that gets through the server without authorization: programs that leave wide-open holes during or after use, or even software whose vendor wishes to control the system without the network's permission. Examples include anything that can bypass security through a port, slip through an external networking node, branch in over a hard wire, or gain access by breaking encryption through a back-door protocol, plus items like trojans, worms, and so on. Network scans are outside machines attempting to scan the internal network. Your I.S.P. does this to check that you are complying with your end-user agreement, but outside spy programs also do it to map your network and then use that information to focus their attacks on machines within.
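To make the "network scan" idea concrete, here is a minimal sketch of the simplest kind of scan an outside machine might run: a TCP connect scan, where the scanner simply tries to complete a handshake on each port and notes which ones answer. This is my own illustration, not any tool used in the contest; the host and port list are placeholders.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a TCP connect() to each port; a port that completes the
    handshake is reported as open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe a few well-known service ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

A real attacker's scanner is stealthier (half-open SYN scans, randomized timing), which is exactly why counting the scans that still get through the firewall was part of the contest.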
So here is the data:
Each machine had 2.3 TB of data pass through it over the 30-day test cycle on the Local Area Network, plus a simulated 10,500 emails sent at varying intervals to create a realistic workplace environment over the Wide Area Network, up through a real domain/DBN.
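The uneven email timing matters: a steady drip is easy to filter, while bursty, random arrivals look like a real office. Here is a hypothetical sketch of how such a schedule could be generated; the contest's actual test rig isn't described, so the function name and parameters are my own.

```python
import random

def schedule_emails(total=10500, days=30, seed=42):
    """Spread a fixed number of simulated emails across a test window
    at random offsets (in seconds), mimicking uneven office traffic."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    window = days * 24 * 3600  # test window length in seconds
    return sorted(rng.uniform(0, window) for _ in range(total))

send_times = schedule_emails()
```

Each entry is an offset from the start of the test; a driver process would sleep until each offset and fire one message.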
MS Server 2012 received 3872 successful hits/breaches over the 30-day period. From the moment the machine went active, the hits started coming in at a rate of 44 scans per hour on the first day. The first breach took place around the 18-hour mark and continued at a steady rate up until the last day of testing. There was a brief power outage in the building lasting less than a minute, but it took the server ninety-one minutes to reactivate itself, only to shut down again due to a configuration error that had to be fixed by a technician. The server also needed a total of 18 hours of maintenance over the 30-day test cycle.
Linux (Ubuntu) received only 117 hits/breaches. The first hit did not come until day 8, escalated around days 15 to 21, then ceased through day 30. Ninety percent of the hits were scans; 18 breaches were detected, mostly on email ports and unauthorized attempted configuration updates. Uptime was 99.98 percent, and power outage recovery time was 16 minutes. The server placed second overall.
OpenBSD was the winner, hands down. Its performance was unbeatable: zero breaches and only two network scans over the whole 30-day cycle! As far as I'm concerned, it is the world champion of server operating systems.
This is not a scientific study! The five servers were located at a company in Vancouver, Canada that builds and maintains servers. The machines were not given identical treatment; each was subjected to random variables from the outside world through a single service provider over the WAN, via the LAN and internal networks. Only the internal network data was simulated.
The bottom line is that free is good. Proprietary systems seem weaker on security and offer fewer choices than the open source products available on the market. Systems that do not allow the user root administration cannot possibly offer an effective level of security. Poor programming and bad software code are also a huge problem, as flaws often take many days to fix once noticed; in contrast, open source fixes arrive far faster, with more frequent update cycles.