Five factors to remember when selecting a Virtualization Solution
When it comes to servers used by large, medium and even small companies, one thing is certain: they have to be powerful enough to handle any operation or task dictated by modern IT needs. Luckily, Gordon E. Moore was right when he first formulated his law about hardware, and now all enterprises, regardless of size, can benefit from more than enough computing resources.
Gordon Earle Moore is, of course, the co-founder and Chairman Emeritus of the Intel Corporation, and the rule that bears his name observes that the number of transistors on an integrated circuit doubles approximately every two years, increasing computational power at roughly the same pace. This observation has held up remarkably well, and these days companies have many high performance servers running within their IT facilities.
The traditional - and also the simplest - way to handle server operations is to assign one major application or service to every server. There are some advantages to the rudimentary "one app per server" approach. To begin with, it is very easy to identify problems when they arise. This is also a simple way to streamline a computer network which means that administrators don't have a lot to worry about.
Even so, the advantages of Server Virtualization clearly outweigh everything gained through the classical approach. Partitioning the computing power of each server by setting up virtual machines on it is an excellent way to use its resources to the maximum. The conventional one-app-per-server model tends to leave much of a server's capacity idle, and virtualization is the most direct way to reclaim those wasted resources.
Since this technology is so effective, it's no wonder that many of the large IT companies like Microsoft, Dell, IBM or HP offer virtualization services. In addition, a lot of medium sized companies offer this kind of service, so choosing the right one for your business can prove to be a difficult task. The potential gains are enormous, but the process needs to be handled well in order to realize them.
Here are the five most important things that you should keep in mind when implementing a virtualization solution:
1. Hypervisor type
In the IT world the hypervisor, also known as a VMM (virtual machine monitor), is the platform that allows multiple operating systems to run at the same time on a single host. Its name reflects the fact that it sits conceptually one level above a traditional supervisory program, i.e. the operating system kernel. There are two types of hypervisors: bare-metal and hosted.
The native (bare-metal) type manages all the virtual machines by running directly on the host's hardware, so all the guest OS instances run one level above it. The hosted type, on the other hand, runs on top of a conventional OS, which means the virtual machines sit a further level removed from the hardware.
At first glance, the difference between these two types doesn't seem to matter much for the virtualization process itself, but it will affect the way applications are run and tested later. An infrastructure based on native hypervisors generally makes a better environment for running applications, because it gives guest operating systems more direct access to a wide range of hardware and can support real-time operating systems.
Hosted hypervisors don't offer this kind of benefit, but they are often more practical for engineering work, such as testing and running new applications across distinct operating systems, thanks to their simpler and faster installation and configuration.
After deciding which hypervisor type best suits your needs, it is also important to figure out which operating systems and which ISA (Instruction Set Architecture) you will require hypervisor support for. Also take into consideration future developments, such as new OS releases and embedded processor changes, when you select your virtualization approach. A detailed study at this stage will give you a more robust and expandable system later.
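As a minimal sketch of this inventory step, the host OS and ISA can be read with Python's standard platform module; the host_profile helper name below is our own, not part of any vendor tool:

```python
import platform

def host_profile():
    """Report the host OS and instruction set architecture (ISA).

    Knowing both is the first input to hypervisor selection: the
    hypervisor must support the host ISA, and a hosted hypervisor
    must additionally run on the host OS.
    """
    return {
        "os": platform.system(),    # e.g. "Linux", "Windows", "Darwin"
        "isa": platform.machine(),  # e.g. "x86_64", "aarch64"
    }

print(host_profile())
```

Running this on each candidate host gives a quick compatibility matrix to check against a hypervisor vendor's supported-platform list.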
2. Operating System rebooting
This is something frequently overlooked, since most embedded systems are built for 100% uptime. Still, it is essential to ask what will happen in the event of an OS crash, however unlikely that may be. If such an event does happen, an OS reboot will be the normal course of action, and being able to reboot one guest independently of the others is mandatory.
One of the requirements for this is being able to restart I/O devices and, for example, stop in-flight direct memory access (DMA) operations without having to restart the entire VM. Rebooting only one or a few specific virtual machines can also be difficult when they depend on each other for hardware access.
3. Virtualization Method
Regardless of hypervisor type, there are three main virtualization methods, and choosing the right one is essential. The first is binary translation, in which the hypervisor rewrites privileged guest instructions on the fly, allowing multiple unmodified operating systems to run at the same time without conflicts. The second uses hardware assist, which relies on CPU virtualization extensions (such as Intel VT-x or AMD-V) so that privileged operations trap and pass control to the hypervisor automatically.
There are several reasons to choose either of these methods, but with both of them, supporting unmodified guest operating systems won't be a problem. The third method is called paravirtualization and can deliver higher performance by having the guest OS call the hypervisor directly through an application programming interface, at the cost of requiring a modified guest.
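Whether hardware assist is even an option depends on the host CPU. A rough, Linux-only sketch of the check (reading /proc/cpuinfo for the vmx and svm flags; the function name is our own) might look like this:

```python
def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    """Check for x86 hardware-assist extensions (Intel VT-x / AMD-V).

    Returns True/False on Linux x86 hosts, or None when the flags
    cannot be determined (non-Linux OS or non-x86 ISA).
    """
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return None  # /proc/cpuinfo not available on this platform
    flags = text.split()
    # "vmx" marks Intel VT-x; "svm" marks AMD-V
    return "vmx" in flags or "svm" in flags

print(has_hw_virtualization())
```

If this reports False, a hardware-assisted hypervisor is off the table for that host, and binary translation or paravirtualization becomes the practical choice.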
4. Deployment work
Usually, the hosted hypervisor software available on the market is easy to buy and implement, but the same can't be said for bare-metal software, which is more difficult to handle. In some cases, especially for large deployments, going through a complete integration process will pay off in the end.
In other cases when virtualization involves fewer systems, a readymade commercial solution might prove to be more efficient as it will save a significant amount of time and work.
5. Multiprocessing support
Before settling on a virtualization solution, check whether it comes with symmetric multiprocessing (SMP) support, asymmetric multiprocessing (AMP) support, or both. To make things clearer: if your infrastructure runs a single software stack across multiple identical processors sharing memory, you will need SMP, while dedicating different (possibly dissimilar) processors to separate software stacks requires AMP.
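A quick way to see which case applies is to list the host's logical processors and their model names. The sketch below (our own helper; the model listing is Linux-specific) reports both:

```python
import os

def processor_summary(cpuinfo_path="/proc/cpuinfo"):
    """Summarize the host's logical processors.

    One distinct model name suggests a homogeneous (SMP-style)
    processor set; several distinct models hint that AMP support
    may be needed. On non-Linux hosts only the count is reported.
    """
    count = os.cpu_count() or 1
    models = set()
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("model name"):
                    models.add(line.split(":", 1)[1].strip())
    except OSError:
        pass  # /proc/cpuinfo unavailable (non-Linux)
    return {"logical_cpus": count, "distinct_models": sorted(models)}

print(processor_summary())
```

Note that this only profiles the hardware; whether a given hypervisor actually supports SMP or AMP on that hardware still has to be confirmed against the vendor's documentation.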
To round off the discussion, there are a number of important factors that must be considered before you finally select your virtualization solution. Many of these will have a long term impact on the quality of your final solution and therefore it is essential that the solution is chosen with due care. In this article, we have attempted to discuss the issues that are of greatest importance to your implementation.