Hardware virtualization is becoming a dominant trend in IT departments across many organizations. The underlying concept is to let multiple systems or users share one host (or, conversely, to spread one very large workload across multiple host systems). Though the concept of hardware virtualization is not new, the process has become much easier and more cost-effective.
Hardware virtualization is defined in several ways:
Hardware virtualization was developed as a concept by IBM in the early 1960s, where it was referred to as a multiprogramming system. IBM was instrumental in the development of hardware-assisted virtualization on its core business system, the IBM System/370. Before hardware virtualization was introduced, it was difficult to run multiple applications on a computer. Companies ran applications in batches, saving up all the data and operations for a particular program, then running them all at once before switching the whole system to something else.
The innovation behind hardware virtualization was the search for an efficient way to share resources. When Windows-based desktop computers surfaced during the 1980s and 1990s, they presented challenges in cost, security, storage space, and maintenance, and hardware virtualization faded out of favor for a time until the rise of web-based computing.
Systems that incorporate hardware virtualization can isolate applications in silos that will not interfere with the host system. Software can be duplicated and installed on more than one host. Any damage to a guest operating system's security can be detected easily, and the virtual machine can be discarded without causing harm to the host system. The new system can also run without making changes to the host system.
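One common way to get this discard-and-recover behavior in practice is a copy-on-write overlay disk: the guest writes to a throwaway file while the base image stays untouched. The sketch below is a minimal illustration using QEMU's qemu-img tool; it assumes QEMU is installed on the host, and the file names base.qcow2 and scratch.qcow2 are placeholders.

```python
# Minimal sketch, assuming QEMU is installed and a base guest image exists.
# Guest changes land in a copy-on-write overlay; deleting the overlay
# discards the (possibly compromised) VM without touching the base image.
import os
import subprocess

BASE = "base.qcow2"        # pristine guest image (placeholder name)
SCRATCH = "scratch.qcow2"  # throwaway overlay (placeholder name)

# Create the overlay backed by the read-only base image.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", "-b", BASE, "-F", "qcow2", SCRATCH],
    check=True,
)

# ... boot the VM against SCRATCH and run the untrusted software ...

# Discard the silo: all guest writes vanish, the host and base stay clean.
os.remove(SCRATCH)
```

Because the base image is never written to, even a fully compromised guest leaves nothing behind once the overlay is deleted.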
Hardware virtualization uses software to make multiple pieces of hardware act like they are a single system or to make one piece of hardware act like it is multiple systems. According to Natalie Lambert, principal analyst at Forrester Research, “Desktop virtualization is a very different beast and should not be treated as simple enhancements to the server strategy. The drivers are entirely different and the environment will present new challenges to those experienced with server virtualization.”
So what are the pros and cons of deploying hardware virtualization? There are three levels of hardware virtualization: full, hardware-assisted (or virtual machine), and paravirtualized. Each level operates a bit differently. For example:
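As one illustration of how the levels differ in practice: full virtualization and paravirtualization can run on any CPU, but the hardware-assisted level requires virtualization extensions in the processor itself (Intel VT-x or AMD-V). The sketch below, assuming a Linux host where these extensions appear as the vmx and svm flags in /proc/cpuinfo, checks whether that level is available; the function name is our own.

```python
# Minimal sketch: detect hardware-assisted virtualization support on a
# Linux host by looking for the Intel VT-x (vmx) or AMD-V (svm) CPU flags.
# Assumes /proc/cpuinfo is readable; the function name is illustrative.

def hardware_virt_support():
    """Return 'vmx', 'svm', or None depending on the CPU flags found."""
    try:
        with open("/proc/cpuinfo") as f:
            tokens = f.read().split()
    except OSError:
        return None  # not a Linux host, or /proc unavailable
    for flag in ("vmx", "svm"):
        if flag in tokens:
            return flag
    return None

if __name__ == "__main__":
    flag = hardware_virt_support()
    if flag:
        print(f"Hardware-assisted virtualization is available ({flag})")
    else:
        print("No VT-x/AMD-V flags found; only full or paravirtualization "
              "is possible on this host")
```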
Though virtualization has been around for years, business needs continue to evolve as technology rapidly progresses. The benefits of using hardware virtualization are:
The pitfalls of using hardware virtualization are:
“It’s also no secret that the movement toward virtualization has experienced what is sometimes referred to as ‘virtualization stall.’ This refers to the fact that many organizations get around 25 percent of their total server population virtualized, and then progress stops,” says Bernard Golden, CIO writer, in his article titled “3 Key Issues for Secure Virtualization.”
Although most companies use some form of virtualization, they are aware that every approach has its pros and cons. On balance, however, the pros of hardware virtualization outweigh the cons.
Implementing hardware virtualization in an organization requires a strategic plan tailored to the work environment: