These days, it is not unusual for companies of all sizes to devote 45 percent of their total annual expenditures to IT licenses and maintenance, according to Nicholas Carr's 2008 book "The Big Switch." With such a heavy financial burden, most enterprises are primed for solutions that save money and reduce complexity. "Cheap and simple" is the promise of the SaaS (Software as a Service) licensing model.
By putting the software vendor in charge of most of the infrastructure required to run the software, and allowing users to pay for only what they consume, the SaaS model lets companies slash hardware, software and maintenance costs. For large companies, this can mean millions of dollars in savings each year. For SMBs (small and medium-sized businesses), it can also mean access to all the computing power of the big players without the need to buy and maintain their own prohibitively expensive IT infrastructures.
Simple enough, but where is the catch? Since computing pioneer John McCarthy first proposed the idea in 1961, the model of delivering computing services over a network has gone through several incarnations, variously dubbed "utility computing," "on-demand computing," "time-sharing," "service bureaus" or the "ASP (application service provider)" model. And for various reasons, mostly having to do with inadequate, unreliable, nonstandard and insecure infrastructure, it never caught on as the preferred mode of software delivery.
The latest incarnation, SaaS, differs in that the enabling infrastructure seems to be mature enough to drive widespread adoption. The Gartner Group thinks so: It estimates that SaaS sales will account for 25 percent of the business software market by 2011, up from only 5 percent in 2005.
Just as it took a few years before consumers were willing to trust online retailers, it's taken businesses time — and the pain of maintaining increasingly complex IT operations — to accept the viability of mission-critical applications delivered as services over the Internet. Some recent developments have catalyzed acceptance of the SaaS model:
Maturation of the infrastructure is what prompted Bill Gates to announce, in a famous 2005 internal memo to Microsoft staff, that "the next sea change is upon us." His successor as Microsoft's chief software architect, Ray Ozzie, elaborated: "The environment has changed yet again. Computing and communications technologies have dramatically and progressively improved to enable the viability of a services-based model." This has far-reaching consequences for a company like Microsoft, which currently licenses about 100,000 copies of its Office suite to Cisco alone. Microsoft's push to turn its Office software suite from a packaged product into an annual subscription service is a tacit acknowledgment of the shifting tide.
Before there was a dot-com bust, there was a dot-com boom, and one of its achievements was to lay down enough fiber-optic cable to bankrupt several large network builders. That bandwidth overcapacity is now getting some use. Coupled with Internet standards that allow disparate systems to talk to one another and Web-native SaaS applications developed specifically to leverage technologies like the browser, ubiquitous high bandwidth has effectively turned the Internet into one massive computer. Sun Microsystems' prescient advertising slogan "the network is the computer" has become a reality.
When the network is as fast as the processor (CPU), computing power becomes location-independent. This enables not only SaaS but also PaaS (Platform as a Service), IaaS (Infrastructure as a Service), Web 2.0 services and all the other utilities offered under the catchall term "cloud computing."
The largest players in the industry (such as Google, Amazon, Microsoft and IBM) are convinced of this sea change; they are investing hundreds of millions of dollars in hyper-scale datacenters scattered in mostly undisclosed locations around the world. The idea is that platform and infrastructure providers derive huge economies of scale by centralizing data processing in low-cost locations and end users derive huge savings by reducing IT infrastructure investment.
If, as Nicholas Carr and others believe, we are at an inflection point in the transition to utility computing, then these centralized datacenters become the equivalent of the power stations that enabled people to plug into the electric grid rather than run their own local power generators. The enabling technology and bandwidth are like the arrival of AC (alternating current), which replaced DC (direct current) because, among other things, it could be produced in massive centralized power stations and delivered over far greater distances than DC.
Compared with traditional client-server computing, SaaS significantly boosts capacity utilization. End users no longer have to build for peak loads (for example, as retailers do for spikes in demand during the holidays) or license every device in their organization with underutilized bloatware.
To use another analogy, client-server computing is like building multistory single-tenant buildings in which, 90 percent of the time, only the top floor is used — but the other floors are needed for the annual Christmas party. Utility or cloud computing is like building condominium towers with movable walls (enabled by virtualization technology), multiple tenants per floor (multitenant architecture) and a 90 percent occupancy rate.
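The utilization argument behind the condominium analogy can be sketched with some back-of-the-envelope arithmetic. The figures below (peak and average load per tenant, server capacity, a 90 percent target utilization) are hypothetical illustrations, not numbers from the article:

```python
import math

# Hypothetical figures for illustration only.
PEAK_LOAD = 100          # peak demand per tenant (requests/sec)
AVG_LOAD = 10            # average demand per tenant (10% utilization)
SERVER_CAPACITY = 100    # capacity of one server (requests/sec)
TENANTS = 50
TARGET_UTILIZATION = 0.9  # the "90 percent occupancy rate" in the analogy

# Single-tenant model: each tenant provisions its own servers for peak load.
dedicated_servers = TENANTS * math.ceil(PEAK_LOAD / SERVER_CAPACITY)

# Multitenant model: average loads are pooled across tenants and the shared
# fleet runs near the target utilization.
pooled_servers = math.ceil(
    TENANTS * AVG_LOAD / (SERVER_CAPACITY * TARGET_UTILIZATION)
)

print(f"dedicated: {dedicated_servers} servers, pooled: {pooled_servers} servers")
# → dedicated: 50 servers, pooled: 6 servers
```

Under these assumed numbers, pooling cuts the server count by roughly a factor of eight, which is the economy of scale the condominium analogy is gesturing at; real workloads with correlated peaks (everyone's Christmas party on the same day) would narrow the gap.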
No wonder SaaS is getting VIP treatment from IT departments around the world.