Join me in welcoming our panel: Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN; Steve Conway, Vice President in the High Performance Computing Group at IDC; and Randy Clark, Chief Marketing Officer at Platform Computing. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Conway: Private cloud computing is already here, and quite a few companies are exploring it. We already have some early adopters. CERN is one of them. Public clouds are coming. We see a lot of activity there, but it's a little bit further out on the horizon than private or enterprise cloud computing.
Just to give you an example, we at IDC just did a piece of research for one of the major oil and gas companies, and they're actively looking at moving part of their workload out to cloud computing in the next 6-12 months. So, this is really coming up quickly.
CERN is clearly serious about it in their environment. As I said, we're also starting to see activity pick up with cloud computing in the private sector with adoption starting somewhere between six months from now and, for some, more like 12-24 months out.
Clark: At Platform Computing we have formally interviewed over 200 customers out of our installed base of 2,000. A significant portion -- I wouldn't put an exact number on that, but it's higher than we initially anticipated -- are looking at private-cloud computing and considering how they can leverage external resources such as Amazon, Rackspace and others. So, it's easily one-third and possibly more [evaluating cloud].
Cass: CERN is a laboratory that exists to enable physicists, initially Europe's and now the world's, to study fundamental questions. Where does mass come from? Why don't we see antimatter in large quantities? What's the missing mass in the universe? These are really fundamental questions about where we are and what the universe is.
We do that by operating an accelerator, the Large Hadron Collider, which collides protons thousands of times a second. These collisions take place in certain areas around the accelerator, where huge detectors analyze the collisions and take something like a digital photograph of the collision to understand what's happening. These detectors generate huge amounts of data, which have to be stored and processed at CERN and the collaborating institutes around the world.
We have something like 100,000 processors around the world, 50 petabytes of disk, and over 60 petabytes of tape. The tape is in just a small number of the centers, not all of the hundred centers that we have. We call it "computing at the terra-scale," that's terra with two R's. We've developed a worldwide computing grid to coordinate all the resources that we have with the jobs of the many physicists that are working on these detectors.
If you look at the past, in the 1990s, we had people collaborating, but there was no central management. Everybody was based at different institutes, and people had to submit the workloads, the analysis, or the Monte Carlo simulations of the experiments themselves.
We realized in 2000-2001 that this wasn't going to work, and also that the scale of resources we needed was so vast that it couldn't all be installed at CERN. It had to be shared between CERN, a small number of very reliable centers we call the Tier One centers, and then 100 or so Tier Two centers at the universities. We were developing this thinking around the same time as the grid model was becoming popular. So, this is what we've done.
Grid sets stage for seeking greater efficiencies
[Our grid] pushes the envelope in terms of scale to make sure that it works for the users. We connect the sites, and we've gradually run through a number of exercises to distribute the data at gigabytes a second and run tens of thousands of jobs a day across this.
We've progressively deployed grid technology, not developed it. We've looked at things that are going on elsewhere and made them work in our environment.
The grid solves the problem in which we have data distributed around the world, and it will send jobs to the data. But, there are two issues around that. One is that if the grid sends my job to site A, it does so because it thinks that a batch slot will become available at site A first. But, maybe a batch slot actually becomes available at site B first, while my job is stuck at site A. Somebody else who comes along later actually gets to run their job first.
Today, the experiment team submits a skeleton job to all of the sites in order to detect which site becomes available first. Then, they pull my job down to that site. You have lots of schedulers involved in this -- in the experiment, the grid, and the site -- and we're looking at simplifying that.
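This late-binding "pilot job" pattern can be sketched roughly as follows. The site names, slot timings, and queue mechanics here are purely illustrative, not CERN's actual middleware; the point is only that real jobs bind to whichever site's batch slot frees up first, rather than to the site the grid predicted.

```python
import heapq
from collections import deque

def run_pilot_jobs(site_slot_times, jobs):
    """Simulate the late-binding 'pilot job' pattern.

    A skeleton (pilot) job is sent to every site; whenever a pilot
    actually acquires a batch slot, it pulls the next real job from
    the experiment's central queue. Jobs therefore run in the order
    slots become free, not in the order the grid guessed.
    """
    queue = deque(jobs)  # the experiment's central task queue (FIFO)
    # One pilot per site: (time its batch slot frees up, site name)
    events = [(t, site) for site, t in site_slot_times.items()]
    heapq.heapify(events)
    schedule = []
    while queue and events:
        t, site = heapq.heappop(events)  # earliest free slot wins
        schedule.append((t, site, queue.popleft()))
    return schedule

# The grid predicted site A would free up first, but site B actually does,
# so the first analysis job lands at B instead of waiting at A.
slots = {"site_A": 40, "site_B": 10, "site_C": 25}
plan = run_pilot_jobs(slots, ["analysis_1", "analysis_2", "analysis_3"])
```

In this toy run, `analysis_1` executes at `site_B` at time 10 even though the grid's original prediction would have queued it at `site_A` until time 40, which is exactly the inefficiency the pilot jobs avoid.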
We're now looking at virtualizing the batch workers and dynamically reconfiguring them to meet the changing workload. This is essentially what Amazon does with EC2. When they don't need the resources, they reconfigure them and sell the cycles to other people. This is how we want to combine virtualization and cloud with the grid, which knows where the data is.
... We're definitely concentrating for the moment on how we exploit effective resources here. The wider benefits we'll have to discuss with our community.
Conway: CERN's scientists have earned multiple Nobel prizes over the years for their work in particle physics. CERN is where Tim Berners-Lee and his colleagues invented the World Wide Web in the 1980s.
More generally, CERN is a recognized world leader in technology innovation. What's been driving this, as Tony said, are the massive volumes of data that CERN generates along with the need to make the data available to scientists, not only across Europe, but across the world.
For example, CERN has two major particle detectors. They're called CMS and ATLAS. ATLAS alone generates a petabyte of data per second when it's running. Not all that data needs to be distributed, but it gives you an idea of the scale of the challenge that CERN is working with.
In the case of CERN's and Platform's collaboration, the idea is not just to distribute the data but also the applications and the capability to run the scientific problem.
Showing a clear path to cloud
CERN is definitely a leader there, and cloud computing is really confined today to early adopters like CERN. Right now, cloud computing services constitute a market of about $16 billion.

That's just about 4 percent of mainstream IT spending. By 2012, which is not so far away, we project that spending for cloud computing is going to grow nearly threefold, to about $42 billion. That would make it about 9 percent of IT spending. So, we predict it's going to move along pretty quickly.
... [Being able to manage workloads in a dynamic environment] is the single biggest challenge we see, not only for cloud computing; it runs through the whole idea of managing these increasingly complex environments -- first clusters, then grids, and now clouds. Software has been at the center of that.
That's one of the reasons we're here today with Platform and CERN, because that's been Platform's business from the beginning, creating software to manage clusters, then grids, and now clouds, first for very demanding, HPC sites like CERN and, more recently, also for enterprise clients.