Afraid of the Public Cloud? Try Going Private
With the National Security Agency snooping through cloud data stores, mistrust in the public cloud has hit an all-time high. Ready-made private cloud solutions can provide the advantages of the cloud without the liabilities.
- By Dian Schaffhauser
- 08/15/13
Thanks to whistleblower Edward Snowden, cloud computing is taking a hit. As the Cloud Security Alliance recently reported, 56 percent of its non-US members said they were less inclined to use US-based cloud providers in the wake of revelations that the National Security Agency was peeking into public cloud data stores. In fact, one in 10 of those respondents reported canceling a project with a US-based provider in light of news about PRISM's clandestine data mining.
Perhaps it's only coincidental that in recent months, the vendor space that focuses on private clouds appears to have heated up--with Oracle, Cisco, HP, Hitachi, SwiftStack, SUSE, ownCloud, and AppScale Systems all announcing new offerings. If the public cloud scares you, these companies seem to be suggesting, then consider taking your cloud activities private.
Going Private
For the most part, higher education has long been reluctant to move its IT systems to the public cloud. As Casey Green wrote last year in his 2012 Campus Computing Survey, "Even as the performance benefits and cost savings of migrating to the cloud appear compelling, trust really is the coin of the realm; many campus IT officers are not ready to migrate mission-critical data, resources, and services to the cloud services offered by their IT providers." The concerns cited by those he surveyed: risk, limited options from providers, trust, and control.
Private cloud computing may be just the solution schools are seeking to gain the advantages of the cloud without the concerns. According to the National Institute of Standards and Technology, a private cloud is a cloud infrastructure provisioned for use by a single organization. Location has nothing to do with defining a private cloud. As NIST notes, a private cloud may exist on or off premises, and it may be owned, managed, and operated by the organization, a third party, or some combination.
While a private cloud infrastructure is complex, ready-made options offer simplicity and rapid deployment, noted Gartner Research Director Nik Simpson in a recent webinar on the topic. Frequently, with the cloud-in-a-can approach, the vendor promotes the fact that it can build a system and have it racked up and rolled into place in days. Simpson says the cloud-in-a-can is a good solution for environments where:
- You lack the technical skills or time to implement a private cloud;
- You aren't concerned about higher acquisition costs or you prefer to minimize operational costs at the expense of capital investment; and
- You're not "overly" concerned about "excessive" vendor lock-in.
Private Cloud-in-a-Can
A "cloud-in-a-can" at its most basic is a configuration where you buy everything for the infrastructure--servers, storage, networking, and software management--as a single product. Examples abound (see the box at right).
Often, the offerings appear to have come from a vendor looking around "in their parts bin" and bundling the components as a solution. "It's great from their point of view," Simpson notes. "It allows them to do more integration in house and to charge more for that integration." HP falls into this category as one example, he says.
A modified approach to cloud-in-a-can, referred to by Gartner as "reference architecture," uses components from several different vendors to create a turnkey bundle. But rather than having the cloud built at the factory, a value-added reseller (VAR) follows a "recipe" provided by the vendor for putting it together. VCE Vblock, for instance, includes servers and networking from Cisco, storage from EMC, and virtualization from VMware. NetApp's FlexPod integrates NetApp storage and Cisco servers and networking. An advantage here is that the products specified by the reference architecture are often already in place within the environment, and there's a level of in-house skill for integrating the cloud with the existing infrastructure.
A major benefit is simplification. "Ideally, you're getting [the cloud] from the virtual layer all the way down to the physical layer," explains Simpson. "Rather than having to configure individual servers, individual storage arrays, individual networks, you start at the virtualization layer, define the characteristics of the virtual machine you want--in terms of amount of storage, class of storage, type of network activity, security requirements, the network connectivity, etc.--and the provisioning layer works out how to do that with the hardware that's involved in the system."
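The quote above describes a declarative provisioning model: the administrator states what the virtual machine should look like, and software decides where it runs. As a rough illustration only--not any vendor's actual API--here is a minimal Python sketch of that division of labor. Every name, capacity figure, and placement rule below is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class VMSpec:
    """What the administrator declares: the VM's characteristics, not the hardware."""
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int
    storage_class: str = "standard"   # e.g. "standard" or "ssd"
    network: str = "campus-lan"

@dataclass
class Host:
    """One physical building block inside the converged system."""
    name: str
    free_vcpus: int
    free_memory_gb: int
    free_storage_gb: int
    storage_classes: tuple = ("standard",)
    placed: list = field(default_factory=list)

def provision(spec: VMSpec, hosts: list[Host]) -> Host:
    """A stand-in for the provisioning layer: map a declared spec onto hardware that can satisfy it."""
    for host in hosts:
        if (host.free_vcpus >= spec.vcpus
                and host.free_memory_gb >= spec.memory_gb
                and host.free_storage_gb >= spec.storage_gb
                and spec.storage_class in host.storage_classes):
            host.free_vcpus -= spec.vcpus
            host.free_memory_gb -= spec.memory_gb
            host.free_storage_gb -= spec.storage_gb
            host.placed.append(spec.name)
            return host
    raise RuntimeError(f"No capacity for {spec.name}; time to add another building block")

if __name__ == "__main__":
    pool = [
        Host("block-1", free_vcpus=32, free_memory_gb=256, free_storage_gb=4000,
             storage_classes=("standard", "ssd")),
        Host("block-2", free_vcpus=32, free_memory_gb=256, free_storage_gb=4000),
    ]
    vm = VMSpec("lms-app-01", vcpus=4, memory_gb=16, storage_gb=200, storage_class="ssd")
    print(f"{vm.name} placed on {provision(vm, pool).name}")
```

The point of the sketch is the division of labor: the administrator only ever touches the VMSpec, never the individual servers, arrays, or switches.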
That convergence of configuration spills over into component management, which can be both a benefit and a detriment, he says. "If you buy one of these converged cloud-in-a-can solutions and try to manage it in the way you've always managed infrastructure in your organization, with separate networking, storage, [and] servers, you're not going to get the best out of it."
Another benefit of cloud-in-a-can: gaining one throat to choke. "Rather than calling three different vendors and trying to figure out who's causing the problem, you've got one vendor to call. You only have one point of contact, and ideally you will never see the sort of finger-pointing that happens in traditional IT infrastructure."
Caveats
Simpson cites two distinct disadvantages of the cloud-in-a-can approach. First, there's a level of vendor lock-in that could constrain the IT organization in the future. While in theory the core components can be scaled separately, in practice scalability is limited by how the system is sold--frequently in "building blocks." A customer that outgrows last year's NetApp FlexPod, for example, will need to purchase another FlexPod to grow capacity. And if only one aspect needs expansion, such as storage, you probably won't be able to scale that individual component as you would in a different type of cloud setup.
The reference architecture approach mitigates some of this lock-in. If you know your organization will require more compute or storage capacity than the cloud-in-a-can delivers, you can order it upfront; the only restriction is the set of vendors you'll be choosing from. But even here, expansion beyond the initial deployment must be done through the addition of building blocks, as the rough sketch below illustrates.
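A back-of-the-envelope comparison shows why block-based expansion can sting. The block size, server count, and prices below are invented purely for illustration; the takeaway is that when capacity only comes in whole blocks, running out of one resource forces you to buy all the others too.

```python
import math

# Hypothetical building block: all figures are illustrative, not real vendor pricing.
BLOCK_STORAGE_TB = 50        # each block adds this much storage...
BLOCK_SERVERS = 8            # ...and this many servers, whether you need them or not
BLOCK_PRICE = 250_000        # assumed price per block
STANDALONE_TB_PRICE = 1_500  # assumed price per TB if storage could be scaled alone

def block_expansion(extra_storage_tb: float) -> tuple[int, int]:
    """Blocks needed and cost when storage can only grow by whole building blocks."""
    blocks = math.ceil(extra_storage_tb / BLOCK_STORAGE_TB)
    return blocks, blocks * BLOCK_PRICE

def component_expansion(extra_storage_tb: float) -> int:
    """Cost of growing storage alone, as a DIY/traditional private cloud allows."""
    return int(extra_storage_tb * STANDALONE_TB_PRICE)

if __name__ == "__main__":
    need_tb = 60  # suppose all you need is 60 TB more storage
    blocks, cost = block_expansion(need_tb)
    print(f"Building blocks: {blocks} blocks, ${cost:,}, "
          f"plus {blocks * BLOCK_SERVERS} servers you didn't ask for")
    print(f"Per-component:   ${component_expansion(need_tb):,}")
```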
Second, with time and usage, that cloud-in-a-can will "become increasingly dug into your own infrastructure," insists Simpson. "You will adapt your infrastructure around it, which will make replacing it with another vendor's private cloud solution increasingly difficult as time goes on."
Doing It Yourself
Ready-made private clouds aren't for everyone. Institutions can also opt to build out a traditional private cloud on their own. The DIY approach involves working with multiple vendors and integrating the components in house, or working with a local VAR or vendor to provide the components.
The traditional route offers several benefits:
- You're able to scale subsystems individually. To add more servers, you simply buy more servers and do the integration work.
- You can build and configure the private cloud exactly to your specifications.
- This approach enables you to leverage the skills your IT organization has built up over the years.
- You can play vendors off of one another for competitive pricing.
Of course, there's a downside too. Your organization--or a vendor acting on its behalf--has to do the heavy lifting. "You are going to spend time, energy, and effort in your organization to actually turn these [components] into a working private cloud system," Simpson notes.
While the traditional scheme provides a more flexible approach for building a private cloud, it's best suited, he adds, to an organization that has already developed expertise in building an advanced server virtualization infrastructure, "and you know how the pieces and parts fit together."
A Good Fit--Until Something Better Comes Along
For institutions of higher education that have mastered the art of virtualization, a private cloud is a smart next step in the evolution of data center optimization. A well-defined private cloud strategy can help the IT organization deliver cloud benefits such as more efficient deployment of new applications and services to users, along with the ability to parcel out services to individual schools and colleges without duplicating IT resources.
But it's not necessarily the endpoint. Cloud-in-a-can solutions exist in the first place, Simpson observes, because hardware vendors have no common standard for provisioning their components, and hardware and software vendors have none for the points where those layers connect.
In a "perfect world," he declares, IT would have a "de facto standard" for managing all of the components that make up cloud infrastructure. "Ideally, we would like to get to a state where we could just add storage resources, server resources, network resources into our private cloud, and the cloud management platform [would] recognize that we have new resources. And there would be no other provisioning work that you would have to do. It would all be done in the provisioning and orchestration layer of the cloud management platform. The hardware would just become a disposable commodity." We're a long way from that scenario, he concludes. "If history has taught us anything, it's that we shouldn't expect too much progress on that."
Choosing the Right Private Cloud
As your organization considers a private cloud, Simpson recommends asking what the purpose of the cloud is and whether the approach you're considering will accommodate those needs. For example, if capabilities such as chargeback, a service catalog, and self-service provisioning are essential, how easy will it be to add those components on top of what the vendor is delivering?
"Generally, so long as you stick with the components that [vendors] certify and that they work with, it'll all be fairly easy to integrate because they've designed it that way," he says. "But if you want to step outside of what they do in their own line of products, then you're getting back into the same issues you've dealt with in building traditional infrastructure in that you'll have to do some of that integration yourself."
Simpson also advises doing research on pricing so that you understand what you're really paying. "If the solution is $200k and you go out and price the individual hardware components and find out that they're $100k, then you have to decide, is the extra $100k that they're charging for the integration worth that much?"
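Simpson's arithmetic is easy to reproduce. The short script below restates the numbers from his hypothetical quote and adds an assumed internal labor rate (the $120/hour figure is an assumption, not from the article) to frame the question as a break-even calculation.

```python
# Figures from Simpson's hypothetical example; the labor rate is an added assumption.
bundled_price = 200_000    # vendor's integrated cloud-in-a-can quote
component_price = 100_000  # the same hardware priced as individual components
integration_premium = bundled_price - component_price

assumed_hourly_rate = 120  # fully loaded internal or contractor rate (assumption)
breakeven_hours = integration_premium / assumed_hourly_rate

print(f"Integration premium: ${integration_premium:,}")
print(f"Break-even DIY effort at ${assumed_hourly_rate}/hr: {breakeven_hours:,.0f} hours")
```

If your team could do the integration in fewer hours than that, the premium is hard to justify; if not, the bundle may well be the cheaper path.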
He also counsels understanding your existing environment so that you know what's involved in integrating it with the private cloud, whatever form that takes. That includes considering what the integration points are, what work will be required, and how the private cloud will connect with subsystems such as backup and disaster recovery.
Finally, the IT organization has to decide how strategic its integration skills are to the institution. Jobbing the work out to an external vendor may mean losing those skills internally as people move elsewhere to do the work they like, says Simpson.