The many types of technology clouds
August 21, 2012
If you remember from school, there are many different types of clouds: stratus, cumulus, and nimbus, to name a few. In technology, the cloud is no different. The cloud means something different depending on who you talk to, and while they are all mostly right, they can be referring to totally different things.
Let’s look at the Wikipedia definition of cloud computing:
Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user’s data, software and computation.
There are many types of public cloud computing:
- Infrastructure as a service (IaaS)
- Platform as a service (PaaS)
- Software as a service (SaaS)
- Storage as a service (STaaS)
- Security as a service (SECaaS)
- Data as a service (DaaS)
- Business process as a service (BPaaS)
- Test environment as a service (TEaaS)
- Desktop as a service (DaaS)
- API as a service (APIaaS)
In the strictest sense, cloud computing is not merely moving your software or services onto a server outside your firewall; it’s enlisting a service with “cloud” properties, like flexible horizontal scalability through elasticity. Cloud architecture is not uniform, either. It means different things to the vendors that provide the services, and each implements it as they choose. Amazon, VMware Cloud Foundry, Stackato and Azure are some of the hottest cloud architecture services out there today.
From a consumer’s perspective, the cloud means that your data is stored on the internet somewhere. For example, the iCloud product from Apple keeps all your data in sync between computers and iOS devices. The iTunes Match service stores your music on Apple’s servers, and the same goes for the Google Music service.
To many IT professionals, the cloud is just a way to offload services they currently run in house to a server outside their infrastructure, managed by someone else to varying degrees. For example, many organizations have shifted from running internal mail servers to migrating to Google Apps for Business, where every aspect of the infrastructure is managed by Google. Another example might be moving your internal Microsoft Exchange mail server to a hosting provider. The end result, again to varying degrees, is greater reliability and less burden on internal IT resources. JFrog Artifactory for dependency management offers a cloud service or a server you can download. If I purchase this service, can I say that my dependency management is in the cloud, regardless of how JFrog implements their cloud service? The answer is yes.
The True Cloud
While the cloud may mean different things to different people, preparing an application for a cloud architecture is not as simple as one might think. There are all kinds of decisions that need to be made and coding practices that should be adhered to. Typically, if you follow Java conventions, then you won’t have much of an issue with cloud deployments. Let’s look at an example Java web application and some of the concerns that need to be addressed prior to cloud deployment to a service such as Cloud Foundry.
- Your application must be bundled as a WAR file
- Your application must not access the file system (use the class loader to load files from within the WAR, but don’t write files)
- Resources/Services should be looked up via JNDI
- If you cache, utilize caching solutions like EHCache that propagate automatically across instances
- Persist data to a database such as MongoDB, MySQL or PostgreSQL to guarantee access
For the most part, if you follow the J2EE web application conventions regarding deployments, you won’t have an issue with a deployment to a true cloud environment.
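The file-system rule above can be sketched in a few lines. This is a minimal example of loading a properties file bundled inside the WAR via the class loader instead of reading from an absolute file-system path; the resource name and class name here are illustrative, not part of any particular application.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ResourceConfig {
    // Load a properties file bundled inside the WAR through the class
    // loader, rather than reading from an absolute file-system path.
    // Returns empty properties if the resource is missing or unreadable.
    public static Properties load(String resourceName) {
        Properties props = new Properties();
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        try (InputStream in = cl.getResourceAsStream(resourceName)) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            // fall through with empty properties
        }
        return props;
    }
}
```

Because the lookup goes through the class loader, the same code works whether the WAR is exploded on disk or served from memory by the cloud container.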
Cloud Architecture: Dynamic Scaling vs. Auto-scaling
One of the advantages of a cloud architecture is the ability to scale up instances of your application and pay your provider only for what you use. There are two types of scaling. The first is auto-scaling, where a tool measures load and automatically increases instances of your application to accommodate demand. The second is dynamic scaling, where you control the scale based on a load forecast. An example of dynamic scaling: suppose you normally have a load of 1,000 concurrent users and you have an event that will ramp your load up to 10,000. In a more traditional setting, you would have purchased servers and hardware to accommodate the worst-case scenario, which leaves your infrastructure largely underutilized most of the time, causing inefficiency and waste. In a cloud scenario, you might have 10 instances to service your normal load; for the 24-hour period of your event, you scale to 100 instances, and when the smoke clears, you return to 10 instances. All of this can be done with a single click of a button in a cloud infrastructure.
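The savings in that example come down to instance-hours. Here is a small sketch of the arithmetic, using the article’s numbers (10 instances normally, 100 for a 24-hour event); the 30-day billing month is an assumption for illustration, and real provider pricing varies.

```java
public class ScalingCost {
    // Instance-hours used when scaling elastically: the normal fleet for
    // most of the month, plus the enlarged fleet for the event window.
    public static long elasticHours(int normal, int peak, int eventHours, int monthHours) {
        return (long) normal * (monthHours - eventHours) + (long) peak * eventHours;
    }

    // Instance-hours used when provisioning for the worst case all month.
    public static long peakProvisionedHours(int peak, int monthHours) {
        return (long) peak * monthHours;
    }

    public static void main(String[] args) {
        int monthHours = 30 * 24; // assumed 720-hour billing month
        long elastic = elasticHours(10, 100, 24, monthHours); // 10*696 + 100*24 = 9360
        long fixed = peakProvisionedHours(100, monthHours);   // 100*720 = 72000
        System.out.println(elastic + " instance-hours vs " + fixed);
    }
}
```

Under these assumptions the elastic approach uses roughly an eighth of the instance-hours that peak provisioning would, which is the waste the paragraph above describes.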
Some cloud providers offer automatic scaling in the form of a tool. Amazon provides such a tool that will monitor load and let you set triggers to determine how many instances to ramp up and when. Automatic scaling is very useful because it handles the unforeseen, but it should not be used in lieu of true capacity planning. It can also be dangerous: intelligence needs to be built in to determine the difference between junk and legitimate traffic. For example, think of what would happen if a DoS attack were performed against your application. Remember, you pay based on the number of servers you ramp up.
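A minimal sketch of such a trigger, with a hard instance cap as the simplest guard against a junk-traffic spike scaling (and billing) without bound. The thresholds, names, and the single-step policy are all made up for illustration; real tools like Amazon’s use richer metrics and policies.

```java
public class AutoScaler {
    // Hypothetical trigger: add an instance when measured load per
    // instance exceeds a threshold, remove one when it falls below a
    // floor, and never exceed a hard cap. The cap bounds the bill if
    // junk traffic (e.g. a DoS attack) inflates the measured load.
    public static int nextInstanceCount(int current, double loadPerInstance,
                                        double scaleUpAt, double scaleDownAt,
                                        int min, int max) {
        if (loadPerInstance > scaleUpAt && current < max) {
            return current + 1;
        }
        if (loadPerInstance < scaleDownAt && current > min) {
            return current - 1;
        }
        return current;
    }
}
```

Even a toy policy like this makes the DoS point concrete: without the `max` cap, sustained fake load would ratchet the fleet, and the bill, upward indefinitely.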
Under the hood, scaling works by issuing commands to the server through an API. For example, with Cloud Foundry, the vmc command will let you monitor load and add instances, which in turn creates a JSON request that gets sent to the server to instruct it. You can also use a third-party tool that interfaces with the API in the same fashion, or you can build scaling intelligence into your own application by hitting the server API yourself. Using the latter technique, your application can control itself, making it very intelligent.
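To make the mechanics concrete, here is a sketch of building the kind of JSON instance-count request a CLI like vmc might send. The field names and payload shape are assumptions for illustration only, not the actual Cloud Foundry wire format.

```java
public class ScaleRequest {
    // Build a JSON body requesting a new instance count for an app.
    // The field names are illustrative, not the real Cloud Foundry API.
    public static String toJson(String appName, int instances) {
        return String.format("{\"name\":\"%s\",\"instances\":%d}", appName, instances);
    }
}
```

An application embedding this idea would watch its own load metrics and POST such a body to the provider’s API, which is the self-scaling technique described above.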
Regardless of how you see the cloud and how you utilize it, the endgame for your organization should be to offload the burden from your internal staff, decrease your expenses, provide greater uptime and flexibility, and gain the ability to scale dynamically.