Cloud brings with it tremendous agility through a physical and software framework that removes much of the friction inherent in a traditional, legacy approach to the provisioning of IT services.
When we talk about Cloud services, we mean services that deliver the Cloud operating model, for both private and public deployments.
As we have seen in the description of Cloud and the definition of the Cloud tenets, the infrastructure building blocks need to be virtual in order to achieve the orchestration needed. But this can all be done privately using a combination of enterprise and open source technologies, and DevOps design methodologies. The DevOps toolchain provides a wealth of products and tools that enable the infrastructure components to operate in an orchestrated manner.
A modern Cloud platform will go far beyond the broad set of Cloud principles. It will shift the end user's focus from thinking of services as being delivered in physical terms, to treating them as loosely coupled managed services. Having these services ready to go at the click of a mouse button is a powerful driver.
And I will stress again: Cloud is an operating model that can be built cost effectively by you and your technology teams. Some would rightly say that it is indeed more cost effective to run your own Cloud than to consume a public Cloud service.
So let's take a look at an infrastructure scenario that was common practice not too long ago, and which shows how not to deliver an IT service to your customers (end users) - a new physical server platform is needed to support a production application that has been developed to run 'in-house'. The steps followed in this legacy on-premise infrastructure scenario would look something like this:
1. Establish the compute characteristics needed to support the new application
2. Raise a request to the IT infrastructure team for a new compute platform
3. IT infrastructure team identifies a suitable platform from their service catalogue
4. Purchase requisition is raised for the physical hardware, and approval requested from a list of approvers
5. Purchase order is raised by the purchasing team and dispatched to the vendor or IT reseller
6. Lead time for delivery established (8 weeks)
7. Request raised for storage (assuming SAN or NAS)
8. On delivery of the server hardware, the IT infrastructure team builds the new platform
9. Request placed for network connectivity
10. Application build and go-live
This 'legacy' example has taken perhaps 4 months from start to finish. It may seem a little extreme, but this process for provisioning a new in-house service on a new physical server is probably still standard practice in large organisations today. Of course, provisioning a virtual machine on an existing hypervisor would speed this up considerably, but it would still typically require a series of steps, each involving some manual intervention. In other words, it is not a self-serve process.
The same series of steps to provision a new platform from a Cloud service can be completed in a matter of minutes. The requester needs to know what compute platform to order, and the porting of the application and migration of data will of course need to be planned effectively (assuming the application has been developed ‘on-prem’). However, the actual build for the compute platform itself (IaaS), which took more than three months in the legacy scenario, is completed in only a few minutes.
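The contrast can be made concrete with a back-of-the-envelope model. The step durations below are illustrative assumptions, not measured figures; only the 8-week delivery lead time comes from the scenario above.

```python
# Rough model of provisioning lead times: legacy procurement vs Cloud self-service.
# All durations are illustrative assumptions, expressed in minutes.

LEGACY_STEPS = {
    "establish compute characteristics": 3 * 24 * 60,   # ~3 days
    "raise request to infrastructure team": 1 * 24 * 60,
    "identify platform in service catalogue": 2 * 24 * 60,
    "purchase approval cycle": 7 * 24 * 60,
    "order dispatched to vendor": 2 * 24 * 60,
    "hardware delivery lead time": 8 * 7 * 24 * 60,     # the 8 weeks quoted above
    "storage request (SAN/NAS)": 5 * 24 * 60,
    "platform build by infrastructure team": 10 * 24 * 60,
    "network connectivity request": 5 * 24 * 60,
    "application build and go-live": 14 * 24 * 60,
}

CLOUD_STEPS = {
    "choose compute platform from catalogue": 10,
    "provision compute via self-service (IaaS)": 5,
    "attach storage and network (automated)": 5,
}

def total_days(steps: dict) -> float:
    """Sum the step durations and convert minutes to days."""
    return sum(steps.values()) / (24 * 60)

print(f"Legacy: ~{total_days(LEGACY_STEPS):.0f} days")
print(f"Cloud:  ~{sum(CLOUD_STEPS.values())} minutes")
```

Even with generous assumptions for the legacy steps, the totals land in the region described above: months for the legacy process, minutes for the Cloud self-service equivalent.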
One obvious challenge that remains if you are migrating from a legacy on-premise infrastructure to public Cloud, despite the infrastructure efficiencies, is the movement of data. This needs additional effort, the scale of which will depend on the volume of data. It is still achievable with relative ease, however, through the data movement services offered by the service providers, which may involve copying large data volumes to a storage unit and physically transporting it to the service provider's site.
If you are building a private Cloud, this problem largely disappears. Food for thought indeed.
A well-considered Cloud hosted application will support mechanisms that allow for portability between private and public Cloud, and even between service providers. In other words, it will have hybrid and multi-Cloud capability. This capability is by no means essential, but it is achievable.
The movement of applications between Clouds can be achieved in various ways. Application containers (Docker) can be orchestrated using Kubernetes and moved between physical and virtual platforms. Enterprise virtualisation (VMware for example) will move virtual machines between physical platforms on demand or automatically.
Designing an application to run in containers will increase the density of workloads across the physical and virtual platforms that make up your Clouds, and can support the movement of individual containers between the Clouds and even between regions.
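The density claim can be illustrated with a simple, deliberately naive first-fit placement sketch. The container sizes and host capacity below are illustrative assumptions, and real schedulers such as Kubernetes consider far more than memory.

```python
# Naive first-fit placement: pack container memory demands onto hosts.
# Illustrates why many small containers achieve higher density than
# one application per server. All sizes are illustrative (GiB).

def first_fit(demands: list, host_capacity: int) -> list:
    """Place each demand on the first host with room, opening new hosts as needed."""
    hosts = []
    for demand in demands:
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])
    return hosts

containers = [2, 4, 1, 3, 2, 2, 1, 4, 3, 2]   # per-container memory demand (GiB)
hosts = first_fit(containers, host_capacity=8)

# One application per physical server would need 10 hosts;
# packing containers needs far fewer.
print(f"{len(containers)} workloads packed onto {len(hosts)} hosts")
```

The same packing logic is what lets an orchestrator rebalance individual containers between Clouds and regions: each workload is a small, movable unit rather than a whole machine.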
Containers can move between private and public Clouds with relative ease (although dependencies need to be considered and built in to the orchestration).
Applications are not, however, containerised unless designed specifically to run in containers. Designing an application to run as microservices is a technique that helps achieve this. The obvious leader in the field of containerisation is Docker, and Kubernetes has become the industry standard for orchestration.
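To give a flavour of what Kubernetes orchestration metadata looks like, the sketch below builds a minimal Deployment manifest as a plain Python dictionary. The application name, image registry, and port are hypothetical placeholders; in practice this would usually be written in YAML and applied with kubectl.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict.
    The name and image are illustrative placeholders."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 8080}],
                        }
                    ]
                },
            },
        },
    }

# Hypothetical microservice: 3 replicas of a containerised 'orders' service.
manifest = deployment_manifest("orders-service", "registry.example.com/orders:1.0", replicas=3)
print(json.dumps(manifest, indent=2))
```

The point of the manifest is declarative portability: the same description of the desired state can be applied to a Kubernetes cluster running privately or on any public Cloud.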
Cloud resources are easy to 'spin up', and the Cloud model makes the full automation of entire environment builds very easy. This is referred to as ‘Infrastructure as Code’ (IaC) and is made possible through software tools such as Terraform, Ansible and many more.
Tools such as Terraform and Ansible belong to a catalogue of tools that deliver the end-to-end automation needed for 'self service'. This catalogue can be referred to as the DevOps Toolchain.
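Terraform configurations are usually written in HCL, but Terraform also accepts an equivalent JSON syntax (files named `*.tf.json`), which makes IaC easy to sketch from Python. The resource name, AMI identifier, and instance type below are placeholders, not values from any real environment.

```python
import json

def instance_config(name: str, ami: str, instance_type: str) -> dict:
    """Build a minimal Terraform JSON configuration describing one AWS
    compute instance. All identifiers here are illustrative placeholders."""
    return {
        "resource": {
            "aws_instance": {
                name: {
                    "ami": ami,
                    "instance_type": instance_type,
                    "tags": {"Name": name},
                }
            }
        }
    }

config = instance_config("app_server", "ami-0123456789abcdef0", "t3.micro")

# Written to main.tf.json, this becomes input to `terraform plan` / `apply`:
# the environment definition is versioned, reviewable code rather than a
# sequence of manual requests.
print(json.dumps(config, indent=2))
```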
It is important to remember that a public Cloud service provider is just that - a single provider of services from a growing list. You may want to change your Cloud service provider at some future date, and that can be extremely hard to do once your workloads and data are embedded in one provider's services.
This problem largely disappears when running a private Cloud with hybrid capability and adopting a sensible Cloud strategy (and not just a 'Cloud First' strategy which was a fashionable albeit misguided strategy among managers).
A private, on-premise Cloud is a very real option. This is helped significantly by the evolution of open source and software defined IT over the last decade. There is a shift away from proprietary physical infrastructure with 'intelligence' built in, towards a software defined approach to building scalable platforms. This software defined approach - to storage (SDS) in particular - has spawned a new(ish) wave of highly scalable IT infrastructure models built on commodity hardware. If designed properly, this approach will lower the cost of physical hardware dramatically: you are no longer investing in very high cost proprietary physical infrastructure, but in a scalable software platform that will run on any standard server.
This IT model includes the hugely popular Hyper-Converged Infrastructure (HCI) architectures, Composable infrastructure, and the immensely compelling OpenStack suite of projects. These all deliver very high performance with tremendous degrees of scalability, and will typically be accompanied by a management toolset that supports speedy provisioning of new environments. Collectively these features broadly satisfy most of the Cloud tenets. Add the Cloud automation tools for IaC described in the previous paragraphs, and all tenets are satisfied. You have your private Cloud.
Having an on-premise instance of Cloud can help with regulatory and compliance restrictions on where data can be stored, ensuring that you maintain control of your infrastructure. This can in turn reduce the impact of events such as a third-party service outage resulting from mismanagement of infrastructure or resources, or a security breach of a service provider's infrastructure.