Updated: 04.02.2019: Cloud Adoption / AWS (Overview text update)
Cloud-as-a-Service: Deciphering the Cloud

Introducing Cloud-as-a-Service

CloudasaService.co.uk is here to help decipher the Cloud operating model and its services. We will help explain what Cloud really means - the benefits of Private Cloud, the pitfalls of Public Cloud, and the definition of hybrid Cloud.

There is a general lack of understanding of what a hybrid Cloud is. So we will explain the principles of Cloud and how it benefits any business, large or small.

On close inspection, public Cloud is a complex array of choices. Some are compelling, all can be confusing, and many will become costly over time if not designed correctly.

CloudasaService will explain the service models, address the cost of consuming public Cloud services, and show how modern Cloud infrastructure is built to abstract the physical infrastructure from the service or services being delivered to the end user.

So, before we delve into the detail, let's make one thing clear, and this is very important – it is not necessary for a Cloud operating model to be supplied by a third-party service provider, such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform. The very same principles that define Cloud can be built on your own premises, called a private Cloud, and this is one of the most important facets of Cloud computing.

A Cloud service is built on a set of principles (or tenets). These Cloud principles can be satisfied by an infrastructure that has been designed and built on your own premises, to form a private Cloud. The very same Cloud principles will already exist as a framework for all off-premise (public) Cloud resources delivered by a third party.

This is the highest-level differentiator: private Cloud, which you build, own and manage; or public Cloud, which a third-party service provider owns and runs, and which you consume at a cost. Both are equally viable and have their own merits.

A hybrid Cloud incorporates both public and private.

When exploring the cost of public and private Cloud it becomes apparent that both are compelling for different reasons. Public Cloud has no CAPEX and very high OPEX. The support overheads in most areas are low, and there are no hardware or maintenance costs to consider. However, the technical expertise needed to build and maintain your public Cloud presence does not come cheap.

Private Cloud has predictably high CAPEX and fairly high OPEX. A private Cloud is really only viable if certain core services already exist in the company. For example, if datacenter or 'comms' room space exists, and there is a core network already in place, then the private Cloud option becomes very compelling. There are also some other benefits from having your data remain onsite (local) for requirements such as PCI, GDPR and financial data governance.

To make the decision between a private or public Cloud for your applications even more challenging, the cost of an effective private Cloud is falling through the adoption of software defined and open source architectures. These are rapidly replacing the higher cost infrastructures based on the ‘intelligent appliance’ model. This shift towards software defined technologies reduces the infrastructure complexities that exist in legacy architectures.

So as you can see, it's not so straightforward.

What exactly is Cloud?

The term 'Cloud' reflects the access and delivery characteristic for the service being consumed, be it platform, storage, software, or some other service. The infrastructure design that effectively supports the notion of a cloud service is still essentially a collection of IT assets delivering an IT service. A rack of hardware in a datacenter somewhere if you like.

The secret behind the Cloud operating model lies in the IT infrastructure's ability to meet the five tenets that define Cloud (there are plenty more tenets defining the operating model, but these can be considered the main ones).

Satisfying the main Cloud tenets is achieved through the virtualisation of each infrastructure building block: Compute, Storage and Network. It is ultimately virtualisation that makes a Cloud service possible.

Cloud is essentially an IT service that is delivered across a network, but it is also a mode of operation, delivering the flexibility needed to meet the Cloud tenets. As a potential consumer of Cloud services, you have the choice of building a Cloud estate that meets these tenets in your own datacentre, or using a third-party Cloud service provider, and letting them do the hard part.

The traditional, perhaps legacy, non-Cloud mode of operation is built around a physical IT hardware infrastructure running from an end user’s own or a colocation datacenter somewhere. This legacy infrastructure will have evolved over time, consisting of multiple discrete technologies from multiple vendors, and will lack the efficiency and orchestration that is essential in satisfying the key Cloud tenets.

So although a Cloud operating model can run from your own physical infrastructure in your own datacentre facility, the model that we are most familiar with is that served by a third-party service provider such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform. But Cloud is more of a methodology, built on infrastructure efficiencies that are achieved through the virtualisation of each of the physical building blocks. It is the consequence of these efficiencies that allows for an operating model that supports the tenets of Cloud, and it is these tenets that make Cloud what it is:
Self service
The end-user has the ability to request an IT (private or public Cloud) resource or service from a web-based portal
Remote network delivery
Access to the resource or service is across a network
Multi tenancy
The infrastructure and service building blocks supporting the Cloud service are shared with other Cloud users
Rapid elasticity
The service scales out and in, rapidly and often automatically, to match the demand placed on it
Measured service
Resource consumption is monitored and metered, so usage can be reported, controlled and billed
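The elasticity and measured service tenets work together: the platform meters per-tenant consumption, and the bill follows actual usage as the service scales out and in. A toy Python sketch of that idea (the class, rates and figures here are all hypothetical, purely for illustration):

```python
# Illustrative sketch only: a toy meter for the "measured service" tenet.
# A real Cloud platform meters CPU, storage and network per tenant; here
# we meter hypothetical instance-hours and bill on actual consumption.

from dataclasses import dataclass, field

@dataclass
class UsageMeter:
    rate_per_hour: float              # price per instance-hour (hypothetical)
    samples: list = field(default_factory=list)

    def record(self, instances: int, hours: float) -> None:
        """Record a usage sample: N instances running for H hours."""
        self.samples.append(instances * hours)

    def instance_hours(self) -> float:
        return sum(self.samples)

    def bill(self) -> float:
        """Measured service: you pay only for what was consumed."""
        return self.instance_hours() * self.rate_per_hour

meter = UsageMeter(rate_per_hour=0.10)
meter.record(instances=2, hours=8)    # quiet period: 2 instances for 8 hours
meter.record(instances=10, hours=2)   # peak: elasticity scales out to 10
meter.record(instances=2, hours=14)   # scale back in overnight

print(f"Instance-hours: {meter.instance_hours()}")  # 2*8 + 10*2 + 2*14 = 64
print(f"Bill: £{meter.bill():.2f}")                 # 64 * 0.10 = £6.40
```

The point of the sketch: because usage is metered, scaling in during quiet periods directly shrinks the bill, which is what makes elasticity worth having.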
So now that we have acknowledged that Cloud is a methodology and mode of operation, and that it can run from anywhere (so to speak), we can explore the categories of Cloud available to a business. These categories are broadly referred to as Public, Private, Hybrid and Edge, and are easy to understand once the fundamentals of Cloud are taken in:
Public
A Cloud service delivered by a remote third party
Private
Also known as 'on-premise', is built and owned by the Cloud end user
Hybrid
A combination of Public and Private services (perhaps you want your key applications running on-premise, and your backups written to lower cost public Cloud)
Edge
A mechanism to address some of the physical limitations in delivering services globally by shifting the resource 'experience' (say, storage) closer to the end user, thus removing the latency associated with accessing that resource from halfway around the world.
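The latency that Edge removes is easy to quantify: light in optical fibre travels at roughly 200,000 km/s, so distance alone sets a floor on round-trip time before any routing or processing overhead. A back-of-the-envelope calculation (the distances are illustrative):

```python
# Back-of-the-envelope sketch of why Edge matters. Propagation speed in
# fibre is roughly 200,000 km/s (about two-thirds of the speed of light
# in a vacuum), so distance alone sets a hard floor on latency.

FIBRE_SPEED_KM_PER_S = 200_000  # approximate propagation speed in fibre

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time imposed by distance alone."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

print(min_round_trip_ms(20_000))  # halfway around the world: ~200 ms
print(min_round_trip_ms(100))     # a nearby edge location: ~1 ms
```

Moving the resource from the other side of the world to a nearby edge location cuts the physical floor on round-trip time by two orders of magnitude, which is the whole premise of Edge.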

Help me decide!

Cloud brings with it tremendous agility, through a physical and software framework that removes much of the frustration that comes with a traditional legacy approach to provisioning IT services.

When we talk about Cloud services, these are services that deliver the Cloud operating model for both private and public deployments.

As we have seen in the description of Cloud and the definition of the Cloud tenets, the infrastructure building blocks need to be virtual in order to achieve the orchestration needed. But this can all be done privately using a combination of enterprise and open source technologies, and DevOps design methodologies. The DevOps toolchain provides a wealth of products and tools that enable the infrastructure components to operate in an orchestrated manner.

A modern Cloud platform will go far beyond the broad set of Cloud principles. It will shift the end user's focus from thinking of services as being delivered in physical terms, to treating them as loosely coupled managed services. Having these services ready to go at the click of a mouse button is a powerful driver.

And I will stress again that Cloud is an operating model that can be built cost effectively by you and your technology teams. Some would rightly say that it is more cost effective to run your own Cloud than to consume a public Cloud service.

So let’s take a look at an infrastructure scenario that was common practice not too long ago, and that shows how not to deliver an IT service to your customers (end users): a new physical server platform is needed to support a production application that has been developed to run ‘in-house’. The steps followed in the legacy on-premise infrastructure scenario would look something like:
1. Establish the compute characteristics needed to support the new application
2. Raise a request to the IT infrastructure team for a new compute platform
3. IT infrastructure team identifies a platform from their service catalogue
4. Purchase order is raised for the physical hardware, and approval requested from a list of approvers
5. Purchase order is raised by the purchasing team and the order dispatched to the vendor or IT reseller
6. Lead time for delivery established (8 weeks)
7. Request raised for storage (assuming SAN or NAS)
8. On delivery of the server hardware, the IT infrastructure team builds the new platform
9. Request placed for network connectivity
10. Application build and go-live
This 'legacy' example has taken perhaps 4 months from start to finish. It may seem a little extreme, but this process for provisioning a new in-house service on a new physical server is probably still standard practice in large organisations today. Of course provisioning a virtual machine on an existing hypervisor would speed this process up considerably, but would still typically require a series of steps, with each involving some manual intervention. In other words, not a self-serve process.

The same series of steps to provision a new platform from a Cloud service can be completed in a matter of minutes. The requester needs to know what compute platform to order, and the porting of the application and migration of data will of course need to be planned effectively (assuming the application has been developed ‘on-prem’). However, the actual build for the compute platform itself (IaaS), which took more than three months in the legacy scenario, is completed in only a few minutes.

One obvious challenge that remains if you are migrating from a legacy on-premise infrastructure to public Cloud, despite the infrastructure efficiencies, is the movement of data. This needs some additional effort, the scale of which will depend on the amount of data. It is, however, still achievable with relative ease through the data movement services offered by the service providers, which may involve copying large data volumes to a storage unit and physically transporting it to the service provider's site.

If you are building a private Cloud, this problem largely disappears. Food for thought indeed.

A well considered Cloud hosted application will support mechanisms that allow for portability between private and public Cloud, and even between service providers. In other words, it will have hybrid and multi-Cloud capability. It is by no means essential to have all of this capability, but it is achievable.

The movement of applications between Clouds can be achieved in various ways. Application containers (Docker) can be orchestrated using Kubernetes and moved between physical and virtual platforms. Enterprise virtualisation (VMware for example) will move virtual machines between physical platforms on demand or automatically.

Designing an application to run in containers will increase the density of workloads across the physical and virtual platforms that make up your Clouds, and can support the movement of individual containers between the Clouds and even between regions.

Containers can move between private and public Clouds with relative ease (although dependencies need to be considered and built in to the orchestration).

Applications are not, however, containerised unless designed specifically to run in containers. Designing an application to run as microservices is a design technique that helps achieve this. The obvious leader in the field of containerisation is Docker, and Kubernetes has become the industry standard for orchestration.

Cloud resources are easy to 'spin up', and the Cloud model makes the full automation of entire environment builds very easy. This is referred to as ‘Infrastructure as Code’ (IaC) and is made possible through software tools such as Terraform, Ansible and many more.
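The core idea behind IaC is declarative: you describe the desired end state, and the tooling works out what to create, change or destroy. Real tools such as Terraform and Ansible do this against provider APIs; the toy reconciler below works on plain dicts (all resource names and attributes are invented) purely to illustrate the pattern:

```python
# Hypothetical sketch of the Infrastructure as Code pattern: declare the
# desired state, diff it against current state, and act on the difference.
# Terraform calls this step a "plan"; this toy version uses plain dicts.

desired = {
    "web-1": {"size": "small", "image": "ubuntu-22.04"},
    "web-2": {"size": "small", "image": "ubuntu-22.04"},
    "db-1":  {"size": "large", "image": "ubuntu-22.04"},
}

current = {
    "web-1":  {"size": "small",  "image": "ubuntu-22.04"},
    "old-vm": {"size": "medium", "image": "centos-7"},
}

def plan(desired: dict, current: dict) -> dict:
    """Diff desired vs current state into create/destroy/change actions."""
    return {
        "create":  [n for n in desired if n not in current],
        "destroy": [n for n in current if n not in desired],
        "change":  [n for n in desired
                    if n in current and desired[n] != current[n]],
    }

actions = plan(desired, current)
print(actions)
# {'create': ['web-2', 'db-1'], 'destroy': ['old-vm'], 'change': []}
```

Because the spec is just data, it can live in version control and be applied repeatedly with the same result, which is what makes full environment builds a self-service operation.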

Tools such as Terraform and Ansible belong to a catalogue of tools that deliver the end-to-end automation needed for 'self service'. This catalogue can be referred to as the DevOps Toolchain.

It is important to remember that a public Cloud service provider is just that - a single provider of services from a growing list. You may want to change your Cloud service provider at some future date, which will be extremely hard to do.

This problem largely disappears when running a private Cloud with hybrid capability and adopting a sensible Cloud strategy (rather than just a 'Cloud First' strategy, which has been fashionable, albeit misguided, among managers).

A private, on-premise Cloud is a very real option. This is helped significantly by the evolution of open source and software defined IT over the last decade. There is a shift away from proprietary physical infrastructure with ‘intelligence’ built in, towards a software defined approach to building scalable platforms. This software defined approach to storage (SDS) has spawned a new(ish) wave of highly scalable IT infrastructure models that are built on commodity hardware. If designed properly, this approach will lower the cost of physical hardware dramatically as you are no longer investing in the very high cost proprietary physical infrastructure, but instead investing in a scalable software platform that will run on any standard server.

This IT model includes the hugely popular Hyper-Converged architectures (HCI), Composable infrastructure, and the immensely compelling OpenStack suite of projects. These all deliver very high performance with tremendous degrees of scalability, and will typically be accompanied by a management toolset that supports speedy provisioning of new environments. Collectively these features broadly satisfy most of the Cloud tenets. Add the Cloud automation tools for IaC described in the previous paragraphs, and all tenets are satisfied. You have your private Cloud.

Having an on-premise instance of Cloud can help with regulatory and compliance restrictions on where data can be stored, ensuring that you maintain control of your infrastructure. This can in turn reduce the impact of such things as a third-party service outage resulting from mismanagement of infrastructure or resources, or a security breach on a service provider's infrastructure.

...For Public

Cost is often cited as a disadvantage of public Cloud. This is largely true; it is expensive, and don’t let the Cloud provider tell you otherwise. It is, however, expensive for a reason. The range of resources and managed services from all service providers is overwhelming. Whichever way you approach a move towards Cloud, it will most likely need a substantial financial commitment, and one that must be forecast accurately. The public Cloud service providers all offer highly effective mechanisms to control and considerably reduce the overall cost, and these are an essential ingredient for workloads running in public Cloud. These mechanisms include the automatic scaling of resources up and down, and ephemeral 'serverless' instances that consume only the resource needed while a workload is running, to name but two. There are plenty more. So an application destined to run in public Cloud MUST be designed to do so with minimal resource consumption, using the efficiency techniques available.
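The automatic scaling just mentioned is, at heart, a simple control loop. Real providers implement it as managed policies (target tracking, step scaling and so on); the toy threshold rule below, with invented thresholds and limits, sketches the mechanism that keeps an elastic workload's bill down:

```python
# Illustrative sketch of an autoscaling decision rule. The 70%/30%
# thresholds and the instance limits are hypothetical; real Cloud
# autoscaling policies are managed services configured per workload.

def next_instance_count(current: int, avg_cpu_pct: float,
                        minimum: int = 1, maximum: int = 10) -> int:
    """Simple threshold policy: scale out above 70% CPU, in below 30%."""
    if avg_cpu_pct > 70:
        return min(current + 1, maximum)   # busy: add an instance
    if avg_cpu_pct < 30:
        return max(current - 1, minimum)   # quiet: remove an instance
    return current                         # within band: hold steady

print(next_instance_count(2, 85))   # busy: scale out to 3
print(next_instance_count(3, 20))   # quiet: scale in to 2
print(next_instance_count(2, 50))   # steady: stays at 2
```

Combined with per-use billing, every instance released during a quiet period is money not spent, which is why designing for these mechanisms is non-negotiable for public Cloud workloads.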

If you are planning a move to public Cloud then an investment in professional services can help reduce the long-term Cloud costs at the design stage, or you can save that overhead and attempt to design and build your Cloud environments in-house. Either way, it is essential that your monthly billing is considered in the design and the necessary mechanisms incorporated to reduce costs. All of this requires a low-level understanding of the environments that will move from your on-premise infrastructure. It will be far easier to develop your application in the Cloud, where you are able to prove your design and incorporate the required efficiencies as you go. Adapting an application environment's original design to operate effectively in a specific public Cloud service will require changes to the application design, and in some cases some fundamental changes to the core software components (database for example).

The second potential disadvantage of Public Cloud is its complexity. The complex nature of Public Cloud reflects the vast growth in functionality, and the battle between Cloud service providers to deliver ever greater functionality to take full advantage of the flexibility that Cloud offers. You can perform almost any action on your data in Public Cloud today, and it's clever stuff, delivering an extraordinary range of mechanisms that would have seemed almost impossible to achieve only a few years ago. The ability to perform analytics on a data lake using MapReduce-style mechanisms, for example, is a major challenge without the on-tap resources that Cloud makes available. And all the visualisation tools are there, ready and waiting. Containers are now standard practice for application agility, and help with portability. So how can these amazing mechanisms be a disadvantage? Well, it is complex, and still requires expertise to design and build effectively.

The last disadvantage (for now) is the risk of 'lock in' with your chosen third-party service provider. There may be a rather frantic stampede to public Cloud by businesses today, driven in most cases by senior management's desire to reduce capital expenditure, but if you are not in a position to extract your service from a public Cloud, or move it elsewhere at some point in the future, then you run a serious risk of being held to ransom at a later date (perhaps a little less drastic, but you get my drift). There are, again, mechanisms and software defined platforms that provide the flexibility needed to help avoid this risk. So portability is key, and a neutral approach to the service providers is needed.

...and Private...

Private Cloud has become a viable and cost effective option for businesses as a result of a few key developments in recent years - software defined storage (SDS), OpenStack, the increased move from physical to virtual compute, and the tools that drive Infrastructure as Code, to name a few. SDS removes the complex hardware dependency needed in a traditional SAN or Converged Infrastructure (CI), and allows for commodity servers to be used instead. The SDS architecture can be referred to as Hyper-Converged Infrastructure (HCI) and removes the need for the network of storage switches between host and storage, and with it the cost of that hardware (both network and storage hardware, and support), and also the complexity. And it is highly scalable!

OpenStack, on the other hand, is a software framework that allows existing infrastructure to be collectively presented to the end user as a set of Cloud resources. OpenStack achieves what has been a huge challenge for many years – the ability to deliver self-service resource provisioning on existing private infrastructure. Being open source, you can build an OpenStack cluster on Ubuntu hosts at no licence cost; the only real cost is the expertise needed to build the OpenStack environment.

One obvious disadvantage in private Cloud is the need for physical infrastructure, and technicians to support and run that infrastructure. This will be costly if a support team does not already exist. In addition, a datacentre facility or similar (colo, comms room perhaps) will be needed. So private Cloud will be a far more attractive proposition for a business that already has a datacentre facility, or at least a suitable comms room, and a technical team providing support already. However, the cost savings from an OpenStack private Cloud can be tremendous when compared to the cost of public Cloud.

Another technical disadvantage associated with owning your own Cloud is the overhead associated with disaster recovery and high availability services. Having an infrastructure that you own will typically include the need for a second platform at a second (DR) location, and a backup process that includes policies for backing up data to disk or tape, and managing the movement of data or physical tapes to and from an offsite location. This all requires effort, and that comes at a cost. One solution to both of these challenges may be to adopt a Public Cloud strategy for disaster recovery and backups, and run production services on your private Cloud. This is a classic Hybrid approach to Cloud. By containerising your application environments to make them portable between private and public Cloud, and performing backups to an on-prem backup platform and to low cost Cloud object storage or similar, you get the best of all worlds. Public Cloud can then be used to run your application if DR invocation were needed. There are modern backup platforms (software defined of course) that allow for entire environments to be backed up to public Cloud, and then used to run as a production environment very quickly if the need occurs. These are all considerations to be made when evaluating a Cloud strategy.

All the normal processes and procedures that are needed to run an enterprise IT infrastructure are also needed to run on-premise Cloud. This includes the obvious need for monitoring and escalation for faults and issues that occur during and outside of production hours. The modern HCI vendor will help facilitate some of these requirements, but that does not remove the need for a permanent technical support presence within the IT structure to provide onsite support. However, if these resources exist already, then they provide solid justification for a private or hybrid approach.

Despite the obvious overhead associated with a private Cloud, the actual physical platform will be far less complex than the traditional infrastructures mentioned earlier. So the real impact on existing technical resources will actually be quite small.

What is right for you? Public or Private?

If we were to focus purely on the cost of Public and Private Cloud platforms, then we would see that Public Cloud is not an inevitable first choice. Running a database server capable of supporting in excess of 500 concurrent users, for example, will most likely cost somewhere in excess of £6K per month. This is a very rough estimate to provide us with an example to work from, and it does not take into account the additional storage, network, web server and content server that may also be needed for an application environment. So you can see how the three-year costs can mount up, and how these costs can go some way towards justifying an on-premise platform instead of a public one. If the proposed on-premise platform satisfies the tenets of Cloud, and the infrastructure build costs are lower over the same three-year period than the equivalent public Cloud costs, then it is natural to consider building your own Cloud. A private Cloud is obviously made more practical if a datacentre facility is already available. It is also made more practical if the technology exists to reduce the impact on the datacentre (cab space, power, heat dissipation), which in turn lowers the running costs for on-premise Cloud.
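The three-year arithmetic is worth doing explicitly. Only the £6K/month figure comes from the example above; the private-side CAPEX and OPEX numbers below are hypothetical round numbers for illustration, and real quotes will vary widely:

```python
# Worked version of the rough three-year comparison. Only the £6K/month
# public figure comes from the text; the private CAPEX/OPEX figures are
# hypothetical placeholders, not real quotes.

MONTHS = 36                      # three-year horizon

public_monthly = 6_000           # £6K/month for the database server example
public_total = public_monthly * MONTHS

private_capex = 120_000          # hypothetical: commodity HCI hardware
private_monthly_opex = 1_500     # hypothetical: power, space, support share
private_total = private_capex + private_monthly_opex * MONTHS

print(f"Public, 3 years:  £{public_total:,}")    # £216,000
print(f"Private, 3 years: £{private_total:,}")   # £174,000
```

Even with generous private-side assumptions, a single always-on workload can cross the break-even point well inside three years, which is why the comparison deserves a proper model rather than a gut feel.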

Until fairly recently the private Cloud platform options were limited, and most enterprises looking to build their own Cloud were reliant on high cost proprietary vendor hardware. These proprietary platforms lacked the overall flexibility to effectively deliver the tenets needed to qualify as a Cloud platform without a suite of complex orchestration tools. In addition, the hardware that was available to attempt to deliver an on-premise Cloud infrastructure severely impacted the datacentre. This limitation and impact can now be removed with software defined techniques, and the choice of vendor platform available today has grown, and the choices continue to grow. In addition, the orchestration needed to allow existing infrastructure to function in a Cloud-esque manner can be achieved through OpenStack.

You can today build a genuine, cost effective private Cloud that delivers an end user experience comparable to that of its Public equivalent, but without all the features that are bundled with the public service. And in turn you can continue to control and own your data, and rest assured that your service will not be impacted by third-party service limitations that are outside of your control. Let's not forget, however, that many of the very clever features available in public Cloud are there to help improve efficiency and lower cost. But ironically, designing your Cloud to incorporate these features requires architecting skills that are themselves costly. These costs may not exist in a private Cloud (with some exceptions), and the other overheads associated with a private Cloud, such as support costs, can be diluted significantly if a team already exists supporting existing platforms.

The Software Defined computing platforms that run on commodity hardware are very much simpler in their design, as they are not built on complex physical hardware, and so in turn do not need the specifically skilled technicians needed to support them. In the legacy example used previously, the connectivity between compute and storage is highly complex. It can typically require server and compute virtualisation (hypervisor) skills, Storage Area Network skills (fibre channel networking, masking and zoning), and specific vendor storage skills (EMC, HDS, NetApp, etc.). The engineers with these skills are understandably costly, as indeed are the hardware platforms themselves, which are designed to deliver a Tier-1 service.

You would be excused for thinking that I am just a little fanatical about software defined computing (or Software Defined Storage, SDS, a commonly used collective term). As good as this approach to IT is, it will not entirely match the service catalogue available from a public Cloud service provider. The whole point of public Cloud is to provide a broad range of services that can be 'stood up' at the click of a button. And the range of services available today is vast, covering all possible requirements and combinations. This is the public Cloud service provider's 'value add', and it is what gives the overall public Cloud service so much of its appeal. It will not exist in an on-premise private Cloud simply by introducing software defined computing. You will need to develop any additional services yourself, and depending on your requirement, this can be extremely complex. One example where public Cloud is clearly streets ahead is business analytics. The resources needed to support data warehousing and MapReduce-style processing are significant, and available at the click of a button in public Cloud. And when you have finished with them, they can be switched off!

This still leaves us without an answer to the question – which is right for your organisation? Public or Private? Well, the easy answer is both, but the correct answer will depend on a closer analysis of the factors mentioned above.