Updated: 04.02.2019: Cloud Adoption / AWS (Overview text update)
Cloud-as-a-Service: Deciphering The Cloud

A Brief History of Virtualisation

Virtualisation is a mechanism by which access to a resource is managed by a layer that separates the resource itself from the physical hardware beneath it. This layer is commonly referred to as an abstraction layer. An obvious example of compute virtualisation is the Hypervisor (VMware or Hyper-V, for example): the software that hosts many virtual machines on a single physical machine.

Virtualisation of the compute platform dates back to the mid-1960s, when IBM introduced an early version in the form of the IBM System/360 Model 67 mainframe. The minicomputers that evolved in the 1970s, notably Digital Equipment Corporation's VAX range, very successfully incorporated virtual memory, and DEC became a market leader in minicomputers (ironically named, given their size).

Virtualisation on x86 appeared in the late 1990s, when a research team at Stanford developed the Disco project, whose creators went on to found VMware. VMware and other compute virtualisation platforms entered circulation among enterprise users in the early 2000s, and revolutionised the strategy for compute processing from that point. It became a race to adopt a virtualised compute strategy and squeeze ever greater efficiencies from physical hardware, much like the race we see today in the stampede to public Cloud. Before virtualisation took hold, the standard approach was to purchase dedicated physical machines for every workload. This resulted in a huge sprawl of servers running individual business applications. When you consider that utilisation may, at best, run somewhere between 35% and 45% on a single physical machine, then more than half of your expensive physical resource is lying idle, waiting on the off chance that it may be consumed as demand grows. The inefficiencies become even more alarming when you factor in the additional overhead of disaster recovery, which may see you running a mirror datacentre that sits entirely idle waiting for that disaster to happen. That was the 1990s and early 2000s, until attitudes changed and technology moved forward.

The benefits of virtualisation were also realised, albeit later, for data storage. For most of the 2000s, storage suffered the same pre-virtualisation inefficiencies. This was a time when the SAN (Storage Area Network) was growing as an enterprise storage infrastructure. The SAN allowed for the consolidation of storage by providing a fibre channel network for servers to attach to storage disks (block storage LUNs) presented from a ‘farm’ of storage arrays. The SAN provided a high-speed storage platform accessible to a very large number of servers, with performance similar to storage attached directly to the server. It also reduced the support overhead by consolidating storage in one place, rather than having hundreds or even thousands of servers with direct-attached physical disks. Although the SAN improved storage efficiencies (imagine the overwhelming challenge of managing a thousand servers with at least four physical disks each), the earlier storage arrays, with a few exceptions, did not support virtualisation mechanisms such as ‘thin provisioning’, de-duplication and compression, which deliver far greater efficiency in storage utilisation. 3PAR (now owned by HPE) was an early advocate of thin provisioning on its block storage, and built its popularity on being able to ‘over-provision’ and therefore drastically improve storage utilisation. A thin provisioned LUN (logical disk) is essentially a container that consumes no physical storage until it is written to, and it will only ever consume the amount of storage actually written to it. Compare that to the pre-virtualisation LUN, which reserved (effectively consumed) its full size from the moment it was created.
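
To make the thin provisioning idea concrete, here is a minimal Python sketch, purely illustrative and not any vendor's implementation, of a LUN that only consumes backing storage for the blocks actually written:

    class ThinLUN:
        """Illustrative thin-provisioned LUN: capacity is advertised up front,
        but physical storage is only consumed for blocks that are written."""

        def __init__(self, advertised_gb, block_size_mb=1):
            self.advertised_gb = advertised_gb
            self.block_size_mb = block_size_mb    # assumed 1 MB allocation unit
            self.written_blocks = {}              # block number -> data

        def write(self, block_number, data):
            # Physical storage is consumed only at the moment of the write.
            self.written_blocks[block_number] = data

        def consumed_gb(self):
            # Only written blocks count against the physical pool.
            return len(self.written_blocks) * self.block_size_mb / 1024

    # A 500 GB thin LUN with 10 GB written consumes roughly 10 GB of physical
    # storage, whereas a traditional 'thick' LUN would have reserved all 500 GB
    # at creation time.
    lun = ThinLUN(advertised_gb=500)
    for block in range(10 * 1024):                # write 10 GB in 1 MB blocks
        lun.write(block, b"data")
    print(lun.advertised_gb, lun.consumed_gb())   # 500 10.0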

If we fast-forward to the present day, we can think of application containers (Docker) as the next layer of efficiency. Among the benefits of containerising an application is the increase in density achieved by running multiple application containers on a single virtual machine, which in turn shares a physical machine with many other virtual machines.
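
As a simple, hedged illustration, assuming a local Docker Engine and the docker Python SDK are installed, and using nginx purely as an example image, several application containers can be started on a single virtual machine in a few lines:

    import docker

    # Connect to the Docker Engine on this (virtual) machine.
    client = docker.from_env()

    # Run several instances of the same application image side by side.
    # Each container is far lighter than a full VM, which is where the
    # density gain over one-application-per-VM comes from.
    containers = [
        client.containers.run("nginx:latest", detach=True, name=f"web-{i}")
        for i in range(5)
    ]

    for c in containers:
        print(c.name, c.status)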

Grow with demand

With the introduction of what is effectively an abstracted virtualisation layer for each Cloud building block, virtualisation controls what resource runs where, and how it runs. Removing the traditional (legacy) pools of resource dedicated to a particular business function, and introducing a single virtual resource layer across each building block, means that resources can be moved seamlessly between physical platforms.

The abstraction from the physical layer also allows physical resources to be increased without risking interruption to the business functions and processes using them. This makes for a scalable infrastructure, which is one of the essential principles of Cloud. It should no longer matter to the end user where their process runs, or where their data physically resides, as long as the agreed service levels and levels of high availability are met. The physical infrastructure behind the service delivery can therefore be high-end bespoke, or commodity, and that flexibility gives the infrastructure owners (the Cloud service providers in this case) far greater choice of physical platform to invest in. In the case of the main Cloud service providers, the hardware will in many cases be bespoke, having been developed to support the services in the catalogue that their customers choose from. This is the ‘smoke and mirrors’ element of the Cloud service advertised by the provider: you have no idea where your service is running in their infrastructure, and indeed whether you are getting value for money. There is a very real risk that over time the performance of your application may deteriorate because it runs from a part of the Cloud that is over-utilised, or worse, that your virtual platform has been moved to a lower class of service. Things to be mindful of.

An effective Cloud architecture allows resources to be managed closely. A public Cloud comes with its front-end console, from which all of the services are available for selection. For a private Cloud the options are rather more complicated. There are myriad tools that can be used to run a private Cloud, and the decision on which combination to use will depend to a certain degree on the chosen vendor physical platforms. However, to build a true private Cloud will likely need the OpenStack suite of tools (projects), which are designed specifically to deliver a Cloud service. Whichever combination of tools is employed to make a private Cloud, it must provide a high degree of visibility of resource utilisation, and the utilisation data must be accessible to report and alert on. This can then be used to trigger automated virtual pool expansion, or the manual process needed to request additional physical resource. All of this, of course, is taken care of in the public Cloud.
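
The shape of that monitoring loop is simple enough to sketch. The Python below is illustrative only; get_pool_utilisation, expand_pool and raise_ticket are hypothetical stand-ins for whatever the chosen tooling actually exposes:

    EXPANSION_THRESHOLD = 0.80   # expand once a pool is 80% consumed (example value)
    EXPANSION_STEP_GB = 512      # hypothetical increment to add

    def check_and_expand(pools, get_pool_utilisation, expand_pool, raise_ticket):
        """Poll each resource pool and either trigger automated virtual pool
        expansion or raise a request for additional physical resource."""
        for pool in pools:
            utilisation = get_pool_utilisation(pool["name"])   # returns 0.0 - 1.0
            if utilisation < EXPANSION_THRESHOLD:
                continue
            if pool.get("auto_expand", False):
                # The virtual pool can grow automatically within physical headroom.
                expand_pool(pool["name"], EXPANSION_STEP_GB)
            else:
                # Physical resource is exhausted: hand off to the manual process.
                raise_ticket(f"Pool {pool['name']} at {utilisation:.0%}, "
                             "additional physical capacity required")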

Increasing a resource, let's say storage, can be a seamless operation made possible by the flexibility that Cloud, through virtualisation, introduces. But public Cloud goes way beyond the traditional virtualisation and orchestration mechanisms. As mentioned already, Cloud service providers have themselves developed the physical platforms that their services run from, and this has made it possible to deliver the extraordinary range of features available to their customers. However, if you do not need access to that extraordinary range of services, then why pay the cost of public Cloud? More food for thought.


The Network Virtualisation Piece

Network virtualisation has been available for some time in the form of VLANs, which segment resources into subnets. Workloads and trust groups can be segmented based on business unit, application type, data sensitivity and performance.

WANs also support virtualisation, with services running in one geographic location connected to another by dark fibre.

For Storage Area Networks (SAN), fully converged connectivity that employs Fibre Channel over Ethernet (FCoE) to the storage removes the need for additional high-cost infrastructure, such as SAN switches and HBA cards in the hosts. However, FCoE has not been adopted by enterprises as was anticipated some years ago, and not all storage vendors support FCoE on their platforms. iSCSI is another protocol that allows for the convergence of storage and network traffic, and is a realistic alternative to the higher-cost SAN. Convergence also introduces some management challenges between Network and Storage support teams, so ownership of functions and responsibilities needs to be clearly defined. But the fact remains that full convergence is not as popular among infrastructure users as you would expect, given the compelling features it delivers. This may be because a considerable investment has already been made in a SAN infrastructure, or because of a shift towards NAS as a lower-cost alternative to SAN now that the network can support far higher speeds.

In a virtual datacentre, networks can be formed from discrete segments, on which traffic can be isolated, and these can be managed as logical networks. 10Gbps Ethernet supports distances of 300 metres on multimode fibre and over 40km on single-mode fibre, and has evolved such that the latest Ethernet supports all upper-layer services.

VLANs are switched networks that are logically segmented regardless of the user's physical location, and are created at layer 2 of the OSI model. Similar to the way in which SAN switches can be segmented into virtual SANs (VSANs), with traffic limited to each VSAN, a network switch can be divided into multiple VLANs. In both cases, broadcast traffic is reduced and more bandwidth is made available for user traffic. Routers are used to connect separate networks that are segmented into subnets, or to access remote sites across wide area links. Where you have multiple VLANs on a switch, it is necessary to enable VLAN tagging on the ports that communicate out. When VLAN tagging is enabled, a tag is inserted into the Ethernet frame that tells the switch which VLAN the frame is destined for. Ports that are configured for VLAN tagging are called trunk ports. A common use of VLAN tagging is on Inter-Switch Links (ISLs), which are common in SAN fabrics and used to carry traffic between switches. So, if a host is connected to a network switch that is configured for multiple VLANs, the network interface on the host (NIC) can function as if it were multiple NICs on different physical networks. This is particularly useful for virtual machines.
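
For illustration, the Scapy library (if installed) can show what an 802.1Q VLAN tag looks like inside an Ethernet frame; the VLAN ID and addresses below are arbitrary example values:

    from scapy.all import Ether, Dot1Q, IP

    # A frame as it would leave an access (untagged) port.
    untagged = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / IP(dst="10.1.20.5")

    # The same frame as carried over a trunk port: an 802.1Q (Dot1Q) header is
    # inserted after the Ethernet header, carrying the VLAN the frame belongs to.
    tagged = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / Dot1Q(vlan=20) / IP(dst="10.1.20.5")

    tagged.show()   # displays the Dot1Q layer with vlan=20 between Ether and IP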

The association of virtualisation with Cloud means that you will find the word ‘Cloud’ wrapped up in numerous new products. Call me cynical, but it is most likely driven once again by a need to have a Cloud strategy: having ‘Cloud’ in the title of any project will be well received when it comes to justifying the spend. Taking my cynical hat off for a moment, there are valid cases, such as Cisco VACS (Virtual Application Cloud Segmentation), which applies the culture of Cloud (self-service, for example) to the provisioning of networks.

Exponential growth

We have touched on much of this already, but to reiterate, storage virtualisation has been an essential goal for all storage vendors. It is a key mechanism in addressing the exponential growth in the demand for storage. Virtualisation exists as a default in fibre channel SAN storage and Network Attached Storage (NAS), and is achieved through a number of key features:

. Thin Provisioning
. Automated Tiering
. De-duplication
. Data compression
. Virtualisation Controller
. Virtualisation APIs

Thin Provisioning allows for significantly higher storage utilisation, eliminating as much as possible the need for large, expensive pools of unused storage. It is now common practice on all vendor platforms, and an essential virtualisation feature (see the virtualisation summary at the top of this page). Looking back, some vendors were surprisingly slow to engineer thin provisioning into their high-end enterprise platforms. EMC, for example, were without thin provisioning on their high-end block storage platform until they introduced the VMAX storage array in about 2010 (2011 onwards by the time the VMAX reached some customers). By comparison, 3PAR introduced their block storage array with thin provisioning in 2005! I guess EMC were perhaps a little concerned about the potential for lost revenue from these new efficiencies, so were not so keen to develop them. I suspect, however, that it was more likely the challenge that exists in implementing thin provisioning on a block storage platform.

Automated Tiering is the ability to support high- and low-performing storage tiers, perhaps in the same array (but not necessarily), and for data to be seamlessly moved to the appropriate tier depending on the I/O profile of the data residing on that portion of disk. The higher-performing tiers are more expensive, so any unnecessary consumption is kept to a minimum, while the lower-cost, lower-performing tier is grown to hold the majority of data. Automated tiering allows for a far higher density of low-cost, lower-tier storage whilst maintaining I/O performance for the applications that need the faster tier. Multiple tiers can reside in the same storage frame, or can in some cases reside in separate frames and be tiered by an external virtualisation controller. It is inevitable that the spinning disk will at some point very soon become a thing of the past; it just depends on how quickly the solid-state alternative can be made cost effective.
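
A highly simplified sketch of the placement decision, with illustrative thresholds rather than any vendor's actual algorithm, might look like this:

    # Hypothetical I/O thresholds (IOPS per extent), highest tier first.
    TIERS = [
        ("flash", 500),   # hot extents earn the expensive, fast tier
        ("sas",    50),   # warm data sits on mid-range disk
        ("nl-sas",  0),   # everything else drops to cheap, high-capacity disk
    ]

    def choose_tier(extent_iops):
        """Return the first tier whose activity threshold the extent meets."""
        for tier_name, min_iops in TIERS:
            if extent_iops >= min_iops:
                return tier_name
        return TIERS[-1][0]

    def rebalance(extents):
        """Yield (extent_id, target_tier) moves for extents on the wrong tier."""
        for extent in extents:
            target = choose_tier(extent["iops"])
            if target != extent["tier"]:
                yield extent["id"], target

    # A busy extent on slow disk is promoted; an idle one on flash is demoted.
    sample = [
        {"id": "ext-1", "iops": 900, "tier": "nl-sas"},
        {"id": "ext-2", "iops": 5,   "tier": "flash"},
    ]
    print(list(rebalance(sample)))   # [('ext-1', 'flash'), ('ext-2', 'nl-sas')]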

Virtualisation Controllers provide an abstraction layer between physical storage and host platforms. This allows data to be moved between storage platforms and between physical locations without interrupting the service, and in most cases without the host server and application being aware. An abstracted virtualisation layer also provides enhanced capabilities that allow, for example, applications to write to two physical datacentre locations simultaneously (cross-site mirroring), which makes for seamless application failover in the event of a localised disaster. Virtualisation controllers can also perform some of the functions that are internal to many storage arrays, such as automated tiering between physical storage arrays.
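
The write path behind cross-site mirroring can be sketched very simply: the acknowledgement only goes back to the application once both sites hold the data. The two site objects below are hypothetical stand-ins for the real storage back ends:

    def mirrored_write(block, data, site_a, site_b):
        """Synchronously write a block to both datacentres, so either site can
        serve the data after a localised disaster."""
        ok_a = site_a.write(block, data)
        ok_b = site_b.write(block, data)
        if not (ok_a and ok_b):
            # A real controller would trigger resync or failover handling here;
            # this sketch simply surfaces the failure.
            raise IOError(f"mirrored write of block {block} failed "
                          f"(site A: {ok_a}, site B: {ok_b})")
        return True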

Storage Virtualisation APIs are available for hypervisors to offload storage workload to the storage array, reducing the overhead on the hypervisor and thus improving efficiency and performance. Some examples of storage operations that can be offloaded to the array (known as primitives by a popular hypervisor) are listed below, with a conceptual sketch of the idea after the list:

. XCOPY (Full Copy)
. Block Zeroing / Write Same (Zero)
. Hardware Assisted Locking / Atomic Test and Set (ATS)
. Thin Provisioning Reclaim (UNMAP)
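
Purely as a conceptual illustration of the offload idea (read_block, write_block and array_xcopy below are hypothetical functions, not a real hypervisor or array API), the difference between the host copying data itself and handing the copy to the array looks like this:

    def host_copy(read_block, write_block, source_lun, target_lun, block_count):
        """Without offload: every block travels up to the hypervisor and back
        down again, consuming host CPU, memory and storage bandwidth."""
        for block in range(block_count):
            data = read_block(source_lun, block)
            write_block(target_lun, block, data)

    def offloaded_copy(array_xcopy, source_lun, target_lun, block_count):
        """With an XCOPY-style primitive: the hypervisor issues a single command
        and the storage array moves the blocks internally."""
        array_xcopy(source_lun, target_lun, block_count)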

But all of this storage virtualisation is being overshadowed to a large degree by software-defined storage (SDS), which removes the dependency on the very expensive proprietary physical storage platforms that have dominated IT infrastructure for so long. SDS allows comparatively low-cost commodity platforms to scale massively whilst maintaining very high performance. And it achieves all of this without the need for complex hardware infrastructure that requires highly skilled technicians to build and run. It makes the private Cloud a reality for businesses that are not so keen on the public Cloud.

Self Service

Self Service is probably the most recognised Cloud tenet. It is quite obvious that for a Cloud operating model to work, you need an effective portal that presents all the services available for the consumer to request, and those services must be made available promptly. There follows the need for infrastructure automation and orchestration to be integrated into the Cloud framework, to provide agile and efficient service delivery to the end user. When you consider that the service provider will likely have multiple physical platforms from multiple vendors, ensuring that each is capable of coordinating the provisioning of its resource with the next platform (for example, provisioning an amount of block storage for a specific virtual machine) is non-trivial, and in the case of the public Cloud service providers this has been very successfully developed over time. Achieving the same for an existing enterprise that runs a traditional on-premise IT infrastructure requires considerable effort, and is unlikely to be truly achievable without adopting a complex framework of software automation tools, or instead re-purposing the hardware and building a private Cloud using OpenStack.

Public Cloud resources should be considered not as physical hardware, but as software resources which can be easily created, replaced, moved and destroyed. This requires a whole different mindset compared to traditional infrastructure: treating resources as software rather than hardware. This may seem like an odd concept, but it becomes a lot clearer when you explore how the automation of resource creation is achieved in today's Cloud. Automating resource creation can be done through templates that are built from common formats and markup languages, and written in plain text. These templates describe the resource being created (for example, a server platform configuration designed to support a web server). Once written, the template can be applied at any time, perhaps from a self-service catalogue or as the result of a trigger from some event, and represents true automation. This resource management and creation is all software driven, and is commonly referred to as Infrastructure as Code (IaC). Many of the services made available by the service providers today barely relate to physical hardware at all, such as serverless resources and the many managed services that deliver a database or some other resource.
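
As a generic illustration, not tied to any particular provider's template format, a resource description is just structured plain text. The sketch below builds a hypothetical web server definition in Python and writes it out as JSON; an orchestration engine could then act on the file each time it is applied:

    import json

    # A hypothetical plain-text template describing a small web server platform.
    web_server_template = {
        "description": "Small web server for the self-service catalogue",
        "resources": {
            "web01": {
                "type": "virtual_machine",
                "properties": {
                    "cpu": 2,
                    "memory_gb": 4,
                    "image": "ubuntu-22.04",
                    "network": "dmz-vlan-20",
                },
            },
            "web01-data": {
                "type": "block_volume",
                "properties": {"size_gb": 50, "attach_to": "web01"},
            },
        },
    }

    # Written once, the template can be applied repeatedly (from a self-service
    # catalogue request or an event trigger) to produce identical environments.
    with open("web_server.json", "w") as f:
        json.dump(web_server_template, f, indent=2)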

By defining a set of realistic objectives, a home-grown private Cloud is perfectly achievable, and will complement any existing public Cloud deployment in a hybrid fashion. It is the development of a robust IaC foundation that is needed to deliver automated service delivery across the public and private elements. As implied in the opening paragraph, automating at this level for a private Cloud requires the infrastructure to be built on Cloud principles, and not traditional (legacy) principles. Automation is essential if the end user is to be offered a self-service mechanism to request new resources, or an increase in the amount of resource they currently have.

Once again, it is no surprise that virtualisation is the key feature in making it possible to design an effective self-service portal. It is, however, the combination of component management and automation tools that makes self-service possible when attempting a private Cloud build. Despite there being many tools available to help deliver automation, making the final self-service solution work effectively depends on the right combination of tools for the platforms that are delivering the service to the end user. As already mentioned a number of times, for private Clouds this can be achieved using OpenStack, which is a framework of tools, or projects, each supporting specific functionality and operating together under the OpenStack umbrella. These individual projects control the compute, storage and network infrastructure components, allowing them to operate as a Cloud. Making OpenStack work can be challenging, requires time to properly design and build, and is a bit of a moving target. But the benefits to be realised will be worth the effort.
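
By way of example, assuming an OpenStack Cloud, the openstacksdk Python library, credentials already defined in clouds.yaml, and placeholder image, flavour and network names, provisioning a server with an attached volume comes down to a handful of calls that OpenStack coordinates across its projects:

    import openstack

    # Connect using a named entry from clouds.yaml (the name is a placeholder).
    conn = openstack.connect(cloud="my-private-cloud")

    # Nova (compute) builds the virtual machine.
    server = conn.create_server(
        name="app-server-01",
        image="ubuntu-22.04",
        flavor="m1.small",
        network="internal-net",
        wait=True,
    )

    # Cinder (block storage) provides a volume, which is then attached to the
    # server - the cross-component coordination the Cloud framework handles.
    volume = conn.create_volume(size=50, name="app-server-01-data")
    conn.attach_volume(server, volume, wait=True)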

Under the banner of DevOps, there is an abundance of software tools that bring everything together. DevOps can be thought of, not surprisingly, as combining the development of services with their deployment to an infrastructure in a continuous cycle. This cycle broadly consists of plan, code, build, test, deploy, operate and monitor phases, with continuous integration (CI) at its heart. The tools used to drive this cycle are referred to as the DevOps toolchain, which consists of far too many potential software tools to mention here, but some of the more recognisable tools and services include Git and GitLab, Jenkins, Docker, Kubernetes, Terraform, Ansible, Splunk and ServiceNow, to name but a few (there are MANY more to consider).