This article was first published on Information Age; the original article is here. As of 2016, it's yet to be seen whether any of this is accurate, although a number of commentators are now talking about smaller, more localised cloud provision perhaps ending up as the dominant model; we've seen the concepts of Fog and Edge computing become more tangible, and we're certainly seeing specialised clouds appearing.
It’s no secret that the IT industry is in the process of a paradigm shift: the ongoing decline in tier-one hardware sales and the rise of on-demand computing.
What’s not so clear is how far the paradigm is going to shift – and where the industry is going in the long term.
Although many see competition and the inevitable consolidation of the current mass-market model as the endgame for the cloud industry, it does not necessarily follow that biggest equals best in the still-emerging world of on-demand computing. This is just the beginning of the story for the cloud revolution.
The top end of the market is dominated by Amazon and Google, with Microsoft and IBM investing heavily to maintain momentum, all engaged in a race over pricing. Yet these major proprietary vendors are struggling to deliver solutions which sufficiently address many businesses’ needs.
For many organisations, the question of price is far from the most important issue around their transition to on-demand computing. Rather, businesses are facing operational and organisational problems that can't be solved simply by adding a few thousand VMs provided by Amazon or Google.
Consumers must now deal with crunching complex data sets, exponentially growing storage requirements, providing access to ever-larger volumes of data, and future workloads of uncertain nature and size.
As the problem of space grows, as it is doing in many industries that were not traditionally large-scale consumers of IT, the cost implications of the traditional vendor approach are a huge burden on organisations already struggling to provide services to their users within constrained budgets.
Businesses are essentially “locked in” to systems which are unable to adapt to rapidly changing digital environments. They are desperate to break the linear relationship between cost and scale.
There are also strong reasons why using multi-national cloud providers is simply not an option. There may be security or regulatory issues, issues around data ingress and egress, or specific requirements concerning connectivity or latency. These reasons are strengthened by the ongoing revelations about surveillance programmes.
There is, therefore, an emerging demand for smaller, nimbler providers in the cloud computing space: providers that can develop a very specific set of tools, people and approaches, that understand the individual problems companies are facing, and that offer the tailored solutions required.
Some of these issues are industry specific. Clouds designed for the broadcast media sector, for example, will have very different characteristics to those designed for academic research, and both will differ from those required by local and national government.
These more specialised clouds will offer targeted software configurations, designed for their particular vertical market, and may also have very specific hardware characteristics.
This has already started with the deployment of GPU-based hardware, and the trend will continue into ARM-based platforms and more specialised hardware like FPGAs.
This specialisation will also extend to the network layer, with different requirements for interconnectivity and routing, and for latency and throughput.
As data volumes continue to grow exponentially, physical proximity to storage or, in the case of the as-yet-unclear demands of the Internet of Things, the creators of the data could be a key requirement for many regions and industries.
This naturally leads to a requirement for highly localised regional cloud providers. The emergence of such players will naturally address security concerns around storing large volumes of data in a single location and compliance issues about data residency.
The drivers towards the cloud are not only strategic. Applications will also become naturalised to the distributed environment, becoming massively parallelised through the use of eventual consistency and their ability to work around failure states.
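To make "working around failure states" concrete, here is a minimal, hypothetical sketch of the kind of pattern such applications rely on: an idempotent operation is retried with exponential backoff, so a transient node failure is treated as routine rather than fatal. The function and service names are illustrative, not from any particular platform.

```python
import time

def with_retries(op, attempts=5, base_delay=0.1):
    """Retry an idempotent operation with exponential backoff,
    treating transient failures as routine rather than fatal."""
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up only after the final attempt
            time.sleep(base_delay * 2 ** attempt)

# Simulate a (hypothetical) service call that fails twice, then succeeds.
responses = iter([ConnectionError, ConnectionError, "ok"])

def flaky_write():
    r = next(responses)
    if r is ConnectionError:
        raise ConnectionError("node unavailable")
    return r

print(with_retries(flaky_write, base_delay=0.01))  # prints "ok"
```

Because the operation is idempotent, repeating it after a failure is safe, which is precisely what allows such workloads to be parallelised across unreliable, distributed infrastructure.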
This will undoubtedly lead to cloud brokerage emerging as the standard abstraction layer, with workloads automatically and dynamically allocated across many different physical cloud platforms depending on customer-definable characteristics.
Cost will undoubtedly be one of these, but performance will also be key and is very dependent on the type of workload.
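As a rough illustration of how a brokerage layer might weigh such customer-definable characteristics, the sketch below scores hypothetical providers on normalised cost and latency and picks the best match per workload. The provider names, prices and weights are invented for the example; a real broker would consider many more dimensions.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cost_per_vm_hour: float  # currency units per VM-hour (illustrative)
    avg_latency_ms: float    # average latency to the customer (illustrative)

def pick_provider(providers, cost_weight=0.5, latency_weight=0.5):
    """Score providers on normalised cost and latency; lower score wins."""
    max_cost = max(p.cost_per_vm_hour for p in providers)
    max_lat = max(p.avg_latency_ms for p in providers)

    def score(p):
        return (cost_weight * p.cost_per_vm_hour / max_cost
                + latency_weight * p.avg_latency_ms / max_lat)

    return min(providers, key=score)

providers = [
    Provider("global-cloud", cost_per_vm_hour=0.05, avg_latency_ms=80),
    Provider("regional-cloud", cost_per_vm_hour=0.09, avg_latency_ms=12),
]

# A latency-sensitive workload favours the nearby regional provider...
print(pick_provider(providers, cost_weight=0.2, latency_weight=0.8).name)
# ...while a cost-driven batch workload picks the cheaper global one.
print(pick_provider(providers, cost_weight=0.9, latency_weight=0.1).name)
```

The same workload can land on different physical clouds simply by changing the weights, which is the essence of the dynamic allocation described above.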
Federation like this depends on interoperability and the sharing of space and power; open standards like those around OpenStack will be the key to participating in these emerging markets.
In this new federated world, there is space for agile, cooperative service providers to offer a new kind of collaborative relationship with customers which crosses the traditional boundaries of service provision and consulting.
The Googles and Amazons of the cloud world may bet on price, but these players are betting on new ways of working based on mutual trust and an ambition to push the boundaries of both the traditional customer-supplier relationship, and the capabilities of the technology in order to deliver tailored solutions to complex problems.