There are many good reasons to move your data and applications to the cloud. It gives you the flexibility, access and security that you don’t always get with an on-premises data center. And of course, speed of deployment can’t be matched by on-premises solutions. Little wonder then that last year a reported 96% of organizations had not one, but multiple clouds in use.
The flip side of this – one that doesn’t get as much attention – is that there is a small but growing movement of applications repatriating from the cloud back to local data centers. The evidence may be largely anecdotal, but this year there have been multiple examples in the news of companies moving some of their data back on-premises. Perhaps the most visible example came when Dropbox moved off the public cloud to its own co-located infrastructure, claiming savings of $75 million. Bank of America has made similar claims (although in the billions) about its homegrown cloud.
The takeaway from stories like these is not that the cloud is a failure. They are examples of companies trying something new, and deciding it was not optimal for their organizations. Once you store massive amounts of data in the cloud, moving that data back on-prem or to another cloud vendor becomes a significant undertaking. So a discussion about repatriation is an opportunity to think about what implementations are best suited to the cloud.
Outages, latency, security, costs and vendor lock-in are just a few of the issues that can come with a cloud deployment. For example, this year Azure, AWS and Google have all suffered significant public cloud outages. It turns out, moving to the cloud doesn’t make outages non-existent, it just makes them someone else’s problem. That of course doesn’t help businesses trying to keep their operations running. Besides outages, organizations may have issues with poor latency and the complexity of identifying where a problem occurs in a network when it is connected to the cloud.
Similarly, even given the expert security of cloud providers, data breaches still occur. Recent examples include the Capital One data breach in July and a Dow Jones breach in February. The lesson we can learn from these: Amazon and other cloud providers are good at securing your data, but ultimately the responsibility lies with your organization. Especially for business-critical operations, you may not want to rely on a third party for core business deliverables – you may simply trust your own team more.
One of the main benefits of the cloud, the speed and simplicity of spinning up new instances, can lead to other problems. Ultimately, it is yet another thing that organizations need to track. That can be difficult when anyone with a corporate credit card can create a cloud instance. This article highlights the issue of data gravity: “At present, the structure of the cloud market and the volumes of data we’re dealing with have brought many to a position where they are stuck using the compute functionality of the provider they’ve been using for data storage, due to the sheer cost & complexity of extracting and moving that data to another cloud provider.”
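The data gravity problem described above is, at bottom, arithmetic: egress bandwidth is metered per gigabyte, so the bill to move a large dataset out of a provider grows linearly with its size. As a rough sketch (the per-GB rate below is a hypothetical placeholder, not any provider's published price):

```python
def egress_cost(dataset_tb: float, rate_per_gb: float = 0.09) -> float:
    """Estimate the one-time transfer cost (USD) to move dataset_tb
    terabytes out of a cloud provider at rate_per_gb dollars per GB.
    Ignores request fees and re-platforming engineering time."""
    return dataset_tb * 1024 * rate_per_gb

# At a hypothetical $0.09/GB, moving 500 TB costs about $46,000
# in bandwidth alone -- before any migration labor is counted.
print(f"${egress_cost(500):,.0f}")
```

Even at modest rates, petabyte-scale datasets quickly make the compute provider and the storage provider effectively the same choice, which is exactly the lock-in the quoted article describes.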
And challenges go beyond costs to issues like data governance and compliance. This article points out the problem: “Without adequate data governance or records management, it becomes easy to treat the cloud as a digital dumping ground.” Lack of oversight of data assets held in the cloud exposes organizations to risks from non-compliance with critical regulations like PCI DSS or GDPR.
For some applications, the recurring costs of deploying in the cloud make it less attractive than moving back on-prem. There is a good case for identifying applications that are not producing cost savings in the cloud and moving them back in house. As organizations get better at tracking the costs of a cloud implementation and choosing cloud vendors that are the best fit for their needs, this may become less common. Cloud providers increasingly understand the challenge of managing more than one cloud. Microsoft just announced its multi-cloud management tool Azure Arc, which will allow management of resources both on premises and across multiple clouds, even AWS and Google.
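The recurring-cost comparison above can be framed as a simple break-even calculation: on-prem carries a large up-front capital outlay but lower monthly spend, so there is a month at which cumulative on-prem cost drops below cumulative cloud cost. A minimal sketch, with all dollar figures purely illustrative:

```python
def breakeven_months(cloud_monthly: float,
                     onprem_capex: float,
                     onprem_monthly: float) -> float:
    """Months until cumulative on-prem cost (up-front capex plus
    monthly operations) falls below cumulative cloud spend.
    Assumes a steady, predictable workload."""
    if cloud_monthly <= onprem_monthly:
        return float("inf")  # cloud never costs more cumulatively
    return onprem_capex / (cloud_monthly - onprem_monthly)

# Hypothetical numbers: $30k/month cloud bill vs. $500k hardware
# outlay and $10k/month on-prem operations.
print(breakeven_months(30_000, 500_000, 10_000))  # 25.0 months
```

The model deliberately ignores what makes cloud attractive in the first place: bursty workloads, avoided hiring, and speed of deployment. It is only the workloads that fail this kind of test, steady and predictable ones, that tend to be repatriation candidates.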
Finally, the difficulty of locating and hiring experienced cloud engineers is another factor to consider when moving to the cloud. The role remains one of the most in-demand (and lucrative) jobs of 2019. In contrast, many organizations still have a deep bench of systems developers available to run their local data centers.
Not every workload can move to the cloud. Most enterprises continue to run legacy applications, many of which support core business functions, and those applications are not going anywhere any time soon. Organizations already invested in data center infrastructure to run those applications will not abandon it quickly. And if architecture or compliance forces some applications to remain on-prem, it may make more sense to keep others there as well.
In many cases, organizations likely moved applications to the cloud too quickly, discovered drawbacks that made it a poor match for their needs, and are now adjusting accordingly. Good candidates for remaining in the cloud include testing and development, and applications that periodically need to be scaled up. The ultimate benefit of the cloud is that it gives organizations the flexibility of another option to meet their deployment needs.
Orbit customers can benefit by moving their analytics to Orbit Cloud BI, which offers the benefits of a managed cloud, including improved performance for large queries and cost savings on maintenance and licensing.