Otter Networks specializes in helping new technology teams get up and running with Continuous Integration (CI) and Continuous Deployment (CD). Increasingly, our customers are using Kubernetes for application hosting.
Now that everyone has decided that cloud computing from the likes of Amazon, Azure, and Google is probably not a bad thing, the more wily among us have realized that these cloud providers are busy playing the old game of vendor lock-in.
Using Cloud Provider Resources
All cloud providers offer standard Linux machines and databases. Although these components are common to every cloud provider, it can take a skilled engineer quite some time to migrate a web app from Amazon Web Services (AWS) to Azure, for example. The autoscaling groups and load balancers that make these applications scale are not transferable and must be re-engineered when moving an application from one cloud to another.
Using resources with Kubernetes
Kubernetes gives us a common platform that keeps us independent of any single cloud provider. All of the complex Terraform, CloudFormation, Ansible, and similar tooling that we used to configure our cloud providers can be replaced by a Dockerfile and some fairly simple Kubernetes YAML describing the application being deployed and how it is plumbed in.
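As a minimal sketch of what that YAML looks like, here is a hypothetical "webapp" Deployment and Service (the name, image, and ports are illustrative assumptions, not from a real project). The same manifest applies unchanged to a cluster on AWS, Azure, or Google:

```yaml
# Hypothetical application deployment: three replicas of a container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: registry.example.com/webapp:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
---
# Expose the Deployment; the cloud provider supplies its own load balancer
# behind this one portable abstraction.
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: LoadBalancer
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 8080
```

The `Service` of type `LoadBalancer` is where the autoscaling-group and load-balancer re-engineering described above disappears: Kubernetes asks whichever cloud it is running in to provision the equivalent resource.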
Where the rubber meets the road
Wherever your Kubernetes cluster is deployed, your configurations stay the same, with only very minor differences where the cluster needs cloud resources such as disks or IP addresses.
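To illustrate how small those differences are, here is a sketch of a PersistentVolumeClaim for a hypothetical "webapp-data" volume. Everything is portable except the single `storageClassName` line, which names the provider's disk offering:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webapp-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # The one cloud-specific line: e.g. "gp2" on AWS (EBS-backed) or
  # "managed-premium" on Azure (Managed Disks).
  storageClassName: gp2
```

Migrating this claim between clouds means swapping one string, rather than rebuilding the storage layer.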
Cloud providers deliver resources to us in different ways. These differences are marketed as differentiation but often leave customers stuck with a single provider. Deploying Kubernetes on top of a cloud provider gives our applications a common environment and reduces that dependence.