Kubernetes at LiveRamp
Our journey to Kubernetes at LiveRamp was a critical step in our cloud migration. With Kubernetes came plenty of trade-offs and challenges, which we addressed with different strategies and tooling.
This post gives a brief introduction to our Kubernetes infrastructure footprint, the strategies and tooling we used to tackle the migration, and the areas where we still have improvements to make.
Kubernetes in the Cloud
In previous posts, we discussed LiveRamp’s migration from our on-premise data center to the cloud. Re-platforming onto Kubernetes was integral to that move. On-premise, we ran our applications on VMWare alongside our 2,500-node Hadoop cluster, and we used Chef to configure our application infrastructure on VMWare: hundreds of application services, Chef cookbooks, and VMs.
We established that Chef was not a strategy worth pursuing in the cloud, and decided that Kubernetes was the most viable option.
In several phases, we assessed and deployed Kubernetes clusters on VMWare, on bare metal, and in AWS. Our largest multi-tenant clusters ran on VMWare on-premise and on GKE in GCP. During our cloud migration, we switched from large multi-tenant Kubernetes clusters to many single-tenant clusters.
Today we have hundreds of GKE clusters across our GCP organization.
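To make the single-tenant model concrete, provisioning shifts from growing one shared cluster to stamping out a cluster per team. Below is a minimal sketch using plain gcloud commands; the project, team names, and sizing are hypothetical, and real provisioning would also configure node pools, networking, and IAM (typically managed as infrastructure as code):

```sh
# Minimal sketch: one GKE cluster per tenant. The project, team names,
# and sizing are hypothetical; real provisioning would also configure
# node pools, networking, and IAM.
for team in team-a team-b team-c; do
  gcloud container clusters create "${team}-prod" \
    --project example-gcp-project \
    --region us-central1 \
    --num-nodes 3
done
```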
Kubernetes Application Configuration Management
Our internal toolchain kube_deploy_tools, or KDT, wraps our common workflows for building and releasing to Kubernetes. It replaced our Chef- and Capistrano-based configuration and release management.
A typical build-and-release pipeline involves the following workflow commands (strung together in the sketch after this list):
- `kdt push` to tag and upload one or more Docker images to our image registry
- `kdt generate` to template Kubernetes manifests with parameters, including the Docker image tags
- `kdt publish` to package and upload the Kubernetes manifests as an artifact tied to the application and image build
- `kdt deploy` to validate and apply an artifact of Kubernetes manifests
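Strung together in CI, the pipeline looks roughly like this. The bare subcommands below match the list above; real invocations take additional flags (registry, environment, artifact version) documented in the KDT README:

```sh
#!/usr/bin/env bash
# Minimal sketch of a KDT build-and-release pipeline. Real invocations
# pass additional flags; see the kube_deploy_tools README.
set -euo pipefail

kdt push      # tag the Docker images built earlier in CI and upload them
kdt generate  # render Kubernetes manifests, injecting image tags and params
kdt publish   # package the rendered manifests and upload them as an artifact
kdt deploy    # validate the published artifact and apply it to the cluster
```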
We assessed several open-source alternatives, including Helm v2 (package management), kustomize (application configuration management via composition), and krane (release automation). At the time, KDT was the best fit for LiveRamp’s specific use cases.
For a deeper look at the toolchain, KDT is open source at github.com/LiveRamp/kube_deploy_tools.
Next Steps
We’ve been running on Kubernetes and using our KDT toolchain in production for a few years. There is still more we’d like to do, including assessing how Kubernetes multi-tenancy can complement our multi-cluster strategy, and extending KDT to work with our continuous deployment automation.
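For context on the multi-tenancy piece: the standard Kubernetes building blocks are namespaces combined with ResourceQuota and RBAC, giving each team an isolated slice of a shared cluster instead of a cluster of its own. A minimal sketch, with hypothetical team names and quota values:

```sh
# Minimal sketch of namespace-based multi-tenancy: each team gets a
# namespace with a resource quota. Names and limits are hypothetical.
kubectl create namespace team-a
kubectl apply -n team-a -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
spec:
  hard:
    requests.cpu: "40"
    requests.memory: 160Gi
    pods: "200"
EOF
```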
We hope this glimpse into our Kubernetes infrastructure is helpful on your own journey with Kubernetes. Thanks to all of LiveRamp engineering, and especially to the early adopters of Kubernetes and contributors to KDT, who helped us along the path to completing our containerization.