How does Kubernetes create lock-in, and how could Kubernetes be used to prevent lock-in?
Lock-in is not always a bad thing. When you avoid committing to a single vendor, you may have to work to the lowest common denominator or deal with heterogeneity in your infrastructure. Heterogeneity is pretty normal, and you might have to do that work regardless, but when you commit to a vendor you get to focus on using that vendor's strengths.
In this episode, you'll pick up some great tips on how to reduce your lock-in when using Kubernetes.
GitOps is a really important way of collaborating and communicating about infrastructure.
But can GitOps escape from Kubernetes? While we did talk about Kubernetes too, we mainly talked about what it takes to implement GitOps outside of Kubernetes. We considered building a GitOps architecture and then having people understand and use it. We also covered the fundamental parts of GitOps, like having a reconciler and a set of tools that drive clusters.
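To make the reconciler idea concrete, here is a minimal sketch of a reconciliation loop in Python. It is illustrative only: the function names and the dict-based "state" are hypothetical stand-ins for a declared state pulled from a Git repo and the actual state of a running system.

```python
def diff(desired: dict, actual: dict) -> dict:
    """Compute the changes needed to move actual state toward desired state."""
    to_apply = {k: v for k, v in desired.items() if actual.get(k) != v}
    to_delete = [k for k in actual if k not in desired]
    return {"apply": to_apply, "delete": to_delete}


def reconcile(desired: dict, actual: dict) -> dict:
    """One reconciliation pass: apply the diff and return the converged state."""
    changes = diff(desired, actual)
    new_state = dict(actual)
    new_state.update(changes["apply"])
    for key in changes["delete"]:
        del new_state[key]
    return new_state


# The reconciler converges actual state toward the declared state,
# regardless of how the two drifted apart.
desired = {"web-replicas": 3, "db-version": "14"}
actual = {"web-replicas": 2, "cache-size": "1Gi"}
print(reconcile(desired, actual))
```

The key property, in or out of Kubernetes, is that the loop is idempotent: running it again against an already-converged system changes nothing, so it can run continuously against whatever drifts.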
Today’s episode is about measuring complexity, a topic we cover a lot. In this case, we went beyond the question of whether we can measure complexity and into looking at its causes and costs.
We had a remarkable conversation about what it means to say something is too complex. What are the consequences of complexity, and what should we do about them? Ultimately, it’s about how we measure the cost, or the risk, of complexity.
In the end, we reframe complexity in business terms and human terms. That is the important way to look at complexity.
This discussion shifts into tactical concerns for containers in the near term. We’ve gotten far with containers and Kubernetes, but what about the process controls that we need to wrap around containers?
We talked through how we need to think about containers now that we have good control surfaces around them. If you are using containers and Kubernetes, this podcast will certainly inform your thinking.
To explore HCI at the edge, we started with SUSE’s Harvester. It’s an HCI integration of Kubernetes, KubeVirt, and Longhorn (their storage system), plus some PXE-booting magic they threw in. From there we explored how Kubernetes can fit into edge HCI.
That really morphed into edge operations more generally; it’s not yet clear whether hyperconverged infrastructure fits. We covered offerings like AWS Outposts, which is Amazon’s edge platform, and discussed cloud-to-edge migration from an application development perspective.
There are a lot of fascinating ops and development topics throughout the conversation.
About the book
Core Kubernetes is a reference guide designed to teach operators, SREs, and developers how to improve reliability and performance of Kubernetes-based systems. In it, Kubernetes experts Chris Love and Jay Vyas provide a guided tour through all major aspects of Kubernetes, from managing iptables to setting up dynamically scaled clusters that respond to changes in load. You’ll understand the unique security concerns of container-based applications, discover tips to minimize costly unused capacity, and get pro tips for maximizing performance. This awesome collection of undocumented internals, expert techniques, and practical guidance has invaluable information you won’t find anywhere else.
Joining us this week is Lee Liu, CTO and Co-Founder, LogDNA.
LogDNA is a log management company for the future of business. LogDNA enables petabytes of data from disparate locations (public cloud, private cloud, on-premises, hybrid, IoT, and PoS) to be parsed and searched extremely fast.