Building reliable automation at scale for infrastructure presents challenges. In this episode, we discuss orchestration, workflow automation, and the reconciler pattern in the context of Terraform.
We refer to the pattern of Terraform, automation, and orchestration systems as “TACOS,” and today we dig into how you test it and check for drift. These are real operational concerns for anybody building any type of infrastructure.
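Drift checking comes down to comparing the state you declared against the state that actually exists. Here is a minimal sketch of that idea in Python; the resource names, fields, and state dictionaries are all hypothetical stand-ins, not a real Terraform API.

```python
# Minimal drift-detection sketch: compare declared state against live state
# and report every mismatch. All names and fields here are illustrative.

desired = {
    "vm-web-1": {"size": "t3.small", "region": "us-east-1"},
    "vm-web-2": {"size": "t3.small", "region": "us-east-1"},
}

actual = {
    "vm-web-1": {"size": "t3.medium", "region": "us-east-1"},  # resized by hand
    "vm-web-2": {"size": "t3.small", "region": "us-east-1"},
}

def detect_drift(desired, actual):
    """Return (resource, field, wanted, found) tuples for every mismatch."""
    drift = []
    for name, spec in desired.items():
        live = actual.get(name)
        if live is None:
            drift.append((name, "<missing>", spec, None))
            continue
        for field, wanted in spec.items():
            found = live.get(field)
            if found != wanted:
                drift.append((name, field, wanted, found))
    return drift

for name, field, wanted, found in detect_drift(desired, actual):
    print(f"{name}: {field} should be {wanted!r} but is {found!r}")
```

In a real TACOS setup, the `actual` side would come from querying the provider APIs, and the report would feed an approval or remediation workflow rather than a print statement.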
How are Metaverse environments built? Today we talk about how we use intellectual property to build these Metaverse environments, and who has access to what and who’s going to create it. That turns into a discussion on how you’re going to pay for it.
Typically, the Metaverse is framed as a platform, but we got interested in the content, the media wars, and streaming platforms. Starting the conversation with streaming media led us to payment platforms and transactions. It was fascinating that we couldn’t talk about intellectual property without also talking about payment and purchase transactions.
Today’s discussion was about distributed ledger technology (DLT), also known as blockchain, the technology behind Bitcoin. We had a balanced discussion: some people were excited about the technology and others were skeptical. That interplay created one of the best conversations I’ve heard about DLT and its applications.
Throughout the conversation, we tested each other’s assumptions and came back to basics. We didn’t assume that blockchain was good because it was new, or that organizations like banks or ticket sellers were bad. That neutrality helped us consider how DLT can actually benefit people.
What makes Everything as Code and Infrastructure as Code interesting? In today’s episode, we discuss what makes something code-like and the idea of Everything as Code, based on Patrick Debois’ article “In depth research and trends analyzed from 50+ different concepts as code.”
Some of our conclusions were practical: if a concept describes a process that is reproducible and auditable, that’s what makes it code-like. Another possible conclusion was that it’s just marketing, because “as code” makes everything sound programmable. The reality is somewhere in the middle.
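The “reproducible and auditable” criterion can be made concrete with a tiny sketch: once a process is captured as data, you can re-run it deterministically and fingerprint it for audit. The process steps below are hypothetical, purely for illustration.

```python
# Sketch of what "code-like" buys you: a process captured as data is
# reproducible (re-run it, same result) and auditable (version, diff, hash it).
# The steps themselves are made up for illustration.
import hashlib
import json

process = [
    {"step": "provision", "image": "ubuntu-22.04"},
    {"step": "configure", "package": "nginx"},
    {"step": "verify", "port": 80},
]

# Auditable: the serialized form can be committed, diffed, and fingerprinted.
fingerprint = hashlib.sha256(
    json.dumps(process, sort_keys=True).encode()
).hexdigest()

# Reproducible: executing the same data yields the same sequence of actions.
def run(process):
    return [f"ran {s['step']}" for s in process]

assert run(process) == run(process)  # same input, same actions
print(fingerprint[:12], run(process))
```

A manual runbook in someone’s head has neither property; writing the same steps down as data is the move that makes them “code.”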
Organizations take a risk when they get locked into a vendor. In today’s episode, we talk a lot about the risks of lock-in, both in general and in the context of Oracle.
That discussion takes us into a question of insurance, and whether insurance policies could ultimately drive people to reduce their lock-in exposure. This was a fascinating discussion, not only about lock-in but about what would drive organizations to fix their lock-in problems.
A Goldilocks balance challenges us to trade off prescriptive and flexible platforms. James Urquhart shares his experiences at Cloud Foundry, VMware, and Amazon trying to find the right balance between building it yourself and a prescriptive service approach.
We’ve decided that there needs to be a middle zone with enough opportunity for customization, as well as enough pre-set, prescriptive methods to create sustainability.
In this episode, we talk about that balance and how different platforms in industry have struck it.
The Okta hack highlights the value-versus-complexity trade-off. In today’s episode, we ask whether the complexity of using single sign-on is the right move in this context. We also think about how to deal with these interconnected systems that have high degrees of complexity.
We also discussed API design, and whether or not we should have more rigid or flexible APIs. You can’t remove complexity from the system, but you can hide it. The structure of APIs will push complexity into either the users’ realm or the operators’ realm.
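The point that API structure decides where complexity lands can be shown with two hypothetical clients for the same operation. Neither removes the retry complexity; the rigid one buries it on the operator’s side, the flexible one hands every knob to the user. All names here are invented for the sketch.

```python
# Two hypothetical client APIs for the same fetch. The complexity (retry
# policy) doesn't disappear; it just lives on one side or the other.
import time

def _do_request(url):
    """Stand-in for a real network call."""
    return f"response from {url}"

def fetch_rigid(url, _attempts=3):
    """Rigid API: the retry policy is fixed inside.
    Complexity sits with the operator who maintains this function."""
    for attempt in range(_attempts):
        try:
            return _do_request(url)
        except ConnectionError:
            time.sleep(2 ** attempt)
    raise ConnectionError(f"gave up on {url}")

def fetch_flexible(url, attempts, backoff, on_error):
    """Flexible API: every caller chooses attempts, backoff, and error
    handling. Complexity sits with the user."""
    for attempt in range(attempts):
        try:
            return _do_request(url)
        except ConnectionError as exc:
            on_error(exc)
            time.sleep(backoff(attempt))
    raise ConnectionError(f"gave up on {url}")

print(fetch_rigid("https://example.test"))
```

The rigid version is easier to call but harder to bend; the flexible version is the reverse. That is the trade-off in miniature.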
Making automation safe is essential to making it usable at scale. How do we make automation safe? We found a lot of great insights drawing from spacecraft design, aircraft design, and other systems where safety is critical.
Automation is a force multiplier. If we don’t factor in safety when we build it, then we could create a lot of harm in systems, from wasteful spending to actual injury. These designs have very real implications.
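One concrete safety pattern in that spirit is a blast-radius limit: because automation multiplies whatever you feed it, refuse any single run that touches too much of the fleet at once. The fleet, threshold, and function names below are hypothetical; real systems layer on dry runs and approvals as well.

```python
# Hedged sketch of a blast-radius guardrail for automation: a bad input
# fails closed instead of sweeping the whole fleet. Names are illustrative.

FLEET = [f"host-{i}" for i in range(100)]
MAX_FRACTION = 0.10  # never touch more than 10% of hosts in one run

def safe_apply(targets, action, fleet=FLEET, max_fraction=MAX_FRACTION):
    """Apply `action` to `targets`, but refuse over-broad runs."""
    if len(targets) > max_fraction * len(fleet):
        raise RuntimeError(
            f"refusing to act on {len(targets)}/{len(fleet)} hosts "
            f"(limit {max_fraction:.0%}); require manual approval"
        )
    return [action(target) for target in targets]

# A run inside the limit proceeds; an over-broad run fails closed.
print(safe_apply(FLEET[:5], lambda h: f"restarted {h}")[0])
```

The force-multiplier framing is exactly why the check raises instead of warning: a guardrail that only logs still lets the harm scale.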
Majors versus minors: enterprise data centers versus blockchain, Bitcoin, and distributed ledger data centers. We dive into the differences in processing and environmental requirements between those two use cases.
While blockchain and distributed ledgers generate very different computational profiles, what we’re building keeps coming back to a familiar truth: the design of a data center is the design of a data center. The exception is proof of work like Bitcoin. In those cases, it’s really just about how many CPUs you can run.
For this episode, we focus on proof-of-stake data center infrastructure. This podcast is helpful for understanding the difference between proof of work and proof of stake. There’s clear consensus on the call that proof of work is not environmentally sustainable, so proof of stake is much more interesting.
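A toy loop makes it clear why proof of work is purely a compute race: miners grind nonces until a hash clears a difficulty target, so the only lever is more hashes per second. This is illustrative only; real Bitcoin mining runs double SHA-256 over block headers on specialized hardware, not a Python loop.

```python
# Toy proof-of-work: find a nonce whose hash falls below a difficulty target.
# There is no shortcut -- the loop just burns compute until it gets lucky.
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # found a hash below the target
        nonce += 1  # nothing to do but spend more compute

nonce = mine("block-42", difficulty_bits=16)
print("winning nonce:", nonce)
```

Proof of stake replaces this race with validator selection weighted by stake, which is why its data center profile looks like ordinary transaction processing rather than maximum-density compute.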
GitOps is a really important way of collaborating and communicating about infrastructure.
But can GitOps escape from Kubernetes? While we did talk about Kubernetes too, we mainly talked about what it takes to implement GitOps outside of it. We considered building a GitOps architecture and then getting people to understand and use it. We also cover the fundamental parts of GitOps, like having a reconciler and a set of tools that drive clusters.
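The reconciler at the heart of GitOps is not Kubernetes-specific: desired state lives in a repo, and a loop converges the live system toward it. Here is a minimal sketch of that shape; the dictionaries stand in for files in git and real infrastructure, which a real setup would reach through git polling and provider APIs.

```python
# Minimal GitOps-style reconciler outside Kubernetes. The two dicts are
# stand-ins: `desired_in_repo` for files in git, `live_system` for real infra.

desired_in_repo = {"web": 3, "worker": 2}   # service -> replica count
live_system = {"web": 1, "queue": 1}        # what actually exists right now

def reconcile(desired, live):
    """One pass: create/scale what's declared, remove what isn't."""
    actions = []
    for name, replicas in desired.items():
        if live.get(name) != replicas:
            live[name] = replicas
            actions.append(f"scale {name} to {replicas}")
    for name in list(live):
        if name not in desired:
            del live[name]
            actions.append(f"delete {name}")
    return actions

print(reconcile(desired_in_repo, live_system))  # converging actions
print(reconcile(desired_in_repo, live_system))  # already converged: no actions
```

The second pass returning no actions is the key property: the loop is idempotent, so you can run it forever against any backend that exposes read-and-write state, with or without Kubernetes.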