What makes people interested in new tech versus the stable, boring work that keeps the lights on?
It feels to me as if we’re in the phase of development where we start saying, “I’ve followed all the cool stuff; now I need to make sure everything works and get my ROI out.”
This conversation questions that assumption, talks about why we care and what we’re really trying to accomplish, and digs into what is boring, what is sexy, and what makes them different.
We continue our Governance as Code discussions in today’s episode.
We started by looking at Governance as Code very broadly, but quickly drilled down into where Infrastructure as Code meets Governance as Code. Understanding that intersection is critical to building something that is both automated and governable.
The topic explored how we audit controls for systems: when we build infrastructure, we need to make sure it follows our policies. The challenge is making sure that what we’ve automated conforms to our governance.
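To make the idea concrete, here is a minimal sketch of a policy-as-code check. Everything in it — the rule set, the resource shape, and the `check_resource`/`audit` helpers — is hypothetical and not tied to any real tool (real-world implementations typically use engines like Open Policy Agent); it only illustrates the pattern of auditing automated infrastructure against governance rules.

```python
# Hypothetical policy-as-code sketch: audit planned infrastructure
# resources against governance rules before they are built.

def check_resource(resource: dict) -> list[str]:
    """Return the list of policy violations for one resource."""
    violations = []
    # Rule 1 (example): all storage must be encrypted.
    if not resource.get("encrypted", False):
        violations.append(f"{resource['name']}: storage must be encrypted")
    # Rule 2 (example): only approved regions are allowed.
    if resource.get("region") not in {"us-east-1", "eu-west-1"}:
        violations.append(f"{resource['name']}: region not on the approved list")
    return violations

def audit(resources: list[dict]) -> list[str]:
    """Audit an entire plan so what we automate conforms to governance."""
    return [v for r in resources for v in check_resource(r)]

if __name__ == "__main__":
    plan = [
        {"name": "db-1", "encrypted": True, "region": "us-east-1"},
        {"name": "cache-1", "encrypted": False, "region": "ap-south-2"},
    ]
    for violation in audit(plan):
        print(violation)
```

Run as a gate in a deployment pipeline, a check like this turns a written policy into an enforced one: a non-empty violation list fails the build.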
We conceptualize data centers as core infrastructure components in today’s discussion about green infrastructure.
In our discussion of data centers as an industrial load with peaks and valleys in demand, we dive into the grid as a connected system. We discuss how storage can disrupt the way power is generated and distributed, not only in the United States but around the world.
Distribution systems play a huge role in green infrastructure, just as networks do in IT. We assume that networks are available and robust, and we have made the same assumption about the power generation that runs these data centers.
These topics are all tied together, and you will see many insights and similarities in how we approach and build green infrastructure.
In the June 2nd episode, Rob Hirschfeld emphasizes the importance of distribution in discussions about creating greener data centers. Recognizing how power is generated, its localities, and dealing with infrastructure peaks and valleys are crucial components. The shift from centralized to decentralized infrastructure, both for power and data centers, plays a key role in reducing reliance on distribution systems, enhancing resilience, and addressing interconnected challenges. Join ongoing conversations about green infrastructure at the2030.cloud to explore these complexities further.
How do you manage complexity? Something we talk about a lot in Cloud2030 is how challenging it is to understand complexity, measure it and cope with it.
Richard Cook wrote a paper called “How Complex Systems Fail” (how.complexsystems.fail), in which he talks about complex systems having strong defense mechanisms against failure. That’s what we talked about today: how do we build defense mechanisms for complex systems, not by making them simpler, but by exercising and testing them?
In this conversation, we discuss the importance of testing, validation, and layers of abstraction, and of testing each of those layers. If you deal with complex systems, this discussion will be fascinating and actionable.
In the May 24th DevOps lunch and learn, Rob Hirschfeld delves into the concept of making complex systems defensible by exercising and testing them thoroughly. Emphasizing the importance of shared automation and collaborative efforts within communities, he cites examples like Kubernetes and OpenStack as complex systems made more defensible through widespread testing and shared code. While complexity cannot be eliminated, actively exercising systems enhances their defensibility. Join the ongoing discussions and explore the intricacies of complexity management at the2030.cloud.
How is data center infrastructure adapting to edge and distributed ledger technology workloads?
We think through whether those demands (blockchain, proof-of-stake coins, etc.) are changing the way we look at data center infrastructure, and the short answer is yes. We also explore the impact of the types of workloads we’re running and how we distribute them, rather than the type of equipment we need to buy.
This conversation quickly becomes one about what we want to do with our infrastructure, not what the infrastructure is.
In the May 24th Cloud 2030 Podcast episode, Rob Hirschfeld explores how distributed ledger technologies like blockchains could impact application design and workload distribution across infrastructure. The discussion shifts from the impact on data centers to the potential for distributed applications that are more portable and capable of running in smaller data centers. While acknowledging missing pieces in building such applications, the conversation highlights the opportunity for more portable and cost-effective workloads. Join the comprehensive discussions at the2030.cloud to delve deeper into this transformative intersection of distributed ledgers and infrastructure.
What kinds of orchestration systems does the industry use for infrastructure automation and controlling day-to-day operations?
In today’s episode, we talk about infrastructure pipelines at the tooling level, and specifically the use of Jenkins and other CI pipelining tools for ops and orchestration. We dig into why and how you would do this, and what pieces are missing from the system. That conversation leads us into larger day-to-day challenges.
If you are doing infrastructure ops and DevOps automation, you will get a lot out of this session.
In the May 19th Cloud 2030 Podcast episode, Rob Hirschfeld delves into the intersection of payment systems, PCI V4, NFTs, blockchain, virtual reality, and the metaverse. The discussion highlights the often-overlooked XRP (Ripple) specification, which enables banks to transfer funds outside the SWIFT system, introducing alternative ways for banks to exchange fiat currency, with significant impacts on credit, microtransactions, and blockchain conversions. The episode emphasizes the importance of understanding seemingly esoteric elements that can shape the future landscape and influence how it evolves. Explore the full conversation for insights into this intriguing combination of PCI V4, Kryptos, and the Metaverse.
What’s going on with green data centers, why does it matter, and how do we think about it in a wider context? In this short conversation, we discuss green data centers and creating carbon neutral infrastructure.
This isn’t just about servers using electrons: the real conversation about making our infrastructure carbon neutral includes thinking about all of the components that go into it.
We also have an upcoming series of conversations on green data centers and carbon neutral infrastructure.
What makes APIs complex? In this episode, we talk about how we compose APIs into higher-level systems, and how we think about the design elements that go into building durable, reusable APIs.
This is a classic topic for us, and in this discussion we look beyond the API itself to the state of the system and how you manage that state.
In the Cloud 2030 podcast on April 21st, Rob Hirschfeld delves into the complexity of APIs, emphasizing the layered and nested nature of API systems. The discussion unveils the challenges of managing distributed state within APIs, where each layer needs to be aware of and interact with the state of adjacent or underlying APIs. The key insight is that without a well-understood distributed state model at the architectural level, building resilient APIs becomes inherently complex. Join the conversation at the2030.cloud for a comprehensive exploration of API design challenges and solutions.
Building reliable automation at scale for infrastructure presents challenges. In this episode, we discuss orchestration, workflow automation, and the reconciler pattern in the context of Terraform.
We refer to the pattern of Terraform, automation, and orchestration systems as “TACOS,” and today we dig into how you test it and check it for drift. These are real operational concerns for anybody building any type of infrastructure.
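The drift-checking and reconciler pattern behind these discussions can be sketched in a few lines. This is a hypothetical toy model — the state shapes and the `diff`/`reconcile` functions are invented for illustration, and real tools like Terraform plan and apply against live provider APIs rather than in-memory dicts — but it shows the core loop: compare desired state to actual state, report the drift, and apply changes to converge.

```python
# Hypothetical sketch of the reconciler pattern: desired vs. actual
# state, a drift report, and a converging apply step.

def diff(desired: dict, actual: dict) -> dict:
    """Compute drift: what must change to move actual toward desired."""
    create = {k: v for k, v in desired.items() if k not in actual}
    update = {k: v for k, v in desired.items()
              if k in actual and actual[k] != v}
    delete = [k for k in actual if k not in desired]
    return {"create": create, "update": update, "delete": delete}

def reconcile(desired: dict, actual: dict) -> dict:
    """Apply the drift plan and return the new (converged) actual state."""
    plan = diff(desired, actual)
    actual = {k: v for k, v in actual.items() if k not in plan["delete"]}
    actual.update(plan["create"])
    actual.update(plan["update"])
    return actual

if __name__ == "__main__":
    desired = {"web": {"size": "m5.large"}, "db": {"size": "db.r5.xl"}}
    actual = {"web": {"size": "m5.small"}, "old": {"size": "t2.micro"}}
    print(diff(desired, actual))       # the drift report
    print(reconcile(desired, actual))  # state after converging
```

Testing the pattern then means asserting two things: that `diff` reports no changes when states match, and that `reconcile` always produces a state for which the subsequent drift report is empty.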
In the April 5th Cloud 2030 Podcast episode, Rob Hirschfeld discusses orchestration, automation, and workflow, focusing on Terraform and introducing the “Terraform Automation and Orchestration” (TACO) pattern. The conversation emphasizes that while Terraform is a valuable tool, the broader patterns of reconciliation, GitOps, and event-driven automation are crucial for building and maintaining complex systems over time. Hirschfeld encourages listeners to view tools like Terraform and Ansible as initial steps in a journey, prompting consideration of scaling, building orchestration systems, and understanding the importance of comprehensive system development. For more in-depth discussions, explore the full episode on orchestration, automation, and workflow from April 5th, and join the ongoing conversations at the2030.cloud.
Organizations take a risk when they get locked into a vendor. In today’s episode, we talk about the risks of lock-in, both in general and in the context of Oracle.
That discussion takes us into a question of insurance, and whether insurance policies could ultimately drive people to reduce lock-in exposure. This was a fascinating discussion, not only about lock-in but about what would drive organizations to fix their lock-in problems.
In the Cloud 2030 Podcast episode on March 31st, Rob Hirschfeld discusses the intricate aspects of vendor lock-in, focusing on the risks associated with relying on a single provider, such as an authentication service like Okta. The conversation delves into the challenges of migrating away from tightly integrated platforms and emphasizes the importance of assessing and mitigating lock-in risks. The broader theme within Cloud 2030 discussions seems to revolve around identifying and understanding various risk factors in building complex infrastructures, aiming to drive market dynamics by addressing and managing these risks. To explore this insightful discussion further, check out the full episode on March 31st at the2030.cloud and become part of these engaging conversations.