How is data center infrastructure adapted to edge distributed ledger technology workloads?
We think through whether these demands (blockchain, proof-of-stake coins, etc.) are changing the way we look at data center infrastructure, and the short answer is yes. We also explore the impact of the types of workloads we run and how we distribute them, rather than the type of equipment we need to buy.
This conversation quickly becomes one about what we want to do with our infrastructure, not what the infrastructure is.
Today’s episode is about measuring complexity. Complexity is a topic that we cover a lot. And in this case, we really went past the idea that we could measure complexity, and into looking at the causes and costs of complexity.
We had a remarkable conversation about what it means to say something is too complex, what the consequences of complexity are, and what we should do about them. Ultimately, it's about how we measure the cost, or the risk, of complexity.
In the end, we are reframing complexity in business terms and human terms. That is the important approach to looking at complexity.
We discussed the intersection of serverless and digital twinning. These two concepts are really tightly intermingled!
We discarded the idea of a single central serverless hub managing everything; instead, we think sites would actually run a mesh of interconnected serverless event-processing and stream-processing systems. This approach is much more function-dependent, but it really opens up a lot of interesting discussions and possibilities.
We also discussed how to manage all of this meshed, serverless, subscription-based eventing and digital twinning.
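To make the eventing-plus-twinning idea concrete, here is a minimal, hypothetical sketch. It is not any specific product or the panel's design: a toy in-process publish/subscribe bus of the kind one node in an edge mesh might run, with a "digital twin" object that mirrors the last reported state of a device. The `EventBus` and `Twin` classes, the topic name, and the sensor fields are all invented for illustration.

```python
from collections import defaultdict

class EventBus:
    """Toy publish/subscribe hub; a real edge mesh would interconnect many of these."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a function to be called for every event on this topic.
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan the event out to every subscribed handler.
        for handler in self.subscribers[topic]:
            handler(event)

class Twin:
    """A minimal digital twin: it just tracks the device's last known state."""
    def __init__(self):
        self.state = {}

    def on_event(self, event):
        self.state.update(event)

# Wire a twin to a device topic, then let a sensor reading flow through.
bus = EventBus()
pump_twin = Twin()
bus.subscribe("site-a/pump-1", pump_twin.on_event)
bus.publish("site-a/pump-1", {"rpm": 1450, "temp_c": 41.2})
```

The point of the sketch is the shape, not the code: devices publish, twins subscribe, and nothing assumes a central hub, so the same pattern can be replicated and linked across sites.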
We started talking about blockchain and the edge, but that is not where it ended up at all! Our fascinating journey started with web3 and, surprisingly, its potential for distributed infrastructure and a distributed web.
That led us to the edge: managing and trusting devices on the edge through distributed ledger technology (DLT), and from there to the broader distributed ledger landscape. The journey is important because some of these technologies will be essential for establishing trust in systems.
In this conversation, we walk through the progression of these very important topics.
This episode explores applications for the edge. We really try to dig in on what will work in the edge from an application perspective. We also explore what’s holding us back.
Every time we have a conversation about the edge, we help untangle the components of the edge. In this discussion, we get more concrete about what type of infrastructure is needed to build real edge applications. We also define where edge applications are expected to work and where they aren't.
To explore HCI at the edge, we started with SUSE's Harvester. It's an HCI integration of Kubernetes, KubeVirt, and Longhorn (their storage system), plus some PXE-booting magic they threw in there. From there we explored how Kubernetes can fit into edge HCI.
That really morphed into edge operations more generally. It's not yet clear whether hyperconverged infrastructure fits. We covered items like AWS Outposts, Amazon's edge offering, and we included the cloud-to-edge migration from an application development perspective.
There are a lot of fascinating ops and development topics throughout the conversation.
Dependency chains are complex and fragile when you depend on software, hardware, or cloud services that can go away or change. In this conversation, we really examine the challenge of dynamic vendor relationships and what we can do to fix and protect our environments.
It's really hard to fix vulnerabilities when your software supply chain can change at any moment, and that can impact any device in your infrastructure! We work through what that problem means in practical terms.
We reflected on 2021 and our four key panelists talked through what’s coming for 2022. Instead of making broad predictions, we focused on the needs of the market. We felt there were many immediate needs around cloud outages and security challenges.
Of course, we also discuss how the edge is coming up, along with more physical integrations in automotive, healthcare, and energy creation and storage. All are very big topics tied to local-presence computing.
How do we make data centers green? Fundamentally, they are going to use electricity, but the sources of that electricity, how we respond to shortages of electricity, and the cost signals around that electricity are all critical to consider. These are the questions that lead us to how a green data center or green infrastructure gets created.
Our discussion also includes how infrastructure at the edge can play a role. Overall, a lot of factors go into building and creating green infrastructure, including the motivations and signals that will hopefully change the market.
Serverless at the edge, part one. This is a dynamic and engaged conversation with key questions like:
What is serverless? Do we need serverless? How is edge serverless different than cloud serverless?
We see edge environments as collecting data from sensors in a way that needs to be heterogeneous, multi-vendor, dynamic, and centralized. But centralized where?
I think the serverless aspect of this really drives home the idea that we need to be able to make small, quick, easy updates to an edge or sensor environment. But how we accomplish that is still to be defined.