We continue our discussion of what the environment would look like without Kubernetes. We started with the idea: what if Kubernetes went away? What if a copyright, trademark, or API issue forced us to abandon Kubernetes altogether?
In this episode we played out what-if scenarios, exploring what made Kubernetes unique and whether parts of Kubernetes, or parts of its architectural model, could exist outside of Kubernetes. What would be necessary?
We examined enough of Kubernetes' individual parts to see that it is an interesting convergence of core technologies: nothing new, except in the combination of those architectural paradigms, designs, and open source models. Through this, we dig into why Kubernetes is so powerful in the market.
We talk about improving the time it takes to make decisions – called time to decision, a topic that we like to address quite a bit. We started with the news of the day around AI, ML, ChatGPT, and learning models.
We asked ourselves if AI/ML and generative AI could change the way expertise is used to make decisions and improve time to decision for experts. What type of implications would that have in the market?
If you’ve been tracking this subject, I know you will find this exciting and interesting.
Today we look at what it takes to have much more collaborative building of automation, templates and shared components that are necessary to really drive platform engineering, and not just between teams at the same company.
What if components for infrastructure automation bridged the industry because they could be shared much more broadly, similar to the way we share modules in programming languages? We dug into what it takes to make that type of environment work in automation, and what the prerequisites are.
How do we sustain open source? Today we discussed how the commercial models and sustaining models around open source are changing and evolving.
We also included some conversation about whether or not generative AI might actually change the economics around that part of open source. We hit on top projects, open source hardware, open source operating systems and platforms – a whole gamut – and how it all fits together into a sustainable model for users, companies, enterprises, and really everybody. We all use open source to one extent or another.
We have our book club coming up on Data Cartels; we’re going to be discussing it on May 4th, and I hope you take the time to read it and come join us.
A conversation about platform migration turned into an interesting topic about the end of expertise and the changing of the way we think about expertise in a variety of contexts.
How can platform improvement be radically transformed by the use of AI? We discuss entering a world where the lock-in we’ve had to a platform, or the longevity of a platform, is radically transformed by the ability to review, scan, test, correct, and transport the data included in that system. The expertise needed to handle platform migration might be entering a new era in which it’s radically reduced. What are the implications of those transformations?
We address a wide range of the impacts of knowledge, AI, and generative machine learning.
In the May 6th episode of the Cloud 2030 podcast, the discussion revolved around the diminishing significance of expertise due to advancements in AI and ML technologies. Rob Hirschfeld, the host, emphasized how various fields, from law to data science, traditionally reliant on specialized knowledge, are being impacted by AI, challenging established barriers of expertise. The episode explored the transformative implications of AI on different sectors, suggesting that this theme will be a focal point in future podcasts and discussions on 2030.cloud.
What are the human and management factors that go into building great platform engineering? And what are the effects of having too much control or too much flexibility, not enough collaboration, and not creating space for innovation and change inside these platform engineering efforts?
Today, we discuss centralized versus decentralized platform engineering – or, as it came up in the conversation, whether platform engineering is the opposite of the Java Enterprise vision of a platform.
As you’re doing this type of work, interacting with platform teams should influence how you design and authorize the effort to make that work: what type of slack you need to put in the system, and what type of authority needs to be given to the platform engineering team.
In the Cloud 2030 Podcast episode on March 14th, Rob Hirschfeld discusses the importance of adopting a system-wide view in platform engineering, emphasizing the need to identify over-optimization in certain areas like developer productivity while underestimating other critical aspects such as operations, security, or compliance. Hirschfeld advocates for a holistic approach to platform engineering, focusing on optimizing the entire system, streamlining teams, and making strategic trade-offs rather than just emphasizing technology or developer productivity. He suggests that this mindset can lead to improved efficiency, productivity, and return on investment for platform teams, highlighting the significance of considering the broader organizational context. Hirschfeld encourages listeners to explore the March 14 episode for a deeper understanding of these concepts, available on the 2030.cloud platform.
We check in on data gravity to see how generative AI, conversations about metadata, and thinking on data lakes impact data gravity thinking in general.
Data gravity is a concept popularized by Dave McCrory, a friend of mine, who defined the idea that data itself – the aggregation, use, and transit of data – has a gravitational effect: it pulls more data, as well as workloads, toward it.
We jumped right into impacts of data gravity in this conversation.
Is hardware going to be innovative and change? Bryan Cantrill brings up Oxide Computer and some of their design motivations.
Today we discuss our skepticism about some of his points, as well as the impacts for cloud and distributed compute, hardware design, mainframes, cloud repatriation, and a whole bunch of topics about next-generation thinking in compute infrastructure management and applications.
We are officially starting our Cloud2030 book group, and I hope you will join us – we are going to be reading Data Cartels by Sarah Lamdan, followed by Investments Unlimited by John Willis and crew.
What is generative AI, and why are people now just generically calling it ChatGPT?
We put these things in a technical frame: can we use generative AI to improve our programming, testing, or automation? What does it take to use these concepts in ways that iteratively improve IT infrastructures?
We review the state of chat AI, ChatGPT, AI infrastructure, and related topics.
In a discussion on the DevOps Lunch and Learn podcast, Rob Hirschfeld, CEO of RackN, explores the complexities of generative AI and its impact on coding and automation. Hirschfeld raises questions about trust in generative AI models, emphasizing the need to understand how they are trained, updated, and refined to eliminate errors. He highlights the importance of creating reliable training sets to ensure the technology’s applications are trustworthy, focusing on enhancing system resilience and maintainability.
Is platform engineering effective at hiding complexity from developers? Today we tear apart what platform engineering is doing, how it came about and what it’s trying to be.
We discuss what companies are trying to accomplish with platform engineering – how can successful efforts improve outcomes for development teams and operations teams by improving collaboration and contracts? Why and how is that important, and what do those efforts entail?
In the Cloud 2030 podcast episode on platform engineering, Rob Hirschfeld, CEO of RackN, explores the profound impact of platform engineering on operational efficiency and developer complexity. He emphasizes the discipline’s role in making operations accessible, efficient, and repeatable, leading to significant benefits for companies and DevOps teams. While discussing how platform engineering can hide complexity, Hirschfeld highlights the ongoing essential work involved, debunking the notion of shortcuts and emphasizing the value it adds to organizations. He invites listeners to join discussions at the 2030.cloud, where important technology topics are analyzed in depth.