We talk about improving the time it takes to make decisions, called time to decision, a topic that we like to address quite a bit. We started with the news of the day around AI, ML, ChatGPT, and machine learning models.
We asked ourselves whether AI/ML and generative AI could change the way expertise is used to make decisions and improve time to decision for experts. What implications would that have in the market?
If you’ve been tracking this subject, I know you will find this exciting and interesting.
Today we look at what it takes to have much more collaborative building of automation, templates and shared components that are necessary to really drive platform engineering, and not just between teams at the same company.
What if components for infrastructure automation could bridge the industry because they can be shared much more broadly, similar to the way we share modules in programming languages? We dug into what it takes to make that type of environment work in automation, and what the prerequisites of such an environment are.
How do we sustain open source? Today we discussed how the commercial models and sustaining models around open source are changing and evolving.
We also included some conversation about whether generative AI might actually change the economics around that part of open source. We hit on top projects, open source hardware, open source operating systems and platforms, a whole gamut, and how it fits together into a sustainable model for users, companies, and enterprises. We all use open source to one extent or another.
We have our book club coming up on Data Cartels. We will be discussing it on May 4th, and I hope you take the time to read it and come join us.
A conversation about platform migration turned into an interesting topic about the end of expertise and the changing of the way we think about expertise in a variety of contexts.
How can platform improvement be radically transformed by the use of AI? We discuss entering a world where the lock that we’ve had in a platform, or the longevity of a platform, is radically transformed by the ability to review, scan, test, correct, and transport the data included in that system. The expertise needed to handle platform migration might be entering a new era in which it’s radically reduced. What are the implications of those transformations?
We address a wide range of the impacts of knowledge, AI, and generative machine learning.
In the May 6th episode of the Cloud 2030 podcast, the discussion revolved around the diminishing significance of expertise due to advancements in AI and ML technologies. Rob Hirschfeld, the host, emphasized how various fields, from law to data science, traditionally reliant on specialized knowledge, are being impacted by AI, challenging established barriers of expertise. The episode explored the transformative implications of AI on different sectors, suggesting that this theme will be a focal point in future podcasts and discussions on 2030.cloud.
How has Kubernetes changed our industry? Today’s discussion is part of a multi-podcast conversation in which we think about ways in which Kubernetes could go away, or could influence other technologies in a transformative way.
We went down the path of what we have learned from Kubernetes and how it influences other aspects of IT operations, architecture, and design, and explored the impact that the expectation of declarative, immutable operational constructs will have on other aspects of our systems. We also discussed micro OSes and microkernels, and how operations are staged, to talk about the need for a declarative OS, banking on the idea that what Kubernetes has built extends into other areas.
Chat GPT Summary: “The conversation is part of a multi-podcast series focused on exploring ways in which Kubernetes could influence other technologies, as well as the potential consequences if it were to disappear. During the discussion, the group delved into the lessons learned from Kubernetes and its impact on various aspects of IT operations, architecture, and design. One key takeaway was the importance of declarative immutable constructs in managing the complexities of modern IT systems. The group also explored the potential for microkernels to revolutionize system design and emphasized the need for declarative operating systems. Overall, the discussion highlighted the transformative role that Kubernetes has played in shaping the IT industry and underscored the importance of adopting a declarative, immutable approach to managing complex IT systems.”
Every time we look at data analytics and data systems, having a way to manage, control, and explain the data is actually as important as the data itself. This episode is all about metadata, specifically metadata related to data analytics, big data computation, and, in short, the data lake metadata problem.
Today we discuss the challenges of data management, but also the potential of understanding so much more about how data is used. If you are an IT professional or a data professional, you will find this conversation about how we are going to draw inferences from, manage, and control all of the data we have collected fascinating.
Emily Friedman’s DevOpsDays Ukraine presentation about rethinking the software development lifecycle or SDLC sparks our conversation today. She describes looking at it as a multi-dimensional cross functional discipline, that actually accounts for six different vectors of capabilities that need to be factored in – a resilient and robust look at the SDLC. Watch her YouTube:
We found that the model does not cover all of the things that we have been discussing as important considerations in building, deploying, and making software resilient and reliable, most specifically software bills of materials, or SBOMs.
What are the human and management factors that go into building great platform engineering? And what are the effects of having too much control or too much flexibility, not enough collaboration, not creating space for innovation, and not allowing change inside these platform engineering efforts?
Today, we discuss centralized versus decentralized platform engineering, or, as it came up in the conversation, platform engineering as the opposite of the Java Enterprise version of a platform.
If you are doing this type of work, interacting with platform teams should influence how you design and authorize the effort to make that work: what type of slack you need to put in the system, and what type of authority needs to be given to the platform engineering team.
In the Cloud 2030 Podcast episode on March 14th, Rob Hirschfeld discusses the importance of adopting a system-wide view in platform engineering, emphasizing the need to identify over-optimization in certain areas like developer productivity while underestimating other critical aspects such as operations, security, or compliance. Hirschfeld advocates for a holistic approach to platform engineering, focusing on optimizing the entire system, streamlining teams, and making strategic trade-offs rather than just emphasizing technology or developer productivity. He suggests that this mindset can lead to improved efficiency, productivity, and return on investment for platform teams, highlighting the significance of considering the broader organizational context. Hirschfeld encourages listeners to explore the March 14 episode for a deeper understanding of these concepts, available on the 2030.cloud platform.
We check in on data gravity to see how generative AI and conversations about metadata and thinking on data lakes impacts data gravity thinking in general.
Data gravity is a concept propagated by David McCrory, a friend of mine, who defined the idea that data itself, in its aggregation, use, and transit, has a gravitational effect: it pulls more data toward it, as well as workloads.
We jumped right into impacts of data gravity in this conversation.
Is hardware going to be innovative and change? Bryan Cantrill brings up Oxide Computer and some of their design motivations.
Today we discuss our skepticism about some of his points, as well as the impacts for cloud and distributed compute, hardware design, mainframes, cloud repatriation, and a whole range of topics about next-generation thinking in compute infrastructure management and applications.
We are officially starting our Cloud 2030 book group, and I hope you will join us. We are going to be reading Data Cartels by Sarah Lamdan, followed by Investments Unlimited by John Willis and crew.