This is our annual year-in-review and prediction episode, and it is a doozy. We talk through what has been an incredibly busy year in open source, cloud repatriation, AI, ML, and ChatGPT.
We laid down some really interesting insights and then looked forward not just into 2024 but across two years of predictions and trends that we see happening. We cover what we think will be shaping and shaking the market.
Can large language models effectively supplant developers and DevOps engineers?
Today we go deeper into how the models can be trained, whether they can be trusted, and the positive use case in which we really turn LLMs into the kind of trusted expert advisors they have the potential to be, versus simply something that turns up the volume on how fast you generate code.
We also talk about the downsides of that type of model, and about how powerful these tools could become as assistants that transform and improve work outcomes.
In the Cloud 2030 Podcast episode from August 31st, Rob Hirschfeld discusses the potential of using large language models to enhance DevOps and development outcomes. The conversation emphasizes the possibilities of leveraging AI to improve codebases, facilitate refactoring, and encourage code reuse by tapping into the knowledge embedded in existing code bases. Hirschfeld envisions a future where AI assists developers in reducing technical debt, maintaining code more efficiently, and consolidating code intelligently, ultimately leading to improved development practices. The episode explores the challenges and investments required for realizing these outcomes, encouraging listeners to delve into the full podcast for a comprehensive understanding. To engage in further discussions, interested individuals can explore the Cloud 2030 podcasts and join the conversations at the2030.cloud.
How do you regulate large language models? We look at the challenges of regulating these AI approaches and how governments and companies can approach it. We untangle how these models work, and dive into the mechanics of what information is controllable. We walk through concrete information that benefits you as a listener, and that we hope is an incentive to join us in future conversations as we continue to unravel the topic.
In addition, John Willis was on the panel today, and he started us off with a story about APIs, Amazon, Jeff Bezos, and O’Reilly from the warmup. So you’ll get a short bonus story from John Willis before we start.
In a Cloud 2030 podcast episode, Rob Hirschfeld, CEO and co-founder of RackN, discussed the complexities of regulating large language models. He highlighted the stark differences in approaches between the US, focusing on model risks, and the EU, emphasizing user rights protection. Hirschfeld expressed concern about reconciling these varying perspectives, especially regarding data rights preservation and understanding the risks associated with using such models, particularly when algorithms cannot be fully validated. He invited listeners to engage in the ongoing conversation at the2030.cloud.
What is technical debt, and how does it apply to large language models? We dive into a really interesting conversation that goes from technical debt into system and code maintenance, which is probably a much better way to think about the challenges we have in maintaining the infrastructure systems, code, data and data lakes that we have to deal with on an everyday basis.
How do we maintain, store, track and update the LLMs themselves? How do we know and manage which model is being used when we retire a model?
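One lightweight way to make the tracking and retirement questions concrete is a model registry. This is a minimal sketch of our own; the record fields and function names are illustrative assumptions, not anything prescribed on the show:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative registry of LLM versions: which exist, which are
# retired, and which downstream services still depend on them.
@dataclass
class ModelRecord:
    name: str
    version: str
    trained_on: date
    retired: bool = False
    consumers: list = field(default_factory=list)

registry: dict = {}

def register(record: ModelRecord) -> None:
    """Add a model version to the registry under a name:version key."""
    registry[f"{record.name}:{record.version}"] = record

def retire(name: str, version: str) -> list:
    """Mark a model version retired; return the consumers that must migrate."""
    rec = registry[f"{name}:{version}"]
    rec.retired = True
    return rec.consumers
```

Retiring a version then surfaces exactly who is still using it, which is the "how do we know which model is being used" question in miniature.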
In the Cloud 2030 podcast episode on August 17th, Rob Hirschfeld discusses the distinction between technical debt and system maintenance costs, emphasizing the importance of understanding the ongoing effort needed to maintain and improve systems. He points out that overlooking the maintenance costs while building a system leads to technical debt. Hirschfeld raises the question of whether large language models and AI can change the equation of system maintenance, a topic yet to be explored fully in the podcast.
A coming Data Dark Age is on its way: Reddit, Twitter, and other companies are taking what used to be publicly available information and putting it behind a paywall or gate.
Because of the way large language models are using this data and the value of the data, we are expecting to see that trend accelerate. This will have profound implications for how we think of, share, and use data in the coming years.
We use ChatGPT to live-create DevOps automation with Ansible, Terraform, and Python, and we interact with different clouds to get advice on how to set them up.
This discussion includes a screen share session, so if you’re listening to the audio there will be times when we’re talking about something you can’t see, but I do make a point of explaining what we’re doing. There’s also a video of the screen share session if you prefer.
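For listeners who want to try something similar programmatically rather than in the chat UI, the core of a session like this is just a well-formed prompt. The helper and prompt wording below are our own illustration, not what was typed in the episode:

```python
def devops_prompt(tool: str, task: str) -> str:
    """Build a prompt asking an LLM to generate infrastructure code.

    Illustrative only: in the episode the prompts were typed
    interactively into the ChatGPT web interface.
    """
    return (
        f"You are a DevOps assistant. Write {tool} code that {task}. "
        "Include comments and note any required provider credentials."
    )

prompt = devops_prompt(
    "Terraform",
    "creates an AWS S3 bucket with versioning enabled",
)
```

The same prompt string could be sent to any chat-completion API; the interesting part of the episode is evaluating whether the code that comes back is actually trustworthy.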
We dig into a topic written about by Eric Norlin of SK Ventures on technical debt and AI. In this episode, we discuss how generative AI could radically transform the way we generate code and deal with the technical debt of code that has already been generated.
We explore some fascinating concepts about how fast we can iterate, how we change the dynamics of building software and automation, and the expertise required to architect systems. This leads pretty far down the path toward disruptive thinking about how this could reshape the entire industry.
In a discussion on the Cloud 2030 podcast, CEO and co-founder of RackN, Rob Hirschfeld, highlighted the changing landscape of expertise in emerging technologies like AI. With the cost to build and iterate dropping significantly, expertise is no longer primarily applied during the building process, but integrated into design and testing sequences. The advent of generative AI has the potential to revolutionize how we design and build automation, software code, and technical systems, necessitating a redefinition of expertise in this rapidly evolving field.
How do AI chat and generative AI have the potential to disrupt everything we know about social media? Today we talk Twitter versus Mastodon.
We spend most of our time talking about the power, influence and simple use cases for generative AI.
Is this going to break Mastodon, Twitter and other forms of social media? We have a pretty compelling conversation about that, too.
If you’re a fan of Mastodon and Twitter, jump forward to about 30 minutes in when we really start getting down to that topic. Stay tuned for our agenda as a bonus extra in the back half of the podcast.
What is generative AI, and what is it that people are now just generically calling ChatGPT?
We put these things in a technical frame, meaning can we use generative AI to improve our programming, testing, or automation? What does it take to use these concepts in ways that iteratively improve IT infrastructures?
We review the state of chat, ChatGPT, AI infrastructure and things like that.
In a discussion on the DevOps Lunch and Learn podcast, Rob Hirschfeld, CEO of RackN, explores the complexities of generative AI and its impact on coding and automation. Hirschfeld raises questions about trust in generative AI models, emphasizing the need to understand how they are trained, updated, and refined to eliminate errors. He highlights the importance of creating reliable training sets to ensure the technology’s applications, focusing on enhancing system resilience and maintainability.
We discuss the implications of ChatGPT for IT and the industry.
In today’s episode, we spend a lot of time figuring out how data provenance, governance, bias, and ownership will impact ChatGPT in IT, technology, and cloud contexts. This discussion really looks at how ChatGPT can be used in disruptive ways, but also in protective ways, as what we describe as guardrails for how these systems are going to get built.
In the Cloud 2030 podcast’s January fifth episode, CEO Rob Hirschfeld explores the complexities of data provenance in ChatGPT, questioning ownership and control of the generated content. He emphasizes the need to understand the sources of data, pondering whether the output belongs to users, the algorithm, or no one, highlighting the challenges of systems that belong to nobody. Hirschfeld also connects this issue with Software Bill of Materials, emphasizing the importance of knowing the components of systems for accuracy and confidence. He encourages listeners to delve into the full episode for valuable insights and invites them to engage further in discussions at the2030.cloud.
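The Software Bill of Materials analogy can be made concrete with a small sketch: attach a provenance record to each piece of generated output, stating which model produced it and from what. The field names here are our own assumptions for illustration:

```python
import json
from datetime import datetime, timezone

def provenance_record(model: str, model_version: str,
                      prompt: str, sources: list) -> str:
    """Illustrative 'bill of materials' for one piece of LLM output:
    which model produced it, from what prompt, citing which known
    sources. Returns the record as JSON."""
    return json.dumps({
        "model": model,
        "model_version": model_version,
        "prompt": prompt,
        "known_sources": sources,  # often empty: the provenance gap itself
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

rec = provenance_record("example-llm", "2023-01", "summarize the episode", [])
```

In practice the hard part is the `known_sources` field: as the episode discusses, today's models usually cannot enumerate what their output was derived from, which is exactly the gap an SBOM-style record would expose.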