We talk about current events, including the acquisition of data stacks and the closing of the HashiCorp acquisition by IBM. Later, we dive into AI productivity and what's going on: are companies really getting the benefits they expect from AI chatbot integrations, and what are the challenges?
We also touch on something more infrastructure-focused: I give a preview of work I've been doing on separating Kubernetes virtualization from Kubernetes development use cases, which is something we will be talking about more in the future.
In this episode, we dive deep into the emerging world of building and training small language models. We’ll discuss the benefits, risks, and challenges companies face as they work to create more targeted and efficient AI models. From managing hardware and power requirements to ensuring data privacy and governance, we’ll cover the key considerations for enterprises looking to leverage the power of small language models. Join us as we unpack this fascinating topic and consider the implications for the future of AI and infrastructure operations.
We dive into AI, manufacturing, and how to improve manufacturing outcomes by better analyzing data.
If you are interested in manufacturing or advanced applications of AI and digital twins – where we create accurate virtual representations of physical items – this episode will hit all of your favorite topics!
How do we apply the principles of lean to data science and data engineering? We broaden this discussion to using AI and machine learning more generally.
This is a topic we had discussed over the summer and wanted to come back to six months later because so much has changed and transformed in the industry. What does agile lean process control look like in an infrastructure automation platform? How can we make these difficult and challenging components of data and data management more agile and more lean?
I think you will get a lot out of this conversation, considering our current hypercharged AI, ML, and LLM environment.
Transcript: otter.ai/u/1ZuALgSXcPw-bIf2GO…?utm_source=copy_url
DALL-E Prompt: please create a picture of a very large truck stuck under a low bridge. please label the truck as ai and the bridge as lean
How do we limit and regulate LLMs and AI? We approach this from multiple angles and look at what it takes to regulate this type of technology.
If you're interested in the limits of any technology – specifically, how AI gets regulated and where we're likely to impose legislative barriers or restrictions on it – then this will be a fascinating podcast for you.
In the Cloud 2030 Podcast episode from October 19th, Rob Hirschfeld delves into the topic of limiting large language models (LLMs) in AI and explores potential legal frameworks for regulating artificial intelligence and technology. The conversation highlights the intriguing idea that Section 230, a core governing principle of the internet that shields internet service companies from liability for user-generated content and thus from extensive content moderation, could play a pivotal role in shaping technology use. Hirschfeld suggests that changes to Section 230 might serve as a critical lever for influencing the control and regulation of emerging technologies like AI. Listeners are encouraged to check out the full October 19th episode for a detailed exploration of these regulatory considerations and can join ongoing discussions at the2030.cloud.
What goes on behind the scenes with AI, and specifically data center infrastructure and hardware?
We discuss broad-ranging concerns, opportunities, and market blockers around AI. We also address how deeply it can impact innovation, companies, and privacy legislation, viewed from the frame of hardware and automation.
Today’s discussion leads us to a larger question of what unlocks innovation in general that we will address in future podcasts.
Can large language models effectively supplant developers and DevOps engineers?
Today we go deeper into how the models can be trained, whether they can be trusted, and what the upside or positive use case is: one in which we really turn LLMs into the kind of trusted experts they have the potential to be, versus simply something that turns up the volume on how fast you generate code.
We also talk about the downsides of that type of model, and how powerful these tools could prove to be as assistants – a key way to transform and improve the outcomes of our work.
In the Cloud 2030 Podcast episode from August 31st, Rob Hirschfeld discusses the potential of using large language models to enhance DevOps and development outcomes. The conversation emphasizes the possibilities of leveraging AI to improve codebases, facilitate refactoring, and encourage code reuse by tapping into the knowledge embedded in existing code bases. Hirschfeld envisions a future where AI assists developers in reducing technical debt, maintaining code more efficiently, and consolidating code intelligently, ultimately leading to improved development practices. The episode explores the challenges and investments required for realizing these outcomes, encouraging listeners to delve into the full podcast for a comprehensive understanding. To engage in further discussions, interested individuals can explore the Cloud 2030 podcasts and join the conversations at the2030.cloud.