Tofu vs a Death of Expertise

The Terraform fork, now known as the OpenTofu project, is our first topic in today’s episode. We discuss what’s going on with the fork, the challenges it faces, and the pressures from HashiCorp that created this whole situation.

How do experts recover their authority, and how do we look at organizations in that light? We have about 20 minutes of really involved conversation about the book, The Death of Expertise by Tom Nichols, continuing from the previous podcast. If you haven’t heard the first part of the conversation, I suggest you go back and listen to our full Death of Expertise podcast.

We cover two topics, one of them short term and one of them long term, so it’s a nice, balanced industry discussion around what the fork means, what its impacts are, and a little bit of recap. There are some really spicy opinions around 32 minutes in, if you want to jump forward to where we resume our discussion of The Death of Expertise.

Transcript: otter.ai/u/zGUYDP6DynzxPBNLM9…?utm_source=copy_url
Photo by lil artsy: www.pexels.com/photo/person-abou…ur-dices-1111597/

VMware Explore, HashiCorp & Industry Update

We cover what’s going on from VMware to Broadcom to HashiCorp and its license changes. We discuss current topics, including the sad news of Kris Nova’s passing during a mountaineering expedition.

If you’d like to catch up on the tech news, then this topic hopefully has aged well and you will enjoy it!

Transcript: otter.ai/u/J18ecRKKwc8As_TLyC…?utm_source=copy_url
Photo by Oziel Gómez: www.pexels.com/photo/man-wearing…-mountain-925263/

Edge (and Beyond) Industry Update

How do edge, compute, SaaS, and cloud influence everything that we do? We covered topics from VMware Explore and talked a lot about edge. That led to AI/ML, which led to another topic, which led to another topic.

If you enjoy hearing about how interconnected our technology and choices are, everything from Bitcoin to edge, cloud, and government interaction, this is the podcast for you, because we cover pretty much all of it and connect it together.

Remember that on September 14, we are having one of our quarterly book club meetings on The Death of Expertise.

Resources:
www.fiercewireless.com/wireless/telc…promised-land

Transcript: otter.ai/u/wf2OahZkgp9tzF7eJu…?utm_source=copy_url
Photo by Aksonsat Uanthoeng: www.pexels.com/photo/close-up-ph…s-on-map-1078850/

Rob’s Hot Take:

In the Cloud 2030 podcast episode from August 24th, CEO Rob Hirschfeld discusses the shift from the rental/service economy to owning production assets in the context of cloud and SaaS models. He highlights the financial commitment and decoupling of capital expenses associated with service usage, emphasizing the value of owning assets in certain scenarios. Hirschfeld encourages deliberate decision-making regarding asset ownership, stressing the importance of understanding the long-term consequences and skill-building for businesses. He invites listeners to explore these topics further in the complete August 24th Cloud 2030 conversation.

Can we regulate LLMs? Should we?

How do you regulate large language models? We look at the challenges of regulating these AI approaches and how governments and companies can approach it. We untangle how these models work, and dive into the mechanics of what information is controllable. We walk through concrete information that is a benefit to you as our listener, and an incentive for you to join us in future conversations as we continue to unravel it.

In addition, John Willis was on the panel today, and he started us off with a story about APIs, Amazon, Jeff Bezos, and O’Reilly from the warmup. So you’ll get a short bonus story from John Willis before we start.

References
ised-isde.canada.ca/site/innovation…panion-document
www.europarl.europa.eu/news/en/headl…xt=Parliament‘s%20priority%20is%20to%20make,automation%2C%20to%20prevent%20harmful%20outcomes
www.trade.gov/market-intelligenc…i-regulations-2023
content.naic.org/cipr-topics/arti…ial-intelligence

Transcript: otter.ai/u/dBRQBFNz8d01taQ-iM…?utm_source=copy_url
Image: www.pexels.com/photo/measuring-g…tar-pick-3988555/

Rob’s Hot Take:

In a Cloud 2030 podcast episode, Rob Hirschfeld, CEO and co-founder of RackN, discussed the complexities of regulating large language models. He highlighted the stark differences in approaches between the US, focusing on model risks, and the EU, emphasizing user rights protection. Hirschfeld expressed concern about reconciling these varying perspectives, especially regarding data rights preservation and understanding the risks associated with using such models, particularly when algorithms cannot be fully validated. He invited listeners to engage in the ongoing conversation at the 2030.cloud.

Book Discussion: Investments Unlimited

This is the second installment of our book group: a discussion about Investments Unlimited. We have one of the book’s authors, and a great all-around DevOps enthusiast, John Willis, on the call with us.

As you might expect, while we talk about the book and John gives a lot of background and detail, we treat it in the classic Cloud 2030 style and bring in AI, large language models, and advanced DevOps.

We take the topics of the book to the next level, framing them in the current moment and looking ahead to how the concepts of compliance, validation, team coordination, and risk assessment will be incorporated into the coming wave of AI and how it changes our landscape.

Sources
Book www.amazon.com/Investments-Unlim…tal/dp/1950508536
techstrong.ai/aiops/the-rise-of-shadow-ai/
guidehouse.com/insights/financia…-lines-of-defense

Transcript: otter.ai/u/uC9c3xJS4oATQx7BrY…?utm_source=copy_url

Can ChatGPT do DevOps?

We use ChatGPT to live-create DevOps automation with Ansible, Terraform, and Python, and interact with different clouds to get advice on how to set them up.

This discussion includes a screen-share session, so if you’re listening to the audio there will be times when we are talking about something you can’t see, but I do make a point of explaining what we’re doing. There’s also a video of the screen-share session if you prefer.

Video: youtu.be/hU7pUDfliGk
Transcript: otter.ai/u/MPvT7SP0FCSe02asm8…?utm_source=copy_url
Image: www.pexels.com/photo/pink-backgr…ch-bubble-1111369


Generative Coding & DevOps Challenges

What can we expect generative AI to generate, and is it going to produce good code? Today we talk about GlueCon and generative DevOps and the different concepts and capabilities around it. What impact is it going to have on developers? How do we control that?

Today’s discussion was in preparation for our session on June 13, where we’re going to group-program GPT to see what type of DevOps coding skills we can prompt. We talked about the necessity of prompting in this session and covered some tips to help you think about how to be a better prompt engineer, a skill set that everybody’s going to need in the coming months, if not years.

Transcript: otter.ai/u/lsye_htH0-wksAOrqg…?utm_source=copy_url
Image: www.pexels.com/photo/man-welding…ow-frame-2965260/

The Kubernetes Alternate Universe

What would our systems look like if we didn’t have Kubernetes?

We started this discussion with platform engineering and its associated challenges. In talking about platforms, we covered ways in which people can consume infrastructure more effectively. That segued directly into ways in which Kubernetes could be changed under the covers, used for virtualization, or used for non-traditional containerized automation.

This episode is a pretty thorough review of alternatives to Kubernetes, and the ways in which Kubernetes misses the mark.

Transcript: otter.ai/u/0qX-qloVOzAQ47LgpS…?utm_source=copy_url
Image: www.pexels.com/photo/wrecked-ship-2336927/

Strengthening Security’s Weakest Link

How do you deal with the weakest link in security?

Today we talk through how we can secure systems, all the way from the technical processes we put in place, to the people involved, to legal enforcement, and who pays the price when data is compromised. There’s a lot to digest here that comes back to thoughtful ways in which we can deal with the weakest link in our systems.

How do we create robust security models?

Transcript: otter.ai/u/mkup2hKSzyP0Pkpxkk…?utm_source=copy_url
Image: www.pexels.com/photo/brown-thread-2072872/

AI And Technical Debt

We dig into a topic written about by Eric Norlin of SK Ventures on technical debt and AI. In this episode, we discuss how generative AI could radically transform the way in which we generate code and deal with the technical debt in code that has been generated.

We explore some fascinating concepts about how fast we can iterate, how we change the dynamics of building software and automation, and the expertise required to architect systems. This leads pretty far down the path toward disruptive thinking, and how it could reshape the entire industry.

Source: skventures.substack.com/p/societys-te…and-softwares
Transcript: otter.ai/u/MEtVkoNnZeCu0JHa30…?utm_source=copy_url
Image: www.pexels.com/photo/piggy-bank-…a-flower-4886900/

Rob’s Hot Take:

In a discussion on the Cloud 2030 podcast, CEO and co-founder of RackN, Rob Hirschfeld, highlighted the changing landscape of expertise in emerging technologies like AI. With the cost to build and iterate dropping significantly, expertise is no longer primarily applied during the building process, but integrated into design and testing sequences. The advent of generative AI has the potential to revolutionize how we design and build automation, software code, and technical systems, necessitating a redefinition of expertise in this rapidly evolving field.