Compliance is Fun! (and why you care)

We dive deep into the technical subject of governance and policy enforcement, including the tools, techniques, and processes you need to know to do the job well.

We cover how to get started, what to think about, and what to be aware of as you chip away at your governance and policy challenges, including developer portals, infrastructure pipelines, and DevSecOps.

Transcript: otter.ai/u/ND90jKHwbklUBOAwT1…?utm_source=copy_url
Image by Dall-E, prompt: “please make a cartoon that shows a regulator who is managing cloud and IT assets using impractical tools”

Rob’s Hot Take:

Rob Hirschfeld, CEO and co-founder of RackN and host of the Cloud 2030 Podcast, discusses the October 19th conversation about limiting large language models (LLMs) and AI. The discussion focused on creating legal limitations for artificial intelligence and technology, highlighting the potential impact of regulations such as Section 230, which governs internet service providers’ moderation of content. Hirschfeld suggests that changes to Section 230 could be a critical component in controlling emerging technologies, inviting listeners to explore the insightful conversation at the2030.cloud.

Building Open Ecosystems [Tofu vs Terraform]

We dive into the dynamics of open source projects and monetization today, starting with the Terraform and OpenTofu split. That topic is one we love to chew over and potentially over-analyze, but today’s discussion is different.

We go into how ecosystems are built in open source, proprietary, and cloud systems, and take a historical perspective on what makes a project successful from an ecosystem standpoint. We also dive into why some projects thrive that way, and why some don’t.

Today’s episode gives a new take on some of the dynamics in open source communities through the lens of what happened with OpenTofu and Terraform.

Transcript: otter.ai/u/ONDvgS9yGMrSN-bXMT…?utm_source=copy_url
Photo by James Wheeler: www.pexels.com/photo/lake-pebble…of-water-1574181/

Innovators vs Techno Optimists

We discuss innovation, a favorite topic of ours, today. Instead of diving into a structured conversation, we took the bait offered by Marc Andreessen in his techno-optimist manifesto. If you haven’t read it, I suggest taking a moment to read it before you listen to the rest of the podcast, but you do not have to!

It is definitely an interesting opinion piece about the power of innovation, which is why it was a good input for our discussion. We bring our own unique perspective and a robust discussion about how innovation should work, which tees up further conversations about the Three Horizons model for innovation.

References:
a16z.com/the-techno-optimist-manifesto/

Transcript: otter.ai/u/6qOpnFW0LMvh-rvZfw…?utm_source=copy_url
Photo by RDNE Stock project: www.pexels.com/photo/woman-in-bl…-sweater-7413891/

Compliance Comes to Kubernetes

What does it take to implement governance and compliance? They are process controls much more than individual technologies. A lot of the talks lately seem to be about governance and compliance, and today we have a fascinating discussion about governance, compliance, and Kubernetes.

Kubernetes is maturing: it is losing the drama that was a hallmark of its first decade and moving toward a focus on security, compliance, and normality. Yet all of those things carry a degree of tension between vendors and users, which puts single-choice compliance and governance in direct conflict with competitive open source ecosystems.

This makes for a fascinating conversation where we touch on some really important issues for the industry.

Transcript: otter.ai/u/mAkvsYgMYMp_W8Bizk…?utm_source=copy_url
Image: Generated by Dall-E

Is Limiting LLMs possible?

How do we limit and regulate LLMs and AI? We approach this from multiple angles and explore what it takes to regulate this type of technology.

If you’re interested in the limits of any technology, and specifically how AI gets regulated and where we’re likely to impose legislative barriers or restrictions, then this will be a fascinating podcast for you.

Transcript: otter.ai/u/8IsFB-H-U3XzpQ751l…?utm_source=copy_url
Photo by Pixabay: www.pexels.com/photo/black-andro…white-book-39584/

Rob’s Hot Take:

In the Cloud 2030 Podcast episode from October 19th, Rob Hirschfeld delves into the topic of limiting large language models (LLMs) in AI and explores the potential legal frameworks for regulating artificial intelligence and technology. The conversation highlights the intriguing idea that Section 230, a core governing principle of the internet that exempts internet service companies from extensive content moderation, could play a pivotal role in shaping technology use. Hirschfeld suggests that changes to Section 230 might serve as a critical component in influencing the control and regulation of emerging technologies like AI. Listeners are encouraged to check out the full October 19th episode for a detailed exploration of these regulatory considerations and can join ongoing discussions at the2030.cloud.

Data Center & Hardware Impacts on AI

What goes on behind the scenes with AI, and specifically data center infrastructure and hardware?

We discuss broad-ranging concerns, opportunities, and market blockers around AI. We also address how deeply it can impact innovation, companies’ privacy, and legislation, viewed from the frame of hardware and automation.

Today’s discussion leads us to a larger question of what unlocks innovation in general that we will address in future podcasts.

Links: research.aimultiple.com/wp-content/we…kers.png.webp

Transcript: otter.ai/u/3FUaZ3m8JabYLyJZGH…?utm_source=copy_url
Photo by Tim Samuel: www.pexels.com/photo/woman-handf…hy-chips-6697286/

State of the IT vs OT Edge

If you follow Cloud 2030 discussions or any of my podcasting over the last decade, you know edge is a very interesting topic to me. Today’s episode is a short update on the state of the edge from a very specific position.

In this discussion, Josh and I walk through why edge has been hard to nail down from a technology perspective. This is of special interest to RackN as we keep honing and refining our IT edge infrastructure technology set.

Transcript: otter.ai/u/OtzOtPvoyiAKZdxJjm…?utm_source=copy_url
Photo by Khoa Võ: www.pexels.com/photo/unrecogniza…down-sky-5780744/

Tofu vs a Death of Expertise

The Terraform fork, now known as the OpenTofu project, is our first topic in today’s episode. We discuss what’s going on with it, the challenges, and the potential pressures from HashiCorp that created this whole situation.

How do we get experts to recover their authority, and how should we look at the organizations around them? We have about 20 minutes of really involved conversation about the book The Death of Expertise by Tom Nichols, continuing from the previous podcast. If you haven’t heard the first part of the conversation, I suggest you go back and listen to our full Death of Expertise podcast.

We cover two topics, one short term and one long term, so it’s a nice, balanced industry discussion around what the fork means, what its impacts are, and a bit of recap. If you want to jump forward, there are some really spicy opinions around 32 minutes in, where we resume our discussion about the Death of Expertise.

Transcript: otter.ai/u/zGUYDP6DynzxPBNLM9…?utm_source=copy_url
Photo by lil artsy: www.pexels.com/photo/person-abou…ur-dices-1111597/

Bias in LLMs

What are the potentials for biasing LLM models? We dive into biases both in good ways and in bad ways.

Is the expertise we’re feeding into these models insufficient to actually drive the outcomes we’re looking for? Or will we eliminate humans from the loop in a relatively short period of time? Both outcomes, at the moment, feel equally probable, which is troubling.

We dive into how and why that happens, what’s going on, and some concrete tips for how you can improve your prompting to avoid these same pitfalls.

Transcript: otter.ai/u/v3MaWiCWEe-G1ar2O0…?utm_source=copy_url
Photo by Marta Nogueira: www.pexels.com/photo/pink-and-bl…or-text-17151677/

Rob’s Hot Take:

In the Cloud 2030 Podcast episode from September 7th, Rob Hirschfeld explores the topic of bias in large language models, emphasizing the ease with which the output and tone of these models can be influenced by initiating them with different idiomatic English dialects. By demonstrating that variations in greetings like “bonjour” or “howdy” yield distinct results, Hirschfeld underscores the importance of crafting prompts and setting the right tone to unlock the embedded expertise within the models. The conversation delves into the fascinating and somewhat alarming aspects of bias in large language models, offering insights that encourage listeners to engage with the full discussion. Those interested in participating in ongoing conversations can find more information about Cloud 2030 at the2030.cloud.

Death of Expertise [Book Discussion]

We continue our book group series today with The Death of Expertise by Tom Nichols, which is very dense, with a lot of provocative and thought-provoking comments, topics, and ideas. It was so interesting that we decided we needed two sessions to fully unpack it. This is part one, which covers how society handles expertise, how social media changes the cyclical nature of confidence in our institutions, and how expertise shapes technology buying patterns and use. If you’re interested, please participate in part two of the discussion!

We also talked about the Dunning-Kruger effect: the idea that the less you know about something, the more confident you are, and that gaining knowledge makes you more knowledgeable but also less falsely confident in how you present yourself. It’s a more complex topic than that very short summary suggests.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
― Upton Sinclair, I, Candidate for Governor

Transcript: otter.ai/u/m–7wT4fRjdodT3qRu…?utm_source=copy_url
Image is book cover