What is the potential for biasing LLMs? We dive into bias, in both good ways and bad.
Is the expertise we're feeding into these models insufficient to actually drive the outcomes we're looking for? Or are we going to eliminate humans from the loop in a relatively short period of time? At the moment, both outcomes feel equally probable, which is troubling.
We dive into how and why this happens, what's going on under the hood, and concrete tips for improving your prompting to avoid these pitfalls.
Transcript: otter.ai/u/v3MaWiCWEe-G1ar2O0…?utm_source=copy_url
Photo by Marta Nogueira: www.pexels.com/photo/pink-and-bl…or-text-17151677/
Rob’s Hot Take:
In the Cloud 2030 Podcast episode from September 7th, Rob Hirschfeld explores bias in large language models, emphasizing how easily the output and tone of these models can be influenced by opening with different idiomatic greetings. By demonstrating that variations in greetings like “bonjour” or “howdy” yield distinct results, Hirschfeld underscores the importance of crafting prompts and setting the right tone to unlock the expertise embedded in the models. The conversation delves into the fascinating and somewhat alarming aspects of bias in large language models, offering insights that encourage listeners to engage with the full discussion. Those interested in participating in ongoing conversations can find more information about Cloud 2030 at the2030.cloud.
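For readers who want to try the greeting experiment Hirschfeld describes, here is a minimal sketch: it sends the same question prefaced by different greetings and prints the replies so you can eyeball tone differences. The OpenAI Python SDK is used only as an example client, and the model name and the `ask` helper are illustrative assumptions, not anything specified in the episode.

```python
# Sketch of the greeting-bias experiment discussed in the episode:
# the same question, prefaced with different idiomatic greetings,
# can pull noticeably different tone and content out of a model.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

GREETINGS = ["Bonjour!", "Howdy!", "Good day,", "Yo,"]
QUESTION = "What should I consider when choosing a cloud provider?"


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


for greeting in GREETINGS:
    reply = ask(f"{greeting} {QUESTION}")
    # Print the first line of each reply to compare tone at a glance.
    print(f"--- {greeting}\n{reply.splitlines()[0]}\n")
```

Running this a few times makes the point quickly: the substance of the answers is similar, but register, formality, and even which details the model leads with shift with the greeting.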