Let’s say you hear a claim that test tooling is coming that will recognize a failure, file a report, fix the production code, re-run the tests, and note that the defect is fixed. Is this credible or not? How would you know?
It seems like every day we are bombarded with claims like this, yet no one talks about discernment. Here we are, discussing testing: a professional activity whose role includes figuring out what is really going on, and whose philosophers talk about epistemology, the study of knowledge. And yet no one is talking about how to figure out what is actually going on with AI as an accelerator of testing.
It’s time we start.
Starting with an explanation of how large language models (LLMs) actually work, speaker and author Matt Heusser will build a conceptualization to help you understand what they are actually doing, and why they seem to work yet so often fail so spectacularly. After that he’ll discuss some practical applications, including a few real examples from the field. The goal here isn’t to give you an answer -- it is to give you a feel for how to find the answer for yourself.
Finally, we’ll put AI in context, comparing it to other “veritable tsunami waves” that were going to eliminate what came before.
Some ideas in AI are hip; some are hype.
Let’s start talking about how to know the difference.
… maybe with a couple of hip examples, if we can.
About the speaker:
Matthew Heusser is the managing director at Excelon Development and the co-author of “Software Testing Strategies: A Guide for the 2020s.” He is the 2014 recipient of the Most Influential Agile Test Professional Person (MAITPP) award in Potsdam, Germany, and the 2015 recipient of the Most Popular Online Contributor to Agile award at the Agile Awards (Marble Arch, United Kingdom). Over the course of his adult career in software, he has served two terms on the board of the Association for Software Testing and delivered keynote speeches on testing on three continents. Learn more about Matt at www.xndev.com.