We would like to think that AI-based machine learning systems will always produce the right answer within their problem domain. In reality, their performance is a direct result of the data used to train them: the answers they give in production are only as good as that training data.
But data collected by human means, such as surveys, observations, or estimates, can carry built-in human biases like confirmation bias or representativeness bias. Even seemingly objective measurements can measure the wrong things or miss essential information about the problem domain.
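Even a lightweight audit of the training data can surface some of these problems before a model is ever built. The sketch below is one illustration, not a method from the talk: it assumes a pandas DataFrame with hypothetical "group" and "label" columns standing in for a protected attribute and the outcome being learned, and reports each group's share of the data alongside its positive-label rate.

```python
# Minimal sketch: surface representation and label-rate skew in training data.
# Column names ("group", "label") and the toy data are hypothetical.
import pandas as pd

def audit_group_balance(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Report each group's share of the dataset and its positive-label rate."""
    return df.groupby(group_col).agg(
        share=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),
    )

# Toy data: group B is both under-represented (20% of rows) and under-labeled
# (20% positive vs. 60% for group A), a skew a trained model will likely reproduce.
df = pd.DataFrame({
    "group": ["A"] * 80 + ["B"] * 20,
    "label": [1] * 48 + [0] * 32 + [1] * 4 + [0] * 16,
})
print(audit_group_balance(df, "group", "label"))
```

A check like this will not catch every bias, but large gaps in either column are a cheap early warning that the data does not represent the problem domain evenly.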
The effects of biased data can be even more insidious. AI systems often function as black boxes: even the technologists who build them may not know how the system reached a conclusion. That opacity makes it particularly hard to identify any inequality, bias, or discrimination feeding into a particular decision.
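One common way to probe a black box, offered here as an illustrative technique rather than anything the talk prescribes, is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn on toy data in which a hypothetical protected attribute leaks into the label.

```python
# Minimal sketch: probe a black-box model with permutation importance to see
# whether a (hypothetical) protected attribute is driving its decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Toy data: column 0 plays the role of a protected attribute that leaks into the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["protected_attr", "feature_1", "feature_2"],
                       result.importances_mean):
    # A large score for protected_attr means the model leans on it heavily.
    print(f"{name}: {score:.3f}")
```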
Tune in to this talk to learn how AI systems can suffer from the same biases as human experts, and how that can lead to biased results. Viewers will learn how testers, data scientists, and other stakeholders can develop test cases that recognize biases, both in the data and in the resulting system, and how to address those biases.
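As one concrete illustration of this kind of test case (a hypothetical pytest-style check with made-up column names and tolerance, not an example from the talk itself), the sketch below asserts that a model's positive-prediction rate does not differ too widely across groups, a rough demographic-parity check.

```python
# Minimal sketch of a bias-focused test case: compare positive-prediction
# rates across groups. The data, column names, and 0.2 tolerance are
# hypothetical stand-ins for the system under test.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

def test_predictions_are_balanced_across_groups():
    # In a real suite these predictions would come from the model under test.
    df = pd.DataFrame({
        "group":      ["A", "A", "A", "A", "B", "B", "B"],
        "prediction": [1,   1,   0,   0,   1,   1,   0],
    })
    gap = demographic_parity_gap(df, "group", "prediction")
    assert gap <= 0.2, f"positive-rate gap {gap:.2f} exceeds the 0.2 tolerance"
```

Run under pytest, a failure here flags the model for review; the right tolerance and fairness metric depend on the problem domain and are themselves decisions the whole team should own.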
About the speaker
Gerie Owen is a Lead Quality Engineer at ZS. She is a Certified Scrum Master and a frequent conference presenter and author on technology and testing topics. She enjoys analyzing and improving test processes, mentoring new quality engineers, and bringing a cohesive team approach to testing. Gerie has written many articles on technology, including Agile and DevOps topics. She chooses her presentation topics based on her experiences in technology, what she has learned from them, and what she would do to improve them. Gerie can be reached at gerie@gerieowen.com. Her blog, Testing in the Trenches, is at https://testinggirl.wordpress.com/, and she is also available at www.gerieowen.com and on Twitter and LinkedIn.