The buzzword "generative artificial intelligence" is everywhere; unless you live on the moon, it is impossible not to be confronted with it. Machine learning (ML) algorithms increasingly make decisions in place of human actors across many domains. Yet many companies are unaware of the novel security threats that arise from training and deploying ML models at scale. In this talk, we will explore some of the most common and dangerous attacks on ML systems, such as data poisoning, adversarial attacks, evasion techniques, oracle attacks, model extraction, and more. We will also look at how data and models can behave unpredictably due to data or model drift, data format conversion errors, or missing values. Finally, we will discuss possible solutions to these challenges, covering both human and technical aspects. Our goal is to raise awareness of an important topic that is still in its early stages and to provide practical guidance for securing ML systems in production.