ChatGPT has been all over the news in the last couple of months. While AI has been around for a while, ChatGPT (and OpenAI’s Playground) now lets anyone interact with the technology directly. This has not gone unnoticed by cybercriminals, who are already discussing ways to use the tool to their advantage on underground Russian forums.
In this session, we will show how threat actors can use ChatGPT at different stages of a breach, from crafting phishing emails and identifying vulnerabilities to writing malware and exfiltrating data. We will then show how defenders can use ChatGPT for debugging, understanding security tools, creating reports, predicting an adversary’s next step, and more.
How can threat actors and security teams use ChatGPT to their advantage?
Join Sam Curry, Chief Security Officer at Cybereason, Ofer Maor, Chief Technology Officer at Mitiga, and Etay Maor, Sr. Director of Security Strategy at Cato Networks, to learn about:
How threat actors can use ChatGPT for cyber attacks
How security teams can use ChatGPT to defend against attacks
AI limitations and biases
The legal, privacy, and copyright challenges that AI presents