Generative AI tools like GitHub Copilot have become increasingly popular in software development, and for good reason. Multiple reports have shown that tools like Copilot can significantly improve development efficiency and increase developer satisfaction, among other benefits.
At the same time, some engineering organizations have understandably been reluctant to adopt generative AI tools because of uncertainty around potential security, legal, and data privacy risks.
Over the past several months, FOSSA has been working to develop product functionality to help our customers manage these potential risks. Learn about these new features — and get big-picture guidance on generative AI risk-management best practices and processes — in this webinar with Senior Software Engineer Jessica Black, who is leading FOSSA's generative AI risk-management feature development.
Jessica will discuss:
- Design principles behind FOSSA's new generative AI risk-management features
- How to use FOSSA's generative AI risk-management features to understand and manage security and legal risks
- Strategies to improve maintainability and code privacy when using generative AI code-generation tools
- GitHub Copilot settings that can guard against potential legal and privacy risks