Best Practices for Python Data Transformations in Snowflake

Presented by

Sandeep Gupta, Senior Product Manager, Snowflake; Jeremiah Hansen, Principal Architect, Data Engineering, Snowflake; and Lucy Zhu, Product Marketing Manager, Snowflake

About this talk

Data preparation can be a headache with legacy architectures that require manual environment configuration, job tuning, and cluster management. Snowpark, Snowflake’s developer platform, changes that by making it easier than ever for data engineers and data scientists to transform and process data. Snowpark natively supports SQL, Python, Java, and Scala on a single platform, so teams can build data-wrangling pipelines that feed ML models, analytics, and applications faster and more securely on Snowflake’s elastic processing engine. Watch this webinar with Snowpark experts on Python best practices to:

- Execute custom Python code with stored procedures and the various UDF types for ELT/ETL
- Seamlessly access popular open-source packages through the built-in Anaconda integration
- Use Snowpark-optimized warehouses for memory-intensive operations on large datasets
- Implement scikit-learn-style transformers with Snowpark to take advantage of Snowflake’s parallelization and scale feature engineering
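As a taste of the first bullet, the sketch below shows one common pattern: write the transformation logic as a plain Python function (testable locally), then register it as a Snowpark UDF so it runs inside Snowflake. The registration step is shown commented out because it needs a live session; `connection_parameters` is a placeholder, and the cleaning rule itself is an illustrative example, not from the talk.

```python
# Transformation logic as an ordinary Python function: strip currency
# symbols and thousands separators, parse to float, return -1.0 on failure.
def clean_amount(raw: str) -> float:
    try:
        return float(raw.replace("$", "").replace(",", "").strip())
    except (ValueError, AttributeError):
        return -1.0

# In a real environment the same function is registered as a Snowpark UDF
# (connection_parameters is a placeholder dict of account credentials):
#
# from snowflake.snowpark import Session
# from snowflake.snowpark.functions import udf
# from snowflake.snowpark.types import FloatType, StringType
#
# session = Session.builder.configs(connection_parameters).create()
# clean_amount_udf = udf(clean_amount, return_type=FloatType(),
#                        input_types=[StringType()], name="clean_amount")
```

Keeping the logic in a standalone function means it can be unit-tested without a warehouse, then pushed into Snowflake unchanged.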
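The last bullet above, scikit-learn-style transformers, can be sketched with the familiar fit/transform interface. This is a minimal local illustration, not the Snowpark ML API: in Snowpark, `fit` would compute the statistics with DataFrame aggregations pushed down to the warehouse, and `transform` would emit column expressions evaluated in parallel.

```python
# Minimal scikit-learn-style transformer: learn statistics in fit(),
# apply them in transform(). Names and logic are illustrative only.
class MinMaxScaler:
    def fit(self, values):
        # Learn the min/max of the training data.
        self.min_ = min(values)
        self.max_ = max(values)
        return self

    def transform(self, values):
        # Scale each value into [0, 1] using the learned statistics.
        span = self.max_ - self.min_
        if span == 0:
            return [0.0 for _ in values]
        return [(v - self.min_) / span for v in values]

scaler = MinMaxScaler().fit([10, 20, 30])
scaled = scaler.transform([10, 20, 30])  # → [0.0, 0.5, 1.0]
```

The point of mirroring the fit/transform contract is that feature-engineering code written this way can scale out on Snowflake without changing its shape.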