Unravel "Optimize" Webinar Series | Managing Costs for Spark on Databricks

Presented by

Patrick Mawyer, Senior Solutions Engineer, Unravel Data

About this talk

Are you looking to optimize costs and resource usage for your Spark jobs on Databricks? Then this is the webinar for you. Overallocating resources, such as memory, is a common mistake when setting up Spark jobs. And for Spark jobs running on Databricks, adding resources is only a click away, but it's an expensive click, so cost management is critical. Unravel Data is our AI-enabled observability platform for Spark jobs on Databricks and other big data technologies. Unravel helps you right-size memory allocations, choose the right number of workers, and map your cluster needs to available instance types. Unravel's troubleshooting capabilities mean you can fix problems the right way, so you may never have to overallocate memory and other resources again.

On November 4th at 10 AM PT, join Patrick Mawyer, Senior Solutions Engineer at Unravel Data, as he shares tips and tricks to help you get the most from your Databricks environment, covering auto-scaling, interactive clusters versus job clusters, and cost reduction.

You'll learn:

- How Unravel cuts costs by an average of 30-40%.
- How Unravel cuts time to resolve problems (MTTR) by an average of 50%.
- How to auto-tune and fix jobs to speed them up, eliminate errors, and meet SLAs.
- How to screen jobs with Unravel before they go into production, ensuring a smooth launch and happy users.
- How Unravel's AI-powered recommendations, AutoActions, and TopX reports save you time, money, and stress.
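As a concrete illustration of the cost levers mentioned above (auto-scaling, job clusters versus always-on interactive clusters, instance-type mapping, and right-sized memory), here is a minimal sketch of an autoscaling job cluster definition. The field names follow the Databricks Jobs API's new_cluster object; the specific instance type, worker counts, and memory setting are illustrative assumptions, not figures from the webinar.

```python
# Minimal sketch: an autoscaling job cluster spec for the Databricks Jobs API.
# A job cluster is created per run and terminates when the job finishes,
# which is typically cheaper than keeping an interactive cluster running.
# All concrete values below (instance type, worker counts, memory) are
# illustrative assumptions; right-size them from observed usage.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "i3.xlarge",  # map workload needs to an instance type
    "autoscale": {
        "min_workers": 2,  # floor for steady-state load
        "max_workers": 8,  # cap so auto-scaling cannot run up costs
    },
    "spark_conf": {
        # Size executor memory to observed peak usage plus headroom,
        # rather than overallocating by default.
        "spark.executor.memory": "8g",
    },
}
```

Submitting a run with a spec like this means the cluster exists only for the duration of the job, and the max_workers cap bounds what auto-scaling can spend.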

More from this channel

At Unravel, we see an urgent need to help every business understand and optimize the performance of its applications while managing data operations with greater insight, intelligence, and automation. For these businesses, Unravel is the AI-powered data operations company. Our solutions apply AI, machine learning, and advanced analytics to help you achieve predictable performance in your modern data applications and pipelines.