Created on 04-08-2020 12:33 PM - edited on 09-02-2020 03:57 PM by cjervis
Extreme Gradient Boosting, or XGBoost, has received a lot of attention recently and has been the star of the show across many data science workflows and competitions on Kaggle. It is a decision-tree-based ensemble model that builds hundreds of “weak learners,” i.e. shallow trees that individually capture only simple patterns and so are unlikely to overfit the data. The final predictions are made by combining this group of trees, thereby reducing overfitting and improving generalization on unseen data.
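To make the "many weak learners" idea concrete, here is a toy boosting sketch in plain Python. It is illustrative only — the names `fit_stump` and `boost` are ours, not XGBoost's API: depth-1 "stumps" are fitted one after another to the current residuals, and their shrunken sum approximates a target that no single stump could fit.

```python
# Toy gradient boosting with depth-1 "stumps" (weak learners).
# Illustrative sketch only; not XGBoost's actual algorithm or API.
import random

def fit_stump(xs, residuals):
    """Find the single threshold split that best reduces squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, rounds=50, lr=0.1):
    """Each round fits a stump to the residuals; the ensemble is their shrunken sum."""
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        s = fit_stump(xs, residuals)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Fit a noisy step function: no single stump can model it, but the ensemble can.
random.seed(0)
xs = [i / 100 for i in range(100)]
ys = [(0.0 if x < 0.3 else 1.0 if x < 0.7 else 0.5) + random.gauss(0, 0.05)
      for x in xs]
model = boost(xs, ys)
```

Each stump alone is a crude two-level step, yet the additive ensemble recovers all three plateaus — the same intuition behind XGBoost's hundreds of shallow trees.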
For many machine learning use cases, especially in enterprises with rapidly growing volumes of data, XGBoost is a great candidate for distributed ML model building, as it has been shown to scale nearly linearly with the number of parallel workers. To achieve this, we can use Dask, an open-source parallel computing framework, inside of CML. While you can use other frameworks like Spark, we chose Dask because it is written natively in Python (Spark runs on the JVM) and avoids the overhead of running JVMs and context switching between Python and Java. It is also easier to debug, since errors surface as ordinary Python stack traces.
In the following video, Using Distributed XGBoost with Dask in Cloudera Machine Learning, we will learn how to use distributed XGBoost inside CML to supercharge your ML models and drive results faster.