Earlier this year Cloudera Machine Learning (CML) added a new way to accelerate GenAI projects by tapping into Hugging Face Spaces and deploying these projects right inside of CML with just a few clicks. With over 6,500 Spaces as of this writing, the Hugging Face community continues to grow rapidly and provides a convenient platform for practitioners and organizations to share their work in areas ranging from classical machine learning to the latest GenAI research. In this article you will learn how to enable and use this feature to accelerate your own ML projects.

The default Hugging Face Spaces AMP catalog is enabled for all CML Public Cloud workspaces starting from version 2.0.43-b208. To enable users to launch external Hugging Face AMPs, additional steps are necessary (see the end of this article).

Steps to Deploy a Hugging Face Space AMP

Let's dive right in and see how simple it is to deploy a Hugging Face AMP:

  1. Click on AMPs in the left sidebar of your ML Workspace. If you don't see this option, AMPs have not been enabled by your administrator.
  2. Click on the Hugging Face tab to narrow the view to HF AMPs only.
  3. On the "Can you run it? LLM version" card, click Deploy.
  4. Read through the details of the AMP and the disclosure message. You can also navigate to the HF Space's official GitHub repository if you wish.
  5. Click Configure & Deploy.
  6. This particular HF Space answers the question of whether a given LLM can run on a particular hardware spec. On the next screen, note the environment variables that can be passed down to the project. You can leave these at their default values here; a short sketch of how a project script reads such variables follows this list.
  7. Leave the rest of the settings unchanged and click Launch Project.
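For illustration, here is a minimal sketch of how a script inside the deployed project could read such environment variables. The variable name is hypothetical; the actual names are defined by each Space and shown on the Configure & Deploy screen.

    import os

    # Hypothetical variable name for illustration only -- the real names
    # are defined by the Space / AMP and shown on the configure screen.
    model_cache_dir = os.environ.get("MODEL_CACHE_DIR", "/tmp/models")
    print(f"Caching models under: {model_cache_dir}")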

At this point, CML kicks off the steps required to launch this Hugging Face Space, namely installing dependencies and launching an application. Once these steps complete, the AMP is fully deployed.

Clicking on Applications in the left sidebar, you can see a gradio app deployed. Clicking on the app's card (Application to serve UI) will take you to the app's UI, opened in a new tab of your web browser. It will look like this:

(Screenshot: the deployed "Can you run it? LLM version" app UI)
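If you are new to gradio, the sketch below shows the general shape of a Space's entry point. It is a toy stand-in rather than the actual "Can you run it? LLM version" code: the 2 GB-per-billion-parameters estimate (roughly fp16 weights, ignoring activations) and the CDSW_APP_PORT fallback are illustrative assumptions.

    import os
    import gradio as gr

    def can_it_run(params_billion: float, gpu_memory_gb: float) -> str:
        # Toy estimate: fp16 weights take ~2 bytes per parameter, so
        # inference needs roughly 2 GB of GPU memory per billion parameters.
        needed_gb = params_billion * 2
        verdict = "should fit" if needed_gb <= gpu_memory_gb else "will not fit"
        return f"~{needed_gb:.0f} GB needed; the model {verdict} in {gpu_memory_gb:.0f} GB."

    demo = gr.Interface(
        fn=can_it_run,
        inputs=[
            gr.Number(label="Model size (billions of parameters)"),
            gr.Number(label="GPU memory (GB)"),
        ],
        outputs=gr.Textbox(label="Verdict"),
        title="Can you run it? (toy sketch)",
    )

    if __name__ == "__main__":
        # CML applications listen on the port given by the CDSW_APP_PORT
        # environment variable; fall back to gradio's default when absent.
        demo.launch(server_name="127.0.0.1",
                    server_port=int(os.environ.get("CDSW_APP_PORT", "7860")))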

What happened in the background?

Applied ML Prototypes (AMPs) are packaged projects that include execution steps that CML can understand and perform. The owner of a project defines .project-metadata.yaml in their project repository to instruct CML on which steps should be performed (run code, schedule a job, deploy a model, etc.); a sketch of such a file follows the list below. In the case of Hugging Face Spaces, this metadata is injected on the fly by CML as the project is being spun up. The two steps that are executed with Hugging Face Space AMPs are the following:

  1. Install the dependencies that a given HF Space requires.
  2. Deploy an Application (gradio or streamlit) if one is present in the HF Space.
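To make this concrete, here is an illustrative sketch of a hand-written .project-metadata.yaml. The field names follow Cloudera's public AMP examples, but all names and values here are hypothetical; consult the AMP documentation for the authoritative schema.

    # Illustrative AMP metadata sketch -- hypothetical names and values.
    name: Hello World AMP
    description: Installs dependencies, then serves a small gradio application.
    specification_version: 1.0
    prototype_version: 1.0

    environment_variables:
      MODEL_CACHE_DIR:                # made-up variable for illustration
        default: /tmp/models
        description: Where the application caches downloaded models.

    tasks:
      - type: run_session             # step 1: install dependencies
        name: Install dependencies
        script: scripts/install_dependencies.py
        kernel: python3
        cpu: 1
        memory: 2

      - type: start_application       # step 2: deploy the application
        name: Application to serve UI
        subdomain: hello-app
        script: app.py
        kernel: python3
        cpu: 1
        memory: 2

For a Hugging Face Space, CML generates the equivalent of the two tasks above on the fly instead of reading them from the repository.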

Once a Hugging Face AMP is launched in CML, users can treat it like any other local project: reviewing the code, making changes, breaking things, and learning as they go. The goal is to accelerate innovation in the enterprise and adapt open projects to meet the requirements of specific customer use cases.

Enable Deployment of External HF Spaces

While Hugging Face Spaces AMPs are a Tech Preview feature, deployment of external Spaces is gated behind a setting that needs to be enabled in the ML Workspace. You will need the MLAdmin role in the workspace, or you can work with your workspace administrator, to complete the following steps:

  1. Inside the ML Workspace, navigate to Site Administration.
  2. Go to the Settings tab.
  3. In the Feature Flags section, check the box next to Allow users to deploy external Hugging Face Space.
  4. This setting takes effect immediately.


Once this setting is enabled, users can not only deploy Hugging Face Spaces AMPs from the existing catalog but also point to any Hugging Face Space and start working with it as a project within CML. In Tech Preview, only gradio and streamlit applications are supported.
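Before pointing CML at an external Space, it can help to review the Space's code and requirements. One optional way to do this, entirely outside of CML, is to pull the Space's repository locally with the huggingface_hub library (assuming it is installed); the Space id below is a placeholder.

    from huggingface_hub import snapshot_download

    # Download the Space's repository (code, requirements, assets) into
    # the local Hugging Face cache so it can be reviewed before importing.
    # "owner/space-name" is a placeholder -- substitute a real Space id.
    local_dir = snapshot_download(repo_id="owner/space-name", repo_type="space")
    print(f"Space files downloaded to: {local_dir}")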

Iterate Faster with CML

At Cloudera we strive to give customers options, from deployment on-premises or in the cloud to using external or internally hosted Large Language Models. The introduction of Hugging Face Spaces integration in CML will significantly accelerate customers' machine learning projects, especially those focused on Generative AI.
