Member since 02-27-2020
173 Posts
41 Kudos Received
48 Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 360 | 11-29-2023 01:16 PM |
| 470 | 10-27-2023 04:29 PM |
| 539 | 07-07-2023 10:20 AM |
| 1348 | 03-21-2023 08:35 AM |
| 558 | 01-25-2023 08:50 PM |
03-06-2024
12:10 PM
Earlier this year, Cloudera Machine Learning (CML) added a new way to accelerate GenAI projects by tapping into Hugging Face Spaces and deploying them right inside CML with just a few clicks. With over 6,500 Spaces as of this writing, the Hugging Face community is still growing rapidly and provides a convenient platform for practitioners and organizations to share their work, from classical machine learning to the latest GenAI research. In this article you will learn how to enable and use this feature to accelerate your own ML projects.

The default Hugging Face Spaces AMP catalog is enabled for all CML Public Cloud workspaces starting from version 2.0.43-b208. To let users launch external Hugging Face AMPs, additional steps are necessary (see the end of this article).

Steps to Deploy a Hugging Face Space AMP

Let's dive right in and see how simple it is to deploy a Hugging Face AMP:

1. Click on AMPs in the left sidebar of your ML Workspace. If you don't see this, AMPs have not been enabled by your administrator.
2. Click on the Hugging Face tab to narrow the view to HF AMPs only.
3. On the "Can you run it? LLM version" card, click Deploy. This particular HF Space is focused on answering whether or not a given LLM can run on a particular hardware spec.
4. Read through the details of the AMP and the disclosure message. You can also navigate to the HF Space's official GitHub if you wish.
5. Click Configure & Deploy.
6. In the next screen, note the environment variables that can be passed down to the project. You can leave these at their default values here.
7. Leave the rest of the settings unchanged and click Launch Project.

At this point CML kicks off the steps required to launch this Hugging Face Space, namely installing dependencies and launching an application. After the steps are completed, the AMP will be fully deployed. Clicking on Applications in the left sidebar, you can see a Gradio app deployed.
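The "Can you run it?" Space does the hardware check itself; as a rough illustration only (this is not the Space's actual code), the core of such an estimate can be sketched in a few lines. The 2-bytes-per-parameter figure assumes fp16 weights, and the 20% overhead factor is an assumed rule of thumb for activations and KV cache:

```python
# Illustrative sketch of an "LLM fits in VRAM?" check, assuming fp16 weights
# (~2 bytes per parameter) plus ~20% overhead for activations / KV cache.
def fits_in_vram(n_params_billion: float, vram_gb: float,
                 bytes_per_param: int = 2, overhead: float = 1.2) -> bool:
    needed_gb = n_params_billion * bytes_per_param * overhead
    return needed_gb <= vram_gb

# A 7B model in fp16 needs roughly 16.8 GB, so it fits on a 24 GB GPU;
# a 70B model (~168 GB) does not.
print(fits_in_vram(7, 24))   # True
print(fits_in_vram(70, 24))  # False
```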
Clicking on the app's card (Application to serve UI) will take you to the app's UI, opened in a new tab of your web browser.

What Happened in the Background?

Applied ML Prototypes (AMPs) are packaged projects that include execution steps that CML can understand and perform. The owner of a project defines .project-metadata.yaml in their project repository to instruct CML on which steps should be performed (run code, schedule a job, deploy a model, etc.). In the case of Hugging Face Spaces, this metadata is injected on the fly by CML as the project is being spun up. The two steps executed for Hugging Face Space AMPs are the following:

1. Install the dependencies that the given HF Space requires.
2. Deploy an application (Gradio or Streamlit) if one is present in the HF Space.

Once a Hugging Face AMP is launched in CML, users can treat it as any other local project: reviewing the code, making changes, breaking things, and learning as they go. The goal is to accelerate innovation in the enterprise and adapt open projects to meet the requirements of specific customer use cases.

Enable Deployment of External HF Spaces

While Hugging Face Spaces AMPs are a Tech Preview feature, a setting needs to be enabled in the ML Workspace to make external Spaces available to users. For this you will need the MLAdmin role in the workspace, or work with your workspace administrator through the following steps:

1. Inside the ML Workspace, navigate to Site Administration.
2. Go to the Settings tab.
3. In the Feature Flags section, check the box next to "Allow users to deploy external Hugging Face Space". This setting takes effect immediately.

Once this setting is enabled, users can not only deploy Hugging Face Spaces AMPs from the existing catalog, but also point to any Hugging Face Space and start working with it as a project within CML. In Tech Preview this supports Gradio and Streamlit applications only.
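To make the two injected steps concrete, here is a simplified sketch of what a .project-metadata.yaml covering them might look like. The field names follow the general shape of the AMP specification, but the script names, resource values, and subdomain here are hypothetical placeholders; consult the CML AMP documentation for the exact schema:

```yaml
# Simplified, illustrative .project-metadata.yaml sketch (not the exact
# metadata CML injects); script names and resource values are placeholders.
name: Example Hugging Face Space AMP
description: Deploys a Hugging Face Space as a CML application

tasks:
  - type: run_session            # Step 1: install the Space's dependencies
    name: Install dependencies
    script: install-dependencies.py   # hypothetical script name
    kernel: python3
    cpu: 2
    memory: 4

  - type: start_application      # Step 2: deploy the Gradio/Streamlit app
    name: Application to serve UI
    subdomain: hf-space               # hypothetical subdomain
    script: app.py                    # the Space's UI entry point
    cpu: 2
    memory: 8
```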
Iterate Faster with CML

At Cloudera we strive to give customers options, from deployment models on-prem or in the cloud to using external or internally hosted Large Language Models. The introduction of Hugging Face Spaces integration in CML will significantly accelerate customers' Machine Learning projects, especially those focused on Generative AI.
12-11-2023
02:08 AM
Hey there, I successfully added a banner to Hue, as described in the documentation. What unfortunately does not work is setting a background color, but I'm still working on it. Regards, Timo
11-02-2023
04:01 PM
@DianaTorres, yes, thank you! @aakulov, thank you for your answer! Andrea
07-09-2023
06:36 AM
I find it very interesting and useful! Thank you and peace to all!
03-21-2023
10:16 AM
1 Kudo
Thank you for your help! After I changed the Java version, the service started normally; the Java version I had been using was too high for my installed CDH. Thank you for your reply to my post!
02-23-2023
01:06 PM
I would suggest working with Cloudera support on this, as they would be best suited for analyzing logs and suggesting next steps.
01-25-2023
08:50 PM
There is not enough detail here to be able to provide any kind of answer. Please open a Cloudera Support case and upload the logs to that case in order to get the best solution for the issue. Regards, Alex
08-24-2022
12:36 AM
I'm finally using the binaries in the parcel dir: # sudo -u kudu /opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/bin/kudu master list Master1,Master2,Master3
05-23-2022
03:26 PM
@yagoaparecidoti Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!