What's New @ Cloudera

Accelerate Data Scientist Productivity with Cloudera Copilot for Cloudera Machine Learning – Now Generally Available

We’re pleased to announce that Cloudera Copilot for Cloudera AI Workbench is now generally available, bringing AI-powered productivity to the data science and machine learning development workflow. Cloudera Copilot is designed to accelerate development by generating code snippets, offering real-time assistance with troubleshooting, and enabling collaboration by answering questions about the project. These capabilities streamline the workflow from data exploration to model training, helping data scientists work more efficiently and reducing time spent on routine coding and debugging.

Copilots have two main architectural components: the tooling and the LLM. The tooling integrates with the developer’s IDE to deliver intelligent assistance in the coding environment, offering features like code generation, inline suggestions, and contextual guidance to streamline development. The LLM serves as the brain, powering these capabilities by interpreting user input and generating responses from its trained knowledge. In Cloudera Copilot, the tooling is embedded directly within Cloudera ML Runtimes with JupyterLab, while the model is configurable by administrators to fit organizational needs. Deploying the LLM with the Cloudera AI Inference service keeps the setup completely private: no data, such as code or intellectual property, leaves the developer’s environment, making it suitable for even the most sensitive projects.
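
To make the private-deployment point concrete, here is a minimal sketch of how a model hosted on the Cloudera AI Inference service could be queried from a notebook session. It assumes the service exposes an OpenAI-compatible chat endpoint; the environment variable names, endpoint URL, and model name are illustrative placeholders, not Cloudera-documented values.

# Minimal sketch: querying a privately hosted model backing the assistant.
# Assumes an OpenAI-compatible endpoint; URL, token, and model name are
# hypothetical placeholders supplied by your administrator.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["AI_INFERENCE_ENDPOINT"],  # e.g. the inference service's /v1 URL (assumed)
    api_key=os.environ["CDP_TOKEN"],               # auth token for the private endpoint (assumed)
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",            # whichever model the administrator deployed
    messages=[
        {"role": "system", "content": "You are a coding assistant for data science notebooks."},
        {"role": "user", "content": "Write a pandas snippet that drops rows with missing values."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
# Because the endpoint runs inside your own environment, the prompt and the
# generated code never leave it.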

To get started with Cloudera Copilot, administrators simply configure the LLM that powers the assistant, hosted on the Cloudera AI Inference service or Amazon Bedrock. Cloudera Copilot is available in JupyterLab environments running ML Runtime 2024.10.1 or later.
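
For the Amazon Bedrock option, an administrator might want to confirm that the chosen foundation model is reachable from the environment before wiring it into Copilot. The sketch below uses the standard boto3 Converse API; the model ID and region are illustrative, and the exact Copilot configuration steps are done in the Cloudera administration UI rather than in code.

# Sketch: sanity-checking a Bedrock model before configuring it as the
# Copilot backend. Model ID and region are illustrative placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    messages=[
        {"role": "user", "content": [{"text": "Reply with 'ready' if this model is reachable."}]},
    ],
    inferenceConfig={"maxTokens": 50},
)

print(response["output"]["message"]["content"][0]["text"])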

To learn more, visit the Cloudera Copilot documentation.