Support Questions

Find answers, ask questions, and share your expertise

Runtime Addon Upload Failing - "Failed saving addon content to NFS"

New Member

I'm running into an issue with custom runtime addon uploads on CML 2.0.44-h1-b4 (Kubernetes). Wondering if anyone else has seen this or can confirm my fix is safe.

**Problem:**
When uploading a runtime addon via the API endpoint `/api/v2/runtimeaddons/custom`, the addon shows as "Initialized" in the UI, then fails after ~5 minutes with no visible error. The API logs show:

```
ERROR api.runtimeaddons Finish writing Custom Runtime Addon: failed.
ERROR api.runtimeaddons Finish adding new custom runtime addon failed: Failed saving addon content to NFS.
```

**What I Found:**
After digging through the deployment configs, the API pod is missing the `projects-pvc` volume mount. Other pods like `ds-vfs` and `s2i-client` have it mounted at `/projects`, but the API deployment doesn't.

```bash
# PVC shows it's used by ds-vfs and s2i-client, but NOT api
kubectl describe pvc projects-pvc

# API deployment has no projects-pvc volume
kubectl get deploy/api -o yaml | grep projects
# (returns nothing)
```
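In case it helps anyone reproduce this, here's a quick way to compare the mounts side by side (a sketch; the deployment names are the ones from my cluster, yours may differ):

```bash
# Print container volume mounts for each deployment
# (ds-vfs and s2i-client should list /projects; api does not)
for d in api ds-vfs s2i-client; do
  echo "== $d =="
  kubectl get deploy/"$d" \
    -o jsonpath='{.spec.template.spec.containers[0].volumeMounts[*].mountPath}'
  echo
done
```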

**Proposed Fix:**
Add the volume mount to the API deployment:

```bash
kubectl -n [namespace] patch deployment api --type='json' -p='[
{
"op": "add",
"path": "/spec/template/spec/volumes/-",
"value": {"name": "projects-claim", "persistentVolumeClaim": {"claimName": "projects-pvc"}}
},
{
"op": "add",
"path": "/spec/template/spec/containers/0/volumeMounts/-",
"value": {"name": "projects-claim", "mountPath": "/projects"}
}
]'
```
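To confirm the patch actually landed, the new volume and mount should show up in the rendered spec (a sketch, assuming the patch above applied cleanly and the `projects-claim` name was used):

```bash
# Show the added volume on the api deployment (empty output = patch not applied)
kubectl -n [namespace] get deploy/api \
  -o jsonpath='{.spec.template.spec.volumes[?(@.name=="projects-claim")]}'

# Show the added volumeMount in the first container
kubectl -n [namespace] get deploy/api \
  -o jsonpath='{.spec.template.spec.containers[0].volumeMounts[?(@.name=="projects-claim")]}'
```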

**Questions:**
1. Has anyone else hit this issue?
2. Is this mount supposed to be there by default?
3. Any risks to patching this in production?
4. Is this a known bug or did our installation miss something?

The addon metadata DOES save to the database, but the tarball content fails to write. My tarball and metadata.json are valid per the docs.
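One way to rule out the NFS share itself (rather than the missing mount) is to test a write from a pod that already has `/projects` mounted, e.g. `ds-vfs` per the `describe pvc` output above. A sketch:

```bash
# Sanity-check that the NFS-backed volume is writable from a pod that mounts it
kubectl exec -it deploy/ds-vfs -- sh -c \
  'touch /projects/.nfs-write-test && rm /projects/.nfs-write-test && echo "NFS writable"'
```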

Any insights appreciated before I patch our production cluster!

Thanks!

3 Replies

Community Manager

@DevOpsWorld Welcome to the Cloudera Community!

To help you get the best possible solution, I have tagged our CDP experts @venkatsambath @abdulpasithali @upadhyayk04  who may be able to assist you further.

Please keep us updated on your post, and we hope you find a satisfactory solution to your query.


Regards,

Diana Torres,
Senior Community Moderator


Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.
Learn more about the Cloudera Community:

Master Collaborator

Hello,

That error is not normal; the `projects-pvc` mount should exist on the API deployment by default. This points to something external in your installation that Cloudera Support could identify through a case.

As for risks: the main one is that these deployments are managed by Cloudera, so a future update may overwrite your patch.

Your steps look correct and should work, but a support case would still be the safer option. If you want to proceed anyway, apply the patch, roll out the pods, and then verify the mount:

```bash
kubectl exec -it deploy/api -- ls -lah /projects
kubectl exec -it deploy/api -- touch /projects/test.txt
```
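One more suggestion: before patching, save the current deployment spec so you can restore or diff it if a later update conflicts with your change. A sketch (the backup file name is just an example):

```bash
# Back up the current api deployment spec before patching
kubectl get deploy/api -o yaml > api-deploy-backup.yaml

# After patching, restart the deployment so pods pick up the new volume,
# then wait for the rollout to finish
kubectl rollout restart deploy/api
kubectl rollout status deploy/api
```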

Regards,
Andrés Fallas
--
Was your question answered? Please take some time to click on "Accept as Solution" below this post.
If you find a reply useful, say thanks by clicking on the thumbs-up button.

Community Manager

@DevOpsWorld Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.


Regards,

Diana Torres,
Senior Community Moderator

