Member since: 11-17-2021
Posts: 1128
Kudos Received: 257
Solutions: 29
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2935 | 11-05-2025 10:13 AM |
| | 483 | 10-16-2025 02:45 PM |
| | 1042 | 10-06-2025 01:01 PM |
| | 822 | 09-24-2025 01:51 PM |
| | 629 | 08-04-2025 04:17 PM |
04-09-2024
01:48 PM
1 Kudo
What headers do you set in Postman?
04-09-2024
10:51 AM
@VijendarD Welcome to the Cloudera Community! To help you get the best possible solution, I have reached out to you via DM with next steps. Thanks!
04-09-2024
05:23 AM
Hi Matt, we configured 24 GB for both the Xms and Xmx parameters. For now memory usage is normal, with no OOM errors. Thank you, Fortunato
04-08-2024
09:14 AM
@sajidkhan Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
04-08-2024
08:43 AM
@darkcoffeelake Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
04-04-2024
04:23 AM
1 Kudo
Good Morning -- any takers on helping answer this question? I would be super appreciative.
04-03-2024
01:09 AM
@rizalt I hit the same bug when deploying Ambari with the ODP stack. After debugging, I found the bug in the ambari-agent code that runs when starting Hive Server2.

Problem: the code below captures the {out} parameter to obtain the HDFS path, but it returns an invalid HDFS URI:

```python
163 metatool_cmd = format("hive --config {conf_dir} --service metatool")
164 cmd = as_user(format("{metatool_cmd} -listFSRoot", env={'PATH': execution_path}), params.hive_user) \
165     + format(" 2>/dev/null | grep hdfs:// | cut -f1,2,3 -d '/' | grep -v '{fs_root}' | head -1")
166 code, out = shell.call(cmd)
```

The agent log shows what the pipeline actually captured:

```
2024-04-03 07:40:48,317 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/odp/current/hive-server2/conf/ --service metatool -listFSRoot' 2>/dev/null | grep hdfs:// | cut -f1,2,3 -d '/' | grep -v 'hdfs://vm-ambari.internal.cloudapp.net:8020' | head -1'] {}
2024-04-03 07:40:58,721 - call returned (0, '07:40:53.268 [main] DEBUG org.apache.hadoop.fs.FileSystem - hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from ')
```

To fix it:

Step 1: Edit row 170 of "/var/lib/ambari-agent/cache/stacks/ODP/1.0/services/HIVE/package/scripts/hive_service.py" as below. I hard-coded the old path to a valid URI, which bypasses the broken updateLocation argument:

```python
# cmd = format("{metatool_cmd} -updateLocation {fs_root} {out}")
cmd = format("{metatool_cmd} -updateLocation {fs_root} hdfs://oldpath")
```

Step 2: Restart the Ambari agent:

```bash
sudo ambari-agent restart
```

Step 3: Restart Hive Server2 in Ambari. The service started successfully.
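For context, the underlying failure mode is that `grep hdfs://` also matches DEBUG log lines that merely contain the string "hdfs://", so {out} can end up holding a log fragment instead of a filesystem root. A minimal sketch of a defensive check, using a hypothetical helper `is_valid_hdfs_root` that is not part of the ODP scripts, showing how such fragments could be rejected before they ever reach -updateLocation:

```python
from urllib.parse import urlparse

def is_valid_hdfs_root(out: str) -> bool:
    """Return True only if `out` looks like a bare hdfs://host:port root.

    Guards against DEBUG log lines that merely contain 'hdfs://',
    which is exactly what the grep pipeline captured above.
    """
    candidate = out.strip()
    parsed = urlparse(candidate)
    # A real FS root has the hdfs scheme and a host[:port] authority,
    # and contains no embedded whitespace from log text.
    return parsed.scheme == "hdfs" and bool(parsed.netloc) and " " not in candidate

# The DEBUG fragment from the agent log fails the check...
log_fragment = ("07:40:53.268 [main] DEBUG org.apache.hadoop.fs.FileSystem"
                " - hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from ")
assert not is_valid_hdfs_root(log_fragment)
# ...while a genuine FS root passes.
assert is_valid_hdfs_root("hdfs://vm-ambari.internal.cloudapp.net:8020")
```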
04-03-2024
12:11 AM
1 Kudo
Thanks for the reply. I found the answer myself: the table name must be in all capital letters, and there is no need to add the schema or catalog name.
04-02-2024
07:26 PM
@Deepak_Unravel This is a regression from the Python 3 upgrade, or more specifically, something that worked by mistake in the past. Before the py3 migration, Avro was more flexible with its schemas and ignored any extra fields: if a value didn't exist in the schema, it could have any value without invalidating the parcel. Now it expects the data to match the schema exactly, which is more accurate. During the py3 migration most fields were added, but it looks like we missed the conflicts field. A workaround for now is to remove the conflicts field from the parcel, since it doesn't declare any conflicts right now. Please remove the conflicts entry in the /opt/cloudera/parcels/UNRAVEL_SENSOR-1.0.4791002_cdh7.1_h3.1_s2.4/meta/parcel.json file on all the agents. After removing the conflicts entry you will be able to distribute the parcel.
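If editing the file by hand on every agent feels error-prone, here is a minimal sketch of the edit. This is my own illustration, not an Unravel-provided tool, and it assumes `conflicts` sits as a top-level key in parcel.json:

```python
import json

# Path from the post above; adjust the version string for your install.
PARCEL_JSON = ("/opt/cloudera/parcels/"
               "UNRAVEL_SENSOR-1.0.4791002_cdh7.1_h3.1_s2.4/meta/parcel.json")

with open(PARCEL_JSON) as f:
    meta = json.load(f)

# Drop the 'conflicts' entry that the stricter post-py3 schema
# validation rejects; the parcel declares no real conflicts anyway.
if meta.pop("conflicts", None) is not None:
    with open(PARCEL_JSON, "w") as f:
        json.dump(meta, f, indent=2)
    print("conflicts entry removed from", PARCEL_JSON)
else:
    print("no conflicts entry found in", PARCEL_JSON)
```

Run it on each agent, then retry distributing the parcel.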
04-02-2024
09:34 AM
Yes, the controller service was working before the AKS version upgrade.