Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 616 | 06-04-2025 11:36 PM |
|  | 1182 | 03-23-2025 05:23 AM |
|  | 585 | 03-17-2025 10:18 AM |
|  | 2192 | 03-05-2025 01:34 PM |
|  | 1378 | 03-03-2025 01:09 PM |
01-28-2019
11:57 AM
1 Kudo
@Michael Bronson If you have exhausted all other avenues, YES.
Step 1: Check and compare the /usr/hdp/current/kafka-broker symlinks on both clusters.
Step 2: Download the kafka-env configs from both the problematic and the functioning cluster as backups, upload the functioning cluster's env to the problematic one (you have a backup to fall back on), then start Kafka through Ambari.
Step 3: Relax Python certificate verification: sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg
Step 4: Lastly, if the above steps don't remedy the issue, remove and re-install the ambari-agent and remember to manually point it to the correct Ambari server in ambari-agent.ini.
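A minimal shell sketch of those steps, assuming an HDP layout and yum-based hosts; the Ambari server hostname below is a placeholder, and Step 2 (downloading/uploading the kafka-env configs) is done through the Ambari UI rather than the shell:

```bash
# Step 1: compare this symlink between a working broker and the problematic one
ls -l /usr/hdp/current/kafka-broker

# Step 3: relax Python certificate verification (note the space before the file path)
sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg

# Step 4: if nothing else works, re-install the agent and point it at the correct Ambari server
yum remove -y ambari-agent
yum install -y ambari-agent
# set hostname= under [server] in ambari-agent.ini to your Ambari host (placeholder below)
sed -i 's/^hostname=.*/hostname=ambari.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start
```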
01-25-2019
07:47 PM
Thanks Geoffrey. I copied the backup of ambari.properties to the expected location and ran the upgrade command again, and it worked this time.
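For anyone following along, a hedged sketch of that restore; the backup file location is hypothetical (use wherever you kept your copy), while the target path and the ambari-server upgrade command are the standard ones:

```bash
# Restore the backed-up ambari.properties (source path is hypothetical)
cp /root/ambari.properties.backup /etc/ambari-server/conf/ambari.properties

# Re-run the Ambari Server upgrade
ambari-server upgrade
```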
01-28-2019
08:50 AM
@Bhushan Kandalkar Good that it worked out, but you shouldn't have omitted the information about the architecture, i.e. the load balancer; such info is critical in the analysis :-) Happy hadooping
01-26-2019
01:38 AM
@Mahendiran Palani Samy Any updates? If the answer resolved your issue, please click "Accept" so that the thread is marked as closed and can be referenced by other members who hit similar errors.
01-24-2019
04:07 PM
@Lokesh Mukku Good to know it has given you a better understanding. If this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer. That would be a great help to community users looking for a quick solution to these kinds of errors. Happy hadooping!
01-22-2019
12:23 PM
Thank you for your fast answer! Indeed, it works after tweaking Zeppelin's Spark interpreter parameters and changing master: yarn-cluster to master: yarn together with spark.submit.deployMode: cluster.
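For readers hitting the same issue, a minimal sketch of the Zeppelin Spark interpreter properties after the change described above (only these two values are touched; everything else stays as configured):

```
master                   yarn
spark.submit.deployMode  cluster
```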
02-04-2019
04:11 PM
@Geoffrey Shelton Okot Thank you, you are right, the problem was really in ZooKeeper's ACLs. I copied everything in the "ZooKeeper directory" from the Test cluster to the Dev cluster and that helped. But I don't know exactly which permission affected it. Is there a way to list all ACL permissions in ZooKeeper? I would like to compare all the ACLs from both clusters.
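Not part of the original exchange, but one hedged way to list ACLs for comparison is to loop over the znodes of interest with the stock zkCli.sh and call getAcl on each; the ZooKeeper host and the znode list below are placeholders:

```bash
#!/usr/bin/env bash
# Dump ACLs for a set of znodes so the output can be diffed against the other cluster.
ZK_HOST="zk1.example.com:2181"                         # placeholder host:port
ZKCLI=/usr/hdp/current/zookeeper-client/bin/zkCli.sh   # assumed HDP client path

for znode in / /brokers /config /consumers /admin; do
  echo "== ${znode}"
  "$ZKCLI" -server "$ZK_HOST" getAcl "$znode" 2>/dev/null
done
```

Running the same loop against both clusters and diffing the two outputs should show which ACLs differ.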
01-12-2019
07:41 PM
1 Kudo
@Divya Thaore If you are on HDP 2.6.x, then log in using the credentials root / hadoop and you should get the prompt. Did you run this command afterwards: ambari-admin-password-reset? HTH
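A hedged sketch of that sequence on the HDP 2.6.x sandbox; the hostname and SSH port below are the usual sandbox defaults and are assumptions here, not details from this thread:

```bash
# SSH into the sandbox as root (default sandbox password: hadoop)
ssh -p 2222 root@sandbox-hdp.example.com

# Then reset the Ambari admin password and follow the interactive prompt
ambari-admin-password-reset
```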
01-10-2019
08:45 AM
You are right, it is just a label issue in the HDF mpack extension.
01-29-2019
02:49 PM
Go to the ResourceManager UI from Ambari and click the Nodes link on the left side of the window. It lists all NodeManagers and the reason each one is flagged as unhealthy. The most common cause is that a disk space threshold has been reached. In that case, review the following parameters:

| Parameter | Default value | Description |
|---|---|---|
| yarn.nodemanager.disk-health-checker.min-healthy-disks | 0.25 | The minimum fraction of disks that must be healthy for the NodeManager to launch new containers. This applies to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs; if fewer healthy local-dirs (or log-dirs) are available, new containers will not be launched on this node. |
| yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage | 90.0 | The maximum percentage of disk space utilization allowed before a disk is marked as bad. Values can range from 0.0 to 100.0. If the value is greater than or equal to 100, the NodeManager checks for a full disk. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. |
| yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb | 0 | The minimum space (in MB) that must be available on a disk for it to be used. This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs. |

Finally, if the above steps do not reveal the actual problem, check the logs under /var/log/hadoop-yarn/yarn.
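A quick hedged sketch of how to check this on the unhealthy node; the directories assume HDP-style defaults for yarn.nodemanager.local-dirs and log-dirs, so substitute your actual values:

```bash
# Check utilization of the NodeManager local and log dirs (assumed HDP defaults)
df -h /hadoop/yarn/local /hadoop/yarn/log

# Look for disk health-checker messages in the NodeManager log
grep -i "dirs are bad" /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-*.log
```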