Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 949 | 06-04-2025 11:36 PM |
| | 1552 | 03-23-2025 05:23 AM |
| | 775 | 03-17-2025 10:18 AM |
| | 2786 | 03-05-2025 01:34 PM |
| | 1835 | 03-03-2025 01:09 PM |
01-29-2019
01:50 AM
Thanks again! I believe I found my issue: the repos were not complete and accurate on my Ubuntu 18.04 builds, so I copied the repos from my Xenial 16.04 box, replaced xenial with bionic, and was then able to install the Kerberos client after updating.
Here is my final repo list for Ubuntu 18.04:
deb http://us.archive.ubuntu.com/ubuntu/ bionic main restricted
deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
deb http://us.archive.ubuntu.com/ubuntu/ bionic universe
deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ bionic multiverse
deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates multiverse
deb http://us.archive.ubuntu.com/ubuntu/ bionic-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted
deb http://security.ubuntu.com/ubuntu bionic-security universe
deb http://security.ubuntu.com/ubuntu bionic-security multiverse
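The codename swap described above can be scripted. A minimal sketch, working on a local copy of the file rather than /etc/apt/sources.list directly (the sample lines and file names here are illustrative):

```shell
# Hedged sketch: convert copied 16.04 (xenial) repo lines to 18.04 (bionic).
# Working on a local copy; the real target would be /etc/apt/sources.list.
cat > sources.list.copied <<'EOF'
deb http://us.archive.ubuntu.com/ubuntu/ xenial main restricted
deb http://security.ubuntu.com/ubuntu xenial-security main restricted
EOF

# Replace every occurrence of the old release codename with the new one.
sed 's/xenial/bionic/g' sources.list.copied > sources.list.bionic
cat sources.list.bionic

# After verifying the output, you would install it and update:
#   sudo cp sources.list.bionic /etc/apt/sources.list
#   sudo apt-get update && sudo apt-get install krb5-user
```

The commented apt-get steps are left manual on purpose, so the rewritten list can be reviewed before it replaces the live one.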
01-28-2019
11:57 AM
1 Kudo
@Michael Bronson If you have exhausted all other avenues, YES.
Step 1: Check and compare the /usr/hdp/current/kafka-broker symlinks.
Step 2: Download both envs as backups from the problematic and the functioning cluster. Upload the functioning cluster's env to the problematic one (you have a backup), then start Kafka through Ambari.
Step 3: sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg
Step 4: Lastly, if the above steps don't remedy the issue, remove and re-install the ambari-agent, and remember to manually point it to the correct Ambari server in ambari-agent.ini.
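Step 3's sed edit (note the space between the expression and the target file) can be sanity-checked on a local copy first. A sketch, with the sample file contents assumed for illustration:

```shell
# Hedged sketch of Step 3: disable Python certificate verification.
# Practice on a local copy before touching /etc/python/cert-verification.cfg.
cat > cert-verification.cfg <<'EOF'
[https]
verify=platform_default
EOF

# In-place substitution, exactly as in Step 3 but against the copy.
sed -i 's/verify=platform_default/verify=disable/' cert-verification.cfg
grep verify cert-verification.cfg
```

Only once the copy shows verify=disable would you run the same sed against the real /etc/python/cert-verification.cfg.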
01-25-2019
07:47 PM
Thanks Geoffrey. I copied the backup of ambari.properties to the expected location, ran the upgrade command again, and it worked this time.
01-28-2019
08:50 AM
@Bhushan Kandalkar Good that it worked out, but you shouldn't have omitted the information about the architecture, i.e. the load balancer; such info is critical in the analysis :-) Happy hadooping
01-26-2019
01:38 AM
@Mahendiran Palani Samy Any updates? If the answer resolved your issue, please click "Accept" so that the thread is marked as closed and can be referenced by other members hitting similar errors.
01-24-2019
04:07 PM
@Lokesh Mukku Good to know it has given you a better understanding. If this answer addressed your question, please take a moment to log in and click "Accept" on the answer. That helps Community users find the solution quickly for these kinds of errors. Happy hadooping !!!!!
01-22-2019
12:23 PM
Thank you for your fast answer! Indeed, it works after tweaking Zeppelin's Spark interpreter parameters: changing master: yarn-cluster to master: yarn and setting spark.submit.deployMode: cluster.
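For reference, the resulting interpreter properties would look roughly like this (a sketch of the two relevant entries as they appear in Zeppelin's Spark interpreter settings, not a full interpreter config):

```properties
# Zeppelin Spark interpreter properties after the change:
# yarn-cluster as a master value is replaced by master=yarn plus an explicit deploy mode.
master=yarn
spark.submit.deployMode=cluster
```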
02-04-2019
04:11 PM
@Geoffrey Shelton Okot Thank you, you are right, the problem really was in ZooKeeper's ACLs. I copied everything in the "ZooKeeper directory" from the Test cluster to the Dev cluster and that helped. But I don't know exactly which permission affected it. Is there a way to list all ACL permissions in ZooKeeper? I would like to compare the ACLs from both clusters.
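There is no single built-in "dump all ACLs" command, but getAcl can be applied recursively from the ZooKeeper CLI. A sketch, assuming the HDP zookeeper-client wrapper and a localhost server address (adjust ZKCLI for your install):

```shell
# Hedged sketch: print the ACL of every znode by recursing with the ZooKeeper CLI.
# ZKCLI and the server address are assumptions; HDP ships a zookeeper-client wrapper.
ZKCLI=${ZKCLI:-"zookeeper-client -server localhost:2181"}

list_acls() {
  local path="$1"
  echo "=== $path ==="
  $ZKCLI getAcl "$path" 2>/dev/null
  # The CLI's ls prints the children as a bracketed list on its last line.
  local children
  children=$($ZKCLI ls "$path" 2>/dev/null | tail -1 | tr -d '[],')
  local c
  for c in $children; do
    if [ "$path" = "/" ]; then list_acls "/$c"; else list_acls "$path/$c"; fi
  done
}

# Usage: run on each cluster, then diff the two outputs:
#   list_acls / > test_cluster_acls.txt
```

Dumping both clusters to files and diffing them should surface exactly which znode's ACL differs.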
01-12-2019
07:41 PM
1 Kudo
@Divya Thaore If you are on HDP 2.6.x, then log in using the credentials root / hadoop and you should get the prompt. Did you run this command after ambari-admin-password-reset? HTH
01-10-2019
08:45 AM
You are right, it is just a label issue in the HDF mpack extension.