Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 540 | 06-04-2025 11:36 PM |
| | 1076 | 03-23-2025 05:23 AM |
| | 556 | 03-17-2025 10:18 AM |
| | 2079 | 03-05-2025 01:34 PM |
| | 1300 | 03-03-2025 01:09 PM |
11-19-2019
12:57 PM
3 Kudos
@mike_bronson7 Here is a good compromise, assuming you have enough disk space. Change to the data directory, move the current directory aside, recreate it with the same ownership and permissions, and compare:

```bash
# cd /opt/confluent/zookeeper/data
# mv version-2 version-2_bck
# mkdir version-2
# chown user:group version-2
# ls -al version-2 version-2_bck
```

Now you can restart ZooKeeper.
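If you want to confirm ZooKeeper is healthy after the restart, one quick check is the four-letter-word command below; a sketch assuming those commands are enabled on your build and the client listens on the default port 2181:

```bash
# a reply of "imok" means the server is up and serving requests
echo ruok | nc localhost 2181
```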
11-19-2019
12:22 PM
2 Kudos
@mike_bronson7 Yes, in fact, a better solution is to mv all the contents of /opt/confluent/zookeeper/data/version-2 (usually files like log.1 ... log.18263; there can be many, which is why it's easier to move than delete). But remember to recreate the version-2 directory with the same user:group ownership and permissions, so take note of those details first 🙂 HTH
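A minimal sketch of that move, assuming GNU coreutils so the --reference flags are available to clone the ownership and mode from the backup:

```bash
cd /opt/confluent/zookeeper/data
mv version-2 version-2_bck                    # keep the old contents as a backup
mkdir version-2
chown --reference=version-2_bck version-2     # copy user:group from the backup
chmod --reference=version-2_bck version-2     # copy the permission bits too
```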
11-19-2019
12:11 PM
@asmarz When you delete components, at times you leave the database in an inconsistent state, so you will need to log onto the Ambari database to do some manual cleaning. There are 3 important tables that hold the component status:

- hostcomponentstate
- hostcomponentdesiredstate
- servicecomponentdesiredstate

Run the targeted select below on the component name to check whether your Oozie components are still registered:

```
[root@nakuru ~]# psql ambari ambari
Password for user ambari:   #default password is "bigdata"
psql (8.4.20)
Type "help" for help.

ambari=> select distinct component_name from hostcomponentstate;
```

If you see the Oozie components in the database, then you will have to delete those entries. I have used 'OOZIE_SERVER' and 'OOZIE_CLIENT' in the example below.

Delete the orphaned Oozie entries:

```
[root@nakuru ~]# psql ambari ambari
Password for user ambari:   #default password is "bigdata"
psql (8.4.20)
Type "help" for help.

ambari=> delete from hostcomponentstate where component_name='OOZIE_CLIENT';
DELETE 1
ambari=> delete from hostcomponentstate where component_name='OOZIE_SERVER';
DELETE 1
ambari=> delete from hostcomponentdesiredstate where component_name='OOZIE_CLIENT';
DELETE 1
ambari=> delete from hostcomponentdesiredstate where component_name='OOZIE_SERVER';
DELETE 1
ambari=> delete from servicecomponentdesiredstate where component_name='OOZIE_CLIENT';
DELETE 1
ambari=> delete from servicecomponentdesiredstate where component_name='OOZIE_SERVER';
DELETE 1
ambari=> commit;
WARNING:  there is no transaction in progress
COMMIT
ambari=> \q
[root@nakuru ~]#
```

You can then stop all the components in the cluster and restart the database. You might also need to restart some stale configurations that were orphaned by the removal of the Oozie components. HTH
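Note the "no transaction in progress" warning above: psql autocommits each statement by default, so those deletes were already final. A safer pattern, assuming the same tables and component names, is to paste an explicit transaction at the ambari=> prompt so you can review before making it permanent:

```
BEGIN;
delete from hostcomponentstate where component_name in ('OOZIE_CLIENT','OOZIE_SERVER');
delete from hostcomponentdesiredstate where component_name in ('OOZIE_CLIENT','OOZIE_SERVER');
delete from servicecomponentdesiredstate where component_name in ('OOZIE_CLIENT','OOZIE_SERVER');
-- review what is left; run ROLLBACK; instead of COMMIT; if anything looks wrong
select distinct component_name from hostcomponentstate where component_name like 'OOZIE%';
COMMIT;
```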
11-19-2019
11:21 AM
2 Kudos
@mike_bronson7 Yes "Unable to load database on disk" is due to corruption also as a backup r # mv /opt/confluent/zookeeper/data/version-2 /tmp Then restart the zookeeper it should copy the snapshot from one of the healthy nodes in the quorum HTH
11-18-2019
09:08 PM
@divya_thaore Yes, please use the Linux CLI to navigate to those directories.
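For example, assuming the default HDP log location on the DataNode host:

```bash
cd /var/log/hadoop/hdfs
ls -lt | head    # most recently modified log files first
```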
11-18-2019
01:50 PM
1 Kudo
@mike_bronson7 I see the difference in the response of the curl:

```
HTTP/1.1 404 Not Found          [Before]
HTTP/1.1 405 Method Not Allowed [Now]
```

I have just tried on my cluster without the -i and -v flags, and it worked for me right now:

```bash
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"
```

I have also seen a case where it worked with only a -i:

```bash
curl -i -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"
```

Please revert.
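Before retrying the DELETE, it may help to confirm the component is still registered, with a GET on the same endpoint (the earlier 404 suggests it may already be gone):

```bash
# 200 with component details means it still exists; 404 means there is nothing left to delete
curl -u admin:admin -H "X-Requested-By: ambari" "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"
```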
11-18-2019
12:54 PM
@fklezin I think your Zeppelin is not aware of the Python version. In the Zeppelin config at /usr/hdp/current/zeppelin-server/conf/interpreter.json, change the below (around line 30 in the config):

```json
"zeppelin.pyspark.python": {
  "type": "string",
  "name": "zeppelin.pyspark.python",
  "value": "python"
},
```

To:

```json
"zeppelin.pyspark.python": {
  "type": "string",
  "name": "zeppelin.pyspark.python",
  "value": "python3"
},
```

Make sure you have these values in Ambari UI --> Zeppelin --> Config --> Advanced zeppelin-env:

```bash
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=python3
```

Restart Zeppelin and retry.
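Before restarting, a quick sanity check that the edit took and that python3 resolves on the Zeppelin host (paths as above):

```bash
# the value shown should now be "python3"
grep -A3 '"zeppelin.pyspark.python"' /usr/hdp/current/zeppelin-server/conf/interpreter.json
# confirm python3 is actually on the PATH Zeppelin will use
which python3 && python3 --version
```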
11-18-2019
12:34 PM
@svasi Read through the documentation in that link and let me know!
11-18-2019
12:33 PM
@Kou_Bou That shouldn't be a problem, but before getting to the diagnostics, can you confirm you have diligently followed this: Prepare the Environment. Newbies always forget that every step is important, and hence things look complicated 🙂

Having said that, is it a single-node or multi-node cluster? In the logs I see something like host=ambari.server. I hope that's a pseudonymized value; otherwise your Ambari host should have an FQDN, the output of the Linux command:

```bash
$ hostname -f
```

I also see the error NetUtil.py:89 - SSLError: Failed to connect. That is due to the Python version. To resolve it, set verify=disable by editing the /etc/python/cert-verification.cfg file from:

```
[https]
verify=platform_default
```

To:

```
[https]
verify=disable
```

Can you also share these files?

- /etc/ambari-server/conf/ambari.properties
- /etc/ambari-agent/conf/ambari-agent.ini
- /etc/hosts

Please revert.
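After that edit, the usual next step is to restart the agent and watch whether the SSLError clears; standard Ambari agent paths, assuming defaults:

```bash
ambari-agent restart
# watch for a successful heartbeat instead of the NetUtil.py SSLError
tail -f /var/log/ambari-agent/ambari-agent.log
```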
11-18-2019
11:57 AM
@divya_thaore The default location is /var/log/hadoop/hdfs, and here you will get:

- hadoop-hdfs-datanode-<FQDN>.log
- hadoop-hdfs-datanode-<FQDN>.out
- hadoop-hdfs-secondarynamenode-<FQDN>.log
- hadoop-hdfs-secondarynamenode-<FQDN>.out

It's also important to check the host's system messages in /var/log/messages. Those files should contain the clue; the .out files are informative, but look at the .log files, most probably the last few entries.
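A quick way to pull recent errors out of the DataNode log, assuming the default location above:

```bash
# last 200 lines, filtered to the usual failure keywords
tail -n 200 "/var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname -f).log" | grep -iE "error|exception|fatal"
```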