- Member since: 07-09-2020
- Posts: 7
- Kudos Received: 0
- Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2544 | 09-28-2020 01:36 AM |
02-11-2021 04:43 AM
Hi @Aco Yes, you can create it manually. Check the ZooKeeper documentation for how to create those directories. This happens because of insufficient permissions. You need to create the directories (znodes) from the ZooKeeper CLI and set the appropriate permissions; the rest of the data and file creation will be taken care of by ZooKeeper. I will see if I can find the exact steps to share with you.
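Until I find the exact steps, here is a rough sketch of what that ZooKeeper CLI session usually looks like. The znode path (/example-service) and the sasl:yarn principal below are placeholders, not the real values for your service, so substitute the path and principal your service actually expects:

```bash
# Minimal sketch, assuming an HDP-style layout; /example-service and sasl:yarn are placeholders.
# On a Kerberized cluster, zkCli.sh also needs a JAAS config (e.g. via CLIENT_JVMFLAGS) before these calls.
ZK_CLI=/usr/hdp/current/zookeeper-client/bin/zkCli.sh
ZK_HOST=$(hostname -f):2181

# Create the missing parent znode; the service creates its own data under it later.
$ZK_CLI -server $ZK_HOST create /example-service ""

# Give the service principal full rights and leave everyone else read-only.
$ZK_CLI -server $ZK_HOST setAcl /example-service sasl:yarn:cdrwa,world:anyone:r

# Verify the node and its ACL.
$ZK_CLI -server $ZK_HOST getAcl /example-service
```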
11-01-2020 10:26 PM
Hi all, I'm not sure whether this issue is considered solved. In case it helps, here is how we fixed it. We hit the same error after removing several nodes from our Kerberized cluster (Ambari 2.7.4 and HDP 3.1.4):

```
$ /usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase
20/11/02 07:04:39 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
20/11/02 07:04:39 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
ats-hbase Failed : HTTP error code : 500
```

Following this thread, we carefully checked the YARN configuration to ensure that all the variables were correctly scaled to the available nodes. After that, we destroyed the YARN app:

```
$ yarn app -destroy ats-hbase
20/11/02 07:06:13 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
20/11/02 07:06:13 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
20/11/02 07:06:14 INFO client.ApiServiceClient: Successfully destroyed service ats-hbase

$ /usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase
20/11/02 07:06:19 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
20/11/02 07:06:19 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
Service ats-hbase not found
```

Then we restarted the whole YARN service from Ambari. Now everything is running fine:

```
$ /usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase
20/11/02 07:09:02 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
20/11/02 07:09:02 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
{"name":"ats-hbase","id":"application_1604297264331_0001","artifact":{"id":"/hdp/apps/3.1.4.0-315/hbase/rm2/hbase.tar.gz","type":"TARBALL"},"lifetime":-1,"components":[{"name":"master","dependencies":[],"artifact":{"id":"/hdp/apps/3.1.4.0-315/hbase/rm2/hbase.tar.gz","type":"TARBALL"},"resource":{"cpus":1,"memory":"4096","additional":{}},"state":"STABLE","configuration":{"properties":{"yarn.service.container-failure.retry.max":"10","yarn.service.framework.path":"/hdp/apps/3.1.4.0-315/yarn/rm2/service-dep.tar.gz"},"env":{"HBASE_LOG_PREFIX":"hbase-$HBASE_IDENT_STRING-master-$HOSTNAME","HBASE_LOGFILE":"$HBASE_LOG_PREFIX.log","HBASE_MASTER_OPTS":"-Xms3276m -Xmx3276m -Djava.security.auth.login.config=/usr/hdp/3.1.4.0-315/hadoop/conf/embedded-yarn-ats-hbase/yarn_hbase_master_jaas.conf", [...]
```
09-28-2020 01:36 AM
Hello again, Finally, we decided to remove the audit_logs collection from Solr and recreate it:

```
curl --negotiate -u : "http://$(hostname -f):8886/solr/admin/collections?action=DELETE&name=audit_logs"
curl --negotiate -u : "http://$(hostname -f):8886/solr/admin/collections?action=CREATE&name=audit_logs&collection.configName=audit_logs&autoAddReplicas=false&nrtReplicas=2&pullReplicas=0&replicationFactor=2&maxShardsPerNode=4&numShards=2"
```

Doing this resolved our error with the document containing the bad input string:

```
java.lang.NumberFormatException: For input string: "t rue"
```

However, this ERROR still appears:

```
ERROR [ ] org.apache.solr.update.processor.DocExpirationUpdateProcessorFactory$DeleteExpiredDocsRunnable (DocExpirationUpdateProcessorFactory.java:431) - Runtime error in periodic deletion of expired docs: null
```

But I understand this only means that Solr has nothing "expired" to remove, right? Thank you very much. Cheers, Carles
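For anyone reproducing this, a quick way to confirm the recreated collection is healthy is to query the Solr Collections API; a minimal sketch, assuming the same Kerberized endpoint on port 8886 used above (adjust host, port, and collection name as needed):

```bash
# Minimal sketch: verify the recreated collection on the same Kerberized Solr endpoint.
SOLR_URL="http://$(hostname -f):8886/solr"

# List all collections; audit_logs should be present again.
curl --negotiate -u : "$SOLR_URL/admin/collections?action=LIST&wt=json"

# Show shard/replica state for audit_logs; every replica should report "active".
curl --negotiate -u : "$SOLR_URL/admin/collections?action=CLUSTERSTATUS&collection=audit_logs&wt=json"
```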