Member since: 03-09-2016
Posts: 91
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 499 | 10-26-2018 09:52 AM |
10-26-2018
09:52 AM
1 Kudo
@Sampath Kumar, I don't think you will get any error configuring HA in a Kerberized cluster. Just take care of the usual steps we execute while configuring NameNode HA; Ambari will take care of the Kerberos-related options.
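If you want to double-check the result once Ambari finishes the HA wizard, here is a minimal sketch; it assumes the NameNode IDs are nn1 and nn2 (as defined under dfs.ha.namenodes in hdfs-site.xml), and the keytab path and principal below are only examples to replace with your own:

```bash
# On a NameNode host, get a ticket first (check the real principal with: klist -kt <keytab>).
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster
# Ask each NameNode for its HA role; one should report "active" and the other "standby".
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```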
... View more
09-19-2018
07:22 PM
We have HDP 2.6.2.14 and Ambari 2.5.2.0 with Kafka 0.10.1.
... View more
06-08-2018
06:16 AM
@Vinicius, we had checked all the prerequisites properly, as well as everything in the ZooKeeper log and the Ambari agent.
... View more
06-06-2018
11:31 AM
Our workaround: we restarted all the servers and also restarted the Ambari agent on all nodes. We are still trying to troubleshoot the issue; I think this is a bug in the above-mentioned HDP version.
... View more
05-15-2018
12:23 PM
1 Kudo
Note: First create your topology file. Please find attached examples: knox-topology-file.xml and knox-ad-ldap-upgraded-docus.pdf. The PDF covers all the practical concepts and some of the theory.

Step 1: Install Knox on an edge node or on any node in the cluster.

Step 2: Start the Knox service from Ambari, and make sure your Ambari Server is already synced with LDAP.

Step 3: Search your LDAP server with the commands below:
ldapsearch -W -H ldap://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"
ldapsearch -W -H ldaps://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"

Step 4: Create a master password for Knox (the keystore lives at /usr/hdp/current/knox-server/data/security/keystores/gateway.jks):
/usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh create-master --force
Enter the password, then verify it. Note: 2.6.4.0-91 is my HDP version; use your own HDP version in /usr/hdp/XXXXXXX/.

Step 5: Validate your topology file (your cluster name and topology file name should be the same):
/usr/hdp/2.6.0.3-8/knox/bin/knoxcli.sh validate-topology --cluster walhdp

Step 6: Validate your auth users:
sudo /usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh --d system-user-auth-test --cluster walhdp

Step 7: Change all the properties below and restart the required services:
HDFS (core-site.xml):
hadoop.proxyuser.knox.groups=*
hadoop.proxyuser.knox.hosts=*
Hive:
webhcat.proxyuser.knox.groups=*
webhcat.proxyuser.knox.hosts=*
hive.server2.allow.user.substitution=true
hive.server2.transport.mode=http
hive.server2.thrift.http.port=10001
hive.server2.thrift.http.path=cliservice
Oozie:
oozie.service.ProxyUserService.proxyuser.knox.groups=*
oozie.service.ProxyUserService.proxyuser.knox.hosts=*

Step 8: Try to access the HDFS list status:
curl -vvv -i -k -u binduser -X GET https://hdp-node1.ansari.net:8443/gateway/walhdp/webhdfs/v1?op=LISTSTATUS
curl -vvv -i -k -u binduser -X GET https://namenodehost:8443/gateway/walhdp(clustername)/webhdfs/v1?op=LISTSTATUS

Step 9: Try to access Hive via Beeline:
!connect jdbc:hive2://hdp-node1.ansari.net:8443/;ssl=true;sslTrustStore=/home/faheem/gateway.jks;trustStorePassword=bigdata;transportMode=http;httpPath=gateway/walhdp/hive
Enter username: binduser
Enter password for binduser: XXXXXXXXXX

Step 10: Access the web UIs via Knox using the URLs below:
Ambari UI: https://ambari-server-fqdn-or-ip:8443/gateway/walhdp/ambari/
HDFS UI: https://namenode-fqdn:8443/gateway/walhdp/hdfs/
HBase UI: https://hbase-master-fqdn:8443/gateway/walhdp/hbase/webui/
YARN UI: https://yarn-master-fqdn:8443/gateway/walhdp/yarn/cluster/apps/RUNNING
Resource Manager: https://resource-manager-fqdn:8443/gateway/walhdp/resourcemanager/v1/cluster
curl -ivk -u binduser:Ansari123 "https://hdp-node3.ansari.net:8443/gateway/walhdp/resourcemanager/v1/cluster"
curl -ivk -u binduser:Ansari123 "https://localhost:8443/gateway/walhdp/resourcemanager/v1/cluster"
Ranger UI: https://ranger-admin-fqdn:8443/gateway/walhdp/ranger/index.html
Oozie UI: https://oozie-server-fqdn:8443/gateway/walhdp/oozie/
Zeppelin: https://zeppelin-fqdn:8443/gateway/walhdp/zeppelin/

Thanks,
Ansari Faheem Ahmed, HDPCA Certified
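One more check that can be worth running between Step 6 and the curl tests: verify that an individual LDAP user can authenticate through the topology, not only the system user. This is a hedged sketch reusing the example paths and the walhdp cluster name from above; substitute your own HDP version path and credentials:

```bash
# Test an end user's bind against the Shiro/LDAP settings of the walhdp topology.
# --u/--p take the user's credentials; --d prints additional debug output.
sudo /usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh user-auth-test --cluster walhdp --u binduser --p 'XXXXXXXX' --d
```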
... View more
- Find more articles tagged with:
- Hadoop Core
- Issue Resolution
- Knox
- knox-gateway
- knox-ldap
- knox-namenode-ha
04-03-2018
12:03 PM
Hello Kuldeep Kulkarni, I have followed all the steps you mentioned in the article, but the HDP installation is taking a long time; after one hour the installation is still in progress. Thanks, Ansari Faheem Ahmed
... View more
11-24-2017
06:52 AM
I am trying to create a deny policy. Before creating the deny policy, I added the following property in the custom ranger-admin-site file, since the deny condition in policies is disabled by default and must be enabled for use:
From Ambari > Ranger > Configs > Advanced > Custom ranger-admin-site, add ranger.servicedef.enableDenyAndExceptionsInPolicies=true.
But it does not work for me. Can someone give me the steps?
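For reference, the same property can also be pushed from the Ambari server command line instead of the web UI. This is only a hedged sketch using Ambari's bundled configs.sh helper; the script path and options can differ between Ambari versions, and admin/admin and c1 are placeholder credentials and cluster name. Ranger Admin still needs a restart afterwards:

```bash
# Set the flag in the ranger-admin-site config type, then restart Ranger Admin from Ambari.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set localhost c1 ranger-admin-site \
  ranger.servicedef.enableDenyAndExceptionsInPolicies true
```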
... View more
Labels:
- Apache Hive
- Apache Ranger
10-26-2017
10:12 AM
Please see the attached phoenix.jpg. I have made the changes according to the pages below:
https://community.hortonworks.com/questions/1652/how-can-i-query-hbase-from-hive.html
https://phoenix.apache.org/hive_storage_handler.html
After setting up all the jars, I added the property in custom hive-env:
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
and in custom hive-site:
export HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
The WebHCat Server will not start, and the logs show the following error:
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Exception in thread "main" java.lang.IllegalStateException: Variable substitution depth too large: 20 "${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:967)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:987)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:77)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:59)
at org.apache.hive.hcatalog.templeton.AppConfig.dumpEnvironent(AppConfig.java:256)
at org.apache.hive.hcatalog.templeton.AppConfig.init(AppConfig.java:198)
at org.apache.hive.hcatalog.templeton.AppConfig.<init>(AppConfig.java:173)
at org.apache.hive.hcatalog.templeton.Main.loadConfig(Main.java:97)
at org.apache.hive.hcatalog.templeton.Main.init(Main.java:81)
at org.apache.hive.hcatalog.templeton.Main.<init>(Main.java:76)
at org.apache.hive.hcatalog.templeton.Main.main(Main.java:289)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
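The "Variable substitution depth too large" message is usually a sign that the self-referencing ${HIVE_AUX_JARS_PATH} value ended up in hive-site (a Hadoop Configuration file, where it keeps expanding into itself) instead of the hive-env shell template. A minimal sketch of keeping both jars in hive-env only, assuming the stock HDP paths shown above:

```bash
# In Ambari > Hive > Configs > Advanced hive-env (the hive-env template), not in hive-site:
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar:/usr/hdp/current/phoenix-client/phoenix-hive.jar
```

Removing the export line from custom hive-site and restarting WebHCat should then let the configuration dump get past this point.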
... View more
Labels:
- Apache Hive
- Apache Phoenix
10-26-2017
10:08 AM
I have made the changes according to the answer by @Guilherme Braccialli. After adding the jars, I put the following setting in custom hive-env:
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
and in custom hive-site:
export HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
But no luck, and the WebHCat Server is not starting. ERROR from the log:
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Exception in thread "main" java.lang.IllegalStateException: Variable substitution depth too large: 20 "${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:967)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:987)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:77)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:59)
at org.apache.hive.hcatalog.templeton.AppConfig.dumpEnvironent(AppConfig.java:256)
at org.apache.hive.hcatalog.templeton.AppConfig.init(AppConfig.java:198)
at org.apache.hive.hcatalog.templeton.AppConfig.<init>(AppConfig.java:173)
at org.apache.hive.hcatalog.templeton.Main.loadConfig(Main.java:97)
at org.apache.hive.hcatalog.templeton.Main.init(Main.java:81)
at org.apache.hive.hcatalog.templeton.Main.<init>(Main.java:76)
at org.apache.hive.hcatalog.templeton.Main.main(Main.java:289)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
... View more
10-25-2017
07:12 AM
When I run the command hdfs dfs -ls /user/, please check the hdpuser1 entry: why is it shown in double quotes? Please refer to the attached screenshot (user.jpg). Can anyone help me with how to remove the double quotes?
... View more
Labels:
- Apache Hadoop
09-06-2017
08:59 AM
ERROR InsertIntoHadoopFsRelation: Aborting job. java.io.IOException: Failed to rename FileStatus
ERROR DefaultWriterContainer: Job job_201709052340_0000 aborted.
17/09/05 23:40:56 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted.
org.apache.spark.SparkException: Job aborted.
... View more
08-31-2017
05:27 AM
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(Unknown Source)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
... View more
08-28-2017
03:39 PM
Thanks for the reply, but I want to change the SSH user. I configured SSH with the root account, but now I have to change to the centos account. Is it possible to change it or not?
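If the goal is to have Ambari register the agents over SSH as centos instead of root, the key setup can simply be redone for that account. A minimal sketch, where the host names are placeholders and the centos user is assumed to have passwordless sudo on each host:

```bash
# On the machine you run the Ambari install wizard from: create a key pair if none exists yet.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
# Copy the public key to the centos account on every cluster host.
for h in node1.example.com node2.example.com node3.example.com; do
  ssh-copy-id centos@"$h"
done
# Then redo host registration in Ambari, supplying this private key and "centos" as the SSH user.
```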
... View more
08-09-2017
03:16 AM
Can someone provide the best settings for the Spark heap size? Much appreciated.
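There is no single best heap size; it depends on the YARN container limits, the executor core count, and the job itself. As a hedged starting point, these are the knobs that are usually tuned (the values and your_app.py below are only placeholders):

```bash
# Heap per executor must fit inside the YARN container size (yarn.scheduler.maximum-allocation-mb);
# spark.yarn.executor.memoryOverhead is extra off-heap headroom per executor, in MB.
spark-submit \
  --master yarn \
  --driver-memory 4g \
  --executor-memory 4g \
  --executor-cores 2 \
  --num-executors 10 \
  --conf spark.yarn.executor.memoryOverhead=512 \
  your_app.py
```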
... View more
Labels:
- Apache Spark
08-04-2017
06:55 AM
Can you put your user ID in yarn.admin.acl = user_id_name, then restart the required services and try to restart the Tez View instance?
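For reference on the value format: yarn.admin.acl takes a comma-separated list of users, then a space, then a comma-separated list of groups. A hedged sketch (admin_user and hadoop-admins are placeholder names):

```bash
# Example value to set in Ambari under YARN > Configs:
#   yarn.admin.acl=yarn,admin_user hadoop-admins
# After saving, restart YARN, or refresh the ACLs on a running ResourceManager:
yarn rmadmin -refreshAdminAcls
```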
... View more
08-04-2017
06:54 AM
Can you put your user ID in yarn.admin.acl = user-id_name, then restart the required service?
... View more
07-29-2017
11:19 PM
Thanks a lot Jay SenSharma
... View more
07-24-2017
01:29 PM
2017-07-24 09:22:05,733 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-07-24 09:22:05,739 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-07-24 09:22:05,741 - Group['hadoop'] {}
2017-07-24 09:22:05,742 - Group['users'] {}
2017-07-24 09:22:05,742 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,742 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,743 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 09:22:05,743 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,744 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 09:22:05,744 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 09:22:05,745 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,745 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,746 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,746 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,747 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,747 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,748 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,748 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 09:22:05,749 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-07-24 09:22:05,754 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-07-24 09:22:05,754 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-07-24 09:22:05,755 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 09:22:05,756 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-07-24 09:22:05,760 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-07-24 09:22:05,760 - Group['hdfs'] {}
2017-07-24 09:22:05,760 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-07-24 09:22:05,761 - FS Type:
2017-07-24 09:22:05,761 - Directory['/etc/hadoop'] {'mode': 0755}
2017-07-24 09:22:05,773 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-07-24 09:22:05,774 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-07-24 09:22:05,787 - Initializing 2 repositories
2017-07-24 09:22:05,788 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-07-24 09:22:05,794 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 09:22:05,794 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-07-24 09:22:05,797 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 09:22:05,797 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:05,913 - Skipping installation of existing package unzip
2017-07-24 09:22:05,913 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:05,946 - Skipping installation of existing package curl
2017-07-24 09:22:05,947 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:05,979 - Skipping installation of existing package hdp-select
2017-07-24 09:22:06,141 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-07-24 09:22:06,143 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-07-24 09:22:06,163 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-07-24 09:22:06,175 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-07-24 09:22:06,202 - checked_call returned (0, '2.5.6.0-40', '')
2017-07-24 09:22:06,208 - Package['hadoop_2_5_6_0_40'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:06,325 - Skipping installation of existing package hadoop_2_5_6_0_40
2017-07-24 09:22:06,326 - Package['hadoop_2_5_6_0_40-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:06,360 - Installing package hadoop_2_5_6_0_40-client ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-client')
2017-07-24 09:22:07,727 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-client' returned 1. Error: Package: hadoop_2_5_6_0_40-hdfs-2.7.3.2.5.6.0-40.el6.x86_64 (HDP-2.5)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2017-07-24 09:22:07,727 - Failed to install package hadoop_2_5_6_0_40-client. Executing '/usr/bin/yum clean metadata'
2017-07-24 09:22:07,938 - Retrying to install package hadoop_2_5_6_0_40-client after 30 seconds
Command failed after 1 tries
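The yum output shows the actual blocker: the hadoop-hdfs package requires libtirpc-devel, which is often in a repo that is not enabled (the optional/server-optional channel on RHEL; base on CentOS). A hedged sketch of checking and installing it on the failing host before retrying from Ambari:

```bash
# See whether any enabled repo can provide the missing dependency.
yum provides libtirpc-devel
# On RHEL you may first need to enable the optional channel, for example:
#   subscription-manager repos --enable=rhel-7-server-optional-rpms
yum install -y libtirpc-devel
# Then retry the failed client installation from the Ambari UI.
```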
... View more
07-24-2017
01:21 PM
2017-07-24 08:55:57,187 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-07-24 08:55:57,193 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-07-24 08:55:57,194 - Group['hadoop'] {}
2017-07-24 08:55:57,195 - Group['users'] {}
2017-07-24 08:55:57,195 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,196 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,196 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,197 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,197 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,198 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,198 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,199 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,199 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,200 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,200 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,201 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,201 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,202 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 08:55:57,203 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-07-24 08:55:57,207 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-07-24 08:55:57,207 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-07-24 08:55:57,208 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 08:55:57,209 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-07-24 08:55:57,213 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-07-24 08:55:57,213 - Group['hdfs'] {}
2017-07-24 08:55:57,213 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-07-24 08:55:57,214 - FS Type:
2017-07-24 08:55:57,214 - Directory['/etc/hadoop'] {'mode': 0755}
2017-07-24 08:55:57,226 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-07-24 08:55:57,226 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-07-24 08:55:57,238 - Initializing 2 repositories
2017-07-24 08:55:57,238 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-07-24 08:55:57,244 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 08:55:57,245 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-07-24 08:55:57,247 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 08:55:57,248 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,358 - Skipping installation of existing package unzip
2017-07-24 08:55:57,358 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,390 - Skipping installation of existing package curl
2017-07-24 08:55:57,390 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,422 - Skipping installation of existing package hdp-select
2017-07-24 08:55:57,592 - Package['hadoop_2_5_6_0_40-yarn'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,705 - Skipping installation of existing package hadoop_2_5_6_0_40-yarn
2017-07-24 08:55:57,706 - Package['hadoop_2_5_6_0_40-mapreduce'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,738 - Skipping installation of existing package hadoop_2_5_6_0_40-mapreduce
2017-07-24 08:55:57,739 - Package['hadoop_2_5_6_0_40-hdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,771 - Installing package hadoop_2_5_6_0_40-hdfs ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-hdfs')
2017-07-24 08:55:59,481 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-hdfs' returned 1. Error: Package: hadoop_2_5_6_0_40-hdfs-2.7.3.2.5.6.0-40.el6.x86_64 (HDP-2.5)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2017-07-24 08:55:59,482 - Failed to install package hadoop_2_5_6_0_40-hdfs. Executing '/usr/bin/yum clean metadata'
2017-07-24 08:55:59,686 - Retrying to install package hadoop_2_5_6_0_40-hdfs after 30 seconds
Command failed after 1 tries
... View more
06-14-2017
02:58 AM
Log file: 2017-06-12 23:06:29,603 ERROR
[regionserver/sddsvrwm383.scglobaluat.aduat.com/172.25.12.67:16020]
zookeeper.RecoverableZooKeeper: ZooKeeper getChildren failed after 7 attempts 2017-06-12 23:06:29,603 WARN
[regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
zookeeper.ZKUtil: regionserver:16020-0x35c93984f940fd2,
quorum=sddsvrwm369.scglobaluat.aduat. com:2181,sddsvrwm367.scglobaluat.aduat. com:2181,sddsvrwm368.scglobaluat.aduat.com:2181,
baseZNode=/hbase-secure Unable to list children of znode
/hbase-secure/replication/rs/sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052 org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/hbase-secure/replication/rs/sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052 at
org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at
org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at
org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472) at
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:455) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:483) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenBFSAndWatchThem(ZKUtil.java:1462) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursivelyMultiOrSequential(ZKUtil.java:1384) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursively(ZKUtil.java:1266) at
org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.removeAllQueues(ReplicationQueuesZKImpl.java:196) at
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.join(ReplicationSourceManager.java:302) at org.apache.hadoop.hbase.replication.regionserver.Replication.join(Replication.java:202) at
org.apache.hadoop.hbase.replication.regionserver.Replication.stopReplicationService(Replication.java:194) at
org.apache.hadoop.hbase.regionserver.HRegionServer.stopServiceThreads(HRegionServer.java:2163) at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1090) at
java.lang.Thread.run(Thread.java:745) 2017-06-12 23:06:29,604 ERROR
[regionserver/sddsvrwm383.scglobaluat.aduat.com/172.25.12.67:16020]
zookeeper.ZooKeeperWatcher: regionserver:16020-0x35c93984f940fd2,
quorum=sddsvrwm369.scglobaluat.aduat.com:2181,sddsvrwm367.scglobaluat.aduat.com:2181,sddsvrwm368.scglobaluat.aduat.com:2181,
baseZNode=/hbase-secure Received unexpected KeeperException, re-throwing
exception org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/hbase-secure/replication/rs/sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052 at
org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at
org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at
org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472) at
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:455) at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:483) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenBFSAndWatchThem(ZKUtil.java:1462) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursivelyMultiOrSequential(ZKUtil.java:1384) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursively(ZKUtil.java:1266) at
org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.removeAllQueues(ReplicationQueuesZKImpl.java:196) at
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.join(ReplicationSourceManager.java:302) at
org.apache.hadoop.hbase.replication.regionserver.Replication.join(Replication.java:202) at
org.apache.hadoop.hbase.replication.regionserver.Replication.stopReplicationService(Replication.java:194) at
org.apache.hadoop.hbase.regionserver.HRegionServer.stopServiceThreads(HRegionServer.java:2163) at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1090) at
java.lang.Thread.run(Thread.java:745) 2017-06-12 23:06:29,605 INFO
[regionserver/sddsvrwm383.scglobaluat.aduat.com/172.25.12.67:16020]
ipc.RpcServer: Stopping server on 16020 2017-06-12 23:06:29,606 INFO [regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
token.AuthenticationTokenSecretManager: Stopping leader election, because:
SecretManager stopping 2017-06-12 23:06:29,607 INFO
[RpcServer.listener,port=16020] ipc.RpcServer:
RpcServer.listener,port=16020: stopping 2017-06-12 23:06:29,614 INFO
[RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped 2017-06-12 23:06:29,614 INFO
[RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping 2017-06-12 23:06:29,619 WARN
[regionserver/sddsvrwm383.scglobaluat.aduat.com/172.25.12.67:16020]
zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper,
quorum=sddsvrwm369.scglobaluat.aduat. com:2181,sddsvrwm367.scglobaluat.aduat. com:2181,sddsvrwm368.scglobaluat.aduat.
com:2181,
exception=org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/hbase-secure/rs/sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052 2017-06-12 23:07:29,625 INFO
[HBase-Metrics2-1] impl.MetricsSystemImpl: Stopping HBase metrics
system... 2017-06-12 23:07:29,625 INFO
[timeline] impl.MetricsSinkAdapter: timeline thread interrupted. 2017-06-12 23:07:29,628 INFO
[HBase-Metrics2-1] impl.MetricsSystemImpl: HBase metrics system stopped. 2017-06-12 23:07:29,628 INFO
[pool-684-thread-1] timeline.HadoopTimelineMetricsSink: Closing
HadoopTimelineMetricSink. Flushing metrics to collector... 2017-06-12 23:07:30,132 INFO
[HBase-Metrics2-1] impl.MetricsConfig: loaded properties from
hadoop-metrics2-hbase.properties 2017-06-12 23:07:30,148 INFO
[HBase-Metrics2-1] timeline.HadoopTimelineMetricsSink: Initializing
Timeline metrics sink. 2017-06-12 23:07:30,148 INFO
[HBase-Metrics2-1] timeline.HadoopTimelineMetricsSink: Identified
hostname = sddsvrwm383.scglobaluat.aduat.com, serviceName = hbase 2017-06-12 23:07:30,148 INFO
[HBase-Metrics2-1] timeline.HadoopTimelineMetricsSink: Collector Uri:
http://sddsvrwm368.scglobaluat.aduat. com:6188/ws/v1/timeline/metrics 2017-06-12 23:07:30,151 INFO
[HBase-Metrics2-1] impl.MetricsSinkAdapter: Sink timeline started 2017-06-12 23:07:30,151 INFO
[HBase-Metrics2-1] impl.MetricsSystemImpl: Scheduled snapshot period at
10 second(s). 2017-06-12 23:07:30,151 INFO
[HBase-Metrics2-1] impl.MetricsSystemImpl: HBase metrics system started 2017-06-12 23:07:32,620 WARN
[regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper,
quorum=sddsvrwm369.scglobaluat.aduat. com:2181,sddsvrwm367.scglobaluat.aduat. com:2181,sddsvrwm368.scglobaluat.aduat.com:2181,
exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode
= Session expired for /hbase-secure/rs/sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052 2017-06-12 23:08:36,621 WARN
[regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper,
quorum=sddsvrwm369.scglobaluat.aduat. com:2181,sddsvrwm367.scglobaluat.aduat. com:2181,sddsvrwm368.scglobaluat.aduat.
com:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/hbase-secure/rs/sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052 2017-06-12 23:08:36,621 ERROR
[regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 7 attempts 2017-06-12 23:08:36,621 WARN
[regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
regionserver.HRegionServer: Failed deleting my ephemeral node org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/hbase-secure/rs/sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052 at
org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at
org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at
org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873) at
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:178) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1222) at
org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1211) at
org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1427) at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1098) at
java.lang.Thread.run(Thread.java:745) 2017-06-12 23:08:36,622 INFO
[regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
regionserver.HRegionServer: stopping server sddsvrwm383.scglobaluat.aduat. com,16020,1497136332052;
zookeeper connection closed. 2017-06-12 23:08:36,622 INFO
[regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020]
regionserver.HRegionServer: regionserver/sddsvrwm383.scglobaluat.aduat. com/172.25.12.67:16020
exiting 2017-06-12 23:08:36,627 ERROR [main]
regionserver.HRegionServerCommandLine: Region server exiting java.lang.RuntimeException: HRegionServer Aborted at
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68) at
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) at
org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2681) 2017-06-12 23:08:36,633 INFO
[pool-5-thread-1] provider.AuditProviderFactory: ==>
JVMShutdownHook.run() 2017-06-12 23:08:36,633 INFO
[pool-5-thread-1] provider.AuditProviderFactory: JVMShutdownHook:
Signalling async audit cleanup to start. 2017-06-12 23:08:36,633 INFO
[pool-5-thread-1] provider.AuditProviderFactory: JVMShutdownHook:
Waiting up to 30 seconds for audit cleanup to finish. 2017-06-12 23:08:36,634 INFO
[Ranger async Audit cleanup] provider.AuditProviderFactory:
RangerAsyncAuditCleanup: Starting cleanup 2017-06-12 23:08:36,635 INFO
[Ranger async Audit cleanup] destination.HDFSAuditDestination: Flush
HDFS audit logs completed..... 2017-06-12 23:08:36,635 INFO
[Ranger async Audit cleanup] queue.AuditAsyncQueue: Stop called.
name=hbaseRegional.async 2017-06-12 23:08:36,635 INFO
[Ranger async Audit cleanup] queue.AuditAsyncQueue: Interrupting
consumerThread. name=hbaseRegional.async, consumer=hbaseRegional.async.summary 2017-06-12 23:08:36,635 INFO
[Ranger async Audit cleanup] provider.AuditProviderFactory: RangerAsyncAuditCleanup:
Done cleanup 2017-06-12 23:08:36,635 INFO
[Ranger async Audit cleanup] provider.AuditProviderFactory:
RangerAsyncAuditCleanup: Waiting to audit cleanup start signal 2017-06-12 23:08:36,635 INFO
[pool-5-thread-1] provider.AuditProviderFactory: JVMShutdownHook: Audit
cleanup finished after 2 milli seconds 2017-06-12 23:08:36,635 INFO
[pool-5-thread-1] provider.AuditProviderFactory: JVMShutdownHook:
Interrupting ranger async audit cleanup thread 2017-06-12 23:08:36,635 INFO
[pool-5-thread-1] provider.AuditProviderFactory: <==
JVMShutdownHook.run() 2017-06-12 23:08:36,635 INFO
[Ranger async Audit cleanup] provider.AuditProviderFactory:
RangerAsyncAuditCleanup: Interrupted while waiting for audit startCleanup
signal! Exiting the thread... java.lang.InterruptedException at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) at
org.apache.ranger.audit.provider.AuditProviderFactory$RangerAsyncAuditCleanup.run(AuditProviderFactory.java:487) at
java.lang.Thread.run(Thread.java:745) 2017-06-12 23:08:36,636 INFO
[org.apache.ranger.audit.queue.AuditAsyncQueue0] queue.AuditAsyncQueue:
Caught exception in consumer thread. Shutdown might be in progress 2017-06-12 23:08:36,636 INFO
[org.apache.ranger.audit.queue.AuditAsyncQueue0] queue.AuditAsyncQueue:
Exiting polling loop. name=hbaseRegional.async 2017-06-12 23:08:36,636 INFO
[org.apache.ranger.audit.queue.AuditAsyncQueue0] queue.AuditAsyncQueue:
Calling to stop consumer. name=hbaseRegional.async,
consumer.name=hbaseRegional.async.summary 2017-06-12 23:08:36,636 INFO
[org.apache.ranger.audit.queue.AuditAsyncQueue0]
queue.AuditSummaryQueue: Stop called. name=hbaseRegional.async.summary 2017-06-12 23:08:36,636 INFO
[org.apache.ranger.audit.queue.AuditAsyncQueue0]
queue.AuditSummaryQueue: Interrupting consumerThread.
name=hbaseRegional.async.summary, consumer=hbaseRegional.async.summary.batch 2017-06-12 23:08:36,636 INFO
[org.apache.ranger.audit.queue.AuditAsyncQueue0] queue.AuditAsyncQueue:
Exiting consumerThread.run() method. name=hbaseRegional.async 2017-06-12 23:08:36,636 INFO
[org.apache.ranger.audit.queue.AuditSummaryQueue0]
queue.AuditSummaryQueue: Caught exception in consumer thread. Shutdown might be
in progress 2017-06-12 23:08:36,637 INFO
[org.apache.ranger.audit.queue.AuditSummaryQueue0]
queue.AuditSummaryQueue: Exiting polling loop. name=hbaseRegional.async.summary 2017-06-12 23:08:36,637 INFO
[org.apache.ranger.audit.queue.AuditSummaryQueue0] queue.AuditSummaryQueue:
Calling to stop consumer. name=hbaseRegional.async.summary,
consumer.name=hbaseRegional.async.summary.batch 2017-06-12 23:08:36,637 INFO
[org.apache.ranger.audit.queue.AuditSummaryQueue0]
queue.AuditBatchQueue: Stop called. name=hbaseRegional.async.summary.batch 2017-06-12 23:08:36,637 INFO
[pool-5-thread-1] regionserver.ShutdownHook: Shutdown hook starting;
hbase.shutdown.hook=true;
fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@4f3faa70 2017-06-12 23:08:36,637 INFO
[org.apache.ranger.audit.queue.AuditSummaryQueue0]
destination.HDFSAuditDestination: Flush HDFS audit logs completed..... 2017-06-12 23:08:36,637 INFO
[org.apache.ranger.audit.queue.AuditSummaryQueue0]
queue.AuditBatchQueue: Interrupting consumerThread. name=hbaseRegional.async.summary.batch,
consumer=hbaseRegional.async.summary.batch.hdfs 2017-06-12 23:08:36,637 INFO
[org.apache.ranger.audit.queue.AuditSummaryQueue0]
queue.AuditSummaryQueue: Exiting consumerThread.run() method.
name=hbaseRegional.async.summary name=hbaseRegional.async.summary 2017-06-12 23:08:36,638 INFO
[pool-5-thread-1] regionserver.ShutdownHook: Starting fs shutdown hook
thread. 2017-06-12 23:08:36,637 INFO
[org.apache.ranger.audit.queue.AuditBatchQueue0] queue.AuditBatchQueue:
Caught exception in consumer thread. Shutdown might be in progress 2017-06-12 23:08:36,638 INFO
[org.apache.ranger.audit.queue.AuditBatchQueue0] queue.AuditBatchQueue:
Exiting consumerThread. Queue=hbaseRegional.async.summary.batch,
dest=hbaseRegional.async.summary.batch.hdfs 2017-06-12 23:08:36,638 INFO
[org.apache.ranger.audit.queue.AuditBatchQueue0] queue.AuditBatchQueue:
Calling to stop consumer. name=hbaseRegional.async.summary.batch,
consumer.name=hbaseRegional.async.summary.batch.hdfs 2017-06-12 23:08:36,651 INFO
[org.apache.ranger.audit.queue.AuditBatchQueue0]
provider.BaseAuditHandler: Audit Status Log:
name=hbaseRegional.async.summary.batch.hdfs, interval=04:18.058 minutes,
events=2, succcessCount=2, totalEvents=12, totalSuccessCount=12 2017-06-12 23:08:36,651 INFO
[org.apache.ranger.audit.queue.AuditBatchQueue0] queue.AuditFileSpool:
Stop called, queueName=hbaseRegional.async.summary.batch,
consumer=hbaseRegional.async.summary.batch.hdfs 2017-06-12 23:08:36,652 INFO
[org.apache.ranger.audit.queue.AuditBatchQueue0] queue.AuditBatchQueue:
Exiting consumerThread.run() method. name=hbaseRegional.async.summary.batch 2017-06-12 23:08:36,651 INFO
[hbaseRegional.async.summary.batch_hbaseRegional.async.summary.batch.hdfs_destWriter]
queue.AuditFileSpool: Caught exception in consumer thread. Shutdown might be in
progress 2017-06-12 23:08:36,655 INFO
[pool-5-thread-1] regionserver.ShutdownHook: Shutdown hook finished.
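Session expirations like the ones above usually mean the RegionServer went silent longer than its ZooKeeper session timeout (long JVM GC pauses, swapping, or network issues) rather than pointing at a ZooKeeper bug. A hedged sketch of what is typically reviewed; the property names are the standard HBase/ZooKeeper ones, the values are only examples, and the effective timeout is capped by the ZooKeeper server's session bounds:

```bash
# hbase-site.xml: how long the RegionServer's ZooKeeper session may stay silent before expiring.
#   zookeeper.session.timeout=90000          (milliseconds)
# zoo.cfg on the ZooKeeper nodes: the server-side cap on client session timeouts.
#   maxSessionTimeout=120000
# Check the RegionServer GC log around the failure time (23:06) for pauses longer than the timeout;
# the log location below is just the usual HDP default.
grep -i "pause" /var/log/hbase/gc.log* | tail -20
```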
... View more
- Tags:
- Data Processing
- HBase
Labels:
- Apache HBase
05-20-2017
07:27 AM
Thanks for the reply. We tried that, but it was not successful. Is there any other way to resolve the above issue?
... View more
05-19-2017
04:30 PM
kerberized*
... View more
05-19-2017
04:28 PM
We have configured Grafana in a Kerberized environment. When we try to open the Grafana web UI, we get a Blue Coat web filter (business class) page instead. We talked to our network admin and tried to find out whether some proxy or firewall is blocking the Grafana port; we also tried changing the Grafana port from 3000 to 3001, but without success. Grafana has the JSON data mapped, and we can retrieve the JSON data using the Advanced REST Client, but as soon as we use any browser we are not able to get the graph.
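To separate a Grafana problem from a proxy/web-filter problem, it can help to test the UI from the Grafana host itself, bypassing Blue Coat entirely. A minimal sketch; grafana-host.example.com is a placeholder and 3000 is the default port mentioned above:

```bash
# From the node running Grafana (no proxy involved): expect a 200 or 302 status code.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000/
# From a workstation that goes through the Blue Coat proxy: a block page or a different
# status here, while the local check is fine, points at the web filter rather than Grafana.
curl -sI http://grafana-host.example.com:3000/
```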
... View more
- Tags:
- grafana
12-09-2016
05:50 PM
Can anyone please provide docs or a link for performance tuning of HDP?
... View more
11-09-2016
05:17 AM
@Savanna Endicott: I have done the HA rollback; please send the link to that doc. I also rolled back HA but got an error with the following command: " curl --negotiate -u root:hashmap "X-Requested-By: ambari" -i -X POST -d '{"host_components" : [{"HostRoles":{"component_name":"navideh02.hash.net"}] }' http://localhost:8080/api/v1/clusters/NHDP/hosts?Hosts/host_name=navideh02.hash.net" Please correct the above command.
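For comparison, the general shape such a call usually takes: the request header needs the -H flag, component_name should be an Ambari component name (for example NAMENODE or ZKFC) rather than a hostname, and the target host can go directly in the URL. A hedged sketch against Ambari's documented host_components endpoint, reusing the placeholder credentials, cluster, and host from the question:

```bash
# Add the NAMENODE component to the given host; install/start it afterwards via the UI or further API calls.
curl --negotiate -u root:hashmap \
  -H "X-Requested-By: ambari" \
  -i -X POST \
  http://localhost:8080/api/v1/clusters/NHDP/hosts/navideh02.hash.net/host_components/NAMENODE
```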
... View more