Member since: 03-09-2016
Posts: 91
Kudos Received: 3
Solution: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 265 | 10-26-2018 09:52 AM
10-26-2018
09:52 AM
1 Kudo
@Sampath Kumar, I don't think you will get any errors configuring HA in a Kerberized cluster. Just take care of the standard steps for configuring NameNode HA; Ambari will take care of the Kerberos-related options for you.
10-25-2018
04:12 PM
We have NiFi 1.7 running on the Azure cloud with two NiFi nodes and three ZooKeeper servers, set up as a standalone NiFi cluster.
09-19-2018
07:22 PM
We have HDP 2.6.2.14 and Ambari 2.5.2.0 with Kafka 0.10.1.
06-08-2018
06:16 AM
@Vinicius, we had checked all the prerequisites properly, as well as everything in the ZooKeeper log and the Ambari agent.
06-06-2018
11:31 AM
Our workaround: we restarted all the servers and also restarted the Ambari agent on every node. We are still trying to troubleshoot the issue; I think this is a bug in the above-mentioned HDP version.
05-15-2018
12:23 PM
1 Kudo
Note: First create your topology file. Please find the attached examples: knox-topology-file.xml and knox-ad-ldap-upgraded-docus.pdf. The PDF covers all the practical concepts and some of the theory.

Step 1: Install Knox on an edge node or any node in the cluster.

Step 2: Start the Knox service from Ambari, and make sure your Ambari Server is already synced with LDAP.

Step 3: Search your LDAP server with the commands below:
ldapsearch -W -H ldap://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"
ldapsearch -W -H ldaps://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"

Step 4: Create a master password for Knox (it is stored in /usr/hdp/current/knox-server/data/security/keystores/gateway.jks):
/usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh create-master --force
Enter the password, then verify it.
Note: 2.6.4.0-91 is my HDP version; use your own HDP version under /usr/hdp/XXXXXXX/.

Step 5: Validate your topology file (your cluster name and topology file name should be the same):
/usr/hdp/2.6.0.3-8/knox/bin/knoxcli.sh validate-topology --cluster walhdp

Step 6: Validate your auth users:
sudo /usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh --d system-user-auth-test --cluster walhdp

Step 7: Change all the properties below and restart the required services:
HDFS (core-site.xml):
hadoop.proxyuser.knox.groups=*
hadoop.proxyuser.knox.hosts=*
Hive:
webhcat.proxyuser.knox.groups=*
webhcat.proxyuser.knox.hosts=*
hive.server2.allow.user.substitution=true
hive.server2.transport.mode=http
hive.server2.thrift.http.port=10001
hive.server2.thrift.http.path=cliservice
Oozie:
oozie.service.ProxyUserService.proxyuser.knox.groups=*
oozie.service.ProxyUserService.proxyuser.knox.hosts=*

Step 8: Try to access the HDFS list status:
curl -vvv -i -k -u binduser -X GET https://hdp-node1.ansari.net:8443/gateway/walhdp/webhdfs/v1?op=LISTSTATUS
curl -vvv -i -k -u binduser -X GET https://namenodehost:8443/gateway/walhdp(clustername)/webhdfs/v1?op=LISTSTATUS

Step 9: Try to access Hive through beeline:
!connect jdbc:hive2://hdp-node1.ansari.net:8443/;ssl=true;sslTrustStore=/home/faheem/gateway.jks;trustStorePassword=bigdata;transportMode=http;httpPath=gateway/walhdp/hive
Enter username: binduser
Password for binduser: XXXXXXXXXX

Step 10: To access the web UIs via Knox, use the URLs below:
Ambari UI: https://ambari-server-fqdn-or-ip:8443/gateway/walhdp/ambari/
HDFS UI: https://namenode-fqdn:8443/gateway/walhdp/hdfs/
HBase UI: https://hbase-master-fqdn:8443/gateway/walhdp/hbase/webui/
YARN UI: https://yarn-master-fqdn:8443/gateway/walhdp/yarn/cluster/apps/RUNNING
Resource Manager: https://resource-manager-fqdn:8443/gateway/walhdp/resourcemanager/v1/cluster
curl -ivk -u binduser:Ansari123 "https://hdp-node3.ansari.net:8443/gateway/walhdp/resourcemanager/v1/cluster"
curl -ivk -u binduser:Ansari123 "https://localhost:8443/gateway/walhdp/resourcemanager/v1/cluster"
Ranger UI: https://ranger-admin-fqdn:8443/gateway/walhdp/ranger/index.html
Oozie UI: https://oozie-server-fqdn:8443/gateway/walhdp/oozie/
Zeppelin: https://zeppelin-fqdn:8443/gateway/walhdp/zeppelin/

Thanks,
Ansari Faheem Ahmed (HDPCA Certified)
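All of the UI endpoints above follow one pattern: https://<host>:8443/gateway/<topology>/<service-path>. A small shell sketch of that pattern (the walhdp topology and the host names come from the examples above and are placeholders for your own cluster):

```shell
#!/bin/sh
# Build a Knox gateway URL from host, topology (cluster) name, and service path.
# Pattern taken from the examples above: https://<host>:8443/gateway/<topology>/<service-path>
knox_url() {
  host="$1"; topology="$2"; service="$3"
  echo "https://${host}:8443/gateway/${topology}/${service}"
}

# Examples matching the article (host names are placeholders):
knox_url namenode-fqdn     walhdp hdfs/
knox_url ranger-admin-fqdn walhdp ranger/index.html
knox_url oozie-server-fqdn walhdp oozie/

# To check an endpoint, pair it with curl, e.g.:
# curl -ivk -u binduser "$(knox_url hdp-node3.ansari.net walhdp resourcemanager/v1/cluster)"
```

Keeping the topology name in one place like this also makes it easy to spot mismatches between the topology file name and the cluster name used in the URLs.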
- Find more articles tagged with:
- Hadoop Core
- Issue Resolution
- issue-resolution
- Knox
- knox-gateway
- knox-ldap
- knox-namenode-ha
04-03-2018
12:03 PM
Hello Kuldeep Kulkarni, I have followed all the steps you mentioned in the article, but the HDP installation is taking a long time; after one hour it is still in process. Thanks, Ansari Faheem Ahmed
11-24-2017
06:52 AM
I tried to create a deny policy. Before creating the deny policy, I added the following property in the custom ranger-admin-site file (the deny condition in policies is disabled by default and must be enabled for use):
From Ambari > Ranger > Configs > Advanced > Custom ranger-admin-site, add ranger.servicedef.enableDenyAndExceptionsInPolicies=true.
But it does not work for me. Can someone give me the steps?
11-07-2017
07:08 AM
I created a phoenix_table1 table in Hive. To test it, I followed these steps: I inserted data into phoenix_table1 using the upsert command, and the data is now present in phoenix_table1 (please refer to screenshot 1: screenshot1.jpg). But when I try to read the data from Hive, I still face the issue (please refer to screenshot 2: screenshot2.jpg). Can anyone please suggest a link or an approach to resolve this issue?
10-26-2017
10:12 AM
phoenix.jpg I have made the changes according to the sites below:
https://community.hortonworks.com/questions/1652/how-can-i-query-hbase-from-hive.html
https://phoenix.apache.org/hive_storage_handler.html
After setting up all the jars, I added the following property in custom hive-env:
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
and in custom hive-site:
export HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
The WebHCat Server is not starting, and no clear error message is showing in the logs beyond:
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Exception in thread "main" java.lang.IllegalStateException: Variable substitution depth too large: 20 "${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:967)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:987)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:77)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:59)
at org.apache.hive.hcatalog.templeton.AppConfig.dumpEnvironent(AppConfig.java:256)
at org.apache.hive.hcatalog.templeton.AppConfig.init(AppConfig.java:198)
at org.apache.hive.hcatalog.templeton.AppConfig.<init>(AppConfig.java:173)
at org.apache.hive.hcatalog.templeton.Main.loadConfig(Main.java:97)
at org.apache.hive.hcatalog.templeton.Main.init(Main.java:81)
at org.apache.hive.hcatalog.templeton.Main.<init>(Main.java:76)
at org.apache.hive.hcatalog.templeton.Main.main(Main.java:289)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
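For context, the "Variable substitution depth too large: 20" exception above comes from Hadoop's Configuration class (see the Configuration.substituteVars frame): values stored in *-site.xml files are re-expanded on every read, so a hive-site value that references ${HIVE_AUX_JARS_PATH} inside the definition of HIVE_AUX_JARS_PATH can never resolve and hits the depth limit of 20. In a shell file such as hive-env, the same self-reference is expanded only once, at assignment time, which is why it is safe there. A minimal shell sketch of that difference (paths copied from the post above):

```shell
#!/bin/sh
# In shell (hive-env style), ${VAR} inside an assignment is expanded once,
# using the variable's *current* value, so self-reference is fine:
HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
echo "$HIVE_AUX_JARS_PATH"   # both jars, no recursion

# Hadoop's Configuration, by contrast, keeps substituting ${...} inside the
# stored value every time the property is read; a value that mentions its own
# key therefore recurses until the depth limit (20) is hit and an
# IllegalStateException is thrown.
```

Keeping the appended export in hive-env only, and giving hive-site a literal jar list with no ${...} self-reference, avoids the recursion.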
10-26-2017
10:08 AM
I have made the changes according to the answer by @Guilherme Braccialli. After adding the jars, I put the following setting in custom hive-env:
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
and in custom hive-site:
export HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
Still no luck, and the WebHCat Server is not starting. ERROR from the log:
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Exception in thread "main" java.lang.IllegalStateException: Variable substitution depth too large: 20 "${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:967)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:987)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:77)
at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:59)
at org.apache.hive.hcatalog.templeton.AppConfig.dumpEnvironent(AppConfig.java:256)
at org.apache.hive.hcatalog.templeton.AppConfig.init(AppConfig.java:198)
at org.apache.hive.hcatalog.templeton.AppConfig.<init>(AppConfig.java:173)
at org.apache.hive.hcatalog.templeton.Main.loadConfig(Main.java:97)
at org.apache.hive.hcatalog.templeton.Main.init(Main.java:81)
at org.apache.hive.hcatalog.templeton.Main.<init>(Main.java:76)
at org.apache.hive.hcatalog.templeton.Main.main(Main.java:289)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
10-25-2017
07:12 AM
When I fire the command hdfs dfs -ls /user/, please check the user hdpuser1: it is shown in double quotes. Please refer to the screenshot (user.jpg). Can anyone help me understand why, and how to remove the double quotes?
09-06-2017
08:59 AM
ERROR InsertIntoHadoopFsRelation: Aborting job. java.io.IOException: Failed to rename FileStatus
ERROR DefaultWriterContainer: Job job_201709052340_0000 aborted.
17/09/05 23:40:56 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted.
org.apache.spark.SparkException: Job aborted.
08-31-2017
05:27 AM
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(Unknown Source)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
08-28-2017
03:39 PM
Thanks for the reply, but I want to change the SSH session. I configured SSH with the root account, but now I have to change to the CentOS account. Is it possible to change it or not?
08-25-2017
11:17 PM
ERROR from log file:
2017-08-25 14:10:38,328 ERROR [Thread-36209] hdfs.DFSClient: Failed to close inode 4628634
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /apps/hbase/data/WALs/bvlhdptdn02.ansari.net,16020,1502906250572-splitting/bvlhdptdn02.ansari.net%2C16020%2C1502906250572.default.1503688201261 (inode 4628634) owned by DFSClient_NONMAPREDUCE_1165748342_1 but is accessed by DFSClient_NONMAPREDUCE_-1708837476_1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:361)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3578)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:905)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy16.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:501)
at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy17.complete(Unknown Source)
at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy18.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2361)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2338)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:947)
at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:979)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1192)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2852)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2869)
at java.lang.Thread.run(Thread.java:748)
2017-08-25 14:10:38,328 INFO [pool-5-thread-1] regionserver.ShutdownHook: Shutdown hook finished.
- Tags:
- Data Processing
- HBase
08-09-2017
03:16 AM
Can someone provide the best settings for the Spark heap size? Much appreciated.
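There is no single best heap size; it depends on your container sizes and workload. For orientation, these are the standard spark-submit flags where driver and executor heap are set. The function below only assembles the command string; the 4g/8g/1024 values and app.jar are placeholders, not recommendations:

```shell
#!/bin/sh
# Sketch only: where the heap-related options go on a spark-submit invocation.
# Values are placeholders to be tuned against YARN container sizes and data volume.
spark_submit_cmd() {
  driver_mem="$1"; executor_mem="$2"; overhead_mb="$3"
  echo "spark-submit --driver-memory ${driver_mem} --executor-memory ${executor_mem} --conf spark.yarn.executor.memoryOverhead=${overhead_mb} app.jar"
}

spark_submit_cmd 4g 8g 1024
```

The off-heap overhead (spark.yarn.executor.memoryOverhead) matters too, since YARN kills containers that exceed executor memory plus overhead.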
08-04-2017
06:55 AM
Can you put your user ID in yarn.admin.acl = user_id_name, then restart the required services and try to restart the Tez View instance?
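For reference, yarn.admin.acl uses the standard Hadoop ACL format: a comma-separated list of users, optionally followed by a space and a comma-separated list of groups ("*" means everyone). A sketch of the yarn-site.xml entry, with user_id_name kept as the placeholder from the answer above:

```xml
<!-- yarn-site.xml (sketch): grant YARN admin rights; user_id_name is a placeholder -->
<property>
  <name>yarn.admin.acl</name>
  <!-- Format: "user1,user2 group1,group2"; "*" would allow everyone -->
  <value>yarn,user_id_name</value>
</property>
```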
08-04-2017
06:54 AM
Can you put your user ID in yarn.admin.acl = user-id_name, then restart the required service?
07-29-2017
11:19 PM
Thanks a lot, Jay SenSharma.
07-24-2017
01:29 PM
2017-07-24 09:22:05,733 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-07-24 09:22:05,739 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-07-24 09:22:05,741 - Group['hadoop'] {}
2017-07-24 09:22:05,742 - Group['users'] {}
2017-07-24 09:22:05,742 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,742 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,743 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 09:22:05,743 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,744 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 09:22:05,744 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 09:22:05,745 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,745 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,746 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,746 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,747 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,747 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,748 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 09:22:05,748 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 09:22:05,749 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-07-24 09:22:05,754 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-07-24 09:22:05,754 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-07-24 09:22:05,755 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 09:22:05,756 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-07-24 09:22:05,760 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-07-24 09:22:05,760 - Group['hdfs'] {}
2017-07-24 09:22:05,760 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-07-24 09:22:05,761 - FS Type:
2017-07-24 09:22:05,761 - Directory['/etc/hadoop'] {'mode': 0755}
2017-07-24 09:22:05,773 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-07-24 09:22:05,774 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-07-24 09:22:05,787 - Initializing 2 repositories
2017-07-24 09:22:05,788 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-07-24 09:22:05,794 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 09:22:05,794 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-07-24 09:22:05,797 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 09:22:05,797 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:05,913 - Skipping installation of existing package unzip
2017-07-24 09:22:05,913 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:05,946 - Skipping installation of existing package curl
2017-07-24 09:22:05,947 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:05,979 - Skipping installation of existing package hdp-select
2017-07-24 09:22:06,141 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-07-24 09:22:06,143 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-07-24 09:22:06,163 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-07-24 09:22:06,175 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-07-24 09:22:06,202 - checked_call returned (0, '2.5.6.0-40', '')
2017-07-24 09:22:06,208 - Package['hadoop_2_5_6_0_40'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:06,325 - Skipping installation of existing package hadoop_2_5_6_0_40
2017-07-24 09:22:06,326 - Package['hadoop_2_5_6_0_40-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 09:22:06,360 - Installing package hadoop_2_5_6_0_40-client ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-client')
2017-07-24 09:22:07,727 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-client' returned 1. Error: Package: hadoop_2_5_6_0_40-hdfs-2.7.3.2.5.6.0-40.el6.x86_64 (HDP-2.5)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2017-07-24 09:22:07,727 - Failed to install package hadoop_2_5_6_0_40-client. Executing '/usr/bin/yum clean metadata'
2017-07-24 09:22:07,938 - Retrying to install package hadoop_2_5_6_0_40-client after 30 seconds
Command failed after 1 tries
07-24-2017
01:21 PM
2017-07-24 08:55:57,187 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-07-24 08:55:57,193 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-07-24 08:55:57,194 - Group['hadoop'] {}
2017-07-24 08:55:57,195 - Group['users'] {}
2017-07-24 08:55:57,195 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,196 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,196 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,197 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,197 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,198 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,198 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,199 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,199 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,200 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,200 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,201 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,201 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,202 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 08:55:57,203 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-07-24 08:55:57,207 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-07-24 08:55:57,207 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-07-24 08:55:57,208 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 08:55:57,209 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-07-24 08:55:57,213 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-07-24 08:55:57,213 - Group['hdfs'] {}
2017-07-24 08:55:57,213 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-07-24 08:55:57,214 - FS Type:
2017-07-24 08:55:57,214 - Directory['/etc/hadoop'] {'mode': 0755}
2017-07-24 08:55:57,226 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-07-24 08:55:57,226 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-07-24 08:55:57,238 - Initializing 2 repositories
2017-07-24 08:55:57,238 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-07-24 08:55:57,244 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 08:55:57,245 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-07-24 08:55:57,247 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 08:55:57,248 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,358 - Skipping installation of existing package unzip
2017-07-24 08:55:57,358 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,390 - Skipping installation of existing package curl
2017-07-24 08:55:57,390 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,422 - Skipping installation of existing package hdp-select
2017-07-24 08:55:57,592 - Package['hadoop_2_5_6_0_40-yarn'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,705 - Skipping installation of existing package hadoop_2_5_6_0_40-yarn
2017-07-24 08:55:57,706 - Package['hadoop_2_5_6_0_40-mapreduce'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,738 - Skipping installation of existing package hadoop_2_5_6_0_40-mapreduce
2017-07-24 08:55:57,739 - Package['hadoop_2_5_6_0_40-hdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,771 - Installing package hadoop_2_5_6_0_40-hdfs ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-hdfs')
2017-07-24 08:55:59,481 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-hdfs' returned 1. Error: Package: hadoop_2_5_6_0_40-hdfs-2.7.3.2.5.6.0-40.el6.x86_64 (HDP-2.5)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2017-07-24 08:55:59,482 - Failed to install package hadoop_2_5_6_0_40-hdfs. Executing '/usr/bin/yum clean metadata'
2017-07-24 08:55:59,686 - Retrying to install package hadoop_2_5_6_0_40-hdfs after 30 seconds
Command failed after 1 tries
07-24-2017
01:18 PM
2017-07-24 08:55:57,187 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-07-24 08:55:57,193 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-07-24 08:55:57,194 - Group['hadoop'] {}
2017-07-24 08:55:57,195 - Group['users'] {}
2017-07-24 08:55:57,195 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,196 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,196 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,197 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,197 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,198 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-07-24 08:55:57,198 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,199 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,199 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,200 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,200 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,201 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,201 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-07-24 08:55:57,202 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 08:55:57,203 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-07-24 08:55:57,207 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-07-24 08:55:57,207 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-07-24 08:55:57,208 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-07-24 08:55:57,209 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-07-24 08:55:57,213 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-07-24 08:55:57,213 - Group['hdfs'] {}
2017-07-24 08:55:57,213 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-07-24 08:55:57,214 - FS Type:
2017-07-24 08:55:57,214 - Directory['/etc/hadoop'] {'mode': 0755}
2017-07-24 08:55:57,226 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-07-24 08:55:57,226 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-07-24 08:55:57,238 - Initializing 2 repositories
2017-07-24 08:55:57,238 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-07-24 08:55:57,244 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.6.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 08:55:57,245 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-07-24 08:55:57,247 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-07-24 08:55:57,248 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,358 - Skipping installation of existing package unzip
2017-07-24 08:55:57,358 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,390 - Skipping installation of existing package curl
2017-07-24 08:55:57,390 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,422 - Skipping installation of existing package hdp-select
2017-07-24 08:55:57,592 - Package['hadoop_2_5_6_0_40-yarn'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,705 - Skipping installation of existing package hadoop_2_5_6_0_40-yarn
2017-07-24 08:55:57,706 - Package['hadoop_2_5_6_0_40-mapreduce'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,738 - Skipping installation of existing package hadoop_2_5_6_0_40-mapreduce
2017-07-24 08:55:57,739 - Package['hadoop_2_5_6_0_40-hdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-07-24 08:55:57,771 - Installing package hadoop_2_5_6_0_40-hdfs ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-hdfs')
2017-07-24 08:55:59,481 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-hdfs' returned 1. Error: Package: hadoop_2_5_6_0_40-hdfs-2.7.3.2.5.6.0-40.el6.x86_64 (HDP-2.5)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2017-07-24 08:55:59,482 - Failed to install package hadoop_2_5_6_0_40-hdfs. Executing '/usr/bin/yum clean metadata'
2017-07-24 08:55:59,686 - Retrying to install package hadoop_2_5_6_0_40-hdfs after 30 seconds
Command failed after 1 tries
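The root cause is in the yum error above: the `hadoop_2_5_6_0_40-hdfs` package requires `libtirpc-devel`, which is not shipped in the HDP or HDP-UTILS repositories — it comes from the OS repos (on RHEL it lives in the optional channel; on CentOS it is in base). A hedged sketch of the fix, assuming the node has a working OS base repo (the package and repo names below are the usual CentOS 7 ones, adjust for your distribution):

```shell
# Confirm the dependency is actually missing (rpm exits non-zero if not installed)
rpm -q libtirpc-devel || echo "libtirpc-devel is not installed"

# On RHEL 7 the package sits in the optional channel, which may need enabling first:
#   subscription-manager repos --enable=rhel-7-server-optional-rpms
# On CentOS 7 it is available from the default base repo.

# Install the missing dependency, then retry the package that failed:
yum install -y libtirpc libtirpc-devel
yum install -y hadoop_2_5_6_0_40-hdfs
```

Once `libtirpc-devel` resolves, re-run the failed install operation from Ambari and the remaining HDFS packages should install cleanly.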