Member since: 06-26-2017
Posts: 34
Kudos Received: 0
Solutions: 0
03-29-2019
08:06 PM
Hi Tomas, yes, I verified the groups for user hdpuser, and blrgroup was missing for hdpuser on the NN; that was the problem. I have now added hdpuser to blrgroup, same as on the edge node, using the usermod command, and it is working fine. Thanks again for your kind support.
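For reference, a minimal sketch of the usermod fix described above, assuming the user and group names from this thread:
# usermod -aG blrgroup hdpuser    (append blrgroup to hdpuser's supplementary groups on the NN)
# id hdpuser                      (confirm blrgroup now appears in the groups list)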
03-29-2019
07:48 PM
Hi Tomas, thank you for the reply! Yes, both the NN and the edge node have blrgroup with the same GID and the same permissions. Here are the screenshots:
03-29-2019
04:25 AM
Hi Team, I am not able to load files into the /oradata3 directory; I am getting a permission denied error. I logged in as user "hdpuser" and tried to place files in /oradata3. hdpuser is part of hdpadmin (a normal group, just the name) and of blrgroup. /oradata3 is owned by blruser and blrgroup, and I gave the directory permission 775, so every user who is part of blrgroup should now be able to load data there, right? These groups exist on all cluster machines. Please see the attachment for more detail: permission.PNG
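A minimal sketch of the checks involved, assuming /oradata3 is an HDFS path (the NN reference later in this thread suggests it is) and the names above:
$ hdfs dfs -ls -d /oradata3          # expect drwxrwxr-x with owner blruser and group blrgroup
$ id hdpuser                         # run on the NameNode host: blrgroup must be listed
$ hdfs dfs -put somefile /oradata3/  # should succeed for members of blrgroup
By default HDFS resolves group membership on the NameNode, so hdpuser must belong to blrgroup there, not only on the edge node.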
Labels:
- Apache Hadoop
03-28-2019
09:17 PM
Hi, I am not able to load files into the /oradata3 directory; I am getting a permission denied error. I logged in as user "hdpuser" and tried to place files in /oradata3. hdpuser is part of hdpadmin (a normal group, just the name) and of blrgroup. /oradata3 is owned by blruser and blrgroup, and I gave the directory permission 775, so every user who is part of blrgroup should now be able to load data there, right? Please see the attachment for more detail.
Labels:
- HDFS
02-22-2019
07:50 PM
dfs permissions.JPG
I have created a Linux user named "balu" (UID 2305) on all machines. This is a local Linux account: no LDAP, no AD.
[root@cn1 ~]# id balu
uid=2305(balu) gid=2305(balu) groups=2305(balu),2003(hadoop)
[root@cn1 ~]#
Above is the user and group info. I added a group named "balu" to the dfs permissions parameter, as attached. My requirement is to be able to execute all my HDFS admin commands from this user account (balu). Can you help me with what I should do to meet this?
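A minimal sketch of the relevant HDFS setting, assuming the group name above; the "dfs permissions" field in the screenshot appears to correspond to dfs.permissions.superusergroup in hdfs-site.xml, which makes every member of that group an HDFS superuser:
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>balu</value>
</property>
HDFS must be restarted for the change to take effect, and the group must exist on the NameNode host (it does here, per the id output).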
Labels:
- Apache Hadoop
06-25-2018
04:20 PM
Hi team, below is the error I get when I try to start the Ambari server. Please help.
# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........Unable to determine server PID. Retrying...
......Unable to determine server PID. Retrying...
......Unable to determine server PID. Retrying...
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 1. Check /var/log/ambari-server/ambari-server.out for more information.
[root@cdhnifidvl ~]# more /var/log/ambari-server/ambari-server.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Error occurred during initialization of VM
java/lang/ClassNotFoundException: error in opening JAR file <Zip file open error> /usr/jdk64/jdk1.8.0_112/jre/lib/rt.jar
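An "error in opening JAR file" on rt.jar usually indicates a corrupted or truncated JDK archive. A minimal sketch of confirming that, assuming the JDK path from the log:
# /usr/jdk64/jdk1.8.0_112/bin/java -version          (fails the same way if rt.jar is damaged)
# unzip -t /usr/jdk64/jdk1.8.0_112/jre/lib/rt.jar    (tests the archive's integrity)
If the archive is broken, re-extracting or reinstalling the JDK and then restarting ambari-server is the usual remedy.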
Tags:
- hadoop
- Hadoop Core
Labels:
- Apache Hadoop
05-07-2018
03:46 AM
Hi Team, I need to configure RM HA in my QA environment. I am looking at the URL below to proceed: https://www.cloudera.com/documentation/enterprise/5-12-x/topics/cdh_hag_rm_ha_config.html#xd_583c10bfdbd326ba--43d5fd93-1410993f8c2--7f77
However, the existing cluster has TLS 1.0 security enabled for all components. In that case, can I follow the URL above to enable RM HA, or do I need any extra steps?
Also, please confirm: work-preserving recovery is enabled by default on the RM but not on the NMs, correct? Is it mandatory to enable work-preserving recovery on the NMs?
If I configure RM HA, will the JobHistory Server be removed from the cluster? And do any components other than the YARN components (NM, RM) need to be restarted?
ZooKeeper is already configured and NameNode HA is enabled. Please help me get this done.
Thanks, Balaji Vemula
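For the work-preserving-recovery part of the question, a minimal sketch of the stock yarn-site.xml properties (the manager UI exposes equivalent checkboxes):
<property>
  <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
RM recovery and NM recovery are separate switches, which is why the RM default does not imply anything about the NodeManagers.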
Labels:
- Apache YARN
- Cloudera Manager
05-03-2018
01:33 AM
Thank you for your support
05-02-2018
06:01 AM
Thanks for your support
05-01-2018
07:02 AM
Thanks for your response. So "stop cluster" means it will stop all the components, right (including both NameNodes and the other components like Sqoop, Hive, YARN, etc.)? If I am not wrong...
04-30-2018
06:55 AM
Hi Team, good morning! Why stop the NameNode services to take a backup of the NameNode metadata? Instead of stopping the NameNode services, we can turn safe mode "ON" for the NameNode, so the cluster goes into a read-only state, and then take the backup using:
# tar -cvf <backup-name> <metadata-path>
I am using the Cloudera distribution, and in the URL below I found instructions to stop the services before taking the backup. Why not the safe mode option? Please help me: https://www.cloudera.com/documentation/enterprise/5-4-x/topics/cm_mc_hdfs_metadata_backup.html
Thanks, Balaji Vemula
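A minimal sketch of the safe-mode approach being proposed, assuming a hypothetical metadata directory /dfs/nn:
$ hdfs dfsadmin -safemode enter     # make the namespace read-only
$ hdfs dfsadmin -saveNamespace      # flush a consistent fsimage to disk (requires safe mode)
$ tar -cvf nn-backup.tar /dfs/nn    # archive the metadata directory
$ hdfs dfsadmin -safemode leave
Without the saveNamespace step the on-disk fsimage can lag behind the edit log, which is one reason the documented procedure stops the NameNode instead.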
03-31-2018
09:40 PM
Thank you! Kerberos was configured successfully, but when I try to execute commands I cannot work with my cluster; I get the error below. Please help.
[hdfs@cn1 ~]$ hdfs dfsadmin -safemode get
18/03/31 09:16:48 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/03/31 09:16:48 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/03/31 09:16:48 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
safemode: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "cn1.hadoop.com/192.168.56.121"; destination host is: "cn1.hadoop.com":8020;
Thanks, Balaji Vemula
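"Failed to find any Kerberos tgt" means the client has no valid ticket in its cache. A minimal sketch of obtaining one, assuming a hypothetical principal and keytab path:
$ klist                                                           # check the current ticket cache
$ kinit hdfs@HADOOP.COM                                           # password-based, or:
$ kinit -kt /path/to/hdfs.keytab hdfs/cn1.hadoop.com@HADOOP.COM   # keytab-based
$ hdfs dfsadmin -safemode get                                     # retry once klist shows a ticket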
03-29-2018
06:18 AM
Thank you so much. The issue was the encryption type, as you said; I added the default one and the issue is fixed.
03-27-2018
06:14 AM
I configured Kerberos manually as below, then enabled Kerberos from CM.
$ yum install krb5-server krb5-libs krb5-workstation
Updated krb5.conf with the following config:
default_realm = HADOOP.COM
[realms]
HADOOP.COM = {
  kdc = cm.hadoop.com
  admin_server = cm.hadoop.com
}
[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM
Then the Kerberos libs and workstation packages were installed on all client machines, and krb5.conf was copied to all hosts.
Used "kdb5_util create -s" to create the KDC database, and it went well.
Next, created the admin principal with kadmin.local -q "addprinc admin/admin"; it was created successfully.
Then granted access in the ACL file under /var/kerberos/krb5kdc/ and started both services (krb5kdc, kadmin).
I verified whether it works using kinit (kinit root/admin@HADOOP.COM), and it obtained the TGT successfully.
Now I logged into CM, enabled Kerberos from the Administration option, and I get the error from my first post.
One more doubt: do we need to configure AD or LDAP before enabling Kerberos?
Thanks, Balaji
03-25-2018
09:19 AM
Unable to configure Kerberos from Cloudera Manager.
I have installed the packages as below:
#yum install krb5-libs krb5-server krb5-workstation -y
Kerberos got installed, and I modified /etc/krb5.conf accordingly, started both services (krb5kdc, kadmin), created the KDC DB, created the admin principal, granted admin access in the kadm5.acl file, and verified it all went well without any issue, as below:
[root@cm krb5kdc]# klist
Ticket cache: KEYRING:persistent:0:0
Default principal: admin/admin@HADOOP.COM
Valid starting       Expires              Service principal
03/24/2018 10:29:34  03/25/2018 10:29:34  krbtgt/HADOOP.COM@HADOOP.COM
[root@cm krb5kdc]#
But while enabling Kerberos from Cloudera Manager, after the next few clicks I get the error message as attached below. Can you please help with this?
I have also seen a "server not found" entry in the krb5kdc.log file on the KDC server, as attached below.
Kindly help me here
Thanks
Balaji Vemula
Labels:
- Cloudera Manager
- Kerberos
03-06-2018
05:15 AM
Hi Team, when you enable RM HA, how do you know which RM state store is configured in Cloudera? I understand that ZKRMStateStore is configured by default, but where can we see the related parameter? I have read that yarn.resourcemanager.zk-address gets added and set to the ZooKeeper server name (or ZooKeeper path), but I could not find that option itself.
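A minimal sketch of the two stock YARN properties involved, assuming default names (they may live in the generated yarn-site.xml rather than the UI); the ZooKeeper quorum shown is a hypothetical placeholder:
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>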
Labels:
- Cloudera Manager
03-06-2018
05:07 AM
Thank you so much for your reply, but my question is something different!
Tags:
- ou
01-22-2018
09:25 AM
Hi Team,
My name is Balaji, and I work as a Cloudera Hadoop admin.
During the Cloudera CM and CDH installation we disabled the postfix service; CM and CDH are now running fine.
Now I want to enable postfix to send notifications. Can I go ahead and enable it, or is there any impact if I enable it? Please let me know.
Please suggest on this.
Thanks
Balaji Vemula
Labels:
- Cloudera Manager
07-17-2017
12:17 AM
Hi Team, I am unable to start the NameNodes after enabling HDFS high availability in my environment. I request your help on this. Here are the logs:
Mon Jul 17 02:47:24 EDT 2017
+ source_parcel_environment
+ '[' '!' -z '' ']'
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /usr/libexec/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/../bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/usr/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /usr/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ . /etc/default/bigtop-utils
++ JAVA6_HOME_CANDIDATES=('/usr/lib/j2sdk1.6-sun' '/usr/lib/jvm/java-6-sun' '/usr/lib/jvm/java-1.6.0-sun-1.6.0' '/usr/lib/jvm/j2sdk1.6-oracle' '/usr/lib/jvm/j2sdk1.6-oracle/jre' '/usr/java/jdk1.6' '/usr/java/jre1.6')
++ OPENJAVA6_HOME_CANDIDATES=('/usr/lib/jvm/java-1.6.0-openjdk' '/usr/lib/jvm/jre-1.6.0-openjdk')
++ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')
++ OPENJAVA7_HOME_CANDIDATES=('/usr/lib/jvm/java-1.7.0-openjdk' '/usr/lib/jvm/java-7-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${JAVA7_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA7_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jdk1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jre1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle/jre*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/java-7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd /usr/java/jdk1.8.0_112
++ for candidate in '`ls -rvd ${candidate_regex}* 2>/dev/null`'
++ '[' -e /usr/java/jdk1.8.0_112/bin/java ']'
++ export JAVA_HOME=/usr/java/jdk1.8.0_112
++ JAVA_HOME=/usr/java/jdk1.8.0_112
++ break 2
+ verify_java_home
+ '[' -z /usr/java/jdk1.8.0_112 ']'
+ echo JAVA_HOME=/usr/java/jdk1.8.0_112
+ . /usr/lib64/cmf/service/common/cdh-default-hadoop
++ [[ -z 5 ]]
++ '[' 5 = 3 ']'
++ '[' 5 = -3 ']'
++ '[' 5 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/usr/lib/hadoop
++ HADOOP_PREFIX=/usr/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ export HADOOP_COMMON_HOME=/usr/lib/hadoop
++ HADOOP_COMMON_HOME=/usr/lib/hadoop
++ export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ '[' 5 = 4 ']'
++ '[' 5 = 5 ']'
++ export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ replace_pid -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
++ sed 's#{{PID}}#4086#g'
++ echo -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
+ export 'HADOOP_NAMENODE_OPTS=-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
+ HADOOP_NAMENODE_OPTS='-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
++ replace_pid
++ sed 's#{{PID}}#4086#g'
++ echo
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ sed 's#{{PID}}#4086#g'
++ echo
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ sed 's#{{PID}}#4086#g'
++ echo
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#4086#g'
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ '[' 5 -ge 4 ']'
+ HDFS_BIN=/usr/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ echo 'using /usr/java/jdk1.8.0_112 as JAVA_HOME'
+ echo 'using 5 as CDH_VERSION'
+ echo 'using /run/cloudera-scm-agent/process/165-hdfs-NAMENODE as CONF_DIR'
+ echo 'using as SECURE_USER'
+ echo 'using as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /usr/share/cmf ]]
++ tr '\n' :
++ find /usr/share/cmf/lib/plugins -maxdepth 1 -name '*.jar'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:
+ [[ -n navigator/cdh57 ]]
+ for DIR in '$CM_ADD_TO_CP_DIRS'
++ find /usr/share/cmf/lib/plugins/navigator/cdh57 -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ PLUGIN=/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ set -x
+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -type f '!' -path '/run/cloudera-scm-agent/process/165-hdfs-NAMENODE/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/165-hdfs-NAMENODE#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/165-hdfs-NAMENODE/supervisor.conf: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' jnSyncWait = namenode ']'
+ '[' nnRpcWait = namenode ']'
+ '[' -safemode = '' -a get = '' ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' rolling-upgrade-prepare = namenode ']'
+ '[' rolling-upgrade-finalize = namenode ']'
+ '[' nnDnLiveWait = namenode ']'
+ '[' refresh-datanode = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' nfs3 = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ '[' namenode = namenode -a rollingUpgrade = '' ']'
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/165-hdfs-NAMENODE namenode
Mon Jul 17 02:47:31 EDT 2017
+ source_parcel_environment
+ '[' '!' -z '' ']'
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /usr/libexec/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/../bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/usr/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /usr/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ . /etc/default/bigtop-utils
++ JAVA6_HOME_CANDIDATES=('/usr/lib/j2sdk1.6-sun' '/usr/lib/jvm/java-6-sun' '/usr/lib/jvm/java-1.6.0-sun-1.6.0' '/usr/lib/jvm/j2sdk1.6-oracle' '/usr/lib/jvm/j2sdk1.6-oracle/jre' '/usr/java/jdk1.6' '/usr/java/jre1.6')
++ OPENJAVA6_HOME_CANDIDATES=('/usr/lib/jvm/java-1.6.0-openjdk' '/usr/lib/jvm/jre-1.6.0-openjdk')
++ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')
++ OPENJAVA7_HOME_CANDIDATES=('/usr/lib/jvm/java-1.7.0-openjdk' '/usr/lib/jvm/java-7-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${JAVA7_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA7_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jdk1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jre1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle/jre*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/java-7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd /usr/java/jdk1.8.0_112
++ for candidate in '`ls -rvd ${candidate_regex}* 2>/dev/null`'
++ '[' -e /usr/java/jdk1.8.0_112/bin/java ']'
++ export JAVA_HOME=/usr/java/jdk1.8.0_112
++ JAVA_HOME=/usr/java/jdk1.8.0_112
++ break 2
+ verify_java_home
+ '[' -z /usr/java/jdk1.8.0_112 ']'
+ echo JAVA_HOME=/usr/java/jdk1.8.0_112
+ . /usr/lib64/cmf/service/common/cdh-default-hadoop
++ [[ -z 5 ]]
++ '[' 5 = 3 ']'
++ '[' 5 = -3 ']'
++ '[' 5 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/usr/lib/hadoop
++ HADOOP_PREFIX=/usr/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ export HADOOP_COMMON_HOME=/usr/lib/hadoop
++ HADOOP_COMMON_HOME=/usr/lib/hadoop
++ export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ '[' 5 = 4 ']'
++ '[' 5 = 5 ']'
++ export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ replace_pid -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
++ sed 's#{{PID}}#4220#g'
++ echo -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
+ export 'HADOOP_NAMENODE_OPTS=-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
+ HADOOP_NAMENODE_OPTS='-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
++ replace_pid
++ echo
++ sed 's#{{PID}}#4220#g'
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#4220#g'
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#4220#g'
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#4220#g'
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ '[' 5 -ge 4 ']'
+ HDFS_BIN=/usr/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ echo 'using /usr/java/jdk1.8.0_112 as JAVA_HOME'
+ echo 'using 5 as CDH_VERSION'
+ echo 'using /run/cloudera-scm-agent/process/165-hdfs-NAMENODE as CONF_DIR'
+ echo 'using as SECURE_USER'
+ echo 'using as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /usr/share/cmf ]]
++ tr '\n' :
++ find /usr/share/cmf/lib/plugins -maxdepth 1 -name '*.jar'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:
+ [[ -n navigator/cdh57 ]]
+ for DIR in '$CM_ADD_TO_CP_DIRS'
++ find /usr/share/cmf/lib/plugins/navigator/cdh57 -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ PLUGIN=/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ set -x
+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -type f '!' -path '/run/cloudera-scm-agent/process/165-hdfs-NAMENODE/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/165-hdfs-NAMENODE#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/165-hdfs-NAMENODE/supervisor.conf: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' jnSyncWait = namenode ']'
+ '[' nnRpcWait = namenode ']'
+ '[' -safemode = '' -a get = '' ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' rolling-upgrade-prepare = namenode ']'
+ '[' rolling-upgrade-finalize = namenode ']'
+ '[' nnDnLiveWait = namenode ']'
+ '[' refresh-datanode = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' nfs3 = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ '[' namenode = namenode -a rollingUpgrade = '' ']'
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/165-hdfs-NAMENODE namenode
Mon Jul 17 02:47:39 EDT 2017
+ source_parcel_environment
+ '[' '!' -z '' ']'
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /usr/libexec/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/../bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/usr/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /usr/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ . /etc/default/bigtop-utils
++ JAVA6_HOME_CANDIDATES=('/usr/lib/j2sdk1.6-sun' '/usr/lib/jvm/java-6-sun' '/usr/lib/jvm/java-1.6.0-sun-1.6.0' '/usr/lib/jvm/j2sdk1.6-oracle' '/usr/lib/jvm/j2sdk1.6-oracle/jre' '/usr/java/jdk1.6' '/usr/java/jre1.6')
++ OPENJAVA6_HOME_CANDIDATES=('/usr/lib/jvm/java-1.6.0-openjdk' '/usr/lib/jvm/jre-1.6.0-openjdk')
++ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')
++ OPENJAVA7_HOME_CANDIDATES=('/usr/lib/jvm/java-1.7.0-openjdk' '/usr/lib/jvm/java-7-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${JAVA7_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA7_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jdk1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jre1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle/jre*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/java-7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd /usr/java/jdk1.8.0_112
++ for candidate in '`ls -rvd ${candidate_regex}* 2>/dev/null`'
++ '[' -e /usr/java/jdk1.8.0_112/bin/java ']'
++ export JAVA_HOME=/usr/java/jdk1.8.0_112
++ JAVA_HOME=/usr/java/jdk1.8.0_112
++ break 2
+ verify_java_home
+ '[' -z /usr/java/jdk1.8.0_112 ']'
+ echo JAVA_HOME=/usr/java/jdk1.8.0_112
+ . /usr/lib64/cmf/service/common/cdh-default-hadoop
++ [[ -z 5 ]]
++ '[' 5 = 3 ']'
++ '[' 5 = -3 ']'
++ '[' 5 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/usr/lib/hadoop
++ HADOOP_PREFIX=/usr/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ export HADOOP_COMMON_HOME=/usr/lib/hadoop
++ HADOOP_COMMON_HOME=/usr/lib/hadoop
++ export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ '[' 5 = 4 ']'
++ '[' 5 = 5 ']'
++ export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ replace_pid -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
++ echo -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
++ sed 's#{{PID}}#4328#g'
+ export 'HADOOP_NAMENODE_OPTS=-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
+ HADOOP_NAMENODE_OPTS='-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
++ replace_pid
++ sed 's#{{PID}}#4328#g'
++ echo
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ sed 's#{{PID}}#4328#g'
++ echo
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#4328#g'
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ sed 's#{{PID}}#4328#g'
++ echo
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ '[' 5 -ge 4 ']'
+ HDFS_BIN=/usr/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ echo 'using /usr/java/jdk1.8.0_112 as JAVA_HOME'
+ echo 'using 5 as CDH_VERSION'
+ echo 'using /run/cloudera-scm-agent/process/165-hdfs-NAMENODE as CONF_DIR'
+ echo 'using as SECURE_USER'
+ echo 'using as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /usr/share/cmf ]]
++ tr '\n' :
++ find /usr/share/cmf/lib/plugins -maxdepth 1 -name '*.jar'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:
+ [[ -n navigator/cdh57 ]]
+ for DIR in '$CM_ADD_TO_CP_DIRS'
++ tr '\n' :
++ find /usr/share/cmf/lib/plugins/navigator/cdh57 -maxdepth 1 -name '*.jar'
+ PLUGIN=/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ set -x
+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -type f '!' -path '/run/cloudera-scm-agent/process/165-hdfs-NAMENODE/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/165-hdfs-NAMENODE#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/165-hdfs-NAMENODE/supervisor.conf: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' jnSyncWait = namenode ']'
+ '[' nnRpcWait = namenode ']'
+ '[' -safemode = '' -a get = '' ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' rolling-upgrade-prepare = namenode ']'
+ '[' rolling-upgrade-finalize = namenode ']'
+ '[' nnDnLiveWait = namenode ']'
+ '[' refresh-datanode = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' nfs3 = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ '[' namenode = namenode -a rollingUpgrade = '' ']'
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/165-hdfs-NAMENODE namenode
Mon Jul 17 02:47:48 EDT 2017
+ source_parcel_environment
+ '[' '!' -z '' ']'
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /usr/libexec/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/libexec/bigtop-utils/../bigtop-detect-javahome ']'
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/usr/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /usr/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /usr/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ . /etc/default/bigtop-utils
++ JAVA6_HOME_CANDIDATES=('/usr/lib/j2sdk1.6-sun' '/usr/lib/jvm/java-6-sun' '/usr/lib/jvm/java-1.6.0-sun-1.6.0' '/usr/lib/jvm/j2sdk1.6-oracle' '/usr/lib/jvm/j2sdk1.6-oracle/jre' '/usr/java/jdk1.6' '/usr/java/jre1.6')
++ OPENJAVA6_HOME_CANDIDATES=('/usr/lib/jvm/java-1.6.0-openjdk' '/usr/lib/jvm/jre-1.6.0-openjdk')
++ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')
++ OPENJAVA7_HOME_CANDIDATES=('/usr/lib/jvm/java-1.7.0-openjdk' '/usr/lib/jvm/java-7-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${JAVA7_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA7_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jdk1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/java/jre1.7*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/j2sdk1.7-oracle/jre*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd '/usr/lib/jvm/java-7-oracle*'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd /usr/java/jdk1.8.0_112
++ for candidate in '`ls -rvd ${candidate_regex}* 2>/dev/null`'
++ '[' -e /usr/java/jdk1.8.0_112/bin/java ']'
++ export JAVA_HOME=/usr/java/jdk1.8.0_112
++ JAVA_HOME=/usr/java/jdk1.8.0_112
++ break 2
+ verify_java_home
+ '[' -z /usr/java/jdk1.8.0_112 ']'
+ echo JAVA_HOME=/usr/java/jdk1.8.0_112
+ . /usr/lib64/cmf/service/common/cdh-default-hadoop
++ [[ -z 5 ]]
++ '[' 5 = 3 ']'
++ '[' 5 = -3 ']'
++ '[' 5 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/usr/lib/hadoop
++ HADOOP_PREFIX=/usr/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/165-hdfs-NAMENODE
++ export HADOOP_COMMON_HOME=/usr/lib/hadoop
++ HADOOP_COMMON_HOME=/usr/lib/hadoop
++ export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
++ '[' 5 = 4 ']'
++ '[' 5 = 5 ']'
++ export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
++ replace_pid -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
++ sed 's#{{PID}}#4438#g'
++ echo -Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
+ export 'HADOOP_NAMENODE_OPTS=-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
+ HADOOP_NAMENODE_OPTS='-Xms395313152 -Xmx395313152 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
++ replace_pid
++ echo
++ sed 's#{{PID}}#4438#g'
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ sed 's#{{PID}}#4438#g'
++ echo
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ sed 's#{{PID}}#4438#g'
++ echo
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ sed 's#{{PID}}#4438#g'
++ echo
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ '[' 5 -ge 4 ']'
+ HDFS_BIN=/usr/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ echo 'using /usr/java/jdk1.8.0_112 as JAVA_HOME'
+ echo 'using 5 as CDH_VERSION'
+ echo 'using /run/cloudera-scm-agent/process/165-hdfs-NAMENODE as CONF_DIR'
+ echo 'using as SECURE_USER'
+ echo 'using as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /usr/share/cmf ]]
++ tr '\n' :
++ find /usr/share/cmf/lib/plugins -maxdepth 1 -name '*.jar'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:
+ [[ -n navigator/cdh57 ]]
+ for DIR in '$CM_ADD_TO_CP_DIRS'
++ tr '\n' :
++ find /usr/share/cmf/lib/plugins/navigator/cdh57 -maxdepth 1 -name '*.jar'
+ PLUGIN=/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar:
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.8.3-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.8.3.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.7.3-shaded.jar
+ set -x
+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -type f '!' -path '/run/cloudera-scm-agent/process/165-hdfs-NAMENODE/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/165-hdfs-NAMENODE#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/165-hdfs-NAMENODE/supervisor.conf: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/165-hdfs-NAMENODE -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' jnSyncWait = namenode ']'
+ '[' nnRpcWait = namenode ']'
+ '[' -safemode = '' -a get = '' ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' rolling-upgrade-prepare = namenode ']'
+ '[' rolling-upgrade-finalize = namenode ']'
+ '[' nnDnLiveWait = namenode ']'
+ '[' refresh-datanode = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' nfs3 = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ '[' namenode = namenode -a rollingUpgrade = '' ']'
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/165-hdfs-NAMENODE namenode
05-02-2017
05:18 AM
I tried using ACLs on HDFS with the command below, and it is working fine:
# hdfs dfs -setfacl -m user:venky:rwx /apps
Now venky is able to write files to HDFS. But when I grant the same directory access to users from Ranger, it does not work. Can you help me?
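A minimal sketch of verifying the effective ACL, assuming the path above:
# hdfs dfs -getfacl /apps        (the user:venky:rwx entry should appear in the output)
Note that an HDFS ACL can grant access on its own, so a working write does not prove the Ranger policy is what granted it.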
05-02-2017
04:54 AM
Geoffrey, thank you for your response.
capture.png — now the file permissions are with the same user, venky, at the OS level, and I am still getting the same error, as in the screenshot below.
Could you help with how to set the appropriate permissions in HDFS?
05-01-2017
12:17 PM
ranger.png ranger1.png
Hi, I have configured Ranger on my Ambari machine (ham.hadoop.com), created a Linux user account on the same machine, and am trying to put data into HDFS from that account. It throws the error below:
[venky@ham ~]$ hdfs dfs -put /home/venky/sam3.txt /apps
put: Permission denied: user=venky, access=WRITE, inode="/apps/sam3.txt._COPYING_":hdfs:hdfs:drwxr-xr-x
[venky@ham ~]$
I followed the attached screenshots to enable access for the venky account on HDFS in Ranger: I clicked Test Connection and it passed successfully; a service was created under HDFS with the policy name; I clicked Edit from the Action menu, entered the resource path, selected user venky under user and group permissions, granted RWX, and saved. But no luck. Can anyone help me?
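A minimal sketch of inspecting the target directory, assuming the path above:
[venky@ham ~]$ hdfs dfs -ls -d /apps     # shows owner, group, and mode (hdfs:hdfs drwxr-xr-x here)
With that mode, a write by venky is denied unless a Ranger policy (or an HDFS ACL) grants WRITE on /apps, so the policy's resource path and its recursive flag are worth double-checking.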
Tags:
- hadoop
- Hadoop Core
Labels:
- Apache Hadoop
03-27-2017
02:57 AM
Hi Team, good morning. After enabling NN HA, the History and App Timeline services stopped running, and trying to start them has had no luck. Below is the error I get when I try to start the History Server. I need your help on this, please.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 182, in <module>
HistoryServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 92, in start
self.configure(env) # FOR SECURITY
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 55, in configure
yarn(name="historyserver")
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 72, in yarn
recursive_chmod=True
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 427, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 424, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 261, in action_delayed
main_resource.resource.security_enabled, main_resource.resource.logoutput)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 129, in __init__
security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 144, in get_property_for_active_namenode
active_namenodes = get_namenode_states(hdfs_site, security_enabled, run_user)[0]
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 57, in get_namenode_states
return doRetries(hdfs_site, security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/decorator.py", line 48, in wrapper
return function(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 45, in doRetries
active_namenodes, standby_namenodes, unknown_namenodes = get_namenode_states_noretries(hdfs_site, security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 68, in get_namenode_states_noretries
nn_unique_ids_key = 'dfs.ha.namenodes.' + name_service
TypeError: cannot concatenate 'str' and 'int' objects
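The traceback ends in namenode_ha_utils.py concatenating a string with a non-string name_service. A minimal illustrative sketch of the failing expression and a cast that would avoid it (the practical fix is usually an Ambari patch/upgrade or a non-numeric nameservice name):
# fails when name_service is parsed as an int (e.g., a purely numeric nameservice ID)
nn_unique_ids_key = 'dfs.ha.namenodes.' + name_service
# tolerates either type
nn_unique_ids_key = 'dfs.ha.namenodes.' + str(name_service)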
Labels:
- Apache Hadoop
03-12-2017
06:36 PM
Jay SenSharma, thank you so much. I ran the HostCleanup.py script on all the nodes and checked the info about the packages; nothing was found, but I am still getting the same error on the Ambari page. amb3.txt
03-12-2017
04:12 PM
Thank you, Jay SenSharma, for your help. I am now able to install HDP on my Ambari server, but I could not install it on the other nodes. I am getting the error message as attached. Please help. amb2.txt amb1.png
03-12-2017
12:17 PM
input.txt Jay, thanks for your quick response. Please find the attached output.
03-12-2017
11:02 AM
Hi, I am not able to install HDP from Ambari; below are the failed logs. Can someone help with this? On the Ambari status page I get statuses like "Warnings encountered" and "Failures encountered".
LOGS:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py", line 32, in install_packages
Package(packages)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 49, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. Error: Nothing to do
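The final line, "yum ... install hdp-select returned 1. Error: Nothing to do", means yum could not resolve the hdp-select package from any enabled repository on that host. A minimal sketch of checking the repos, assuming stock yum commands:
# yum clean all
# yum repolist enabled      (an HDP repo should appear in the list)
# yum info hdp-select       (should resolve from the HDP repo)
If the HDP repo file is missing under /etc/yum.repos.d/ on the failing hosts, copying or regenerating it is a common remedy.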
Tags:
- Hadoop Core
- hdp-2.4.0
Labels:
- Hortonworks Data Platform (HDP)