Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2510 | 04-27-2020 03:48 AM |
|  | 4978 | 04-26-2020 06:18 PM |
|  | 4056 | 04-26-2020 06:05 PM |
|  | 3291 | 04-13-2020 08:53 PM |
|  | 5020 | 03-31-2020 02:10 AM |
10-31-2017
02:21 PM
@Jay Kumar SenSharma yarn.resourcemanager.hostname.[RM_ID] was missing in the yarn-site config. Thanks a lot!
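For anyone hitting the same gap: in a ResourceManager HA setup, yarn-site.xml needs one yarn.resourcemanager.hostname.[RM_ID] entry per ID listed in yarn.resourcemanager.ha.rm-ids. A minimal sketch, with hypothetical RM IDs and hostnames:

<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2.example.com</value>
</property>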
10-18-2017
07:39 AM
A few minutes before I saw this post, I had just successfully solved the problem. I had two issues.
First, I had not created the Hive DB (CREATE DATABASE hive;). I based this fix on your post at https://community.hortonworks.com/answers/107905/view.html
The other issue was in the DB URL connection; I changed it to localhost.
I am trying to accept your answer, but I can't; I don't have a button for it?
The next stage is to try it with a non-root install.
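For anyone who lands here, both fixes in one place (a sketch, assuming a MySQL-backed Hive metastore and an existing 'hive' DB user):

# mysql -u root -p
mysql> CREATE DATABASE hive;
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
mysql> FLUSH PRIVILEGES;

Then point the Hive config javax.jdo.option.ConnectionURL at jdbc:mysql://localhost/hive and restart the metastore.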
10-10-2017
01:13 PM
@Jay SenSharma Thank you for your pertinent explanation. Now I have found a way to change the default username through the DB.
10-10-2017
03:01 PM
2 Kudos
@Lou Richard Good to know that the issue is resolved. It would be great if you could mark this HCC thread as answered by clicking the "Accept" button; that way other HCC users can quickly find the solution when they encounter the same issue. As this was a long thread, I am writing a brief summary for HCC users who might run into this issue and want to find the answer quickly.

Issue: Atlas installation was failing with the following error:

  File "/usr/hdp/2.6.1.0-129/atlas/bin/atlas_config.py", line 232, in runProcess
    p = subprocess.Popen(commandline, stdout=stdoutFile, stderr=stderrFile, shell=shell)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Solution: Make sure that JAVA_HOME is set correctly on the host, points to a valid JDK (not a JRE), and is set properly as a global environment variable. Then re-extract the war:

# cd /usr/hdp/2.6.1.0-129/atlas/server/webapp/
# mv atlas ../
# /usr/lib/jvm/jre-1.8.0-openjdk/bin/jar -xf atlas.war

Cause: The "jar" utility comes with the JDK (inside $JAVA_HOME/bin) and is used by the "atlas_config.py" script to extract atlas.war.
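One way to make JAVA_HOME global so that scripts like atlas_config.py can find the JDK tools is a profile.d snippet; a minimal sketch, assuming a hypothetical OpenJDK 8 JDK path (adjust for your host):

# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$JAVA_HOME/bin:$PATH
# source /etc/profile.d/java.sh
# ls $JAVA_HOME/bin/jar

If the last command shows the jar binary, the extraction step above should succeed.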
10-19-2017
09:11 PM
1 Kudo
@dsun During the upgrade process, a component is supposed to be restarted after the hdp-select command has been run so that it picks up the new binaries. The component needs to shut down and start up after the hdp-select command has been run; that way it reports to Ambari that its version has changed and what its current state is. In the event that you get stuck (as you did) during the upgrade, you can unwind the versioning with a process like this:

1. Make sure all pieces of the component are running.
2. Run the `hdp-select set` command on all nodes in the cluster to set the new version. Make sure you get all of the pieces for the component (e.g. hadoop-hdfs-namenode, hadoop-hdfs-journalnode, etc.).
3. Restart all processes for the component.
4. Verify that the O/S processes are running with the proper version of the jar files (see the sketch at the end of this post).
5. Lather, rinse, and repeat for all components in the cluster.

Once you have successfully gotten everything restarted with the proper bits, you should be able to manually finalize the upgrade with the following command on the Ambari Server:

ambari-server set-current --cluster=<clustername> --version-display-name=HDP-2.6.2.0

If you get an error that components are not upgraded, you can check the components and hosts again. If everything seems OK, then you may need to tweak a table in the database. I ran into this when Atlas did not properly report the upgraded version to Ambari. NOTE: THIS SHOULD BE DONE WITH THE GUIDANCE OF HORTONWORKS SUPPORT ONLY.

ambari=> SELECT h.host_name, hcs.service_name, hcs.component_name, hcs.version FROM hostcomponentstate hcs JOIN hosts h ON hcs.host_id = h.host_id ORDER BY hcs.version, hcs.service_name, hcs.component_name, h.host_name;
host_name | service_name | component_name | version
----------------------------------+----------------+-------------------------+-------------
scregione1.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm0.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm1.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm2.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionw0.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionw1.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm0.field.hortonworks.com | ATLAS | ATLAS_SERVER | 2.6.1.0-129
scregionm1.field.hortonworks.com | DRUID | DRUID_BROKER | 2.6.2.0-205
scregionm1.field.hortonworks.com | DRUID | DRUID_COORDINATOR | 2.6.2.0-205
scregionw0.field.hortonworks.com | DRUID | DRUID_HISTORICAL | 2.6.2.0-205
scregionw1.field.hortonworks.com | DRUID | DRUID_HISTORICAL | 2.6.2.0-205
scregionw0.field.hortonworks.com | DRUID | DRUID_MIDDLEMANAGER | 2.6.2.0-205
scregionw1.field.hortonworks.com | DRUID | DRUID_MIDDLEMANAGER | 2.6.2.0-205
scregionm2.field.hortonworks.com | DRUID | DRUID_OVERLORD | 2.6.2.0-205
scregionm2.field.hortonworks.com | DRUID | DRUID_ROUTER | 2.6.2.0-205
scregionm2.field.hortonworks.com | DRUID | DRUID_SUPERSET | 2.6.2.0-205
scregione1.field.hortonworks.com | HBASE | HBASE_CLIENT | 2.6.2.0-205
scregionm0.field.hortonworks.com | HBASE | HBASE_CLIENT | 2.6.2.0-205
scregionm1.field.hortonworks.com | HBASE | HBASE_CLIENT | 2.6.2.0-205
. . .

After verifying that you have, indeed, upgraded the components, a simple update command will set the proper version for the erroneous components and allow you to finalize the upgrade:

ambari=> update hostcomponentstate set version='2.6.2.0-205' where component_name = 'ATLAS_CLIENT';
UPDATE 6
ambari=> update hostcomponentstate set version='2.6.2.0-205' where component_name = 'ATLAS_SERVER';
UPDATE 1
After cycling the Ambari Server, you should be able to finalize:

[root@hostname ~]# ambari-server set-current --cluster=<cluster> --version-display-name=HDP-2.6.2.0
Using python /usr/bin/python
Setting current version...
Enter Ambari Admin login: <username>
Enter Ambari Admin password:
Current version successfully updated to HDP-2.6.2.0
Ambari Server 'set-current' completed successfully.
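For step 4 above (checking that the O/S processes are actually running the new bits), a quick per-node check; a sketch, assuming the target version 2.6.2.0-205:

# hdp-select versions
# hdp-select status | grep -v 2.6.2.0-205
# ps -ef | grep -o '/usr/hdp/[0-9.-]*' | sort -u

The second command flags any component whose symlinks still point at an old version; the third shows which /usr/hdp/<version> paths the running processes actually loaded.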
10-09-2017
03:59 PM
@raouia Do you have any queries regarding my previous update? Please feel free to post an update.
10-05-2017
11:27 AM
@Jay SenSharma Thank you for your detailed explanation.
10-04-2017
05:56 PM
@arjun more If you have the KDC and AD integrated, this simply means the account to which the keytab is related has been disabled, locked, expired, or deleted. The AD service account should NEVER expire. If not, could you validate the steps below:

1. Make sure the [realms] and [domain_realm] entries in /etc/krb5.conf are correct.

2. Validate the contents of these two files: /var/kerberos/krb5kdc/kdc.conf and /var/kerberos/krb5kdc/kadm5.acl.

3. Check the hdfs principal:

# kadmin.local
Authenticating as principal hdfs-uktehdpprod/admin@EUROPE.ODCORP.NET with password.
kadmin.local: listprincs hdfs*
hdfs-uktehdpprod@EUROPE.ODCORP.NET
kadmin.local:

4. Get the correct principal for hdfs:

# klist -kt /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 08/24/2017 15:42:23 hdfs-uktehdpprod@EUROPE.ODCORP.NET
   1 08/24/2017 15:42:23 hdfs-uktehdpprod@EUROPE.ODCORP.NET
   1 08/24/2017 15:42:23 hdfs-uktehdpprod@EUROPE.ODCORP.NET

5. Try grabbing a valid Kerberos ticket:

# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-uktehdpprod@EUROPE.ODCORP.NET

6. Validate the availability period:

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-uktehdpprod@EUROPE.ODCORP.NET
Valid starting       Expires              Service principal
10/04/2017 19:36:12  10/05/2017 19:36:12  krbtgt/EUROPE.ODCORP.NET@EUROPE.ODCORP.NET

Please revert.
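If you need to re-run these checks often, the keytab test can be wrapped in a small script; a sketch, assuming the keytab path from this thread:

KEYTAB=/etc/security/keytabs/hdfs.headless.keytab
PRINC=$(klist -kt "$KEYTAB" | awk 'NR>3 {print $4; exit}')   # first principal listed in the keytab
echo "Testing $PRINC"
if kinit -kt "$KEYTAB" "$PRINC"; then
  klist   # show the ticket validity window
else
  echo "kinit failed: the account may be disabled, locked, expired, or deleted in AD" >&2
fi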
10-04-2017
03:18 AM
1 Kudo
@Michael Coffey The problem seems to be this:

2017-10-02T19:56:25.684428Z 0 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
2017-10-02T19:56:25.684440Z 0 [Note]   - '127.0.0.1' resolves to '127.0.0.1';
2017-10-02T19:56:25.684464Z 0 [Note] Server socket created on IP: '127.0.0.1'.

Your MySQL server is starting up and listening only on the "127.0.0.1" bind address. You should edit the "bind-address" attribute inside your "/etc/my.cnf" to make it bind to the hostname or to all listen addresses:

bind-address=0.0.0.0

Per https://dev.mysql.com/doc/refman/5.7/en/server-options.html: if the address is 0.0.0.0, the server accepts TCP/IP connections on all server host IPv4 interfaces. If the address is ::, the server accepts TCP/IP connections on all server host IPv4 and IPv6 interfaces.
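After changing bind-address, restart MySQL and confirm the new listener; a sketch, assuming systemd manages the mysqld service on this host:

# systemctl restart mysqld
# ss -ltn | grep 3306

The second command should now show 0.0.0.0:3306 (or the host's address) rather than 127.0.0.1:3306.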
10-03-2017
02:48 AM
Thanks @Jay SenSharma. I just found out about this article. Will go through it...