Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2438 | 04-27-2020 03:48 AM
 | 4866 | 04-26-2020 06:18 PM
 | 3970 | 04-26-2020 06:05 PM
 | 3209 | 04-13-2020 08:53 PM
 | 4904 | 03-31-2020 02:10 AM
01-24-2017
10:48 AM
@Punit kumar I suggest you kill these DataNodes (if any DN daemon processes are still running) and then try starting them manually as the "hdfs" user, to see whether they come up fine. In parallel, keep the DataNode log open with "tail" so we can see whether it shows the same error. Once they come up successfully, try starting them from Ambari next time.
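The steps above can be sketched roughly as follows; the daemon script and log paths are the usual HDP defaults and may differ on your cluster:

```shell
# 1. Find and kill any running DataNode daemon process.
#    DataNode JVMs are typically started with -Dproc_datanode on the command line.
DN_PID=$(pgrep -f 'proc_datanode' | head -1)
[ -n "$DN_PID" ] && kill "$DN_PID"

# 2. Start the DataNode manually as the "hdfs" user (path assumed for HDP).
su - hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh \
    --config /etc/hadoop/conf start datanode'

# 3. In a second terminal, tail the DataNode log to watch for the same error.
tail -f /var/log/hadoop/hdfs/hadoop-hdfs-datanode-"$(hostname)".log
```

If the manual start works but Ambari's does not, that narrows the problem down to how Ambari invokes the daemon (user, environment, or permissions) rather than the DataNode itself.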
01-24-2017
10:27 AM
@Punit kumar
The problem looks directory-permission related: java.io.IOException: the path component: '/var/lib/hadoop-hdfs' is owned by a user who is not root and not you. Your effective user id is 0; the path is owned by user id 508, and its permissions are 0751. Please fix this or select a different socket path. - As the DN log is complaining about the permissions on "/var/lib/hadoop-hdfs", please check what permissions you have there. By default it should be owned by "hdfs:hadoop", as in: # ls -lart /var/lib/hadoop-hdfs
drwxrwxrwt. 2 hdfs hadoop 4096 Aug 10 11:23 cache
srw-rw-rw-. 1 hdfs hadoop 0 Jan 24 09:09 dn_socket - It would be best to compare the permissions on "/var/lib/hadoop-hdfs" with those on your working DataNode hosts. - For more information about this exception, see the use of the "validateSocketPathSecurity0" method: https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocket.java#L82-L105
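A minimal check-and-repair sketch for the ownership issue above; the directory path is from the post, and "hdfs:hadoop" is the HDP default owner:

```shell
# Check who owns /var/lib/hadoop-hdfs and restore the default if it is wrong.
DIR=/var/lib/hadoop-hdfs
owner=$(stat -c '%U:%G' "$DIR")
echo "current owner of $DIR: $owner"
if [ "$owner" != "hdfs:hadoop" ]; then
    # Requires root; recursively restores the expected ownership.
    chown -R hdfs:hadoop "$DIR"
fi
```

Running the same `stat` on a healthy DataNode host first gives you the reference value to compare against.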
01-24-2017
09:51 AM
@Punit kumar Based on the output of the "output-30684.txt" file, we can see that the DataNode start instruction has already been given to the ambari-agent; here is the command snippet: /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode - After that, the "hadoop-daemon.sh" script is actually responsible for starting the DataNode with the given arguments. - Hence we should check the DataNode logs (.log and .out files) to find out what is going wrong. - There might also be OS resource constraints (low memory, low disk space, etc.). We can get some of that information with OS tools like "top" and "df -h", but looking at the DataNode .log / .out files will give a better idea here.
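A quick triage sketch for the checks above; the log directory is the typical HDP default and may need adjusting:

```shell
# DataNode log directory (assumed HDP default).
LOG_DIR=/var/log/hadoop/hdfs

# Recent entries from the .log and .out files; the .out file often holds
# ulimit output or JVM startup errors that never reach the .log.
tail -n 100 "$LOG_DIR"/hadoop-hdfs-datanode-*.log
tail -n 50  "$LOG_DIR"/hadoop-hdfs-datanode-*.out

# OS resource checks: free memory and disk space per mount.
free -m
df -h
```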
01-24-2017
09:03 AM
@Punit kumar 1. Do you see any error/exception in the DataNode log? 2. After triggering the DataNode start operation from the Ambari UI, do you see any error/exception in ambari-server.log? If yes, can you please share those log snippets here? 3. Are you able to start/stop the other components on that agent host, or is only the DataNode having this issue? 4. Please share the output of the "top" command so we can see whether sufficient memory is available. 5. Once you trigger the DataNode start command from the Ambari UI, you should see the following kinds of files getting created in "/var/lib/ambari-agent/data". Do you see any errors in the errors file?
command-3231.json (the number will differ in your case, but the timestamp should be the latest for these files)
errors-3231.txt
output-3231.txt
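To find the files for the most recent command, you can sort by modification time; directory and naming follow the post above:

```shell
# Locate the newest errors/output files written by ambari-agent.
DATA_DIR=/var/lib/ambari-agent/data

latest_err=$(ls -t "$DATA_DIR"/errors-*.txt 2>/dev/null | head -1)
latest_out=$(ls -t "$DATA_DIR"/output-*.txt 2>/dev/null | head -1)

# Print whatever was captured for the last command, if anything.
[ -n "$latest_err" ] && { echo "== $latest_err =="; cat "$latest_err"; }
[ -n "$latest_out" ] && { echo "== $latest_out =="; tail -n 50 "$latest_out"; }
```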
01-24-2017
08:50 AM
1 Kudo
@Nik Lam The following two articles give more detailed steps to accomplish the same task. [1] MySQL setup for Ambari https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_ambari_reference_guide/content/_using_ambari_with_mysql.html [2] How do I change an existing Ambari DB from Postgres to MySQL? http://www.hadoopadmin.co.in/bigdata/how-do-i-change-an-existing-ambari-db-postgres-to-mysql/
There is nothing much from the Ambari side; however, there are some tools available, like the following, which might help with some special cases such as "\N" (NULL) replacement: https://dbconvert.com/postgresql/mysql/
01-23-2017
05:09 PM
@Baruch AMOUSSOU DJANGBAN In addition to the previous comments: when you register a new HDP version, are you specifying the "Build Version" for HDP 2.5.3 at that time? Example: 2.5.3.0-37, as per http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/hdp_25_repositories.html
The Version Definition File has this info: http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/HDP-2.5.3.0-37.xml - Also, regarding your query "i don't understand very well HDP-major.minor concept": the following link covers this in detail: https://community.hortonworks.com/questions/41422/question-on-hdp-versioning.html
01-23-2017
10:00 AM
@Zhao Chaofeng It looks like your earlier kinit was successful, since the command did not show any error, so I think the ticket was generated fine. [root@bigdata013 centos]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-bigdata@ISTUARY.COM After running the above command, did you check the output of "klist" to see whether you actually got the Kerberos ticket? [root@bigdata013 centos]# klist Example at my end: # kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-JoyCluster@EXAMPLE.COM
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-JoyCluster@EXAMPLE.COM
Valid starting Expires Service principal
01/23/17 10:00:20 01/23/17 10:00:50 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 01/23/17 10:00:20
01-22-2017
11:17 AM
@Zhao Chaofeng Are you passing the keytab path to it? Syntax:
kinit -kt /PATH/TO/Keytab_file $PRINCIPAL_NAME
Example:
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-ClusterDemo@EXAMPLE.COM
01-22-2017
06:52 AM
1 Kudo
@G I Is this "/tmp/b4cff5e9-2695-4658-b82c-798d6465227b_resources/hive-contrib-0.10.0.jar" the same jar that you got from the following location: /usr/hdp/<version>/hive/lib/hive-contrib-<version>.jar? I mean, is the version correct, i.e. the one shipped with HDP? Alternatively, try the following once to make sure the JAR with the proper permissions is loaded/added: hive> add jar /usr/hdp/<version>/hive/lib/hive-contrib-<version>.jar; You might also want to try the following: - On your HiveServer2 host, create a directory "/usr/hdp/<version>/hive/auxlib" - Copy "/usr/hdp/<version>/hive/lib/hive-contrib-<version>.jar" to "/usr/hdp/<version>/hive/auxlib" - Then restart the HiveServer2.
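The auxlib steps above can be sketched as follows; $HDP_VERSION is a placeholder for your actual version directory under /usr/hdp:

```shell
# Placeholder: substitute the real version directory name from /usr/hdp.
HDP_VERSION="CHANGE_ME"
HIVE_HOME=/usr/hdp/$HDP_VERSION/hive

# Create auxlib and copy the HDP-shipped hive-contrib jar into it.
mkdir -p "$HIVE_HOME/auxlib"
cp "$HIVE_HOME"/lib/hive-contrib-*.jar "$HIVE_HOME/auxlib/"

# Then restart HiveServer2 (e.g. from Ambari) so the jar in auxlib is loaded.
```

Jars in auxlib are added to HiveServer2's classpath at startup, which avoids per-session "add jar" statements and the /tmp resource copies they create.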
01-19-2017
08:53 AM
@Juan Manuel Nieto As you already mentioned, you tried removing "/var/lib/ambari-server/data/tmp" and creating it again. However, in the permissions we see ambari:root in one place: drwxr-xr-x. 5 ambari root 4.0K Jan 19 09:29 .. If you are planning to run Ambari as a non-root user, you might want to recursively set the ownership of the "/var/lib/ambari-server/" directory to "ambari:ambari". Example: chown -R ambari:ambari /var/lib/ambari-server