Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2724 | 04-27-2020 03:48 AM |
| | 5283 | 04-26-2020 06:18 PM |
| | 4448 | 04-26-2020 06:05 PM |
| | 3575 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
08-09-2017
02:46 AM
@Bhushan kumar In the terminal where you are running the Hadoop client you can set "HADOOP_OPTS" to add the remote debugging option. Example: # export HADOOP_OPTS="$HADOOP_OPTS -agentlib:jdwp=transport=dt_socket,address=9999,server=y,suspend=n"
# hdfs dfs -ls /
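Once the client JVM is started with that option, you can attach a debugger to the port you chose; for example with jdb (the port 9999 here just matches the example address above, adjust it to whatever you set): # jdb -attach localhost:9999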
08-08-2017
11:44 AM
1 Kudo
@Akhil S Naik Please do a kinit with the HDFS keytab and see if it works. Get the principal name: # klist -kte /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
9 06/15/2017 10:01:12 hdfs-kerberos_ambari@EXAMPLE.COM (des-cbc-md5)
9 06/15/2017 10:01:12 hdfs-kerberos_ambari@EXAMPLE.COM (des3-cbc-sha1)
9 06/15/2017 10:01:12 hdfs-kerberos_ambari@EXAMPLE.COM (arcfour-hmac)
9 06/15/2017 10:01:12 hdfs-kerberos_ambari@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
9 06/15/2017 10:01:12 hdfs-kerberos_ambari@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
- Do a kinit: # kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-kerberos_ambari@EXAMPLE.COM
- Check the ticket: # klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-kerberos_ambari@EXAMPLE.COM
Valid starting Expires Service principal
08/08/2017 11:43:48 08/09/2017 11:43:48 krbtgt/EXAMPLE.COM@EXAMPLE.COM
Then try again.
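With a fresh ticket in the cache, a quick way to confirm HDFS access works under that principal is, for example: # hdfs dfs -ls /
(This just lists the HDFS root as the hdfs principal; any path you are expected to have access to would do.)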
08-08-2017
07:24 AM
@Arsalan Siddiqi You should be able to access the sandbox using "localhost" as well. Did you try that? If it's not working, then please check the following. Take a look at this file: # cat /etc/sysconfig/shellinaboxd
In the output, check whether the hostname is correct. It should be in the following line: # Simple configuration for running it as an SSH console with SSL disabled:
OPTS="-t -s /:SSH:sandbox.hortonworks.com --css white-on-black.css"
Correct the host name if it is wrong (or unexpected), then restart the service: # service shellinaboxd restart
# systemctl enable shellinaboxd.service
# netstat -nap | grep shellinabox
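For example (a hypothetical one-liner, assuming the stock sandbox config shown above), you could point shellinabox at localhost and restart it: # sed -i 's|/:SSH:sandbox.hortonworks.com|/:SSH:localhost|' /etc/sysconfig/shellinaboxd
# service shellinaboxd restart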
08-07-2017
11:21 AM
@parag dharmadhikari Looks like the version of your "twitter4j.MediaEntityJSONImpl" class is not compatible. Please check if you can get a different version of the JAR that contains the "twitter4j.MediaEntityJSONImpl" class to see if that fixes the issue. If you still face the issue, can you please share the list of JARs that your application is using? Even better, share the whole "pom.xml" so we can check whether the dependency is OK.
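As a quick check (assuming a Maven build, since you have a pom.xml), you can list which twitter4j artifacts and versions actually end up on the classpath: # mvn dependency:tree -Dincludes=org.twitter4j
Any version mismatch among the twitter4j artifacts in that output would be a likely cause of this kind of incompatibility.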
08-07-2017
08:09 AM
@PeiHe Zhang
Following are the two jars where we can find the class "org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider" in an HDP 2.6 installation. Please make sure that these jars are added to the classpath. /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-common-2.7.3.2.6.1.0-129.jar
/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-common.jar
Typically the classpath setting can be found in Ambari UI --> Yarn --> Configs --> Advanced --> "Advanced yarn-site", so please check if that path is correct at your end. The default value is usually: yarn.application.classpath = /etc/hadoop/conf,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*
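To confirm that the class is really present in those jars on your node, a simple (not HDP-specific) check is: # unzip -l /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-common.jar | grep RequestHedgingRMFailoverProxyProvider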
08-06-2017
11:29 AM
1 Kudo
@Chiranjeevi Nimmala Good to hear that your issue is resolved. It would also be wonderful if you could mark this thread as "Answered" (Accepted) so that other HCC users can quickly find the correct answer for similar issues.
08-06-2017
07:25 AM
1 Kudo
@Debasish Nath - Your DataNode installation failure is due to the following error: resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install snappy-devel' returned 1.
Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS)
Requires: snappy(x86-64) = 1.0.5-1.el6
Installed: snappy-1.1.0-3.el7.x86_64 (@anaconda) snappy(x86-64) = 1.1.0-3.el7
Available: snappy-1.0.5-1.el6.x86_64 (HDP-UTILS) snappy(x86-64) = 1.0.5-1.el6
Please install the "snappy-devel" package yourself from the OS repositories before installing the DataNode. Hadoop requires a snappy-devel package that is a lower version than what is already on the machine. Run the following on the host and retry: # yum remove snappy
# yum install snappy-devel
Also please see the references: HCC Thread: https://community.hortonworks.com/questions/86406/hdp-253-install-failing-on-redhat-7.html Ambari Doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-troubleshooting/content/resolving_cluster_install_and_configuration_problems.html (See Section: "Problem: DataNode Fails to Install on RHEL/CentOS 7")
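After the swap you can double-check which snappy packages are now installed (a simple sanity check): # rpm -qa | grep snappy
You should then see the 1.0.5 snappy-devel (and its matching snappy dependency) from HDP-UTILS instead of the 1.1.0 package from @anaconda.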
08-06-2017
07:19 AM
1 Kudo
@Chiranjeevi Nimmala It looks similar to https://issues.apache.org/jira/browse/HDFS-12029 (although that one is for the DataNode). It looks like "jsvc" is crashing due to a too-small "-Xss" (thread stack size). So please try increasing the stack size to a higher value like "-Xss2m" inside the file "/usr/hdp/2.6.0.3-8/hadoop-hdfs/bin/hdfs.distro". Example: (The following line should be added near the top, above DEFAULT_LIBEXEC_DIR, so that the rest of the script picks up this value.) export HADOOP_OPTS="$HADOOP_OPTS -Xss2m"
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
OR set the -Xss2m inside the following block of "hdfs.distro" to apply the setting specifically for NFS3, i.e. in all the "HADOOP_OPTS" assignments of the following block: # Determine if we're starting a privileged NFS daemon, and if so, redefine appropriate variables
if [ "$COMMAND" == "nfs3" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_PRIVILEGED_NFS_USER" ]; then
if [ -n "$JSVC_HOME" ]; then
if [ -n "$HADOOP_PRIVILEGED_NFS_PID_DIR" ]; then
HADOOP_PID_DIR=$HADOOP_PRIVILEGED_NFS_PID_DIR
fi
if [ -n "$HADOOP_PRIVILEGED_NFS_LOG_DIR" ]; then
HADOOP_LOG_DIR=$HADOOP_PRIVILEGED_NFS_LOG_DIR
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.log.dir=$HADOOP_LOG_DIR -Xss2m"
fi
HADOOP_IDENT_STRING=$HADOOP_PRIVILEGED_NFS_USER
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.id.str=$HADOOP_IDENT_STRING -Xss2m"
starting_privileged_nfs="true"
else
echo "It looks like you're trying to start a privileged NFS server, but"\
"\$JSVC_HOME isn't set. Falling back to starting unprivileged NFS server."
fi
fi
Then restart NFS. Reference RHEL7 kernel issue: https://access.redhat.com/errata/RHBA-2017:1674
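To verify that the new stack size is actually being applied, you can check the arguments of the running NFS3 process after the restart (a simple check, assuming the process name contains "nfs3"): # ps -ef | grep nfs3 | grep -- -Xss2m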
08-06-2017
06:53 AM
@Chiranjeevi Nimmala The following error lines indicate that it's a JVM crash: # An error report file with more information is saved as: # /tmp/hs_err_pid19469.log
If you can share the complete "/tmp/hs_err_pid19469.log" file here, then we can check why the JVM crashed.
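Even before sharing the whole file, the first few lines of that report usually contain the crash summary and the problematic frame, for example: # head -n 40 /tmp/hs_err_pid19469.log
(40 is just an arbitrary number of lines; the summary section sits near the top of the file.)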
08-06-2017
06:11 AM
@Chiranjeevi Nimmala Do you see any error for the NFS service? Can you please share the NFS logs from the "/var/log/hadoop/root/" directory, like "nfs3_jsvc.out", "nfs3_jsvc.err", "hadoop-hdfs-nfs3-ip-10-0-0-223.ap-south-1.compute.internal.log"? Can you also check what ulimit value is set for the NFS service? Sometimes NFS crashes due to a low file descriptor limit. If the value is too low, we can try increasing it from the Ambari UI: Navigate to "Ambari UI --> HDFS --> Configs --> Advanced --> Advanced hadoop-env --> hadoop-env template" and add the following entry in the "hadoop-env" template: if [ "$command" == "nfs3" ]; then ulimit -n 128000 ; fi Then try restarting the NFSGateway.
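To see the file descriptor limit the running NFS3 process actually got (a quick check, assuming the process can be found by name): # cat /proc/$(pgrep -f nfs3 | head -1)/limits | grep 'open files'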