Member since
03-27-2017
Posts: 10
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 8638 | 03-30-2017 01:13 PM |
05-14-2017 03:22 PM
I am unable to poll the Hadoop file system, or for that matter any Hadoop-related service:

hadoop fs -ls /
ls: Call From sandbox.hortonworks.com/10.0.2.15 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

My /etc/hosts looks like this:

127.0.0.1   localhost.localdomain localhost
10.0.2.15   sandbox.hortonworks.com sandbox ambari.hortonworks.com

Can you help me trace the root cause?
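A first check (a minimal sketch, assuming the stock HDP sandbox layout, where the NameNode is expected to listen on port 8020) is to confirm whether the NameNode process is actually running and bound to that port before digging into the hosts file:

# Is a NameNode JVM running at all?
ps -ef | grep -i [n]amenode

# Is anything listening on the RPC port from the error message?
sudo netstat -tnlp | grep 8020

# If nothing is listening, start HDFS from Ambari and check the NameNode log
# (the log file name below is an assumption based on the hostname in the error)
sudo tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log

If nothing is listening on 8020, the "Connection refused" is coming from a NameNode that never started, not from /etc/hosts.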
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
05-14-2017 09:55 AM
On this Hortonworks sandbox, I have installed the XAMPP server to run my front-end scripts, which in the backend will talk to Hadoop services like HBase, Hive, etc. While installing XAMPP, the only thing I did was run sudo service httpd stop so that the XAMPP services could start. However, now on this same box I am not able to communicate with Hive, Hadoop, etc. This is the error it throws: -bash: hadoop: command not found, and similarly for ambari. Can someone help me understand what trouble I have caused, and how do I go about fixing it? I think the PATH variable has been changed or something. Can someone help, please?
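One quick way to test the PATH theory (a sketch, assuming the usual HDP layout where /usr/bin/hadoop and /usr/bin/hive are wrapper scripts shipped with the platform):

# See what the current PATH contains and whether the wrappers still exist
echo $PATH
ls -l /usr/bin/hadoop /usr/bin/hive

# If the wrappers are still there, the PATH was probably clobbered;
# restore a sane default for this shell session and retest
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
hadoop version

If that works, the fix is to remove whatever PATH override the XAMPP install added to your shell profile.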
03-30-2017 01:13 PM
Okay, so I am sharing what worked for me. Starting with @Jay SenSharma's suggestion, I had already given the 777 recursive permissions several times in the past, without success. What has finally worked for me is this: I kept my custom JDK in /usr/lib, and then made a symlink in /usr/lib/jvm pointing to the full path of the custom JDK kept in /usr/lib. And then it worked. I guess not keeping the JDK under /usr/lib was the major sticking point. Thanks a lot for the support, man.
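For anyone landing here later, the fix boils down to something like the following (a sketch; the JDK directory name matches the one used earlier in this thread, and the exact link name depends on which JAVA_HOME your configs expect):

# Move the JDK out of /root so service users (hdfs, yarn, ...) can reach it
sudo mv /root/java/jdk1.8.0_121 /usr/lib/

# Link it into /usr/lib/jvm, where hadoop-env.sh and friends look for java
sudo ln -sfn /usr/lib/jdk1.8.0_121 /usr/lib/jvm/java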
03-30-2017 06:51 AM
Please find the logs from starting the NameNode via Ambari. As can be seen, it is trying to run as the hdfs user. I have already given the java binary the necessary permissions: -rwxrwxrwx 1 hdfs hadoop 7734 2017-03-24 06:22 java However, it still gets stuck:

resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sandbox.hortonworks.com.out
/usr/hdp/2.4.0.0-169//hadoop-hdfs/bin/hdfs.distro: line 308: /root/java/jdk1.8.0_121/bin/java: Permission denied
/usr/hdp/2.4.0.0-169//hadoop-hdfs/bin/hdfs.distro: line 308: exec: /root/java/jdk1.8.0_121/bin/java: cannot execute: Permission denied
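A likely explanation (an assumption based on the path in the error, not something visible in the log itself) is that the hdfs user cannot traverse /root at all, regardless of the 777 permissions on the java binary, because /root is normally not world-accessible. Two quick checks:

# Show the permissions of every directory component on the way to the binary
namei -l /root/java/jdk1.8.0_121/bin/java

# Try to execute it as the same user Ambari uses
sudo -u hdfs /root/java/jdk1.8.0_121/bin/java -version

If the second command also reports Permission denied, the problem is a parent directory, not the file.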
03-30-2017 06:15 AM
Hi, that was a mistake on my side. Obviously, my JAVA_HOME points to /root/java/jdk1.8.0_121. However, the pertinent issue is which user Ambari is trying to run the services as. Please get back to me on that.
03-29-2017 05:00 PM
Hi, I have successfully added a custom JDK on my Hortonworks sandbox. My JAVA_HOME refers to: /root/java/jdk1.8.0_121/bin/java This is the listing for the java file: -rwxrwxrwx 1 root root 7734 2017-03-24 06:22 java However, all my services in Ambari are now referring to this JAVA_HOME path and returning the same error: Permission denied. One of the services runs as the hdfs user. So I want to ask: what is the required set of permissions I need to give this file so that the services can finally execute it? Please help asap.
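For a binary to be executable by another user such as hdfs, it is not enough for the file itself to be 777; every parent directory on the path (/root, /root/java, ...) also needs the execute (traverse) bit for that user, and /root usually does not grant it. A hedged sketch of the two usual options (the target directory below is an assumption):

# Option 1: open up the parent directories (generally not recommended for /root)
sudo chmod o+x /root /root/java /root/java/jdk1.8.0_121 /root/java/jdk1.8.0_121/bin

# Option 2 (cleaner): move the JDK to a world-readable location instead
sudo mv /root/java/jdk1.8.0_121 /usr/lib/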
Labels:
- Apache Hadoop
- Apache Hive
03-28-2017 07:57 PM
Hi @Jay SenSharma, thanks for extending all the support. I have already tried both of the steps you mentioned several times in the past. I did them one more time today, but the result was no different. Updating the Ambari JDK and running /usr/sbin/alternatives --config java does help me run a normal Java jar compiled on 1.8 on HDP. However, in the case of hadoop jar, I have noticed one thing: in hadoop-client/conf/hadoop-env.sh and several other configuration files, JAVA_HOME is hard-coded as /usr/lib/jvm/java. Now, under /usr/lib/jvm, java is a symlink to java_sdk: java -> /etc/alternatives/java_sdk When I trace that back under /etc/alternatives, I find java_sdk is in turn a symlink: java_sdk -> /usr/lib/jvm/java-1.7.0-openjdk.x86_64 And all this after re-iterating the steps mentioned above. How can I change this path so that my hadoop jars run comfortably? I have tried pointing the /usr/lib/jvm/java symlink at my ${JAVA_HOME}, but this doesn't help, as Hadoop runs as the yarn user in the hadoop group. How do I update all the permissions successfully?
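One way to see exactly which JDK the hard-coded path resolves to, and to repoint it (a sketch; it assumes the custom JDK has already been moved somewhere service users can read, such as /usr/lib/jdk1.8.0_121 as in the accepted answer):

# Follow the whole symlink chain from the hard-coded JAVA_HOME
readlink -f /usr/lib/jvm/java

# Repoint the link directly at the custom JDK, bypassing the alternatives chain
sudo ln -sfn /usr/lib/jdk1.8.0_121 /usr/lib/jvm/java

# Verify as the user the Hadoop services actually run under
sudo -u yarn /usr/lib/jvm/java/bin/java -version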
03-28-2017 06:29 AM
Hi, I want to update the JDK on HDP 2.4 from the standard 1.7 to the current 1.8. For this I have tried several approaches, a few of which include:

1) Updating the JAVA_HOME and PATH variables in /root/.bashrc to the 1.8 JDK location. This works well when I compile a jar locally on 1.8 and bring it onto HDP to run on the newly updated JDK; there is no version mismatch.

2) However, any hadoop jar compiled locally on 1.8 and brought to run on HDP still fails due to a versioning issue, because hadoop-client etc. have their configuration set to /usr/lib/jvm/java. To correct that, I have manually changed the symlink at /usr/lib/jvm and pointed it as: cd /usr/lib/jvm; ln -s $JAVA_HOME java However, this causes problems when I restart HDP, where the WebHCat server fails to start.

I have also tried alternatives --config java, pointing it at my custom JDK, but this doesn't seem to help much in updating the JDK either. Can you get back to me asap?
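For an Ambari-managed HDP cluster, the route that avoids hand-editing symlinks is to register the custom JDK with Ambari itself (a sketch; the JDK path is the one used elsewhere in this thread, and the cluster services still need to be restarted from the Ambari UI afterwards so they pick up the new JDK):

# Register the custom JDK with Ambari
sudo ambari-server setup -j /usr/lib/jdk1.8.0_121

# Restart Ambari, then restart the cluster services from the UI
sudo ambari-server restart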
Labels:
- Apache Hadoop
03-27-2017 04:30 PM
Please reply asap. Thanks for your time.
03-27-2017 04:29 PM
Hi @Sindhu, can you help me understand whether I can have my external table in Hive created on top of a file location in Google Cloud Storage (GS)? I already have one created. I am able to add partitions in Hive, which successfully creates the corresponding directories; however, after adding files to the partition directories in Google Storage, when I try to update the metastore with MSCK REPAIR TABLE <table_name> it fails with: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask @Sindhu, can you help me understand whether the location of my external table can be Google Cloud Storage, or is it always going to be HDFS? I have my external table created in Hive (running on top of HDFS) with its location set to Google Storage; that Google Storage location is updated manually, but the new data is not being successfully loaded into Hive, because MSCK REPAIR TABLE is not working.
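If MSCK REPAIR TABLE keeps failing against the GS-backed location, one hedged workaround (a sketch; the table name, partition column, bucket, and paths below are made up for illustration) is to register each partition explicitly with its full location instead of relying on the repair:

# Add one partition explicitly, pointing at its Google Storage directory
hive -e "ALTER TABLE my_table ADD IF NOT EXISTS PARTITION (dt='2017-03-27') LOCATION 'gs://my-bucket/my_table/dt=2017-03-27/';"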