Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2730 | 04-27-2020 03:48 AM |
|  | 5288 | 04-26-2020 06:18 PM |
|  | 4458 | 04-26-2020 06:05 PM |
|  | 3584 | 04-13-2020 08:53 PM |
|  | 5385 | 03-31-2020 02:10 AM |
10-02-2017
06:08 PM
@Bharath N Please check whether HiveServer2 is running (you can also check in the Ambari UI):

```
# ps -ef | grep hiveserver2
```

Also check the Hive logs, and verify that port 10000 (and the other ports opened by the HiveServer2 process) is listening:

```
# netstat -tnlpa | grep `cat /var/run/hive/hive-server.pid`
tcp   0   0 0.0.0.0:10000   0.0.0.0:*   LISTEN   1755/java
```

If the port is not open yet, there will most likely be an ERROR / WARNING in the HiveServer2 logs. In that case please check the logs and share anything you find:

```
# less /var/log/hive/hiveserver2.log
```
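As a quick way to script the port check above, here is a minimal sketch using bash's built-in `/dev/tcp` device (the host and port passed at the bottom are placeholders for demonstration; on a real cluster you would pass your HiveServer2 hostname and 10000):

```shell
#!/usr/bin/env bash
# Probe whether a TCP port accepts connections, using bash's /dev/tcp device.
# The 127.0.0.1:1 probe at the bottom is only a demo; substitute your
# HiveServer2 host and port 10000 in practice.
port_open() {
  local host="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OPEN"
  else
    echo "CLOSED"
  fi
}

port_open 127.0.0.1 1   # port 1 is almost never in use, so this prints CLOSED
```

This avoids depending on `telnet` or `nc` being installed on the edge node, at the cost of requiring bash itself.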
10-02-2017
05:20 PM
@Bharath N

```
[hadoop-admin@edge-node hive]$ telnet hive-server2 10000
telnet: hive-server2: Name or service not known
```

The message above means that your HiveServer2 hostname (FQDN) is either not set correctly or not mapped correctly in the "/etc/hosts" file on the Beeline host. So first of all, please verify the FQDN of your HiveServer2 host.

1. Log in to the HiveServer2 host over SSH and check that it has the correct FQDN/hostname:

```
# hostname -f
# cat /etc/hosts
```

2. On the machine where you run Beeline, check that its "/etc/hosts" file has the correct mapping for the HiveServer2 host; it should resolve the HiveServer2 hostname correctly:

```
# cat /etc/hosts
```

3. We also see that port 10000 is not open on the HiveServer2 host:

```
root@Name-node:~ # netstat -tnlpa | grep 10000
```

So please check the Hive configuration to confirm it is using port 10000, and check the HiveServer2 logs to see whether it started successfully.

Please refer to the following links to verify that the hostname/FQDN is configured properly:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-installation-ppc/content/edit_the_host_file.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-installation-ppc/content/set_the_hostname.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-installation-ppc/content/edit_the_network_configuration_file.html

Also make sure that "iptables" is disabled on the HiveServer2 host, so that its port can be accessed from a remote host:

```
# service iptables status
# service iptables stop
(OR)
# systemctl disable firewalld
# service firewalld stop
```

Also permanently disable SELinux, if that is not already done, by setting the following in /etc/selinux/config. This ensures that SELinux does not turn itself back on after you reboot the machine:

```
SELINUX=disabled
```
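The /etc/hosts check in step 2 can be scripted; here is a minimal sketch run against a throwaway sample file (the hostname `hive-server2` and the addresses in the sample are illustrative assumptions — on a real node you would grep /etc/hosts itself):

```shell
#!/usr/bin/env bash
# Check whether a hostname is mapped in an /etc/hosts-style file.
# The sample file and the hostname "hive-server2" are only for illustration.
hosts_file="$(mktemp)"
cat > "$hosts_file" <<'EOF'
127.0.0.1   localhost
192.168.1.10   hive-server2.example.com   hive-server2
EOF

host="hive-server2"
# Match the hostname as a whole word: preceded by whitespace,
# followed by whitespace or end of line.
if grep -qE "[[:space:]]${host}([[:space:]]|\$)" "$hosts_file"; then
  result="MAPPED"
else
  result="MISSING"
fi
echo "$result"   # MAPPED
rm -f "$hosts_file"
```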
10-02-2017
04:49 PM
@Bharath Nagamalla One possible reason is that, from the host where you are running Beeline, the "Hive-server2:10000" host/port is not accessible, or port 10000 is not being opened by the HiveServer2 process. So please check the following:

1. From the host where you are running Beeline, can you reach the HiveServer2 host and port? This isolates firewall or network issues:

```
# telnet Hive-server2 10000
(OR)
# nc -v Hive-server2 10000
```

2. On the HiveServer2 host, check whether port 10000 is open:

```
# netstat -tnlpa | grep 10000
```

3. If port 10000 is not open, please check the HiveServer2 log and share any errors you find there.

Additionally, if this is a Kerberized cluster, you might also need to pass the principal in the connection URL. Please refer to the following link for more detail:
https://community.hortonworks.com/articles/4103/hiveserver2-jdbc-connection-url-examples.html
10-02-2017
04:10 PM
@Prakash Punj The following stack trace is not complete; we need to see where this error begins. Could you please share the complete stack trace, for example via:

```
[centos@hdp-m:/var/log/ambari-server ] $ tail -50 ambari-server.log
```

```
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
```
10-02-2017
03:32 PM
1 Kudo
@Karpagalakshmi Rajagopalan As you are getting the following error:

```
http://dev2.hortonworks.com.s3.amazonaws.com/repo/dev/master/utils/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"
Trying other mirror.
```

You should set "enabled=0" to disable the sandbox.repo that is causing the issue. It is just a development repo and might not always be available:

```
# cat /etc/yum.repos.d/sandbox.repo
[sandbox]
baseurl=http://dev2.hortonworks.com.s3.amazonaws.com/repo/dev/master/utils/
name=Sandbox repository (tutorials)
gpgcheck=0
enabled=0
```

After making the above change, please run a yum clean as follows, and then everything should be fine:

```
# yum clean all
```
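The `enabled=0` edit can be done non-interactively with sed. Here is a minimal sketch, demonstrated against a throwaway copy of the repo file (on the sandbox you would edit /etc/yum.repos.d/sandbox.repo directly, ideally after backing it up):

```shell
#!/usr/bin/env bash
# Flip enabled=1 to enabled=0 in a yum .repo file.
# Demonstrated on a temporary copy; the real file is /etc/yum.repos.d/sandbox.repo.
repo="$(mktemp)"
cat > "$repo" <<'EOF'
[sandbox]
baseurl=http://dev2.hortonworks.com.s3.amazonaws.com/repo/dev/master/utils/
name=Sandbox repository (tutorials)
gpgcheck=0
enabled=1
EOF

sed -i 's/^enabled=1$/enabled=0/' "$repo"
state="$(grep '^enabled=' "$repo")"
echo "$state"   # enabled=0
rm -f "$repo"
```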
10-02-2017
03:22 PM
@Prakash Punj Please keep only the following directories inside "/usr/hdp" and move the rest of the files and directories elsewhere:

```
2.3.4.0-3485
current
```

I suspect that the "ssl" directory there is what is actually causing the issue.
10-02-2017
10:24 AM
1 Kudo
@Sebastien F One approach is to use Hive's JMX support to collect some of those details:
https://community.hortonworks.com/articles/62211/enabling-jmx-monitoring-for-hiveserver2.html

Grafana can also provide many graphs related to Hive:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-operations/content/llap_overview.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-operations/content/grafana_hive_hiveserver2.html
10-02-2017
04:19 AM
@Prakash Punj "S020 Data storage error" is a generic error, so in order to find the actual cause of the failure we will need to look at the detailed stack trace behind it. Could you please check and share the "Hive View" logs as well as the ambari-server.log?
10-02-2017
03:46 AM
@Prakash Punj We see the error:

```
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.3.4.0-3485 | tail -1`' returned 1.
Traceback (most recent call last):
  File "/usr/bin/hdp-select", line 378, in <module>
    printVersions()
  File "/usr/bin/hdp-select", line 235, in printVersions
    result[tuple(map(int, versionRegex.split(f)))] = f
ValueError: invalid literal for int() with base 10: 'ssl'
```

This usually happens when manually created or copied directories (or files) exist inside the "/usr/hdp" directory. Please remove any unwanted directories from "/usr/hdp" and then try installing Grafana again.
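To spot the offending entries quickly, here is a minimal sketch run against a mock directory (the mock path and its contents are illustrative assumptions; on a real node you would scan /usr/hdp itself):

```shell
#!/usr/bin/env bash
# List entries under an hdp-style directory that are neither a version
# directory (e.g. 2.3.4.0-3485) nor "current" -- anything else makes
# hdp-select's version parsing fail. A mock directory stands in for /usr/hdp.
mock="$(mktemp -d)"
mkdir -p "$mock/2.3.4.0-3485" "$mock/current" "$mock/ssl"

unexpected=""
for entry in "$mock"/*; do
  name="$(basename "$entry")"
  case "$name" in
    current) ;;                          # expected symlink/directory
    [0-9]*.[0-9]*.[0-9]*.[0-9]*-*) ;;    # expected version directory
    *) unexpected="$unexpected $name" ;; # anything else is suspect
  esac
done
echo "unexpected:${unexpected}"   # unexpected: ssl
rm -rf "$mock"
```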
09-28-2017
05:30 AM
1 Kudo
@K D It does not look like a heap issue. Your NameNode is not starting because it looks like the mentioned directory does not have the correct ownership ("hdfs:hadoop"):

```
2017-09-28 01:02:32,556 ERROR namenode.NameNode (NameNode.java:main(1774)) - Failed to start namenode.
java.io.FileNotFoundException: /mach/hadoop/hdfs/namenode/current/VERSION (Permission denied)
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:245)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:641)
```

Please change the ownership as follows and then restart the NameNode:

```
# chown -R "hdfs:hadoop" /mach/hadoop/hdfs/namenode
```
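To verify the fix afterwards, here is a minimal ownership-audit sketch (demonstrated on a temporary directory where the expected owner is simply the current user; on the NameNode host you would point it at /mach/hadoop/hdfs/namenode and set the expected owner to hdfs:hadoop):

```shell
#!/usr/bin/env bash
# Report any file under a directory whose owner:group does not match the
# expected value. A temp dir owned by the current user stands in for the
# real NameNode directory, so the expected owner here is the current user.
dir="$(mktemp -d)"
touch "$dir/VERSION"

expected="$(id -un):$(id -gn)"
bad=0
for f in "$dir"/*; do
  owner="$(stat -c '%U:%G' "$f")"   # GNU stat: print owner:group
  if [ "$owner" != "$expected" ]; then
    echo "WRONG OWNER: $f ($owner)"
    bad=$((bad + 1))
  fi
done
echo "files with wrong owner: $bad"   # files with wrong owner: 0
rm -rf "$dir"
```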