Member since
03-21-2016
233
Posts
62
Kudos Received
33
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 923 | 12-04-2020 07:46 AM
| 1198 | 11-01-2019 12:19 PM
| 1631 | 11-01-2019 09:07 AM
| 2552 | 10-30-2019 06:10 AM
| 1280 | 10-28-2019 10:03 AM
01-11-2017
05:49 PM
@Dezka Dex The problem seems to be with the LDAP URL or the DN. The error in the log shows "Root exception is java.net.SocketException: Connection reset". It could be that Active Directory is set to accept connections only over SSL. Try using ldaps:// instead of ldap:// (this requires the Active Directory certificate to be imported into the Java cacerts trust store).
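If Active Directory does require SSL, the certificate import can be sketched as below. The hostname, file name, and alias are illustrative, and the cacerts path varies by JDK layout; this is a sketch, not a definitive procedure for your environment.

```shell
# Fetch the AD certificate from the LDAPS port (hostname is illustrative)
openssl s_client -connect ad.example.com:636 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM -out ad.cer

# Import it into the JVM trust store used by the service
# ("changeit" is the default cacerts password; path varies by JDK)
keytool -importcert -alias ad-cert -file ad.cer \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit -noprompt
```

After the import, restart the service so the JVM picks up the updated trust store.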
01-10-2017
06:23 AM
@Karan Alang Download ojdbc6.jar and copy it to /usr/share/java/.
You can download ojdbc6.jar from http://www.oracle.com/technetwork/apps-tech/jdbc-111060-084321.html
01-10-2017
06:12 AM
@Karan Alang Can you check whether the file /usr/share/java/ojdbc6.jar exists? "/usr/hdp/current/hbase-client/lib/ojdbc6.jar" is a symlink to /usr/share/java/ojdbc6.jar, so if the target is missing the link will appear broken.
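The dangling-symlink behavior behind this check can be demonstrated in a scratch directory (all paths below are illustrative; on the cluster you would inspect the real paths with `ls -l` and `readlink -f`):

```shell
# A symlink whose target is missing fails the -e test even though the link itself exists
tmp=$(mktemp -d)
ln -s "$tmp/ojdbc6.jar" "$tmp/link-to-jar"
[ -e "$tmp/link-to-jar" ] && echo present || echo dangling   # prints "dangling"
touch "$tmp/ojdbc6.jar"                                      # create the target
[ -e "$tmp/link-to-jar" ] && echo present || echo dangling   # prints "present"
rm -rf "$tmp"
```

This is why copying the jar into /usr/share/java/ fixes the path under /usr/hdp without touching the link itself.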
01-09-2017
04:23 AM
@kotesh banoth A prerequisite for an HDP upgrade is taking a database backup for services such as Oozie and the Hive metastore. Please review the prerequisites for the HDP 2.5 upgrade: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-upgrade/content/upgrading_HDP_prepare_to_upgrade.html There is no need to back up the HDFS data itself.
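For MySQL-backed services, the backup step can be sketched as below. The database names, user, and output paths are assumptions; substitute the values from your own hive-site.xml and oozie-site.xml.

```shell
# Back up the Hive metastore and Oozie databases before upgrading
# (database and user names are illustrative; use your actual credentials)
mysqldump -u root -p hive  > /tmp/hive_metastore_backup.sql
mysqldump -u root -p oozie > /tmp/oozie_backup.sql
```

Keep the dumps somewhere outside the upgrade path so they can be restored if the upgrade has to be rolled back.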
01-08-2017
12:35 PM
1 Kudo
@Lawrence Lau The HDP source is available on GitHub, so you should be able to access the code: https://github.com/hortonworks/hadoop-release/tree/HDP-2.5.3.0-tag
01-08-2017
06:07 AM
@Prajwal Kumar Can you please verify that you have the hadoop proxyuser settings for livy in core-site.xml? Also, the error shows "Server not found in Kerberos database", and the principal used is "zeppelin/host.com@DOMAIN.COM", which does not seem to be correct. The /home/zeppelin/zeppelin.keytab keytab mentioned in the interpreter must be for a user principal, something like "zeppelin-hdp25@RAGHAV.COM". Create a keytab with a user principal and use that in your interpreter. Also make sure the properties below are set: <property>
<name>hadoop.proxyuser.livy.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.livy.hosts</name>
<value>*</value>
</property>
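Creating a keytab for a user principal can be sketched with kadmin as below. The principal name, admin principal, and paths are illustrative; adjust them to your realm.

```shell
# Create a user principal and export its keytab (names are illustrative)
kadmin -p admin/admin -q "addprinc -randkey zeppelin-user@EXAMPLE.COM"
kadmin -p admin/admin -q "xst -k /home/zeppelin/zeppelin.keytab zeppelin-user@EXAMPLE.COM"

# Verify which principals the keytab actually holds
klist -kt /home/zeppelin/zeppelin.keytab
```

The klist output should show the user principal (no host component), which is what the interpreter setting expects.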
01-07-2017
02:06 PM
1 Kudo
@Manoj Dhake Try copying the MapReduce tarball to HDFS and then run the Hive query again. Note that <version> in the path below must match the target HDP version (2.2.9.0-3393 here). #su - hdfs
#hdfs dfs -mkdir -p /hdp/apps/2.2.9.0-3393/mapreduce/
#hdfs dfs -put /usr/hdp/<version>/hadoop/mapreduce.tar.gz /hdp/apps/2.2.9.0-3393/mapreduce/mapreduce.tar.gz
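After the put, the upload can be verified with a listing (paths follow the commands above; the chmod line reflects the usual HDP-docs expectation that the tarball be world-readable, so treat it as an assumption to check against your docs):

```shell
# Confirm the tarball landed in HDFS and make it readable by all users
hdfs dfs -ls /hdp/apps/2.2.9.0-3393/mapreduce/
hdfs dfs -chmod 444 /hdp/apps/2.2.9.0-3393/mapreduce/mapreduce.tar.gz
```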
01-05-2017
01:20 PM
@chitrartha sur You don't need to change the Hadoop properties in core-site.xml. The problem is not with the Hadoop components; it is the Hive/Files view settings that need to be modified to set WebHDFS authentication to auth=KERBEROS,proxyuser=<proxyuser>. Refer to the doc below on how to configure Ambari views when the cluster is Kerberized: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-views/content/Kerberos_Settings.html Revert the changes in core-site.xml, make sure all the services are up, and then follow the doc above for the view settings on a Kerberized cluster.
01-04-2017
02:00 PM
1 Kudo
@Vinay MP The Hive CLI starts an Application Master on YARN. If YARN does not have enough resources, the AM launched by the Hive CLI will stay in the ACCEPTED state until resources become available. Since you mentioned that stopping the Spark Thrift Server resolves this, it looks like you are short of resources on YARN. The Spark Thrift Server also launches an AM on YARN, and that AM stays in the RUNNING state until the Spark Thrift Server is stopped. If this is a YARN resource issue, you should see the reason on the AM launched by the Hive CLI.
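One way to confirm the shortage is to list applications by state with the standard YARN CLI (state names as in Hadoop 2.x):

```shell
# Applications stuck waiting for resources (e.g. the Hive CLI's AM)
yarn application -list -appStates ACCEPTED

# Applications currently holding resources (e.g. the Spark Thrift Server's AM)
yarn application -list -appStates RUNNING
```

If the Hive CLI's application appears under ACCEPTED while the Spark Thrift Server's stays under RUNNING, the queue simply has no headroom for a second AM.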
01-04-2017
11:19 AM
2 Kudos
@WAEL HORCHANI "Connection refused" is usually seen when the interpreter process has not started, and this can happen when you have impersonation enabled in the interpreter. Can you show the Spark interpreter settings?