Member since 10-04-2016 · 24 Posts · 1 Kudos Received · 0 Solutions
12-17-2023
06:03 PM
Saw the same issue and debugged it by adding the Java option -Dsun.security.krb5.debug=true. In the logs, I found that the KDC was shown by IP address instead of hostname. That's suspicious. So I tried adding an IP -> hostname mapping for the KDC server in /etc/hosts, and that resolved the issue. There could be other causes for your issue; the debug logs can show you more clues.
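For reference, the two pieces involved might look like the following sketch (the IP address, hostname, and where you set the JVM option are example values, not taken from the post):

```
# JVM option for the affected service (e.g. appended to its Java opts):
-Dsun.security.krb5.debug=true

# /etc/hosts entry mapping the KDC's IP back to its hostname (example values):
10.0.0.5    kdc.example.com
```

With the debug flag enabled, the Kerberos library logs which KDC address it contacts, so you can verify whether it resolves to a hostname or a bare IP.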
12-29-2018
12:22 PM
@Rohit Khose Ambari provides a Patch Upgrade feature for individual component upgrades. However, that is possible only when you get a tested and certified VDF from Hortonworks support. NOTE: Before performing a patch upgrade, you must obtain the specific VDF file associated with the patch release from Hortonworks Customer Support. To learn more about Patch Upgrade, please refer to: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-upgrade/content/performing_a_patch_upgrade.html Otherwise, if you just try to install a community release of a higher version of HBase, it is not going to work that easily, because many of its dependencies have changed.
02-28-2019
01:23 PM
In the OP's case, it might be that the hdfs-site file needs to be available when trying to connect to HBase. If I recall correctly, some HBase clients (such as the NiFi processor) need the Hadoop configuration files core-site and hdfs-site to be specified. If they can't be found or don't contain the properties above, it can cause the same error.
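As a rough illustration (the property name and file paths below are assumptions based on the NiFi HBase client controller service, not details from the original post), the configuration might look something like:

```
# NiFi HBase client controller service (e.g. HBase_1_1_2_ClientService):
#   Hadoop Configuration Files: /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
```

If those files are omitted, the client falls back to defaults and may fail with the same connection error.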
10-02-2017
02:14 AM
Hi @Rohit Khose, this usually occurs when the Spark code is not formatted properly (braces or quotes not closed, or indentation broken in the case of Python), which results in this exception. Could you please reformat your code and run it again? (It is best to use a text editor with syntax highlighting, or to submit the code block by block.) On the other hand, this may occur due to network failures during data shuffle between executors (I doubt this is the case here); in such cases, resubmitting the job should result in successful completion (presumably not in your case).
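To illustrate the first cause: unbalanced brackets or broken indentation fail at parse time, so you can catch them locally before submitting the job. This is a minimal sketch in plain Python; the snippet and names are hypothetical, not taken from the original question.

```python
# A deliberately broken snippet: the function body is not indented
# and a closing ')' is missing -- the kind of formatting error that
# makes a submitted PySpark job fail before it runs.
bad_snippet = "def transform(rdd):\nreturn rdd.map(lambda x: x * 2"

# compile() raises SyntaxError for the same malformed code, so you can
# validate a script locally instead of waiting for spark-submit to fail.
try:
    compile(bad_snippet, "<job>", "exec")
    parse_ok = True
except SyntaxError as exc:
    parse_ok = False
    print(f"Parse failed: {exc.msg}")
```

Running scripts through compile() (or simply an editor with syntax highlighting, as suggested above) surfaces the error immediately rather than at submission time.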
07-18-2018
11:13 AM
The Resource Manager allocates an ApplicationMaster for each application/job. The ApplicationMaster is responsible for the lifetime of your application/job; it negotiates with the Resource Manager and allocates containers on NodeManagers. I am looking for how I can allocate a container on a specific DataNode. Please follow this link for details: https://community.hortonworks.com/questions/203537/container-allocation-by-application-master-in-hado.html If you have found the solution, please share.
06-03-2017
06:59 AM
@Artem Ervits can you please tell us the exact date? Is it the end of 2017, or before that? Just need to confirm. Thanks in advance.
03-14-2017
05:13 AM
We're looking forward to implementing Spark on RegionServer, and hbase-2.0.0-SNAPSHOT has this feature. That's why we're eager to know when you're going to release that version of HBase in HDP.
10-04-2016
07:51 PM
The Spark-HBase connector is marked stable and released with HDP 2.5: https://duckduckgo.com/?q=spark-hbase+connector+hortonworks&ia=web