Member since: 01-25-2017
Posts: 119
Kudos Received: 7
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 13692 | 04-11-2017 12:36 PM |
|  | 4130 | 01-18-2017 10:36 AM |
12-28-2018
12:09 PM
This issue occurred after the mainboard change. Do you think it is related to that change, or does it have nothing to do with it?
12-28-2018
11:16 AM
Hello @scharan, thanks for your reply. I have a feeling that renewing the agent keys (maybe on both the agent and the server) would be the proper way. Do you agree? Regardless of that, of course I accept this answer! The agent can connect now and works fine! Thanks a lot @scharan! Best regards, have a nice day and a happy new year!
12-27-2018
02:06 PM
Hello, we had a problem with the mainboard of one of our nodes and it was replaced. When we brought the node back up, the ambari-agent could not connect to the ambari-server, failing with the error below:

INFO 2018-12-27 16:59:24,790 NetUtil.py:70 - Connecting to https://master01:8440/ca
ERROR 2018-12-27 16:59:24,797 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:618)
ERROR 2018-12-27 16:59:24,797 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-12-27 16:59:24,797 NetUtil.py:124 - Server at https://master01:8440 is not reachable, sleeping for 10 seconds...

My humble guess is that the old keys are not accepted by the ambari-server now that the hardware has changed. The people who installed the mainboard say they set the serial to match the old one. How can I get this node back? Is there any way to renew the keys? PS: There are no files in /var/lib/ambari-agent/keys/. Thanks in advance.
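To be concrete, below is a rough sketch of what I am considering, assuming the agent simply re-requests a certificate from the server when it restarts; the host name and port are the ones from my logs, everything else is generic and not specific to my setup.

```bash
# Rough recovery sketch (assumption: the agent regenerates its keys/cert on restart).
# On the affected node:
ambari-agent stop
ls -l /var/lib/ambari-agent/keys/        # currently empty in my case
ambari-agent start

# Since the error also hints at an openssl/python-ssl mismatch, verify the
# TLS handshake to the server's certificate port works at all from this node:
openssl version
openssl s_client -connect master01:8440 </dev/null
```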
Labels:
- Apache Ambari
- Apache Hive
10-19-2018
01:25 PM
In my case, I had previously pointed it at my Anaconda installation. Reverting it to /usr/bin/python2.7 fixed the problem.
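For anyone checking the same thing, a quick way to confirm which interpreter is actually being picked up (this assumes the failing component simply uses the first python on PATH):

```bash
# Confirm the interpreter in use is the system one, not Anaconda:
which python
readlink -f "$(which python)"   # should resolve to /usr/bin/python2.7
python -V
```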
08-16-2018
01:52 PM
@Sampath Kumar, @SHAIKH FAIROZ AHMED, @Jack Marquez, did you find the solution to this problem? I am facing it too.
07-11-2018
03:23 PM
Thanks for your answer and also for the warning about the version in the stack. The current Spark2 version is 2.2.0; I am going to correct it in the question. Both answers are good news to me. Thanks again.
07-10-2018
10:13 PM
My team needs Spark 2.3 for new features. We have HDP 2.6.3 installed, which ships Spark 2.0 (correction: 2.2.0) in the stack. Is it enough, to meet that version requirement, to use a Docker container with Spark 2.3 as the Spark driver and configure it to use the YARN of the current HDP installation? Or do all workers need Spark 2.3 installed? What I need to understand is whether the workers (NodeManagers) need the new Spark libraries once the job is submitted to YARN. The following note on the Spark cluster overview page led me to think it may not be mandatory: "The user's jar should never include Hadoop or Spark libraries, however, these will be added at runtime." Thanks in advance...
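To make the idea concrete, roughly what I have in mind is below. The HDFS path for the 2.3 jars and the job name are placeholders I made up, and spark.yarn.jars is the setting I assume would let the NodeManagers fetch the Spark libraries at runtime instead of having them pre-installed.

```bash
# Inside the Spark 2.3 driver container (sketch; paths are assumptions):
export HADOOP_CONF_DIR=/etc/hadoop/conf          # cluster's YARN/HDFS client configs

# One-time: publish the Spark 2.3 jars somewhere the NodeManagers can reach.
hdfs dfs -mkdir -p /apps/spark-2.3.0/jars
hdfs dfs -put "$SPARK_HOME"/jars/* /apps/spark-2.3.0/jars/

# Submit against the cluster's YARN; executors pull the 2.3 jars at runtime.
"$SPARK_HOME"/bin/spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf "spark.yarn.jars=hdfs:///apps/spark-2.3.0/jars/*" \
  my_job.py
```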
Labels:
- Apache Spark
- Apache YARN
06-14-2018
09:58 AM
Does the physical data really remain on the node? In my case I see lots of HDFS log lines (thousands, repeating) on the node about deleting blocks. Are these lines unexpected (the node already has a broken RAID controller)? Deletions keep being scheduled and executed:

2018-06-14 11:58:53,005 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(308)) - Deleted BP-1789482724-9.1.10.22-1491814552298 blk_1155905937_82210162 file /grid/2/hadoop/hdfs/data/current/BP-1789482724-9.1.10.22-1491814552298/current/finalized/subdir229/subdir185/blk_1155905937
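For my own sanity checking I am using roughly the following; the data-dir path is the one from the log line above.

```bash
# Is local usage under the DataNode data dirs actually shrinking?
du -sh /grid/*/hadoop/hdfs/data

# What does the NameNode report for this DataNode (capacity, blocks, state)?
hdfs dfsadmin -report
```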
03-20-2018
06:43 PM
Hello @Aditya Sirna, thank you for your answer. I added the parameter with a value of 0 but got an exception (HDP 2.6.3.0 on CentOS 7.2):

2018-03-20 21:09:48,207 ERROR namenode.FSNamesystem (FSNamesystem.java:<init>(913)) - FSNamesystem initialization failed. java.lang.IllegalArgumentException: Cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 1 or greater than 6400000

So I doubled the old value (4194304) and now it works. Will HDFS remove the tmp dir on its own? Is there a preset period or configuration for that? Otherwise, might the tmp dir exceed the new limit? Or might HDFS hit an OOM exception while cleaning it, like I did when trying to clean it manually? You can check my other question if you have a comment on it: https://community.hortonworks.com/questions/179904/having-issue-with-tmp-directory-removal.html
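In the meantime I am thinking of doing the housekeeping by hand with something like the sketch below; the 30-day cutoff is just an assumption, and the batching through xargs is there to avoid the OOM I hit when deleting everything in one go.

```bash
# How close is the scratch dir to the limit? (output: DIR_COUNT FILE_COUNT SIZE PATH)
hdfs dfs -count /tmp/hive/hive

# Delete scratch dirs older than ~30 days, in small batches:
hdfs dfs -ls /tmp/hive/hive \
  | awk -v cutoff="$(date -d '30 days ago' +%Y-%m-%d)" '$6 < cutoff {print $8}' \
  | xargs -r -n 100 hdfs dfs -rm -r -skipTrash
```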
03-20-2018
03:20 PM
Hello, I am having an issue with the /tmp/hive/hive directory:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): The directory item limit of /tmp/hive/hive is exceeded: limit=1048576 items=1048576

A short search shows this is controlled by the parameter "dfs.namenode.fs-limits.max-directory-items", documented in hdfs-default.xml. However, it is not available in Ambari. Which file should I update? What is the right path? Should I update it on both hosts for HA mode?
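For context, here is how I am checking the current value, plus where I assume the property would have to go on an Ambari-managed cluster (the Custom hdfs-site route is my assumption, please correct me if there is a better place):

```bash
# Value the NameNode currently resolves for the limit:
hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items

# Assumption: on an Ambari-managed cluster the property is added under
# HDFS > Configs > Advanced > Custom hdfs-site, which pushes it into
# /etc/hadoop/conf/hdfs-site.xml on all hosts (both NameNodes in HA)
# and then prompts for a NameNode restart.
```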
Labels:
- Apache Ambari
- Apache Hadoop