Member since 06-03-2016 · 12 Posts · 1 Kudos Received · 0 Solutions
02-18-2019
06:08 PM
Hi, I'd like to share some feedback about HDP 2.6 with CentOS 7.6. We have a cluster based on HDP 2.6.4. We added new nodes that are now systematically installed under CentOS 7.6. In that configuration, ambari-agent cannot connect to the server and raises the following error in ambari-agent.log:

INFO 2019-02-15 14:25:45,083 NetUtil.py:70 - Connecting to https://xhdpbackgtw02u.hadoop:8440/ca
ERROR 2019-02-15 14:25:45,085 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:618)
ERROR 2019-02-15 14:25:45,085 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2019-02-15 14:25:45,085 NetUtil.py:124 - Server at https://xhdpbackgtw02u.hadoop:8440 is not reachable, sleeping for 10 seconds...

The root cause is NetUtil.py, which fails for an unknown reason. It seems to work if you downgrade the Python packages to the CentOS 7.5 ones. We also tried to upgrade ambari-agent to 2.6.2.2, which is certified under CentOS 7.6, but with the same result. Definitely, ambari-agent doesn't seem to work with Python 2.7.5-76. Good luck...
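A workaround often reported for this CentOS 7.6 / Python 2.7.5-76 combination is to force TLSv1.2 in the agent configuration. A minimal sketch, assuming ambari-agent.ini is at its default location (adjust the path for your install):

```ini
; /etc/ambari-agent/conf/ambari-agent.ini (default path; adjust if needed)
; Add this key under the existing [security] section, then restart
; the agent with: ambari-agent restart
[security]
force_https_protocol=PROTOCOL_TLSv1_2
```

This pins the agent's SSL handshake to TLSv1.2 so it no longer trips over the stricter OpenSSL shipped with CentOS 7.6.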
09-14-2017
07:21 PM
Hi Tom, I have the same issue... Did you finally manage to make Oozie work with Spark2? Régis
06-27-2017
02:55 PM
Hi everybody. By default, most UI addresses (for the HDFS datanode, YARN nodemanager, etc.) are started on http://<hostname's fqdn>:port (50075, 8042, etc.). I haven't managed to change the default server name, which matters to me because I'd like to expose these ports to an all-users network and restrict internal communication to a dedicated one.
I unsuccessfully tried to change yarn.nodemanager.webapp.address to alter the hostname.
I also tried to play with public_hostname.sh to define a public hostname, but the web servers don't seem to start on it. Did someone already achieve this? Regards
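One approach sometimes used for this is to set the bind-address properties to a specific interface IP instead of the default 0.0.0.0, so the UIs only listen on the intended network. A sketch under that assumption (192.0.2.10 is a placeholder IP, not from the original post):

```xml
<!-- hdfs-site.xml: bind the DataNode web UI to one interface only.
     192.0.2.10 is a placeholder for the network you want to expose. -->
<property>
  <name>dfs.datanode.http.address</name>
  <value>192.0.2.10:50075</value>
</property>

<!-- yarn-site.xml: same idea for the NodeManager web UI. -->
<property>
  <name>yarn.nodemanager.webapp.address</name>
  <value>192.0.2.10:8042</value>
</property>
```

Note this changes which interface the server binds to, not the hostname it advertises, so internal traffic on the other interface would no longer reach these ports.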
08-23-2016
01:11 PM
Hi. Does anybody have information about HDP 2.5 (component versions and release date)?
Someone told me that it should arrive this summer, but I can't find any confirmation. Thanks in advance
06-15-2016
08:51 PM
Hmm...
It seems that I have to use the new producer and consumer API, not the old one.
Now it works, but I still have warnings in kafka.out... With 6 lines of warnings every second, I will quickly have a problem.
06-15-2016
03:34 PM
Hi Neeraj, I am trying to do exactly the same thing, i.e. using Ranger with a non-kerberized Kafka. Unfortunately I have the following error:

[root@mykafka kafka]# tail -f kafka.out
[2016-06-15 15:45:34,002] WARN got exception trying to get groups for user ANONYMOUS: id: ANONYMOUS: no such user (org.apache.hadoop.security.ShellBasedUnixGroupsMapping)
[2016-06-15 15:45:34,002] WARN No groups available for user ANONYMOUS (org.apache.hadoop.security.UserGroupInformation)

The public group should be mapped to an ANONYMOUS user: https://cwiki.apache.org/confluence/display/RANGER/Kafka+Plugin#KafkaPlugin-WhydowehavetospecifypublicusergrouponallpoliciesitemscreatedforauthorizingKafkaaccessovernon-securechannel? Did you do something special to declare it manually within Ranger? Can you share the list of declared users within Ranger? Thanks in advance. Regards
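For what it's worth, that WARN comes from ShellBasedUnixGroupsMapping trying to resolve ANONYMOUS as an OS account on the broker. One workaround sometimes reported is simply creating that local account; this is a sketch under that assumption, and the group name public mirrors the Ranger policy group rather than anything mandated by Kafka:

```shell
# Hypothetical workaround sketch: give the OS an ANONYMOUS account so
# ShellBasedUnixGroupsMapping can resolve it. Run as root on each broker.
groupadd public
useradd -g public ANONYMOUS
# Show the group mapping the Hadoop shell-based resolver will now see:
id -gn ANONYMOUS
```

Whether Ranger then authorizes the access still depends on having a policy that grants the public group the relevant Kafka permissions.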
06-08-2016
08:00 AM
Of course you're right. I was thinking that this property was inherited from the server, but it is a consumer feature. It works as wished now...
I still have an issue when I want to personalize the offsets topic:

[2016-06-08 05:20:02,664] WARN Property offset.storage.topic is not valid (kafka.utils.VerifiableProperties)

I will investigate, but if I ignore this property and use the default topic __consumer_offsets, it works fine. Thanks a lot for your clarification.
06-07-2016
04:25 PM
Yes, I set it to true. I made a mistake... It seems that it is offsets.storage (with an S) and offset.storage.topic (without an S)... See: http://kafka.apache.org/documentation.html But even after having fixed this, it doesn't work. Did you manage to use offsets management within Kafka? Regards
06-07-2016
03:43 PM
Hi. Almost everything is in the title... I want to switch from offsets management in ZK to offsets management in Kafka.
How to do this in HDP 2.4? I tried to add offsets.storage and offsets.storage.topic in the "custom kafka broker" section, and the parameters are well taken into account:

[kafka@mykafka conf]$ grep offsets.storage *
server.properties:offsets.storage=kafka
server.properties:offsets.storage.topic=offset-topic

But it doesn't work... and nothing is recorded in my offsets topic. Thanks in advance. Regards...
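For what it's worth, in the Kafka 0.8.2-era old high-level consumer, offsets.storage is a consumer-side setting rather than a broker one, so putting it in server.properties has no effect. A minimal sketch of a consumer.properties under that assumption:

```properties
# Old high-level consumer settings (Kafka 0.8.2/0.9 era).
# offsets.storage selects Kafka-based offset storage instead of ZooKeeper.
offsets.storage=kafka
# During migration, also commit offsets to ZooKeeper so existing
# consumers can still find them; disable once fully switched over.
dual.commit.enabled=true
```

With this in place, offsets land in the internal __consumer_offsets topic rather than a custom-named one.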
06-06-2016
07:36 AM
Hi Bert. There is no problem provisioning a Kafka cluster from Ambari. I did it from the web UI (I didn't test it with a blueprint yet); my cluster is just made of Kafka + ZooKeeper (managed as a dependency by Ambari). I also added Ambari Metrics... and I don't have HDFS. So up to now it works fine... The only thing I saw is that if you also want Ranger for Kafka, there is a dependency on HDFS (but I did not install Ranger).
My post is about the addition of Kerberos... Ambari wants to create this principal:

hdp24.localdomain,/HDFS/NAMENODE/hdfs,${hadoop-env/hdfs_user}-hdp24@LOCALDOMAIN,USER,${hadoop-env/hdfs_user},/etc/security/keytabs/hdfs.headless.keytab,${hadoop-env/hdfs_user},r,hadoop,r,440,unknown

but because I don't have HDFS, the variable ${hadoop-env/hdfs_user} is not set and the installer fails. If I found a way to set this variable, it should work, even if I don't care about this Kerberos principal... Keep in touch if you want. Regards
06-03-2016
08:29 AM
Hi everybody. We are currently using HDP and we want to implement a new cluster containing only Kafka, based on HDP 2.4.2. This cluster will be independent from our Hadoop one and will be used for both Hadoop and ELK (and maybe more later... that's the reason why we want it to be isolated).
This Kafka cluster needs to be secured with Kerberos... But during this setup with Ambari, the installer wants to create a principal for HDFS, which makes no sense since we do not have HDFS on this cluster. The installer fails because of a missing parameter ${hadoop-env/hdfs_user}.
Is it a known bug that will be fixed in a next release?
Can we do this without installing HDFS (which works, I tested it...) and then trying to uninstall it cleanly? As a workaround, can I add this variable ${hadoop-env/hdfs_user} within Ambari or somewhere else to allow the installer to go further? I don't care if I have an additional and useless principal in Kerberos.
Regards
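One thing that might unblock the installer, assuming the failure really is just the unresolved ${hadoop-env/hdfs_user} variable, is to define that config value explicitly. A hedged sketch using Ambari's bundled configs.sh helper (the host and cluster names below are placeholders, and the exact script invocation may vary by Ambari version):

```shell
# Hypothetical sketch: set hadoop-env/hdfs_user on a cluster with no HDFS,
# so the Kerberos wizard can resolve ${hadoop-env/hdfs_user}.
# AMBARI_HOST and MYCLUSTER are placeholders for your environment.
/var/lib/ambari-server/resources/scripts/configs.sh \
  set AMBARI_HOST MYCLUSTER hadoop-env hdfs_user hdfs
```

The resulting principal for the phantom hdfs user would be harmless, matching the "I don't care about this principal" approach described above.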