Member since: 02-24-2016
Posts: 24
Kudos Received: 7
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2358 | 02-02-2017 10:19 AM
 | 6278 | 12-20-2016 10:40 AM
12-20-2016
10:40 AM
Yes, Ambari supports Hue 2.6.0, but Hortonworks is working to replace Hue (which is a Cloudera component) with Ambari Views. https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-views/content/ch_using_ambari_views.html http://hortonworks.com/apache/ambari/#section_4 If you want to use Hue, you can use the old version 2.6.0 supported by Hortonworks, or manually install the latest version of Cloudera's Hue using the following guide: http://gethue.com/hadoop-hue-3-on-hdp-installation-tutorial/. The guide covers installing Hue 3.9, but I think you can follow it to install version 3.12.
12-20-2016
10:09 AM
There is only one ambari-agent running, but I just discovered that the PID changes every minute. I have reviewed the ambari-agent log again, and the error below (Error in responseId sequence - restarting) appears exactly every minute. So, for some reason, the ambari-agent is restarted every minute. If I stop the ambari-agent, the service does not start automatically, so the agent only restarts while it is running. Any idea what the root cause of this strange behavior is? It is the first time I have seen something like this. Thanks for your time, bhagan.
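A quick way to see the restarts is to watch the agent PID, assuming the default pid file location:
watch -n 10 'cat /var/run/ambari-agent/ambari-agent.pid'   # the PID should stay stable between checks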
12-19-2016
07:54 AM
Thanks bhagan. The logs below are from /var/log/ambari-server/ambari-server.log; they are a bit strange and I don't know how to interpret them. The configuration of the ambari-agent is the default one, and all services are installed on the same server as the ambari-server (all ports are open).
12-16-2016
11:54 AM
Did you try the code that I shared at https://community.hortonworks.com/comments/30193/view.html a few comments above?
12-16-2016
11:49 AM
1 Kudo
I need help because my sandbox environment is behaving strangely. I can't start/stop/restart services, neither through Ambari nor using the REST API. As the screenshot below shows, after a few seconds Ambari gets a timeout error. Also, the status of some services changes frequently without any command being sent. I updated the PostgreSQL database to version 9.4, reinstalled the ambari-agent, and deleted the temporary/cache files of ambari-server and ambari-agent. I found that there are always some threads between Ambari and the database. Maybe these connections don't let Ambari send more commands? I tried to kill all the processes, but after a couple of seconds all the threads appear again... Does someone have an idea of what is happening to the cluster? If you need more information, request it and I will provide it. Thanks in advance.
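For reference, this is the kind of REST call that ends in a timeout here; a minimal sketch assuming the sandbox defaults (cluster name "Sandbox", admin/admin credentials, HDFS as the example service):
# Ask Ambari to start the HDFS service through the REST API
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Start HDFS via REST"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/HDFS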
Labels:
- Apache Ambari
12-14-2016
12:29 PM
1 Kudo
There is another option that does not require disabling the Ranger plugin. According to the official Atlas documentation, if the cluster has Ranger installed, all that is needed is to create a couple of Ranger policies for Kafka.
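As a rough sketch, such a policy can be created through Ranger's public REST API; the Ranger host, the "Sandbox_kafka" service name, the ATLAS_HOOK topic and the atlas user below are assumptions that you have to adapt to your cluster:
# Create a Ranger policy allowing the atlas user to publish/consume on the ATLAS_HOOK topic
curl -u admin:admin -H "Content-Type: application/json" -X POST \
  http://ranger-host:6080/service/public/v2/api/policy \
  -d '{
        "service": "Sandbox_kafka",
        "name": "atlas-hook-topic",
        "isEnabled": true,
        "resources": { "topic": { "values": ["ATLAS_HOOK"] } },
        "policyItems": [ {
          "users": ["atlas"],
          "accesses": [ { "type": "publish", "isAllowed": true },
                        { "type": "consume", "isAllowed": true } ]
        } ]
      }'
The same can of course be done from the Ranger Admin UI instead of the REST API.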
04-28-2016
01:32 PM
2 Kudos
After a hard day our team solved the issue, and we have created a little Spark application that can read from a kerberized Kafka and can be launched in yarn-cluster mode. Attached you have the pom.xml needed (it is very important to use the Kafka 0.9 dependency) and the Java code. It is very important to use the old Kafka API and to use PLAINTEXTSASL (and not SASL_PLAINTEXT, a value coming from the new Kafka API) when configuring the Kafka stream. The jaas.conf file is the same that @Alexander Bij shared. Be careful: the keytab path is not a local path, it is the path where the executor stores the file (by default you only have to indicate the name of the keytab).
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
useKeyTab=true
principal="serviceaccount@DOMAIN.COM"
keyTab="serviceaccount.headless.keytab"
renewTicket=true
storeKey=true
serviceName="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
useKeyTab=true
principal="serviceaccount@DOMAIN.COM"
keyTab="serviceaccount.headless.keytab"
renewTicket=true
storeKey=true
serviceName="zookeeper";
};
Kafka will use this information to retrieve the Kerberos configuration required when connecting to ZooKeeper and to the broker. In order to do that, the executor will require the keytab file to get the Kerberos token before connecting to the broker. That is the reason for also sending the keytab to the container when launching the application. Please notice that extraJavaOptions refers to the container-local path (the file will be placed in the root folder of the container's temporary configuration), but --files requires the local path where you have the jaas file on the server that is starting up the application.
spark-submit (all your stuff) \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka_client_jaas.conf" \
--files "your_other_files,kafka_client_jaas.conf,serviceaccount.headless.keytab" \
(rest of your stuff)
Also verify that the parameter where you indicate the jaas file is java.security.auth.login.config. And that's all: following the above instructions and using the attached code, you will be able to read from a kerberized Kafka using Spark 1.5.2. In our kerberized environment we have Ambari 2.2.0.0, HDP 2.3.4.0 and Spark 1.5.2. I hope this information helps you.
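In case the attachment is not visible, below is a minimal sketch of what the stream creation looks like with the old receiver-based API; the class name, topic, ZooKeeper quorum and group id are placeholders, not the attached code:
import java.util.HashMap;
import java.util.Map;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class KerberizedKafkaRead {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("KerberizedKafkaRead");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

    // Old (receiver-based) Kafka API: PLAINTEXTSASL, not SASL_PLAINTEXT
    Map<String, String> kafkaParams = new HashMap<String, String>();
    kafkaParams.put("zookeeper.connect", "zk-host:2181");     // placeholder ZK quorum
    kafkaParams.put("group.id", "spark-kerberos-consumer");   // placeholder consumer group
    kafkaParams.put("security.protocol", "PLAINTEXTSASL");

    // topic name -> number of receiver threads
    Map<String, Integer> topics = new HashMap<String, Integer>();
    topics.put("my-topic", 1);                                 // placeholder topic

    JavaPairReceiverInputDStream<String, String> messages =
        KafkaUtils.createStream(jssc, String.class, String.class,
            StringDecoder.class, StringDecoder.class,
            kafkaParams, topics, StorageLevel.MEMORY_AND_DISK_SER_2());

    messages.print();   // just print the (key, value) pairs to verify consumption

    jssc.start();
    jssc.awaitTermination();
  }
}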
04-28-2016
12:33 PM
It is not enabled for the Direct API connector, but it is enabled with the old one.
04-28-2016
07:20 AM
I have tried this approach and it doesn't work 😞. Spark doesn't throw any error, but it doesn't read anything from the kerberized Kafka.
04-28-2016
07:18 AM
Do you have the code with the changes that you made to that jar?