Member since: 02-24-2016
Posts: 24
Kudos Received: 7
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 883 | 02-02-2017 10:19 AM |
| | 3834 | 12-20-2016 10:40 AM |
01-05-2018
11:52 AM
There are some parameters to manage the log cleaner:
- log.retention.check.interval.ms --> interval at which log segments are checked against the configured retention policies.
- log.retention.bytes --> defines the maximum size of the topic's log before old segments are deleted.
- log.retention.hours --> defines how long a message is kept in a topic.
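As a sketch, those settings would look like this in the broker's `server.properties`; the values below are the Kafka defaults (with `-1` meaning no size limit), shown as examples rather than recommendations:

```properties
# Check for deletable log segments every 5 minutes
log.retention.check.interval.ms=300000
# Maximum size of a partition's log before old segments are deleted (-1 = unlimited)
log.retention.bytes=-1
# Keep messages for 7 days
log.retention.hours=168
```

Note that `log.retention.bytes` applies per partition, so a topic's total retained size is roughly this value times the partition count.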
05-18-2017
12:42 PM
I went crazy searching for a solution to integrate Atlas with LDAP before the summer and I couldn't fix it. But now this workaround has worked in my HDP 2.5 cluster. Thank you so much @dvillarreal.
02-02-2017
10:19 AM
Thanks, it is working fine now. I think it was a conflict of policies, because after I removed all resource-based policies it became able to pick up the tag-based policies.
02-02-2017
08:04 AM
Yes, all clients are installed on the host.
02-02-2017
08:02 AM
Thanks, but bug RANGER-1271 is related to Ranger 0.7, and HDP 2.5 includes Ranger 0.6.
02-02-2017
07:58 AM
You are right, the atlas folder is missing, as the screenshot below shows. After executing the commands below and restarting the Ranger service, the "test connection" works.
cd /usr/hdp/2.5.0.0-1245/ranger-admin/ews/webapp/WEB-INF/classes/ranger-plugins
mkdir atlas
cp /usr/hdp/2.5.0.0-1245/atlas/libext/ranger-atlas-plugin-impl/ranger-atlas-plugin-0.6.0.2.5.0.0-1245.jar atlas/
chown -R ranger:ranger atlas
Thanks @Ramesh Mani 🙂
02-01-2017
11:23 AM
1 Kudo
I'm having problems integrating Ranger and Atlas in a clean HDP 2.5 environment. Ranger syncs the Atlas policies and Atlas is auditing, but when I click "test connection" on the Ranger Atlas repo, the test fails. The log /var/log/ranger/admin/xa_portal.log shows that a dependency is missing in Ranger. To fix it, I copied the ranger-atlas-plugin-VERSION.jar to the lib folder of Ranger:
cp /usr/hdp/2.5.0.0-1245/atlas/libext/ranger-atlas-plugin-impl/ranger-atlas-plugin-0.6.0.2.5.0.0-1245.jar /usr/hdp/current/ranger-admin/ews/lib
After that the error changes, and now I'm getting a ClassCastException that I don't know how to fix. Does anyone know how to fix it, or has anyone gotten "test connection" to pass for the Ranger Atlas plugin? Thanks in advance 🙂
02-01-2017
10:49 AM
Did you fix your problem with tagsync @Hitesh Rajpurohit? Currently I'm having the same issue with a clean HDP 2.5.0 installation.
12-23-2016
10:43 AM
2 Kudos
I have installed HDP 2.5 with Atlas 0.7.0.2.5, and I don't need HBase to use Atlas. Currently I'm using Solr (Ambari Infra) as the index engine and BerkeleyJE as the storage engine. The real dependency between Atlas and HBase is the audit repository, which uses HBase by default. Changing it is not easy, but investigating the source code I found a special property, atlas.EntityAuditRepository.impl, that you have to set to the value org.apache.atlas.repository.audit.InMemoryEntityAuditRepository (it is case sensitive, so copy and paste the property name and value exactly). @Chad Woodhedad, add the above property as the screenshot shows, restart the Atlas services, and you will have it: Atlas without HBase 🙂

And now some details about how I found that property: in this link from GitHub you can see why Atlas needs HBase by default, and in this link from GitHub you can find the available values you can use to configure the audit. Don't worry about the rest of the HBase-related properties; Atlas will use the value of atlas.graph.storage.hbase.table to create the table in the storage backend that you choose (BerkeleyJE or Cassandra). With these properties my Atlas services work very well. I hope this information is helpful and lets you avoid installing HBase in your clusters.
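Sketched as an atlas-application.properties fragment; the audit property is the one quoted above, while the storage and index settings are illustrative assumptions for a BerkeleyJE + Solr setup (adjust the directory path for your environment):

```properties
# Swap the default HBase-backed audit repository for the in-memory one
# (property name and value are case sensitive)
atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.InMemoryEntityAuditRepository

# Illustrative storage/index backends: BerkeleyJE for the graph, Solr for search
atlas.graph.storage.backend=berkeleyje
atlas.graph.storage.directory=/var/lib/atlas/data/berkeley
atlas.graph.index.search.backend=solr5
```

With this setup the in-memory audit repository keeps audit entries only for the lifetime of the Atlas process, which is the trade-off for dropping the HBase dependency.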
12-21-2016
11:45 AM
Yes, I did those steps. Here is an example of the verbose Ambari Agent log. Each text file is the log generated by one restart of the ambari-agent. Logs --> ambari-log1.txt and ambari-log2.txt
12-20-2016
10:40 AM
Yes, Ambari supports Hue 2.6.0, but Hortonworks is working to replace Hue (which is a Cloudera component) with Ambari Views. https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-views/content/ch_using_ambari_views.html http://hortonworks.com/apache/ambari/#section_4 If you want to use Hue, you can use the old version 2.6.0 supported by Hortonworks, or manually install the latest version of Cloudera Hue using the following guide: http://gethue.com/hadoop-hue-3-on-hdp-installation-tutorial/. The guide is for installing Hue 3.9, but I think you can follow it to install version 3.12.
12-20-2016
10:09 AM
There is only one ambari-agent running, but I just discovered that the PID changes every minute. I have reviewed the ambari-agent log again, and the error below (Error in responseId sequence - restarting) appears exactly every minute. So for some reason the ambari-agent is restarted every minute. If I stop the ambari-agent, the service does not start automatically, so the agent only restarts while it is running. Any idea what the root cause of this strange behavior is? It is the first time I have seen something like this. Thanks for your time, bhagan.
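To illustrate spotting that one-minute restart cadence, here is a sketch that greps a fabricated log excerpt for the quoted error; on a real node the file would be /var/log/ambari-agent/ambari-agent.log, and the sample lines below are invented to mirror the message in this thread:

```shell
# Fabricated log excerpt standing in for /var/log/ambari-agent/ambari-agent.log
log=$(mktemp)
printf '%s\n' \
  'INFO  2016-12-20 10:01:02 Heartbeat response received (responseId=17)' \
  'ERROR 2016-12-20 10:02:02 Error in responseId sequence - restarting' \
  'INFO  2016-12-20 10:02:05 Heartbeat response received (responseId=0)' \
  'ERROR 2016-12-20 10:03:02 Error in responseId sequence - restarting' > "$log"
# Count the self-restarts and print their timestamps to confirm the cadence
restarts=$(grep -c 'Error in responseId sequence' "$log")
echo "agent restarted $restarts times"
grep 'Error in responseId sequence' "$log" | awk '{print $3}'
```

The responseId resetting to 0 after each error is the tell that the agent re-registered with the server rather than the host rebooting.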
12-19-2016
07:54 AM
Thanks bhagan. The logs below are from /var/log/ambari-server/ambari-server.log; they are a bit strange and I don't know how to interpret them. The ambari-agent configuration is the default one, and all services are installed on the same server as the ambari-server (all ports are open).
12-16-2016
11:54 AM
Did you try the code that I shared at https://community.hortonworks.com/comments/30193/view.html a few comments above?
12-16-2016
11:49 AM
1 Kudo
I need help because my sandbox environment is behaving strangely. I can't start/stop/restart services, either through Ambari or using the REST API. As the screenshot below shows, after a few seconds Ambari gets a timeout error. Also, the status of some services changes frequently without my sending any command. I updated the PostgreSQL database to version 9.4, reinstalled the ambari-agent, and deleted the temporary/cache files of ambari-server and ambari-agent. I found that there are always some threads between Ambari and the database. Maybe these connections are preventing Ambari from sending more commands? I tried to kill all the processes, but after a couple of seconds all the threads appear again... Does anyone have an idea of what is happening to the cluster? If you need more information, ask and I will provide it. Thanks in advance.
Labels: Apache Ambari
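For reference, starting a service over the Ambari REST API looks roughly like the sketch below. The cluster name, service name, and admin:admin credentials are placeholder assumptions; the DRY_RUN guard only prints the request instead of sending it, so you can inspect the call before pointing it at a real server:

```shell
# Placeholder values - adjust for your cluster
AMBARI_HOST=localhost:8080
CLUSTER=Sandbox
SERVICE=HDFS
DRY_RUN=1

# Build the PUT request that asks Ambari to move the service to STARTED;
# the X-Requested-By header is required by Ambari's CSRF protection.
CMD="curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{\"RequestInfo\":{\"context\":\"Start $SERVICE via REST\"},\"Body\":{\"ServiceInfo\":{\"state\":\"STARTED\"}}}' \
  http://$AMBARI_HOST/api/v1/clusters/$CLUSTER/services/$SERVICE"

if [ "$DRY_RUN" = 1 ]; then echo "$CMD"; else eval "$CMD"; fi
```

Setting `"state":"INSTALLED"` in the same request body stops the service instead; if the request itself times out like the UI does, that points at the server side rather than the client.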
12-14-2016
12:29 PM
1 Kudo
There is another option that does not require disabling the Ranger plugin. According to the official Atlas documentation, if the cluster has Ranger installed you only need to create a couple of Ranger policies for Kafka:
04-28-2016
01:32 PM
2 Kudos
After a hard day our team solved the issue, and we have created a little Spark application that can read from a kerberized Kafka and be launched in yarn-cluster mode. Attached you have the pom.xml needed (it is very important to use the Kafka 0.9 dependency) and the Java code.
It is very important to use the old Kafka API and to use PLAINTEXTSASL (and not SASL_PLAINTEXT, a value coming from the new Kafka API) when configuring the Kafka stream. The jaas.conf file is the same one that @Alexander Bij shared. Be careful, because the keytab path is not a local path; it is the path where the executor stores the file (by default you only have to indicate the name of the keytab).
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
useKeyTab=true
principal="serviceaccount@DOMAIN.COM"
keyTab="serviceaccount.headless.keytab"
renewTicket=true
storeKey=true
serviceName="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
useKeyTab=true
principal="serviceaccount@DOMAIN.COM"
keyTab="serviceaccount.headless.keytab"
renewTicket=true
storeKey=true
serviceName="zookeeper";
};
Kafka will use this information to retrieve the Kerberos configuration required when connecting to ZooKeeper and to the broker. To do that, the executor needs the keytab file to obtain the Kerberos ticket before connecting to the broker. That is why the keytab is also shipped to the container when launching the application. Please note that extraJavaOptions refers to the container-local path (the file will be placed in the root folder of the container's temp configuration), but --files requires the local path where you have the jaas file on the server that launches the application.
spark-submit (all your stuff) \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka_client_jaas.conf" \
--files "your_other_files,kafka_client_jaas.conf,serviceaccount.headless.keytab" \
(rest of your stuff)
Also verify that the parameter where you indicate the jaas file is java.security.auth.login.config. And that's all: following the instructions above and using the attached code, you will be able to read from kerberized Kafka using Spark 1.5.2. In our kerberized environment we have Ambari 2.2.0.0, HDP 2.3.4.0 and Spark 1.5.2. I hope this information helps you.
04-28-2016
12:33 PM
It is not enabled for the Direct API connector, but it is enabled with the old one.
04-28-2016
07:20 AM
I have tried this approach and it doesn't work 😞. Spark doesn't throw any error, but it doesn't read anything from the kerberized Kafka.
04-28-2016
07:18 AM
Do you have the code with the changes that you made in that jar?