Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 992 | 06-04-2025 11:36 PM |
| | 1564 | 03-23-2025 05:23 AM |
| | 780 | 03-17-2025 10:18 AM |
| | 2811 | 03-05-2025 01:34 PM |
| | 1853 | 03-03-2025 01:09 PM |
04-25-2019
07:13 PM
@Yegane Ahmadnejad
1. Don't manually set JAVA_HOME if you are on RHEL/CentOS; register the JDK with alternatives instead:
tar xzf jdk-8u171-linux-x64.tar.gz
cd /opt/jdk1.8.0_171/
alternatives --install /usr/bin/java java /opt/jdk1.8.0_171/bin/java 2
alternatives --config java
alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_171/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_171/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_171/bin/jar
alternatives --set javac /opt/jdk1.8.0_171/bin/javac
2. Don't change the warehouse root directory.
3. Don't create the hiveserver2 znode manually.
4. I didn't see the Hive database setup step. Did you create the Hive metastore database?
04-25-2019
04:38 AM
@Madhura Mhatre Surely you could just do that, but what happens to the replicas stored on that particular DataNode? Your cluster has to reconstruct those replicas somewhere if you had a replication factor of more than 1. Here I was talking about planned maintenance! Just switching the node off will force your cluster to do the same reconstruction in the background, raising alerts, and ONLY when the replicas have been reconstructed will those alerts go away. There is a performance cost for both decommissioning and just unplugging the DataNode.
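For reference (a minimal sketch using standard HDFS CLI commands), this is how you can watch that background reconstruction happen after a node goes away:
# Report live/dead DataNodes and overall block health
hdfs dfsadmin -report
# Count under-replicated blocks; the number drops to zero once
# the cluster has finished reconstructing the lost replicas
hdfs fsck / | grep -i 'under-replicated'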
04-24-2019
04:40 PM
@Madhura Mhatre It's well documented by Hortonworks. Once you launch the decommissioning, the blocks on that node will be redistributed to the remaining nodes. If the replication factor is higher than the number of DataNodes left after the removal, the decommissioning process is not going to succeed! The standard exclude-file procedure is sketched below.
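A minimal sketch of that procedure (the exclude-file path and hostname are assumptions; check the dfs.hosts.exclude property in your hdfs-site.xml for the real path):
# Add the DataNode's hostname to the exclude file referenced by dfs.hosts.exclude
echo "datanode3.example.com" >> /etc/hadoop/conf/dfs.exclude
# Tell the NameNode to re-read the include/exclude files
hdfs dfsadmin -refreshNodes
# Watch the node move from "Decommission in progress" to "Decommissioned"
hdfs dfsadmin -report | grep -A1 'Decommission'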
04-23-2019
10:55 PM
@Pierre Correia It seems to be an issue with your auth_to_local rules. The best option before manually editing the auth_to_local is to regenerate the keytabs. Assuming you have the HDFS, YARN, and Spark clients installed, check the rules under HDFS --> Configs --> Advanced --> hadoop.security.auth_to_local:
RULE:[1:$1@$0](ambari-qa-{cluster_name}@DOMAIN.LOCAL)s/.*/ambari-qa/
RULE:[1:$1@$0](hbase-{cluster_name}@DOMAIN.LOCAL)s/.*/hbase/
RULE:[1:$1@$0](hdfs-{cluster_name}@DOMAIN.LOCAL)s/.*/hdfs/
RULE:[1:$1@$0](spark-{cluster_name}@DOMAIN.LOCAL)s/.*/spark/
RULE:[1:$1@$0](zeppelin-{cluster_name}@DOMAIN.LOCAL)s/.*/zeppelin/
RULE:[1:$1@$0](.*@DOMAIN.LOCAL)s/@.*//
RULE:[2:$1@$0](amshbase@DOMAIN.LOCAL)s/.*/ams/
RULE:[2:$1@$0](amszk@DOMAIN.LOCAL)s/.*/ams/
RULE:[2:$1@$0](atlas@DOMAIN.LOCAL)s/.*/atlas/
RULE:[2:$1@$0](beacon@DOMAIN.LOCAL)s/.*/beacon/
RULE:[2:$1@$0](dn@DOMAIN.LOCAL)s/.*/hdfs/
RULE:[2:$1@$0](hbase@DOMAIN.LOCAL)s/.*/hbase/
RULE:[2:$1@$0](hive@DOMAIN.LOCAL)s/.*/hive/
RULE:[2:$1@$0](jhs@DOMAIN.LOCAL)s/.*/mapred/
RULE:[2:$1@$0](knox@DOMAIN.LOCAL)s/.*/knox/
RULE:[2:$1@$0](nifi@DOMAIN.LOCAL)s/.*/nifi/
RULE:[2:$1@$0](nm@DOMAIN.LOCAL)s/.*/yarn/
RULE:[2:$1@$0](nn@DOMAIN.LOCAL)s/.*/hdfs/
RULE:[2:$1@$0](oozie@DOMAIN.LOCAL)s/.*/oozie/
RULE:[2:$1@$0](rangeradmin@DOMAIN.LOCAL)s/.*/ranger/
RULE:[2:$1@$0](rangertagsync@DOMAIN.LOCAL)s/.*/rangertagsync/
RULE:[2:$1@$0](rangerusersync@DOMAIN.LOCAL)s/.*/rangerusersync/
RULE:[2:$1@$0](rm@DOMAIN.LOCAL)s/.*/yarn/
RULE:[2:$1@$0](yarn@DOMAIN.LOCAL)s/.*/yarn/
DEFAULT
Your rules won't match these exactly but should look like the above, depending on which HDP components are installed. A quick way to test how a principal resolves is shown below.
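As a quick check (a minimal sketch; the principal name is illustrative), Hadoop ships a small utility that prints which local user a given principal maps to under the active rules:
# Resolve a principal through the configured auth_to_local rules;
# with the rules above, nn/host.example.com@DOMAIN.LOCAL should map to "hdfs"
hadoop org.apache.hadoop.security.HadoopKerberosName nn/host.example.com@DOMAIN.LOCAL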
04-23-2019
02:58 PM
@Shilpa Gokul If this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer. That would be a great help to Community users looking for a quick solution to these kinds of errors.
04-23-2019
12:28 PM
@Dennis Suhari If this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer. That would be a great help to Community users looking for a quick solution to these kinds of errors.
04-23-2019
05:21 AM
1 Kudo
@Michael Bronson Out-of-the-box configs are much easier, but the config you have implemented is the correct way to integrate Presto with Hadoop; these files must be present on all the Presto nodes 🙂
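For reference, a minimal sketch of the usual Hive catalog wiring (the metastore host and config paths are assumptions; adjust to your environment):
# Create the Hive catalog on every Presto node (host and paths are assumptions)
cat > etc/catalog/hive.properties <<'EOF'
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore.example.com:9083
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
EOF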
04-23-2019
04:12 AM
@Shilpa Gokul Please have a look at this HCC document by Neeraj Sabharwal on how to set up Kafka/Ranger without Kerberos. It should still be valid with a few tweaks.
04-22-2019
03:55 PM
@Shilpa Gokul Is the Ranger plugin enabled for Kafka?