Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 895 | 06-04-2025 11:36 PM |
| | 1491 | 03-23-2025 05:23 AM |
| | 739 | 03-17-2025 10:18 AM |
| | 2649 | 03-05-2025 01:34 PM |
| | 1767 | 03-03-2025 01:09 PM |
05-16-2018 11:44 AM
@Sriram Can you check the document referenced by @Harald Berghoff?
05-16-2018 11:29 AM
@Bhushan Kandalkar When you start or restart Hue on a secure cluster, keys are generated at $HUE_HOME. If generated keystore files already exist in that location, the script does nothing. The script is located at $HUE_HOME/bin/secure.sh and runs with a set of default parameters, which should not be changed. What is your current cacert path?
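To rule out a certificate problem, you could also inspect the CA certificate that cacerts points at and confirm it has not expired. A minimal check, assuming OpenSSL is available (the path below is a placeholder for your actual cacert path):

$ openssl x509 -in /path/to/cacert.pem -noout -subject -issuer -dates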
05-16-2018 09:02 AM
1 Kudo
@Shailna Patidar Yes, you can do that. Here is how to do it for a particular file or for an entire directory. You can use the command below to set the replication factor of an individual file to 4:

hadoop dfs -setrep -w 4 /path/to/file

The command below changes the replication of an entire directory under HDFS, and all files under it recursively, to 4:

hadoop dfs -setrep -R -w 4 /directory/path

Hope that helps.
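To confirm the change took effect, here is a quick sketch assuming a reasonably recent Hadoop release (the -stat %r format specifier prints a file's replication factor; the path is a placeholder):

$ hdfs dfs -stat %r /path/to/file
# should print 4 once the change has been applied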
05-16-2018 08:43 AM
@Bhushan Kandalkar Here is a tip. In the [[ssl]] section of the hue.ini file (under the beeswax section), set validate to true:

[[ssl]]
# SSL communication enabled for this server.
# Path to certificate authority certificates.
## cacerts=/path/cert.pem
# Choose whether Hue should validate certificates received from the server.
validate=true

On a secure cluster, make sure that no custom authentication mechanism is turned on, and configure hive-site.xml with the following property:

<property>
  <name>hive.server2.thrift.sasl.qop</name>
  <value>auth-conf</value>
  <description>Sasl QOP value; one of 'auth', 'auth-int' and 'auth-conf'</description>
</property>

Then restart Hue, the Hive Metastore, and HiveServer2.
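Before restarting everything, you may want to verify that the certificate the server presents actually validates against the CA bundle Hue will use. A rough check with OpenSSL (the hostname, port, and cacert path are placeholders; the port must be the SSL-enabled one):

$ openssl s_client -connect hiveserver2.example.com:10001 -CAfile /path/cert.pem
# look for "Verify return code: 0 (ok)" in the output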
05-16-2018 07:09 AM
@Markus Wilhelm I have done hundreds of Kerberos setups on Hadoop and have never encountered this particular error; weird. To be able to help you, can you share a scrambled version of the files below?

/var/kerberos/krb5kdc/kadm5.acl
/etc/krb5.conf
/var/kerberos/krb5kdc/kdc.conf

Can you ensure that all the components of your cluster BIKW are running? Validate that the services are okay with "Run Service Check". Make sure the daemons below are set to autostart and are running before launching the Ambari Kerberos Utility:

# chkconfig krb5kdc on
# chkconfig kadmin on

Please revert.
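A quick sanity check that the KDC itself is answering, assuming a working admin principal exists (the realm and principal below are placeholders for yours):

# service krb5kdc status
# service kadmin status
$ kinit admin/admin@EXAMPLE.COM
$ klist
# klist should show a valid TGT for admin/admin@EXAMPLE.COM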
05-15-2018 04:10 PM
@Mokkan Mok Unfortunately, there is no direct upgrade path from HDP 2.2 to 2.6. You will have to first upgrade from 2.2 to 2.4, and then from 2.4 to 2.6. You should also remember to do the same for the Ambari server: if you are running HDP 2.3, 2.2, 2.1, or 2.0, then in order to use Ambari 2.6 you must first upgrade to HDP 2.4 or higher using Ambari 2.5.2, 2.4.3, 2.2.2, 2.2.1, 2.2, 2.1, or 2.0. Once that is completed, upgrade your current Ambari version to Ambari 2.6. Hope that answers your queries.
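For reference, the Ambari server upgrade itself usually boils down to the steps below on RHEL/CentOS; this is a sketch, not the full procedure (point /etc/yum.repos.d/ambari.repo at the target Ambari version first, and back up the Ambari database before starting):

# ambari-server stop
# yum clean all
# yum upgrade ambari-server
# ambari-server upgrade
# ambari-server start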
05-14-2018 06:04 PM
@Arshadullah Khan To use SSL for HiveServer2, you will need to first enable SSL for HiveServer2. If you have already done that, then the connect string should look like the one below. Note the quotes around the JDBC URL: without them, the shell would treat the semicolons as command separators.

beeline -n abcd -u "jdbc:hive2://xyz.abc.org:10001/default;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=Password"

Hope that helps.
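If you still need to build the truststore, here is a minimal sketch using the JDK's keytool (the certificate file, alias, paths, and password below are placeholders):

$ keytool -importcert -alias hiveserver2 \
    -file /path/to/hiveserver2-cert.pem \
    -keystore /path/to/truststore.jks \
    -storepass Password -noprompt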
05-13-2018 03:12 PM
@Michael Bronson Yes, I understand there is likely some conflict in the config; that is exactly why I want you to validate the above parameters.
05-13-2018 02:05 PM
@Michael Bronson What is the IP of kafka02?

Create directories

Create data and log directories for ZooKeeper and Kafka. To simplify this process we can add the directories within the user home directory. In a production environment, we would use different locations, e.g. separate mount points or physical disks for the data and log directories.

$ mkdir -p /home/kafka/zookeeper/data
$ mkdir -p /home/kafka/kafka/kafka-logs

ZooKeeper configuration (embedded)

Share your zookeeper.properties. Set dataDir to point to the new ZooKeeper directory created above:

dataDir=/home/kafka/zookeeper/data

At the end of this file, add all available ZooKeeper servers:

server.1=kafka01:2888:3888
server.2=kafka02:2888:3888
server.3=kafka03:2888:3888

Each node in your cluster needs a unique server id. ZooKeeper looks this up from the following file (in my case): /home/kafka/zookeeper/data/myid. You will have to execute a command like this on each server, using a different value for each instance. For instance 1 on server kafka01 we use the value "1":

$ echo "1" > /home/kafka/zookeeper/data/myid

Kafka server.properties

Each Kafka cluster node needs a unique id. Find the broker.id property in the configuration file and change the id for each server:

broker.id=1

Make sure you have a unique broker.id for all your brokers, i.e. 1, 2, 3. Did you change the log directory location specified in the log.dirs parameter?

log.dirs=

Update the listeners and advertised.listeners properties with the current Kafka node hostname:

listeners: the address/server name and protocol Kafka is listening on (internal traffic between Kafka nodes).
advertised.listeners: the address/server name and protocol clients can use to connect to the Kafka cluster (external traffic). Only needs to be specified if different from the setting above.

listeners=PLAINTEXT://kafka01:9092
advertised.listeners=PLAINTEXT://kafka01:9092

The next step is to tell Kafka which ZooKeeper nodes it can connect to:

zookeeper.connect=kafka01:2181,kafka02:2181,kafka03:2181

Hope that helps you resolve the reboot problem.
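Once all three nodes are back up, a quick way to confirm the ZooKeeper ensemble formed correctly is the stat four-letter command; this assumes nc is installed and that stat is not disabled on your ZooKeeper version:

$ echo stat | nc kafka01 2181 | grep Mode
$ echo stat | nc kafka02 2181 | grep Mode
$ echo stat | nc kafka03 2181 | grep Mode
# exactly one node should report "Mode: leader" and the others "Mode: follower"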
05-13-2018 09:03 AM
@prarthana basgod Where is it picking this "invalid cluster named test" from? If you won't install Hive, why not remove the component that is dependent on Hive: webhcat_server?