Member since: 07-30-2019
Posts: 14
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1019 | 05-31-2018 03:31 AM |
04-21-2020 01:46 PM
2 Kudos
Hello,

Please check the following link for the hostname and /etc/hosts recommendations: https://docs.cloudera.com/cdpdc/7.0/installation/topics/cdpdc-configure-network-names.html

Additionally, remember that the hostnames must be fully qualified domain names (FQDN). Example:

1.1.1.1 foo-1.example.com foo-1
2.2.2.2 foo-2.example.com foo-2
3.3.3.3 foo-3.example.com foo-3
4.4.4.4 foo-4.example.com foo-4
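If you want to sanity-check that your entries follow the IP / FQDN / short-name pattern above, a small sketch like this can help. The sample file and host names here are illustrative, not from a real cluster:

```shell
# Write a sample /etc/hosts-style file (illustrative names only).
cat > /tmp/hosts.sample <<'EOF'
1.1.1.1 foo-1.example.com foo-1
2.2.2.2 foo-2.example.com foo-2
EOF

# Each line should have an IPv4 address, an FQDN (contains a dot),
# and a short name. Flag lines that do not match the pattern.
awk '{ ok = ($1 ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) && ($2 ~ /\./) && ($3 != ""); print $2, (ok ? "OK" : "BAD") }' /tmp/hosts.sample
```

On a real node you would point the awk command at /etc/hosts instead of the sample file.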
06-07-2018 03:03 PM
@Pirlouis, do a kinit with your user, then run the curl command without "myuser:mypasswd".
06-05-2018 02:34 PM
First, remember you need an odd number of JournalNodes (3, 5, 7, etc.). All the JournalNodes should be in sync. Tail the log file across all the JNs to be sure they are on the same edit:

tail -f /var/log/hadoop/hdfs/hadoop-hdfs-journalnode*.log

Note: If you comment on this post, make sure you tag my name. And if you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
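To compare what each JournalNode last wrote, you can pull the latest transaction ID out of each log instead of eyeballing the tail output. The log lines below are illustrative placeholders (real entries vary by Hadoop version), so this is a sketch of the idea rather than an exact recipe:

```shell
# Illustrative JournalNode log excerpts (placeholder content).
mkdir -p /tmp/jn-logs
cat > /tmp/jn-logs/jn1.log <<'EOF'
INFO Finalizing edits file ... txid 1005
EOF
cat > /tmp/jn-logs/jn2.log <<'EOF'
INFO Finalizing edits file ... txid 1005
EOF

# Extract the last txid seen in each log; all JNs should report the same value.
for f in /tmp/jn-logs/*.log; do
  txid=$(grep -o 'txid [0-9]*' "$f" | tail -n 1 | awk '{print $2}')
  echo "$f: $txid"
done
```

On a real cluster you would run the grep against /var/log/hadoop/hdfs/hadoop-hdfs-journalnode*.log on each JN host.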
06-04-2018 02:11 PM
Hello,

If you have Kerberos, first do a kinit with your user account, then add the --negotiate parameter to the curl command, like this:

curl -iv --negotiate -u : "http://....."
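Putting the whole flow together, a sketch looks like the following. The principal and URL are placeholders (assumptions), and the kinit/klist steps are commented out since they need a reachable KDC; here the script only assembles and prints the command:

```shell
# Placeholder principal and endpoint -- substitute your own values.
PRINCIPAL="myuser@EXAMPLE.COM"
URL="http://namenode.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS"

# 1. Obtain a ticket (requires a reachable KDC):
# kinit "$PRINCIPAL"
# klist   # verify the ticket cache

# 2. Call the REST API with SPNEGO auth; "-u :" tells curl to take the
#    identity from the Kerberos ticket cache instead of a username/password.
CMD="curl -iv --negotiate -u : $URL"
echo "$CMD"
```

The key detail is "-u :" with an empty username and password, which makes curl rely on the ticket obtained by kinit.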
05-31-2018 09:19 PM
Log levels are configured in the log4j files located in /etc/hive/conf/ and /etc/hive/conf/conf.server. Go to the HiveServer2 node and run:

cd /etc/hive/conf/
grep -R DEBUG * | grep hive.root.logger

You will see a file called hive-log4j.properties. Edit that file and change the hive.root.logger property from DEBUG to INFO.
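If you prefer to make the change non-interactively, a sed one-liner works too. This sketch operates on a throwaway copy with assumed contents; on a real node the file is /etc/hive/conf/hive-log4j.properties (back it up first):

```shell
# Throwaway copy with assumed contents for demonstration.
cat > /tmp/hive-log4j.properties <<'EOF'
hive.root.logger=DEBUG,DRFA
hive.log.dir=/var/log/hive
EOF

# Flip the root logger level from DEBUG to INFO in place.
sed -i 's/^hive.root.logger=DEBUG/hive.root.logger=INFO/' /tmp/hive-log4j.properties
grep '^hive.root.logger' /tmp/hive-log4j.properties
```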
05-31-2018 09:11 PM
First you need to know the error.

- Check your NameNode log file, by default located in /var/log/hadoop/hdfs. Run tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenode*.log and try to start the service again.
- Check if a NameNode process was already running on that node with ps -ef | grep namenode
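The second check can be wrapped in a small helper so you do not misread the grep output (which always matches itself). This is a generic sketch; the "sleep 30" demo process is just a stand-in for the NameNode process you would actually look for:

```shell
# Report whether any process matching a pattern is running.
# pgrep -f matches against the full command line and excludes itself.
proc_running() {
  if pgrep -f "$1" > /dev/null; then
    echo "running"
  else
    echo "not running"
  fi
}

# Demo with a stand-in process instead of a real NameNode:
sleep 30 &
SLEEP_PID=$!
proc_running "sleep 30"   # prints: running
kill "$SLEEP_PID"
```

On an actual node you would call proc_running with a pattern matching the NameNode JVM command line.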
05-31-2018 03:31 AM
I understand that you have multiple clusters, each one with Apache Ranger installed, and you want a single database engine for all of them. You can have a single database engine outside of your HDP nodes, but you need to create a different database name for each cluster. I also recommend having a different database user account to administer each database. Each Ambari and Ranger node will need access to the MySQL host on port 3306. By the way, MariaDB is also supported. The following link may help you; check point number 5: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/ranger_admin_settings.html
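As a sketch of the database-per-cluster layout, the loop below emits the SQL you would run with the mysql client on the shared database host. All names and the password are placeholders (assumptions), not values from a real deployment:

```shell
# Generate one database and one admin user per cluster (placeholder names).
# Feed the output to the mysql client on the shared MySQL/MariaDB host.
for cluster in cluster1 cluster2; do
  cat <<EOF
CREATE DATABASE ranger_${cluster};
CREATE USER 'rangeradmin_${cluster}'@'%' IDENTIFIED BY 'CHANGE_ME';
GRANT ALL PRIVILEGES ON ranger_${cluster}.* TO 'rangeradmin_${cluster}'@'%';
EOF
done
```

Using a distinct user per database keeps one cluster's Ranger from being able to touch another cluster's policies.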