Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1001 | 06-04-2025 11:36 PM |
| | 1568 | 03-23-2025 05:23 AM |
| | 784 | 03-17-2025 10:18 AM |
| | 2820 | 03-05-2025 01:34 PM |
| | 1861 | 03-03-2025 01:09 PM |
11-25-2019
02:41 PM
@JoSam Since the issue pertains to Ambari, please share the following (tokenize sensitive data only): ambari-server.log, ambari.properties, /etc/hosts, and the output of # hostname -f. I have seen an instance where the IP in /etc/hosts didn't match the one from that output. Was the cluster functioning well before? If so, what changes did you make that could have triggered this?
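If it helps, here is a minimal sketch of the commands I usually run to collect that information; the paths assume the default Ambari install locations, so adjust if yours differ:

```bash
# Resolve the FQDN the server/agents will use and compare it with /etc/hosts
hostname -f
cat /etc/hosts

# Default locations on an Ambari server host (adjust if you relocated them)
less /var/log/ambari-server/ambari-server.log
less /etc/ambari-server/conf/ambari.properties
```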
11-25-2019
02:35 PM
@Caranthir I came across this: have you seen the document "Additional Ranger Plugin Configuration Steps for Kerberos Clusters"? If you are running HDP 3.1.4, on the same page you will also find some Hive-specific configs. You will need to create a Kerberos principal, and the values below are different from what's in your screenshot. Values to change: Ranger service config user and Ranger service config password (for Hive). The above steps are for HDP 3.1.4. Hope that helps
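Purely as a hypothetical illustration (the exact principal name, realm, keytab path and which fields they map to depend on your environment and on the linked document), creating a lookup principal on an MIT KDC looks something like this:

```bash
# Hypothetical example only: create a Ranger lookup principal and export its keytab
kadmin.local -q "addprinc -randkey rangerlookup/your-ranger-host@EXAMPLE.COM"
kadmin.local -q "xst -k /etc/security/keytabs/rangerlookup.service.keytab rangerlookup/your-ranger-host@EXAMPLE.COM"
# The principal and its keytab/password are then what you put into the
# "Ranger service config user" / "Ranger service config password" fields.
```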
11-25-2019
12:35 PM
1 Kudo
@Harisc4y Bingo! You are trying to install CDH on an operating system that is not certified, no wonder all this headache; I can now go to sleep. We can never stress enough that before embarking on deploying software like HDP or CDH you MUST read the vendor's release notes and, most important, the compatibility matrix. Not heeding these prerequisites always leads to a very bad first experience. I downloaded and fired up your VM successfully, then started from the basics (memory, CPU and networking). I then saw a strange UI, and checking the version confirmed my fears: an unsupported OS. Cloudera compatibility matrix reference: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_os_requirements.html#c63_supported_os Currently, CentOS/RHEL 7.6 is the latest compatible version, so there is no need to proceed further with the investigation. Happy hadooping
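For anyone hitting the same wall, a quick sanity check before installing (the release file assumes CentOS/RHEL):

```bash
# Confirm the OS release and compare it against the compatibility matrix
cat /etc/redhat-release
# CDH 6.3 supports up to CentOS/RHEL 7.6; anything newer is uncertified
```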
11-24-2019
09:51 PM
@Harisc4y That's weird. Can you share the following information: OS, contents of your repo file, file system layout, the steps or documents you followed, and the CM logs? Deployment shouldn't be that complicated; I will set up a VM to try that out.
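A minimal sketch of the commands I would use to gather that, assuming the default Cloudera Manager locations:

```bash
cat /etc/redhat-release                        # OS version
cat /etc/yum.repos.d/cloudera-manager.repo     # repo file contents
df -h                                          # file system layout
tail -n 200 /var/log/cloudera-scm-server/cloudera-scm-server.log   # CM server log
```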
11-24-2019
11:47 AM
1 Kudo
@shashank_naresh @KWiseman I have responded to this issue before; can you have a look at the thread https://community.cloudera.com/t5/Support-Questions/DAS-Database-Empty/m-p/238223#M200034 Please let me know if you need more clarification. I have attached 2 PDFs to walk you through the solution. Happy hadooping
11-24-2019
11:36 AM
@Harisc4y The prepare database script is delivered by the previous steps.

Step 1: Configure a Repository (or choose the appropriate CDH repos)
$ sudo wget https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/cloudera-manager.repo -P /etc/yum.repos.d/
Import the repository signing GPG key for Red Hat 7:
$ sudo rpm --import https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPM-GPG-KEY-cloudera

Step 3: Install Cloudera Manager Server for RHEL 7 (or choose the CM agent)
$ sudo yum install cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server

After the above steps, scm_prepare_database.sh should have been delivered; you can check that with
$ ll /opt/cloudera/cm/schema/
Hope that helps
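Once the script is in place, it is run against the Cloudera Manager database. A hedged example for a MySQL/MariaDB database named scm with user scm (the names are illustrative, use your own):

```bash
# Illustrative only: prepare the Cloudera Manager database (MySQL/MariaDB)
# Usage: scm_prepare_database.sh <databaseType> <databaseName> <databaseUser>
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm
# You will be prompted for the scm user's password
```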
11-24-2019
10:35 AM
@Kou_Bou If you want to change the Java, here is the procedure. I just tested that the link works, so just copy and paste, as other links are broken. I see you are using OpenJDK.

Remove OpenJDK
# yum list java*
# yum -y remove java*
# java -version

Install the latest Oracle Java JDK
# cd /opt/
# wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz"
# tar xzf jdk-8u144-linux-x64.tar.gz
# cd /opt/jdk1.8.0_144/
# alternatives --install /usr/bin/java java /opt/jdk1.8.0_144/bin/java 2
# alternatives --config java
# alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_144/bin/jar 2
# alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_144/bin/javac 2
# alternatives --set jar /opt/jdk1.8.0_144/bin/jar
# alternatives --set javac /opt/jdk1.8.0_144/bin/javac

Check the new default Java
# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b11, mixed mode)

That now should work
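Optionally, you may also want JAVA_HOME to point at the new JDK; a small sketch assuming the install path used above:

```bash
# Assumes the JDK was unpacked to /opt/jdk1.8.0_144 as in the steps above
cat >/etc/profile.d/java.sh <<'EOF'
export JAVA_HOME=/opt/jdk1.8.0_144
export PATH=$JAVA_HOME/bin:$PATH
EOF
source /etc/profile.d/java.sh
echo $JAVA_HOME
```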
11-24-2019
08:30 AM
@Kou_Bou Okay, since you are having problems I would first like to confirm your HDP version. I see you are on RHEL 7; I hope it's not higher than 7.6, as RHEL 7.7 is not yet certified with HDP or HDF (see the compatibility matrix). To successfully deploy HDP you need to have strictly followed the Hortonworks stack installation document; please confirm that you executed every step therein. I suspect a couple of issues like firewall or passwordless-ssh configuration problems, but those can be overcome with manual registration of the hosts. The following steps are valid for a single-node or multi-node cluster, so let's get started. In the steps below I am assuming you executed the steps in the above 2 documents.

First, I would like you to use a fictitious domain name [hadoop.com] for your cluster; this will help me eliminate other possible issues. I am assuming you have 2 servers and enough RAM, at least 12-16 GB, with 2 CPUs:

Servers
master [Ambari server + database + other components]
node1 [datanode] + n

The above is valid even for a single-node cluster. Execute all the steps below as root.

Set the hostname on the Ambari server
# hostnamectl set-hostname master.hadoop.com
Verify the hostname
# hostnamectl
The desired output should be master.hadoop.com

If not using a single node, do the same on the remaining hosts.
Set the hostname on the other hosts
# hostnamectl set-hostname nodex.hadoop.com

Get the IPs
On all the hosts get the IP using
# ifconfig
Note all the IPs, including that of the Ambari server. Now get the hostname of all the hosts in the cluster using
# hostname -f
Note: all the output should be an FQDN.

Edit /etc/hosts on the Ambari server [master.hadoop.com] and add the IP or IPs in the format IP FQDN ALIAS, i.e. in the case of a multi-node cluster:
192.168.1.20 master.hadoop.com master
192.168.1.21 nodex.hadoop.com nodex
If it's multi-node, make sure you copy the hosts file to all other nodes!

Check the repos are well set
# ls -al /etc/yum.repos.d/
..... ambari.repo hdp.repo
The hdp and ambari repos should be accessible
# yum repolist

Install the Ambari server
# yum install -y ambari-server

Set up Ambari
In the step below I am preparing Ambari for MySQL or MariaDB and NOT the embedded Postgres database. I realise you were using the embedded one; change to MySQL or MariaDB (the open-source version of MySQL). For this step I will use MariaDB, but the connector is the same.
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

Back up ambari.properties
# cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.bak

To change the default timeouts of the Ambari server, run the below snippets
# echo 'server.startup.web.timeout=120' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.acquisition-size=5' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-age=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time=14400' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time-excess=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.idle-test-interval=7200' >> /etc/ambari-server/conf/ambari.properties

Update the OS and restart the server
# yum update
# init 6

Install MariaDB 10.x, compatible with HDP 2.6.x - 3.x
# yum install -y MariaDB-server

Start MariaDB and configure auto-start at boot
# systemctl enable mariadb.service
# systemctl start mariadb.service
Check MariaDB is up
# systemctl status mariadb

Secure MariaDB and change the password (it is important to change the default password)
# /usr/bin/mysql_secure_installation
If you already have a root password set, you can safely answer 'n'; if you answer Y, then set a new password for MariaDB.
Change the root password? [Y/n] Y [most important; for the rest just answer y]
Remove anonymous users? [Y/n] y ... Success!
Disallow root login remotely? [Y/n] y ... Success!
Remove test database and access to it? [Y/n] y
- Dropping test database... ... Success!
- Removing privileges on test database... ... Success!
Reload privilege tables now? [Y/n] y ... Success!
.... Thanks for using MariaDB!

Assuming you changed the root password to welcome1; I am using ambari as the user, password and database name
# mysql -u root -pwelcome1
CREATE USER 'ambari'@'%' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'master.hadoop.com' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'master.hadoop.com';
FLUSH PRIVILEGES;

Log on as the ambari user and create the Ambari DB
# mysql -u ambari -pambari
CREATE DATABASE ambari;
USE ambari;
SOURCE /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;
exit;

Create the hive, ranger, oozie and rangerkms databases as required. Again, here I am simplifying by using hive as the user, password and database name; do the same for ranger, oozie and rangerkms.
# mysql -u root -pwelcome1
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'master.hadoop.com' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'master.hadoop.com';
FLUSH PRIVILEGES;
CREATE DATABASE hive;
quit;

Run the Ambari DB setup
# ambari-server setup
To use the MariaDB database, enter 3 and supply the ambari database name, user name and password created earlier for that database. When asked, do NOT enable Ambari to automatically download and install LZO!

After finalizing the setup, install the Ambari agent on the Ambari server and any other node
# yum install -y ambari-agent

Edit ambari-agent.ini on all the nodes, including the Ambari server, and change the entry below; hostname should be the name of the Ambari host, master.hadoop.com
# vi /etc/ambari-agent/conf/ambari-agent.ini
[server]
hostname=master.hadoop.com
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30
Save, then start the agent on every node.

Start the Ambari server
# ambari-server start

Hit the Ambari UI http://master.hadoop.com:8080 and this time use the manual registration option (see screenshot). The host(s) should register successfully!
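If you want to double-check registration from the command line afterwards, here is a hedged sketch, assuming the default admin/admin credentials and port 8080:

```bash
# On each node: confirm the agent is running
ambari-agent status

# On the Ambari server: list the hosts Ambari knows about via the REST API
curl -u admin:admin http://master.hadoop.com:8080/api/v1/hosts
```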
11-22-2019
11:12 AM
@Kou_Bou Is this problem still persisting? Please let me know and tag me!
11-21-2019
09:49 AM
@Koffi Indeed, your HDF nodes are out of sync. Can you back up users.xml and authorizations.xml on node01, i.e. $ mv users.xml users.xml.bak and, on the same node, $ mv authorizations.xml authorizations.xml.bak. Then, if possible, stop node02 so that no changes are made while you copy these same 2 files over to node01; ensure the location is the same, as well as the file permissions. Thereafter, restart your 2-node HDF cluster. That should resolve the issue.
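For illustration, the backup-and-copy could look like the sketch below, assuming the HDF default conf directory /var/lib/nifi/conf (check the paths actually configured in your authorizers.xml) and that node02 can ssh to node01:

```bash
# On node01: back up the out-of-sync files
cd /var/lib/nifi/conf
mv users.xml users.xml.bak
mv authorizations.xml authorizations.xml.bak

# On node02 (with NiFi stopped): copy its files over to node01, preserving timestamps/modes
scp -p /var/lib/nifi/conf/users.xml /var/lib/nifi/conf/authorizations.xml \
  node01:/var/lib/nifi/conf/

# On node01: make sure ownership matches the NiFi service user before restarting
chown nifi:nifi /var/lib/nifi/conf/users.xml /var/lib/nifi/conf/authorizations.xml
```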