Member since: 03-14-2016 | Posts: 4721 | Kudos Received: 1111 | Solutions: 874
07-19-2019
05:56 AM
@Reed Villanueva Also, you are not able to connect to MySQL remotely:

# mysql -u root -p -h <some remote cluster node>
ERROR 2003 (HY000): Can't connect to MySQL server on '<the remote cluster node>' (111)

This also happens if the remote privileges for the "root" user are not set up correctly. So please SSH to the MySQL server host and then connect locally to run the mentioned queries. From the MySQL DB server host:

# netstat -tnlpa | grep 3306
# service iptables stop
# mysql -u root -p
Enter Password:
mysql> use mysql;
mysql> CREATE USER 'root'@'%' IDENTIFIED BY 'xxxxxxxxxxxxxxxxxxxxxxxxxx';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

Please also check whether port 3306 is accessible remotely:

# telnet <MYSQL_HOST> 3306

Is your MySQL listening on localhost only, or on all interfaces?

# grep 'bind-address' /etc/my.cnf

https://www.tecmint.com/fix-error-2003-hy000-cant-connect-to-mysql-server-on-127-0-0-1-111/
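If telnet is not installed, a quick reachability probe can be sketched in bash instead. This is a minimal sketch, assuming bash's /dev/tcp pseudo-device and the coreutils `timeout` command are available; `<MYSQL_HOST>` is a placeholder:

```shell
# Hedged sketch: probe whether a MySQL port is reachable from this host.
# Uses bash's /dev/tcp pseudo-device; "timeout" is from GNU coreutils.
port_open() {
  local host="$1" port="${2:-3306}"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed or filtered"
  fi
}
# e.g. port_open <MYSQL_HOST> 3306
```

"closed or filtered" here matches the ERROR 2003 (111) symptom: the TCP connection itself is refused before MySQL authentication is ever attempted.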
07-19-2019
05:48 AM
@Reed Villanueva Have you already run the following queries to set up the Ranger DB in advance? https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/installing-ranger/content/configure_mysql_db_for_ranger.html The MySQL user needs to be set up and the GRANT privileges executed, otherwise the connection test from a remote host will not work:

CREATE USER 'rangerdba'@'localhost' IDENTIFIED BY 'rangerdba';
GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'localhost';
CREATE USER 'rangerdba'@'%' IDENTIFIED BY 'rangerdba';
GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'%';
GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Just in case you want to change/reset your MySQL "root" user password, you can do so as described here: https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html
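Once the grants are in place, the remote connection test can be repeated by hand. A small sketch, assuming the mysql client is installed on the remote host; `<RANGER_DB_HOST>` is a placeholder and the password matches the CREATE USER statements above:

```shell
# Hedged sketch: re-run the remote login check that the Ranger connection
# test performs, using the rangerdba account created above.
check_ranger_db() {
  local host="$1"
  mysql -u rangerdba -prangerdba -h "$host" -e "SELECT CURRENT_USER();"
}
# e.g. check_ranger_db <RANGER_DB_HOST>
```

If this prints `rangerdba@...`, the grants for the remote host pattern '%' are working.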
07-19-2019
05:36 AM
@Yassine OURAHMA After setting those properties, have you tried restarting the Ambari Server? What is your Ambari version? Are you setting the properties for the correct Hive View instance name inside "ambari.properties"? Example:

views.ambari.hive.AUTO_HIVE20_INSTANCE.connection.inactivity.timeout=1200000
views.ambari.hive.AUTO_HIVE20_INSTANCE.result.fetch.timeout=1200000

Also, can you please share the exact query and the exact error with the complete stack trace from the Hive View log and ambari-server.log?

/var/log/ambari-server/ambari-server.log
/var/log/ambari-server/hive20-view/hive20-view.log

Also please share the output of the following command:

# grep -e "timeout\|views" /etc/ambari-server/conf/ambari.properties

Can you also try the following to see if it works for you? Execute the same Hive query from Hive View 2.0 after setting an additional Hive parameter in the Hive View 2.0 "Settings" tab: "Hive View 2.0" --> Settings (tab) --> Add a new property with key "hive.fetch.task.conversion" and value "none". Then run the same query again from Hive View.
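If the two timeout properties are missing entirely, a small helper can append them to the file. This is a sketch, assuming the default auto-created instance name AUTO_HIVE20_INSTANCE; pass your own properties file path and timeout value:

```shell
# Sketch: append the Hive View 2.0 timeout properties to an
# ambari.properties file. AUTO_HIVE20_INSTANCE is the default auto-created
# instance name; replace it if your Hive View instance is named differently.
set_view_timeouts() {
  local props_file="$1" timeout_ms="${2:-1200000}"
  {
    echo "views.ambari.hive.AUTO_HIVE20_INSTANCE.connection.inactivity.timeout=${timeout_ms}"
    echo "views.ambari.hive.AUTO_HIVE20_INSTANCE.result.fetch.timeout=${timeout_ms}"
  } >>"$props_file"
  grep -c 'AUTO_HIVE20_INSTANCE' "$props_file"   # prints the number of matching lines
}
# e.g. set_view_timeouts /etc/ambari-server/conf/ambari.properties 1200000
```

Remember that Ambari Server must be restarted for ambari.properties changes to take effect.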
07-17-2019
01:29 AM
@Michael Bronson DEBUG/TRACE logging consumes a lot of disk space. Hence it is better to disable it once you have collected enough trace/debug logs for your troubleshooting.
07-17-2019
01:27 AM
1 Kudo
@Michael Bronson Inside the Ambari UI --> ZooKeeper --> Configs --> Advanced --> "Advanced zookeeper-log4j" you will find the zookeeper-log4j template. Inside the template try this:

Change 1) Change the rootLogger line to enable the 'TRACEFILE' appender. Comment out the line:

#log4j.rootLogger=INFO, CONSOLE, ROLLINGFILE

And then uncomment the following line:

log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

Change 2) In the same template, alter the line:

FROM: log4j.appender.TRACEFILE.File=zookeeper_trace.log
TO: log4j.appender.TRACEFILE.File={{zk_log_dir}}/zookeeper_trace.log

Now restart ZooKeeper, then check the log "zookeeper_trace.log":

# tail -f /var/log/zookeeper/zookeeper_trace.log
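For reference, after both changes the relevant fragment of the zookeeper-log4j template would look roughly like this (a sketch based on the stock template; the rest of the TRACEFILE appender definition already exists in the template and is left untouched):

```properties
# rootLogger with the TRACEFILE appender enabled
log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

# TRACEFILE now writes into the ZooKeeper log directory
log4j.appender.TRACEFILE.File={{zk_log_dir}}/zookeeper_trace.log
```

The `{{zk_log_dir}}` token is substituted by Ambari at restart time, which is why the trace log then appears under /var/log/zookeeper/.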
07-11-2019
01:46 PM
@YOSUKE SHIBUYA Good to know that your issue is resolved. It would be great if you could mark this thread as answered by clicking the "Accept" button on the helpful answer.
07-11-2019
01:06 PM
1 Kudo
@YOSUKE SHIBUYA In your "hbase-ams-master-kvm07log.txt" log we see the following messages:

2019-07-11 19:11:58,731 INFO [Thread-23] wal.ProcedureWALFile: Opening file:/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/pv2-00000000000000000001.log length=45336
2019-07-11 19:11:58,743 WARN [Thread-23] wal.WALProcedureStore: Unable to read tracker for file:/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/pv2-00000000000000000001.log
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat$InvalidWALDataException: Invalid Trailer version. got 48 expected 1
at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.readTrailer(ProcedureWALFormat.java:189)

It looks like the WAL data under "/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/" got corrupted:

# ls -lart /var/lib/ambari-metrics-collector/hbase/MasterProcWALs/*

You can take a backup of the directory "/var/lib/ambari-metrics-collector/hbase/" and then try to clean the files present inside "/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/*".

Then try to perform a tmp dir cleanup. After taking a backup of "/var/lib/ambari-metrics-collector/hbase-tmp/", remove the AMS ZooKeeper data by backing up and removing the contents of 'hbase.tmp.dir'/zookeeper, and any Phoenix spool files from the 'hbase.tmp.dir'/phoenix-spool folder. "hbase.tmp.dir" (default value: /var/lib/ambari-metrics-collector/hbase-tmp) is on the local filesystem in both modes:

# rm -fr /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*
# rm -fr /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*

Then try to restart AMS. It is also better to increase the Metrics Collector heap size to 1024 MB and the HBase Master maximum memory to 2048 MB (or 4096 MB) if you repeatedly see a similar issue.
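The backup-then-clean steps can be sketched as a small shell helper, so nothing is deleted before a copy exists. This is a sketch, not the official AMS procedure; the paths are passed in as arguments rather than hard-coded:

```shell
# Hedged sketch: back up an AMS hbase-tmp directory, then clear the
# zookeeper and phoenix-spool contents inside it.
ams_cleanup() {
  local tmp_dir="$1" backup_dir="$2"
  [ -d "$tmp_dir" ] || { echo "no such dir: $tmp_dir"; return 1; }
  mkdir -p "$backup_dir"
  cp -a "$tmp_dir" "$backup_dir/"                          # full copy first
  rm -rf "$tmp_dir/zookeeper/"* "$tmp_dir/phoenix-spool/"*  # then clean
  echo "backed up to $backup_dir"
}
# e.g. ams_cleanup /var/lib/ambari-metrics-collector/hbase-tmp /root/ams-backup
```

Run it while AMS is stopped, then restart the Metrics Collector.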
07-11-2019
09:31 AM
@YOSUKE SHIBUYA The error snippet you posted is just an after-effect of the actual cause and a very generic message. Can you please share the following logs for an initial review?

/var/log/ambari-metrics-collector/ambari-metrics-collector.log
/var/log/ambari-metrics-collector/hbase-ams-master-xxxxxxxx.log
/var/log/ambari-metrics-collector/gc.log
/var/log/ambari-metrics-collector/collector-gc.log

Most probably the AMS failure happens due to incorrect tuning or heavy load, so can you please let us know the following:
1. How many nodes are there in your cluster?
2. How much memory have you allocated to the AMS collector and the HMaster?
3. I guess you might be using the default embedded-mode AMS (not distributed)? Both modes require slightly different tuning.
07-11-2019
08:50 AM
1 Kudo
@abraham fikire In /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.2 we see the following errors in your DataNode startup logs:

Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment

Please validate the JDK with the user who is running the DataNode process, using the command:

# java -version
# su - hdfs
# java -version

This error indicates that you might not have a valid JDK installed on your machine. So please try this:

1. Install a valid JDK 1.8 on your machine. You can download one from here: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

2. Set JAVA_HOME as follows inside the "/etc/profile" or "~/.bash_profile" file:

export JAVA_HOME=/PATH/TO/jdk1.8.0-120
export PATH=$JAVA_HOME/bin:$PATH

The path "/PATH/TO/jdk1.8.0-120" is a dummy path; please use your own JDK path here.
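A minimal sketch of those two export lines in action, with a quick self-check that the bin directory really landed at the front of PATH (the JDK path below is the same dummy placeholder as above; substitute your real path):

```shell
# Sketch: set JAVA_HOME and prepend its bin directory to PATH.
# /PATH/TO/jdk1.8.0-120 is a placeholder; substitute your real JDK path.
export JAVA_HOME=/PATH/TO/jdk1.8.0-120
export PATH="$JAVA_HOME/bin:$PATH"

# Self-check: confirm the JDK bin dir is now first on PATH
case "$PATH" in
  "$JAVA_HOME/bin:"*) echo "JAVA_HOME bin is on PATH" ;;
  *)                  echo "PATH not updated" ;;
esac
```

After updating /etc/profile or ~/.bash_profile, re-login (or `source` the file) before restarting the DataNode, so the hdfs user picks up the new environment.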
07-11-2019
02:40 AM
1 Kudo
@NUR ALEAH JEHAN ABDULLAH We see the failure is happening due to:

WARNING: IOException occurred while connecting to ambari:5432
java.net.UnknownHostException: ambari

This indicates that your Ambari DB hostname is not resolving.

1. Are you sure that your Ambari Postgres DB hostname is "ambari"?
2. From a remote machine, which hostname do you use to connect to your Ambari Postgres DB?
3. Do you have the correct "/etc/hosts" entry to point the hostname 'ambari' to your DB IP address? Please share:

# cat /etc/hosts

4. If you have incorrectly configured the Postgres DB hostname, then please fix it inside "/etc/ambari-server/conf/ambari.properties", especially the "server.jdbc.url" property. You can fix it manually to specify your Postgres DB hostname:

# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
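For illustration, a correctly configured JDBC URL in ambari.properties would look roughly like this (a sketch; `<AMBARI_DB_HOST>` is a placeholder for your resolvable Postgres host, and "ambari" is the default database name):

```properties
# /etc/ambari-server/conf/ambari.properties (placeholder host)
server.jdbc.url=jdbc:postgresql://<AMBARI_DB_HOST>:5432/ambari
```

The host in this URL must resolve from the Ambari Server machine, either via DNS or via an /etc/hosts entry.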