Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2831 | 04-27-2020 03:48 AM |
| | 5504 | 04-26-2020 06:18 PM |
| | 4683 | 04-26-2020 06:05 PM |
| | 3716 | 04-13-2020 08:53 PM |
| | 5624 | 03-31-2020 02:10 AM |
07-25-2019
01:10 AM
@Reed Villanueva The most important requirement for the cluster is that not only must Ambari be able to resolve each cluster node by its FQDN, but all cluster nodes must also be able to resolve each other by their FQDNs. So please make sure that all your cluster nodes can resolve each other by hostname / FQDN. Please check the "/etc/hosts" file on the Ambari server node as well as on the other cluster nodes (the file can be identical on every node, as long as it resolves all the cluster nodes). If you are relying on DNS entries, please make sure that each host can resolve every other host's DNS name. https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-installation-ppc/content/edit_the_host_file.html https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-installation-ppc/content/check_dns.html
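For reference, a minimal "/etc/hosts" layout of the kind the docs above describe might look like this (the IPs and hostnames here are illustrative placeholders, not values from your cluster):

```
# Same entries on every cluster node: IP, FQDN, then short alias
192.168.1.101  c7301.ambari.apache.org  c7301
192.168.1.102  c7302.ambari.apache.org  c7302
192.168.1.103  c7303.ambari.apache.org  c7303
```

You can then sanity-check each node with `hostname -f` (it should print the node's FQDN) and by pinging every other node's FQDN from it.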
07-24-2019
12:37 AM
@Reed Villanueva When we access the Files View, by design it checks whether the user who has logged in to Ambari has a valid home directory in HDFS. Either we need to create the user's home directory in HDFS ourselves, or we can refer to the following doc to enable automatic home directory creation in HDFS for any newly added Ambari user: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari/content/amb_enable_user_home_directory_creation.html. If you notice that the Files View is still failing the HDFS test even after a valid home directory has been created for the user, please try this: create a new Files View instance to see if the issue persists (Ambari UI --> Manage Ambari --> Views --> File View --> Create Instance). If the new Files View instance also fails the HDFS test, please share the complete logs: /var/log/ambari-server/ambari-server.log
/var/log/ambari-server/files-view/files-view.log
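As a convenience, the two logs above can be gathered into a single archive for attaching to the thread. A small sketch (paths are the ones from the post; run it on the Ambari server host — the `2>/dev/null` lets it keep going on a host where one of the logs is absent):

```shell
# Collect the Ambari server and Files View logs into one tarball.
mkdir -p /tmp/fileview-logs
cp /var/log/ambari-server/ambari-server.log /tmp/fileview-logs/ 2>/dev/null
cp /var/log/ambari-server/files-view/files-view.log /tmp/fileview-logs/ 2>/dev/null
tar -czf /tmp/fileview-logs.tar.gz -C /tmp fileview-logs
```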
07-23-2019
10:13 PM
@Reed Villanueva As we can see, you have logged in to the Ambari Files View as the "admin" user. Hence you will need to make sure that the "admin" user has a home directory in HDFS at "/user/admin". If it is missing, please create one. Example:
# su - hdfs -c "hdfs dfs -mkdir /user/admin"
# su - hdfs -c "hdfs dfs -chown -R admin:hadoop /user/admin"
# su - hdfs -c "hdfs dfs -chmod -R 755 /user/admin"
Then try to log in to the Files View again.
07-19-2019
05:36 AM
@Yassine OURAHMA After setting those properties, did you try restarting the Ambari server? What is your Ambari version? Are you setting the properties for the correct Hive View instance name inside "ambari.properties"? Example:
views.ambari.hive.AUTO_HIVE20_INSTANCE.connection.inactivity.timeout=1200000
views.ambari.hive.AUTO_HIVE20_INSTANCE.result.fetch.timeout=1200000
Also, can you please share the exact query and the exact error with the complete stack trace from the Hive View log and ambari-server.log?
/var/log/ambari-server/ambari-server.log
/var/log/ambari-server/hive20-view/hive20-view.log
Also please share the output of the following command:
# grep -e "timeout\|views" /etc/ambari-server/conf/ambari.properties
Can you also try the following to see if it works for you? Execute the same query from Hive View 2.0 after setting an additional Hive parameter: in Hive View 2.0 --> Settings (tab), add a new property with key "hive.fetch.task.conversion" and value "none". Then run the same query again from the Hive View.
07-17-2019
01:29 AM
@Michael Bronson DEBUG/TRACE logging consumes a lot of disk space. Hence it is better to disable it once you have collected enough trace/debug logs for your troubleshooting purpose.
07-17-2019
01:27 AM
1 Kudo
@Michael Bronson Inside the Ambari UI --> ZooKeeper --> Configs --> Advanced --> "Advanced zookeeper-log4j" you will find the zookeeper-log4j template. Inside the template, try this:
Change-1). Change the rootLogger line to enable the 'TRACEFILE' appender. Comment out the line:
#log4j.rootLogger=INFO, CONSOLE, ROLLINGFILE
And then uncomment the following line:
log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE
Change-2). Inside the same template, alter the line
FROM: log4j.appender.TRACEFILE.File=zookeeper_trace.log
TO: log4j.appender.TRACEFILE.File={{zk_log_dir}}/zookeeper_trace.log
Now restart ZooKeeper, then check the log "zookeeper_trace.log":
# tail -f /var/log/zookeeper/zookeeper_trace.log
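After both changes, the relevant lines of the zookeeper-log4j template should read roughly as below (a sketch of the intended end state; `{{zk_log_dir}}` is the template variable Ambari substitutes at deploy time):

```properties
# Change-1: TRACEFILE appender enabled on the root logger
#log4j.rootLogger=INFO, CONSOLE, ROLLINGFILE
log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

# Change-2: trace log written into the ZooKeeper log directory
log4j.appender.TRACEFILE.File={{zk_log_dir}}/zookeeper_trace.log
```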
07-11-2019
01:46 PM
@YOSUKE SHIBUYA Good to know that your issue is resolved. It would be great if you could mark this thread as answered by clicking the "Accept" button on the helpful answer.
07-11-2019
01:06 PM
1 Kudo
@YOSUKE SHIBUYA In your "hbase-ams-master-kvm07log.txt" log we see the following messages:
2019-07-11 19:11:58,731 INFO [Thread-23] wal.ProcedureWALFile: Opening file:/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/pv2-00000000000000000001.log length=45336
2019-07-11 19:11:58,743 WARN [Thread-23] wal.WALProcedureStore: Unable to read tracker for file:/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/pv2-00000000000000000001.log
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat$InvalidWALDataException: Invalid Trailer version. got 48 expected 1
at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.readTrailer(ProcedureWALFormat.java:189)
It looks like the WAL data in "/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/" got corrupted:
# ls -lart /var/lib/ambari-metrics-collector/hbase/MasterProcWALs/*
You can take a backup of the directory "/var/lib/ambari-metrics-collector/hbase/" and then clean the files present inside "/var/lib/ambari-metrics-collector/hbase/MasterProcWALs/*".
Then perform a tmp dir cleanup. After taking a backup of "/var/lib/ambari-metrics-collector/hbase-tmp/", remove the AMS ZooKeeper data by removing the contents of 'hbase.tmp.dir'/zookeeper, and also remove any Phoenix spool files from the 'hbase.tmp.dir'/phoenix-spool folder. "hbase.tmp.dir" (default value: /var/lib/ambari-metrics-collector/hbase-tmp) is on the local filesystem in both modes:
# rm -fr /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*
# rm -fr /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*
Then try to restart AMS. It is also better to increase the Metrics Collector heap size to 1024MB and the HBase Master maximum memory to 2048MB (or 4096MB) if you repeatedly see a similar issue.
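The backup-then-clean order above can be sketched as a small script. This is only a rehearsal under /tmp with a dummy WAL file so the sequence is clear; on a real collector host you would stop AMS first and point it at /var/lib/ambari-metrics-collector instead:

```shell
# Rehearsal only: dummy data under /tmp stands in for the real AMS dirs.
BASE=/tmp/ams-demo
mkdir -p "$BASE/hbase/MasterProcWALs" "$BASE/hbase-tmp/zookeeper" "$BASE/hbase-tmp/phoenix-spool"
echo corrupt > "$BASE/hbase/MasterProcWALs/pv2-00000000000000000001.log"

# 1. Back up BOTH directories before deleting anything
tar -czf "$BASE/ams-backup.tar.gz" -C "$BASE" hbase hbase-tmp

# 2. Only then clean the corrupted WALs and the tmp data
rm -f  "$BASE"/hbase/MasterProcWALs/*
rm -fr "$BASE"/hbase-tmp/zookeeper/* "$BASE"/hbase-tmp/phoenix-spool/*

ls "$BASE/hbase/MasterProcWALs" | wc -l   # 0 entries left after cleanup
```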
07-11-2019
09:31 AM
@YOSUKE SHIBUYA The error snippet you posted is just the after-effect of the actual cause, and a very generic message. Can you please share the following logs for an initial review?
/var/log/ambari-metrics-collector/ambari-metrics-collector.log
/var/log/ambari-metrics-collector/hbase-ams-master-xxxxxxxx.log
/var/log/ambari-metrics-collector/gc.log
/var/log/ambari-metrics-collector/collector-gc.log
Also, AMS failures most commonly happen due to incorrect tuning or heavy load. So can you please let us know the following:
1. How many nodes are there in your cluster?
2. How much memory have you allocated to the AMS collector and the HMaster?
3. I guess you might be using the default embedded-mode AMS (not distributed)? The two modes require slightly different tuning.
07-11-2019
08:50 AM
1 Kudo
@abraham fikire In /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.2 we see the following error in your DataNode startup logs:
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment
Please validate Java with the user who is running the DataNode process, using these commands:
# java -version
# su - hdfs
# java -version
This error indicates that you might not have a valid JDK installed on your machine. So please try this:
1. Install a valid JDK 1.8 on your machine. You can download one from here: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
2. Set JAVA_HOME inside "/etc/profile" or "~/.bash_profile" as follows:
export JAVA_HOME=/PATH/TO/jdk1.8.0-120
export PATH=$JAVA_HOME/bin:$PATH
The path "/PATH/TO/jdk1.8.0-120" is a dummy path; please use your own JDK path here.
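A minimal sketch of step 2, using a throwaway profile file under /tmp so the effect can be checked safely (the JDK path is the same dummy placeholder as in the post; substitute your real install directory, and on a real host put the two export lines in /etc/profile or ~/.bash_profile instead):

```shell
# Write a demo profile file, source it, and confirm the variables took.
PROFILE=/tmp/java-env-demo.sh
cat > "$PROFILE" <<'EOF'
export JAVA_HOME=/PATH/TO/jdk1.8.0-120   # dummy path, replace with yours
export PATH=$JAVA_HOME/bin:$PATH
EOF

. "$PROFILE"
echo "$JAVA_HOME"   # prints the configured JDK home
```

After sourcing, `$JAVA_HOME/bin` sits at the front of PATH, so `java -version` picks up the configured JDK ahead of any other Java on the machine.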