Member since: 06-03-2019
Posts: 59
Kudos Received: 21
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1156 | 04-11-2023 07:41 AM
 | 7854 | 02-14-2017 10:34 PM
 | 1382 | 02-14-2017 05:31 AM
09-20-2017
02:23 AM
@Anwaar Siddiqui It appears to be a Knox bug: https://issues.apache.org/jira/browse/KNOX-890 Workaround: append "http.header.Connection=close" to the JDBC connection string. For example, with Beeline, use the following command:
beeline -u "jdbc:hive2://sandbox.hortonworks.com:8443/;ssl=false;sslTrustStore=/tmp/myNewTrustStore.jks;trustStorePassword=changeit;transportMode=http;httpPath=gateway/default/hive;http.header.Connection=close" -n admin -p admin-password
09-14-2017
02:59 AM
@Angel Mondragon Can you check whether there is a PID value in the file /var/run/mysqld/mysqld.pid? Try grepping the process list for any dead mysqld process hanging around; if there is one, kill it and remove the PID file. You can also delete the socket files /var/lib/mysql/mysql.sock, /tmp/mysql.sock.lock, and /tmp/mysql.sock. These files will be recreated when MySQL restarts.
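The check above can be sketched as a small shell snippet. This is a hedged illustration, not the exact procedure: it uses a temporary stand-in for /var/run/mysqld/mysqld.pid so it is safe to run anywhere, and `kill -0` to test whether the recorded PID still belongs to a live process.

```shell
# Illustration only: a stand-in PID file is used instead of
# /var/run/mysqld/mysqld.pid so this is safe to run on any box.
PID_FILE=/tmp/demo-mysqld.pid

# Record the PID of an already-exited child to simulate a stale entry.
( : ) & demo_pid=$!
wait "$demo_pid"
echo "$demo_pid" > "$PID_FILE"

pid=$(cat "$PID_FILE")
if kill -0 "$pid" 2>/dev/null; then
    echo "mysqld appears to be running as PID $pid"
else
    echo "stale PID file (PID $pid is dead); removing it"
    rm -f "$PID_FILE"
    # On a real host you would also remove the socket files listed above;
    # MySQL recreates them on restart.
fi
```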
09-14-2017
01:46 AM
@Juan Manuel Nieto Please see if this article helps with your problem: https://community.hortonworks.com/questions/61159/getting-untrusted-proxy-message-while-trying-to-se.html
08-30-2017
03:19 PM
3 Kudos
@Freemon Johnson You can also run the hdp-select status command as root on the CLI. It lists the services installed on the cluster ("None" means the component is available in HDP.repo for install but is not installed). Example: [root@micprod ~]# hdp-select status
accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-client - 2.5.3.51-3
atlas-server - 2.5.3.51-3
falcon-client - 2.5.3.51-3
falcon-server - 2.5.3.51-3
flume-server - 2.5.3.51-3
hadoop-client - 2.5.3.51-3
hadoop-hdfs-datanode - 2.5.3.51-3
hadoop-hdfs-journalnode - 2.5.3.51-3
hadoop-hdfs-namenode - 2.5.3.51-3
hadoop-hdfs-nfs3 - 2.5.3.51-3
hadoop-hdfs-portmap - 2.5.3.51-3
hadoop-hdfs-secondarynamenode - 2.5.3.51-3
hadoop-hdfs-zkfc - 2.5.3.51-3
hadoop-httpfs - None
hadoop-mapreduce-historyserver - 2.5.3.51-3
hadoop-yarn-nodemanager - 2.5.3.51-3
hadoop-yarn-resourcemanager - 2.5.3.51-3
hadoop-yarn-timelineserver - 2.5.3.51-3
hbase-client - 2.5.3.51-3
hbase-master - 2.5.3.51-3
hbase-regionserver - 2.5.3.51-3
hive-metastore - 2.5.3.51-3
hive-server2 - 2.5.3.51-3
hive-server2-hive2 - 2.5.3.51-3
hive-webhcat - 2.5.3.51-3
kafka-broker - 2.5.3.51-3
knox-server - 2.5.3.51-3
livy-server - 2.5.3.51-3
mahout-client - None
oozie-client - 2.5.3.51-3
oozie-server - 2.5.3.51-3
phoenix-client - 2.5.3.51-3
phoenix-server - 2.5.3.51-3
ranger-admin - 2.5.3.51-3
ranger-kms - None
ranger-tagsync - None
ranger-usersync - 2.5.3.51-3
slider-client - 2.5.3.51-3
spark-client - 2.5.3.51-3
spark-historyserver - 2.5.3.51-3
spark-thriftserver - 2.5.3.51-3
spark2-client - 2.5.3.51-3
spark2-historyserver - 2.5.3.51-3
spark2-thriftserver - 2.5.3.51-3
sqoop-client - 2.5.3.51-3
sqoop-server - 2.5.3.51-3
storm-client - None
storm-nimbus - None
storm-slider-client - 2.5.3.51-3
storm-supervisor - None
zeppelin-server - 2.5.3.51-3
zookeeper-client - 2.5.3.51-3
zookeeper-server - 2.5.3.51-3
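To pull just the not-yet-installed components out of that output, a one-liner sketch (shown here against a short captured sample; on a real HDP node you would pipe `hdp-select status` directly into the same awk filter):

```shell
# Sketch: list components reported as "None" (available but not installed).
# A captured three-line sample stands in for real `hdp-select status` output.
sample='accumulo-client - None
atlas-client - 2.5.3.51-3
hadoop-httpfs - None'

printf '%s\n' "$sample" | awk '$3 == "None" { print $1 }'
# On an actual HDP node:  hdp-select status | awk '$3 == "None" { print $1 }'
```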
08-28-2017
06:29 AM
3 Kudos
Speeding up Ambari database cleanup

ambari-server db-cleanup -d 2016-09-30 --cluster-name=TESTHDP

I ran this against the Ambari database of a 500-node HDP cluster. It ran for more than 15 hours without success. Analyzing the Ambari server logs to see where it spent most of its time, it appeared to be stuck on batch deletes against the ambari.alert_notice, ambari.alert_current, and ambari.alert_history tables. To improve the performance of the db cleanup, I created an index on the ambari.alert_notice table:

-bash-4.1$ psql -U ambari -d ambari
Password for user ambari:
psql (8.4.20)
Type "help" for help.
ambari=> CREATE INDEX alert_notice_idx ON ambari.alert_notice(history_id);

After this I reloaded my Ambari database from the backup and reran the db-cleanup; it took less than 2 minutes to complete. To reclaim disk space and reindex after the cleanup, I ran the following commands as the superuser "postgres":

VACUUM FULL;
REINDEX DATABASE ambari;
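Put together, the tuned procedure looks roughly like the following. This is a hedged dry-run sketch of the steps described above: every command is only echoed, so it is safe to execute anywhere; drop the `run` wrapper to actually execute it on a real Ambari host.

```shell
# Dry-run outline of the tuned cleanup; commands are echoed, not executed.
run() { echo "would run: $*"; }

# 1. Index the table the batch deletes spend their time on.
run psql -U ambari -d ambari -c 'CREATE INDEX alert_notice_idx ON ambari.alert_notice(history_id);'
# 2. Run the cleanup itself (date and cluster name from the article above).
run ambari-server db-cleanup -d 2016-09-30 --cluster-name=TESTHDP
# 3. As the postgres superuser, reclaim space and rebuild indexes afterwards.
run psql -U postgres -d ambari -c 'VACUUM FULL;'
run psql -U postgres -d ambari -c 'REINDEX DATABASE ambari;'
```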
08-02-2017
05:57 AM
3 Kudos
Caution: running bad queries against the AMS HBase tables can crash the AMS collector process under load; use this for debugging purposes only. To connect to an AMS HBase instance running in distributed mode:

cd /usr/lib/ambari-metrics-collector/bin
./sqlline.py localhost:2181:/ams-hbase-secure

To get the correct znode, read the value of "zookeeper.znode.parent" from the AMS collector configs.
07-19-2017
12:10 PM
1 Kudo
Run the following commands as the postgres user (superuser). The size query checks the total database size, for a before/after comparison:

SELECT pg_size_pretty( pg_database_size('ambari'));
VACUUM FULL;
REINDEX DATABASE ambari;
SELECT pg_size_pretty( pg_database_size('ambari'));
03-30-2017
06:18 AM
1 Kudo
By default, GC logs are not enabled for Hive components. It is good to enable them in order to troubleshoot GC pauses on HiveServer2 instances.

HiveServer2 / Metastore:
In Ambari, navigate to Services --> Hive --> Configs --> Advanced --> Advanced hive-env --> hive-env template and add the following lines at the beginning:

if [[ "$SERVICE" == "hiveserver2" || "$SERVICE" == "metastore" ]]; then
  HIVE_SERVERS_GC_LOG_OPTS="-Xloggc:{{hive_log_dir}}/gc.log-$SERVICE-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
  export HADOOP_OPTS="$HADOOP_OPTS $HIVE_SERVERS_GC_LOG_OPTS"
fi

WebHCat:
In Ambari, navigate to Services --> Hive --> Configs --> Advanced --> Advanced webhcat-env --> webhcat-env template and add the following lines at the bottom:

WEBHCAT_GC_LOG_OPTS="-Xloggc:{{templeton_log_dir}}/gc.log-webhcat-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
export HADOOP_OPTS="$HADOOP_OPTS $WEBHCAT_GC_LOG_OPTS"

Save the changes in Ambari and restart the Hive services; GC logging will be enabled at the restart. Thanks to the following articles. I changed the GC file name to match the NameNode GC logs and kept all the GC variables in a single parameter for simplicity.
https://community.hortonworks.com/content/supportkb/49404/how-to-setup-gc-log-for-hiveserver2.html
http://stackoverflow.com/questions/39888681/how-to-enable-gc-logging-for-apache-hiveserver2-metastore-server-webhcat-server?newreg=e73d605b7873494e810537edd040dcac
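As a small illustration of the backtick-expanded date stamp in those options: each service (re)start produces a uniquely named gc.log file. The directory below is a stand-in for the Ambari {{hive_log_dir}} template value.

```shell
# Illustration: how the date-stamped GC log name is formed at service start.
# /tmp/hive-gc-demo stands in for the Ambari {{hive_log_dir}} template value.
HIVE_LOG_DIR=/tmp/hive-gc-demo
SERVICE=hiveserver2
GC_LOG="$HIVE_LOG_DIR/gc.log-$SERVICE-`date +'%Y%m%d%H%M'`"
echo "$GC_LOG"   # e.g. /tmp/hive-gc-demo/gc.log-hiveserver2-201708021200
```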
03-27-2017
10:13 PM
@Sergey Soldatov Thanks for the response. Do you know whether there is an Apache JIRA open to include this feature in future releases?