Member since: 06-03-2019
Posts: 59
Kudos Received: 21
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1190 | 04-11-2023 07:41 AM
| 8010 | 02-14-2017 10:34 PM
| 1417 | 02-14-2017 05:31 AM
08-04-2022
02:28 AM
Hello @achandra, this is an old post, but I am closing it out by sharing the feedback for a wider audience. The API call is failing because of the space between "NOW-" and "7DAYS"; there should be no gap between them. In summary, the command is below; set the HTTP(S) scheme, Solr host, and Solr port accordingly. The example uses the "ranger_audits" collection and the "evtTime" field to delete any documents older than 7 days:

curl -k --negotiate -u : "http[s]://<Any Solr Host FQDN>:<Solr Port>/solr/ranger_audits/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>evtTime:[* TO NOW-7DAYS]</query></delete>"

Regards, Smarak
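Before running the delete, it can help to see how many documents would match. This is a minimal sanity-check sketch, assuming the same host/port placeholders and Kerberos setup as above; it only reads, so it is safe to run first:

# count documents older than 7 days without deleting anything (check numFound in the response)
curl -k --negotiate -u : "http[s]://<Any Solr Host FQDN>:<Solr Port>/solr/ranger_audits/select?q=evtTime:[*%20TO%20NOW-7DAYS]&rows=0"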
09-21-2018
06:11 PM
This article explains how to delete a registered HDP cluster from DPS.

Steps:
1. Download the attached file dp-cluster-remove.txt and store it on the Postgres DB server.
2. Run: psql -f ./dp_cluster_remove.txt -d dataplane -v cluster_name=c149
   -d --> <dataplane database name>
   cluster_name --> the HDP cluster name that should be removed from DPS.

Example output:
[root@rraman bin]# docker exec -it dp-database /bin/bash
bash-4.3# su - postgres
de0ff40ad912:~$ ls -lrt
total 8
drwx------ 19 postgres postgres 4096 Sep 21 01:27 data
-rwxrwxrwx 1 postgres postgres 892 Sep 21 17:59 dp_cluster_remove.txt
de0ff40ad912:~$ psql -f ./dp_cluster_remove.txt -d dataplane -v cluster_name=c149
CREATE FUNCTION
 remove_cluster_from_dps
-------------------------
(1 row)
DROP FUNCTION
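The same two steps can also be run from the DPS host in one go. A minimal sketch, assuming the standard dp-database container name from the output above, that the script sits in the current directory, and that /var/lib/postgresql is the postgres user's home inside the container (c149 is just the example cluster name):

# copy the script into the postgres user's home and run it against the dataplane DB
docker cp ./dp_cluster_remove.txt dp-database:/var/lib/postgresql/
docker exec -it dp-database su - postgres -c "psql -f ./dp_cluster_remove.txt -d dataplane -v cluster_name=c149"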
06-13-2018
04:26 PM
2 Kudos
In DPS-1.1.0 we can't edit all of the LDAP configuration properties after the initial setup. If the LDAP configs have to be corrected, you would normally need to re-initialize the DPS setup, which can be a painful task. To avoid re-initializing DPS, we can make the changes directly in the dataplane Postgres database.

Step 1: Find the container ID of dp-database on the DPS machine
docker ps

Step 2: Connect to the container
docker exec -it cf3f4a31e146 /bin/bash

Step 3: Log in to the postgres database (dataplane)
su - postgres
psql -d dataplane

Take a backup of the table:
create table dataplane.ldap_configs_bkp as select * from dataplane.ldap_configs;

To view the existing configuration:
select * from dataplane.ldap_configs;

Sample output (columns: id, url, bind_dn, user_searchbase, usersearch_attributename, group_searchbase, groupsearch_attributename, group_objectclass, groupmember_attributename, user_object_class):
1 | ldap://ldap.hortonworks.com:389 | uid=xyz,ou=users,dc=support,dc=hortonworks,dc=com | ou=users,dc=support,dc=hortonworks,dc=com | uid | ou=groups,dc=support,dc=hortonworks,dc=com | cn | posixGroup | memberUid | posixAccount

Step 4: Make the change in the database for the required field
For example, to change usersearch_attributename from uid to cn, issue:
update dataplane.ldap_configs set usersearch_attributename='cn';

That's it! The change should reflect immediately on the DataPlane UI.

Note: Use this doc only when you have newly installed DPS and made a mistake in the LDAP configs.
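If more than one field is wrong, wrapping the update in a transaction lets you verify the row before it becomes permanent. A minimal sketch, assuming you are still inside the dp-database container as the postgres user; the groupmember_attributename value below is only a placeholder for whatever needs fixing in your environment:

psql -d dataplane <<'SQL'
BEGIN;
-- fix the fields that were entered incorrectly during the initial setup
UPDATE dataplane.ldap_configs SET usersearch_attributename = 'cn', groupmember_attributename = 'member';
-- check the row looks right before committing (re-run with ROLLBACK instead of COMMIT if it does not)
SELECT usersearch_attributename, groupmember_attributename FROM dataplane.ldap_configs;
COMMIT;
SQL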
02-02-2018
10:53 PM
1 Kudo
To print GC details for the Spark History Server, add the following line under Spark --> Configs --> Advanced spark-env --> spark-env template in Ambari and restart the Spark History Server.

export SPARK_DAEMON_JAVA_OPTS=" -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{spark_log_dir}}/spark_history_server.gc.`date +'%Y%m%d%H%M'`"
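A quick way to confirm the options took effect after the restart is to look at the running process and the newest GC log. A small sketch, assuming {{spark_log_dir}} resolves to the default /var/log/spark:

# the History Server command line should now contain the GC flags
ps -ef | grep HistoryServer | grep -o 'PrintGCDetails'
# tail the most recent GC log file
ls -t /var/log/spark/spark_history_server.gc.* | head -1 | xargs tail -n 5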
01-03-2018
07:22 PM
Hive metastore DB connection verification from the command line:

You can run the following on any node in the cluster where the Ambari agent is installed. Use it to validate that the password stored in Ambari and the actual MySQL DB password are the same. You need the following information to run this test:
1. MySQL DB hostname
2. MySQL DB port number
3. MySQL database name used by the Hive metastore
4. MySQL username
5. MySQL password

Syntax:
java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar -Djava.library.path=/usr/lib/ambari-agent org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://<mysql db hostname>:<mysql db port number>/<mysql database name>" "<mysql username>" "<mysql password>" com.mysql.jdbc.Driver

Example:
/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar -Djava.library.path=/usr/lib/ambari-agent org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://test.openstacklocal:50001/hive" hive hive com.mysql.jdbc.Driver
Connected to DB Successfully!
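If the check has to be repeated while trying different credentials, a small wrapper keeps the long classpath out of the way. A sketch only; the variable values are placeholders for your environment:

# set these to the values Ambari has for the Hive metastore connection
DB_HOST=test.openstacklocal
DB_PORT=3306
DB_NAME=hive
DB_USER=hive
DB_PASS=hive
java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar \
  -Djava.library.path=/usr/lib/ambari-agent \
  org.apache.ambari.server.DBConnectionVerification \
  "jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}" "${DB_USER}" "${DB_PASS}" com.mysql.jdbc.Driver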
08-02-2017
05:57 AM
3 Kudos
Caution: Running bad queries against the AMS HBase tables can crash the AMS collector PID due to load. Use this for debugging purposes only.

To connect to the AMS HBase instance when it is running in distributed mode:
cd /usr/lib/ambari-metrics-collector/bin
./sqlline.py localhost:2181:/ams-hbase-secure

To get the correct znode, check the value of "zookeeper.znode.parent" in the AMS collector configs.
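The znode can also be looked up from the command line instead of Ambari. A minimal sketch, assuming the AMS HBase site file is in its usual location; adjust the path if your install differs:

# print the zookeeper.znode.parent property and its value
grep -A1 'zookeeper.znode.parent' /etc/ams-hbase/conf/hbase-site.xml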
03-30-2017
06:18 AM
1 Kudo
By default, GC logs are not enabled for the Hive components. It is good to enable them to troubleshoot GC pauses on HiveServer2 instances.

---------------------------------
HiveServer2 / Metastore:
---------------------------------
In Ambari navigate to Services --> Hive --> Configs --> Advanced --> Advanced hive-env --> hive-env template and add the following lines at the beginning:

if [[ "$SERVICE" == "hiveserver2" || "$SERVICE" == "metastore" ]]; then
HIVE_SERVERS_GC_LOG_OPTS="-Xloggc:{{hive_log_dir}}/gc.log-$SERVICE-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
export HADOOP_OPTS="$HADOOP_OPTS $HIVE_SERVERS_GC_LOG_OPTS"
fi

---------------------------------
WebHCat:
---------------------------------
In Ambari navigate to Services --> Hive --> Configs --> Advanced --> Advanced webhcat-env --> webhcat-env template and add the following lines at the bottom:

WEBHCAT_GC_LOG_OPTS="-Xloggc:{{templeton_log_dir}}/gc.log-webhcat-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
export HADOOP_OPTS="$HADOOP_OPTS $WEBHCAT_GC_LOG_OPTS"

Save the changes in Ambari and restart the Hive services; GC logging will start writing on restart.

Thanks to the following articles. I changed the GC file name to be similar to the NameNode GC logs and kept all the GC variables in a single parameter for simplicity.
https://community.hortonworks.com/content/supportkb/49404/how-to-setup-gc-log-for-hiveserver2.html
http://stackoverflow.com/questions/39888681/how-to-enable-gc-logging-for-apache-hiveserver2-metastore-server-webhcat-server?newreg=e73d605b7873494e810537edd040dcac
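After the restart, a quick check confirms the new GC logs are being written. A sketch, assuming {{hive_log_dir}} resolves to the default /var/log/hive:

# newest GC log files for the Hive services
ls -lt /var/log/hive/gc.log-* | head
# last few lines of the most recent HiveServer2 GC log
tail -n 5 $(ls -t /var/log/hive/gc.log-hiveserver2-* | head -1)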
03-27-2017
10:31 PM
Once custom select is supported (PHOENIX-1505, PHOENIX-1506) it will be possible to do that (like ... select column_1 as c1, ...). Honestly speaking, I can't say when it will be done.