Member since: 11-09-2016
Posts: 68
Kudos Received: 16
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2526 | 12-07-2017 06:32 PM |
| | 941 | 12-07-2017 06:29 PM |
| | 1572 | 12-01-2017 11:56 AM |
| | 9567 | 02-10-2017 08:55 AM |
| | 3022 | 01-23-2017 09:44 PM |
03-18-2018
11:42 AM
It's more likely you don't have enough RAM; double-check the size of your queue.
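A quick way to see a queue's configured capacity and current usage (a sketch, assuming a YARN Capacity Scheduler queue named default; substitute your own queue name):
yarn queue -status default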
03-16-2018
10:22 PM
Hi Dominique, yes, it does audit policy changes/updates, and logins. Hope this answers your question.
03-16-2018
10:13 PM
Can you try to tune it by changing the following (try 2G, then 4G or 6G ...):
set hive.tez.container.size=2048;
set hive.tez.java.opts=-Xmx2048m;
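For the larger steps, keeping -Xmx at roughly 80% of the container size is the usual guidance (the 80% ratio is my assumption, not part of the original answer), for example:
set hive.tez.container.size=4096;
set hive.tez.java.opts=-Xmx3276m;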
03-16-2018
02:59 PM
Quick command to find the total number of partitions in a Kafka cluster; it can help, for example, with MirrorMaker sizing. Please replace the ZK_SERVER values with your cluster details.
cd /tmp
zookeeper="ZK_SERVER1:2181,ZK_SERVER2:2181,ZK_SERVER3:2181"
sum=0
for i in $(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $zookeeper); do
  count=$(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper $zookeeper --topic $i | grep -c Leader)
  sum=$((sum + count))
done
echo "total partitions is $sum"
If you want to count only the partitions of topics whose names match a filter, add a grep on the topic list:
zookeeper="ZK_SERVER1:2181,ZK_SERVER2:2181,ZK_SERVER3:2181"
sum=0
for i in $(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $zookeeper | grep 'FILTER'); do
  count=$(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper $zookeeper --topic $i | grep -c Leader)
  sum=$((sum + count))
done
echo "total partitions is $sum"
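If your kafka-topics.sh supports describing all topics in one call (it does on the HDP versions I have seen, but treat that as an assumption), the same total can be taken in one shot:
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper $zookeeper | grep -c Leader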
03-09-2018
02:55 PM
How to get the number of documents indexed:
curl -o /tmp/result.txt --negotiate -u : -X GET "SOLR_SERVER:8886/solr/ranger_audits_shard1_replica1/select?q=*:*&distrib=false"
How to run a delete command via curl (delete data older than 24h):
curl --negotiate -u : "SOLR_SERVER:8886/solr/ranger_audits/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>evtTime:[* TO NOW-24HOURS]</query></delete>"
How to run an optimize command via curl:
curl --negotiate -u : "SOLR_SERVER:8886/solr/ranger_audits/update?optimize=true"
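To get just the number of documents without dumping them, numFound with rows=0 is enough (collection name taken from the commands above):
curl --negotiate -u : "SOLR_SERVER:8886/solr/ranger_audits/select?q=*:*&rows=0"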
03-09-2018
02:36 PM
## Install the Luna client
Unzip the Luna client, for example under /opt/LUNAHSM
Under /opt/LUNAHSM/linux/64/
run
sh install.sh all
Follow the instructions below; for the questions asked, please answer as follows:
Accept conditions
(y/n) y
Products
Choose Luna Products to be installed
[1]: Luna SA
[2]: Luna PCI-E
[3]: Luna G5
[4]: Luna Remote Backup HSM
[N|n]: Next
[Q|q]: Quit
Enter selection: 1
Products
Choose Luna Products to be installed
*[1]: Luna SA
[2]: Luna PCI-E
[3]: Luna G5
[4]: Luna Remote Backup HSM
[N|n]: Next
[Q|q]: Quit
Enter selection: n
Advanced
Choose Luna Components to be installed
[1]: Luna Software Development Kit (SDK)
*[2]: Luna JSP (Java)
*[3]: Luna JCProv (Java)
[B|b]: Back to Products selection
[I|i]: Install
[Q|q]: Quit
Enter selection: i
List of Luna Products to be installed:
- Luna SA
List of Luna Components to be installed:
- Luna JSP (Java)
- Luna JCProv (Java)
... installation complete
# Now swap the certificates: copy SERVER.pem from the Luna server to /tmp on your KMS server
cp /tmp/SERVER.pem /usr/safenet/lunaclient/cert/server
#under lunaClient
[root@XXXXX lunaclient]# pwd
/usr/safenet/lunaclient
# Get the local IP of the machine where the client is installed (YY.YY.YY.YY is your local IP)
[root@XXXXX lunaclient]# bin/vtl createCert -n YY.YY.YY.YY
Private Key created and written to: /usr/safenet/lunaclient/cert/client/YY.YY.YY.YYkey.pem
Certificate created and written to: /usr/safenet/lunaclient/cert/client/YY.YY.YY.YY.pem
# Add the Luna SA server to the trusted list of servers (xx.xx.xx.xx is the Luna server IP)
[root@XXXXX lunaclient]# bin/vtl addServer -n xx.xx.xx.xx -c /usr/safenet/lunaclient/cert/server/SERVER.pem
New server xx.xx.xx.xx successfully added to server list.
Transfer the generated client pem (YY.YY.YY.YY.pem) to the Luna server.
Once that is done the certificate swap is complete; verify with:
[root@XXXXX lunaclient]# bin/vtl verify
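If the exchange worked, vtl verify should list the Luna SA slots/partitions visible from this client; an error or an empty list usually means the certificate exchange or the addServer registration did not take (my reading, not from the original post).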
03-09-2018
02:24 PM
Quick post to add an auto-fix for the Ambari Infra Solr lock issue. On the Ranger server, under /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf, edit the file solrconfig.xml, then uncomment and change <unlockOnStartup>false</unlockOnStartup> to <unlockOnStartup>true</unlockOnStartup>. Submit the new xml:
/usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string XXXX:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --jaas-file /usr/hdp/current/ranger-admin/conf/ranger_solr_jaas.conf
Increase the sleep time from 5 to 30 seconds in /opt/lucidworks-hdpsearch/solr/bin/solr:
sed -i 's/(sleep 5)/(sleep 30)/g' /opt/lucidworks-hdpsearch/solr/bin/solr
or, for Ambari Infra Solr:
sed -i 's/(sleep 5)/(sleep 30)/g' /usr/lib/ambari-infra-solr/bin/solr
You can also add the following command in the script to clear a stale lock:
hadoop fs -rm /user/infra-solr/ranger_audits/core_node1/data/index/write.lock
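Before clearing the lock blindly, it is worth checking that it actually exists (path taken from the command above):
hdfs dfs -ls /user/infra-solr/ranger_audits/core_node1/data/index/write.lock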
03-09-2018
02:07 PM
Quick tips to optimise your Infra Solr for Ranger audits using SolrCloud.
1) Change the SolrCloud retention period of the audits. On the Ranger server, under /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf
#### Edit the file, or use sed to replace the 90-day default in solrconfig.xml. Choose the right retention period; here it is 6 hours:
sed -i 's/+90DAYS/+6HOURS/g' solrconfig.xml
sed -i 's/86400/7200/g' solrconfig.xml
2) Change the ZK config by submitting the xml again:
/usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string XXXXXX:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --jaas-file /usr/hdp/current/ranger-admin/conf/ranger_solr_jaas.conf
Check that it loaded correctly, in the Solr UI or with the following command.
#Download the solrconfig.xml from Zookeeper
/usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/zkcli.sh --zkhost XXXXXX:2181 -cmd getfile /infra-solr/configs/ranger_audits/solrconfig.xml /tmp/solrconfig.xml
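A quick sanity check on the downloaded copy (file name from the command above):
grep -n '+6HOURS' /tmp/solrconfig.xml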
3) Restart the Ambari Infra Solr service.
01-03-2018
11:24 AM
1) It could be that your queues are busy; if you are on FIFO scheduling, this may explain the behaviour. 2) Or your metastore is busy and not responding properly, possibly due to the backend DB (Postgres or MySQL).
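A rough way to check the Hive/metastore side is to time a trivial query through HiveServer2 (HS2_HOST is a placeholder, and the probe itself is my suggestion, not from the original answer):
time beeline -u 'jdbc:hive2://HS2_HOST:10000' -e 'show databases;'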
01-03-2018
11:14 AM
You have reached the maximum number of files for one folder, so an ls on this folder may not work. Your process is probably creating too many small files; it is worth checking why this is happening. For a quick workaround you can try the following (a sketch of steps 9-11 follows the list):
1# Get the total row count of the table.
2# Get the creation script, and make sure the table is partitioned accordingly.
3# Take a copy of the table:
create table tablecopy as select * from table;
4# Check the count on the new table:
select count(*) from tablecopy;
5# Check the number of HDFS files:
hdfs dfs -ls /apps/hive/warehouse//table
6# Take a copy of the HDFS folder for further investigation:
export HADOOP_HEAPSIZE="8096"
hdfs dfs -cp /apps/hive/warehouse//table /tmp
=> without the bigger heap you may hit OutOfMemoryError: GC overhead limit exceeded
7# Truncate the original table:
truncate table table;
8# Drop the table:
drop table table;
9# Make sure the HDFS folder is removed.
10# Create the table again.
11# Put the data back with an insert.
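A minimal sketch of steps 9-11, assuming a hypothetical database db, columns (id, payload), and partition column dt; substitute your own schema:
hdfs dfs -rm -r -skipTrash /apps/hive/warehouse/db.db/table
hive -e "
create table db.table (id bigint, payload string) partitioned by (dt string);
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert into table db.table partition (dt) select id, payload, dt from db.tablecopy;
"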