Member since: 11-09-2016
Posts: 68
Kudos Received: 16
Solutions: 5
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2557 | 12-07-2017 06:32 PM |
 | 965 | 12-07-2017 06:29 PM |
 | 1590 | 12-01-2017 11:56 AM |
 | 9687 | 02-10-2017 08:55 AM |
 | 3103 | 01-23-2017 09:44 PM |
04-11-2020
09:16 PM
I was working on something unrelated, but I hit this same error, detailed the issue in Jira, and have proposed a workaround.

The issue is that there is a feature in Hive called the REGEX Column Specification. IMHO this feature was ill conceived and is not standard SQL; it should be removed from Hive, and this issue is yet another reason why. That's what I was working on when I hit this issue. When Hive looks at a table name surrounded by back ticks, it treats the string as a regex; when it looks at a table name surrounded by quotes, it treats the string as a table name. The basic rule it uses is "most anything ASCII surrounded by back ticks is a regex."

Using quotes (and technically back ticks too, but that's clearly broken) around table names can be allowed or disallowed with a Hive configuration called "hive.support.quoted.identifiers". This feature is enabled in the user's HS2 session by default. However, masking is a multi-step process:

1) The query is parsed by HS2.
2) The masking is applied.
3) The query is parsed again by HS2.

The first parsing attempt respects the hive.support.quoted.identifiers configuration and allows a query with quotes to be parsed. However, the masking code does not pass this configuration to the parser on the second attempt, and oddly enough, if the configuration is not passed along, the parser considers the feature disabled. So it is actually the second pass that fails, because the parser rejects the quotes.

For the record, I hit this issue when I removed the Regex feature: doing so forced all quoted strings to be treated as table names (and therefore subject to this feature being enabled/disabled) instead of sneaking by as a regex, and all the masking unit tests failed.

https://issues.apache.org/jira/browse/HIVE-23182
https://issues.apache.org/jira/browse/HIVE-23176
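For readers who have not met the feature, here is a minimal sketch (not from the original post) of the REGEX Column Specification and the setting involved; the JDBC URL, table name, and column name below are placeholders, and this only illustrates the quoting behaviour, not the masking re-parse itself:

beeline -u "jdbc:hive2://HS2_HOST:10000/default" <<'SQL'
-- Default in an HS2 session: a back-ticked string is honoured as an identifier.
SET hive.support.quoted.identifiers=column;
SELECT * FROM `some_table` LIMIT 1;

-- With the setting off, a back-ticked string in the select list is read as a
-- regex over column names (the REGEX Column Specification argued against above).
SET hive.support.quoted.identifiers=none;
SELECT `(id)?+.+` FROM some_table LIMIT 1;
SQL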
03-16-2018
02:59 PM
Quick command to find the total number of partitions in a Kafka cluster; it can help, for example, with MirrorMaker sizing. Replace the ZK_SERVER values with your cluster details.

cd /tmp
zookeeper="ZK_SERVER1:2181,ZK_SERVER2:2181,ZK_SERVER3:2181"
sum=0
for i in $(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $zookeeper); do
  count=$(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper $zookeeper --topic $i | grep Leader | wc -l)
  sum=$(expr $sum + $count)
  echo "total partitions is $sum"
done

If you want to count only the partitions of topics whose names match a specific filter:

zookeeper="ZK_SERVER1:2181,ZK_SERVER2:2181,ZK_SERVER3:2181"
sum=0
for i in $(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $zookeeper | grep 'FILTER'); do
  count=$(/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper $zookeeper --topic $i | grep Leader | wc -l)
  sum=$(expr $sum + $count)
  echo "total partitions is $sum"
done
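A shorter variant under the same assumptions (the HDP kafka-topics.sh path and the $zookeeper string set above): --describe with no --topic prints one "Leader:" line per partition across the whole cluster, so counting those lines gives the same total in a single call.

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper "$zookeeper" | grep -c "Leader:"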
03-18-2018
11:42 AM
It's more likely that you don't have enough RAM; double-check the size of your queue.
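If it helps, one quick way to check the queue's configured and used capacity from the command line (the queue name is a placeholder):

yarn queue -status QUEUE_NAME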
03-09-2018
02:55 PM
How to get the number of documents indexed:

curl -o /tmp/result.txt --negotiate -u : -X GET "SOLR_SERVER:8886/solr/ranger_audits_shard1_replica1/select?q=*:*&distrib=false"

How to run a delete command via curl (here, delete data older than 24 hours):

curl --negotiate -u : "SOLR_SERVER:8886/solr/ranger_audits/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>evtTime:[* TO NOW-24HOURS]</query></delete>"

How to run an optimize command via curl:

curl --negotiate -u : "SOLR_SERVER:8886/solr/ranger_audits/update?optimize=true"
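On top of the first command (same placeholder host and core), a small hedged convenience that returns just the hit count instead of saving the whole response:

curl -s --negotiate -u : "SOLR_SERVER:8886/solr/ranger_audits_shard1_replica1/select?q=*:*&rows=0&wt=json&distrib=false" | grep -o '"numFound":[0-9]*'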
03-09-2018
02:36 PM
## Install the Luna client
Unzip the Luna client, for example under /opt/LUNAHSM.
Under /opt/LUNAHSM/linux/64/, run:
sh install.sh all
Follow the instructions; for the questions asked, answer as below:
Accept conditions
(y/n) y
Products
Choose Luna Products to be installed
[1]: Luna SA
[2]: Luna PCI-E
[3]: Luna G5
[4]: Luna Remote Backup HSM
[N|n]: Next
[Q|q]: Quit
Enter selection: 1
Products
Choose Luna Products to be installed
*[1]: Luna SA
[2]: Luna PCI-E
[3]: Luna G5
[4]: Luna Remote Backup HSM
[N|n]: Next
[Q|q]: Quit
Enter selection: n
Advanced
Choose Luna Components to be installed
[1]: Luna Software Development Kit (SDK)
*[2]: Luna JSP (Java)
*[3]: Luna JCProv (Java)
[B|b]: Back to Products selection
[I|i]: Install
[Q|q]: Quit
Enter selection: i
List of Luna Products to be installed:
- Luna SA
List of Luna Components to be installed:
- Luna JSP (Java)
- Luna JCProv (Java)
... installation complete
# Now to swap the certificates: copy SERVER.pem from the Luna server to /tmp on your KMS server, then:
cp /tmp/SERVER.pem /usr/safenet/lunaclient/cert/server
#under lunaClient
[root@XXXXX lunaclient]# pwd
/usr/safenet/lunaclient
# Get the local IP of the machine where the client is installed (YY.YY.YY.YY below is your local IP)
[root@XXXXX lunaclient]# bin/vtl createCert -n YY.YY.YY.YY
Private Key created and written to: /usr/safenet/lunaclient/cert/client/SERVERkey.pem
Certificate created and written to: /usr/safenet/lunaclient/cert/client/YY.YY.YY.YY.pem
#add a Luna SA Server to the trusted list of servers
[root@XXXXX lunaclient]# bin/vtl addServer -n xx.xx.xx.xx -c /usr/safenet/lunaclient/cert/server/SERVER.pem
New server xx.xx.xx.xx successfully added to server list.
# Transfer the generated client pem to the Luna server. The certificate swap is now complete; verify the connection:
[root@XXXXX lunaclient]# bin/vtl verify
03-09-2018
02:24 PM
Quick post to add an auto-fix for the Solr Infra lock issue.

On the Ranger server, under /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf, edit the file solrconfig.xml: uncomment and change <unlockOnStartup>false</unlockOnStartup> to <unlockOnStartup>true</unlockOnStartup>.

Submit the new xml:

/usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string XXXX:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --jaas-file

Increase the sleep time from 5 to 30 seconds in /opt/lucidworks-hdpsearch/solr/bin/solr:

sed -i 's/(sleep 5)/(sleep 30)/g' /opt/lucidworks-hdpsearch/solr/bin/solr

Or, for the Ambari Infra Solr script:

sed -i 's/(sleep 5)/(sleep 30)/g' /usr/lib/ambari-infra-solr/bin/solr

You can also add the following command to the script:

hadoop fs -rm /user/infra-solr/ranger_audits/core_node1/data/index/write.lock
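A slightly more cautious wrapper around that last step, using the HDFS path from the post; it only deletes the lock file if it actually exists, and should be run while the affected Solr instance is stopped:

LOCK=/user/infra-solr/ranger_audits/core_node1/data/index/write.lock
hadoop fs -test -e "$LOCK" && hadoop fs -rm "$LOCK"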
03-09-2018
02:07 PM
Quick tips to optimise your Infra Solr for Ranger audits using SolrCloud.

1) Change the SolrCloud retention period of the audits. On the Ranger server, under /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf, edit the file by hand or use sed to replace the 90-day retention in solrconfig.xml with the retention period you want; here it is 6 hours:

sed -i 's/+90DAYS/+6HOURS/g' solrconfig.xml
sed -i 's/86400/7200/g' solrconfig.xml

2) Change the ZooKeeper config by submitting the xml again:

/usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string XXXXXX:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --jaas-file /usr/hdp/current/ranger-admin/conf/ranger_solr_jaas.conf

Check that it was loaded correctly, either in the Solr UI or by downloading the solrconfig.xml back from ZooKeeper with the following command:
/usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/zkcli.sh --zkhost XXXXXX:2181 -cmd getfile /infra-solr/configs/ranger_audits/solrconfig.xml /tmp/solrconfig.xml
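Once the file is back in /tmp, a quick sanity check that the new retention actually made it into ZooKeeper; the grep patterns are simply the values substituted above, which is an assumption about how they appear in solrconfig.xml:

grep -E '\+6HOURS|7200' /tmp/solrconfig.xml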
3) Restart the Ambari Infra Solr service.
03-19-2018
01:21 PM
@dvillarreal Oops, I had missed those. Thanks for pointing me to the policy change/update traces/audits.
05-22-2018
01:59 PM
I can understand Lukas' issue with the "*-2" named .repo files. My install is erroring out and giving me no clues, no breadcrumbs to follow. All my /var/lib/ambari-agent/data/errors* log files are either 0 bytes or 86 bytes long, the latter containing only: "Server considered task failed and automatically aborted it." This is on CentOS 7.4 with Ambari 2.6.1.5. When I installed with an ambari-hdp.repo, Ambari complained and duplicated it as ambari-hdp-1.repo. Justin
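For anyone hitting the same thing, a quick hedged way to spot the duplicated repo files and the near-empty agent error logs mentioned above (the repo directory is the standard CentOS location, not something stated in the post):

ls -l /etc/yum.repos.d/ambari-hdp*.repo
ls -l /var/lib/ambari-agent/data/errors*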
12-01-2017
10:41 AM
kinit PRINCIPAL -kt /etc/security/keytabs/PRINCIPAL.keytab
hive --hiveconf hive.execution.engine=mr
SET hive.execution.engine=tez;
SET tez.queue.name=QUEUE_NAME;
use MON_SCHEMA;
select count(*) from TABLE where id =1;
Starting the Hive CLI with the MR engine opens the prompt faster than the default, because it does not have to request an AM resource at startup.
PS: the Hive CLI is not recommended and should be considered deprecated in your production environment; check here for more info.
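For comparison, a hedged beeline equivalent of the same session; the JDBC URL, Kerberos realm, and HiveServer2 principal below are placeholders, not taken from the post:

kinit PRINCIPAL -kt /etc/security/keytabs/PRINCIPAL.keytab
beeline -u "jdbc:hive2://HS2_HOST:10000/default;principal=hive/_HOST@EXAMPLE.COM" \
  --hiveconf tez.queue.name=QUEUE_NAME \
  -e "use MON_SCHEMA; select count(*) from TABLE where id = 1;"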