Member since: 05-07-2018
Posts: 331
Kudos Received: 45
Solutions: 35
My Accepted Solutions
Views | Posted
---|---
7014 | 09-12-2018 10:09 PM
2734 | 09-10-2018 02:07 PM
9292 | 09-08-2018 05:47 AM
3067 | 09-08-2018 12:05 AM
4091 | 08-15-2018 10:44 PM
09-10-2018
02:26 PM
Hello @A C. What do you see in the YARN UI? Is there any application_id running for your Oozie workflow/Spark job? Thanks.
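If it helps, a quick way to check from the command line (a sketch, assuming the YARN client is available on the node) is:
yarn application -list -appStates RUNNING
This lists the running application IDs, so you can confirm whether the Oozie launcher / Spark job was actually submitted to YARN.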
09-10-2018
02:19 PM
Hello @tauqeer khan. If you're using HDP, you can give Falcon a shot: https://community.hortonworks.com/articles/110398/mirroring-datasets-between-hadoop-clusters-with-ap.html Otherwise, check the following links: https://cwiki.apache.org/confluence/display/Hive/Replication https://medium.com/@anishekagarwal/aapache-hive-introduction-to-replication-v2-2e12edcbeec And lastly, if you want to replicate Hive but don't want to use DistCp or any of the solutions listed above, you can try the following open-source project from Airbnb (I've used it once, it's pretty cool): https://github.com/airbnb/reair Hope this helps!
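As a rough illustration of the replication v2 approach from the Hive wiki above (a sketch; repl_db is a placeholder database name and the exact syntax depends on your Hive version):
-- on the source cluster
REPL DUMP repl_db;
-- note the dump directory and last replication id returned
-- on the target cluster
REPL LOAD repl_db FROM '<dump_directory_returned_above>';
REPL STATUS repl_db;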
09-10-2018
02:07 PM
1 Kudo
Hello @sandra Alvarez! Just to confirm, you're using NIFI, right? If so, check this article (have an example using Getsnmp + LogAttribute): https://community.hortonworks.com/articles/215092/how-to-use-make-a-simple-flow-using-snmpsetsnmpget.html Also, to get the attribute of FlowFiles and write into files, check the following link: https://community.hortonworks.com/questions/75255/nifi-how-to-write-a-flowfile-attribute-to-a-file.html Hope this helps!
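A common pattern for writing an attribute out to a file (a sketch only; adjust processor and property names to your flow): route the FlowFile through a ReplaceText processor with Replacement Strategy = Always Replace and Replacement Value = ${your.attribute}, so the attribute value becomes the FlowFile content, then send it to PutFile with the Directory property pointing at the target path.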
09-10-2018
01:55 PM
Hello @rama. Hmm, that's strange. By any chance, did the ADD PARTITION and the INSERT run within a short period of each other? I'm asking because I suspect 2 things:
- Your partition was added and somehow the table got locked. You can check this by running SHOW LOCKS;
- Your HiveMetastore DB (MySQL, Derby, etc.) may not be healthy.
So next time you can try the following:
- Enable DEBUG mode for the HiveMetastore logs and check if you find something.
- Log into the DB and check if your partitions have been added properly.
- Log into Hive with verbose output and run SHOW LOCKS;
- Just to confirm, make sure that you run msck repair table <TABLE>; after the whole process has ended.
Hope this helps!
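For reference, the checks above would look roughly like this from beeline or the Hive CLI (a sketch; my_table is a placeholder):
SHOW LOCKS my_table;
SHOW PARTITIONS my_table;
MSCK REPAIR TABLE my_table;
And on the metastore side (assuming a MySQL-backed metastore), something along these lines shows what the metastore actually recorded:
SELECT PART_NAME FROM PARTITIONS WHERE TBL_ID = (SELECT TBL_ID FROM TBLS WHERE TBL_NAME = 'my_table');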
09-08-2018
05:47 AM
Hello @Marshal Tito! Could you check the following?
1 - Does the hive user have permission to read the jar? Try chmod 777 on the jar.
2 - Every time you run a query that needs the jar, did you add the jar first in the same session? (With the add jar command, you need to add the jar in every session that queries the table backed by that jar.)
3 - One test you can give a shot is adding the jar to hive.aux.jars.path (for it to take effect you will need to restart Hive afterwards).
Lastly, if nothing works, I'd enable debug for the Hive CLI, repeat the same steps, and watch if something shows up in the logs on the console:
hive --hiveconf hive.root.logger=DEBUG,console
ps: I've also followed the article and it was working fine for me. Hope this helps!
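To make the first two checks concrete (a sketch; /path/to/your-udf.jar is a placeholder):
chmod 777 /path/to/your-udf.jar
# then, inside the same Hive session that runs the query:
ADD JAR /path/to/your-udf.jar;
LIST JARS;
Alternatively, point hive.aux.jars.path at the jar (e.g. file:///path/to/your-udf.jar) in hive-site.xml or Ambari and restart Hive, so it is picked up for every session without an explicit ADD JAR.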
09-08-2018
12:05 AM
Hi @Maxim Neaga. Just to make sure, if you run the following command for every keystore/truststore, are you able to see the certificate?
keytool -v -list -keystore <pathtokeystore>/<keystore/truststore.jks>
It should show something like this:
[root@vmurakami-1 ~]# keytool -v -list -keystore windows.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 2 entries
Alias name: nifi-cert
Creation date: Sep 1, 2018
Entry type: trustedCertEntry
Owner: CN=vmurakami-3, OU=NIFI
Issuer: CN=vmurakami-3, OU=NIFI
Serial number: 1649584f09b00000000
Valid from: Fri Jul 13 21:21:14 UTC 2018 until: Mon Jul 12 21:21:14 UTC 2021
Certificate fingerprints:
MD5: 02:BE:7D:37:22:5B:A8:37:F2:F0:02:E0:26:96:E7:54
SHA1: 1F:D0:EC:B5:1A:6E:E7:E5:B4:65:71:1B:8A:B3:99:C2:2A:50:28:0D
SHA256: 14:1C:40:B9:2E:6C:C4:5F:56:C8:9D:76:31:21:B5:CB:E2:FA:B1:A2:BE:9B:CA:7F:0D:B4:72:1B:32:2A:95:69
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 78 2D ED D1 1D 7F F6 22 A3 60 39 EF CE AC 09 6E x-.....".`9....n
0010: CD 51 B9 D3 .Q..
]
]
#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
CA:true
PathLen:2147483647
]
#3: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
clientAuth
serverAuth
]
#4: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
DigitalSignature
Non_repudiation
Key_Encipherment
Data_Encipherment
Key_Agreement
Key_CertSign
Crl_Sign
]
#5: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 78 2D ED D1 1D 7F F6 22 A3 60 39 EF CE AC 09 6E x-.....".`9....n
0010: CD 51 B9 D3 .Q..
]
]
*******************************************
*******************************************
Alias name: windows
Creation date: Sep 1, 2018
Entry type: PrivateKeyEntry
Certificate chain length: 2
Certificate[1]:
Owner: CN=MSEDGEWIN10, OU=NIFI
Issuer: CN=MSEDGEWIN10, OU=NIFI
Serial number: 27ba96c9
Valid from: Sat Sep 01 07:21:44 UTC 2018 until: Tue Aug 27 07:21:44 UTC 2019
Certificate fingerprints:
MD5: B7:FE:EB:0C:3E:7D:EE:E9:58:54:EC:2B:F4:02:9C:0D
SHA1: 3A:9A:DD:05:FF:E8:41:99:C8:8B:D4:84:4C:4A:5E:56:6C:46:15:B0
SHA256: 22:CD:A6:CE:9E:F0:B8:A3:A8:6E:25:2E:4D:A2:AB:70:4F:98:36:AC:8C:C0:A0:B6:15:22:E8:27:80:CC:F3:A6
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 3B FE 73 64 EC 9C 91 B6 AC 3D EC 44 9D AF DD 66 ;.sd.....=.D...f
0010: B8 DE 4A F8 ..J.
]
]
Certificate[2]:
Owner: CN=vmurakami-3, OU=NIFI
Issuer: CN=vmurakami-3, OU=NIFI
Serial number: 1649584f09b00000000
Valid from: Fri Jul 13 21:21:14 UTC 2018 until: Mon Jul 12 21:21:14 UTC 2021
Certificate fingerprints:
MD5: 02:BE:7D:37:22:5B:A8:37:F2:F0:02:E0:26:96:E7:54
SHA1: 1F:D0:EC:B5:1A:6E:E7:E5:B4:65:71:1B:8A:B3:99:C2:2A:50:28:0D
SHA256: 14:1C:40:B9:2E:6C:C4:5F:56:C8:9D:76:31:21:B5:CB:E2:FA:B1:A2:BE:9B:CA:7F:0D:B4:72:1B:32:2A:95:69
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 78 2D ED D1 1D 7F F6 22 A3 60 39 EF CE AC 09 6E x-.....".`9....n
0010: CD 51 B9 D3 .Q..
]
]
#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
CA:true
PathLen:2147483647
]
#3: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
clientAuth
serverAuth
]
#4: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
DigitalSignature
Non_repudiation
Key_Encipherment
Data_Encipherment
Key_Agreement
Key_CertSign
Crl_Sign
]
#5: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 78 2D ED D1 1D 7F F6 22 A3 60 39 EF CE AC 09 6E x-.....".`9....n
0010: CD 51 B9 D3 .Q..
]
]
*******************************************
*******************************************
Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore windows.jks -destkeystore windows.jks -deststoretype pkcs12".
Also, make sure of 2 things:
- Give read permission on the Ranger HDFS plugin keystore/truststore: chmod o+r keystore.jks truststore.jks
- Put the same owner (CN=<some_name>) in both places: the owner of the certificate for the Ranger HDFS plugin in the Ambari UI, and commonNameForCertificate in the Ranger plugin for HDFS in the Ranger UI.
And lastly, just to make sure, go to the ranger-admin node and add the client certificate to the default Java keystore "cacerts":
find / -name "cacerts" -type f
keytool -import -file <your cert file> -alias <your alias> -keystore <the_path_for_the_jdk_listed_above>/cacerts -storepass changeit # default password
Hope this helps!
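To confirm the import afterwards, a quick check (a sketch; <your alias> is the same alias used in the import above):
keytool -list -keystore <the_path_for_the_jdk_listed_above>/cacerts -storepass changeit | grep -i "<your alias>"
It should list your alias as a trustedCertEntry.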
09-03-2018
08:03 PM
1 Kudo
Hello @Simran kaur! Did you notice any heap spike for HS2? Also, could you check the thread dump for HS2? BTW, is there anything else in the HS2 logs? Once I had a similar problem with Hive + Redash, and my issue was related to the number of connections hanging in Thrift. Not sure if it's your case. Hope this helps.
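To grab those, something along these lines usually works (a sketch; assumes you run it as the user owning the HiveServer2 process and that pgrep matches a single HS2 process):
HS2_PID=$(pgrep -f hiveserver2 | head -1)   # hypothetical way to find the pid; use ps if you prefer
jstack "$HS2_PID" > /tmp/hs2_threaddump.txt
jmap -heap "$HS2_PID"
In my Redash case, the thread dump showed most Thrift worker threads stuck waiting, which is what pointed to the hanging connections.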
09-03-2018
07:46 PM
Hello @Teresa Tavernelli! Could you share with us the parameters that you're passing to the ODBC driver? Btw, are you using Linux or Windows on the client host? One thing to check as well is whether you're able to telnet/netcat to port 10000 (the HiveServer2 default) from the host machine. One last thing: are you able to connect to Hive using beeline inside the sandbox? E.g.
beeline -u 'jdbc:hive2://<your-sandbox>:10000/default;' -n hive
Hope this helps!
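For the port check, either of these from the client host should do (a sketch; <your-sandbox> is your sandbox hostname or IP):
nc -vz <your-sandbox> 10000
telnet <your-sandbox> 10000
If the connection is refused or times out, the problem is network/port-forwarding rather than the ODBC parameters themselves.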
08-29-2018
06:01 PM
2 Kudos
How to make a simple flow using SNMPSET/SNMPGET
Appendix A - Troubleshooting common mistakes
Pre-requisites:
- NiFi cluster installed (I'm using HDF 3.1.2.0, hence NiFi 1.5)
- CentOS 7
With your NiFi cluster installed, you're ready to start this step-by-step. First of all, we'll need to install an SNMP server to retrieve/set values from/to MIBs. In my case I'm using the net-snmp tool, which comes with some MIB samples to play with. Here we're going to use SNMPv2-MIB.txt under the /usr/share/snmp/mibs path.
1) Install net-snmp
yum install -y net-snmp net-snmp-utils net-snmp-libs
2) Give full read-write access to anyone in the public community (DON'T DO IT IN PROD, please)
echo > /etc/snmp/snmpd.conf
printf "agentAddress udp:161\nrwcommunity public\nrwuser public\nrocommunity public default system\nrouser public" >> /etc/snmp/snmpd.conf
3) Start the SNMP server daemon - port 161
service snmpd start
4) Run a simple snmpwalk to see if the changes from step 2 are working
snmpwalk -v2c -mALL -c public localhost system
5) Run a simple snmpget
snmpget -v2c -mALL -c public localhost SNMPv2-MIB::sysContact.0
6) Set a value to overwrite the one above, then check again to confirm the value has been replaced
snmpset -v2c -mALL -c public localhost SNMPv2-MIB::sysContact.0 = "Vinicius Higa Murakami"
snmpget -v2c -mALL -c public localhost SNMPv2-MIB::sysContact.0
7) Log in as the nifi user and repeat the steps above (just to ensure that the nifi user has read/write access)
su - nifi
snmpset -v2c -mALL -c public localhost SNMPv2-MIB::sysContact.0 = "Nifi it's here"
snmpget -v2c -mALL -c public localhost SNMPv2-MIB::sysContact.0
8) Now we're ready to use NiFi and draw the simple flow using the SNMP processors. Go to the NiFi UI and add the following components, configuring the properties of each: GenerateFlowFile, SetSNMP, LogAttribute, GetSNMP, LogAttribute.
9) Run the NiFi flow (snmpset and snmpget) and check if your value shows up in nifi-app.log.
Appendix A
To troubleshoot SNMP, you can enable the DEBUG log by adding the following line to /etc/sysconfig/snmpd:
OPTIONS="-A -p /var/run/snmpd -a -LF 7 /var/log/snmpd.log"
Then run some snmpget calls (step 7) and check whether the connections are being logged in /var/log/snmpd.log. Two lines should appear:
Connection from UDP: [127.0.0.1]:40769->[127.0.0.1]:161
Received SNMP packet(s) from UDP: [127.0.0.1]:40769->[127.0.0.1]:161
If you're having issues with SNMPSET, check whether snmpset is able to reach the SNMP server and send SNMP packets; to do this, you can use tail -f /var/log/snmpd.log to monitor the connections while you start the flow and watch the behaviour. Another attention point is to check whether NiFi has permission to set/get values from the MIBs (make sure you did step 7). And lastly, check that your snmp$oid is valid.
And here's the template used: template-kb-snmp.xml
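As a quick way to validate an OID (a sketch, assuming net-snmp-utils is installed as in step 1), snmptranslate resolves the symbolic name to the numeric OID and back:
snmptranslate -mALL -On SNMPv2-MIB::sysContact.0
snmptranslate -mALL .1.3.6.1.2.1.1.4.0
If the name or number can't be resolved, the OID configured in the SNMP processor is probably invalid.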
08-17-2018
02:19 PM
Hi @Serg Serg! What do you get for the following line?
python2 -c "import urllib2,json; print(json.loads(urllib2.urlopen('https://www.howsmyssl.com/a/check').read())['tls_version'])"
In my case, I got TLS 1.2. Also, share with us the output of the following:
openssl ciphers -v | awk '{print $2}' | sort | uniq -u