Member since: 03-23-2015
Posts: 1288
Kudos Received: 114
Solutions: 98
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3404 | 06-11-2020 02:45 PM |
 | 5115 | 05-01-2020 12:23 AM |
 | 2913 | 04-21-2020 03:38 PM |
 | 3587 | 04-14-2020 12:26 AM |
 | 2403 | 02-27-2020 05:51 PM |
07-06-2018 03:45 AM
The hdfs dfs -du command returns the TOTAL size in HDFS, including all replicas, and the default replication factor is 3. The totalSize returned by Hive is only the actual size of the table itself, i.e. a single copy, so 11998371425 * 3 = 35995114275 bytes, which is about 33 GB.
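A quick way to compare the two numbers yourself (a minimal sketch; the table name and warehouse path below are placeholders, adjust to your environment):

```
# Raw size of the table directory; newer HDFS versions also print the size including replicas
hdfs dfs -du -s /user/hive/warehouse/mytable

# totalSize as recorded in the Hive metastore (table name is a placeholder)
hive -e "DESCRIBE FORMATTED mytable;" | grep totalSize
```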
07-03-2018 03:45 PM
Hi, I tested and it works for me, at least the first column is returned correctly; please compare with yours. My result below:
+----------------------+--------------------+-------------------+-------------------+-------------------+---------------+---------------------+----------------------------------------------------+--+
| deneme6.framenumber | deneme6.frametime | deneme6.ipsrc | deneme6.ipdst | deneme6.protocol | deneme6.flag | deneme6.windowsize | deneme6.info |
+----------------------+--------------------+-------------------+-------------------+-------------------+---------------+---------------------+----------------------------------------------------+--+
| 1 | NULL | "147.32.84.165" | "91.212.135.158" | "TCP" | NULL | NULL | "1040 → 5678 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1" |
| 2 | NULL | "147.32.84.165" | "91.212.135.158" | "TCP" | NULL | NULL | "[TCP Out-Of-Order] 1040 → 5678 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1" |
| 3 | NULL | "91.212.135.158" | "147.32.84.165" | "TCP" | NULL | NULL | "5678 → 1040 [SYN
+----------------------+--------------------+-------------------+-------------------+-------------------+---------------+---------------------+----------------------------------------------------+--+
The reason for the NULL values in the frametime, flag and windowsize columns is that you defined them as INT, but the numbers are wrapped in double quotes. Hive does not interpret the quotes; it only sees a text file, not a CSV file. My suggestion is to remove all the quotes from the file and try again, so that Hive can convert those values to INT correctly.
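If you want to strip the quotes before loading, something like this should do it (a rough sketch; the file name and HDFS path are placeholders):

```
# Remove all double quotes from the source file, then re-upload it to the table's HDFS location.
sed 's/"//g' deneme6.csv > deneme6_noquotes.csv
hdfs dfs -put -f deneme6_noquotes.csv /user/hive/warehouse/deneme6/
```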
06-30-2018 01:12 PM
quickstart.cloudera does not match 127.0.0.1. What is the principal name used in Hive's keytab file? The host name needs to match the one defined in the principal. Try making the principal cloudera/quickstart.cloudera@CLOUDERA instead of using the IP address.
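To confirm which principals are actually in the keytab, you can list them (a quick sketch; the keytab path below is just a typical location and may differ on your setup):

```
# List the principals stored in the keytab; the host part should be the
# HiveServer2 host name (quickstart.cloudera), not an IP address.
klist -kt /etc/hive/conf/hive.keytab
```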
06-30-2018 01:09 PM
It looks like you are using the Hive JDBC driver, and it seems the driver has transformed the query into something invalid. Two options here: 1. Check which version of the JDBC driver you are using, and try the latest one to see if it helps. 2. Disable the query transformation in the JDBC driver by setting UseNativeQuery to 1; please refer to the user manual below: http://www.cloudera.com/documentation/other/connectors/hive-jdbc/latest/Cloudera-JDBC-Driver-for-Apache-Hive-Install-Guide.pdf
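With the Cloudera Hive JDBC driver the property is typically appended to the connection URL, roughly like this (the host and port are placeholders; check the install guide above for the exact syntax of your driver version):

```
# Hypothetical connection URL with query transformation disabled.
JDBC_URL="jdbc:hive2://hiveserver.example.com:10000;UseNativeQuery=1"
echo "$JDBC_URL"
```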
06-30-2018 01:02 PM
I think I understand you better from your other post: http://community.cloudera.com/t5/Batch-SQL-Apache-Hive/Something-similar-to-PL-HQL-available-on-Cloudera/m-p/69339#M2736 Currently Hive does not support loops, so you might have to do it at the application level.
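If a full Java program is more than you need, looping at the application level can also be done from a script; a rough sketch with beeline (the connection URL, database, table and column names are made up for illustration):

```
# Iterate over query results outside of Hive; replace the URL, query and
# per-row logic with your own.
beeline -u "jdbc:hive2://hiveserver.example.com:10000" --outputformat=csv2 --silent=true \
  -e "SELECT id FROM mydb.mytable" | tail -n +2 | while read -r id; do
    echo "processing row $id"   # per-row logic goes here
done
```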
06-30-2018 12:55 PM
Have you explored the Hive JDBC driver? You can connect from Java code, query Hive tables and loop through the data. Official site: https://www.cloudera.com/downloads/connectors/hive/jdbc/2-6-1.html Doc: http://www.cloudera.com/documentation/other/connectors/hive-jdbc/latest/Cloudera-JDBC-Driver-for-Apache-Hive-Install-Guide.pdf Hope this helps.
06-30-2018 12:51 PM
Hi, can you please share the version of Hive or CDH you are using, so that I can try to reproduce the issue? Thanks.
06-08-2018 04:19 AM
I suggest scanning through the output of "journalctl -o cat -l" for any errors. Also, what is the output of "cdsw status"?
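One way to narrow the journal output down to likely problems (just a convenience filter, case-insensitive):

```
# Show journal entries that mention errors or failures, then the CDSW status summary.
journalctl -o cat -l | grep -iE "error|fail"
cdsw status
```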
05-23-2018 01:06 AM
Your Hive Metastore server is probably not up and running. You need to check the HMS server log to see what is happening.
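A quick first check is whether anything is listening on the metastore port, which is 9083 by default; the log path below is only a typical Cloudera-managed location and may differ on your cluster:

```
# Is the Hive Metastore listening on its default port (9083)?
netstat -tlnp | grep 9083

# Typical log location on a CM-managed node; adjust to your environment.
ls -lt /var/log/hive/ | head
```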