Member since 02-22-2016 · 28 Posts · 1 Kudo Received · 0 Solutions
08-04-2022
10:21 PM
1 Kudo
Reply might be late, but KCM- and keyring-based Kerberos credential caches are not supported with Hadoop.
# klist
Ticket cache: KCM:0:86966
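For reference, the cache type is controlled by `default_ccache_name` in /etc/krb5.conf; switching back to a file-based cache makes `klist` report `FILE:` instead of `KCM:`. A minimal fragment (the path shown is the conventional default, adjust as needed):

```ini
# /etc/krb5.conf
[libdefaults]
    default_ccache_name = FILE:/tmp/krb5cc_%{uid}
```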
03-24-2020
10:27 PM
We do have the entries below, and we have confirmed there are no firewall rules. One more thing I missed mentioning: we started seeing these issues when we began upgrading the OS on the cluster nodes from OEL 6.x to OEL 7.x, but looking at the logs it seems to be happening on both types of host.
127.0.0.1 localhost.localdomain localhost
# special IPv6 addresses
::1 localhost6.localdomain6 localhost6
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
03-24-2020
03:57 AM
@Shelton Thanks for responding. Why would the same error come for communication within a host as well, and on different ports? Any clues? We are using static IPs (private IPs for cluster communication) and they are specified in /etc/hosts across all hosts.
03-24-2020
03:22 AM
We are seeing a lot of "no route to host" errors in the DataNode logs, and Impala queries are also failing because of this. We see it both within a node and between nodes, and it is happening on multiple nodes. The host inspector runs with no issues.
We did a lot of checks with the OS and network teams but couldn't find anything. Any help on this?
1004:DataXceiver error processing WRITE_BLOCK operation src: /192.168.225.165:55010 dst: /192.168.225.165:1004 java.net.NoRouteToHostException: No route to host
1004:DataXceiver error processing WRITE_BLOCK operation src: /192.168.225.68:35322 dst: /192.168.225.68:1004 java.net.NoRouteToHostException: No route to host
1004:DataXceiver error processing WRITE_BLOCK operation src: /192.168.225.171:40718 dst: /192.168.225.165:1004 java.net.NoRouteToHostException: No route to host
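One way to narrow this down: when nothing is listening on a reachable host the kernel returns ECONNREFUSED, whereas a firewall REJECT rule (or a genuine routing gap) returns EHOSTUNREACH, which Java surfaces as `NoRouteToHostException`. A small stdlib sketch to tell the cases apart from any node (the addresses in the logs above would be the ones to probe):

```python
import errno
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a TCP connect attempt the way the DataNode logs would."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except socket.timeout:
        return "filtered (likely a DROP firewall rule)"
    except OSError as e:
        if e.errno == errno.EHOSTUNREACH:
            return "no route to host (REJECT rule or routing problem)"
        if e.errno == errno.ECONNREFUSED:
            return "refused (host reachable, nothing listening)"
        return f"error: {e}"
    finally:
        s.close()

# e.g. probe("192.168.225.165", 1004) from another cluster node
```

If this reports "no route to host" between nodes that can otherwise ping each other, a host-local firewall is the usual suspect; OEL 7 ships with firewalld enabled by default, which would line up with the problem appearing during the 6.x-to-7.x upgrade.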
Labels:
- Apache Impala
03-16-2017
02:30 AM
Thx @csguna for the detailed explanation, much appreciated. So I think there is not much difference in size between a snappy-compressed and a non-compressed Parquet table.
03-09-2017
09:12 PM
@csguna @saranvisa Thx for the detailed response. I have 2 follow-up questions (sorry, I am just learning): 1) Since snappy is not very good at compression (disk), what would be the difference in disk space for a 1 TB table stored as plain Parquet versus Parquet with snappy compression? 2) Is it possible to compress a non-compressed Parquet table later with snappy?
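On question 2, Parquet data is compressed when it is written, so an existing table has to be rewritten to change the codec. A minimal Impala sketch (the table names are illustrative):

```sql
-- Write a snappy-compressed copy of an uncompressed Parquet table
SET COMPRESSION_CODEC=snappy;
CREATE TABLE sales_snappy STORED AS PARQUET AS SELECT * FROM sales;
```

The original table can then be dropped and the copy renamed once the sizes have been compared.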
03-08-2017
07:59 AM
Hi, 1) If we create a table (in both Hive and Impala) and just specify STORED AS PARQUET, will it be snappy-compressed by default in CDH? 2) If not, how do I identify a Parquet table with snappy compression versus one without it? Also, how do I specify snappy compression at the table level while creating the table, and at a global level even when nobody specified it at the table level (so that all tables stored as Parquet are snappy-compressed)? Please help.
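For reference, the usual ways to set the codec explicitly, a sketch based on the standard Impala query option and Hive table property (the table definition is illustrative):

```sql
-- Impala: session-level query option, applies to subsequent Parquet writes
SET COMPRESSION_CODEC=snappy;

-- Hive: per-table property on a Parquet table
CREATE TABLE t (id INT) STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY");
```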
Labels:
- Apache Hive
- Apache Impala
- Apache Spark
02-15-2017
08:31 AM
Hi, I want to list all INSERT statements run in Impala during a specific time window (Impala queries). How do I query that? I am looking for something like statement LIKE 'insert%' (it should list all statements that start with INSERT), but that option is not available. Please suggest any alternatives.
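One alternative is the Cloudera Manager API, which exposes Impala query history through an `impalaQueries` endpoint that accepts a time window and a filter. A stdlib sketch that only builds the request URL; the host, port, API version, and especially the filter syntax below are assumptions to adapt to your deployment:

```python
from urllib.parse import urlencode

def impala_inserts_url(cm_host, cluster, service, start_iso, end_iso):
    """Build a CM API URL for Impala queries in [start_iso, end_iso)."""
    base = (f"http://{cm_host}:7180/api/v19/clusters/{cluster}"
            f"/services/{service}/impalaQueries")
    params = urlencode({
        "from": start_iso,   # e.g. "2017-02-01T00:00:00"
        "to": end_iso,
        # filter expression is an assumption based on CM's query search language
        "filter": 'statement RLIKE ".*INSERT.*"',
        "limit": 100,
    })
    return f"{base}?{params}"

url = impala_inserts_url("cm.example.com", "cluster1", "impala",
                         "2017-02-01T00:00:00", "2017-02-15T00:00:00")
```

The URL can then be fetched with any HTTP client using CM admin credentials, and the JSON response paged through with `offset`.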
Labels:
- Apache Impala
- Cloudera Manager