Member since 02-18-2019 | 83 Posts | 3 Kudos Received | 0 Solutions
03-17-2020
09:39 PM
Hi, during our DB patching activity, do we need to stop all Hadoop services or put them in maintenance mode, or will putting Cloudera Manager in maintenance mode suffice? Thanks, Amn
03-01-2020
08:50 PM
Hello,
I would like to know if there is a way to rebalance data in Kudu evenly across all tablet servers. Our Kudu deployment is as follows:
3 Kudu Masters
9 Tablet Servers
kudu 1.7.0-cdh5.16.2/ CM 5.16.2
Data across these 9 tablet servers is not evenly distributed; most of the data sits on just 3 of them. Going through some articles, I found that there is currently no rebalance tool analogous to the HDFS balancer (https://community.cloudera.com/t5/Support-Questions/Kudu-Tablet-Server-Data-Directories-rebalancing/td-p/79649).
However, if I go to Clusters > Kudu > Actions, I see "Run Kudu Rebalancer Tool". What is its purpose? Will it distribute data across Kudu as a whole, just the Kudu masters, or the tablet servers too? I would appreciate some advice or assistance on this.
Thanks
Amn
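To the best of my knowledge, the CM "Run Kudu Rebalancer Tool" action wraps the `kudu cluster rebalance` CLI tool, which moves tablet replicas between tablet servers only; Kudu masters store no table data, so they are not rebalanced. Note that the rebalancer postdates Kudu 1.7, so a newer `kudu` binary is typically needed. A minimal sketch, where the master addresses are placeholders for your three masters:

```shell
# Hypothetical master addresses -- replace with your actual Kudu masters.
MASTERS="master1:7051,master2:7051,master3:7051"

# --report_only prints the planned replica moves without executing them;
# drop the flag to actually rebalance. Guarded so this sketch is safe to
# run on a host without the kudu CLI installed.
if command -v kudu >/dev/null 2>&1; then
  kudu cluster rebalance "$MASTERS" --report_only
else
  echo "kudu CLI not found; would run: kudu cluster rebalance $MASTERS --report_only"
fi
```

Running with `--report_only` first is a reasonable way to see how many replicas would move before committing to the rebalance.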
Labels:
- Apache Kudu
02-09-2020
07:43 PM
Hi @kingpin, thanks for the assistance. I wanted to confirm whether restarting a tablet server has any impact on the Kudu service? Regards
01-23-2020
09:49 PM
Hello, we have been getting alerts for open file descriptors on Kudu tablet servers: "Concerning: Open file descriptors: 16,954. File descriptor limit: 32,768. Percentage in use: 51.74%. Warning threshold: 50.00%." I would appreciate assistance in resolving this. Also, does restarting services on the affected tablet servers have any performance impact on Kudu as a whole?

[root@myserver ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1546606
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1546606
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[root@myserver ~]# cat /proc/sys/fs/file-max
39233193
[root@myserver ~]# cat /proc/sys/fs/file-nr
35200 0 39233193
[root@myserver ~]# lsof -u kudu | wc -l
18326

Regards, Amn
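One thing worth noting: the `open files (-n) 1024` shown above is the root shell's own soft limit, not necessarily the limit of the Kudu daemon, since daemons launched by the CM agent get their own limits. A minimal sketch for checking a specific process's actual limit and usage via /proc; the `kudu-tserver` process pattern is an assumption, and the sketch falls back to the current shell if no such process exists:

```shell
# Find a kudu tablet server pid (pattern "kudu-tserver" is an assumption);
# fall back to the current shell's pid so the sketch runs anywhere.
PID=$(pgrep -f kudu-tserver 2>/dev/null | head -n1)
PID=${PID:-$$}

echo "pid: $PID"
# Count this process's currently open file descriptors.
echo "open fds: $(ls /proc/$PID/fd 2>/dev/null | wc -l)"
# The per-process limit actually in effect (soft and hard).
grep "Max open files" /proc/$PID/limits
```

Comparing these two numbers per process tells you which limit the alert is measured against; raising the limit for the role (rather than the shell) is then the relevant fix.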
Labels:
- Apache Kudu
- Cloudera Manager
11-28-2019
09:37 PM
Hi Tim, thanks for your reply. We ran a couple of other queries and did not encounter this issue, so we will resolve this as a one-off case. I appreciate your explanation in this regard.
11-27-2019
01:47 AM
Hello, I am facing an issue with Impala queries: though the query status says 'Finished' on the Impala status page, it still shows as 'Executing'. I was following this Cloudera article: https://my.cloudera.com/knowledge/Finished-Queries-show-as-Executing-in-the-Cloudera-Manager?id=71576. The difference in my case is that the queries were submitted through a JDBC connection rather than impala-shell. Would the resolution outlined in the article be applicable to queries submitted through an Impala JDBC connection? And secondly, what would be the fix for such queries that are in a hung state? Regards, Amn
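For a query that is genuinely stuck rather than just mis-displayed, one option is to cancel it through the Cloudera Manager REST API, which exposes a cancel endpoint for Impala queries. A sketch only, under assumptions: the CM host, API version, cluster name, service name, and query ID below are all placeholders for your environment:

```shell
# All values hypothetical -- substitute your CM host, cluster/service
# names, and the query ID from CM's Impala Queries page.
CM="http://cm-host.example.com:7180"
QUERY_ID="9a4d8c0b12345678:abcdef0000000000"
URL="$CM/api/v19/clusters/Cluster1/services/impala/impalaQueries/$QUERY_ID/cancel"

# The actual call (requires CM admin credentials):
# curl -u admin:admin -X POST "$URL"
echo "$URL"
```

Whether the article's fix applies to JDBC-submitted queries is a separate question for Cloudera support; the cancel call above at least clears the hung entry regardless of how the query was submitted.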
Labels:
- Apache Impala
- Cloudera Manager
11-25-2019
11:49 PM
Hi Robbiez, strangely, while checking the other daemons I see a different error, although the alert received from CM is the same: "The health test result for IMPALAD_QUERY_MONITORING_STATUS has become bad: There are 0 error(s) seen monitoring executing queries, and 1 error(s) seen monitoring completed queries for this role in the previous 5 minute(s). Critical threshold: any." The logs, however, paint a different picture. The new log entry I see is attached below, and I see the same on the other daemons, although the file numbers differ slightly.

[root@Server02 impalad]$ tail -n 100 impalad.Server02.impala.log.WARNING.20191120-115803.6341
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:804)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:738)
at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:485)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:666)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:904)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:981)
at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
W1126 14:44:49.208544 6755 ShortCircuitCache.java:826] ShortCircuitCache(0x116b2697): could not load 1122093192_BP-196508081-172.18.208.227-1510815452859 due to InvalidToken exception.
Java exception follows:
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /user/hive/warehouse/MYAPPLICATION.db/network/id=4/PRT_id=31268741116146977/part-00000_copy_293

Regards, Amn
11-25-2019
09:23 PM
Hi Attila, I appreciate your reply. I am not quite sure what you mean by "how are clients accessing"; as far as I know, we use Impala to query Kudu tables, if that makes sense. I am still learning the ropes of Kudu. Where and how can I check? Regards, Amn