Member since: 04-25-2020
Posts: 43
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1014 | 11-19-2023 11:07 AM
 | 1249 | 09-30-2023 09:10 AM
 | 1422 | 03-20-2022 03:00 AM
 | 1601 | 03-20-2022 02:47 AM
10-14-2020
09:46 AM
Hi,
I want to enable the on-demand metadata feature for Impala, since it brings significant improvements; one of them is that cached metadata is evicted automatically under memory pressure. I want to know whether this feature can be enabled on Cloudera CDH 5.14.4. Has anyone enabled it in production on CDH 5.14.4?
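For reference, a minimal sketch of what enabling it looks like on a release that ships the feature (it arrived with Impala 3.1 / CDH 6.1, so these startup flags may simply not exist on CDH 5.14.4):

```
# Impala Daemon command-line argument safety valve:
--use_local_catalog=true

# Catalog Server command-line argument safety valve:
--catalog_topic_mode=minimal
```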
10-14-2020
07:10 AM
Hi Tim, your suggestion was very helpful and I have a good understanding now, so I am accepting it as the solution. I have just one more question: to stop a query from monopolizing resources, is it better to increase the Impala Daemon Memory Limit (mem_limit)? What do you suggest?
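As a hedged aside (not from the reply above): besides raising the daemon-wide mem_limit, a per-query cap keeps a single statement from starving a node, e.g.:

```
# Illustrative value; MEM_LIMIT is a standard Impala query option
impala-shell -q "SET MEM_LIMIT=2gb; select distinct(partition_date) parts from mddb_servt;"
```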
10-03-2020
07:21 AM
Hi Tim, can you explain in more detail how I can do this? You suggested: "you could set up memory-based admission control with a min memory limit of 2GB and a max memory limit of 20GB to prevent any one query from taking up all the memory on a node."
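For context, a sketch of the pool settings that suggestion maps to; the min/max query memory limit properties below come from later Impala releases (3.1+) and are an assumption for this cluster, as is the pool name:

```
# Per-pool settings, shown as name = value; stored as XML properties
# in llama-site.xml (Dynamic Resource Pool configuration):
impala.admission-control.min-query-mem-limit.root.default = 2147483648    # 2 GB
impala.admission-control.max-query-mem-limit.root.default = 21474836480   # 20 GB
```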
10-03-2020
12:29 AM
Hi Tim, thanks for your reply. I can only see two parameters for the memory limit: the first is Single Pool Mem Limit (default_pool_mem_limit) = -1 B, and the second is Impala Daemon Memory Limit (mem_limit) = 60 GB. So how do I set the min and max memory limits?
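A hedged sketch of the distinction, since neither of those flags is a per-pool setting: the per-pool limits live in the Dynamic Resource Pool configuration rather than in the Impala daemon flags, for example a pool-level default query option (pool name and value are assumptions for illustration):

```
<property>
  <name>impala.admission-control.pool-default-query-options.root.default</name>
  <value>MEM_LIMIT=2g</value>
</property>
```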
10-02-2020
11:18 AM
After running the Impala query select distinct(partition_date) parts from mddb_servt; I am getting the error below (execution time: 10 seconds):

java.sql.SQLException: [Cloudera][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: TStatus(statusCode:ERROR_STATUS, sqlState:HY000, errorMessage:ExecQueryFInstances rpc query_id=937d334667doe010:4967d4b900000111 failed: Failed to get minimum memory reservation of 68.00 MB on daemon ec2-3-128-13.us-east-2:22000 for query 937d334667doe010:4967d4b900000111 because it would exceed an applicable memory limit. Memory is likely oversubscribed. Reducing query concurrency or configuring admission control may help avoid this error.

Memory usage:
Process: Limit=60.00 GB Total=50.68 GB Peak=50.70 GB
Buffer Pool: Free Buffers: Total=0
Buffer Pool: Clean Pages: Total=2.31 GB
Buffer Pool: Unused Reservation: Total=-2.29 GB
Free Disk IO Buffers: Total=1.10 GB Peak=1.15 GB
RequestPool=root.default: Total=48.28 GB Peak=48.37 GB
Query(78befceb1eef47:d33db5f200030000): Reservation=47.49 GB ReservationLimit=48.00 GB OtherMemory=293.93 MB Total=47.78 GB Peak=47.81 GB
Query(e12345ed0a094d14:f4616fb90030000): Reservation=238.00 MB ReservationLimit=48.00 GB OtherMemory=4.27 MB Total=242.27 MB Peak=303.22 MB
Query(be7896564af6f2c:1e675bb00000000): Reservation=272.00 MB ReservationLimit=48.00 GB OtherMemory=4.23 MB Total=276.23 MB Peak=314.22 MB
Query(914d001522ce0e10:264bd4b900000000): Reservation=0 ReservationLimit=48.00 GB OtherMemory=0 Total=0 Peak=0
RequestPool=root.anp: Total=0 Peak=536.50 MB
Untracked Memory: Total=1.28 GB

Please note: the Impala Daemon Memory Limit (mem_limit) is 60 GiB. Please let me know what could be the reason.
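A hedged triage sketch based on the dump above: query 78befceb1eef47:d33db5f200030000 is holding about 47.5 GB of the 60 GB process limit, so the 68 MB minimum reservation for the new query cannot be granted. Two quick checks, assuming the default debug web UI port:

```
# Live per-query memory on the affected daemon's debug page:
curl -s http://ec2-3-128-13.us-east-2:25000/memz

# Re-run with a per-query cap so a single statement cannot take the whole node:
impala-shell -q "SET MEM_LIMIT=4gb; select distinct(partition_date) parts from mddb_servt;"
```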
Labels:
- Apache Impala
08-15-2020
11:10 PM
@GangWar thanks for your quick reply. As per your suggestion I checked the logs in both locations, on the NameNode and the DataNodes. On one of the DataNodes I checked the logs in /var/run/cloudera-scm-agent/process/28-hdfs-DATANODE/logs and found the following when searching for the keyword Error:

++ replace_pid -Xms521142272 -Xmx521142272 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError '-XX:HeapDumpPath=/tmp/hdfs_hdfs-DATANODE-111b6db5e742dbffe061f0c1d6bc8878_pid{{PID}}.hprof' -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
++ sed 's#{{PID}}#5409#g'

However, I don't see any error message in /var/log/hadoop-hdfs; please also suggest which log file to check to debug. The directory contains:

audit
hadoop-cmf-hdfs-NAMENODE-namenode1.us-east1-b.c.coherent-elf-271314.internal.log.out
hdfs-audit.log
SecurityAuth-hdfs.audit
stacks
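For what it's worth, a hedged sketch for searching the role log itself; the agent's process directory mostly holds launch scripts and stderr, while the DataNode's own log is the hadoop-cmf-* file under /var/log/hadoop-hdfs (the exact filename follows CM defaults and is an assumption):

```
grep -iE 'error|exception|fatal' /var/log/hadoop-hdfs/hadoop-cmf-hdfs-DATANODE-*.log.out | tail -50
```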
08-15-2020
02:56 AM
I am stuck in the middle of cluster setup after installing Cloudera Manager. The first 3 steps completed, and it successfully formatted the name directories of the current NameNode, but it got stuck on Start HDFS. Please find the error below.

Cluster Setup - First Run Command
Status: Running, Aug 15, 9:16:45 AM
There was an error when communicating with the server. See the log file for more information.
Completed 3 of 8 step(s).

- Ensuring that the expected software releases are installed on hosts. Aug 15, 9:16:45 AM. 90ms
- Deploying Client Configuration, Cluster 1. Aug 15, 9:16:45 AM. 16.13s
- Start Cloudera Management Service, ZooKeeper. Aug 15, 9:17:01 AM. 27.87s
- Start HDFS: 0/1 steps completed. Aug 15, 9:17:29 AM
  - Execute 3 steps in sequence; waiting for command (Start (77)) to finish. Aug 15, 9:17:29 AM
  - Formatting the name directories of the current NameNode (if the name directories are not empty, this is expected to fail). NameNode (namenode1). Aug 15, 9:17:29 AM. 14.86s
  - Start HDFS: There was an error when communicating with the server. See the log file for more information.

I am unable to check the logs as the cluster is not fully set up; please suggest what could be the reason and how to fix it. I am installing version 5.16.2.
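Even with the cluster half-built, the Cloudera Manager server and agent logs are already on disk; a minimal sketch of where to look, using standard CM paths:

```
tail -100 /var/log/cloudera-scm-server/cloudera-scm-server.log
tail -100 /var/log/cloudera-scm-agent/cloudera-scm-agent.log
# Per-command role logs land under the agent's process directory:
ls -lt /var/run/cloudera-scm-agent/process/ | head
```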
Labels:
- Cloudera Manager
- HDFS
08-11-2020
04:30 AM
Thanks a lot, @GangWar. You are absolutely correct: I was using OpenJDK instead of Oracle JDK. I thought there was a bug and had no clue how to fix it, but after making the change you suggested it worked. Thanks a lot. I am accepting this as the solution.
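For anyone hitting the same thing, a quick check of which JDK a host is actually running (banner wording varies by vendor):

```
java -version                  # shows "OpenJDK" vs "Java(TM) SE" in the banner
readlink -f "$(which java)"    # resolves the java binary actually on PATH
```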
08-06-2020
03:25 AM
I am getting the following error after enabling Kerberos on the CDH cluster; HDFS and YARN are not able to start:

I can't open /run/cloudera-scm-agent/process/256-yarn-NODEMANAGER/container-executor.cfg: Permission denied.
+ perl -pi -e 's#{{CGROUP_GROUP_CPU}}##g' /run/cloudera-scm-agent/process/256-yarn-NODEMANAGER/yarn-site.xml

Checking the YARN NodeManager logs, I see the error below:

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.authorize.AuthorizationException: User: cloudera@CLUSTERIE.LOCAL is not allowed to impersonate yarn/ip-10-0-xxxxx@xyz.com

Any suggestion why I am getting this error? When I disable Kerberos, everything works well. Please assist, as the severity of this is very high.
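As a hedged sketch (not a confirmed fix for this case): "is not allowed to impersonate" usually means the HDFS proxyuser rules don't cover the calling principal. For the user named in the message, the standard Hadoop properties in core-site.xml (via the HDFS safety valve in CM) would look like this; the wildcards are permissive examples and should be tightened for production:

```
<property>
  <name>hadoop.proxyuser.cloudera.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.cloudera.groups</name>
  <value>*</value>
</property>
```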
07-30-2020
09:33 AM
Thanks a lot for your quick reply and for providing a detailed explanation; it is really appreciated. I was using Cloudera version 5.14.1. Yes, there is a bug in the older version, which I realized soon after, but your explanation helped me understand the reason behind the error.