Member since: 01-24-2017
Posts: 12
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4067 | 07-20-2017 08:26 AM
 | 8031 | 05-23-2017 07:38 AM
09-19-2017 08:35 AM
I find the issue occurs when the table is written and then read with minimal time in between. The workaround I used was to sleep for a few seconds between the write and the read.
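A minimal sketch of that workaround, with hypothetical write/read helpers standing in for the actual table operations:

```python
import time

def write_table(name):
    """Placeholder for the actual table write (Hive/Impala/Spark); hypothetical."""
    pass

def read_table(name):
    """Placeholder for the actual table read; hypothetical."""
    pass

write_table("db.events")
time.sleep(10)           # workaround: give the write a few seconds to settle
read_table("db.events")
```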
07-20-2017 08:26 AM
Figured it out. I needed both:
- a username/password to connect to the CM API
- the user launching the script to have a valid Kerberos ticket (a surprising find)

Thanks everyone. 🙂
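In cm_api terms, roughly what the working setup looks like (the host name and credentials are placeholders; the Kerberos ticket comes from running kinit as the launching user beforehand):

```python
# Prerequisite, run as the user launching the script: kinit user@REALM
from cm_api.api_client import ApiResource

# Basic-auth credentials for the CM API itself -- both these AND a valid
# Kerberos ticket for the launching user were required.
api = ApiResource("cm-host.example.com",   # placeholder CM host
                  username="admin",
                  password="admin")

for cluster in api.get_all_clusters():
    print(cluster.name)
```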
07-20-2017 06:56 AM
Hi Friends,

I am getting the following exception:

cm_api.api_client.ApiException: HTTP Error 401: basic auth failed (error 401)

CM is on the same host where I am executing the Python script, and I am passing admin CM credentials, so this seems like a very basic error. Seeking pointers on:
- Which port is it using?
- Do I have to enable the CM REST API?
- Anything else?

Complete stack trace:

```
[rizwmian@w0575oslshcea01 cluster_diff]$ python --version
Python 2.6.6
[rizwmian@w0575oslshcea01 cluster_diff]$ cdh.py
Traceback (most recent call last):
  File "./cdh.py", line 67, in <module>
    main()
  File "./cdh.py", line 62, in main
    cluster = find_cluster(api, None)
  File "./cdh.py", line 12, in find_cluster
    all_clusters = api.get_all_clusters()
  File "/usr/lib/python2.6/site-packages/cm_api/api_client.py", line 128, in get_all_clusters
    return clusters.get_all_clusters(self, view)
  File "/usr/lib/python2.6/site-packages/cm_api/endpoints/clusters.py", line 66, in get_all_clusters
    params=view and dict(view=view) or None)
  File "/usr/lib/python2.6/site-packages/cm_api/endpoints/types.py", line 139, in call
    ret = method(path, params=params)
  File "/usr/lib/python2.6/site-packages/cm_api/resource.py", line 110, in get
    return self.invoke("GET", relpath, params)
  File "/usr/lib/python2.6/site-packages/cm_api/resource.py", line 73, in invoke
    headers=headers)
  File "/usr/lib/python2.6/site-packages/cm_api/http_client.py", line 174, in execute
    raise self._exc_class(ex)
cm_api.api_client.ApiException: HTTP Error 401: basic auth failed (error 401)
```
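As a sanity check independent of cm_api, one can hit the version endpoint directly; CM listens on port 7180 by default (7183 for TLS). A minimal sketch using urllib2, since the host runs Python 2.6 (credentials are placeholders):

```python
import urllib2

url = "http://localhost:7180/api/version"   # 7180 is CM's default (non-TLS) port
mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, url, "admin", "admin")   # placeholder credentials
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr))
print(opener.open(url).read())   # prints the highest API version CM supports
```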
Labels:
- Cloudera Manager
05-23-2017 07:38 AM
2 Kudos
Thanks, everyone, for your input. I have done some research on the topic and will share my findings.

1. Any static number is a magic number. I propose the block-count threshold be:

   heap memory (in GB) x 1 million x comfort_%age (say 50%)

   Why? Rule of thumb: 1 GB of heap for 1M blocks (Cloudera [1]). The actual amount of heap memory required by the namenode turns out to be much lower:

   heap needed = (number of blocks + inodes (files + folders)) x object size (150-300 bytes [1,2])

   For 1 million *small* files: heap needed = (1M + 1M) x 300 B = 572 MB, which is much smaller than the rule of thumb.

2. A high block count may indicate both. The namenode UI states the heap capacity used, e.g. http://namenode:50070/dfshealth.html#tab-overview:

   9,847,555 files and directories, 6,827,152 blocks = 16,674,707 total filesystem object(s).
   Heap Memory used 5.82 GB of 15.85 GB Heap Memory. Max Heap Memory is 15.85 GB.

   Note that the heap memory used is still higher than 16,674,707 objects x 300 bytes = 4.65 GB.

   To find small files, run:

   hdfs fsck <path> -blocks | grep "Total blocks (validated):"

   It returns something like:

   Total blocks (validated): 2402 (avg. block size 325594 B)

   where the average block size is smaller than 1 MB.

3. Yes, a file is small if its size < dfs.blocksize.

4. Each file takes a new data block on disk, though the block size stays close to the file size (so a small block), and for every new file an inode-type object (~150 B) is created, stressing the namenode heap. Small files pose problems for both the namenode and the datanodes:

   Namenode:
   - pulls the ceiling on the number of files down, as it must keep metadata for each file in memory
   - long restart times, as it must read the metadata of every file from a cache on local disk

   Datanodes:
   - a large number of small files means a large amount of random disk IO; HDFS is designed for large files and benefits from sequential reads

[1] https://www.cloudera.com/documentation/enterprise/5-8-x/topics/admin_nn_memory_config.html
[2] https://martin.atlassian.net/wiki/pages/viewpage.action?pageId=26148906
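To make the arithmetic concrete, a small sketch that plugs the numbers from the namenode UI example above into both formulas (the 50% comfort level and 300-byte object size are the assumptions stated in points 1 and 2):

```python
# Proposed block-count threshold: heap (GB) x 1 million x comfort percentage
heap_gb = 15.85      # from the namenode UI example above
comfort = 0.50       # assumed comfort level
threshold = heap_gb * 1000000 * comfort
print("block threshold: %.0f" % threshold)   # ~7,925,000 blocks

# Actual heap estimate: (blocks + inodes) x object size (150-300 bytes)
blocks = 6827152     # from the UI example
inodes = 9847555
object_bytes = 300
heap_needed_gib = (blocks + inodes) * object_bytes / 1024.0 ** 3
print("heap needed: %.2f GiB" % heap_needed_gib)   # ~4.66 GiB, the ~4.65 GB figure above
```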
01-25-2017 07:50 AM
1. A threshold of 500,000 or 1M seems like a "magic" number. Shouldn't it be a function of the memory of the node (Java Heap Size of DataNode in Bytes)?

Other interesting related questions:

2. What does a high block count indicate?
   a. too many small files?
   b. running out of capacity?
   Is it (a) or (b)? How do we differentiate between the two?
3. What is a small file? A file whose size is smaller than the block size (dfs.blocksize)?
4. Does each file take a new data block on disk, or is it the metadata associated with a new file that is the problem?
5. The effects are more GC, declining execution speeds, etc. How do we "quantify" the effects of a high block count?
Labels:
- Cloudera Manager
- HDFS
01-24-2017 09:28 AM
Users are getting *non-deterministic* "Memory limit exceeded" errors for Impala queries. The Impala Daemon Memory Limit is 100 GB and spill to disk is enabled. However, a query failed with the above error even though its Aggregate Peak Memory Usage was only 125 MiB.

I explored the query profile via CM -> Impala -> Queries -> {failed query "oom=true AND stats_missing=false"}.

I want help narrowing down the cause of the failure: inaccurate stats, congestion, HDFS disk rebalancing, or something else? Where can I find the details of the failure? /var/log/impalad and catalogd state the "Query ID" but not the failure details. For example, the Impala logs stated the Query ID only: a24fb2eae077b513:45c8d35936a35e9e

```
impalad.w0575oslphcda11.bell.corp.bce.ca.impala.log.INFO.20170124-031735.13521:I0124 03:57:06.889070 44232 plan-fragment-executor.cc:92] Prepare(): query_id=a24fb2eae077b513:45c8d35936a35e93 instance_id=a24fb2eae077b513:45c8d35936a35e9e
```
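Since the logs only state the Query ID, a small sketch of how one might scan the impalad logs for every line mentioning the failing query (the log directory and glob pattern are assumptions; the query_id is the one from the log line above):

```python
import glob

query_id = "a24fb2eae077b513:45c8d35936a35e93"   # query_id from the log line above
for path in glob.glob("/var/log/impalad/*.INFO.*"):   # assumed log location/pattern
    with open(path) as f:
        for line in f:
            if query_id in line:
                print("%s: %s" % (path, line.rstrip()))
```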