Member since: 10-01-2018
Posts: 802
Kudos Received: 143
Solutions: 130

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3072 | 04-15-2022 09:39 AM |
|  | 2475 | 03-16-2022 06:22 AM |
|  | 6554 | 03-02-2022 09:44 PM |
|  | 2907 | 03-02-2022 08:40 PM |
|  | 1914 | 01-05-2022 07:01 AM |
10-30-2020
02:39 AM
@K_K I don't have a Windows machine to test this behaviour, but I tried Googling it and found some links that might help you:
https://community.cloudera.com/t5/Support-Questions/Is-there-a-working-Python-Hive-library-that-connects-to-a/td-p/167575
https://github.com/dropbox/PyHive/issues/161#issuecomment-626506079
https://stackoverflow.com/questions/44522797/pyhive-sasl-and-python-3-5
https://stackoverflow.com/questions/29814207/python-connect-to-hive-use-pyhs2-and-kerberos-authentication
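Since most of those threads end up using PyHive, here is a minimal connection sketch for reference; the hostname, port, username, and authentication mode are placeholders I have assumed, not values from the original question.

```python
# Minimal sketch: connect to HiveServer2 with PyHive.
# Assumes `pip install pyhive[hive]` (which pulls in sasl/thrift_sasl).
# Host, port, user, and auth mode below are placeholders -- adjust for your cluster.
from pyhive import hive

conn = hive.Connection(
    host="hive-server.example.com",  # placeholder HiveServer2 host
    port=10000,                      # default HiveServer2 port
    username="hive_user",            # placeholder user
    auth="KERBEROS",                 # or "NONE"/"LDAP" depending on the cluster
    kerberos_service_name="hive",    # required when auth="KERBEROS"
)

cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
print(cursor.fetchall())
```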
10-27-2020
02:02 AM
@irfangk1 This might help you, since this is an AWS console-specific task: see "Add instance store volumes to your EC2 instance" in the AWS documentation.
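For completeness, the same thing can be done programmatically rather than through the console; a minimal boto3 sketch is below. Note that instance store volumes can only be mapped at launch time, and the region, AMI ID, instance type, and device names here are placeholders, not values from the original question.

```python
# Hedged sketch: request an instance store (ephemeral) volume at launch with boto3.
# Instance store volumes are mapped at launch time; the AMI ID, instance type, and
# device names below are placeholders, not values from the original question.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5d.large",         # must be a type that offers instance store volumes
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
    ],
)
print(response["Instances"][0]["InstanceId"])
```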
10-27-2020
02:00 AM
@md186036 Yes, there is no issue with this. The Cloudera Manager version should always be equal to or higher than the CDH version. You can refer to this page: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cm_cdh_compatibility.html#cm_cdh_compatibility
10-26-2020
11:46 PM
@Amn_468 This is due to the Java heap size. Say the default setting for namenode_java_heapsize is 1 GB. Cloudera recommends having 1 GB of heap space for every 1 million blocks in the cluster. If the data in your cluster is growing rapidly, factor in the potential future number of blocks when determining the heap size, so you can avoid having to restart the NameNode again later; the setting can only be changed with a NameNode restart.

Calculating the required heap size:

1. Determine the number of blocks in the cluster. This information is available on the NameNode web UI under the Summary section, for example: "117,387 files and directories, 56,875 blocks = 174,262 total filesystem object(s)." Alternatively, it is available from the output of the fsck command:

   Total size: 9958827546 B (Total open files size: 93 B)
   Total dirs: 20397
   Total files: 57993
   Total symlinks: 0 (Files currently being written: 1)
   Total blocks (validated): 56874 (avg. block size 175103 B) (Total open file blocks (not validated): 1)

2. Given the number of blocks, allocate 1 GB of heap space for each 1 million blocks, plus some additional memory for growth. For example, if there are 6,543,567 blocks, you need about 6.5 GB of heap to cover the current cluster size, but 8 GB would be a sensible setting to allow for growth of the cluster.

After that, adjust the Java Heap Size for the NameNode accordingly. Hope this helps.
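To make the arithmetic explicit, here is a tiny sketch of that rule of thumb; the 20% growth factor is an illustrative assumption, not an official Cloudera figure.

```python
# Sketch of the "1 GB of heap per 1 million blocks" rule of thumb described above.
# The growth factor is an illustrative assumption, not an official recommendation.
import math

def recommended_namenode_heap_gb(block_count: int, growth_factor: float = 1.2) -> int:
    """Suggest a namenode_java_heapsize in whole GB, with headroom for growth."""
    base_gb = block_count / 1_000_000   # 1 GB per 1M blocks
    return max(1, math.ceil(base_gb * growth_factor))

# Example from the post: ~6.5M blocks -> about 8 GB with room to grow.
print(recommended_namenode_heap_gb(6_543_567))  # 8
```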
10-26-2020
11:25 PM
1 Kudo
Looks okay to me. However, you can always increase these values based on your cluster load.
10-26-2020
01:49 AM
1 Kudo
@sagarspathak It is strange that certification@cloudera.com is bouncing back. However, I have sent a mail from my side to test it and inform the concerned team. Hope you will be contacted soon.
10-26-2020
01:35 AM
@Pengad1973 Looking at the guide https://www.cloudera.com/tutorials/learning-the-ropes-of-the-hdp-sandbox.html, the default is maria_dev; if that does not work, try "hadoop".
10-24-2020
08:06 AM
@K_K Check out the thread below; it might help you. https://community.cloudera.com/t5/Support-Questions/Is-there-a-working-Python-Hive-library-that-connects-to-a/td-p/167575
10-24-2020
08:03 AM
@Waloz The docs below will help you:
- CDP Private Cloud Base Requirements and Supported Versions
- Database Requirements
- Cloudera Manager Server Hardware Requirements (hardware requirements for Cloudera Manager Server and related components, covering RAM, CPU, disk, etc.)
10-22-2020
07:08 AM
@hitachi_ben Go to CM > Hive > Configuration and set MapReduce Service to YARN. Let me know if this helps.
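If you would rather script that change than use the UI, the Cloudera Manager REST API can update service configuration. The sketch below is an assumption-heavy illustration: the CM host, credentials, API version, cluster and service names, and the config key mapreduce_yarn_service are all placeholders to verify against your own deployment (you can check a property's API name in CM > Hive > Configuration).

```python
# Hedged sketch: set a Hive service configuration item via the Cloudera Manager REST API.
# The host, credentials, API version, cluster/service names, and the config key
# "mapreduce_yarn_service" are assumptions -- confirm them against your CM instance.
import requests

CM_API = "http://cm-host.example.com:7180/api/v41"   # placeholder CM host / API version
AUTH = ("admin", "admin")                            # placeholder credentials

payload = {"items": [{"name": "mapreduce_yarn_service", "value": "YARN"}]}

resp = requests.put(
    f"{CM_API}/clusters/Cluster1/services/hive/config",  # placeholder cluster/service names
    json=payload,
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())
```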