Member since: 10-03-2020
Posts: 235
Kudos Received: 15
Solutions: 17
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1295 | 08-28-2023 02:13 AM
 | 1836 | 12-15-2021 05:26 PM
 | 1694 | 10-22-2021 10:09 AM
 | 4788 | 10-20-2021 08:44 AM
 | 4803 | 10-20-2021 01:01 AM
09-10-2021
10:48 PM
Hi @Ben621,

Please check this community post; it should answer your question:
https://community.cloudera.com/t5/Support-Questions/How-are-the-primary-keys-in-Phoenix-are-converted-as-row/td-p/147232

Regards,
Will
If the answer helps, please accept it as the solution and click thumbs up.
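As a rough illustration of the idea in the linked post: Phoenix forms the HBase row key by concatenating the primary-key column values in declaration order, separating variable-length values (such as VARCHAR) with a zero byte. The sketch below is a simplified mock-up of that behavior for VARCHAR-only keys, not Phoenix's actual serializer:

```python
# Simplified illustration of how Phoenix maps a composite primary key to an
# HBase row key. NOT Phoenix's real serialization code -- it only mimics the
# documented behavior for VARCHAR columns: values concatenated in declaration
# order, separated by a zero byte.
def rowkey_for(pk_values):
    """Concatenate VARCHAR primary-key values into one row-key byte string."""
    return b"\x00".join(v.encode("utf-8") for v in pk_values)

# e.g. CREATE TABLE t (tenant VARCHAR, id VARCHAR, ...
#                      CONSTRAINT pk PRIMARY KEY (tenant, id))
key = rowkey_for(["acme", "user42"])
print(key)  # b'acme\x00user42'
```

Fixed-width types (INTEGER, DATE, etc.) use their own binary encodings in real Phoenix, so treat this only as a mental model for the VARCHAR case.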
09-10-2021
10:30 PM
Hi @clouderaskme,

Creating two folders with the same name in the same directory is not allowed. Test:

```
# sudo -u hdfs hdfs dfs -mkdir /folder1
# sudo -u hdfs hdfs dfs -mkdir /folder1/subfolder1
# sudo -u hdfs hdfs dfs -mkdir /folder1/subfolder1
mkdir: `/folder1/subfolder1': File exists
```

So if you see two subfolders under /folder1 with the same name, one of the names may contain special characters. Can you log into the terminal, run the following command, and show us the output?

```
hdfs dfs -ls /folder1 | cat -A
```

Regards,
Will
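If you already have the directory listing in hand, you can also make invisible characters visible from Python, much like `cat -A` does. A minimal sketch (the sample names below are hypothetical):

```python
# Reveal hidden/special characters in file names that look identical when
# printed normally. The example names are hypothetical.
def reveal(name):
    """Return a representation that makes non-ASCII and control chars visible."""
    return ascii(name)

plain = "subfolder1"
tricky = "subfolder1\u00a0"   # trailing non-breaking space, invisible in ls output

print(reveal(plain))   # 'subfolder1'
print(reveal(tricky))  # 'subfolder1\xa0'
```

Two names that print identically but reveal() differently are exactly the case described above: duplicates in appearance only.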
09-10-2021
10:04 PM
1 Kudo
Hi @Sudheend,

Pseudo-distributed mode means each of the separate processes runs on the same server, rather than on multiple servers in a cluster.

In CDH 6/7, "start-hbase.sh" no longer exists, and "service hbase-master start/stop" no longer works either. Instead, Cloudera Manager uses several scripts to do this. You can see how an HBase role is started in CM by expanding the steps of the running commands and checking their stderr.log.

So the best approach is to use Cloudera Manager to install the HBase service; you can choose the same host for the Master role and the RegionServer role. Then stop/start the HBase roles via CM > HBase > Instances > Select role > Actions > Stop/Start, and use jps or ps -ef to check the running processes. For example:

```
# jps
7251 DataNode
8019 NodeManager
7253 NameNode
15238 HMaster
11384 HRegionServer
16105 Jps
7085 QuorumPeerMain
```

Regards,
Will
If the answer helps, please accept it as the solution and click thumbs up.
09-10-2021
07:07 AM
Hi @ighack,

If you mean the current RegionServer heap is 50 or 80 megabytes, that is usually not enough; 16 GB to 31 GB is a good range for most cases. If you really don't have enough resources on the RegionServer nodes, at least keep the 4 GB default heap, and increase it if you still see many long GC pauses.

Refer to the link below to install Phoenix and validate the installation:
https://docs.cloudera.com/documentation/enterprise/latest/topics/phoenix_installation.html#concept_ofv_k4n_c3b

If you installed following those steps, then on any of the CDH nodes find the JDBC jar:

```
find / -name "phoenix-*client.jar"
```

and follow this guide:
https://docs.cloudera.com/runtime/7.2.10/phoenix-access-data/topics/phoenix-orchestrating-sql.html

Check that your JDBC URL syntax looks like:

```
jdbc:phoenix:zookeeper_quorum:zookeeper_port:zookeeper_hbase_path
```

Regards,
Will
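A small helper for assembling that URL from its pieces can avoid syntax slips. The host names and the /hbase znode below are placeholders; substitute your own ZooKeeper quorum:

```python
# Build a Phoenix JDBC URL of the form
#   jdbc:phoenix:<zookeeper_quorum>:<zookeeper_port>:<zookeeper_hbase_path>
# The hosts and znode used in the example are placeholders.
def phoenix_jdbc_url(quorum, port=2181, hbase_path="/hbase"):
    """Assemble a Phoenix JDBC URL from a list of ZooKeeper hosts."""
    return "jdbc:phoenix:{}:{}:{}".format(",".join(quorum), port, hbase_path)

url = phoenix_jdbc_url(["zk1.example.com", "zk2.example.com", "zk3.example.com"])
print(url)
# jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase
```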
09-09-2021
06:58 AM
Hi @ighack,

Please search for the keyword "JvmPauseMonitor" in that RegionServer log to see whether there are pauses, and determine whether they are GC or non-GC pauses.

- For GC pauses, the general step is to increase the heap size. Go through the "Java Heap Size of HBase RegionServer in Bytes" setting in CM > HBase > Configuration; if the current value is small, increase it. Please check this KB for the heap-size tuning concepts:
https://community.cloudera.com/t5/Community-Articles/Tuning-Hbase-for-optimized-performance-Part-1/ta-p/248137
- For non-GC pauses, check whether the kernel is blocking the process due to hardware issues blocking I/O, page allocation failures under heavy memory utilization, etc. Look in the kernel messages for clues:

```
dmesg
/var/log/messages
```

Regards,
Will
If the answer helps, please accept it as the solution and click thumbs up.
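To get a quick feel for how frequent and how long the pauses are, you can pull the durations out of the log with a small script. The log lines below are made up for illustration; the message wording follows the typical Hadoop JvmPauseMonitor output ("Detected pause in JVM or host machine (eg GC): pause of approximately Nms"), so adjust the pattern if your version logs differently:

```python
import re

# Extract pause durations (ms) from RegionServer log lines emitted by
# JvmPauseMonitor. The sample lines are fabricated for illustration.
PAUSE_RE = re.compile(r"Detected pause in JVM or host machine.*?approximately (\d+)ms")

def pause_durations(lines):
    """Return the pause durations in milliseconds found in the given log lines."""
    return [int(m.group(1)) for line in lines if (m := PAUSE_RE.search(line))]

sample = [
    "2021-09-09 06:00:01 WARN util.JvmPauseMonitor: "
    "Detected pause in JVM or host machine (eg GC): pause of approximately 4123ms",
    "2021-09-09 06:00:02 INFO regionserver.HRegionServer: some other line",
]
print(pause_durations(sample))  # [4123]
```

Many long durations clustered together usually point at GC pressure; isolated huge pauses with quiet GC logs point at the non-GC causes above.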
09-05-2021
09:17 PM
Hi @shean,

It looks like the HDFS path is wrong; you should use a double "/" after "hdfs:":

```
hdfs://localnode2:8020/user/hue/oozie/workspaces/hue-oozie-1630559728.5
```

Please check your command or configuration for this wrong path setting and correct it.

Regards,
Will
If the answer helps, please accept it as the solution and click thumbs up.
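One way to sanity-check an HDFS URI is simply to parse it: with the double slash, the host:port lands in the authority part; with a single slash, it gets swallowed into the path. A quick sketch using the path from above:

```python
from urllib.parse import urlparse

# A well-formed HDFS URI has the authority (host:port) right after "hdfs://".
good = urlparse("hdfs://localnode2:8020/user/hue/oozie/workspaces/hue-oozie-1630559728.5")
bad = urlparse("hdfs:/localnode2:8020/user/hue/oozie/workspaces/hue-oozie-1630559728.5")

print(good.scheme, good.netloc, good.path)
# hdfs localnode2:8020 /user/hue/oozie/workspaces/hue-oozie-1630559728.5
print(repr(bad.netloc))  # '' -- the host ended up inside the path instead
```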
09-04-2021
04:49 AM
Hi @Sainath90,

Do you mean installing the HDP sandbox (Hortonworks Data Platform for Hadoop) on your Mac? Please refer to this tutorial:
https://www.cloudera.com/tutorials/getting-started-with-hdp-sandbox.html

It offers VirtualBox, VMware, and Docker versions. I would recommend the Docker version, as it is easy to deploy, remove, stop, and start. I installed the Docker version successfully on my Mac with 16 GB RAM; mine is an Intel core, not M1, but I believe Docker will work on M1 as well. Please note the prerequisite of a minimum of 10 GB RAM dedicated to the virtual machine. Here is the Docker version tutorial:
https://www.cloudera.com/tutorials/sandbox-deployment-and-install-guide/3.html

Thanks,
Will
If the answer helps, please accept it as the solution and click thumbs up.
09-03-2021
12:36 AM
3 Kudos
Hello @AnuradhaV,

Thanks for raising your question in the community. We usually don't suggest putting files with special characters in their names into HDFS. If you have to, you should replace the special characters with URL encoding. For example, these characters should be encoded as:

```
#  encoded as  %23
?  encoded as  %3F
=  encoded as  %3D
;  encoded as  %3B
```

A full list of URL-encoding characters: https://www.degraeve.com/reference/urlencoding.php

To help you test this out, I created these files locally and then put them into HDFS:

```
# ls
900-0314-Slide#2.vsi  abc.html?C=S;O=A
# hdfs dfs -put 900-0314-Slide%232.vsi /tmp/
# hdfs dfs -put abc.html%3FC%3DS%3BO%3DA /tmp/
# hdfs dfs -ls /tmp
Found 2 items
-rw-r--r--   3 hdfs supergroup   0 2021-09-03 07:07 /tmp/900-0314-Slide#2.vsi
-rw-r--r--   3 hdfs supergroup   0 2021-09-03 07:15 /tmp/abc.html?C=S;O=A
```

Thanks,
Will
If the answer helps, please accept it as the solution and click thumbs up.
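If you have many such files, you don't need to encode the names by hand; Python's standard library produces the same percent-encoding. A quick sketch using the same example file names:

```python
from urllib.parse import quote

# Percent-encode the special characters in a file name before putting it
# into HDFS. quote() leaves unreserved characters (letters, digits, ".",
# "-", "_", "~") alone and escapes everything else; safe="" ensures "/" is
# escaped too, since here we encode a bare file name, not a path.
for name in ["900-0314-Slide#2.vsi", "abc.html?C=S;O=A"]:
    print(name, "->", quote(name, safe=""))
# 900-0314-Slide#2.vsi -> 900-0314-Slide%232.vsi
# abc.html?C=S;O=A -> abc.html%3FC%3DS%3BO%3DA
```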
08-31-2021
10:45 PM
Hello @npr20202,

Could you specify which log you see this error in, and which jobs are failing because of it? What are the CM and CDH versions?

Please ensure that KMS, Key Trustee Server, and Key HSM are all in good health. Please also check the KTS log; if the error appears there, share the full error stack.

Thanks,
Will
08-18-2021
07:53 PM
Hi @rizkymalm,

The answer is yes, but you need to follow the correct steps. Please refer to the document below for detailed instructions on backing up HDFS metadata:
https://docs.cloudera.com/runtime/7.2.10/data-protection/topics/hdfs-back-up-hdfs-metadata.html

If the answer helps, please accept it as the solution and click the thumbs up button.

Regards,
Will