- Member since: 10-03-2020
- Posts: 236
- Kudos Received: 15
- Solutions: 18
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2806 | 11-11-2024 09:31 AM |
| | 2605 | 08-28-2023 02:13 AM |
| | 3015 | 12-15-2021 05:26 PM |
| | 2716 | 10-22-2021 10:09 AM |
| | 7179 | 10-20-2021 08:44 AM |
09-10-2021
10:04 PM
1 Kudo
Hi @Sudheend,

Pseudo-distributed mode means each of the separate processes runs on the same server, rather than on multiple servers in a cluster. In CDH 6/7, "start-hbase.sh" no longer exists, and "service hbase-master start/stop" no longer works either. Instead, Cloudera Manager uses multiple scripts to do this; you can see how an HBase role is started in CM by expanding the steps of the running commands and checking their stderr.log.

So the best way is to use Cloudera Manager to install the HBase service; you can choose the same host for both the Master role and the RegionServer role. You can then stop/start the HBase roles via CM > HBase > Instances > Select role > Actions > Stop/Start, and use jps or ps -ef to check the running processes. For example:

```
# jps
7251 DataNode
8019 NodeManager
7253 NameNode
15238 HMaster
11384 HRegionServer
16105 Jps
7085 QuorumPeerMain
```

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
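If you check this often, verifying the expected daemons can be scripted; a minimal Python sketch over a pasted `jps` sample (the PIDs and process list below are illustrative, not live output):

```python
# Check that the expected HBase daemons appear in sample `jps` output.
# The output is a pasted example, not captured from a live cluster.
jps_output = """\
7251 DataNode
8019 NodeManager
7253 NameNode
15238 HMaster
11384 HRegionServer
7085 QuorumPeerMain"""

# Each jps line is "<pid> <class name>"; keep only the class names
running = {line.split(maxsplit=1)[1] for line in jps_output.splitlines()}

missing = {"HMaster", "HRegionServer"} - running
print(sorted(missing))  # prints [] when both HBase roles are running
```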
09-10-2021
07:07 AM
Hi @ighack,

If you mean the current RS heap is 50 or 80 megabytes, that is usually not enough; 16 GB ~ 31 GB is a good range for most cases. If you really don't have enough resources on the RS nodes, at least keep the RS heap at the 4 GB default, and if you still see many long GC pauses, increase it.

Refer to the link below to install Phoenix and validate the installation:
https://docs.cloudera.com/documentation/enterprise/latest/topics/phoenix_installation.html#concept_ofv_k4n_c3b

If you installed following those steps, then on any CDH node find the JDBC jar:

```
find / -name "phoenix-*client.jar"
```

and follow this guide:
https://docs.cloudera.com/runtime/7.2.10/phoenix-access-data/topics/phoenix-orchestrating-sql.html

Check that your JDBC URL syntax looks like:

```
jdbc:phoenix:zookeeper_quorum:zookeeper_port:zookeeper_hbase_path
```

Regards,
Will
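As a sanity check on that URL syntax, the three parts can be assembled programmatically; a minimal Python sketch with placeholder cluster values (the hostnames, port, and znode path below are assumptions for illustration):

```python
# Placeholder values -- substitute your real ZooKeeper quorum,
# client port, and HBase znode path.
zk_quorum = "zk1.example.com,zk2.example.com,zk3.example.com"
zk_port = 2181
zk_hbase_path = "/hbase"

# Phoenix JDBC URL: jdbc:phoenix:<quorum>:<port>:<hbase znode path>
jdbc_url = f"jdbc:phoenix:{zk_quorum}:{zk_port}:{zk_hbase_path}"
print(jdbc_url)
```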
09-09-2021
06:58 AM
Hi @ighack,

Please search for the keyword "JvmPauseMonitor" in that RegionServer log to see if there are GC pauses and to determine whether each pause is GC or non-GC.

- For GC pauses, the general step is to increase the heap size. Please go through the setting "Java Heap Size of HBase RegionServer in Bytes" in CM > HBase > Configuration; if the current setting is small, please increase it. See this KB for the heap size tuning concepts: https://community.cloudera.com/t5/Community-Articles/Tuning-Hbase-for-optimized-performance-Part-1/ta-p/248137
- For non-GC pauses, check whether the kernel is blocking the process due to hardware issues, blocking I/O, page allocation failures under heavy memory utilization, etc. Look into the kernel messages for clues:

```
dmesg
/var/log/messages
```

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
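The log search can also be scripted to pull out the pause durations; a minimal Python sketch over hypothetical log lines modeled on Hadoop's JvmPauseMonitor messages (the exact line format is an assumption, so adjust the pattern to your real logs):

```python
import re

# Hypothetical RegionServer log lines modeled on JvmPauseMonitor output;
# real logs will carry timestamps and may differ in wording.
sample_log = [
    "INFO util.JvmPauseMonitor: Detected pause in JVM or host machine "
    "(eg GC): pause of approximately 12345ms",
    "INFO util.JvmPauseMonitor: GC pool 'ConcurrentMarkSweep' had "
    "collection(s): count=1 time=12400ms",
    "INFO regionserver.HRegionServer: some unrelated message",
    "INFO util.JvmPauseMonitor: Detected pause in JVM or host machine "
    "(eg GC): pause of approximately 8000ms",
]

# Collect the duration (in ms) of every detected pause
pauses = [
    int(m.group(1))
    for line in sample_log
    if (m := re.search(r"pause of approximately (\d+)ms", line))
]
print(pauses)       # prints [12345, 8000]
print(max(pauses))  # prints 12345 -- the worst pause seen
```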
09-05-2021
09:17 PM
Hi @shean,

It looks like the HDFS pathname is wrong; there should be a double "/" after "hdfs:":

```
hdfs://localnode2:8020/user/hue/oozie/workspaces/hue-oozie-1630559728.5
```

Please check your command or configuration for this wrong path setting and correct it.

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
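To see why the double slash matters: in a URI, "//" introduces the authority (host:port) component, and with only a single slash the host gets swallowed into the path. A quick illustration with Python's urlparse:

```python
from urllib.parse import urlparse

# Correct form: "//" after the scheme introduces the authority component
good = urlparse("hdfs://localnode2:8020/user/hue/oozie/workspaces/hue-oozie-1630559728.5")
print(good.scheme)  # hdfs
print(good.netloc)  # localnode2:8020
print(good.path)    # /user/hue/oozie/workspaces/hue-oozie-1630559728.5

# Broken form: with a single slash there is no authority component,
# so the "host:port" ends up inside the path instead
bad = urlparse("hdfs:/localnode2:8020/user/hue")
print(bad.netloc)   # '' (empty -- no host was parsed)
```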
09-03-2021
12:36 AM
3 Kudos
Hello @AnuradhaV,

Thanks for raising your question in the community. We usually don't suggest putting files with special characters in their names into HDFS. If you have to do this, you should replace the special characters with their URL encoding. For example, the characters below should be encoded as:

- `#` encoded as %23
- `?` encoded as %3F
- `=` encoded as %3D
- `;` encoded as %3B

A full list of URL-encoded characters: https://www.degraeve.com/reference/urlencoding.php

To help you test it out, I created these files locally and then put them into HDFS:

```
# ls
900-0314-Slide#2.vsi  abc.html?C=S;O=A
# hdfs dfs -put 900-0314-Slide%232.vsi /tmp/
# hdfs dfs -put abc.html%3FC%3DS%3BO%3DA /tmp/
# hdfs dfs -ls /tmp
Found 2 items
-rw-r--r--   3 hdfs supergroup          0 2021-09-03 07:07 /tmp/900-0314-Slide#2.vsi
-rw-r--r--   3 hdfs supergroup          0 2021-09-03 07:15 /tmp/abc.html?C=S;O=A
```

Thanks,
Will

If the answer helps, please accept as solution and click thumbs up.
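The same encoding can be produced programmatically instead of by hand; a minimal Python sketch using `urllib.parse.quote` on the two example names:

```python
from urllib.parse import quote

names = ["900-0314-Slide#2.vsi", "abc.html?C=S;O=A"]

# safe="" makes quote() percent-encode "/" as well, so the entire
# file name is escaped; unreserved chars (letters, digits, . - _ ~)
# pass through untouched.
encoded = [quote(n, safe="") for n in names]
print(encoded)
```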
08-31-2021
10:45 PM
Hello @npr20202,

Could you specify which log you see this error in, and which jobs are failing because of it? What are the CM and CDH versions?

Please ensure that KMS, Key Trustee Server, and KeyHSM are all in good health. Please also check the KTS log; if the error appears there, please share the full error stack.

Thanks,
Will
08-14-2021
02:43 AM
1 Kudo
Hi @vciampa,

You can refer to this doc for migrating data from a secured HDP cluster to a secured CDP cluster:
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/data-migration/topics/rm-migrate-securehdp-securecdp-distcp.html

For question 2), I think it is about setting the Hadoop configuration file location to the destination cluster if you use HFTP (refer to the HFTP docs for an explanation); if you are not using HFTP, you should set it to the source cluster. But I find that link somewhat confusing; it reads like HDP documentation rather than CDP documentation. The HADOOP_CONF locations differ between HDP and CDP:
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/concepts/topics/cm-server-client-configuration.html

For example, in HDP the HADOOP_CONF location is /etc/hadoop/conf, which contains both the service-side and client-side configuration files. In CDP, Cloudera Manager separates the client and server configurations:

- Client configurations are located at /etc/hadoop/conf/hdfs-site.xml
- Server configurations are located in the latest process directory of each role, e.g. /var/run/cloudera-scm-agent/process/<latest_pid>-hdfs-NAMENODE/hdfs-site.xml

As for question 1), please follow the answer from @vidanimegh in the previous post. As for question 2), please just ignore that step and follow the first link I provided in this post.

Thanks,
Will

If the answer helps, please accept as solution and click thumbs up.
08-14-2021
01:46 AM
Hi Roshan,

Thanks for raising your question in the Cloudera community! First, you need to check the current Kudu version.

Note: use the kudu-spark_2.10 artifact if using Spark with Scala 2.10. Spark 1 is no longer supported in Kudu starting from version 1.6.0, so in order to use Spark 1 integrated with Kudu, version 1.5.0 is the latest you can go to.

Please refer to the articles below for some code examples:
[1] CDP 7.2.10 documents: https://docs.cloudera.com/runtime/7.2.10/kudu-development/topics/kudu-integration-with-spark.html
[2] Kudu official site: https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark

If it helps, please accept the solution and click thumbs up.
07-31-2021
08:53 PM
Checking the CM agent log is a good start for this issue: the Cloudera Manager Agent is not able to communicate with this role's web server. You should first check whether there are network issues between the CM and HBase hosts, e.g. whether krb5.conf and /etc/hosts are consistent.

You didn't mention where you saw the exception below:

```
Authentication exception: GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)
```

This KB (https://my.cloudera.com/knowledge/Troubleshooting-Kerberos-Related-Issues-Common-Errors-and?id=76192) is a good starting point for troubleshooting Kerberos-related issues; see symptoms 15 and 18.
07-30-2021
07:33 PM
If you want to list all files owned by a specific user in a specific directory, you can use "hdfs dfs -ls" with grep.

Syntax:

```
hdfs dfs -ls /path | grep "\- username"
```

Example:

```
# hdfs dfs -ls / | grep "\- hdfs"
drwxrwxrwt   - hdfs supergroup          0 2021-07-29 16:02 /tmp
drwxr-xr-x   - hdfs supergroup          0 2021-07-31 02:26 /user
```

If you want to list files recursively, add -R after -ls.

I hope this answered your question.
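If you prefer to post-process the listing in a script rather than grep, a minimal Python sketch that filters sample `hdfs dfs -ls` output on the owner column (the third field); note this simple whitespace split would break on paths containing spaces:

```python
# Sample `hdfs dfs -ls` output -- the "alice" entry is a made-up
# line added so the owner filter has something to exclude.
listing = """\
drwxrwxrwt   - hdfs   supergroup          0 2021-07-29 16:02 /tmp
drwxr-xr-x   - alice  supergroup          0 2021-07-31 02:26 /user/alice
drwxr-xr-x   - hdfs   supergroup          0 2021-07-31 02:26 /user"""

# Fields: perms, replication, owner, group, size, date, time, path
owned_by_hdfs = [
    fields[-1]
    for line in listing.splitlines()
    if (fields := line.split()) and fields[2] == "hdfs"
]
print(owned_by_hdfs)  # prints ['/tmp', '/user']
```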