Member since
01-15-2019
274
Posts
23
Kudos Received
29
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 757 | 01-29-2024 03:30 AM
 | 1124 | 02-21-2023 05:50 AM
 | 893 | 01-17-2023 05:53 AM
 | 828 | 12-29-2022 03:07 AM
 | 3350 | 06-28-2022 08:16 AM
07-24-2020
06:20 AM
@rmr1989 Ideally, for a missing mount in the cluster you would automatically get alerts for any services whose Hadoop service directories are mapped to that mountpoint and cannot be accessed. Those alerts can be generic, though, such as "no such file or directory" or "file not found" errors. If you want to monitor the availability of specific mountpoints, consider using a script that scans the cluster hosts for mountpoints and sends email alerts via SMTP from the host, rather than relying on Cloudera Manager. Hope this helps, Paras. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
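A minimal sketch of such a monitoring script (the mountpoint list, SMTP host, and email addresses are hypothetical placeholders, not values from this thread):

```python
import os
import smtplib
from email.message import EmailMessage

# Hypothetical mountpoints backing Hadoop service directories.
EXPECTED_MOUNTS = ["/data1", "/data2", "/data3"]

def missing_mounts(paths):
    """Return the paths that are not currently mounted."""
    return [p for p in paths if not os.path.ismount(p)]

def send_alert(missing, smtp_host="smtp.example.com", to="ops@example.com"):
    """Email the list of missing mountpoints via a local SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {len(missing)} mountpoint(s) missing"
    msg["From"] = "hadoop-monitor@example.com"
    msg["To"] = to
    msg.set_content("Missing mountpoints:\n" + "\n".join(missing))
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(msg)

if __name__ == "__main__":
    missing = missing_mounts(EXPECTED_MOUNTS)
    if missing:
        send_alert(missing)
```

Run from cron on each host; it only sends mail when at least one expected mountpoint is absent.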
07-23-2020
08:25 AM
1 Kudo
The DataNode question has been answered, but one tangential comment: you say you are using the Secondary NameNode service. You almost certainly do not want to be using that, because the Secondary NameNode gives you no HA at all. What you probably want is a Standby NameNode. In Cloudera Manager you can enable HA from the HDFS service Actions menu, and that will replace your Secondary NameNode with a Standby NameNode.
07-22-2020
07:21 AM
Thanks a lot, Paras. pdev
07-22-2020
04:31 AM
@Prav You can leverage the CM API to track parcel distribution status:

/api/v19/clusters/{clusterName}/parcels - lists the parcel names and versions the cluster has access to
/api/v19/clusters/{clusterName}/parcels/products/{product}/versions/{version} - reports the distribution status of a specific parcel

Refer to the link below for more details: http://cloudera.github.io/cm_api/apidocs/v19/path__clusters_-clusterName-_parcels_products_-product-_versions_-version-.html Hope this helps, Paras. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
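A short sketch of how the second endpoint might be polled (the CM host, credentials, and cluster/product names are hypothetical, not taken from the thread):

```python
import base64
import json
import urllib.parse
import urllib.request

# Hypothetical CM host and credentials; replace with your own.
CM_URL = "http://cm-host.example.com:7180"
AUTH = "Basic " + base64.b64encode(b"admin:admin").decode()

def parcel_path(cluster, product, version):
    """Build the API path for one parcel's status."""
    return (f"/api/v19/clusters/{urllib.parse.quote(cluster)}/parcels"
            f"/products/{product}/versions/{version}")

def cm_get(path):
    """GET a CM API path and return the parsed JSON body."""
    req = urllib.request.Request(CM_URL + path, headers={"Authorization": AUTH})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def parcel_stage(cluster, product, version):
    """Return the parcel's stage, e.g. DOWNLOADED, DISTRIBUTED, ACTIVATED."""
    return cm_get(parcel_path(cluster, product, version)).get("stage")
```

Polling `parcel_stage` in a loop until it reports DISTRIBUTED (or ACTIVATED) is one way to script the tracking described above.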
07-22-2020
04:08 AM
1 Kudo
@rok You can use the Cloudera Manager REST API to export and import all of its configuration data. The API exports a JSON document containing the configuration data for the Cloudera Manager instance, and you can use that JSON document to back up and restore a Cloudera Manager deployment. Refer to the document below for the steps: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_intro_api.html#concept_dnn_cr5_mr Hope this helps, Paras. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
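As a rough sketch, the export side of that backup workflow can be driven from the `/cm/deployment` endpoint (the host and credentials below are hypothetical placeholders):

```python
import base64
import json
import urllib.request

# Hypothetical CM host and admin credentials; substitute your own.
CM_URL = "http://cm-host.example.com:7180"
AUTH = "Basic " + base64.b64encode(b"admin:admin").decode()

def deployment_url(api_version=19):
    """URL of the endpoint that exports the full CM configuration."""
    return f"{CM_URL}/api/v{api_version}/cm/deployment"

def export_deployment(outfile="cm-deployment.json"):
    """Download the deployment JSON and save it as a backup file."""
    req = urllib.request.Request(deployment_url(),
                                 headers={"Authorization": AUTH})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    with open(outfile, "w") as f:
        json.dump(data, f, indent=2)
    return outfile
```

The saved JSON file is what you would later feed back to Cloudera Manager to restore the deployment, following the linked documentation.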
07-09-2020
02:05 AM
@sarm The minimum heap size should be set to 4 GB. Increase the memory for higher replica counts or a higher number of blocks per DataNode. When increasing the memory, Cloudera recommends an additional 1 GB of memory for every 1 million replicas above 4 million on the DataNodes. For example, 5 million replicas require 5 GB of memory. Set this value using the Java Heap Size of DataNode in Bytes HDFS configuration property. Reference: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_hardware_requirements.html#concept_fzz_dq4_gbb Hope this helps, Paras. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
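The sizing rule above reduces to simple arithmetic; a small helper (my own illustration, not a Cloudera tool) makes it concrete:

```python
import math

def datanode_heap_gb(replica_count):
    """Recommended DataNode heap in GB: 4 GB minimum, plus 1 GB for
    every 1 million replicas above the 4 million mark."""
    extra = max(0, math.ceil((replica_count - 4_000_000) / 1_000_000))
    return 4 + extra

# Example from the post: 5 million replicas need 5 GB of heap.
```

Remember the actual CM property takes the value in bytes, so multiply the result by 1024**3 before setting Java Heap Size of DataNode in Bytes.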
07-08-2020
11:59 PM
@SeanU This level of detailed log scanning and alerting functionality is not available. The service role logs, for which alert rules can be set, do not contain every application exception, since that detailed information lives in the application logs. You can check the available Job History Server and ResourceManager logs to see whether the information logged during application runtime serves your purpose.
07-05-2020
02:49 PM
@paras, please find workflow.xml below.

<workflow-app name="Customer_Journey_FirstActions_Events_Set2" xmlns="uri:oozie:workflow:0.5">
  <credentials>
    <credential name="hs2-cred" type="hive2">
      <property>
        <name>hive2.jdbc.url</name>
        <value>jdbc:hive2://adcuxxxx.adcbmis.local:10001/db_appldigi;transportMode=http;httpPath=cliservice</value>
      </property>
      <property>
        <name>hive2.server.principal</name>
        <value>hive/adcuxxxx.adcbmis.local@xxxxxMIS.LOCAL</value>
      </property>
    </credential>
  </credentials>
  <start to="Trash1"/>
  <action name="Trash1" cred="hs2-cred">
    <fs>
      <name-node>${nameNode}</name-node>
      <delete path="/user/anja21614/.Trash"></delete>
    </fs>
    <ok to="FirstActions2"/>
    <error to="kill"/>
  </action>
  <action name="FirstActions2" cred="hs2-cred">
    <hive2 xmlns="uri:oozie:hive2-action:0.2">
      <job-tracker>${resourceManager}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>fs.trash.interval</name>
          <value>0</value>
        </property>
        <property>
          <name>hive.exec.scratchdir</name>
          <value>/appldigi/tmp</value>
        </property>
      </configuration>
      <jdbc-url>jdbc:hive2://adcuxxxx.adcbmis.local:10001/db_appldigi;transportMode=http;httpPath=cliservice</jdbc-url>
      <script>FirstActions2.hql</script>
      <param>-Dyarn.app.mapreduce.am.staging-dir=/appldigi/tmp</param>
      <file>/user/anja21614/CustomerJourney/FirstActions2.hql</file>
    </hive2>
    <ok to="Trash3"/>
    <error to="kill"/>
  </action>
  <action name="Trash3" cred="hs2-cred">
    <fs>
      <name-node>${nameNode}</name-node>
      <delete path="/user/anja21614/.Trash"></delete>
    </fs>
    <ok to="end"/>
    <error to="kill"/>
  </action>
  <kill name="kill">
    <message>${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>
  <end name="end"/>
</workflow-app>

Please note the scratch directory configuration in the workflow, which I just added. Please check and let me know whether the configuration is correctly specified; I had already tried this configuration in Ambari Hive, and it gave me an error.
07-01-2020
08:32 PM
Hi @paras, here is the value for 'yarn.app.mapreduce.am.staging-dir'. Also, I think I no longer need to set hive.exec.scratchdir=/tmp/mydir because I've already set it in the Hive config.
06-29-2020
10:21 PM
@AnjaliRocks , I have sent you a PM for further details.