Member since
01-15-2019
193
Posts
19
Kudos Received
26
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 132 | 12-29-2022 03:07 AM |
 | 466 | 06-28-2022 08:16 AM |
 | 348 | 01-20-2022 03:37 AM |
 | 505 | 09-21-2020 03:44 AM |
 | 1767 | 08-12-2020 08:24 AM |
01-17-2023
05:53 AM
@hanumanth It seems you are using the embedded DB for other services besides Cloudera Manager; I can see the hive user in the output as well. The embedded DB is meant only for trial/PoC use cases and is not designed to scale, so you should consider moving to external databases: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/installation/topics/cdpdc-installing-trial-cluster.html

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
12-30-2022
03:45 AM
@sanjaysubs Glad to know the manual install worked. The fact that the install was successful on the CM node means the problem was most likely a missing allkeys.asc file. This file should be present in the repository you configure in the Add Hosts wizard on the CM UI; the install wizard's nodeconfigurator process requires it to install the CM agent on the other nodes.
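As a quick sanity check before re-running the Add Hosts wizard, a sketch like the one below (the repo URL is a placeholder; it assumes the repo is served over plain HTTP/HTTPS) can verify that allkeys.asc is actually reachable at the repo base:

```python
from urllib.parse import urljoin
from urllib.request import urlopen


def allkeys_url(repo_base: str) -> str:
    """Build the expected location of allkeys.asc under a CM repo base URL."""
    if not repo_base.endswith("/"):
        repo_base += "/"
    return urljoin(repo_base, "allkeys.asc")


def repo_has_allkeys(repo_base: str, timeout: float = 5.0) -> bool:
    """Return True if the repo answers with HTTP 200 for allkeys.asc."""
    try:
        with urlopen(allkeys_url(repo_base), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

Running `repo_has_allkeys("http://repo.example.com/cm7")` from a cluster host checks the same thing the nodeconfigurator process needs at install time.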
12-29-2022
04:27 AM
1 Kudo
This issue will be fixed in our upcoming release; you can also check for a hotfix. In the meantime, you can try the workaround below to see if performance improves:

a. On the target cluster, go to CM -> Hive -> Configuration.
b. Add "HIVE_REPL_STATS_ENGINE=hive" to the "Hive Replication Environment Advanced Configuration Snippet (Safety Valve)" configuration.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
12-29-2022
04:03 AM
You should check the container logs for the containers mentioned in the snapshots above for more details. This is generally caused by a resource crunch. You can also consider increasing the YARN configurations below:

yarn.nodemanager.resource.memory-mb
yarn.nodemanager.resource.cpu-vcores

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
12-29-2022
03:50 AM
@frank_albers @PNW supervisord has a listener process that continuously monitors the CDH processes. If a service configured for auto-restart faces an unexpected exit, supervisord restarts the process. The agent looks for unexpected exits in the notifications it receives via the supervisor listener and forwards the relevant event info to the associated role's monitor, which updates its unexpected-exit state. A normal agent restart does not affect the supervisord process; it keeps running and managing the CDH processes. A hard stop/restart of the agent, however, does affect supervisord and in turn kills all managed CDH processes.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
12-29-2022
03:07 AM
@sathishCloudera This will only restart the stale management services. There is no impact to your CM server or the web UI.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
12-29-2022
03:00 AM
@sanjaysubs You should be able to track the operations and relevant logs on the details page if you retry the operation. Also check the CM server logs before the message "Failed to complete installation"; they should give more details. Make sure your CM repo contains the allkeys.asc file; if not, add it from the Cloudera repository link.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
06-28-2022
08:49 AM
1 Kudo
@snm1523 1. This functionality has to be provided by the OS vendor, since package installations use yum, which is not managed by Cloudera. 2. No, we don't have another way. Ideally, for both of these security-specific use cases, you should create temporary/local repositories where authentication is not required. https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade/topics/cm_ig_create_local_package_repo.html

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
06-28-2022
08:16 AM
2 Kudos
@JinGon The error comes up at parcel unpacking:

[27/Jun/2022 09:45:00 +0900] 22914 WorkerThread parcel_cache INFO Unpacking /data/opt/cloudera/parcels/.flood/CDH-6.3.4-1.cdh6.3.4.p0.6751098-el7.parcel/CDH-6.3.4-1.cdh6.3.4.p0.6751098-el7.parcel into /data/opt/cloudera/parcels

1. Make sure you have enough space left on your hosts even after the parcel is downloaded; more than twice the parcel size is needed for the unpacking to succeed.
2. To verify this, and to check whether the downloaded parcel file is corrupt, try to untar the parcel file to a different location on the host. If either of the above issues is present, this command will fail with appropriate errors:

# tar -xvf CDH-6.3.4-1.cdh6.3.4.p0.6751098-el7.parcel

Clear the untarred contents once you are finished testing to save space. Then clear the parcel directory, restart the agent, and monitor the progress.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
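The free-space pre-check in step 1 can also be scripted; a minimal sketch (paths are illustrative, not from the original post) that applies the "more than twice the parcel size" rule:

```python
import os
import shutil


def enough_space_to_unpack(parcel_path: str, parcels_dir: str) -> bool:
    """Unpacking keeps the downloaded parcel and adds the extracted tree,
    so require free space greater than twice the parcel's size."""
    parcel_size = os.path.getsize(parcel_path)
    free = shutil.disk_usage(parcels_dir).free
    return free > 2 * parcel_size
```

For example, `enough_space_to_unpack("/data/opt/cloudera/parcels/.flood/CDH-....parcel", "/data/opt/cloudera/parcels")` would tell you up front whether the unpack is likely to run out of disk.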
05-19-2022
01:17 AM
@PDDF_VIGNESH I hope you have connected to the port configured for your cluster. Does the URL return a successful response in the browser? The responses need to be checked at the time of the issue. The JMX output is generated by the datanode; not getting a response means there is either an issue with the datanode or a network issue with the communication.
05-12-2022
01:30 AM
@PDDF_VIGNESH See if you are able to get a successful response from the agent to the host reported in the logs below: http://datanode02.hadoop:1006/jmx

A few checks:
1. curl http://datanode02.hadoop:1006/jmx
2. telnet datanode02.hadoop 1006

If these succeed, restart the agent. If there is an issue with the response, you need to review the DN logs for the issues/workarounds suggested previously.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
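The telnet-style reachability check can be scripted for repeated monitoring; a small standard-library sketch (host and port are the ones reported in the logs, adjust as needed):

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    mimicking the manual telnet reachability check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("datanode02.hadoop", 1006)` from the agent host tells you whether the datanode's JMX port is reachable at all, separating network problems from datanode problems.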
04-21-2022
01:56 AM
@Juris The default filter value of 7 days is hardcoded in Hue. You can find it set on your cluster hosts in /usr/lib/hue/apps/jobbrowser/src/jobbrowser/templates/job_browser.mako [default file], as shown below. As a workaround, this file can be edited across all hosts followed by a Hue restart, but the change will apply to all users. We generally don't recommend editing the install files; we are just providing the information here.

self.timeValueFilter = ko.observable (7).extend({ throttle: 500 });
self.timeUnitFilter = ko.observable ('days').extend({ throttle: 500 });

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
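If you do go down the edit-the-file route, the change can be applied consistently across hosts with a small script rather than by hand; a sketch (the 30-day value and the idea of scripting this are illustrative, not a Cloudera-provided procedure) that rewrites the hardcoded default:

```python
import re


def set_default_time_filter(mako_source: str, days: int) -> str:
    """Replace the hardcoded day count in the job_browser.mako
    timeValueFilter line with `days`, leaving the rest untouched."""
    return re.sub(
        r"(self\.timeValueFilter = ko\.observable ?\()\d+(\))",
        lambda m: f"{m.group(1)}{days}{m.group(2)}",
        mako_source,
    )
```

You would read the mako file, pass its contents through this function, write it back on every Hue host, and then restart Hue.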
01-20-2022
04:08 AM
@Jua Cloudera expects HA solutions for RDBMS to be transparent to Cloudera software; they are therefore not supported or debugged by Cloudera. That said, you can try modifying /etc/cloudera-scm-server/db.properties with a URL in the format below and test it out:

com.cloudera.cmf.db.type=oracle
com.cloudera.cmf.orm.hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
com.cloudera.cmf.orm.hibernate.connection.url=jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=off)(FAILOVER=on)(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS=(PROTOCOL=TCP)(HOST=hostname1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=hostname2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=<Oracle-SID>)))
com.cloudera.cmf.orm.hibernate.connection.username=<CM-Oracle-user>
com.cloudera.cmf.orm.hibernate.connection.password=<password>

I suggest checking with your DB vendor to validate the JDBC URL if this still fails.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
01-20-2022
03:37 AM
1 Kudo
@syedshakir Refer to the link below for the Navigator transition steps: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/atlas-migrating-from-navigator-overview.html

Once you have exhausted the value of the Navigator audits and converted the Navigator metadata to Atlas content, you can disable the Navigator servers by simply stopping the service, deleting the Navigator service, and removing the Navigator metadata storage directory.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-27-2021
04:08 AM
@Zhaojie The errors to focus on here are the ones below, which are causing the unknown health alerts. They occur because the CM agent is not able to connect to the Host Monitor. Please ensure the hosts generating the alerts have space available and are not overloaded in terms of CPU and memory. You can also try a hard restart of the CM agents if that is feasible; a hard restart will stop the services running on the host and clear any thread deadlocks in cluster services.

The health test result for HOST_SCM_HEALTH has become bad: This host is in contact with the Cloudera Manager Server. This host is not in contact with the Host Monitor.
The health test result for HOST_AGENT_LOG_DIRECTORY_FREE_SPACE has become unknown: Not enough data to test: Test of whether the Cloudera Manager Agent's log directory has enough free space.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
04-05-2021
05:16 AM
@akshay0103 Please check the hue.ini content under the [useradmin] section to see whether any non-default permissions are being used. Are you adding the user with create-home-directory permissions?
09-21-2020
03:44 AM
1 Kudo
@Mondi Please refer to the document below; you can set up a local package repository for the Cloudera Manager upgrade: https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_ig_create_local_package_repo.html#internal_package_repo

For the CDH upgrade, you can follow the document below: https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/installation/topics/cdpdc-using-local-parcel-repository.html

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-12-2020
08:24 AM
@Mondi It is not compulsory to enable SSL, but it is recommended in order to prevent the passage of plain-text key material between the KMS and the HDFS data nodes. You can continue to install the Java KeyStore KMS without adding SSL configurations.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-11-2020
06:53 AM
@Mondi The KMS service should be installed on your CDH cluster. Before installing Key Trustee KMS, you should have a dedicated cluster added via the Cloudera Manager Add Cluster option with the KTS service roles installed. If you are installing the default Hadoop KMS (Java KeyStore KMS), the above can be ignored, since the default Hadoop KMS included in CDH uses a file-based Java KeyStore (JKS) as its backing keystore; you can simply add the service from Cloudera Manager. Cloudera strongly recommends that you enable TLS for both the HDFS and the Key Trustee KMS services to prevent the passage of plain-text key material between the KMS and the HDFS data nodes. Refer to the document below: https://docs.cloudera.com/documentation/enterprise/latest/topics/sg_hdfs_encryption_wizard.html#concept_fcq_phr_wt

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-10-2020
06:25 AM
@Mondi Cloudera provides two implementations of the Hadoop KMS; refer to the document below for more details: https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_sg_kms.html

You need to install Key Trustee KMS only when using KTS as the backing keystore instead of the file-based Java KeyStore (JKS) used by the default Hadoop KMS. There should be a separate cluster for the Key Trustee Server; this is mentioned as one of the steps when you enable HDFS encryption via the wizard. Refer to the document below: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/sg_hdfs_encryption_wizard.html#concept_n2p_5vq_vt

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-07-2020
01:09 AM
@Mondi You do not need to install Cloudera Navigator for KMS and KTS. Refer to: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/encryption_prereqs.html#concept_g23_454_y5__section_n4w_b5v_ls

Please refer to the documents below for the encrypting-data-at-rest requirements and for installing KMS and KTS. Note that you must install Key Trustee Server before installing and using Key Trustee KMS.

https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/encryption_planning.html#concept_c4m_knq_w5
https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/key_trustee_install.html#xd_583c10bfdbd326ba-590cb1d1-149e9ca9886--7b84
https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ig_install_keytrustee.html#xd_583c10bfdbd326ba-590cb1d1-149e9ca9886--7860

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
08-04-2020
08:42 PM
1 Kudo
@ebeb It seems there is an issue with either your credentials or connectivity to the URL. Please confirm that you are using the credentials generated from the license file. Refer to the document below for obtaining the username and password to add to the baseurl: https://docs.cloudera.com/cdp/latest/release-guide/topics/cdpdc-cm-download-information.html

Also, confirm the URL connectivity from your CM server host using the curl command:

# ping archive.cloudera.com
# curl https://username:password@archive.cloudera.com/p/cm7/7.1.2/redhat6/yum

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-29-2020
10:50 PM
@Mondi Please refer to the document below, which explains the steps to configure the JAR for UDFs. You can configure the cluster in one of several ways to find the JAR containing your UDF code, and then register the UDF in Hive. https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_mc_hive_udf.html#xd_583c10bfdbd326ba--43d5fd93-1410993f8c2--7ea3

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-24-2020
06:20 AM
@rmr1989 Ideally, for missing mounts in the cluster, you will automatically get alerts for the affected services if any Hadoop service directories mapped to the mountpoint cannot be accessed. The alerts can be generic, though, such as "no such file or directory" or "file not found" errors. If you are looking for the availability of specific mountpoints, you should consider using a script that scans the cluster hosts for mountpoints and sends email alerts via SMTP from the host, instead of relying on Cloudera Manager.

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
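Such a script can be quite small; a sketch of the idea (the mountpoint list, SMTP relay, and email addresses are all placeholders you would adapt, not values from the original thread):

```python
import os
import smtplib
from email.message import EmailMessage

# Placeholder list; adapt per host, e.g. from /etc/fstab.
EXPECTED_MOUNTS = ["/", "/data1", "/data2"]


def missing_mounts(expected):
    """Return the expected paths that are not currently active mountpoints."""
    return [path for path in expected if not os.path.ismount(path)]


def alert_by_email(missing, smtp_host="localhost",
                   sender="ops@example.com", recipient="ops@example.com"):
    """Send a plain-text alert through the host's local SMTP relay
    (hypothetical relay and addresses)."""
    msg = EmailMessage()
    msg["Subject"] = "Missing mountpoints: " + ", ".join(missing)
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("The following mountpoints are unavailable:\n" + "\n".join(missing))
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    bad = missing_mounts(EXPECTED_MOUNTS)
    if bad:
        alert_by_email(bad)
```

Run from cron on each host, this covers the specific-mountpoint case that CM triggers cannot express.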
07-23-2020
04:09 AM
@rmr1989 You can edit the expression manually to include the NOT operator; check the example below, which uses the NOT (!=) operator in a trigger. A trigger expression takes the form:

IF (CONDITIONS) DO HEALTH_ACTION

A condition is any valid tsquery statement with the following syntax:

SELECT [metric expression] WHERE [predicate]

Mount point is an attribute that can be used in filter conditions but not in a metric expression; hence, I believe it would not be possible to create a trigger on mountpoint scans. Refer to the document below, which lists the supported metrics: https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_metrics.html#xd_583c10bfdbd326ba--7f25092b-13fba2465e5--7e52

Example:

IF (SELECT total_xceivers_across_datanodes WHERE serviceName=$SERVICENAME AND last(total_xceivers_across_datanodes) != 0 AND entityName = "HDFS-1:ns1" AND category = "SERVICE") DO health:concerning

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-23-2020
03:36 AM
@fransyk It is not compulsory to have the two roles on different hosts. That said, in production environments it is recommended to separate worker hosts from master hosts. Please refer to the document below for recommended role allocations for different cluster sizes: https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_ig_host_allocations.html#concept_f43_j4y_dw

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-23-2020
02:55 AM
@fransyk Yes, you can have the datanode and namenode roles together on a host, as long as the hardware requirements for the services are met, to avoid any out-of-memory issues. Please refer to https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_hardware_requirements.html#concept_fzz_dq4_gbb

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-22-2020
06:55 AM
@pdev Yes, BDR is supported between different CDH versions. Refer to the links below for more details:

https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cm_bdr_replication_intro.html#concept_rt2_1wt_bx
https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_pcm_bdr.html#bdr

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-22-2020
04:47 AM
@pdev Please refer to document [1] for details on how to enable replication between clusters with Kerberos configured. To configure encryption of data transmission between the source and destination clusters:

1. Enable TLS/SSL for HDFS clients on both the source and the destination clusters. You may also need to configure trust between the SSL certificates on the source and destination; the certificates of the source cluster should be trusted by your destination cluster.
2. Enable TLS/SSL for the two peer Cloudera Manager Servers. Refer to link [2] for more details.

[1] https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cm_bdr_replication_and_kerberos.html#xd_583c10bfdbd326ba-5676e95c-13ed333c3d9--7ff3
[2] https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cm_bdr_replication_and_encryption.html#concept_lrr_rcf_4r

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
07-22-2020
04:31 AM
@Prav You can leverage the CM API to track the parcel distribution status:

/api/v19/clusters/{clusterName}/parcels - used to note the parcel names and versions the cluster has access to
/api/v19/clusters/{clusterName}/parcels/products/{product}/versions/{version} - used to track the distribution status of a specific parcel

Refer to the link below for more details: http://cloudera.github.io/cm_api/apidocs/v19/path__clusters_-clusterName-_parcels_products_-product-_versions_-version-.html

Hope this helps,
Paras

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
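A sketch of how the second endpoint might be polled from a script (the CM host, port, credentials, and the assumption that the parcel JSON exposes a "stage" field are all placeholders/assumptions, not confirmed by the thread):

```python
import base64
import json
from urllib.request import Request, urlopen


def parcel_status_url(cm_host: str, cluster: str, product: str, version: str) -> str:
    """Build the CM API v19 endpoint for one parcel's distribution status.
    Assumes CM listens on the default non-TLS port 7180."""
    return (f"http://{cm_host}:7180/api/v19/clusters/{cluster}"
            f"/parcels/products/{product}/versions/{version}")


def fetch_parcel_stage(url: str, user: str, password: str) -> str:
    """Fetch the parcel resource with HTTP basic auth and return its
    'stage' field (assumed name, e.g. DISTRIBUTED or ACTIVATED)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = Request(url, headers={"Authorization": f"Basic {token}"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)["stage"]
```

Polling `fetch_parcel_stage(...)` in a loop until the stage reaches the distributed/activated state gives a simple programmatic progress tracker.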