Member since: 10-01-2018
Posts: 110
Kudos Received: 3
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 221 | 09-02-2021 08:55 AM
 | 1210 | 10-15-2019 10:42 AM
 | 1710 | 10-09-2019 05:46 AM
 | 440 | 10-09-2019 04:19 AM
09-04-2021
08:44 AM
We can't install a CDH or HDP distribution directly on a Mac. I would suggest installing VMware or VirtualBox and deploying a standalone single-node Hadoop cluster in a VM. Per the recent post from Sainath, you should have the relevant links; please refer to that.
09-04-2021
02:02 AM
So you want to install Cloudera Hadoop on your local Mac as a standalone cluster for learning. Is that your question?
09-03-2021
01:05 AM
Hi, Please check the CloudFormation stack in AWS for this deployment. If it is failing at creating a load balancer, it may have hit the ELB quota; the dex-base installation creates a load balancer. You might want to increase the AWS ELB quota and retry the provisioning. Otherwise, let us know if you have overcome this issue with a different resolution.
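As a quick check before retrying, a sketch using the AWS CLI (assuming it is configured for the account and region of the failed stack):

$ aws elbv2 describe-load-balancers --query 'length(LoadBalancers)'              # count of existing load balancers
$ aws service-quotas list-service-quotas --service-code elasticloadbalancing    # current ELB quotas

Compare the count against the listed quota values before requesting an increase.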
09-03-2021
12:10 AM
Verify whether the Replication Manager service can communicate with the Data Lake Cloudera Manager instance, and vice versa, for Hive replication: https://docs.cloudera.com/replication-manager/cloud/operations/topics/rm-port-requirements-cdh.html
09-02-2021
08:59 AM
you should "kinit" with your workload user credentials in CDP.before running any Hadoop commands
09-02-2021
08:49 AM
It seems your environment creation is failing at the point where it uses CCM. When you create a CDP environment, you have the option to use Cluster Connectivity Manager (CCM), in which case you use only private subnets and private outbound access to the CDP control plane. Please go through this documentation: https://docs.cloudera.com/management-console/cloud/connection-to-private-subnets/topics/mc-ccm-overview.html If you are testing with public subnets and public endpoints on GCP, don't enable CCM while registering the environment.
10-18-2019
09:07 AM
Thanks, bantone, for your article.
10-16-2019
03:49 AM
Hi Sandeep, Can you please check whether the owner and permissions of "/var/run/cloudera-scm-server" are root or cloudera-scm? Thanks and Regards, Bhuvan.
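For example, something like the following (run as root; assuming the standard cloudera-scm user and group from a package-based install):

$ ls -ld /var/run/cloudera-scm-server                             # shows current owner, group, and permissions
$ chown cloudera-scm:cloudera-scm /var/run/cloudera-scm-server    # only if the owner turned out to be wrong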
10-15-2019
10:42 AM
1 Kudo
Hi Baris, Yes, you are correct: there won't be any issue having those checks fail on worker nodes. When you run the CDSW status command, checks such as the ingress controller are only relevant on the master node, where the ingress controller runs as part of the kube-controller-manager binary. Failing these checks on a worker node will not cause you problems. Regards, Bhuvan
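To confirm which host carries the master role before reading the check output, a quick sketch (assuming the cdsw CLI and kubectl are available on the master host):

$ cdsw status        # run on the CDSW master; lists the per-node and per-pod checks
$ kubectl get nodes  # lists the cluster's nodes so you can tell master from workers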
10-09-2019
06:05 AM
Hi, Adding new cluster hosts or services to a cluster with auto-TLS enabled automatically creates and deploys the required certificates. https://www.cloudera.com/documentation/enterprise/latest/topics/auto_tls.html#auto_tls Regards, Bhuvan
10-09-2019
05:46 AM
1 Kudo
Hi, Please check whether you see errors like the following in the CDSW logs after the upgrade to 1.6:

ERROR EngineInit.BrowserSvcs fgtugi359w4b5n58 Finish apiGet, failed to execute API request data = {"err":"Get https://cdsw.unedic.intra/api/v1/projects/Priseenmain/formation_python/engines/fgtugi359w4b5n58/permissions?user=kkhataei-ext: x509: certificate signed by unknown authority","user":"root"}

The method by which the Cloudera Data Science Workbench web UI session tokens are stored has been hardened: https://www.cloudera.com/documentation/data-science-workbench/latest/topics/cdsw_release_notes.html#rel_160 As the issue happened after the upgrade to CDSW 1.6, I suspect it is related to the issue addressed under "DSE-7173". In CDSW 1.6 all internal traffic is set to go through the ingress-controller proxy for more security; however, it seems that some certificates are not making it into the sessions. A possible solution is to manually copy the root CA and intermediate CAs into the underlying cert file that contains all of the OS-level certs. Please try the steps below and let us know how it goes.

1. Gather all of the certificates in your chain of trust (root CA, intermediate CAs). Each should start with -----BEGIN CERTIFICATE----- and end with -----END CERTIFICATE-----.

2. Create a session as normal and copy the engineID from the URL. Then figure out which host the pod is running on (last column) with:

# kubectl get po --all-namespaces=true -o wide

3. SSH into that host and run:

docker ps | grep -i <engineID>    (to get the container ID)
docker cp <containerID>:/etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt

This copies the ca-certificates file out of the Docker container; it contains all of the certificates trusted by the OS.

4. Append ALL of your internal root CA / intermediate CA certificates to the end of this file, then copy the file to all of your CDSW nodes.

5. Go back to CDSW -> Admin -> Engines and, under Mounts, add /etc/ssl/certs/ca-certificates.crt to the engine mounts. This ensures the change persists across restarts.

6. Create a brand-new session in CDSW and verify that you can open the Terminal and access the Jupyter workbench.
10-09-2019
04:32 AM
Hi tuk, No, it is not supported. Please check our documentation for the supported OS details: https://docs.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#c516_supported_os Thanks, Bhuvan
10-09-2019
04:19 AM
Hi, Yes, your understanding of adding an existing SCM host to the new cluster is correct. However, we would not recommend building the new cluster on existing SCM hosts. https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cm_mc_add_delete_cluster.html#cmug_topic_6_1__section_klz_tj4_fn Regards, Bhuvan
10-09-2019
04:15 AM
Hi, Can you try restarting the Impala service, regenerating the Kerberos ticket, and then accessing the tables again? Check if that helps. https://cloudera-portal.force.com/articles/KB_Article/TSB-2018-297 Regards, Bhuvan
06-10-2019
07:36 AM
Hi Davey, Please refer to the link below for the EOL support policy for all CM/CDH versions; it may help you. https://www.cloudera.com/legal/policies/support-lifecycle-policy.html Regards, Bhuvan
06-07-2019
09:30 AM
Hi, Have you tried using mail triggers? Go through the steps below and you will find a way to configure alerts for CPU usage based on triggers. Mail alert triggers can be configured as described at [1]. For example, to create a custom trigger on Host CPU Usage:

1. Log in to Cloudera Manager.
2. Navigate to Hosts.
3. Select the host you would like to monitor (during trigger creation you can set the trigger to affect all hosts if needed).
4. Under "Details" on the left side there is the "Health Tests" section.
5. Select the "Create Trigger" button at the top-right of the "Health Tests" section.
6. Name the trigger.
7. Edit the "Expression" to test: Last "cpu_percent" > 80 => Mark as concerning.
8. Select whether the trigger should affect all hosts or only the host listed.
9. Verify the chart in the "Preview" section on the right side of the page.

To create a custom trigger from an existing chart, follow steps 1-3 above, then:

4. In the "Charts" section on the right there are charts from which you can create triggers:
4.1. Select the small gear icon in the top-right corner of the chart (it appears on mouseover), e.g. "Host CPU Usage".
4.2. Select "Create Trigger" from the drop-down menu.
4.3. Modify the trigger as needed.
4.4. Select whether the trigger should affect all hosts or only the host listed.

[1] http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-14-x/topics/cm_dg_triggers.html
[2] http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-14-x/topics/cm_ag_alerts.html

Setting the type of alert:

1. You don't need to restart anything for the triggers to work (not the hosts, not CM; no restart needed). When you create the trigger it is enabled by default.
2. On the host details page you can also check all of its configured triggers, if you suspect one is disabled or want to change a trigger later.
3. If you do not receive the alerts as expected, review the following settings:
3.1. Depending on whether you set the trigger to change the health state to "Concerning" or "Bad", set the alert threshold under Event Server configuration -> Health Alert Threshold -> Concerning or Bad (by default it doesn't alert on Concerning; sorry if this caused confusion).
3.2. If you still don't receive the alerts as intended, please check the steps at [1].

Enabling alerts on triggers:

1. Log in to Cloudera Manager.
2. Navigate to Administration -> Alerts.
3. Select "Hosts" in the "Alert Type" list on the left.
4. Select the small "wrench" icon next to Hosts to edit the settings.
5. Search for "Enable Health Alerts for This Host" and enable alerting on health tests.
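For reference, a sketch of what the underlying trigger expression can look like once saved (tsquery-style syntax as in the triggers documentation at [1]; treat the exact form as an assumption and verify it against your CM version):

IF (SELECT cpu_percent WHERE entityName=$HOSTID AND last(cpu_percent) > 80) DO health:concerning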
06-07-2019
08:53 AM
Yes Sam, you can get the configuration details from the Configuration tab of each service; for example, the Hive service's Configuration tab.
06-07-2019
08:47 AM
Hi, As per our documentation, triggers can be created for services, roles, role configuration groups, or hosts. Create a trigger by doing one of the following:

1. Directly editing the configuration for the service, role (or role configuration group), or host.
2. Clicking Create Trigger on the drop-down menu for most charts. Note that the Create Trigger command is not available on the drop-down menu for charts where no context (role, service, and so on) is defined, such as on the Home > Status tab.
3. Using the Create Trigger expression builder. See the documentation link below.

Ref Link: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_dg_triggers.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--7d8e Regards, Bhuvan
03-05-2019
09:11 AM
Hi Bryan, Please check the version of the Avro libraries you are using:

$ rpm -qi avro-libs

If they are 1.7 libraries, try upgrading them using yum:

$ yum upgrade avro-libs

Please also check the prerequisites before using Sqoop: https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cdh_ig_sqoop_installation.html#topic_13_4 For more Sqoop documentation, which might help your understanding, see https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html# and, for the supported database versions, https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_supported_databases If credentials are required, pass a username and password when accessing the Oracle database, like below:

sqoop import --connect jdbc:oracle:thin:@localhost:1521/orcl --username MOVIEDEMO --password welcome1

Regards, Bhuvan
01-10-2019
04:17 AM
Hi Priya, Sorry for the late reply. Please try to connect to HiveServer2 in the following format:

!connect jdbc:hive2://127.0.0.1:10000/default;auth=nosasl

Please also try logging in as the root user and accessing the database and files. Regards, Bhuvan
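Equivalently, if you start Beeline fresh rather than from its prompt, a sketch (assuming beeline is on the PATH and HiveServer2 listens on the default port 10000):

$ beeline -u "jdbc:hive2://127.0.0.1:10000/default;auth=nosasl"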
12-26-2018
08:30 AM
Hi Priya, Can you tell me which CLI you are using to connect to Hive, and post the command you are running? Please also let me know whether you logged in to that shell as root or as a normal user. Regards, Bhuvan
12-10-2018
09:31 AM
Hi, SSL and SASL are different; please add the parameter I posted. The one I asked you about is for SASL; the one you disabled is for SSL.
12-08-2018
03:05 PM
Hi, Check your HiveServer2 config file, hive-site.xml, and add this:

<property>
  <name>hive.server2.authentication</name>
  <value>NOSASL</value>
</property>
12-08-2018
02:40 PM
Hi, Check the agent logs on the host where this Impala service resides; you might find errors there. This happens when the agent on that host is not connecting to the master (Cloudera Manager) server.
12-07-2018
05:09 AM
Hi, After initiating a Scala session, go to the terminal access and run:

cat /tmp/spark-driver.log

Let me know the error/exception you are facing. Please also check the Kerberos authentication for the user configured in CDSW: User -> Settings -> Hadoop Authentication, and check for Kerberos. If no ticket has been granted by the KDC you won't get a Scala session. Regards, Bhuvan
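To pull just the failures out of that log, a small sketch (standard grep; the log path is the one named above):

$ grep -iE 'error|exception' /tmp/spark-driver.log    # case-insensitive match on error/exception lines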