Member since: 01-15-2015
Posts: 313
Kudos Received: 28
Solutions: 25
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1058 | 10-19-2021 12:04 AM |
| 4290 | 10-18-2021 11:54 PM |
| 1584 | 05-04-2021 02:38 AM |
| 6012 | 11-19-2020 05:48 AM |
| 6044 | 11-18-2020 12:08 AM |
03-29-2019
07:29 AM
Thanks, I'll look at it. Unfortunately we have a mandate to use Ansible and full automation to the largest extent possible, because we need to be able to set up a large variety of configurations to match what our customers use.

A good model is my HDFS playbook. It:

1. installs the required YUM packages
2. formats the HDFS filesystem
3. adds the standard test users
4. prepares the Kerberos keytab files (tbd)
5. prepares the SSL keystores (tbd)

and sets the flags for standard mode. We can then easily turn on Kerberos and/or RPC privacy via plays that modify just a few properties and restart the services (see the sketch below).

There's an HBase playbook that sets up the HBase servers. It can use HDFS, but from the conf files it looks like we could also use a traditional file and do many of our tests without setting up a full HDFS node. That means it will require fewer resources and can run on a smaller instance or even the dev's laptop. Since it's all yum and Ansible, anyone can modify the image without needing to learn new tools.

TPTB are fine with creating an AMI that only requires updating the crypto material, but they want to be able to rebuild the AMI image from the most basic resources. Hmm, I might be able to sell this particular story as an exception. The two use cases are 1) creating new configurations that we don't have a playbook for yet, and 2) verifying the configuration files for an arbitrary configuration. This won't be used in the automated tests.

(tbd - I know how to do it. The blocker is reaching a consensus on the best way to manage the resources so our applications don't require tweaking the configuration every time. Do we use a standalone KDC, an integrated solution like FreeIPA, etc.?)
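For concreteness, the toggle would look something like this from the command line. This is only a sketch; the playbook, tag, and variable names (site.yml, hdfs-security, enable_kerberos, rpc_privacy) are invented for illustration:

# ansible-playbook site.yml --tags hdfs
# ansible-playbook site.yml --tags hdfs-security -e enable_kerberos=true -e rpc_privacy=true

The first run sets up standard mode; the second re-runs only the small security play, flipping a few properties and restarting the services.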
03-29-2019
06:33 AM
1 Kudo
To review the state, please:

1. Get the affected directory path from CM > Configuration > Scope: YARN (MR2 included) > NodeManager Log Directory. The default is /var/log/hadoop-yarn.
2. Verify the available disk space for this path: # df -h /var/log/hadoop-yarn
3. Look up the biggest disk space consumers on this mount point (see the example below).
4. Clean up where possible.

If no cleanup is possible and there is sufficient disk space available, you can decrease the alarm threshold in the CM > YARN > Configuration > Scope: NodeManager > Category: Monitoring > Log Directory Free Space Monitoring Percentage Thresholds configuration property.
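For step 3, something like this usually surfaces the biggest consumers quickly (the path is the default from step 1; adjust it if yours differs):

# du -sh /var/log/hadoop-yarn/* | sort -rh | head -20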
03-28-2019
08:53 AM
Happy to hear the issue is resolved! Just for completeness, did you delete /var/lib/cloudera-scm-agent/cm_guid on the cluster nodes?
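If not, the usual sequence on each affected node is along these lines (a common agent re-registration step; please verify against the documentation for your CM version):

# service cloudera-scm-agent stop
# rm /var/lib/cloudera-scm-agent/cm_guid
# service cloudera-scm-agent start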
03-28-2019
08:51 AM
Instructions for changing hostnames are provided in the Changing Hostnames documentation chapter. For the CM server itself, the hostname change should be trivial, and everything should work fine afterwards. Is TLS enabled on your CM? Do you get any error message when connecting to the CM web UI? The CM server logs, as requested by @manuroman, will reveal whether there is an issue.
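If you want to check yourself first, the server log is in the default location below (assuming a standard installation; adjust the path if you relocated the logs):

# tail -n 200 /var/log/cloudera-scm-server/cloudera-scm-server.log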
03-28-2019
08:46 AM
Frequent pauses in the JVM by the garbage collector indicate that the heap memory settings are too low. Please increase the CM -> Cloudera Management Service -> Configuration -> Scope: Service Monitor -> Category: Resource Management -> Java Heap Size of Service Monitor in Bytes configuration property value accordingly. The Service Monitor Requirements documentation chapter has guidance on the values to choose.
03-28-2019
12:48 AM
Please make the verifier.pem file contain only the root CA certificate. Then list its contents with:

# openssl x509 -text -in /opt/cloudera/security/pki/verifier.pem

And repeat the connection test with this exact command:

# openssl s_client -connect cmhost.antuit.internal:7182 -CAfile /opt/cloudera/security/pki/verifier.pem
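As a quick sanity check that the file really holds the root CA: for a self-signed root, the subject and issuer printed by the following command are identical:

# openssl x509 -noout -subject -issuer -in /opt/cloudera/security/pki/verifier.pem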
03-14-2019
12:04 AM
1 Kudo
Stderr: bash: /root/password.sh: Permission denied

Please put the script into a directory other than /root and adjust db.properties accordingly. Make sure the cloudera-scm user has permission to read and execute that file.
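For example (the target directory here is just a suggestion; any location readable by cloudera-scm works):

# mv /root/password.sh /etc/cloudera-scm-server/
# chown cloudera-scm:cloudera-scm /etc/cloudera-scm-server/password.sh
# chmod 500 /etc/cloudera-scm-server/password.sh

Then update the script path in db.properties to match the new location.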
03-12-2019
08:04 AM
1 Kudo
As @Consult mentioned, you can use the Cloudera Navigator UI to query for Hive audit events in the Audits tab. Each audit event is associated with the username and the IP address of the client request. This should help you get an idea of who runs the most queries.
03-01-2019
01:45 PM
1 Kudo
Looks like a Netgear switch was causing the problem. Switched to a WiFi connection between the workstation and the ISP router and all is well... Thanks