Member since: 09-14-2017
Posts: 120
Kudos Received: 11
Solutions: 5

My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 3134 | 06-17-2021 06:55 AM |
| 1922 | 01-13-2021 01:56 PM |
| 17196 | 11-02-2017 06:35 AM |
| 18992 | 10-04-2017 02:43 PM |
| 34410 | 09-14-2017 06:40 PM |
05-27-2021
09:39 AM
Some more progress: it appears that in CDP 7.1.6 we need to create the unencrypted dummy key file as below. To create an unencrypted private key file from the encrypted key, run:

openssl rsa -in ssl_certificate.key -out ssl_certificate-nocrypt.key

The output file (ssl_certificate-nocrypt.key) is an unencrypted PEM-formatted key, which is then used for the parameter:

key_file=/opt/cloudera/security/saml/ssl_certificate-nocrypt.key

With that, the "Could not deserialize key data" error is gone, but now we are getting a different error:

AttributeError at /saml2/acs/
'NoneType' object has no attribute 'strip'
Request Method: POST
Request URL: http://xxxx.com:8889/saml2/acs/
Django Version: 1.11.29
Exception Type: AttributeError
Exception Value: 'NoneType' object has no attribute 'strip'
Exception Location: /opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/lib/hue/build/env/lib/python2.7/site-packages/pysaml2-4.9.0-py2.7.egg/saml2/response.py in for_me, line 212
Python Executable: /opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/lib/hue/build/env/bin/python2.7
Python Version: 2.7.5
Python Path: ['/opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/lib/hue/desktop/libs/libsaml/attribute-maps',

Below is the Python code around line 212 that errors out:

202  def for_me(conditions, myself):
203      """ Am I among the intended audiences """
204
205      if not conditions.audience_restriction:  # No audience restriction
206          return True
207
208      for restriction in conditions.audience_restriction:
209          if not restriction.audience:
210              continue
211          for audience in restriction.audience:
212              if audience.text.strip() == myself:
213                  return True
214              else:
215                  # print("Not for me: %s != %s" % (audience.text.strip(),
216                  #                                 myself))
217                  pass
218
219      return False
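The traceback suggests that one of the <Audience> elements in the IdP's response has no text, so audience.text is None when .strip() is called. Below is a defensive variant of for_me() sketched purely for illustration (it is not the shipped pysaml2 code); the underlying cause is usually an empty or mismatched AudienceRestriction in the IdP response, so comparing the IdP's audience value against Hue's SAML entity ID is the first thing to check.

# Illustrative only - a defensive variant of pysaml2's for_me() that skips
# <Audience> elements whose text is None instead of crashing on .strip().
def for_me(conditions, myself):
    """ Am I among the intended audiences """
    if not conditions.audience_restriction:  # No audience restriction
        return True
    for restriction in conditions.audience_restriction:
        if not restriction.audience:
            continue
        for audience in restriction.audience:
            # audience.text is None when the <Audience> element is empty
            if audience.text and audience.text.strip() == myself:
                return True
    return False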
05-26-2021
04:26 PM
Hello,

After we upgraded from CDH 5.15 to the CDP 7.1.6 runtime, the Hue SAML login broke with the error below. Any ideas?

ValueError at /saml2/login/
Could not deserialize key data.
Request Method: GET
Request URL: http://xxxxx.com:8889/saml2/login/?next=/
Django Version: 1.11.29
Exception Type: ValueError
Exception Value: Could not deserialize key data.
Exception Location: /opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/lib/hue/build/env/lib/python2.7/site-packages/cryptography-2.9-py2.7-linux-x86_64.egg/cryptography/hazmat/backends/openssl/backend.py in _handle_key_loading_error, line 1382
Python Executable: /opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/lib/hue/build/env/bin/python2.7
Python Version: 2.7.5
Python Path: ['/opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/lib/hue/desktop/libs/libsaml/attribute-maps',
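A quick way to check whether the key file itself is the problem is to load it with the same cryptography library that Hue's libsaml uses; an encrypted or non-PEM key fails in the same way. A rough sketch, with the path as a placeholder:

# Rough check of the SAML private key with the cryptography library
# (the same library that raises "Could not deserialize key data" in Hue).
# The path below is a placeholder for your key_file setting.
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.serialization import load_pem_private_key

with open("/opt/cloudera/security/saml/ssl_certificate.key", "rb") as f:
    key_data = f.read()

try:
    load_pem_private_key(key_data, password=None, backend=default_backend())
    print("Key loads without a passphrase - usable as libsaml key_file")
except TypeError:
    print("Key appears to be passphrase-protected - decrypt it first (openssl rsa)")
except ValueError as exc:
    print("Could not deserialize key data: %s" % exc)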
Labels:
- Cloudera Hue
- Security
01-15-2021
08:23 AM
There seem to be certain versions of thrift-sasl and impyla that do or don't work together, and it is not easy to figure out these version mismatches. So we finally abandoned impyla and went with pyodbc and the Cloudera Impala ODBC driver, which is easier to get working and has been working well so far. Check out this link: https://plenium.wordpress.com/2020/05/04/use-pyodbc-with-cloudera-impala-odbc-and-kerberos/
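For reference, a minimal pyodbc sketch along the lines of that post; the driver name, host, and port below are placeholders and assume the Cloudera Impala ODBC driver is installed and a Kerberos ticket (kinit) is already in place:

# Minimal sketch: Impala over the Cloudera ODBC driver with Kerberos.
# Driver name, host, and port are assumptions - match them to your
# odbcinst.ini / DSN, and run kinit before connecting.
import pyodbc

conn_str = (
    "Driver=Cloudera ODBC Driver for Impala;"
    "Host=impala-host.example.com;"
    "Port=21050;"
    "AuthMech=1;"          # 1 = Kerberos
    "KrbServiceName=impala;"
)
conn = pyodbc.connect(conn_str, autocommit=True)
cursor = conn.cursor()
cursor.execute("SELECT version()")
print(cursor.fetchone())
cursor.close()
conn.close()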
01-13-2021
03:22 PM
A DBeaver connection to Hive/Impala over JDBC with Kerberos is somewhat difficult to get working. Try an easier method using ODBC, as described in https://plenium.wordpress.com/2019/10/15/connect-dbeaver-sql-tool-to-cloudera-hive-impala-with-kerberos/
01-13-2021
01:56 PM
1 Kudo
This was resolved by manually upgrading the agents on the other nodes, which were still at CM 5.16, with the steps below (a quick version check is sketched after the list):

1. Update /etc/yum.repos.d/cloudera-manager.repo on each node with the proper repo for CM 7.2.4.
2. On each agent node to be upgraded, run:
   $ yum upgrade cloudera-manager-daemons cloudera-manager-agent
3. Restart the Cloudera Manager agent on all nodes:
   $ systemctl restart cloudera-scm-agent
4. In the Cloudera Manager GUI, restart the Cloudera Management Service.
5. Restart all services of the CDH cluster.

This should resolve the issue.
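As a sanity check before restarting the cluster, the installed package versions can be confirmed on each node; a small sketch (assumes an RPM-based OS, as in this environment):

# Quick check that the yum upgrade actually took effect on this node
# (assumes an RPM-based OS). Prints the installed CM package versions.
import subprocess

for pkg in ("cloudera-manager-daemons", "cloudera-manager-agent"):
    try:
        print(subprocess.check_output(["rpm", "-q", pkg]).decode().strip())
    except subprocess.CalledProcessError:
        print("%s is not installed" % pkg)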
01-12-2021
03:25 PM
Hello Experts,

I have upgraded CDH 5.16 to CDP 7.2.4, Cloudera Manager only (not the runtime yet). After the upgrade, the Cloudera Manager server is running fine, including its agent, but the Cloudera agents on the rest of the nodes won't start when running:

$ sudo systemctl restart cloudera-scm-agent

The agents are still at 5.16, as the final upgrade step does show the upgrade button for the agents. Before the upgrade the agents were running fine; they were shut down during the Cloudera Manager upgrade. How can the agents be started and upgraded to CDP 7.2.4 now? When trying to restart an agent, it just shows:

$ sudo systemctl status cloudera-scm-agent
● cloudera-scm-agent.service - LSB: Cloudera SCM Agent
   Loaded: loaded (/etc/rc.d/init.d/cloudera-scm-agent; bad; vendor preset: disabled)
   Active: active (exited) since Tue 2021-01-12 17:52:10 EST; 7s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 14093 ExecStop=/etc/rc.d/init.d/cloudera-scm-agent stop (code=exited, status=0/SUCCESS)
  Process: 14210 ExecStart=/etc/rc.d/init.d/cloudera-scm-agent start (code=exited, status=0/SUCCESS)

In /var/log/cloudera-scm-agent/cloudera-scm-agent.log I see messages like the following:

u'KS_INDEXER', u'ZOOKEEPER-SERVER', u'SERVER', u'HIVE_ON_TEZ', u'HIVE_ON_TEZ', u'HIVE_LLAP', u'HIVE_LLAP', u'KEYTRUSTEE_SERVER', u'KEYTRUSTEE_SERVER', u'SCHEMAREGISTRY-SCHEMA_REGISTRY_SERVER', u'SCHEMA_REGISTRY_SERVER', u'OZONE', u'OZONE', u'MAPREDUCE-JOBTRACKER', u'JOBTRACKER', u'THALES_KMS-HSMKP_THALES', u'HSMKP_THALES'], u'flood_seed_timeout': 100, u'eventserver_port': 7185}
Traceback (most recent call last):
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.16.2-py2.7.egg/cmf/agent.py", line 1566, in handle_heartbeat_response
    self._handle_heartbeat_response(response)
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.16.2-py2.7.egg/cmf/agent.py", line 1581, in _handle_heartbeat_response
    self.java_home_config = self.extra_configs['JAVA_HOME']
KeyError: 'JAVA_HOME'

Any thoughts? Thanks!
01-09-2021
09:19 AM
@GangWar, you are a genius! After this Java parameter change, all CDH services started smoothly and everything is running fine with Active Directory Kerberos. Thanks so much!
01-08-2021
12:36 PM
Hello Experts,

After changing MIT Kerberos to AD Kerberos and regenerating all the Kerberos credentials in CM, ZooKeeper, YARN, etc. are not starting. There is an error about the Active Directory sAMAccountName not being allowed to log in as the zookeeper principal. I checked that the principals are created in the AD OrgUnit for Cloudera, and

$ kinit -kt zookeeper.keytab zookeeper/redacted@ADREALM

works fine on the Linux servers. Any thoughts on how to fix this?

SERVICE_TYPE: ZOOKEEPER
SEVERITY: CRITICAL
STACKTRACE:
javax.security.sasl.SaslException: Problem with callback handler [Caused by javax.security.sasl.SaslException: redacted@ADREALM.COM is not authorized to connect as zookeeper/redacted@ADREALM.COM]
    at com.sun.security.sasl.gsskerb.GssKrb5Server.doHandshake2(GssKrb5Server.java:333)
    at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:161)
    at org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer.authenticate(SaslQuorumAuthServer.java:98)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.handleConnection(QuorumCnxManager.java:449)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.receiveConnection(QuorumCnxManager.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$QuorumConnectionReceiverThread.run(QuorumCnxManager.java:423)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: javax.security.sasl.SaslException:

Thanks!
01-07-2021
02:04 PM
One specific issue, an Impala connection pool timeout error, got resolved by increasing the following Impala configuration in CM and then restarting the Impala daemons:

fe_service_threads: increased from 64 to 128, as per the recommendation below.

The recommended configuration settings for the best performance with Impala are listed at https://docs.cloudera.com/best-practices/latest/impala-performance/topics/bp-impala-recommended-configurations.html#:~:text=Set%20the%20%2D%2Dfe_service_threads%20startup,of%20concurrent%20client%20connections%20allowed. : "Set the --fe_service_threads startup option for the Impala daemon (impalad) to 256. This option specifies the maximum number of concurrent client connections allowed. See Startup Options for impalad Daemon for details."

Below are the errors that got resolved by increasing the Impala daemon pool size:

com.streamsets.pipeline.api.StageException: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.PoolInitializationException: Exception during pool initialization: [Cloudera][ImpalaJDBCDriver](700100) Connection timeout expired. Details: None.

SQL Error [3] [S1000]: [Cloudera][ThriftExtension] (3) Error occurred while contacting server: ETIMEDOUT. The connection has been configured to use a SASL mechanism for authentication. This error might be due to the server not using SASL for authentication.
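One way to confirm the running daemons picked up the new value is to look at the startup flags on the impalad debug web UI (/varz), assuming the debug web UI is enabled; the host and port (25000 is the default) below are placeholders:

# Check the effective fe_service_threads flag via the impalad debug web UI.
# Host and port are assumptions for this sketch.
import requests

resp = requests.get("http://impalad-host.example.com:25000/varz", timeout=10)
resp.raise_for_status()
for line in resp.text.splitlines():
    if "fe_service_threads" in line:
        print(line.strip())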
12-22-2020
07:36 AM
Hello Experts,

Any thoughts or documents on how to configure CDH 7.x Kerberos for central authentication with Active Directory where users are in multiple AD domains/realms and there is no trust set up between the domains in the AD forest? I believe SSSD can be configured to authenticate the Linux users against multiple AD realms, but the question is how CDH cluster services like HDFS can be made to trust Kerberos tickets from multiple AD domains.

Thanks!