Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 21053 | 03-03-2020 08:12 AM
 | 11986 | 02-28-2020 10:43 AM
 | 3482 | 12-16-2019 12:59 PM
 | 2990 | 11-12-2019 03:28 PM
 | 4806 | 11-01-2019 09:01 AM
07-24-2018
11:32 AM
@yassine24, io.storefile.bloom.block.size requires an integer, not a boolean. The background is good, but I'm not sure what problem you are seeing when you try to update the configuration.
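For reference, the value is a block size in bytes. A minimal hbase-site.xml sketch (131072 is the default shipped with HBase):

```xml
<!-- hbase-site.xml: io.storefile.bloom.block.size takes a byte count,
     not a boolean. 131072 (128 KB) is the shipped default. -->
<property>
  <name>io.storefile.bloom.block.size</name>
  <value>131072</value>
</property>
```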
07-24-2018
11:11 AM
1 Kudo
@martinbo, As mentioned by others, there are some options to ease the management of users and groups. Common ones are:

1 - SSSD, IPA, or Centrify OS-level integration, so that application calls to the OS are handled by those tools, which query a central LDAP source. This requires a good deal of configuration, but it is a robust, enterprise-grade solution.
2 - Manage your group and passwd files with automation tools like Puppet, Chef, etc. (modify once, "push out" changes to all hosts).
3 - Configure LdapGroupsMapping in HDFS so that Hadoop services do group lookups directly against LDAP.

NOTE: If you intend to let users run jobs directly on YARN, you will still need to create local users on each host with a NodeManager, since containers require the OS user to be present.
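For the LdapGroupsMapping option, a core-site.xml sketch is below; the LDAP URL, bind DN, and base DN are placeholders you would replace with your environment's values:

```xml
<!-- core-site.xml sketch for LdapGroupsMapping; all values below are
     placeholders for your LDAP environment. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ldap.example.com:389</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value>cn=hadoop-bind,ou=people,dc=example,dc=com</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.base</name>
  <value>dc=example,dc=com</value>
</property>
```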
07-24-2018
11:02 AM
1 Kudo
@Anudas, Since the error shows "JDBC Driver class not found: com.mysql.jdbc.Driver", there is likely a permissions problem (or the JAR is missing). In /etc/default/cloudera-scm-server there is the following configuration by default:

export CMF_JDBC_DRIVER_JAR="/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar"

So let's make sure permissions allow the cloudera-scm user to access that JAR. Run:

ls -la /usr/share/java

and see if the permissions allow the cloudera-scm user access. Since you ran scm_prepare_database.sh as root, that might explain why it worked while CM's JVM cannot locate the class.
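A sketch of the check-and-fix, run here against a scratch path so it is safe anywhere; substitute /usr/share/java/mysql-connector-java.jar on the real host:

```shell
# Sketch: simulate and fix restrictive permissions on the connector JAR.
# Substitute the real /usr/share/java/mysql-connector-java.jar path on your host.
JAR=/tmp/mysql-connector-java.jar
touch "$JAR" && chmod 600 "$JAR"   # simulate a JAR only root can read
# The cloudera-scm user needs read access, so make it world-readable:
chmod 644 "$JAR"
stat -c '%a' "$JAR"
```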
07-24-2018
10:49 AM
@Huriye, You can also review tcpdump output to see if a connection is being made, check your mysql logs to see if a connection attempt is registered, etc. Somewhere between the JDBC driver and your mysql server there is a failure of some sort, so diagnostics need to be performed to figure out what is failing. This may take some digging, so if you have friends or colleagues who can work with you on this system and know how to debug these sorts of connection issues, I'd ask them for some help. I suggested the Java test app as an example of something you might try, but if it is outside your skill set, there may be other ways. Start by determining whether the JDBC connection is able to reach its destination (the mysql server).
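As a first step, a quick TCP reachability probe (host and port below are placeholders; point them at your MySQL server). This uses bash's built-in /dev/tcp, so it works even without nc or telnet installed:

```shell
# Probe whether the JDBC endpoint is reachable at the TCP level.
# host/port are placeholders; substitute your MySQL server's address.
host=127.0.0.1; port=3306
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  msg="reachable"
else
  msg="unreachable"
fi
echo "$host:$port is $msg"
```

A refusal or timeout here means the failure is at the network level, before the JDBC driver is even involved.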
07-24-2018
10:38 AM
@alexmc6, I checked the docs, and the indication is that you should not need to configure a peer if both the source and target Hive services are managed by the same Cloudera Manager. If you did configure a peer for the replication schedule, let's try creating a new replication schedule that does not use a peer. You should be able to select the desired source Hive service without the peer. I am wondering if the remote execution of that Hive export command is failing because the parent command is already running... Perhaps without the peer, the conflict check handles that. Just a thought.
07-24-2018
09:50 AM
@alexmc6, Thanks for the clarification on the CM / Hive service situation ... I get it now. Can you confirm that when you query your database for NAME = HiveReplicationCommand and STATE = STARTED, nothing is returned while no replication schedules are running? I wasn't quite sure based on your previous comment. The query that is performed to generate the result you are seeing shouldn't care about clusters or anything, I think. I'll double-check and let you know if I find differently.
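For reference, a query along these lines; the COMMANDS table and its column names are my assumption about the CM schema based on this thread, so verify against your actual schema before running:

```sql
-- Hedged sketch: confirm table/column names against your CM database first.
SELECT COMMAND_ID, NAME, STATE
  FROM COMMANDS
 WHERE NAME  = 'HiveReplicationCommand'
   AND STATE = 'STARTED';
```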
07-24-2018
09:42 AM
@proxim The error of interest is in your kt_renewer log:

/bin/kinit -k -t /run/cloudera-scm-agent/process/961-hue-KT_RENEWER/hue.keytab -c /var/run/hue/hue_krb5_ccache hue/npmnru01l.vf-nz.internal.vodafone.com@NASA-UAT-VFNZ.COM
kinit: Failed to store credentials: Internal credentials cache error (filename: /var/run/hue/hue_krb5_ccache) while getting initial credentials

The error tells us kinit had a problem storing the credentials in the credentials cache located at /var/run/hue/hue_krb5_ccache. I would check that file and its parent directory to make sure your hue process user can create the cache file there. Try running:

kinit -c /var/run/hue/hue_krb5_ccache
ls -la /var/run/hue
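A sketch of the ownership/permission check, run here on a scratch directory so it is safe anywhere; on the real host, substitute /var/run/hue and run the touch as the actual hue process user (e.g. via sudo -u hue):

```shell
# Sketch: verify the cache directory allows the process user to create files.
# Substitute /var/run/hue and the hue user on the real host.
CCDIR=/tmp/hue-ccache-demo
mkdir -p "$CCDIR" && chmod 700 "$CCDIR"
# kinit must be able to create/replace the cache file inside this directory:
touch "$CCDIR/hue_krb5_ccache" && echo "cache file writable"
ls -ld "$CCDIR"
```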
07-24-2018
09:26 AM
@AppaRao, Thanks for bringing this up. In the future, it would be a good idea to create a new thread rather than continue an older one like this. When that initial message was posted, I think Cloudera Manager's agent was not able to use anything but TLSv1 (which has since been fixed). Cloudera is working on documenting this information; for now, I'll add in what we have.

NOTE:
- You must have CDH 5.13.1 or higher in order for these steps to work for you
- We have only tested so far on RedHat and CentOS 6 and 7

----------------------------------------------------------------------------------------------------------------

Java-Based Components

The vast majority of the components in the ecosystem are written in Java. This includes Accumulo, Flume, HBase, HDFS, Hive, Kafka, the Keystore Indexer, MapReduce, Oozie, Sentry, Solr, Spark, Sqoop, YARN, and ZooKeeper. What you need to do is disable the ciphers in Java, which then applies to most of the Java-based services once they are restarted. Fortunately, there is one universal place to enforce TLS usage for all Java-based components: the "java.security" file.

Step 1: Update java.security
Open $JAVA_HOME/jre/lib/security/java.security in an editor and add or replace this line:
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, MD5withRSA, DH keySize < 768, 3DES_EDE_CBC
NOTE: This must be done on every machine in the cluster. Whenever Java is upgraded, this step must be performed again.

Hue

The components of interest in Hue are the Hue load balancer and the Hue servers. We recommend the use of the Hue load balancer because more cipher suites will be available, and there is no issue with Firefox (as described below). The first change is to the Hue load balancer.

Step 2: The Hue load balancer
On the CM machine, edit /usr/share/cmf/hue/httpd/httpd.conf and change the SSLProtocol line to look like this:
SSLProtocol -all +TLSv1.2
NOTE: This step also has to be performed after each CM upgrade.
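Since the Step 1 edit has to be repeated on every host and after every Java upgrade, it can be scripted. A sketch, run here against a scratch copy; on each host, point JSEC at the real $JAVA_HOME/jre/lib/security/java.security:

```shell
# Sketch: idempotently set jdk.tls.disabledAlgorithms in java.security.
# JSEC points at a scratch copy here; substitute the real path per host.
JSEC=/tmp/java.security.demo
printf 'jdk.tls.disabledAlgorithms=SSLv3, RC4\n' > "$JSEC"   # pretend existing entry
cp "$JSEC" "$JSEC.bak"                                       # always back up first
# Drop any existing setting, then append the hardened line:
sed -i '/^jdk\.tls\.disabledAlgorithms=/d' "$JSEC"
echo 'jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, MD5withRSA, DH keySize < 768, 3DES_EDE_CBC' >> "$JSEC"
grep 'disabledAlgorithms' "$JSEC"
```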
Step 3: The Hue Server
In CM, add the following to the "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini":
[desktop]
ssl_cipher_list=DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:!SSLv3:!TLSv1
NOTE: The Firefox web browser does not support any cipher suites that overlap with these ciphers. As a result, Firefox will not load the page and will display the error "SSL_ERROR_NO_CYPHER_OVERLAP". Chrome, IE, Edge, and Safari do not have this issue. The issue does not exist for the Hue load balancer above.

Impala

There are two different mechanisms to get TLS 1.2 support, depending on your operating system.

Step 4a: Impala on RHEL/CentOS 7
In CM, add the following parameter to "Impala Command Line Argument Advanced Configuration Snippet (Safety Valve)":
-ssl_minimum_version=tlsv1.2

On RHEL/CentOS 6, the above flag unfortunately does not work. Add the following instead:

Step 4b: Impala on RHEL/CentOS 6
In CM, add the following parameter to "Impala Command Line Argument Advanced Configuration Snippet (Safety Valve)":
-ssl_cipher_list=DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:!SSLv3:!TLS1
NOTE: This solution has the same caveats with respect to Firefox as described above for Hue.

Kudu

There are two ports to configure for Kudu: the RPC port and the webserver port. In CDH 5.13.1 it's possible to restrict the protocol to TLS 1.2 for the RPC port, which is the port that all the data travels through. Unfortunately, it's not possible to similarly enforce TLS 1.2 on the webserver port in CDH 5.13.1; the traffic that goes over the webserver port is of a generally non-sensitive nature, like status information. Like Impala, there are two different solutions depending on the OS version.
Step 5a: Kudu on RHEL/CentOS 7
In CM, add the following parameter to the "Kudu Service Advanced Configuration Snippet (Safety Valve) for gflagfile":
-rpc_tls_min_protocol=TLSv1.2

On RHEL/CentOS 6, add the following instead:

Step 5b: Kudu on RHEL/CentOS 6
In CM, add the following parameter to the "Kudu Service Advanced Configuration Snippet (Safety Valve) for gflagfile":
-rpc_tls_ciphers=DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:!SSLv3:!TLSv1
NOTE: Once again, this solution for RHEL/CentOS 6 has the same caveats with respect to Firefox as described above for Hue.

Restart

Now you'll want to restart everything.

Step 6: Restart
- In CM, restart all the affected clusters
- In CM, restart the Cloudera Management Services
- On the CM server machine, restart CM itself using: sudo service cloudera-scm-server restart
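After the restarts, you can verify the change took effect from any client machine with openssl. A sketch; the host and port are placeholders for one of your TLS-enabled endpoints (e.g. the Hue load balancer). Once the changes are in place, the -tls1 probe should report "refused" while -tls1_2 reports "ok":

```shell
# Verification sketch: check which TLS protocol versions a service accepts.
# host/port are placeholders; point them at a real TLS-enabled endpoint.
host=hue-lb.example.com; port=8889
probe() {
  # Prints "ok" if a session was negotiated with the given protocol flag,
  # "refused" if the handshake failed (no cipher negotiated).
  if echo | timeout 5 openssl s_client -connect "$host:$port" "$1" 2>/dev/null \
       | grep 'Cipher is' | grep -qv NONE; then
    echo ok
  else
    echo refused
  fi
}
echo "TLSv1:   $(probe -tls1)"
echo "TLSv1.2: $(probe -tls1_2)"
```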
07-23-2018
11:23 PM
1 Kudo
@hadoopNoob, In order to address this issue, you will need to free up space in /run/cloudera-scm-agent/process. To do so, we need to know how much space each process directory is taking and also how old the directories are. You can try listing the directories in order of size with a command like:

du -h --max-depth=1 /run/cloudera-scm-agent/process | sort -h

It is OK to delete directories in /run/cloudera-scm-agent/process provided the directory is not used by a running process. /run/cloudera-scm-agent/process is where the configuration for any role you are starting resides, so if you run out of space there, you will not be able to start processes on that host.
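A sketch combining the size and age checks, run here on a scratch tree so it is safe anywhere; on the real host, point PROC at /run/cloudera-scm-agent/process, and only remove a directory once you have confirmed (e.g. with ps) that its role process is not running:

```shell
# Sketch: find the largest and the stalest process directories.
# PROC points at a scratch tree here; substitute the real agent path.
PROC=/tmp/process-demo
mkdir -p "$PROC/100-old-role" "$PROC/200-current-role"
# Largest directories last, so the biggest offenders are easy to spot:
du -h --max-depth=1 "$PROC" | sort -h
# Candidate stale dirs: untouched for more than 7 days (verify before deleting!)
find "$PROC" -maxdepth 1 -mindepth 1 -type d -mtime +7
```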
07-23-2018
10:16 PM
@AppaRao, We are working on publishing this publicly, but for now, on CM 5.13.1 and higher:

(1) Cloudera Manager: Update java.security for the Java version used by Cloudera Manager:
- Open $JAVA_HOME/jre/lib/security/java.security in an editor
- Add or replace this line:
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, MD5withRSA, DH keySize < 768, 3DES_EDE_CBC

(2) Impala: There are two different mechanisms to get TLS 1.2 support, depending on your operating system.

Impala on RHEL/CentOS 7
In CM, add the following parameter to "Impala Command Line Argument Advanced Configuration Snippet (Safety Valve)":
-ssl_minimum_version=tlsv1.2

On RHEL/CentOS 6, the above flag unfortunately does not work. Add the following instead:

Impala on RHEL/CentOS 6
In CM, add the following parameter to "Impala Command Line Argument Advanced Configuration Snippet (Safety Valve)":
-ssl_cipher_list=DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:!SSLv3:!TLS1