Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 19542 | 03-03-2020 08:12 AM
 | 10578 | 02-28-2020 10:43 AM
 | 3181 | 12-16-2019 12:59 PM
 | 2515 | 11-12-2019 03:28 PM
 | 4301 | 11-01-2019 09:01 AM
07-23-2018
10:16 PM
@AppaRao, We are working on publishing this publicly, but for now on CM 5.13.1 and higher:

(1) Cloudera Manager: update java.security for the Java version used by Cloudera Manager:
- Open $JAVA_HOME/jre/lib/security/java.security in an editor
- Add or replace this line:

jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, MD5withRSA, DH keySize < 768, 3DES_EDE_CBC

(2) Impala: there are two different mechanisms to get TLS 1.2 support, depending on your operating system.

Impala on RHEL/CentOS 7: in CM, add the following parameter in Impala's safety valve (Impala Command Line Argument Advanced Configuration Snippet (Safety Valve)):

-ssl_minimum_version=tlsv1.2

On RHEL/CentOS 6, the above flag unfortunately does not work. Add the following to the same safety valve instead:

-ssl_cipher_list=DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:!SSLv3:!TLS1
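The java.security edit in step (1) can also be scripted. Here is a minimal sketch; it operates on a temp copy so it is safe to try anywhere, but in practice SEC_FILE would point at $JAVA_HOME/jre/lib/security/java.security (the demo file contents below are made up for illustration):

```shell
# Sketch: set jdk.tls.disabledAlgorithms in a copy of java.security.
# In a real run: SEC_FILE="$JAVA_HOME/jre/lib/security/java.security"
SEC_FILE="$(mktemp)"
printf '%s\n' '# demo file' 'jdk.tls.disabledAlgorithms=SSLv3, RC4' > "$SEC_FILE"

# Drop any existing (possibly commented-out) setting, then append the new line.
sed -i '/^[# ]*jdk\.tls\.disabledAlgorithms=/d' "$SEC_FILE"
echo 'jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, MD5withRSA, DH keySize < 768, 3DES_EDE_CBC' >> "$SEC_FILE"

grep -c 'jdk.tls.disabledAlgorithms' "$SEC_FILE"   # prints 1: exactly one setting remains
```

Remember to restart Cloudera Manager after changing java.security so the JVM picks up the new value.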
07-20-2018
04:48 PM
1 Kudo
@Anudas, This is the cause of the exception:

java.sql.SQLException: The server time zone value 'EDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the serverTimezone configuration property) to use a more specifc time zone value if you want to utilize time zone support.

I did find some evidence that the latest Connector/J 8.0.x drivers may cause some trouble in this area. See: https://bugs.mysql.com/bug.php?id=85816 We have encountered such issues internally and are recommending against using Connector/J 8. In cases where the exception occurs with the 8 driver, the version 5 drivers work fine. Cloudera is looking at documenting this publicly and considering how to address the issue long term.
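If you must stay on Connector/J 8, the serverTimezone property the exception mentions can be set directly on the JDBC URL. A sketch of what that looks like (the host, port, database name, and time zone below are placeholders, not values from your setup):

```shell
# Sketch: append serverTimezone to a Connector/J JDBC URL.
# dbhost/3306/scm and the zone name are illustrative placeholders.
BASE_URL="jdbc:mysql://dbhost:3306/scm"
TZ_PARAM="serverTimezone=America/New_York"
JDBC_URL="${BASE_URL}?${TZ_PARAM}"
echo "$JDBC_URL"   # jdbc:mysql://dbhost:3306/scm?serverTimezone=America/New_York
```

Use an IANA zone name (e.g. America/New_York) rather than an abbreviation like EDT, since abbreviations are exactly what the driver refuses to resolve.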
07-20-2018
04:25 PM
Hi @yassine24, Please share exactly what you tried to do in your code and the exact error, stack, and/or JSON response so we can help.
07-20-2018
04:22 PM
@alexmc6, While no replication commands are running, does the query for HiveReplicationCommand with STATE STARTED return any results? If not, this is indeed quite a mystery. The only curiosity here for me is that you are replicating from one Hive service to another on the same cluster. I can't explain how that would lead to this particular condition, so it may not be involved. The code shows that the error you get comes directly from finding a STARTED HiveReplicationCommand that is configured to copy the same database/tables, so the answer must be there.
07-20-2018
04:08 PM
@manuh, I recommend you start a new thread, since the answer to this one doesn't really make sense. Enabling Kerberos for web consoles will not help resolve a PKIX error (which occurs when a client cannot find trust for the signer of the certificate of the server to which the client is connecting). Enabling Kerberos for web consoles will not solve TLS problems; something else that was done must have resolved the issue.

Enabling Kerberos Authentication for Web Consoles will require that any clients connecting to them use SPNEGO to authenticate. This requires browser configuration and sometimes OS-level and krb5.conf configuration changes. It is best to plan this move carefully and make sure you know how to configure clients to use SPNEGO before you enable Kerberos for web consoles.

If you are having any problems similar to what was described in this thread, please give us some background on what you are trying to do and what isn't working.

Thanks, Ben
07-19-2018
09:30 PM
1 Kudo
@yongie, If you are seeing 80% CPU in the Cluster CPU chart, expand that chart and take a closer look at which host or hosts are reporting the highest CPU. Then you can look at the charts for those hosts and find out whether CDH processes are responsible for the CPU usage. You can also use tools like "top" to see which processes are taking the most CPU. Note that cpu_percent_across_hosts takes CPU % over all your hosts into account; it reflects everything running on those hosts, not only CDH roles.
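For the per-host check, something along these lines works on most Linux systems with procps (a sketch; "top -b -n 1" is a batch-mode alternative):

```shell
# List this host's top CPU consumers, highest first.
# pid,pcpu,comm and --sort are standard Linux procps ps options.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 6
```

If the busiest processes turn out to be CDH role daemons, the per-role charts in CM will tell you more; if not, the 80% is coming from something outside the cluster's workloads.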
07-19-2018
10:15 AM
@alexmc6, I know this is frustrating; however, in order to isolate the cause, we need to be clear about the details. When you run a Hive Replication command, Cloudera Manager checks for any other "active" Hive Replication commands. If it finds any Hive Replication Commands in a STARTED state, based on a query of its database, it then checks their arguments and returns the error you see if there are any conflicts.

The question becomes: why is a Hive Replication command detected as STARTED if no replication command is running? From what you mention, it could be that something is out of sync if there are no Hive commands listed as running in CM. I would recommend restarting Cloudera Manager when you have an opportunity (from the command line: service cloudera-scm-server restart).

My feeling is that something transient is going on here that may be very complex to debug over a community board. Hopefully restarting will clear out stale JVM objects and rebuild state afresh from the database, thereby eliminating the condition that led to the issue. If this does not help, then let us know... it is possible to mimic the database query that is responsible for detecting active commands.
07-19-2018
06:34 AM
@Huriye, You need to view the pcap file in Wireshark and check to see what's going on. The exception that is displayed does not tell the entire story, stack-wise. Usually there is a "Caused by" entry that shows "connection refused" or something else; I'm not sure why we don't see that in this case.

I suggest using some test code outside Cloudera Manager, since the stack trace indicates there are problems connecting via Java. There are example sources everywhere; this one I think is pretty simple: http://www.vogella.com/tutorials/MySQLJava/article.html You could compile it after making the JDBC URL look like the one formulated by scm_prepare_database.sh. At the least, this may give us the full stack trace.
07-18-2018
03:12 PM
@Huriye, Try using tcpdump to see if a connection is established with MySQL, for example:

# tcpdump -i any -w ~/mysql.pcap port 3306

Run that while using scm_prepare_database.sh and find out if the connection is being made. You can open mysql.pcap in Wireshark to view the packets, etc. If the connection is being made at the TCP level but the server is rejecting it for some reason, check the MySQL logs for clues.
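Before capturing packets, a quick TCP-level probe can tell you whether the port is reachable at all. A minimal sketch using bash's /dev/tcp feature (mysqlhost and 3306 below are placeholders for your MySQL host and port):

```shell
# Quick TCP reachability probe; no tcpdump or mysql client required.
# Usage: check_port <host> <port>   (prints "open" or "closed")
check_port() {
  timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null \
    && echo "open" || echo "closed"
}
check_port mysqlhost 3306
```

If this reports "closed", the problem is network/firewall/bind-address level and no JDBC-side fix will help; if "open", the pcap and MySQL logs become the interesting evidence.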
07-18-2018
02:40 PM
1 Kudo
@yassine24, This shows how to update a service configuration: http://cloudera.github.io/cm_api/docs/python-client/#configuring-services-and-roles You need to update the config with the attribute and value. The configuration is in JSON format, but the safety valve you want contains XML. An example of how to update a safety valve (HDFS in this case) via the REST API:

curl -iv -X PUT -H "Content-Type:application/json" -H "Accept:application/json" -d '{"items":[{ "name": "core_site_safety_valve","value": "<property><name>hadoop.proxyuser.ztsps.users</name><value>*</value></property><property><name>hadoop.proxyuser.ztsps.groups</name><value>*</value></property>"}]}' http://admin:admin@10.1.0.1:7180/api/v12/clusters/cluster/services/hdfs/config

I am pretty sure you can pass the JSON as shown above in the -d argument to the python client's update_config() method as well.
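Because the XML has to be embedded as a JSON string value, quoting is the usual stumbling block. One way to assemble the payload step by step before handing it to curl (property names are taken from the example above; the cm-host URL in the comment is illustrative):

```shell
# Sketch: build the safety-valve JSON payload, then send it with curl.
# The XML must stay on one line inside the JSON string value.
XML='<property><name>hadoop.proxyuser.ztsps.users</name><value>*</value></property>'
PAYLOAD="{\"items\":[{\"name\": \"core_site_safety_valve\",\"value\": \"$XML\"}]}"
echo "$PAYLOAD"
# Then, for example:
# curl -X PUT -H "Content-Type:application/json" -d "$PAYLOAD" \
#   http://admin:admin@cm-host:7180/api/v12/clusters/cluster/services/hdfs/config
```

Note this example XML contains no double quotes, so it drops into the JSON string cleanly; if your snippet does contain them, they must be escaped as \" before embedding.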