Member since: 01-08-2018
Posts: 133
Kudos Received: 31
Solutions: 21
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11750 | 07-18-2018 01:29 AM |
| | 2148 | 06-26-2018 06:21 AM |
| | 3749 | 06-26-2018 04:33 AM |
| | 1933 | 06-21-2018 07:48 AM |
| | 1365 | 05-04-2018 04:04 AM |
06-05-2018
12:55 AM
According to https://issues.apache.org/jira/browse/HIVE-14217, Druid integration was fixed in Hive 2.2.0. CDH 5 is based on Hive 1.x, as mentioned before, and I cannot find the specific patch in the release notes. CDH 6 (which is based on Hive 2.1.0) will probably support it. We have to wait.
05-29-2018
10:01 AM
Have you checked that the truststore file has been copied to the host?
05-29-2018
03:23 AM
Everything seems to be OK. I have the same configuration and cannot reproduce your issue. I have CM 5.14.3 installed, so my jar is /usr/share/cmf/lib/agent-5.14.3.jar. What version are you using?
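You can check which agent jar is on your host with (path as in a default package install):
$ ls /usr/share/cmf/lib/agent-*.jar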
05-28-2018
07:52 AM
1 Kudo
Hi Krishna, a rolling upgrade of Java is not supported. According to https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_cm_upgrading_to_jdk8.html: "Cloudera does not support a rolling upgrade to JDK 1.8. You must shut down the entire cluster." You should either upgrade all nodes to Java 8, or downgrade them all to Java 7.
05-28-2018
07:46 AM
Regarding the query: "ETL_DEV" is probably the display name, so instead of "AND clusterName = ETL_DEV" you should try "AND clusterDisplayName = ETL_DEV". The fact that nothing is displayed is an indication that the configuration is not complete. Can you check that the directory specified in "Cloudera Manager Container Usage Metrics Directory" has been created in HDFS, and that the user defined in "Container Usage MapReduce Job User" has full permissions on it? If not, you will need to re-run the Create YARN Container Usage Metrics Dir command.
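A quick way to check, assuming the configured metrics directory is /tmp/cm/container_usage and the job user is cmjobuser (both placeholders; substitute the values from your configuration):
$ sudo -u hdfs hdfs dfs -ls /tmp/cm/container_usage
$ sudo -u hdfs hdfs dfs -chown -R cmjobuser /tmp/cm/container_usage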
05-28-2018
01:39 AM
You can check the last of the points you have listed, as it is almost certain the host will fail to heartbeat if you have not done that step manually (set use_tls=1 and restart the agent): if "Use TLS Encryption for Agents" is enabled in Cloudera Manager (Administration -> Settings -> Security), ensure that /etc/cloudera-scm-agent/config.ini has use_tls=1 on the host being added, restart the corresponding agent, and click the Retry link. If you did this already, then make sure that the configured keystore and truststore files have been copied to the new host.
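On the new host, that check looks something like this (default package-install paths):
$ grep use_tls /etc/cloudera-scm-agent/config.ini
use_tls=1
$ sudo service cloudera-scm-agent restart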
05-28-2018
01:34 AM
This is normal behavior. You can either create a dynamic folder name (e.g. output_dir_timestamp), although you may end up with a lot of directories, or add an HDFS action that deletes the HDFS directory just before the sqoop action. I recommend the latter approach; a sketch follows.
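A minimal sketch of such a cleanup action (the action names, transitions, and path are placeholders, not taken from your workflow):
<action name="cleanup">
    <fs>
        <delete path="${nameNode}/user/me/output_dir"/>
    </fs>
    <ok to="sqoop-action"/>
    <error to="kill"/>
</action>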
05-27-2018
11:43 PM
It is strange; your config seems to be OK, and I don't know what the problem is. I would recommend checking the output of:
$ hostname -f
$ host -t A 10.142.0.4
If the output is the FQDN, then probably something is wrong with your version of Dnstest. Can you also check the /etc/nsswitch.conf file? Usually the hosts line is:
hosts: files dns
Can you check whether something else comes before that, e.g. "sss files dns"? The order matters: if "files" is first, then the local /etc/hosts is checked first. See http://man7.org/linux/man-pages/man5/nsswitch.conf.5.html
05-23-2018
05:45 AM
Can you post your /etc/hosts file?
05-11-2018
12:02 AM
@RajeshBodolla unfortunately you are correct, and I realized it the hard way. I upgraded CDH during the previous week, and this week I was trying to configure some wildcard topics, only to find out that this is not possible. When I wrote the previous post, the release notes clearly mentioned that it is supported. I had copied this part, which said:

* Wildcard usage for Kafka-Sentry components
You can specify an asterisk (*) in a Kafka-Sentry command for the TOPIC component of a privilege to refer to any topic in the privilege. Supported with CDH 5.14.1.
You can also use an asterisk (*) in a Kafka-Sentry command for the CONSUMERGROUPS component of a privilege to refer to any consumer group in the privilege. This is useful when used with Spark Streaming, where a generated group.id may be needed. Supported with CDH 5.14.1.

Now this part is gone from the documentation. I apologize for not testing it before. But as you can see in http://archive.cloudera.com/cdh5/cdh/5/sentry-1.5.1-cdh5.14.2.CHANGES.txt, it is still mentioned as committed:

commit e9efe1b3b38912af8799d37a67679295d98ebe63
Author: amishra <amishra@cloudera.com>
Date: Thu Feb 8 15:16:15 2018 +0530
CDH-57131 CDH-61471: Add consumergroup and topic wildcard for Kafka privilege validation
Change-Id: I19cc4b8b047eac668721e85131287f56b6f66fcd
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/30142
Tested-by: Jenkins User
Reviewed-by: Viktor Somogyi <viktor.somogyi@cloudera.com>
Reviewed-by: Sergio Pena <sergio.pena@cloudera.com>
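For reference, the kind of grant this wildcard support was meant to allow looks something like the following (role name is hypothetical; privilege syntax as documented for the kafka-sentry CLI):
$ kafka-sentry -gpr -r test_role -p "Host=*->Topic=*->action=read"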
05-07-2018
12:42 AM
To be honest, I have not used LZO in Spark. I suppose that you have Spark running under YARN and not stand-alone. In that case, the first thing I would check is that LZO is configured in the YARN available codecs ("io.compression.codecs"). Moreover, have you configured HDFS as described in https://www.cloudera.com/documentation/enterprise/latest/topics/cm_mc_gpl_extras.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--7c3e ?
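The codec list should include the LZO classes; a sketch of what to look for (class names are those shipped in the hadoop-lzo / GPL Extras package):
io.compression.codecs=org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec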
05-04-2018
04:04 AM
2 Kudos
Just select "None" in the Sentry Service section of Kafka. You don't have to delete the rules: they are stored in Sentry, and since Kafka will no longer ask for them, the rules are simply unused.
05-04-2018
02:27 AM
1 Kudo
You are using Sqoop 1. Sqoop 1 is not a service; it is a tool that submits a job to YARN. So, apart from your stdout and the YARN logs, there are no sqoop logs. The number of mappers (-m 4) means that your job will open 4 connections to your database. If there is no indication in the logs (I mean the YARN logs) of an out-of-memory condition or an illegal value in a column, then you should check that your DB can accept 4 concurrent connections.
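For context, a typical Sqoop 1 invocation looks like this (connection string, table, and user are placeholders):
$ sqoop import --connect jdbc:mysql://dbhost/mydb --table mytable --username dbuser -P -m 4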
05-03-2018
11:55 PM
According to Cloudera (https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#concept_ihg_vf4_j1b):

"JDK 8: All JDK 8 updates, from the minimum required version, are supported in Cloudera Manager/CDH 5.3 and higher unless specifically excluded. Updates above the minimum that are not listed are supported but not tested."

So, regarding your question, 8u171 is supported but not tested. "Not tested" means that you may be the first to face some issue. In such a case, where the issue is due to a bug in the JDK version, Cloudera will add that version as "excluded". If you want to be safe, use the latest tested version, mentioned by @csguna.
04-30-2018
12:08 AM
The error is about CDH, not Cloudera Manager. The specific version of Cloudera Kafka needs both of them to be at least version 5.13.
04-30-2018
12:04 AM
This is not expected at all. Tables should be created under the database directory, e.g.:
/user/hive/warehouse/database1.db/table1
/user/hive/warehouse/database2.db/tableA
This case is very weird, and in such cases I suspect user error. Can you provide the command you used?
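You can also ask Hive where it thinks the table lives (connection URL and table name are placeholders):
$ beeline -u jdbc:hive2://hiveserver:10000 -e "DESCRIBE FORMATTED database1.table1" | grep -i location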
04-27-2018
12:18 AM
There is another topic that may be related to your issue: http://community.cloudera.com/t5/Interactive-Short-cycle-SQL/After-upgrading-to-cdh-5-14-2-Impala-daemon-stopped-suddenly/m-p/66472#M4357 In a few words, Impala 2.11 has an issue that causes minidumps, and a ticket was opened: https://issues.apache.org/jira/browse/IMPALA-6882 Can you check if you have the same problem? Does your CPU have the "popcnt" flag?
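You can check for the flag like this; a non-zero count means it is present:
$ grep -c popcnt /proc/cpuinfo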
04-26-2018
11:55 PM
1 Kudo
There is no need to have a separate area only for Oozie workflows. I understand that we were used to it, but I think the new interface is better once you get familiar with it. If you go to Documents and select the one you want to copy, press the three-vertical-dots icon and a menu will appear. If you want to see only the Oozie workflows, you can customize your view: on the left side of the search button there is a drop-down menu (by default "All") where you can select only Oozie workflows to be displayed.
04-24-2018
04:12 AM
The link you have provided refers to possible data loss when disabling security, but it is in the "authorization" part of security; enabling/disabling Kerberos is in the scope of "authentication". Personally, I have enabled and disabled it with no data loss; there is no reason to lose data. But if you have defined authorization rules, then it is a different matter.
04-23-2018
01:36 AM
According to your logs:
18/04/18 15:09:05 WARN security.UserGroupInformation: PriviledgedActionException as:m0162109 (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/04/18 15:09:05 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/04/18 15:09:05 WARN security.UserGroupInformation: PriviledgedActionException as:m0162109 (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/04/18 15:09:05 WARN security.UserGroupInformation
this is a Kerberos issue. If you are using a Hue notebook, Kerberos authentication is done by the hue user, which is also allowed to impersonate your account "m0162109". If you are using another notebook, then you probably do the authentication yourself when you start the notebook.
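In the latter case, the usual first check is whether you hold a valid ticket (username taken from your logs):
$ klist
$ kinit m0162109
klist shows whether a valid TGT exists; kinit obtains a fresh one.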
04-20-2018
05:56 AM
I second that this is an issue with Kerberos. If kinit didn't help, then try to use:
--principal PRINCIPAL Principal to be used to login to KDC, while running on
secure HDFS.
--keytab KEYTAB The full path to the file that contains the keytab for the
principal specified above. This keytab will be copied to
the node running the Application Master via the Secure
Distributed Cache, for renewing the login tickets and the
delegation tokens periodically.
Moreover, some more info regarding how you use pyspark etc. would be helpful.
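For example (principal and keytab path are placeholders):
$ spark-submit --master yarn --principal myuser@EXAMPLE.COM --keytab /home/myuser/myuser.keytab my_app.py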
04-18-2018
06:11 AM
There should be no impact. It is the same private key. You just encrypt it with a password.
... View more
04-18-2018
02:42 AM
You can add a password to your private key file. Suppose that your private key file is test.pem; its contents should look like:
-----BEGIN PRIVATE KEY-----
. . .
-----END PRIVATE KEY-----
or
-----BEGIN RSA PRIVATE KEY-----
. . .
-----END RSA PRIVATE KEY-----
Run the following command:
$ openssl rsa -des3 -in test.pem -out test1.pem -passout pass:test
This command will create the test1.pem file, which is protected by the password. Its contents will be similar to:
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,3716DAF995B742A4
. . .
-----END RSA PRIVATE KEY-----
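To verify that the key is now password-protected, you can run the following; it should prompt for the passphrase:
$ openssl rsa -in test1.pem -check -noout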
04-18-2018
02:18 AM
There are various possible causes for that. Have you checked connectivity from the agent node to Cloudera Manager? The agent connects to CM on port 7182. Have you tested that you get a reply and that there is no network or firewall issue?
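A quick connectivity test from the agent node (the CM hostname is a placeholder):
$ nc -vz cm-server.example.com 7182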
04-18-2018
02:08 AM
According to the output of "systemctl status cloudera-scm-server", the service is down. Have you checked both cloudera-scm-server.out and cloudera-scm-server.log? Can you provide those files?
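In a default install they live under /var/log/cloudera-scm-server/, e.g.:
$ tail -n 100 /var/log/cloudera-scm-server/cloudera-scm-server.log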
04-18-2018
01:57 AM
1 Kudo
No, you don't have to stop all Management Services. Stopping only the Navigator Metadata Server is enough, but you can also stop the Navigator Audit Server if you want to be on the safe side.
04-18-2018
01:54 AM
If you delete the log files that end with a number (e.g. mgmt-cmf-mgmt-NAVIGATORMETASERVER-my.node.com.log.out.1), then you will have no issue. Don't delete the ones with the ".log" suffix, because they are open for writing by the services. Suppose that you have:
Max Log Size = 200MB
Maximum Log File Backups = 10
That means that each time your .log file reaches 200MB it will be rolled, ten times in total (it will be copied to .log.1, .log.2, ..., .log.10). So in total, the log files of a service can occupy up to (10 * 200MB) + 200MB = 2200MB. Depending on your disk size, you can reduce one or both of these settings, depending on what is better for you. E.g. if you don't want such big files, you can set Max Log Size = 100MB, which means (10 * 100MB) + 100MB = 1100MB; just by halving this one parameter, you save 1100MB of disk space. The same applies to all services.
04-17-2018
10:01 AM
Use the example I wrote above; it will change the owner of the /inputnew directory in HDFS to "cloudera":
sudo -u hdfs hdfs dfs -chown cloudera /inputnew
04-17-2018
09:24 AM
Found it: https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html "Additionally, since Cloudera does not support mixed environments, all nodes in your cluster must be running the same major JDK version. Cloudera only supports JDKs provided by Oracle." In any case, since you are doing an upgrade, you will not be in this mixed state for long.