Member since: 03-23-2015
Posts: 1288
Kudos Received: 114
Solutions: 98
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5131 | 06-11-2020 02:45 PM |
| | 4598 | 04-21-2020 03:38 PM |
| | 3593 | 02-27-2020 05:51 PM |
| | 3581 | 01-23-2020 03:41 AM |
| | 23313 | 01-14-2020 07:14 PM |
10-06-2019 03:20 PM
@pramana , Looks like you are using Ubuntu "bionic", which is not supported in CDH/CM 5.16.x; Bionic is only supported from CDH 6.2 onwards. https://docs.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#c516_supported_os https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_os_requirements.html#c63_supported_os So you need to either try the 6.2 version of CM/CDH, or change the version of your Ubuntu OS. Hope that helps. Cheers Eric
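A quick way to confirm which release track applies is to check the host's codename first. A minimal sketch; the `codename` value below is hard-coded for illustration, while on a real host you would read it from `/etc/os-release` (e.g. `VERSION_CODENAME`):

```shell
#!/bin/sh
# Hypothetical value; on a real host: codename=$(. /etc/os-release; echo "$VERSION_CODENAME")
codename="bionic"

# Map the codename to the CDH/CM support matrix referenced above
if [ "$codename" = "bionic" ]; then
  verdict="needs CDH/CM 6.2+"
else
  verdict="check the 5.16.x support matrix"
fi
echo "$codename: $verdict"
```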
10-06-2019 03:11 PM
@priyanka1_munja, Are you saying that the same partition appears multiple times? Did you notice the extra space before some of the partition keys? For example, "03-04-2015" vs " 03-04-2015"? I think that is the reason for the duplicates. Cheers Eric
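Leading whitespace like this is easy to spot mechanically. A minimal sketch, assuming a hypothetical `SHOW PARTITIONS` output with a `dt=` partition key (the names and values are made up for the example); Hive treats `" 03-04-2015"` and `"03-04-2015"` as distinct partitions:

```shell
#!/bin/sh
# Hypothetical partition listing, e.g. captured via:
#   beeline -u '<jdbc-url>' --silent=true -e 'SHOW PARTITIONS mytable'
partitions='dt=03-04-2015
dt= 03-04-2015'

# Count partition values that begin with a space after the key
dupes=$(printf '%s\n' "$partitions" | grep -c '^dt= ')
echo "partitions with a leading space: $dupes"
```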
10-04-2019 04:02 AM
1 Kudo
Hmm, you missed the database name in the connection string; try the one below: beeline -u 'jdbc:hive2://slave1:10000/default;ssl=true;sslTrustStore=/var/run/cloudera-scm-agent/process/72-hive-HIVESERVER2/cm-auto-host_keystore.jks;trustStorePassword=yeap4IhJzRvK5gBGVMeTahoL21BNmBF2TSi46pbQTP6' Cheers Eric
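For reference, a sketch of how that URL breaks down into its parts. The host, port, and keystore path are the ones from this thread; the password is replaced with a placeholder here:

```shell
#!/bin/sh
HOST=slave1
PORT=10000
DB=default    # the database segment that was missing after the port
STORE=/var/run/cloudera-scm-agent/process/72-hive-HIVESERVER2/cm-auto-host_keystore.jks

# Assemble the JDBC URL; session properties follow the database, separated by ';'
URL="jdbc:hive2://${HOST}:${PORT}/${DB};ssl=true;sslTrustStore=${STORE};trustStorePassword=<password>"
echo "$URL"
# beeline -u "$URL"   # run on the actual cluster; quotes keep the URL intact
```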
10-04-2019 12:04 AM
@Mekaam, Can you please add quotes around the JDBC connection string? Like below: beeline -u 'jdbc:hive2://slave1:10000;ssl=true;sslTrustStore=/var/run/cloudera-scm-agent/process/72-hive-HIVESERVER2/cm-auto-host_keystore.jks;trustStorePassword=yeap4IhJzRvK5gBGVMeTahoL21BNmBF2TSi46pbQTP6' I believe that without quotes it will cause issues. If it is still not working, check the HS2 log to see what it complains about on the server side. Cheers Eric
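To spell out why the quotes matter: an unquoted `;` terminates the shell command, so everything after it would be parsed as a separate command instead of part of the JDBC URL. A minimal demonstration with a shortened URL:

```shell
#!/bin/sh
# Single quotes keep the whole string, ';' included, as one argument
url='jdbc:hive2://slave1:10000;ssl=true'
echo "$url"
# Unquoted, the shell would run `beeline -u jdbc:hive2://slave1:10000`
# and then try to execute `ssl=true` as a second command.
```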
09-30-2019 04:33 PM
1 Kudo
@parthk , There is no date locked in yet for the new Impala release that will support Ranger. However, I would like to ask why you do not want to have Kerberos. Authorization does not work properly without authentication in front of it. Think about an online application: you surely want users to log in first, before you can say what level of access they should have. The same applies in the CDH world. Kerberos acts as the front-end login, and Sentry/Ranger acts as the back-end authorization control. So without Kerberos, you are allowing everyone to access CDH. I strongly suggest you implement Kerberos before Sentry; the same story applies to Ranger. Cheers Eric
09-25-2019 04:02 PM
@aohl, Thanks for sharing the details about your resolution of the issue. I am glad that it has been resolved, and I am sure others will benefit from your findings here. Cheers Eric
09-16-2019 03:14 PM
@ChineduLB No, you can't; you can only save data into temp tables, or simply use a sub-query instead. Cheers Eric
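A sketch of the two alternatives in HiveQL; the table and column names (`orders`, `dt`) are made up for illustration, and the temporary table is only visible for the duration of the session:

```sql
-- Option 1: materialize the intermediate result in a session-scoped temp table
CREATE TEMPORARY TABLE tmp_orders AS
SELECT * FROM orders WHERE dt = '2019-09-16';

SELECT COUNT(*) FROM tmp_orders;

-- Option 2: inline the same result as a sub-query (no table created at all)
SELECT COUNT(*) FROM (
  SELECT * FROM orders WHERE dt = '2019-09-16'
) t;
```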
09-10-2019 11:55 PM
Hi Aaron, This is not the log file that I was after. Can you add: LogPath=/path/to/dir LogLevel=6 into the ODBC driver configuration file and test again? The trace logs for the driver should then be written under /path/to/dir, and we can see what errors show up there. Cheers
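For context, a sketch of what the driver configuration fragment might look like. The file location and section name vary by driver and install, and `/path/to/dir` is a placeholder for a writable directory of your choosing; in Simba-based Cloudera ODBC drivers, LogLevel ranges from 0 (off) to 6 (most verbose):

```ini
; Hypothetical excerpt of the ODBC driver configuration file
[Driver]
LogLevel=6          ; 6 = most verbose (trace) logging
LogPath=/path/to/dir ; directory the driver writes its trace logs into
```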
09-10-2019 11:54 PM
Hmm, it looks like Spark is not able to reach the NameNode (NN). Are both of your NameNodes up and running? Can you run normal HDFS commands and operations? Can you also share the spark-defaults.conf file, which is under the /etc/spark/conf or /etc/spark2/conf directory, for review? Thanks
09-10-2019 12:06 AM
@DataMike, I am afraid there is no such option that I am aware of; you would have to stop and start them one by one manually. Cheers Eric