Member since
07-01-2015
460
Posts
78
Kudos Received
43
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1344 | 11-26-2019 11:47 PM
 | 1301 | 11-25-2019 11:44 AM
 | 9470 | 08-07-2019 12:48 AM
 | 2171 | 04-17-2019 03:09 AM
 | 3483 | 02-18-2019 12:23 AM
10-10-2024
07:35 AM
1 Kudo
Don't forget to run this: chmod 644 /usr/share/java/mysql*.jar
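As a sketch of what that permission change does, here is the same idea demonstrated on a throwaway file standing in for the connector jar (the real jar lives under /usr/share/java, which may not exist on every host):

```shell
# Demo on a temporary file standing in for mysql-connector-java.jar
# (hypothetical path; substitute the real jar under /usr/share/java).
tmpjar=$(mktemp /tmp/mysql-connector-demo.XXXXXX.jar)
chmod 600 "$tmpjar"            # too restrictive: service accounts cannot read it
chmod 644 "$tmpjar"            # owner read/write, world-readable
stat -c '%a' "$tmpjar"         # prints 644 (GNU stat)
rm -f "$tmpjar"
```

644 is what matters here: the service that loads the JDBC driver usually runs as a different user than the one that installed the jar, so it needs world-read access.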
05-17-2023
11:48 PM
Where do you get/access this kind of UI? I'm stuck with the user-group mapping when working with Sentry too.
12-18-2022
07:25 PM
1 Kudo
@Tomas79 To get the truststore password and location, use the CM API call below:

curl -s -k -u admin:admin 'https://CM HOSTNAME(FQDN):7183/api/v45/certs/truststorePassword'

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
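A minimal sketch of the endpoints involved, with a hypothetical CM host and default credentials; note that the API version (v45 above) and the TLS port 7183 depend on your CM release and TLS configuration, and the companion truststore endpoint shown here is my assumption for retrieving the truststore itself:

```shell
# Build the two CM API calls (hypothetical host/credentials; nothing is
# contacted here -- the commands are only printed for inspection).
CM_HOST="cm.example.com"                      # replace with your CM FQDN
BASE="https://${CM_HOST}:7183/api/v45/certs"
for endpoint in truststorePassword truststore; do
  echo "curl -s -k -u admin:admin '${BASE}/${endpoint}'"
done
```

Printing the commands first is a safe way to double-check the URL before sending credentials to it with -k (which disables certificate verification).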
11-03-2022
10:15 PM
@AcharkiMed Is it not possible to compute incremental statistics on KUDU tables? Do I have to run 'compute stats' every day to recompute statistics for all the data in the table?
01-17-2022
01:04 AM
@Mayarn, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
11-24-2021
10:26 AM
Hello @MahendraDevu, did you resolve the error in HUE SAML? We are getting this in CDP 7.1.7 after upgrade; SAML was working in CDH 5.16 HUE before the upgrade:

[05/Apr/2019 16:37:03 -0400] views ERROR SAML Identity Provider is not configured correctly: certificate key is missing!

UPDATE: Resolved this issue by making the IdP <md:EntityDescriptor entityID the same as that in the metadata.xml we specified in the HUE Advanced Configuration Snippet (hue_safety_valve.ini) metadata_file. There was a mismatch between the IdP value and what was in the metadata file.
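A quick way to spot that kind of mismatch is to pull the entityID straight out of the metadata file and compare it with what the IdP reports. This is a sketch using a throwaway file and a hypothetical entityID, standing in for the real metadata.xml referenced by metadata_file:

```shell
# Write a throwaway IdP metadata file (stand-in for the real metadata.xml
# configured via metadata_file in hue_safety_valve.ini).
METADATA=$(mktemp /tmp/idp-metadata.XXXXXX.xml)
cat > "$METADATA" <<'EOF'
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://idp.example.com/saml/metadata">
</md:EntityDescriptor>
EOF
# Extract the entityID Hue's SAML library will see; if this differs from
# the value the IdP actually advertises, SAML login fails.
grep -o 'entityID="[^"]*"' "$METADATA"
rm -f "$METADATA"
```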
09-24-2021
03:53 AM
Hi @Tomas79, while launching spark-shell you need to add the spark.yarn.access.hadoopFileSystems parameter. Also ensure the dfs.namenode.kerberos.principal.pattern parameter is set to * in the core-site.xml file. For example:

# spark-shell --conf spark.yarn.access.hadoopFileSystems="hdfs://c1441-node2.coelab.cloudera.com:8020"
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/09/24 07:23:25 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
Spark context Web UI available at http://c2441-node2.supportlab.cloudera.com:4040
Spark context available as 'sc' (master = yarn, app id = application_1632395260786_0004).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.0.7.1.6.0-297
/_/
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_232)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val textDF = spark.read.textFile("hdfs://c1441-node2.coelab.cloudera.com:8020/tmp/ranga_clusterb_test.txt")
textDF: org.apache.spark.sql.Dataset[String] = [value: string]
scala> textDF.show(false)
+---------------------+
|value |
+---------------------+
|Hello Ranga, |
| |
+---------------------+
01-19-2021
09:45 AM
Upgrading to a newer version of Impala will solve most of the scalability issues you'd see on Impala 2.9, largely due to the improvements described here: https://blog.cloudera.com/scalability-improvement-of-apache-impala-2-12-0-in-cdh-5-15-0/.
07-24-2020
08:09 AM
Maybe it's because of a different version (I'm using HDP and Hadoop 3), but this doesn't work as described here.

First, if you try to set a variable in the "hiveConf:" namespace you will get an error like this:

Error while processing statement: Cannot modify hiveConf:test at runtime

You have to use the "hivevar:" namespace for this instead:

:2> set hivevar:test=date_sub(current_date, 5);

More importantly, Hive won't expand the variable's value definition as shown here:

:2> set hivevar:test;
+-----------------------------------------+
| hivevar:test=date_sub(current_date, 5)  |
+-----------------------------------------+

So the INSERT will not be interpreted as you stated, but instead as:

INSERT INTO TABLE am_temp2 PARTITION (search_date=date_sub(current_date, 5))

and this, for some reason, is not supported in Hive and gives a compilation error:

FAILED: ParseException line 1:50 cannot recognize input near 'date_sub' '('

It would be very useful to insert data into static partitions using pre-calculated variable values like this, from functions or SELECT queries, but I still haven't found how to do this in HiveQL. As a reference, this seems to be (at least partially) related to this: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/hive-overview/content/hive_use_variables.html
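One workaround (a sketch, not tested against a live Hive) is to evaluate the expression outside Hive and substitute the resulting literal, since Hive pastes variable text verbatim without evaluating functions. The am_temp2 table and search_date column come from the thread; the source table am_temp1 and the HiveServer2 URL are hypothetical:

```shell
# Compute the partition date in the shell (GNU date), then build the
# statement with the literal value substituted in.
SEARCH_DATE=$(date -d '5 days ago' +%Y-%m-%d)
SQL="INSERT INTO TABLE am_temp2 PARTITION (search_date='${SEARCH_DATE}') SELECT * FROM am_temp1"
echo "$SQL"
# Against a live HiveServer2 you would pass the literal the same way:
#   beeline -u jdbc:hive2://hs2.example.com:10000 \
#     --hivevar search_date="${SEARCH_DATE}" -f insert.hql
```

Because the variable now holds a plain date literal rather than a function call, Hive's static-partition parser accepts it.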
06-10-2020
02:42 AM
I was not stopping the CDH and Cloudera Management services before deploying the Kerberos configurations. Once I stopped the CDH and Cloudera Management services and then deployed the Kerberos configurations, it worked for me and /etc/krb5.conf was updated on all hosts in the cluster. This issue is resolved. Thanks.