Member since: 09-02-2016
Posts: 523
Kudos Received: 89
Solutions: 42
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2724 | 08-28-2018 02:00 AM |
| | 2696 | 07-31-2018 06:55 AM |
| | 5685 | 07-26-2018 03:02 AM |
| | 2981 | 07-19-2018 02:30 AM |
| | 6465 | 05-21-2018 03:42 AM |
09-28-2017
01:44 PM
1 Kudo
@ebeb If you don't find the configuration by default, you can use an "Advanced Configuration Snippet" as follows. The document also says "Add the following property to the HiveServer2 and Hive metastore's sentry-site.xml", so you can set your property in the locations below; make sure to restart the services afterwards:
1. CM -> Hive -> Configuration -> Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml
2. CM -> Sentry -> Configuration -> Sentry Service Advanced Configuration Snippet (Safety Valve) for sentry-site.xml
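In either safety valve the entry is a standard Hadoop-style XML property block. A minimal sketch follows; the name and value are placeholders, since the thread doesn't quote the actual property, so substitute the one from the document you are following:

```xml
<property>
  <!-- placeholder: use the property name/value from the Sentry document -->
  <name>sentry.example.property</name>
  <value>example-value</value>
</property>
```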
09-26-2017
12:07 PM
@desind I would not recommend changing your settings; instead, pass the memory and Java opts when you execute your jar. Example (these are sample values; change them as needed):
hadoop jar ${JAR_PATH} ${CONFIG_PATH}/filename.xml ${ENV} ${ODATE} mapMem=12288 mapJavaOpts=Xmx9830 redurMem=12288 redurJavaOpts=Xmx9830
Note: mapJavaOpts = mapMem * 0.8; redurJavaOpts = redurMem * 0.8
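The 80% rule in the note can be computed in the wrapper script that launches the job. A minimal sketch, using the same 12288 MB sample values as above (variable names are my own, not from the thread):

```shell
# Derive the JVM heap (Xmx) as 80% of the container memory, then print
# the extra arguments that would be appended to the hadoop jar command.
MAP_MEM=12288                        # map container memory in MB (sample value)
REDUR_MEM=12288                      # reduce container memory in MB (sample value)
MAP_XMX=$(( MAP_MEM * 8 / 10 ))      # 80% of 12288 = 9830
REDUR_XMX=$(( REDUR_MEM * 8 / 10 ))  # 80% of 12288 = 9830
echo "mapMem=$MAP_MEM mapJavaOpts=Xmx$MAP_XMX redurMem=$REDUR_MEM redurJavaOpts=Xmx$REDUR_XMX"
```

Keeping the heap at roughly 80% of the container size leaves headroom for JVM overhead so YARN does not kill the container.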
09-12-2017
07:22 AM
@ni4ni Yes, that is not the right place. According to the link I gave above, this configuration change should go into core-site.xml, so search for "Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml", add/modify as needed, and restart the NameNode.
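For instance, the ipc.maximum.data.length change discussed in this thread would be pasted into that safety valve as follows (the 128 MB value is only an example; the Hadoop default is 64 MB):

```xml
<property>
  <name>ipc.maximum.data.length</name>
  <!-- example: raise the 64 MB (67108864) default to 128 MB -->
  <value>134217728</value>
</property>
```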
09-11-2017
08:54 AM
@ni4ni Looks like this is one of the known issues in Cloudera: https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_upgrade.html#concept_sz3_th1_rq Just to make sure: did you restart the NameNode after increasing ipc.maximum.data.length?
09-11-2017
07:13 AM
@syamsri Apache Kudu is not like Hive; it is more like HDFS. The difference is that HDFS stores data row-wise, whereas Kudu stores it column-wise.
09-01-2017
01:31 PM
@makcuk I went through the steps in detail now and I have a few questions/suggestions that may help you:
1. Grant DATABASE to ROLE and grant URI to ROLE: these two grants are fine, but the ROLE should also be granted to a user/group. Please include this step.
2. You mentioned, "when we try same query in hive, it works well". How did you try it: via beeline, the hive CLI, or Hue?
3. I assume you have enabled Kerberos; if so, make sure the required principals are added for Hive and Spark.
4. If you are creating the table via the CLI/beeline, please check the klist default principal.
5. If you are trying via Hue, make sure CM -> Hue -> Configuration -> Sentry Service is enabled.
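The grants from step 1, including the often-missed role-to-group step, can be sketched in beeline as follows; the role, group, database, and URI names here are placeholders I made up, not values from this thread:

```sql
-- Sketch of the Sentry grants from step 1 (all names are placeholders)
CREATE ROLE etl_role;
GRANT ALL ON DATABASE testdb TO ROLE etl_role;
GRANT ALL ON URI 'hdfs://nameservice1/data/testdb' TO ROLE etl_role;
-- The step that is often missed: attach the role to a group,
-- otherwise no user actually receives the privileges.
GRANT ROLE etl_role TO GROUP etl_users;
```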
09-01-2017
07:24 AM
@makcuk Are you using DataFrames? Then it can be one of the known issues mentioned in the link below; look for the topic "Tables saved with the Spark SQL DataFrame.saveAsTable method are not compatible with Hive": https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_spark_ki.html#concept_qmr_hg5_vt If so, the link has a workaround as well.
08-01-2017
09:56 AM
@Srini4u There are different options:
1. If you have Linux monitoring tools like Nagios, New Relic, Ganglia, etc., you can set up an alert for the file system (/tmp will be mounted on a file system) and trigger a mail if any file system is running out of space.
2. You can create a shell script that triggers a mail based on space availability and schedule it via cron.
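Option 2 can be sketched as a small script run from cron. This is only a sketch; the threshold, mount point, and mail recipient below are assumptions, not values from the thread:

```shell
# Warn (and optionally mail) when a file system crosses a usage threshold.
check_fs() {
  mount=$1
  threshold=$2
  # Use%-column of `df -P`, with the trailing % stripped
  used=$(df -P "$mount" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$used" -ge "$threshold" ]; then
    echo "WARNING: $mount is ${used}% full (threshold ${threshold}%)"
    # Uncomment to mail; the recipient address is a placeholder:
    # mail -s "Disk alert: $mount ${used}% full" admin@example.com < /dev/null
  fi
}

# Example cron entry (every 15 minutes), assuming the script is saved
# as /usr/local/bin/check_tmp.sh:
#   */15 * * * * /usr/local/bin/check_tmp.sh
check_fs /tmp 90
```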
07-30-2017
07:39 PM
@gimp077 I didn't get a chance to test it, but try this, it may help: go to Hue -> Metastore Manager -> db.table -> property -> update comment.
07-30-2017
07:05 PM
@MobinRanjbar Setting the Hive execution engine to Spark is sufficient to run a Hive query on Spark:
set hive.execution.engine=spark;
But where did you set this, and from where did you try to execute your query? There are 3 options:
a. In the CLI: log in to hive/beeline and run the above set command. This is effective only for that session, so you cannot control Oozie with it, because Oozie will be a new session.
b. In Hue: log in to Hue, go to the Hive query editor, and run the above command. This is also session-specific; again, you cannot control Oozie with it.
c. CM -> Hive -> Configuration -> set hive.execution.engine to spark. This is a permanent setup and will control all sessions, including Oozie.
In your case, if you want to try it temporarily for a specific query, run the 'set' command in Oozie itself along with your query, for example:
set hive.execution.engine=spark; select * from test_table;