Member since: 07-30-2013
Posts: 723
Kudos Received: 109
Solutions: 80
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5091 | 04-05-2019 07:00 AM |
| | 9235 | 05-02-2018 12:42 PM |
| | 12101 | 05-02-2018 12:39 PM |
| | 4966 | 08-28-2017 07:55 AM |
| | 2931 | 05-31-2017 08:43 AM |
01-15-2018
02:12 PM
1 Kudo
For the issue of Hue not showing HBase tables, we need to grant privileges to the hue user in the HBase shell. Authenticate with Kerberos using the hbase keytab, then start the HBase shell and run the command below:
grant 'hue', 'RWXC'
Now you will find that the HBase tables show up in Hue. The same process applies to Hive as well.
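Put together, the steps look roughly like this (a minimal sketch; the keytab path and principal below are examples and will differ per cluster):
# authenticate with the HBase service keytab (adjust path/principal for your cluster)
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/$(hostname -f)
# then, inside the HBase shell, grant the hue user read/write/execute/create rights
hbase shell
grant 'hue', 'RWXC'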
11-02-2017
01:57 AM
1 Kudo
I think you are missing this setting, which was mentioned here: [desktop] use_new_editor=true Hope it helps.
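For reference, this flag lives in the [desktop] section of hue.ini (or in the Hue safety-valve snippet if the config is managed through Cloudera Manager), roughly:
[desktop]
# enable the new (Hue 4 style) editor
use_new_editor=true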
09-18-2017
08:44 AM
@SINGHabhijeet wrote: Hi Romainr, thanks for linking the relevant JIRAs and highlighting the Hue 4.0 features. We are using Hue 3.12 and it does not have the workflow.xml import/export feature.
> If some XML is missing you could tell us what; that way we could improve the editor and remove the need to manually edit the XML.
I was looking at adding <arg> </arg> in the workflow.xml as shown here using Hue, but was unable to do so. Can you please point out how we can achieve this?
I ran into this error and resolved it by just using Arguments. Earlier I was trying to combine <command> and <arg>, and since it is an xs:choice in the schema, I just used <arg> to get it working.
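For anyone hitting the same thing, the working shape is roughly the following (a minimal sketch assuming a Sqoop-style action, where the schema offers a choice between a single <command> and repeated <arg> elements; the connect string and table name are placeholders):
<action name="sqoop-import">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <!-- one token per <arg>, instead of a single <command> string -->
        <arg>import</arg>
        <arg>--connect</arg>
        <arg>jdbc:mysql://dbhost/mydb</arg>
        <arg>--table</arg>
        <arg>mytable</arg>
    </sqoop>
    <ok to="End"/>
    <error to="Kill"/>
</action>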
09-12-2017
12:09 PM
The partitioner is not invoked when used in an Oozie map-reduce action (workflow created using Hue), but it works as expected when run with the hadoop jar command from the CLI. I have implemented secondary sort in MapReduce and am trying to execute it using Oozie (from Hue). Although I have set the partitioner class in the properties, the partitioner is not being executed, so I am not getting the expected output. The same code runs fine when run with the hadoop command. Here is my workflow.xml:
<workflow-app name="MyTriplets" xmlns="uri:oozie:workflow:0.5">
<start to="mapreduce-598d"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="mapreduce-598d">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.output.dir</name>
<value>/test_1109_3</value>
</property>
<property>
<name>mapred.input.dir</name>
<value>/apps/hive/warehouse/7360_0609_rx/day=06-09-2017/hour=13/quarter=2/,/apps/hive/warehouse/7360_0609_tx/day=06-09-2017/hour=13/quarter=2/,/apps/hive/warehouse/7360_0509_util/day=05-09-2017/hour=16/quarter=1/</value>
</property>
<property>
<name>mapred.input.format.class</name>
<value>org.apache.hadoop.hive.ql.io.RCFileInputFormat</value>
</property>
<property>
<name>mapred.mapper.class</name>
<value>PonRankMapper</value>
</property>
<property>
<name>mapred.reducer.class</name>
<value>PonRankReducer</value>
</property>
<property>
<name>mapred.output.value.comparator.class</name>
<value>PonRankGroupingComparator</value>
</property>
<property>
<name>mapred.mapoutput.key.class</name>
<value>PonRankPair</value>
</property>
<property>
<name>mapred.mapoutput.value.class</name>
<value>org.apache.hadoop.io.Text</value>
</property>
<property>
<name>mapred.reduce.output.key.class</name>
<value>org.apache.hadoop.io.NullWritable</value>
</property>
<property>
<name>mapred.reduce.output.value.class</name>
<value>org.apache.hadoop.io.Text</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>1</value>
</property>
<property>
<name>mapred.partitioner.class</name>
<value>PonRankPartitioner</value>
</property>
<property>
<name>mapred.mapper.new-api</name>
<value>False</value>
</property>
</configuration>
</map-reduce>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/> When running using hadoop jar command, I set the partitioner class using JobConf.setPartitionerClass API. Not sure why my partitioner is not executed when running using Oozie. Inspite of adding <property>
<name>mapred.partitioner.class</name>
<value>PonRankPartitioner</value>
</property>
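One thing worth checking here (an educated guess, not something confirmed in this thread): mapred.partitioner.class is the old-API (org.apache.hadoop.mapred / JobConf) property name. If the job classes are actually written against the new org.apache.hadoop.mapreduce API, the action would need the new-api flags set to true and the new property name instead, roughly:
<property>
    <name>mapred.mapper.new-api</name>
    <value>true</value>
</property>
<property>
    <name>mapred.reducer.new-api</name>
    <value>true</value>
</property>
<property>
    <!-- new-API equivalent of mapred.partitioner.class -->
    <name>mapreduce.job.partitioner.class</name>
    <value>PonRankPartitioner</value>
</property>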
08-28-2017
08:21 AM
Thank you everyone... This issue was fixed after I opened up 0.0.0.0 (all IP addresses) and all ports in the security group.
08-14-2017
02:09 AM
Hello, we have the exact same issue. I would also be interested to know when you're planning the 5.12.1 release. Is there a roadmap for minor/major releases? Thank you!
06-15-2017
10:45 PM
1 Kudo
Hi Jtasipit, It works!! Thank you!
06-15-2017
08:21 PM
Thanks. My pain point is this: when I finish a Spark, Java, Hive, or Sqoop job, I test it on the Linux system and it mostly runs fine. Then I deploy the job in Oozie through Hue, and it always throws errors like ClassNotFound. I know this is because some jars are missing, so I go searching for which jars are missing and, once found, add those jar files. So my point is that I don't know up front which jar files are needed when I deploy these jobs in Oozie; I have to find the missing jars by trial and error. That is my pain point. When these jobs are executed directly on the Linux system they run normally, I think because they load every jar on the classpath there, so this kind of issue does not come up.
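For what it's worth, the usual way around this is to ship the job's dependency jars alongside the workflow instead of discovering them one by one (a sketch; the HDFS paths below are made-up examples):
# jars placed in the lib/ directory next to workflow.xml are added to the action's classpath
hdfs dfs -mkdir -p /user/me/myapp/lib
hdfs dfs -put my-dependency-1.jar my-dependency-2.jar /user/me/myapp/lib/
# and/or point the job at a shared jar directory and the Oozie sharelib via job.properties:
# oozie.libpath=${nameNode}/user/me/common-jars
# oozie.use.system.libpath=true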
06-12-2017
08:47 AM
Hi guys, is there an alternative to the --jars option of spark-submit in the Spark notebook in Hue?
06-01-2017
12:35 PM
Thanks for looking into it and providing the patch. It is not usable 1:1 for the CDH 5.11 Hue, but I'll find a way to adjust it a bit for our version.