Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4189 | 09-16-2016 11:56 AM |
| | 1749 | 09-13-2016 08:47 PM |
| | 6943 | 09-06-2016 11:00 AM |
| | 4174 | 08-05-2016 11:51 AM |
| | 6246 | 08-03-2016 02:58 PM |
05-27-2016
02:01 PM
1 Kudo
Hi @nyadav As per the doc here, when running Spark on YARN: "The number of executors. Note that this property is incompatible with spark.dynamicAllocation.enabled. If both spark.dynamicAllocation.enabled and spark.executor.instances are specified, dynamic allocation is turned off and the specified number of spark.executor.instances is used."
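To illustrate the interaction the doc describes, here is a minimal spark-defaults.conf sketch; the values are placeholders for illustration, not from the original question:

```
# spark-defaults.conf (illustrative values)
spark.dynamicAllocation.enabled   true

# If the line below is also set, dynamic allocation is turned off
# and exactly 4 executors are requested instead:
spark.executor.instances          4
```

Leave spark.executor.instances unset if you want dynamic allocation to stay in effect.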
05-27-2016
12:22 PM
@omkar pathallapalli Can you please share the workflow.xml file?
05-27-2016
12:17 PM
1 Kudo
Hi @MarcdL Did you try with --conf "spark.executor.extraJavaOptions"?
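For reference, a hedged sketch of passing executor JVM options on the spark-submit command line; the JVM flags, class name, and jar are placeholders, not from the original thread:

```
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dmy.prop=value" \
  --class org.example.MyApp \
  myapp.jar
```

Note that spark.executor.extraJavaOptions cannot be used to set heap size; use --executor-memory (spark.executor.memory) for that.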
05-27-2016
09:24 AM
1 Kudo
@Smart Solutions This seems to be a bug in HDP 2.4.0 with Kerberos + Hive HTTP transport mode enabled; I'm able to reproduce it locally. Please contact HWX official support for a possible fix.
05-27-2016
12:43 AM
@Jason Knaster I just tested this scenario on HBase 1.1.2 and it worked.
[root@ey ~]# cat /usr/hdp/current/hbase-master/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///test/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/test/zookeeper</value>
  </property>
</configuration>
[root@ey hbase-master]# ./bin/start-hbase.sh
[root@ey hbase-master]# hbase shell
hbase(main):001:0> create 't1','c1'
0 row(s) in 1.4790 seconds
=> Hbase::Table - t1
hbase(main):002:0> put 't1','123','c1:id','123m'
0 row(s) in 0.1500 seconds
hbase(main):003:0> scan 't1'
ROW COLUMN+CELL
123 column=c1:id, timestamp=1464183553280, value=123m
1 row(s) in 0.0340 seconds
Then stopped the standalone HBase, compressed the hbase directory, and transferred it to another node.
[root@ey hbase-master]# tar -cvf test.tar /test
[root@ey ~]# scp test.tar root@AD:/root/
On Node "AD"
[root@AD ~]# tar xvf test.tar
[root@AD ~]# mv /root/test /
Copied the same hbase-site.xml from the primary HBase node.
[root@AD hbase-master]# ./bin/start-hbase.sh
[root@AD hbase-master]# hbase shell
hbase(main):001:0> list
TABLE
t1
1 row(s) in 0.2860 seconds
=> ["t1"]
hbase(main):002:0> scan 't1'
ROW COLUMN+CELL
123 column=c1:id, timestamp=1464183553280, value=123m
1 row(s) in 0.1290 seconds
hbase(main):003:0>
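The archive-and-restore steps above can be sketched as a small self-contained script; the temp directories and file stand in for /test and the actual HBase data, and the scp step (which needs a second node) is left as a comment:

```shell
#!/bin/sh
# Sketch of the migration above: archive a data directory with tar using
# relative paths, move the archive, and extract it at the same location
# on the other side. SRC stands in for /test on the source node, DEST
# for / on the target node.
set -e
SRC=$(mktemp -d)
DEST=$(mktemp -d)
ARCHIVE="$SRC.tar"

echo "hbase-data" > "$SRC/file"                      # stand-in for the HBase files
tar -cf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"
# scp "$ARCHIVE" root@AD:/root/                      # network copy, done by hand in the post
tar -xf "$ARCHIVE" -C "$DEST"                        # restore under the target root
cat "$DEST/$(basename "$SRC")/file"                  # prints: hbase-data
```

The key detail is archiving with -C so paths inside the tarball are relative, letting you choose the extraction root on the target node.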
05-26-2016
11:35 PM
@Jason Knaster I don't think there is any direct method available in the current HDP release. Please see HBASE-7912 as Ted mentioned.
05-26-2016
10:11 PM
1 Kudo
Hi @Ankit A In this case you need to split your sqoop "import" statement into a set of <arg> elements, as mentioned by @Christine, so that Oozie can pass it to Sqoop. Here is an example workflow. Link
<workflow-app xmlns="uri:oozie:workflow:0.2" name="sqoop-freeform-wf">
<start to="sqoop-freeform-node"/>
<action name="sqoop-freeform-node">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/sqoop-freeform"/>
<mkdir path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data"/>
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<arg>import</arg>
<arg>--connect</arg>
<arg>jdbc:hsqldb:file:db.hsqldb</arg>
<arg>--username</arg>
<arg>sa</arg>
<arg>--password</arg>
<arg></arg>
<arg>--verbose</arg>
<arg>--query</arg>
<arg>select TT.I, TT.S from TT where $CONDITIONS</arg>
<arg>--target-dir</arg>
<arg>/user/${wf:user()}/${examplesRoot}/output-data/sqoop-freeform</arg>
<arg>-m</arg>
<arg>1</arg>
<file>db.hsqldb.properties#db.hsqldb.properties</file>
<file>db.hsqldb.script#db.hsqldb.script</file>
</sqoop>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Sqoop free form failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
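For completeness, a minimal job.properties sketch supplying the parameters the workflow above references; the hostnames, ports, and application path are placeholders, not values from the original thread:

```
# job.properties (illustrative values)
nameNode=hdfs://namenode.example.com:8020
jobTracker=resourcemanager.example.com:8032
queueName=default
examplesRoot=examples
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/sqoop-freeform
```

oozie.use.system.libpath=true is typically needed so the Sqoop sharelib (JDBC drivers, Sqoop jars) is available to the action.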
05-26-2016
06:57 PM
@Tajinderpal Singh Please check your ResourceManager UI for queue usage; see the attached screenshot (screen-shot-2016-05-26-at-75641-pm.png). Also, please don't kill any job until you confirm whether it is critical or just a hung job.
05-26-2016
04:00 PM
@Bigdata Lover I think impersonation with Livy will work given a set of prerequisites; please refer to this article published by @vshukla: https://community.hortonworks.com/articles/34424/apache-zeppelin-on-hdp-242.html For the general roadmap, please also see https://cwiki.apache.org/confluence/display/ZEPPELIN/Zeppelin+Roadmap
05-26-2016
03:45 PM
Hi @Mamta Chawla, please feel free to accept the answer that helped you, so that this thread can be closed. Thanks