Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1543 | 07-09-2019 12:53 AM
 | 9293 | 06-23-2019 08:37 PM
 | 8052 | 06-18-2019 11:28 PM
 | 8677 | 05-23-2019 08:46 PM
 | 3473 | 05-20-2019 01:14 AM
07-20-2014
07:42 AM
2 Kudos
Your local Hive CLI JVM heap size is insufficient even for building and submitting the job. Please try raising it as below, and retrying:

~> export HADOOP_CLIENT_OPTS="-Xmx2g"
~> hive -e "select count(station_id) from aws_new;"
07-20-2014
07:41 AM
CDH Hive is based on Apache Hive and does support .hiverc for Hive CLI. If you are asking about Beeline instead, support for such a file loader is incoming in a future CDH5 release.
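For illustration, a .hiverc is simply a file of HiveQL statements that the Hive CLI executes at startup; a minimal sketch placed at ~/.hiverc (the settings below are examples, not requirements):

```sql
-- ~/.hiverc: statements run automatically when the Hive CLI starts
SET hive.cli.print.header=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
```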
07-20-2014
06:37 AM
1 Kudo
Since Oozie lacks knowledge of where your HBase configs lie, you will need to pass the client hbase-site.xml file (placed somewhere on HDFS, by copying from /etc/hbase/conf/hbase-site.xml on any HBase gateway node) via the <job-xml>…</job-xml> option. Alternatively, try the below command instead (it will not be sufficient for secured clusters, which need further properties), replacing zk-host1,zk-host2,zk-host3 with your actual 3 hosts appropriately:

sqoop import -Dhbase.zookeeper.quorum=zk-host1,zk-host2,zk-host3 --connect jdbc:oracle:thin:@XXX:port/XXX --username XXX --password XXX --table XXX -m 1 --incremental lastmodified --last-value '2014-06-23' --check-column XXX --append --hbase-table XXX --column-family info --hbase-row-key XXX --hbase-bulkload
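For reference, a hedged sketch of where the <job-xml> element sits inside an Oozie Sqoop action (the action name, HDFS path, and command arguments here are placeholders, not values from the original post):

```xml
<action name="sqoop-import">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <!-- client hbase-site.xml previously copied to HDFS -->
        <job-xml>/user/oozie/conf/hbase-site.xml</job-xml>
        <command>import --connect jdbc:oracle:thin:@XXX:port/XXX --hbase-table XXX</command>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
</action>
```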
07-20-2014
12:42 AM
1 Kudo
Please post your cluster's memory configuration, such as the resource MB offered by the NodeManagers, and the individual memory settings for the MapReduce AM, Map, and Reduce tasks. It appears that the cluster is unable to schedule more than 1 or 2 containers at a time, causing the job to hang indefinitely, because Oozie already runs 2x AMs that grab 2x containers.
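For context, these are the properties in question; the values below are purely illustrative, not recommendations:

```xml
<!-- yarn-site.xml: memory each NodeManager offers to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>

<!-- mapred-site.xml: AM, Map and Reduce container sizes -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
```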
07-19-2014
10:06 PM
2 Kudos
You receive the error because the 'hbase' user does not have a login shell assigned to it. You can set a shell for the 'hbase' user on the machine, to allow direct 'su'-based login to that user, by following http://www.cyberciti.biz/faq/howto-set-bash-as-your-default-shell/

However, if your goal is simply to use the 'hbase' user for running superuser-level commands, we instead recommend using 'sudo'-style commands. For example:

~> sudo -u hbase hbase hbck
~> sudo -u hbase hbase shell

You can also invoke a shell as the 'hbase' user in certain cases, via:

~> sudo -u hbase /bin/bash
07-15-2014
02:59 AM
Thanks. It is now working in distributed mode.
06-06-2014
11:26 AM
1 Kudo
Stopping a job may not be easily controllable, unless perhaps you are running the job in a FairScheduler pool and you deallocate all resources out of that pool (the config change can be made dynamically). This would, however, also stop all other jobs in that pool. Killing a job from within a Map Task is possible, though only in insecure environments: you can use the JobClient API and call its killJob method. Since this call relies on authentication, it may not work out of the box on a Kerberos-secured cluster.
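As a rough sketch of that approach, using the old MR1 client API (the class name and helper method here are assumed for illustration; this only runs inside a live Hadoop task, where "mapred.job.id" is populated, and will fail on a Kerberos-secured cluster):

```java
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class SelfKill {
    // Called from inside a map task: look up our own job and kill it.
    public static void killCurrentJob(JobConf conf) throws Exception {
        JobClient client = new JobClient(conf);
        // The framework sets "mapred.job.id" in each task's configuration.
        RunningJob job = client.getJob(JobID.forName(conf.get("mapred.job.id")));
        if (job != null) {
            job.killJob(); // relies on the caller's auth; blocked when secured
        }
    }
}
```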
05-09-2014
01:45 AM
1 Kudo
Thanks for the tips. I had missed enabling SLA support in the oozie-site config:

<property>
  <name>oozie.services.ext</name>
  <value>
    org.apache.oozie.service.EventHandlerService,
    org.apache.oozie.sla.service.SLAService
  </value>
</property>
<property>
  <name>oozie.service.EventHandlerService.event.listeners</name>
  <value>
    org.apache.oozie.sla.listener.SLAJobEventListener,
    org.apache.oozie.sla.listener.SLAEmailEventListener
  </value>
</property>

Now I can see my SLA details in Hue and am looking at ActiveMQ for handling the JMS notifications.
05-07-2014
04:09 AM
Hi, I downloaded the Hadoop 2.3 source (from http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-2.3.0/), compiled it to run under 64 bits, and used the following method for setting the input split size:

public static void setMaxInputSplitSize(Job job, long size) {
    job.getConfiguration().setLong(SPLIT_MAXSIZE, size);
}

Best
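For comparison, stock Hadoop 2.x already exposes an equivalent setter on FileInputFormat, so a custom build may not be needed just for this; a sketch (not from the original post, and requiring the Hadoop client jars on the classpath):

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        // Cap each input split at 64 MB; this writes
        // mapreduce.input.fileinputformat.split.maxsize into the job config.
        FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);
    }
}
```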
05-04-2014
08:58 AM
Thanks again.