Member since: 10-21-2014
Posts: 6
Kudos Received: 2
Solutions: 0
09-27-2017 04:05 AM
2 Kudos
I had this problem with Sqoop as well and found a way to solve it without downloading a jar from somewhere on the internet. Sqoop builds its classpath from several helper scripts (hbase classpath, hcat -classpath, accumulo classpath) and adds their output to its own classpath. It seems Sqoop does not complain when it cannot find hcat and simply skips it silently. I solved the problem by installing hive-hcatalog; now Sqoop also picks up the Hive classpath via hcat -classpath and can use the JSONObject class that is included in hive-exec.jar (CDH 5.11).
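For reference, a quick way to check this on a node (a minimal sketch; the package name and the grep pattern are the usual CDH defaults and may differ on your install):

  # If the hcat wrapper is missing, Sqoop silently skips the HCatalog part
  # of its classpath instead of failing.
  which hcat || echo "hcat not found - install hive-hcatalog"

  # After installing hive-hcatalog, hive-exec.jar (which bundles the
  # JSONObject class) should show up in the classpath Sqoop picks up:
  hcat -classpath | tr ':' '\n' | grep hive-exec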
06-01-2017 12:35 PM
Thanks for looking into it and providing the patch. It is not usable 1:1 for the CDH 5.11 Hue, but I'll find a way to adjust it a bit for our version.
05-31-2017 01:27 AM
Hi,

I configured a group in Hue (3.9.0+cdh5.11.0+5033-1.cdh5.11.0.p0.41 deb package) that is only allowed to use the notebook and beeswax apps. With this setup the user only has the option to click Hive at the top, where you would normally have an option to select a query editor. In this configuration the link goes to /beeswax and shows a warning with the following text: "This is the old SQL Editor, it is recommended to instead use: Hive". If I also add Impala, the option to select a query editor appears and the links go to the notebook editor as they should.

Is this a bug in Hue, or do I have to configure something else to force the notebook editor even if only hive/beeswax is allowed and no other query editor?

Thanks,
Jürgen
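For completeness, what I looked at on the Hue host before posting (a sketch only; the hue.ini path is the usual one for a deb-package install, and app_blacklist is just the setting I suspected, not a confirmed cause):

  # Sketch only - path and key name are assumptions for a deb-package install.
  # Apps listed in app_blacklist never show up, regardless of group permissions;
  # the group permissions themselves are managed in the Hue admin UI (useradmin).
  grep -n 'app_blacklist' /etc/hue/conf/hue.ini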
Labels: Cloudera Hue
10-23-2014 12:33 AM
Thanks for the answer! I also found the workaround after some time, but you were faster posting it. I'll open a Jira for it so that it gets fixed in newer versions.
10-21-2014 05:45 AM
Hi,

we updated Sqoop from CDH 5.0.1 to CDH 5.2 and now it fails every time with a "GC overhead limit exceeded" error. The old version was able to import over 14 GB of data through one mapper; the import now fails when a mapper gets too many rows. I checked a heap dump and the memory was completely used up by over 3.5 million rows of data (-Xmx 1700M). The connector is mysql-jdbc version 5.1.33 and the job imports the data as a text file into a Hive table.

Can I avoid this with a setting, or is this a bug that should go to Jira?

Thank you,
Jürgen
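For anyone hitting the same thing, this is the direction I am testing as a workaround (a sketch only; host, database, table and the memory numbers are placeholders, and the Connector/J cursor-fetch properties are my assumption about why the rows pile up in the mapper heap):

  # Sketch of a possible workaround, not a confirmed fix for the CDH 5.2 behaviour.
  # 1) Give each mapper more heap via the standard MapReduce properties.
  # 2) Ask Connector/J for a server-side cursor (useCursorFetch + defaultFetchSize)
  #    instead of buffering the whole result set in the mapper's memory.
  sqoop import \
    -Dmapreduce.map.memory.mb=4096 \
    -Dmapreduce.map.java.opts=-Xmx3500m \
    --connect "jdbc:mysql://dbhost/mydb?useCursorFetch=true&defaultFetchSize=1000" \
    --table mytable \
    --hive-import --as-textfile \
    -m 1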
Labels: Apache Sqoop