Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9065 | 09-17-2018 06:33 AM
 | 2370 | 08-29-2018 07:48 AM
 | 3364 | 08-28-2018 12:38 PM
 | 2851 | 08-03-2018 05:42 AM
 | 2576 | 07-27-2018 04:00 PM
09-27-2016 07:54 AM
1 Kudo
@anjul tiwari Please share the output of the command jar tvf sparkPhoenixHbase-1.0-SNAPSHOT-job.jar.
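For example, a quick way to check whether the driver class is actually packaged in the jar (a minimal sketch; the class name SparkPhoenixHbase is taken from the other post in this thread):

```
# List the jar contents and look for the driver class
jar tvf sparkPhoenixHbase-1.0-SNAPSHOT-job.jar | grep -i SparkPhoenixHbase
```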
09-27-2016 07:31 AM
@anjul tiwari Seems to be related to Jira SPARK-4298. Please try the spark-submit command as below:

```
spark-submit \
  --verbose --master local[4] \
  --class SparkPhoenixHbase \
  com/training/bigdata/sparkPhoenixHbase-1.0-SNAPSHOT-job.jar
```
09-06-2016 07:23 AM
3 Kudos
@Gaurab D Please run kinit before running the sqoop script, then try again.
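A minimal sketch of the sequence, assuming a keytab-based login (the principal, keytab path, and connection details are placeholders):

```
# Obtain a Kerberos ticket before launching Sqoop
# (principal and keytab path are placeholders)
kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM

# Confirm the ticket was granted
klist

# Now run the Sqoop job (connection details are placeholders)
sqoop import --connect jdbc:mysql://dbhost/mydb --table mytable --username myuser -P
```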
09-01-2016 07:32 AM
@Ashnee Sharma The issue seems to be related to an incorrect HDFS URI at the time of table creation. Try creating a new sample table and check.
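One way to double-check the URI, sketched below (the table name, host, and paths are placeholders):

```
# Print the HDFS URI the cluster is actually configured with
hdfs getconf -confKey fs.defaultFS

# Recreate a sample table pinned to that URI (names and paths are placeholders)
hive -e "CREATE TABLE sample_check (id INT) LOCATION 'hdfs://namenode-host:8020/tmp/sample_check';"
```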
08-29-2016 10:50 AM
1 Kudo
@Ashnee Sharma It seems that one of the nodes has not been updated with the correct status. Try restarting one of the NameNodes and check. Hope this helps. Thanks and regards, Sindhu
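To see which NameNode reports which status before restarting, a quick check (nn1/nn2 are the HA service IDs from dfs.ha.namenodes; yours may differ):

```
# Query the HA state reported by each NameNode
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```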
08-24-2016 09:18 AM
@da li This issue is observed when the proxyuser is set incorrectly for Oozie. Set the properties below under the HDFS component in Ambari (Configs tab), then restart the affected components.

```
hadoop.proxyuser.oozie.hosts = *
hadoop.proxyuser.oozie.groups = *
```
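After the restart, the effective values can be verified from the command line (a minimal check):

```
# Confirm the proxyuser settings are active
hdfs getconf -confKey hadoop.proxyuser.oozie.hosts
hdfs getconf -confKey hadoop.proxyuser.oozie.groups
```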
08-17-2016 06:40 AM
@Harini Yadav Seems to be related to the Jira below, where the issue is seen when "hive.server2.authentication" is set to NONE: https://issues.apache.org/jira/browse/HIVE-12754 The workaround is to set "hive.server2.authentication" to "NOSASL" and try again.
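For example, a Beeline connection matching the NOSASL setting might look like this (host and port are placeholders for your HiveServer2 instance):

```
# auth=noSasl in the JDBC URL must match hive.server2.authentication=NOSASL
beeline -u "jdbc:hive2://hs2-host:10000/default;auth=noSasl"
```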
08-16-2016 02:30 PM
1 Kudo
@srinivasa rao This is a limitation of Sqoop: with sqoop import-all-tables, only the default schema of the connecting user ID is used for extracting the tables. The workaround is to import the tables one by one using sqoop import.
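A per-table import would look roughly like this (connection string, credentials, and table name are placeholders):

```
# Import one table at a time instead of import-all-tables
sqoop import \
  --connect "jdbc:mysql://dbhost/mydb" \
  --username myuser -P \
  --table mytable \
  --target-dir /user/myuser/mytable
```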
08-11-2016 10:44 AM
Yes, it would be the same.
08-11-2016 10:06 AM
1 Kudo
@Bala Vignesh N V Yes, the difference in count is expected. ORC stores the table data as groups of rows called stripes, along with auxiliary information in a file footer; the default stripe size is 250 MB. Because ORC is a binary format, wc -l on an ORC file counts newline bytes rather than rows, so it will differ from the actual number of rows in the table.
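To see the true row count recorded in the ORC metadata, rather than a newline count, a sketch (the file path is a placeholder):

```
# Dump ORC metadata: stripe layout and the actual number of rows
hive --orcfiledump /apps/hive/warehouse/mydb.db/mytable/000000_0

# By contrast, wc -l only counts newline bytes in the binary file
hdfs dfs -cat /apps/hive/warehouse/mydb.db/mytable/000000_0 | wc -l
```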