Member since: 01-09-2019
Posts: 401
Kudos Received: 163
Solutions: 80
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1986 | 06-21-2017 03:53 PM
 | 3016 | 03-14-2017 01:24 PM
 | 1931 | 01-25-2017 03:36 PM
 | 3092 | 12-20-2016 06:19 PM
 | 1509 | 12-14-2016 05:24 PM
01-25-2017
04:38 PM
Something like the command below should work, as long as both databases are in the same Hive instance: create table databaseB.testtable as select * from databaseA.testtable
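A minimal sketch of running that copy from the shell, assuming the hive CLI is on the path and both databases already exist (names match the ones above):

# Copy all rows of databaseA.testtable into a new table in databaseB.
# Both databases must live in the same Hive metastore for this to work.
hive -e "CREATE TABLE databaseB.testtable AS SELECT * FROM databaseA.testtable;"
# Confirm the new table is there.
hive -e "SHOW TABLES IN databaseB;"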
01-25-2017
03:36 PM
Execute does not create a sqoop job; 'sqoop job --create' is what creates it, and you will get an error if you try to create multiple sqoop jobs with the same job name. You need to pass --incremental and --check-column on the job to trigger incremental updates.
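A sketch of how such a job could be created and executed, using a hypothetical MySQL connection string, table, and key column (not from the original thread); note that --incremental and --check-column are supplied when the job is created, not when it is executed:

# Create a named sqoop job; this fails if a job with the same name already exists.
sqoop job --create incr_import_job -- import \
  --connect jdbc:mysql://dbhost/sourcedb \
  --username dbuser -P \
  --table orders \
  --incremental append \
  --check-column id \
  --last-value 0
# Run the job; each run picks up rows whose id is greater than the saved last-value.
sqoop job --exec incr_import_job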
12-20-2016
06:19 PM
1 Kudo
You should see both core-site.xml and hdfs-site.xml at /etc/hadoop/conf.
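A quick check, as a sketch:

# Both client config files should be present on a correctly configured host.
ls -l /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml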
12-20-2016
06:14 PM
It looks like the output of 'hdfs dfs -ls', which tries to list the user's home directory (in your case /user/ritjain), and that directory is missing. Use the hdfs user to create a home directory for the user and change its owner to 'ritjain'. Then, if you run the command again, it should work. Regarding NameNode status, post a screenshot of what you are looking at in the browser; the error from the command suggests the NameNode itself is running fine.
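A sketch of the two commands, assuming you can run them as (or sudo to) the hdfs superuser; the group 'hdfs' in the chown is an assumption, so adjust it to your environment:

# Create the missing home directory as the hdfs superuser.
sudo -u hdfs hdfs dfs -mkdir -p /user/ritjain
# Hand ownership to the end user so 'hdfs dfs -ls' works for them.
sudo -u hdfs hdfs dfs -chown ritjain:hdfs /user/ritjain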
12-14-2016
05:44 PM
As you can see, adding this config in Ambari puts it in mapred-site.xml, where it acts as the default value. Since it is not marked final, setting it from hiveconf takes precedence. I think this is a case of mismatched configs: you are using mapred.job.queuename rather than mapreduce.job.queuename. Try changing it to mapreduce.job.queuename and the job should go to the right queue.
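A sketch of overriding the queue from hiveconf, assuming a hypothetical queue named 'etl' and a hypothetical script my_query.hql:

# Because the property is not marked final in mapred-site.xml, this override wins.
hive --hiveconf mapreduce.job.queuename=etl -f my_query.hql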
12-14-2016
05:24 PM
1 Kudo
Use cases that I have run into for node labels are specialized hardware and licensing requirements for third-party libraries, and I have seen node labels used in production for some of them. The most likely reason they are still not used by the majority of people is that these use cases themselves are not very common.
09-13-2016
05:17 PM
Yes, that is the issue. It has been resolved based on that, but I hadn't updated the question with this answer. Thanks for answering it.
08-30-2016
05:26 PM
Try 'hdfs groups' for that user to see if group mapping is working. If it is, then your configuration should look like the below.
<property>
<name>yarn.scheduler.capacity.queue-mappings</name>
<value>u:user1:queue1,g:group1:queue2,u:%user:%user,u:user2:%primary_group</value>
<description>
Here, <user1> is mapped to <queue1>, <group1> is mapped to <queue2>,
maps users to queues with the same name as user, <user2> is mapped
to queue name same as <primary group> respectively. The mappings will be
evaluated from left to right, and the first valid mapping will be used.
</description>
</property>
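A sketch of the group-mapping check mentioned above, for a hypothetical user 'user1':

# Print the groups HDFS resolves for this user; if the expected group is missing,
# the g:group1:queue2 mapping above will never match.
hdfs groups user1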
08-30-2016
05:13 PM
1 Kudo
You have to give a queue name. When you don't specify one, by default everything tries to hit the 'default' queue, which has now been removed. Either specify a queue name explicitly or create a user-to-queue mapping, so that based on the user the job goes to a specific queue.
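A sketch of naming the queue explicitly at submit time, assuming a hypothetical queue 'analytics' and the HDP examples jar path (adjust the path for your distribution):

# Submit to a specific capacity-scheduler queue instead of the removed default queue.
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi \
  -Dmapreduce.job.queuename=analytics 10 100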
08-09-2016
08:03 PM
1 Kudo
Have you tried using -- --schema? You will need two sets of '--' there.
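A sketch of where the two '--' go, assuming a hypothetical SQL Server source (connection details are placeholders); everything after the bare '--' is passed through to the connector, which is where --schema belongs:

sqoop import \
  --connect "jdbc:sqlserver://dbhost:1433;databaseName=sales" \
  --username dbuser -P \
  --table customers \
  --target-dir /user/dbuser/customers \
  -- --schema dbo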