Member since: 04-03-2017
Posts: 164
Kudos Received: 8
Solutions: 4
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 2242 | 03-09-2021 10:47 PM
 | 3274 | 12-10-2018 10:59 AM
 | 5878 | 12-02-2018 08:55 PM
 | 8684 | 11-28-2018 10:38 AM
08-25-2019
11:20 PM
Also, if you run the same job from another node, does it fail with the same message? Can you please confirm? Regards, Nitish
08-25-2019
11:15 PM
Hi, Did you install the Sqoop metastore? Did you make any changes to the Sqoop configuration via CM? Regards, Nitish
08-25-2019
10:46 PM
Hi, Can you please install a Sqoop gateway on that node? We always recommend installing the Sqoop gateway on the host from which you run Sqoop commands. Link: https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cm_mc_sqoop1_client.html
Also, can you please confirm the following:
1. Are you able to run normal sqoop import commands from that host?
2. Can you run sqoop job --list and share the output?
3. Do you have enough disk space on that host? Sqoop needs to create local directories for its metastore.
Regards, Nitish
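For illustration, those checks might look like this when run on the Sqoop host (a minimal sketch; <username> is a placeholder as in the commands above):

# confirms the sqoop client works from this host and lists the jobs stored in the metastore
sqoop job --list
# checks free space and permissions on the directories mentioned above
df -h /var/lib/sqoop /home/<username>
ls -ld /var/lib/sqoop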
08-25-2019
10:23 PM
Hi Akhila, Which command is this the output of? Can you please run the below commands from the host and share the output?
## ls -ltr /var/lib/sqoop
## ls -ltra /home/<username>
Regards, Nitish
08-25-2019
09:17 PM
Hi, For this I would request you to place the keytab file in HDFS and just reference its name in <spark-opts>. Example:
<spark-opts> --principal <abc> --keytab <abc.keytab> </spark-opts>
<file><path of HDFS keytab></file>
NOTE: Do add the <file> tag pointing to the location of the keytab on HDFS. This will localize the keytab file so the Oozie Spark action can use it. Kindly try the above and let us know how it goes. Regards, Nitish
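A minimal sketch of the HDFS side of this (the user name, principal, and paths below are placeholders, not values from this thread):

# upload the keytab to HDFS so the Oozie launcher can localize it for the Spark action
hdfs dfs -mkdir -p /user/myuser/keytabs
hdfs dfs -put /etc/security/keytabs/myuser.keytab /user/myuser/keytabs/
hdfs dfs -chmod 600 /user/myuser/keytabs/myuser.keytab
# the spark action would then reference it along these lines:
#   <spark-opts>--principal myuser@EXAMPLE.COM --keytab myuser.keytab</spark-opts>
#   <file>/user/myuser/keytabs/myuser.keytab</file>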
08-25-2019
09:11 PM
Hi, It looks like the Spark job that is started is failing. Can you please share the logs for the Spark job? The application ID is shown in the snapshot as the Spark job is initiated, but after that it fails.
## yarn logs -applicationId <application ID> -appOwner <owner name>
Kindly copy the logs into a notepad file and attach it to the case. This will give us more leads. Regards, Nitish
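For reference, a sketch of collecting those logs (the application ID and owner below are placeholders):

# find the application ID if it is not visible in the snapshot
yarn application -list -appStates FAILED,KILLED
# pull the aggregated logs into a file that can be attached to the case
yarn logs -applicationId application_1234567890123_0042 -appOwner myuser > spark_app_logs.txt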
08-25-2019
09:05 PM
Hi, Can you share the output of the below commands from the host on which you are running the Sqoop job?
## ls -ltr /var/lib/sqoop
## ls -laR /home/<username>
This will help us check the permissions on these folders. Regards, Nitish
12-12-2018
05:54 AM
Hi, This permission error happens because Oozie starts the shell action as the yarn user rather than the hbase user. Regards, Nitish
12-10-2018
10:59 AM
1 Kudo
Hi, It looks like you have a non-Kerberized cluster, can you please confirm? When you run a shell action via Oozie on a non-Kerberized cluster, the container is launched as the yarn user, which is why you get a permission error saying the yarn user is not able to write. To overcome this, please add the below line to the shell script and re-run the job.
## export HADOOP_USER_NAME=hbase
Setting this variable in the shell script makes the commands run as the hbase user instead of the yarn user. Hope this helps. Regards, Nitish
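A minimal sketch of the shell-action script with this export in place (the hdfs command is only an illustration):

#!/bin/bash
# run the Hadoop commands below as the hbase user instead of the yarn user
export HADOOP_USER_NAME=hbase
# example of a write that would otherwise fail with a yarn-user permission error
hdfs dfs -mkdir -p /hbase-backups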
12-06-2018
11:01 PM
Hi, The most likely reason for this job failure is that YARN is not allowing you to submit the job to this queue: either the queue does not exist, the placement rules are causing an issue, or this user has not been given permission to submit jobs to the queue.
1. Can you share a snapshot of the YARN dynamic resource pools you have created?
2. Can you also share a snapshot of the YARN placement policies you have set?
3. Which user is submitting the Sqoop job?
4. Kindly share a snapshot of the users you have granted access to the queue where you are submitting the job.
This will help us check whether the issue is with the queue, the permissions, or the placement policies. Regards, Nitish
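As an additional quick check from the command line, the submitting user can also inspect their queue ACLs directly (a sketch; run it as the user who submits the Sqoop job):

# shows which queue operations (such as submitting applications) the current user is allowed
mapred queue -showacls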