Member since: 06-03-2016
Posts: 77
Kudos Received: 7
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3841 | 09-08-2016 01:16 PM
 | 2300 | 07-28-2016 10:42 AM
05-06-2020
03:24 AM
This could be a permission issue. Check the HiveServer2 log for the error; the log is in /var/log/hive on the node you connect to with Hive.
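A minimal way to look for the error, assuming the default log directory mentioned above (the file name hiveserver2.log is an assumption and may differ on your install):

# run on the node you connect to; adjust the file name to whatever exists in /var/log/hive
tail -n 200 /var/log/hive/hiveserver2.log | grep -iE "permission|denied"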
09-09-2016
03:03 AM
1 Kudo
@Rajib Mandal I get it, but usually Sqoop jobs are kicked off by a scheduler. As I said, Sqoop already takes advantage of YARN containers and depends on MapReduce. The YARN distributed shell is not the appropriate way to handle this type of Sqoop job; again, it is an example of a non-MapReduce application built on top of YARN. *** If any of the responses to your question helped, don't forget to vote for and accept the answer. If you fix the issue on your own, don't forget to post the answer to your own question; a moderator will review it and accept it.
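As a rough illustration of kicking a Sqoop job off with a scheduler rather than the YARN distributed shell, here is a hedged cron sketch; the connection string, table name, and target directory are made-up placeholders:

# hypothetical crontab entry: run a Sqoop import every night at 02:00
0 2 * * * sqoop import --connect jdbc:mysql://dbhost/sales --table orders --target-dir /data/orders/$(date +\%Y-\%m-\%d) >> /var/log/sqoop_orders.log 2>&1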
08-25-2016
02:43 PM
@Rajib Mandal For example, on Red Hat you can run: ipa krbtpolicy-mod theuser --maxlife=3600
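To confirm the new maximum ticket life took effect, you can request a fresh ticket and inspect it; kinit and klist are the standard MIT Kerberos client tools, and the principal name here is just an example:

kinit theuser   # obtain a new ticket for the user
klist           # the expiry time should now reflect the 3600-second max life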
07-12-2017
11:55 PM
@Kuldeep Kulkarni Could you please let me know how to update the oozie-env.sh file using the Ambari UI? I am not able to see even the /etc directory when logging in with the maria_dev login ID. Please help.
07-28-2016
10:42 AM
@Ashnee Sharma I installed JDK 1.8.0_91 and the issue was resolved.
07-22-2016
01:26 PM
1 Kudo
There are a number of ways you can do this. Personally, I would opt for using Apache Knox rather than pulling in the client JARs and configuration for Hadoop. This will allow you to use JDBC to HiveServer2 and the HBase REST server API instead. Assuming that you authenticate the end user in your web application, you can then propagate the user identity via the Pre-authenticated SSO provider in Knox [1]. Coupled with mutual authentication with SSL [2], you have a trusted proxy that is able to authenticate to HiveServer2 via Kerberos and act on behalf of your end users, who are authenticated in your web application. [1] - http://knox.apache.org/books/knox-0-9-0/user-guide.html#Preauthenticated+SSO+Provider [2] - http://knox.apache.org/books/knox-0-9-0/user-guide.html#Mutual+Authentication+with+SSL
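To give a feel for the JDBC path through Knox, here is a hedged beeline sketch; the gateway host, port, topology name (default) and credentials are assumptions that depend on your Knox deployment:

beeline -u "jdbc:hive2://knox-host.example.com:8443/;ssl=true;transportMode=http;httpPath=gateway/default/hive" -n enduser -p password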
07-25-2016
11:45 AM
Hi @Rajib Mandal, thanks for confirming; please accept the answer to close this thread.
07-13-2016
11:07 AM
1 Kudo
My first question would be: why do you want to do that? If you want to manage your cluster, you would normally install something like pssh, Ansible, or Puppet and use that to manage the cluster. You put that on one control node, define a list of servers, and can then move data or execute scripts on all of them at the same time. You can do something very simple like that with a one-line ssh loop.
To execute a script on all nodes:
for i in server1 server2; do echo $i; ssh $i $1; done
To copy files to all nodes:
for i in server1 server2; do scp $1 $i:$2; done
[Both need passwordless ssh from the control node to the cluster nodes.]
If, on the other hand, you want to ship job dependencies, something like the distributed MapReduce cache is normally a good idea; Oozie provides the <file> tag to upload files from HDFS to the execution directory of the job. So honestly, if you go into more detail about what you ACTUALLY want, we might be able to help more.
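If you go the pssh route, a minimal sketch could look like the following; the hosts file and the commands are placeholders, and the parallel-scp binary may be named pscp or pscp.pssh depending on the package:

# hosts.txt lists one node per line; assumes passwordless ssh is already set up
pssh -h hosts.txt -i "df -h /data"         # run a command on every node
pscp.pssh -h hosts.txt cleanup.sh /tmp/    # copy a file to every node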
06-20-2017
07:12 AM
I was facing the same issue. It looks like it is related to the hostname: in the dfs.namenode.http properties I added the IP address instead of the hostname, which resolved my issue.
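One hedged way to double-check which value the cluster actually sees (dfs.namenode.http-address is the usual key this setting refers to, but confirm it matches your configuration):

hdfs getconf -confKey dfs.namenode.http-address   # should print the IP:port you configured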
07-11-2016
07:14 PM
In your hosts file, add: IP_ADDR_HERE HOSTNAME_HERE. Add all IPs and hostnames to this file and copy it to all the nodes. It seems to me your cluster is not able to map hostnames to IPs (a DNS issue). After the hosts file changes, try to ping using the hostname, which should resolve to the IP.
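A small sketch of what the entries and the check can look like; the addresses and hostnames below are made up, and /etc/hosts is the usual location on Linux:

# /etc/hosts on every node
192.168.1.11  master1.cluster.local  master1
192.168.1.12  worker1.cluster.local  worker1

# verify resolution from any node
ping -c 1 worker1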