Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3557 | 05-03-2017 05:13 PM |
|  | 2933 | 05-02-2017 08:38 AM |
|  | 3182 | 05-02-2017 08:13 AM |
|  | 3142 | 04-10-2017 10:51 PM |
|  | 1621 | 03-28-2017 02:27 AM |
05-30-2016 12:46 PM
Hi, if I understand correctly, we can start multiple NFS gateway servers on multiple hosts (datanode, namenode, HDFS client). Say we have (servernfs01, servernfs02, servernfs03) and (client01, client02):
client01# mount -t nfs servernfs01:/ /test01
client02# mount -t nfs servernfs02:/ /test02
My question is: how do we avoid a service interruption? What happens if servernfs01 fails? How does client01 keep access to HDFS in that case?
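One common way to avoid the interruption (a sketch, not from this thread): put a floating IP in front of the gateways, e.g. managed by keepalived, so clients mount the VIP rather than one specific gateway. The "nfs-vip" name and the mount options below are assumptions:

```sh
# A minimal sketch, assuming a floating IP ("nfs-vip", a hypothetical name) that
# keepalived or similar can move between the NFS gateway hosts. Clients mount the
# VIP; if servernfs01 dies, the VIP fails over to a surviving gateway and the
# mount point stays usable. The HDFS NFS gateway speaks NFSv3 over TCP.
mount -t nfs -o vers=3,proto=tcp,nolock,soft,retrans=3 nfs-vip:/ /test01
```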
02-11-2016 10:52 PM
@PJ Moutrie Please see the deck below. As mentioned earlier, the Hive hook is in place.
http://www.slideshare.net/hortonworks/data-governance-atlas-7122015
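For context on what "Hive hook is in place" means: Atlas receives Hive metadata and lineage through a post-execution hook registered with Hive. A minimal sketch, assuming the standard Atlas hook class (verify the class name against your Atlas version):

```sh
# A sketch (assumed setup): the Atlas Hive bridge is wired in as a post-execution
# hook, normally via hive.exec.post.hooks in hive-site.xml. Setting it for one
# session with --hiveconf exercises the same mechanism.
hive --hiveconf hive.exec.post.hooks=org.apache.atlas.hive.hook.HiveHook \
     -e "CREATE TABLE atlas_demo (id INT)"
# After this runs, the table and its lineage should appear as entities in Atlas.
```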
03-04-2016 12:08 AM
1 Kudo
Does it mean that if the cluster is Kerberized, we don't need Knox, and installing Ranger alone is enough?
02-10-2016 09:33 AM
What are you trying to do? @kavitha velaga
07-19-2019 09:52 PM
Hello guys, it's 2019 and I still have the same problem with the Hive REST API. I can successfully run the request:
hduser@master:~$ curl -s -d execute="select+*+from+mytable;" -d statusdir="/home/output" 'http://localhost:50111/templeton/v1/hive?user.name=HIVEU'
But when I want to display the job results:
hduser@master:~$ hdfs dfs -ls output
ls: `output': No such file or directory
Can anyone please give some help?
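One likely cause (a guess from the paths shown, not confirmed in the thread): statusdir is interpreted as an HDFS path and the job runs as user.name, so the output above lands in HDFS under /home/output as HIVEU, while hdfs dfs -ls output looks under /user/hduser. A sketch of a consistent setup, reusing the port and table name from the post:

```sh
# A sketch assuming WebHCat on port 50111 as above. Submit the job as the same
# user that will read the results, and use a relative statusdir so the output
# lands under that user's HDFS home (/user/hduser/hive.out here).
curl -s -d execute="select+*+from+mytable;" \
     -d statusdir="hive.out" \
     'http://localhost:50111/templeton/v1/hive?user.name=hduser'
# The call returns a job id; once the job finishes, the status directory holds
# the stdout/stderr/exit files:
hdfs dfs -ls hive.out
hdfs dfs -cat hive.out/stdout
```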
02-15-2016 11:51 AM
@Vinod Nerella It's a different issue; it's related to Postgres. See this.
02-11-2016 12:18 AM
Hi Artem - I appreciate you looking into this. I checked the directories and permissions and they were all in order. The issue turned out to be yarn and mapreduce needing restarts. So I restarted, and got through the immediate crisis. But I do appreciate you taking a look nonetheless. Cheers, Mike
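If the cluster is Ambari-managed (an assumption; the post doesn't say), the YARN and MapReduce restarts can also be scripted through the Ambari REST API. A minimal sketch with placeholder host, cluster name, and credentials:

```sh
# A sketch, assuming an Ambari-managed cluster; AMBARI_HOST, CLUSTER, and the
# admin:admin credentials are placeholders. Ambari expresses a restart as a
# stop (state INSTALLED) followed by a start (state STARTED).
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop YARN"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER/services/YARN"
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start YARN"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER/services/YARN"
# Repeat with the MAPREDUCE2 service for MapReduce.
```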
09-13-2018 01:29 AM
I have experienced this problem after changing Ambari Server to run as a non-privileged user, "ambari-server". In my case I can see the following in the logs (/var/log/ambari-server/ambari-server.log):
12 Sep 2018 22:06:57,515 ERROR [ambari-client-thread-6396] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations
This error happens because on CentOS/RedHat, /var/run is really a symlink to /run, which is a tmpfs filesystem mounted at boot time from RAM. So if I manually create the folder with the required privileges, it won't survive a reboot, and because the unprivileged user running Ambari Server is unable to create the required directory, the error occurs. I was able to partially fix this using the systemd-tmpfiles feature, by creating a file /etc/tmpfiles.d/ambari-server.conf with the following content:
d /run/ambari-server 0775 ambari-server hadoop -
d /run/ambari-server/stack-recommendations 0775 ambari-server hadoop -
With this file in place, running "systemd-tmpfiles --create" will create the folders with the required privileges. According to the following RedHat documentation, this should be run automagically at boot time to set everything up: https://developers.redhat.com/blog/2016/09/20/managing-temporary-files-with-systemd-tmpfiles-on-rhel7/ However, sometimes this doesn't happen (I don't know why) and I have to run the previous command manually to fix the error.
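Putting the workaround together as commands (a sketch; the paths and the ambari-server:hadoop ownership are taken from the post above, adjust to your install):

```sh
# Recreate the tmpfiles.d entry described above and apply it without a reboot.
cat > /etc/tmpfiles.d/ambari-server.conf <<'EOF'
d /run/ambari-server 0775 ambari-server hadoop -
d /run/ambari-server/stack-recommendations 0775 ambari-server hadoop -
EOF
# Create the directories immediately; systemd should also do this at boot.
systemd-tmpfiles --create /etc/tmpfiles.d/ambari-server.conf
```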
02-09-2016 04:37 PM
@Ahmad Debbas Thanks for connecting on LinkedIn. I have accepted this question; let's get in touch through LinkedIn and I will introduce you to someone in that area.