Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2569 | 11-01-2016 05:43 PM |
| | 8503 | 11-01-2016 05:36 PM |
| | 4861 | 07-01-2016 03:20 PM |
| | 8182 | 05-25-2016 11:36 AM |
| | 4336 | 05-24-2016 05:27 PM |
02-10-2016 09:33 AM
What are you trying to do? @kavitha velaga
07-19-2019 09:52 PM
Hello guys, it's 2019 and I still have the same problem with the Hive REST API. I can successfully run the request:

```
hduser@master:~$ curl -s -d execute="select+*+from+mytable;" -d statusdir="/home/output" 'http://localhost:50111/templeton/v1/hive?user.name=HIVEU'
```

but when I want to display the job results:

```
hduser@master:~$ hdfs dfs -ls output
ls: `output': No such file or directory
```

Can anyone please give some help!
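A hedged note on where the output may have gone: with WebHCat/Templeton, statusdir names a directory in HDFS, not on the local filesystem, so the absolute path from the request has to be listed rather than a relative one. A minimal check, assuming the request above succeeded (the path /home/output comes from the curl call; the rest is standard HDFS CLI):

```bash
# List the HDFS status directory named in the curl request (statusdir="/home/output").
# WebHCat writes stderr, stdout, and exit files there once the Hive job finishes.
hdfs dfs -ls /home/output

# Print the query results captured on stdout (path assumed from the statusdir above).
hdfs dfs -cat /home/output/stdout
```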
02-15-2016 11:51 AM
@Vinod Nerella It's a different issue; it's related to Postgres. See this
02-10-2016 03:39 AM
2 Kudos
Use case: There is a lot of data in a locally managed table, and we want to convert that table into an external table because we are working on a use case where our Spark and home-grown applications have trouble reading locally managed tables.

Solution:

```
alter table table_name SET TBLPROPERTIES('EXTERNAL'='TRUE');
```

Test: First, drop a locally managed table.

```
hive> drop table sample_07;
OK
Time taken: 0.602 seconds
```

The data is deleted, as it's a locally managed table:

```
hive> dfs -ls /apps/hive/warehouse/sample_07;
ls: `/apps/hive/warehouse/sample_07': No such file or directory
Command failed with exit code = 1
Query returned non-zero code: 1, cause: null
```

Now convert sample_07_test from locally managed to external, to keep the data there after a drop:

```
hive> alter table sample_07_test SET TBLPROPERTIES('EXTERNAL'='TRUE');
OK
Time taken: 0.463 seconds
```

Drop the table after converting it into an external table:

```
hive> drop table sample_07_test;
OK
Time taken: 0.552 seconds
```

The data is still on HDFS:

```
hive> dfs -ls /apps/hive/warehouse/sample_07_test;
Found 1 items
-rwxrwxrwx   3 hive hdfs      46059 2016-02-10 03:24 /apps/hive/warehouse/sample_07_test/000000_0
```

Only the table definition was removed from HCatalog:

```
hive> select * from sample_07_test;
FAILED: SemanticException [Error 10001]: Line 1:14 Table not found 'sample_07_test'
```
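A quick way to confirm the conversion took effect before dropping anything: DESCRIBE FORMATTED reports the table type in its output. A minimal sketch run from the shell (the table name sample_07_test comes from the test above; hive -e is the standard Hive CLI batch mode):

```bash
# Show the table's metadata; the "Table Type:" line should read
# EXTERNAL_TABLE after the ALTER TABLE, and MANAGED_TABLE before it.
hive -e "DESCRIBE FORMATTED sample_07_test;" | grep -i "Table Type"
```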
02-11-2016 12:18 AM
Hi Artem - I appreciate you looking into this. I checked the directories and permissions and they were all in order. The issue turned out to be YARN and MapReduce needing restarts. So I restarted them and got through the immediate crisis. But I do appreciate you taking a look nonetheless. Cheers, Mike
05-21-2019 01:46 PM
a. Check the monitor status: ambari-metrics-monitor status
b. Stop it if it is running: ambari-metrics-monitor stop
c. Check the ambari-metrics-collector hbase-tmp directory path in the Ambari AMS config
d. Move the HBase ZooKeeper temp directory contents out of the way: mv /hadoop/journalnode/var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/* /tmp/ams-zookeeper-backup/
e. Restart AMS from Ambari (see the sketch after this list)
That should resolve the issue. Cheers, Pravat Sutar
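A minimal shell sketch of steps a-e above, assuming a single-node AMS, root access, and the hbase-tmp path shown in step d (the backup directory /tmp/ams-zookeeper-backup is the one from the step; this is a sketch under those assumptions, not a definitive procedure):

```bash
#!/bin/bash
# Steps a-b: check the AMS monitor and stop it if it is running.
ambari-metrics-monitor status && ambari-metrics-monitor stop

# Step d: back up the AMS HBase ZooKeeper temp data
# (path taken from the Ambari AMS config, per step c).
mkdir -p /tmp/ams-zookeeper-backup
mv /hadoop/journalnode/var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/* \
   /tmp/ams-zookeeper-backup/

# Step e is done from the Ambari UI: restart the Ambari Metrics service.
echo "Now restart Ambari Metrics (AMS) from the Ambari UI."
```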
06-16-2016 05:44 AM
Hi, I also had the same issue; an ambari-server restart resolved it for me. Regards, Harshit Shah
07-21-2016 12:30 PM
I am using the Atlas-Ranger sandbox machine, so is it possible to delete those tags which are present in the Atlas UI? If yes, then how can we delete them using the REST API or some other technique? Here is the Atlas UI I am using on the sandbox machine: atlas-home-screen.png
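For reference, one possible direction, assuming the sandbox ships the Atlas v1 REST API: a tag (trait) attached to an entity could be removed with a DELETE on that entity's traits resource. The host, port, credentials, GUID, and trait name below are all placeholders, and whether the sandbox's Atlas version supports this call would need checking against its documentation:

```bash
# Hypothetical example: remove the trait/tag named "PII" from the entity
# with the given GUID via the Atlas v1 REST API on the sandbox.
# All values (host, port, user, GUID, trait name) are placeholders.
curl -u admin:admin -X DELETE \
  "http://sandbox.hortonworks.com:21000/api/atlas/entities/<entity-guid>/traits/PII"
```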
09-13-2018 01:29 AM
I have experienced this problem after changing Ambari Server to run as a non-privileged user, "ambari-server". In my case I can see the following in the logs (/var/log/ambari-server/ambari-server.log):

```
12 Sep 2018 22:06:57,515 ERROR [ambari-client-thread-6396] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations
```

This error happens because in CentOS/RedHat /var/run is really a symlink to /run, which is a tmpfs filesystem mounted at boot time from RAM. So if I manually create the folder with the required privileges, it won't survive a reboot, and because the unprivileged user running Ambari Server is unable to create the required directory, the error occurs. I was able to partially fix this using the systemd-tmpfiles feature by creating a file /etc/tmpfiles.d/ambari-server.conf with the following content:

```
d /run/ambari-server 0775 ambari-server hadoop -
d /run/ambari-server/stack-recommendations 0775 ambari-server hadoop -
```

With this file in place, running "systemd-tmpfiles --create" will create the folders with the required privileges. According to the following RedHat documentation, this should be run automagically at boot time to set everything up: https://developers.redhat.com/blog/2016/09/20/managing-temporary-files-with-systemd-tmpfiles-on-rhel7/ However, sometimes this doesn't happen (I don't know why) and I have to run the previous command manually to fix this error.
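A minimal shell sketch of the fix described above, assuming root access and the ambari-server user and hadoop group from the post (the conf filename is the one given; the rest is standard systemd-tmpfiles usage):

```bash
#!/bin/bash
# Declare the runtime directories so systemd-tmpfiles recreates them on boot.
cat > /etc/tmpfiles.d/ambari-server.conf <<'EOF'
d /run/ambari-server 0775 ambari-server hadoop -
d /run/ambari-server/stack-recommendations 0775 ambari-server hadoop -
EOF

# Apply the declarations now instead of waiting for the next boot.
systemd-tmpfiles --create

# Verify the directories exist with the expected owner and mode.
ls -ld /run/ambari-server /run/ambari-server/stack-recommendations
```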
02-09-2016 04:37 PM
@Ahmad Debbas Thanks for connecting on LinkedIn. I have accepted this question; let's get in touch through LinkedIn and I will introduce you to someone in that area.