Member since 03-15-2017
24 Posts
3 Kudos Received
1 Solution

My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1813 | 09-29-2017 12:28 PM |
03-29-2017 09:07 AM
Thank you. As you said, the client I am using was generating a new session at each call; that was the problem. Sylvain.
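The root cause here matches how temporary tables behave: they are session-scoped, so a client that opens a new session for every call never sees the table it just created. A minimal illustration of the same per-connection behavior, using SQLite from the Python standard library in place of Hive/ODBC (the filenames are arbitrary):

```python
import os
import sqlite3
import tempfile

# Shared on-disk database so both connections see the same permanent objects.
# (A purely in-memory SQLite DB is private per connection, which would muddy the point.)
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn_a = sqlite3.connect(path)  # "session" A
conn_b = sqlite3.connect(path)  # "session" B

# TEMP tables are visible only to the connection (session) that created them.
conn_a.execute("CREATE TEMP TABLE t (x INTEGER)")
conn_a.execute("INSERT INTO t VALUES (1)")

print(conn_a.execute("SELECT x FROM t").fetchall())  # same session: [(1,)]

try:
    conn_b.execute("SELECT x FROM t")  # different session: table not found
except sqlite3.OperationalError as e:
    print(e)  # "no such table" style error, as in the Hive case
```

The same logic explains the original symptom: each ODBC call ran in a fresh HiveServer2 session, so the SELECT could never find the temp table created by the previous call.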
03-28-2017 02:07 PM
My cluster is not kerberized, and I do not have Ranger policies for Hive.
03-28-2017 02:06 PM
Yes, the tables created via ODBC are showing up in beeline, and creating a database via ODBC works too. The problem only seems to appear with temporary tables.
03-28-2017 01:46 PM
Thank you mliem, I am able to access Hive through beeline. I can create my temporary table and see it appear with a "show tables" command in beeline. I tried creating my temporary table in the default database and in a specified one, but I get the same result in both cases. Sylvain.
03-28-2017 01:22 PM
Hi, I am facing the following problem: using the Hortonworks ODBC driver for Hive, I can create a temporary table (hiveserver2.log tells me it is created without error), but I cannot retrieve this table with a subsequent SELECT query. When I try to do so, I get a "table not found" error. Does this mean the ODBC driver does not keep the session between two Hive calls? Is there something I am doing wrong, or some configuration I need to set? I believe that since Hive 0.14 the "temporary table" part of the ODBC driver configuration is no longer necessary, or is it? Thanks in advance, Sylvain.
Labels:
- Apache Hive
03-27-2017 08:15 AM
Thanks @ssathish, here are my log files.
03-23-2017 02:37 PM
I forgot this: Ambari version 2.4.2.0, Hadoop version 2.5.0.0-1245, installed from Ambari (not a sandbox).
03-22-2017 06:38 PM
Hello, I'm trying to execute some existing examples using the REST API (with or without the Knox gateway). It seems to work, but the task is always marked as failed in the YARN Web UI. I use hadoop-mapreduce-examples.jar to launch a wordcount example. It creates a sub-task which is properly executed and creates all the expected files in /user/yarn/output10, but the parent task always fails with exit code 0 (which looks like a success 😞, but it's not).

I send a POST request to this URL (after running the new-application command to get my application-id): https://guest:guest-password@serverip:8443/gateway/default/resourcemanager/v1/cluster/apps (I also tried without Knox, http://serverip:8088/ws/v1/cluster/apps, with the same result), and I transmit this JSON command:

{
  "application-id": "application_1490174038938_0018",
  "application-name": "WordCount",
  "am-container-spec": {
    "commands": {
      "command": "yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar wordcount /user/yarn/input /user/yarn/output10"
    }
  },
  "max-app-attempts": 1,
  "application-type": "YARN"
}

I tried a lot of other JSON configs with the same result (this one is the simplest I tried). Do you have any idea what I did wrong? Thanks in advance.
Labels:
- Apache Knox
- Apache YARN
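For reference, the submission above can be scripted instead of hand-built; a minimal sketch using only the Python standard library, with the ResourceManager address and application-id as placeholders taken from the post (obtaining a fresh id via the new-application endpoint is left commented out, since it needs a live cluster):

```python
import json
import urllib.request

# Application spec from the post; the application-id is a placeholder that
# would normally come from POST <rm>/ws/v1/cluster/apps/new-application.
payload = {
    "application-id": "application_1490174038938_0018",
    "application-name": "WordCount",
    "am-container-spec": {
        "commands": {
            "command": (
                "yarn jar /usr/hdp/current/hadoop-mapreduce-client/"
                "hadoop-mapreduce-examples.jar wordcount "
                "/user/yarn/input /user/yarn/output10"
            )
        }
    },
    "max-app-attempts": 1,
    "application-type": "YARN",
}

def submit_app(rm_url: str, app: dict) -> int:
    """POST the application spec to the ResourceManager; returns the HTTP status."""
    req = urllib.request.Request(
        rm_url + "/ws/v1/cluster/apps",
        data=json.dumps(app).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Requires a live ResourceManager; "serverip" is a placeholder:
# submit_app("http://serverip:8088", payload)
```

This only reproduces the request from the post in script form; it does not by itself change the failed/succeeded outcome described above.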
03-20-2017 01:04 PM
It should be alright now.
03-20-2017 10:30 AM
Hi, I am running Hadoop on a 3-node cluster (3 virtual machines) with 20 GB, 10 GB, and 10 GB of disk space available respectively. When I run this command on the namenode: hadoop fs -df -h / I get the following result: [screenshot] When I run this command: hadoop fs -du -s -h / I get the following result: [screenshot] Knowing that the replication factor is set to 3, shouldn't I get 3 × 2.7 = 8.1 GB in the first screenshot? I tried to execute the expunge command, and it did not change the result. Thanks in advance! Sylvain.
Labels:
- Apache Hadoop
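The arithmetic behind the question is just logical size times replication factor: HDFS stores one copy of each block per replica, and `hadoop fs -du` in Hadoop 2.x typically reports the logical (pre-replication) size, while `-df` reports actual capacity used. A one-line sanity check with the numbers from the post:

```python
# Sanity check for the replication arithmetic in the question:
# raw capacity consumed = logical size × replication factor.
logical_gb = 2.7   # logical size reported by `hadoop fs -du -s -h /`
replication = 3    # dfs.replication

raw_gb = logical_gb * replication
print(round(raw_gb, 1))  # → 8.1 (GB of raw capacity expected)
```

So the 8.1 GB expectation in the post is the right number to compare against the `-df` "used" figure, assuming all files actually carry replication 3.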