Member since: 03-15-2017
Posts: 24
Kudos Received: 3
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
| 1815 | 09-29-2017 12:28 PM |
09-29-2017 12:28 PM
OK, I have the solution. We were using ODBC via OTL, so we needed to set a particular flag, "ImplicitSelect", to "true" directly at the connection level. When this flag is not set, SHOW queries return empty results. Thanks, Sylvain.
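For anyone hitting the same thing, here is a minimal sketch of the fix. It uses pyodbc purely for illustration (our real client goes through OTL in C++), "HiveDSN" is a placeholder DSN name, and I am assuming the flag can be passed as a connection-string key; autocommit=True keeps the driver from being asked to manage transactions, which it does not support.

```python
# Illustrative sketch only: pass ImplicitSelect at the connection level.
# "HiveDSN" is a placeholder; our actual client sets the flag through OTL.
import pyodbc

conn = pyodbc.connect("DSN=HiveDSN;ImplicitSelect=1", autocommit=True)
cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
print(cursor.fetchall())  # non-empty once ImplicitSelect is set
conn.close()
```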
09-29-2017 07:05 AM
No, I'm using my own software based on the Hortonworks ODBC driver (SHOW queries used to work without Kerberos).
09-28-2017 05:06 PM
Hi, I'm facing the following problem: when I run SHOW queries through the Hortonworks ODBC driver against my Kerberos-enabled cluster, I get an empty result (there is no error, just an empty result). However, I can perform SELECT queries, and I can also run CREATE DATABASE or CREATE TABLE queries. Note: I can perform SHOW queries using the beeline client. Note 2: I enabled TRACE logging for the driver, and I get one error: "Sep 28 18:48:03.508 ERROR 2741458688 Connection::SQLSetConnectAttr: [Hortonworks][ODBC] (11470) Transactions are not supported." Any ideas? Thanks in advance, Sylvain.
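In case it helps, here is a rough sketch of the symptom, using pyodbc for illustration (my actual client is custom C++ code over OTL); the DSN and table names are placeholders:

```python
# Repro sketch: Kerberos settings live in the DSN. "HiveKrbDSN" and
# "my_table" are placeholders for my real DSN and table.
import pyodbc

conn = pyodbc.connect("DSN=HiveKrbDSN", autocommit=True)
cursor = conn.cursor()

cursor.execute("SELECT * FROM my_table LIMIT 10")
print(cursor.fetchall())   # works: rows come back

cursor.execute("SHOW DATABASES")
print(cursor.fetchall())   # symptom: empty list, no error raised
```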
Labels:
- Apache Hive
07-12-2017 04:26 PM
Thanks. I was just misunderstanding the meaning of the property "Reserved space for HDFS". I actually thought it was the disk space we set aside for file storage...
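For anyone else confused by the label: as far as I understand, this Ambari property maps to dfs.datanode.du.reserved, i.e. space reserved for non-HDFS use, so the DataNode's usable capacity is roughly the disk size minus the reserved value. A toy calculation (the 250 GB disk size is just an assumed figure for illustration):

```python
# Illustration of how a large reserved value starves HDFS of capacity.
disk_size = 250000000000   # assumed size of the data disk, in bytes
reserved = 300000000000    # the value I had set for "Reserved space for HDFS"

hdfs_capacity = max(disk_size - reserved, 0)
print(hdfs_capacity)       # 0 -> matches "Capacity Remaining:[0]"
```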
07-03-2017 04:37 PM
Hi, I am getting an alert while starting my services in Ambari: HDFS tells me that it has no more disk space available, even though I haven't written anything to HDFS yet. The alert is the following: Capacity Used:[100%, 36864], Capacity Remaining:[0]. The thing is that I have set 300000000000 bytes of capacity for HDFS (property "Reserved space for HDFS"), so I can't see where my problem is... Thanks in advance for your answer! Sylvain.
Labels:
- Apache Hadoop
- Hortonworks Cloudbreak
06-30-2017 12:14 PM
It looks like that was indeed the case. I killed the process and did not get the error anymore. Thanks.
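For the record, the stale gateway can also be stopped with the same daemon script that appears in the failing command below; a small sketch, with the paths copied from the traceback:

```python
# Stop the stale nfs3 daemon before retrying the start from Ambari.
import subprocess

subprocess.check_call([
    "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh",
    "--config", "/usr/hdp/current/hadoop-client/conf",
    "stop", "nfs3",
])
```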
06-26-2017 07:28 AM
Hi, I just installed a 4-node Hadoop cluster, and I can't start the services because start-up fails when starting the NFSGateway on my 4th node, where it throws: resource_management.core.exceptions.ExecutionFailed. The whole error is: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/nfsgateway.py", line 89, in <module>
NFSGateway().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/nfsgateway.py", line 58, in start
nfsgateway(action="start")
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_nfsgateway.py", line 74, in nfsgateway
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 274, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start nfs3' returned 1. nfs3 running as process 3138. Stop it first.
Thanks in advance for your answer! Sylvain.
Labels:
- Apache Hadoop
03-30-2017 11:49 AM
Hi @Predrag Minovic and @ssathish, I tried with WebHCat and it works. I had already tried with the Distributed Shell application, but I didn't succeed; maybe I did something wrong. I think I will try with Oozie soon. Thanks for your help.
03-30-2017 06:54 AM
I just have another question: do temporary tables have better performance than normal tables?
03-29-2017 09:08 AM
Yes, I can connect to Hive with the ODBC driver. The problem was that a new session was generated at each call from my client. Thanks, Sylvain.
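In my case the cause was reconnecting on every call; reusing one connection keeps a single Hive session. A minimal sketch with pyodbc for illustration ("HiveDSN" is a placeholder DSN name):

```python
# Reuse a single connection (hence a single Hive session) for all queries,
# instead of reconnecting on every call. "HiveDSN" is a placeholder.
import pyodbc

conn = pyodbc.connect("DSN=HiveDSN", autocommit=True)
cursor = conn.cursor()
for query in ("SHOW DATABASES", "SHOW TABLES"):
    cursor.execute(query)
    print(cursor.fetchall())
conn.close()
```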