Member since
05-07-2019
7
Posts
0
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3574 | 05-20-2019 08:54 AM
 | 14416 | 05-17-2019 03:23 PM
05-23-2019
07:06 AM
Thanks for letting me know! Is there any estimate/timeline for when HDP 3.1 will allow upgrading the shipped Tez 0.9.1 to a newer release? I don't want to upgrade/patch one component myself because I am afraid I will lose the upgradeability of the entire HDP stack when future releases surface.
05-20-2019
08:54 AM
Just for further reference: our issue was solved here: https://community.hortonworks.com/questions/246302/hive-tez-vertex-failed-error-during-reduce-phase-h.html
05-17-2019
03:23 PM
So far, it seems that our issues were solved by setting the HDFS setting "fs.permissions.umask-mode" to the value "022". In our HDP 2.7 installation, this was the case out of the box. HDP 3.1 seems to have a default value of 077, which doesn't work for us and yields the error mentioned above. We've done some intensive testing, and the value 022 seems to work and has solved our problems, as far as I can tell right now. It would be great if you guys could verify or falsify the issue on your installations as well. Let me know if I can help you with anything!
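For reference, the effect of the two umask values can be sketched in a few lines (a hypothetical illustration, not part of HDP itself): HDFS applies the umask to the default creation modes of 777 for new directories and 666 for new files, so umask 077 strips all group/other access, locking other service users out of newly created warehouse paths.

```python
# Sketch of how an HDFS-style umask shapes the permissions of new
# files and directories (helper name is made up for illustration).
def apply_umask(mode: int, umask: int) -> int:
    """Clear the permission bits named by the umask."""
    return mode & ~umask

# umask 022: group and other keep read (and execute on directories)
assert oct(apply_umask(0o777, 0o022)) == '0o755'  # directories
assert oct(apply_umask(0o666, 0o022)) == '0o644'  # files

# umask 077: only the owner keeps access, so other service
# accounts (e.g. hive vs. yarn) can no longer read the paths
assert oct(apply_umask(0o777, 0o077)) == '0o700'  # directories
assert oct(apply_umask(0o666, 0o077)) == '0o600'  # files
```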
05-17-2019
08:40 AM
Hi, we are facing more or less exactly the same issue on HDP 3.1.0.0-78 on an 11-node cluster. Maybe we can talk/chat and work out a solution. I contacted you on LinkedIn 🙂
05-10-2019
01:36 PM
Of course we want to save money 😉 We thought the database issue was the reason we had problems connecting to the Hive metastore, but we have now figured out that the original source of this issue seems to be related to Kerberos authentication. The main problem is that our (new) HDP 3.1 cluster is not running the Hive scripts we developed on our "old" HDP 2.3 installation, and we get lots of vertex failures. I will try to provide more information with the help of our admin. We are using CentOS 7.6 (all recent updates installed).
05-10-2019
08:51 AM
We have not been able to solve the problem yet. Currently, we are waiting for professional consulting support. I will post any further details on this issue here when we have more information about the cause of the problem. Thank you for your reply!
05-08-2019
09:48 PM
Hi guys, we are trying to install the HDP 3.1.0 platform on-premise. We have finished the installation but ran into a problem setting up MariaDB for the Hive metastore. Hive explicitly lists MariaDB as supported (https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+3.0+Administration) and suggests using the driver class "org.mariadb.jdbc.Driver". When we set "JDBC Driver Class" to "org.mariadb.jdbc.Driver" in Ambari on the HDP 3.1.0 installation, the metastore fails to start with:

```
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_client.py", line 60, in <module>
    HiveClient().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 966, in restart
    self.install(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_client.py", line 38, in install
    import params
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/params.py", line 273, in <module>
    else: raise Fail(format("JDBC driver '{hive_jdbc_driver}' not supported."))
resource_management.core.exceptions.Fail: JDBC driver 'org.mariadb.jdbc.Driver' not supported.
```

As far as I can tell, the driver class "org.mariadb.jdbc.Driver" is not included in the rollout script. Can you help us, or is there a fix for this? Thanks!
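A workaround sometimes suggested for this class of error (an assumption on my part, not something confirmed in this thread): since MariaDB is wire-compatible with MySQL, registering the MySQL Connector/J jar with the Ambari server and using the stock MySQL driver class may let the metastore start against the same MariaDB instance.

```shell
# Hypothetical workaround sketch: MariaDB speaks the MySQL wire protocol,
# so Ambari's built-in MySQL support can usually talk to it.
# Register the MySQL Connector/J jar with the Ambari server
# (the jar path below is an assumption; adjust to your system):
ambari-server setup --jdbc-db=mysql \
    --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# Then in Ambari, set Hive's "JDBC Driver Class" to com.mysql.jdbc.Driver
# and point the JDBC URL at the existing MariaDB host.
```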
Labels:
- Hortonworks Data Platform (HDP)