Member since: 05-12-2016
Posts: 10
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 6732 | 05-27-2016 12:39 AM |
01-29-2018
09:37 AM
@Sami Ahmad did you finally get any solution? After a year, I'm still getting the same error.
07-14-2017
02:13 AM
Great, it helped me too.
05-24-2017
10:00 PM
@SathisKumar I tried the same idea but I am getting: ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.sql.SQLException: Invalid SQL type: sqlKind = UNINITIALIZED. Kindly suggest how you managed to solve this.
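For reference, this is the general shape of the Sqoop import from Oracle that hits this error for me (a sketch only; the host, service name, table, and target path are placeholders, not my real job):
# Hypothetical Sqoop import from Oracle; all names and paths are placeholders
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost.mycompany.org:1521/ORCL \
  --username scott -P \
  --table EMP \
  --target-dir /user/me/emp \
  -m 1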
01-18-2017
11:43 PM
I'm getting the same error with BO.
I tried testing the driver and connectivity from R (which in the background also uses JDBC). The following code works without any error:
library(RJDBC)  # JDBC(), dbConnect() and dbGetQuery() come from RJDBC/DBI
# Load the Simba Hive JDBC4 driver from the BO driver jars
drvH <- JDBC(driverClass = "com.simba.hive.jdbc4.HS2Driver",
             classPath = normalizePath(list.files("Drivers/BO-Simba/BO_Drivers/hive012simba4server1/", pattern = ".jar$", full.names = T, recursive = T)))
# Kerberos-authenticated HiveServer2 connection (AuthMech=1)
connH <- dbConnect(drvH, "jdbc:hive2://myserver.mycompany.org:10000;AuthMech=1;KrbRealm=MYREALM.COM;KrbHostFQDN=master1.mycompany.org;KrbServiceName=hive")
dbGetQuery(connH, "show databases")
But the following code:
# Load the Simba Impala JDBC4 driver jars
drvI <- JDBC(driverClass = "com.simba.impala.jdbc4.Driver",
             classPath = normalizePath(list.files("Drivers/BO-Simba/BO_Drivers/impala10simba4/", pattern = ".jar$", full.names = T, recursive = T)))
# Kerberos-authenticated Impala connection on the HiveServer2-compatible port (21050)
connI <- dbConnect(drvI, "jdbc:impala://slave1.mycompany.org:21050;AuthMech=1;KrbRealm=MYREALM.COM;KrbHostFQDN=master1.mycompany.org;KrbServiceName=impala")
gives the error:
[Simba][ImpalaJDBCDriver](500164) Error initialized or created transport for authentication: Unable to connect to server
Kindly help if you know the reason. I have not enabled SSL on the cluster. I have Kerberos and Sentry on CDH 5.9 [OS: RedHat 6]. The client is currently one of the nodes in the cluster [minimal firewall intervention].
Hive works but Impala gives this problem. I have tried the Cloudera drivers too [again, Hive works but Impala does not].
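As a basic check from the client node (a sketch only; slave1.mycompany.org is the same placeholder host as above, and this assumes nc is installed and a kinit ticket has been obtained):
# Confirm a valid Kerberos ticket exists on the client
klist
# Confirm the Impala HiveServer2-compatible port (21050) is reachable from this node
nc -vz slave1.mycompany.org 21050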
05-27-2016
12:39 AM
I was able to recover by changing the cluster ID in the VERSION file stored locally on the disk of each node.
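Roughly what that looked like (a sketch only; /dfs/nn and /dfs/dn are placeholders for whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to on your nodes, and the CID value is made up):
# Check the current cluster ID in the NameNode and DataNode VERSION files
cat /dfs/nn/current/VERSION    # contains a line like clusterID=CID-...
cat /dfs/dn/current/VERSION
# Make the clusterID line identical on every node, then restart HDFS
sed -i 's/^clusterID=.*/clusterID=CID-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/' /dfs/dn/current/VERSION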
05-27-2016
12:30 AM
Michalis wrote: "Did you fail to upgrade Cloudera Manager Server and Agents?"
Yes.
"What version of Cloudera Manager are you looking to upgrade to?"
The latest; right now I'm on Version: Cloudera Express 5.7.0 (#76 built by jenkins on 20160401-1334 git: ec0e7e69444280aa311511998bd83e8e6572f61c).
"What is the value for 'com.cloudera.cmf.db.type' in the /etc/cloudera-scm-server/db.properties file?"
It is com.cloudera.cmf.db.type=postgresql.
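For anyone checking the same value, this is how I read it (the file path is the standard one mentioned above):
# Print the configured Cloudera Manager database type
grep '^com.cloudera.cmf.db.type' /etc/cloudera-scm-server/db.properties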
05-12-2016
11:39 PM
Hi All, I'm in serious trouble. I was working on the following setup:
Operating System: CentOS 6.7 (Final)
Cluster: 9-node cluster
Hadoop Distribution: Cloudera
Hadoop Distribution Version: CDH 5.4.2, Parcels
HDFS Capacity: 2.7 TB
YARN Configuration: 56 v-cores and 70 GB of memory
Today I tried to upgrade the parcel. In my organization there is a proxy, so I made the change in Cloudera Manager, then downloaded and successfully upgraded to CDH 5.4.10 [CDH-5.4.10-1.cdh5.4.10.p0.16]. Everything was working fine. But then I tried to upgrade Cloudera Manager itself, and unfortunately after that I failed to install/upgrade the Cloudera agent on the nodes. So I followed, somewhat blindly, this post and ran:
rm -vRf /etc/yum.repos.d/cloudera* /etc/cloudera-*
rm -vRf /usr/share/cmf /var/lib/cloudera* /var/cache/yum/cloudera*
rm -vRf /var/log/cloudera-*
yum remove cloudera*
yum clean all
Now I'm completely lost; I am not able to install or upgrade Cloudera Manager. I have set up the proxy but it is still failing. I want to know whether I'll be able to recover the existing HDFS data. Kindly tell me. I have not deleted the NameNode, Secondary NameNode, or DataNode files from the local disks.
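Before touching anything else I am verifying that the HDFS directories are still intact on disk (a sketch only; /dfs/nn and /dfs/dn are placeholders for whatever dfs.namenode.name.dir and dfs.datanode.data.dir are set to in this cluster):
# NameNode metadata: fsimage and edits files should still be present
ls -l /dfs/nn/current/
# DataNode storage: block pool directories should still be present on each worker
ls -d /dfs/dn/current/BP-*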