Member since: 05-12-2016
Posts: 10
Kudos Received: 0
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3754 | 05-27-2016 12:39 AM
01-29-2018 09:37 AM
@Sami Ahmad, did you finally get any solution? Even after a year, I'm getting the same error.
01-29-2018 01:45 AM
Is it fixed, or did you freshly ingest new tweets? My problem is that I can switch now to the Cloudera Flume source, but the earlier data will be lost. Kindly help.
07-14-2017 02:13 AM
Great, it helped me too.
05-24-2017 10:33 PM
I have the same problem; kindly let me know.
05-24-2017 10:00 PM
@SathisKumar I tried the same idea but I'm getting: ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.sql.SQLException: Invalid SQL type: sqlKind = UNINITIALIZED. Kindly suggest how you managed to solve this.
01-30-2017 12:36 AM
Hadoop setup: 8-node cluster with CDH 5.9
Hive version: Hive 1.1.0-cdh5.9.0, compiled by jenkins on Fri Oct 21 00:54:46 PDT 2016, from source with checksum 9c5d0bee25fab27d28098c3080f8aedc
Impala version: Impala v2.7.0-cdh5.9.0 (4b4cf19), built on Fri Oct 21 01:07:22 PDT 2016

Issue: I ran the same query on Hive and Impala (through Hue, screenshot attached):

SELECT (AVG(cost_of_liquidity_provision * risk_of_liquidity_provision)
        - AVG(cost_of_liquidity_provision) * AVG(risk_of_liquidity_provision))
       / (1.00000000 * STDDEV_POP(cost_of_liquidity_provision)
          * STDDEV_POP(risk_of_liquidity_provision)) AS corr_coeff
FROM liquidity

The table is in Parquet format (non-partitioned). I got different output in the two runs:
Hive: 0.8465 (correct, as verified by an external application, e.g. R)
Impala: 0.0636
A similar thing happened with another query too. I have created a StackOverflow question and a Cloudera Impala issue [IMPALA-4841] for the same.
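For reference, the query is just the Pearson correlation coefficient written out by hand, which is why it could be cross-checked in R. A minimal R sketch (the sample values below are made up purely for illustration) showing that the population-based formula in the SQL and R's built-in cor() agree:

# Hypothetical stand-in for the liquidity table, with the two columns
# used in the query above (values are made up for illustration).
liquidity <- data.frame(
  cost_of_liquidity_provision = c(1.2, 3.4, 2.1, 5.6, 4.3),
  risk_of_liquidity_provision = c(0.8, 2.9, 1.7, 4.8, 3.5)
)
x <- liquidity$cost_of_liquidity_provision
y <- liquidity$risk_of_liquidity_provision

# Same formula as the SQL: population covariance over the product of
# population standard deviations.
manual <- (mean(x * y) - mean(x) * mean(y)) /
  (sqrt(mean(x^2) - mean(x)^2) * sqrt(mean(y^2) - mean(y)^2))

# Built-in Pearson correlation; the n vs n-1 factors cancel in the
# ratio, so this matches the population-based formula exactly.
builtin <- cor(x, y)

all.equal(manual, builtin)  # TRUE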
01-18-2017 11:43 PM
I'm getting the same error with BO.
I tried testing the driver and connectivity from R [though in the background that is using JDBC only]. The following code works without any error:
library(RJDBC)  # provides JDBC(); also loads DBI for dbConnect()/dbGetQuery()

# Hive over the Simba HS2 driver -- this works
drvH <- JDBC(driverClass = "com.simba.hive.jdbc4.HS2Driver",
             classPath = normalizePath(list.files("Drivers/BO-Simba/BO_Drivers/hive012simba4server1/",
                                                  pattern = ".jar$", full.names = TRUE, recursive = TRUE)))
connH <- dbConnect(drvH, "jdbc:hive2://myserver.mycompany.org:10000;AuthMech=1;KrbRealm=MYREALM.COM;KrbHostFQDN=master1.mycompany.org;KrbServiceName=hive")
dbGetQuery(connH, "show databases")
But the following code:
# Impala over the Simba driver -- same Kerberos settings, but this fails
drvI <- JDBC(driverClass = "com.simba.impala.jdbc4.Driver",
             classPath = normalizePath(list.files("Drivers/BO-Simba/BO_Drivers/impala10simba4/",
                                                  pattern = ".jar$", full.names = TRUE, recursive = TRUE)))
connI <- dbConnect(drvI, "jdbc:impala://slave1.mycompany.org:21050;AuthMech=1;KrbRealm=MYREALM.COM;KrbHostFQDN=master1.mycompany.org;KrbServiceName=impala")
gives the error:
[Simba][ImpalaJDBCDriver](500164) Error initialized or created transport for authentication: Unable to connect to server
Kindly help if you know the reason. I have not enabled SSL in the cluster. I have Kerberos and Sentry on CDH 5.9 [OS: RedHat 6]. The client is, as of now, one of the nodes in the cluster [minimal firewall intervention].
Hive works, but Impala gives this problem. I have tried with the Cloudera drivers too [again, Hive works, Impala does not].
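As a basic sanity check before digging into Kerberos, it may help to confirm from the client that the Impala port is even reachable. A plain-TCP sketch in base R (the hostnames and ports are the ones from the connection strings above; no JDBC involved):

# Returns TRUE if a TCP connection to host:port can be opened.
check_port <- function(host, port) {
  tryCatch({
    s <- socketConnection(host = host, port = port,
                          blocking = TRUE, open = "r+b", timeout = 5)
    close(s)
    TRUE
  }, error = function(e) FALSE, warning = function(w) FALSE)
}

check_port("myserver.mycompany.org", 10000)  # HiveServer2 -- connects fine
check_port("slave1.mycompany.org", 21050)    # Impala -- if FALSE, it is a network/port issue, not auth

If the Impala port is reachable, the (500164) transport error may point at a TLS/SASL transport mismatch between client and server settings rather than the network.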
05-27-2016 12:39 AM
I was able to recover by changing the cluster ID in the VERSION file stored locally on the disk of each node.
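For anyone hitting the same thing, a minimal sketch of that edit in R (the path and ID below are placeholders; the real directories come from dfs.namenode.name.dir / dfs.datanode.data.dir, and the HDFS daemons should be stopped first):

# Sketch only: run on each node with the HDFS daemons stopped.
# "/dfs/dn/current/VERSION" and "CID-xxxxxxxx" are placeholders.
version_file <- "/dfs/dn/current/VERSION"
v <- readLines(version_file)
v <- sub("^clusterID=.*$", "clusterID=CID-xxxxxxxx", v)
writeLines(v, version_file)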
05-27-2016 12:30 AM
Michalis wrote: Did you fail to upgrade Cloudera Manager Server and Agents?
Yes.
What version of Cloudera Manager are you looking to upgrade to?
The latest [right now I'm on Version: Cloudera Express 5.7.0 (#76 built by jenkins on 20160401-1334 git: ec0e7e69444280aa311511998bd83e8e6572f61c)].
What is the value of 'com.cloudera.cmf.db.type' in the /etc/cloudera-scm-server/db.properties file?
It is com.cloudera.cmf.db.type=postgresql.
05-12-2016 11:39 PM
Hi all, I'm in serious trouble. I was working on this setup:
Operating System: CentOS 6.7 (Final)
Cluster: 9-node cluster
Hadoop Distribution: Cloudera, CDH 5.4.2, Parcels
HDFS Capacity: 2.7 TB
YARN Configuration: 56 v-cores and 70 GB of memory

Today I tried to upgrade the parcel. Here in my organization there is a proxy; I made the change in CSM, and I downloaded and successfully upgraded to CDH 5.4.10 [CDH-5.4.10-1.cdh5.4.10.p0.16]. Everything was working fine. But then I tried to upgrade Cloudera Manager itself. Unfortunately, after that I failed to install/upgrade the Cloudera agent on the nodes. So I followed, somewhat blindly, this post, and ran:

rm -vRf /etc/yum.repos.d/cloudera* /etc/cloudera-*
rm -vRf /usr/share/cmf /var/lib/cloudera* /var/cache/yum/cloudera*
rm -vRf /var/log/cloudera-*
yum remove cloudera*
yum clean all

Now I'm completely lost: I was not able to install or upgrade CSM. I have set up the proxy but it is still failing. I want to know whether I'll be able to recover the existing HDFS data. Kindly tell me. I have not deleted the namenode, secondary namenode, or datanode files from local disk.