Member since
08-30-2013
19
Posts
0
Kudos Received
0
Solutions
11-03-2013
09:24 PM
Hi, I installed CDH4.3 via Cloudera Manager 4.6.3 about three months ago. The Cloudera Manager web page reports these component versions:

- Cloudera Manager Agent: Not applicable (version 4.6.3)
- Hadoop: CDH4, 2.0.0+1367
- Hive: CDH4, 0.10.0+134

I can use HiveServer2. But https://cwiki.apache.org/confluence/display/Hive/Setting+up+HiveServer2 says: "Introduced in Hive version 0.11". So one source says HiveServer2 was introduced in Hive 0.11, yet I am using HiveServer2 with Hive 0.10. Which is correct?

So far I have run into many issues with HiveServer2 over ODBC (timeouts or all kinds of exceptions when retrieving large result sets from Hive). I just want to know: is this Hive version 0.10.0+134 production ready with HiveServer2, or should I upgrade to Hive 0.11 or even Hive 0.12? Thanks.
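A quick way to read the CDH version string above (a sketch, assuming the `<upstream>+<patch count>` convention that matches the values shown): the `+134` suffix marks Cloudera patches applied on top of Apache Hive 0.10.0, which is how a feature such as HiveServer2 can show up in a CDH release before the upstream version that introduced it.

```shell
# Split a CDH component version of the form "<upstream>+<patches>".
# "0.10.0+134" is the Hive version from the CM page above.
cdh_hive_version="0.10.0+134"
upstream="${cdh_hive_version%%+*}"   # part before the '+'
patches="${cdh_hive_version##*+}"    # part after the '+'
echo "upstream=$upstream patches=$patches"
# -> upstream=0.10.0 patches=134
```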
Labels:
- Apache Hadoop
- Apache Hive
- Cloudera Manager
10-31-2013
08:41 PM
I found it under /var/run/.... Thanks. Although I still don't know where it is defined that the datanode service uses /var/run/cloudera-scm-agent/process/.../hdfs-site.xml as its configuration file when the service starts.
10-31-2013
04:14 AM
@smark, thanks for your informative reply. Yes, this is "Overriding Configuration Settings": I just changed the value on one instance of the datanode role group. Following Services > hdfs > Instances > [click on a DataNode from the list] > Processes, then "Show" under "Configuration Files/Environment", I clicked hdfs-site.xml and found the updated value in that file. But the hdfs-site.xml hyperlink points to the remote CM machine, not the local one. So my questions are:

- Is this hdfs-site.xml on the local machine or the remote one?
- If it is local, where is it? I cannot find it. I assumed it would be /etc/hadoop/conf/hdfs-site.xml, but that is not the case.
- If it is remote (the hyperlink points to the remote CM node), that is a big surprise to me: can the datanode service really use a remote hdfs-site.xml as its configuration file to start? And I cannot even find the file on the remote CM machine; is it only in memory?

Thanks.
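For reference, a minimal sketch of how to hunt for the live hdfs-site.xml on the datanode itself, assuming (as the later reply in this thread suggests) the Cloudera Manager agent materializes each role's configuration under /var/run/cloudera-scm-agent/process/. The process directory name `123-hdfs-DATANODE` and the data-dir value are made-up illustrations; the block simulates that layout so it is runnable anywhere, and on a real node you would point ROOT at /var/run/cloudera-scm-agent/process instead:

```shell
# Simulate the agent's per-process config directory so the search below
# runs anywhere; on a real datanode set ROOT=/var/run/cloudera-scm-agent/process.
ROOT="$(mktemp -d)"
PROC_DIR="$ROOT/123-hdfs-DATANODE"        # hypothetical process directory name
mkdir -p "$PROC_DIR"
cat > "$PROC_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/1/dfs/dn</value>
  </property>
</configuration>
EOF

# Find every hdfs-site.xml the agent has written and show the overridden value.
find "$ROOT" -name hdfs-site.xml 2>/dev/null | while read -r f; do
  echo "== $f =="
  grep -A1 'dfs.datanode.data.dir' "$f"
done
```

In other words, the file is local: as far as the CM architecture goes, the agent fetches the configuration from the CM server and rewrites this per-process copy each time the role starts, which would explain why the override appears neither in /etc/hadoop/conf nor as a permanent file on the CM host.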
10-29-2013
08:52 PM
The cluster is on CentOS 6.
10-29-2013
08:51 PM
Hi, I have updated "dfs.datanode.data.dir" on one datanode in CM 4.6.3. After rebooting the whole cluster, I can still see the new value in the CM Configuration tab, but I cannot find it in any hdfs-site.xml on that datanode. This new value must be stored somewhere on the datanode, right? So, could you tell me which configuration file stores this value? Thanks.
Labels:
- Cloudera Manager
- HDFS
10-09-2013
01:52 AM
Great! I will try it. Many thanks.
09-24-2013
01:33 AM
I am using HiveServer1. OK, I will try HiveServer2. Thanks.
09-24-2013
01:28 AM
Error message when selecting from the SQL Server view "tblFactValidationErrors_viewFromImpala":

OLE DB provider "MSDASQL" for linked server "ImpalaDW" returned message "Requested conversion is not supported.".
Msg 7341, Level 16, State 2, Line 2
Cannot get the current row value of column "[MSDASQL].sessionid" from OLE DB provider "MSDASQL" for linked server "ImpalaDW".

Background:
1. The Hive table "tblFactValidationErrors_view4impala" works fine: I can select from it in impala-shell.
2. A linked server "ImpalaDW" is set up, based on the ODBC driver for Impala v2.5.2.1002.
3. On SQL Server 2008R2 I created: create view [dbo].[tblFactValidationErrors_viewFromImpala] AS SELECT * FROM OPENQUERY(ImpalaDW, 'select * from tblFactValidationErrors_view4impala');
4. Selecting from the SQL Server view "tblFactValidationErrors_viewFromImpala" works for int columns, but selecting varchar/text columns produces the error above.

Questions: Is this a bug in the ODBC driver for Impala, meaning the driver does not support SQL Server 2008R2? Or is there a workaround? If it is a bug, it is a blocker: string/varchar/text is one of the most basic types, and we cannot move on without it.

There is another report of this issue: https://groups.google.com/a/cloudera.org/forum/#!topic/impala-user/fvGRgL3lSU4 (search for "a problem with strings"): "This is only affecting SQL Server linked servers, I believe it's because it expects VARCHAR(4000) but IMPALA string size is INT_MAX (32,767 f)."
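One workaround consistent with the VARCHAR(4000) remark in that thread (a sketch, not verified against SQL Server here; the substr() cap, the 4000 limit, and restricting the query to the sessionid column are all assumptions) is to bound each string column inside the pass-through query, so the OLE DB layer no longer sees an unbounded Impala STRING. The block below just assembles the T-SQL text you would run on the SQL Server side:

```shell
# Build the pass-through query with each string column width-capped via
# Impala's substr(). The 4000-character cap mirrors the VARCHAR(4000) limit
# mentioned in the linked thread; column/view names are the ones from this post.
inner="select substr(sessionid, 1, 4000) as sessionid from tblFactValidationErrors_view4impala"
tsql="SELECT * FROM OPENQUERY(ImpalaDW, '${inner}');"
echo "$tsql"
```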
Labels:
- Apache Hive
- Apache Impala
09-12-2013
10:01 AM
Sometimes this error message appears on the Thrift server console:

FAILED: Error in semantic analysis: Lock manager could not be initialized, check hive.lock.manager Check hive.zookeeper.quorum and hive.zookeeper.client.port

Background:
1. SQL Server's linked server ----odbc----> Hive Thrift server.
2. Running plain "select *" from SQL Server against Hive tables/views gives no error; the error only appears when SQL Server builds a cube.

Any suggestions?
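The error suggests the ZooKeeper-backed lock manager is enabled (hive.support.concurrency) but Hive cannot resolve or reach the quorum. A hive-site.xml fragment along these lines is what the message asks you to check (host names and port below are placeholders, not values from this cluster; on CDH these would normally be set through Cloudera Manager rather than by editing the file directly):

```xml
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.lock.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
</property>
```

If concurrency locking is not needed at all, setting hive.support.concurrency to false sidesteps the lock manager entirely, at the cost of losing table/partition locking for concurrent writers.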
Labels:
- Apache Hive
- Apache Zookeeper