Member since: 08-13-2019
Posts: 10
Kudos Received: 2
Solutions: 0
04-02-2023
12:57 AM
@araujo We are currently facing exactly the same issue, and our Infra team has asked us to switch from rc4-hmac to AES encryption. I thought your workaround Python script would be really helpful, but unfortunately the prerequisite modules appear to be insecure, and I am now not sure whether I can still use this script without the prerequisite "impacket" module and its dependent modules. I would appreciate any suggestions or solutions. Note: I am on RHEL 7.9, and upgrading to 8 or above is not an option at the moment.
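If impacket is off the table, one low-dependency check that still works on RHEL 7.9 is to inspect which encryption types a keytab already carries, using the stock MIT Kerberos `klist -kte` plus a small parser. This is not a replacement for the workaround script, just a way to confirm whether AES keys are already present; a minimal sketch (the keytab path is hypothetical):

```python
import re
import subprocess

def parse_klist_enctypes(klist_output):
    """Extract the parenthesised enctype names from `klist -kte` output.

    A typical entry line looks like:
       2 04/02/2023 00:57:00 hdfs/host.example.com@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
    """
    return re.findall(r"\(([\w-]+)\)\s*$", klist_output, flags=re.MULTILINE)

def keytab_enctypes(keytab_path):
    """Return the set of encryption types present in a keytab,
    as reported by MIT Kerberos `klist -kte` (assumes klist is on PATH)."""
    out = subprocess.run(
        ["klist", "-kte", keytab_path],
        capture_output=True, text=True, check=True
    ).stdout
    return set(parse_klist_enctypes(out))

# Hypothetical usage:
# keytab_enctypes("/etc/security/keytabs/hdfs.keytab")
```

If the result already contains aes256-cts-hmac-sha1-96 alongside arcfour-hmac, dropping rc4 from the allowed enctypes may be all that is needed for that principal.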
11-03-2020
07:17 PM
Hi All, I have two Cloudera clusters, each with one HBase table. I first write to cluster1, and my data is then replicated to cluster2 with WAL replication. Cluster1 has 3 region servers and cluster2 has 80. I am using org.apache.hadoop.hbase.spark.HBaseContext.bulkPut to write data to HBase.
My table has two column families. Writes/flushes succeed for one column family in the source cluster (cluster1), while the other column family logs no errors or warnings about flush/write failures in the region server logs. The weird part is that writes for both column families reach the WAL successfully and replicate successfully to the destination cluster (cluster2), where I can see data in both column families. In the source, however, I believe the writes reach the WAL only; somehow they are not picked up by the memstore and hence are not flushed for one of the column families (it works for one column family but not the other). Writing data to that column family manually from the hbase shell (put command) works fine without any issues.
What I have tried so far to fix this:
1. hbase hbck -details: no inconsistencies found.
2. Used the HBCK2 tool to fix the HDFS filesystem for the HBase tables/HDFS directories.
3. Dropped the table in the source, exported a snapshot from the destination cluster (which has data for both column families), and reran my batch job. The writes still go to one column family only; the other gets no write requests.
4. Tried to tune HBase write performance by changing several parameters from the link below. No luck. https://community.cloudera.com/t5/Community-Articles/Tuning-Hbase-for-optimized-performance-Part-2/ta-p/248152
Not sure if it's a bug in my current HBase version; it had been working fine for over a year. HBase version: 2.1.0, CDH 6.2.1.
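One way to narrow down which family is actually receiving cells in the source cluster is to scan a bounded sample of rows from the hbase shell and tally cells per column family. A minimal parser sketch for the shell's scan output (it assumes the usual `column=family:qualifier` cell format the shell prints):

```python
import re
from collections import Counter

# Each cell line from `hbase shell` scan output looks roughly like:
#  rowkey  column=cf:qualifier, timestamp=..., value=...
SCAN_CELL = re.compile(r"column=([^:]+):")

def cells_per_family(scan_output):
    """Count cells per column family in captured `hbase shell` scan output.

    A family that shows zero cells here, while the same scan on the
    destination cluster shows data, confirms the memstore/flush gap is
    on the source side for that family only.
    """
    return Counter(SCAN_CELL.findall(scan_output))
```

Running something like `echo "scan 'mytable', {LIMIT => 100}" | hbase shell` and feeding the captured text to this function gives a quick per-family tally without touching the cluster configuration.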
Labels:
- Apache HBase
10-29-2019
05:40 PM
Hi Cloudera Community,
I have just completed installation of CDSW 1.6.1 in my DEV cluster.
I just want to know the initial "admin" account credentials.
I have tried below listed username/password combinations so far, but no luck:
1. admin/admin
2. cloudera/cloudera
3. admin/cloudera
Regards,
Nanda
08-14-2019
06:46 PM
1 Kudo
Hi Andre, Your solution is right, but my situation was a little different. Below are the checks and the fix I did, with Cloudera Support helping me through the process:
1. From the HiveServer2 logs we found that one of the HiveServer2 instances was not talking to the ZooKeeper quorum (only when querying HBase data).
2. Installed the HBase gateway role on all the Hue instances and HiveServer2 instances.
3. Restarted the HBase services and deployed the client configuration.
4. Restarted the HiveServer2 instance that had been trying to connect to localhost:2181 as the ZooKeeper quorum.
Then I submitted the query from Beeline and Hue. Everything worked as expected this time.
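To verify from a HiveServer2 host that each quorum member is actually reachable (step 1 above), the ZooKeeper "ruok" four-letter command can be probed directly over a socket. A minimal sketch; note that on newer ZooKeeper releases the four-letter words may need to be enabled via the 4lw.commands.whitelist setting, and the host names below are placeholders:

```python
import socket

def interpret_ruok(reply):
    """ZooKeeper answers b'imok' to 'ruok' when it is serving requests."""
    return reply.strip() == b"imok"

def zk_is_ok(host, port=2181, timeout=3.0):
    """Send the ZooKeeper 'ruok' four-letter command; return True only if
    the server answers 'imok'. Connection failures count as not-ok."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"ruok")
            return interpret_ruok(s.recv(16))
    except OSError:
        return False

# Hypothetical usage against a five-node quorum:
# for host in ["ZK1", "ZK2", "ZK3", "ZK4", "ZK5"]:
#     print(host, zk_is_ok(host))
```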
08-13-2019
09:26 PM
This is how my zoo.cfg looks:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/bdp/znode/cdh
dataLogDir=/bdp/znode/cdh
clientPort=2181
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=60000
autopurge.purgeInterval=24
autopurge.snapRetainCount=5
quorum.auth.enableSasl=false
quorum.cnxn.threads.size=20
server.1=ZK1:3181:4181
server.2=ZK2:3181:4181
server.3=ZK3:3181:4181
server.4=ZK4:3181:4181
server.5=ZK5:3181:4181
leaderServes=yes
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
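As a quick sanity check on a config like the one above, a small parser can confirm the client port and count the declared ensemble members (an odd count, such as the 5 here, is what ZooKeeper needs for a clean quorum majority). A minimal sketch:

```python
def parse_zoo_cfg(text):
    """Parse zoo.cfg-style key=value lines into a dict, skipping
    blank lines and comments."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip()
    return cfg

def quorum_servers(cfg):
    """Return the ensemble members declared via server.N entries."""
    return {k: v for k, v in cfg.items() if k.startswith("server.")}
```

Feeding it the contents of /bdp/znode/cdh's zoo.cfg would let a script assert, for example, that clientPort is 2181 and that exactly five server.N entries exist before restarting the ensemble.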
08-13-2019
09:23 PM
Hi Eric, I just checked hbase-site.xml and hive-site.xml from CM -> Hue -> Instances -> Processes tab as well. They all look good. Below is an example from hive-site.xml; it is the same in hbase-site.xml. This problem is not happening on all 3 Hue web UIs: for some users it works in Hue1 and not in the other two, and for some users it works in Hue1 and Hue2 but not in Hue3. I have a load-balanced Hue (the recommended one to use among the 3 Hue instances), and there it is not working for 90% of the users, including my ID. Could we be hitting the maximum client connections to ZooKeeper (maxClientCnxns=60 in my cluster)? If that were the case, I would expect errors in the ZooKeeper logs such as "too many connections from <IP_address> - max is 60", but I don't see any. The error is the same for all users: unable to submit the MapReduce job ("can't get locations of HBase regions/data"), with the client trying to connect to ZooKeeper on localhost:2181 instead of the actual ZooKeeper nodes.
<property>
<name>hive.zookeeper.quorum</name>
<value>ZK1,ZK2,ZK3,ZK4,ZK5</value>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
</property>
08-13-2019
08:56 PM
Hi All,
I am using the Hue Hive editor to submit a query against an external table and a view created on top of an HBase table. I have 3 instances of Hue running in my cluster (one of them acts as the load balancer).
1. When I submit a query from Beeline on this external table and view, it works perfectly fine.
2. When I submit the same query from Hue, it doesn't work (even a simple query like: select * from hbase_view limit 10).
3. hbase-site.xml and hive-site.xml both have the ZooKeeper quorum defined correctly (I have 5 ZooKeeper server instances, so the quorum properties list 5 nodes). The client port, 2181, also looks fine.
But I am getting the error below in the hive-server2.log file when a query is submitted from Hue.
Instead of trying to connect to one of the ZooKeeper hosts, it is trying to connect to localhost4/127.0.0.1:2181. This is not the case when the query runs successfully; then it picks any of the ZooKeeper nodes for the client connection.
2019-08-13 22:10:23,353 INFO org.apache.zookeeper.ClientCnxn: [HiveServer2-Background-Pool: Thread-8315-SendThread(localhost4:2181)]: Opening socket connection to server localhost4/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2019-08-13 22:10:23,353 WARN org.apache.zookeeper.ClientCnxn: [HiveServer2-Background-Pool: Thread-8315-SendThread(localhost4:2181)]: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2019-08-13 22:10:27,855 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: [HiveServer2-Background-Pool: Thread-8315]: ZooKeeper getData failed after 4 attempts
2019-08-13 22:10:27,855 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: [HiveServer2-Background-Pool: Thread-8315]: hconnection-0x5bd726980x0, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
Eventually the query fails without even generating a mapreduce application/job:
2019-08-13 22:10:46,572 ERROR org.apache.hadoop.hive.ql.exec.Task: [HiveServer2-Background-Pool: Thread-8315]: Job Submission failed with exception 'org.apache.hadoop.hbase.client.RetriesExhaustedException(Can't get the locations)'
org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:329)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:157)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
Why is it trying to connect to localhost:2181 instead of the ZooKeeper hosts? Any solutions for this problem?
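A client falling back to localhost:2181 usually means the configuration file actually on that process's classpath is missing or not carrying the quorum property, even if the copies viewed in Cloudera Manager look correct. One way to sanity-check is to read the exact hive-site.xml/hbase-site.xml from the running process's configuration directory and print the effective value. A minimal sketch (the property and file names are the ones from this thread; the path is an assumption):

```python
import xml.etree.ElementTree as ET

def site_property(xml_text, name):
    """Return the value of one <property> from a Hadoop-style *-site.xml,
    or None if the property is absent (which would explain a fall-back
    to the localhost default)."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Hypothetical usage on the HiveServer2 host:
# with open("/var/run/cloudera-scm-agent/process/NNN-hive-HIVESERVER2/hive-site.xml") as f:
#     print(site_property(f.read(), "hive.zookeeper.quorum"))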
Regards,
Nanda