Member since: 10-04-2017
Posts: 113
Kudos Received: 11
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
| 18164 | 07-03-2019 08:34 AM
| 2081 | 10-31-2018 02:16 AM
| 12788 | 05-11-2018 01:31 AM
| 8414 | 02-21-2018 03:25 AM
| 2918 | 02-21-2018 01:18 AM
11-28-2018
02:08 AM
Hi, How do we know how many nodes in the cluster count toward licensing? For example, edge nodes are not used for billing. We have clusters on CDH 5.5.4 and CDH 5.14.2.
Labels:
- Cloudera Manager
11-27-2018
02:54 AM
1 Kudo
CFile is an on-disk columnar storage format that holds data and associated B-tree indexes. See https://github.com/cloudera/kudu/blob/master/docs/design-docs/cfile.md
10-31-2018
02:16 AM
This is because a Navigator upgrade generally takes a long time, depending on the number of objects and relations you have. Increasing the Navigator heap size can help. The calculation for the required heap is available in the Cloudera documentation.
10-27-2018
12:37 PM
1 Kudo
Hi, We often see NOT_LEADER_FOR_PARTITION in Kafka. On what basis does Kafka leader election happen? What are all the reasons that can trigger a Kafka partition leader election?
Labels:
- Apache Kafka
09-12-2018
02:43 AM
@Harsh J Thanks for the detailed answer. Our cluster currently holds very little data and none of the DataNodes has used even 10% of its storage yet, but we expect usage to reach at least 60% within a week. Would running the HDFS balancer and compaction make any difference in this case?
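For reference, a minimal sketch of how a balancer run could be kicked off on a CDH gateway host; the threshold value and the `sudo -u hdfs` wrapper are assumptions for illustration, not something stated in this thread:

```shell
# Sketch only: the HDFS balancer moves blocks until every DataNode's
# utilization is within THRESHOLD percent of the cluster-wide average.
THRESHOLD=10   # assumed value; tune to your cluster
CMD="hdfs balancer -threshold ${THRESHOLD}"
# On the cluster you would typically run it as the hdfs user:
#   sudo -u hdfs ${CMD}
echo "would run: ${CMD}"
```

With storage this empty, all DataNodes are already close to the average, so the balancer would likely find little to move until real data lands.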
09-11-2018
10:55 AM
1 Kudo
Hi, What is the procedure to follow when adding an additional RegionServer to an existing cluster? We use CDH 5.14.
Labels:
- Apache HBase
- Cloudera Manager
08-31-2018
06:12 AM
Hi, We have recently upgraded from 5.11.2 to 5.14.2. The upgrade seemed fine until we identified that many databases were not visible/loaded in Navigator. The logs show the following; any related pointers are welcome.

2018-08-31 10:12:17,344 ERROR com.cloudera.nav.hive.extractor.AbstractHiveExtractor [CDHExecutor-0-CDHUrlClassLoader@7fbf5fdd]: Failed to extract database dummy_database with error: java.net.SocketException: Broken pipe (Write failed)
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed)
    at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
    at org.apache.thrift.transport.TTransport.write(TTransport.java:107)
    at org.apache.thrift.transport.TSaslTransport.writeLength(TSaslTransport.java:391)
    at org.apache.thrift.transport.TSaslTransport.flush(TSaslTransport.java:499)
    at org.apache.thrift.transport.TSaslClientTransport.flush(TSaslClientTransport.java:37)
    at org.apache.hadoop.hive.thrift.TFilterTransport.flush(TFilterTransport.java:77)
    at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
    at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_database(ThriftHiveMetastore.java:664)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_database(ThriftHiveMetastore.java:656)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1213)
    at com.cloudera.nav.hive.extractor.AbstractHiveExtractor.extractDatabase(AbstractHiveExtractor.java:147)
    at com.cloudera.nav.hive.extractor.AbstractHiveExtractor.extractDatabases(AbstractHiveExtractor.java:133)
    at com.cloudera.nav.hive.extractor.HiveExtractor.run(HiveExtractor.java:63)
    at com.cloudera.nav.hive.extractor.AbstractHiveExtractor.run(AbstractHiveExtractor.java:118)
    at com.cloudera.nav.hive.extractor.HiveExtractorShim.run(HiveExtractorShim.java:35)
    at com.cloudera.cmf.cdhclient.CdhExecutor$RunnableWrapper.call(CdhExecutor.java:221)
    at com.cloudera.cmf.cdhclient.CdhExecutor$RunnableWrapper.call(CdhExecutor.java:211)
    at com.cloudera.cmf.cdhclient.CdhExecutor$CallableWrapper.doWork(CdhExecutor.java:236)
    at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper$1.run(CdhExecutor.java:189)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at com.cloudera.cmf.cdh5client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:44)
    at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper.doWork(CdhExecutor.java:186)
    at com.cloudera.cmf.cdhclient.CdhExecutor$1.call(CdhExecutor.java:125)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Broken pipe (Write failed)
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)

Thanks
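A broken pipe during extraction means the Hive Metastore side closed the connection mid-write. As a hedged first check (the hostname below is a placeholder; 9083 is the default Hive Metastore Thrift port), one could confirm the Navigator Metadata Server host can still reach the metastore:

```shell
# Placeholder host; substitute your actual Hive Metastore host.
HMS_HOST="metastore-host.example.com"
HMS_PORT=9083   # default Hive Metastore Thrift port
# On the Navigator host you would run:
#   nc -zv ${HMS_HOST} ${HMS_PORT}
echo "check: nc -zv ${HMS_HOST} ${HMS_PORT}"
```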
Labels:
- Apache Hive
- Cloudera Navigator
08-30-2018
02:02 AM
Hi, Try only the following in your consumer properties file:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
group.id=testgroup
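As a sketch, those properties could be written to a file and passed to the console consumer; the broker address and topic name below are placeholders, and a valid Kerberos ticket (kinit) is assumed:

```shell
# Write the consumer properties from the post to a file.
cat > consumer.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
group.id=testgroup
EOF
# On a Kerberized cluster (after kinit) you would run something like:
#   kafka-console-consumer --bootstrap-server broker1:9092 \
#     --topic test --consumer.config consumer.properties
echo "wrote consumer.properties"
```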
06-21-2018
01:30 AM
Hi, We have a CDH cluster on 5.14.2. Is it compatible with KTS 3.8.x? Due to some limitations, we are unable to use KTS 5.14.0 in this cluster. The Cloudera documentation shows KTS compatibility with KMS and CM, but not with CDH.
Labels:
- Cloudera Navigator