Member since: 08-16-2015
Posts: 97
Kudos Received: 16
Solutions: 12
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 884 | 07-11-2021 08:05 PM |
| | 1655 | 07-11-2021 06:37 PM |
| | 39409 | 06-04-2021 12:01 AM |
| | 1040 | 06-03-2021 11:43 PM |
| | 3422 | 04-26-2021 06:58 PM |
06-04-2021
12:01 AM
1 Kudo
Hello, The column type of the table must match the column type stored in the Parquet file, otherwise you will get the error you are seeing. If you prefer to use string instead of double for that attribute, one option is to recreate the Parquet file with that column typed as string.
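If it helps, here is a minimal PySpark sketch of that approach. The file paths and the column name "amount" are placeholders (not taken from your setup), so adjust them to your data.

```python
# Minimal sketch: rewrite a Parquet file with one column cast to string.
# The paths and the column name "amount" are placeholders -- adjust to your data.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("parquet-retype-example").getOrCreate()

# Read the original Parquet data (the column is currently double).
df = spark.read.parquet("/data/input/original_parquet")

# Cast the column to string so it matches the table's column type.
df_retyped = df.withColumn("amount", col("amount").cast("string"))

# Write a new copy of the data with the updated schema.
df_retyped.write.mode("overwrite").parquet("/data/output/retyped_parquet")
```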
06-03-2021
11:54 PM
Hello, You can take a look at the Cloudera Manager Agent logs to get more details about the issue. Reference: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/managing-clusters/topics/cm-manage-agent-logs.html
06-03-2021
11:43 PM
Hello, It is common for customers to run their preferred third-party components alongside a CDP cluster. In your case, you can deploy Flume on the edge nodes of the cluster.
05-16-2021
07:27 PM
1 Kudo
Hello, Since this is related to the load balancer, you can review your configuration:

# cluster load balancing properties #
nifi.cluster.load.balance.host=192.170.108.140
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=50
nifi.cluster.load.balance.max.thread.count=600
nifi.cluster.load.balance.comms.timeout=45 sec

For example, nifi.cluster.load.balance.connections.per.node is the maximum number of connections to create between this node and each other node in the cluster. If there are 5 nodes in the cluster and this value is set to 4, up to 20 socket connections will be established for load-balancing purposes (5 x 4 = 20). The default value is 4. You have set it to 50, so multiply 50 by the number of nodes in your cluster to see how many connections that allows. The remaining configuration details are described here: https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.5.1/nifi-system-properties/content/cluster_node_properties.html
05-13-2021
07:42 PM
Hello, Can you review this post and check whether the Hive namespace has been created in ZooKeeper? https://community.cloudera.com/t5/Support-Questions/How-to-resolve-Unable-to-read-HiveServer2-configs-from/td-p/233701
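As a quick way to check, here is a minimal sketch using the kazoo Python client. It assumes the default namespace /hiveserver2 (the value of hive.server2.zookeeper.namespace) and uses placeholder ZooKeeper hostnames, so adjust both for your cluster.

```python
# Minimal sketch: list HiveServer2 registrations in ZooKeeper using kazoo.
# Assumptions: the quorum hosts are placeholders, and /hiveserver2 is the
# default hive.server2.zookeeper.namespace -- adjust both as needed.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk-host-1:2181,zk-host-2:2181,zk-host-3:2181")
zk.start()
try:
    namespace = "/hiveserver2"
    if zk.exists(namespace):
        # Each child znode corresponds to a registered HiveServer2 instance.
        for child in zk.get_children(namespace):
            print(child)
    else:
        print(f"{namespace} does not exist - HiveServer2 may not have registered")
finally:
    zk.stop()
```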
05-13-2021
07:28 PM
Hello, Please check the instructions below (for the HDP platform) and make sure you are following the right steps: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/fault-tolerance/content/update_the_hive_metastore.html If you are using pure open source, you can seek help from the Apache Hive community forum.
05-13-2021
07:24 PM
Hello, Kafka follows a publish-and-subscribe model. If you plan to use Spark, you can have Spark subscribe to the topics and consume the messages. Here is one example using Spark Streaming: https://docs.cloudera.com/runtime/7.2.8/developing-spark-applications/topics/spark-streaming-example.html
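As a rough illustration (not the exact example from that link), here is a minimal PySpark Structured Streaming sketch that subscribes to a Kafka topic. The broker address and topic name are placeholders, and the spark-sql-kafka connector must be available on the classpath.

```python
# Minimal sketch: subscribe to a Kafka topic with Spark Structured Streaming.
# The broker address and topic name below are placeholders -- replace them
# with your own values.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-subscriber-example").getOrCreate()

# Read messages from Kafka as a streaming DataFrame.
messages = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-host:9092")
    .option("subscribe", "my-topic")
    .load()
)

# Kafka keys and values arrive as bytes; cast them to strings for readability.
decoded = messages.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

# Print each micro-batch to the console (for demonstration only).
query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```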
05-02-2021
11:49 PM
Hello, This is explained step by step in the documentation: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/security-ranger-authorization/topics/security-ranger-resource-policies-importing-exporting.html
05-02-2021
11:46 PM
Hello, Extracted from your log: "Incompatible sink and channel settings defined. sink's batch size is greater than the channels transaction capacity." In other words, the sink's batchSize must be less than or equal to the channel's transactionCapacity, so either lower the sink's batch size or raise the channel's transaction capacity in your Flume configuration.
05-02-2021
09:56 PM
1 Kudo
Hello, If the cluster is critical to your business, you should consider getting a subscription from Cloudera. For the issue you are facing, Cloudera can then create a patch by backporting the fix to an earlier CM version.