Member since
09-01-2020
291
Posts
16
Kudos Received
7
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 71 | 03-04-2024 07:58 AM
 | 377 | 11-15-2023 07:50 AM
 | 529 | 08-03-2023 04:54 AM
 | 1779 | 12-08-2022 03:44 AM
 | 1180 | 10-19-2022 10:51 AM
03-06-2024
03:07 AM
1 Kudo
Hello @pranoy, It may be that the user you are logging in with does not have access to some of the configuration files referenced in the Kafka command, so it cannot fetch the live brokers, which would explain this issue. Please verify the details used in the Kafka command and the user's privileges. If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you.
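As a quick first check, you can confirm that the OS user can actually read the client configuration file. This is a sketch: the path /etc/kafka/conf/client.properties is only an example, and the KAFKA_CLIENT_CONFIG variable is a hypothetical override, not a standard Kafka setting — substitute the file your Kafka command actually references.

```shell
# Hypothetical config path -- replace with the file referenced in your Kafka command.
CFG="${KAFKA_CLIENT_CONFIG:-/etc/kafka/conf/client.properties}"

# Check whether the current OS user can read the client configuration.
if [ -r "$CFG" ]; then
  RESULT="readable"
else
  RESULT="not readable"
fi
echo "config $CFG is $RESULT for user $(id -un)"
```

If the file reports "not readable", fix the file permissions or run the command as a user with access before debugging the broker connection itself.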
... View more
03-06-2024
02:58 AM
1 Kudo
Hello @BrianChan, For issues like this, we should check the health of the consumer offsets topic (__consumer_offsets) using the Kafka describe command, and check the min.insync.replicas setting of this topic in the describe output. It should be less than or equal to the topic's ISR count. For example, if the topic has a replication factor of 3, then min.insync.replicas should be 2 (or 1) to allow failover. If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you.
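A minimal sketch of the check described above. The broker address is a placeholder, so the describe command is shown as a comment; the runnable part only illustrates the failover rule from the reply (min.insync.replicas strictly less than the replication factor leaves room to lose a replica).

```shell
# Inspect the internal offsets topic (broker address is a placeholder):
#   kafka-topics --bootstrap-server broker1.example.com:9092 \
#     --describe --topic __consumer_offsets

# Rule from the reply: with RF=3, min.insync.replicas of 2 (or 1)
# still lets producers with acks=all succeed after losing one replica.
REPLICATION_FACTOR=3
MIN_ISR=2

if [ "$MIN_ISR" -lt "$REPLICATION_FACTOR" ]; then
  echo "failover possible: one replica can be lost"
else
  echo "risky: losing one replica blocks producers with acks=all"
fi
```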
... View more
03-06-2024
02:44 AM
2 Kudos
Hello @hegdemahendra, 1) Please refer to the following article on connecting to Kafka from NiFi: https://community.cloudera.com/t5/Community-Articles/Integrating-Apache-NiFi-and-Apache-Kafka/ta-p/247433 2) To isolate the issue, you can also try connecting to Kafka with the same settings from the NiFi node using the Kafka console commands. Please let us know if you still have any questions about this or face any issues; we will be happy to assist you. If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you.
... View more
03-04-2024
07:58 AM
1 Kudo
Hello @steinsgate, CDP Private Cloud Data Services will use a dedicated OCP cluster only, so it does not affect other services. If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you.
... View more
03-04-2024
07:03 AM
Hello @npdell, You can set up monitoring for this with the help of CM Alerts. Refer to the following article for more details: https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/monitoring-and-diagnostics/topics/cm-manage-alerts.html If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you.
... View more
12-29-2023
12:48 AM
1 Kudo
Hello @StanislavJ, The Linux kernel parameter vm.swappiness is a value from 0-100 that controls the swapping of application data (as anonymous pages) from physical memory to virtual memory on disk. The higher the value, the more aggressively inactive pages are swapped out of physical memory; the lower the value, the less they are swapped, forcing filesystem buffers to be emptied.

On most systems, vm.swappiness is set to 60 by default. This is not suitable for Hadoop clusters, because processes are sometimes swapped even when enough memory is available. That can cause lengthy garbage collection pauses for important system daemons, affecting stability and performance. Cloudera recommends setting vm.swappiness to a value between 1 and 10, preferably 1, for minimum swapping, on systems where the RHEL kernel is 2.6.32-642.el6 or higher.

To view your current setting for vm.swappiness, run: cat /proc/sys/vm/swappiness

To set vm.swappiness to 1, run: sudo sysctl -w vm.swappiness=1

For an overview of the alerting side: swapping alerts are generated in Cloudera Manager when host swapping or role process swap usage exceeds a defined threshold. A warning threshold of 500 MiB means that any swap usage beyond this on a given host generates a warning alert; if the critical threshold is set to "Any", a critical alert is generated even if a small amount of swapping occurs. The swap memory usage threshold can be set at the host level or at the process/service level.

To set the threshold at the process level: from the CM UI >> Clusters >> YARN >> Configuration >> search for "Process Swap Memory Thresholds" >> (for the Resource Manager) set Warning and Critical >> select Specify >> enter the value (in Bytes/KB/MB/GB) >> Save Changes.

You can increase the value, and we would further suggest monitoring the cluster's swap usage and adjusting the values accordingly.
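The commands above can be put together as a small sketch, assuming a Linux host (the write and persist steps need root, so they are shown as comments; persisting via /etc/sysctl.conf is a common convention, though some distributions prefer a drop-in file under /etc/sysctl.d/):

```shell
# Read the current swappiness value (works without root on Linux):
CURRENT=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness is currently $CURRENT"

# Set it to 1 for the running kernel (requires root):
#   sudo sysctl -w vm.swappiness=1

# Persist the setting across reboots (requires root):
#   echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf
#   sudo sysctl -p
```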
If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you, Babasaheb Jagtap
... View more
11-15-2023
07:50 AM
1 Kudo
Hello @one4like, Pushing every local file of a job to HDFS will cause issues, especially in larger clusters. Local directories are used as a scratch location: mapper spills are written there, and moving that over the network would have performance impacts. The local storage of scratch and shuffle files exists exactly to prevent this. It also has security impacts, as the NM would then push the keys for each application onto a network location that could be accessible to others. A far better solution is to use the fact that yarn.nodemanager.local-dirs can point to multiple mount points, thus spreading the load over all of them. So the answer is no: local-dirs must contain a list of local paths. There is an explicit check in the code which only allows the local FS to be used; see: https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java#L224 Please note that an exception is thrown when a non-local file system is referenced. If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you. Bjagtap
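The multiple-mount-point approach can be sketched as a yarn-site.xml fragment. The /dataN paths are illustrative only — use whatever local mount points exist on your NodeManager hosts:

```xml
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- Example mount points; each entry must be on the local filesystem -->
  <value>/data1/yarn/nm,/data2/yarn/nm,/data3/yarn/nm</value>
</property>
```

In CM-managed clusters this is set through the YARN configuration page ("NodeManager Local Directories") rather than by editing yarn-site.xml directly.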
... View more
08-03-2023
04:54 AM
1 Kudo
Hello @little-jack, Thank you for contacting Cloudera Support. Yes, "Docker on YARN" is supported in CDP. If you are trying it in CDH, you can try adding the required/custom settings in CM >> YARN >> YARN Service Advanced Configuration Snippet (Safety Valve) for yarn-site.xml and restart YARN. If that does not work, you can continue making the changes directly in yarn-site.xml and container-executor.cfg on the node via SSH, and set chattr on these files so the changes are not reverted when you restart YARN. However, if you later have to make any changes in CM >> YARN, or in any other services dependent on YARN, you should remove the chattr attribute >> make the changes >> restart YARN, and then tweak the Docker on YARN settings again. That said, we would suggest upgrading CDH to CDP, where these issues are fixed and you will be able to use the latest features added in CDP 7.1.8. If this information helped you, it would be appreciated if you would take a moment to click on KUDOS 🙂 Thank you.
... View more
07-17-2023
11:00 AM
Hello @Timo & @George-Megre, If you are facing an issue after enabling Kerberos and are unable to produce/consume, we would suggest following the steps below and letting us know how it goes. First, make sure that all partitions are in a healthy state using the Kafka describe command and that there are no warnings/alerts for Kafka in CM. If there is no alert for Kafka, follow these steps to connect to the Kafka topic:

1) kinit with the keytab and make sure that the user has the required permissions enabled in Ranger.

2) Create a jaas.conf file with the following contents:

vi /tmp/jaas.conf

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  renewTicket=true
  serviceName="kafka";
};
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  renewTicket=true
  serviceName="zookeeper";
};

3) Export the JAAS configuration (replace /tmp/jaas.conf with the complete path to your file):

export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/jaas.conf"

4) Create the client.properties file containing the following properties:

vi /tmp/client.properties

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

5) Start a console producer:

kafka-console-producer --broker-list <broker1.test.com:6667,broker2.test.com:6667> --topic <topic-name> --producer.config /tmp/client.properties

6) Start a console consumer:

kafka-console-consumer --bootstrap-server <broker1.test.com:6667,broker2.test.com:6667> --topic <topic-name> --consumer.config /tmp/client.properties --from-beginning

Note: Use the complete hostnames of the brokers, and replace the topic name and the client.properties path in the above commands.

If you found this response helpful, please take a moment to log in and click on KUDOS 🙂 & "Accept as Solution" below this post. Thank you.
... View more
04-24-2023
06:05 AM
Hello @AndreyKravtsov, Please refer to the article below on integrating Apache NiFi and Apache Kafka [1]: [1] https://community.cloudera.com/t5/Community-Articles/Integrating-Apache-NiFi-and-Apache-Kafka/ta-p/247433 This example uses the PLAINTEXT Kafka protocol, but it looks like you are using SSL/TLS for Kafka. You should check whether you are using the SSL or SASL_SSL protocol for Kafka (CM >> Kafka configuration), create a StandardSSLContextService controller service, and update it with the Kafka keystore and truststore details. You can refer to the article below for details [2]: [2] https://community.cloudera.com/t5/Support-Questions/Need-help-with-SSL-config-in-Nifi-ConsumeKafka/td-p/320594 Additionally, refer to the following component docs for more details: [3] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.17.0/org.apache.nifi.processors.standard.InvokeHTTP/index.html [4] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-ssl-context-service-nar/1.20.0/org.apache.nifi.ssl.StandardSSLContextService/index.html If this information helped you, it would be appreciated if you would take a moment to click on KUDOS 🙂
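For orientation, the keystore/truststore details that the StandardSSLContextService needs correspond to Kafka client properties along these lines. This is a sketch for the SSL protocol only; the paths and password placeholders are examples, not values from your environment:

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=<truststore-password>
# Keystore entries are only needed if the brokers require client authentication:
ssl.keystore.location=/path/to/kafka.client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
```

The same truststore (and keystore, if used) and passwords go into the StandardSSLContextService fields in NiFi.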
... View more