Member since: 07-20-2021
Posts: 9
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 798 | 01-31-2023 12:03 AM |
02-20-2024
12:52 AM
1 Kudo
Hey there, we're running a CDP Private Cloud Base cluster in our datacenter. In the last months we saw several disks failing due to their age. The problem we had is that when a disk fails, the HDFS DataNode and the YARN NodeManager create directories in the root VG of the node. During normal operations the DataNode has its directories on the grids (/grid/[0-16]/*). Is there a parameter to prevent them from writing into the root VG when a grid points to the root VG and not to its physical device under /dev/sd*? Regards, Timo
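For illustration only, not a built-in CDP setting: a minimal pre-start check in Python, assuming the /grid/0 .. /grid/16 layout from the post. It only passes real mount points on to dfs.datanode.data.dir / yarn.nodemanager.local-dirs, so a grid whose filesystem is no longer mounted (and therefore falls back to a plain directory on the root VG) is excluded. The script name and the idea of wiring it into service start are assumptions.

```python
#!/usr/bin/env python3
"""Pre-start sanity check: only hand real mount points to the DataNode/NodeManager.

A minimal sketch -- the /grid/0 .. /grid/16 layout and the idea of rewriting
dfs.datanode.data.dir before service start are assumptions, not a CDP feature.
"""
import os
import sys

GRID_DIRS = [f"/grid/{i}" for i in range(17)]  # assumed layout from the post

def healthy_grids(dirs):
    """Return only the directories that are real, separate mount points.

    If a disk has failed and its filesystem is unmounted, the path is just a
    plain directory on the root VG; os.path.ismount() then returns False and
    the path is excluded instead of silently filling up /."""
    return [d for d in dirs if os.path.ismount(d)]

if __name__ == "__main__":
    good = healthy_grids(GRID_DIRS)
    bad = sorted(set(GRID_DIRS) - set(good))
    if bad:
        print(f"WARNING: not mounted (would land on the root VG): {', '.join(bad)}",
              file=sys.stderr)
    # Candidate value for dfs.datanode.data.dir (append /dn per grid)
    print(",".join(f"{d}/dn" for d in good))
```

On the HDFS side itself, dfs.datanode.failed.volumes.tolerated only controls how many missing volumes a DataNode tolerates before shutting down; a safeguard often used in addition is to leave the empty mount-point directory non-writable (e.g. chmod 000) while nothing is mounted there, so nothing can recreate data directories on the root VG.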
12-11-2023
02:08 AM
Hey there, I successfully added a banner to Hue, as described in the documentation. What unfortunately does not work yet is setting a background color, but I'm still on it. Regards, Timo
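For reference, a sketch of the kind of safety-valve entry meant here, with the colour set inline; the banner_top_html property under [desktop] [[custom]] is the one from the Hue documentation, while the concrete HTML and colours are only an illustrative assumption:

```
[desktop]
[[custom]]
# banner_top_html is rendered at the top of every Hue page; the inline style
# (background and text colour) is an illustrative assumption, not a tested fix.
banner_top_html=<div style="background-color:#c0392b;color:#ffffff;text-align:center">QA environment</div>
```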
11-29-2023
01:24 AM
Hey there, we're running CDP Private Cloud Base 7.1.9 in 4 different stages (Lab, DEV, QA and Prod). Is it possible to add a banner in different colours to the different web UIs so that we can always tell the stages apart easily, like changing the colours in Cloudera Manager? Thanks and regards, Timo
07-24-2023
12:57 AM
Thanks for your reply. In our case we're not operating the Kafka cluster on CDP. Consuming data from Kafka is no problem. Please have a look at this: "org.apache.zookeeper.Login: TGT renewal thread ha... - Cloudera Community - 373868
07-24-2023
12:49 AM
Hey there, is there an option to delete the events? I'm currently at around 6.5 million events, the threshold is 5 million. Regards, Timo
07-13-2023
04:48 AM
Hey there, we did some analysis on our log files to get better insight into what is happening in the cluster. We found out that the log message "WARN org.apache.zookeeper.Login: TGT renewal thread has been interrupted and will exit." is written to the log over 1000 times per minute on a single RegionServer. We get these messages on all worker nodes in the cluster, but with a much lower count (10 - 100). Only when the use case restarts its applications does the affected RegionServer change, until the next restart. Does anyone have an idea what triggers this excessive logging? Distribution over the worker nodes over a 24h timeframe: first chart: count of all WARN-level messages per host; second chart: count of all WARN-level messages grouped by Java class. Regards, Timo
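For anyone who wants to reproduce the counting, a minimal Python sketch of the kind of analysis meant here; the log file arguments and the log4j line layout are assumptions, so adjust the regex to your own logging pattern:

```python
#!/usr/bin/env python3
"""Count WARN log lines per Java class -- a minimal sketch of the analysis
described above. The log paths and the log4j line layout are assumptions."""
import re
import sys
from collections import Counter

# Assumed log4j-style line:
# "2023-07-13 04:48:00,123 WARN org.apache.zookeeper.Login: TGT renewal thread ..."
WARN_RE = re.compile(r"\bWARN\s+([\w.$]+):")

def count_warn_classes(path):
    """Return a Counter mapping Java class name -> number of WARN lines."""
    counts = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = WARN_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    totals = Counter()
    for log in sys.argv[1:]:  # e.g. one RegionServer log file per worker node
        totals.update(count_warn_classes(log))
    for cls, count in totals.most_common(10):
        print(f"{count:>8}  {cls}")
```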
07-13-2023
01:59 AM
Hey there, does anybody have a solution for this topic? I'm getting similar messages, thousands per minute, on an HBase RegionServer. It is always only a single RegionServer, but after a restart of the use case's application it is always another one. Regards, Timo WARN org.apache.zookeeper.Login: TGT renewal thread has been interrupted and will exit.
01-31-2023
12:03 AM
We added the new machines and configured them all as DISK. Working fine so far.
01-26-2023
01:44 AM
Hi all, we're running a CDP Private Cloud Base cluster with currently 6 nodes. All 6 nodes run fully on SSD storage. I found out that HDFS treats the SSDs as normal disks, because the storage is not configured as SSD like this --> "[SSD]/grid/n/dn"; currently it is just "/grid/n/dn". Now we're planning to add four other servers from an older cluster with normal disks.

My action plan is as follows (see the sketch of the resulting settings below):
1. configure the existing storage/servers as SSD -> "[SSD]/grid/n/dn"
2. add the other four servers and configure them -> "[DISK]/grid/n/dn"
3. restart the recommended services

I've got the following questions:
- Will HDFS distribute the blocks across all nodes (SSD and DISK) without any storage policy configured, so that we have homogeneous storage?
- How will services, for example HBase, react to this change in storage?
- If we configure homogeneous storage as described: does reconfiguring the storage paths of the DataNodes to SSD add any value/benefit at all?

Thanks for your help! Regards, Timo
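Purely as an illustration of steps 1 and 2 in the post's own notation, not output from an actual cluster; splitting the two hardware generations into separate DataNode role groups is an assumption:

```
# existing SSD-only nodes (step 1)
dfs.datanode.data.dir = [SSD]/grid/0/dn,[SSD]/grid/1/dn,...,[SSD]/grid/n/dn

# the four added nodes with spinning disks (step 2)
dfs.datanode.data.dir = [DISK]/grid/0/dn,[DISK]/grid/1/dn,...,[DISK]/grid/n/dn
```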