Member since: 11-12-2018
Posts: 189
Kudos Received: 177
Solutions: 32
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 511 | 04-26-2024 02:20 AM |
| | 665 | 04-18-2024 12:35 PM |
| | 3226 | 08-05-2022 10:44 PM |
| | 2937 | 07-30-2022 04:37 PM |
| | 6418 | 07-29-2022 07:50 PM |
03-27-2020
01:52 AM
Hi @JasmineD, We might need to consider backing up the following:

- flow.xml.gz
- users.xml
- authorizations.xml
- All config files in the NiFi conf directory
- NiFi local state from each node
- NiFi cluster state stored in ZooKeeper

Please make sure that you have stored the configuration passwords safely. NiFi relies on the sensitive.props.key password to decrypt sensitive property values from the flow.xml.gz file. If the sensitive props key is not known, you would need to manually clear all encoded values from flow.xml.gz. This action clears all passwords in all components on the canvas, and you would need to re-enter all of them once NiFi is recovered. Also, any local files that are required by the DataFlows need to be backed up as well (i.e., custom processor jars, user-built scripts, externally referenced config/jar files used by some processors, etc.). Note: All the repositories in NiFi are backed up by default. Here is a good article to see how backup works in NiFi: https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418 Hope this helps. Please accept the answer and vote up if it did.
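For illustration, a minimal backup sketch; the NIFI_HOME, backup, and state paths below are assumptions, so adjust them to your installation (and to what nifi.properties / state-management.xml actually point at):

```bash
#!/usr/bin/env bash
# Minimal NiFi backup sketch -- paths are assumptions, adjust to your install.
NIFI_HOME=/opt/nifi
BACKUP_DIR=/backup/nifi/$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# conf/ already contains flow.xml.gz, users.xml, authorizations.xml and the other config files
tar czf "$BACKUP_DIR/conf.tar.gz" -C "$NIFI_HOME" conf

# Local state on this node (default location under NIFI_HOME)
tar czf "$BACKUP_DIR/local-state.tar.gz" -C "$NIFI_HOME" state/local

# Cluster state lives in ZooKeeper and has to be exported separately
# (e.g. with zkCli.sh or your ZooKeeper backup tooling) -- not covered here.
```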
11-24-2019
09:52 PM
Hi @anshuman Yes, node labels are supported in the new CDP. For more details, you can check the CDP documentation: https://docs.cloudera.com/ -> Cloudera Data Platform -> Runtime -> Cloudera Runtime https://docs.cloudera.com/runtime/7.0.2/yarn-allocate-resources/topics/yarn-configuring-node-labels.html FYI, Cloudera Runtime is the core open-source software distribution within Cloudera Data Platform (CDP) that is maintained, supported, versioned, and packaged as a single entity by Cloudera. Cloudera Runtime includes approximately 50 open-source projects that comprise the core distribution of data management tools within CDP, including Cloudera Manager, which is used to configure and monitor clusters managed in CDP.
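As a quick, non-CDP-specific illustration, node labels in YARN are typically managed with the yarn rmadmin CLI once labels are enabled (yarn.node-labels.enabled=true and a label store configured); the label name "gpu" and the hostname below are made-up examples:

```bash
# Add a cluster node label ("gpu" is just an example name, exclusive partition)
yarn rmadmin -addToClusterNodeLabels "gpu(exclusive=true)"

# Map an example NodeManager host to that label
yarn rmadmin -replaceLabelsOnNode "worker1.example.com=gpu"

# Verify the labels known to the cluster
yarn cluster --list-node-labels
```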
01-15-2019
12:03 PM
1 Kudo
@Michael Bronson Decommissioning is a process that supports removing components and their hosts from the cluster. You must decommission a master or slave running on a host before removing it or its host from service. Decommissioning helps you prevent potential loss of data or disruption of service. The HDP documentation for Ambari 2.6.1 below helps you decommission a DataNode. When the DataNode decommissioning process is finished, the status display changes to Decommissioned. https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.0/bk_ambari-operations/content/how_to_decommission_a_component.html I hope that the above answers your questions.
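For reference, Ambari drives this for you, but the underlying HDFS mechanism looks roughly like the sketch below; the hostname and the exclude-file path are examples, and when Ambari manages the cluster you should use the UI rather than editing these files by hand:

```bash
# Add the DataNode host to the exclude file referenced by dfs.hosts.exclude
echo "datanode3.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists
hdfs dfsadmin -refreshNodes

# Watch progress until the node shows "Decommissioned"
hdfs dfsadmin -report | grep -A 3 "datanode3.example.com"
```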
01-14-2019
06:16 PM
1 Kudo
@Michael Bronson The article below helps you replace faulty disks on DataNode hosts. https://community.hortonworks.com/articles/3131/replacing-disk-on-datanode-hosts.html Please accept the answer you found most useful.
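The article covers the details; at a high level the procedure is roughly the sketch below. The device name, mount point, and data-directory path are examples only, and on releases that support disk hot-swap, `hdfs dfsadmin -reconfig` can avoid a full DataNode restart:

```bash
# 1. Stop the DataNode role on the affected host (via Ambari, or manually)
# 2. Unmount the faulty disk and replace it physically
umount /grid/2                        # example mount point of the bad disk

# 3. Recreate the filesystem and the DataNode data directory on the new disk
mkfs -t ext4 /dev/sdc                 # example replacement device
mount /dev/sdc /grid/2
mkdir -p /grid/2/hadoop/hdfs/data
chown -R hdfs:hadoop /grid/2/hadoop/hdfs/data

# 4. Start the DataNode role again and confirm the volume is reported healthy
hdfs dfsadmin -report
```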
01-02-2019
11:18 AM
1 Kudo
@Suraj Singh It seems similar to what we discussed in the thread below: https://community.hortonworks.com/questions/232093/yarn-jobs-are-getting-stuck-in-accepted-state.html As for whether resubmitting the jobs will succeed: as discussed earlier, this is an open bug which is fixed in later releases. If you need to apply a patch, please involve Hortonworks support. If you are a customer, HWX can release a patch for you if it's technically possible based on the specifics of the JIRAs. If you don't have support, you can certainly do it yourself, but test it first: apply the patch in dev/test and see if it resolves your problem.
01-01-2019
11:03 AM
1 Kudo
@Michael Bronson
Can you please check the ZooKeeper logs (/var/log/zookeeper) on master1.sys89.com? This can happen if there are too many open connections. Check whether there are any warning messages starting with “Too many connections from {IP address of master1.sys89.com}”. You can also verify using the netstat command:
netstat -no | grep :2181 | wc -l
To fix this issue, clear all stale connections manually or try increasing the maxClientCnxns setting in /etc/zookeeper/2.6.4.0-91/0/zoo.cfg. From your zoo.cfg file I can see the value is maxClientCnxns=60, which is the default. You can increase it by setting maxClientCnxns=4096 and restarting the affected services.
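As a rough illustration, the check and the config change could look like this (the zoo.cfg path follows your HDP layout; adjust it if your installation differs):

```bash
# Count established connections per client IP on the ZooKeeper port
netstat -no | grep :2181 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn

# Check the current limit in zoo.cfg
grep maxClientCnxns /etc/zookeeper/2.6.4.0-91/0/zoo.cfg

# Raise the limit in zoo.cfg, then restart ZooKeeper and the affected services:
# maxClientCnxns=4096
```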
01-01-2019
10:31 AM
2 Kudos
@Suraj Singh Actually, this particular fix was released in the following versions: 3.1.0, 2.10.0, 2.9.1, 3.0.1. Related JIRA: https://issues.apache.org/jira/browse/YARN-7873. If you read the complete comments, you will get an idea of why they reverted YARN-6078 and why it was not released in 3.0.0.
12-28-2018
03:31 PM
2 Kudos
@Slimani Ibrahim
It's not recommended to format the NameNode more than once, except when the NameNode loses its metadata. The likely cause is the property dfs.namenode.name.dir, which tells the NameNode where to store its metadata on disk; in your case it points to /tmp, so every time you restart your system the /tmp directory gets flushed and you have to format the NameNode again. So, make sure you point dfs.namenode.name.dir to a more persistent location (something like /hadoop/hdfs/namenode, and similarly /hadoop/hdfs/datanode for the DataNode property) that does not get cleared every time you restart your system. That will resolve this problem. I hope that the above answers your questions. Please accept the answer you found most useful.
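For example, a rough sketch of preparing persistent directories (the paths are the usual HDP defaults; set the properties themselves in hdfs-site.xml or through your cluster manager):

```bash
# Create persistent metadata/data directories and give them to the hdfs user
mkdir -p /hadoop/hdfs/namenode /hadoop/hdfs/datanode
chown -R hdfs:hadoop /hadoop/hdfs/namenode /hadoop/hdfs/datanode

# After updating hdfs-site.xml (dfs.namenode.name.dir / dfs.datanode.data.dir),
# confirm the values the daemons will actually use
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir
```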
12-28-2018
07:03 AM
3 Kudos
@Artyom Timofeev As your containers are stuck in the localizing phase, it seems you are hitting this reported YARN bug, which is resolved in version 3.0.0: https://issues.apache.org/jira/browse/YARN-6078
12-27-2018
10:57 AM
3 Kudos
@Michael Bronson After configuration changes, it's safe to restart the required services; the restart applies the new changes to the system. In our case, yarn.nodemanager.local-dirs will point to the new location /grid/sdb/hadoop/yarn/local instead of the old location /var/hadoop/yarn/local. In short, the restart will not cause any issue, either after deleting the old files or after the change in the YARN configuration. I hope this answered your concerns.
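If it helps, a rough sketch of preparing the new location before the restart (the ownership and permissions shown are the usual defaults for the yarn user; adjust to your environment):

```bash
# Create the new NodeManager local directory and hand it to the yarn user
mkdir -p /grid/sdb/hadoop/yarn/local
chown -R yarn:hadoop /grid/sdb/hadoop/yarn/local
chmod 755 /grid/sdb/hadoop/yarn/local

# After the NodeManager restart, container working dirs should start appearing here
ls /grid/sdb/hadoop/yarn/local
```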