Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1996 | 06-15-2020 05:23 AM |
|  | 16431 | 01-30-2020 08:04 PM |
|  | 2144 | 07-07-2019 09:06 PM |
|  | 8334 | 01-27-2018 10:17 PM |
|  | 4727 | 12-31-2017 10:12 PM |
10-03-2020
09:50 AM
curl -u *********:************** -H "X-Requested-By:ambari" -i GET http://bathdi-pp-ne-petra-hive-prod-02.azurehdinsight.net:8080/api/v1/clusters/cl1/components?fields=host_components/HostRoles/state

I am getting the error: curl: (6) Could not resolve host: GET
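A note on the likely cause (standard curl behaviour, not verified against this specific cluster): without -X, curl treats the bare word GET as a second URL and tries to resolve "GET" as a hostname, which produces exactly this error. Passing the method with -X (or simply dropping GET, since GET is curl's default) and quoting the URL so the shell does not interpret the ? should avoid it; USER:PASSWORD below are placeholders for the masked credentials:

curl -u USER:PASSWORD -H "X-Requested-By: ambari" -i -X GET "http://bathdi-pp-ne-petra-hive-prod-02.azurehdinsight.net:8080/api/v1/clusters/cl1/components?fields=host_components/HostRoles/state"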
09-13-2020
09:08 AM
Thank you for the post, but another question. According to the document https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html the last stage says that if NameNode HA is enabled, the following command needs to be run on one of the NameNodes: hdfs zkfc -formatZK -force. Since we have an active NameNode and a standby NameNode, as shown in the example from our cluster, we assume that NameNode HA is enabled. We want to understand the risks of running this command on one of the NameNodes. Is hdfs zkfc -formatZK -force safe to run without risks?
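For context, a quick way to confirm that NameNode HA is actually enabled and to see which NameNode is active (nn1 and nn2 are placeholder NameNode IDs from dfs.ha.namenodes.<nameservice> in hdfs-site.xml; substitute your own):

hdfs getconf -confKey dfs.nameservices
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

hdfs haadmin -getServiceState prints "active" or "standby" for each NameNode ID, matching the active/standby pair described above.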
08-14-2020
12:38 AM
1 Kudo
@mike_bronson7 Let me try to answer all your 3 questions in one shot.

[Snapshot]
ZooKeeper has 2 types of files: snapshots and transaction log files. As changes are made to the znodes (i.e. addition or deletion of znodes), these changes are appended to a transaction log; occasionally, when a log grows large, a snapshot of the current state of all znodes is written to the filesystem. This snapshot supersedes all previous logs. To put you in context, it is like the edit logs and the fsimage in the NameNode architecture: all changes made in HDFS are logged in the edit logs on the NameNode, and when a checkpoint kicks in, the Secondary NameNode merges the edit logs with the old fsimage to incorporate the changes since the last checkpoint. So a ZooKeeper snapshot is the analogue of the fsimage, as it contains the current state of the znode entries and ACLs.

Snapshot policy
In the command shared earlier, the snapshot count parameter is -n <count>. If you really want to sleep easy you can increase it to 5 or 7, but I think 3 suffices. I use the autopurge feature and keep only 3 snapshots and 3 transaction logs. When enabled, the ZooKeeper autopurge feature retains the autopurge.snapRetainCount most recent snapshots and the corresponding transaction logs in dataDir and dataLogDir respectively, and deletes the rest. It defaults to 3, and the minimum value is 3.

Corrupt snapshots
ZooKeeper might not be able to read its database and fail to come up because of file corruption in the transaction logs of the ZooKeeper server; you will see an IOException on loading the ZooKeeper database. In such a case, make sure all the other servers in your ensemble are up and working. Use the four-letter command "stat" on the client port to see if they are in good health. After you have verified that all the other servers of the ensemble are up, you can go ahead and clean the database of the corrupt server.

Solution
Delete all the files in dataDir/version-2 and dataLogDir/version-2/, then restart the server.

Hope that helps
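For reference, a minimal sketch of the relevant autopurge settings in zoo.cfg; the directory paths are examples only and should match your actual dataDir and dataLogDir:

# Keep only the 3 most recent snapshots (3 is also the minimum)
autopurge.snapRetainCount=3
# Run the purge task every 24 hours; 0 disables autopurge
autopurge.purgeInterval=24
# Example locations; point these at your actual snapshot and transaction-log directories
dataDir=/hadoop/zookeeper
dataLogDir=/hadoop/zookeeper/txnlogs

For a one-off manual cleanup, the ZooKeeper distribution also ships bin/zkCleanup.sh (a wrapper around the PurgeTxnLog utility), which takes the -n <count> parameter mentioned above, e.g. zkCleanup.sh -n 3 to keep the 3 most recent snapshots.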
07-26-2020
11:18 AM
1 Kudo
@mike_bronson7 log.retention.bytes is a size-based retention policy for logs, i.e. the maximum size the log (per partition) is allowed to grow to before the oldest segments are discarded. Segments are pruned from the log as long as the remaining segments don't drop below log.retention.bytes. You can also specify retention parameters at the topic level.

To specify a retention time period per topic, use the following command:

kafka-configs.sh --zookeeper [ZooKeeperConnectionString] --alter --entity-type topics --entity-name [TopicName] --add-config retention.ms=[DesiredRetentionTimePeriod]

To specify a retention log size per topic, use the following command:

kafka-configs.sh --zookeeper [ZooKeeperConnectionString] --alter --entity-type topics --entity-name [TopicName] --add-config retention.bytes=[DesiredRetentionLogSize]

That should resolve your problem. Happy hadooping!
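For example, with concrete made-up values, keeping roughly 7 days and 1 GB per partition for a hypothetical topic my-topic on a ZooKeeper at zk1:2181:

kafka-configs.sh --zookeeper zk1:2181 --alter --entity-type topics --entity-name my-topic --add-config retention.ms=604800000
kafka-configs.sh --zookeeper zk1:2181 --alter --entity-type topics --entity-name my-topic --add-config retention.bytes=1073741824

You can confirm the overrides took effect with:

kafka-configs.sh --zookeeper zk1:2181 --describe --entity-type topics --entity-name my-topic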
07-20-2020
01:45 AM
@mike_bronson7 A similar discussion happened on another Community thread by @Shelton; you can follow it here: https://community.cloudera.com/t5/Support-Questions/add-new-data-node-to-existing-cluster/td-p/213133 Also, decommissioning is the more suitable approach; see this doc for more details: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/managing-and-monitoring-ambari/content/amb_manage_components_on_a_host.html
07-19-2020
01:40 AM
1 Kudo
@mike_bronson7 I meant to say that not many people use the package install, for the following reason [1]: Cloudera does not support clusters that are not deployed and managed by Cloudera Manager. If you choose to install CDH manually using these instructions, you cannot use Cloudera Manager to install additional parcels. This can prevent you from using services that are only available via parcel.

For testing purposes you are free to choose any method, but it depends on what you are looking for, i.e. easy management of the cluster through the CM web UI, or just playing with a demo installation.

Regarding the end-of-life question: yes, that is the nature of software support; at the end of the day the software will reach end of life after some years. For a glance at the Cloudera software support lifecycle, see: https://www.cloudera.com/legal/policies/support-lifecycle-policy.html Cheers!

[1] https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/install_cloudera_packages.html
07-15-2020
05:15 AM
1 Kudo
You can try this API call: http://<ambari-server>:8080/api/v1/stacks/{stackName}/versions/{stackVersion}/services To get help on the API, use http://<ambari-server>:8080/api-docs, where you can try out API calls to understand what the API returns.
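For example, with the placeholders filled in (assuming an HDP 3.1 stack and admin credentials; adjust the stack name, version, and credentials to your cluster):

curl -s -u admin:admin -H "X-Requested-By: ambari" "http://<ambari-server>:8080/api/v1/stacks/HDP/versions/3.1/services"

This returns the list of services defined in that stack version; each entry carries an href you can follow for the service's details.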
07-14-2020
12:27 PM
1 Kudo
@mike_bronson7 It is recommended to have a minimum of 3 DataNodes in the cluster to accommodate 3 healthy replicas of a block, as the default replication factor is 3. HDFS will not write replicas of the same block to the same DataNode. In this scenario there will be under-replicated blocks, and the 2 healthy replicas will be placed on the 2 available DataNodes.
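If you want to see this for yourself, two standard checks (run as a user with HDFS access; the path / is just an example):

hdfs fsck / | grep -i "under-replicated"
hdfs dfsadmin -report

fsck prints the count of under-replicated blocks in its summary, and dfsadmin -report shows how many live DataNodes the NameNode currently sees.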
06-15-2020
05:23 AM
The API call is: curl -sH "X-Requested-By: ambari" -u admin:admin http://AMBARI_SERVER_FQDN:8080/api/v1/hosts/DATA_NODE_FQDN | grep cpu
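If you prefer the Ambari API to return only the CPU fields instead of grepping the whole host resource, a variant that should work (Hosts/cpu_count and Hosts/ph_cpu_count are the host properties I would expect here; verify against your Ambari version):

curl -s -H "X-Requested-By: ambari" -u admin:admin "http://AMBARI_SERVER_FQDN:8080/api/v1/hosts/DATA_NODE_FQDN?fields=Hosts/cpu_count,Hosts/ph_cpu_count"

cpu_count is the logical core count and ph_cpu_count the physical count reported by the agent.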
06-02-2020
06:22 PM
Hello @mike_bronson7, thank you for posting your query. You can execute 'get' in the same ZooKeeper client shell for that znode and you will get the hostname.

Example:

zookeeper-shell.sh zoo_server1:2181 <<< "get /brokers/ids/1018"

It returns output as follows (example from my case):

[zk: localhost:2181(CONNECTED) 5] get /brokers/ids/10
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://simple01.cloudera.com:9092"],"jmx_port":9393,"host":"simple01.cloudera.com","timestamp":"1590512066422","port":9092,"version":4}
cZxid = 0x1619b
ctime = Tue May 26 09:54:26 PDT 2020
mZxid = 0x1619b
mtime = Tue May 26 09:54:26 PDT 2020
pZxid = 0x1619b
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x1722ddb1e844d50
dataLength = 238
numChildren = 0

So my broker ID 10 is mapped to the host simple01.cloudera.com.
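If you need the mapping for every broker rather than a single ID, listing the parent znode first gives you all registered broker IDs (zoo_server1:2181 as above); you can then run get on each returned ID:

zookeeper-shell.sh zoo_server1:2181 <<< "ls /brokers/ids"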