Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2058 | 06-15-2020 05:23 AM |
| | 17037 | 01-30-2020 08:04 PM |
| | 2220 | 07-07-2019 09:06 PM |
| | 8555 | 01-27-2018 10:17 PM |
| | 4837 | 12-31-2017 10:12 PM |
11-20-2019
11:41 AM
hi all
we are using the following HDP cluster with Ambari.
List of nodes and their RHEL version:
- 3 master machines (with NameNode & ResourceManager), installed on RHEL 7.2
- 312 data-node machines, installed on RHEL 7.2
- 5 Kafka machines, installed on RHEL 7.2
Now we want to add the following machines to the cluster, but with RHEL 7.5:
- 85 data-node machines, to be installed on RHEL 7.5
- 2 Kafka broker machines, to be installed on RHEL 7.5
So my question is: can we mix RHEL 7.2 and RHEL 7.5 machines in the same HDP cluster?
Labels:
- Hortonworks Data Platform (HDP)
11-20-2019
08:16 AM
hi all
When we look at https://supportmatrix.hortonworks.com/ and choose RHEL 7.7, the HDP and Ambari versions are not marked, for some unclear reason, so we can't determine which HDP or Ambari version supports RHEL 7.7.
Is this a problem in the support-matrix page, or something else?
In any case, if we want to use RHEL 7.7, which HDP and Ambari versions fit this RHEL version?
11-19-2019
01:09 PM
thank you so much. btw - can I get your advice on another thread - https://community.cloudera.com/t5/Support-Questions/schema-registry-service-failed-to-start-due-schemas-topic/td-p/283403
11-19-2019
12:30 PM
1 Kudo
I prefer to move the version-2 folder aside and recreate it with the correct user:group ownership and permissions.
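A minimal sketch of that approach, assuming the ZooKeeper data dir is /opt/confluent/zookeeper/data and using a hypothetical cp-kafka:cp-kafka owner (match whatever user:group owns the original folder, and use your install's actual service name):

```bash
# Stop the ZooKeeper service on this node first (service name depends on the install)
# systemctl stop confluent-zookeeper

cd /opt/confluent/zookeeper/data

# Keep the original data as a backup instead of deleting it
mv version-2 version-2_backup

# Recreate an empty version-2 with the same ownership as the backup
mkdir version-2
chown cp-kafka:cp-kafka version-2   # hypothetical user:group - match the backup's owner
chmod 755 version-2

# Start ZooKeeper again; it should re-sync its data from the healthy quorum members
# systemctl start confluent-zookeeper
```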
11-19-2019
11:52 AM
Dear Shelton, do we also need to create an empty version-2 folder under /opt/confluent/zookeeper/data after we move the original version-2 folder away?
11-19-2019
11:22 AM
thank you, are there any risks with that option? Do we also need to create an empty version-2 folder under /opt/confluent/zookeeper/data/?
11-19-2019
10:39 AM
1 Kudo
We have a Kafka cluster with 3 nodes; each Kafka node also runs a ZooKeeper server and a Schema Registry.
We get the following error on one of the ZooKeeper servers:
[2019-11-12 07:44:20,719] ERROR Unable to load database on disk (org.apache.zookeeper.server.quorum.QuorumPeer)
java.io.IOException: Unreasonable length = 198238896
    at org.apache.jute.BinaryInputArchive.checkLength(BinaryInputArchive.java:127)
    at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:92)
    at org.apache.zookeeper.server.persistence.Util.readTxnBytes(Util.java:233)
    at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:629)
    at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:166)
    at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
    at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:601)
    at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:591)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:164)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
It seems that some snapshot files under /opt/confluent/zookeeper/data/version-2 are corrupted.
Under the version-2 folder we have, for example, the following files:
- many files like log.3000667b5
- many files like snapshot.200014247
- one file - acceptedEpoch
- one file - currentEpoch
So the question is - how do we start the ZooKeeper server?
From my understanding we have two options, but we are not sure about them.
The first option is to move the version-2 folder elsewhere as version-2_backup, create a new empty version-2 folder under /opt/confluent/zookeeper/data, then start the ZooKeeper server and let it copy the snapshot from another good, active ZooKeeper server.
The second option is to move the version-2 folder elsewhere as version-2_backup, create a new version-2 folder, and copy all the content of version-2 from a good machine to the bad ZooKeeper server, but I'm not sure if this is the right option.
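If you go with the first option, one quick way to confirm the node actually re-synced from the quorum is to query it with ZooKeeper's four-letter-word commands after the restart (a sketch, assuming the default client port 2181 and that these commands are enabled on your ZooKeeper version):

```bash
# Should answer "imok" once the server is up
echo ruok | nc localhost 2181

# Shows the node's mode (follower/leader) and its current zxid;
# a follower that has re-synced will report a recent zxid
echo srvr | nc localhost 2181

# Compare the zxid against one of the healthy nodes (placeholder hostname)
echo srvr | nc <healthy-zk-host> 2181
```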
11-18-2019
11:07 PM
Now it works - I set the FQDN instead of just node1.
11-18-2019
10:43 PM
I changed it as you advised, but still:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVE"
{
  "status" : 404,
  "message" : "org.apache.ambari.server.controller.spi.NoSuchResourceException: The specified resource doesn't exist: Service not found, clusterName=HDP, serviceName=component: SPARK2_THRIFTSERVE"
}
Note - the Spark Thrift Server is on node01 and on node03.
It's strange, because SPARK2_THRIFTSERVER is the component name:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://node02:8080/api/v1/clusters/HDP/components/ | grep -i spark | grep component_name | tail -1
"component_name" : "SPARK2_THRIFTSERVER",
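For reference, as the follow-up above notes, the call succeeded once the host was addressed by its FQDN and the component name was spelled out in full (SPARK2_THRIFTSERVER, not SPARK2_THRIFTSERVE). A hedged sketch of the usual stop-then-delete sequence against the Ambari REST API, with node01.fqdn.example as a placeholder FQDN:

```bash
# Stop the component on that host first (Ambari usually refuses to delete a running component)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop SPARK2_THRIFTSERVER"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' \
  "http://node02:8080/api/v1/clusters/HDP/hosts/node01.fqdn.example/host_components/SPARK2_THRIFTSERVER"

# Then delete the host component, using the host's FQDN and the full component name
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  "http://node02:8080/api/v1/clusters/HDP/hosts/node01.fqdn.example/host_components/SPARK2_THRIFTSERVER"
```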
11-18-2019
01:08 PM
I also get: <p>The requested method DELETE is not allowed for the URL /api/v1/clusters/............/host_components/SPARK2_THRIFTSERVER.</p>