Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1923 | 06-15-2020 05:23 AM |
| | 15500 | 01-30-2020 08:04 PM |
| | 2076 | 07-07-2019 09:06 PM |
| | 8122 | 01-27-2018 10:17 PM |
| | 4575 | 12-31-2017 10:12 PM |
11-19-2019
11:52 AM
Dear Shelton, do we also need to create an empty version-2 folder under /opt/confluent/zookeeper/data after we move the original version-2 folder?
11-19-2019
11:22 AM
Thank you. Are there any risks with that option? Do we also need to create an empty version-2 folder under /opt/confluent/zookeeper/data/?
11-19-2019
10:39 AM
1 Kudo
We have a Kafka cluster with 3 nodes; each Kafka machine also runs a ZooKeeper server and a Schema Registry.
We get the following error on one of the ZooKeeper servers:

    [2019-11-12 07:44:20,719] ERROR Unable to load database on disk (org.apache.zookeeper.server.quorum.QuorumPeer)
    java.io.IOException: Unreasonable length = 198238896
        at org.apache.jute.BinaryInputArchive.checkLength(BinaryInputArchive.java:127)
        at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:92)
        at org.apache.zookeeper.server.persistence.Util.readTxnBytes(Util.java:233)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:629)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:166)
        at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
        at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:601)
        at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:591)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:164)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)

It seems that some snapshot files under /opt/confluent/zookeeper/data/version-2 are corrupted.
Under the version-2 folder we have, for example, the following files: many files like log.3000667b5, many files like snapshot.200014247, one acceptedEpoch file, and one currentEpoch file.
So the question is: how do we start the ZooKeeper server?
From my understanding we have two options, but we are not sure about them.
One option is to move the version-2 folder elsewhere as version-2_backup and create a new empty version-2 folder under /opt/confluent/zookeeper/data, then start the ZooKeeper server and hope that a snapshot will be copied over from another good, active ZooKeeper server?
The second option might be to move the version-2 folder elsewhere as version-2_backup, create a new version-2 folder, and copy all the version-2 content from a good machine into the bad ZooKeeper server's version-2, but I am not sure whether this is the right option?
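To make option one concrete, here is a minimal sketch of the move-aside step. It runs against a throwaway scratch directory standing in for /opt/confluent/zookeeper/data, and the service stop/start commands (which depend on how the installation manages services) are left as comments — treat this as a sketch, not a verified procedure.

```shell
# Demo on a throwaway directory that stands in for /opt/confluent/zookeeper/data.
DATA_DIR="$(mktemp -d)/data"
mkdir -p "$DATA_DIR/version-2"
touch "$DATA_DIR/version-2/snapshot.200014247"   # stand-in for a corrupt snapshot file

# Option one, with the ZooKeeper service stopped on the bad node first
# (e.g. systemctl stop confluent-zookeeper, or your installation's equivalent):
# move the whole version-2 directory aside, then recreate it empty.
mv "$DATA_DIR/version-2" "$DATA_DIR/version-2_backup"
mkdir "$DATA_DIR/version-2"

# On restart, the node should pull a fresh snapshot from the quorum leader,
# since the other two ZooKeeper servers still hold a valid copy of the state.
ls "$DATA_DIR"
```

Keeping the backup copy around means the step is reversible if the re-sync does not happen as hoped.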
11-18-2019
11:07 PM
Now it works. I set the FQDN instead of just node1.
11-18-2019
10:43 PM
I changed it per your advice, but still:

    curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVE"
    {
      "status" : 404,
      "message" : "org.apache.ambari.server.controller.spi.NoSuchResourceException: The specified resource doesn't exist: Service not found, clusterName=HDP, serviceName=component: SPARK2_THRIFTSERVE"
    }

Note: the Spark Thrift Server is on node01 and on node03. It is strange, because SPARK2_THRIFTSERVER is the component name:

    curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://node02:8080/api/v1/clusters/HDP/components/ | grep -i spark | grep component_name | tail -1
      "component_name" : "SPARK2_THRIFTSERVER",
11-18-2019
01:08 PM
I also get:

    <p>The requested method DELETE is not allowed for the URL /api/v1/clusters/............/host_components/SPARK2_THRIFTSERVER.</p>
11-18-2019
01:03 PM
Yes, I did that (I forgot to write it in my post, but I can't edit the post). After I ran it as you wrote, I still get the same problem:

    < HTTP/1.1 405 Method Not Allowed
11-18-2019
10:02 AM
We have two SPARK2_THRIFTSERVER instances, on node01 and node03, in our Ambari server.
We want to delete the SPARK2_THRIFTSERVER on both nodes.
We tried the following API, but without success. Any idea where we are wrong?
component_name: SPARK2_THRIFTSERVER
The REST API (we tried to delete the Thrift Server on one of the nodes):

    curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://node02:8080/api/v1/clusters/HDP/hosts/node01/SPARK2_THRIFTSERVER

    * About to connect() to node02 port 8080 (#0)
    * Connected to node02 (45.3.23.4) port 8080 (#0)
    * Server auth using Basic with user 'admin'
    > DELETE /api/v1/clusters/HDP/hosts/node01/SPARK2_THRIFTSERVER HTTP/1.1
    > Authorization: Basic YWRtaW46YWRtaW4=
    > User-Agent: curl/7.29.0
    > Accept: */*
    > X-Requested-By: ambari
    >
    < HTTP/1.1 404 Not Found
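For reference, a hedged sketch of the URL shape Ambari's REST API expects for a per-host component: the DELETE goes through a host_components path segment, and the component generally has to be stopped (state INSTALLED) before DELETE is accepted. The host names and credentials below are taken from the post; the curl calls are left commented out since they need a live Ambari server.

```shell
# Sketch: build the host_components URL for deleting one component on one host.
# node02, node01, admin:admin are from the post; adjust for your cluster.
AMBARI="http://node02:8080/api/v1"
CLUSTER="HDP"; HOST="node01"; COMPONENT="SPARK2_THRIFTSERVER"
URL="$AMBARI/clusters/$CLUSTER/hosts/$HOST/host_components/$COMPONENT"
echo "$URL"

# 1) Stop the component first (DELETE on a running component is rejected):
# curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
#      -d '{"HostRoles": {"state": "INSTALLED"}}' "$URL"
# 2) Then delete it:
# curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "$URL"
```

Repeating the same two calls with HOST=node03 would cover the second instance.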
Labels:
- Apache Ambari
- Apache Spark
11-15-2019
05:24 AM
Since we have two current folders, /hadoop/hdfs/namenode/current (where the fsimage exists) and /hadoop/hdfs/journal/hdfsha/current/, do you mean to back up both of them? Second, for what retention period should we keep the backups, for example one week or more?
11-15-2019
04:37 AM
About option one: I guess you don't mean backing up the metadata by copying it with scp or rsync. Maybe you mean there is a dedicated backup tool, like barman for PostgreSQL? Do you know of a tool for this option? On each NameNode we have the following folders: /hadoop/hdfs/namenode/current (where the fsimage exists) and /hadoop/hdfs/journal/hdfsha/current/. Do you mean to back up only these folders, let's say every day?
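One simple interpretation of the daily-backup idea, sketched on a scratch layout standing in for the two metadata directories from the post: a date-stamped tarball of each current directory, pruned after a week. This is a sketch under those assumptions, not a recommended tool; on a live cluster, `hdfs dfsadmin -fetchImage <dir>` is another way to pull the latest fsimage straight from the active NameNode.

```shell
# Scratch layout standing in for the two NameNode metadata directories.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/hadoop/hdfs/namenode/current" "$ROOT/hadoop/hdfs/journal/hdfsha/current"
touch "$ROOT/hadoop/hdfs/namenode/current/fsimage_0000000000000000001"

# Daily cold backup: one date-stamped tarball per directory, kept for 7 days.
BACKUP_DIR="$ROOT/backups"
mkdir -p "$BACKUP_DIR"
STAMP="$(date +%F)"
tar -czf "$BACKUP_DIR/namenode-current-$STAMP.tar.gz" -C "$ROOT/hadoop/hdfs/namenode" current
tar -czf "$BACKUP_DIR/journal-current-$STAMP.tar.gz" -C "$ROOT/hadoop/hdfs/journal/hdfsha" current
find "$BACKUP_DIR" -name '*.tar.gz' -mtime +7 -delete   # retention: one week
```

Run from cron on each NameNode (with the real paths substituted), this gives a week of restorable snapshots without any extra tooling.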