Member since: 11-05-2018
Posts: 6
Kudos Received: 0
Solutions: 0
08-27-2019
12:22 PM
Hi There,
The Linux admins changed the root password on the Hadoop box and then reverted it back two days later. Since then we have been having issues with the entire node: disk wait and I/O operations are taking longer than usual. I suspect an issue with ZooKeeper.
session established for client)
2019-08-27 22:23:31,908 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 1180ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2019-08-27 22:23:42,789 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 2060ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2019-08-27 22:24:01,821 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 1090ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2019-08-27 22:24:11,818 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 1086ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2019-08-27 22:24:20,787 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /host2:52159
2019-08-27 22:24:20,789 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /host2:52159
2019-08-27 22:24:20,789 - INFO [CommitProcessor:1:ZooKeeperServer@617] - Established session 0x16cd24bcb850012 with negotiated timeout 180000 for client /host2:52159
2019-08-27 22:24:21,244 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /host2:52159 which had sessionid 0x16cd24bcb850012
2019-08-27 22:24:23,179 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 2446ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2019-08-27 22:24:30,245 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /host1:60574
2019-08-27 22:24:33,130 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 2397ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2019-08-27 22:24:34,710 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:745)
2019-08-27 22:24:34,710 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /host1:60574 (no session established for client)
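For context, since the repeated fsync warnings point at slow disk rather than ZooKeeper itself, one change I am considering (just a sketch; the paths below are placeholders, not our real mount points) is giving ZooKeeper a dedicated, low-latency disk for its transaction log via dataLogDir in zoo.cfg:

# zoo.cfg (sketch; /data/zookeeper and /zk_txlog are example paths)
dataDir=/data/zookeeper
# transaction log on its own disk, separate from snapshots and other I/O
dataLogDir=/zk_txlog

Does that sound like the right direction, or should I be looking elsewhere on the node first?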
Labels:
- Apache Zookeeper
05-12-2019
04:44 AM
Hi All, I have started to experiment with the Spark client installed on our system, but I am getting the error below while running Spark SQL.

The current setup:
Spark 1.6.1.2.4.2.0-258 built for Hadoop 2.7.1.2.4.2.0-258
spark.driver.maxResultSize = 5g
spark.kryoserializer.buffer = 2m
spark.kryoserializer.buffer.max = 256m

org.apache.spark.SparkException: Job aborted due to stage failure: Task 625 in stage 224854.0 failed 4 times, most recent failure: Lost task 625.3 in stage 224854.0 (TID 14802942, xxxxxxxxx): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 1596. To avoid this, increase spark.kryoserializer.buffer.max value.
        at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:299)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
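The exception itself suggests raising spark.kryoserializer.buffer.max, so the first thing I plan to try (the values below are guesses, not tuned numbers) is passing larger buffers at submit time:

spark-submit \
  --conf spark.kryoserializer.buffer=64m \
  --conf spark.kryoserializer.buffer.max=1g \
  ... (rest of my usual submit options)

or the equivalent lines in spark-defaults.conf. As far as I know this property has a hard ceiling just under 2g. Is simply increasing the buffer the right fix here, or does a buffer overflow at stage 224854 usually mean something else is wrong with the job?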
Labels:
- Apache Spark
11-19-2018
07:30 PM
Hi There, I was looking into NiFi examples and I am getting the error below. I am trying to split this date, 20181119112100, and I do get my expected results, so I am not sure why I am still getting the error and warning.
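For reference, the split itself is just NiFi Expression Language along these lines (the attribute name raw_date and the use of UpdateAttribute are only illustrative of what I am doing, not the exact flow):

year  = ${raw_date:substring(0,4)}
month = ${raw_date:substring(4,6)}
day   = ${raw_date:substring(6,8)}

or, turning the whole value into a formatted timestamp in one expression:

${raw_date:toDate('yyyyMMddHHmmss'):format('yyyy-MM-dd HH:mm:ss')}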
Labels:
- Apache NiFi
- Apache Spark
11-05-2018
08:43 PM
Hi All, The NiFi content repository has been filling up our disk. Previously I would have assumed the Java garbage collector was not working, and a simple stop and restart of the NiFi application would bring the size down, but that has not been the case for the last two days; usage has been reaching 99%. Our other cluster runs NiFi 1.7 and has no issues; our main one, which is the one with the problem, is on version 112. Below are the parameters used. Is there something I can do to fix this? Please let me know.

nifi.content.claim.max.appendable.size=10 B
nifi.content.claim.max.flow.files=1
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
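For reference, here is the kind of check I am running to see whether the space is in archived claims (which NiFi should reclaim on its own) or in live claims still referenced by queued flowfiles. This is only a rough sketch; /opt/nifi is a placeholder for our actual install path, and it assumes the usual layout where each repository partition has an archive subfolder:

# total size of the content repository
du -sh /opt/nifi/content_repository

# portion held in archive directories
find /opt/nifi/content_repository -type d -name archive -exec du -ch {} + | tail -1

If most of the space turns out to be outside the archive directories, I assume the growth is coming from flowfiles still queued somewhere in the flow rather than from the archive settings above. Does that reasoning hold?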
Labels:
- Apache Hive
- Apache NiFi