Member since 04-27-2017
05-27-2018 08:01 AM
zoo.cfg

#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
clientPort=2181
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=3000
dataDir=/hadoop/zookeeper
autopurge.snapRetainCount=30
server.1=vla-poc-mst2.vmware.com:2888:3888
server.2=vla-poc-wrk1.vmware.com:2888:3888
server.3=vla-poc-wrk3.vmware.com:2888:3888
05-27-2018 07:55 AM
server.properties

# Generated by Apache Ambari. Sun May 27 00:29:19 2018
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec,kafka.server.KafkaServer.ClusterId
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.hosts=vla-poc-wrk3.vmware.com
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.protocol=http
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=bigdata
kafka.timeline.metrics.truststore.path=/etc/security/clientKeys/all.jks
kafka.timeline.metrics.truststore.type=jks
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://vla-poc-clt.vmware.com:6667
log.cleanup.interval.mins=10
log.dirs=/data/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
port=6667
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=vla-poc-wrk1.vmware.com:2181,vla-poc-mst2.vmware.com:2181,vla-poc-wrk3.vmware.com:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
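A flattened dump like the one above is much easier to audit once it is split back into key=value lines. As an illustrative aid (not part of Ambari or Kafka), a few lines of Python can do that; the sample below uses values taken from the file itself:

```python
# Illustrative helper: split a broker properties dump into key=value
# pairs so individual settings are easy to inspect.
def parse_properties(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# Sample entries copied from the server.properties above.
sample = """\
# Generated by Apache Ambari.
listeners=PLAINTEXT://vla-poc-clt.vmware.com:6667
log.dirs=/data/kafka-logs
zookeeper.connect=vla-poc-wrk1.vmware.com:2181,vla-poc-mst2.vmware.com:2181,vla-poc-wrk3.vmware.com:2181
"""
props = parse_properties(sample)
assert props["log.dirs"] == "/data/kafka-logs"
assert len(props["zookeeper.connect"].split(",")) == 3
```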
05-27-2018 06:34 AM
(kafka.server.KafkaConfig)
[2018-05-26 22:59:49,051] INFO starting (kafka.server.KafkaServer)
[2018-05-26 22:59:49,061] INFO [ThrottledRequestReaper-Fetch], Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-05-26 22:59:49,061] INFO [ThrottledRequestReaper-Produce], Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-05-26 22:59:49,065] INFO Connecting to zookeeper on vla-poc-wrk1.vmware.com:2181,vla-poc-mst2.vmware.com:2181,vla-poc-wrk3.vmware.com:2181 (kafka.server.KafkaServer)
[2018-05-26 22:59:49,239] INFO Cluster ID = EMaV_bIyR7ClsSRsA-a8uA (kafka.server.KafkaServer)
[2018-05-26 22:59:49,300] INFO Loading logs. (kafka.log.LogManager)
[2018-05-26 22:59:49,305] ERROR There was an error in one of the threads during logs loading: java.lang.NumberFormatException: For input string: "logs" (kafka.log.LogManager)
[2018-05-26 22:59:49,306] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.NumberFormatException: For input string: "logs"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:580)
    at java.lang.Integer.parseInt(Integer.java:615)
    at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
    at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
    at kafka.log.Log$.parseTopicPartitionName(Log.scala:1110)
    at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:147)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:58)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2018-05-26 22:59:49,308] INFO shutting down (kafka.server.KafkaServer)
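The NumberFormatException comes from Kafka's LogManager trying to interpret every directory under log.dirs as a topic-partition directory named `<topic>-<partition>`. A directory whose last dash-separated token is not an integer (here, one ending in the literal string "logs", e.g. a stray nested kafka-logs directory) makes startup fail. A simplified Python sketch of that parsing rule, as an illustration only and not Kafka's actual Scala code:

```python
# Simplified illustration of Kafka's Log.parseTopicPartitionName: split a
# log directory name at the last '-' and parse the tail as the partition.
def parse_topic_partition(dir_name):
    topic, sep, partition = dir_name.rpartition("-")
    if not sep:
        raise ValueError("no '-' in directory name: %r" % dir_name)
    return topic, int(partition)  # raises ValueError if the tail is not an int

# A normal topic-partition directory parses cleanly:
assert parse_topic_partition("my-topic-0") == ("my-topic", 0)

# A stray directory such as "kafka-logs" inside log.dirs fails the same
# way the broker did ('For input string: "logs"'):
try:
    parse_topic_partition("kafka-logs")
except ValueError:
    pass
```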
Labels: Apache Kafka
04-26-2018 10:45 PM
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:782)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/04/25 11:53:01 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
    at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:319)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:167)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:922)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:914)
    at scala.Option.getOrElse(Option.scala:121)
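The IllegalArgumentException is Spark's up-front YARN resource check: the requested executor memory plus the memory overhead (by default the larger of 384 MB and 10% of executor memory) must fit under yarn.scheduler.maximum-allocation-mb. A small sketch of that check with the numbers from the trace; the function name is illustrative, but the 384 MB / 10% overhead rule matches Spark's documented default:

```python
# Sketch of Spark-on-YARN's cluster resource check: executor memory plus
# its overhead must not exceed yarn.scheduler.maximum-allocation-mb.
def fits_in_yarn(executor_memory_mb, yarn_max_allocation_mb):
    # Default overhead: max(384 MB, 10% of executor memory).
    overhead_mb = max(384, int(executor_memory_mb * 0.10))
    return executor_memory_mb + overhead_mb <= yarn_max_allocation_mb

# The failing case from the trace: 1024 + 384 MB against a 1024 MB ceiling.
assert not fits_in_yarn(1024, 1024)
# Raising the YARN ceiling (or lowering --executor-memory) makes it pass.
assert fits_in_yarn(1024, 2048)
```

So the two ways out are to raise yarn.scheduler.maximum-allocation-mb (and yarn.nodemanager.resource.memory-mb) above 1408 MB, or to request a smaller executor memory.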
Labels: Apache Spark
04-16-2018 11:54 PM
Connection failed on host wrk2.vmware.com:10000 (Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/alerts/alert_spark2_thrift_port.py", line 144, in execute
    Execute(cmd, user=hiveruser, path=[beeline_cmd], timeout=CHECK_COMMAND_TIMEOUT_DEFAULT)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of '! beeline -u 'jdbc:hive2://wrk2.vmware.com:10000/default' transportMode=binary -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL'' returned 1.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://wrk2.vmware.com:10000/default: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
Error: Could not open client transport with JDBC Uri: jdbc:hive2://wrk2.vmware.com:10000/default: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
)
Because of the above, the Spark2 Thrift Server fails to start.
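As the traceback shows, the Ambari alert simply runs beeline against the Thrift port and greps its output for "Connection refused" / "Invalid URL", so this alert means nothing is listening on wrk2.vmware.com:10000. A minimal Python sketch of that pass/fail logic; the hostname and error strings are taken from the traceback, while the function itself is illustrative:

```python
# Sketch of the Ambari-style health check: flag failure if the beeline
# output contains either of the known connection-error strings.
def thrift_port_healthy(beeline_output):
    failure_markers = ("Connection refused", "Invalid URL")
    return not any(marker in beeline_output for marker in failure_markers)

failing_output = (
    "Error: Could not open client transport with JDBC Uri: "
    "jdbc:hive2://wrk2.vmware.com:10000/default: "
    "java.net.ConnectException: Connection refused (Connection refused)"
)
assert not thrift_port_healthy(failing_output)
assert thrift_port_healthy("Connected to: Apache Hive")
```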
Labels: Apache Spark
04-27-2017 12:37 AM
Below is the error while adding ZooKeeper to one of the nodes:

+ ZOOKEEPER_SERVER_OPTS='-Djava.net.preferIPv4Stack=true -Dzookeeper.log.file=zookeeper-cmf-zookeeper-SERVER-hdp-poc-d1.vmware.com.log -Dzookeeper.log.dir=/var/log/zookeeper -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.rmi.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djute.maxbuffer=4194304 -Dzookeeper.datadir.autocreate=false -Xms784334848 -Xmx784334848 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/zookeeper_zookeeper-SERVER-153098168435d1b55999a6d815852712_pid44416.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh'
+ exec /usr/java/jdk1.7.0_67-cloudera/bin/java -cp '/run/cloudera-scm-agent/process/375-zookeeper-server:/opt/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/zookeeper/lib/log4j.jar:/opt/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/zookeeper/build/*:/opt/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/zookeeper/build/lib/*:/opt/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/zookeeper/*:/opt/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/zookeeper/lib/*:/usr/share/cmf/lib/plugins/event-publish-5.11.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.11.0.jar' -Djava.net.preferIPv4Stack=true -Dzookeeper.log.file=zookeeper-cmf-zookeeper-SERVER-hdp-poc-d1.vmware.com.log -Dzookeeper.log.dir=/var/log/zookeeper -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.rmi.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djute.maxbuffer=4194304 -Dzookeeper.datadir.autocreate=false -Xms784334848 -Xmx784334848 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/zookeeper_zookeeper-SERVER-153098168435d1b55999a6d815852712_pid44416.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh org.apache.zookeeper.server.quorum.QuorumPeerMain /run/cloudera-scm-agent/process/375-zookeeper-server/zoo.cfg
Error: Exception thrown by the agent : java.lang.NullPointerException
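"Exception thrown by the agent : java.lang.NullPointerException" is raised by the JVM's built-in JMX agent at startup (note -Dcom.sun.management.jmxremote.port=9010 in the command line); common triggers are port 9010 already being bound by another process on the host, or the host's name not resolving. A small Python sketch for checking whether the JMX port is already taken before starting the server (a diagnostic aid written for this post, not part of Cloudera Manager):

```python
import socket

# Diagnostic sketch: report whether a local TCP port already has a
# listener, e.g. the JMX port 9010 from the ZooKeeper startup command.
def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.settimeout(1.0)
        # connect_ex returns 0 when something accepted the connection.
        return probe.connect_ex((host, port)) == 0

# Demonstration: bind a throwaway listener, then probe its port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
assert port_in_use(port)
listener.close()
```

If the port is taken, either stop the conflicting process or move the JMX port in the ZooKeeper configuration.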
Labels: Apache Zookeeper