Member since
02-18-2016
13
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 15445 | 03-20-2017 08:03 PM
11-03-2017
08:37 AM
Thank you @Xiaoyu Yao, it works!
11-02-2017
03:45 PM
Hello, We have a file that we can't back up with distcp due to a length mismatch. Running fsck on this file shows it is still open, but after speaking with the file owner, it should not be. Searching the application logs shows this message: 2017-10-31 18:28:13.466 INFO [pr-8243-exec-12][o.a.hadoop.hdfs.DFSClient] Unable to close file because dfsclient was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1
2017-10-31 18:28:13.468 ERROR [pr-8243-exec-12][r.c.a.LogExceptionHandler] error while processing request - uri: /xxx/yyy- query string: [zzzzzzzzzzzzz] - exception: java.io.IOException: Unable to close file because dfsclient was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1
com.aaaa.bbb.ccc.ddd.exception.AmethystRuntimeException: java.io.IOException: Unable to close file because dfsclient was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1
We restarted the HDFS service at that time, so we think this is the main cause of the problem. Copying the file with "hdfs dfs -cp" creates a new, correctly closed file, so we can probably replace the unclosed one. Nevertheless, I would like to know if there is a simpler method to directly close a file that is still open from an HDFS point of view. Thanks for your help
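For reference, lease recovery can usually close such a file in place; a sketch with placeholder paths (the `hdfs debug recoverLease` subcommand is available in Hadoop 2.7+, so it should exist on recent HDP releases):

```shell
# List files still open for write under a directory (placeholder path)
hdfs fsck /path/to/dir -openforwrite

# Ask the NameNode to recover the lease and close the file
hdfs debug recoverLease -path /path/to/file -retries 3
```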
Labels:
- Apache Hadoop
09-04-2017
06:08 PM
Hello everyone, From what I understand, every file uses a minimum of 1 block in HDFS. So how can it be possible to have more files than blocks? Below is an example: $ hdfs fsck /app-logs
Total size: 14663835874 B
Total dirs: 2730
Total files: 8694
Total symlinks: 0
Total blocks (validated): 8690 (avg. block size 1687437 B)
Minimally replicated blocks: 8690 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 4
Number of racks: 1
FSCK ended at Mon Sep 04 19:54:45 CEST 2017 in 353 milliseconds
The filesystem under path '/app-logs' is HEALTHY
I appreciate any help in understanding this output. Regards
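One possible explanation, offered as a hedged note: zero-length files consume no blocks, so the gap of 8694 files vs. 8690 blocks would correspond to 4 empty files. A quick way to check against the same path:

```shell
# A zero-byte file allocates no block:
hdfs dfs -touchz /tmp/empty-file
hdfs fsck /tmp/empty-file -files -blocks

# Count zero-length files under /app-logs (column 5 of the ls output is the size;
# skip directories, whose permission string starts with 'd')
hdfs dfs -ls -R /app-logs | awk '$1 !~ /^d/ && $5 == 0' | wc -l
```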
Labels:
- Apache Hadoop
07-20-2017
07:11 AM
Hello @Vipin Rathor, Thanks for your clear explanations. Regards
07-19-2017
08:55 AM
Hi, In our clusters we have a strange behavior, certainly due to a misconfiguration. In all Ranger actions (like downloading policies, querying Ranger KMS, getting audits, etc.) we see a failed authentication before a successful one. Everything works as expected, but this does not look optimal. Below are some examples extracted from the access logs of Ranger Admin and KMS: 192.168.0.1 - - [19/Jul/2017:10:29:52 +0200] "GET /service/plugins/secure/policies/download/clusterName_kafka?lastKnownVersion=42&pluginId=kafka@host.domain-clusterName_kafka HTTP/1.1" 401 -
192.168.0.1 - - [19/Jul/2017:10:29:52 +0200] "GET /service/plugins/secure/policies/download/clusterName_kafka?lastKnownVersion=42&pluginId=kafka@host.domain-clusterName_kafka HTTP/1.1" 304 -
192.168.0.1 - - [17/Jul/2017:14:11:42 +0000] "GET /service/assets/accessAudit?page=0&pageSize=25&total_pages=66&totalCount=1626&startIndex=0&sortBy=eventTime&startDate=07%2F17%2F2017 HTTP/1.1" 401 1113
192.168.0.1 - - [17/Jul/2017:14:11:43 +0000] "GET /service/assets/accessAudit?page=0&pageSize=25&total_pages=66&totalCount=1626&startIndex=0&sortBy=eventTime&startDate=07%2F17%2F2017 HTTP/1.1" 200 11187
192.168.0.1 - - [17/Jul/2017:14:06:03 +0000] "GET /service/plugins/policy/52/versionList HTTP/1.1" 401 1113
192.168.0.1 - - [17/Jul/2017:14:06:03 +0000] "GET /service/plugins/policy/52/versionList HTTP/1.1" 200 23
192.168.0.1 - - [17/Jul/2017:14:06:03 +0000] "GET /service/plugins/policies/eventTime?eventTime=2017-07-17T14%3A05%3A47Z&policyId=52&_=1500297123319 HTTP/1.1" 401 1113
192.168.0.1 - - [17/Jul/2017:14:06:03 +0000] "GET /service/plugins/policies/eventTime?eventTime=2017-07-17T14%3A05%3A47Z&policyId=52&_=1500297123319 HTTP/1.1" 200 708
192.168.0.1 - - [19/Jul/2017:10:20:19 +0200] "OPTIONS /kms/v1/?op=GETDELEGATIONTOKEN&renewer=rm%2Fhost.domain%40CIB.NET HTTP/1.1" 401 997
192.168.0.1 - - [19/Jul/2017:10:20:19 +0200] "OPTIONS /kms/v1/?op=GETDELEGATIONTOKEN&renewer=rm%2Fhost.domain%40CIB.NET HTTP/1.1" 200 3484
192.168.0.1 - - [19/Jul/2017:10:20:19 +0200] "GET /kms/v1/?op=GETDELEGATIONTOKEN&renewer=rm%2Fhost.domain%40CIB.NET HTTP/1.1" 200 132
Does anyone have an idea of what could be wrong? We have secured clusters, with two Ranger Admin/KMS hosts on each. Thanks
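If it helps, this 401-then-success pattern is normal for SPNEGO/Kerberos HTTP authentication: the client's first request is anonymous, the server answers 401 with a "WWW-Authenticate: Negotiate" challenge, and the client retries with a Kerberos token. A way to observe the exchange (host, port, and principal are placeholders):

```shell
kinit someuser@REALM.NET    # placeholder principal
curl --negotiate -u : -v "http://ranger.host.domain:6080/service/plugins/policies"
# -v shows the initial 401 challenge followed by the authenticated response
```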
Labels:
- Apache Ranger
03-20-2017
08:03 PM
Hi guys, Today we finally found what was wrong. Our Kafka keytab has (like the other keytabs in our clusters) a key encrypted with aes256-cts-hmac-sha1-96. This keytab works as expected for authenticating to ZooKeeper but is problematic for communicating with the broker. Replacing it with a keytab using arcfour-hmac encryption resolved our issue. For security reasons this solution is not acceptable, so we will run more tests in the coming days. If I have any update on this issue I'll post our findings here. Thanks @schandhok and @Jay SenSharma
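As a hedged aside: aes256-cts-hmac-sha1-96 keys commonly fail on JVMs that lack the JCE Unlimited Strength policy files, which would explain why arcfour-hmac works while AES-256 does not. Some checks, with placeholder paths and principals:

```shell
# List the encryption types present in the keytab
klist -ekt /etc/security/keytabs/kafka.service.keytab

# Authenticate with it and show the enctypes actually negotiated
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/host.domain@REALM.NET
klist -e
```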
03-18-2017
04:16 PM
@Jay SenSharma Thanks for your reply. Yes, the port is in LISTEN state: # nc -v kafka.host.com 6667
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 10.194.231.5:6667.
^C
# lsof -i :6667
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 2289859 kafkauser 141u IPv4 675950500 0t0 TCP kafka.host.com:ircu-3 (LISTEN)
# pstree -anlp 2289859
java,2289859 -Xmx4G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/var/log/kafka/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/log/kafka -Dlog4j.configuration=file:/usr/hdp/2.5.3.0-37/kafka/bin/../config/log4j.properties -Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_jaas.conf -cp :/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar:/usr/lib/ambari-metrics-kafka-sink/lib/*:/export/home/kafkauser:/etc/kafka/conf:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/etc/hadoop/conf:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar:/usr/lib/ambari-metrics-kafka-sink/lib/*:/usr/hdp/current/kafka-broker/bin:/etc/kafka/conf:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/etc/hadoop/conf:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar:/usr/lib/ambari-metrics-kafka-sink/lib/*:/usr/hdp/2.5.3.0-37/kafka/bin:/etc/kafka/conf:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/etc/hadoop/conf:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/argparse4j-0.5.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/connect-api-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/connect-file-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/connect-json-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/connect-runtime-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/guava-18.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/hk2-u
tils-2.4.0-b34.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jackson-core-2.6.3.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jackson-databind-2.6.3.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/javassist-3.18.2-GA.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/javax.annotation-api-1.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/javax.inject-1.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jersey-client-2.22.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jersey-common-2.22.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jersey-guava-2.22.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jersey-server-2.22.2.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/jopt-simple-4.9.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/kafka_2.10-0.10.0.2.5.3.0-37.jar:/usr
/hdp/2.5.3.0-37/kafka/bin/../libs/kafka-clients-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/kafka-ganglia-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/kafka-log4j-appender-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/kafka-streams-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/kafka-streams-examples-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/kafka-tools-0.10.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/log4j-1.2.17.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/lz4-1.3.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/metrics-ganglia-2.2.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/ojdbc6.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/ranger-kafka-plugin-impl:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/ranger-kafka-plugin-shim-0.6.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/ranger-plugin-classloader-0.6.0.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/reflections-0.9.10.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/rocksdbjni-4.8.0.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/scala-library-2.10.4.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/slf4j-api-1.7.21.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/zkclient-0.8.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/zookeeper-3.4.6.jar:/usr/hdp/2.5.3.0-37/kafka/bin/../libs/zookeeper.jar kafka.Kafka /usr/hdp/2.5.3.0-37/kafka/config/server.properties
├─{java},2290073
├─{java},2290074
└─{java},2290075
03-18-2017
02:22 PM
Hi all,
After "successfully" installing Kafka on a host with HDP 2.5.3 and starting the service from Ambari, we are not able to produce or consume topics.
We can create and describe a topic: -bash-4.2$ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zoo.host.com:2181 --create --topic newmikl --partition 1 --replication-factor 1
Created topic "newmikl".
-bash-4.2$ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zoo.host.com:2181 --describe --topic newmikl Topic:newmikl PartitionCount:1 ReplicationFactor:1 Configs:
Topic: newmikl Partition: 0 Leader: 1001 Replicas: 1001 Isr: 1001
Unfortunately, in log.dirs we can't see the topic files: -bash-4.2$ ls -l /hadoop/kafka/kafka-logs/
total 4
-rwxrwxrwx 1 kafkauser kafkagroup 0 Mar 17 09:45 cleaner-offset-checkpoint
-rwxrwxrwx 1 kafkauser kafkagroup 57 Mar 17 09:45 meta.properties
-rwxrwxrwx 1 kafkauser kafkagroup 0 Mar 17 09:45 recovery-point-offset-checkpoint
-rwxrwxrwx 1 kafkauser kafkagroup 0 Mar 17 09:45 replication-offset-checkpoint
When we try to produce and consume messages, we face an error: -bash-4.2$ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list kafka.host.com:6667 --topic newmikl --security-protocol SASL_PLAINTEXT
test
test1
test2
test3
^C[2017-03-17 11:14:24,025] WARN TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)
-bash-4.2$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper zoo.host.com:2181 --topic newmikl --security-protocol SASL_PLAINTEXT --from-beginning
{metadata.broker.list=kafka.host.com:6667, request.timeout.ms=30000, client.id=console-consumer-56271, security.protocol=SASL_PLAINTEXT}
[2017-03-17 11:14:35,594] WARN Fetching topic metadata with correlation id 0 for topics [Set(newmikl)] from broker [BrokerEndPoint(1001,kafka.host.com,6667)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:122)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:81)
at kafka.producer.SyncProducer.send(SyncProducer.scala:126)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:96)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:67)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2017-03-17 11:14:35,597] WARN [console-consumer-56271_slbifregkhn01.fr.intranet-1489745675236-1a4e34e3-leader-finder-thread], Failed to find leader for Set([newmikl,0]) (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics [Set(newmikl)] from broker [ArrayBuffer(BrokerEndPoint(1001,kafka.host.com,6667))] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:96)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:67)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:122)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:81)
at kafka.producer.SyncProducer.send(SyncProducer.scala:126)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
... 3 more
{metadata.broker.list=kafka.host.com:6667, request.timeout.ms=30000, client.id=console-consumer-56271, security.protocol=SASL_PLAINTEXT}
[...]
Looking at the Kafka logs, we can see errors since Kafka started: # tail -f /var/log/kafka/controller.log
java.io.IOException: Connection to kafka.host.com:6667 (id: 1003 rack: null) failed
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:63)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:59)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:112)
at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:120)
at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:59)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:233)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:182)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:181)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2017-03-18 15:15:29,340] WARN [Controller-1003-to-broker-1003-send-thread], Controller 1003's connection to broker kafka.host.com:6667 (id: 1003 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to kafka.host.com:6667 (id: 1003 rack: null) failed
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:63)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:59)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:112)
at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:120)
at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:59)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:233)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:182)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:181)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Does anyone have an idea of what could be wrong in our configuration? Thanks.
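One generic check, offered as a sketch with placeholder hosts: confirm the broker actually registered a SASL_PLAINTEXT endpoint in ZooKeeper, since a listener mismatch produces exactly this kind of controller connection failure:

```shell
# Inspect the broker registration (broker id 1001 taken from the describe output above)
/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh zoo.host.com:2181 get /brokers/ids/1001
# The returned JSON should list an endpoint like SASL_PLAINTEXT://kafka.host.com:6667
```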
Labels:
- Apache Kafka