Member since: 12-08-2016
Posts: 88
Kudos Received: 1
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 1490 | 12-10-2016 03:00 AM |
12-10-2018
01:17 AM
Hello, when HBase reads data from HDFS, a RegionServer throws an exception:

2018-12-10 00:13:56,596 INFO [RpcServer.FifoWFPBQ.default.handler=3,queue=3,port=16020] shortcircuit.ShortCircuitCache: ShortCircuitCache(0xc75b9da): could not load 1074372655_BP-739229554-10.101.203.32-1526543732919 due to InvalidToken exception.
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/tsdb/17caedda58b3035c481a972a1d498e2d/t/a097b33304184834b4c3019bcfcf3fcb

There is also an expired-token error on the DataNode:

2018-12-10 01:09:04,404 ERROR datanode.DataNode (DataXceiver.java:run(278)) - cbdc-node-3.crservice.cn:50010:DataXceiver error processing REQUEST_SHORT_CIRCUIT_FDS operation src: unix:/var/lib/hadoop-hdfs/dn_socket dst: <local>
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with block_token_identifier (expiryDate=1544402454744, keyId=742060374, userId=hbase, blockPoolId=BP-739229554-10.101.203.32-1526543732919, blockId=1074375591, access modes=[READ]) is expired.
at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
at org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:98)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1309)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitFds(DataXceiver.java:311)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitFds(Receiver.java:187)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:89)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
at java.lang.Thread.run(Thread.java:748)

This exception causes failures while my application runs. How can I fix it?
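A note on reading the expired-token message: the expiryDate field is a Unix timestamp in milliseconds, so it can be decoded to see exactly when the block token expired. A minimal Python sketch (the DataNode log timestamp may be in local time, so compare accordingly):

```python
from datetime import datetime, timezone

# expiryDate from the InvalidToken message, in milliseconds since the Unix epoch
expiry_ms = 1544402454744

# Decode to a UTC datetime to see when the block token expired
expiry = datetime.fromtimestamp(expiry_ms / 1000, tz=timezone.utc)
print(expiry.isoformat())
```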
Labels:
- Apache Hadoop
- Apache HBase
10-31-2017
01:09 AM
Yes, I added it to my command, but that doesn't work.
07-13-2017
01:21 AM
OK, I will share the Kafka state-change.log; please pay attention to test-kafka-topic at about 2017-07-12 16:04:00. The Kafka state-change log is state-change.txt
07-12-2017
08:12 AM
Hello, the attachment shows the operations in my environment. After I delete a topic named 'test-kafka-topic', I try to create a topic with the same name, but creation fails. And of course I set 'delete.topic.enable=true'.
07-07-2017
08:00 AM
I configured 'delete.topic.enable=true' in the Kafka configuration file server.properties, but after I enable the Ranger Kafka plugin and try to delete a Kafka topic, the topic is not deleted completely. Why? Can anyone help me solve this?
Labels:
- Apache Kafka
05-16-2017
08:54 AM
My Kafka server.properties content is:

# Generated by Apache Ambari. Tue May 16 06:15:33 2017
advertised.host.name=172.21.9.35
advertised.listeners=PLAINTEXT://172.21.9.35:6666,SSL://172.21.9.35:6667
allow.everyone.if.no.acl.found=true
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
host.name=172.21.9.35
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.host=node3
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.protocol=http
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=bigdata
kafka.timeline.metrics.truststore.path=/etc/security/clientKeys/all.jks
kafka.timeline.metrics.truststore.type=jks
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://node1:6666,SSL://node1:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
port=6667
principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
security.inter.broker.protocol=SSL
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.key.password=hadoop
ssl.keystore.location=/etc/kafka/conf/ssl/kafka.server.keystore.jks
ssl.keystore.password=hadoop
ssl.keystore.type=JKS
ssl.truststore.location=/etc/kafka/conf/ssl/kafka.server.truststore.jks
ssl.truststore.password=hadoop
ssl.truststore.type=JKS
super.users=User:CN=node1,OU=test,O=test,L=test,ST=test,C=te
zookeeper.connect=node2:2181,node3:2181,node4:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
05-16-2017
08:09 AM
Yes, I set "security.inter.broker.protocol" to "SSL".
05-16-2017
08:08 AM
Hello, my produce command is:

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list node3:6667 --topic my-replicated-topic --producer.config /etc/kafka/conf/ssl/producer.properties

The consume command is:

./bin/kafka-console-consumer.sh --bootstrap-server node1:6667 --topic my-replicated-topic --new-consumer --consumer.config /etc/kafka/conf/ssl/producer.properties --from-beginning

I use SSL for authentication, not Kerberos, so I added the --security-protocol SSL option to my Kafka producer/consumer. But while producing, the console prints messages as follows:

[2017-05-16 16:03:21,447] WARN Error while fetching metadata with correlation id 0 : {my-replicated-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-05-16 16:03:21,548] WARN Error while fetching metadata with correlation id 1 : {my-replicated-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
... (the same WARN repeats for correlation ids 2 through 22)
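For reference, the client config file passed via --producer.config / --consumer.config needs client-side SSL settings along these lines. This is only a sketch: the paths and passwords below are placeholders, not taken from my environment.

```properties
# Client-side SSL settings (sketch; adjust paths and passwords)
security.protocol=SSL
ssl.truststore.location=/etc/kafka/conf/ssl/kafka.client.truststore.jks
ssl.truststore.password=changeit
# A client keystore is needed because the broker sets ssl.client.auth=required
ssl.keystore.location=/etc/kafka/conf/ssl/kafka.client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```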
05-16-2017
06:53 AM
After configuring Kafka security with SSL, I execute the commands to produce and consume messages, but they print messages as follows:

[2017-05-16 06:45:20,660] WARN Bootstrap broker Node1:6667 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-05-16 06:45:20,937] WARN Bootstrap broker Node1:6667 disconnected (org.apache.kafka.clients.NetworkClient)
... (the same WARN repeats until 06:45:24,221)
Why does this happen? Has anyone else encountered it?
Labels:
- Apache Kafka
05-12-2017
07:06 AM
Can Kafka ACLs be implemented with SSL? I want to use SSL identities to control authorization in Kafka. Can you help me?
05-11-2017
10:07 AM
Hi, I want to configure Kafka authentication with SSL, and I do not want to use Kerberos, but I do not know how to do it. Can someone tell me how you did it?
Labels:
- Apache Kafka
03-31-2017
06:28 AM
Hi, does it mean that the Ranger Kafka plugin cannot define policies per user, only per host?
03-30-2017
10:17 AM
# bin/kafka-acls.sh --list --topic test5

After executing this command, no ACLs are listed for topic test5.
03-30-2017
09:33 AM
After enabling the Ranger Kafka plugin, I execute the command "/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list bigdata001:6667 --topic test5", but when I enter content to send a message, the result is as follows:

[2017-03-30 17:06:45,507] WARN Error while fetching metadata with correlation id 0 : {test5=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
[2017-03-30 17:06:45,507] ERROR Error when sending message to topic test5 with key: null, value: 7 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [test5]
[2017-03-30 17:11:45,563] WARN Error while fetching metadata with correlation id 1 : {test5=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
Labels:
- Apache Kafka
03-23-2017
11:05 AM
Yes, I can see a file named 'hdfs_mytest_hadoop.json' under the '/etc/ranger/mytest_hadoop/policycache' path. The file contains the policies under the service 'mytest_hadoop'.
03-23-2017
10:28 AM
Hi, I cannot use my allowed user to cat the assigned directory. For example, I want user 'hive' to cat '/user/accumulo' after setting an allow permission in a Ranger policy, but when I execute the command, the HDFS permission check prevents it.
03-23-2017
09:59 AM
OK. hdfs-ranger-audit-bigdata001istuarycom.txt
03-23-2017
08:47 AM
Yes, I added it before, but it has no effect, and I see no change in xa_portal.log.
03-23-2017
08:26 AM
After enabling the HDFS Ranger plugin, I created a policy in the default service. But there is an error when I execute a command to read a directory. Here is the example:
Labels:
- Apache Ranger
03-15-2017
07:26 AM
I find in pom.xml under the contrib/views/tez path that "tez.view.version" is 0.7.0.0-SNAPSHOT, but in stack_advisor.py there is code like this:
if os.path.exists(views_work_dir) and os.path.isdir(views_work_dir):
    last_version = '0.0.0'
    for file in os.listdir(views_work_dir):
        if fnmatch.fnmatch(file, 'TEZ{*}'):
            current_version = file.lstrip("TEZ{").rstrip("}")  # E.g.: TEZ{0.7.0.2.3.0.0-2154}
            if self.versionCompare(current_version.replace("-", "."), last_version.replace("-", ".")) >= 0:
                latest_tez_jar_version = current_version
                last_version = current_version
            pass
        pass
    pass

def versionCompare(self, version1, version2):
    def normalize(v):
        return [int(x) for x in re.sub(r'(\.0+)*$', '', v).split(".")]
    return cmp(normalize(version1), normalize(version2))
    pass
So when the versionCompare function is called, 'SNAPSHOT' cannot be converted to an int, and an exception is thrown. But Ambari does not change this code and still runs; in my environment there is an error.
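To reproduce the failure: feeding a SNAPSHOT version through the same normalize logic raises ValueError, because int('SNAPSHOT') fails. A minimal sketch in Python 3, with cmp replaced by an equivalent expression since cmp was removed in Python 3:

```python
import re

def normalize(v):
    # Same normalization as stack_advisor.py: strip trailing ".0" groups, split on "."
    return [int(x) for x in re.sub(r'(\.0+)*$', '', v).split(".")]

def version_compare(v1, v2):
    # Python 3 equivalent of cmp(normalize(v1), normalize(v2))
    a, b = normalize(v1), normalize(v2)
    return (a > b) - (a < b)

# A numeric version like TEZ{0.7.0.2.3.0.0-2154} (after replace("-", ".")) compares fine
print(version_compare("0.7.0.2.3.0.0.2154", "0.0.0"))  # prints 1

# "0.7.0.0-SNAPSHOT" becomes "0.7.0.0.SNAPSHOT" after replace("-", ".") and then fails
try:
    version_compare("0.7.0.0.SNAPSHOT", "0.0.0")
except ValueError as e:
    print("ValueError:", e)  # int("SNAPSHOT") fails here
```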
03-15-2017
04:51 AM
No, I customized a stack named isdp, so my stack directory is as follows:
03-14-2017
11:48 AM
First, I enabled the Ranger plugin for HDFS and then saved the change, but there is an error message: "The configuration changes could not be validated for consistency due to an unknown error. Your changes have not been saved yet. Would you like to proceed and save the changes?". I do not know why this error happens; can you help me?
03-14-2017
11:43 AM
After I enable the Ranger plugin for HDFS and save the change, it presents the error message "The configuration changes could not be validated for consistency due to an unknown error. Your changes have not been saved yet. Would you like to proceed and save the changes?". Why does this happen?
Labels:
- Apache Ranger
03-14-2017
11:02 AM
I customized a stack, but after I enable the Ranger plugin for HDFS, I cannot see the default service in the Ranger UI. Normally there would be a default service named bigdata_hadoop ($stackname_$componentname), but I cannot see it after customizing a stack.
Labels:
- Apache Ranger
02-17-2017
02:03 PM
Must the "/check" path be registered?
02-17-2017
01:46 PM
I deleted the @Produces annotation, but the error is the same.
02-17-2017
12:53 PM
Is the path not registered?