Member since: 03-18-2016
Posts: 18
Kudos Received: 5
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1476 | 02-01-2017 05:47 PM
04-25-2017 04:21 PM
I cannot get Zeppelin's Shiro configuration to map my groups on HDP 2.6. I used a config like the one shown here, but I always get "roles":"[]". Are you using any particular Active Directory attribute to determine which groups a user belongs to? We use "memberOf", and in some other components, such as Knox Gateway, we have to configure which attribute holds the group list. I am curious whether Zeppelin/Shiro might be hard-coding that attribute and whether that is why I can't get my users mapped to groups.
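For comparison, here is a minimal sketch of the Active Directory realm section I would expect in Zeppelin's conf/shiro.ini on HDP 2.6 (Zeppelin 0.7); the URL, DNs, group name, and credentials below are placeholders, not our real values:
activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = bind_user@EXAMPLE.COM
activeDirectoryRealm.systemPassword = bind_password
activeDirectoryRealm.url = ldaps://ad.example.com:636
activeDirectoryRealm.searchBase = DC=example,DC=com
# groupRolesMap maps AD group DNs to Zeppelin roles; group membership itself is resolved by the realm
activeDirectoryRealm.groupRolesMap = "CN=zeppelin-admins,OU=groups,DC=example,DC=com":"admin"
activeDirectoryRealm.authorizationCachingEnabled = false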
03-13-2017 06:49 PM
I have io.file.buffer.size set to 128 KB. MAX_PACKET_SIZE, I believe, is 16 MB.
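For reference, a quick way to confirm what the client actually picks up from the deployed core-site.xml (assuming the HDP client configs are present on the node):
hdfs getconf -confKey io.file.buffer.size    # 131072 = 128 KB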
03-13-2017 02:10 PM
Does anyone have further info regarding this error?
2017-03-10 15:18:44,317 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(935)) - Exception for BP-282268147-124.121.209.38-1430171074465:blk_1117945085_44312886
java.io.IOException: Incorrect value for packet payload size: 2147483128
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:896)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
at java.lang.Thread.run(Thread.java:745)
This comes from PacketReceiver.java on the HDFS DataNode. I think the value of MAX_PACKET_SIZE is hard-coded to 16 MB in that code, yet somehow I have a client operation that results in a payload size of a hair under 2 GB. I am not sure where to look for settings that would control this behavior (see the configuration check after the client log below). The client gets a connection reset by peer:
2017-03-10 15:18:44.317 -0600 WARN DFSOutputStream$DataStreamer$ResponseProcessor - DFSOutputStream ResponseProcessor exception for block BP-282268147-124.121.209.38-1430171074465:blk_1117945085_44312886
java.io.IOException: Connection reset by peer
2017-03-10 15:18:45.020 -0600 WARN DFSOutputStream$DataStreamer - Error Recovery for block BP-282268147-164.121.209.38-1430171074465:blk_1117945085_44312886 in pipeline DatanodeInfoWithStorage[164.121.209.43:50010,DS-8de5e011-72e8-4097-bbf9-5467b1542f22,DISK], DatanodeInfoWithStorage[164.121.209.30:50010,DS-4c9dd8a3-07ee-4e45-bef1-73f6957c1383,DISK]: bad datanode DatanodeInfoWithStorage[164.121.209.43:50010,DS-8de5e011-72e8-4097-bbf9-5467b1542f22,DISK]
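The setting I would check first (an assumption on my part, not a confirmed root cause) is the client-side write packet size, since the DataNode compares each received packet against its hard-coded 16 MB limit:
hdfs getconf -confKey dfs.client-write-packet-size    # default is 65536 (64 KB); a value near 2 GB would trip the DataNode check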
Labels: Apache Hadoop
02-22-2017 08:31 PM
Sorry, I was trying to get clarification on this *second* issue, as I was hitting the exact same scenario. For others in the future: the response by @Josh Elser that is upvoted at the top (when sorted by Votes) also worked for me to correct this java.net.SocketTimeoutException when connecting to PQS. I was missing the PQS host in hadoop.proxyuser.HTTP.hosts. I didn't realize the upvoted response was for this second issue because the comment sorting was showing things out of order for me. I never did track down an impersonation error message, but I also didn't increase tracing to try very hard at capturing one.
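As a quick sanity check for anyone else, this is roughly how I verified which hosts the HTTP proxy user was allowed to originate from (read from the core-site.xml deployed on the node):
hdfs getconf -confKey hadoop.proxyuser.HTTP.hosts     # must include the PQS host (or be *)
hdfs getconf -confKey hadoop.proxyuser.HTTP.groups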
02-22-2017 06:32 PM
Did you ever find a resolution to this other error?
02-03-2017 04:28 AM
1 Kudo
I enabled Kerberos on an HDP 2.5.3 cluster via Ambari. I am trying to run HBase on Slider, but it is failing with an error related to YarnRegistry in ZooKeeper.
2017-02-01 08:20:16,968 [main] INFO appmaster.SliderAppMaster - Starting Yarn registry
2017-02-01 08:20:17,183 [main] INFO appmaster.SliderAppMaster - Service YarnRegistry in state YarnRegistry: STARTED
Connection="fixed ZK quorum "los90hdpc4m2.EXAMPLE.COM:2181,los90hdpc4m3.EXAMPLE.COM:2181,los90hdpc4m4.EXAMPLE.COM:2181"
" root="/registry" secure cluster; secure registry; Curator service access policy: anon; System ACLs:
0x01: 'world,'anyone
0x1f: 'sasl,'yarn@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'mapred@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'hdfs@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'hadoop@LAKEE.EXAMPLE.COM
User: hbase: hbase (auth:KERBEROS) hasKerberosCredentials=false isFromKeytab=false kerberos is enabled in Hadoop =true;
Kerberos Realm: LAKEE.EXAMPLE.COM;
java.security.auth.login.config=(undefined);
zookeeper.sasl.client=false;
zookeeper.allowSaslFailedClients=(undefined but defaults to true);
zookeeper.maintain_connection_despite_sasl_failure=(undefined)
2017-02-01 08:20:23,071 [main] INFO appmaster.SliderAppMaster - Slider AM Security Mode: KEYTAB
2017-02-01 08:20:23,108 [main] INFO appmaster.SliderAppMaster - Kind: RM_DELEGATION_TOKEN, Service: 10.0.0.111:8032,10.0.0.112:8032, Ident: (owner=hbase@LAKEE.EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1485958371269, maxDate=1486563171269, sequenceNumber=3341, masterKeyId=684); owner=hbase@LAKEE.EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1485958371269, maxDate=1486563171269, sequenceNumber=3341, masterKeyId=684; Renewer: yarn; Issued: 2/1/17 8:12 AM; Max Date: 2/8/17 8:12 AM
2017-02-01 08:20:23,271 [main] INFO security.SecurityConfiguration - No host keytab file path specified. Will attempt to retrieve keytab file hbase-slider.headless.keytab as a local resource for the container
2017-02-01 08:20:23,271 [main] INFO appmaster.SliderAppMaster - Logging in as hbase with keytab keytabs/hbase-slider.headless.keytab
2017-02-01 08:20:23,333 [main] INFO security.UserGroupInformation - Login successful for user hbase using keytab file /mnt/hdfs5/yarn/local/usercache/hbase/appcache/application_1485859889689_0043/container_e340_1485859889689_0043_01_000001/keytabs/hbase-slider.headless.keytab
2017-02-01 08:20:29,326 [AmExecutor-006] ERROR actions.QueueExecutor - Exception processing org.apache.slider.server.appmaster.actions.ActionRegisterServiceInstance@14479ca8 name='ActionRegisterServiceInstance', delay=0, attrs=0, sequenceNumber=5}: org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: `/registry/users/hbase/services/org-apache-slider/hbase95': Not authorized to access path; ACLs: [
0x01: 'world,'anyone
0x1f: 'sasl,'yarn@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'mapred@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'hdfs@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'hadoop@LAKEE.EXAMPLE.COM
]: KeeperErrorCode = NoAuth for /registry/users/hbase/services/org-apache-slider/hbase95
org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: `/registry/users/hbase/services/org-apache-slider/hbase95': Not authorized to access path; ACLs: [
0x01: 'world,'anyone
0x1f: 'sasl,'yarn@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'mapred@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'hdfs@LAKEE.EXAMPLE.COM
0x1f: 'sasl,'hadoop@LAKEE.EXAMPLE.COM
]: KeeperErrorCode = NoAuth for /registry/users/
at org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:381)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.zkCreate(CuratorService.java:598)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.zkSet(CuratorService.java:638)
at org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.bind(RegistryOperationsService.java:114)
at org.apache.slider.server.services.yarnregistry.YarnRegistryViewForProviders.putService(YarnRegistryViewForProviders.java:189)
at org.apache.slider.server.services.yarnregistry.YarnRegistryViewForProviders.registerSelf(YarnRegistryViewForProviders.java:224)
at org.apache.slider.server.appmaster.SliderAppMaster.registerServiceInstance(SliderAppMaster.java:1343)
at org.apache.slider.server.appmaster.actions.ActionRegisterServiceInstance.execute(ActionRegisterServiceInstance.java:57)
at org.apache.slider.server.appmaster.actions.QueueExecutor.run(QueueExecutor.java:73)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /registry/users/
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
From my reading, my understanding was that YARN was supposed to create this registry structure in ZooKeeper. However, it seems that the Application Master for my HBase instance is using Curator to try to create it (and getting the permission denied). Any advice on how to get past this error?
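In case it helps someone narrow this down, here is roughly how I inspected the registry ACLs from a cluster node (the zkCli.sh path is the usual HDP location; adjust the server as needed):
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server los90hdpc4m2.EXAMPLE.COM:2181
# then, inside the zkCli shell:
ls /registry/users
getAcl /registry/users
getAcl /registry/users/hbase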
02-01-2017 05:47 PM
Found the error in the ZooKeeper log file:
2017-02-01 11:33:59,000 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing stmk command from /0:0:0:0:0:0:0:1:47438
2017-02-01 11:33:59,001 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@218] - Ignoring unexpected runtime exception
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506)
at java.nio.HeapByteBuffer.getLong(HeapByteBuffer.java:412)
at org.apache.zookeeper.server.NIOServerCnxn.checkFourLetterWord(NIOServerCnxn.java:865)
at org.apache.zookeeper.server.NIOServerCnxn.readLength(NIOServerCnxn.java:924)
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:237)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
Found the related bug: https://issues.apache.org/jira/browse/ZOOKEEPER-2227. Hopefully the fix will be included in the next HDP release.
02-01-2017 05:43 PM
I am having trouble on my system trying to use the stmk command for ZooKeeper.
[hadoop@los90hdpc4m3][~]$ echo "envi" | nc localhost 2181 | grep zookeeper.version
zookeeper.version=3.4.6-37--1, built on 11/29/2016 17:59 GMT
[hadoop@los90hdpc4m3][~]$ echo "gtmk" | nc localhost 2181
306
[hadoop@los90hdpc4m3][~]$ perl -e "print 'stmk', pack('q>', 0b0011111010)" | nc localhost 2181
The stmk command, built using the Perl example shown in the ZooKeeper Administrator Guide, just hangs and never returns. Has anyone else had better luck enabling ZooKeeper tracing via stmk? This is on an HDP 2.5.3 system.
01-23-2017 07:22 PM
The purpose of renewable tickets was missed in the provided answers: you renew a ticket in order to avoid going through the full authentication process again. You can issue a renewal request (without re-authenticating) at any point up until renew_lifetime. Use klist to see the valid/expires/renew-until timestamps.
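A minimal example of the flow (the principal is a placeholder, and it assumes the TGT was granted as renewable in the first place):
kinit user@EXAMPLE.COM      # full authentication (password or keytab)
klist                       # shows "valid starting", "expires", and "renew until"
kinit -R                    # renews the existing TGT without re-authenticating, allowed until renew_lifetime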
07-05-2016 07:59 PM
Do we have an ETA yet on the availability of the CentOS/RHEL7 ODBC driver?