Member since: 05-22-2017
Posts: 13
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2678 | 05-25-2017 08:33 AM
07-10-2017
03:19 PM
@Michael Young Is there also a time-based flush setting, something like flush.hours? If not, and if I haven't set either of the two properties below, will the data stay in memory until the buffer reaches 100 MB? <ramBufferSizeMB>100</ramBufferSizeMB> <maxBufferedDocs>1000</maxBufferedDocs> In my case, once the data is indexed I only make select calls, no further update calls.
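(For reference, a minimal sketch of how these settings appear in solrconfig.xml; the autoCommit block shown is Solr's usual time-based flush knob and is included only as an illustration of what a time-based setting looks like, not as my actual configuration:)

<indexConfig>
  <!-- flush the in-memory index buffer once it reaches this size -->
  <ramBufferSizeMB>100</ramBufferSizeMB>
  <!-- ...or once this many documents are buffered, whichever comes first -->
  <maxBufferedDocs>1000</maxBufferedDocs>
</indexConfig>

<updateHandler class="solr.DirectUpdateHandler2">
  <!-- time-based hard commit: flush buffered documents to disk every 60 seconds -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>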
07-10-2017
06:47 AM
1 Kudo
I have my Java service and Solr (single-node SolrCloud) with ZooKeeper running on the same machine. The Java service exposes a REST API that fetches data from Solr and shows it on the UI. When the load on that machine increases, for example 100 users hitting the API simultaneously, the machine hangs and Solr crashes; it restarts after a few minutes, but with the entire data lost. Why is the data getting lost? Solr crashing is one thing, but the data should still be there. Any help will be appreciated.

ZooKeeper logs:

[2017-06-12 13:11:20,055] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn) EndOfStreamException: Unable to read additional data from client sessionid 0x15c78310667004e, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:748)
[2017-06-12 14:40:16,978] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn) EndOfStreamException: Unable to read additional data from client sessionid 0x15c78310667006e, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:748)
Solr logs:

420206 [searcherExecutor-7-thread-1] INFO org.apache.solr.core.SolrCore – [rwindex_shard1_replica1] Registered new searcher Searcher@492f5449[rwindex_shard1_replica1] main{StandardDirectoryReader(segments_2:9:nrt)}
420207 [qtp1725097945-17] INFO org.apache.solr.update.processor.LogUpdateProcessor – [rwindex_shard1_replica1] webapp=/solr path=/update params={stream.body=<delete><query>*:*</query></delete>&commit=true} {deleteByQuery=*:* (-1572256668905897984),commit=} 0 323
420224 [qtp1725097945-17] ERROR org.apache.solr.servlet.SolrDispatchFilter – null:org.eclipse.jetty.io.EofException
at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:914)
at org.eclipse.jetty.http.AbstractGenerator.flush(AbstractGenerator.java:443)
at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:100)
at org.eclipse.jetty.server.AbstractHttpConnection$Output.flush(AbstractHttpConnection.java:1094)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at org.apache.solr.util.FastWriter.flush(FastWriter.java:137)
at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:766)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:426)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at org.eclipse.jetty.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:375)
at org.eclipse.jetty.io.bio.StreamEndPoint.flush(StreamEndPoint.java:164)
at org.eclipse.jetty.io.bio.StreamEndPoint.flush(StreamEndPoint.java:194)
at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:838)
... 35 more
420226 [qtp1725097945-17] ERROR org.apache.solr.servlet.SolrDispatchFilter – null:org.eclipse.jetty.io.EofException
at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:914)
at org.eclipse.jetty.http.AbstractGenerator.flush(AbstractGenerator.java:443)
at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:100)
at org.eclipse.jetty.server.AbstractHttpConnection$Output.flush(AbstractHttpConnection.java:1094)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
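(A note on the data loss, offered as a guess rather than a diagnosis: depending on whether the updateLog is enabled, documents that never received a hard commit may not survive a crash, since only a hard commit guarantees they are flushed to the index on disk. It may be worth confirming that a hard commit ran after the bulk indexing finished; a minimal way to force one, assuming a collection named rwindex and the default /update handler:)

curl 'http://localhost:8983/solr/rwindex/update?commit=true'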
Labels:
- Apache Solr
- Apache Spark
06-02-2017
08:14 PM
I have made some changes in the code, and here is where I am getting stuck:

1) Driver log:

17/06/02 16:25:33 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:37 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:38 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:43 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:46 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:25:49 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:52 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:56 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:57 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:01 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:04 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:08 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:10 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:14 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:16 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:20 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:25 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:27 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:28 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:32 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:35 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:39 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:42 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:44 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:49 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:52 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:57 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:58 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.

2) Master log:
2017-06-02 15:58:01,638 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/11 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,649 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/12 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,654 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/13 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,663 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/14 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,671 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/15 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,679 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/16 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,688 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/10 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,696 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,704 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,712 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/draining , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,721 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/my_ns , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,750 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/default , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,771 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/NS , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,779 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/hbase , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,787 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,796 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/hbaseid , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,804 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:alarm_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,812 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/hbase:meta , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,821 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:unharnonized_alarm_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,829 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/hbase:namespace , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,838 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:admin_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,846 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,854 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]

3) RegionServer log:

2017-06-02 15:58:01,069 INFO [RS_OPEN_REGION-infoobjects-Latitude-3550:16201-1] regionserver.HRegion: Onlined 18164fdd3694952b1527fb91c4724198; next sequenceid=67
2017-06-02 15:58:01,070 INFO [PostOpenDeployTasks:18164fdd3694952b1527fb91c4724198] regionserver.HRegionServer: Post open deploy tasks for hbase:namespace,,1490169022164.18164fdd3694952b1527fb91c4724198.
2017-06-02 15:58:01,167 INFO [PostOpenDeployTasks:43f9faceac3db2010e6070bec70200c0] hbase.MetaTableAccessor: Updated row my_ns:unharnonized_alarm_audit,,1493121241792.43f9faceac3db2010e6070bec70200c0. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,167 INFO [PostOpenDeployTasks:18164fdd3694952b1527fb91c4724198] hbase.MetaTableAccessor: Updated row hbase:namespace,,1490169022164.18164fdd3694952b1527fb91c4724198. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,167 INFO [PostOpenDeployTasks:40c95c7accfb12179ed5b65207eee989] hbase.MetaTableAccessor: Updated row my_ns:admin_audit,,1494314936756.40c95c7accfb12179ed5b65207eee989. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,225 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=1721776, freeSize=1657859536, maxSize=1659581312, heapSize=1721776, minSize=1576602240, minFactor=0.95, multiSize=788301120, multiFactor=0.5, singleSize=394150560, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-06-02 15:58:01,225 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2017-06-02 15:58:01,283 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=1721776, freeSize=1657859536, maxSize=1659581312, heapSize=1721776, minSize=1576602240, minFactor=0.95, multiSize=788301120, multiFactor=0.5, singleSize=394150560, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-06-02 15:58:01,283 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2017-06-02 15:58:01,298 INFO [RS_OPEN_REGION-infoobjects-Latitude-3550:16201-2] regionserver.HRegion: Onlined 09be4fb5ee3fe8ac146a63600d5d5006; next sequenceid=107
2017-06-02 15:58:01,299 INFO [PostOpenDeployTasks:09be4fb5ee3fe8ac146a63600d5d5006] regionserver.HRegionServer: Post open deploy tasks for my_ns:alarm_audit,,1494845175221.09be4fb5ee3fe8ac146a63600d5d5006.
2017-06-02 15:58:01,305 INFO [PostOpenDeployTasks:09be4fb5ee3fe8ac146a63600d5d5006] hbase.MetaTableAccessor: Updated row my_ns:alarm_audit,,1494845175221.09be4fb5ee3fe8ac146a63600d5d5006. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 16:02:56,017 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=50, hits=43, hitRatio=86.00%, , cachingAccesses=50, cachingHits=43, cachingHitsRatio=86.00%, evictions=29, evicted=0, evictedPerRun=0.0
2017-06-02 16:07:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=53, hits=46, hitRatio=86.79%, , cachingAccesses=53, cachingHits=46, cachingHitsRatio=86.79%, evictions=59, evicted=0, evictedPerRun=0.0
2017-06-02 16:12:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=56, hits=49, hitRatio=87.50%, , cachingAccesses=56, cachingHits=49, cachingHitsRatio=87.50%, evictions=89, evicted=0, evictedPerRun=0.0
2017-06-02 16:17:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=59, hits=52, hitRatio=88.14%, , cachingAccesses=59, cachingHits=52, cachingHitsRatio=88.14%, evictions=119, evicted=0, evictedPerRun=0.0
2017-06-02 16:22:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=62, hits=55, hitRatio=88.71%, , cachingAccesses=62, cachingHits=55, cachingHitsRatio=88.71%, evictions=149, evicted=0, evictedPerRun=0.0
2017-06-02 16:27:56,015 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=65, hits=58, hitRatio=89.23%, , cachingAccesses=65, cachingHits=58, cachingHitsRatio=89.23%, evictions=179, evicted=0, evictedPerRun=0.0
2017-06-02 16:32:56,015 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=75, hits=68, hitRatio=90.67%, , cachingAccesses=75, cachingHits=68, cachingHitsRatio=90.67%, evictions=209, evicted=0, evictedPerRun=0.0

My Spark code:

def getHbaseConnection(properties: SerializedProperties): Connection = {
var connection: Connection = null
val HBASE_ZOOKEEPER_QUORUM_VALUE = properties.zkQuorum
val config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", HBASE_ZOOKEEPER_QUORUM_VALUE);
config.set("hbase.zookeeper.property.clientPort",
properties.zkPort);
if (properties.hbaseAuth != null
&& properties.hbaseAuth
.equalsIgnoreCase("kerberos")) {
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.cluster.distributed", "true");
config.set("hbase.rpc.protection", "privacy");
config.set("hbase.client.retries.number", "5");
config.set("hbase.regionserver.kerberos.principal", properties.kerberosRegion);
config.set("hbase.master.kerberos.principal", properties.kerberosMaster);
UserGroupInformation.setConfiguration(config);
// Every time, complete the TGT check first
var loginUser = UserGroupInformation.getLoginUser();
if (loginUser != null) {
loginUser.checkTGTAndReloginFromKeytab();
} else {
if (SparkFiles.get(properties.keytab) != null
&& (new java.io.File(SparkFiles.get(properties.keytab)).exists)) {
loginUser = UserGroupInformation.loginUserFromKeytabAndReturnUGI(properties.kerberosPrincipal,
SparkFiles.get(properties.keytab));
} else {
loginUser = UserGroupInformation.loginUserFromKeytabAndReturnUGI(properties.kerberosPrincipal,
properties.keytab);
}
}
loginUser.doAs(new PrivilegedExceptionAction[Void]() {
override def run(): Void = {
connection=ConnectionFactory.createConnection(config);
return null
}
});
println("getuser.,......"+loginUser.getUserName)
}
return connection;
}

Config:

hbase.zookeeper.quorum=localhost
hbase.zookeeper.property.clientPort=2181
hadoop.security.authentication=kerberos
hbase.security.authentication=kerberos
hbase.master.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.regionserver.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.principal=admin@EXAMPLE.COM
hbase.kerberos.keytab=zkpr2.keytab

JAAS file:

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
doNotPrompt=true
keyTab="/home/priyanshu/git/rwuptime/rockwell-services/audit-spark-services/zkpr2.keytab"
useTicketCache=false
renewTicket=true
debug=true
storeKey=true
principal="admin@EXAMPLE.COM"
client=true
serviceName="zookeeper";
};
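(For reference, a sketch of a spark-submit invocation consistent with the code above: SparkFiles.get(properties.keytab) assumes the keytab was shipped with --files, and the JAAS Client section is picked up through java.security.auth.login.config. The paths, main class, and jar name below are placeholders, not my actual values:)

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.AuditSparkJob \
  --files /path/to/zkpr2.keytab,/path/to/jaas.conf \
  --conf spark.driver.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf \
  --conf spark.executor.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf \
  audit-spark-services.jar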
06-02-2017
06:58 AM
@Geoferry I have already done the steps you mentioned and then ran the job, but it hasn't succeeded.
05-31-2017
01:15 PM
Here are the logs:

17/05/30 19:25:06 ERROR RpcClientImpl: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 25 more

I ran it on yarn-cluster. It seems the key is invalid. I have tried changing the permissions of the keytab file, but no luck. I had done kinit with the same keytab and that works fine.
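(One thing that may matter in yarn-cluster mode: the driver runs inside the application master on a cluster node, so a ticket obtained with kinit on the submitting machine is not visible there. A common approach, sketched here with a placeholder principal, keytab path, class, and jar name, is to let Spark log in from the keytab itself:)

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  --class com.example.AuditSparkJob \
  audit-spark-services.jar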
05-25-2017
08:33 AM
Resolved the issue; there was one property missing in hdfs-site.xml:

<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>
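(For anyone hitting the same error: the property only takes effect after the NameNode and DataNodes are restarted. A rough sketch of the follow-up steps, assuming the stock sbin scripts and a placeholder file name:)

# restart HDFS so the new property is picked up
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# retry the copy that was failing
hdfs dfs -copyFromLocal myfile.txt /tmp/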
05-25-2017
06:11 AM
Hi, I am using the principals below for HBase Kerberos authentication:

hbase.zookeeper.quorum=localhost
hbase.zookeeper.property.clientPort=2181
hadoop.security.authentication=kerberos
hbase.security.authentication=kerberos
hbase.master.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.regionserver.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.keytab=zkpr.keytab

Now, when I run my Spark job locally it does not connect to HBase; it shows the error message: Unable to connect to zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM. I have done kinit zookeeper/localhost@EXAMPLE.COM -k -t zkpr.keytab and that runs fine. Any help will be appreciated.
Labels:
- Apache Hadoop
- Apache HBase
05-25-2017
06:09 AM
Hi, I am using the principals below for HBase Kerberos authentication:

hbase.zookeeper.quorum=localhost
hbase.zookeeper.property.clientPort=2181
hadoop.security.authentication=kerberos
hbase.security.authentication=kerberos
hbase.master.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.regionserver.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.keytab=zkpr.keytab

Now, when I run my Spark job locally it does not connect to HBase; it shows the error message: Unable to connect to zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM. I have done kinit zookeeper/localhost@EXAMPLE.COM -k -t zkpr.keytab and that runs fine. Any help will be appreciated.
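(For completeness, a quick way to double-check which principals the keytab actually contains and that a ticket can be obtained from it non-interactively; this assumes zkpr.keytab is in the current directory:)

# list the principals stored in the keytab
klist -kt zkpr.keytab

# obtain a ticket from the keytab, then verify it
kinit -kt zkpr.keytab zookeeper/localhost@EXAMPLE.COM
klist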
05-22-2017
01:05 PM
Here is the output for validity; the ticket seems to be valid as per IST:

klist
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: zookeeper/localhost@EXAMPLE.COM
Valid starting Expires Service principal
2017-05-22T18:40:52 2017-05-23T04:40:52 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 2017-05-29T18:40:52
05-22-2017
11:59 AM
Getting this error with Kerberos-enabled Hadoop while copying a file using copyFromLocal:

2017-05-22 17:15:25,294 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: infoobjects-Latitude-3550:1025:DataXceiver error processing unknown operation src: /127.0.0.1:35436 dst: /127.0.0.1:1025
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2207)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiationCipherOptions(DataTransferSaslUtil.java:233)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:369)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:297)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:124)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:185)
at java.lang.Thread.run(Thread.java:745)

The error occurs while running copyFromLocal <filename>. Any help will be appreciated. Here is my hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<!-- Default is 1 -->
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/priyanshu/hadoop_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/priyanshu/hadoop_data/hdfs/datanode</value>
</property>
<!-- NameNode security config -->
<property>
<name>dfs.namenode.keytab.file</name>
<value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
<!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<property>
<name>dfs.datanode.keytab.file</name>
<value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
<!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<!-- Secondary NameNode config -->
<property>
<name>dfs.secondary.namenode.keytab.file</name>
<value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.principal</name>
<value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<!-- DataNode config -->
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:1025</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:1027</value>
</property>
<property>
<name>dfs.data.transfer.protection</name>
<value>authentication</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.http.policy</name>
<value>HTTPS_ONLY</value>
</property>
<property>
<name>dfs.web.authentication.kerberos.principal</name>
<value>zookeeper/localhost@EXAMPLE.COM</value>
</property>
<property>
<name>dfs.web.authentication.kerberos.keytab</name>
<value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
<!-- path to the HTTP keytab -->
</property>
<property>
<name>dfs.namenode.kerberos.internal.spnego.principal</name>
<value>${dfs.web.authentication.kerberos.principal}</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
<value>${dfs.web.authentication.kerberos.principal}</value>
</property>
</configuration>
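(A rough way to get more detail on the failing SASL handshake is to rerun the copy with client-side debug logging; the file name below is a placeholder:)

# verbose Hadoop client logging plus JVM Kerberos debug output
export HADOOP_ROOT_LOGGER=DEBUG,console
export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
hdfs dfs -copyFromLocal myfile.txt /tmp/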