Member since: 05-22-2017
Posts: 13
Kudos Received: 1
Solutions: 1
My Accepted Solutions

Title | Views | Posted
---|---|---
 | 2746 | 05-25-2017 08:33 AM
07-10-2017 03:24 PM
@priyanshu hasija You can manually trigger a commit after indexing data by calling the update endpoint with the commit parameter, e.g. http://localhost:8983/solr/collection_name/update?commit=true. For details on configuring autoCommit, see: https://cwiki.apache.org/confluence/display/solr/UpdateHandlers+in+SolrConfig
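A minimal sketch of building and issuing that explicit-commit request from Scala; the host, port, and collection name are placeholders, not values from any specific deployment:

```scala
// Minimal sketch: build the explicit-commit URL for a Solr collection
// and (optionally) issue it. Host, port, and collection name below are
// placeholders only.
object SolrCommit {
  def commitUrl(host: String, port: Int, collection: String): String =
    s"http://$host:$port/solr/$collection/update?commit=true"

  def main(args: Array[String]): Unit = {
    val url = commitUrl("localhost", 8983, "collection_name")
    println(url)
    // Actually issuing the request requires a running Solr instance:
    // scala.io.Source.fromURL(url).mkString
  }
}
```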
06-02-2017 08:14 PM
I have made some changes to the code, and here is where I am getting stuck.

1) Driver log:

17/06/02 16:25:33 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:37 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:38 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:43 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:46 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:25:49 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:52 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:56 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:57 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:01 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:04 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:08 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:10 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:14 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:16 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:20 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:25 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:27 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:28 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:32 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:35 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:39 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:42 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:44 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:49 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:52 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:57 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:58 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.

2) Master log:
2017-06-02 15:58:01,638 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/11 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,649 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/12 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,654 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/13 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,663 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/14 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,671 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/15 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,679 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/16 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,688 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/10 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,696 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,704 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,712 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/draining , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,721 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/my_ns , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,750 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/default , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,771 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/NS , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,779 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/hbase , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,787 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,796 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/hbaseid , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,804 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:alarm_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,812 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/hbase:meta , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,821 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:unharnonized_alarm_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,829 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/hbase:namespace , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,838 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:admin_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,846 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,854 INFO [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]

3) RegionServer log:

2017-06-02 15:58:01,069 INFO [RS_OPEN_REGION-infoobjects-Latitude-3550:16201-1] regionserver.HRegion: Onlined 18164fdd3694952b1527fb91c4724198; next sequenceid=67
2017-06-02 15:58:01,070 INFO [PostOpenDeployTasks:18164fdd3694952b1527fb91c4724198] regionserver.HRegionServer: Post open deploy tasks for hbase:namespace,,1490169022164.18164fdd3694952b1527fb91c4724198.
2017-06-02 15:58:01,167 INFO [PostOpenDeployTasks:43f9faceac3db2010e6070bec70200c0] hbase.MetaTableAccessor: Updated row my_ns:unharnonized_alarm_audit,,1493121241792.43f9faceac3db2010e6070bec70200c0. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,167 INFO [PostOpenDeployTasks:18164fdd3694952b1527fb91c4724198] hbase.MetaTableAccessor: Updated row hbase:namespace,,1490169022164.18164fdd3694952b1527fb91c4724198. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,167 INFO [PostOpenDeployTasks:40c95c7accfb12179ed5b65207eee989] hbase.MetaTableAccessor: Updated row my_ns:admin_audit,,1494314936756.40c95c7accfb12179ed5b65207eee989. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,225 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=1721776, freeSize=1657859536, maxSize=1659581312, heapSize=1721776, minSize=1576602240, minFactor=0.95, multiSize=788301120, multiFactor=0.5, singleSize=394150560, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-06-02 15:58:01,225 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2017-06-02 15:58:01,283 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=1721776, freeSize=1657859536, maxSize=1659581312, heapSize=1721776, minSize=1576602240, minFactor=0.95, multiSize=788301120, multiFactor=0.5, singleSize=394150560, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-06-02 15:58:01,283 INFO [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2017-06-02 15:58:01,298 INFO [RS_OPEN_REGION-infoobjects-Latitude-3550:16201-2] regionserver.HRegion: Onlined 09be4fb5ee3fe8ac146a63600d5d5006; next sequenceid=107
2017-06-02 15:58:01,299 INFO [PostOpenDeployTasks:09be4fb5ee3fe8ac146a63600d5d5006] regionserver.HRegionServer: Post open deploy tasks for my_ns:alarm_audit,,1494845175221.09be4fb5ee3fe8ac146a63600d5d5006.
2017-06-02 15:58:01,305 INFO [PostOpenDeployTasks:09be4fb5ee3fe8ac146a63600d5d5006] hbase.MetaTableAccessor: Updated row my_ns:alarm_audit,,1494845175221.09be4fb5ee3fe8ac146a63600d5d5006. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 16:02:56,017 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=50, hits=43, hitRatio=86.00%, , cachingAccesses=50, cachingHits=43, cachingHitsRatio=86.00%, evictions=29, evicted=0, evictedPerRun=0.0
2017-06-02 16:07:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=53, hits=46, hitRatio=86.79%, , cachingAccesses=53, cachingHits=46, cachingHitsRatio=86.79%, evictions=59, evicted=0, evictedPerRun=0.0
2017-06-02 16:12:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=56, hits=49, hitRatio=87.50%, , cachingAccesses=56, cachingHits=49, cachingHitsRatio=87.50%, evictions=89, evicted=0, evictedPerRun=0.0
2017-06-02 16:17:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=59, hits=52, hitRatio=88.14%, , cachingAccesses=59, cachingHits=52, cachingHitsRatio=88.14%, evictions=119, evicted=0, evictedPerRun=0.0
2017-06-02 16:22:56,016 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=62, hits=55, hitRatio=88.71%, , cachingAccesses=62, cachingHits=55, cachingHitsRatio=88.71%, evictions=149, evicted=0, evictedPerRun=0.0
2017-06-02 16:27:56,015 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=65, hits=58, hitRatio=89.23%, , cachingAccesses=65, cachingHits=58, cachingHitsRatio=89.23%, evictions=179, evicted=0, evictedPerRun=0.0
2017-06-02 16:32:56,015 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=75, hits=68, hitRatio=90.67%, , cachingAccesses=75, cachingHits=68, cachingHitsRatio=90.67%, evictions=209, evicted=0, evictedPerRun=0.0

My Spark code:

```scala
import java.security.PrivilegedExceptionAction

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory}
import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.SparkFiles

def getHbaseConnection(properties: SerializedProperties): Connection = {
  var connection: Connection = null
  val HBASE_ZOOKEEPER_QUORUM_VALUE = properties.zkQuorum
  val config = HBaseConfiguration.create()
  config.set("hbase.zookeeper.quorum", HBASE_ZOOKEEPER_QUORUM_VALUE)
  config.set("hbase.zookeeper.property.clientPort", properties.zkPort)
  if (properties.hbaseAuth != null &&
      properties.hbaseAuth.equalsIgnoreCase("kerberos")) {
    config.set("hadoop.security.authentication", "kerberos")
    config.set("hbase.security.authentication", "kerberos")
    config.set("hbase.cluster.distributed", "true")
    config.set("hbase.rpc.protection", "privacy")
    config.set("hbase.client.retries.number", "5")
    config.set("hbase.regionserver.kerberos.principal", properties.kerberosRegion)
    config.set("hbase.master.kerberos.principal", properties.kerberosMaster)
    UserGroupInformation.setConfiguration(config)
    // Every time, complete the TGT check first
    var loginUser = UserGroupInformation.getLoginUser()
    if (loginUser != null) {
      loginUser.checkTGTAndReloginFromKeytab()
    } else {
      if (SparkFiles.get(properties.keytab) != null &&
          new java.io.File(SparkFiles.get(properties.keytab)).exists) {
        loginUser = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
          properties.kerberosPrincipal, SparkFiles.get(properties.keytab))
      } else {
        loginUser = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
          properties.kerberosPrincipal, properties.keytab)
      }
    }
    loginUser.doAs(new PrivilegedExceptionAction[Void]() {
      override def run(): Void = {
        connection = ConnectionFactory.createConnection(config)
        null
      }
    })
    println("login user: " + loginUser.getUserName)
  }
  connection
}
```

Config:

```
hbase.zookeeper.quorum=localhost
hbase.zookeeper.property.clientPort=2181
hadoop.security.authentication=kerberos
hbase.security.authentication=kerberos
hbase.master.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.regionserver.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.principal=admin@EXAMPLE.COM
hbase.kerberos.keytab=zkpr2.keytab
```

JAAS file:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  doNotPrompt=true
  keyTab="/home/priyanshu/git/rwuptime/rockwell-services/audit-spark-services/zkpr2.keytab"
  useTicketCache=false
  renewTicket=true
  debug=true
  storeKey=true
  principal="admin@EXAMPLE.COM"
  client=true
  serviceName="zookeeper";
};
```
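For the JVM to pick up a JAAS file like the one above, it is usually pointed at the file through the standard java.security.auth.login.config system property. A hedged sketch (the path is a placeholder, not this job's real deployment path):

```scala
// Hedged sketch: point the JVM at the JAAS file before any ZooKeeper/HBase
// client is created, so the "Client" login section is picked up.
// "/path/to/jaas.conf" is a placeholder path.
object JaasSetup {
  def main(args: Array[String]): Unit = {
    System.setProperty("java.security.auth.login.config", "/path/to/jaas.conf")
    println(System.getProperty("java.security.auth.login.config"))
  }
}
```

With spark-submit, the same property is typically passed via spark.driver.extraJavaOptions and spark.executor.extraJavaOptions rather than set in code.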
05-25-2017 08:33 AM
Resolved the issue; one property was missing in hdfs-site.xml:

```
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
```