Member since 01-28-2017 · 20 Posts · 0 Kudos Received · 0 Solutions
04-19-2018
04:40 AM
@Ronak bansal, could you check?
04-19-2018
04:38 AM
@Artem Ervits @Ayub Khan, could you help?
04-19-2018
04:37 AM
Just an update here: it was working fine before. I removed the Atlas service and reinstalled it with a different user in Ambari, and then I started getting the problem. The first thing I noticed was that the user did not get created in Ranger, and because of that the atlas user was not able to create the HBase table. Please let me know if you find the problem.
04-19-2018
04:32 AM
After I installed Atlas on my machine I get the error below, and the Atlas Metadata Server does not come up. The atlas user is also not getting created in Ranger.

Java exception:
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.coprocessor.CoprocessorException: HTTP 400 Error: atlas is Not Found
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.grant(RangerAuthorizationCoprocessor.java:1186)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7833)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1961)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1943)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
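Since the grant fails because Ranger reports "atlas is Not Found", one quick check is whether the atlas user actually exists in Ranger before HBase tries to grant to it. A minimal sketch, assuming Ranger's user-lookup REST endpoint (`GET /service/xusers/users/userName/<name>`) and a hypothetical response shape; only the response parsing is shown, so nothing here depends on a live cluster:

```python
import json

def ranger_user_exists(response_body: str) -> bool:
    """Return True if a Ranger user-lookup response contains a user record.

    The endpoint and payload shape are assumptions for illustration:
    a found user returns a JSON object with an "id" and "name", while
    a miss returns an error body (e.g. {"msgDesc": "atlas is Not Found"}).
    """
    try:
        payload = json.loads(response_body)
    except json.JSONDecodeError:
        return False
    return "id" in payload and payload.get("name") is not None

# A miss, matching the "atlas is Not Found" error in the trace above:
missing = ranger_user_exists('{"msgDesc": "atlas is Not Found"}')
# A successful lookup (id and name are made-up values):
found = ranger_user_exists('{"id": 42, "name": "atlas"}')
```

If the lookup misses, recreating the atlas user in Ranger (or letting the service install do so) should let the HBase grant succeed.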
12-13-2017
05:10 PM
@Jonas Straub, I saw your article, but it does not help me delete the old Ranger audit logs from Solr. Could you please check the above?
12-13-2017
05:08 PM
I already tried deleting the Ranger audits using the link below, but nothing helps: https://community.hortonworks.com/articles/63853/solr-ttl-auto-purging-solr-documents-ranger-audits.html I also updated ranger_retention_days to 60 days, but I still cannot see any effect. @Artem Ervits, please help.
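For reference, the purge the linked article describes comes down to a Solr delete-by-query against the ranger_audits collection on the evtTime field. A minimal sketch of building that payload with the 60-day window mentioned above; the Solr host/port in the comment is a placeholder, not taken from this cluster:

```python
def purge_query(retention_days: int) -> str:
    """Build a Solr XML delete-by-query payload that removes audit
    documents whose evtTime is older than the retention window."""
    return '<delete><query>evtTime:[* TO NOW-%dDAYS]</query></delete>' % retention_days

payload = purge_query(60)
# POST this body to http://<solr-host>:<port>/solr/ranger_audits/update?commit=true
# with Content-Type: text/xml to purge audits older than 60 days.
```

Running this by hand is a way to confirm whether the problem is the TTL configuration or the documents themselves.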
08-28-2017
03:15 PM
Hi @Artem Ervits, could you please check below and help? In our environment we use Falcon to delete HDFS data per the retention policy. The data path should be in year-month-day format, but before the year-month-day pattern there are numbers which keep changing. Is there any way to pass that component as a variable in the data path for Falcon? FYI:
2017-08-28 09:58 /test/998
2017-08-24 22:26 /test/999
and inside these number folders there are timestamp folders, as below:
2017-08-24 10:23 /test/999/2017-08-23
2017-08-24 10:23 /test/999/2017-08-24
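Since the changing numeric directory is hard to express in a Falcon feed location, one fallback is a small retention script over an HDFS listing. A sketch under the layout shown above, using plain strings in place of real HDFS paths; the 3-day retention window is illustrative only:

```python
from datetime import datetime, timedelta

def expired_paths(paths, retention_days, today):
    """Return the date-partition paths (/test/<number>/<yyyy-MM-dd>)
    that fall outside the retention window, ignoring the changing
    numeric component entirely by parsing only the last path segment."""
    cutoff = today - timedelta(days=retention_days)
    old = []
    for p in paths:
        date_part = p.rstrip('/').rsplit('/', 1)[-1]   # e.g. '2017-08-23'
        try:
            d = datetime.strptime(date_part, '%Y-%m-%d')
        except ValueError:
            continue   # skip directories that are not date partitions
        if d < cutoff:
            old.append(p)
    return old

# Sample listing matching the example above:
listing = ['/test/999/2017-08-23', '/test/999/2017-08-24', '/test/998/2017-08-28']
stale = expired_paths(listing, 3, datetime(2017, 8, 28))
```

In practice the listing would come from `hdfs dfs -ls` output and the stale paths would be fed to `hdfs dfs -rm -r`.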
- Tags:
- Falcon
- Hadoop Core
05-12-2017
08:13 PM
Hi @Daniel Kozlowski, thanks for the suggestion, but I am still unable to log in. See the attached configuration parameters for Advanced zeppelin-shiro-ini and advise me if anything is missing. advanced-zeppelin-shiro-ini.txt
05-12-2017
08:13 PM
@Dan Zaratsian, thanks for the reply, but these docs do not provide the exact info. If you have already set up Zeppelin, could you share your advanced zeppelin-shiro-ini content? I would be grateful.
05-10-2017
08:51 PM
I tried updating the Zeppelin config files, but I still get authentication failures for LDAP accounts. I cannot work out what the actual values should be for the properties below:
activeDirectoryRealm.systemUsername =
activeDirectoryRealm.systemPassword =
@Artem Ervits, could you help?
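For context, these two properties are the bind (service) account Shiro uses to search Active Directory, not an end-user login. A minimal sketch of a [main] section; every value below is a placeholder for your own environment, and the realm class name can differ between Zeppelin versions, so treat this as an illustration rather than a known-good configuration:

```ini
[main]
# Realm class name varies by Zeppelin version; verify against your release.
activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
# Bind account used to search the directory: a full DN or UPN plus its password.
# Both values here are placeholders.
activeDirectoryRealm.systemUsername = CN=zeppelin-bind,OU=ServiceAccounts,DC=example,DC=com
activeDirectoryRealm.systemPassword = changeme
activeDirectoryRealm.url = ldap://ad.example.com:389
activeDirectoryRealm.searchBase = DC=example,DC=com
activeDirectoryRealm.principalSuffix = @example.com
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = sessionManager
```

If the bind account's DN or password is wrong, every user login fails even when the users themselves exist, which matches the symptom described above.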
03-25-2017
12:23 AM
We have certain MapReduce and Spark jobs which are scheduled and working fine, but once in a while a job fails without even starting and gives the error below. Please see, @Artem Ervits:
mapreduce.Cluster:
Failed to use org.apache.hadoop.mapred.YarnClientProtocolProvider due to error: java.lang.IllegalArgumentException: java.net.UnknownHostException: ambari-host-name
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider$DefaultProxyFactory.createProxy(ConfiguredFailoverProxyProvider.java:68)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:152)
at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.<init>(RetryInvocationHandler.java:62)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:161)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
17/03/23 19:30:06 ERROR loader.LoaderBatch: Something went wrong during the load process
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1260)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1256)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1284)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
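The trigger in the trace above is a java.net.UnknownHostException for ambari-host-name, i.e. the client could intermittently not resolve that name. A small sketch for checking whether a given name resolves at the moment a job would start; it is shown with 'localhost' as a stand-in so it runs anywhere, not with the real cluster hostname:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname currently resolves to an address.

    Intermittent failures here (run it in a loop around job start time)
    would point at flaky DNS or an incomplete /etc/hosts rather than at
    the MapReduce configuration itself.
    """
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

ok = resolves('localhost')   # stand-in for the failing cluster hostname
```

If resolution only fails sometimes, checking nsswitch.conf ordering and the DNS servers on the submitting host is the usual next step.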
03-16-2017
02:44 PM
Hi @Artem Ervits, but for moving the files we need manual effort, as the script only shows the old data. Please advise.
03-15-2017
01:02 PM
Thanks a lot @Artem Ervits, it is working fine.
03-14-2017
10:37 AM
But the Falcon feed only looks at the feed path that was manually created; it never does any operation on the Unix timestamp.
03-14-2017
10:36 AM
Thanks a lot for the info. Let me try the same and I will get back.
02-09-2017
09:00 PM
Thanks a lot. Let me try that.
02-09-2017
08:13 PM
Hi @Artem Ervits, thanks for the response. I checked /etc/zookeeper/conf/zoo.cfg and after the upgrade I found the same values as before, so I could not find the exact problem by looking at the configuration files.

2017-02-08 00:41:52,616 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x15a19e6cf730013, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
2017-02-08 00:41:52,675 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /172.17.0.2:34852 which had sessionid 0x15a19e6cf730013
2017-02-08 00:41:54,441 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.2:40119
2017-02-08 00:41:54,442 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@861] - Client attempting to renew session 0x15a19e6cf730013 at /172.17.0.2:40119
2017-02-08 00:41:54,443 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@617] - Established session 0x15a19e6cf730013 with negotiated timeout 10000 for client /172.17.0.2:40119
2017-02-08 00:42:19,328 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x15a19e6cf730013, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
2017-02-08 00:42:19,329 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /172.17.0.2:40119 which had sessionid 0x15a19e6cf730013
2017-02-08 00:42:20,004 - INFO [SessionTracker:ZooKeeperServer@347] - Expiring session 0x15a19e6cf730013, timeout of 10000ms exceeded
2017-02-08 00:42:20,005 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x15a19e6cf730013
2017-02-08 00:42:20,477 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.2:40193
2017-02-08 00:42:20,488 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@861] - Client attempting to renew session 0x15a19e6cf730013 at /172.17.0.2:40193
2017-02-08 00:42:20,489 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@610] - Invalid session 0x15a19e6cf730013 for client /172.17.0.2:40193, probably expired
2017-02-08 00:42:20,489 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /172.17.0.2:40193 which had sessionid 0x15a19e6cf730013
2017-02-08 00:42:20,588 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.2:40194
2017-02-08 00:42:20,589 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.2:40194
2017-02-08 00:42:20,626 - INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x15a19e6cf730015 with negotiated timeout 10000 for client /172.17.0.2:40194

Above are the logs, but I could not find any issue here. Please advise.
02-09-2017
04:40 PM
Hi @Artem Ervits, can you help?
02-08-2017
07:21 PM
Hi, we upgraded our prod cluster from HDP 2.3 to HDP 2.5, and after a few days we have been getting new errors that we never got in our former cluster. Suddenly our ZooKeeper server went down, and we saw the log message "maximum client connection reached 60", so we changed the attribute to 200 in zoo.cfg. Then that host reached the max connection limit of 200 again. I don't understand why it hits the maximum connections every time: once an application job completes, the session should be closed, so why are the client connections still active? Please advise me what should be done.
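To find which client is holding connections open (maxClientCnxns in zoo.cfg is a per-client-IP limit), ZooKeeper's 'cons' four-letter command (e.g. `echo cons | nc <zk-host> 2181`) lists every open connection. A sketch that tallies connections per client IP from that output; the sample lines are made up to illustrate the format, not taken from this cluster:

```python
from collections import Counter

def connections_per_ip(cons_output: str) -> Counter:
    """Tally open connections by client IP from `cons` output.

    Each connection line looks roughly like
    ' /172.17.0.2:40119[1](queued=0,recved=5,sent=5,...)',
    so the IP is the text between the leading '/' and the first ':'.
    """
    counts = Counter()
    for line in cons_output.splitlines():
        line = line.strip()
        if not line.startswith('/'):
            continue   # skip blank lines and any non-connection output
        ip = line[1:].split(':', 1)[0]
        counts[ip] += 1
    return counts

# Made-up sample output for illustration:
sample = """ /172.17.0.2:40119[1](queued=0,recved=5,sent=5)
 /172.17.0.2:40194[1](queued=0,recved=2,sent=2)
 /172.17.0.3:51000[1](queued=0,recved=1,sent=1)
"""
per_ip = connections_per_ip(sample)
```

An IP whose count keeps climbing toward the maxClientCnxns limit is the client leaking sessions, which is usually a better lead than raising the limit again.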
- Tags:
- Hadoop Core
- Zookeeper