Member since: 01-16-2017
Posts: 7
Kudos Received: 0
Solutions: 0
07-06-2019 04:14 PM
On my Hadoop cluster I have the parameters below set, but YARN aggregated log retention still appears to be disabled.
Current values:
yarn.log-aggregation-enable: true
yarn.nodemanager.remote-app-log-dir: /app-logs
yarn.nodemanager.remote-app-log-dir-suffix: logs-ifile
yarn.log-aggregation.retain-seconds: 2592000
yarn.nodemanager.log.retain-seconds: 604800
Do I still need to add or modify any parameters in yarn-site.xml?
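For reference, this is how the values listed above would appear in yarn-site.xml (property names and values exactly as stated; shown only to confirm the current settings, not as a suggested change):

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
  <value>logs-ifile</value>
</property>
<property>
  <!-- 2592000 seconds = 30 days of retention for aggregated logs -->
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>2592000</value>
</property>
<property>
  <!-- 604800 seconds = 7 days of retention for local NodeManager logs -->
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>604800</value>
</property>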
08-29-2018 02:29 PM
Hi, currently one of our users is trying to connect to Hive using DBeaver but is unable to connect due to the errors below. Please see the sample logs as reference.
2018-08-21 13:39:47,545 INFO [HiveServer2-HttpHandler-Pool: Thread-269164]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(316)) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
2018-08-21 13:39:47,548 ERROR [HiveServer2-HttpHandler-Pool: Thread-269164]: security.JniBasedUnixGroupsMapping (JniBasedUnixGroupsMapping.java:logError(73)) - error looking up the name of group 133437102: No such file or directory
2018-08-21 13:39:47,548 WARN [HiveServer2-HttpHandler-Pool: Thread-269164]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(330)) - Error opening session:
org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilege of knox for <User>
at org.apache.hive.service.auth.HiveAuthFactory.verifyProxyAccess(HiveAuthFactory.java:394)
at org.apache.hive.service.cli.thrift.ThriftCLIService.getProxyUser(ThriftCLIService.java:768)
at org.apache.hive.service.cli.thrift.ThriftCLIService.getUserName(ThriftCLIService.java:389)
at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:416)
at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:319)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1257)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1242)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.thrift.server.TServlet.doPost(TServlet.java:83)
at org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:479)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
at org.eclipse.jetty.server.Server.handle(Server.java:349)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:449)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:925)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:952)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:76)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:609)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:45)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User: knox is not allowed to impersonate <User>
at org.apache.hadoop.security.authorize.DefaultImpersonationProvider.authorize(DefaultImpersonationProvider.java:119)
at org.apache.hadoop.security.authorize.ProxyUsers.authorize(ProxyUsers.java:102)
at org.apache.hadoop.security.authorize.ProxyUsers.authorize(ProxyUsers.java:116)
at org.apache.hive.service.auth.HiveAuthFactory.verifyProxyAccess(HiveAuthFactory.java:390)
... 32 more
The initial resolution we implemented was the one from https://stackoverflow.com/questions/50462847/error-getting-a-jdbc-connection-to-hive-via-knox, but the user is still unable to connect and encounters the same error.
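For context, that linked resolution comes down to the Hadoop proxy-user settings for the knox user in core-site.xml. A minimal sketch of the kind of entries it adds is below; the * values are placeholders, and a real cluster would normally restrict them to the actual user groups and Knox gateway hosts. HiveServer2 (and any other affected service) typically needs a restart before new proxy-user values take effect.

<property>
  <!-- groups whose members knox is allowed to impersonate; * is a placeholder -->
  <name>hadoop.proxyuser.knox.groups</name>
  <value>*</value>
</property>
<property>
  <!-- hosts knox may proxy from, ideally the Knox gateway host(s); * is a placeholder -->
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>*</value>
</property>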
07-19-2018 12:51 PM
Hi, I want to implement high availability for all Ambari components in my production environment. Please provide some documentation or recommendations on what needs to be done first for the HA setup. Thanks!
07-16-2018 02:15 PM
I've encountered an error while reconnecting a node to Ambari. Two-way SSL authentication is disabled, which is the default setup. Let me know how I can resolve this. You may see the logs below.
ERROR 2018-07-16 13:42:28,329 security.py:249 - Certificate signing failed.
In order to receive a new agent certificate, remove existing certificate file from keys directory. As a workaround you can turn off two-way SSL authentication in server configuration(ambari.properties)
Exiting..
ERROR 2018-07-16 13:42:28,329 Controller.py:212 - Unable to connect to: https://localhost:8441/agent/v1/register/localhost
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 165, in registerWithServer
ret = self.sendRequest(self.registerUrl, data)
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 496, in sendRequest
raise IOError('Request to {0} failed due to {1}'.format(url, str(exception)))
IOError: Request to https://localhost:8441/agent/v1/register/localhost failed due to ()
ERROR 2018-07-16 13:42:28,329 Controller.py:213 - Error:Request to https://localhost:8441/agent/v1/register/localhost failed due to ()
WARNING 2018-07-16 13:42:28,329 Controller.py:214 - Sleeping for 17 seconds and then trying again
06-28-2018 07:40 AM
Caused by: java.sql.SQLException: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:134)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.ambari.view.hive20.internal.HiveConnectionWrapper$1.run(HiveConnectionWrapper.java:78)
at org.apache.ambari.view.hive20.internal.HiveConnectionWrapper$1.run(HiveConnectionWrapper.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.ambari.view.hive20.internal.HiveConnectionWrapper.connect(HiveConnectionWrapper.java:75)
... 98 more
Caused by: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper
at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.configureConnParams(ZooKeeperHiveClientHelper.java:96)
at org.apache.hive.jdbc.Utils.configureConnParams(Utils.java:514)
at org.apache.hive.jdbc.Utils.parseURL(Utils.java:434)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:132)
... 107 more
Caused by: org.apache.hive.jdbc.ZooKeeperHiveClientException: Tried all existing HiveServer2 uris from ZooKeeper.
at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.configureConnParams(ZooKeeperHiveClientHelper.java:68)
... 110 more
05-15-2017 08:19 AM
INFO I wrote this conflicted ephemeral node a while back in a different session, hence I will back-off for this node to be deleted by Zookeeper and retry