Member since: 05-31-2017
Posts: 14
Kudos Received: 3
Solutions: 0
03-03-2018
07:11 PM
We are trying something similar as well, i.e. creating a custom entity to store information about data stored in S3. Once hard delete is enabled, we are no longer able to delete or update the entity; we get the error "Given instance is invalid/not found: Could not find entities in the repository with guids". Were you able to find a resolution? HDP version: 2.6.1; Atlas version: 0.8.0.
06-23-2017
09:07 PM
@Beverley Andalora Thank you, this fixed my case. For the benefit of others, below is the error I was receiving before making your recommended changes. Note the root cause at the bottom of the trace: a ClassNotFoundException for com.sleepycat.je.LockMode, i.e. the Berkeley DB JE classes were missing from Falcon's classpath.
java.lang.RuntimeException: org.apache.falcon.FalconException: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
at org.apache.falcon.listener.ContextStartupListener.contextInitialized(ContextStartupListener.java:59)
at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:549)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:136)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.falcon.util.EmbeddedServer.start(EmbeddedServer.java:58)
at org.apache.falcon.FalconServer.main(FalconServer.java:118)
Caused by: org.apache.falcon.FalconException: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
at org.apache.falcon.service.ServiceInitializer.initialize(ServiceInitializer.java:50)
at org.apache.falcon.listener.ContextStartupListener.contextInitialized(ContextStartupListener.java:56)
... 11 more
Caused by: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
at com.tinkerpop.blueprints.GraphFactory.open(GraphFactory.java:50)
at org.apache.falcon.metadata.MetadataMappingService.initializeGraphDB(MetadataMappingService.java:146)
at org.apache.falcon.metadata.MetadataMappingService.init(MetadataMappingService.java:113)
at org.apache.falcon.service.ServiceInitializer.initialize(ServiceInitializer.java:47)
... 12 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.tinkerpop.blueprints.GraphFactory.open(GraphFactory.java:45)
... 15 more
Caused by: java.lang.NoClassDefFoundError: com/sleepycat/je/LockMode
at com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager.<clinit>(BerkeleyJEStoreManager.java:47)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:42)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:361)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1275)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
... 20 more
Caused by: java.lang.ClassNotFoundException: com.sleepycat.je.LockMode
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 29 more
2017-06-23 20:51:14,687 INFO - [main:] ~ Started SocketConnector@0.0.0.0:15000 (log:67)
06-16-2017
07:24 PM
Is the cluster Kerberized? If so, the "GSS initiate failed" error usually means that there is no valid Kerberos ticket for the user running the script. Check with klist and obtain a ticket with kinit before rerunning; a minimal automated check is sketched below.
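A minimal pre-flight sketch (assuming the MIT Kerberos client tools are installed on the node; klist -s exits non-zero when the ticket cache is missing or expired):
# Hedged sketch: fail fast when the current user has no valid Kerberos ticket.
import subprocess

if subprocess.call(["klist", "-s"]) != 0:
    raise SystemExit("No valid Kerberos ticket; run kinit (or kinit -kt <keytab> <principal>) first.")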
06-05-2017
06:51 PM
1 Kudo
Awesome, the referenced articles helped me solve my ambari-metrics restart issue. Here is the short form of the steps (a sketch of step 2 follows the list):
1) Stop Ambari Metrics Collector and Grafana.
2) Back up the embedded hbase and hbase-tmp folders, then delete them. (The paths can differ from the defaults, so look up the hbase.rootdir and hbase.tmp.dir properties under Ambari > Ambari Metrics > Configs for the correct locations.)
3) Restart Ambari Metrics Collector and Grafana.
The Ambari Metrics Collector then started correctly without any errors. Note: as you may have suspected, by deleting the existing hbase folders you will lose the metrics history.
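A minimal sketch of step 2 (the paths below are the usual embedded-mode defaults and are assumptions here; confirm them against hbase.rootdir and hbase.tmp.dir in Ambari before running anything):
# Hedged sketch: move the AMS embedded HBase data aside as a backup
# instead of deleting it outright. Run only after stopping the collector.
import shutil

for path in ("/var/lib/ambari-metrics-collector/hbase",       # assumed hbase.rootdir
             "/var/lib/ambari-metrics-collector/hbase-tmp"):  # assumed hbase.tmp.dir
    shutil.move(path, path + ".bak")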
05-31-2017
06:09 PM
Thank you @Markovich. I have been looking for this fix for months; I even opened multiple tickets with Hortonworks, but they were not very helpful. Finally, I can put this to rest. Just to add more clarity: for HDP clusters, the hive.server2.parallel.ops.in.session=true property has to be added to hive-site.xml (if done manually) or under Hive > Configs > Advanced > "Custom hive-site" if done via Ambari.
12-05-2016
08:46 PM
Hi Ram, thanks for the tips. Those directories already exist but are empty, so I am not sure what else I may need to do. I opened a ticket with Hortonworks support and will update this forum once I have a resolution.
12-05-2016
05:59 PM
I reinstalled Falcon using Ambari and it did not make any difference. The files under /apps/falcon/extensions/hdfs-mirroring/libs/runtime are still missing.
12-03-2016
09:30 PM
I am trying to mirror a set of directories from HDP 2.3.4 to HDP 2.5 using Falcon's HDFS-mirroring extension. However, when I run the job, it fails. Taking a close look at the error, it seems the extension libraries that are supposed to be under the HDFS path /apps/falcon/extensions/hdfs-mirroring/libs/runtime are missing. The same is true of all Falcon extensions. I tried to copy them from the instance where the Falcon server is running, but the corresponding local folders (e.g. /usr/hdp/2.5.0.0-1245/falcon/extensions/hdfs-mirroring/libs/runtime) are also empty. Where can I find the relevant jar files so that I can copy them onto my local instance as well as into HDFS?
11-17-2016
09:59 PM
I am trying to connect to a HiveServer2 that has LDAP authentication enabled. When logging in through beeline, I can connect successfully and query tables. However, when I try to connect via pyspark, it fails with a "Peer indicated failure: Error validating the login" error. The interesting finding is that third-party JDBC-based SQL clients such as SQLWorkbench/J or Aginity Workbench for Hadoop connect to HiveServer2 successfully with the same LDAP username and password. This suggests the problem lies somewhere between Spark/PySpark, JDBC, and the Hive driver. Note: I cannot use HiveContext to connect to Hive because Ranger authorization is enabled, and Spark does not work with Ranger when using the native Hive libraries; it has to go through HiveServer2 (the Thrift server) via JDBC. (A variant of the snippet below that passes the credentials as connection properties rather than in the URL is sketched after the error stack.)
PySpark code:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = (sqlContext
      .load(source="jdbc",
            url="jdbc:hive2://thrift-server-url:10000/default?user=[ldap_user]&password=[ldap_user_password]",
            dbtable="table_name"))
sc.stop()
Error stack:
ERROR [HiveServer2-Handler-Pool: Thread-66]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: Error validating the login [Caused by javax.security.sasl.AuthenticationException: LDAP Authentication failed for user [Caused by javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]]]
at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:109)
at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.security.sasl.AuthenticationException: LDAP Authentication failed for user [Caused by javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]]
at org.apache.hive.service.auth.LdapAuthenticationProviderImpl.Authenticate(LdapAuthenticationProviderImpl.java:185)
at org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(PlainSaslHelper.java:106)
at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:102)
... 8 more
Caused by: javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3135)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3081)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2883)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2797)
at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:319)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:192)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:210)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:153)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:83)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
at javax.naming.InitialContext.init(InitialContext.java:244)
at javax.naming.InitialContext.<init>(InitialContext.java:216)
at javax.naming.directory.InitialDirContext.<init>(InitialDirContext.java:101)
at org.apache.hive.service.auth.LdapAuthenticationProviderImpl.Authenticate(LdapAuthenticationProviderImpl.java:167)
... 10 more
2016-11-17 21:48:20,508 ERROR [HiveServer2-Handler-Pool: Thread-66]: server.TThreadPoolServer (TThreadPoolServer.java:run(297)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Error validating the login
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Error validating the login
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
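One thing that may be worth checking: per the HiveServer2 JDBC URL format (jdbc:hive2://<host>:<port>/<db>;sess_var_list?hive_conf_list#hive_var_list), user and password are session variables and belong in the semicolon-separated list, while the ?key=value section sets hiveconf variables, so credentials passed after ? may never reach the LDAP authenticator. A sketch that instead passes them as Spark JDBC connection properties (reusing sqlContext from the snippet above; option names are Spark's standard JDBC options):
# Hedged sketch: hand the LDAP credentials to the Hive JDBC driver as
# connection properties rather than embedding them in the URL.
df = (sqlContext.read.format("jdbc")
      .options(url="jdbc:hive2://thrift-server-url:10000/default",
               driver="org.apache.hive.jdbc.HiveDriver",
               dbtable="table_name",
               user="[ldap_user]",
               password="[ldap_user_password]")
      .load())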
11-17-2016
07:54 AM
1 Kudo
The workaround is to use SQLContext (instead of HiveContext) with JDBC to connect to HiveServer2, which will honor Ranger's authorization policies. The following links give some idea of how Spark, JDBC, and SQLContext work together: http://stackoverflow.com/questions/32195946/method-not-supported-in-spark https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_dataintegration/content/hive-jdbc-odbc-drivers.html
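A minimal sketch of the workaround (Spark 1.x era, matching HDP 2.3-2.5; the host, table name, and the Hive JDBC driver jars being on the classpath, e.g. via --jars, are all assumptions to adapt):
# Hedged sketch: read a Hive table through HiveServer2 over JDBC so that
# Ranger policies are enforced server-side, instead of using HiveContext.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="hs2-jdbc-read")
sqlContext = SQLContext(sc)

df = (sqlContext.read.format("jdbc")
      .options(url="jdbc:hive2://<hs2-host>:10000/default",
               driver="org.apache.hive.jdbc.HiveDriver",
               dbtable="default.sample_table")
      .load())
df.show(10)
sc.stop()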
11-11-2016
10:04 PM
Awesome! This worked for me. The timing could not have been better: I was working on setting up Zeppelin with OpenLDAP and Livy today (HDP 2.5), and this was one of the issues I had to solve. Thank you!
11-08-2016
05:44 PM
I tried running the command that @Terje Berg-Hansen suggested. I ran it as the "zeppelin" user on the node where Zeppelin is installed, but it did not make any difference. Here is another thread on the same topic; the issue is still unresolved: https://community.hortonworks.com/questions/58454/hdp-25-zeppelin-06-ldap-interpreters-are-not-shown.html
11-08-2016
12:46 AM
1 Kudo
I had a similar issue where the Ambari server got stuck in a weird state: it was technically running but could not collect any stats from the agents, and in turn the UI showed the nodes as not running. I spent a couple of days looking for a solution. Then, based on the suggestion by @CS User above, I took a leap of faith and deleted all requests and the corresponding data in the Ambari schema. Upon restarting ambari-server, everything came back to normal. Thank you for the tip.
Error in ambari-server.log:
07 Nov 2016 19:09:56,536 ERROR [qtp-ambari-agent-253] ContainerResponse:419 - The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
java.lang.NullPointerException
at java.lang.String.replace(String.java:2240)
at org.apache.ambari.server.topology.HostRequest.getLogicalTasks(HostRequest.java:303)
at org.apache.ambari.server.topology.LogicalRequest.getCommands(LogicalRequest.java:158)
at org.apache.ambari.server.topology.LogicalRequest.getRequestStatus(LogicalRequest.java:231)
at org.apache.ambari.server.topology.TopologyManager.isLogicalRequestFinished(TopologyManager.java:812)
at org.apache.ambari.server.topology.TopologyManager.replayRequests(TopologyManager.java:766)
at org.apache.ambari.server.topology.TopologyManager.ensureInitialized(TopologyManager.java:150)
at org.apache.ambari.server.topology.TopologyManager.onHostRegistered(TopologyManager.java:407)
at org.apache.ambari.server.state.host.HostImpl$HostRegistrationReceived.transition(HostImpl.java:313)
at org.apache.ambari.server.state.host.HostImpl$HostRegistrationReceived.transition(HostImpl.java:275)
at org.apache.ambari.server.state.fsm.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:354)
at org.apache.ambari.server.state.fsm.StateMachineFactory.doTransition(StateMachineFactory.java:294)
at org.apache.ambari.server.state.fsm.StateMachineFactory.access$300(StateMachineFactory.java:39)
at org.apache.ambari.server.state.fsm.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:440)
at org.apache.ambari.server.state.host.HostImpl.handleEvent(HostImpl.java:584)
at org.apache.ambari.server.agent.HeartBeatHandler.handleRegistration(HeartBeatHandler.java:464)
at org.apache.ambari.server.agent.rest.AgentResource.register(AgentResource.java:95)
at sun.reflect.GeneratedMethodAccessor188.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
at org.apache.ambari.server.security.SecurityFilter.doFilter(SecurityFilter.java:67)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:984)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1045)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:236)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Error on ambari-agent:
Unable to connect to: https://<ambari-server-fqdn>:8441/agent/v1/register/<ambari-agent-fqdn>; Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 165, in registerWithServer
ret = self.sendRequest(self.registerUrl, data)
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 499, in sendRequest
+ '; Response: ' + str(response))
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 Server Error</title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /agent/v1/register/<ambari-agent-fqdn>. Reason:
<pre> Server Error</pre></p>
<hr /><i><small>Powered by Jetty:// 8.1.19.v20160209</small></i>
Solution: Here are the (PostgreSQL) queries I used. You have to run them as the "ambari" user on the Ambari database. HDP version: 2.5; Ambari version: 2.4.0.1. Caution: since you are touching the Ambari DB directly, you are on your own. Note: I had to write individual queries to delete records from the dependent tables first, because the CASCADE DELETE option was not enabled on them.
delete from execution_command where task_id in (select task_id from host_role_command where stage_id in (select stage_id from stage where request_id in (select request_id from request)));
delete from topology_logical_task where physical_task_id in (select task_id from host_role_command where stage_id in (select stage_id from stage where request_id in (select request_id from request)));
delete from host_role_command where stage_id in (select stage_id from stage where request_id in (select request_id from request));
delete from stage where request_id in (select request_id from request);
delete from requestresourcefilter where request_id in (select request_id from request);
delete from requestoperationlevel where request_id in (select request_id from request);
delete from request;
10-27-2016
09:14 PM
Soft-linking the hdfs and core site XMLs on the Knox Gateway server fixed the UnknownHostException issue (a sketch of the linking is below). The versions I was working with: HDP 2.5, Ranger 0.6.0.2.5, Knox 0.9.0.2.5. Thanks.
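A minimal sketch of the soft-linking (both directories below are assumptions; adjust to your gateway's actual layout):
# Hedged sketch: symlink the cluster site files onto the Knox Gateway host.
import os

cluster_conf = "/etc/hadoop/conf"                # assumed location of the site files
knox_conf = "/usr/hdp/current/knox-server/conf"  # hypothetical target directory
for name in ("core-site.xml", "hdfs-site.xml"):
    os.symlink(os.path.join(cluster_conf, name), os.path.join(knox_conf, name))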