Member since
06-15-2016
45
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
6059 | 09-05-2019 04:46 AM |
12-24-2020
02:20 AM
Dear Team,

Problem statement: while creating new topics, the ISR for all topics shrinks automatically. Even though the topics are created, Kafka throws the errors below in server.log.

Versions: Kafka 0.10.0 on HDP-2.6.3.0; 3 brokers with a 16 GB JVM heap each.

We tried the steps below, as per https://medium.com/@nblaye/reset-consumer-offsets-topic-in-kafka-with-zookeeper-5910213284a2, but had no luck 😞
1) Stopped all brokers.
2) Removed the files cleaner-offset-checkpoint, .lock, recovery-point-offset-checkpoint and replication-offset-checkpoint from all brokers.
3) Restarted all brokers.

Logs for reference, found in server.log:
[2020-12-24 00:22:23,481] ERROR [ReplicaFetcherThread-0-1002], Error for partition [__consumer_offsets,9] to broker 1002:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2020-12-24 00:18:59,924] ERROR [ReplicaFetcherThread-0-1002], Error for partition [__consumer_offsets,36] to broker 1002:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
[2020-12-24 00:18:59,951] ERROR [ReplicaFetcherThread-0-1002], Error for partition [__consumer_offsets,45] to broker 1002:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
[2020-12-24 01:14:20,923] INFO Partition [__consumer_offsets,14] on broker 1003: Shrinking ISR for partition [__consumer_offsets,14] from 1002,1003,1001 to 1002,1003 (kafka.cluster.Partition)
[2020-12-24 01:14:20,925] INFO Partition [__consumer_offsets,32] on broker 1003: Shrinking ISR for partition [__consumer_offsets,32] from 1002,1003,1001 to 1002,1003 (kafka.cluster.Partition)
[2020-12-24 01:14:20,927] INFO Partition [__consumer_offsets,29] on broker 1003: Shrinking ISR for partition [__consumer_offsets,29] from 1003,1002,1001 to 1003,1002 (kafka.cluster.Partition)
[2020-12-24 01:14:20,928] INFO Partition [__consumer_offsets,44] on broker 1003: Shrinking ISR for partition [__consumer_offsets,44] from 1003,1002,1001 to 1003,1002 (kafka.cluster.Partition)

Please suggest. Thanks in advance.
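For triage, it can help to see how widespread the ISR churn is. A rough sketch that tallies shrink events per partition from server.log (the sample lines below are copied from the excerpt above; in practice point the pipeline at the real server.log instead of the here-doc):

```shell
# Tally "Shrinking ISR" events per partition from a Kafka server.log.
# The sample file below reuses lines from the log excerpt; in practice:
#   grep 'Shrinking ISR' /var/log/kafka/server.log | ...
cat > server.log.sample <<'EOF'
[2020-12-24 01:14:20,923] INFO Partition [__consumer_offsets,14] on broker 1003: Shrinking ISR for partition [__consumer_offsets,14] from 1002,1003,1001 to 1002,1003 (kafka.cluster.Partition)
[2020-12-24 01:14:20,925] INFO Partition [__consumer_offsets,32] on broker 1003: Shrinking ISR for partition [__consumer_offsets,32] from 1002,1003,1001 to 1002,1003 (kafka.cluster.Partition)
EOF

# Extract the partition name from each shrink event and count occurrences,
# most-churned partitions first.
grep 'Shrinking ISR' server.log.sample \
  | sed 's/.*Shrinking ISR for partition \(\[[^]]*\]\).*/\1/' \
  | sort | uniq -c | sort -rn
```

This at least shows whether the shrinking is cluster-wide or concentrated on a few partitions/brokers.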
Labels:
- Apache Kafka
- Kerberos
09-05-2019
04:46 AM
Hi All, resolved using the steps below:
1) To observe the DataNode threads: created a widget in Ambari under HDFS for DataNode threads (Runnable, Waiting, Blocked). Noticed that from a particular date the threads went into the wait state, and exported the widget graph as CSV to find the exact time the threads started waiting.
2) Restarted all DataNodes manually and observed that the waiting threads were released.
3) With the default 4096 threads the DataNodes now run properly.
Still unable to understand:
1) How to check which DataNode the waiting threads are on?
2) Which task or process tends to leave threads in the wait state?
Would like to know if anyone has come across this and found the details; otherwise the steps above are the only fix I have for waiting threads.
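A partial answer to my own open question 1): the waiting threads can be counted per DataNode from a jstack dump. A minimal sketch (the dump below is a made-up sample for illustration; replace it with the real output of `jstack <datanode_pid>`):

```shell
# Count JVM threads by state from a jstack dump of a DataNode.
# In practice: jstack "$(pgrep -f DataNode)" > dn.jstack
cat > dn.jstack <<'EOF'
"DataXceiver for client" daemon prio=10 tid=0x1 nid=0x2 waiting on condition
   java.lang.Thread.State: WAITING (parking)
"IPC Server handler 0" daemon prio=10 tid=0x3 nid=0x4 runnable
   java.lang.Thread.State: RUNNABLE
"IPC Server handler 1" daemon prio=10 tid=0x5 nid=0x6 runnable
   java.lang.Thread.State: RUNNABLE
EOF

# One count per thread state (RUNNABLE, WAITING, BLOCKED, ...), largest first.
grep 'java.lang.Thread.State' dn.jstack | awk '{print $2}' | sort | uniq -c | sort -rn
```

Running this against each DataNode in turn would show where the waiting threads accumulate.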
08-29-2019
04:20 AM
Ambari 2.6 and HDP 2.6.3. The error below is displayed while performing the following:
1) An HDFS get operation.
2) Aggregating and writing a file to HDFS using PySpark.
Error: "19/08/29 15:53:02 WARN hdfs.DFSClient: Failed to connect to /DN_IP:1019 for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available"
We found the following links suggesting to set dfs.datanode.max.transfer.threads=8196:
1) https://www.netiq.com/documentation/sentinel-82/admin/data/b1nbq4if.html (Performance Tuning Guidelines)
2) https://github.com/hortonworks/structor/issues/7 (jmaron commented on Jul 28, 2014)
Could you please suggest whether I should go ahead with this change? Does this configuration affect any other services? Thank you.
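If we do proceed, my understanding is that the change amounts to one property in hdfs-site.xml (via HDFS configs in Ambari), followed by a rolling DataNode restart. A sketch; 8196 is the value from the links above, and the stated default is 4096:

```xml
<!-- hdfs-site.xml: raise the DataNode data-transfer thread ceiling -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8196</value>
</property>
```

Please correct me if the property should land elsewhere or needs a different value.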
06-11-2019
11:37 AM
@Jay Kumar SenSharma, thanks for the support! Yes, there was an inconsistency in the Ambari server DB that stopped Alerts from working in the Ambari UI. The Ambari server DB had grown to 294 MB. After purging the last 6 months from the DB and restarting Ambari, the Alerts page worked again in the Ambari UI. I would like to know in detail what measures an admin should take if this happens in a PROD environment.
06-11-2019
11:28 AM
@Geoffrey Shelton Okot, the purging saved me, thanks a lot for the support. It would be great if I could automate the purging every 6 months; please let me know if there is a way to do it.
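In case it helps anyone later: newer Ambari releases ship an `ambari-server db-purge-history` command, so one unpolished idea is a cron entry along these lines. The cluster name, the 6-month date arithmetic, and the need to stop the server during the purge are all assumptions here; please verify them against your Ambari version before using this:

```
# crontab fragment (sketch only): purge Ambari operational history
# at 02:00 on the 1st day of every 6th month.
# m h dom mon dow  command
0 2 1 */6 * ambari-server stop && ambari-server db-purge-history --cluster-name MyCluster --from-date $(date -d '6 months ago' +\%Y-\%m-\%d) && ambari-server start
```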
06-10-2019
01:06 PM
@Geoffrey Shelton Okot, the ambari-server DB was created on 2018-09-04 20:10:19. The ambari schema belonging to ambari-server is 294 MB. Do you really think I need to purge it? Should the ambari-server schema size have a limit? If so, how do I limit it, or automate the purging?
06-10-2019
12:20 PM
@Jay Kumar SenSharma
1) ambari-server.log shows: Error Processing URI: /api/v1/clusters/cluster-name/alerts - (java.lang.NullPointerException) null
2) Tried an incognito browser and checked the console tab as well.
3) There is enough memory for the Ambari server to function; approx. 166 GB is available.
4) Following your link above, there is no entry in the ambari-server log about exceeding the Java heap size; the only error is the java.lang.NullPointerException when opening the Alerts tab.
Please suggest.
05-31-2019
08:45 AM
Hi All, the HDP 2.6.3 cluster configured with Ambari 2.6 shows no content on the Alerts page. Can anyone help me find the RCA for this? Attaching a screenshot of the Ambari Alerts page. Also, which log should I monitor for this Alerts page?
Labels:
- Apache Ambari
09-26-2018
01:32 PM
@Gitanjali Bare I have also faced this timeout error. There are two ways to solve it:
1) Validate that KDC port 88 is allowed to ESTABLISH both TCP and UDP connections:
netstat -an | grep 88
2) If UDP is not allowed, add the following entry in krb5.conf under [libdefaults]:
udp_preference_limit = 1
This worked for me; hope it helps you too. Thanks.
09-18-2018
10:45 AM
@Gonçalo Cunha, thanks for your response. The current krb5.conf looks like:

[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = XYZ.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
.example.com = XYZ.COM
example.com = XYZ.COM
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
XYZ.COM = {
admin_server = FQDN
kdc = FQDN
}

The principal pointing to this looks like: username/hostname@XYZ.COM
Keytab: username.service.keytab
09-18-2018
07:03 AM
Hi All, we are trying to push data into kerberized Kafka using an external, non-kerberized NiFi. The NiFi node was previously configured in /etc/krb5.conf to use the ABC.COM realm and was working fine. Now that we want to change the realm in the same file to XYZ.COM, we get the following error. Please let me know if any service needs to be restarted when using a kerberized cluster from a non-kerberized node.

2018-09-17 17:24:58,905 ERROR
[Timer-Driven Process Thread-144] o.a.n.p.kafka.pubsub.PublishKafka_0_10
PublishKafka_0_10[id=e01735cf-1a47-12b1-8151-82518eab4545]
PublishKafka_0_10[id=e01735cf-1a47-12b1-8151-82518eab4545] failed to process
session due to org.apache.kafka.common.KafkaException: Failed to construct
kafka producer: {}org.apache.kafka.common.KafkaException:
Failed to construct kafka producer
at
org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:342)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:166)
at
org.apache.nifi.processors.kafka.pubsub.PublisherPool.createLease(PublisherPool.java:61)
at org.apache.nifi.processors.kafka.pubsub.PublisherPool.obtainPublisher(PublisherPool.java:56)
at
org.apache.nifi.processors.kafka.pubsub.PublishKafka_0_10.onTrigger(PublishKafka_0_10.java:312)
at
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)Caused by:
org.apache.kafka.common.KafkaException:
javax.security.auth.login.LoginException: Cannot locate KDC
at
org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:94)
at
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:93)
at
org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:51)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:84)
at
org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:305)
... 16 common frames omittedCaused by:
javax.security.auth.login.LoginException: Cannot locate KDC
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.GeneratedMethodAccessor1592.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:58)
at
org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:109)
at
org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:55)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:83)
at
org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)
... 20 common frames omittedCaused by: sun.security.krb5.KrbException:
Cannot locate KDC
at sun.security.krb5.Config.getKDCList(Config.java:1084)
at sun.security.krb5.KdcComm.send(KdcComm.java:218)
at sun.security.krb5.KdcComm.send(KdcComm.java:200)
at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:316)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361)
at
com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:776)
... 36 common frames omittedCaused by:
sun.security.krb5.KrbException: Generic error (description in e-text) (60) -
Unable to locate KDC for realm XYZ.COM
at sun.security.krb5.Config.getKDCFromDNS(Config.java:1181)
at sun.security.krb5.Config.getKDCList(Config.java:1057)... 41 common frames omitted
Labels:
- Apache Kafka
- Apache NiFi
07-06-2018
11:55 AM
Hi All, I am using Ambari 2.6 with HDP 2.6. The logs are not visible in the Log Search UI. The Log Search server error logs are:

[pool-2-thread-1] ERROR apache.solr.client.solrj.impl.CloudSolrClient (CloudSolrClient.java:903) - Request to collection hadoop_logs failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry? 0
[logfeeder_filter_setup] ERROR apache.solr.client.solrj.impl.CloudSolrClient (CloudSolrClient.java:903) - Request to collection history failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry? 0

The Logfeeder error logs are:

ERROR org.apache.ambari.logfeeder.util.LogFeederUtil (LogFeederUtil.java:303) - Error getting filter config from solr.

Please suggest how I can view logs in the Log Search UI. Thanks.
Labels:
- Apache Ambari
05-22-2018
10:44 AM
@yifan zhao You can take a look here: Link. Hope it helps.
05-15-2018
06:24 AM
Hi All, I resolved this issue. In my cluster, the user I was querying with ("select count(*) from table;") had Hive View access but existed only in Ambari Views and in the HDFS /user directory, not in the OS. In a kerberized cluster with Hive impersonation enabled (i.e., queries run as the end user), "select count(*) from table;" checks the access and privileges of the /tmp/hive directory on the node where the HiveServer2 service is installed. The possible solutions are:
1) Create the user in the OS and add it to the hdfs group using "usermod -g hdfs username".
2) Create the user in the OS and grant access to the /tmp/hive directory through Ranger.
3) Disable Hive impersonation (run queries as the hive user instead of the end user), so the hive user fetches the query output in the background.
Let me know if there are other possible ways to achieve the same.
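For option 3, the relevant switch is hive.server2.enable.doAs (exposed in Ambari as "Run as end user instead of Hive user"). Roughly, in hive-site.xml:

```xml
<!-- hive-site.xml: disable impersonation so queries run as the hive service user -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>
```

Note the trade-off: with doAs disabled, per-user HDFS permissions no longer apply to query execution, so authorization should then be handled in Ranger (or similar).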
05-09-2018
08:31 AM
Hi All, in a kerberized HDP cluster I created an external table from Hive View successfully. I can run "select * from table" and see the records in Hive View. But "select count(*) from table" fails in Hive View with: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Attaching the error in detail: java.lang.Exception: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
java.lang.Exception: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask at org.apache.ambari.view.hive2.resources.jobs.JobService.getOne(JobService.java:143) at sun.reflect.GeneratedMethodAccessor399.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:287) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at 
org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.apache.ambari.server.security.authentication.AmbariDelegatingAuthenticationFilter.doFilter(AmbariDelegatingAuthenticationFilter.java:132) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:91) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342) at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192) at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at 
org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at org.apache.ambari.server.view.AmbariViewsMDCLoggingFilter.doFilter(AmbariViewsMDCLoggingFilter.java:54) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at org.apache.ambari.server.view.ViewThrottleFilter.doFilter(ViewThrottleFilter.java:161) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82) at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212) at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201) at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:139) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:370) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494) at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:973) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1035) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:748) Caused by: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:348) at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:251) at org.apache.ambari.view.hive2.HiveJdbcConnectionDelegate.execute(HiveJdbcConnectionDelegate.java:49) at 
org.apache.ambari.view.hive2.actor.StatementExecutor.runStatement(StatementExecutor.java:87) at org.apache.ambari.view.hive2.actor.StatementExecutor.handleMessage(StatementExecutor.java:70) at org.apache.ambari.view.hive2.actor.HiveActor.onReceive(HiveActor.java:38) at akka.actor.UntypedActor$anonfun$receive$1.applyOrElse(UntypedActor.scala:167) at akka.actor.Actor$class.aroundReceive(Actor.scala:467) at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97) at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516) at akka.actor.ActorCell.invoke(ActorCell.scala:487) at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238) at akka.dispatch.Mailbox.run(Mailbox.scala:220) at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397) at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Please let me know if you need more information on the same.
Labels:
- Apache Ambari
- Apache Hive
05-07-2018
01:36 PM
@Kibrom Gebrehiwot, I ran into the same issue you faced, and the steps everyone suggested here are spot on. After adding the realm name of your kerberized HDP cluster on the NiFi node, you also have to change the Kerberos configuration path in the PutHDFS properties to the krb5.conf path, which worked like a charm for me.
04-24-2018
07:28 PM
Hi, I am planning to set up a kerberized zone in HDP and HDF clusters using Ambari. As I am new to this area, there are some points I would like to hear from you all:
1. What are the advantages of a kerberized zone?
2. Which services should I consider keeping in that zone?
3. Is there a better approach than this?
4. For an HA and PROD environment, what are the best practices?
5. How should I implement and configure it if I plan to add Ranger?
6. If it is integrated with both HDP and HDF clusters, what would be good administrator practice?
7. Any study materials for HDP?
Thanks all.
11-21-2017
02:41 PM
@Jay Kumar SenSharma It worked perfectly! A hearty thank you for your valuable support. I appreciate the way you tackle issues and provide the perfect solution with such smooth writing.
11-21-2017
07:26 AM
Found this in ambari-server.log : 21 Nov 2017 12:55:02,177 INFO [pool-18-thread-1] MetricsServiceImpl:65 - Attempting to initialize metrics sink
21 Nov 2017 12:55:02,178 INFO [pool-18-thread-1] MetricsServiceImpl:81 - ********* Configuring Metric Sink **********
21 Nov 2017 12:55:02,178 INFO [pool-18-thread-1] AmbariMetricSinkImpl:95 - No clusters configured.
21 Nov 2017 12:55:12,117 ERROR [alert-event-bus-1] AlertReceivedListener:480 - Unable to process alert ambari_agent_version_select for an invalid cluster named abcdefg

ambari-audit.log:
2017-11-21T12:54:37.162+0530, User(admin), RemoteIp(Desktop-IP), RequestType(POST), url(http://10.128.20.10:8080/api/v1/version_definitions?dry_run=true), ResultStatus(201 Created)
2017-11-21T12:55:17.758+0530, User(admin), RemoteIp(Desktop-IP), Operation(Repository version removal), RequestType(DELETE), url(http://ambari-server_fqdn:8080/api/v1/stacks/HDP/versions/2.6/repository_versions/1), ResultStatus(500 Internal Server Error), Reason(org.apache.ambari.server.controller.spi.SystemException: Repository version can't be deleted as it is used by the following hosts: CURRENT on node03-FQDN), Stack(HDP), Stack version(2.6), Repo version ID(1)
11-21-2017
06:23 AM
Hi all, I am not sure whether to call this an error or an issue. While performing an automated install of Ambari 2.6 with HDP 2.6.3 on a 4-node cluster, everything in the CLUSTER INSTALL WIZARD went fine until I reached the Review step and clicked Deploy. The installation is now stuck on step 8 of the CLUSTER INSTALL WIZARD and the Deploy button is greyed out. Steps performed:
1) Restarted ambari-server and the agents on all 4 nodes, 5 times.
2) Went from step 0 to step 8 and back again, 5 times.
3) Tested the same steps on Chrome, IE and Mozilla, 3 times.
Is there anything I missed? Please let me know.
11-20-2017
12:33 PM
@Rahul Narayanan Use NTPSERVERARGS=iburst. Refer to this link; hope it helps.
11-16-2017
02:00 PM
@Rahul Narayanan Kindly check that when you run the command "hostname" on your host, the output exactly matches the FQDN you specified in the repos. If not, configure /etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=fqdn
NTPSERVERARGS=iburst
Hope this helps.
11-16-2017
01:35 PM
Thanks @Jay Kumar SenSharma, it worked! 🙂
11-08-2017
01:43 PM
Hi all, the ambari-server in my cluster was upgraded from 2.5.3 to 2.6. The repository is visible and the Ambari RPM was updated through yum. "ambari-server setup" went smoothly and completed successfully, but "ambari-server start" fails with "Unable to determine server PID" and "Ambari Server java process died with exitcode 255". The full output is shown below:

ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........Unable to determine server PID. Retrying...
......Unable to determine server PID. Retrying...
......Unable to determine server PID. Retrying...
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.

/var/log/ambari-server/ambari-server.out:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
An unexpected error occured during starting Ambari Server.
org.apache.ambari.server.AmbariException: Current database store version is not compatible with current server version, serverVersion=2.6
.0.0, schemaVersion=2.5.2
at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.checkDBVersionCompatible(DatabaseConsistencyCheckHelper.java:24
5)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1058)

Please help at the earliest. Thanks a lot.
Labels:
- Apache Ambari
10-23-2017
08:23 AM
@Alexander Daher Thank you! The steps worked for me. I have a small question: the Linux server on which ambari-server runs was rebooted after it became unresponsive. How is it that only the postgresql service stopped while all the other services kept running? Any idea?
10-09-2017
11:43 AM
@Jay SenSharma 1) No, I'm not getting HTTP ERROR 401. 2) No, it's not a kerberized cluster. 3) Ambari 2.5.1.0. Thanks for the instant reply.
10-05-2017
02:20 PM
Hi, I am using HDP-2.6 with Ambari 2.5.1.0. An INSERT query was submitted in Hive; when I click on the Tez View I can see the query running, but with the following error: Adapter operation failed » 404: Failed to fetch results by the proxy from url: http://srdcb0004gpm04.ril.com:8088/proxy/application_1496812988153_0058/ws/v2/tez/dagInfo?counters=&dagID=2&_=1 Details:
{
"trace": "<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=ISO-8859-1\"/>\n<title>Error 404 Not current Dag: 2</title>\n</head>\n<body><h2>HTTP ERROR 404</h2>\n<p>Problem accessing /ui/ws/v2/tez/dagInfo. Reason:\n<pre> Not current Dag: 2</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n\n</body>\n</html>\n",
"message": "Failed to fetch results by the proxy from url: http://srdcb0004gpm04.ril.com:8088/proxy/application_1496812988153_0058/ws/v2/tez/dagInfo?counters=&dagID=2&_=1507211109914&user.name=admin",
"status": 404
} Request: {
"adapterName": "dag-am",
"url": "http://10.128.20.10:8080/api/v1/views/TEZ/versions/0.7.0.2.6.1.0-118/instances/TEZ_CLUSTER_INSTANCE/resources/rmproxy/proxy/application_1496812988153_0058/ws/v2/tez/dagInfo",
"responseHeaders": {
"pragma": "no-cache",
"x-content-type-options": "nosniff",
"x-frame-options": "SAMEORIGIN",
"content-type": "application/json",
"cache-control": "no-store",
"user": "admin",
"content-length": "2111",
"x-xss-protection": "1; mode=block"
},
"queryParams": {
"dagID": 2,
"counters": ""
},
"namespace": "app"
} Stack: Error: Adapter operation failed
at ember$data$lib$adapters$errors$AdapterError.EmberError (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:24586:21) at new ember$data$lib$adapters$errors$AdapterError (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:80222:50) at Class.handleResponse (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:81517:16) at Class.hash.error (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:81597:33) at fire (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:3320:30)
at Object.fireWith [as rejectWith] (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:3432:7) at done (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:8487:14) at XMLHttpRequest.<anonymous> (http://10.128.20.10:8080/views/TEZ/0.7.0.2.6.1.0-118/TEZ_CLUSTER_INSTANCE/assets/vendor.js:8826:9)
07-20-2017
08:26 AM
@Jay SenSharma I can see in the Ambari UI that Node 4 in my cluster is showing the STALE alert. I verified the registered hosts URL and the hosts entries on all the nodes; they seem fine.
07-17-2017
11:48 AM
@Jay SenSharma
Sorry to inform you, I got this instead. Please help! Thank you.
07-17-2017
11:10 AM
@Jay SenSharma It worked for now! Thanks for the support.