Member since: 05-23-2019
Posts: 19
Kudos Received: 6
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2426 | 05-28-2019 07:39 AM |
11-01-2020 10:26 PM
Hi all, I'm not sure if this issue is considered solved. In case it helps, here is how we fixed it. We hit the same error after removing several nodes from our Kerberized cluster (Ambari 2.7.4 and HDP 3.1.4):

    $ /usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase
    20/11/02 07:04:39 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    20/11/02 07:04:39 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    ats-hbase Failed : HTTP error code : 500

Following this thread, we carefully checked the YARN configuration to make sure all the variables were correctly sized for the nodes that remained. After that, we destroyed the YARN app:

    $ yarn app -destroy ats-hbase
    20/11/02 07:06:13 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    20/11/02 07:06:13 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    20/11/02 07:06:14 INFO client.ApiServiceClient: Successfully destroyed service ats-hbase

    $ /usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase
    20/11/02 07:06:19 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    20/11/02 07:06:19 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    Service ats-hbase not found

Then we restarted the whole YARN service from Ambari. Now everything is running fine:

    $ /usr/hdp/current/hadoop-yarn-client/bin/yarn app -status ats-hbase
    20/11/02 07:09:02 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    20/11/02 07:09:02 INFO client.AHSProxy: Connecting to Application History server at XXXXX/YYY.YYY.YY.YY:10200
    {"name":"ats-hbase","id":"application_1604297264331_0001","artifact":{"id":"/hdp/apps/3.1.4.0-315/hbase/rm2/hbase.tar.gz","type":"TARBALL"},"lifetime":-1,"components":[{"name":"master","dependencies":[],"artifact":{"id":"/hdp/apps/3.1.4.0-315/hbase/rm2/hbase.tar.gz","type":"TARBALL"},"resource":{"cpus":1,"memory":"4096","additional":{}},"state":"STABLE","configuration":{"properties":{"yarn.service.container-failure.retry.max":"10","yarn.service.framework.path":"/hdp/apps/3.1.4.0-315/yarn/rm2/service-dep.tar.gz"},"env":{"HBASE_LOG_PREFIX":"hbase-$HBASE_IDENT_STRING-master-$HOSTNAME","HBASE_LOGFILE":"$HBASE_LOG_PREFIX.log","HBASE_MASTER_OPTS":"-Xms3276m -Xmx3276m -Djava.security.auth.login.config=/usr/hdp/3.1.4.0-315/hadoop/conf/embedded-yarn-ats-hbase/yarn_hbase_master_jaas.conf", [...]
09-08-2020 07:38 PM
Is there any resolution to this? I am seeing this issue with one of the ACID tables, which has around 25 M records. Other tables have 700 M records and are working fine. I am facing this issue only for a few tables.
05-12-2020 11:47 AM
You should not use Flume. Flume and its connectors are deprecated. This flow, and any other, can easily be moved to NiFi. https://dev.to/tspannhw/migrating-apache-flume-flows-to-apache-nifi-jms-to-x-and-x-to-jms-1g02
02-27-2020 07:45 AM
Turning on debug mode showed a little bit of additional information and eventually got me to look at the code. I added a few debug lines of my own to /usr/hdp/current/superset/lib/python3.6/site-packages/flask_appbuilder/security/manager.py in _search_ldap to show the filter_str and username being passed to the LDAP search. I saw the filter_str was set to userPrincipalName=jeff.watson@our.domain, so I got rid of the @our.domain by adding AUTH_LDAP_APPEND_DOMAIN, but that still didn't work. I finally remembered that Ranger used sAMAccountName as an AD search attribute, so I set AUTH_LDAP_UID_FIELD to sAMAccountName and, poof, LDAP logins work.

Note: Ambari settings aren't saved where the command-line version can find them until I saved and restarted Superset in Ambari, then stopped it again so I could run it interactively to see the debug logging. I'm busy and lazy, so I didn't go back and remove other settings to see which ones I did or didn't need; here are the settings that worked for me. Our cluster is Kerberized and uses self-signed certificates.

    AUTH_LDAP_UID_FIELD=sAMAccountName
    AUTH_LDAP_BIND_USER=CN=Bind,OU=Admin,dc=our,dc=domain
    AUTH_LDAP_SEARCH=OU=Employees,dc=our,dc=domain
    AUTH_LDAP_SERVER=ldap://our.domain
    AUTH_LDAP=AUTH_LDAP
    AUTH_LDAP_ALLOW_SELF_SIGNED=True
    AUTH_LDAP_APPEND_DOMAIN=False
    AUTH_LDAP_FIRSTNAME_FIELD=givenName
    AUTH_LDAP_LASTNAME_FIELD=sn
    AUTH_LDAP_USE_TLS=False
    AUTH_USER_REGISTRATION=True
    ENABLE_KERBEROS_AUTHENTICATION=True
    KERBEROS_KEYTAB=/etc/security/keytabs/superset.headless.keytab
    KERBEROS_PRINCIPAL=superset-sdrdev@OUR.DOMAIN
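For anyone editing the config by hand rather than through Ambari, here is a minimal sketch of my own (not the exact Ambari-generated file) of how the LDAP portion of those settings might look in superset_config.py. The server, bind DN, and search base are the same placeholders as above; the bind password is not shown in my settings, so it stays as a placeholder, and I've left the HDP Kerberos keys out since Ambari manages those.

```python
# Minimal sketch of the LDAP settings in superset_config.py (assumption:
# hand-maintained config, not the Ambari-generated one).
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP                        # corresponds to AUTH_LDAP=AUTH_LDAP above
AUTH_LDAP_SERVER = "ldap://our.domain"
AUTH_LDAP_BIND_USER = "CN=Bind,OU=Admin,dc=our,dc=domain"
AUTH_LDAP_BIND_PASSWORD = "..."              # not shown in the post; set your own
AUTH_LDAP_SEARCH = "OU=Employees,dc=our,dc=domain"
AUTH_LDAP_UID_FIELD = "sAMAccountName"       # the key change that made AD logins work
AUTH_LDAP_FIRSTNAME_FIELD = "givenName"
AUTH_LDAP_LASTNAME_FIELD = "sn"
AUTH_LDAP_APPEND_DOMAIN = False
AUTH_LDAP_ALLOW_SELF_SIGNED = True
AUTH_LDAP_USE_TLS = False
AUTH_USER_REGISTRATION = True
```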
10-03-2019 02:11 AM
I have the same symptom, but a slightly different message on a newly built HDP 3.0.1 cluster. This is from the YARN app log for the failed Oozie application:

    2019-10-03 09:06:54,805 INFO [Thread-75] org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain. Thread state is :WAITING
    2019-10-03 09:06:54,905 INFO [Thread-75] org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain. Thread state is :WAITING
    2019-10-03 09:06:54,986 ERROR [Job ATS Event Dispatcher] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Exception while publishing configs on JOB_SUBMITTED Event for the job : job_1570085949108_0002
    org.apache.hadoop.yarn.exceptions.YarnException: Failed while publishing entity
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher.dispatchEntities(TimelineV2ClientImpl.java:548)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putEntities(TimelineV2ClientImpl.java:149)
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.publishConfigsOnJobSubmittedEvent(JobHistoryEventHandler.java:1254)
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.processEventForNewTimelineService(JobHistoryEventHandler.java:1414)
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleTimelineEvent(JobHistoryEventHandler.java:742)
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.access$1200(JobHistoryEventHandler.java:93)
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(JobHistoryEventHandler.java:1795)
        at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(JobHistoryEventHandler.java:1791)
        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
        at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
        at com.sun.jersey.api.client.Client.handle(Client.java:652)
        at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
        at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
        at com.sun.jersey.api.client.WebResource$Builder.put(WebResource.java:539)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.doPutObjects(TimelineV2ClientImpl.java:291)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.access$000(TimelineV2ClientImpl.java:66)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$1.run(TimelineV2ClientImpl.java:302)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$1.run(TimelineV2ClientImpl.java:299)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(TimelineV2ClientImpl.java:299)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(TimelineV2ClientImpl.java:251)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$EntitiesHolder$1.call(TimelineV2ClientImpl.java:374)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$EntitiesHolder$1.call(TimelineV2ClientImpl.java:367)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher$1.publishWithoutBlockingOnQueue(TimelineV2ClientImpl.java:495)
        at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher$1.run(TimelineV2ClientImpl.java:433)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more
    Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:170)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
        at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253)
        at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
        ... 21 more
09-09-2019 02:01 PM
Were you able to resolve this error?
06-11-2019 04:26 AM
@jingyong zou The issue is with the flowfile format that is passed to the processor. The ConvertAvroToJson processor accepts only Avro format, but I think you are passing JSON format to it, which is causing the java.io.IOException: Not a data file error.
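If you want to confirm what is actually in the flowfile, one quick check of my own (not part of NiFi) is to download the queued flowfile's content from the NiFi UI and look at its first bytes: an Avro object container file always starts with the magic header "Obj" followed by byte 0x01, which is what the Avro reader behind ConvertAvroToJson expects. The file name below is just an example.

```python
# Hypothetical check of a flowfile's content downloaded from the NiFi queue.
# An Avro data file begins with the 4-byte magic b"Obj\x01"; JSON text does not,
# which is what triggers "java.io.IOException: Not a data file".
AVRO_MAGIC = b"Obj\x01"

with open("flowfile_content.bin", "rb") as f:   # path is an assumption
    header = f.read(4)

if header == AVRO_MAGIC:
    print("Content looks like an Avro data file")
else:
    print("Not an Avro data file - serialize the content as Avro before ConvertAvroToJson")
```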
05-28-2019 07:39 AM
3 Kudos
This has already been solved. The working connection URI is: hive://ip:10500/default?auth=KERBEROS&kerberos_service_name=hive
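In case it helps someone else, a URI in this form is the SQLAlchemy-style URI handled by PyHive, so it can also be sanity-checked outside the application. Below is a rough sketch of my own (not from the original solution), assuming PyHive with its SASL/Kerberos extras is installed and a valid Kerberos ticket already exists from kinit; replace "ip" with your HiveServer2 Interactive host (port 10500 is the HSI port used above).

```python
# Rough sketch: verify the Kerberized Hive URI with SQLAlchemy + PyHive.
from sqlalchemy import create_engine, text

engine = create_engine(
    "hive://ip:10500/default?auth=KERBEROS&kerberos_service_name=hive"
)

# List the tables in the default database as a simple connectivity test.
with engine.connect() as conn:
    for row in conn.execute(text("SHOW TABLES")):
        print(row)
```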