Member since: 09-19-2013
Posts: 38
Kudos Received: 1
Solutions: 0
07-04-2019
09:40 AM
Hello all,
I have a question about the merger of the two distributions and the new CDP release.
When CDP comes out, will there be a free edition, the way Cloudera Manager Express is free?
And if so, will it keep the 100-node limit of the Express edition, or allow unlimited nodes as Hortonworks does?
P.S. One last question: for new deployments, would you recommend installing HDP or CDH? From which one will it be easier to migrate to CDP in the future?
Labels: Cloudera Data Platform (CDP)
04-05-2018
01:13 PM
Hi, I tried a queue dump: oozie admin -oozie http://localhost:11000/oozie -queuedump
[Server Queue Dump]:
[action.start_0000000-180404170027904-oozie-oozi-W@sqoop-6bbd] priority=0 delay=419
******************************************
[Server Uniqueness Map Dump]:
action.start_0000000-180404170027904-oozie-oozi-W@sqoop-6bbd=Thu Apr 05 16:59:23 GET 2018
Then I restarted Oozie and tried to kill the job again, but it still remains in the running state at the PREP stage 😕 Do you have any idea what I am doing wrong? Thank you
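For reference, the workflow ID that `oozie job -kill` expects can be pulled out of the queue-dump entry above; a minimal sketch (the server URL and dump line are the ones from this thread, and the final command is only echoed, not run):

```shell
# Extract the workflow job ID from a queue-dump action entry.
# Action IDs look like <workflow-id>@<action-name>.
dump_line='[action.start_0000000-180404170027904-oozie-oozi-W@sqoop-6bbd] priority=0 delay=419'

# Strip the "[action.start_" prefix and the "@..." suffix.
wf_id=$(printf '%s' "$dump_line" | sed -E 's/^\[action\.start_([^@]+)@.*/\1/')
echo "workflow id: $wf_id"

# Hedged: the kill command to run against your own server (shown, not executed here)
echo "oozie job -oozie http://localhost:11000/oozie -kill $wf_id"
```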
04-04-2018
01:46 PM
Hi guys, I tried to run a Sqoop import job from an Oozie workflow to test whether it works, but there is a problem: the job gets stuck in the PREP stage. Here is the configuration:
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>import \
--driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
--connect 'jdbc:sqlserver://IP.ADDRESS.CHANGED;database=DATABASETEST123' \
--username=USERNAME \
--password=******** \
--table dbo.Testtable \
--compress \
--as-parquetfile \
--split-by id \
--hive-import \
--hive-overwrite \
--hive-table table1 \
--m 30</command>
<configuration />
</sqoop>
Here are the oozie.log entries generated right after starting this job:
2018-04-04 17:15:24,061 WARN ParameterVerifier:523 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] The application does not define formal parameters in its XML definition
2018-04-04 17:15:24,464 INFO ActionStartXCommand:520 - SERVER[ooziehost] USER[username] GROUP[-] TOKEN[] APP[Batch job for sqoop test] JOB[0000000-180404170027904-oozie-oozi-W] ACTION[0000000-180404170027904-oozie-oozi-W@:start:] Start action [0000000-180404170027904-oozie-oozi-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2018-04-04 17:15:24,465 INFO ActionStartXCommand:520 - SERVER[ooziehost] USER[username] GROUP[-] TOKEN[] APP[Batch job for sqoop test] JOB[0000000-180404170027904-oozie-oozi-W] ACTION[0000000-180404170027904-oozie-oozi-W@:start:] [***0000000-180404170027904-oozie-oozi-W@:start:***]Action status=DONE
2018-04-04 17:15:24,465 INFO ActionStartXCommand:520 - SERVER[ooziehost] USER[username] GROUP[-] TOKEN[] APP[Batch job for sqoop test] JOB[0000000-180404170027904-oozie-oozi-W] ACTION[0000000-180404170027904-oozie-oozi-W@:start:] [***0000000-180404170027904-oozie-oozi-W@:start:***]Action updated in DB!
2018-04-04 17:15:24,722 INFO WorkflowNotificationXCommand:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-180404170027904-oozie-oozi-W] ACTION[0000000-180404170027904-oozie-oozi-W@:start:] No Notification URL is defined. Therefore nothing to notify for job 0000000-180404170027904-oozie-oozi-W@:start:
2018-04-04 17:15:24,723 INFO WorkflowNotificationXCommand:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-180404170027904-oozie-oozi-W] ACTION[] No Notification URL is defined. Therefore nothing to notify for job 0000000-180404170027904-oozie-oozi-W
2018-04-04 17:15:24,770 INFO ActionStartXCommand:520 - SERVER[ooziehost] USER[username] GROUP[-] TOKEN[] APP[Batch job for sqoop test] JOB[0000000-180404170027904-oozie-oozi-W] ACTION[0000000-180404170027904-oozie-oozi-W@sqoop-6bbd] Start action [0000000-180404170027904-oozie-oozi-W@sqoop-6bbd] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2018-04-04 17:15:43,705 INFO CoordMaterializeTriggerService$CoordMaterializeTriggerRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] CoordMaterializeTriggerService - Curr Date= 2018-04-04T17:20+0400, Num jobs to materialize = 0
2018-04-04 17:15:43,706 INFO CoordMaterializeTriggerService$CoordMaterializeTriggerRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Released lock for [org.apache.oozie.service.CoordMaterializeTriggerService]
2018-04-04 17:15:43,942 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Acquired lock for [org.apache.oozie.service.StatusTransitService]
2018-04-04 17:15:43,943 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Running coordinator status service from last instance time = 2018-04-04T17:14+0400
2018-04-04 17:15:43,949 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Running bundle status service from last instance time = 2018-04-04T17:14+0400
2018-04-04 17:15:43,952 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Released lock for [org.apache.oozie.service.StatusTransitService]
2018-04-04 17:15:44,047 INFO PauseTransitService:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Acquired lock for [org.apache.oozie.service.PauseTransitService]
2018-04-04 17:15:44,061 INFO PauseTransitService:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Released lock for [org.apache.oozie.service.PauseTransitService]
2018-04-04 17:16:43,953 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Acquired lock for [org.apache.oozie.service.StatusTransitService]
2018-04-04 17:16:43,954 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Running coordinator status service from last instance time = 2018-04-04T17:15+0400
2018-04-04 17:16:43,959 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Running bundle status service from last instance time = 2018-04-04T17:15+0400
2018-04-04 17:16:43,962 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Released lock for [org.apache.oozie.service.StatusTransitService]
2018-04-04 17:16:44,062 INFO PauseTransitService:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Acquired lock for [org.apache.oozie.service.PauseTransitService]
2018-04-04 17:16:44,075 INFO PauseTransitService:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Released lock for [org.apache.oozie.service.PauseTransitService]
2018-04-04 17:17:43,963 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Acquired lock for [org.apache.oozie.service.StatusTransitService]
2018-04-04 17:17:43,963 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Running coordinator status service from last instance time = 2018-04-04T17:16+0400
2018-04-04 17:17:43,969 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Running bundle status service from last instance time = 2018-04-04T17:16+0400
2018-04-04 17:17:43,972 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Released lock for [org.apache.oozie.service.StatusTransitService]
2018-04-04 17:17:44,075 INFO PauseTransitService:520 - SERVER[ooziehost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Acquired lock for [org.apache.oozie.service.PauseTransitService]
I can't even kill the job; it stays in running mode in the workflow. Any idea? Thank you
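One hedged observation, offered as a guess rather than a confirmed diagnosis: Oozie's Sqoop action splits the `<command>` string on whitespace and does not pass it through a shell, so the trailing backslash line continuations in the action above reach Sqoop as literal `\` arguments. A sketch of the same action rewritten with one `<arg>` element per token (values copied from the thread, password masked as before):

```xml
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <arg>import</arg>
    <arg>--driver</arg>
    <arg>com.microsoft.sqlserver.jdbc.SQLServerDriver</arg>
    <arg>--connect</arg>
    <arg>jdbc:sqlserver://IP.ADDRESS.CHANGED;database=DATABASETEST123</arg>
    <arg>--username=USERNAME</arg>
    <arg>--password=********</arg>
    <arg>--table</arg>
    <arg>dbo.Testtable</arg>
    <arg>--compress</arg>
    <arg>--as-parquetfile</arg>
    <arg>--split-by</arg>
    <arg>id</arg>
    <arg>--hive-import</arg>
    <arg>--hive-overwrite</arg>
    <arg>--hive-table</arg>
    <arg>table1</arg>
    <arg>-m</arg>
    <arg>30</arg>
</sqoop>
```

With `<arg>` elements there is no whitespace splitting at all, so quoting and continuations stop being a concern.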
Labels: Apache Oozie, Apache Sqoop, Cloudera Hue
03-16-2018
09:02 AM
Thank you very much for the reply; these are great answers.
03-16-2018
08:34 AM
Hi all, I have a question: is it possible to install a new Ambari server and then add an existing, working HDP 2.6 cluster to it, in case the existing Ambari is damaged or completely lost? If it is possible, will it change or reset the existing configurations of the HDP services? Thank you
02-20-2018
08:23 AM
When you use impersonation, you must have OS system users created with the same names that you have in Zeppelin's shiro.ini (user1 in your case).
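A quick way to verify that point before enabling impersonation is to check each shiro.ini account against the OS; a small sketch (the `require_os_user` helper and the demo call are hypothetical, `user1` is the name from this thread):

```shell
# Check that an OS account exists for a given shiro.ini user name.
require_os_user() {
  if id -u "$1" >/dev/null 2>&1; then
    echo "$1 exists"
  else
    echo "$1 missing: create it, e.g. useradd $1"
  fi
}

# Example: 'root' always exists; replace with user1 etc. on your host.
require_os_user root
```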
02-16-2018
03:26 PM
Hi all, I have HDP 2.6.3 with Ranger security, SSL enabled, and the HDFS, YARN, and Hive plugins enabled. The Hive plugin does not work. Here is hiveserver2.log:
2018-02-16 17:34:00,920 WARN [Thread-14]: client.RangerAdminRESTClient (RangerAdminRESTClient.java:getServicePoliciesIfUpdated(162)) - Error getting policies. secureMode=false, user=hive (auth:SIMPLE), response={"httpStatusCode":400,"statusCode":0}, serviceName=hive
and /var/log/ranger/admin/xa_portal.log:
2018-02-16 08:27:59,754 [http-bio-6182-exec-28] ERROR org.apache.ranger.common.ServiceUtil (ServiceUtil.java:1359) - Requested Service not found. serviceName=hive
I am almost 99% sure that all configurations were done correctly from Ambari (the other plugins work properly). I also searched on Google for what I might have missed, but could not find anything useful.
P.S. I have configured ranger.plugin.hive.policy.rest.ssl.config.file = /usr/hdp/current/hive-client/conf/conf.server/ranger-policymgr-ssl.xml, which contains all the information about the keystores and truststores, and I am sure the keystore passwords are correct (checked many times). Here is the file ranger-policymgr-ssl.xml:
<configuration>
<property>
<name>xasecure.policymgr.clientssl.keystore</name>
<value>/etc/security/key.jks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.keystore.credential.file</name>
<value>jceks://file/etc/ranger/hive/cred.jceks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.keystore.password</name>
<value>crypted</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.truststore</name>
<value>/etc/security/trust.jks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.truststore.credential.file</name>
<value>jceks://file/etc/ranger/hive/cred.jceks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.truststore.password</name>
<value>crypted</value>
</property>
Do you have any idea what I'm missing and how I can fix this? Thank you
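One hedged lead from the xa_portal.log line itself: "Requested Service not found. serviceName=hive" means the plugin is asking Ranger Admin for a service (repository) literally named "hive", while Ambari-created repositories are usually named "<clustername>_hive". The name the plugin sends comes from ranger.plugin.hive.service.name in ranger-hive-security.xml, and it must match the service name shown in the Ranger Admin UI; a sketch (the value here is hypothetical):

```xml
<!-- ranger-hive-security.xml: this value must match the Hive service
     name defined in Ranger Admin (often <clustername>_hive, not the
     bare "hive" that the log shows being requested). -->
<property>
  <name>ranger.plugin.hive.service.name</name>
  <value>CLUSTERNAME_hive</value> <!-- hypothetical value -->
</property>
```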
Labels: Apache Hive, Apache Ranger
02-08-2018
03:59 AM
For example, I have a saved query named "sqoop test" in My Documents; where can I find it on the filesystem? I looked in the local Linux filesystem and in HDFS too, but couldn't find my saved query 😕
01-24-2018
03:05 AM
Hi all, I have compiled Hue 4.1 from source, and I am trying to figure out where Hue physically stores its notebooks. I also can't understand what "My Documents" is: is it a folder somewhere in the local OS filesystem or in HDFS, or is it just an abstract name for collections of notebooks? My Hue installation path is /usr/local/hue:
[hue@localhost hue]$ pwd
/usr/local/hue
[hue@localhost hue]$ ll
total 80
-rw-r--r-- 1 hue hue 2782 Jan 9 14:40 app.reg
drwxr-xr-x 22 hue hue 4096 Jan 9 14:40 apps
drwxr-xr-x 4 hue hue 4096 Jan 9 14:42 build
drwxr-xr-x 5 hue hue 4096 Jan 9 16:10 desktop
drwxrwxr-x 3 hue hue 4096 Jan 9 13:24 ext
-rw-r--r-- 1 hue hue 11358 Jan 9 13:24 LICENSE.txt
drwxrwxr-x 2 hue hue 4096 Jan 23 17:26 logs
-rw-r--r-- 1 hue hue 4929 Jan 9 13:22 Makefile
-rw-r--r-- 1 hue hue 44 Jan 9 14:35 Makefile.buildvars
-rw-r--r-- 1 hue hue 8505 Jan 9 13:22 Makefile.sdk
-rw-r--r-- 1 hue hue 3705 Jan 9 13:22 Makefile.vars
-rw-r--r-- 1 hue hue 2192 Jan 9 13:22 Makefile.vars.priv
-rw-r--r-- 1 hue hue 1305 Jan 9 13:24 README
drwxr-xr-x 4 hue hue 4096 Jan 9 14:35 tools
-rw-r--r-- 1 hue hue 932 Jan 9 13:24 VERSION
Thank you
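For what it's worth, and hedged since this comes from general Hue behavior rather than anything confirmed in this thread: "My Documents" is not a folder on disk or in HDFS at all; Hue keeps saved documents (queries, workflows, notebooks) as rows in its backend database, which for a source build defaults to a SQLite file under the desktop directory. Where that database lives is configured in desktop/conf/hue.ini; a sketch of the relevant section (paths here are assumptions for a /usr/local/hue build):

```ini
# desktop/conf/hue.ini -- [[database]] section (sketch; the sqlite
# path shown is the usual default for a source build)
[desktop]
  [[database]]
    engine=sqlite3
    name=/usr/local/hue/desktop/desktop.db
```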
Labels: Cloudera Hue
01-06-2018
07:50 AM
The userSearchBase, system username, and password are correct; I copied them from the working shiro.ini of the Zeppelin service.
01-05-2018
08:40 AM
@mvaradkar thank you. I tried that, but I get the same 401 status in the logs. By the way, after I enter the URL in the browser (h t t p s :// knox . ragaca . com : 8443/gateway/default/webhdfs/v1), I get a 401 not only when I enter my real, existing AD username and password, but also when I enter random symbols at the login prompt; there is the same "response status 401" in gateway-audit.log every time.
12-28-2017
05:39 PM
Hi all, I am trying to figure out the Knox gateway, but I have a problem when I access services like WEBHDFS. This is the error log from /var/log/knox/gateway-audit.log:
17/12/28 21:30:30 ||de5c4e70-c89c-487e-8fea-6260c6701efb|audit|IPADDR|WEBHDFS||||access|uri|/gateway/default/webhdfs/v1|unavailable|Request method: GET
17/12/28 21:30:30 ||de5c4e70-c89c-487e-8fea-6260c6701efb|audit|IPADDR|WEBHDFS||||access|uri|/gateway/default/webhdfs/v1|success|Response status: 401
This is my topology configuration:
<topology>
<gateway>
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>sessionTimeout</name>
<value>15</value>
</param>
<param>
<name>main.ldapRealm</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
</param>
<param>
<name>main.ldapContextFactory</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.contextFactory</name>
<value>$ldapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.url</name>
<value>ldap://ragaca.com:389</value>
</param>
<param>
<name>main.ldapRealm.authorizationEnabled</name>
<value>true</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
<value>simple</value>
</param>
<param>
<name>main.ldapRealm.userDnTemplate</name>
<value>sAMAccountName={0}</value>
</param>
<param>
<name>main.ldapRealm.userSearchAttributeName</name>
<value>sAMAccountName</value>
</param>
<param>
<name>main.ldapRealm.userObjectClass</name>
<value>person</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.systemUsername</name>
<value>CN=testUser,OU=testUsers,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.systemPassword</name>
<value>*********</value>
</param>
<param>
<name>main.ldapRealm.searchBase</name>
<value>OU=Domain Users & Groups,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.userSearchBase</name>
<value>Users,OU=Domain Users & Groups,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.userSearchScope</name>
<value>subtree</value>
</param>
<param>
<name>main.ldapRealm.groupSearchBase</name>
<value>OU=Groups,OU=Domain Users & Groups,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.groupObjectClass</name>
<value>group</value>
</param>
<param>
<name>main.ldapRealm.memberAttribute</name>
<value>member</value>
</param>
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
</provider>
<provider>
<role>authorization</role>
<name>AclsAuthz</name>
<enabled>true</enabled>
</provider>
</gateway>
<service>
<role>NAMENODE</role>
<url>hdfs://namenode1.ragaca.com:8020</url>
</service>
<service>
<role>JOBTRACKER</role>
<url>rpc://jt.ragaca.com:8050</url>
</service>
<service>
<role>WEBHDFS</role>
<url>http://namenode1.ragaca.com:50070/</url>
<url>http://namenode2.ragaca.com:50070/</url>
</service>
</topology>
I also have hadoop.proxyuser.knox.hosts=* and hadoop.proxyuser.knox.groups=* in the core-site of the HDFS configuration. Could anyone guess what I am missing? Thank you very much, and happy new year!
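One detail worth double-checking, offered as a guess rather than a confirmed fix: the DN values above contain a literal '&' ("OU=Domain Users & Groups,..."), and a Knox topology is XML, where a bare '&' must be written as '&amp;' or the file is not well-formed (the forum may simply be unescaping it for display). A quick self-contained check of the two spellings:

```shell
# Well-formedness check: a bare '&' breaks topology XML, '&amp;' parses.
# The DN value is the one from this thread.
check_out=$(python3 - <<'EOF'
import xml.etree.ElementTree as ET

raw     = '<value>OU=Domain Users & Groups,DC=ragaca,DC=com</value>'
escaped = '<value>OU=Domain Users &amp; Groups,DC=ragaca,DC=com</value>'

for label, doc in (("raw", raw), ("escaped", escaped)):
    try:
        ET.fromstring(doc)
        print(label, "parses OK")
    except ET.ParseError:
        print(label, "is not well-formed XML")
EOF
)
printf '%s\n' "$check_out"
```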
Labels: Apache Knox
11-10-2017
01:00 PM
Hi all, I have the latest HDP 2.6.3 with the latest Zeppelin in it (version 0.7.3). I am trying to configure ActiveDirectoryGroupRealm in shiro.ini. Zeppelin starts, but when I try to log in with my username and password, it says my password is incorrect. Here is the exact error message from the logs:
WARN [2017-11-10 16:15:35,301] ({qtp64830413-78} LoginRestApi.java[postLogin]:119) - {"status":"FORBIDDEN","message":"","body":""}
ERROR [2017-11-10 16:15:40,681] ({qtp64830413-75} LoginRestApi.java[postLogin]:111) - Exception in login:
org.apache.shiro.authc.AuthenticationException: LDAP authentication failed.
at org.apache.zeppelin.realm.ActiveDirectoryGroupRealm.doGetAuthenticationInfo(ActiveDirectoryGroupRealm.java:132)
at org.apache.shiro.realm.AuthenticatingRealm.getAuthenticationInfo(AuthenticatingRealm.java:568)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doSingleRealmAuthentication(ModularRealmAuthenticator.java:180)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doAuthenticate(ModularRealmAuthenticator.java:267)
at org.apache.shiro.authc.AbstractAuthenticator.authenticate(AbstractAuthenticator.java:198)
at org.apache.shiro.mgt.AuthenticatingSecurityManager.authenticate(AuthenticatingSecurityManager.java:106)
at org.apache.shiro.mgt.DefaultSecurityManager.login(DefaultSecurityManager.java:270)
at org.apache.shiro.subject.support.DelegatingSubject.login(DelegatingSubject.java:256)
at org.apache.zeppelin.rest.LoginRestApi.postLogin(LoginRestApi.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:205)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:102)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:94)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:248)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:222)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:153)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:167)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:595)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.apache.zeppelin.server.CorsFilter.doFilter(CorsFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C090400, comment: AcceptSecurityContext error, data 52e, v1db1 ]
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3136)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3082)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2883)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2797)
at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:319)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:192)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:210)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:153)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:83)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
at javax.naming.InitialContext.init(InitialContext.java:244)
at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:154)
at org.apache.shiro.realm.ldap.DefaultLdapContextFactory.createLdapContext(DefaultLdapContextFactory.java:276)
at org.apache.shiro.realm.ldap.DefaultLdapContextFactory.getLdapContext(DefaultLdapContextFactory.java:263)
at org.apache.zeppelin.realm.ActiveDirectoryGroupRealm.queryForAuthenticationInfo(ActiveDirectoryGroupRealm.java:201)
at org.apache.zeppelin.realm.ActiveDirectoryGroupRealm.doGetAuthenticationInfo(ActiveDirectoryGroupRealm.java:128)
... 64 more
WARN [2017-11-10 16:15:40,684] ({qtp64830413-75} LoginRestApi.java[postLogin]:119) - {"status":"FORBIDDEN","message":"","body":""}
ERROR [2017-11-10 16:15:43,806] ({qtp64830413-77} LoginRestApi.java[postLogin]:111) - Exception in login:
org.apache.shiro.authc.AuthenticationException: LDAP authentication failed.
at org.apache.zeppelin.realm.ActiveDirectoryGroupRealm.doGetAuthenticationInfo(ActiveDirectoryGroupRealm.java:132)
at org.apache.shiro.realm.AuthenticatingRealm.getAuthenticationInfo(AuthenticatingRealm.java:568)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doSingleRealmAuthentication(ModularRealmAuthenticator.java:180)
LDAP error code 49 with data 52e means AD invalid credentials, but the username and password are correct. My shiro.ini:
[users]
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
activeDirectoryRealm=org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername="CN=Sys User,OU=SysUsers,DC=qwe,DC=rty"
activeDirectoryRealm.systemPassword=qwerty
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/zeppelin.jceks
activeDirectoryRealm.searchBase="OU=Users,OU=Domain Users & Groups,DC=qwe,DC=rty"
activeDirectoryRealm.url = ldap://activedirectory.qwe.rty:389
activeDirectoryRealm.groupRolesMap = "CN=group1,OU=Groups,OU=Domain Users & Groups,DC=qwe,DC=rty":"admin","CN=group2,OU=Groups,OU=Domain Users & Groups,DC=qwe,DC=rty":"user"
activeDirectoryRealm.authorizationCachingEnabled = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### If caching of user is required then uncomment below lines
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = JSESSIONID
#Uncomment the line below when running Zeppelin-Server in HTTPS mode
#cookie.secure = true
cookie.httpOnly = true
sessionManager.sessionIdCookie = $cookie
securityManager.sessionManager = $sessionManager
securityManager.realms = $activeDirectoryRealm
# 86,400,000 milliseconds = 24 hour
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
#admin = *
#user = *
[urls]
/api/version = anon
#/api/interpreter/** = authc, roles[admin]
#/api/configurations/** = authc, roles[admin]
#/api/credential/** = authc, roles[admin]
#/** = anon
/** = authc
Any idea? Thank you
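The "data 52e" field in the LDAP error above is the Active Directory sub-code, and decoding it narrows the search. These mappings are standard AD values (not something from this thread); a small lookup sketch:

```shell
# Map the hex sub-code from "AcceptSecurityContext error, data NNN"
# to the usual Active Directory meaning.
ad_subcode() {
  case "$1" in
    525) echo "user not found" ;;
    52e) echo "invalid credentials" ;;
    530) echo "logon not permitted at this time" ;;
    531) echo "logon not permitted at this workstation" ;;
    532) echo "password expired" ;;
    533) echo "account disabled" ;;
    701) echo "account expired" ;;
    773) echo "user must reset password" ;;
    775) echo "account locked out" ;;
    *)   echo "unknown sub-code: $1" ;;
  esac
}

ad_subcode 52e   # the code from the stack trace above
```

Since 52e is "invalid credentials" while the thread says the end-user password is right, one hedged possibility is that the bind that fails is the systemUsername/systemPassword one, or that the bind DN/UPN format the realm constructs is not what AD expects.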
Labels: Apache Zeppelin
10-16-2017
10:33 AM
Problem solved after updating the HDP stack from 2.6.1 to the latest 2.6.2 version. Thank you
10-13-2017
01:35 PM
Hi
I am trying to enable SSL for Ranger using this link. I have Java keystore and truststore files; I use only these two files, and for other services they work properly. I also checked the keystore password with the Java keytool and it is correct, and I tested several passwords for the keystore file, from simple to hard ones, but ranger-admin gives an error in /var/log/ranger/admin/catalina.out during start:
INFO: Initializing ProtocolHandler ["http-bio-6182"]
Oct 13, 2017 1:11:14 PM org.apache.coyote.AbstractProtocol init
SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-bio-6182"]
java.io.IOException: Keystore was tampered with, or password was incorrect
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)
at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
at java.security.KeyStore.load(KeyStore.java:1445)
The configuration was done from Ambari. Then I checked ranger-admin-site.xml and found:
<property>
<name>ranger.service.https.attrib.keystore.pass</name>
<value>_</value>
</property>
Here I can't see any password; there is only a "_" symbol (I set the actual password from Ambari, and I also tried manually editing this XML file, but after a restart the Ranger service resets it and "_" is back anyway). These are the permissions of the files (I tried different permissions too):
-rw------- 1 ranger ranger 1586 Oct 11 14:29 truststore.jks
-rw-r----- 1 ranger ranger 2872 Oct 12 14:03 keystore.jks
Any idea? Thank you
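Two hedged things to check here. First, the "_" placeholder is commonly what Ranger leaves in ranger-admin-site.xml when the real password lives in its credential store (the path in ranger.credential.provider.path), so the "_" alone is not necessarily the bug. Second, the keystore password can be verified out-of-band with keytool, since `keytool -list` exits non-zero on a wrong password. A sketch that only builds and prints the verification command (path and password below are hypothetical placeholders, not values from this thread):

```shell
# Build the out-of-band keystore password check; run it on the Ranger host.
KEYSTORE=/etc/ranger/admin/conf/keystore.jks   # hypothetical path
STOREPASS='YourActualPassword'                 # hypothetical password
verify_cmd="keytool -list -keystore $KEYSTORE -storepass $STOREPASS"
echo "$verify_cmd"
```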
Labels: Apache Ranger
09-23-2017
08:19 AM
1 Kudo
Placing /** = authc at the end of the [urls] section makes sense. I also made small changes to ldapRealm.rolesByGroup (the syntax was incorrect before), and now everything works properly. Putting the URLs in the correct order was the key, thank you very much.
09-22-2017
12:37 PM
P.S. There are also some warnings in /var/log/zeppelin/zeppelin-zeppelin-zeppelin.node.log:
WARN [2017-09-22 16:29:38,301] ({qtp760563749-56} JAXRSUtils.java[findTargetMethod]:499) - No operation matching request path "/api/login" is found, Relative Path: /, HTTP Method: GET, ContentType: */*, Accept: application/json,text/plain,*/*,. Please enable FINE/TRACE log level for more details.
WARN [2017-09-22 16:29:38,302] ({qtp760563749-56} WebApplicationExceptionMapper.java[toResponse]:73) - javax.ws.rs.ClientErrorException
at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:503)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:218)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXR
etc ... -----------------------------
WARN [2017-09-22 16:29:47,865] ({qtp760563749-26} JAXRSUtils.java[findTargetMethod]:499) - No operation matching request path "/api/login;JSESSIONID=a26c09a0-e86d-4e56-97ae-ac3e8d45a057" is found, Relative Path: /, HTTP Method: GET, ContentType: */*, Accept: application/json,text/plain,*/*,. Please enable FINE/TRACE log level for more details.
WARN [2017-09-22 16:29:47,866] ({qtp760563749-26} WebApplicationExceptionMapper.java[toResponse]:73) - javax.ws.rs.ClientErrorException
at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:503)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:218)
etc... -----------------------------
The warnings occur when a user logs in to the Zeppelin UI. Maybe something is wrong with the path that starts with "api"? Where are the path configs for Zeppelin?
09-22-2017
12:28 PM
Thank you for the reply. OK, here is my new config for the URLs:
[urls]
/** = authc
/api/interpreter/** = authc, roles[admin]
/api/configuration/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
#/** = anon
But everyone has access to everything anyway. Do the [urls] and [roles] sections work for LdapRealm?
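Shiro evaluates [urls] entries top-down and applies the first pattern that matches, so with /** = authc listed first, every request matches it before the roles[admin] lines are ever consulted; authentication alone is then enough for everything. A sketch with the specific paths first and the catch-all last:

```ini
[urls]
/api/version = anon
/api/interpreter/** = authc, roles[admin]
/api/configuration/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
# catch-all goes last: first match wins in Shiro
/** = authc
```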
09-21-2017
01:20 PM
Hello guys, I have the Zeppelin component in HDP and have configured Shiro for Active Directory auth (LdapRealm). I have also set a user search filter by group (only specified groups can log in to the Zeppelin web interface) and created 2 roles, admins and users, but I think the roles do not work at all. The roles configuration I have in shiro.ini looks like this:
[roles]
admin = *
users = *:ToDoItemsJdo:*:*,*:ToDoItem:*:*
The goal is that I do not want users to access certain configurations in Zeppelin, for example to restrict access to the interpreter configs. I have a URL config too:
[urls]
/** = authc
**/interpreter/** = authc, roles[admin]
**/configuration/** = authc, roles[admin]
But this does not work either; all logged-in users have access to everything 😕 In the [main] section:
ldapRealm.rolesByGroup = "Admins":admin,"Users":users
The user search by group works; only members of these 2 groups can log in ("Admins" and "Users" in Active Directory). Any ideas?
P.S. Here are the version numbers: Installed Packages Name : zeppelin_2_6_1_0_129 Arch : noarch Version : 0.7.0.2.6.1.0
Thank you
Labels: Apache Zeppelin
09-05-2017
02:38 PM
Yes, hive.llap.io.threadpool.size was invalid; I set it to the number of executors as you said, and now everything works fine. Thank you very much
09-04-2017
11:58 AM
Thank you for the answer. As I understand it, I could virtualize most of the dataflow services too: Kafka, Storm, NiFi, and SAM. Is that right?
09-01-2017
12:38 PM
Hello, I have a question: what is the difference between a virtual NameNode and a physical dedicated NameNode server, given that all my worker nodes are physical dedicated servers? Will there be a difference in data-processing performance between virtual and physical master servers, even if the virtual and physical servers have the same storage capacity, CPU count, and RAM? And not only NameNodes; for example, can I have a virtual HBase Master, YARN ResourceManager, etc.? Which services can I virtualize without impacting performance? Thank you
Labels:
Apache Hadoop
06-05-2017
09:33 AM
That's great, I will try using config groups in Ambari. Thank you very much
06-03-2017
04:58 AM
What if I have nodes with different hardware configurations? For example, I have 10 slave nodes: 5 of them have 64 GB RAM and a 24-core CPU, and the rest have 32 GB RAM and a 12-core CPU. Can I still have an optimal configuration for YARN? For example, can I set "Memory allocated for all YARN containers on a node" to more than 32 GB, or do I have to remove all slave nodes with the lower hardware configuration and use only the servers that have 64 GB RAM? Is it a must to have slave nodes with identical hardware configurations? Please link me to some documentation about this question. Thank you for the advice 😉
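The config-group approach suggested in the reply works because each Ambari config group can override yarn-site.xml for its own set of hosts, so the NodeManagers on different hardware advertise different resources. A hedged sketch of the per-group overrides (standard YARN property names; the values are illustrative and should leave headroom for the OS and other daemons):

```xml
<!-- Config group A: 64 GB / 24-core nodes -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>57344</value> <!-- ~56 GB for containers, rest for OS/daemons -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>22</value>
</property>

<!-- Config group B: 32 GB / 12-core nodes -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>26624</value> <!-- ~26 GB for containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>10</value>
</property>
```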
Labels:
Apache YARN
05-02-2017
07:35 AM
I've shared the HiveServer2 Interactive logs already. If you know where more logs are located, please let me know; I can't find more logs than those provided above. Thank you. locate interactive
/usr/libexec/git-core/git-add--interactive
/usr/libexec/git-core/git-rebase--interactive
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_interactive_thrift_port.py
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_interactive_thrift_port.pyc
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_interactive_thrift_port.pyo
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_interactive.py
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_interactive.pyc
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_interactive.pyo
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.pyc
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.pyo
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service_interactive.py
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service_interactive.pyc
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service_interactive.pyo
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/setup_ranger_hive_interactive.py
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/setup_ranger_hive_interactive.pyc
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/setup_ranger_hive_interactive.pyo
/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/HIVE/configuration/hive-interactive-env.xml
/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/HIVE/configuration/hive-interactive-site.xml
/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/HIVE/configuration/hiveserver2-interactive-site.xml
/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/HIVE/configuration/tez-interactive-site.xml
/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/HIVE/configuration/hive-interactive-env.xml
/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/HIVE/configuration/hive-interactive-site.xml
/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/HIVE/configuration/tez-interactive-site.xml
/var/lib/ambari-agent/tmp/start_hiveserver2_interactive_script
05-01-2017
09:18 AM
and this is the full log of LLAP: Log Type: llap-daemon-hive-somehost5.somedomain.log
Log Upload Time: Mon May 01 13:04:54 +0400 2017
Log Length: 16486
2017-05-01T13:04:21,632 INFO [main ()] org.apache.hadoop.hive.conf.HiveConf: Found configuration file file:/DATA/hadoop/yarn/local/usercache/hive/appcache/application_1493310509760_0032/container_e23_1493310509760_0032_01_000007/app/install/conf/hive-site.xml
2017-05-01T13:04:22,044 INFO [main ()] org.apache.hadoop.hive.llap.LlapUtil: Using local dirs from environment: /DATA/hadoop/yarn/local/usercache/hive/appcache/application_1493310509760_0032
2017-05-01T13:04:22,200 WARN [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: LLAP daemon logging initialized from file:/DATA/hadoop/yarn/local/usercache/hive/appcache/application_1493310509760_0032/container_e23_1493310509760_0032_01_000007/app/install/conf/llap-daemon-log4j2.properties in 153 ms. Async: true
2017-05-01T13:04:22,202 WARN [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon:
$$\ $$\ $$$$$$\ $$$$$$$\
$$ | $$ | $$ __$$\ $$ __$$\
$$ | $$ | $$ / $$ |$$ | $$ |
$$ | $$ | $$$$$$$$ |$$$$$$$ |
$$ | $$ | $$ __$$ |$$ ____/
$$ | $$ | $$ | $$ |$$ |
$$$$$$$$\ $$$$$$$$\ $$ | $$ |$$ |
\________|\________|\__| \__|\__|
2017-05-01T13:04:22,203 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Zookeeper Quorum: somehost1.somedomain:2181,somehost2.somedomain:2181,somehost3.somedomain:2181
2017-05-01T13:04:22,462 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Starting daemon as user: hive
2017-05-01T13:04:22,475 WARN [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Attempting to start LlapDaemonConf with the following configuration: maxJvmMemory=839909370 (801.00MB), requestedExecutorMemory=838860800 (800.00MB), llapIoCacheSize=134217728 (128.00MB), xmxHeadRoomMemory=41995468 (40.05MB), adjustedExecutorMemory=796865332 (759.95MB), numExecutors=1, llapIoEnabled=true, llapIoCacheIsDirect=true, rpcListenerPort=0, mngListenerPort=15004, webPort=15002, outputFormatSvcPort=15003, workDirs=[/DATA/hadoop/yarn/local/usercache/hive/appcache/application_1493310509760_0032], shufflePort=15551, waitQueueSize= 10, enablePreemption= true
2017-05-01T13:04:22,546 INFO [main ()] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2-llapdaemon.properties
2017-05-01T13:04:22,800 INFO [main ()] org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2017-05-01T13:04:22,805 INFO [main ()] org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink: Identified hostname = somehost5.somedomain, serviceName = llapdaemon
2017-05-01T13:04:22,870 INFO [main ()] org.apache.hadoop.metrics2.sink.timeline.availability.MetricSinkWriteShardHostnameHashingStrategy: Calculated collector shard somehost3.somedomain based on hostname: somehost5.somedomain
2017-05-01T13:04:22,871 INFO [main ()] org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink: Collector Uri: http://somehost3.somedomain:6188/ws/v1/timeline/metrics
2017-05-01T13:04:22,871 INFO [main ()] org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink: Container Metrics Uri: http://somehost3.somedomain:6188/ws/v1/timeline/containermetrics
2017-05-01T13:04:22,884 INFO [main ()] org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink timeline started
2017-05-01T13:04:22,903 INFO [main ()] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-05-01T13:04:22,904 INFO [main ()] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: LlapDaemon metrics system started
2017-05-01T13:04:22,909 INFO [org.apache.hadoop.util.JvmPauseMonitor$Monitor@324a0017 ()] org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2017-05-01T13:04:22,940 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Started LlapMetricsSystem with displayName: LlapDaemonExecutorMetrics-somehost5.somedomain sessionId: 7b7897de-b8a8-48d5-b4a8-1143f5ef91ea
2017-05-01T13:04:22,954 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.AMReporter: Setting up AMReporter with heartbeatInterval(ms)=10000, retryTime(ms)=10000, retrySleep(ms)=2000
2017-05-01T13:04:22,966 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapProtocolServerImpl: Creating: LlapProtocolServerImpl with port configured to: 0
2017-05-01T13:04:23,131 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.QueryTracker: QueryTracker setup with numCleanerThreads=1, defaultCleanupDelay(s)=300, routeBasedLogging=true
2017-05-01T13:04:23,135 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService: TaskExecutorService is being setup with parameters: numExecutors=1, waitQueueSize=10, waitQueueComparatorClassName=org.apache.hadoop.hive.llap.daemon.impl.comparator.ShortestJobFirstComparator, enablePreemption=true
2017-05-01T13:04:23,159 INFO [main ()] org.apache.tez.hadoop.shim.HadoopShimsLoader: Trying to locate HadoopShimProvider for hadoopVersion=2.7.3.2.6.0.3-8, majorVersion=2, minorVersion=7
2017-05-01T13:04:23,162 INFO [main ()] org.apache.tez.hadoop.shim.HadoopShimsLoader: Picked HadoopShim org.apache.tez.hadoop.shim.HadoopShimsomehost, providerName=org.apache.tez.hadoop.shim.HadoopShimsomehostProvider, overrideProviderViaConfig=null, hadoopVersion=2.7.3.2.6.0.3-8, majorVersion=2, minorVersion=7
2017-05-01T13:04:23,162 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl: ContainerRunnerImpl config: memoryPerExecutorDerviced=796865344
2017-05-01T13:04:23,165 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Adding shutdown hook for LlapDaemon
2017-05-01T13:04:23,390 WARN [main ()] org.apache.hadoop.hive.conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
2017-05-01T13:04:23,514 INFO [main ()] LlapIoImpl: Initializing LLAP IO in cache mode
2017-05-01T13:04:23,521 INFO [main ()] org.apache.hadoop.hive.llap.metrics.LlapDaemonIOMetrics: Created interval PercentileDecodingTime_30s
2017-05-01T13:04:23,523 INFO [main ()] LlapIoImpl: Started llap daemon metrics with displayName: LlapDaemonIOMetrics-somehost5.somedomain sessionId: 7b7897de-b8a8-48d5-b4a8-1143f5ef91ea
2017-05-01T13:04:23,525 INFO [main ()] LlapIoImpl: LRFU cache policy with min buffer size 262144 and lambda 0.009999999776482582 (heap size 512)
2017-05-01T13:04:23,527 INFO [main ()] LlapIoImpl: Memory manager initialized with max size 134217728 and ability to evict blocks
2017-05-01T13:04:23,527 INFO [main ()] LlapIoImpl: LRFU cache policy with min buffer size 262144 and lambda 0.009999999776482582 (heap size 321)
2017-05-01T13:04:23,527 INFO [main ()] LlapIoImpl: Memory manager initialized with max size 83990936 and ability to evict blocks
2017-05-01T13:04:23,531 INFO [main ()] LlapIoImpl: Buddy allocator with direct buffers; allocation sizes 262144 - 16777216, arena size 16777216, total size 134217728
2017-05-01T13:04:23,551 INFO [main ()] LlapIoImpl: Low level cache; cleanup interval 600 sec
2017-05-01T13:04:23,557 INFO [main ()] org.apache.hadoop.service.AbstractService: Service LlapDaemon failed in state INITED; cause: java.lang.RuntimeException: Failed to create org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl
java.lang.RuntimeException: Failed to create org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:61) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.initializeLlapIo(LlapProxy.java:50) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.serviceInit(LlapDaemon.java:393) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) [hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:529) [hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:59) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
... 4 more
Caused by: java.lang.IllegalArgumentException
at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1307) ~[?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1230) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool.<init>(StatsRecordingThreadPool.java:67) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool.<init>(StatsRecordingThreadPool.java:59) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl.<init>(LlapIoImpl.java:181) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:59) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
... 4 more
2017-05-01T13:04:23,562 WARN [main ()] org.apache.hadoop.hive.llap.registry.impl.LlapRegistryService: Stopping non-existent registry service
2017-05-01T13:04:23,563 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.AMReporter: Stopped service: org.apache.hadoop.hive.llap.daemon.impl.AMReporter
2017-05-01T13:04:23,563 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.QueryTracker: QueryTracker stopped
2017-05-01T13:04:23,563 INFO [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService: Wait-Queue-Scheduler-%d thread has been interrupted after shutdown.
2017-05-01T13:04:23,565 INFO [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService: Wait queue scheduler worker exited with success!
2017-05-01T13:04:23,569 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: LlapDaemon shutdown invoked
2017-05-01T13:04:23,569 INFO [main ()] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping LlapDaemon metrics system...
2017-05-01T13:04:23,570 INFO [timeline ()] org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: timeline thread interrupted.
2017-05-01T13:04:23,572 INFO [main ()] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: LlapDaemon metrics system stopped.
2017-05-01T13:04:23,572 INFO [main ()] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: LlapDaemon metrics system shutdown complete.
2017-05-01T13:04:23,577 WARN [main ()] org.apache.hadoop.service.AbstractService: When stopping the service LlapDaemon : java.lang.IllegalStateException: LlapOutputFormatService must be started before invoking get
java.lang.IllegalStateException: LlapOutputFormatService must be started before invoking get
at com.google.common.base.Preconditions.checkState(Preconditions.java:149) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.LlapOutputFormatService.get(LlapOutputFormatService.java:97) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.serviceStop(LlapDaemon.java:437) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) ~[hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52) ~[hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80) [hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:171) [hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:529) [hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
2017-05-01T13:04:23,578 WARN [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Failed to start LLAP Daemon with exception
java.lang.RuntimeException: Failed to create org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:61) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.initializeLlapIo(LlapProxy.java:50) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.serviceInit(LlapDaemon.java:393) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) ~[hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:529) [hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:59) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
... 4 more
Caused by: java.lang.IllegalArgumentException
at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1307) ~[?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1230) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool.<init>(StatsRecordingThreadPool.java:67) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool.<init>(StatsRecordingThreadPool.java:59) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl.<init>(LlapIoImpl.java:181) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:59) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
... 4 more
2017-05-01T13:04:23,578 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: LlapDaemon shutdown invoked
2017-05-01T13:04:23,579 WARN [main ()] org.apache.hadoop.metrics2.util.MBeans: Error unregistering Hadoop:service=LlapDaemon,name=LlapDaemonInfo
javax.management.InstanceNotFoundException: Hadoop:service=LlapDaemon,name=LlapDaemonInfo
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095) ~[?:1.8.0_112]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427) ~[?:1.8.0_112]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415) ~[?:1.8.0_112]
at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546) ~[?:1.8.0_112]
at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:110) ~[hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.shutdown(LlapDaemon.java:445) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:537) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
05-01-2017
09:13 AM
Log Type: llap-daemon-hive-somehostname.log
Log Upload Time: Mon May 01 13:04:54 +0400 2017
Log Length: 16486
Showing 4096 bytes of 16486 total.
.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:529) [hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
2017-05-01T13:04:23,578 WARN [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Failed to start LLAP Daemon with exception
java.lang.RuntimeException: Failed to create org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:61) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.initializeLlapIo(LlapProxy.java:50) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.serviceInit(LlapDaemon.java:393) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) ~[hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:529) [hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:59) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
... 4 more
Caused by: java.lang.IllegalArgumentException
at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1307) ~[?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1230) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool.<init>(StatsRecordingThreadPool.java:67) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool.<init>(StatsRecordingThreadPool.java:59) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl.<init>(LlapIoImpl.java:181) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.hive.llap.io.api.LlapProxy.createInstance(LlapProxy.java:59) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
... 4 more
2017-05-01T13:04:23,578 INFO [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: LlapDaemon shutdown invoked
2017-05-01T13:04:23,579 WARN [main ()] org.apache.hadoop.metrics2.util.MBeans: Error unregistering Hadoop:service=LlapDaemon,name=LlapDaemonInfo
javax.management.InstanceNotFoundException: Hadoop:service=LlapDaemon,name=LlapDaemonInfo
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095) ~[?:1.8.0_112]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427) ~[?:1.8.0_112]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415) ~[?:1.8.0_112]
at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546) ~[?:1.8.0_112]
at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:110) ~[hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.shutdown(LlapDaemon.java:445) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:537) ~[hive-llap-server-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
04-28-2017
12:29 PM
Hello, I tried to activate LLAP and add the HiveServer2 Interactive service, but the service fails to start. I could not find any solution; any ideas? Here is part of the output in Ambari when the process is starting:
2017-04-28 15:45:34,633 [main] INFO util.ExitUtil - Exiting with status 0
2017-04-28 15:45:35,491 - Submitted LLAP app name : llap0
2017-04-28 15:45:35,492 -
2017-04-28 15:45:35,492 - LLAP status command : /usr/hdp/current/hive-server2-hive2/bin/hive --service llapstatus -w -r 0.8 -i 2 -t 400
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.0.3-8/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.0.3-8/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
LLAPSTATUS WatchMode with timeout=400 s
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1493310509760_0013. Started 0/6 instances
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1493310509760_0013. Started 0/6 instances
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1493310509760_0013. Started 0/6 instances
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1493310509760_0013. Started 0/6 instances
--------------------------------------------------------------------------------
WARN cli.LlapStatusServiceDriver: Application stopped while launching. COMPLETE state reached while waiting for RUNNING state. Failing fast..
LLAP Application already complete. ApplicationId=application_1493310509760_0013
FAILED container: container_e23_1493310509760_0013_01_000003, Logs at: http://somehost1:19888/jobhistory/logs/somehostN1:45454/container_e23_1493310509760_0013_01_000003/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000002, Logs at: http://somehost1:19888/jobhistory/logs/somehostN2:45454/container_e23_1493310509760_0013_01_000002/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000005, Logs at: http://somehost1:19888/jobhistory/logs/somehostN3:45454/container_e23_1493310509760_0013_01_000005/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000004, Logs at: http://somehost1:19888/jobhistory/logs/somehostN4:45454/container_e23_1493310509760_0013_01_000004/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000007, Logs at: http://somehost1:19888/jobhistory/logs/somehostN5:45454/container_e23_1493310509760_0013_01_000007/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000006, Logs at: http://somehost1:19888/jobhistory/logs/somehostN6:45454/container_e23_1493310509760_0013_01_000006/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000009, Logs at: http://somehost1:19888/jobhistory/logs/---z---:45454/container_e23_1493310509760_0013_01_000009/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000008, Logs at: http://somehost1:19888/jobhistory/logs/----z---:45454/container_e23_1493310509760_0013_01_000008/ctx/hive
FAILED container: container_e23_1493310509760_0013_01_000010, Logs at: http://somehost1:19888/jobhistory/logs/-----z----:45454/container_e23_1493310509760_0013_01_000010/ctx/hive
Unstable Application Instance : - failed with component LLAP failed 'recently' 6 times (6 in startup); threshold is 5 - last failure: Failure container_e23_1493310509760_0013_01_000007 on host somehostN12 (0): http://somehost1:19888/jobhistory/logs/somehostN12:45454/container_e23_1493310509760_0013_01_000007/ctx/hive
--------------------------------------------------------------------------------
P.S. These are not the real hostnames; I've deleted or changed them.
- Tags:
- Data Processing
- hiveserver2
- llap
- start
Labels:
Apache Hive
02-25-2016
12:31 AM
Hi all, I have a question: can I install Phoenix on CDH 5.5.x without parcels? I have never used parcels before; I installed my CDH cluster using the apt package manager on Ubuntu servers.
09-26-2013
12:29 AM
@Vamsee wrote: Hi @shota, we will help you figure out the problem! 🙂 FYI, if you are using CM, you needn't install any Solr packages manually; CM does the complete setup for you. From what I understand from your previous comment, you installed some Solr packages manually. Just to make sure we are on the right setup, executing "$ ps -elf | grep solr" should list only one solr process running on your system. Once you have confirmed this, you can try refreshing the entire service (assuming you have no stored data in Solr): stop the Solr service from CM, restart the ZooKeeper service from CM, issue the command "$ solrctl init --force", then start the Solr service back up from CM. Now you can try creating some collections. -Vamsee It works, thank you very much