Member since: 02-08-2016
Posts: 793
Kudos Received: 669
Solutions: 85
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3065 | 06-30-2017 05:30 PM
 | 3985 | 06-30-2017 02:57 PM
 | 3307 | 05-30-2017 07:00 AM
 | 3884 | 01-20-2017 10:18 AM
 | 8399 | 01-11-2017 02:11 PM
05-23-2016
10:04 AM
@Andrey Nikitin Please try restarting the Ambari server and the Ambari agent (on the node where HiveServer2 runs). Also, please let us know the versions of Ambari and HDP you are using.
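For reference, the restarts are the standard Ambari commands, run as root on the respective hosts:
===========
# On the Ambari server host:
ambari-server restart
# On the node where HiveServer2 runs:
ambari-agent restart
===========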
05-22-2016
02:17 PM
@Andrey Nikitin Enable Hive debug logging and check whether you can see the logs. Also, once Hive is up, try the commands below:
# beeline
> !connect jdbc:hive2://HOSTNAME:10000/default
Else try:
> !connect jdbc:hive2://<host>:<port>/<db>;transportMode=http;httpPath=<http_endpoint>
where <http_endpoint> is the corresponding HTTP endpoint configured in hive-site.xml (the default value is cliservice, and the default port for HTTP transport mode is 10001). If you have a Kerberized cluster, then use:
> !connect jdbc:hive2://<host>:10000/;principal=<Server_Principal_of_HiveServer2>
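As a quick alternative (just a sketch; HOSTNAME and the username hive are placeholders for your environment), the same connection string can be passed to beeline in one line:
===========
# Connect directly instead of typing !connect at the beeline prompt.
beeline -u "jdbc:hive2://HOSTNAME:10000/default" -n hive
===========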
05-21-2016
02:20 PM
@Sridhar Bandaru In addition to what @Jitendra Yadav mentioned, if you still wish to get all services up after a reboot, below are two options:
1. You can use the Ambari REST API to start the services. Just club all the services, in the order they need to be started, into a script and invoke that script from /etc/rc.local. Refer to this link: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=41812517 Or you can create an init script for the file so that it behaves like an existing service (such as sshd), which you can start/stop with a command and enable after reboot using chkconfig (see the sketch below this post). Please also refer to this: https://community.hortonworks.com/questions/825/how-to-write-cluster-startup-shutdown-order-and-sc.html
Sample script:
===========
export PASSWORD=*****
export CLUSTER_NAME="{Your Cluster Name}"
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start HDFS via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://`hostname -f`:8080/api/v1/clusters/$CLUSTER_NAME/services/HDFS
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start YARN via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://`hostname -f`:8080/api/v1/clusters/$CLUSTER_NAME/services/YARN
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start MAPREDUCE2 via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://`hostname -f`:8080/api/v1/clusters/$CLUSTER_NAME/services/MAPREDUCE2
===========
2. You can use an Ambari Blueprint to start and stop the services.
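For the init-script option, a minimal sketch could look like the following, assuming the curl calls above are saved in /usr/local/bin/start-hdp-services.sh (a hypothetical path) on a chkconfig-based system such as CentOS 6:
===========
#!/bin/bash
# /etc/init.d/hdp-services -- hypothetical init script name
# chkconfig: 2345 99 01
# description: Starts HDP services via the Ambari REST API at boot.
case "$1" in
  start)
    # Runs the curl-based start script shown above (assumed path).
    /usr/local/bin/start-hdp-services.sh
    ;;
  stop)
    # A matching stop script would PUT "INSTALLED" instead of "STARTED".
    /usr/local/bin/stop-hdp-services.sh
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
===========
Then register it with: chkconfig --add hdp-services && chkconfig hdp-services on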
05-20-2016
03:50 PM
Hi @vinay kumar From the log, this looks like an issue on the LDAP server side. Can you check whether the same users are able to log in on any LDAP client node? The issue might also be related to network connectivity with the LDAP server.
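To rule out the server or network side, a quick check from any LDAP client node could look like this (the hostname, bind DN, and search base are only illustrative; substitute your own values):
===========
# Test connectivity and a simple bind against the LDAP server.
ldapsearch -x -H ldap://ldap.example.com:389 \
  -D "uid=vinay,ou=users,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(uid=vinay)"
===========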
05-20-2016
01:35 PM
Not sure if you are looking for something like referral:
<property>
  <name>ranger.ldap.referral</name>
  <value>ignore</value>
  <description>Set to follow if multiple LDAP servers are configured to return continuation references for results. Set to ignore (default) if no referrals should be followed.</description>
</property>
05-20-2016
01:08 PM
@Jay Kumar
I don't think this is supported as of now. For Hue, it is supported from HDP 2.2.0.0.
05-20-2016
09:43 AM
1 Kudo
@Roopa Raphael Once you log in with root:hadoop, it will ask you to change your password. Enter hadoop when asked for the "current UNIX password", and then enter a new password when prompted (say, P@ssw0rd). Once done, it will bring you back to the Sandbox login screen; now try again with your new password.
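The console exchange looks roughly like this (P@ssw0rd is just the example password from above; the exact prompt wording may vary by OS version):
===========
sandbox login: root
Password: hadoop
You are required to change your password immediately (root enforced)
Changing password for root.
(current) UNIX password: hadoop
New password: P@ssw0rd
Retype new password: P@ssw0rd
===========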
05-20-2016
08:54 AM
1 Kudo
@Roopa Raphael Please try username: root, password: hadoop. Once logged in, you can change the password using the passwd command. Let me know if it does not work.
05-20-2016
07:08 AM
1 Kudo
@Vikas Gadade Can you please check this and let me know if it helps: https://community.hortonworks.com/content/kbentry/30653/openldap-setup.html
05-19-2016
10:40 AM
1 Kudo
Problem Statement: The Ranger HDFS repository test connection fails while the policies work fine. After enabling debug mode for the Ranger admin service, the following error was found in the log: (RangerAuthenticationProvider.java:335) - Unix Authentication Failed:
org.springframework.security.authentication.AuthenticationServiceException: FAILED: unable to authenticate to AuthenticationService: node.example.com:5151
at org.springframework.security.authentication.jaas.DefaultLoginExceptionResolver.resolveException(DefaultLoginExceptionResolver.java:33)
at org.springframework.security.authentication.jaas.AbstractJaasAuthenticationProvider.authenticate(AbstractJaasAuthenticationProvider.java:181)
at org.apache.ranger.security.handler.RangerAuthenticationProvider.getUnixAuthentication(RangerAuthenticationProvider.java:327)
at org.apache.ranger.security.handler.RangerAuthenticationProvider.authenticate(RangerAuthenticationProvider.java:114)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:174)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:168)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:183)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Resolution: On checking, the user principal configured was different from the one created in the KDC. Corrected the user principal for the HDFS repository in Ranger, as well as in the HDFS configs for the Ranger plugin properties.
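One way to confirm such a mismatch is to compare the principal stored in the keytab with the one configured in the Ranger repository (the keytab path below is the usual HDP default, and the principal name is illustrative; verify both on your cluster):
===========
# List the principals stored in the HDFS headless keytab.
klist -kt /etc/security/keytabs/hdfs.headless.keytab
# Optionally verify that the principal can actually authenticate to the KDC.
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@EXAMPLE.COM
===========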