Member since: 01-24-2017
Posts: 68
Kudos Received: 7
Solutions: 0
06-04-2018
06:54 PM
@Felix Albani I have the YARN UI in my topology, but when I click on the logs link for an application I get redirected to /gateway/nodemanagerui, which gives me a 404 Not Found. Looking at the folder /usr/hdp/current/knox-server/data/services, I can see there is a service called nodemanagerui; its service.xml looks like the below.

<service role="NODEUI" name="nodeui" version="2.7.1">
  <routes>
    <route path="/node/">
      <rewrite apply="NODEUI/node/root" to="response.body"/>
    </route>
    <route path="/node/**">
      <rewrite apply="NODEUI/node/path" to="response.body"/>
    </route>
    <route path="/node/**?**">
      <rewrite apply="NODEUI/node/query" to="response.body"/>
    </route>
    <route path="/node/conf">
      <rewrite apply="NODEUI/configuration" to="response.body"/>
    </route>
  </routes>
</service>

What is the topology definition for it?
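For reference, this is the kind of entry I have been guessing at in my default topology (the role matches the service.xml above, but the URL host and port 8042 are assumptions on my part), and it still gives the 404:

<service>
  <role>NODEUI</role>
  <url>http://nm-host.example.com:8042</url>
</service>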
06-04-2018
03:46 PM
Thank you Felix, would you have an example of how to add it to the Knox topology? I tried everything and nothing worked. The issue is that /gateway/nodemanagerui looks weird to me, since all the other Knox URLs are /gateway/default/<service>. I would appreciate some direction on how to try it out.
06-04-2018
02:53 PM
Trying to view YARN application logs using Knox on a kerberized cluster. Getting a 404 when clicking on logs; it is trying to go to /gateway/nodemanagerui. Is that a separate topology?
05-29-2018
05:44 PM
I am trying to configure access to the YARN logs through Knox, and I saw that with the latest version of Knox I need to add the NODEUI service to my topology to make that work. I am trying the below, but it does not seem to be working.

<service>
  <role>NODEUI</role>
  <url>http://{{nm_host}}:{{nodeui_port}}</url>
</service>

This will also link to a single NodeManager, which is not very practical. Any recommendations to get this working, please?
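The only idea I have for covering more than one NodeManager is Knox's HaProvider; I have only seen it documented for services like WEBHDFS and HIVE, so whether it accepts NODEUI is an assumption on my part. A sketch of what I mean:

<provider>
  <role>ha</role>
  <name>HaProvider</name>
  <enabled>true</enabled>
  <param>
    <!-- assumes NODEUI is HA-enabled; failover settings copied from the usual WEBHDFS examples -->
    <name>NODEUI</name>
    <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
  </param>
</provider>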
05-29-2018
02:42 PM
1 Kudo
I am trying to view YARN application logs using Knox on a kerberized cluster. Every time I click on the logs link it takes me to an invalid URL. I looked at the rewrite rules for yarnui and I see the below section:

<rule dir="OUT" name="YARNUI/yarn/outbound/node/containerlogs">
  <match pattern="{scheme}://{host}:{port}/node/containerlogs/{**}"/>
  <rewrite template="{gateway.scheme}://{gateway.host}:{gateway.port}/gateway/nodemanagerui/node/containerlogs/{**}?{scheme}?host={$hostmap(host)}?{port}"/>
</rule>

From the look of it, it is trying to redirect to another topology, not another service in the same topology. I am also unable to find any service definition for nodemanagerui, which confirms my suspicion about the additional topology. Does anyone have an idea how to resolve this issue, or how to define this topology, please?
02-08-2018
06:24 PM
I am building an mpack for a service I would like to install in Ambari. The mpack installed properly, but the alerts are not accurate. I modified alerts.json several times to fix the issue, but Ambari keeps using the original version of alerts.json. I restarted the Ambari server several times, but nothing worked. How would I force Ambari to use the newly modified alerts.json? Best, Theyaa.
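In case it helps anyone with the same problem, the workaround I am considering (untested) is deleting the stale alert definition through the Ambari REST API so that it gets re-created from alerts.json; the cluster name MY_CLUSTER and definition id 42 below are placeholders:

# list the alert definitions to find the id of the stale one
curl -u admin:admin -H "X-Requested-By: ambari" http://localhost:8080/api/v1/clusters/MY_CLUSTER/alert_definitions

# delete the stale definition, then restart Ambari
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://localhost:8080/api/v1/clusters/MY_CLUSTER/alert_definitions/42
ambari-server restart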
01-18-2018
04:10 PM
I have configured Knox to work with two HiveServer2 instances, and during my testing I connected to Hive through Knox using beeline. Then I stopped the current HiveServer2 and ran a query in beeline. I can see that Knox did fail over to the second one, but my query threw a 500 error. Looking into the HiveServer2 log I saw the following error:

2018-01-17 14:14:38,611 WARN servlet.ServletHandler (ServletHandler.java:doHandle(546)) - /cliservice
java.lang.IllegalArgumentException: Invalid sign, original = Qp+dxp0/PGiidNY7TWAHYo4nlJs= current = XaZLVoAJAF+88lAl9Z/9VYXWeg0=
at org.apache.hive.service.CookieSigner.verifyAndExtract(CookieSigner.java:85)
at org.apache.hive.service.cli.thrift.ThriftHttpServlet.getClientNameFromCookie(ThriftHttpServlet.java:253)
at org.apache.hive.service.cli.thrift.ThriftHttpServlet.validateCookie(ThriftHttpServlet.java:309)
at org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:142)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:479)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
at org.eclipse.jetty.server.Server.handle(Server.java:349)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:449)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:925)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:952)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:76)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:609)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:45)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The following article talks about the same exact issue: https://community.hortonworks.com/content/supportkb/150288/error-opening-sessionorgapachethrifttransportttran.html But that solution is not optimal, as it defeats the purpose of having an HA configuration if I have to bounce the Knox server every time a HiveServer2 goes down. Also, the Jira ticket mentioned in it does not exist. Does anyone have a better solution or suggestion?
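The only alternative I have come across so far (I have not verified it on my cluster) is disabling HiveServer2's HTTP cookie authentication entirely, so the cookie signed by the first instance is never replayed against the second after failover. In hive-site.xml:

<property>
  <!-- untested workaround: stop issuing/validating signed auth cookies in http transport mode -->
  <name>hive.server2.thrift.http.cookie.auth.enabled</name>
  <value>false</value>
</property>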
01-09-2018
07:12 PM
@Robert Levas Thank you for the reply. I was able to accomplish the functionality using 4 steps.

1 - REST call to create the group:
curl -ivk -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Groups/group_name":"1234_group"}' http://localhost:8080/api/v1/groups

2 - SQL command to modify the group type:
su - ambari -c "export PGPASSWORD=bigdata;psql -c \"update ambari.groups set group_type='PAM' where group_name='1234_group';\""

3 - REST call to assign the 'Service Operator' role to the group:
curl -ivk -u admin:admin -H "X-Requested-By: ambari" -X POST -d '[{"PrivilegeInfo":{"permission_name":"SERVICE.OPERATOR","principal_name":"1234_group","principal_type":"GROUP"}}]' http://localhost:8080/api/v1/clusters/1234_cluster/privileges

4 - Restart the Ambari server:
ambari-server restart
01-09-2018
04:08 PM
I am trying to configure Ambari to use PAM and would like to assign a specific group to a role during the configuration. I am performing the configuration inline and would like the option of adding a specific group to the Service Operator role while performing inline setup. My PAM setup command looks like this:

ambari-server setup-pam --pam-config-file /etc/pam.d/login --pam-auto-create-groups true

I have seen in some distributions that I can specify a custom group mapping during the PAM configuration, as in this link: https://www.ibm.com/support/knowledgecenter/en/SSPT3X_4.2.5/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_pam_ambari.html With HDP 2.6.2 and Ambari 2.5.2, however, I do not get any option to update the custom group-to-role mapping. I added the property pam.group.cluster.admin=abcgroup to ambari.properties, but that did not work either; it did not show the group mapping in Ambari after restarting the Ambari server. Does anyone know what I am doing wrong and what I should be doing instead?
12-15-2017
06:01 PM
I tried your suggestion, and it seems to work when the transport mode is binary, but it does not when the transport mode is http. Is this a bug that needs to be reported, or is there a different config for the http transport mode?
12-14-2017
04:31 PM
@Aditya Sirna I tried your suggestion, but it does not seem to work at all. HiveServer2 is still listening on all interfaces, according to the below. I want it to listen on localhost/127.0.0.1 instead of *.

LISTEN 0 50 *:10001 *:* users:(("java",2232873,540))
12-14-2017
12:17 AM
Is there a way to make HiveServer2 listen on 127.0.0.1 or localhost? I am installing a separate HiveServer2 instance to be used by an application running on an edge node, and I want to restrict access to that HiveServer2 instance to only queries coming from that application.
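For reference, the property I assumed would control this is the Thrift bind host in hive-site.xml (per my follow-ups above, it seems to take effect in binary transport mode but not in http mode):

<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>127.0.0.1</value>
</property>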
12-05-2017
01:50 PM
Is there a way to ask Oozie to disable some actions and not make them available? I am looking to disable the shell action and a couple of others, please.
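The only lead I have found so far (untested, and the class list below is only an illustration of the idea) is that Oozie registers extension actions via oozie.service.ActionService.executor.ext.classes in oozie-site.xml, so leaving org.apache.oozie.action.hadoop.ShellActionExecutor out of that list might stop shell actions from being accepted:

<property>
  <name>oozie.service.ActionService.executor.ext.classes</name>
  <!-- hypothetical list: keep the executors you want and omit ShellActionExecutor -->
  <value>org.apache.oozie.action.email.EmailActionExecutor,org.apache.oozie.action.hadoop.HiveActionExecutor,org.apache.oozie.action.hadoop.SqoopActionExecutor,org.apache.oozie.action.hadoop.DistcpActionExecutor</value>
</property>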
11-22-2017
06:40 PM
@Jay Kumar SenSharma I have both hostnames in the /etc/hosts file, but when I go to the LDAP server I see a principal for only one of them. I have dual homing on the cluster; the public name contains "-a" as a suffix to the shortname. So, for example, if the hostname is host.test.com, the public one would be host-a.test.com. Both have different IP addresses. Any idea how I can ask Kerberos to create both principals, please?
11-21-2017
06:30 PM
Thank you @Jay Kumar SenSharma for the quick response. I also noticed that the YARN resource managers are not able to communicate on port 8088 due to the dual networks on the cluster. Kerberos seems to have added the internal network hostname and did not add the external one, which YARN is trying to communicate with. How would I add the external hostname/IP to Kerberos as well?
11-21-2017
05:49 PM
I am unable to start the Job History Server after kerberizing the cluster. I keep getting this exception every time I try to start it:

at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1089)
... 6 more
2017-11-21 12:43:06,984 FATAL hs.JobHistoryServer (JobHistoryServer.java:launchJobHistoryServer(224)) - Error starting JobHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: History Server Failed to login
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:128)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:221)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:231)
Caused by: java.io.IOException: Login failure for jhs/HOST@KRB_HOST from keytab /etc/security/keytabs/jhs.service.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1098)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:307)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.doSecureLogin(JobHistoryServer.java:175)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:126)
... 3 more
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1089)
... 6 more
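For anyone hitting the same "Unable to obtain password from user" error, these are the checks I am running on the affected node (standard Kerberos tooling; jhs/HOST@KRB_HOST is the placeholder principal from the log above):

# confirm the keytab actually contains the jhs principal
klist -kt /etc/security/keytabs/jhs.service.keytab

# try a manual login with the same keytab and principal
kinit -kt /etc/security/keytabs/jhs.service.keytab jhs/HOST@KRB_HOST

# confirm the service user can read the keytab file
ls -l /etc/security/keytabs/jhs.service.keytab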
11-17-2017
07:18 PM
I am using a blueprint to create an HDP 2.6.2 cluster with NameNode HA and ResourceManager HA. It installs all the components, but when trying to start them, only HDFS and ZooKeeper start; the others (YARN, Hive, etc.) fail. When trying to list HDFS on the command line:

hdfs dfs -ls /

I get the following error:

Incomplete HDFS URI, no host: hdfs://1234_test_cluster

Anyone know what the issue could be?
11-08-2017
08:14 PM
Can you limit access to WebHDFS/HttpFS to be only through Knox? If that is possible, how is that accomplished?
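The only approach I can come up with myself is at the network level, i.e., firewalling the WebHDFS/HttpFS ports so that only the Knox gateway host can reach them. A sketch under assumed defaults (HttpFS on port 14000; knox-host.example.com is a placeholder):

# allow only the Knox host to reach HttpFS, drop everyone else
iptables -A INPUT -p tcp -s knox-host.example.com --dport 14000 -j ACCEPT
iptables -A INPUT -p tcp --dport 14000 -j DROP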
11-08-2017
07:17 PM
So I was able to get it to work by copying libjpam.so to the WEB-INF/lib folder and restarting HttpFS. The problem now is that it requires me to provide the parameter user.name=<username> as the user performing the action, not the one logging in. Is there a way to eliminate this parameter, or block it, so it is always the logged-in user who performs the actions on HDFS?
11-08-2017
03:25 PM
My cluster is not kerberized, and I do not want to go with WebHDFS due to the impersonation issue. HttpFS is a better option and would prevent a user from impersonating hdfs.
11-08-2017
02:59 PM
I am trying to use HttpFS with PAM authentication. I was able to install it using this post: https://community.hortonworks.com/content/kbentry/804/httpfs-configure-and-run-with-hdp-224x.html Then I tried to configure PAM authentication using JPam. I copied jpam-1.1.jar under /usr/hdp/current/hadoop-httpfs/webapps/webhdfs/WEB-INF/lib and modified /usr/hdp/current/hadoop-httpfs/webapps/webhdfs/META-INF/context.xml to be:

<Context><Realm className="org.apache.catalina.realm.JAASRealm" appName="jpamLogin"/></Context>

Every time I try to access HttpFS and provide the username:password, it gives me this error:

org.apache.catalina.realm.JAASRealm authenticate
SEVERE: Unexpected error
javax.security.auth.login.LoginException: No LoginModules configured for jpamLogin
at javax.security.auth.login.LoginContext.init(LoginContext.java:264)
at javax.security.auth.login.LoginContext.<init>(LoginContext.java:417)
at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:393)
at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:334)
at org.apache.catalina.authenticator.BasicAuthenticator.authenticate(BasicAuthenticator.java:181)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:528)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:610)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:503)
at java.lang.Thread.run(Thread.java:748)

UPDATE 2: I made some progress. I created the file /etc/hadoop-httpfs/tomcat-deployment/conf/jaas.config with the following content:

jpamLogin {
  net.sf.jpam.jaas.JpamLoginModule required serviceName="password-auth";
};

Then I updated httpfs-env.sh with:

export CATALINA_OPTS='-Djava.security.auth.login.config=/etc/hadoop-httpfs/tomcat-deployment/conf/jaas.config -Djava.library.path=/usr/hdp/current/hadoop-httpfs/webapps/webhdfs/WEB-INF/lib/'

I restart HttpFS and get this exception now:
javax.security.auth.login.LoginException: java.lang.UnsatisfiedLinkError: no jpam in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at net.sf.jpam.Pam.<clinit>(Pam.java:51)
at net.sf.jpam.jaas.JpamLoginModule.createPam(JpamLoginModule.java:171)
at net.sf.jpam.jaas.JpamLoginModule.login(JpamLoginModule.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:409)
at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:334)
at org.apache.catalina.authenticator.BasicAuthenticator.authenticate(BasicAuthenticator.java:181)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:528)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:610)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:503)
at java.lang.Thread.run(Thread.java:748)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:856)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:409)
at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:334)
at org.apache.catalina.authenticator.BasicAuthenticator.authenticate(BasicAuthenticator.java:181)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:528)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:610)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:503)
at java.lang.Thread.run(Thread.java:748)
I appreciate your help.
11-03-2017
04:55 PM
Is there a way to limit access to WebHDFS to only users coming from certain hosts? Something similar to hadoop.proxyuser.
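For context, this is the shape of host restriction I mean, which core-site.xml already supports for proxy users (knox is just an example user here, and the host is a placeholder):

<property>
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>gateway1.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.knox.groups</name>
  <value>users</value>
</property>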
09-28-2017
01:23 PM
I would like to use a custom queue, for example queue1, as the default queue whenever a user submits a job to YARN and that user is not mapped to any existing queue. YARN's default behavior is to use the default queue for those sorts of jobs, but I would like to instruct YARN to use another queue as its default. Is there a way to do it, please?
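The closest thing I have found so far (not yet tested for the unmapped-user case) is the Capacity Scheduler queue mappings in capacity-scheduler.xml; since mappings are evaluated in order, a catch-all u:%user entry at the end might act as the fallback. The alice/analytics mapping is a hypothetical example:

<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <!-- specific mappings first, then a catch-all sending everyone else to queue1 -->
  <value>u:alice:analytics,u:%user:queue1</value>
</property>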
08-23-2017
07:11 PM
Is there a way to integrate Solr with LDAP? In other words, is there a way to configure Solr to use LDAP for user authentication? I am using Solr 6.2.
08-11-2017
07:15 PM
I do have HTTPS on the cluster, but the logs URL is still HTTP, and I have to manually edit the URL to be able to view the logs.
08-11-2017
05:27 PM
Is there a way to modify the YARN application container logs URL to use HTTPS instead of HTTP? And also to modify how the URL is constructed, to use a custom location, if that is possible?
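For the HTTPS part, the setting I have been looking at is the YARN HTTP policy in yarn-site.xml, though as my follow-up above notes, having HTTPS enabled on the cluster has not changed the logs link for me so far:

<property>
  <name>yarn.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>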
08-11-2017
04:57 PM
Do you have an example, please? I would appreciate it.
08-03-2017
06:49 PM
I am trying to update the YARN scheduler queues through the REST API, and it does not seem to be working as I expect. I followed the example in this post: https://community.hortonworks.com/questions/33578/api-to-manage-yarn-capacity-queue.html but I keep getting the error 415 Unsupported Media Type. Here is the command I am using:

curl -kv -u admin:admin -H "Content-Type: application/json" -H "X-Requested-By:ambari" -X PUT -d@yarn_queues.json https://<FQDN>:<PORT>/api/v1/views/CAPACITY-SCHEDULER/versions/1.0.0/instances/AUTO_CS_INSTANCE/resources/scheduler/configuration