Member since
02-08-2016
793
Posts
669
Kudos Received
85
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3067 | 06-30-2017 05:30 PM
 | 3988 | 06-30-2017 02:57 PM
 | 3312 | 05-30-2017 07:00 AM
 | 3884 | 01-20-2017 10:18 AM
 | 8403 | 01-11-2017 02:11 PM
11-16-2016
08:23 PM
2 Kudos
The problem is solved. The cause was that another application (Oracle) was occupying port 8080, so Ambari was unable to bind to it; the login screenshot I uploaded was actually the login page of the Oracle application. I changed the Ambari port from 8080 to one that no other application is using (such as 8083), and now everything is OK. It was really a silly mistake.
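A quick way to avoid this class of mistake is to check whether a port is actually free before assigning it. A minimal bash sketch (8083 is just the example alternative port from above; `client.api.port` is the key in `/etc/ambari-server/conf/ambari.properties` that controls Ambari's web port, but verify that for your Ambari version):

```shell
#!/usr/bin/env bash
# Check whether a TCP port on localhost is free before pointing Ambari at it.
port_free() {
  # bash's /dev/tcp pseudo-device: the redirect only succeeds if
  # something is listening on the port, so we negate the result
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_free 8083; then
  echo "port 8083 is free; safe to set client.api.port=8083"
else
  echo "port 8083 is in use; pick another port"
fi
```

You can also identify the process currently holding a port with `lsof -iTCP:8080 -sTCP:LISTEN` or `netstat -tlnp | grep 8080` (run as root to see process names).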
11-15-2016
06:44 PM
2 Kudos
@Kate Shaw It normally takes about 30 seconds for policies to refresh. You can check the "Plugins" option in the Ranger UI to see whether the policy is being synced. There is no option to force a policy refresh, but each service plugin in Ambari lets you define the time interval. For example, for the HDFS service: in the Ambari UI, go to HDFS -> Configs -> "Advanced ranger-hdfs-security" and change the poll interval (refresh time) there. This link may help you understand it better: https://community.hortonworks.com/questions/13070/ranger-policy-is-not-applied.html
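For reference, the setting in question looks roughly like the fragment below. The property name and default are as I recall them for the HDFS plugin — verify them under "Advanced ranger-hdfs-security" in your own Ambari version before changing anything:

```
# Advanced ranger-hdfs-security (value in milliseconds; 30000 = 30 s default)
ranger.plugin.hdfs.policy.pollIntervalMs=30000
```

Lowering the value makes the plugin poll Ranger Admin for policy changes more often, at the cost of slightly more load on Ranger Admin.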
11-15-2016
02:09 PM
6 Kudos
ISSUE: While unkerberizing the cluster, all services went down and nothing came back up. The "unkerberize cluster" step failed, as did the subsequent start of services. The NameNodes could be started manually, but their status was not displayed correctly in the Ambari UI. The JournalNodes failed to start with the error below.

ERROR: (JournalNode error; screenshot was attached to the original post)

ROOT CAUSE: There were multiple issues:
1. The JournalNode error mentions a "missing spnego keytab", which suggests Kerberos was not properly disabled on the cluster.
2. In hdfs-site.xml, the property "hadoop.http.authentication.type" was still set to kerberos.
3. Oozie was not able to detect the active NameNode, because the property "hadoop.http.authentication.simple.anonymous.allowed" was set to false.

RESOLUTION:
1. After setting hadoop.http.authentication.type to simple in hdfs-site.xml, HDFS was able to restart.
2. After setting hadoop.http.authentication.simple.anonymous.allowed=true in hdfs-site.xml, Oozie was able to detect the active NameNode, and the NameNode status was correctly displayed in the UI.
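For reference, the two property changes from the resolution look like this as an hdfs-site.xml fragment (property names and file location are as stated in the post; on an Ambari-managed cluster, make these changes through the Ambari UI rather than editing the file by hand):

```xml
<!-- hdfs-site.xml: fall back to simple HTTP auth after unkerberizing -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>
</property>
<!-- allow anonymous HTTP access so Oozie can probe the active NameNode -->
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>true</value>
</property>
```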
11-15-2016
02:09 PM
7 Kudos
Attachment: updateddeleteuser.zip

ISSUE: Ranger LDAP integration was working fine. The customer deleted a user from the Ranger UI and then faced an issue while re-importing the user into Ranger.

ROOT CAUSE: The customer removed the user from the Ranger UI and expected the user to be automatically re-imported by the Ranger usersync process. Sample screenshots were attached to the original post: a user named 'testuser' is deleted from the Ranger UI, but the user is still present in the database.

RESOLUTION: Multiple tables hold entries for the user. You need to run the delete script to remove the user's entries from the database, then restart the Ranger usersync process to re-import the user. Please find the delete script attached. Syntax to run the script:

$ deleteUser.sh -f input.txt -u ranger_user -p password -db ranger [-r <replaceUser>]
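If you want to confirm where the deleted user still lingers before running the script, you can query the Ranger database directly. The table and column names below (x_portal_user, x_user) are from Ranger's schema as I recall it — verify them against your Ranger version; 'testuser' is the example user from the screenshots:

```sql
-- user record backing the Ranger admin UI login
select id, login_id from x_portal_user where login_id = 'testuser';
-- user record used in policies
select id, user_name from x_user where user_name = 'testuser';
```

If either query still returns a row after the UI delete, that is the stale entry the delete script needs to clean up.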
11-15-2016
05:29 AM
6 Kudos
ISSUE: After enabling SSL for Ambari, the Hive view stopped working.

ERROR:

08 Nov 2016 11:32:23,330 WARN [qtp-ambari-client-263] nio:720 - javax.net.ssl.SSLException: Received fatal alert: certificate_unknown
08 Nov 2016 11:32:23,331 ERROR [qtp-ambari-client-256] ServiceFormattedException:100 - org.apache.ambari.view.utils.ambari.AmbariApiException: RA040 I/O error while requesting Ambari
org.apache.ambari.view.utils.ambari.AmbariApiException: RA040 I/O error while requesting Ambari
at org.apache.ambari.view.utils.ambari.AmbariApi.requestClusterAPI(AmbariApi.java:176)
at org.apache.ambari.view.utils.ambari.AmbariApi.requestClusterAPI(AmbariApi.java:142)
at org.apache.ambari.view.utils.ambari.AmbariApi.getHostsWithComponent(AmbariApi.java:99)
at org.apache.ambari.view.hive.client.ConnectionFactory.getHiveHost(ConnectionFactory.java:79)
at org.apache.ambari.view.hive.client.ConnectionFactory.create(ConnectionFactory.java:68)
at org.apache.ambari.view.hive.client.UserLocalConnection.initialValue(UserLocalConnection.java:42)
at org.apache.ambari.view.hive.client.UserLocalConnection.initialValue(UserLocalConnection.java:26)
at org.apache.ambari.view.utils.UserLocal.get(UserLocal.java:66)
at org.apache.ambari.view.hive.resources.browser.HiveBrowserService.databases(HiveBrowserService.java:87)
at sun.reflect.GeneratedMethodAccessor186.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
ROOT CAUSE: The truststore configuration for the Ambari server was missing.

RESOLUTION: Set up the truststore for the Ambari server as per the link below, after which the issue was resolved. https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Security_Guide/content/_set_up_truststore_for_ambari_server.html
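As a rough sketch of what the truststore setup involves (paths, alias, and filenames below are examples; the linked documentation is authoritative for your Ambari version):

```
# import the certificate Ambari presents over SSL into a JKS truststore
keytool -import -alias ambari-server -file /tmp/ambari-server.crt \
        -keystore /etc/ambari-server/keys/truststore.jks

# register the truststore with Ambari (interactive prompt: choose the
# truststore option and supply the path, type "jks", and password)
ambari-server setup-security
ambari-server restart
```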
11-14-2016
05:30 PM
6 Kudos
SYMPTOM: During an HDP upgrade from 2.3 to 2.5, the YARN service check fails with NoSuchMethodError: org.apache.hadoop.yarn.api.records.Resource.getMemorySize()J

ERROR: Below is the error from the application logs:

16/11/14 10:30:12 FATAL distributedshell.ApplicationMaster: Error running ApplicationMaster
java.lang.NoSuchMethodError: org.apache.hadoop.yarn.api.records.Resource.getMemorySize()J
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.run(ApplicationMaster.java:585)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.main(ApplicationMaster.java:298)
ROOT CAUSE: There was a classpath issue: the NodeManager on which the job ran was pointing at the older version's (i.e. 2.3) classpath.

RESOLUTION: There are two solutions:
1. Skip this step in the Ambari upgrade UI and proceed; Ambari will take care of setting up the classpath.
2. Fix the classpath manually, confirm it with the "hadoop classpath" command, and re-run the service check.
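To see which stack version a NodeManager's classpath actually resolves to, something like the following on the affected host can help (the "hdp" path pattern is illustrative and depends on your install layout):

```
# list unique classpath entries and check which stack directory they reference
hadoop classpath | tr ':' '\n' | grep -i hdp | sort -u
# every entry should point at the new 2.5.x stack directory, not 2.3.x
```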
05-31-2017
03:15 PM
Thank you @Lester Martin That helped!
04-20-2018
10:36 PM
Dear @Sagar Shimpi The problem I encountered was: The following 6 host component(s) have not been upgraded to version 1.1.5.0-235. Please install and upgrade the Stack Version on those hosts and try again.
Host components:
GLOBALMASTER on host e19e07452.et15sqa
LDSERVER on host e19e07452.et15sqa
LOCALMASTER on host e19e07452.et15sqa
LDSERVER on host e19e07466.et15sqa
LDSERVER on host e19e10465.et15sqa
LOCALMASTER on host e19e10465.et15sqa

The "GLOBALMASTER" is my own service component. Can you please help? Many thanks in advance.
09-27-2017
02:16 AM
An easy way to detect the duplicate value is:

select component_name, service_name, host_id, cluster_id, count(*) from ambari.hostcomponentdesiredstate group by component_name, service_name, host_id, cluster_id order by count desc;

select component_name, service_name, host_id, cluster_id, count(*) from ambari.hostcomponentstate group by component_name, service_name, host_id, cluster_id order by count desc;

You will find that the count in one of the tables differs from the other. Just delete that row by id and you are good to go.
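The delete step might look like this. This is illustrative only: <duplicate_id> stands for the id of the extra row found by the queries above (the id column is per the author's note; check it exists in your Ambari schema version), and you should inspect the row before deleting it:

```sql
-- remove only the duplicate row, identified by its id
delete from ambari.hostcomponentdesiredstate where id = <duplicate_id>;
-- or, if the duplicate is in the state table:
delete from ambari.hostcomponentstate where id = <duplicate_id>;
```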
11-08-2016
07:08 PM
2 Kudos
1. Let's assume you have an HDP cluster installed and managed by Ambari.
2. To delete a service (either a custom service or an HDP service) using the API, you would generally use the command below:

curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-server>:8080/api/v1/clusters/c1/services/<SERVICENAME>

3. After executing the above command you might see the error below:

$ curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-server>:8080/api/v1/clusters/c1/services/HBASE
{ "status" : 500, "message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Cannot remove HBASE. Desired state STARTED is not removable.  Service must be stopped or disabled." }

4. If you see this error while removing/stopping the service, use the steps below to resolve the issue.
5. Log in to the Ambari database (in my case PostgreSQL) and check the values for the service in the following tables:

# psql -U ambari
[Default password is 'bigdata']
ambari=> select * from servicedesiredstate where service_name='HBASE';
ambari=> select * from servicecomponentdesiredstate where service_name='HBASE';

6. In the output above, the value of the 'desired_state' column should be INSTALLED.
7. If 'desired_state' is set to STARTED, update the column and set it to INSTALLED using the command below:

ambari=> update servicedesiredstate set desired_state='INSTALLED' where service_name='HBASE';

8. Follow the same step for the "servicecomponentdesiredstate" table:

ambari=> update servicecomponentdesiredstate set desired_state='INSTALLED' where service_name='HBASE';

9. Now try removing/deleting the service again. It should work:

$ curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-server>:8080/api/v1/clusters/c1/services/HBASE